Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
This work studies a proportional hazards model for survival data with "long-term survivors", in which covariates are subject to linear measurement error. It is well known that the naive estimators from both the partial and full likelihood methods are inconsistent under this measurement error model. For measurement error models, methods based on unbiased estimating functions and corrected likelihoods have been proposed in the literature. In this paper, we apply the corrected partial and full likelihood approaches to estimate the model and to draw statistical inferences from survival data with long-term survivors. The asymptotic properties of the estimators are established. Simulation results illustrate that the proposed approaches provide useful tools for the models considered.

2.
This paper investigates several strategies for consistently estimating the so-called Hurst parameter H responsible for the long-memory correlations in a linear class of ARCH time series, known as LARCH(∞) models, as well as in the continuous-time Gaussian stochastic process known as fractional Brownian motion (fBm). A LARCH model's parameter is estimated using a conditional maximum likelihood method, which is proved to have good stability properties. A local Whittle estimator is also discussed. The article further proposes a specially designed conditional maximum likelihood method for estimating H that is closer in spirit to one based on discrete observations of fBm. In keeping with the popular financial interpretation of ARCH models, all estimators are based only on observation of the "returns" of the model, not on their "volatilities".
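The paper's conditional maximum likelihood estimators are specific to its models; as a rough, self-contained illustration of Hurst-parameter estimation, the sketch below instead uses the classical aggregated-variance method on white noise, for which H = 1/2 (function name and block sizes are illustrative, not from the paper):

```python
import math
import random

def aggregated_variance_hurst(x, block_sizes):
    """Estimate the Hurst parameter H by the aggregated-variance method:
    block means of size m have variance proportional to m**(2H - 2),
    so H is read off the slope of a log-log regression."""
    logs_m, logs_v = [], []
    for m in block_sizes:
        k = len(x) // m
        means = [sum(x[i * m:(i + 1) * m]) / m for i in range(k)]
        mu = sum(means) / k
        var = sum((s - mu) ** 2 for s in means) / (k - 1)
        logs_m.append(math.log(m))
        logs_v.append(math.log(var))
    # least-squares slope of log var against log m; H = 1 + slope/2
    n = len(logs_m)
    mx, my = sum(logs_m) / n, sum(logs_v) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(logs_m, logs_v))
             / sum((a - mx) ** 2 for a in logs_m))
    return 1.0 + slope / 2.0

random.seed(7)
# White noise is fractional Gaussian noise with H = 1/2.
noise = [random.gauss(0.0, 1.0) for _ in range(20000)]
H = aggregated_variance_hurst(noise, [5, 10, 20, 40, 80])
```

With this series the estimate lands close to the true value 1/2; long-memory alternatives (H > 1/2) would show a shallower decay of the block-mean variance.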

3.
In order to solve a quadratic 0/1 problem, techniques that derive an equivalent linear integer formulation are used. These techniques, called "linearization", usually introduce a huge number of additional variables; as a consequence, solving the linear model exactly is, in general, very difficult. Our aim in this paper is to propose "economical" linear models. Starting from an existing linearization (typically the so-called "classical" linearization), we derive a new linearization with fewer variables. The resulting model is called a "miniaturized" linearization. Based on this approach, we propose a new linearization scheme for which numerical tests have been performed.
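The "classical" linearization mentioned above replaces each product x_i·x_j with a new variable y_ij tied down by four linear constraints, so it adds n(n−1)/2 variables. A small brute-force check that these constraints characterize the products at binary points (a sketch of the classical scheme only, not the paper's miniaturized one):

```python
from itertools import product

def feasible(x, y):
    """Classical linearization constraints forcing y[(i,j)] = x[i]*x[j]:
    y <= x_i, y <= x_j, y >= x_i + x_j - 1, y >= 0."""
    for (i, j), v in y.items():
        if not (v <= x[i] and v <= x[j] and v >= x[i] + x[j] - 1 and v >= 0):
            return False
    return True

n = 4
pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
ok = True
for x in product((0, 1), repeat=n):
    target = {p: x[p[0]] * x[p[1]] for p in pairs}
    for bits in product((0, 1), repeat=len(pairs)):
        y = dict(zip(pairs, bits))
        # At binary points, y is feasible iff it equals the products.
        if feasible(x, y) != (y == target):
            ok = False
num_extra_vars = len(pairs)   # n(n-1)/2 additional variables
```

The exhaustive check over all binary x and y confirms the constraints are exact; the cost is precisely the n(n−1)/2 extra variables that "economical" models try to reduce.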

4.
In statistics of extremes, inference is often based on the excesses over a high random threshold. Those excesses are approximately distributed as the set of order statistics associated with a sample from a generalized Pareto model. We then get the so-called "maximum likelihood" estimators of the tail index γ. In this paper, we are interested in deriving the asymptotic distributional properties of a similar "maximum likelihood" estimator of a positive tail index γ, based also on the excesses over a high random threshold, but with an attempt to accommodate the bias of the Pareto model underlying those excesses. We next proceed to an asymptotic comparison of the two estimators at their optimal levels. An illustration of the finite-sample behaviour of the estimators is provided through a small-scale Monte Carlo simulation study. Research partially supported by FCT/POCTI and POCI/FEDER.
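Under a strict Pareto model for the excesses, the maximum likelihood estimator of a positive tail index γ reduces to the familiar Hill-type average of log-excess ratios over a random threshold. A minimal sketch of that classical estimator (not the paper's bias-accommodating variant):

```python
import math
import random

def hill_estimator(sample, k):
    """ML estimator of a positive tail index gamma based on the k
    largest observations; exact MLE under a strict Pareto tail."""
    xs = sorted(sample, reverse=True)
    threshold = xs[k]   # the random threshold: the (k+1)-th largest point
    return sum(math.log(xs[i] / threshold) for i in range(k)) / k

random.seed(1)
gamma_true = 0.5
# U**(-gamma) for U ~ Uniform(0,1) is Pareto with tail index gamma.
data = [random.random() ** (-gamma_true) for _ in range(10000)]
gamma_hat = hill_estimator(data, k=500)
```

Since the simulated tail is exactly Pareto there is no approximation bias here; the bias the paper accommodates appears when the excesses are only asymptotically Pareto.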

5.
Analysis of rounded data from dependent sequences
Observations on continuous populations are often rounded when recorded due to the precision of the recording mechanism. However, classical statistical approaches have ignored the effect caused by the rounding errors. When the observations are independent and identically distributed, exact maximum likelihood estimation (MLE) can be employed. However, if rounded data come from a dependent structure, the MLE of the parameters is difficult to calculate, since the integral involved in the likelihood equation is intractable. This paper presents and examines a new approach to the parameter estimation, named "short, overlapping series" (SOS), to deal with α-mixing models in the presence of rounding errors. We establish the asymptotic properties of the SOS estimators when the innovations are normally distributed. Comparisons of this new approach with other existing techniques in the literature are also made by simulation with samples of moderate sizes.
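In the i.i.d. normal case the exact likelihood for rounded data is tractable: each recorded value contributes the probability of its rounding interval. A minimal sketch (σ is taken as known and μ is found by grid search for simplicity; the paper's SOS method for dependent data is more involved):

```python
import math
import random

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def rounded_loglik(mu, sigma, rounded, delta=1.0):
    """Exact log-likelihood of i.i.d. N(mu, sigma^2) data recorded
    only after rounding to the nearest multiple of delta."""
    ll = 0.0
    for r in rounded:
        p = (norm_cdf((r + delta / 2 - mu) / sigma)
             - norm_cdf((r - delta / 2 - mu) / sigma))
        ll += math.log(p)
    return ll

random.seed(3)
mu_true, sigma = 5.0, 2.0
rounded = [round(random.gauss(mu_true, sigma)) for _ in range(2000)]
# Grid-search MLE of mu over [4, 6] in steps of 0.01.
grid = [4.0 + 0.01 * i for i in range(201)]
mu_hat = max(grid, key=lambda m: rounded_loglik(m, sigma, rounded))
```

For dependent sequences this product-of-intervals likelihood is no longer valid, which is exactly the gap the SOS approach addresses.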

6.
We develop a simple influence measure to assess whether Bayesian estimators in multivariate extreme value problems are sensitive to outliers. The proposed measure is easy to compute by importance sampling and successfully captures two effects on the functional: the "data effect" and the "parameter uncertainty effect". We also propose a new Bayesian estimator which is easy to implement and is robust. The methods are tested and illustrated using simulated data and then applied to stock market data.

7.
In this paper, we present a general method which can be used to show that the maximum likelihood estimator (MLE) of an exponential mean θ is stochastically increasing with respect to θ under different censored sampling schemes. This property is essential for the construction of exact confidence intervals for θ via "pivoting the cdf", as well as for tests of hypotheses about θ. The method is demonstrated for Type-I censoring, hybrid censoring and generalized hybrid censoring schemes. We also establish the result for the exponential competing risks model with censoring.
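For Type-I censoring the MLE of the exponential mean is the total time on test divided by the number of observed failures, and the stochastic ordering can be made visible by coupling samples at two values of θ through common uniforms (a simulation sketch, not the paper's general proof; names are illustrative):

```python
import math
import random

def type1_censored_mle(theta, uniforms, T):
    """MLE of an exponential mean under Type-I censoring at time T:
    total time on test divided by the number of observed failures."""
    lifetimes = [-theta * math.log(u) for u in uniforms]  # inverse-cdf draws
    total = sum(min(x, T) for x in lifetimes)
    failures = sum(1 for x in lifetimes if x <= T)
    return total / failures if failures else float("inf")

random.seed(11)
us = [random.random() for _ in range(500)]
T = 4.0
# Common uniforms couple the two samples: each lifetime grows with theta,
# so the total time on test grows and the failure count shrinks.
est_small = type1_censored_mle(2.0, us, T)
est_large = type1_censored_mle(5.0, us, T)
```

With this coupling the estimate is even pathwise increasing in θ, which is a stronger, easily checked analogue of the stochastic monotonicity used for pivoting the cdf.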

8.
The score tests of independence in multivariate extreme values derived by Tawn (Tawn, J.A., "Bivariate extreme value theory: models and estimation," Biometrika 75, 397–415, 1988) and Ledford and Tawn (Ledford, A.W. and Tawn, J.A., "Statistics for near independence in multivariate extreme values," Biometrika 83, 169–187, 1996) have non-regular properties that arise from violations of the usual regularity conditions of maximum likelihood. Two distinct types of regularity violation are encountered in each of their likelihood frameworks: independence within the underlying model corresponds to a boundary point of the parameter space, and the score function has an infinite second moment. For applications, the second form of violation has the more important consequences, as it results in score statistics with non-standard normalisation and poor rates of convergence. The corresponding tests are difficult to use in practice because their asymptotic properties are unrepresentative of their behaviour at the sample sizes typical of applications, and extensive simulations may be needed to evaluate their null distributions adequately. Overcoming this difficulty is the primary focus of this paper. We propose a modification of the likelihood-based approaches of Tawn (1988) and Ledford and Tawn (1996) that provides asymptotically normal score tests of independence with regular normalisation and rapid convergence. The resulting tests are straightforward to implement and are beneficial in practical situations with realistic amounts of data. AMS 2000 Subject Classification: Primary 60G70; Secondary 62H15.

9.
We consider the variational inequality problem formed by a general set-valued maximal monotone operator and a possibly unbounded "box" in ℝ^n, and study its solution by proximal methods whose distance regularizations are coercive over the box. We prove convergence for a class of double regularizations generalizing a previously proposed class of Auslender et al. Using these results, we derive a broadened class of augmented Lagrangian methods. We point out some connections between these methods and earlier work on "pure penalty" smoothing methods for complementarity; this connection leads to a new form of augmented Lagrangian based on the "neural" smoothing function. Finally, we computationally compare this new kind of augmented Lagrangian to three previously known varieties on the MCPLIB problem library, and show that the neural approach offers some advantages. In these tests, we also consider primal-dual approaches that include a primal proximal term. Such a stabilizing term tends to slow down the algorithms, but makes them more robust. This author was partially supported by CNPq, Grant PQ 304133/2004-3 and PRONEX-Optimization.
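The "neural" smoothing function referred to here is commonly attributed to Chen and Mangasarian: β·log(1 + e^{t/β}) as a smooth approximation of the plus function max(t, 0), with uniform error at most β·log 2. A sketch of its behaviour (the augmented Lagrangian built on it in the paper is not reproduced):

```python
import math

def neural_plus(t, beta):
    """'Neural' smoothing of the plus function max(t, 0):
    beta * log(1 + exp(t / beta)), written to avoid overflow."""
    if t > 0:
        # t + beta*log(1 + exp(-t/beta)) is the same value, stably computed
        return t + beta * math.log1p(math.exp(-t / beta))
    return beta * math.log1p(math.exp(t / beta))

# As beta -> 0 the smoothing converges uniformly to max(t, 0);
# the maximal gap beta*log(2) is attained at t = 0.
vals = [(t, neural_plus(t, 0.01)) for t in (-2.0, 0.0, 2.0)]
```

Because the approximation is smooth and convex in t, it yields differentiable reformulations of complementarity conditions, which is what connects it to "pure penalty" smoothing methods.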

10.
In the previous article "Hearts of twin cotorsion pairs on exact categories. J. Algebra, 394, 245–284 (2013)", we introduced the notion o…

11.
In this article, likelihood ratio tests (LRTs) are developed for detecting whether the stochastic trends of binary responses are ordered between 2×k contingency tables. We provide a simple iterative algorithm for the maximum likelihood estimators under the order restriction and construct the LRTs using those estimators. All distributional results for these tests are based on large-sample theory. The finite-sample behavior of the tests is investigated through a simulation study. As an illustration, we analyze a set of data on wheeziness in smoking coalminers.
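The abstract does not spell out the iterative algorithm; order-restricted MLEs of binomial proportions are typically computed by pool-adjacent-violators, sketched here for a single nondecreasing sequence of proportions (function and variable names are illustrative, not from the paper):

```python
def pava(rates, weights):
    """Pool-adjacent-violators: weighted isotonic (nondecreasing) fit,
    the core step in order-restricted MLE for binomial proportions."""
    vals = list(rates)
    wts = list(weights)
    blocks = [[i] for i in range(len(vals))]
    i = 0
    while i < len(vals) - 1:
        if vals[i] > vals[i + 1]:             # violator: pool the blocks
            w = wts[i] + wts[i + 1]
            v = (wts[i] * vals[i] + wts[i + 1] * vals[i + 1]) / w
            vals[i:i + 2] = [v]
            wts[i:i + 2] = [w]
            blocks[i:i + 2] = [blocks[i] + blocks[i + 1]]
            i = max(i - 1, 0)                 # re-check to the left
        else:
            i += 1
    fitted = [None] * sum(len(b) for b in blocks)
    for v, b in zip(vals, blocks):
        for j in b:
            fitted[j] = v
    return fitted

# Raw proportions 0.3, 0.2, 0.4 (10 trials each) violate the ordering;
# the first two columns get pooled to their weighted average 0.25.
fit = pava([0.3, 0.2, 0.4], [10, 10, 10])
```

The pooled values maximize the binomial likelihood subject to the monotonicity constraint, and the LRT then compares this restricted fit with the unrestricted one.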

12.
We prove exact formulas for measure theoretic entropy of plane billiards systems with absolutely-focusing boundaries with non-vanishing Lyapunov exponents. In particular, our formulas hold for the billiards introduced by Wojtkowski, Markarian, Donnay and Bunimovich. As an illustration, we calculate the entropy of a "perturbation" of the boundary of a polygon by absolutely focusing "ripples".

13.
In this paper, an "intelligent" isolated-intersection control system was developed. The system makes "real time" decisions on whether, and by how much, to extend the current green time. The model is based on a combination of dynamic programming and neural networks. Many tests show that the outcome (the extension of the green time) produced by the proposed neural network is nearly equal to the best solution. Practically negligible CPU times were achieved, making the developed algorithm entirely acceptable for "real time" application.

14.
In this paper, dimension-free Harnack inequalities are established on infinite-dimensional spaces. More precisely, we establish Harnack inequalities for the heat semigroup on the based loop group and for the Ornstein-Uhlenbeck semigroup on the abstract Wiener space. As an application, we establish the HWI inequality on the abstract Wiener space, which combines three important quantities in one inequality: the relative entropy "H", the Wasserstein distance "W", and the Fisher information "I".

15.
There exists an overall negative assessment of the performance of the simulated maximum likelihood algorithm in the statistics literature, founded on both theoretical and empirical results. At the same time, there also exist a number of highly successful applications. This paper explains the negative assessment by the coupling of the algorithm with "simple importance samplers", i.e., samplers that are not explicitly parameter dependent. The successful applications in the literature are based on explicitly parameter-dependent importance samplers. Simple importance samplers may efficiently simulate the likelihood function value, but fail to efficiently simulate the score function, which is the key to efficient simulated maximum likelihood. The theoretical points are illustrated by applying Laplace importance sampling in both variants to the classic salamander mating model.
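The contrast between simple and parameter-dependent importance samplers can be made concrete in a toy random-effects model y | b ~ N(θ + b, 1), b ~ N(0, 1), whose marginal likelihood is N(θ, 2) in closed form; in this model the parameter-dependent sampler (the exact conditional of b given y and θ) yields a zero-variance likelihood estimate (a sketch under these toy assumptions, not the salamander model):

```python
import math
import random

def phi(z, s=1.0):
    """N(0, s^2) density at z."""
    return math.exp(-0.5 * (z / s) ** 2) / (s * math.sqrt(2 * math.pi))

def sim_lik(y, theta, draws, sampler):
    """Monte Carlo likelihood for y | b ~ N(theta + b, 1), b ~ N(0, 1),
    integrating out b by importance sampling from 'sampler'."""
    total = 0.0
    for _ in range(draws):
        b, q = sampler(y, theta)
        total += phi(y - theta - b) * phi(b) / q
    return total / draws

def simple_sampler(y, theta):        # ignores (y, theta): not param-dependent
    b = random.gauss(0.0, 1.0)
    return b, phi(b)

def laplace_sampler(y, theta):       # parameter dependent: exact conditional
    m, s = (y - theta) / 2.0, math.sqrt(0.5)
    b = random.gauss(m, s)
    return b, phi(b - m, s)

random.seed(5)
y, theta = 3.0, 1.0
true_lik = phi(y - theta, math.sqrt(2.0))   # marginal of y is N(theta, 2)
est_simple = sim_lik(y, theta, 2000, simple_sampler)
est_laplace = sim_lik(y, theta, 2000, laplace_sampler)
```

The parameter-dependent weights are constant (every draw returns the exact likelihood), while the simple sampler's estimate still fluctuates; the paper's point is that this gap is far more damaging for the score function than for the likelihood value itself.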

16.
This paper presents a method of estimating an "optimal" smoothing parameter (window width) in kernel estimators for a probability density. The proposed estimator is calculated directly from the observations. By "optimal" smoothing parameters we mean those which minimize the mean integrated square error (MISE) or the integrated square error (ISE) of the approximation of an unknown density by the kernel estimator. It is shown that the asymptotic "optimality" properties of the proposed estimator correspond (with respect to the order) to those of the well-known cross-validation procedure [1, 2]. Translated from Statisticheskie Metody Otsenivaniya i Proverki Gipotez, pp. 67–80, Perm, 1990.
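For a Gaussian kernel the ISE-based cross-validation criterion has a closed form, so an "optimal" window width can be found by a simple grid search. A self-contained sketch of least-squares cross-validation, the benchmark procedure cited above (not the paper's own direct estimator):

```python
import math
import random

def lscv_score(h, x):
    """Least-squares cross-validation criterion for a Gaussian-kernel
    density estimate: an unbiased estimate of ISE(h) up to a constant."""
    n = len(x)
    def phi(u, s):                     # N(0, s^2) density at u
        return math.exp(-0.5 * (u / s) ** 2) / (s * math.sqrt(2 * math.pi))
    int_fhat_sq = 0.0                  # integral of fhat^2 (closed form)
    loo = 0.0                          # leave-one-out density sums
    for i in range(n):
        for j in range(n):
            d = x[i] - x[j]
            int_fhat_sq += phi(d, h * math.sqrt(2.0))
            if i != j:
                loo += phi(d, h)
    return int_fhat_sq / n ** 2 - 2.0 * loo / (n * (n - 1))

random.seed(9)
data = [random.gauss(0.0, 1.0) for _ in range(200)]
hs = [0.05 * k for k in range(2, 31)]          # grid 0.10 .. 1.50
h_star = min(hs, key=lambda h: lscv_score(h, data))
```

The first term uses the fact that the convolution of two N(0, h²) kernels is N(0, 2h²); minimizing the criterion over the grid gives the cross-validated window width.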

17.
Several techniques for resampling dependent data have already been proposed. In this paper we use missing-value techniques to modify the moving blocks jackknife and bootstrap. More specifically, we treat the blocks of deleted observations in the blockwise jackknife as missing data, which are recovered by missing-value estimates that incorporate the dependence structure of the observations. We thus estimate the variance of a statistic as a weighted sample variance of the statistic evaluated on a "completed" series. Consistency of the variance and distribution estimators of the sample mean is established. We also apply the missing-value approach to the blockwise bootstrap by including some missing observations between consecutive blocks, and we demonstrate the consistency of the resulting variance and distribution estimators of the sample mean. Finally, we present the results of an extensive Monte Carlo study evaluating the performance of these methods for finite sample sizes, showing that our proposal provides variance estimates for several time series statistics with smaller mean squared error than previous procedures.
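As a baseline for the procedures modified above, a minimal sketch of the plain moving-blocks bootstrap estimate of the variance of the sample mean for an AR(1) series (block length and replication counts are illustrative choices, not the paper's):

```python
import random

def moving_blocks_bootstrap_var(x, block_len, reps, rng):
    """Moving-blocks bootstrap estimate of Var(sample mean): resample
    overlapping blocks with replacement and recompute the mean."""
    n = len(x)
    blocks = [x[i:i + block_len] for i in range(n - block_len + 1)]
    k = n // block_len                 # blocks per bootstrap series
    means = []
    for _ in range(reps):
        resample = []
        for _ in range(k):
            resample.extend(rng.choice(blocks))
        means.append(sum(resample) / len(resample))
    mbar = sum(means) / reps
    return sum((m - mbar) ** 2 for m in means) / (reps - 1)

rng = random.Random(17)
# AR(1) series with phi = 0.5: Var(sample mean) is roughly 4/n for large n.
x, prev = [], 0.0
for _ in range(500):
    prev = 0.5 * prev + rng.gauss(0.0, 1.0)
    x.append(prev)
v_hat = moving_blocks_bootstrap_var(x, block_len=20, reps=500, rng=rng)
```

The artificial joins between consecutive resampled blocks are precisely where the paper inserts missing observations to be recovered from the dependence structure.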

18.
For multidimensional equations of the flow of thin capillary films with nonlinear diffusion and convection, we prove the existence of a strong nonnegative generalized solution of the Cauchy problem with an initial function in the form of a nonnegative Radon measure with compact support. We determine an exact upper estimate (global in time) for the rate of propagation of the support of this solution. The cases where the degeneracy of the equation corresponds to the conditions of "strong" and "weak" slip are analyzed separately. In particular, in the case of "weak" slip, we establish an exact estimate of the decrease in the L2-norm of the gradient of the solution. It is well known that this estimate does not hold for initial functions with noncompact support. Translated from Ukrains'kyi Matematychnyi Zhurnal, Vol. 58, No. 2, pp. 250–271, February, 2006.

19.
This paper introduces the "piggyback bootstrap". Like the weighted bootstrap, this procedure can be used to generate random draws that approximate the joint sampling distribution of the parametric and nonparametric maximum likelihood estimators in various semiparametric models, but the dimension of the maximization problem for each bootstrapped likelihood is smaller. This reduction results in significant computational savings in comparison to the weighted bootstrap. The procedure can be stated quite simply. First obtain a valid random draw for the parametric component of the model. Then take the draw for the nonparametric component to be the maximizer of the weighted bootstrap likelihood with the parametric component fixed at the parametric draw. We prove the procedure is valid for a class of semiparametric models that includes frailty regression models arising in survival analysis and biased sampling models that have application to vaccine efficacy trials. Bootstrap confidence sets from the piggyback and weighted bootstraps are compared for biased sampling data from simulated vaccine efficacy trials.

20.
On the exactness of a class of nondifferentiable penalty functions
In this paper, we consider a class of nondifferentiable penalty functions for the solution of nonlinear programming problems without convexity assumptions. As a preliminary, we introduce a notion of exactness which appears to be of relevance in connection with the solution of the constrained problem by means of unconstrained minimization methods. We then show that the class of penalty functions considered is exact according to this notion. This research was partially supported by the National Research Program on "Modelli e Algoritmi per l'Ottimizzazione," Ministero della Pubblica Istruzione, Roma, Italy.
