Similar Literature
20 similar documents found (search time: 15 ms)
1.
Distributions with unimodal densities are among the most commonly used in practice. However, for many unimodal distribution families the likelihood functions may be unbounded, thereby leading to inconsistent estimates. The maximum product of spacings (MPS) method, introduced by Cheng and Amin and independently by Ranneby, has been known to give consistent and asymptotically normal estimators in many parametric situations where the maximum likelihood method fails. In this paper, strong consistency theorems for the MPS method are obtained under general conditions which are comparable to the conditions of Bahadur and Wang for the maximum likelihood method. The consistency theorems obtained here apply to both parametric models and some nonparametric models. In particular, in any unimodal distribution family the asymptotic MPS estimator of the underlying unimodal density is shown to be universally L1 consistent without any further conditions (in parametric or nonparametric settings).

2.
This paper deals with maximum likelihood estimation of linear or nonlinear functional relationships assuming that replicated observations have been made on p variables at n points. The joint distribution of the pn errors is assumed to be multivariate normal. Existing results are extended in two ways: first, from known to unknown error covariance matrix; second, from the bivariate to the multivariate case. For the linear relationship it is shown that the maximum likelihood point estimates are those obtained by the method of generalized least squares. The present method, however, has the advantage of supplying estimates of the asymptotic covariances of the structural parameter estimates.

3.
This paper presents a method of estimation of an “optimal” smoothing parameter (window width) in kernel estimators for a probability density. The obtained estimator is calculated directly from the observations. By “optimal” smoothing parameters we mean those parameters which minimize the mean integrated square error (MISE) or the integrated square error (ISE) of the approximation of an unknown density by the kernel estimator. It is shown that the asymptotic “optimality” properties of the proposed estimator correspond (with respect to the order) to those of the well-known cross-validation procedure [1, 2]. Translated from Statisticheskie Metody Otsenivaniya i Proverki Gipotez, pp. 67–80, Perm, 1990.
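The cross-validation benchmark mentioned in this abstract can be sketched directly. Below is a hedged illustration, not the paper's estimator: least-squares cross-validation for a Gaussian-kernel density estimator, which minimizes an unbiased estimate of the ISE (up to a constant) over a grid of candidate bandwidths.

```python
import math
import random

def gauss(u, s):
    return math.exp(-0.5 * (u / s) ** 2) / (s * math.sqrt(2.0 * math.pi))

def lscv_score(h, xs):
    """Least-squares CV estimate of ISE (up to a constant) for bandwidth h."""
    n = len(xs)
    # integral of fhat^2: two Gaussian kernels convolve to bandwidth sqrt(2)*h
    int_f2 = sum(gauss(x - y, math.sqrt(2.0) * h)
                 for x in xs for y in xs) / n ** 2
    # leave-one-out average estimated density at the sample points
    loo = sum(sum(gauss(x - y, h) for j, y in enumerate(xs) if j != i)
              / (n - 1) for i, x in enumerate(xs))
    return int_f2 - 2.0 * loo / n

random.seed(1)
data = [random.gauss(0.0, 1.0) for _ in range(200)]
grid = [0.05 * k for k in range(2, 21)]      # bandwidths 0.10 .. 1.00
h_best = min(grid, key=lambda h: lscv_score(h, data))
```

The grid search is only for transparency; in practice one minimizes the score with a 1-D optimizer, and LSCV is known to be noisy for small samples.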

4.
This article considers the estimation of parameters of the Weibull distribution based on hybrid censored data. The parameters are estimated by the maximum likelihood method under a step-stress partially accelerated test model. The maximum likelihood estimates (MLEs) of the unknown parameters are obtained by the Newton–Raphson algorithm. Also, the approximate Fisher information matrix is obtained for constructing asymptotic confidence bounds for the model parameters. The biases and mean square errors of the maximum likelihood estimators are computed to assess their performance through a Monte Carlo simulation study.
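For the complete-sample (uncensored) Weibull case, the Newton–Raphson step reduces to a one-dimensional iteration on the profile likelihood equation for the shape parameter; the hybrid-censored, step-stress likelihood in the paper has the same flavour but more terms. A minimal sketch under that simplifying assumption (function name `weibull_mle` is illustrative):

```python
import math
import random

def weibull_mle(xs, k=1.0, tol=1e-10):
    """Newton iteration on the profile equation for the Weibull shape k,
    then the closed-form scale; complete (uncensored) data only."""
    n = len(xs)
    mean_log = sum(math.log(x) for x in xs) / n
    for _ in range(100):
        s0 = sum(x ** k for x in xs)
        s1 = sum(x ** k * math.log(x) for x in xs)
        s2 = sum(x ** k * math.log(x) ** 2 for x in xs)
        g = s1 / s0 - 1.0 / k - mean_log          # profile score in k
        dg = (s2 * s0 - s1 * s1) / s0 ** 2 + 1.0 / k ** 2
        step = g / dg
        k -= step
        if abs(step) < tol:
            break
    scale = (sum(x ** k for x in xs) / n) ** (1.0 / k)
    return k, scale

random.seed(2)
data = [random.weibullvariate(2.0, 1.5) for _ in range(2000)]  # scale 2, shape 1.5
k_hat, lam_hat = weibull_mle(data)
```

The derivative dg is strictly positive (a variance-like term plus 1/k²), so the iteration is well behaved from the conventional starting value k = 1.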

5.
In this paper we derive second- and third-order bias-corrected maximum likelihood estimates in general uniparametric models. We compare the corrected estimates and the usual maximum likelihood estimate in terms of their mean squared errors. We also obtain closed-form expressions for bias-corrected estimates in one-parameter exponential family models. Our results cover many important and commonly used distributions. Simulation results are also given.
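As a concrete, well-known instance of the kind of correction this abstract studies, consider the exponential rate λ: the MLE 1/x̄ has E[1/x̄] = nλ/(n−1), and the first-order corrected estimator (1 − 1/n)/x̄ is in this particular family exactly unbiased. A Monte Carlo check, illustrative only and not from the paper:

```python
import random

def mc_bias(n, rate, reps=20000, seed=3):
    """Average bias of the raw MLE 1/xbar and of the corrected (1 - 1/n)/xbar."""
    rng = random.Random(seed)
    raw = corr = 0.0
    for _ in range(reps):
        xbar = sum(rng.expovariate(rate) for _ in range(n)) / n
        mle = 1.0 / xbar
        raw += mle - rate
        corr += (1.0 - 1.0 / n) * mle - rate
    return raw / reps, corr / reps

bias_raw, bias_corrected = mc_bias(n=10, rate=2.0)
# bias_raw should sit near lambda/(n-1) = 0.22; bias_corrected near 0
```

For general uniparametric models the correction is not exact, which is why the second- and third-order expansions of the paper matter.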

6.

The multiple linear regression model based on normally distributed and uncorrelated errors is a popular statistical tool with applications in various fields. But these assumptions of normality and no serial correlation are hardly met in real life. Hence, this study considers the linear regression time series model for series with outliers and autocorrelated errors. These autocorrelated errors are represented by a covariance-stationary autoregressive process where the independent innovations are driven by a shape mixture of the skew-t normal distribution. The shape mixture of the skew-t normal distribution is a flexible extension of the skew-t normal with an additional shape parameter that controls skewness and kurtosis. With this error model, stochastic modeling of multiple outliers is possible with an adaptive robust maximum likelihood estimation of all the parameters. An Expectation Conditional Maximization Either (ECME) algorithm is developed to carry out the maximum likelihood estimation. We derive asymptotic standard errors of the estimators through an information-based approximation. The performance of the estimation procedure developed is evaluated through Monte Carlo simulations and real-life data analysis.


7.
The aim of this paper is to present the generalized biparabolic (GBP) distribution as a good candidate for the distribution underlying the PERT methodology (Malcolm et al. in Oper. Res. 7:646–669, 1959). To do this, and following the criteria established by Taha (Investigación de Operaciones, 1981) and Herrerías (Estudios de Economía Aplicada, pp. 89–112, 1989), we compare the mean and variance estimates derived from each proposed density function, viz. the beta, two-sided power (TSP) and GBP distributions. We also compare the estimates contributed by the mesokurtic and constant-variance families of the aforementioned distributions. The main conclusion is that the GBP distribution is the most suitable for use in the PERT methodology because its mean is almost as moderate as that of the trapezoidal distribution and its variance is much higher than those of the other distributions. As a consequence, the GBP distribution can be regarded as an alternative to the other four-parameter distributions.

8.
In this paper we introduce three families of multivariate and matrix l1-norm symmetric distributions with location and scale parameters and discuss their maximum likelihood estimates and likelihood ratio criteria. It is shown that under certain conditions they have the same form as those for independent exponential variates. Project supported by the Science Fund of the Chinese Academy of Sciences.

9.

Maximum likelihood estimation with nonnormal error distributions provides one method of robust regression. Certain families of normal/independent distributions are particularly attractive for adaptive, robust regression. This article reviews the properties of normal/independent distributions and presents several new results. A major virtue of these distributions is that they lend themselves to EM algorithms for maximum likelihood estimation. EM algorithms are discussed for least Lp regression and for adaptive, robust regression based on the t, slash, and contaminated normal families. Four concrete examples illustrate the performance of the different methods on real data.
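For the t family, the normal/independent representation yields a particularly simple EM: the E-step produces case weights (ν+1)/(ν + r²/σ²) and the M-step is weighted least squares. A hedged sketch for simple linear regression, with the degrees of freedom ν held fixed (the article also considers adapting them; the function name `t_regression` is illustrative):

```python
import random

def t_regression(xs, ys, nu=4.0, iters=50):
    """EM/IRLS for simple linear regression with t-distributed errors."""
    n = len(xs)
    a, b, sigma2 = 0.0, 0.0, 1.0          # intercept, slope, scale^2
    for _ in range(iters):
        r = [y - a - b * x for x, y in zip(xs, ys)]
        w = [(nu + 1.0) / (nu + ri * ri / sigma2) for ri in r]   # E-step
        sw = sum(w)                                              # M-step: WLS
        mx = sum(wi * x for wi, x in zip(w, xs)) / sw
        my = sum(wi * y for wi, y in zip(w, ys)) / sw
        b = (sum(wi * (x - mx) * (y - my) for wi, x, y in zip(w, xs, ys))
             / sum(wi * (x - mx) ** 2 for wi, x in zip(w, xs)))
        a = my - b * mx
        r = [y - a - b * x for x, y in zip(xs, ys)]
        sigma2 = sum(wi * ri * ri for wi, ri in zip(w, r)) / n
    return a, b

random.seed(4)
xs = [i / 10.0 for i in range(100)]
ys = [1.0 + 2.0 * x + random.gauss(0.0, 0.5) for x in xs]
ys[5] = 60.0                               # plant one gross outlier
a_hat, b_hat = t_regression(xs, ys)
```

The outlier's weight collapses toward zero, so the fit stays near the bulk of the data, whereas ordinary least squares on the same data would have its slope and intercept visibly pulled by the single contaminated point.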

10.
The Pearson-type VII distributions (containing the Student's t distributions) are becoming increasingly prominent and are being considered as competitors to the normal distribution. Motivated by real examples in decision sciences, Bayesian statistics, probability theory and physics, a new Pearson-type VII distribution is introduced by taking the product of two Pearson-type VII pdfs. Various structural properties of this distribution are derived, including its cdf, moments, mean deviation about the mean, mean deviation about the median, entropy, asymptotic distribution of the extreme order statistics, maximum likelihood estimates and the Fisher information matrix. Finally, an application to a Bayesian testing problem is illustrated.

11.
We consider the problem of making statistical inference about the mean of a normal distribution based on a random sample of quantized (digitized) observations. This problem arises, for example, in a measurement process with errors drawn from a normal distribution and with a measurement device or process with a known resolution, such as the resolution of an analog-to-digital converter or another digital instrument. In this paper we investigate the effect of quantization on subsequent statistical inference about the true mean. If the standard deviation of the measurement error is large with respect to the resolution of the indicating measurement device, the effect of quantization (digitization) diminishes and standard statistical inference is still valid. Hence, in this paper we consider situations where the standard deviation of the measurement error is relatively small. By Monte Carlo simulations we compare small-sample properties of the interval estimators of the mean based on the standard approach (i.e. ignoring the fact that the measurements have been quantized) with some recently suggested methods, including interval estimators based on the maximum likelihood approach and the fiducial approach. The paper extends the original study by Hannig et al. (2007).
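The maximum likelihood approach referred to in this abstract treats each quantized reading as an interval observation: a reading y with resolution Δ means the true value fell in [y − Δ/2, y + Δ/2). A minimal illustrative sketch with known σ (the fiducial approach compared in the paper is not shown; the names `quantized_loglik` and the golden-section search are this sketch's own):

```python
import math
import random

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def quantized_loglik(mu, sigma, ys, delta):
    """Log-likelihood of quantized normal readings with resolution delta."""
    ll = 0.0
    for y in ys:
        p = (norm_cdf((y + delta / 2.0 - mu) / sigma)
             - norm_cdf((y - delta / 2.0 - mu) / sigma))
        ll += math.log(max(p, 1e-300))    # guard against underflow
    return ll

random.seed(5)
delta, sigma, mu_true = 1.0, 0.3, 0.37    # coarse resolution relative to sigma
ys = [delta * round(random.gauss(mu_true, sigma) / delta) for _ in range(400)]

# golden-section maximization of the log-likelihood over mu
phi = (math.sqrt(5.0) - 1.0) / 2.0
a, b = -2.0, 2.0
for _ in range(80):
    c, d = b - phi * (b - a), a + phi * (b - a)
    if quantized_loglik(c, sigma, ys, delta) > quantized_loglik(d, sigma, ys, delta):
        b = d
    else:
        a = c
mu_hat = (a + b) / 2.0
```

With σ well below Δ, almost all mass of each observation sits in one or two cells, which is exactly the regime where the naive sample mean of the rounded values becomes biased and the interval likelihood above remains valid.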

12.
This paper deals with the problem of predicting the sth record value based on the first m record values (s > m) when the observations are from the exponential distribution. Various estimates for the sth record value are obtained and their mean square errors are compared.

13.
Summary Let (Ω, A) be a measurable space, let Θ be an open set in R^k, and let {P_θ; θ ∈ Θ} be a family of probability measures defined on A. Let μ be a σ-finite measure on A, and assume that P_θ ≪ μ for each θ ∈ Θ. Let us denote a specified version of dP_θ/dμ by f(ω; θ). In many large sample problems in statistics, where a study of the log-likelihood is important, it has been convenient to impose conditions on f(ω; θ) similar to those used by Cramér [2] to establish the consistency and asymptotic normality of maximum likelihood estimates. These are of a purely analytical nature, involving two or three pointwise derivatives of ln f(ω; θ) with respect to θ. Assumptions of this nature do not have any clear probabilistic or statistical interpretation. In [10], LeCam introduced the concept of differentially asymptotically normal (DAN) families of distributions. One of the basic properties of such a family is the form of the asymptotic expansion, in the probability sense, of the log-likelihoods. Roussas [14] and LeCam [11] give conditions under which certain Markov processes, and sequences of independent identically distributed random variables, respectively, form DAN families of distributions. In both of these papers one of the basic assumptions is the differentiability in quadratic mean of a certain random function. This seems to be a more appealing type of assumption because of its probabilistic nature. In this paper, we shall prove a theorem involving differentiability in quadratic mean of random functions. This is done in Section 2. Then, by confining attention to the special case when the random function is that considered by LeCam and Roussas, we will be able to show that the standard conditions of Cramér type are actually stronger than the conditions of LeCam and Roussas in that they imply the existence of the necessary quadratic mean derivative. The relevant discussion is found in Section 3. This research was supported by the National Science Foundation, Grant GP-20036.

14.
This article proposes a three-step procedure to estimate portfolio return distributions under the multivariate Gram–Charlier (MGC) distribution. The method combines quasi-maximum likelihood (QML) estimation for the conditional means and variances with method of moments (MM) estimation for the remaining density parameters, including the correlation coefficients. The procedure yields consistent estimates even under density misspecification and sidesteps the so-called ‘curse of dimensionality’ of multivariate modelling. Furthermore, the MGC distribution represents a flexible and general approximation to the true distribution of portfolio returns and accounts for all its empirical regularities. As an illustration, the procedure is applied to a portfolio composed of three European indices. The MM estimation of the MGC (MGC-MM) is compared with traditional maximum likelihood estimation of both the MGC and the multivariate Student's t (benchmark) densities. A simulation of Value-at-Risk (VaR) performance for an equally weighted portfolio at the 1 and 5% levels indicates that the MGC-MM method provides reasonable approximations to the true empirical VaR. The procedure therefore seems to be a useful tool for risk managers and practitioners.

15.
Direct importance estimation for covariate shift adaptation
A situation where training and test samples follow different input distributions is called covariate shift. Under covariate shift, standard learning methods such as maximum likelihood estimation are no longer consistent—weighted variants according to the ratio of test and training input densities are consistent. Therefore, accurately estimating the density ratio, called the importance, is one of the key issues in covariate shift adaptation. A naive approach to this task is to first estimate training and test input densities separately and then estimate the importance by taking the ratio of the estimated densities. However, this naive approach tends to perform poorly since density estimation is a hard task particularly in high dimensional cases. In this paper, we propose a direct importance estimation method that does not involve density estimation. Our method is equipped with a natural cross validation procedure and hence tuning parameters such as the kernel width can be objectively optimized. Furthermore, we give rigorous mathematical proofs for the convergence of the proposed algorithm. Simulations illustrate the usefulness of our approach.
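A toy version of the direct-estimation idea can be sketched as follows. This is only a schematic one-dimensional gradient-ascent variant in the spirit of the paper's method (Gaussian kernels centred on test points, normalized so the importance averages to one over the training sample); the paper's actual algorithm, its convergence proofs, and its cross-validation procedure are more refined, and the name `fit_importance` is this sketch's own.

```python
import math
import random

def fit_importance(x_tr, x_te, width=0.5, lr=0.05, iters=300):
    """Model w(x) = sum_j alpha_j K(x, c_j) with kernels on test points;
    maximize the mean log-importance over the test sample subject to
    mean_train[w] = 1, via projected gradient ascent on alpha >= 0."""
    K = lambda x, c: math.exp(-((x - c) ** 2) / (2.0 * width ** 2))
    centers = list(x_te)
    A_te = [[K(x, c) for c in centers] for x in x_te]
    A_tr = [[K(x, c) for c in centers] for x in x_tr]

    def normalize(al):          # enforce mean importance 1 on training data
        m = sum(sum(a * k for a, k in zip(al, row)) for row in A_tr) / len(A_tr)
        return [a / m for a in al]

    alpha = normalize([1.0] * len(centers))
    for _ in range(iters):
        grad = [0.0] * len(alpha)
        for row in A_te:        # gradient of mean log-importance over test set
            w = sum(a * k for a, k in zip(alpha, row))
            for j, k in enumerate(row):
                grad[j] += k / (w * len(A_te))
        alpha = normalize([max(a + lr * g, 0.0) for a, g in zip(alpha, grad)])
    return lambda x: sum(a * K(x, c) for a, c in zip(alpha, centers))

random.seed(6)
x_train = [random.gauss(0.0, 1.0) for _ in range(100)]
x_test = [random.gauss(1.0, 0.5) for _ in range(50)]
w = fit_importance(x_train, x_test)
# w is large where test data are dense relative to the training data
```

Note that no density is ever estimated: the ratio is fitted in one shot, which is precisely the point of the abstract's argument against the naive two-step approach.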

16.
In this paper, we propose a local Whittle likelihood estimator for spectral densities of non-Gaussian processes and a local Whittle likelihood ratio test statistic for the problem of testing whether the spectral density of a non-Gaussian stationary process belongs to a parametric family or not. Introducing a local Whittle likelihood of a spectral density f_θ(λ) around λ, we propose a local estimator θ̂ = θ̂(λ) of θ which maximizes the local Whittle likelihood around λ, and use f_{θ̂(λ)}(λ) as an estimator of the true spectral density. For the testing problem, we use a local Whittle likelihood ratio test statistic based on the local Whittle likelihood estimator. The asymptotics of these statistics are elucidated. It is shown that their asymptotic distributions do not depend on the non-Gaussianity of the processes. Because our models include nonlinear stationary time series models, we can apply the results to stationary GARCH processes. The advantage of the proposed estimator is demonstrated by a few simulated numerical examples.

17.
The present paper contains some results of the application of a method of statistical data processing based on the use of the signs of deviations. The basis of this method is the sign technique developed in [1] for linear regression. Interest in this method is due to the following reasons: for an almost arbitrary (and unknown to the experimenter) law of distribution of the observation errors, one succeeds in constructing distribution-free rules for testing hypotheses and interval estimation. Under very weak constraints on the properties of the initial distribution, these statistical criteria are in some sense optimal [2]. In addition, the sign methods are stable with respect to outliers and other noise. Translated from Statisticheskie Metody Otsenivaniya i Proverki Gipotez, pp. 131–142, Perm, 1990.

18.
This paper focuses on the specification of the measurement error distribution and the distribution of the true predictors in generalized linear models when the predictors are subject to measurement errors. The standard measurement error model typically assumes that the measurement error distribution and the distribution of covariates unobservable in the main study are normal. To make the model flexible enough, we instead assume that the measurement error distribution is multivariate t and the distribution of the true covariates is a finite mixture of normal densities. A likelihood-based method is developed to estimate the regression parameters. However, direct maximization of the marginal likelihood is numerically difficult, so as an alternative we apply the EM algorithm, which makes the computation of the likelihood estimates feasible. The performance of the proposed model is investigated by a simulation study.

19.
Summary This paper presents the maximum likelihood estimators (MLEs) of the Lorenz curve and Gini index of the exponential distribution, their exact distributions and moments. All these MLEs are shown to converge almost surely and in the rth mean. Further, their asymptotic distributions are obtained. Here we use only very simple arguments to derive certain results that are very useful in the statistical study of ‘inequality’.

20.
Finite mixtures of Markov processes with densities belonging to exponential families are introduced. Quasi-likelihood and maximum likelihood methods are used to estimate the parameters of the mixing distributions and of the component distributions. The EM algorithm is used to compute the ML estimates. Mixtures of autoregressive processes and of two-state Markov chains are discussed as specific examples. Simulation results on the comparison of quasi-likelihood and ML estimates are reported.
