Similar Literature
1.
The author addresses two previously unresolved issues in maximum likelihood estimation (MLE) for multidimensional scaling (MDS). First, a theoretically consistent error model for nonmetric maximum likelihood MDS is proposed. In particular, theoretical arguments are given that the disturbance should be multiplicative with distance when a stochastic choice model is used on rank-ordered similarity data. This assumption implies that the systematic component of similarity in the rank order is a logarithmic function of the distances between stimuli. Second, a problem with the identification conditions of the maximum likelihood estimators is raised. The author provides a set of constraints that guarantees identification in MLE and produces more desirable asymptotic confidence regions that are parameter independent. An example using perceptions of business schools illustrates these ideas and demonstrates the computational tractability of the MLE approach to MDS.
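A hedged illustration of the multiplicative-error claim (notation mine, not taken from the paper): if the observed dissimilarity is the true distance perturbed by a positive multiplicative disturbance, taking logarithms makes the systematic component logarithmic in distance,
\[
\tilde d_{ij} = d_{ij}\,\varepsilon_{ij}, \quad \varepsilon_{ij} > 0
\qquad\Longrightarrow\qquad
\log \tilde d_{ij} = \log d_{ij} + \log \varepsilon_{ij},
\]
so a rank-order (stochastic choice) model applied to the $\tilde d_{ij}$ depends on the stimuli only through $\log d_{ij}$.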

2.
Summary. We consider consistency and asymptotic normality of maximum likelihood estimators (MLE) for parameters of a Lévy process of the discontinuous type. The MLE are based on a single realization of the process on a given interval [0,t]. Depending on properties of the Lévy measure we either consider the MLE corresponding to jumps of size greater than ε and, keeping t fixed, we let ε tend to 0, or we consider the MLE corresponding to the complete information of the realization of the process on [0,t] and let t tend to ∞. The results of this paper improve in both generality and rigor previous asymptotic estimation results for such processes.

3.
In this paper we deal with maximum likelihood estimation (MLE) of the parameters of a Pareto mixture. Standard MLE procedures are difficult to apply in this setup, because the distributions of the observations do not have common support. We study the properties of the estimators under different hypotheses; in particular, we show that, when all the parameters are unknown, the estimators can be found by maximizing the profile likelihood function. Then we turn to the computational aspects of the problem, and develop three alternative procedures: an EM-type algorithm, a simulated annealing algorithm, and an algorithm based on cross-entropy minimization. The work is motivated by an application in the operational risk measurement field: we fit a Pareto mixture to operational losses recorded by a bank in two different business lines. Under the assumption that each population follows a Pareto distribution, the appropriate model is a mixture of Pareto distributions where all the parameters have to be estimated.
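For concreteness, a k-component Pareto mixture density of the kind described might be written as (a sketch under my own parameterization, not necessarily the authors'):
\[
f(x) = \sum_{j=1}^{k} \pi_j \,\frac{\alpha_j x_{m,j}^{\alpha_j}}{x^{\alpha_j+1}}\,\mathbf{1}\{x \ge x_{m,j}\},
\qquad \pi_j \ge 0,\ \ \sum_{j} \pi_j = 1,
\]
where different threshold parameters $x_{m,j}$ give the components different supports, which is exactly what breaks the standard MLE regularity conditions.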

4.
We consider some inference problems concerning the drift parameters of the multi-factor Vasicek model (or multivariate Ornstein–Uhlenbeck process). For example, in modeling interest rates, the Vasicek model asserts that the term structure of interest rates is not just a single process, but rather a superposition of several analogous processes. This motivates us to develop an improved estimation theory for the drift parameters when homogeneity of several parameters may hold. However, the information regarding the equality of these parameters may be imprecise. In this context, we consider Stein-rule (or shrinkage) estimators that allow us to improve on the performance of the classical maximum likelihood estimator (MLE). Under an asymptotic distributional quadratic risk criterion, their relative dominance is explored and assessed. We illustrate the suggested methods by analyzing interbank interest rates of three European countries. Further, a simulation study illustrates the behavior of the suggested method for observation periods of small and moderate lengths of time. Our analytical and simulation results demonstrate that shrinkage estimators (SEs) provide excellent estimation accuracy and outperform the MLE uniformly. An overriding theme of this paper is that the SEs provide powerful extensions of their classical counterparts.
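A rough, hedged sketch of the generic Stein-rule construction (notation and this specific form are illustrative, not taken from the paper): with $\hat\theta$ the unrestricted MLE of the drift vector, $\tilde\theta$ the restricted estimator computed under the homogeneity restriction, and $D_n$ a test statistic for that restriction, a shrinkage estimator takes the form
\[
\hat\theta^{S} = \tilde\theta + \left(1 - \frac{c}{D_n}\right)\left(\hat\theta - \tilde\theta\right),
\]
so that the estimator moves toward the restricted value when the data do not contradict homogeneity (small $D_n$) and stays close to the MLE otherwise.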

5.
This paper proposes a time-series model that daily stock returns may follow in the presence of price limits (the limit-up/limit-down system): a double-limit (doubly censored) Tobit autoregressive GARCH model. Maximum likelihood estimation (MLE) is developed for this model, and Monte Carlo experiments are used to study the properties of the maximum likelihood estimators. As an application of the model, the parameters are estimated for the daily returns of a stock traded on the Shanghai stock market.
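A minimal sketch of a censored (Tobit-type) likelihood with GARCH variance under symmetric daily price limits ±c; the AR(1)-GARCH(1,1) specification, the function name negloglik, and the treatment of censored values in the recursion are my own simplifications, not the paper's exact model:

    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import minimize

    def negloglik(params, r, c):
        # Doubly censored AR(1)-GARCH(1,1) negative log-likelihood;
        # r: observed daily returns, censored at the price limits -c and +c.
        phi, omega, alpha, beta = params
        T = len(r)
        h = np.empty(T)
        h[0] = np.var(r)
        eps_prev, nll = 0.0, 0.0
        for t in range(1, T):
            h[t] = omega + alpha * eps_prev**2 + beta * h[t - 1]
            mu, s = phi * r[t - 1], np.sqrt(h[t])
            if r[t] >= c:            # hit the upper limit: only P(X >= c) is observed
                nll -= norm.logsf((c - mu) / s)
            elif r[t] <= -c:         # hit the lower limit: only P(X <= -c) is observed
                nll -= norm.logcdf((-c - mu) / s)
            else:                    # uncensored return
                nll -= norm.logpdf(r[t], loc=mu, scale=s)
            eps_prev = r[t] - mu     # simplification: censored values enter the recursion as recorded
        return nll

    # e.g. minimize(negloglik, x0=[0.0, 1e-5, 0.05, 0.90], args=(returns, 0.10), method="Nelder-Mead")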

6.
We provide in this paper asymptotic theory for the multivariate GARCH(p,q) process. Strong consistency of the quasi-maximum likelihood estimator (MLE) is established by appealing to conditions given by Jeantheau (Econometric Theory 14 (1998), 70) in conjunction with a result given by Boussama (Ergodicity, mixing and estimation in GARCH models, Ph.D. Dissertation, University of Paris 7, 1998) concerning the existence of a stationary and ergodic solution to the multivariate GARCH(p,q) process. We prove asymptotic normality of the quasi-MLE when the initial state is either stationary or fixed.
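For reference, the Gaussian quasi-log-likelihood maximized in this setting has the standard form (up to constants; a standard fact rather than a detail specific to this paper):
\[
\ell_T(\theta) = -\frac{1}{2}\sum_{t=1}^{T}\left(\log\det H_t(\theta) + \varepsilon_t' H_t(\theta)^{-1}\varepsilon_t\right),
\]
where $H_t(\theta)$ is the conditional covariance matrix implied by the multivariate GARCH(p,q) recursion; the quasi-MLE maximizes $\ell_T$ even when the innovations are not Gaussian.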

7.
Bayes estimation of the shape parameters of the two-parameter exponentiated Weibull distribution under Type-II censoring
Under different loss functions, this paper studies Bayes estimation of the shape parameters of the two-parameter exponentiated Weibull distribution (EWD). Based on Type-II censored samples, with one shape parameter α assumed known, explicit Bayes estimators of the other shape parameter θ are derived under three different loss functions, and the Bayes point estimate of the reliability function is obtained. Finally, Monte Carlo simulation is used to compare the Bayes estimators with the maximum likelihood estimator. The results show that the Bayes estimator under LINEX loss is more accurate than the MLE.
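For reference (standard results, not taken from the paper itself), the LINEX loss and the corresponding Bayes estimator are
\[
L(\Delta) = e^{a\Delta} - a\Delta - 1, \quad \Delta = \hat\theta - \theta, \qquad
\hat\theta_{B} = -\frac{1}{a}\,\log E\!\left[e^{-a\theta}\mid \text{data}\right],
\]
so the Bayes estimator under LINEX loss penalizes over- and under-estimation asymmetrically, which is often appropriate for reliability quantities.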

8.
Abstract

Maximum pseudo-likelihood estimation has hitherto been viewed as a practical but flawed alternative to maximum likelihood estimation, necessary because the maximum likelihood estimator is too hard to compute, but flawed because of its inefficiency when the spatial interactions are strong. We demonstrate that a single Newton-Raphson step starting from the maximum pseudo-likelihood estimator produces an estimator which is close to the maximum likelihood estimator in terms of its actual value, attained likelihood, and efficiency, even in the presence of strong interactions. This hybrid technique greatly increases the practical applicability of pseudo-likelihood-based estimation. Additionally, in the case of spatial point processes, we propose a proper maximum pseudo-likelihood estimator which is different from the conventional one. The proper maximum pseudo-likelihood estimator clearly outperforms the conventional one when the spatial interactions are strong.
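A minimal sketch of the hybrid idea, assuming routines score and hessian for the log-likelihood derivatives are available (the names are placeholders; in practice these derivatives may themselves be Monte Carlo approximations):

    import numpy as np

    def one_step_from_mple(theta_mple, score, hessian):
        # One Newton-Raphson step on the log-likelihood, started at the
        # maximum pseudo-likelihood estimate.
        g = score(theta_mple)        # gradient of the log-likelihood at the MPLE
        H = hessian(theta_mple)      # Hessian of the log-likelihood at the MPLE
        return theta_mple - np.linalg.solve(H, g)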

9.
This paper is intended as an investigation of parametric estimation for randomly right-censored data. In parametric estimation, the Kullback-Leibler information is used as a measure of the divergence of the true data-generating distribution relative to a distribution in an assumed parametric model M. When the data are uncensored, the maximum likelihood estimator (MLE) is a consistent estimator of the parameter minimizing the Kullback-Leibler information, even if the assumed model M does not contain the true distribution. We call this property minimum Kullback-Leibler information consistency (MKLI-consistency). However, the MLE obtained by maximizing the likelihood function based on the censored data is not MKLI-consistent. As an alternative to the MLE, Oakes (1986, Biometrics, 42, 177-182) proposed an estimator termed the approximate maximum likelihood estimator (AMLE) due to its computational advantage and potential for robustness. We show MKLI-consistency and asymptotic normality of the AMLE under misspecification of the parametric model. In a simulation study, we investigate the mean square errors of these two estimators and of an estimator obtained by treating a jackknife-corrected Kaplan-Meier integral as the log-likelihood. On the basis of the simulation results and the asymptotic results, we discuss comparisons among these estimators. We also derive information criteria for the MLE and the AMLE under censorship, which can be used not only for selecting models but also for selecting estimation procedures.
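For reference, the divergence measure used here is
\[
\mathrm{KL}(g \,\|\, f_\theta) = \int g(x)\,\log\frac{g(x)}{f_\theta(x)}\,dx,
\]
and MKLI-consistency means that the estimator converges to $\theta^* = \arg\min_\theta \mathrm{KL}(g\,\|\,f_\theta)$ even when the assumed family $\{f_\theta\}$ does not contain the true density $g$.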

10.
Variance components estimation and mixed model analysis are central themes in statistics with applications in numerous scientific disciplines. Despite the best efforts of generations of statisticians and numerical analysts, maximum likelihood estimation (MLE) and restricted MLE of variance component models remain numerically challenging. Building on the minorization–maximization (MM) principle, this article presents a novel iterative algorithm for variance components estimation. Our MM algorithm is trivial to implement and competitive on large data problems. The algorithm readily extends to more complicated problems such as linear mixed models, multivariate response models possibly with missing data, maximum a posteriori estimation, and penalized estimation. We establish the global convergence of the MM algorithm to a Karush–Kuhn–Tucker point and demonstrate, both numerically and theoretically, that it converges faster than the classical EM algorithm when the number of variance components is greater than two and all covariance matrices are positive definite.
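A hedged sketch, assuming the variance-component formulation $y \sim N(X\beta,\ \Omega)$ with $\Omega = \sum_i \sigma_i^2 V_i$; the multiplicative update below is my paraphrase of a common MM-type iteration for such models, not necessarily the article's exact algorithm:
\[
\sigma_i^{2\,(t+1)} = \sigma_i^{2\,(t)}
\sqrt{\frac{(y - X\beta)'\,\Omega^{-1} V_i \Omega^{-1}\,(y - X\beta)}
{\operatorname{tr}\!\left(\Omega^{-1} V_i\right)}},
\]
with $\beta$ and $\Omega$ evaluated at the current iterate. The ratio under the square root exceeds one exactly when increasing $\sigma_i^2$ increases the log-likelihood, which is what drives the monotone ascent of an MM scheme.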

11.

We introduce a new type of point process model to describe the incidence of contagious diseases. The model incorporates the premise that when a disease occurs at low frequency in the population, such as in the primary stages of an outbreak, then anyone with the disease is likely to have a high rate of transmission to others, whereas when the disease is prevalent, the transmission rate is lower due to prevention measures and a relatively high percentage of previous exposure in the population. The model is said to be recursive, in the sense that the conditional intensity at any time depends on the productivity associated with previous points, and this productivity in turn depends on the conditional intensity at those points. Basic properties of the model are derived, estimation and simulation are discussed, and the recursive model is shown to fit well to California Rocky Mountain Spotted Fever data.
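A hedged sketch of what a recursive conditional intensity of this type can look like (notation mine, not necessarily the authors' exact specification):
\[
\lambda(t) = \mu + \sum_{t_i < t} H\!\left(\lambda(t_i)\right)\, g(t - t_i),
\]
where $g$ is a triggering density and the productivity $H$ is a decreasing function (for example $H(x) = \kappa x^{-\alpha}$), so that events occurring while the intensity is low receive high productivity, matching the premise about transmission in the early stages of an outbreak.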


12.
A space-time random set is defined and methods for estimating its parameters are investigated. The evolution in discrete time is described by a state-space model. The observed output is a planar union of interacting discs given by a probability density with respect to a reference Poisson process of discs. The state vector is to be estimated together with auxiliary parameters of transitions caused by a random walk. Three estimation methods are considered, the first of which is maximum likelihood estimation (MLE) for individual outputs at fixed times. In the space-time model the state vector can be estimated by the particle filter (PF), where MLE is used to estimate the auxiliary parameters. In the present paper the aim is to compare MLE and PF with particle Markov chain Monte Carlo (PMCMC). From the group of PMCMC methods we specifically use the particle marginal Metropolis-Hastings (PMMH) algorithm, which updates the state vector and the auxiliary parameters simultaneously. A simulation study is presented in which all estimators are compared by means of the integrated mean square error. New data are then simulated repeatedly from the model with parameters estimated by PMMH, and the fit with the original model is quantified by means of the spherical contact distribution function.
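For reference, the PMMH acceptance probability has the standard PMCMC form (a general result, not specific to this paper): a proposed $\theta'$, with an accompanying state trajectory drawn from the particle filter and $\hat p(y_{1:T}\mid\theta')$ the particle-filter estimate of the marginal likelihood, is accepted with probability
\[
\min\left\{1,\ \frac{\hat p(y_{1:T}\mid\theta')\,p(\theta')\,q(\theta\mid\theta')}
{\hat p(y_{1:T}\mid\theta)\,p(\theta)\,q(\theta'\mid\theta)}\right\},
\]
which leaves the exact joint posterior of the state vector and the auxiliary parameters invariant even though the likelihood is only estimated.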

13.
Analysis of rounded data from dependent sequences
Observations on continuous populations are often rounded when recorded due to the precision of the recording mechanism. However, classical statistical approaches have ignored the effect caused by rounding errors. When the observations are independent and identically distributed, exact maximum likelihood estimation (MLE) can be employed. However, if rounded data come from a dependent structure, the MLE of the parameters is difficult to calculate, since the integral involved in the likelihood equation is intractable. This paper presents and examines a new approach to parameter estimation, termed "short, overlapping series" (SOS), to deal with α-mixing models in the presence of rounding errors. We establish the asymptotic properties of the SOS estimators when the innovations are normally distributed. Comparisons of this new approach with other existing techniques in the literature are also made by simulation with samples of moderate sizes.
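For the i.i.d. case mentioned above, the exact likelihood of rounded data is a product of interval probabilities; for example, for normal observations rounded to a grid of width $h$ (a standard construction, not the paper's SOS estimator):
\[
L(\mu,\sigma) = \prod_{i=1}^{n}\left[\Phi\!\left(\frac{y_i + h/2 - \mu}{\sigma}\right) - \Phi\!\left(\frac{y_i - h/2 - \mu}{\sigma}\right)\right],
\]
where $y_i$ are the recorded (rounded) values. Under dependence, the analogous likelihood involves an $n$-dimensional integral over the rounding cells, which is what makes direct MLE intractable and motivates the SOS approach.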

14.
Estimating financial risk is a critical issue for banks and insurance companies. Recently, quantile estimation based on extreme value theory (EVT) has found a successful domain of application in such a context, outperforming other methods. Given a parametric model provided by EVT, a natural approach is maximum likelihood estimation. Although the resulting estimator is asymptotically efficient, often the number of observations available to estimate the parameters of the EVT models is too small to make the large sample property trustworthy. In this paper, we study a new estimator of the parameters, the maximum Lq-likelihood estimator (MLqE), introduced by Ferrari and Yang (Estimation of tail probability via the maximum Lq-likelihood method, Technical Report 659, School of Statistics, University of Minnesota, 2007). We show that the MLqE outperforms the standard MLE when estimating tail probabilities and quantiles of the generalized extreme value (GEV) and the generalized Pareto (GP) distributions. First, we assess the relative efficiency between the MLqE and the MLE for various sample sizes, using Monte Carlo simulations. Second, we analyze the performance of the MLqE for extreme quantile estimation using real-world financial data. The MLqE is characterized by a distortion parameter q and extends the traditional log-likelihood maximization procedure. When q→1, the new estimator approaches the traditional maximum likelihood estimator (MLE), recovering its desirable asymptotic properties; when q ≠ 1 and the sample size is moderate or small, the MLqE successfully trades bias for variance, resulting in an overall gain in terms of accuracy (mean squared error).
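A minimal sketch of the MLqE for the generalized Pareto case, assuming scipy's genpareto parameterization; the function names and starting values are illustrative only:

    import numpy as np
    from scipy.stats import genpareto
    from scipy.optimize import minimize

    def lq(u, q):
        # Lq-"logarithm": (u^(1-q) - 1) / (1 - q), which tends to log(u) as q -> 1.
        return np.log(u) if np.isclose(q, 1.0) else (u**(1.0 - q) - 1.0) / (1.0 - q)

    def neg_lq_likelihood(params, x, q):
        xi, sigma = params
        if sigma <= 0:
            return np.inf
        dens = genpareto.pdf(x, c=xi, scale=sigma)
        if np.any(dens <= 0):            # observation outside the support
            return np.inf
        return -np.sum(lq(dens, q))      # q = 1 recovers the usual log-likelihood

    # e.g. minimize(neg_lq_likelihood, x0=[0.1, 1.0], args=(excesses, 0.95), method="Nelder-Mead")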

15.
Wong and Yu [Generalized MLE of a joint distribution function with multivariate interval-censored data, J. Multivariate Anal. 69 (1999) 155-166] discussed generalized maximum likelihood estimation of the joint distribution function of a multivariate random vector whose coordinates are subject to interval censoring. They established uniform consistency of the generalized MLE (GMLE) of the distribution function under the assumption that the random vector is independent of the censoring vector and that both of the vector distributions are discrete. We relax these assumptions and establish consistency results of the GMLE under a multivariate mixed case interval censorship model. van der Vaart and Wellner [Preservation theorems for Glivenko-Cantelli and uniform Glivenko-Cantelli class, in: E. Gine, D.M. Mason, J.A. Wellner (Eds.), High Dimensional Probability, vol. II, Birkhäuser, Boston, 2000, pp. 115-133] and Yu [Consistency of the generalized MLE with multivariate mixed case interval-censored data, Ph.D. Dissertation, Binghamton University, 2000] independently proved strong consistency of the GMLE in the L1(μ)-topology, where μ is a measure derived from the joint distribution of the censoring variables. We establish strong consistency of the GMLE in the topologies of weak convergence and pointwise convergence, and eventually uniform convergence under appropriate distributional assumptions and regularity conditions.

16.
This is a review of some recent results on parameter estimation from continuous-time observations for two observation models. The first is the so-called signal in white Gaussian noise, and the second is the inhomogeneous Poisson process. The main question in all statements is: what are the properties of the MLE if there is a misspecification in the regularity conditions? We consider three types of regularity: smooth signals, signals with cusp-type singularity, and discontinuous signals. We suppose that the statistician assumes one type of regularity/singularity, but the real observations contain signals with a different type of singularity/regularity. For example, the theoretical (assumed) model has a discontinuous signal, but the real observed signal has a cusp-type singularity. We describe the asymptotic behavior of the MLE in such situations.

17.
Maximum likelihood estimation (MLE) of hyperparameters in Gaussian process regression, as well as in other computational models, usually requires the evaluation of the logarithm of the determinant of a positive-definite matrix (denoted by C hereafter). In general, the exact computation of log det(C) requires O(N³) operations, where N is the matrix dimension. An approximation of log det(C) can be computed with O(N²) operations based on a power-series expansion and a randomized trace estimator. In this paper, the accuracy and effectiveness of using uniformly distributed seeds for this approximation are investigated. The research shows that uniform-seed based approximation is an equally good alternative to Gaussian-seed based approximation, having slightly better approximation accuracy and smaller variance. Gaussian process regression examples also substantiate the effectiveness of such a uniform-seed based log-det approximation scheme.
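A minimal sketch of the kind of scheme described (power-series expansion of the log-determinant combined with a randomized trace estimator whose probe vectors are uniformly distributed); the function name, the scaling choice, and the truncation lengths are my own assumptions, not the paper's implementation:

    import numpy as np

    def logdet_approx(C, n_terms=30, n_probes=30, seed=None):
        # Approximates log det(C) for symmetric positive-definite C via
        # log det(C) = N*log(alpha) + tr(log(I - B)), with B = I - C/alpha, and
        # tr(log(I - B)) = -sum_k tr(B^k)/k, where each tr(B^k) is estimated by a
        # Hutchinson-type estimator using uniform probe vectors of unit variance.
        rng = np.random.default_rng(seed)
        N = C.shape[0]
        alpha = np.linalg.norm(C, 1)       # induced 1-norm bounds the spectral radius
        B = np.eye(N) - C / alpha
        total = N * np.log(alpha)
        # Uniform seeds on [-sqrt(3), sqrt(3)] have unit variance, so E[z' M z] = tr(M).
        Z = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), size=(N, n_probes))
        V = Z.copy()
        for k in range(1, n_terms + 1):
            V = B @ V                      # columns now hold B^k z
            total -= np.einsum('ij,ij->j', Z, V).mean() / k
        return total

    # e.g. compare logdet_approx(C) with np.linalg.slogdet(C)[1] for a well-conditioned C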

18.
We consider the constructive approximation of a non-linear operator that is known on a bounded but not necessarily compact set. Our main result can be regarded as an extension of the classical Stone-Weierstrass Theorem and also shows that the approximation is stable to small disturbances.

This problem arises in the modelling of real dynamical systems where an input-output mapping is known only on some bounded subset of the input space. In such cases it is desirable to construct a model of the real system with a complete input-output map that preserves, in some approximate sense, the known mapping. The model is normally constructed from an algebra of elementary continuous functions.

We will assume that the input space is a separable Hilbert space. To solve the problem we introduce a special weak topology and show that uniform continuity of the given operator in the weak topology provides an alternative compactness condition that is sufficient to justify the desired approximation.

19.
We discuss some inference problems associated with the fractional Ornstein–Uhlenbeck (fO–U) process driven by the fractional Brownian motion (fBm). In particular, we are concerned with the estimation of the drift parameter, assuming that the Hurst parameter $H$ is known and is in $[1/2, 1)$. Under this setting we compute the distributions of the maximum likelihood estimator (MLE) and the minimum contrast estimator (MCE) for the drift parameter, and explore their distributional properties by paying attention to the influence of $H$ and the sampling span $M$. We also deal with the ordinary least squares estimator (OLSE) and examine the asymptotic relative efficiency. It is shown that the MCE is asymptotically efficient, while the OLSE is inefficient. We also consider the unit root testing problem in the fO–U process and compute the power of the tests based on the MLE and MCE.
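For orientation, one standard way of writing the model and a least-squares-type drift estimator is (sign and parameterization conventions are mine and may differ from the paper's):
\[
dX_t = -\kappa X_t\,dt + \sigma\,dB^H_t, \qquad
\hat\kappa_{\mathrm{OLS}} = -\,\frac{\int_0^M X_t\,dX_t}{\int_0^M X_t^2\,dt},
\]
while the MLE is built from the likelihood of the observed path over the sampling span $M$; the paper compares the exact distributions of such estimators as functions of $H$ and $M$.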

20.
In traditional works on numerical schemes for solving stochastic differential equations (SDEs), a global Lipschitz condition is often imposed to ensure different types of convergence. In practice, this is often too strong a condition. Brownian-motion-driven SDEs used in applications sometimes have coefficients which are only Lipschitz on compact sets, but the paths of the SDE solutions can be arbitrarily large. In this paper, we prove convergence in probability and a weak convergence result under a less restrictive assumption, that is, locally Lipschitz coefficients and no finite-time explosion. We prove that if a numerical scheme converges in probability uniformly on any compact time set (UCP) with a certain rate under a global Lipschitz condition, then UCP convergence with the same rate holds when the global Lipschitz condition is replaced by a locally Lipschitz condition plus no finite-time explosion. For the Euler scheme, weak convergence of the error process is also established. The main contribution of this paper is the proof of weak convergence of the normalized error process; the limit process is also provided. We further study the boundedness of the second moments of the weak limit process and its running supremum under both global Lipschitz and locally Lipschitz conditions.
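A minimal sketch of the Euler(-Maruyama) scheme discussed above, for a one-dimensional SDE with user-supplied drift and diffusion coefficients; the example coefficients in the usage comment are illustrative and only locally Lipschitz:

    import numpy as np

    def euler_maruyama(b, sigma, x0, T, n, seed=None):
        # Euler scheme for dX_t = b(X_t) dt + sigma(X_t) dW_t on [0, T] with n steps.
        rng = np.random.default_rng(seed)
        dt = T / n
        x = np.empty(n + 1)
        x[0] = x0
        dW = rng.normal(0.0, np.sqrt(dt), size=n)   # Brownian increments
        for k in range(n):
            x[k + 1] = x[k] + b(x[k]) * dt + sigma(x[k]) * dW[k]
        return x

    # e.g. a drift that is only locally Lipschitz but produces no finite-time explosion:
    # path = euler_maruyama(lambda x: x - x**3, lambda x: 1.0, 0.0, 1.0, 1000)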
