Similar Literature
 Found 20 similar documents (search time: 390 ms)
1.
This paper investigates the generalized least squares estimation and the maximum likelihood estimation of the parameters in a multivariate polychoric correlations model, based on data from a multidimensional contingency table. Asymptotic properties of the estimators are discussed. An iterative procedure based on the Gauss-Newton algorithm is implemented to produce the generalized least squares estimates and their standard error estimates. It is shown that, via an iteratively reweighted method, the algorithm produces the maximum likelihood estimates as well. Numerical results on the finite-sample behavior of the methods are reported.

2.
Maximum likelihood methods are important for system modeling and parameter estimation. This paper derives a recursive maximum likelihood least squares identification algorithm for systems with autoregressive moving average noise, based on the maximum likelihood principle. In this derivation, we prove that maximizing the likelihood function is equivalent to minimizing the least squares cost function. The proposed algorithm differs from the corresponding generalized extended least squares algorithm. Simulation tests show that the proposed algorithm has higher estimation accuracy than the recursive generalized extended least squares algorithm.
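The recursive structure common to this family of identification algorithms can be sketched in the simplest case, a scalar model y_t = theta·x_t + e_t with white noise. This is a generic recursive least squares illustration with made-up data, not the paper's algorithm, which additionally filters the ARMA noise terms:

```python
# Generic recursive least squares for the scalar model y_t = theta * x_t + e_t.
# Only the recursive update skeleton is shown here.
def rls(xs, ys, p0=1000.0):
    theta, P = 0.0, p0                   # parameter estimate and its (scalar) covariance
    for x, y in zip(xs, ys):
        k = P * x / (1.0 + x * P * x)    # gain
        theta += k * (y - theta * x)     # correct by the one-step prediction error
        P = (1.0 - k * x) * P            # covariance update
    return theta

# Noise-free data generated with theta = 2, so the estimate should approach 2.
theta_hat = rls([1.0, 2.0, 3.0, 4.0, 5.0], [2.0, 4.0, 6.0, 8.0, 10.0])
```

The large initial covariance `p0` encodes weak prior information, so the first few samples dominate the estimate; with noisy data the same recursion averages the noise out over time.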

3.
We present a general framework for treating categorical data with errors of observation. We show how both latent class models and models for doubly sampled data can be treated as exponential family nonlinear models. These are extended generalized linear models in which the link function is replaced by an observation-wise defined nonlinear function of the model parameters. The models are formulated in terms of structural probabilities and conditional error probabilities, thus allowing natural constraints when modelling errors of observation. We use an iteratively reweighted least squares procedure for obtaining maximum likelihood estimates. This is faster than the traditionally used EM algorithm, and the computations can be made in GLIM. As examples we analyse three sets of categorical data with errors of observation which have been analysed before by Ashford and Sowden, Goodman, and Chen, respectively.
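The iteratively reweighted least squares idea invoked here can be illustrated on the simplest generalized linear model, logistic regression. The sketch below is a generic textbook version in pure Python with invented data and a two-parameter model, not the observation-wise nonlinear models of the paper:

```python
import math

# Made-up 0/1 responses for a one-covariate logistic regression.
xs = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
ys = [0, 0, 0, 1, 0, 1, 1, 1]

def irls_logistic(xs, ys, iters=25):
    """Fit logit P(y=1|x) = b0 + b1*x by iteratively reweighted least squares
    (Fisher scoring), solving the 2x2 weighted normal equations at each step."""
    b0 = b1 = 0.0
    for _ in range(iters):
        g00 = g01 = g11 = r0 = r1 = 0.0
        for x, y in zip(xs, ys):
            eta = b0 + b1 * x
            p = 1.0 / (1.0 + math.exp(-eta))
            w = p * (1.0 - p)              # IRLS weight
            z = eta + (y - p) / w          # working response
            g00 += w; g01 += w * x; g11 += w * x * x
            r0 += w * z; r1 += w * x * z
        det = g00 * g11 - g01 * g01        # solve the 2x2 system X'WX b = X'Wz
        b0 = (g11 * r0 - g01 * r1) / det
        b1 = (g00 * r1 - g01 * r0) / det
    return b0, b1

b0, b1 = irls_logistic(xs, ys)   # slope positive, intercept negative for this data
```

Each pass is an ordinary weighted least squares solve on a linearized "working response", which is why the scheme can be run inside any weighted-regression engine such as GLIM.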

4.
Joint latent class modeling of disease prevalence and high-dimensional semicontinuous biomarker data has been proposed to study the relationship between diseases and their related biomarkers. However, statistical inference for the joint latent class modeling approach has proved very challenging because of the computational complexity of seeking maximum likelihood estimates. In this article, we propose a series of composite likelihoods for maximum composite likelihood estimation, as well as an enhanced Monte Carlo expectation-maximization (MCEM) algorithm for maximum likelihood estimation, in the context of joint latent class models. Theoretically, the maximum composite likelihood estimates are consistent and asymptotically normal. Numerically, we show that, compared with the MCEM algorithm that maximizes the full likelihood, the composite likelihood approach coupled with the quasi-Newton method not only substantially reduces the computational complexity and running time, but also retains comparable estimation efficiency.

5.
Linear mixed models and penalized least squares
Linear mixed-effects models are an important class of statistical models that are used directly in many fields of application and also serve as iterative steps in fitting other types of mixed-effects models, such as generalized linear mixed models. The parameters in these models are typically estimated by maximum likelihood or restricted maximum likelihood. In general, there is no closed-form solution for these estimates, and they must be determined by iterative algorithms such as EM iterations or general nonlinear optimization. Many of the intermediate calculations for such iterations have been expressed as generalized least squares problems. We show that an alternative representation as a penalized least squares problem has many advantageous computational properties, including the ability to evaluate explicitly a profiled log-likelihood or log-restricted-likelihood, the gradient and Hessian of this profiled objective, and an ECME update to refine this objective.
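The penalized least squares view can be made concrete in the smallest case, a one-way random-intercept model with known variance components. This is a standard textbook identity with invented numbers, not the paper's general formulation:

```python
# One-way random-intercept model y_ij = mu + u_i + e_ij with known variance
# components.  The BLUP of each u_i minimizes the penalized least squares
# criterion  sum_j (y_ij - mu - u_i)^2 + lam * u_i^2  with lam = sigma_e^2/sigma_u^2,
# giving u_i = (sum of residuals) / (n_i + lam): group means shrunk toward mu.
groups = {"a": [4.1, 3.9, 4.3], "b": [5.2, 5.0], "c": [2.8, 3.1, 3.0, 2.9]}
sigma_e2, sigma_u2 = 0.05, 1.0            # assumed known for the illustration
lam = sigma_e2 / sigma_u2

all_y = [y for ys in groups.values() for y in ys]
mu = sum(all_y) / len(all_y)              # overall mean, treated as the fixed effect

blup = {}
for g, ys in groups.items():
    resid_sum = sum(y - mu for y in ys)
    blup[g] = resid_sum / (len(ys) + lam)  # ridge-type shrinkage
```

The ridge penalty is exactly what distinguishes the mixed-model solve from an ordinary least squares fit of group dummies: small groups are shrunk more, and as sigma_u^2 grows the penalty vanishes and the group means are recovered.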

6.
In off-line quality control, the settings that minimize the variance of a quality characteristic are unknown and must be determined from an estimated dual response model of mean and variance. The present paper proposes a direct measure of the efficiency of any given design-estimation procedure for variance minimization. This not only facilitates the comparison of different design-estimation procedures, but may also provide a guideline for choosing a better solution when the estimated dual response model suggests multiple solutions. Motivated by the analysis of an industrial experiment on spray painting, the present paper also applies a class of link functions to model process variances in off-line quality control. For model fitting, a parametric distribution is employed in updating the variance estimates used in an iteratively weighted least squares procedure for mean estimation. In analysing combined array experiments, Engel and Huele (Technometrics, 1996; 39:365) used the log-link to model process variances and considered an iteratively weighted least squares procedure leading to the pseudo-likelihood estimates of variances discussed in Carroll and Ruppert (Transformation and Weighting in Regression, Chapman & Hall: New York). Their method is a special case of the approach considered in this paper. It is seen for the spray paint data that the log-link may not be satisfactory and that the class of link functions considered here substantially improves the fit to process variances. This conclusion is reached with a suggested method of comparing 'empirical variances' with the 'theoretical variances' based on the assumed model. Copyright © 2003 John Wiley & Sons, Ltd.

7.
This paper proposes a robust procedure for solving multiphase regression problems that is efficient enough to deal with data contaminated by atypical observations, whether due to measurement errors or drawn from heavy-tailed distributions. By combining the expectation-maximization algorithm with the M-estimation technique, we simultaneously derive robust estimates of the change-points and regression parameters. Since the proposed method is still not resistant to high-leverage outliers, we further suggest a modified version that first moderately trims those outliers and then applies the new procedure to the trimmed data. This study sets up two robust algorithms, using the Huber loss function and Tukey's biweight function respectively to replace the least squares criterion in the normality-based expectation-maximization algorithm, and illustrates the effectiveness and superiority of the proposed algorithms through extensive simulations and sensitivity analyses. Experimental results show the ability of the proposed method to withstand outliers and heavy-tailed distributions. Moreover, since resistance to high-leverage outliers is particularly important because of their devastating effect on fitting a regression model to data, various real-world applications show the practicability of this approach.
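The Huber-loss M-estimation idea can be sketched in its simplest form, a robust location estimate computed by iteratively reweighted least squares. This is a generic illustration with made-up data, not the paper's multiphase procedure:

```python
def huber_location(ys, c=1.345, iters=50):
    """Robust location estimate: iteratively reweighted least squares with
    Huber weights w = 1 for |r| <= c*s and w = c*s/|r| otherwise, where s is
    a MAD-based scale estimate held fixed for simplicity."""
    srt = sorted(ys)
    med = srt[len(srt) // 2]
    s = 1.4826 * sorted(abs(y - med) for y in ys)[len(ys) // 2]
    mu = med                               # start from the median
    for _ in range(iters):
        ws = [1.0 if abs(y - mu) <= c * s else c * s / abs(y - mu) for y in ys]
        mu = sum(w * y for w, y in zip(ws, ys)) / sum(ws)
    return mu

data = [2.1, 1.9, 2.0, 2.2, 1.8, 12.0]     # made-up sample with one gross outlier
est = huber_location(data)                  # stays near 2, unlike the mean (~3.67)
```

Replacing the Huber weight with Tukey's biweight (zero weight beyond the cutoff) yields the second algorithm the authors consider: it rejects gross outliers entirely rather than merely downweighting them.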

8.
This paper studies asymptotic properties of the quasi-maximum likelihood and weighted least squares estimates (QMLE and WLSE) of the conditional variance slope parameters of a strictly unstable ARCH model with periodically time-varying coefficients (PARCH for short). The model is strictly unstable in the sense that its parameters lie outside the strict periodic stationarity domain and its boundary. Obtained from the regression form of the PARCH, the WLSE is a variant of the least squares method weighted by the square of the conditional variance evaluated at any fixed value in the parameter space. In calculating the QMLE and WLSE, the conditional variance intercepts are set to arbitrary values, not necessarily the true ones. The theoretical finding is that the QMLE and WLSE are consistent and asymptotically Gaussian with the same asymptotic variance, irrespective of the fixed conditional variance intercepts and the weighting parameters. Because of its numerical complexity, the QMLE may therefore be dropped in favor of the WLSE, which has a closed form.

9.
The latent class mixture-of-experts joint model is an important method for jointly modelling longitudinal and recurrent event data when the underlying population is heterogeneous and the outcomes are non-normally distributed. The maximum likelihood estimates of the parameters in a latent class joint model are generally obtained by the EM algorithm. The joint distances between subjects and the initial classification of subjects under study are essential to finding good starting values for the EM algorithm. In this article, separate distances and joint distances of longitudinal markers and recurrent events are proposed for classification purposes, and the performance of initial classifications based on the proposed distances and on random classification is compared in a simulation study and demonstrated in an example.

10.
Log-linear modeling is a popular statistical tool for analysing a contingency table. This presentation focuses on an alternative approach to modeling ordinal categorical data. The technique, based on orthogonal polynomials, provides a much simpler method of model fitting than the conventional approach of maximum likelihood estimation, as it requires neither iterative calculation nor repeated fitting and refitting to search for the best model. Another advantage is that quadratic and higher-order effects can readily be included, in contrast to conventional log-linear models, which incorporate linear terms only.

The focus of the discussion is the application of the new parameter estimation technique to multi-way contingency tables with at least one ordered variable. This will also be done by considering singly and doubly ordered two-way contingency tables. It will be shown by example that the resulting parameter estimates are numerically similar to the corresponding maximum likelihood estimates for ordinal log-linear models.

11.
We consider a one-way analysis of covariance (ANCOVA) model with a single covariate when the distribution of the error terms is short-tailed symmetric. The maximum likelihood (ML) estimators of the parameters are intractable. We therefore employ a simple method known as modified maximum likelihood (MML) to derive estimators of the model parameters. The method is based on linearization of the intractable terms in the likelihood equations. Incorporating these linearizations into the maximum likelihood, we get the modified likelihood equations. The MML estimators, which are the solutions of these modified equations, are then obtained. Computer simulations were performed to investigate the efficiencies of the proposed estimators. The simulation results show that the proposed estimators are remarkably efficient compared with the conventional least squares (LS) estimators.

12.
Finding the “best-fitting” circle to describe a set of points in two dimensions is discussed in terms of maximum likelihood estimation. Several combinations of distributions are proposed to describe the stochastic nature of points in the plane, as the points are considered to have a common, typically unknown, center, a random radius, and random angular orientation. A Monte Carlo search algorithm over part of the parameter space is suggested for finding the maximum likelihood parameter estimates. Examples are presented, and comparisons are drawn between circles fit by this proposed method, by least squares, and by other maximum likelihood methods found in the literature.
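The least squares benchmark that maximum likelihood circle fits are usually compared against has a closed-form algebraic version (often attributed to Kåsa), in which the circle equation is linearized in its coefficients. A self-contained sketch with synthetic noiseless points, not taken from the paper:

```python
import math

def fit_circle_lsq(pts):
    """Algebraic least squares circle fit: the circle equation
    x^2 + y^2 + D*x + E*y + F = 0 is linear in (D, E, F), so the normal
    equations can be solved directly; center = (-D/2, -E/2)."""
    M = [[0.0] * 3 for _ in range(3)]
    v = [0.0] * 3
    for x, y in pts:                        # accumulate A'A and A'b
        row = (x, y, 1.0)
        b = -(x * x + y * y)
        for i in range(3):
            for j in range(3):
                M[i][j] += row[i] * row[j]
            v[i] += row[i] * b
    # Solve the 3x3 system M * (D, E, F) = v by Gaussian elimination.
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        v[i], v[p] = v[p], v[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for j in range(i, 3):
                M[r][j] -= f * M[i][j]
            v[r] -= f * v[i]
    sol = [0.0] * 3
    for i in (2, 1, 0):
        sol[i] = (v[i] - sum(M[i][j] * sol[j] for j in range(i + 1, 3))) / M[i][i]
    D, E, F = sol
    cx, cy = -D / 2.0, -E / 2.0
    return cx, cy, math.sqrt(cx * cx + cy * cy - F)

# Noiseless points on an arc of the circle centered at (1, -1) with radius 2.
pts = [(1 + 2 * math.cos(0.1 * k), -1 + 2 * math.sin(0.1 * k)) for k in range(20)]
cx, cy, r = fit_circle_lsq(pts)
```

Because the objective is linear in (D, E, F), no iteration or starting value is needed, which is precisely why this fit serves as the standard baseline for the iterative maximum likelihood methods.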

13.
Latent trait models such as item response theory (IRT) hypothesize a functional relationship between an unobservable, or latent, variable and an observable outcome variable. In educational measurement, a discrete item response is usually the observable outcome variable, and the latent variable is associated with an examinee’s trait level (e.g., skill, proficiency). The link between the two variables is called an item response function. This function, defined by a set of item parameters, models the probability of observing a given item response, conditional on a specific trait level. Typically in a measurement setting, neither the item parameters nor the trait levels are known, and so must be estimated from the pattern of observed item responses. Although a maximum likelihood approach can be taken in estimating these parameters, it usually cannot be employed directly. Instead, a method of marginal maximum likelihood (MML) is utilized, via the expectation-maximization (EM) algorithm. Alternating between an expectation (E) step and a maximization (M) step, the EM algorithm assures that the marginal log likelihood function will not decrease after each EM cycle, and will converge to a local maximum. Interestingly, the negative of this marginal log likelihood function is equal to the relative entropy, or Kullback-Leibler divergence, between the conditional distribution of the latent variables given the observable variables and the joint likelihood of the latent and observable variables. With an unconstrained optimization for the M-step proposed here, the EM algorithm as minimization of Kullback-Leibler divergence admits the convergence results due to Csiszár and Tusnády (Statistics & Decisions, 1:205–237, 1984), a consequence of the binomial likelihood common to latent trait models with dichotomous response variables. 
For this unconstrained optimization, the EM algorithm converges to a global maximum of the marginal log likelihood function, yielding an information bound that permits a fixed point of reference against which models may be tested. A likelihood ratio test between marginal log likelihood functions obtained through constrained and unconstrained M-steps is provided as a means for testing models against this bound. Empirical examples demonstrate the approach.

14.

We present a computational approach to the method of moments using Monte Carlo simulation. Simple algebraic identities are used so that all computations can be performed directly using simulation draws and computation of the derivative of the log-likelihood. We present a simple implementation using the Newton-Raphson algorithm, with the understanding that other optimization methods may be used in more complicated problems. The method can be applied to families of distributions with unknown normalizing constants and can be extended to least squares fitting when the number of observed moments exceeds the number of parameters in the model. The method can be further generalized to allow “moments” that are any function of data and parameters, including as a special case maximum likelihood for models with unknown normalizing constants or missing data. In addition to being used for estimation, our method may be useful for setting the parameters of a Bayes prior distribution by specifying moments of a distribution using prior information. We present two examples: specification of a multivariate prior distribution in a constrained-parameter family, and estimation of parameters in an image model. The former example, used for an application in pharmacokinetics, motivated this work. This work is similar to Ruppert's method in stochastic approximation; combines Monte Carlo simulation and the Newton-Raphson algorithm as in Penttinen; uses computational ideas and importance sampling identities of Gelfand and Carlin, Geyer, and Geyer and Thompson developed for Monte Carlo maximum likelihood; and has some similarities to the maximum likelihood methods of Wei and Tanner.
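Stripped of the Monte Carlo machinery, the Newton-Raphson core of a method-of-moments fit is easy to exhibit. The sketch below matches the mean of an Exponential(rate theta) distribution to a sample mean using the closed-form moment; the paper's contribution is precisely to replace such closed forms with simulation draws:

```python
# Method-of-moments fit by Newton-Raphson in the simplest one-parameter case:
# choose the rate theta of an Exponential distribution so that its model mean
# 1/theta matches the observed sample mean.
def mom_newton(sample_mean, theta0=1.0, iters=30):
    theta = theta0
    for _ in range(iters):
        g = 1.0 / theta - sample_mean     # moment discrepancy g(theta)
        dg = -1.0 / (theta * theta)       # derivative g'(theta)
        theta -= g / dg                   # Newton-Raphson step
    return theta

theta_hat = mom_newton(0.5)   # sample mean 0.5 corresponds to rate 2
```

In the simulated version, `1.0 / theta` would be replaced by the average of draws from the model at the current theta, and `dg` by a derivative estimated from the same draws via the log-likelihood identity.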

15.
The semiparametric proportional odds model for survival data is useful when the mortality rates of different groups converge over time. However, fitting the model by maximum likelihood proves computationally cumbersome for large datasets because the number of parameters exceeds the number of uncensored observations. We present here an alternative to the standard Newton-Raphson method of maximum likelihood estimation. Our algorithm, an example of a minorization-maximization (MM) algorithm, is guaranteed to converge to the maximum likelihood estimate whenever it exists. For large problems, both the algorithm and its quasi-Newton accelerated counterpart outperform Newton-Raphson by more than two orders of magnitude.
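A compact illustration of the MM principle (here in its majorize-then-minimize form, the mirror image of the minorize-then-maximize scheme used in the paper) is Weiszfeld's algorithm for the geometric median: the sum of Euclidean distances is majorized by a weighted sum of squared distances, whose minimizer is a weighted mean. This generic sketch is unrelated to the proportional odds model itself:

```python
import math

def geometric_median(pts, iters=200):
    """Weiszfeld's algorithm, a textbook MM scheme: each iterate minimizes a
    quadratic surrogate that touches the objective at the current point, so
    the sum-of-distances objective never increases."""
    x = sum(p[0] for p in pts) / len(pts)   # start from the centroid
    y = sum(p[1] for p in pts) / len(pts)
    for _ in range(iters):
        wsum = wx = wy = 0.0
        for px, py in pts:
            w = 1.0 / max(math.hypot(x - px, y - py), 1e-12)  # guard zero distance
            wsum += w; wx += w * px; wy += w * py
        x, y = wx / wsum, wy / wsum         # minimizer of the surrogate
    return x, y

pts = [(0.0, 0.0), (3.0, 0.0), (0.0, 4.0), (1.0, 1.0)]
gx, gy = geometric_median(pts)
```

The monotone-descent guarantee that makes Weiszfeld reliable is the same property the authors exploit: an MM iteration cannot overshoot, unlike a raw Newton-Raphson step.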

16.
We consider a multiple autoregressive model with non-normal error distributions, the latter being more prevalent in practice than the usually assumed normal distribution. Since the maximum likelihood equations have convergence problems (Puthenpura and Sinha, 1986), we work out modified maximum likelihood equations by expressing the maximum likelihood equations in terms of ordered residuals and linearizing intractable nonlinear functions (Tiku and Suresh, 1992). The solutions, called modified maximum likelihood estimators, are explicit functions of the sample observations and are therefore easy to compute. Under some very general regularity conditions, they are asymptotically unbiased and efficient (Vaughan and Tiku, 2000). We show that for small sample sizes they have negligible bias and are considerably more efficient than the traditional least squares estimators. We also show that our estimators are robust to plausible deviations from an assumed distribution and are therefore greatly advantageous compared with the least squares estimators. We give a real-life example.

17.
Image data are often collected by a charge-coupled device (CCD) camera. CCD camera noise is known to be well modeled by a Poisson distribution. If this is taken into account, the negative log of the Poisson likelihood is the resulting data-fidelity function. We derive, via a Taylor series argument, a weighted least squares approximation of the negative log of the Poisson likelihood function. The image deblurring algorithm of interest is then applied to the problem of minimizing this weighted least squares function subject to a nonnegativity constraint. Our objective in this paper is the development of stopping rules for this algorithm. We present three stopping rules and then test them on data generated using two different true images and an accurate CCD camera noise model. The results indicate that each of the three stopping rules is effective. AMS subject classification (2000): 65F20, 65F30
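The Taylor-series argument behind the weighted least squares approximation can be checked numerically in the one-pixel case. For Poisson data z with mean m, the negative log-likelihood is (up to a constant in m) f(m) = m - z·log m; since f'(z) = 0 and f''(z) = 1/z, expanding about m = z gives f(m) ≈ f(z) + (m - z)^2/(2z), i.e. squared residuals weighted by the reciprocal of the data:

```python
import math

# One-pixel check of the second-order Taylor expansion of the Poisson
# negative log-likelihood about the data value z.
def negloglik(m, z):
    return m - z * math.log(m)

def wls_approx(m, z):
    return negloglik(z, z) + (m - z) ** 2 / (2.0 * z)

# The approximation is tight near the data value and degrades farther away.
gap_near = abs(negloglik(101.0, 100.0) - wls_approx(101.0, 100.0))
gap_far = abs(negloglik(130.0, 100.0) - wls_approx(130.0, 100.0))
```

In the imaging setting this per-pixel quadratic is what turns the Poisson fidelity term into a weighted least squares problem to which standard constrained solvers apply.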

18.
We discuss generalized least squares (GLS) and maximum likelihood (ML) estimation for structural equation models (SEM) when the sample moment matrices are possibly singular. This occurs in several instances, for example, for panel data when there are more panel waves than independent replications, or for time series data where the number of time points is large but only one unit is observed. In previous articles, it was shown that ML estimation of the SEM is possible by using a correct Gaussian likelihood function. In this article, the usual GLS fit function is modified so that it is also defined for singular sample moment matrices S. In large samples, GLS and ML estimation perform similarly, and the modified GLS approach is a good alternative when S becomes nearly singular. Neither GLS approach works for N = 1, since here S = 0 and the modified GLS approach yields biased estimates. In conclusion, ML estimation (and pseudo-ML under misspecification) is recommended for all sample sizes, including N = 1.

19.
This paper deals with maximum likelihood estimation of linear or nonlinear functional relationships, assuming that replicated observations have been made on p variables at n points. The joint distribution of the pn errors is assumed to be multivariate normal. Existing results are extended in two ways: first, from a known to an unknown error covariance matrix; second, from the bivariate to the multivariate case. For the linear relationship it is shown that the maximum likelihood point estimates are those obtained by the method of generalized least squares. The present method, however, has the advantage of supplying estimates of the asymptotic covariances of the structural parameter estimates.

20.
A general class of parameter estimation methods for stochastic dynamical systems is studied. The class contains the least squares method, output-error methods, the maximum likelihood method, and several other techniques. It is shown that the estimates so obtained are asymptotically normal, and expressions for the resulting asymptotic covariance matrices are given. The regularity conditions imposed to obtain these results are fairly weak. It is, for example, not assumed that the true system can be described within the chosen model set; as a consequence, the results in this paper form part of the so-called approximate modeling approach to system identification. It is also noteworthy that arbitrary feedback from observed system outputs to observed system inputs is allowed, and stationarity is not required.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号