Similar Documents (20 results)
1.
This article discusses maximum likelihood estimation of proportional covariance matrices under normality assumptions. An algorithm for solving the likelihood equations and the likelihood ratio statistic for testing the hypothesis of proportionality are given. The method is illustrated by a numerical example.

2.
This article proposes a new approach for Bayesian and maximum likelihood parameter estimation for stationary Gaussian processes observed on a large lattice with missing values. We propose a Markov chain Monte Carlo approach for Bayesian inference, and a Monte Carlo expectation-maximization algorithm for maximum likelihood inference. Our approach uses data augmentation and circulant embedding of the covariance matrix, and provides likelihood-based inference for the parameters and the missing data. Using simulated data and an application to satellite sea surface temperatures in the Pacific Ocean, we show that our method provides accurate inference on lattices of sizes up to 512 × 512, and is competitive with two popular methods: composite likelihood and spectral approximations.
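The circulant-embedding idea used above can be sketched in one dimension: embed the covariance of a stationary process into a circulant matrix, whose eigenvalues come from a single FFT. A minimal sketch, assuming an exponential covariance and grid size chosen purely for illustration (not from the paper):

```python
import numpy as np

def circulant_embedding_eigenvalues(cov, n):
    """Embed the covariance c(0..n-1) of a stationary 1-D process into a
    circulant matrix of size 2(n-1) and return its eigenvalues via FFT."""
    c = cov(np.arange(n))
    # First row of the circulant embedding: c(0),...,c(n-1),c(n-2),...,c(1)
    first_row = np.concatenate([c, c[-2:0:-1]])
    # Eigenvalues of a circulant matrix are the DFT of its first row.
    return np.fft.fft(first_row).real

# Illustrative example: exponential covariance c(h) = exp(-|h|/10).
eig = circulant_embedding_eigenvalues(lambda h: np.exp(-np.abs(h) / 10.0), 64)
# The embedding is a valid covariance exactly when all eigenvalues are >= 0,
# which then allows exact simulation of the process by FFT.
nonneg = bool(np.all(eig > -1e-10))
```

For the exponential covariance this minimal embedding is known to be nonnegative definite; for other kernels the embedding may need to be padded to a larger size.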

3.
In medicine and industry, small sample sizes often arise owing to high testing costs, which makes exact confidence inference important. The Buehler confidence limit is an exact confidence limit for a function of the parameters in a model. It can always be defined once an ordering on the sample space is given, but its computation is often difficult, especially for models with high-dimensional parameters or incomplete data. This paper presents an algorithm that computes Buehler confidence limits via the EM algorithm. Although the EM algorithm is widely used in the literature for maximum likelihood estimation, this is its first application to Buehler confidence limits. Three computational examples illustrate the method.
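As a reminder of the EM mechanics the paper builds on, here is EM for right-censored exponential data, where the E-step replaces a censored lifetime c by its conditional expectation c + 1/λ (memorylessness). The exponential model and toy data are illustrative assumptions, not taken from the paper:

```python
def em_censored_exponential(times, censored, iters=200):
    """EM for the rate of an exponential distribution with right-censoring.
    E-step: E[T | T > c] = c + 1/lam (memorylessness).
    M-step: lam = n / sum(completed lifetimes)."""
    n = len(times)
    lam = 1.0  # initial guess
    for _ in range(iters):
        total = sum(t + (1.0 / lam if c else 0.0)
                    for t, c in zip(times, censored))
        lam = n / total
    return lam

# Toy data: events at times 1 and 2, one observation censored at 3.
lam = em_censored_exponential([1.0, 2.0, 3.0], [False, False, True])
# The closed-form MLE is (#events) / (total time at risk) = 2/6.
```

The iteration λ ← 3 / (6 + 1/λ) contracts to the fixed point 1/3, matching the closed-form MLE.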

4.
This paper proposes a Metropolis–Hastings algorithm based on Markov chain Monte Carlo sampling, to estimate the parameters of the Abe–Ley distribution, which is a recently proposed Weibull-Sine-Skewed-von Mises mixture model, for bivariate circular-linear data. Current literature estimates the parameters of these mixture models using the expectation-maximization method, but we will show that this exhibits a few shortcomings for the considered mixture model. First, standard expectation-maximization does not guarantee convergence to a global optimum, because the likelihood is multi-modal, which results from the high dimensionality of the mixture’s likelihood. Second, given that expectation-maximization provides point estimates of the parameters only, the uncertainties of the estimates (e.g., confidence intervals) are not directly available in these methods. Hence, extra calculations are needed to quantify such uncertainty. We propose a Metropolis–Hastings-based algorithm that avoids both shortcomings of expectation-maximization. Indeed, Metropolis–Hastings provides an approximation to the complete (posterior) distribution, given that it samples from the joint posterior of the mixture parameters. This facilitates direct inference (e.g., about uncertainty, multi-modality) from the estimation. In developing the algorithm, we tackle various challenges including convergence speed, label switching and selecting the optimum number of mixture components. We then (i) verify the effectiveness of the proposed algorithm on sample datasets with known true parameters, and further (ii) validate our methodology on an environmental dataset (a traditional application domain of Abe–Ley mixtures, where measurements are a function of direction). Finally, we (iii) demonstrate the usefulness of our approach in an application domain where the circular measurement is periodic in time.
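A minimal random-walk Metropolis–Hastings sampler illustrates the mechanism the authors build on. The standard-normal target and tuning constants here are illustrative assumptions, not the Abe–Ley posterior:

```python
import math
import random

def metropolis_hastings(log_target, x0, n_samples, step=1.0, seed=0):
    """Random-walk Metropolis-Hastings: propose x' = x + N(0, step^2),
    accept with probability min(1, target(x') / target(x))."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)
        # Symmetric proposal, so the acceptance ratio is just the
        # target ratio; compare on the log scale for stability.
        if math.log(rng.random()) < log_target(proposal) - log_target(x):
            x = proposal
        samples.append(x)
    return samples

# Illustrative target: standard normal, log density -x^2/2 up to a constant.
samples = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 20000)
mean = sum(samples) / len(samples)
```

Because the chain samples the full posterior rather than returning a point estimate, credible intervals and multi-modality can be read off the draws directly, which is the advantage over EM stressed in the abstract.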

5.
The pseudo likelihood method of Besag (1974) has remained a popular method for estimating Markov random fields on very large lattices, despite various documented deficiencies, partly because it remains the only computationally tractable method for large lattices. We introduce a novel method to estimate Markov random fields defined on a regular lattice. The method takes advantage of conditional independence structures and recursively decomposes a large lattice into smaller sublattices, with an approximation made at each decomposition. Doing so completely avoids the need to compute the troublesome normalizing constant. The computational complexity is O(N), where N is the number of pixels in the lattice, making the method computationally attractive for very large lattices. We show through simulations that the proposed method performs well, even when compared with methods using exact likelihoods. Supplementary material for this article is available online.
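Besag's pseudo-likelihood replaces the joint density by a product of full conditionals, one per site, so the normalizing constant never appears. A minimal sketch for an Ising-type field with ±1 spins (the toy configuration is an illustrative assumption):

```python
import math

def log_pseudo_likelihood(grid, beta):
    """Besag pseudo-likelihood for an Ising model with spins +/-1:
    the sum over sites of log P(s_i | neighbours), which requires no
    normalizing constant.  P(s_i | nb) = 1 / (1 + exp(-2 beta s_i nb))."""
    n, m = len(grid), len(grid[0])
    ll = 0.0
    for i in range(n):
        for j in range(m):
            nb = 0
            if i > 0: nb += grid[i - 1][j]
            if i < n - 1: nb += grid[i + 1][j]
            if j > 0: nb += grid[i][j - 1]
            if j < m - 1: nb += grid[i][j + 1]
            # Conditional log-probability of the observed spin.
            ll += -math.log(1.0 + math.exp(-2.0 * beta * grid[i][j] * nb))
    return ll

# On a fully aligned toy grid, larger beta makes the data more likely,
# so maximizing the pseudo-likelihood over beta recovers the attraction.
aligned = [[1] * 4 for _ in range(4)]
improves = log_pseudo_likelihood(aligned, 1.0) > log_pseudo_likelihood(aligned, 0.0)
```

At β = 0 every conditional is 1/2, so the pseudo-log-likelihood is exactly −16 log 2 for this 4 × 4 grid.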

6.
张晓华  沈建国 《数学季刊》2009,24(2):252-257
This paper is devoted to the discussion of filters in residuated lattices. The lattice structure of the set of filters in a residuated lattice is established, and it is proved that the set of all filters forms a distributive lattice. The concept of a prime filter in a residuated lattice is also proposed, and some equivalent characterizations of prime filters are given.

7.
This article concerns the computational problem of counting the lattice points inside convex polytopes, when each point must be counted with a weight associated to it. We describe an efficient algorithm for computing the highest degree coefficients of the weighted Ehrhart quasi-polynomial for a rational simple polytope in varying dimension, when the weights of the lattice points are given by a polynomial function h. Our technique is based on a refinement of an algorithm of A. Barvinok in the unweighted case (i.e., h ≡ 1). In contrast to Barvinok’s method, our method is local, obtains an approximation on the level of generating functions, handles the general weighted case, and provides the coefficients in closed form as step polynomials of the dilation. To demonstrate the practicality of our approach, we report on computational experiments which show that even our simple implementation can compete with state-of-the-art software.
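For intuition, the weighted count that the paper computes efficiently can be brute-forced in tiny dimension. A sketch for the dilated standard 2-simplex, whose unweighted Ehrhart polynomial is (t+1)(t+2)/2; the example polytope and weights are illustrative assumptions:

```python
def weighted_simplex_count(t, h=lambda x, y: 1):
    """Sum of the weight h over lattice points of the dilated standard
    2-simplex {(x, y) : x >= 0, y >= 0, x + y <= t} -- a brute-force
    stand-in for the weighted Ehrhart quasi-polynomial in dimension 2."""
    return sum(h(x, y) for x in range(t + 1) for y in range(t + 1 - x))

# Unweighted case h == 1: matches the Ehrhart polynomial (t+1)(t+2)/2.
counts = [weighted_simplex_count(t) for t in range(5)]
# A polynomial weight, e.g. h(x, y) = x, gives the weighted count.
weighted = weighted_simplex_count(3, h=lambda x, y: x)
```

Enumeration is exponential in the dimension, which is exactly the regime where Barvinok-style generating-function methods take over.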

8.
The rank and invariants of a general lattice rule are conventionally defined in terms of the group-theoretic properties of the rule. Here we give a constructive definition of the rank and invariants using integer matrices. This underpins a nonabstract algorithm set in matrix algebra for obtaining the Sylow p-decomposition of a lattice rule. This approach is particularly useful when it is not known whether the form in which the lattice rule is specified is canonical or even repetitive. A new set of necessary and sufficient conditions for recognizing a canonical form is given.

9.
Latent trait models such as item response theory (IRT) hypothesize a functional relationship between an unobservable, or latent, variable and an observable outcome variable. In educational measurement, a discrete item response is usually the observable outcome variable, and the latent variable is associated with an examinee’s trait level (e.g., skill, proficiency). The link between the two variables is called an item response function. This function, defined by a set of item parameters, models the probability of observing a given item response, conditional on a specific trait level. Typically in a measurement setting, neither the item parameters nor the trait levels are known, and so must be estimated from the pattern of observed item responses. Although a maximum likelihood approach can be taken in estimating these parameters, it usually cannot be employed directly. Instead, a method of marginal maximum likelihood (MML) is utilized, via the expectation-maximization (EM) algorithm. Alternating between an expectation (E) step and a maximization (M) step, the EM algorithm assures that the marginal log likelihood function will not decrease after each EM cycle, and will converge to a local maximum. Interestingly, the negative of this marginal log likelihood function is equal to the relative entropy, or Kullback-Leibler divergence, between the conditional distribution of the latent variables given the observable variables and the joint likelihood of the latent and observable variables. With an unconstrained optimization for the M-step proposed here, the EM algorithm as minimization of Kullback-Leibler divergence admits the convergence results due to Csiszár and Tusnády (Statistics & Decisions, 1:205–237, 1984), a consequence of the binomial likelihood common to latent trait models with dichotomous response variables. 
For this unconstrained optimization, the EM algorithm converges to a global maximum of the marginal log likelihood function, yielding an information bound that permits a fixed point of reference against which models may be tested. A likelihood ratio test between marginal log likelihood functions obtained through constrained and unconstrained M-steps is provided as a means for testing models against this bound. Empirical examples demonstrate the approach.
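The item response function described above can be sketched with the common two-parameter logistic (2PL) model, marginalizing over a standard-normal trait by simple quadrature. The 2PL form, toy item parameters, and trapezoidal quadrature are illustrative assumptions, not the paper's specific setup:

```python
import math

def irf(theta, a, b):
    """2PL item response function: probability of a correct response at
    trait level theta, with discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def marginal_likelihood(responses, items, n_nodes=201):
    """Marginal probability of a response pattern: the conditional
    (binomial-type) likelihood integrated against a standard-normal
    trait density, via the trapezoidal rule on [-6, 6]."""
    total = 0.0
    h = 12.0 / (n_nodes - 1)
    for k in range(n_nodes):
        theta = -6.0 + k * h
        lik = 1.0
        for u, (a, b) in zip(responses, items):
            p = irf(theta, a, b)
            lik *= p if u == 1 else 1.0 - p
        dens = math.exp(-0.5 * theta * theta) / math.sqrt(2.0 * math.pi)
        w = 0.5 if k in (0, n_nodes - 1) else 1.0
        total += w * lik * dens * h
    return total

# At theta == b the 2PL gives probability exactly 1/2.
p_half = irf(1.3, a=1.7, b=1.3)
m = marginal_likelihood([1, 0], [(1.0, 0.0), (1.0, 0.0)])
```

MML estimation then maximizes the product of such marginal probabilities over examinees with respect to the item parameters, which is what the E- and M-steps alternate over.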

10.
The numerical stability of the lattice algorithm for least-squares linear prediction problems is analysed. The lattice algorithm is an orthogonalization method for solving such problems and as such is in principle to be preferred to normal equations approaches. By performing a first-order analysis of the method and comparing the results with perturbation bounds available for least-squares problems, it is argued that the lattice algorithm is stable and in fact comparable in accuracy to other known stable but less efficient methods for least-squares problems. Dedicated to Germund Dahlquist on the occasion of his 60th birthday. This work was partially supported by NSF Grant MCS-8003364 and contracts AFOSR 82-0210, ARO DAAG29-82-K-0082.

11.
This article proposes a new method for fitting frailty models to clustered survival data that is intermediate between the fully parametric and nonparametric maximum likelihood estimation approaches. A parametric form is assumed for the baseline hazard, but only for the purpose of imputing the unobserved frailties. The regression coefficients are then estimated by solving an estimating equation that is the average of the partial likelihood score with respect to the conditional distribution of frailties given the observed data. We prove consistency and asymptotic normality of the resulting estimators and give associated closed-form estimators of their variance. The algorithm is easy to implement and reduces to the ordinary Cox partial likelihood approach when the frailties have a degenerate distribution. Simulations indicate high efficiency and robustness of the resulting estimates. We apply our new approach to a study with clustered survival data on asthma in children in east Boston.

12.
孙旭 《东北数学》2005,21(2):175-180
This paper deals with estimating parameters under a simple order restriction when the samples come from location models. Based on the idea of the Hodges–Lehmann estimator (H-L estimator), a new approach to estimating the parameters is proposed, which differs from the classical L1 and L2 isotonic regressions. An algorithm to compute the estimators is given, and a Monte Carlo simulation study compares the likelihood functions of the L1 estimators and the weighted isotonic H-L estimators.
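The Hodges–Lehmann estimator underlying this approach is the median of all pairwise (Walsh) averages. A minimal sketch with a toy dataset chosen to show its robustness (the data are an illustrative assumption):

```python
def hodges_lehmann(xs):
    """Hodges-Lehmann location estimator: the median of the Walsh
    averages (x_i + x_j) / 2 over all pairs i <= j."""
    walsh = sorted((xs[i] + xs[j]) / 2.0
                   for i in range(len(xs))
                   for j in range(i, len(xs)))
    k = len(walsh)
    mid = k // 2
    return walsh[mid] if k % 2 else 0.5 * (walsh[mid - 1] + walsh[mid])

# The single outlier moves the sample mean to 26.5, but the H-L
# estimate stays near the bulk of the data.
est = hodges_lehmann([1.0, 2.0, 3.0, 100.0])
```

The isotonic version in the paper applies this idea under the order constraint; the plain estimator above is the unconstrained building block.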

13.
Parameters of Gaussian multivariate models are often estimated using the maximum likelihood approach. In spite of its merits, this methodology is not practical when the sample size is very large, as, for example, in the case of massive georeferenced data sets. In this paper, we study the asymptotic properties of the estimators that minimize three alternatives to the likelihood function, designed to increase the computational efficiency. This is achieved by applying the information sandwich technique to expansions of the pseudo-likelihood functions as quadratic forms of independent normal random variables. Theoretical calculations are given for a first-order autoregressive time series and then extended to a two-dimensional autoregressive process on a lattice. We compare the efficiency of the three estimators to that of the maximum likelihood estimator as well as among themselves, using numerical calculations of the theoretical results and simulations.
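As a baseline for the approximations the paper studies, the exact Gaussian AR(1) log-likelihood has a cheap closed form via the innovations decomposition, which can be checked against the dense multivariate-normal computation. The toy series and parameter values are illustrative assumptions:

```python
import numpy as np

def ar1_loglik(x, phi, sigma2):
    """Exact Gaussian log-likelihood of a stationary AR(1) series via the
    innovations decomposition: x_1 ~ N(0, sigma2 / (1 - phi^2)) and
    x_t | x_{t-1} ~ N(phi * x_{t-1}, sigma2).  Cost is O(n)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    v1 = sigma2 / (1.0 - phi ** 2)
    ll = -0.5 * (np.log(2 * np.pi * v1) + x[0] ** 2 / v1)
    resid = x[1:] - phi * x[:-1]
    ll += -0.5 * ((n - 1) * np.log(2 * np.pi * sigma2)
                  + np.sum(resid ** 2) / sigma2)
    return ll

def dense_loglik(x, phi, sigma2):
    """Same likelihood through the full covariance matrix, cost O(n^3):
    cov(i, j) = sigma2 * phi^|i-j| / (1 - phi^2)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    idx = np.arange(n)
    cov = sigma2 / (1.0 - phi ** 2) * phi ** np.abs(idx[:, None] - idx[None, :])
    sign, logdet = np.linalg.slogdet(cov)
    return -0.5 * (n * np.log(2 * np.pi) + logdet + x @ np.linalg.solve(cov, x))

x = [0.3, -0.1, 0.4, 0.2, -0.5]
gap = abs(ar1_loglik(x, 0.6, 1.0) - dense_loglik(x, 0.6, 1.0))
```

The pseudo-likelihoods in the paper trade some statistical efficiency against this exact O(n) computation's generalizations, which become expensive on two-dimensional lattices.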

14.
This paper considers the efficient construction of a nonparametric family of distributions indexed by a specified parameter of interest and its application to calculating a bootstrap likelihood for the parameter. An approximate expression is obtained for the variance of log bootstrap likelihood for statistics which are defined by an estimating equation resulting from the method of selecting the first-level bootstrap populations and parameters. The expression is shown to agree well with simulations for artificial data sets based on quantiles of the standard normal distribution, and these results give guidelines for the amount of aggregation of bootstrap samples with similar parameter values required to achieve a given reduction in variance. An application to earthquake data illustrates how the variance expression can be used to construct an efficient Monte Carlo algorithm for defining a smooth nonparametric family of empirical distributions to calculate a bootstrap likelihood by greatly reducing the inherent variability due to first-level resampling.

15.
The empirical likelihood is a general nonparametric inference procedure with many desirable properties. Recently, theoretical results for empirical likelihood with certain censored/truncated data have been developed. However, the computation of empirical likelihood ratios with censored/truncated data is often nontrivial. This article proposes a modified self-consistent/EM algorithm to compute a class of empirical likelihood ratios for arbitrarily censored/truncated data with a mean type constraint. Simulations show that the chi-square approximations of the log-empirical likelihood ratio perform well. Examples and simulations are given in the following cases: (1) right-censored data with a mean parameter; and (2) left-truncated and right-censored data with a mean type parameter.
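The mean-type constraint leads to the classic empirical-likelihood computation: maximize Π n·p_i subject to Σ p_i (x_i − μ) = 0, solved through a scalar Lagrange multiplier. A sketch for uncensored data only (handling censoring/truncation is the article's contribution; the toy data are an illustrative assumption):

```python
import math

def el_log_ratio(xs, mu, tol=1e-12):
    """-2 log empirical likelihood ratio for the mean of uncensored data.
    With z_i = x_i - mu, solves sum z_i / (1 + lam z_i) = 0 by bisection;
    the implied weights are p_i = 1 / (n (1 + lam z_i))."""
    z = [x - mu for x in xs]
    if max(z) <= 0 or min(z) >= 0:
        raise ValueError("mu must lie strictly inside the data range")
    # Positivity of every weight confines lam to this open interval.
    lo = -1.0 / max(z) + 1e-10
    hi = -1.0 / min(z) - 1e-10
    g = lambda lam: sum(zi / (1.0 + lam * zi) for zi in z)
    while hi - lo > tol:  # g is strictly decreasing in lam
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return 2.0 * sum(math.log(1.0 + lam * zi) for zi in z)

data = [1.0, 2.0, 3.0, 4.0, 5.0]
at_mean = el_log_ratio(data, 3.0)   # ratio is ~0 at the sample mean
away = el_log_ratio(data, 2.0)      # and positive away from it
```

By Wilks-type theory, −2 log R(μ) is asymptotically chi-square with one degree of freedom, which is the approximation whose accuracy the article's simulations examine.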

16.
We first derive the maximum likelihood estimator of the shape parameter of the generalized inverse exponential distribution under doubly Type-II censored samples. Although no closed-form expression for the estimator is available, we prove that the maximum likelihood estimate exists and is unique on (0, +∞). We then propose using the EM algorithm to estimate the shape parameter and show that the resulting estimator has good convergence properties; the asymptotic variance and an approximate confidence interval for the EM estimate of the shape parameter are also given. Finally, a numerical simulation compares the maximum likelihood and EM estimates, showing that the EM algorithm is feasible for estimating the shape parameter and performs relatively well.

17.
Generators and lattice properties of the poset of complete homomorphic images of a completely distributive lattice are explored via localic methods. Some intrinsic and extrinsic conditions for this poset to be a completely distributive lattice are given. It is shown that the category of completely distributive lattices is co-well-powered, and that complete epimorphisms on completely distributive lattices need not be surjections. Finally, some conditions under which complete epimorphisms are surjections are given.

18.
Sergei O. Kuznetsov 《Order》2001,18(4):313-321
The problem of determining the size of a finite concept lattice is shown to be #P-complete. Since any finite lattice can be represented as a concept lattice, the problem of determining the size of a lattice given by the ordered sets of its irreducibles is also #P-complete. Some results about NP-completeness or polynomial tractability of decision problems related to concepts with bounded extent, intent, and the sum of both are given. These problems can be reformulated as decision problems about lattice elements generated by a certain amount of irreducibles.
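The objects being counted can be made concrete with a brute-force enumeration of formal concepts, feasible only for tiny contexts, which is exactly why the #P-completeness of the counting problem matters. The toy context is an illustrative assumption:

```python
from itertools import combinations

def all_concepts(objects, attributes, incidence):
    """Enumerate all formal concepts (extent, intent) of a context by
    closing every subset of attributes -- exponential work, illustrating
    why counting concepts is hard in general."""
    concepts = set()
    attrs = sorted(attributes)
    for r in range(len(attrs) + 1):
        for B in combinations(attrs, r):
            # Derivation: the objects having every attribute in B ...
            extent = frozenset(g for g in objects
                               if all((g, m) in incidence for m in B))
            # ... then the attributes common to that whole extent.
            intent = frozenset(m for m in attrs
                               if all((g, m) in incidence for g in extent))
            concepts.add((extent, intent))
    return concepts

# Toy context: object 0 has attribute 'a', object 1 has attribute 'b'.
# Its concept lattice has 4 elements: top, bottom, and one per object.
concepts = all_concepts({0, 1}, {'a', 'b'}, {(0, 'a'), (1, 'b')})
```

Practical algorithms (NextClosure and its relatives) enumerate concepts without revisiting duplicates, but the output itself can be exponential in the context size.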

19.
In this paper, we continue our earlier investigations (Reference 6) on the iterative maximum likelihood reconstruction method applied to a special class of integral equations of the first kind, where one of the essential assumptions is the positivity of the kernel and of the given right-hand side. Equations of this type often occur in connection with the determination of density functions from measured data. There are certain relations between the directed Kullback–Leibler divergence and the iterative maximum likelihood reconstruction method, some of which were already observed by other authors. Using these relations, further properties of the iterative scheme are shown and, in particular, a new short and elementary proof of convergence of the iterative method is given for the discrete case. Numerical examples have already been given in Reference 6; here, an example is considered which can be worked out analytically and which demonstrates fundamental properties of the algorithm.
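In the discrete case, iterative maximum likelihood reconstruction for a positive kernel is the multiplicative scheme also known as MLEM (or Richardson–Lucy): x ← x · Aᵀ(b / Ax) / Aᵀ1. A minimal sketch, assuming a toy system chosen purely for illustration:

```python
import numpy as np

def mlem(A, b, iters=50):
    """Iterative ML reconstruction for A x = b with A >= 0 and b >= 0:
    the multiplicative EM update x <- x * A^T(b / (A x)) / A^T 1,
    which keeps every iterate nonnegative and decreases the directed
    Kullback-Leibler divergence between b and A x."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.ones(A.shape[1])  # positive starting point
    col_sums = A.sum(axis=0)
    for _ in range(iters):
        x = x * (A.T @ (b / (A @ x))) / col_sums
    return x

# With the identity kernel the scheme recovers b in a single step.
x_id = mlem(np.eye(3), np.array([1.0, 2.0, 3.0]), iters=1)

# A genuinely mixing positive kernel with consistent data b = A x_true.
A = np.array([[0.7, 0.3], [0.2, 0.8]])
x_rec = mlem(A, A @ np.array([1.0, 2.0]), iters=500)
```

For consistent data the fitted values A·x converge to b, in line with the KL-divergence convergence argument the paper develops.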

20.
Formal concept analysis has been widely applied in data analysis and machine learning, and algorithms for constructing the concept lattice, its core data structure, remain a research focus in the field. Based on the complement property of concept extents, this paper states and proves generation theorems for concepts and for super-concepts, and on this basis proposes a new incremental maintenance algorithm for concept lattices, including algorithms for generating concepts and establishing the order relation. An application example is also given.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号