Similar Literature
20 similar articles found (search time: 187 ms)
1.
It is well known that specifying a covariance matrix is difficult in quantile regression with longitudinal data. This paper develops a two-step estimation procedure to improve estimation efficiency based on the modified Cholesky decomposition. Specifically, in the first step, we obtain initial estimators of the regression coefficients by ignoring the possible correlations between repeated measures. We then apply the modified Cholesky decomposition to construct the covariance models and obtain an estimator of the within-subject covariance matrix. In the second step, we construct unbiased estimating functions to obtain more efficient estimators of the regression coefficients. However, the proposed estimating functions are discrete and non-convex. We utilize the induced smoothing method to obtain fast and accurate estimates of the parameters and their asymptotic covariance. Under some regularity conditions, we establish the asymptotic normality of the resulting estimators. Simulation studies and an analysis of longitudinal progesterone data show that the proposed approach yields highly efficient estimators.
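The modified Cholesky decomposition the abstract builds on factors a covariance matrix as Σ = L D Lᵀ with L unit lower triangular and D diagonal; it is easily recovered from the standard Cholesky factor by column scaling. A minimal numerical sketch (the 3×3 Σ is illustrative, not from the paper):

```python
import numpy as np

def modified_cholesky(sigma):
    """Return (L, D) with sigma = L D L^T, L unit lower triangular, D diagonal."""
    c = np.linalg.cholesky(sigma)      # sigma = C C^T, C lower triangular
    d = np.diag(c).copy()              # square roots of the innovation variances
    L = c / d                          # scale each column -> unit diagonal
    D = np.diag(d ** 2)
    return L, D

sigma = np.array([[4.0, 2.0, 1.0],
                  [2.0, 5.0, 3.0],
                  [1.0, 3.0, 6.0]])
L, D = modified_cholesky(sigma)
assert np.allclose(L @ D @ L.T, sigma)
assert np.allclose(np.diag(L), 1.0)
```

The sub-diagonal entries of L can be interpreted as autoregressive coefficients of each measurement on its predecessors, which is what makes this parameterization convenient for longitudinal covariance modeling.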

2.
In this paper, we study the problem of estimating the covariance matrix Σ and the precision matrix Ω (the inverse of the covariance matrix) in a star-shape model with missing data. By considering a type of Cholesky decomposition of the precision matrix, Ω = ΨΨ′, where Ψ is a lower triangular matrix with positive diagonal elements, we obtain the MLEs of the covariance matrix and precision matrix and prove that both are biased. Based on the MLEs, unbiased estimators of the covariance matrix and precision matrix are obtained. A special group G, a subgroup of the group consisting of all lower triangular matrices, is introduced. By choosing the left-invariant Haar measure on G as a prior, we obtain closed forms for the best equivariant estimators of Ω under each of the Stein loss, the entropy loss, and the symmetric loss. Consequently, the MLE of the precision matrix (covariance matrix) is inadmissible under any of these three loss functions. Some simulation results are given for illustration.
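The decomposition Ω = ΨΨ′ with Ψ lower triangular and positive diagonal is exactly the Cholesky factorization of the precision matrix; a quick numerical check of the identity and of the Σ–Ω duality (the 3×3 matrix is an illustrative example, not the star-shape model of the paper):

```python
import numpy as np

# A small positive definite precision matrix (illustrative choice)
omega = np.array([[ 2.0, -1.0,  0.0],
                  [-1.0,  2.0, -1.0],
                  [ 0.0, -1.0,  2.0]])
psi = np.linalg.cholesky(omega)        # Omega = Psi Psi'
assert np.allclose(psi @ psi.T, omega)
assert np.all(np.diag(psi) > 0)        # positive diagonal, as in the abstract
sigma = np.linalg.inv(omega)           # the covariance is Sigma = Omega^{-1}
assert np.allclose(sigma @ omega, np.eye(3))
```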

3.
0 Introduction. The generalized Cholesky decomposition of real symmetric matrices and its perturbation analysis are important problems in matrix computation; see [1-2]. This paper first reviews the existing first-order relative …

4.
The performance of Markov chain Monte Carlo (MCMC) algorithms such as the Metropolis-Hastings random walk (MHRW) is highly dependent on the choice of scaling matrix for the proposal distributions. A popular choice of scaling matrix in adaptive MCMC methods is the empirical covariance matrix (ECM) of previous samples. However, this choice is problematic if the dimension of the target distribution is large, since the ECM then converges slowly and is computationally expensive to use. We propose two algorithms that improve convergence and decrease the computational cost of adaptive MCMC methods in cases where the precision (inverse covariance) matrix of the target density can be well approximated by a sparse matrix. The first is an algorithm for online estimation of the Cholesky factor of a sparse precision matrix. The second estimates the sparsity structure of the precision matrix. Combining the two algorithms allows us to construct precision-based adaptive MCMC algorithms that can be used as black-box methods for densities with unknown dependency structures. We construct precision-based versions of the adaptive MHRW and the adaptive Metropolis-adjusted Langevin algorithm and demonstrate the performance of the methods in two examples. Supplementary materials for this article are available online.
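The ECM-based baseline the abstract improves upon can be sketched in a few lines: an adaptive random-walk Metropolis sampler whose proposal covariance is the empirical covariance of past samples times the classic 2.38²/d factor. The target, warm-up length, and jitter below are standard but illustrative choices, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2
prec = np.array([[ 2.0, -1.0],
                 [-1.0,  2.0]])            # zero-mean Gaussian target

def log_target(x):
    return -0.5 * x @ prec @ x

x = np.zeros(d)
chain = [x]
for t in range(5000):
    if t < 200:
        cov = 0.1 * np.eye(d)              # fixed covariance during warm-up
    else:                                  # ECM scaling with a small jitter
        cov = (2.38**2 / d) * np.cov(np.asarray(chain).T) + 1e-6 * np.eye(d)
    y = rng.multivariate_normal(x, cov)
    if np.log(rng.uniform()) < log_target(y) - log_target(x):
        x = y
    chain.append(x)

# est_cov should roughly approximate inv(prec) once the chain has mixed
est_cov = np.cov(np.asarray(chain[1000:]).T)
```

Recomputing `np.cov` over the whole history each step is exactly the cost the abstract's online sparse-Cholesky update is designed to avoid in high dimensions.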

5.
许凯, 何道江. 《数学学报》 (Acta Mathematica Sinica), 2016, 59(6): 783-794
Under the assumption that the missing-data mechanism is ignorable, the maximum likelihood estimators and unbiased estimators of the Cholesky decompositions of the covariance matrix and the precision matrix are derived for conditionally independent normal models with monotone missing data. By introducing a special class of transformation groups and working under a more general loss, the best equivariant estimators are obtained, which shows that the maximum likelihood estimators and the unbiased estimators are inadmissible. Finally, numerical simulations verify the effectiveness of the results.

6.
In this work, we introduce an algebraic operation between bounded Hessenberg matrices and analyze some of its properties. We call this operation the m-sum, and we obtain an expression for it that involves the Cholesky factorization of the corresponding Hermitian positive definite matrices associated with the Hessenberg components. This work extends a method for obtaining the Hessenberg matrix of a sum of measures from the Hessenberg matrices of the individual measures, introduced recently by the authors for subnormal matrices, to matrices that are not necessarily subnormal. Moreover, we give some examples and obtain the explicit formula for the m-sum of a weighted shift. In particular, we construct an interesting example: a subnormal Hessenberg matrix obtained as the m-sum of two non-subnormal Hessenberg matrices.

7.
The problem of solving a strictly convex quadratic programming problem is studied. The idea of conjugate directions is used. First we assume that we know a set of directions conjugate with respect to the Hessian of the objective function. We apply n simultaneous directional minimizations along these conjugate directions, starting from the same point, followed by the addition of the directional corrections. A theorem is proved showing that the algorithm finds the global minimum of the quadratic objective function. An effective construction of the required set of conjugate directions is presented. We start with a vector whose entries are all zero except the first. At each step a new vector, conjugate to the previously generated ones, is constructed, whose number of nonzero entries is larger by one than that of its predecessor. The conjugate directions obtained by this construction with appropriately selected parameters form an upper triangular matrix which, in exact arithmetic, is the Cholesky factor of the inverse of the Hessian matrix. The computational cost of calculating this inverse factorization is comparable with the cost of the Cholesky factorization of the original second-derivative matrix. Calculation of these vectors involves only matrix-vector multiplication and inversion of a diagonal matrix. Some preliminary computational results on test problems are reported, in which all symmetric positive definite matrices with dimensions from \(14\times 14\) to \(2000\times 2000\) from the University of Florida repository were used as Hessians.
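The two ingredients above, building conjugate directions incrementally from the standard basis and then doing one exact line search per direction, can be sketched directly. This is a generic Gram-Schmidt A-conjugation, not the paper's specific parameterized construction; the 3×3 problem is illustrative. Note that the stacked directions come out unit upper triangular, consistent with the Cholesky-of-the-inverse-Hessian connection described in the abstract:

```python
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])          # symmetric positive definite Hessian
b = np.array([1.0, 2.0, 3.0])            # minimize q(x) = 0.5 x'Ax - b'x

# Gram-Schmidt A-conjugation of the standard basis: direction k has
# nonzeros only in its first k entries, one more than its predecessor.
dirs = []
for e in np.eye(3):
    d = e.copy()
    for p in dirs:
        d -= (p @ A @ e) / (p @ A @ p) * p
    dirs.append(d)

# Exact line minimizations along the conjugate directions reach the
# global minimum of the quadratic in n steps.
x = np.zeros(3)
for p in dirs:
    alpha = p @ (b - A @ x) / (p @ A @ p)
    x = x + alpha * p

assert np.allclose(A @ x, b)                       # x = argmin q
P = np.column_stack(dirs)
assert np.allclose(P, np.triu(P))                  # unit upper triangular
```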

8.
One way to estimate variance components is by restricted maximum likelihood. The log-likelihood function is fully defined by the Cholesky factor of a matrix that is usually large and sparse. In this article, forward and backward differentiation methods are developed for calculating the first and second derivatives of the Cholesky factor and its functions. These differentiation methods are general and can be applied to either a full or a sparse matrix. Moreover, these methods can be used to calculate the derivatives that are needed for restricted maximum likelihood, resulting in substantial savings in computation.
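The forward-mode derivative of a Cholesky factor admits a compact dense-matrix sketch (the article's contribution is the sparse forward/backward organization, which this does not reproduce): given A = LLᵀ and a symmetric perturbation dA, differentiating gives dA = dL·Lᵀ + L·dLᵀ, whence dL = L·Φ(L⁻¹ dA L⁻ᵀ), where Φ keeps the strict lower triangle and halves the diagonal. A finite-difference check on a random SPD matrix (sizes and seed are illustrative):

```python
import numpy as np

def chol_forward(L, dA):
    """Directional derivative of the Cholesky factor: dL with dA = dL L' + L dL'."""
    inner = np.linalg.solve(L, dA)             # L^{-1} dA
    M = np.linalg.solve(L, inner.T)            # L^{-1} dA L^{-T} (dA symmetric)
    Phi = np.tril(M)
    Phi[np.diag_indices_from(Phi)] *= 0.5      # half the diagonal
    return L @ Phi

rng = np.random.default_rng(1)
X = rng.standard_normal((4, 4))
A = X @ X.T + 4 * np.eye(4)                    # well-conditioned SPD matrix
dA = rng.standard_normal((4, 4))
dA = dA + dA.T                                 # symmetric perturbation

L = np.linalg.cholesky(A)
dL = chol_forward(L, dA)

eps = 1e-6
fd = (np.linalg.cholesky(A + eps * dA) - L) / eps
assert np.allclose(dL, fd, atol=1e-4)          # matches finite differences
```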

9.
In this paper, we consider the problem of estimating the high-dimensional precision matrix of a Gaussian graphical model. Taking advantage of the connection between multivariate linear regression and the entries of the precision matrix, we propose a Bayesian Lasso combined with neighborhood regression to estimate the Gaussian graphical model. This method obtains parameter estimation and model selection simultaneously. Moreover, it provides symmetric confidence intervals for all entries of the precision matrix.
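The regression connection the abstract exploits can be sketched with a plain penalized neighborhood regression (a Meinshausen-Buhlmann-style stand-in; the paper's Bayesian Lasso posterior is not reproduced here): regress each variable on all the others, and read off nonzero coefficients as nonzero off-diagonal precision entries. The data, true edge set, and penalty level are illustrative:

```python
import numpy as np

def lasso_cd(Z, y, alpha, sweeps=200):
    """Minimize (1/2n)||y - Zb||^2 + alpha ||b||_1 by coordinate descent."""
    n, d = Z.shape
    b = np.zeros(d)
    for _ in range(sweeps):
        for k in range(d):
            r = y - Z @ b + Z[:, k] * b[k]              # residual excluding k
            rho = Z[:, k] @ r / n
            b[k] = np.sign(rho) * max(abs(rho) - alpha, 0.0) / (Z[:, k] @ Z[:, k] / n)
    return b

rng = np.random.default_rng(6)
n, p = 500, 4
omega = np.eye(p)
omega[0, 1] = omega[1, 0] = 0.4                         # single true edge (0, 1)
X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(omega), size=n)

edges = set()
for j in range(p):
    others = [k for k in range(p) if k != j]
    coef = lasso_cd(X[:, others], X[:, j], alpha=0.15)
    edges.update(frozenset((j, k)) for k, c in zip(others, coef) if abs(c) > 1e-8)
# the true edge (0, 1) should appear in the recovered edge set
```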

10.
Newton-type methods for unconstrained optimization problems have been very successful when coupled with a modified Cholesky factorization to take into account the possible lack of positive definiteness in the Hessian matrix. In this paper we discuss the application of these methods to large problems that have a sparse Hessian matrix whose sparsity is known a priori. Quite often it is difficult, if not impossible, to obtain an analytic representation of the Hessian matrix. Determining the Hessian matrix by the standard method of finite differences is costly in terms of gradient evaluations for large problems. Automatic procedures that reduce the number of gradient evaluations by exploiting sparsity are examined, and a new procedure is suggested. Once a sparse approximation to the Hessian matrix has been obtained, there still remains the problem of solving a sparse linear system of equations at each iteration. A modified Cholesky factorization can be used, but many additional nonzeros (fill-in) may be created in the factors, and storage problems may arise. One way of approaching this problem is to ignore fill-in in a systematic manner; such techniques are called partial factorization schemes. Various existing partial factorizations are analyzed and three new ones are developed. The above algorithms were tested on a set of problems; the overall conclusion is that these methods perform well in practice.
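A minimal "add a multiple of the identity" variant conveys what a modified Cholesky factorization accomplishes for an indefinite Hessian (this is the simple shifted-identity sketch found in textbook treatments, not the partial factorization schemes of the paper): increase a shift τ until A + τI admits a Cholesky factorization, yielding a positive definite surrogate for Newton steps. The 2×2 indefinite matrix and the parameters are illustrative:

```python
import numpy as np

def modified_cholesky_shift(A, beta=1e-3, max_iter=100):
    """Return (L, tau) with L L' = A + tau*I positive definite."""
    tau = 0.0 if np.min(np.diag(A)) > 0 else -np.min(np.diag(A)) + beta
    for _ in range(max_iter):
        try:
            return np.linalg.cholesky(A + tau * np.eye(A.shape[0])), tau
        except np.linalg.LinAlgError:      # factorization failed: not PD yet
            tau = max(2 * tau, beta)       # grow the shift and retry
    raise RuntimeError("failed to find a positive definite shift")

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])                 # indefinite: eigenvalues 3 and -1
L, tau = modified_cholesky_shift(A)
assert tau > 0                             # a nonzero shift was required
assert np.allclose(L @ L.T, A + tau * np.eye(2))
```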

11.
For a normal distribution the sample covariance matrix S provides an unbiased estimator of the population covariance matrix Σ. We address the problem of finding an unbiased estimator of the lower triangular matrix Ψ defined by the Cholesky decomposition Σ = ΨΨ′.
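That the naive plug-in chol(S) is itself biased, even though E[S] = Σ, is easy to see by Monte Carlo (a quick illustration of the problem the abstract addresses; the sample size and replication count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
n, reps = 5, 20000
acc = np.zeros((2, 2))
for _ in range(reps):
    X = rng.standard_normal((n, 2))        # Sigma = I, so Psi = chol(Sigma) = I
    S = np.cov(X.T)                        # unbiased sample covariance (divisor n-1)
    acc += np.linalg.cholesky(S)
mean_chol = acc / reps
# the diagonal of E[chol(S)] falls below 1 even though E[S] = Sigma = I
assert np.all(np.diag(mean_chol) < 0.98)
```

The bias comes from Jensen's inequality: the Cholesky map is nonlinear, so an unbiased S does not yield an unbiased chol(S).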

12.
In this paper, we study the problem of estimating a multivariate normal covariance matrix with staircase-pattern data. Two parameterizations of the covariance matrix are used: one is the Cholesky decomposition and the other is the Bartlett decomposition. Based on the Cholesky decomposition of the covariance matrix, the closed form of the maximum likelihood estimator (MLE) of the covariance matrix is given. Using a Bayesian method, we prove that the best equivariant estimator of the covariance matrix with respect to the special group related to the Cholesky decomposition uniquely exists under the Stein loss. Consequently, the MLE of the covariance matrix is inadmissible under the Stein loss. Our method can also be applied to other invariant loss functions such as the entropy loss and the symmetric loss. In addition, based on the Bartlett decomposition of the covariance matrix, the Jeffreys prior and the reference prior of the covariance matrix with staircase-pattern data are also obtained. Our reference prior is different from Berger and Yang's reference prior. Interestingly, the Jeffreys prior with staircase-pattern data is the same as that with complete data. The posterior properties are also investigated. Some simulation results are given for illustration.

13.
Two recent approaches (Van Overschee, De Moor, N4SID, Automatica 30 (1) (1994) 75; Verhaegen, Int. J. Control 58 (3) (1993) 555) to subspace identification problems require the computation of the R factor of the QR factorization of a block-Hankel matrix H, which in general has a huge number of rows. Since the data are perturbed by noise, the matrix H is, in general, of full rank. It is well known that, from a theoretical point of view, the R factor of the QR factorization of H equals the Cholesky factor of the correlation matrix H^T H, apart from multiplication by a sign matrix. In Sima (Proceedings Second NICONET Workshop, Paris-Versailles, December 3, 1999, p. 75), a fast Cholesky factorization of the correlation matrix, exploiting the block-Hankel structure of H, is described. In this paper we consider a fast algorithm to compute the R factor based on the generalized Schur algorithm. The proposed algorithm can also handle the rank-deficient case.
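The QR/Cholesky equivalence the abstract relies on is easy to verify numerically: the R factor of H agrees with the upper-triangular Cholesky factor of H^T H up to the signs of its rows. A random full-rank H stands in for the block-Hankel matrix here:

```python
import numpy as np

rng = np.random.default_rng(2)
H = rng.standard_normal((50, 4))           # tall, full column rank (a.s.)
R = np.linalg.qr(H, mode="r")              # 4x4 upper triangular R factor
C = np.linalg.cholesky(H.T @ H).T          # upper-triangular Cholesky factor
S = np.diag(np.sign(np.diag(R)))           # sign matrix fixing the ambiguity
assert np.allclose(S @ R, C)               # equal apart from the sign matrix
```

The identity follows from uniqueness of the Cholesky factorization: S·R is upper triangular with positive diagonal and (S·R)ᵀ(S·R) = RᵀR = HᵀH.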

14.
Partitioning a sparse matrix A is a useful device employed by a number of sparse matrix techniques. An important problem that arises in connection with some of these methods is to determine the block structure of the Cholesky factor L of A, given the partitioned A. For the scalar case, the problem of determining the structure of L from A, so-called symbolic factorization, has been extensively studied. In this paper we study the generalization of this problem to the block case. The problem is interesting because an assumption relied on in the scalar case no longer holds; specifically, the product of two nonzero scalars is always nonzero, but the product of two nonnull sparse matrices may yield a zero matrix. Thus, applying the usual symbolic factorization techniques to a partitioned matrix, regarding each submatrix as a scalar, may yield a block structure of L which is too full. In this paper an efficient algorithm is provided for determining the block structure of the Cholesky factor of a partitioned matrix A, along with some bounds on the execution time of the algorithm.
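The failure of the scalar symbolic-factorization assumption in the block case fits in one line: two nonnull sparse blocks whose product is the zero matrix (the 2×2 blocks are an illustrative choice):

```python
import numpy as np

B1 = np.array([[0.0, 1.0],
               [0.0, 0.0]])
B2 = np.array([[0.0, 2.0],
               [0.0, 0.0]])
assert B1.any() and B2.any()               # both blocks are nonnull
assert not (B1 @ B2).any()                 # yet their product is the zero matrix
```

This is why treating each submatrix as a scalar overestimates the fill of the block Cholesky factor.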

15.
The problem of estimating the precision matrix of a multivariate normal distribution model is considered with respect to a quadratic loss function. A number of covariance estimators originally intended for a variety of loss functions are adapted so as to obtain alternative estimators of the precision matrix. It is shown that the alternative estimators have analytically smaller risks than the unbiased estimator of the precision matrix. Through numerical studies of risk values, it is shown that the new estimators achieve a substantial reduction in risk. In addition, we consider the problem of estimating discriminant coefficients, which arises in linear discriminant analysis when Fisher's linear discriminant function is viewed as the posterior log-odds under the assumption that two classes differ in mean but have a common covariance matrix. The above method is also adapted to this problem in order to obtain improved estimators of the discriminant coefficients under the quadratic loss function. Furthermore, a numerical study is undertaken to compare the properties of a collection of alternatives to the "unbiased" estimator of the discriminant coefficients.
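The standard unbiased estimator of a normal precision matrix, which serves as the baseline in this kind of comparison, is c·S⁻¹ with c = (n − p − 2)/(n − 1), correcting the upward bias of the inverse sample covariance (whether the paper uses exactly this form is an assumption here). A Monte Carlo check with Σ = Ω = I (sample size and replication count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
n, p, reps = 10, 2, 20000
c = (n - p - 2) / (n - 1)                  # bias-correction constant
acc = np.zeros((p, p))
for _ in range(reps):
    X = rng.standard_normal((n, p))        # Sigma = I, so Omega = I
    acc += c * np.linalg.inv(np.cov(X.T))  # corrected inverse sample covariance
# the Monte Carlo mean should be close to the true precision matrix I
assert np.allclose(acc / reps, np.eye(p), atol=0.1)
```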

16.
王继霞, 苗雨. 《数学杂志》 (Journal of Mathematics), 2012, 32(4): 637-643
This paper studies a bivariate generalized Weibull distribution model whose marginal distributions are univariate generalized Weibull distributions. Using the EM algorithm, the maximum likelihood estimates of the unknown parameters and the observed Fisher information matrix are obtained.

17.
In this short note we consider the differentiable evaluation of the objective function of the sampling design optimization problem based on the inverse of the Fisher information matrix, where the integer design variables have been converted into real variables using a relaxation technique. To ensure differentiability and cover the full range of the variables, and thus improve the convergence behavior of derivative-based optimization algorithms, we propose applying a Cholesky decomposition to the Fisher information matrix, but using a special higher-precision floating-point arithmetic to ensure stability. While each evaluation of the functional becomes slower, the algorithm is much simpler, amenable to direct use with automatic differentiation, and can be shown to be very stable. For many practical situations, this is a valuable trade-off. (© 2011 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)
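A common concrete instance of evaluating such an objective through the Cholesky factor is the D-criterion log det F⁻¹ = −2·Σ log diag(chol(F)), which is smooth in the relaxed, real-valued design weights. The regressors and weights below are an illustrative sketch, not the note's setup (and standard double precision is used, not the note's higher-precision arithmetic):

```python
import numpy as np

def neg_log_det_fisher(weights, X):
    """D-criterion log det F^{-1} evaluated via the Cholesky factor of F."""
    F = (X * weights[:, None]).T @ X       # F = sum_i w_i x_i x_i^T
    L = np.linalg.cholesky(F)              # fails loudly if F is not PD
    return -2.0 * np.sum(np.log(np.diag(L)))

rng = np.random.default_rng(3)
X = rng.standard_normal((10, 3))           # candidate design points
w = np.full(10, 0.1)                       # relaxed (real-valued) design weights
val = neg_log_det_fisher(w, X)

# agrees with a direct slogdet evaluation of the same Fisher matrix
sign, logdet = np.linalg.slogdet((X * w[:, None]).T @ X)
assert sign > 0 and np.isclose(val, -logdet)
```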

18.
The estimation of the covariance matrix is a key concern in the analysis of longitudinal data. When the data consist of multiple groups, it is often assumed that the covariance matrices are either equal across groups or completely distinct. We seek methodology that allows borrowing of strength across potentially similar groups to improve estimation. To that end, we introduce a covariance partition prior that proposes a partition of the groups at each measurement time. Groups in the same set of the partition share dependence parameters for the distribution of the current measurement given the preceding ones, and the sequence of partitions is modeled as a Markov chain to encourage similar structure at nearby measurement times. This approach additionally encourages a lower-dimensional structure of the covariance matrices by shrinking the parameters of the Cholesky decomposition toward zero. We demonstrate the performance of our model through two simulation studies and the analysis of data from a depression study. This article includes Supplementary Materials available online.

19.
In this paper, we place a non-concave penalty on the local conditional likelihood. We obtain the oracle property and the asymptotic normality of the parameters in the Ising model. Using a union bound, we obtain sign consistency for the estimator of the parameter matrix, as well as its convergence rate under the matrix $L_1$ norm. The results of simulation studies and a real data analysis show that the non-concave penalized estimator has higher sensitivity.

20.
In this paper, we obtain criteria for the indeterminacy of the Stieltjes matrix moment problem. We obtain explicit formulas for the Stieltjes parameters and study the multiplicative structure of the resolvent matrix. In the indeterminate case, we study the analytic properties of the resolvent matrix of the moment problem. We describe the set of all matrix functions associated with the indeterminate Stieltjes moment problem in terms of linear fractional transformations over Stieltjes pairs.
