Similar Documents
20 similar documents found (search time: 78 ms)
1.
Linear Minimax Estimation of Estimable Functions in the General Normal Linear Model (cited by: 3; self-citations: 0; by others: 3)
For the general normal linear model y ~ N(Xβ, σ²V), where X and V ≥ 0 are known matrices and β ∈ R^p and σ² > 0 are unknown parameters, we study, under quadratic loss, the minimaxity of linear estimators of the estimable function DXβ within the class of all estimators, and obtain the unique linear Minimax estimator of DXβ (uniqueness understood in the almost-everywhere sense).

2.
Minimax Estimation of Estimable Functions in the Normal Linear Model (cited by: 4; self-citations: 0; by others: 4)
For the normal linear model Y ~ N(Xβ, σ²V), under the quadratic loss L(δ, DXβ), this paper uses admissibility theory to prove that a certain linear estimator of the estimable function DXβ is the unique Minimax estimator of DXβ within the class of all estimators.

3.
This paper studies the estimation of random regression coefficients and parameters in the general normal linear model with random effects. Under quadratic loss, the unique Minimax estimator of a linearly estimable function within the class of all estimators is obtained.

4.
Linear Minimax Prediction in Finite Populations of Arbitrary Rank under Quadratic Loss (cited by: 3; self-citations: 0; by others: 3)
喻胜华 《数学年刊A辑》2004,25(4):485-496
This paper suitably modifies the usual quadratic loss and, on that basis, studies the minimaxity of a predictor within the class of homogeneous linear predictors. The unique linear Minimax predictor of a linearly predictable variable in a finite population of arbitrary rank is obtained (uniqueness understood in the almost-everywhere sense).

5.
This paper suitably modifies the usual quadratic loss and, on that basis, studies the minimaxity of a predictor within the class of homogeneous linear predictors. The unique linear Minimax predictor of a linearly predictable variable in a finite population of arbitrary rank is obtained (uniqueness understood in the almost-everywhere sense).

6.
Linear Minimax Admissible Estimation of the Parameter Matrix in the Growth Curve Model under Quadratic Loss (cited by: 3; self-citations: 0; by others: 3)
刘郁文 《经济数学》2000,17(4):44-50
Under a quadratic loss function, this paper gives necessary and sufficient conditions for a linear estimator of the parameter matrix in the growth curve model to be a Minimax admissible estimator within a given class of linear estimators.

7.
For finite populations of arbitrary rank, under quadratic loss, the literature has given the unique linear Minimax predictor of a linearly predictable variable within the class of homogeneous linear predictors. This paper proves that, under the normality assumption, this linear Minimax predictor is also the unique Minimax predictor of the linearly predictable variable within the class of all predictors.

8.
Minimax Prediction in Finite Populations of Arbitrary Rank under the Normal Distribution (cited by: 1; self-citations: 0; by others: 1)
For finite populations of arbitrary rank, under quadratic loss, the literature has given the unique linear Minimax predictor of a linearly predictable variable within the class of homogeneous linear predictors. This paper proves that, under the normality assumption, this linear Minimax predictor is also the unique Minimax predictor of the linearly predictable variable within the class of all predictors.

9.
Linear Minimax Estimation of Estimable Functions in the General Gauss-Markov Model (cited by: 5; self-citations: 0; by others: 5)
Let Y be an n-dimensional random vector with mean Xβ and covariance matrix σ²V, and let Sβ be a linearly estimable function, where X, S, and V ≥ 0 are known matrices and β ∈ R^p and σ² > 0 are unknown parameters. This paper studies the minimaxity of linear estimators under matrix loss and quadratic loss, respectively. Under suitable assumptions, the unique linear Minimax estimator of Sβ is obtained (uniqueness understood in the almost-everywhere sense).

10.
Under matrix loss, this paper gives necessary and sufficient conditions for a linear estimator of the multivariate regression coefficients to be a Minimax admissible estimator within the class of linear estimators.

11.
In this paper, we consider the following minimax linear programming problem: min z = max_{1 ≤ j ≤ n} {c_j x_j}, subject to Ax = g, x ≥ 0. It is well known that this problem can be transformed into a linear program by introducing n additional constraints. We note that these additional constraints can be considered implicitly by treating them as parametric upper bounds. Based on this approach we develop two algorithms: a parametric algorithm and a primal–dual algorithm. The parametric algorithm solves a linear programming problem with parametric upper bounds, and the primal–dual algorithm solves a sequence of related dual-feasible linear programming problems. Computational results are also presented, indicating that both algorithms are substantially faster than the simplex algorithm applied to the enlarged linear programming problem.
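The standard reformulation mentioned in the abstract (introduce a scalar z and n constraints c_j x_j ≤ z) can be sketched with SciPy's `linprog`; the function name and solver choice below are illustrative, not from the paper:

```python
import numpy as np
from scipy.optimize import linprog

def minimax_lp(c, A, g):
    """Solve  min max_{1<=j<=n} c_j * x_j  s.t.  A x = g, x >= 0,
    via the classical reformulation: append a scalar variable z,
    add the n constraints c_j * x_j - z <= 0, and minimize z."""
    m, n = A.shape
    obj = np.concatenate([np.zeros(n), [1.0]])         # minimize z (last variable)
    A_ub = np.hstack([np.diag(c), -np.ones((n, 1))])   # c_j x_j - z <= 0
    b_ub = np.zeros(n)
    A_eq = np.hstack([A, np.zeros((m, 1))])            # A x = g (z does not appear)
    bounds = [(0, None)] * n + [(None, None)]          # x >= 0, z free
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=g, bounds=bounds)
    return res.x[:n], res.x[-1]

# Tiny example: min max(x1, x2) subject to x1 + x2 = 1, x >= 0
x, z = minimax_lp(np.array([1.0, 1.0]), np.array([[1.0, 1.0]]), np.array([1.0]))
```

This solves the enlarged problem directly; the paper's contribution is precisely to avoid carrying these n extra rows explicitly.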

12.
13.
Hadamard's determinant theorem is used to obtain an upper bound for the modulus of the determinant of the sum of two normal matrices in terms of their eigenvalues. This bound is compared with another given by M. E. Miranda.
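The paper's eigenvalue bound for sums of normal matrices is not reproduced in the abstract, but the classical Hadamard inequality it builds on, |det M| ≤ ∏_i ||row_i(M)||₂, is easy to check numerically (a sketch, not the paper's bound):

```python
import numpy as np

def hadamard_bound(M):
    """Hadamard's inequality: |det M| <= product of the
    Euclidean norms of the rows of M."""
    return float(np.prod(np.linalg.norm(M, axis=1)))

rng = np.random.default_rng(0)
# A sum of two (here arbitrary, for illustration) square matrices
M = rng.standard_normal((4, 4)) + rng.standard_normal((4, 4))
bound = hadamard_bound(M)
assert abs(np.linalg.det(M)) <= bound + 1e-12
```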

14.
15.
A solution to the minimax linear-quadratic problem of control of an operator system on a semi-infinite time interval is presented. The solution is based on the abstract maximum principle, Willems’ behavioral approach, the direct method of basic operators, and a small gain theorem.

16.
We will consider the problem of determining a linear, mean-square optimal estimate of the transformation Aξ of a stationary random sequence ξ(k) with spectral density f(λ) from observations of the sequence ξ(k) + η(k) for k ≤ 0, where η(k) is a stationary sequence, uncorrelated with ξ(k), with spectral density g(λ). The least favorable spectral densities and the minimax (robust) spectral characteristics of an optimal estimate of Aξ for different classes of densities are found. Translated from Ukrainskii Matematicheskii Zhurnal, Vol. 43, No. 1, pp. 92–99, January, 1991.

17.
We consider an approach yielding a minimax estimator in the linear regression model with a priori information on the parameter vector, e.g., ellipsoidal restrictions. This estimator is computed directly from the loss function and can be motivated by the general Pitman nearness criterion. It turns out that this approach coincides with the projection estimator which is obtained by projecting an initial arbitrary estimate on the subset defined by the restrictions.
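The projection idea can be sketched in its simplest special case, a spherical restriction ||β||₂ ≤ r, where projection reduces to radial scaling of an initial estimate (here OLS); the general ellipsoidal case in the paper requires solving a one-dimensional secular equation instead. Names and the spherical simplification are assumptions for illustration:

```python
import numpy as np

def project_ols_to_ball(X, y, r):
    """Illustrative sketch: compute the OLS estimate and project it
    onto the spherical restriction {beta : ||beta||_2 <= r}.
    Projection onto a centered ball is just radial scaling."""
    b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    nrm = np.linalg.norm(b_ols)
    return b_ols if nrm <= r else (r / nrm) * b_ols

# Example: with X = I the OLS estimate is y itself; ||(3, 4)|| = 5 > 1,
# so the projected estimate lies on the unit sphere.
b = project_ols_to_ball(np.eye(2), np.array([3.0, 4.0]), 1.0)
```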

18.
Assume X = (X1, …, Xp)′ has a normal mixture distribution with density with respect to Lebesgue measure ∫0^∞ (2πt)^(−p/2) |Σ|^(−1/2) exp(−(x − θ)′Σ^(−1)(x − θ)/(2t)) dF(t), where Σ is a known positive definite matrix and F is any known c.d.f. on (0, ∞). Estimation of the mean vector θ under an arbitrary known quadratic loss function Q(θ, a) = (a − θ)′Q(a − θ), Q a positive definite matrix, is considered. An unbiased estimator of risk is obtained for an arbitrary estimator, and a sufficient condition for estimators to be minimax is then achieved. The result is applied to modifying all the Stein estimators for the means of independent normal random variables into minimax estimators for the problem considered here. In particular, the results apply to the Stein class of limited translation estimators.

19.
20.
In three or more dimensions it is well known that the usual point estimator for the mean of a multivariate normal distribution is minimax but not admissible with respect to squared Euclidean distance loss. This paper gives sufficient conditions on the prior distribution under which the Bayes estimator has strictly lower risk than the usual estimator. Examples are given for which the posterior density is useful in the formation of confidence sets.
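The minimax-but-inadmissible phenomenon described above can be seen in a quick Monte Carlo comparison of the usual estimator with the classical James–Stein shrinkage estimator; the choice θ = 0 and the dimensions below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Compare the empirical squared-error risk of the usual estimator X with
# that of the James-Stein estimator (1 - (p-2)/||X||^2) X for the mean of
# N(theta, I_p), p >= 3.  At theta = 0 the shrinkage gain is largest.
rng = np.random.default_rng(1)
p, n_rep = 10, 20000
theta = np.zeros(p)                        # illustrative true mean
X = rng.standard_normal((n_rep, p)) + theta
shrink = 1.0 - (p - 2) / np.sum(X**2, axis=1)
js = shrink[:, None] * X
risk_usual = np.mean(np.sum((X - theta)**2, axis=1))   # close to p
risk_js = np.mean(np.sum((js - theta)**2, axis=1))     # much smaller at theta = 0
```

Both estimators are minimax here, yet the shrinkage estimator strictly dominates the usual one, which is the inadmissibility the abstract refers to.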


Copyright©北京勤云科技发展有限公司  京ICP备09084417号