Similar Documents
 19 similar documents found (search time: 62 ms)
1.
Least squares method for fuzzy regression analysis with trapezoidal fuzzy-number coefficients   Cited by 1 (self-citations: 0; by others: 1)
Since fuzzy numbers can often be approximated by trapezoidal fuzzy numbers, studying fuzzy regression models with trapezoidal fuzzy numbers has practical value. Using the least squares approach, for a fuzzy linear regression model whose inputs are crisp and whose outputs and regression coefficients are trapezoidal fuzzy numbers, the least squares estimation of the regression coefficients and the estimation of the error term are discussed; a numerical example shows that the proposed parameter estimates fit the data well.
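As a rough illustration of the idea (not the paper's exact estimator): when the crisp inputs are nonnegative, each of the four parameters of a trapezoidal fuzzy output depends linearly on the corresponding parameters of the trapezoidal fuzzy coefficients, so each component can be fitted by ordinary least squares on the same design matrix. The sketch below uses hypothetical data.

```python
import numpy as np

# Minimal sketch (not the paper's exact estimator): with crisp nonnegative
# inputs, a trapezoidal fuzzy output (a, b, c, d) depends linearly on the
# corresponding components of trapezoidal fuzzy coefficients, so each of the
# four components can be fitted by ordinary least squares on the same design
# matrix. Data below are hypothetical.
rng = np.random.default_rng(1)
n = 20
x = rng.uniform(0.0, 5.0, size=n)             # crisp, nonnegative inputs
X = np.column_stack([np.ones(n), x])          # design matrix with intercept

# Hypothetical trapezoidal fuzzy outputs, one row per observation: (a, b, c, d).
centers = 1.0 + 2.0 * x + rng.normal(scale=0.3, size=n)
Y = np.column_stack([centers - 1.0, centers - 0.4, centers + 0.4, centers + 1.0])

# Component-wise least squares: one coefficient vector per trapezoid component.
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)  # shape (2, 4)
print("fitted trapezoid components of (intercept, slope):\n", coef)
```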

2.
Since Tanaka et al. introduced the concept of fuzzy regression in 1982, the problem has been studied extensively. Fuzzy least squares estimation, one of the main estimation methods, has attracted particular attention because of its close connection to statistical least squares estimation. Based on a suitably defined distance between two fuzzy numbers, this paper proposes a constrained least squares estimation method for fuzzy linear regression models; the method not only guarantees nonnegative widths for the estimated fuzzy parameters but also makes the centers of the estimated fuzzy parameters coincide with the classical least squares estimates. Finally, a numerical example illustrates the application of the proposed method.
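A minimal sketch in the same spirit (not the paper's distance-based constrained estimator), assuming symmetric triangular fuzzy outputs with crisp nonnegative inputs: the centers are fitted by ordinary least squares, so the center estimates coincide with the classical LS estimates, while the spreads are fitted by nonnegative least squares so the estimated spreads stay nonnegative. Data are hypothetical.

```python
import numpy as np
from scipy.optimize import nnls

# Illustrative sketch only: centers fitted by ordinary least squares,
# spreads fitted by nonnegative least squares so that estimated spreads
# remain nonnegative. Hypothetical symmetric triangular fuzzy data.
rng = np.random.default_rng(2)
n = 25
x = rng.uniform(0.0, 4.0, size=n)
X = np.column_stack([np.ones(n), x])

m = 0.5 + 1.5 * x + rng.normal(scale=0.2, size=n)            # observed centers
s = 0.3 + 0.2 * x + np.abs(rng.normal(scale=0.05, size=n))   # observed spreads

center_coef, *_ = np.linalg.lstsq(X, m, rcond=None)   # classical LS for centers
spread_coef, _ = nnls(X, s)                            # nonnegative LS for spreads
print("center coefficients:", center_coef)
print("spread coefficients:", spread_coef)
```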

3.
Fuzzy linear least squares regression with LR-type fuzzy-number coefficients   Cited by 2 (self-citations: 2; by others: 0)
For the case where the inputs, outputs, and coefficients are LR-type fuzzy numbers, a fuzzy linear regression model is established, and a least squares estimator for the model together with a method for evaluating model performance is proposed. When the inputs, outputs, and coefficients all degenerate to crisp values, the estimator reduces to the classical least squares estimator. The method applies not only to triangular fuzzy numbers but also to other LR-type fuzzy numbers (e.g., exponential fuzzy numbers). Numerical simulations show that the method fits well.

4.
Generalized shrunken least squares estimators   Cited by 13 (self-citations: 1; by others: 12)
This paper introduces a class of estimators of the regression coefficients in a linear model. Many commonly used estimators, such as the ridge estimator, the principal components estimator, the shrunken least squares estimator, and iterative estimators, belong to this class. The admissibility of estimators in this class and the comparison of estimators under the matrix mean squared error criterion are discussed.
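As one concrete member of such a class, here is a minimal numpy sketch of the ridge estimator $\hat\beta(k) = (X'X + kI)^{-1}X'y$, which reduces to ordinary least squares at $k = 0$; the data and the ridge parameter are hypothetical.

```python
import numpy as np

# Ridge estimator beta(k) = (X'X + k I)^{-1} X'y; k = 0 gives ordinary
# least squares. Hypothetical data and ridge parameter.
rng = np.random.default_rng(3)
n, p = 50, 3
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=1.0, size=n)

def ridge(X, y, k):
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

print("OLS:  ", ridge(X, y, 0.0))
print("ridge:", ridge(X, y, 1.0))
```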

5.
A note on the generalized shrunken least squares estimator   Cited by 1 (self-citations: 0; by others: 1)
Zhao Zemao, 《应用数学》, 1995, 8(1): 90-95
This paper studies some properties of the generalized shrunken least squares estimator (GSLSE) and gives an unbiased estimator (UE) of its mean squared error (MSE). By minimizing this UE, a formula for selecting the GSLSE's parameters is obtained, and this unified approach is applied to the generalized ridge estimator, the ridge estimator, Massy's principal components estimator, the Stein-type shrinkage estimator, and the square-root biased estimator, yielding a way of selecting their parameters. Finally, a comparative analysis on the Hald data shows that the proposed method is practical and effective.

6.
For linear regression problems with fuzzy observations, an ordering of fuzzy numbers is realized by defining a fuzzy ordering index; based on the classical least squares principle, a least squares ordering estimation method for the fuzzy regression coefficients is given that minimizes the sum of squared errors under this ordering. Three examples show that the proposed least squares method estimates regression models with fuzzy inputs and outputs and crisp regression coefficients well; more importantly, the method applies not only to triangular fuzzy numbers but also to other types of fuzzy observations.

7.
Yang Fuxing, 《数学年刊A辑》, 2004, 25(2): 263-268
For the linear regression model $Y_i = x_i'\beta + e_i$, $i = 1, \ldots, n, \ldots$, where $x_1, x_2, \ldots$ are known $p$-dimensional vectors and $e_1, e_2, \ldots$ are random errors, this paper proves that if $e_1, e_2, \ldots$ are independent and each is nondegenerate, then $S_n^{-1} = \left(\sum_{i=1}^{n} x_i x_i'\right)^{-1} \to 0$ is a necessary condition for the consistency of the least squares estimator of $\beta$; note that no conditions are imposed on the expectations or variances of the $e_i$.

8.
For the linear regression model $Y_i = x_i'\beta + e_i$, $i = 1, \ldots, n, \ldots$, where $x_1, x_2, \ldots$ are known $p$-dimensional vectors and $e_1, e_2, \ldots$ are random errors, this paper proves that if $e_1, e_2, \ldots$ are independent and each is nondegenerate, then $S_n^{-1} = \left(\sum_{i=1}^{n} x_i x_i'\right)^{-1} \to 0$ is a necessary condition for the consistency of the least squares estimator of $\beta$; note that no conditions are imposed on the expectations or variances of the $e_i$.

9.
Relative efficiency of the least squares estimator in weighted regression models   Cited by 4 (self-citations: 0; by others: 4)
For linear weighted regression models, this paper obtains lower bounds for four kinds of relative efficiency of the least squares estimator of the unknown parameters with respect to the best linear unbiased estimator, and establishes a connection between relative efficiency and the generalized correlation coefficient.

10.
This paper presents a linear regression model for fitting data with numerical (crisp) inputs and fuzzy-number outputs, proves that the model's solution exists and is unique, and gives an explicit expression for the solution.

11.
The fuzzy least absolute deviation linear regression based on symmetric triangular fuzzy numbers proposed in [1] is revised and extended; three different forms of the fuzzy least absolute deviation linear regression model are given and converted into linear or nonlinear programming problems for solution. Finally, several numerical examples are given; the computations and comparisons show that all three fuzzy least absolute deviation linear regression models fit the data very well.
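As a small illustration of the linear programming reduction for the crisp special case (the fuzzy models above additionally carry constraints on the spreads), the sketch below solves an ordinary least absolute deviation regression with scipy's linear programming routine on hypothetical data.

```python
import numpy as np
from scipy.optimize import linprog

# Crisp least-absolute-deviation (LAD) regression as a linear program,
# illustrating the LP reformulation. Data below are hypothetical.
rng = np.random.default_rng(0)
n, p = 30, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
y = X @ np.array([1.0, 2.0]) + rng.laplace(scale=0.5, size=n)

# Variables: [beta (p, free), t (n, >= 0)]; minimize sum(t) subject to
#  X beta - t <= y  and  -X beta - t <= -y, i.e. |y - X beta| <= t.
c = np.concatenate([np.zeros(p), np.ones(n)])
A_ub = np.block([[X, -np.eye(n)], [-X, -np.eye(n)]])
b_ub = np.concatenate([y, -y])
bounds = [(None, None)] * p + [(0, None)] * n

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
beta_lad = res.x[:p]
print("LAD coefficients:", beta_lad)
```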

12.
We consider the problem of estimating regression models of two-dimensional random fields. Asymptotic properties of the least squares estimator of the linear regression coefficients are studied for the case where the disturbance is a homogeneous random field with an absolutely continuous spectral distribution and a positive and piecewise continuous spectral density. We obtain necessary and sufficient conditions on the regression sequences such that a linear estimator of the regression coefficients is asymptotically unbiased and mean square consistent. For such regression sequences the asymptotic covariance matrix of the linear least squares estimator of the regression coefficients is derived.

13.
This paper considers the semiparametric regression model for longitudinal data $Y_{ij} = X_{ij}^{T}\beta + g(T_{ij}) + \varepsilon_{ij}$. Based on least squares and local linear fitting, estimators of the parametric component $\beta$, the regression function $g(\cdot)$, and the error variance $\sigma^2$ are constructed; consistency of the estimators is established under suitable conditions, and a simulation study shows that the method has good finite-sample properties.

14.
Under negatively associated (NA) samples, the r-th moment consistency of the least squares estimator in linear regression models is discussed, and for $0 < r < 2$ the conclusions of the related literature are extended.

15.
Consider a repeated measurement partially linear regression model with an unknown vector parameter β, an unknown function g(.), and unknown heteroscedastic error variances. In order to improve the semiparametric generalized least squares estimator (SGLSE) of β, we propose an iterative weighted semiparametric least squares estimator (IWSLSE) and show that it improves upon the SGLSE in terms of asymptotic covariance matrix. An adaptive procedure is given to determine the number of iterations. We also show that when the number of replicates is less than or equal to two, the IWSLSE cannot improve upon the SGLSE. These results are generalizations of those in [2] to the case of semiparametric regressions.
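A simplified, purely parametric sketch of the iterative reweighting idea (the IWSLSE additionally estimates a nonparametric component): starting from an unweighted fit, group-wise error variances are estimated from residuals and used as inverse-variance weights in the next weighted least squares step. The groups, data, and number of iterations are hypothetical.

```python
import numpy as np

# Simplified iteratively reweighted least squares under group-wise
# heteroscedastic error variances. Hypothetical data.
rng = np.random.default_rng(4)
n_groups, reps, p = 10, 3, 2
X = rng.normal(size=(n_groups * reps, p))
groups = np.repeat(np.arange(n_groups), reps)
sigma = rng.uniform(0.5, 2.0, size=n_groups)           # true group std deviations
y = X @ np.array([1.0, -1.0]) + rng.normal(scale=sigma[groups])

beta = np.linalg.lstsq(X, y, rcond=None)[0]             # start from unweighted LS
for _ in range(3):                                      # a few reweighting steps
    resid = y - X @ beta
    var_hat = np.array([np.mean(resid[groups == g] ** 2) for g in range(n_groups)])
    w = 1.0 / var_hat[groups]                           # inverse-variance weights
    Xw, yw = X * np.sqrt(w)[:, None], y * np.sqrt(w)
    beta = np.linalg.lstsq(Xw, yw, rcond=None)[0]
print("IWLS estimate:", beta)
```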

16.
Given a finite partially-ordered set with a positive weighting function defined on its points, it is well known that any real-valued function defined on the set has a unique best order-preserving approximation in the weighted least squares sense. Many algorithms have been given for the solution of this isotonic regression problem. Most such algorithms either are not polynomial or they are of unknown time complexity. Recently, it has become clear that the general isotonic regression problem can be solved as a network flow problem in time O(n^4) with a space requirement of O(n^2), where n is the number of points in the set. Building on the insights at the basis of this improvement, we show here that, in the case of a general two-dimensional partial ordering, the problem can be solved in O(n^3) time and, when the two-dimensional set is restricted to a grid, the time can be further improved to O(n^2).
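For reference, the totally ordered (chain) special case of weighted isotonic regression can be solved directly by the pool-adjacent-violators algorithm; the general two-dimensional partial orders discussed above need the flow-based methods. A minimal sketch on hypothetical data:

```python
def isotonic_chain(y, w):
    """Weighted isotonic regression on a chain (total order) via
    pool-adjacent-violators; general 2-D partial orders require the
    flow-based methods discussed in the abstract."""
    blocks = []  # each block: [fitted value, total weight, number of points]
    for yi, wi in zip(y, w):
        blocks.append([yi, wi, 1])
        # Merge backwards while the monotonicity constraint is violated.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            v2, w2, c2 = blocks.pop()
            v1, w1, c1 = blocks.pop()
            wt = w1 + w2
            blocks.append([(w1 * v1 + w2 * v2) / wt, wt, c1 + c2])
    fit = []
    for v, _, c in blocks:
        fit.extend([v] * c)
    return fit

# Hypothetical data: the fit is the best nondecreasing weighted LS approximation.
print(isotonic_chain([1.0, 3.0, 2.0, 2.0, 5.0], [1.0, 1.0, 1.0, 1.0, 1.0]))
```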

17.
18.
Numerical and computational aspects of direct methods for large and sparse least squares problems are considered. After a brief survey of the most often used methods, we summarize the important conclusions made from a numerical comparison in MATLAB. Significantly improved algorithms have during the last 10-15 years made sparse QR factorization attractive, and competitive to previously recommended alternatives. Of particular importance is the multifrontal approach, characterized by low fill-in, dense subproblems and naturally implemented parallelism. We describe a Householder multifrontal scheme and its implementation on sequential and parallel computers. Available software has in practice a great influence on the choice of numerical algorithms. Less appropriate algorithms are thus often used solely because of existing software packages. We briefly survey software packages for the solution of sparse linear least squares problems. Finally, we focus on various applications from optimization, leading to the solution of large and sparse linear least squares problems. In particular, we concentrate on the important case where the coefficient matrix is a fixed general sparse matrix with a variable diagonal matrix below. Interior point methods for constrained linear least squares problems give, for example, rise to such subproblems. Important gains can be made by taking advantage of structure. Closely related is also the choice of numerical method for these subproblems. We discuss why the less accurate normal equations tend to be sufficient in many applications.
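As a small dense illustration of the normal-equations trade-off mentioned at the end of the abstract, the sketch below compares solving a least squares problem via the normal equations with a QR-based solve on a deliberately ill-conditioned hypothetical matrix; it is not the sparse multifrontal scheme described in the paper.

```python
import numpy as np

# Dense toy comparison (hypothetical data): the normal equations square the
# condition number, while a QR-based solve works with the original matrix.
rng = np.random.default_rng(5)
m, n = 100, 5
A = rng.normal(size=(m, n)) @ np.diag([1.0, 1.0, 1.0, 1e-4, 1e-5])  # ill-conditioned
x_true = np.ones(n)
b = A @ x_true

# Normal equations: solve (A'A) x = A'b.
x_ne = np.linalg.solve(A.T @ A, A.T @ b)

# QR factorization: solve R x = Q'b.
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)

print("normal-equation error:", np.linalg.norm(x_ne - x_true))
print("QR error:             ", np.linalg.norm(x_qr - x_true))
```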

19.
We consider the solution of weighted linear least squares problems by Householder transformations with implicit scaling, that is, with the weights stored separately. By holding inverse weights, the constrained case can be accommodated. The error analysis of the weighted and unconstrained case is readily extended and we show that iterative refinement may be applied.
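For contrast with the implicit scaling analysed in the paper, here is a minimal sketch of the explicit-scaling baseline: each row is multiplied by the square root of its weight and an ordinary least squares solver is applied. Data and weights are hypothetical.

```python
import numpy as np

# Explicit row scaling: minimize sum_i w_i (b_i - a_i' x)^2 by scaling each
# row by sqrt(w_i) and calling an ordinary least squares solver. (The paper
# instead keeps the weights stored separately.) Hypothetical data.
rng = np.random.default_rng(6)
m, n = 40, 3
A = rng.normal(size=(m, n))
x_true = np.array([2.0, -1.0, 0.5])
w = rng.uniform(0.1, 10.0, size=m)                     # positive weights
b = A @ x_true + rng.normal(scale=1.0 / np.sqrt(w))    # heteroscedastic noise

sw = np.sqrt(w)
x_wls, *_ = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)
print("weighted LS estimate:", x_wls)
```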
