Similar Literature
Found 19 similar documents (search time: 171 ms)
1.
This paper uses the PLS procedure to build a partial least squares regression model with multiple dependent variables, and uses a worked example to compare multiple linear regression (MLR), principal component regression (PCR), and partial least squares regression (PLS).
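A minimal sketch of the three-way comparison the abstract describes, using scikit-learn on synthetic collinear data with three response variables (the paper itself works in the SAS PLS procedure; the data shapes and component counts here are invented):

```python
# Compare MLR, PCR, and PLS on synthetic multi-response data with collinear predictors.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))
X[:, 4:] = X[:, :4] + 0.05 * rng.normal(size=(100, 4))  # induce strong collinearity
B = rng.normal(size=(8, 3))
Y = X @ B + 0.1 * rng.normal(size=(100, 3))             # three dependent variables

models = {
    "MLR": LinearRegression(),
    "PCR": make_pipeline(PCA(n_components=4), LinearRegression()),
    "PLS": PLSRegression(n_components=4),
}
for name, model in models.items():
    score = cross_val_score(model, X, Y, cv=5, scoring="r2").mean()
    print(f"{name}: mean CV R^2 = {score:.3f}")
```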

2.
Based on the actual SO2 pollution during the heating season in Dandong and its relationship with meteorological factors, four common air pollution forecasting models were built: stepwise regression, partial least squares regression, principal component regression, and a BP (back-propagation) neural network. The models were then put through simulation, trial forecasting, and operational use. The simulated values of all models track the observed trends, but the forecasts of the BP neural network and the partial least squares regression equations are closer to the observed values than those of the stepwise regression and principal component regression equations.

3.
An analysis of the practical performance of partial least squares regression   (Cited by 2: 0 self-citations, 2 others)
This paper introduces the modeling procedure of partial least squares (PLS) regression, compares the practical performance of PLS with ordinary least squares (OLS) regression and principal component regression, and summarizes the basic characteristics of PLS regression.

4.
Application of partial least squares logistic regression to flood prediction for Poyang Lake   (Cited by 3: 0 self-citations, 3 others)
Partial least squares logistic regression is a relatively new multivariate method. When the explanatory variables are strongly multicollinear, when the sample is small, or when observations contain missing values, it largely resolves the numerical instability of ordinary logistic regression. Using the PLS logistic regression algorithm and hydrological data observed in the Poyang Lake region from 1953 to 1998, this paper analyzes how each month's maximum five-day consecutive precipitation and each month's maximum discharge of the Yangtze River affect flooding at Poyang Lake, and builds a discriminant model for the probability that a flood of a given severity occurs. The results show that the PLS logistic regression model is well suited to research in this field.
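One common way to realize this idea is to extract a few PLS components against the 0/1 flood indicator and fit the logistic model on the resulting scores. The sketch below does exactly that with scikit-learn; it is a two-step approximation of the integrated PLS logistic algorithm the paper uses, and all data shapes are invented:

```python
# PLS scores as collinearity-robust inputs to a logistic flood/no-flood model.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(46, 12))          # e.g., monthly max 5-day rainfall + river discharge
y = (X[:, 0] + X[:, 1] + 0.5 * rng.normal(size=46) > 0).astype(int)  # flood indicator

pls = PLSRegression(n_components=2).fit(X, y)     # components extracted against the 0/1 target
T = pls.transform(X)                              # PLS scores
clf = LogisticRegression().fit(T, y)              # logistic model on the scores
prob = clf.predict_proba(pls.transform(X))[:, 1]  # predicted flood probability
print(prob[:5].round(3))
```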

5.
Through examples, this paper shows how to diagnose collinearity among the explanatory variables of a multiple linear regression and how to treat it with the enhanced features of the REG and related procedures in SAS/STAT (release 6.12). The methods covered include variable screening, ridge regression, principal component regression, and partial least squares regression.
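The paper works in SAS; as a rough Python analogue, the sketch below flags collinearity with variance inflation factors (one of PROC REG's diagnostics) and then applies ridge regression as one of the remedies listed (synthetic data, illustrative penalty):

```python
# Diagnose collinearity with VIFs, then stabilize the fit with ridge regression.
import numpy as np
from statsmodels.stats.outliers_influence import variance_inflation_factor
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
x1 = rng.normal(size=50)
x2 = x1 + 0.01 * rng.normal(size=50)           # nearly collinear with x1
x3 = rng.normal(size=50)
X = np.column_stack([x1, x2, x3])

Xc = np.column_stack([X, np.ones(len(X))])      # append intercept column for the VIF regressions
for j in range(3):
    print(f"VIF(x{j + 1}) = {variance_inflation_factor(Xc, j):.1f}")  # VIF >> 10 flags trouble

y = x1 + x3 + 0.1 * rng.normal(size=50)
print(Ridge(alpha=1.0).fit(X, y).coef_.round(3))  # ridge shrinks the unstable coefficients
```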

6.
Statistical analysis of two groups of multiply correlated variables (2)   (Cited by 13: 0 self-citations, 13 others)
This paper describes how to build a many-to-many (multi-response) linear regression model (MLR) under the least squares criterion and a principal component regression (PCR) model, and compares the two through an example.

7.
Solving nonlinear regression problems with SAS   (Cited by 2: 0 self-citations, 2 others)
This paper estimates the parameters of nonlinear regression models with SAS/STAT. First, nonlinear models that can be linearized are reduced to linear regression models by a change of variables, and the parameters and regression equations are obtained by ordinary least squares, principal component analysis, and partial least squares. Second, the parameters of the logistic and Gompertz models are estimated by a modified Gauss-Newton iteration.
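A small sketch of the second approach on synthetic data: SciPy's curve_fit (Levenberg-Marquardt, a damped relative of Gauss-Newton) stands in for SAS's modified Gauss-Newton iteration, and the logistic growth curve and starting values are illustrative:

```python
# Fit a logistic growth model by iterative nonlinear least squares.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, a, b):
    # Logistic growth: y = K / (1 + a * exp(-b * t))
    return K / (1.0 + a * np.exp(-b * t))

t = np.linspace(0, 10, 40)
rng = np.random.default_rng(3)
y = logistic(t, K=100.0, a=20.0, b=0.8) + rng.normal(scale=2.0, size=t.size)

params, _ = curve_fit(logistic, t, y, p0=[90.0, 10.0, 0.5])  # starting values matter
print("K, a, b =", params.round(2))
```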

8.
Boosting partial least squares (BPLS) and its applications   (Cited by 2: 0 self-citations, 2 others)
Classification is one of the most basic tasks in biostatistics and data mining. This paper explores a new method, boosting partial least squares (BPLS), which combines a sequence of shrunken partial least squares models, each containing a single component. Unlike conventional PLS, the new method does not require choosing the number of PLS components; only two parameters need to be set. Experiments on real data show that the new method resists overfitting better than conventional PLS while preserving accuracy.
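A rough regression-flavored sketch of the stagewise idea: fit a one-component PLS model to the current residuals, add a shrunken copy of its prediction, and repeat. The paper's BPLS targets classification; only its two tuning parameters (shrinkage nu and number of steps M) are mirrored here, and the data are synthetic:

```python
# Stagewise boosting of one-component PLS models, gradient-boosting style.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def boosted_pls_fit(X, y, M=50, nu=0.1):
    """Fit M one-component PLS models, each to the residuals of the previous stage."""
    models, residual = [], y.astype(float).copy()
    for _ in range(M):
        m = PLSRegression(n_components=1).fit(X, residual)
        residual -= nu * m.predict(X).ravel()   # shrink each stage's contribution by nu
        models.append(m)
    return models

def boosted_pls_predict(models, X, nu=0.1):
    return nu * sum(m.predict(X).ravel() for m in models)

rng = np.random.default_rng(4)
X = rng.normal(size=(80, 20))
y = X[:, 0] - 2 * X[:, 1] + 0.3 * rng.normal(size=80)
models = boosted_pls_fit(X, y)
print(np.corrcoef(boosted_pls_predict(models, X), y)[0, 1].round(3))
```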

9.
The order-selection techniques of multivariate autoregressive time series models are used to solve the problem of choosing the explanatory variables of a partial least squares regression model. A forecasting analysis of China's fiscal revenue shows that using the two statistical models together substantially improves forecast accuracy.
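One plausible reading of the combination, sketched with statsmodels and scikit-learn: select the lag order of the series by an information criterion, then use exactly those lags as the predictors of a PLS regression (the series, criterion, and component count are illustrative):

```python
# AR order selection (AIC) decides which lags enter a PLS forecasting model.
import numpy as np
from statsmodels.tsa.ar_model import ar_select_order
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(5)
y = np.zeros(200)
for t in range(2, 200):                          # synthetic AR(2) "revenue" series
    y[t] = 0.6 * y[t - 1] - 0.2 * y[t - 2] + rng.normal()

sel = ar_select_order(y, maxlag=10, ic="aic")
p = max(sel.ar_lags or [1])                      # chosen AR order (fall back to 1)
X = np.column_stack([y[p - k:-k] for k in range(1, p + 1)])  # lag-k predictor columns
pls = PLSRegression(n_components=min(p, 2)).fit(X, y[p:])
print("selected order:", p)
```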

10.
A comparative study of modeling methods for polynomial regression   (Cited by 18: 0 self-citations, 18 others)
In applied work, when regression models are used to explain the relationship between cause and effect variables, power terms of the explanatory variables frequently arise, and a polynomial regression model becomes a reasonable choice. Because the regressors of a polynomial model are strongly correlated with one another, estimating the coefficients by ordinary least squares can incur large errors. To improve the predictive accuracy and reliability of polynomial regression, this paper proposes modeling with principal component analysis and with partial least squares regression, and compares them on simulated data.
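A compact sketch of that simulation-style comparison: expand a single predictor into correlated polynomial terms, then score OLS, PCR, and PLS by cross-validation (the cubic model and component counts are made up):

```python
# OLS vs. PCR vs. PLS on correlated polynomial regressors, simulated data.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
x = rng.uniform(1, 3, size=60)
y = 1 + 2 * x + 0.5 * x**2 - 0.1 * x**3 + 0.2 * rng.normal(size=60)
X = PolynomialFeatures(degree=3, include_bias=False).fit_transform(x[:, None])  # x, x^2, x^3

for name, model in {
    "OLS": LinearRegression(),
    "PCR": make_pipeline(PCA(n_components=2), LinearRegression()),
    "PLS": PLSRegression(n_components=2),
}.items():
    print(name, cross_val_score(model, X, y, cv=5, scoring="r2").mean().round(3))
```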

11.
A neural network model for short-term climate prediction of flood-season precipitation   (Cited by 1: 0 self-citations, 1 other)
The May-September mean precipitation of the seven counties (cities) of Xuancheng, Anhui, for 1960-2005 is taken as the predictand. Predictors are selected from 74 monthly atmospheric circulation indices, the monthly mean 500 hPa height field, and the monthly mean sea surface temperature field preceding the rainfall. Principal component analysis is used to construct the network's learning matrix, which lowers the matrix dimension and improves the generalization of the prediction model. The resulting neural network model fits the historical samples closely and performs well in trial forecasts, so it can be used in operational climate prediction.
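The preprocessing the abstract emphasizes, PCA compression of a wide predictor matrix ahead of a small network, can be sketched as a pipeline (all shapes, layer sizes, and data are invented):

```python
# PCA builds the low-dimensional learning matrix that feeds a small network.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)
X = rng.normal(size=(46, 74))          # 46 years x 74 circulation indices (illustrative)
y = rng.normal(size=46)                # flood-season mean precipitation (illustrative)

model = make_pipeline(
    PCA(n_components=8),               # leading PC scores as the learning matrix
    MLPRegressor(hidden_layer_sizes=(6,), max_iter=5000, random_state=0),
)
model.fit(X, y)
print(model.predict(X[:3]).round(3))
```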

12.
Prediction of the significant wave height (SWH) field is carried out in the Bay of Bengal (BOB) using a combination of empirical orthogonal function (EOF) analysis and a genetic algorithm (GA). EOF analysis is performed on 4 years (2005–2008) of numerical-wave-model-generated SWH fields and on analyzed fields of zonal (U) and meridional (V) winds; this decomposes the space-time distributed data into spatial modes ranked by their temporal variances. Two variants of GA are tested. In the first, a univariate GA is applied to the time series of the first principal component (PC) of SWH in the training dataset after filtering with singular spectrum analysis (SSA) for noise reduction, and the generated equations are used to forecast the SWH field at various lead times. In the second, a multivariate GA is applied to the SSA-filtered time series of the first PC of SWH and the time-lagged first PCs of U and V; forecast equations are again generated, and the SWH forecast is repeated at the same lead times. Forecast quality is evaluated in terms of the root mean square error, and the results are also compared with buoy data at one location. It is concluded that the method can serve as a cost-effective alternative prediction technique in the BOB.
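The EOF step is essentially an SVD of the anomaly field; a minimal sketch follows (array sizes are invented, and the GA-derived forecast equations are not reproduced):

```python
# EOF analysis of a space-time field via SVD of the time-mean anomalies.
import numpy as np

rng = np.random.default_rng(8)
swh = rng.normal(size=(1460, 500))          # time steps x grid points (illustrative)
anom = swh - swh.mean(axis=0)               # anomalies about the time mean

U, s, Vt = np.linalg.svd(anom, full_matrices=False)
pcs = U[:, :3] * s[:3]                      # leading principal component time series
eofs = Vt[:3]                               # corresponding spatial EOF patterns
var_frac = s[:3]**2 / (s**2).sum()          # variance explained by each mode
print(var_frac.round(3))
```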

13.
We use the information in intraday data to forecast the volatility of crude oil at horizons of 1–66 days using a variety of models that rely on decomposing realized variance into its positive and negative parts (semivariances) and its continuous and discontinuous parts (jumps). We show the importance of these decompositions in predictive (in-sample) regressions using a number of specifications. Nevertheless, an important empirical finding comes from an out-of-sample analysis, which unambiguously shows the limited value of considering these components. Overall, our results indicate that a simple autoregressive specification mimicking long memory and using past realized variances as predictors does not perform significantly worse than more sophisticated models that include the various components of realized variance.
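The decompositions in question can be computed in a few lines from intraday returns; the sketch below uses the standard estimators (signed realized semivariances, and a bipower-variation-based jump split) on a made-up day of 5-minute returns:

```python
# Realized variance, semivariances, and a jump estimate from intraday returns.
import numpy as np

rng = np.random.default_rng(9)
r = rng.normal(scale=0.001, size=288)       # one day of 5-minute returns (illustrative)

rv = np.sum(r**2)                            # realized variance
rs_pos = np.sum(r[r > 0] ** 2)               # positive semivariance
rs_neg = np.sum(r[r < 0] ** 2)               # negative semivariance
bv = (np.pi / 2) * np.sum(np.abs(r[1:]) * np.abs(r[:-1]))  # bipower variation
jump = max(rv - bv, 0.0)                     # continuous vs. discontinuous split
print(f"RV={rv:.2e}  RS+={rs_pos:.2e}  RS-={rs_neg:.2e}  jump={jump:.2e}")
```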

14.
Adaptive principal component analysis is prohibitively expensive when a large-scale data matrix must be updated frequently. We therefore consider the truncated URV decomposition, which allows faster updates to its approximation of the singular value decomposition while still producing an approximation good enough to recover principal components. Specifically, we propose an efficient algorithm for the truncated URV decomposition when the data matrix is updated by a rank-1 matrix. After developing the algorithm, we apply the truncated URV decomposition to the template tracking problem in a video sequence proposed by Matthews et al. [IEEE Trans. Pattern Anal. Mach. Intell., 26:810-815, 2004], which requires computing the principal components of the augmented image matrix at every iteration. The template tracking experiments show that, in adaptive applications, the truncated URV decomposition maintains a good approximation to the principal component subspace more efficiently than other procedures.

15.
In this paper we review various approaches to decomposing total strains into elastic and nonelastic (plastic) components in the multiplicative representation of the deformation gradient tensor. We briefly describe the kinematics of finite deformations and arbitrary plastic flows. We show that the products of the principal values of the distortion tensors of the elastic and plastic deformations define the principal values of the distortion tensor of the total deformation. We describe two groups of methods for decomposing deformations and their rates into elastic and nonelastic components. The methods of the first group additively decompose specially constructed tensors defined in a common basis (initial, current, or "intermediate"). The second group imposes a relation connecting the tensors that describe the elastic and plastic deformations. We give an example of constructing constitutive relations for elastoplastic continua at large deformations from thermodynamic equations.
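In the standard notation, the multiplicative representation and the principal-value relation asserted above read as follows (a restatement in common notation, not taken verbatim from the paper):

```latex
% Multiplicative split of the deformation gradient into elastic and plastic
% parts, and the principal-value (principal-stretch) relation it induces:
\[
  \mathbf{F} = \mathbf{F}^{e}\,\mathbf{F}^{p},
  \qquad
  \lambda_i = \lambda_i^{e}\,\lambda_i^{p}, \qquad i = 1, 2, 3 .
\]
```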

16.
This paper builds a hybrid decomposition-ensemble model named VMD-ARIMA-HGWO-SVR to improve the stability and accuracy of container throughput prediction. The variational mode decomposition (VMD) algorithm is employed to decompose the original series into several modes (components). ARIMA models are then built to forecast the low-frequency components, while the high-frequency components are predicted by SVR models optimized with a recently proposed swarm intelligence algorithm, hybrid grey wolf optimization (HGWO); finally, the predictions of all modes are ensembled into the overall forecast. The error analysis and model comparison show that VMD is more effective than other decomposition methods such as CEEMD and WD, and that forecasting the low-frequency components with ARIMA yields better results than predicting all components with SVR. The empirical study shows that the proposed model predicts container throughput well and can inform port operation and management in practice, improving overall efficiency and reducing operating costs.
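A runnable skeleton of the decomposition-ensemble logic: a crude moving-average split stands in for VMD, ARIMA handles the smooth mode, SVR on lagged values handles the oscillatory mode, and the mode forecasts are summed. No HGWO tuning is attempted; all orders, lags, and hyperparameters are illustrative:

```python
# Decompose -> forecast each mode (ARIMA or SVR) -> sum the mode forecasts.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.svm import SVR

def forecast_mode(mode, horizon, low_freq):
    if low_freq:                                   # smooth, trend-like mode -> ARIMA
        return ARIMA(mode, order=(1, 1, 1)).fit().forecast(horizon)
    p = 6                                          # oscillatory mode -> SVR on p lags
    X = np.column_stack([mode[p - k:-k] for k in range(1, p + 1)])
    svr = SVR(C=10.0, gamma="scale").fit(X, mode[p:])
    window, out = list(mode[-p:]), []
    for _ in range(horizon):                       # iterate one-step-ahead forecasts
        out.append(svr.predict(np.array(window[::-1])[None, :])[0])
        window = window[1:] + [out[-1]]
    return np.array(out)

rng = np.random.default_rng(10)
t = np.arange(240)
series = 0.05 * t + np.sin(t / 6) + 0.2 * rng.normal(size=240)  # fake throughput series
trend = np.convolve(series, np.ones(12) / 12, mode="same")      # crude stand-in for the VMD trend mode
modes = [trend, series - trend]
forecast = sum(forecast_mode(m, 12, low_freq=(i == 0)) for i, m in enumerate(modes))
print(forecast.round(2))
```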

17.
We compute first-variation formulas for the complex components of the Bakry-Emery-Ricci endomorphism along Kähler structures. Our formulas show that the principal parts of the variations are quite standard complex differential operators with particular symmetry properties on the complex decomposition of the variation of the Kähler metric. As an application, we show that the Soliton-Kähler-Ricci flow generated by the Soliton-Ricci flow represents a strictly parabolic complex system in the complex components of the variation of the Kähler metric.

18.
Solving large-scale linear systems efficiently plays an important role in a petroleum reservoir simulator, and the key issue is choosing an effective parallel preconditioner. Choosing a good preconditioner goes beyond pure algebra: an integrated preconditioner should take into account the physical background, the characteristics of the PDE model, the nonlinear solution method, the linear solution algorithm, domain decomposition, and parallel computation. We first discuss some parallel preconditioning techniques and then construct an integrated, reservoir-simulation-oriented preconditioner based on large-scale distributed parallel processing. Its infrastructure contains such well-known preconditioning techniques as coarse-grid correction, constraint residual correction, and subspace projection correction. We use a multi-step scheme to integrate a total of eight types of preconditioning components into the final preconditioner. Industrial reservoir datasets at the scale of millions of grid cells were tested on domestic high-performance computers; the numerical statistics and analyses show that this preconditioner achieves satisfactory parallel efficiency and acceleration.

