Similar Documents
 20 similar documents retrieved (search time: 296 ms)
1.
To compare support vector regression (SVR) and kernel ridge regression (KRR) for predicting blood glucose values, this paper carries out an empirical analysis on data from an AI-assisted study of genetic risk for diabetes. The data are first preprocessed and imported into Python. Next, to keep the comparison between SVR and KRR objective, three representative kernel methods are used (the linear, radial basis function, and sigmoid kernels). Optimal SVR and KRR models are then built on the training set with grid search for automatic hyperparameter tuning and used to predict blood glucose values. Finally, the two methods are compared on the test set in terms of mean squared error (MSE), fitting time, and related metrics. The results show that both MSEs are below 0.006; the MSE of KRR is 0.0002 lower than that of SVR, so KRR is the more accurate predictor, while SVR predicts 0.803 seconds faster than KRR and is therefore the more efficient one.
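
A minimal sketch of this kind of comparison in scikit-learn, assuming a preprocessed feature matrix and a blood-glucose target; the synthetic data, parameter grids, and train/test split below are illustrative placeholders, not values from the paper:

```python
import time
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVR
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics import mean_squared_error

# Illustrative data standing in for the preprocessed diabetes data set.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = 0.3 * X[:, 0] + rng.normal(scale=0.05, size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Grid-search each model over the three kernels named in the abstract.
kernel_grid = {"kernel": ["linear", "rbf", "sigmoid"], "gamma": [0.01, 0.1, 1.0]}
for name, est, grid in [
    ("SVR", SVR(), {**kernel_grid, "C": [0.1, 1, 10]}),
    ("KRR", KernelRidge(), {**kernel_grid, "alpha": [0.01, 0.1, 1.0]}),
]:
    search = GridSearchCV(est, grid, scoring="neg_mean_squared_error", cv=5)
    search.fit(X_tr, y_tr)
    start = time.time()
    pred = search.best_estimator_.predict(X_te)   # timed prediction on the test set
    print(name, mean_squared_error(y_te, pred), f"{time.time() - start:.3f}s")
```

GridSearchCV handles the automatic tuning over the kernels, and timing the prediction on the held-out set mirrors the accuracy-versus-efficiency comparison described in the abstract.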

2.
Biased regression is an alternative to ordinary least squares (OLS) regression, especially when explanatory variables are highly correlated. In this paper, we examine the geometrical structure of the shrinkage factors of biased estimators. We show that, in most cases, shrinkage factors cannot belong to [0,1] in all directions. We also compare the shrinkage factors of ridge regression (RR), principal component regression (PCR) and partial least-squares regression (PLSR) in the orthogonal directions obtained by the signal-to-noise ratio (SNR) algorithm. In these directions, we find that PLSR and RR behave well, whereas shrinkage factors of PCR have an erratic behaviour.
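
As background on what a "shrinkage factor" means here (the standard principal-component decomposition, not the SNR directions studied in the paper): writing the singular value decomposition $X = UDV^{\top}$ with singular values $d_j$ and left singular vectors $u_j$, OLS and ridge regression fit
$$\hat{y}_{\mathrm{OLS}} = \sum_j u_j\,(u_j^{\top}y), \qquad \hat{y}_{\mathrm{RR}} = \sum_j \frac{d_j^{2}}{d_j^{2}+\lambda}\,u_j\,(u_j^{\top}y),$$
so the ridge shrinkage factor along $u_j$ is $d_j^{2}/(d_j^{2}+\lambda)\in[0,1]$, PCR uses factors in $\{0,1\}$ (a component is kept or dropped), and the PLSR factors depend on $y$ and need not lie in $[0,1]$.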

3.
The estimation of the regression parameters for the ill-conditioned logistic regression model is considered in this paper. We propose five ridge regression (RR) estimators, namely the unrestricted RR, restricted RR, preliminary test RR, shrinkage RR and positive rule RR estimators, for estimating the parameter $\beta$ when it is suspected that $\beta$ may belong to a linear subspace defined by $H\beta = h$. Asymptotic properties of the estimators are studied with respect to quadratic risks. The performances of the proposed estimators are compared based on the quadratic bias and risk functions under both null and alternative hypotheses, which specify certain restrictions on the regression parameters. The conditions of superiority of the proposed estimators with respect to the departure and ridge parameters are given. Some graphical representations and an efficiency analysis are presented which support the findings of the paper.

4.
In this paper, we study regression problems over a separable Hilbert space with the square loss, covering non-parametric regression over a reproducing kernel Hilbert space. We investigate a class of spectral/regularized algorithms, including ridge regression, principal component regression, and gradient methods. We prove optimal, high-probability convergence results in terms of variants of norms for the studied algorithms, considering a capacity assumption on the hypothesis space and a general source condition on the target function. Consequently, we obtain almost sure convergence results with optimal rates. Our results improve and generalize previous results, filling a theoretical gap for the non-attainable cases.

5.
In this paper, parametric regression analyses including both linear and nonlinear regressions are investigated in the case of imprecise and uncertain data, represented by a fuzzy belief function. The parameters in both the linear and nonlinear regression models are estimated using the fuzzy evidential EM algorithm, a straightforward fuzzy version of the evidential EM algorithm. The nonlinear regression model is derived by introducing a kernel function into the proposed linear regression model. An unreliable sensor experiment is designed to evaluate the performance of the proposed linear and nonlinear parametric regression methods, called parametric evidential regression (PEVREG) models. The experimental results demonstrate the high prediction accuracy of the PEVREG models in regressions with crisp inputs and a fuzzy belief function as output.

6.
We consider kernel density and regression estimation for a wide class of nonlinear time series models. Asymptotic normality and uniform rates of convergence of kernel estimators are established under mild regularity conditions. Our theory is developed under the new framework of predictive dependence measures which are directly based on the data-generating mechanisms of the underlying processes. The imposed conditions are different from the classical strong mixing conditions and they are related to the sensitivity measure in the prediction theory of nonlinear time series.

7.
In this paper we discuss an approach to the modeling of acoustic systems that combines prior information, exploited through physical modeling, and nonlinear dynamics reconstruction, exploited through support vector machine regression. We demonstrate our approach on two case studies, both addressing the broad class of acoustic systems for which the sound generation is obtained through the interaction of a linear system (resonator) and a nonlinear system (excitation). The first case is a physically based impact model, where the resonator is described in terms of its normal modes and the nonlinear contact force is modeled through a simplified collision equation and kernel regression. In the second case study, a model of voice phonation is presented in which the vocal folds are represented by a lumped linear mass-spring system and the nonlinear flow component is modeled through simple Bernoulli-based equations and kernel regression.

8.
Let observations come from an infinite-order autoregressive (AR) process. For predicting the future of the observed time series (referred to as the same-realization prediction), we use the least-squares predictor obtained by fitting a finite-order AR model. We also allow the order to become infinite as the number of observations does in order to obtain a better approximation. Moment bounds for the inverse sample covariance matrix with an increasing dimension are established under various conditions. We then apply these results to obtain an asymptotic expression for the mean-squared prediction error of the least-squares predictor in same-realization and increasing-order settings. The second-order term of this expression is the sum of two terms which measure both the goodness of fit and model complexity. It forms the foundation for a companion paper by Ing and Wei (Order selection for same-realization predictions in autoregressive processes, Technical report C-00-09, Institute of Statistical Science, Academia Sinica, Taipei, Taiwan, ROC, 2000) which provides the first theoretical verification that AIC is asymptotically efficient for same-realization predictions. Finally, some comparisons between the least-squares predictor and the ridge regression predictor are also given.
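
A minimal sketch of the finite-order least-squares predictor described here, assuming a centered (zero-mean) series; the example series and the order p = 5 are illustrative, and in practice the order would be chosen by a criterion such as AIC:

```python
import numpy as np

def ar_lstsq_predict(x, p):
    """Fit an AR(p) model to the centered series x by least squares and
    return the one-step-ahead prediction from the last p observations."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Row t of the design matrix holds the lagged values (x[t-1], ..., x[t-p]).
    X = np.column_stack([x[p - j - 1:n - j - 1] for j in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef @ x[-1:-p - 1:-1]   # predict the next value from x[n-1], ..., x[n-p]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    series = np.sin(0.3 * np.arange(200)) + 0.1 * rng.normal(size=200)
    print(ar_lstsq_predict(series, p=5))
```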

9.
Almost unbiased ridge estimation for semiparametric regression models (total citations: 2; self-citations: 0; citations by others: 2)
胡宏昌, 《系统科学与数学》, 2009, 29(12): 1605-1612
An almost unbiased ridge estimator for the semiparametric regression model is proposed and compared with the ridge estimator; in the mean-squared-error sense, the almost unbiased ridge estimator outperforms the ridge estimator. The selection of the biasing parameter is then discussed. Finally, simulation examples and a real application illustrate the effectiveness and feasibility of the almost unbiased ridge estimator.
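
For reference, in the ordinary linear model the ridge and almost unbiased ridge estimators take the standard forms from the ridge literature (a background sketch; the paper adapts these to the semiparametric setting):
$$\hat{\beta}_{R}(k) = (X^{\top}X + kI)^{-1}X^{\top}y, \qquad \hat{\beta}_{AUR}(k) = \bigl[I - k^{2}(X^{\top}X + kI)^{-2}\bigr]\hat{\beta}_{LS},$$
where $k > 0$ is the biasing parameter whose selection the abstract refers to; $\hat{\beta}_{AUR}$ removes the leading-order bias of $\hat{\beta}_{R}$ at the cost of a somewhat larger variance.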

10.
Sliced inverse regression (SIR) is an important method for reducing the dimensionality of input variables. Its goal is to estimate the effective dimension reduction directions. In classification settings, SIR is closely related to Fisher discriminant analysis. Motivated by reproducing kernel theory, we propose a notion of nonlinear effective dimension reduction and develop a nonlinear extension of SIR called kernel SIR (KSIR). Both SIR and KSIR are based on principal component analysis. Alternatively, based on principal coordinate analysis, we propose the dual versions of SIR and KSIR, which we refer to as sliced coordinate analysis (SCA) and kernel sliced coordinate analysis (KSCA), respectively. In the classification setting, we also call them discriminant coordinate analysis and kernel discriminant coordinate analysis. The computational complexities of SIR and KSIR rely on the dimensionality of the input vector and the number of input vectors, respectively, while those of SCA and KSCA both rely on the number of slices in the output. Thus, SCA and KSCA are very efficient dimension reduction methods.

11.
In real-time traffic information prediction, prediction accuracy and prediction-time efficiency are a persistent and hard-to-resolve trade-off. This work focuses on how to improve prediction-time efficiency. Building on the accurate online support vector regression (AOSVR) algorithm, a cloud-model-based simplified computation of the sigmoid kernel function is proposed, and an improved AOSVR model for real-time traffic information prediction is established. Applied to actual real-time traffic-flow prediction, the results show that, owing to the simplified computation, the prediction efficiency of the AOSVR model is improved significantly at the cost of only a small loss in regression accuracy.

12.
Kernel logistic regression (KLR) is a powerful nonlinear classifier. The combination of KLR and the truncated-regularized iteratively re-weighted least-squares (TR-IRLS) algorithm has led to a powerful classification method for small-to-medium-sized data sets. This method is called truncated-regularized kernel logistic regression (TR-KLR). Compared with support vector machines (SVM) and TR-IRLS on twelve publicly available benchmark data sets, the proposed TR-KLR algorithm is as accurate as, and much faster than, SVM, and more accurate than TR-IRLS. The TR-KLR algorithm also has the advantage of providing direct prediction probabilities.
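
A minimal sketch of kernel logistic regression fitted by ridge-penalized IRLS (a plain Newton/IRLS loop, not the truncated TR-IRLS scheme of the paper; the RBF kernel, regularization value, and iteration count are illustrative):

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def fit_klr(X, y, lam=1e-2, gamma=1.0, n_iter=25):
    """Kernel logistic regression via ridge-penalized IRLS.

    y is a 0/1 label vector; returns the dual coefficients alpha of the
    decision function f(x) = sum_i alpha_i k(x, x_i)."""
    K = rbf_kernel(X, X, gamma)
    n = len(y)
    alpha = np.zeros(n)
    for _ in range(n_iter):
        f = K @ alpha                                    # current decision values
        p = 1.0 / (1.0 + np.exp(-np.clip(f, -30, 30)))   # probabilities (clipped for stability)
        W = p * (1.0 - p)                                # IRLS weights
        # Newton/IRLS step: solve (diag(W) K + lam I) alpha_new = W*f + y - p
        A = W[:, None] * K + lam * np.eye(n)
        alpha = np.linalg.solve(A, W * f + y - p)
    return alpha

def predict_proba(X_train, alpha, X_new, gamma=1.0):
    """Class-1 probabilities for new points."""
    return 1.0 / (1.0 + np.exp(-rbf_kernel(X_new, X_train, gamma) @ alpha))
```

A truncated-regularized variant would additionally approximate the linear solve in each iteration (for example with a small number of conjugate-gradient steps) rather than solving it exactly.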

13.
Owing to the importance of differential equations in physics, the existence of solutions of differential equations has received much attention. In this paper, the existence of a solution is established for a nonlinear second-order two-point boundary value problem in a reproducing kernel space. Under certain assumptions on the right-hand side, we give a constructive proof of the existence result, and a method is presented to obtain the exact solution expressed in the form of a series. This paper is an extension of a previous paper [Wei Jiang, Minggen Cui, The exact solution and stability analysis for integral equation of third or first kind with singular kernel, Appl. Math. Comput. 202 (2) (2008) 666-674], extending a method for solving linear problems to the present method for solving nonlinear problems.

14.
We consider a doubly nonlinear Volterra equation involving a nonsmooth kernel and two possibly degenerate monotone operators. By exploiting an implicit time-discretization procedure, we obtain the existence of a global strong solution and extend to the nonlocal in time situation some former results by Colli [P. Colli, On some doubly nonlinear evolution equations in Banach spaces, Japan J. Indust. Appl. Math. 9 (2) (1992) 181-203].

15.
Selecting important features in nonlinear kernel spaces is a difficult challenge in both classification and regression problems. This article proposes to achieve feature selection by optimizing a simple criterion: a feature-regularized loss function. Features within the kernel are weighted, and a lasso penalty is placed on these weights to encourage sparsity. This feature-regularized loss function is minimized by estimating the weights in conjunction with the coefficients of the original classification or regression problem, thereby automatically procuring a subset of important features. The algorithm, KerNel Iterative Feature Extraction (KNIFE), is applicable to a wide variety of kernels and high-dimensional kernel problems. In addition, a modification of KNIFE gives a computationally attractive method for graphically depicting nonlinear relationships between features by estimating their feature weights over a range of regularization parameters. The utility of KNIFE in selecting features is demonstrated through simulations and examples for both kernel regression and support vector machines. Feature path realizations also give graphical representations of important features and the nonlinear relationships among variables. Supplementary materials with computer code and an appendix on convergence analysis are available online.
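
To make the "feature-regularized loss" concrete, one illustrative form of such a criterion (a generic sketch, not necessarily the exact objective used by KNIFE) weights each feature inside a Gaussian kernel and places an $\ell_1$ penalty on the nonnegative weights $w$:
$$\min_{\alpha,\;w\ge 0}\;\sum_{i=1}^{n} L\!\Big(y_i,\;\sum_{j=1}^{n}\alpha_j K_w(x_i,x_j)\Big) + \lambda_1\,\alpha^{\top}K_w\alpha + \lambda_2\sum_{k=1}^{p} w_k, \qquad K_w(x,x') = \exp\!\Big(-\sum_{k=1}^{p} w_k (x_k - x'_k)^2\Big),$$
so a zero weight $w_k$ removes feature $k$ from the fitted kernel entirely.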

16.
A general Bayesian approach for stochastic versions of deterministic growth models is presented to provide predictions for crack propagation in an early stage of the growth process. To improve the prediction, information from other crack growth processes is used in a hierarchical (mixed-effects) model. Two stochastic versions of a deterministic growth model are compared. One is a nonlinear regression setup where the trajectory is assumed to be the solution of an ordinary differential equation with additive errors. The other is a diffusion model defined by a stochastic differential equation where increments have additive errors. While Bayesian prediction is known for hierarchical models based on nonlinear regression, we propose a new Bayesian prediction method for hierarchical diffusion models. Six growth models for each of the two approaches are compared with respect to their ability to predict the crack propagation in a large data example. Surprisingly, the stochastic differential equation approach has no advantage concerning the prediction compared with the nonlinear regression setup, although the diffusion model seems more appropriate for crack growth.

17.
This article derives characterizations and computational algorithms for continuous general gradient descent trajectories in high-dimensional parameter spaces for statistical model selection, prediction, and classification. Examples include proportional gradient shrinkage as an extension of LASSO and LARS, threshold gradient descent with right-continuous variable selectors, threshold ridge regression, and many more with proper combinations of variable selectors and functional forms of a kernel. In all these problems, general gradient descent trajectories are continuous piecewise analytic vector-valued curves as solutions to matrix differential equations. We show the monotonicity and convergence of the proposed algorithms in the loss or negative likelihood functions. We prove that approximations of continuous solutions via infinite series expansions are computationally more efficient and accurate compared with discretization methods. We demonstrate the applicability of our algorithms through numerical experiments with real and simulated datasets.

18.
We consider the prediction of a scalar variable based on both a function-valued variable and a finite number of real-valued variables. For the estimation of the regression parameters, which include the infinite-dimensional function as well as the slope parameters for the real-valued variables, it is inevitable to impose some kind of regularization. We consider two different approaches, which are shown to achieve the same convergence rate of the mean squared prediction error under their respective assumptions. One is based on functional principal components regression (FPCR) and the alternative is functional ridge regression (FRR) based on Tikhonov regularization. Numerical studies are also carried out on simulated data and a real data set.

19.
In support vector machine (SVM) predictive modeling, the kernel function maps a nonlinear problem in a low-dimensional feature space to a linear problem in a high-dimensional feature space, and the characteristics of the kernel strongly influence both learning and prediction. Considering the fitting and generalization properties of two typical kernels, a global kernel (the polynomial kernel) and a local kernel (the RBF kernel), an SVM method based on a mixed kernel function is adopted for predictive modeling. To evaluate the modeling performance of different kernels and obtain better prediction, a genetic algorithm is used to adaptively evolve the parameters of the SVM model, which is then applied to the practical problem of equipment cost prediction. Computations on real data show that the SVM with the mixed kernel predicts better than an SVM with a single kernel, so the approach can be promoted as an effective predictive modeling method in equipment management.
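
A minimal sketch of such a mixed kernel with scikit-learn's SVR, using a convex combination of a polynomial and an RBF kernel; the data, mixing weight, and other parameter values are illustrative placeholders (the paper tunes the model parameters with a genetic algorithm):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import polynomial_kernel, rbf_kernel

def make_mixed_kernel(weight=0.5, degree=2, gamma=0.1):
    """Convex combination of a global (polynomial) and a local (RBF) kernel."""
    def mixed(X, Z):
        return (weight * polynomial_kernel(X, Z, degree=degree)
                + (1.0 - weight) * rbf_kernel(X, Z, gamma=gamma))
    return mixed

# Illustrative data standing in for the equipment cost data set.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

# Fixed parameter values here; in the paper they are evolved by a genetic algorithm.
model = SVR(kernel=make_mixed_kernel(weight=0.6, degree=2, gamma=0.2), C=10.0)
model.fit(X, y)
print(model.predict(X[:5]))
```

Because a convex combination of positive semidefinite kernels is itself positive semidefinite, the mixed kernel is a valid kernel for any weight in [0, 1].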

20.
Wavelet methods are used to estimate density and (auto-) regression functions that are possibly discontinuous. For stationary time series that satisfy appropriate mixing conditions, we derive mean integrated squared errors (MISEs) of wavelet-based estimators. In contrast to the case for kernel methods, the MISEs of wavelet-based estimators are not affected by the presence of discontinuities in the curves. Applications of this approach to problems of identification of nonlinear time series models are discussed.
