Similar Literature
20 similar documents found.
1.
We give an upper-bound estimate for the error of kernel regularized regression with quadratic loss on deterministic scattered data over the unit spherical cap (unit sphere), converting the learning-error estimate into an error analysis of integrals of the kernel function, and characterize the learning rate via the equivalence of the K-functional and the modulus of smoothness from learning theory. The results show that the learning rate is governed by the mesh norm.
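To make the setting concrete, here is a minimal NumPy sketch of kernel regularized (kernel ridge) regression with quadratic loss on scattered spherical data. The Gaussian kernel, the random sample points, and the target function are illustrative assumptions, not taken from the paper; a deterministic scattered design would replace the random draw.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=0.5):
    # squared Euclidean distances between all pairs of rows
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2 * sigma ** 2))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # project points onto the unit sphere
y = X[:, 0] * X[:, 1]                           # a smooth target function on the sphere

lam = 1e-3                                      # regularization parameter
K = gaussian_kernel(X, X)
# kernel regularized regression: alpha = (K + lam*m*I)^{-1} y
alpha = np.linalg.solve(K + lam * len(X) * np.eye(len(X)), y)

def predict(X_new):
    return gaussian_kernel(X_new, X) @ alpha
```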

2.
Maximum correntropy regression is widely used in signal processing, and its convergence analysis is an active research topic in machine learning. This paper presents a new error-analysis framework that converts the non-convex optimization problem into a locally convex one, then applies convex-analysis methods to give a theoretical convergence analysis of maximum correntropy criterion regression (MCCR). The optimal regression function is expressed as the solution of an integral equation, the generalization error is written in terms of the K-functional and the best approximation in the reproducing kernel Hilbert space, and an upper bound on the learning rate is derived.
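The abstract's local convexification is a theoretical device; in practice MCCR is commonly fitted by half-quadratic (iteratively reweighted least squares) optimization, sketched below. The linear model, bandwidth sigma, and iteration count are assumptions for illustration, not the paper's construction.

```python
import numpy as np

def mcc_regression(X, y, sigma=1.0, iters=30):
    """Half-quadratic optimization for maximum correntropy regression:
    alternate Gaussian sample weights with weighted least squares."""
    n, d = X.shape
    Xb = np.hstack([X, np.ones((n, 1))])             # add an intercept column
    w = np.linalg.lstsq(Xb, y, rcond=None)[0]        # least-squares start
    for _ in range(iters):
        r = y - Xb @ w                               # residuals
        a = np.exp(-r ** 2 / (2 * sigma ** 2))       # correntropy-induced weights
        w = np.linalg.solve(Xb.T @ (Xb * a[:, None]) + 1e-10 * np.eye(d + 1),
                            Xb.T @ (a * y))          # weighted least squares
    return w
```

The Gaussian weights downweight large residuals, which is what makes the correntropy criterion robust to outliers compared with plain least squares.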

3.
李秉政  王国卯 《中国科学A辑》2008,38(9):1067-1080
We derive approximation orders for the least-squares regularized regression algorithm with polynomial kernels, aiming at the error analysis of the regression problem in learning theory. A regularization scheme is constructed that attains a rate close to the optimal approximation order; the order obtained depends on the dimension of the polynomial space and on the reproducing kernel Hilbert space with polynomial kernel. We also establish a direct theorem for approximation by Bernstein-Durrmeyer operators in the $L_{\rho_X}^2$ space under a Borel probability measure.

4.
Starting from bounded-difference conditions, we propose a bounded-difference stability framework for learning algorithms. Within this framework we study threshold-selection algorithms in machine learning, regularized learning algorithms in reproducing kernel Hilbert spaces, ranking algorithms, and bagging, and prove the bounded-difference stability of each. The results assert that all of these algorithms are bounded-difference stable, laying a theoretical foundation for their application.

5.
This paper studies learning rates for a broad class of regularized regression algorithms in reproducing kernel Hilbert spaces. In analyzing the sample error we use a weighted empirical process that keeps the variance and the penalty functional simultaneously controlled by a single threshold, thereby avoiding a cumbersome iteration procedure. The learning rates obtained are faster than earlier results in the literature.

6.
This paper studies approximate approximation on a closed interval by quasi-interpolation operators built from Gaussian kernels. Using function extension and an approximate partition of unity, we construct the quasi-interpolation operators and obtain approximation-order estimates in the uniform norm.
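Below is a minimal sketch of such a Gaussian quasi-interpolant on a uniform grid, assuming the standard approximate-partition-of-unity weights; the grid, bandwidth, and test function are illustrative choices, and the boundary handling via function extension from the paper is only noted in a comment.

```python
import numpy as np

def gaussian_quasi_interpolant(f_vals, nodes, sigma):
    """Q f(x) = sum_j f(x_j) * h/(sigma*sqrt(pi)) * exp(-(x - x_j)^2 / sigma^2);
    the weights form an approximate partition of unity on a uniform grid."""
    h = nodes[1] - nodes[0]
    def Q(x):
        x = np.atleast_1d(x)
        w = np.exp(-(x[:, None] - nodes[None, :]) ** 2 / sigma ** 2)
        return (w * h / (sigma * np.sqrt(np.pi))) @ f_vals
    return Q

nodes = np.linspace(0.0, 1.0, 101)
h = nodes[1] - nodes[0]
Q = gaussian_quasi_interpolant(np.sin(2 * np.pi * nodes), nodes, sigma=2 * h)
# accuracy deteriorates near the endpoints, which is why the paper extends the
# function beyond the interval before applying the operator
```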

7.
We study the generalization performance of the compressed least-squares regression learning algorithm. Using random projections, covering numbers, and probability inequalities, we obtain an upper bound on its generalization error. The results show that although compressed learning trades a larger approximation error for a smaller sample error, the increase is controllable. Moreover, compressed learning mitigates, to some extent, the overfitting that arises during learning.
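A minimal sketch of compressed least squares under an assumed Gaussian random projection: project the design matrix to a low dimension, then solve ordinary least squares in the compressed space. The dimensions and the sparse ground truth are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, k = 500, 2000, 50                     # samples, ambient dim, compressed dim
X = rng.normal(size=(n, d))
beta = np.zeros(d); beta[:10] = 1.0         # sparse ground-truth coefficients
y = X @ beta + 0.1 * rng.normal(size=n)

R = rng.normal(size=(d, k)) / np.sqrt(k)    # Gaussian random projection matrix
Z = X @ R                                   # compressed design
theta = np.linalg.lstsq(Z, y, rcond=None)[0]
y_hat = Z @ theta                           # fit of the compressed estimator
```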

8.
We study convergence-rate estimates for the Shannon sampling learning algorithm under l^p-coefficient regularization. Convexity inequalities for l^p spaces yield upper bounds on the sample error and the regularization error, and the approximation error is estimated in terms of a K-functional. The convergence rate of the K-functional is reduced to an approximation problem for translation networks, from which a learning rate stated in probability is obtained.
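For the p = 1 case, coefficient regularization over a Shannon (sinc) translation network can be sketched with a Lasso fit on sinc features; the integer sampling grid and the bandlimited target are assumptions for illustration, not the paper's setup.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
x = rng.uniform(-3.0, 3.0, size=200)
y = np.sinc(x) + 0.05 * rng.normal(size=200)   # noisy bandlimited target

nodes = np.arange(-6, 7).astype(float)         # integer sampling nodes
Phi = np.sinc(x[:, None] - nodes[None, :])     # Shannon sampling features
# Lasso = l^1-coefficient regularization over the translation network
model = Lasso(alpha=1e-3, fit_intercept=False).fit(Phi, y)
coef = model.coef_                             # concentrates near the node at 0
```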

9.
A support vector machine with a Gaussian RBF kernel is used to predict the spread series given by the cointegration relation between the dominant and sub-dominant cotton futures contracts; optimal SVM parameters are determined and suitable open/close position thresholds are chosen for calendar-spread arbitrage within the same commodity. Compared against a polynomial-kernel SVM, the Gaussian RBF SVM achieves clearly higher arbitrage returns at every open/close threshold.
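A hedged sketch of the prediction step with scikit-learn's RBF-kernel SVR follows; the spread series here is synthetic placeholder data (the study used cotton futures spreads), and the lag length, SVM parameters, and threshold are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(3)
spread = 0.1 * np.cumsum(rng.normal(size=300))   # placeholder cointegration spread

lag = 5                                           # predict spread from its last 5 values
X = np.lib.stride_tricks.sliding_window_view(spread[:-1], lag)
y = spread[lag:]

model = SVR(kernel="rbf", C=10.0, gamma=0.5, epsilon=0.01).fit(X, y)
pred = model.predict(X)

threshold = 0.2                                   # open/close threshold of the strategy
# short the spread when predicted above the threshold, long when below
signal = np.where(pred > threshold, -1, np.where(pred < -threshold, 1, 0))
```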

10.
马静  李星野  徐荣 《经济数学》2017,34(1):11-17
Using eight years of data (2008-2015), a Gaussian-kernel support vector machine is first used to build periodically rebalanced portfolios of Shanghai A-shares; comparisons with a BP neural network and a generalized regression neural network, via error plots and evaluation metrics, show that the SVM has the advantage in stock prediction. An improved genetic algorithm is then applied to the Shanghai market to construct optimal portfolios; benchmarked against the SSE Composite Index, the hybrid genetic-algorithm-optimized portfolio model proves more effective than any single model.

11.
Learning Rates of Least-Square Regularized Regression
This paper considers the regularized learning algorithm associated with the least-square loss and reproducing kernel Hilbert spaces. The target is the error analysis for the regression problem in learning theory. A novel regularization approach is presented, which yields satisfactory learning rates. The rates depend on the approximation property and on the capacity of the reproducing kernel Hilbert space measured by covering numbers. When the kernel is C^∞ and the regression function lies in the corresponding reproducing kernel Hilbert space, the rate is m^{-ζ} with ζ arbitrarily close to 1, regardless of the variance of the bounded probability distribution.

12.
In regularized kernel methods, the solution of a learning problem is found by minimizing a functional consisting of an empirical risk and a regularization term. In this paper, we study the existence of an optimal solution of multi-kernel regularization learning. First, we improve a previous conclusion on this problem given by Micchelli and Pontil, proving that an optimal solution exists whenever the kernel set is compact. Second, we consider the problem for Gaussian kernels with variance σ ∈ (0, ∞) and give conditions under which an optimal solution exists.

13.
Learning function relations or understanding structures of data lying in manifolds embedded in huge dimensional Euclidean spaces is an important topic in learning theory. In this paper we study the approximation and learning by Gaussians of functions defined on a d-dimensional connected compact C^∞ Riemannian submanifold of R^n which is isometrically embedded. We show that convolution with the Gaussian kernel with variance σ provides the uniform approximation order O(σ^s) when the approximated function is Lipschitz s ∈ (0, 1]. The uniform normal neighborhoods of a compact Riemannian manifold play a central role in deriving the approximation order. This approximation result is used to investigate the regression learning algorithm generated by the multi-kernel least square regularization scheme associated with Gaussian kernels with flexible variances. When the regression function is Lipschitz s, our learning rate is ((log^2 m)/m)^{s/(8s+4d)}, where m is the sample size. When the manifold dimension d is smaller than the dimension n of the underlying Euclidean space, this rate is much faster compared with those in the literature. By comparing approximation orders, we also show the essential difference between approximation schemes with flexible variances and those with a single variance. Supported partially by the Research Grants Council of Hong Kong [Project No. CityU 103405], City University of Hong Kong [Project No. 7001983], National Science Fund for Distinguished Young Scholars of China [Project No. 10529101], and National Basic Research Program of China [Project No. 973-2006CB303102].

14.
This paper studies the conditional quantile regression problem involving the pinball loss. We introduce a concept of τ-quantile of p-average logarithmic type q to complement the previous study by Steinwart and Christmann (2008, 2011) [1] and [2]. A new comparison theorem is provided which can be used for further error analysis of some learning algorithms.
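For concreteness, the pinball loss, and the fact that its empirical minimizer is the τ-quantile, can be sketched as follows; the sample and the choice τ = 0.9 are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def pinball_loss(y, f, tau):
    # asymmetric loss: residuals above f cost tau, residuals below cost 1 - tau
    r = y - f
    return np.where(r >= 0, tau * r, (tau - 1.0) * r)

y = np.random.default_rng(4).standard_normal(1000)
tau = 0.9
res = minimize_scalar(lambda q: pinball_loss(y, q, tau).mean())
# res.x is close to np.quantile(y, tau): the tau-quantile minimizes
# the expected pinball loss
```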

15.
In this paper, we give several results on learning errors for linear programming support vector regression. The corresponding theorems are proved in the reproducing kernel Hilbert space, whose approximation property and capacity are measured by covering numbers. The first result (Theorem 2.1) shows that the learning error can be controlled by the sample error and the regularization error, where the sample error combines the errors of the learned regression function and of the regularizing function in the reproducing kernel Hilbert space. After estimating the generalization error of the learned regression function (Theorem 2.2), an upper bound (Theorem 2.3) for the regularized learning algorithm associated with linear programming support vector regression is derived.
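One common LP-SVR formulation (minimize the l^1 norm of the kernel-expansion coefficients plus slack penalties under ε-tube constraints) can be written directly as a linear program; the sketch below assumes that formulation, which may differ in details from the paper's.

```python
import numpy as np
from scipy.optimize import linprog

def lp_svr(K, y, C=1.0, eps=0.1):
    """LP support vector regression: minimize ||c||_1 + C * sum(xi + xi*)
    subject to eps-insensitive tube constraints, as a linear program.
    Variables x = [u, v, b+, b-, xi, xi*] with c = u - v, b = b+ - b-, all >= 0."""
    m = len(y)
    I, Z, one = np.eye(m), np.zeros((m, m)), np.ones((m, 1))
    cost = np.concatenate([np.ones(2 * m), [0.0, 0.0], C * np.ones(2 * m)])
    # y - K c - b <= eps + xi      and      K c + b - y <= eps + xi*
    A_ub = np.vstack([np.hstack([-K, K, -one, one, -I, Z]),
                      np.hstack([K, -K, one, -one, Z, -I])])
    b_ub = np.concatenate([eps - y, eps + y])
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
    x = res.x
    return x[:m] - x[m:2 * m], x[2 * m] - x[2 * m + 1]   # coefficients c, offset b
```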

16.
A Fast Algorithm for Multilayer Neural Networks
This paper analyzes in depth the single-parameter dynamic search (SPDS) algorithm for feedforward multilayer neural networks proposed in [4], and gives two schemes for fast one-dimensional search, thereby achieving still faster training of multilayer networks.
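The SPDS algorithm itself is not reproduced here; the following sketch only illustrates the generic idea behind such schemes, replacing a fixed learning rate with a fast one-dimensional search over the step size. The bounded search interval and toy quadratic loss are assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def line_search_step(loss, params, grad, t_max=1.0):
    # choose the step size along the negative gradient by a 1-D search
    res = minimize_scalar(lambda t: loss(params - t * grad),
                          bounds=(0.0, t_max), method="bounded")
    return params - res.x * grad

# toy quadratic loss to exercise the step
A = np.diag([1.0, 10.0])
loss = lambda w: 0.5 * w @ A @ w
grad = lambda w: A @ w
w = np.array([1.0, 1.0])
for _ in range(5):
    w = line_search_step(loss, w, grad(w))
```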

17.
A Generalization of Methods for Determining the Value of K in Ridge Regression
We present a new method for stepwise improvement of the ridge parameter k. By adjusting the ridge parameter, the method further reduces the mean squared error of the ridge estimator and improves the results of Hoerl and Kennard.

18.
Generalizing earlier methods for determining the ridge parameter, we give a new method for stepwise improvement of the ridge parameter κ. By adjusting the parameter, the method further reduces the mean squared error of the ridge estimator and improves the results of Hoerl and Kennard.
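Both of the preceding abstracts build on the Hoerl-Kennard estimate k = p·σ̂²/‖β̂‖². A minimal sketch of a stepwise scheme in that spirit, re-evaluating k at each new ridge estimate, follows; this is a classical iterative variant, not necessarily the papers' exact update rule.

```python
import numpy as np

def iterative_ridge_k(X, y, iters=20):
    """Hoerl-Kennard-style ridge parameter refined stepwise:
    k = p * sigma^2 / ||beta||^2, recomputed at each new ridge estimate."""
    n, p = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]        # OLS start
    sigma2 = np.sum((y - X @ beta) ** 2) / (n - p)     # residual variance estimate
    k = p * sigma2 / (beta @ beta)                     # Hoerl-Kennard starting value
    for _ in range(iters):
        beta = np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)
        k = p * sigma2 / (beta @ beta)                 # stepwise update of k
    return k, beta
```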

19.
An Improved Learning Algorithm for Associative Memory Systems
Using Newton's backward interpolation formula, we improve the NFI-AMS learning algorithm of [1]. The improved associative memory system not only retains the fast convergence and high learning accuracy of the original algorithm, but also gains the local generalization ability of CMAC-AMS and a greatly enhanced ability to collect information from neighboring cells. Numerical simulations show that the improved NFI-AMS has great application potential in signal processing, pattern recognition, and high-precision real-time intelligent control.
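The NFI-AMS details are paper-specific, but the Newton backward-difference interpolation formula it builds on is standard; here is a minimal sketch on equally spaced nodes.

```python
import numpy as np

def newton_backward(x_nodes, y_nodes, x):
    """Newton backward-difference interpolation on equally spaced nodes:
    P(x) = y_n + s*del(y_n) + s(s+1)/2! * del^2(y_n) + ..., s = (x - x_n)/h."""
    h = x_nodes[1] - x_nodes[0]
    n = len(x_nodes)
    diffs = [np.asarray(y_nodes, dtype=float)]
    for _ in range(1, n):
        diffs.append(np.diff(diffs[-1]))   # backward-difference table
    s = (x - x_nodes[-1]) / h
    result, term = diffs[0][-1], 1.0
    for k in range(1, n):
        term *= (s + k - 1) / k
        result += term * diffs[k][-1]      # use differences anchored at the last node
    return result

# example: interpolating x^2 from three nodes reproduces it exactly
print(newton_backward([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 1.5))  # 2.25
```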

20.
Analysis of Support Vector Machines Regression
Support vector machines regression (SVMR) is a regularized learning algorithm in reproducing kernel Hilbert spaces with a loss function called the ε-insensitive loss. Compared with the well-understood least-squares regression, the study of SVMR is not satisfactory, especially as regards quantitative estimates of the convergence of the algorithm. This paper provides an error analysis for SVMR and introduces some recently developed methods for the analysis of classification algorithms, such as the projection operator and the iteration technique. The main result is an explicit learning rate for the SVMR algorithm under some assumptions. Research supported by NNSF of China No. 10471002, No. 10571010 and RFDP of China No. 20060001010.
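The ε-insensitive loss at the heart of SVMR is simple to state in code; the tube width eps = 0.1 is an illustrative default.

```python
import numpy as np

def eps_insensitive_loss(y, f, eps=0.1):
    # zero inside the tube |y - f| <= eps, linear outside; contrast with the
    # quadratic penalty (y - f)**2 of least-squares regression
    return np.maximum(np.abs(y - f) - eps, 0.0)
```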
