Similar Articles
19 similar articles found (search time: 109 ms)
1.
Statistical learning theory based on random samples over probability spaces is widely regarded as the best available theory for small-sample learning problems, but it cannot handle learning from noise-corrupted random samples over non-probability spaces. To address this, the paper introduces definitions of the empirical risk functional, the expected risk functional, and strict consistency of the empirical risk minimization (ERM) principle for noise-corrupted samples over chance spaces, and formulates and proves the key theorem of learning theory for noise-corrupted samples over chance spaces.
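For orientation, in Vapnik's classical i.i.d. setting the two functionals named above take the following standard form (a textbook formulation, quoted for reference; the chance-space versions in the paper replace the probability measure with a chance measure):

\[ R(\alpha) = \int L\big(y, f(x, \alpha)\big) \, dP(x, y), \qquad R_{\mathrm{emp}}(\alpha) = \frac{1}{\ell} \sum_{i=1}^{\ell} L\big(y_i, f(x_i, \alpha)\big), \]

and the ERM principle is strictly consistent if, for every c such that \Lambda(c) = \{\alpha : R(\alpha) \ge c\} is nonempty,

\[ \inf_{\alpha \in \Lambda(c)} R_{\mathrm{emp}}(\alpha) \xrightarrow{\ P\ } \inf_{\alpha \in \Lambda(c)} R(\alpha) \quad (\ell \to \infty). \]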

2.
This paper introduces the basic content of fuzzy rough theory; proposes the concepts of the fuzzy rough empirical risk functional, the fuzzy rough expected risk functional, and the fuzzy rough empirical risk minimization principle; and finally proves the key theorem of statistical learning theory based on fuzzy rough samples and constructs bounds on the rate of uniform convergence of the learning process.

3.
This paper gives definitions and properties of the distribution function and the expectation of fuzzy random sets under quasi-probability; proves the Chebyshev inequality, the Hoeffding inequality, and a strong law of large numbers for fuzzy random sets under quasi-probability; introduces definitions of the empirical risk functional, the expected risk functional, and strict consistency of the empirical risk minimization principle based on quasi-probability and fuzzy samples; and proves the key theorem of learning theory based on quasi-probability and fuzzy samples.
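For comparison, the classical Hoeffding inequality that the quasi-probability version generalizes states (a standard fact, quoted for reference): for independent random variables X_i with a_i \le X_i \le b_i and any \varepsilon > 0,

\[ P\left( \left| \frac{1}{n} \sum_{i=1}^{n} \big( X_i - E[X_i] \big) \right| \ge \varepsilon \right) \le 2 \exp\left( - \frac{2 n^2 \varepsilon^2}{\sum_{i=1}^{n} (b_i - a_i)^2} \right). \]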

4.
Theoretical foundations of statistical learning theory based on birough (double rough) samples
This paper introduces the basic content of birough theory; proposes the concepts of the birough empirical risk functional, the birough expected risk functional, and the birough empirical risk minimization principle; and finally proves the key theorem of statistical learning theory based on birough samples and discusses bounds on the rate of uniform convergence of the learning process. This lays a theoretical foundation for systematically establishing statistical learning theory on uncertain samples and for constructing the corresponding support vector machines.

5.
The key theorem is an important component of statistical learning theory, but research on it has concentrated on real-valued random samples and assumes the samples are unaffected by noise. In view of this, the paper proposes a definition of the fuzzy random empirical risk minimization principle for noise-corrupted samples, and states and proves the key theorem of learning theory for noise-corrupted fuzzy random samples.

6.
Bounds on the rate of uniform convergence of the learning process determine the generalization ability of a learning machine and play an important role in statistical learning theory. Building on Baoding Liu's concept of fuzzy random variables and on the key theorem of learning theory for fuzzy random samples, this paper derives bounds on the rate of uniform convergence of the learning process for fuzzy random samples and relates these bounds to the capacity of the function set.
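One commonly quoted bound of this kind in the classical i.i.d. case, given here for reference (for indicator losses, with h the VC dimension of the function set as the capacity measure), is: with probability at least 1 - \eta, simultaneously for all functions in the set,

\[ R(\alpha) \le R_{\mathrm{emp}}(\alpha) + \sqrt{ \frac{ h \left( \ln(2\ell/h) + 1 \right) - \ln(\eta/4) }{ \ell } }, \]

so the gap between expected and empirical risk shrinks as the sample size \ell grows relative to the capacity h.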

7.
Fuzzy random variables and their variational principle
Starting from the probability of fuzzy events, this paper defines peak-shaped fuzzy random variables and the expectation of their true values. The fuzzy randomness of the parameters is introduced directly into the total potential energy functional, and a fuzzy random variational principle is established via the small-parameter perturbation method; its applications are also illustrated.

8.
Liu Jiahe, Jin Xiu, Yuan Ying. 《运筹与管理》 (Operations Research and Management Science), 2016, 25(1): 166-174
Considering that investors face the dual uncertainty of randomness and fuzziness in securities markets, security returns are treated as random fuzzy variables. Investors' risk attitudes are modeled under prospect theory, and different random fuzzy return rates, membership functions of expected return, and objective weights are established to construct a random fuzzy portfolio model that accounts for investors' risk attitudes. An empirical study divides the market into a falling stage and a rising stage and examines the portfolio differences and model performance for investors with different risk attitudes. The results show that an investor's risk attitude affects the structure of the portfolio, and that the random fuzzy portfolio model accounting for risk attitudes can satisfy the differing return and risk requirements of investors with different risk attitudes and is feasible in practical investment decision-making.

9.
Vapnik, Cucker, and Smale have proved that, as the number of samples tends to infinity, the empirical risk of a learning machine based on independent and identically distributed (i.i.d.) sequences converges uniformly to its expected risk. This paper extends these i.i.d.-based results to α-mixing sequences and, by applying the Markov inequality, obtains bounds on the rate of uniform convergence for learning machines based on α-mixing sequences.
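The Markov inequality invoked here is the elementary bound for a nonnegative random variable X and t > 0:

\[ P(X \ge t) \le \frac{E[X]}{t}; \]

applied to a suitable moment of the supremum of the deviation between empirical and expected risk, it turns moment estimates available under α-mixing into explicit convergence-rate bounds.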

11.
Many learning problems are described by a risk functional that is in turn defined by a loss function, and a straightforward and widely known approach to such problems is to minimize a (modified) empirical version of this risk functional. In many cases, however, this approach suffers from substantial problems, such as computational requirements in classification or robustness concerns in regression. To resolve these issues, many successful learning algorithms instead minimize a (modified) empirical risk of a surrogate loss function. Of course, such a surrogate loss must be "reasonably related" to the original loss function, since otherwise this approach cannot work well. For classification, good surrogate loss functions have been identified, and the relationship between the excess classification risk and the excess risk of these surrogate loss functions has been described exactly. Beyond classification, however, little is known about good surrogate loss functions. In this work we establish a general theory that provides powerful tools for comparing excess risks of different loss functions. We then apply this theory to several learning problems, including (cost-sensitive) classification, regression, density estimation, and density level detection.
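A well-known concrete instance of such an excess-risk comparison (due to Zhang, and not specific to this paper) is the hinge-loss inequality for binary classification: writing R for the 0-1 risk, R_{\mathrm{hinge}} for the hinge-loss risk, and stars for the corresponding minimal (Bayes) risks,

\[ R(f) - R^{*} \le R_{\mathrm{hinge}}(f) - R_{\mathrm{hinge}}^{*}, \]

so driving the excess surrogate risk to zero drives the excess classification risk to zero.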

12.
Regularized empirical risk minimization, including support vector machines, plays an important role in machine learning theory. In this paper, kernel-based regularized pairwise learning (RPL) methods are investigated. One example is regularized minimization of the error entropy loss, which has recently attracted considerable interest from the viewpoint of consistency and learning rates. This paper shows that such RPL methods, and also their empirical bootstrap, additionally have good statistical robustness properties if the loss function and the kernel are chosen appropriately. Two cases of particular interest are treated: (i) a bounded, non-convex loss function and (ii) an unbounded convex loss function satisfying a certain Lipschitz-type condition.
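A generic form of the RPL objective treated in this line of work is the following sketch (notation assumed, not taken from the paper): given samples (x_i, y_i), a pairwise loss L, and a reproducing kernel Hilbert space H with regularization parameter \lambda > 0,

\[ f_{\lambda} = \arg\min_{f \in H} \ \frac{1}{n^2} \sum_{i=1}^{n} \sum_{j=1}^{n} L\big( y_i, y_j, f(x_i), f(x_j) \big) + \lambda \, \|f\|_{H}^{2}, \]

where the double sum over pairs of samples is what distinguishes pairwise learning from ordinary regularized empirical risk minimization.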

13.
Popkov, Yu. S. Doklady Mathematics, 2018, 98(3): 646-647
A new method for entropy-randomized machine learning is proposed based on empirical risk minimization instead of the exact fulfillment of empirical balance conditions. The...

14.

The author investigates the almost sure behaviour of the increments of the partially observed, uniform empirical process. Some functional laws of the iterated logarithm are obtained for this process. As an application, new laws of the iterated logarithm are established for kernel density estimators.
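The classical benchmark that such results refine is the Smirnov-Chung law of the iterated logarithm for the uniform empirical distribution function F_n (quoted for context): almost surely,

\[ \limsup_{n \to \infty} \sqrt{ \frac{n}{2 \ln \ln n} } \, \sup_{0 \le t \le 1} \left| F_n(t) - t \right| = \frac{1}{2}. \]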



15.
Cognitive technologies have been described in the literature as reorganisers of thinking processes, especially where problem solving is concerned. This paper aims to analyse the possible use of Cabri-Géomètre as a cognitive tool in the elaboration of mathematical justifications in the context of problem-based mathematics. Some empirical examples are given to illustrate the significance of the specific learning situation. The complexity of learning environments incorporating computer-based activities is stressed as a condition for them to be effective in the introduction of the idea of mathematical justification and its evolution towards a sense of proving.

16.
In regularized kernel methods, the solution of a learning problem is found by minimizing a functional consisting of an empirical risk term and a regularization term. In this paper, we study the existence of optimal solutions in multi-kernel regularization learning. First, we improve a previous result on this problem by Micchelli and Pontil, and prove that an optimal solution exists whenever the kernel set is compact. Second, we consider the problem for Gaussian kernels with variance σ∈(0,∞), and give conditions under which an optimal solution exists.
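The underlying problem has the generic two-level form sketched below (notation assumed): with \mathcal{K} a prescribed set of kernels, H_K the reproducing kernel Hilbert space of a kernel K, loss L, and \lambda > 0,

\[ \min_{K \in \mathcal{K}} \ \min_{f \in H_K} \ \frac{1}{n} \sum_{i=1}^{n} L\big( y_i, f(x_i) \big) + \lambda \, \|f\|_{H_K}^{2}, \]

and the existence question is whether the outer infimum over kernels is attained; compactness of \mathcal{K} is the sufficient condition established here.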

17.
18.
Establishment and application of a VaR risk control system
VaR is now increasingly widely used as a new risk control tool, while portfolio theory still relies on the classical σ² risk control framework; although VaR has already been introduced into portfolio applications, its risk control has not moved beyond the decomposition of σ². This paper constructs a VaR risk control framework based on the relative prices of stocks, decomposing the portfolio risk VaR_P into the sum of the market index risk VaR_I and the relative-price risk VaR_S, and gives the basic method for applying this framework to portfolios.
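In symbols, the decomposition described above reads

\[ \mathrm{VaR}_P = \mathrm{VaR}_I + \mathrm{VaR}_S, \]

where \mathrm{VaR}_P is the value-at-risk of the portfolio, \mathrm{VaR}_I that of the market index, and \mathrm{VaR}_S that of the stocks' prices relative to the index (the precise construction of the relative prices is given in the paper).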

19.
Evaluating the generalization performance of learning algorithms has been the main thread of theoretical research in machine learning. Previous bounds describing the generalization performance of the empirical risk minimization (ERM) algorithm are usually established for independent and identically distributed (i.i.d.) samples. In this paper, we go beyond this classical framework by establishing generalization bounds for the ERM algorithm with uniformly ergodic Markov chain (u.e.M.c.) samples. We prove bounds on the rate of uniform convergence and of relative uniform convergence of the ERM algorithm with u.e.M.c. samples, and show that the ERM algorithm with u.e.M.c. samples is consistent. The established theory underlies the application of ERM-type learning algorithms.
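For context, a Markov chain with n-step transition kernel P^{n}(x, \cdot) and stationary distribution \pi is called uniformly ergodic if there exist constants C < \infty and \rho \in (0, 1) such that (a standard definition)

\[ \sup_{x} \left\| P^{n}(x, \cdot) - \pi \right\|_{TV} \le C \rho^{n}, \qquad n \ge 1, \]

which supplies the exponential decay of dependence that replaces independence in the generalization analysis.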
