1.
Traditional machine learning methods cannot capture the uncertainty and dynamic variation of electric load demand. This paper applies a recently proposed online learning algorithm for hidden Markov models to electric load forecasting, fully extracting the uncertainty characteristics and dynamic patterns in historical data, and combines it with a decomposition algorithm to exploit the dynamic features of the data more precisely and thereby improve forecasting accuracy. The algorithm is based on a hidden Markov probabilistic forecasting model that is updated online as each new sample arrives, adapting to the latest data. STL time-series decomposition separates the load data into components with different uncertainty and dynamic characteristics; the online learning algorithm is then applied to each component separately, yielding a combined forecasting algorithm for electric load. Tests on three public electric load datasets show that, compared with a single online learning model, the proposed combined algorithm improves forecasting accuracy, reducing the relative prediction error by up to 27%.
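The decompose-then-predict scheme above can be sketched in a few lines. This is a minimal illustration only: a centered moving average stands in for STL, and a simple exponentially weighted online forecaster stands in for the paper's HMM online learner; all names and parameters here are illustrative assumptions.

```python
# Sketch: decompose a load series into trend + remainder, then forecast
# each component with its own online learner and sum the forecasts.
# Stand-ins: moving average for STL, EWMA for the HMM online algorithm.

def moving_average_trend(series, window):
    """Centered moving average as a crude trend estimate (stand-in for STL)."""
    half = window // 2
    trend = []
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        trend.append(sum(series[lo:hi]) / (hi - lo))
    return trend

class OnlineEWMA:
    """One-step forecaster updated online as each new sample arrives."""
    def __init__(self, alpha=0.5):
        self.alpha = alpha
        self.level = None

    def predict(self):
        return 0.0 if self.level is None else self.level

    def update(self, y):
        self.level = y if self.level is None else \
            self.alpha * y + (1 - self.alpha) * self.level

def combined_forecast(series, window=5):
    trend = moving_average_trend(series, window)
    remainder = [y - t for y, t in zip(series, trend)]
    f_trend, f_rem = OnlineEWMA(0.3), OnlineEWMA(0.7)
    preds = []
    for t, r in zip(trend, remainder):
        preds.append(f_trend.predict() + f_rem.predict())  # forecast first
        f_trend.update(t)   # then update each component model online
        f_rem.update(r)
    return preds
```

Each component model sees only its own component, so components with different dynamics get separately tuned online learners, which is the point of the combination.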
2.
Quantity discounts can increase order quantities and are an important factor in inventory decisions; in particular, the discount applies only once the order quantity reaches a certain level. This paper applies the weak aggregating algorithm, which originated in theoretical computer science, to design online policies for the multi-period newsvendor problem with such price quantity discounts. The weak aggregating algorithm is an online sequential decision method whose main feature is that it makes no statistical assumptions about future inputs, avoiding the probabilistic demand assumptions usually required in newsvendor research. Applying the weak aggregating algorithm to expert policies with fixed order quantities, we derive a concrete online policy for the multi-period newsvendor problem under quantity discounts and establish a theoretical guarantee for this policy relative to the best expert policy. The policy and its theoretical results are further extended to include salvage value and shortage cost. Numerical examples show that the proposed online policy has good competitive performance.
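The expert-aggregation idea can be sketched as follows. This is a simplified exponential-weights aggregation over fixed-order experts in the spirit of the weak aggregating algorithm, not the paper's exact scheme; the loss form, the all-units discount rule, and the learning rate `eta` are illustrative assumptions.

```python
import math

# Sketch: aggregate fixed-order-quantity experts for the multi-period
# newsvendor with an all-units quantity discount above `threshold`.

def newsvendor_loss(q, demand, unit_cost=1.0, price=2.0,
                    discount=0.8, threshold=50):
    """Per-period loss (negative profit) with a quantity discount."""
    c = unit_cost * (discount if q >= threshold else 1.0)
    return c * q - price * min(q, demand)

def waa_order_sequence(demands, expert_orders, eta=0.1):
    """Each period, order the weight-averaged expert quantity, then
    reweight each expert by exp(-eta * its realized loss)."""
    weights = [1.0] * len(expert_orders)
    orders = []
    for d in demands:
        total = sum(weights)
        q = sum(w * e for w, e in zip(weights, expert_orders)) / total
        orders.append(q)
        weights = [w * math.exp(-eta * newsvendor_loss(e, d))
                   for w, e in zip(weights, expert_orders)]
        top = max(weights)               # normalize to avoid overflow
        weights = [w / top for w in weights]
    return orders
```

No distributional assumption on demand is used anywhere: weights respond only to realized losses, which is the property the abstract highlights.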
3.
《数学的实践与认识》 (Mathematics in Practice and Theory), 2017, (16)
In the era of big data, more and more fields place high demands on big-data computation; in particular, much of the data produced across industries takes the form of dynamic streams, so real-time, fast, and efficient stream computation and analysis is increasingly essential. Online machine learning algorithms are an effective solution for real-time analysis of big data streams. In kernel-based machine learning, an effective kernel function can be obtained through kernel learning, and the chosen kernel strongly influences the performance of the kernel learner. Combining online machine learning with kernel methods, this paper studies a multi-task online learning algorithm suited to big-data stream environments, discusses perturbation terms that may arise during the algorithm, and applies a data-dependent kernel construction to improve the algorithm's generality. The algorithm requires neither storing nor rescanning historical data streams; it only selects one sample set and, when analyzing new streaming data, can directly update the current kernel to the most suitable one within acceptable time, making it well suited to kernel learning over streaming big data.
4.
Algorithms and Theory for Linear and Nonlinear Programming
Linear and nonlinear programming are classical and important research directions in mathematical programming. This survey introduces the background of the field and presents the latest algorithms and theory for linear programming, unconstrained optimization, and constrained optimization, along with some frontier and hot topics. The alternating direction method of multipliers, a class of methods for structured constrained optimization problems, has attracted much attention in recent years, and global optimization is a research direction of great importance to applied optimization; the survey therefore also reviews recent progress and open problems in these two areas.
5.
For the classification of continuous data streams, this paper proposes an online logistic regression algorithm based on online learning theory. We study online logistic regression with a regularization term, propose an online logistic-l2 regression model, and give theoretical bound estimates. Experimental results show that, as the number of online iterations increases, the proposed model and algorithm attain the classification performance of offline prediction. This work provides a new and effective method for classifying massive streaming data.
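An online logistic regression step with an l2 penalty can be sketched as below. This is a generic online-gradient-descent sketch, not the paper's exact algorithm; the step size `eta` and penalty `lam` are illustrative assumptions.

```python
import math

# Sketch: online logistic regression with l2 regularization.
# Each stream example triggers one gradient step; nothing is stored.

class OnlineLogisticL2:
    def __init__(self, dim, lam=0.01, eta=0.1):
        self.w = [0.0] * dim
        self.lam = lam   # l2 regularization strength
        self.eta = eta   # learning rate

    def predict_proba(self, x):
        z = sum(wi * xi for wi, xi in zip(self.w, x))
        return 1.0 / (1.0 + math.exp(-z))

    def update(self, x, y):
        """One online step on example (x, y), with label y in {0, 1}."""
        p = self.predict_proba(x)
        # per-coordinate gradient of logistic loss plus the l2 term
        self.w = [wi - self.eta * ((p - y) * xi + self.lam * wi)
                  for wi, xi in zip(self.w, x)]
```

Because each example is processed once and discarded, memory stays constant regardless of stream length, which is what makes this style of algorithm suitable for massive streams.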
6.
Online and Semi-online Scheduling on Parallel Machines with Machine Ready Times
This paper studies online and semi-online scheduling on a system of m parallel machines with machine ready times. For the online problem, we prove that the worst-case ratio of the LS algorithm is 2 - 1/m. For three semi-online models, in which the jobs arrive in decreasing order of processing time, the total processing time is known, or the largest processing time is known, we analyze the lower bounds and the worst-case ratios of the proposed algorithms. For the two-machine case of two of these models we obtain best possible algorithms.
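The LS (list scheduling) rule mentioned above is simple to state: assign each arriving job to the machine that can finish it earliest. A minimal sketch, with machine ready times modeled as initial loads (the instance below is illustrative, not from the paper):

```python
# Sketch of LS (list scheduling) on m parallel machines with ready times:
# each job, in online arrival order, goes on the machine that becomes
# free earliest. The abstract's 2 - 1/m worst-case bound applies to this rule.

def ls_schedule(ready_times, jobs):
    """ready_times[i]: time machine i becomes available;
    jobs: processing times in arrival order.
    Returns the final completion time of each machine."""
    finish = list(ready_times)
    for p in jobs:
        i = min(range(len(finish)), key=lambda k: finish[k])
        finish[i] += p   # greedily load the currently least-loaded machine
    return finish

# Makespan of an example instance: two machines ready at times 0 and 2.
makespan = max(ls_schedule([0.0, 2.0], [3.0, 1.0, 2.0]))
```

LS needs no knowledge of future jobs, which is exactly the online setting analyzed in the abstract; the semi-online models grant it partial information (job order, total, or maximum processing time).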
9.
10.
This paper studies online and semi-online scheduling on m parallel machines with machine ready times, where the objective is to maximize the earliest machine completion time. For the online problem, we prove that the competitive ratio of the LS algorithm is m. For the two semi-online models in which the total processing time of all jobs (sum) or the largest processing time (max) is known, we analyze the lower bounds and give optimal algorithms, each with competitive ratio m - 1.
11.
We show the inability of any pure strategy imitation rule to lead a decision maker toward optimality for a given, fixed population behaviour. The intuition is that a pure strategy state space is too small to deal with a large variety of environments. This result helps to explain the optimality result obtained by Schlag (1998), where the population behaviour is allowed to evolve over time: the group composition provides an additional state space in which information about the environment can be accumulated.
12.
13.
This paper studies the errors induced in learning algorithms by the variance of the Gaussian kernel and analyzes these errors using special properties of reproducing kernels. These errors play an important role in analyzing the convergence rates of the algorithms.
14.
Shu-guang Han, Jiu-ling Guo, Lu-ping Zhang, Jue-liang Hu, Yi-wei Jiang, Di-wei Zhou, 《高校应用数学学报(英文版)》, 2017, 32(2): 237-252
This paper investigates the online inventory problem with interrelated prices, in which a decision of when and how much to replenish must be made in an online fashion, without concrete knowledge of future prices. Four new online models with different price correlations are proposed: the linear-decrease model, the log-decrease model, the logarithmic model, and the exponential model. For the first two models, online algorithms are developed, and upper and lower bounds on their competitive ratios, the performance measure for online algorithms, are derived. For the exponential and logarithmic models, online algorithms are obtained by solving linear programs, and the corresponding competitive ratios are analyzed. The algorithm designed for the exponential model is optimal, and the algorithm for the logarithmic model is optimal only under certain conditions. Moreover, numerical examples illustrate that the algorithms based on the dprice-conservative strategy are more suitable when the purchase price fluctuates relatively little.
15.
Yiming Ying, Advances in Computational Mathematics, 2007, 27(3): 273-291
In this paper, we are interested in the analysis of regularized online algorithms associated with reproducing kernel Hilbert spaces. General conditions on the loss function and step sizes are given to ensure convergence. Explicit learning rates are also given for particular step sizes.
★The author's current address: Department of Computer Sciences, University College London, Gower Street, London WC1E, UK.
16.
Gaussians are important tools for learning from data of large dimensions. The variance of a Gaussian kernel is a measurement of the frequency range of function components or features retrieved by learning algorithms induced by the Gaussian. The learning ability and approximation power increase when the variance of the Gaussian decreases. Thus, it is natural to use Gaussians with decreasing variances for online algorithms when samples are imposed one by one. In this paper, we consider fully online classification algorithms associated with a general loss function and varying Gaussians which are closely related to regularization schemes in reproducing kernel Hilbert spaces. Learning rates are derived in terms of the smoothness of a target function associated with the probability measure controlling sampling and the loss function. A critical estimate is given for the norm of the difference of regularized target functions as the variance of the Gaussian changes. Concrete learning rates are presented for the online learning algorithm with the least square loss function.
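A fully online scheme of this kind can be sketched as follows. This is a hedged illustration only: each step uses a Gaussian kernel whose width shrinks with t, past examples keep the width active when they arrived, and the schedules for the width, step size, and regularization are illustrative assumptions, not the paper's choices.

```python
import math

# Sketch: fully online least-squares classification with Gaussian kernels
# of decreasing variance. The predictor is a weighted sum of Gaussian
# bumps centered at past examples, each with its own width.

def gaussian(x, y, sigma):
    return math.exp(-sum((a - b) ** 2 for a, b in zip(x, y))
                    / (2 * sigma ** 2))

class VaryingGaussianOnline:
    def __init__(self, lam=0.01):
        self.lam = lam
        self.support = []   # list of (center, coefficient, sigma)
        self.t = 0

    def predict(self, x):
        return sum(c * gaussian(xi, x, s) for xi, c, s in self.support)

    def update(self, x, y):
        """One online step with square loss; label y in {-1, +1}."""
        self.t += 1
        sigma_t = 1.0 / math.sqrt(self.t)        # decreasing variance
        eta_t = 1.0 / (self.lam * self.t + 1.0)  # decaying step size
        err = self.predict(x) - y
        # shrink old coefficients (regularization), then add a new center
        self.support = [(xi, (1 - eta_t * self.lam) * c, s)
                        for xi, c, s in self.support]
        self.support.append((x, -eta_t * err, sigma_t))
```

As the variance shrinks, new centers become more local, matching the abstract's point that smaller variance increases learning ability and approximation power.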
17.
Recently, there has been considerable work on analyzing learning algorithms with pairwise loss functions in the batch setting. There is relatively little theoretical work on analyzing their online algorithms, despite their popularity in practice due to their scalability to big data. In this paper, we consider online learning algorithms with pairwise loss functions based on regularization schemes in reproducing kernel Hilbert spaces. In particular, we establish the convergence of the last iterate of the online algorithm under a very weak assumption on the step sizes and derive satisfactory convergence rates for polynomially decaying step sizes. Our technique uses Rademacher complexities which handle function classes associated with pairwise loss functions. Since pairwise learning involves pairs of examples, which are no longer i.i.d., standard techniques do not directly apply to such pairwise learning algorithms. Hence, our results are a non-trivial extension of those in the setting of univariate loss functions to the pairwise setting.
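The pairwise structure can be made concrete with a small sketch. The abstract's algorithm operates in an RKHS; here a linear model, a pairwise hinge (AUC-style) loss, a finite example buffer, and the step size are all simplifying assumptions for illustration.

```python
# Sketch: online pairwise learning. Each new example is paired with
# buffered opposite-label examples, and an active pairwise hinge loss
# triggers a gradient step pushing positives above negatives.

class OnlinePairwiseHinge:
    def __init__(self, dim, eta=0.1, buffer_size=20):
        self.w = [0.0] * dim
        self.eta = eta
        self.buffer = []               # recent (x, y) examples
        self.buffer_size = buffer_size

    def score(self, x):
        return sum(wi * xi for wi, xi in zip(self.w, x))

    def update(self, x, y):
        """Pair (x, y), y in {-1, +1}, with opposite-label buffered examples."""
        for xp, yp in self.buffer:
            if yp == y:
                continue
            pos, neg = (x, xp) if y > 0 else (xp, x)
            if self.score(pos) - self.score(neg) < 1:   # hinge is active
                self.w = [wi + self.eta * (p - n)
                          for wi, p, n in zip(self.w, pos, neg)]
        self.buffer.append((x, y))
        self.buffer = self.buffer[-self.buffer_size:]
```

The loss depends on pairs of examples rather than single ones, so the pairs are not i.i.d. even when the stream is, which is exactly why the abstract needs techniques beyond the univariate setting.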
18.