Similar Documents
A total of 20 similar documents were retrieved.
1.
The extreme learning machine (ELM) is not only an effective classifier for supervised learning but can also be applied to unsupervised and semi-supervised learning. The unsupervised extreme learning machine (US-ELM) and the semi-supervised extreme learning machine (SS-ELM) share the model structure of ELM and differ only in their cost functions. We introduce a kernel function into US-ELM and SS-ELM, yielding the kernelized variants US-KELM and SS-KELM. Wavelet analysis offers multivariate interpolation and sparse approximation, and wavelet kernel functions have been widely used in support vector machines. Therefore, to combine the wavelet kernel function with US-ELM and SS-ELM, this paper proposes the unsupervised extreme learning machine with wavelet kernel function (US-WKELM) and the semi-supervised extreme learning machine with wavelet kernel function (SS-WKELM). Experimental results show the feasibility and validity of US-WKELM and SS-WKELM for clustering and classification.
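To make the kernel idea concrete, the sketch below builds a Morlet-type wavelet kernel (a common choice in the wavelet-SVM literature; the paper's exact kernel is not reproduced here, so the scale `a` and regularization `C` are illustrative assumptions) and uses it in a kernelized ELM-style regularized solve for the supervised case.

```python
import numpy as np

def wavelet_kernel(X, Z, a=1.0):
    """Morlet-type wavelet kernel: K(x, z) = prod_d cos(1.75*(x_d - z_d)/a) * exp(-(x_d - z_d)^2 / (2 a^2))."""
    diff = X[:, None, :] - Z[None, :, :]          # pairwise differences, shape (n_X, n_Z, d)
    return np.prod(np.cos(1.75 * diff / a) * np.exp(-diff**2 / (2 * a**2)), axis=2)

# Minimal kernel-ELM-style regularized solve (supervised case), with toy data:
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
T = np.sin(X[:, 0:1]) + 0.1 * rng.normal(size=(100, 1))   # toy targets
C = 10.0                                                   # regularization strength (assumed)
K = wavelet_kernel(X, X)
beta = np.linalg.solve(K + np.eye(len(X)) / C, T)          # beta = (K + I/C)^{-1} T
y_hat = wavelet_kernel(X, X) @ beta                        # fitted outputs
```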

2.
This paper presents a methodology for finding optimal system parameters and optimal control parameters using a novel adaptive particle swarm optimization (APSO) algorithm. In the proposed APSO, every particle dynamically adjusts its inertia weight according to feedback taken from the particles' best memories. The main advantages of the proposed APSO are faster convergence and better solution accuracy at minimal additional computational cost. We first use the proposed algorithm to identify the unknown system parameters, assuming the system structure is known in advance. Then, based on the identified system, PID gains are optimized with the same algorithm. Two simulated examples demonstrate the effectiveness of the proposed algorithm, and comparisons with PSO using a linearly decreasing inertia weight (LDW-PSO) and with a genetic algorithm (GA) confirm the superiority of the APSO-based approach.
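The abstract does not spell out the feedback rule used to adapt each particle's inertia weight, so the minimal PSO sketch below assumes a simple rule: shrink a particle's inertia when its personal best improves (exploit) and grow it when it stagnates (explore). All bounds and scaling factors are illustrative.

```python
import numpy as np

def sphere(x):                       # toy objective to minimize
    return np.sum(x**2)

rng = np.random.default_rng(1)
n, dim, iters = 20, 5, 100
c1 = c2 = 2.0
pos = rng.uniform(-5, 5, (n, dim))
vel = np.zeros((n, dim))
w = np.full(n, 0.9)                  # per-particle inertia weights
pbest, pbest_f = pos.copy(), np.array([sphere(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(iters):
    for i in range(n):
        r1, r2 = rng.random(dim), rng.random(dim)
        vel[i] = w[i]*vel[i] + c1*r1*(pbest[i]-pos[i]) + c2*r2*(gbest-pos[i])
        pos[i] += vel[i]
        f = sphere(pos[i])
        if f < pbest_f[i]:
            pbest[i], pbest_f[i] = pos[i].copy(), f
            w[i] = max(0.4, w[i]*0.99)     # improving: shrink inertia (exploit) -- assumed rule
        else:
            w[i] = min(0.9, w[i]*1.01)     # stagnating: grow inertia (explore) -- assumed rule
    gbest = pbest[np.argmin(pbest_f)].copy()
```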

3.
In recent years, a great deal of research has focused on sparse representation of signals. In particular, the dictionary learning algorithm K-SVD efficiently learns a redundant dictionary from a set of training signals, and much progress has been made on various aspects of it. A related technique, the extreme learning machine (ELM), is a single-hidden-layer feedforward neural network (SLFN) with fast learning speed, good generalization, and universal classification capability. In this paper, we propose an optimization of K-SVD based on a denoising deep extreme learning machine autoencoder (DDELM-AE) for sparse representation. The DDELM-AE produces a new learned representation that serves as the new "input" to the conventional K-SVD algorithm, making it perform better. To verify the classification performance of the new method, we conduct extensive experiments on real-world data sets; its performance is comparable to that of deep models (i.e., the stacked autoencoder). The experimental results indicate that the proposed method is very efficient in terms of both speed and accuracy.
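A minimal sketch of one denoising ELM-autoencoder layer, whose output representation would then be fed to the conventional K-SVD step; the hidden size, ridge constant `C`, and noise level are assumptions, and the paper's stacked (deep) variant would apply several such layers.

```python
import numpy as np

def elm_autoencoder_features(X, n_hidden=64, C=1e3, noise=0.05, seed=0):
    """One denoising ELM-autoencoder layer: corrupt the input, map it through a
    random hidden layer, solve the output weights by ridge regression against the
    clean input, and use X @ beta.T as the new representation (a common ELM-AE
    convention; the paper's exact stacking may differ)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    X_noisy = X + noise * rng.normal(size=X.shape)          # denoising: corrupt the input
    H = np.tanh(X_noisy @ W + b)                            # random hidden layer
    # beta = (H^T H + I/C)^{-1} H^T X  (ridge solution reconstructing the clean input)
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ X)
    return X @ beta.T                                       # new representation fed to K-SVD

X = np.random.default_rng(2).normal(size=(200, 30))
Z = elm_autoencoder_features(X)          # Z would then replace X as K-SVD's training input
```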

4.
In this paper, a new method for nonlinear system identification is proposed based on an extreme learning machine neural network Hammerstein model (ELM-Hammerstein). The ELM-Hammerstein model consists of a static ELM neural network followed by a linear dynamic subsystem. Identification is achieved by determining the structure of the ELM-Hammerstein model and estimating its parameters. The Lipschitz quotient criterion is adopted to determine the model structure from input–output data. A generalized ELM algorithm is proposed to estimate the model parameters, in which the parameters of the linear dynamic part and the output weights of the ELM network are estimated simultaneously. The proposed method obtains more accurate identification results with lower computational complexity. Three simulation examples demonstrate its effectiveness.
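A rough sketch of the Lipschitz-quotient idea for choosing the lag order from input–output data; the exact index used in the paper may differ, and the pair sampling, the number of quotients averaged, and the toy system below are all illustrative assumptions.

```python
import numpy as np

def lipschitz_index(u, y, order, n_pairs=50, seed=0):
    """Lipschitz-quotient index for a candidate lag order: build regressors from
    past inputs/outputs, compute q = |dy| / ||dx|| over random sample pairs, and
    average the largest quotients. The chosen order is where this index stops
    decreasing sharply. Simplified illustration only."""
    rng = np.random.default_rng(seed)
    X = np.column_stack([u[order - k - 1:len(u) - k - 1] for k in range(order)] +
                        [y[order - k - 1:len(y) - k - 1] for k in range(order)])
    t = y[order:]
    i = rng.integers(0, len(t), n_pairs)
    j = rng.integers(0, len(t), n_pairs)
    keep = i != j
    q = np.abs(t[i[keep]] - t[j[keep]]) / (np.linalg.norm(X[i[keep]] - X[j[keep]], axis=1) + 1e-12)
    return np.sqrt(order) * np.exp(np.mean(np.log(np.sort(q)[-10:])))  # geometric mean of largest quotients

u = np.random.default_rng(3).uniform(-1, 1, 500)
y = np.zeros(500)
for k in range(2, 500):                       # toy 2nd-order nonlinear system
    y[k] = 0.5 * y[k - 1] - 0.2 * y[k - 2] + np.tanh(u[k - 1])
print([round(lipschitz_index(u, y, m), 3) for m in range(1, 5)])  # look for the elbow
```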

5.
Kernel logistic regression (KLR) is a powerful algorithm that has been shown to be competitive with many state-of-the-art machine learning algorithms such as support vector machines (SVMs). Unlike SVMs, KLR can easily be extended to multi-class problems and produces class posterior probability estimates, making it very useful for many real-world applications. However, training KLR with gradient-based methods or iteratively re-weighted least squares can be unbearably slow for large datasets; coupled with a poorly conditioned design matrix and parameter tuning, training can quickly become infeasible for some real datasets. The goal of this paper is to present simple, fast, scalable, and efficient algorithms for learning KLR. First, based on a simple approximation of the logistic function, a least-squares algorithm for KLR is derived that avoids the iterative tuning of gradient-based methods. Second, inspired by extreme learning machine (ELM) theory, an explicit feature space is constructed through a generalized single-hidden-layer feedforward network and used to train both iteratively re-weighted least squares KLR (IRLS-KLR) and the newly proposed least-squares KLR (LS-KLR). Finally, for large-scale and/or poorly conditioned problems, a robust and efficient preconditioned learning technique is proposed for the algorithms presented in the paper. Numerical results on a series of artificial and 12 real benchmark datasets show, first, that LS-KLR compares favorably with SVM and traditional IRLS-KLR in terms of accuracy and learning speed; second, that extending ELM to KLR yields simple, scalable, and very fast algorithms with generalization performance comparable to the original versions; and finally, that the introduced preconditioned learning method can significantly increase the learning speed of IRLS-KLR.
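The ELM-based variant can be sketched as follows: an explicit random single-hidden-layer feature map replaces the kernel, and ridge-regularized logistic regression is fitted on it by IRLS. The hidden size, regularization, and toy data are assumptions; the paper's LS-KLR additionally replaces IRLS with a one-shot least-squares approximation.

```python
import numpy as np

def elm_feature_map(X, n_hidden=100, seed=0):
    """Explicit ELM-style feature space: a random single-hidden-layer map."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    return np.tanh(X @ W + b)

def irls_logistic(H, y, lam=1e-2, n_iter=20):
    """Ridge-regularized logistic regression on the explicit feature space via
    iteratively re-weighted least squares (the IRLS-KLR idea with an ELM map)."""
    w = np.zeros(H.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-H @ w))          # class posterior estimates
        S = p * (1 - p)                           # IRLS weights
        # Newton step: (H^T S H + lam I) w_new = H^T S z, with z = Hw + (y - p)/S
        z = H @ w + (y - p) / np.maximum(S, 1e-8)
        A = H.T @ (H * S[:, None]) + lam * np.eye(H.shape[1])
        w = np.linalg.solve(A, H.T @ (S * z))
    return w

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)         # toy nonlinear labels
H = elm_feature_map(X)
w = irls_logistic(H, y)
pred = (1.0 / (1.0 + np.exp(-H @ w)) > 0.5).astype(float)
```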

6.
The dual-parallel feedforward neural network is a hybrid structure combining a single-layer perceptron and a single-hidden-layer feedforward network. This paper constructs a dual-parallel fast learning machine algorithm. Compared with other similar algorithms, the proposed algorithm achieves comparable learning performance with fewer hidden units and fewer free parameters. Numerical experiments show that, for many practical classification problems, the proposed algorithm has better generalization ability and can therefore serve as a useful complement to fast learning machine algorithms.
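A minimal sketch of the dual-parallel idea: the output layer is connected both directly to the inputs (the perceptron branch) and to a random hidden layer (the ELM branch), and all output weights are obtained from one regularized least-squares solve. The hidden size and regularization constant are assumptions.

```python
import numpy as np

def dpf_elm_fit(X, T, n_hidden=20, C=1e3, seed=0):
    """Dual-parallel fast learning machine sketch: the output layer sees both the
    raw inputs (perceptron branch) and the random hidden layer (ELM branch), and
    all output weights are solved jointly by regularized least squares."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    A = np.hstack([X, np.tanh(X @ W + b)])            # [direct branch | hidden branch]
    beta = np.linalg.solve(A.T @ A + np.eye(A.shape[1]) / C, A.T @ T)
    return W, b, beta

def dpf_elm_predict(X, W, b, beta):
    return np.hstack([X, np.tanh(X @ W + b)]) @ beta

rng = np.random.default_rng(5)
X = rng.uniform(-1, 1, (200, 4))
T = (X.sum(axis=1, keepdims=True) > 0).astype(float)  # toy binary target
W, b, beta = dpf_elm_fit(X, T)
acc = np.mean((dpf_elm_predict(X, W, b, beta) > 0.5) == T)
```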

7.
Robust extreme learning machine for regression estimation based on an exponential Laplace loss function
Data sets from practical problems are usually affected by various kinds of noise, and when the extreme learning machine (ELM) learns from such data it shows low prediction accuracy and large fluctuations in its predictions. To overcome this drawback, an exponential Laplace loss function that weakens the influence of noise is adopted. This loss function is built upon the Gaussian kernel function; it is differentiable, non-convex, and bounded, and it approaches the Laplace function. Introducing it into the extreme learning machine yields the exponential Laplace loss function based robust ELM for regression (ELRELM) model. An iteratively re-weighted algorithm is used to solve the model's optimization problem: in each iteration, noisy sample points are assigned smaller weights, which effectively improves prediction accuracy. Experiments on real data sets verify that the proposed model has better learning performance and robustness than the comparison algorithms.
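The exact exponential Laplace loss is defined in the paper; as a stand-in, the sketch below uses a generic Gaussian-type robust weight exp(-(r/σ)²) inside the same iteratively re-weighted ELM scheme, so that samples with large residuals receive small weights in the next weighted least-squares solve.

```python
import numpy as np

def robust_elm_irls(X, y, n_hidden=50, C=1e2, sigma=1.0, n_iter=10, seed=0):
    """Iteratively re-weighted ELM regression: samples with large residuals get
    small weights in the next weighted least-squares solve. The weight function
    exp(-r^2/sigma^2) is a generic robust choice standing in for the paper's
    exponential Laplace loss."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)
    w = np.ones(len(y))                               # sample weights
    for _ in range(n_iter):
        Hw = H * w[:, None]
        beta = np.linalg.solve(H.T @ Hw + np.eye(n_hidden) / C, Hw.T @ y)
        r = y - H @ beta                              # residuals
        w = np.exp(-(r / sigma) ** 2)                 # down-weight outliers
    return W, b, beta

rng = np.random.default_rng(6)
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X).ravel() + 0.1 * rng.normal(size=200)
y[rng.integers(0, 200, 10)] += 5.0                    # inject gross outliers
W, b, beta = robust_elm_irls(X, y)
```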

8.
The interbank offered rate is the only direct market rate in China's currency market, so forecasting the volatility of the China Interbank Offered Rate (IBOR) has important theoretical and practical significance for financial asset pricing and for financial risk measurement and management. However, IBOR is a dynamic, non-stationary time series whose evolution exhibits strong random fluctuations, which makes its volatility difficult to forecast. This paper proposes a hybrid algorithm that combines a grey model with an extreme learning machine (ELM) to forecast IBOR volatility. The algorithm has three phases. First, the grey model preprocesses the original IBOR series with the accumulated generating operation (AGO), weakening the stochastic volatility of the original series. Then, a forecasting model is built by applying ELM to the new series. Finally, predictions for the original IBOR series are recovered by the inverse accumulated generating operation (IAGO). The new model is applied to forecasting the interbank offered rate of China. Compared with the forecasts of BP neural networks and the classical ELM, the new model is more effective for forecasting short- and medium-term IBOR volatility.
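A compact sketch of the three-phase pipeline: AGO (cumulative sum) smooths the series, an ELM is trained on lagged values of the accumulated series, and IAGO (first differences) maps the predictions back to the original scale. The toy series, lag window, and train/test split are assumptions.

```python
import numpy as np

def elm_fit_predict(X_tr, y_tr, X_te, n_hidden=30, C=1e3, seed=0):
    """Plain ELM regressor used on the AGO-smoothed series."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X_tr.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X_tr @ W + b)
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ y_tr)
    return np.tanh(X_te @ W + b) @ beta

# --- hybrid grey/ELM pipeline sketch (window size and split are assumptions) ---
rng = np.random.default_rng(7)
rate = 2.0 + 0.3 * np.sin(np.arange(300) / 10) + 0.05 * rng.normal(size=300)  # toy rate series
x1 = np.cumsum(rate)                      # AGO: accumulated generating operation
lag = 5
X = np.column_stack([x1[i:len(x1) - lag + i] for i in range(lag)])
y = x1[lag:]
split = 250
pred_ago = elm_fit_predict(X[:split], y[:split], X[split:])
# IAGO: first differences recover predictions of the original series
pred_rate = pred_ago - x1[lag - 1 + split:-1]
```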

9.
In this era of big data, more and more models need to be trained to mine useful knowledge from large-scale data, and training multiple models accurately and efficiently with limited computing resources has become a challenging problem. As one of the ELM variants, the online sequential extreme learning machine (OS-ELM) provides a way to learn from incremental data, while MapReduce provides a simple, scalable, and fault-tolerant framework for large-scale learning. In this paper, we propose an efficient parallel method for batched online sequential extreme learning machine (BPOS-ELM) training using MapReduce. Map execution time is estimated from historical statistics using regression and inverse-distance-weighted interpolation; Reduce execution time is estimated from complexity analysis and regression. Based on these estimates, BPOS-ELM generates a Map execution plan and a Reduce execution plan, launches one MapReduce job to train multiple OS-ELM models according to the generated plan, and collects execution information to further improve estimation accuracy. The proposal is evaluated on real and synthetic data. Experimental results show that the accuracy of BPOS-ELM is at the same level as that of OS-ELM and parallel OS-ELM (POS-ELM), with higher training efficiency.
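BPOS-ELM's MapReduce planning is infrastructure-level, but the model it parallelizes is the standard OS-ELM recursion, sketched below for a single model (the hidden size and chunk handling are illustrative).

```python
import numpy as np

class OSELM:
    """Core online sequential ELM recursion (the standard OS-ELM update), shown
    for a single model; BPOS-ELM batches many such models into one MapReduce job."""
    def __init__(self, n_in, n_out, n_hidden=40, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(n_in, n_hidden))
        self.b = rng.normal(size=n_hidden)
        self.beta = np.zeros((n_hidden, n_out))
        self.P = None

    def _h(self, X):
        return np.tanh(X @ self.W + self.b)

    def init_batch(self, X0, T0):
        H0 = self._h(X0)                               # needs at least n_hidden rows
        self.P = np.linalg.inv(H0.T @ H0)
        self.beta = self.P @ H0.T @ T0

    def update(self, X, T):                            # one incremental data chunk
        H = self._h(X)
        K = np.linalg.inv(np.eye(len(X)) + H @ self.P @ H.T)
        self.P = self.P - self.P @ H.T @ K @ H @ self.P
        self.beta = self.beta + self.P @ H.T @ (T - H @ self.beta)

    def predict(self, X):
        return self._h(X) @ self.beta

# usage sketch:
# model = OSELM(n_in=5, n_out=1); model.init_batch(X0, T0)
# for X_chunk, T_chunk in stream: model.update(X_chunk, T_chunk)
```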

10.
Assemble-to-order (ATO) systems refer to a manufacturing process in which a customer must place an order before the ordered item is manufactured. An ATO system operating under a continuous-review base-stock policy can be formulated as a stochastic simulation optimization problem (SSOP) with a huge search space, which is NP-hard. This work develops an ordinal optimization (OO) based metaheuristic algorithm, abbreviated OOMH, to determine a near-optimal design (target inventory level) for ATO systems. The proposed approach comprises three modules: meta-modeling, exploration, and exploitation. In the meta-modeling module, an extreme learning machine (ELM) is used as a meta-model to estimate the approximate objective value of a design. In the exploration module, an elite teaching-learning-based optimization (TLBO) approach selects N candidate designs from the entire search space, with the fitness of each design evaluated by the ELM. In the exploitation module, a sequential ranking-and-selection (R&S) scheme optimally allocates the computing resource and budget to effectively select the critical designs from the N candidates. Finally, the proposed algorithm is applied to two general ATO systems: a large system comprising 12 items on eight products and a moderately sized system composed of eight items on five products. Results obtained with the OOMH approach are compared with those of three heuristic methods and a discrete optimization-via-simulation (DOvS) algorithm, and the analysis reveals that the proposed method yields solutions of much higher quality with much higher computational efficiency.

11.
To capture the complex characteristics of futures price fluctuations in agricultural markets and further improve forecasting accuracy, a decomposition-ensemble forecasting model combining variational mode decomposition (VMD) and the extreme learning machine (ELM) is constructed based on the decomposition-ensemble idea. First, exploiting the adaptive and non-recursive nature of VMD, the complex time series is decomposed into multiple modal components (IMFs). Second, to address the difficulty of choosing the key VMD parameter K (the number of modes), a method is proposed that finds the optimal K by a minimum fuzzy entropy criterion, which effectively avoids mode mixing and end effects and thus improves the decomposition capability of VMD. Finally, the strong learning and generalization abilities of ELM are used to forecast the sub-series of different scales obtained from the VMD decomposition, and the forecasts are aggregated into the final prediction. Using CBOT rice, wheat, and soybean meal futures prices as the study objects, the empirical results show that this decomposition-ensemble model significantly outperforms single forecasting models and other decomposition-ensemble models in both forecasting accuracy and directional measures, providing a new approach to forecasting agricultural futures prices.
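A skeleton of the decomposition-ensemble pipeline: decompose the price series into modes, forecast each mode one step ahead with an ELM, and sum the forecasts. The `crude_band_split` helper is only a stand-in so the sketch runs end to end; a real implementation would call a proper VMD routine and choose K by the minimum-fuzzy-entropy rule described above.

```python
import numpy as np

def crude_band_split(series, K):
    """Stand-in for VMD so the sketch runs: peels off successively smoother
    moving-average components. Replace with a real VMD implementation."""
    modes, residual = [], series.astype(float)
    for k in range(K - 1):
        win = 2 * (K - k) + 1
        smooth = np.convolve(residual, np.ones(win) / win, mode="same")
        modes.append(residual - smooth)
        residual = smooth
    modes.append(residual)
    return modes

def elm_forecast(series, lag=6, n_hidden=30, C=1e3, seed=0):
    """One-step-ahead ELM forecast of a single mode from its own lagged values."""
    rng = np.random.default_rng(seed)
    X = np.column_stack([series[i:len(series) - lag + i] for i in range(lag)])
    y = series[lag:]
    W = rng.normal(size=(lag, n_hidden)); b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ y)
    h_next = np.tanh(series[-lag:] @ W + b)            # regressor for the next step
    return h_next @ beta

def decompose_ensemble_forecast(price, K):
    modes = crude_band_split(price, K)                 # decompose
    return sum(elm_forecast(m) for m in modes)         # forecast each mode, then aggregate

price = 900 + np.cumsum(np.random.default_rng(11).normal(0, 2, 400))   # toy futures series
next_price_hat = decompose_ensemble_forecast(price, K=5)
```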

12.
The support vector machine (SVM) is a popular learning method for binary classification. Standard SVMs treat all the data points equally, but in some practical problems it is more natural to assign different weights to observations from different classes. This leads to a broader class of learning, the so-called weighted SVMs (WSVMs), and one of their important applications is to estimate class probabilities besides learning the classification boundary. There are two parameters associated with the WSVM optimization problem: one is the regularization parameter and the other is the weight parameter. In this article, we first establish that the WSVM solutions are jointly piecewise-linear with respect to both the regularization and weight parameter. We then develop a state-of-the-art algorithm that can compute the entire trajectory of the WSVM solutions for every pair of the regularization parameter and the weight parameter at a feasible computational cost. The derived two-dimensional solution surface provides theoretical insight on the behavior of the WSVM solutions. Numerically, the algorithm can greatly facilitate the implementation of the WSVM and automate the selection process of the optimal regularization parameter. We illustrate the new algorithm on various examples. This article has online supplementary materials.

13.
Electrical capacitance tomography (ECT) is a promising measurement technology for industrial process monitoring, but its applicability is generally limited by low-quality tomographic images. Boosting the performance of inverse imaging algorithms is the key to improving reconstruction quality (RQ). Common regularized iterative imaging methods with analytical prior regularizers are not flexible enough for actual reconstruction tasks and lead to large reconstruction errors. To address this challenge, this study proposes a new imaging method comprising a reconstruction model and an optimizer. A data-driven regularizer from a new ensemble learning model and an analytical prior regularizer that emphasizes the sparsity of the imaged objects are combined into a new optimization model for imaging. In the proposed ensemble learning model, the generalized low rank approximations of matrices (GLRAM) method performs dimensionality reduction to decrease redundancy in the input data and improve diversity, the extreme learning machine (ELM) serves as the base learner, and the nuclear norm based matrix regression (NNMR) method aggregates the ensemble of solutions. The singular value thresholding method (SVTM) and the fast iterative shrinkage-thresholding algorithm (FISTA) are inserted into the split Bregman method (SBM) to produce a powerful optimizer for the resulting computational model. Comparisons with competing methods in numerical experiments on typical imaging targets demonstrate that the developed algorithm reduces reconstruction error and achieves substantial improvements in imaging quality and robustness.

14.
Road traffic conditions are increasingly important for urban management and for people's travel, affecting every aspect of daily life. Taking the traffic of Shenzhen as the study object, a road-network system is built from basic vehicle data and road coordinates, and a traffic state index (TSI) is derived from vehicle speed and density. A deep learning long short-term memory (LSTM) network is used to predict speed and density, and simulation experiments comparing it with the extreme learning machine (ELM), time-series (ARMA) models, and BP neural networks show that the LSTM network achieves better prediction accuracy and greater stability for long-horizon prediction than the traditional models. Finally, the predictions are used to compute the TSI, which more intuitively reflects road congestion, providing accurate traffic-state forecasts.
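A minimal Keras sketch of the speed-forecasting step: sliding windows of the speed series feed a small LSTM that predicts the next observation (the density series would be handled the same way, and the two forecasts combined into the TSI). Window length, network size, and the synthetic series are assumptions.

```python
import numpy as np
import tensorflow as tf

def make_windows(series, lag=12):
    """Sliding windows: past `lag` observations -> next observation."""
    X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
    y = series[lag:]
    return X[..., None], y          # (samples, timesteps, 1 feature)

# Toy speed series with a daily-like cycle; real inputs would be measured speeds.
t = np.arange(2000)
speed = 40 + 10 * np.sin(2 * np.pi * t / 288) + np.random.default_rng(8).normal(0, 1, 2000)
X, y = make_windows(speed)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(X.shape[1], 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:1500], y[:1500], epochs=5, batch_size=64, verbose=0)
pred = model.predict(X[1500:], verbose=0).ravel()
```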

15.
Besides requiring a good fit of the learned model to the empirical data, machine learning problems usually require such a model to satisfy additional constraints, whose satisfaction can be either imposed a priori or checked a posteriori, once the optimal solution to the learning problem has been determined. In this framework, the paper proves that the optimal solutions to several batch and online regression problems (specifically, ordinary least squares, Tikhonov regularization, and Kalman filtering) satisfy, under certain conditions, either symmetry or antisymmetry constraints, where the symmetry/antisymmetry is defined with respect to a suitable transformation of the data. Computational issues related to the theoretical results (i.e., reduction of the dimensions of the matrices involved in computing the optimal solutions) are also described. The results, which are validated numerically, have potential application in machine-learning problems such as pairwise binary classification, learning of preference relations, and learning the weights associated with the directed arcs of a graph under symmetry/antisymmetry constraints.

16.
In this paper we analyze the warm-standby M/M/R machine repair problem with multiple imperfect coverage, taking the service pressure condition into account. When an operating machine (or warm standby) fails, it may be immediately detected, located, and replaced by a standby, if one is available, with coverage probability c. We use a recursive method to develop the steady-state analytic solutions, which are used to calculate various system performance measures. The total expected profit function per unit time is derived to determine the joint optimal values at maximum profit. We first use a direct search method to characterize the profit function and then apply the Quasi-Newton method to search for the optimal solutions. Furthermore, the particle swarm optimization (PSO) algorithm is implemented to find the parameter combinations that maximize profit. Finally, a comparative analysis of the Quasi-Newton method and the PSO algorithm demonstrates that PSO provides a powerful tool for this optimization problem.

17.
Model estimation is an important research topic in machine learning, and model estimation for dynamic data is the foundation of system identification and control. For the identification of AR time-series models, it is proved that, for a given order, the least-squares estimate of the AR model parameters is essentially also a moment estimate. Based on the principle of structural risk minimization, and by trading off goodness of fit against model complexity, an AR model estimation algorithm based on sparse-structure iteration is proposed, and a rule for selecting the optimal regularization parameter based on generalized ridge estimation is discussed. Numerical results show that the method identifies AR models effectively and parsimoniously, with clear improvements over the moment estimation method.
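The least-squares estimate discussed above can be sketched directly: stack the lagged values into a design matrix and solve the (optionally ridge-regularized) normal equations. The generalized-ridge parameter `lam` and the toy AR(2) process are assumptions.

```python
import numpy as np

def ar_least_squares(x, p, lam=0.0):
    """AR(p) parameter estimation by (optionally ridge-regularized) least squares:
    x_t = a_1 x_{t-1} + ... + a_p x_{t-p} + e_t. lam=0 gives plain OLS; a small
    lam adds the ridge-style penalty used when trading fit against complexity."""
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])  # lags 1..p
    y = x[p:]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

rng = np.random.default_rng(9)
n, a_true = 1000, np.array([0.6, -0.3])
x = np.zeros(n)
for t in range(2, n):                       # simulate a stationary AR(2) process
    x[t] = a_true[0] * x[t - 1] + a_true[1] * x[t - 2] + rng.normal()
print(ar_least_squares(x, 2))               # should be close to [0.6, -0.3]
```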

18.
There are more than two dozen variants of particle swarm optimization (PSO) algorithms in the literature. Recently, a new variant called accelerated PSO (APSO) has shown extra advantages in convergence for global search. In the present study, we introduce chaos into APSO in order to further enhance its global search ability. First, detailed studies are carried out on benchmark problems with twelve different chaotic maps to find the most efficient one. Then the chaotic APSO (CAPSO) is compared with other chaotic PSO algorithms presented in the literature, and its performance is further validated on three engineering problems. The results show that CAPSO with an appropriate chaotic map can clearly outperform standard APSO, with very good performance in comparison with other algorithms and in application to a complex problem.
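A small sketch of the chaos idea: a logistic-map sequence (one of many candidate chaotic maps) replaces a random or linearly decreasing control parameter inside an accelerated-PSO-style update. The particular coefficient being driven and its range are assumptions.

```python
import numpy as np

def logistic_map_sequence(n, x0=0.7, mu=4.0):
    """Chaotic sequence from the logistic map x_{k+1} = mu * x_k * (1 - x_k);
    with mu = 4 it covers (0, 1) ergodically and can replace a random or
    linearly decreasing control parameter."""
    seq, x = np.empty(n), x0
    for k in range(n):
        x = mu * x * (1.0 - x)
        seq[k] = x
    return seq

# Using the chaotic sequence to drive the attraction coefficient of an
# accelerated-PSO-style position update on a toy sphere objective.
rng = np.random.default_rng(10)
pos = rng.uniform(-5, 5, (20, 4))
chaos = logistic_map_sequence(100)
for t in range(100):
    fitness = np.sum(pos**2, axis=1)
    gbest = pos[np.argmin(fitness)]
    beta = 0.2 + 0.5 * chaos[t]                      # chaotic attraction coefficient (assumed form)
    pos = (1 - beta) * pos + beta * gbest + 0.1 * (0.95 ** t) * rng.normal(size=pos.shape)
```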

19.
Scheduling with unexpected machine breakdowns
We investigate an online version of a basic scheduling problem where a set of jobs has to be scheduled on a number of identical machines so as to minimize the makespan. The job processing times are known in advance and preemption of jobs is allowed. Machines are non-continuously available, i.e., they can break down and recover at arbitrary time instances not known in advance. New machines may be added as well. Thus machine availabilities change online. We first show that no online algorithm can construct optimal schedules. We also show that no online algorithm can achieve a bounded competitive ratio if there may be time intervals where no machine is available. Then we present an online algorithm that constructs schedules with an optimal makespan of C_max^OPT if a lookahead of one is given, i.e., the algorithm always knows the next point in time when the set of available machines changes. Finally, we give an online algorithm without lookahead that constructs schedules with a nearly optimal makespan of C_max^OPT + ε, for any ε > 0, if at any time at least one machine is available. Our results demonstrate that not knowing machine availabilities in advance is of little harm.

20.
§ 1 Introduction. If you search for the word "SVM" in the SCI index, thousands of records come up immediately, which shows its great impact. SVM, namely the support vector machine, has been successfully applied to a number of applications ranging from particle identification and text categorization to engine knock detection, bioinformatics, and database marketing [1–6]. The approach is systematic and properly motivated by statistical learning theory [7]. …
