Similar Documents (20 results)
1.
Online gradient methods are widely used as learning algorithms for training feedforward neural networks. A penalty term is often introduced into the training procedure to improve generalization performance and to decrease the magnitude of the network weights. In this paper, some weight-boundedness and deterministic convergence theorems are proved for the online gradient method with penalty for BP neural networks with one hidden layer, assuming that the training samples are supplied to the network in a fixed order within each epoch. The monotonicity of the penalized error function during the training iteration is also guaranteed. Simulation results for a 3-bit parity problem are presented to support the theoretical results.
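The penalized online update this abstract studies can be sketched as follows. The network size, learning rate, penalty coefficient `lam`, and the sigmoid activations are illustrative assumptions, not the paper's actual experimental setup:

```python
import numpy as np

def train_online_with_penalty(X, y, n_hidden=6, lr=0.5, lam=1e-4, epochs=500, seed=0):
    """Online gradient descent with an L2 weight penalty for a one-hidden-layer
    network. Samples are presented in a fixed order within each epoch."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], n_hidden))
    W2 = rng.normal(scale=0.5, size=(n_hidden, 1))
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        for xi, yi in zip(X, y):                 # fixed sample order per epoch
            h = sigmoid(xi @ W1)                 # hidden activations
            out = sigmoid(h @ W2)[0]
            delta_out = (out - yi) * out * (1 - out)
            grad_W2 = np.outer(h, delta_out) + lam * W2   # penalty gradient term
            delta_h = delta_out * W2[:, 0] * h * (1 - h)
            grad_W1 = np.outer(xi, delta_h) + lam * W1
            W2 -= lr * grad_W2
            W1 -= lr * grad_W1
    return W1, W2
```

The `lam * W` terms are the gradient of the penalty, which is what keeps the weights bounded in the paper's theorems.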

2.
A neural fuzzy control system with structure and parameter learning
A general connectionist model, called neural fuzzy control network (NFCN), is proposed for the realization of a fuzzy logic control system. The proposed NFCN is a feedforward multilayered network which integrates the basic elements and functions of a traditional fuzzy logic controller into a connectionist structure with distributed learning abilities. The NFCN can be constructed from supervised training examples by machine learning techniques, and the connectionist structure can be trained to develop fuzzy logic rules and find membership functions. Associated with the NFCN is a two-phase hybrid learning algorithm which utilizes unsupervised learning schemes for structure learning and the backpropagation learning scheme for parameter learning. By combining both unsupervised and supervised learning schemes, learning converges much faster than with the original backpropagation algorithm. The two-phase hybrid learning algorithm requires exact supervised training data for learning. In some real-time applications, exact training data may be expensive or even impossible to obtain. To solve this problem, a reinforcement neural fuzzy control network (RNFCN) is further proposed. The RNFCN is constructed by integrating two NFCNs, one functioning as a fuzzy predictor and the other as a fuzzy controller. By combining a proposed on-line supervised structure-parameter learning technique, the temporal difference prediction method, and the stochastic exploratory algorithm, a reinforcement learning algorithm is proposed, which can construct a RNFCN automatically and dynamically through a reward-penalty signal (i.e., a "good" or "bad" signal). Two examples are presented to illustrate the performance and applicability of the proposed models and learning algorithms.

3.
Artificial neural networks have, in recent years, been very successfully applied in a wide range of areas. A major reason for this success has been the existence of a training algorithm called backpropagation. This algorithm relies upon the neural units in a network having input/output characteristics that are continuously differentiable. Such units are significantly harder to implement in silicon than neural units with Heaviside (step-function) characteristics. In this paper, we show how a training algorithm similar to backpropagation can be developed for 2-layer networks of Heaviside units by treating the network weights (i.e., interconnection strengths) as random variables. This is then used as a basis for the development of a training algorithm for networks with any number of layers by drawing upon the idea of internal representations. Some examples are given to illustrate the performance of these learning algorithms.
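One standard way to make step-function units trainable, along the lines this abstract hints at, is to treat the weights as random variables so that the firing probability becomes a smooth function of their means. The sketch below assumes independent Gaussian weights; it illustrates the idea rather than the paper's exact formulation:

```python
from math import erf, sqrt

import numpy as np

def heaviside_unit_prob(mu, sigma, x):
    """Firing probability P(w . x > 0) for a Heaviside unit whose weights w_i
    are independent Gaussians N(mu_i, sigma_i^2): the pre-activation w . x is
    then Gaussian, so the probability is smooth and differentiable in mu."""
    mean = float(np.dot(mu, x))
    var = float(np.dot(np.asarray(sigma) ** 2, np.asarray(x) ** 2))
    return 0.5 * (1.0 + erf(mean / sqrt(2.0 * var + 1e-12)))
```

Gradients of this probability with respect to `mu` can then drive a backpropagation-style update even though each realized unit is a hard step.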

4.
Operations and other business decisions often depend on accurate time-series forecasts. These time series usually consist of trend-cycle, seasonal, and irregular components. Existing methodologies attempt to first identify and then extrapolate these components to produce forecasts. The proposed process partners this decomposition procedure with neural network methodologies to combine the strengths of economics, statistics, and machine learning research. Stacked generalization first uses transformations and decomposition to pre-process a time series. Then a time-delay neural network receives the resulting components as inputs. The outputs of this neural network are then input to a backpropagation algorithm that synthesizes the processed components into a single forecast. Genetic algorithms guide the architecture selection for both the time-delay and backpropagation neural networks. The empirical examples used in this study reveal that the combination of transformation, feature extraction, and neural networks through stacked generalization gives more accurate forecasts than classical decomposition or ARIMA models. Scope and purpose: The research reported in this paper examines two concurrent issues. The first evaluates the performance of neural networks in forecasting time series. The second assesses the use of stacked generalization as a way of refining this process. The methodology is applied to four economic and business time series. Those studying time series and neural networks, particularly in terms of combining tools from the statistical community with neural network technology, will find this paper relevant.
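A minimal sketch of the classical additive decomposition such a pipeline would use as its pre-processing step; the period and the centred moving-average trend estimator are common textbook defaults assumed here, not details taken from the paper:

```python
import numpy as np

def decompose(series, period=12):
    """Classical additive decomposition: a centred moving-average trend,
    a periodic seasonal component, and the irregular remainder."""
    n = len(series)
    k = period // 2
    trend = np.full(n, np.nan)
    for t in range(k, n - k):
        window = series[t - k : t + k + 1].copy()
        window[0] *= 0.5            # half-weight the end points so the
        window[-1] *= 0.5           # moving average is centred (even period)
        trend[t] = window.sum() / period
    detrended = series - trend
    # Average the detrended values at each phase of the period.
    seasonal = np.array([np.nanmean(detrended[i::period]) for i in range(period)])
    seasonal -= seasonal.mean()     # seasonal effects sum to zero
    seasonal_full = np.resize(seasonal, n)
    irregular = series - trend - seasonal_full
    return trend, seasonal_full, irregular
```

In the proposed process the three recovered components, rather than the raw series, become the inputs to the time-delay network.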

5.
6.
This paper introduces a new concept of the connection weight to the standard recurrent neural networks—Elman and Jordan networks. The architecture of the modified networks is the same as that of the original recurrent neural networks. However, unlike the original recurrent neural networks whose connection weight is a single real number, in the modified networks the weight of each connection is multi-valued, depending on the value of the input data involved. The backpropagation learning algorithm is also modified to suit the proposed concept. The modified networks have been benchmarked against the feedforward neural network and the original recurrent neural networks. The experimental results on twelve benchmark problems show that the modified networks are clearly superior to the other three methods.

7.
8.
9.
This study compares the predictive performance of three neural network methods, namely the learning vector quantization, the radial basis function, and the feedforward network that uses the conjugate gradient optimization algorithm, with the performance of the logistic regression and the backpropagation algorithm. All these methods are applied to a dataset of 139 matched pairs of bankrupt and non-bankrupt US firms for the period 1983–1994. The results of this study indicate that the contemporary neural network methods applied in the present study provide superior results to those obtained from the logistic regression method and the backpropagation algorithm.

10.
1. Introduction. The feedforward Multilayer Perceptron (MLP) is one of the most widely used artificial neural network models. Its fields of application include pattern recognition, identification and control of dynamic systems, system modeling, nonlinear prediction of time series, etc. [1–4], founded on its nonlinear function approximation capability. Research on this type of network has been stimulated since the discovery and popularization of the Backpropagation learning…

11.
The online gradient algorithm is widely used as a learning algorithm for feedforward neural network training. In this paper, we prove a weak convergence theorem for an online gradient algorithm with a penalty term, assuming that the training examples are presented in a stochastic order. The monotonicity of the error function over the iterations and the boundedness of the weights are both guaranteed. We also present a numerical experiment to support our results.

12.
This research deals with complementary neural networks (CMTNN) for the regression problem. Complementary neural networks consist of a pair of neural networks called the truth neural network and the falsity neural network, which are trained to predict truth and falsity outputs, respectively. In this paper, a novel adjusted averaging technique is proposed in order to enhance the result obtained from the basic CMTNN. We test our proposed technique on classical benchmark problems including the housing, concrete compressive strength, and computer hardware data sets from the UCI machine learning repository. We also apply our technique to the porosity prediction problem based on a well log data set obtained from practical field data in the oil and gas industry. We found that our proposed technique provides better performance when compared to the traditional CMTNN, backpropagation neural network, and support vector regression with linear, polynomial, and radial basis function kernels.
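The basic CMTNN aggregation that the proposed technique refines can be written in one line. The plain mean below is only the baseline (targets assumed scaled to [0, 1]); the paper's adjusted averaging reweights the two terms:

```python
import numpy as np

def cmtnn_combine(truth_pred, falsity_pred):
    """Baseline CMTNN output: average the truth network's prediction with
    the complement of the falsity network's prediction."""
    return (np.asarray(truth_pred) + (1.0 - np.asarray(falsity_pred))) / 2.0
```

The gap `truth_pred - (1 - falsity_pred)` also gives a simple per-sample disagreement measure between the two networks.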

13.
Neural network ensemble techniques can effectively improve the prediction accuracy and generalization ability of neural networks, and have become a research focus in machine learning and neural computation. Bagging and different neural network algorithms are used to generate ensemble members, partial least squares regression is used to extract ensemble factors from them, and a Bayesian regularized neural network then combines the factors to build a prediction model for the Shanghai Composite Index. A case study on the opening and closing prices of the Shanghai Composite Index shows that the method achieves high prediction accuracy and good stability.

14.
The leave-one-out cross-validation scheme for generalization assessment of neural network models is computationally expensive due to replicated training sessions. In this paper we suggest linear unlearning of examples as an approach to approximate cross-validation. Further, we discuss the possibility of exploiting the ensemble of networks offered by leave-one-out for performing ensemble predictions. We show that the generalization performance of the equally weighted ensemble predictor is identical to that of the network trained on the whole training set. Numerical experiments on the sunspot time series prediction benchmark demonstrate the potential of the linear unlearning technique.

15.
Conventional supervised learning in neural networks is carried out by performing unconstrained minimization of a suitably defined cost function. This approach has certain drawbacks, which can be overcome by incorporating additional knowledge in the training formalism. In this paper, two types of such additional knowledge are examined: network-specific knowledge (associated with the neural network irrespective of the problem whose solution is sought) and problem-specific knowledge (which helps to solve a specific learning task). A constrained optimization framework is introduced for incorporating these types of knowledge into the learning formalism. We present three examples of improvement in the learning behaviour of neural networks using additional knowledge in the context of our constrained optimization framework. The two network-specific examples are designed to improve convergence and learning speed in the broad class of feedforward networks, while the third, problem-specific example is related to the efficient factorization of 2-D polynomials using suitably constructed sigma-pi networks.

16.
The Artificial Bee Colony (ABC) is a swarm intelligence algorithm for optimization that has previously been applied to the training of neural networks. This paper examines more carefully the performance of the ABC algorithm for optimizing the connection weights of feed-forward neural networks for classification tasks, and presents a more rigorous comparison with the traditional Back-Propagation (BP) training algorithm. The empirical results for benchmark problems demonstrate that using the standard “stopping early” approach with optimized learning parameters leads to improved BP performance over the previous comparative study, and that a simple variation of the ABC approach provides improved ABC performance too. With both improvements applied, the ABC approach does perform very well on small problems, but the generalization performances achieved are only significantly better than standard BP on one out of six datasets, and the training times increase rapidly as the size of the problem grows. If different, evolutionary optimized, BP learning rates are allowed for the two layers of the neural network, BP is significantly better than the ABC on two of the six datasets, and not significantly different on the other four.
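As a sketch of how ABC would optimize a weight vector, the simplified colony below minimizes an arbitrary cost function `f`; the colony size, abandonment limit, and search bounds are illustrative defaults, and a real setup would plug in the network's classification error as `f`:

```python
import numpy as np

def abc_minimize(f, dim, n_food=10, limit=20, iters=300, bound=2.0, seed=0):
    """Simplified Artificial Bee Colony: employed and onlooker bees perturb
    food sources (candidate solutions); a scout replaces any source that has
    failed to improve more than `limit` times."""
    rng = np.random.default_rng(seed)
    foods = rng.uniform(-bound, bound, (n_food, dim))
    fit = np.array([f(x) for x in foods])
    trials = np.zeros(n_food, dtype=int)

    def try_neighbour(i):
        k = rng.integers(n_food - 1)
        k = k if k < i else k + 1                   # random partner != i
        j = rng.integers(dim)                       # perturb one dimension
        cand = foods[i].copy()
        cand[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
        fc = f(cand)
        if fc < fit[i]:                             # greedy selection
            foods[i], fit[i], trials[i] = cand, fc, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):                     # employed bee phase
            try_neighbour(i)
        probs = fit.max() - fit + 1e-12             # better fitness -> higher prob
        probs = probs / probs.sum()
        for i in rng.choice(n_food, n_food, p=probs):   # onlooker bee phase
            try_neighbour(i)
        worst = trials.argmax()                     # scout bee phase
        if trials[worst] > limit:
            foods[worst] = rng.uniform(-bound, bound, dim)
            fit[worst] = f(foods[worst])
            trials[worst] = 0
    best = fit.argmin()
    return foods[best], fit[best]
```

The rapidly growing training times the abstract reports follow directly from this structure: every bee move costs a full evaluation of `f`, i.e. a full forward pass over the training set when `f` is a network's error.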

17.
Estimating the extent of the polluted zone after an accidental spill in road transport is essential to assess the risk of water resource contamination and to design remediation plans. This paper presents a metamodel based on artificial neural networks (ANN) for estimating the depth of the contaminated zone and the volume of pollutant infiltrated into a two-layer soil (a silty cover layer protecting a chalky aquifer) after a pollutant discharge at the soil surface. The ANN database is generated using USEPA NAPL-Simulator. For each case the extent of contamination is computed as a function of cover layer permeability and thickness, water table depth, and soil surface–pollutant contact time. Different feedforward artificial neural networks with error backpropagation (BPNN) are trained and tested using subsets of the database, and validated on yet another subset. Their performance is compared with a metamodelling method using multilinear regression approximation. The proposed ANN metamodel is used to assess the risk of a DNAPL pollution reaching the groundwater resource underneath the road axis of a highway project in the north of France.

18.
In this paper, we discuss the visualization of multidimensional data. A well-known procedure for mapping data from a high-dimensional space onto a lower-dimensional one is Sammon’s mapping. This algorithm preserves as well as possible all interpattern distances. We investigate an unsupervised backpropagation algorithm to train a multilayer feed-forward neural network (SAMANN) to perform Sammon’s nonlinear projection. Sammon mapping has a disadvantage: it lacks generalization, which means that new points cannot be added to the obtained map without recalculating it. The SAMANN network offers the ability to generalize, projecting new data, which is not possible with the original Sammon’s projection algorithm. To save computation time without losing mapping quality, we need to select optimal values of the control parameters. In our research the emphasis is put on the optimization of the learning rate. The experiments are carried out on both artificial and real data. Two cases have been analyzed: (1) training of the SAMANN network with the full data set, (2) retraining of the network when new data points appear.
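Sammon's mapping minimizes the stress below, the same distance-preservation objective that SAMANN's unsupervised backpropagation drives down; this is the standard definition, sketched here for reference:

```python
import numpy as np

def sammon_stress(X, Y, eps=1e-12):
    """Sammon's stress: weighted mismatch between pairwise distances in the
    original space X and in the projected space Y (lower is better). Small
    original distances are weighted more heavily via the 1/d* factor."""
    n = len(X)
    num, denom = 0.0, 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d_star = np.linalg.norm(X[i] - X[j])    # original-space distance
            d = np.linalg.norm(Y[i] - Y[j])         # projected-space distance
            denom += d_star
            num += (d_star - d) ** 2 / (d_star + eps)
    return num / (denom + eps)
```

A projection that reproduces every pairwise distance exactly scores zero, which is why a trained SAMANN can also be evaluated by computing this stress on held-out points.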

19.
This paper presents an MLP-type neural network with some fixed connections and a backpropagation-type training algorithm that identifies the full set of solutions of a complete system of nonlinear algebraic equations with n equations and n unknowns. The proposed structure is based on a backpropagation-type algorithm with bias units in the output neuron layer. Its novelty and innovation with respect to similar structures is the use of the hyperbolic tangent output function associated with an interesting feature, the use of an adaptive learning rate for the neurons of the second hidden layer, a feature that adds a high degree of flexibility and parameter tuning during the network training stage. The paper presents the theoretical aspects of this approach as well as a set of experimental results that justify the necessity of such an architecture and evaluate its performance. Copyright © 2015 John Wiley & Sons, Ltd.

20.
A fast bottom-up method for constructing neural networks
A new method for constructing neural networks is presented. The conventional Cascade-Correlation algorithm starts from a minimal network (with no hidden neurons), then adds and trains hidden neurons one at a time until the desired performance is achieved. We propose a fast algorithm in the family of constructive algorithms that starts from an appropriately chosen initial network structure and then repeatedly adds new neurons and associated weights until a satisfactory result is obtained. Experiments show that, compared with the conventional Cascade-Correlation method, this fast method has several advantages: better classification performance, a smaller network structure, and faster learning.
