Similar Documents
20 similar documents were retrieved.
1.
Image compression using neural networks has been attempted with some promise. Among the architectures, feedforward backpropagation networks (FFBPN) have been used in several attempts. Although it has been demonstrated that using the mean quadratic error function is equivalent to applying the Karhunen-Loeve transformation, promise still arises from directed learning possibilities, generalization abilities and the performance of the network once trained. In this paper we propose an architecture and an improved training method, the dynamic autoassociation neural network (DANN), to address some of the shortcomings of traditional data compression systems based on feedforward neural networks trained with backpropagation. The successful application of neural networks to any task requires proper training of the network, and in this research that issue is the main consideration in the design of DANN. We emphasize the convergence of the learning process proposed by DANN. This process provides an escape mechanism, by adding neurons in a random state, to avoid the local-minima trapping seen in traditional FFBPNs. In addition, DANN's training algorithm constrains the error for every pattern to an allowed interval to balance the training across patterns, thus improving recognition and generalization rates. Together, these two mechanisms improve the final quality of the images processed by DANN. The results of several tasks presented to DANN-based compression are compared and contrasted with the performance of an FFBPN-based system applied to the same tasks. These results indicate that DANN is superior to FFBPN when applied to image compression.
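As a rough illustration of the two training mechanisms described above (the per-pattern error interval and the random-neuron escape mechanism), the following numpy sketch applies them to a simple linear autoassociator. The function names, network form and thresholds are illustrative assumptions, not the DANN algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def online_step(W_enc, W_dec, x, lr=0.05, err_lo=1e-3, err_hi=5e-2):
    """One online step of a linear autoassociator x_hat = W_dec @ (W_enc @ x).
    A pattern whose error already lies inside the allowed interval
    [err_lo, err_hi] is skipped, so no single pattern dominates training."""
    h = W_enc @ x
    e = W_dec @ h - x
    mse = float(np.mean(e ** 2))
    if err_lo <= mse <= err_hi:
        return W_enc, W_dec, mse                 # error already in the allowed band
    g_dec = np.outer(e, h)                       # grad of 0.5*||x_hat - x||^2 w.r.t. W_dec
    g_enc = np.outer(W_dec.T @ e, x)             # ... and w.r.t. W_enc (chain rule)
    return W_enc - lr * g_enc, W_dec - lr * g_dec, mse

def grow_hidden_layer(W_enc, W_dec, n_new=2, scale=0.01):
    """Escape mechanism: add hidden neurons in a random state when the loss
    stops decreasing, perturbing the search away from a local minimum."""
    n_in = W_enc.shape[1]
    W_enc = np.vstack([W_enc, scale * rng.standard_normal((n_new, n_in))])
    W_dec = np.hstack([W_dec, scale * rng.standard_normal((n_in, n_new))])
    return W_enc, W_dec

# Example: one pattern, 8 inputs compressed through 3 hidden units.
x = rng.standard_normal(8)
W_enc, W_dec = 0.1 * rng.standard_normal((3, 8)), 0.1 * rng.standard_normal((8, 3))
W_enc, W_dec, mse = online_step(W_enc, W_dec, x)
```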

2.
Artificial neural networks (ANN) have been widely used for both classification and prediction. This paper is focused on the prediction problem in which an unknown function is approximated. ANNs can be viewed as models of real systems, built by tuning parameters known as weights. In training the net, the problem is to find the weights that optimize its performance (i.e., to minimize the error over the training set). Although the most popular method for training these networks is back propagation, other optimization methods such as tabu search or scatter search have been successfully applied to solve this problem. In this paper we propose a path relinking implementation to solve the neural network training problem. Our method uses GRG, a gradient-based local NLP solver, as an improvement phase, while previous approaches used simpler local optimizers. The experimentation shows that the proposed procedure can compete with the best-known algorithms in terms of solution quality, consuming a reasonable computational effort.
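A minimal sketch of the path-relinking idea applied to network weight vectors is given below. The linear path, step count and toy quadratic loss are assumptions made for illustration, and the GRG improvement phase used in the paper is not reproduced.

```python
import numpy as np

def path_relinking(w_init, w_guide, loss, n_steps=10):
    """Walk from an initiating solution toward a guiding solution and keep
    the best intermediate point. loss() is any scalar objective, e.g. the
    training-set error of the network evaluated at a weight vector."""
    best_w, best_f = w_init.copy(), loss(w_init)
    for t in range(1, n_steps + 1):
        w = w_init + (t / n_steps) * (w_guide - w_init)   # point on the path
        f = loss(w)
        if f < best_f:
            best_w, best_f = w.copy(), f
    return best_w, best_f

# Toy usage: relink between two random weight vectors of a quadratic loss.
rng = np.random.default_rng(1)
loss = lambda w: float(np.sum((w - 1.0) ** 2))
w_a, w_b = rng.standard_normal(5), rng.standard_normal(5)
print(path_relinking(w_a, w_b, loss))
```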

3.
Convergence Analysis of the Online Gradient Method for Training Product-Unit Neural Networks
1 Introduction. Traditional feedforward neural networks built only from summation units have been widely applied in fields such as pattern recognition and function approximation. When dealing with relatively complex problems, however, such networks often need a large number of additional hidden nodes, which inevitably increases …
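For readers unfamiliar with product units, the snippet below contrasts a single product unit with the familiar summation unit; it is a generic illustration, not code from the paper.

```python
import numpy as np

def product_unit(x, w):
    """A single product unit: output = prod_i x_i ** w_i, in contrast with a
    summation unit's sum_i w_i * x_i. Inputs are assumed positive so that
    real-valued exponents are well defined."""
    x = np.asarray(x, dtype=float)
    return float(np.prod(x ** w))

# Example: with w = [2, 1] the unit realizes x1^2 * x2 in a single node,
# something a summation-unit network can only approximate with many hidden nodes.
print(product_unit([3.0, 2.0], np.array([2.0, 1.0])))   # 18.0
```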

4.
Different methodologies have been introduced in recent years with the aim of approximating unknown functions. Basically, these methodologies are general frameworks for representing non-linear mappings from several input variables to several output variables. Research into this problem occurs in applied mathematics (multivariate function approximation), statistics (nonparametric multiple regression) and computer science (neural networks). However, since these methodologies have been proposed in different fields, most previous papers treat them in isolation, ignoring contributions in the other areas. In this paper we consider five well-known approaches for function approximation: polynomial approximation, generalized additive models (GAM), local regression (Loess), multivariate adaptive regression splines (MARS) and artificial neural networks (ANN). Neural networks can be viewed as models of real systems, built by tuning parameters known as weights. In training the net, the problem is to find the weights that optimize its performance (i.e., to minimize the error over the training set). Although the most popular method for ANN training is back propagation, other optimization methods based on metaheuristics have recently been adapted to this problem, outperforming classical approaches. In this paper we propose a short-term memory tabu search method, coupled with path relinking and BFGS (a gradient-based local NLP solver), to provide high-quality solutions to this problem. Experimentation with 15 previously reported functions shows that a feed-forward neural network with one hidden layer, trained with our procedure, can compete with the best-known approximating methods. The experimental results also show the effectiveness of a new mechanism for avoiding overfitting in neural network training.

5.
A novel scheme is proposed for the design of backstepping control for a class of state-feedback nonlinear systems. In the design, the unknown nonlinear functions are approximated by neural network (NN) identification models. The Lyapunov function of every subsystem consists of the tracking error and the estimation errors of the NN weight parameters. The adaptive gains are determined dynamically in a structural way instead of being kept constant, which guarantees system stability and parameter-estimation convergence. When the modeling errors are available, an indirect backstepping control is proposed, which guarantees that the functional approximation error converges to a small neighborhood of the minimax functional approximation error. When the modeling errors are not available, a direct backstepping control is proposed, for which only the tracking error is needed. Simulation results show the effectiveness of the proposed schemes.

6.
Deep neural networks have successfully been trained in various application areas with stochastic gradient descent. However, there exists no rigorous mathematical explanation of why this works so well. The training of neural networks with stochastic gradient descent has four different discretization parameters: (i) the network architecture; (ii) the amount of training data; (iii) the number of gradient steps; and (iv) the number of randomly initialized gradient trajectories. While it can be shown that the approximation error converges to zero if all four parameters are sent to infinity in the right order, we demonstrate in this paper that stochastic gradient descent fails to converge for ReLU networks if their depth is much larger than their width and the number of random initializations does not increase to infinity fast enough.
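The sketch below illustrates two of the four discretization parameters, the number of gradient steps and the number of randomly initialized trajectories, by running plain mini-batch SGD on a small one-hidden-layer ReLU network several times and keeping the best run. The architecture, data and hyperparameters are arbitrary choices made for illustration.

```python
import numpy as np

def train_relu_net(X, y, width=16, steps=2000, batch=8, lr=1e-2, seed=0):
    """Plain mini-batch SGD on a one-hidden-layer ReLU network for scalar
    regression; returns the final training MSE for this random initialization."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W1 = rng.standard_normal((d, width)) / np.sqrt(d)
    b1 = np.zeros(width)
    W2 = rng.standard_normal((width, 1)) / np.sqrt(width)
    b2 = np.zeros(1)
    for _ in range(steps):
        idx = rng.integers(0, len(X), size=batch)
        Xb, yb = X[idx], y[idx]
        pre = Xb @ W1 + b1
        h = np.maximum(pre, 0.0)
        err = h @ W2 + b2 - yb                 # (batch, 1) residuals
        gW2 = h.T @ err / batch
        gb2 = err.mean(axis=0)
        dh = (err @ W2.T) * (pre > 0)          # backprop through the ReLU
        gW1 = Xb.T @ dh / batch
        gb1 = dh.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    pred = np.maximum(X @ W1 + b1, 0.0) @ W2 + b2
    return float(np.mean((pred - y) ** 2))

# Several independently initialized gradient trajectories; keep the best one.
rng = np.random.default_rng(42)
X = rng.uniform(-1, 1, size=(256, 1))
y = np.sin(3 * X)
losses = [train_relu_net(X, y, seed=s) for s in range(5)]
print(min(losses))
```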

7.
Artificial neural networks have, in recent years, been very successfully applied in a wide range of areas. A major reason for this success has been the existence of a training algorithm called backpropagation. This algorithm relies upon the neural units in a network having input/output characteristics that are continuously differentiable. Such units are significantly less easy to implement in silicon than are neural units with Heaviside (step-function) characteristics. In this paper, we show how a training algorithm similar to backpropagation can be developed for 2-layer networks of Heaviside units by treating the network weights (i.e., interconnection strengths) as random variables. This is then used as a basis for the development of a training algorithm for networks with any number of layers by drawing upon the idea of internal representations. Some examples are given to illustrate the performance of these learning algorithms.
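The idea of treating the weights as random variables can be illustrated as follows: if the weights of a Heaviside unit are Gaussian, the unit's expected output is a smooth function of the mean weights, which a gradient-style rule can then adjust. The sketch below shows only this basic observation, not the paper's full algorithm; the names and the choice of an isotropic Gaussian are assumptions.

```python
import numpy as np
from math import erf, sqrt

def heaviside_unit(w, x):
    """Hard threshold unit: 1 if w . x > 0, else 0 (not differentiable in w)."""
    return 1.0 if np.dot(w, x) > 0 else 0.0

def expected_output(mu, sigma, x):
    """With random weights w ~ N(mu, sigma^2 I), the preactivation w . x is
    N(mu . x, sigma^2 ||x||^2), so the expected output of the Heaviside unit
    is Phi(mu . x / (sigma ||x||)), a smooth, differentiable function of mu."""
    z = np.dot(mu, x) / (sigma * np.linalg.norm(x))
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))   # standard normal CDF via erf

x = np.array([1.0, -2.0, 0.5])
mu = np.array([0.3, 0.1, -0.2])
print(heaviside_unit(mu, x), expected_output(mu, 0.5, x))
```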

8.
Online Gradient Methods with a Punishing Term for Neural Networks
1 Introduction. Online gradient methods (OGM, for short) are widely used for training neural networks (cf. [1,2,3,4]). Their iterative convergence for linear models is proved in, e.g., [5,6,7]. A nonlinear model is considered in [8]. During the iterative training procedure, the weights of the network may sometimes (see the next section of this paper) become very large, causing difficulties in the implementation of the network by electronic circuits. A revised error function is presented in [9] to prev…
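A typical form of such a revised (penalized) error function and the corresponding online update is, in commonly used notation that is assumed here rather than taken from the paper (E is the training error, E_{x_k} the error on the pattern presented at step k, eta the learning rate and lambda > 0 the penalty coefficient):

```latex
\[
E_\lambda(w) = E(w) + \tfrac{\lambda}{2}\lVert w\rVert^{2},
\qquad
w^{k+1} = w^{k} - \eta\bigl(\nabla E_{x_k}(w^{k}) + \lambda\, w^{k}\bigr).
\]
```

The extra term lambda w pulls every weight toward zero at each step, which is what keeps the weights from growing without bound during training.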

9.
The aim of this paper is to investigate approximation operators, built from a logarithmic sigmoidal function, for a class of neural networks with two weights, together with a class of quasi-interpolation operators. Using these operators as approximation tools, upper bounds on the errors in approximating continuous functions are estimated.
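One standard way to build such operators from the logistic sigmoid, shown here purely for illustration (the kernel and the explicit normalization are assumptions, not necessarily the construction used in the paper), is to difference two shifted sigmoids into a bell-shaped kernel and form a normalized quasi-interpolant:

```python
import numpy as np

def sigma(t):
    """Logistic sigmoid."""
    return 1.0 / (1.0 + np.exp(-t))

def psi(t):
    """Bell-shaped kernel obtained by differencing two shifted sigmoids."""
    return sigma(t + 1.0) - sigma(t - 1.0)

def quasi_interpolant(f, n, x):
    """Normalized quasi-interpolation operator
    F_n(f)(x) = sum_k f(k/n) psi(n x - k) / sum_k psi(n x - k),
    using only nodes k/n near x (psi decays exponentially)."""
    k = np.arange(int(np.floor(n * x)) - 10, int(np.ceil(n * x)) + 11)
    w = psi(n * x - k)
    return float(np.sum(f(k / n) * w) / np.sum(w))

# The approximation error at a fixed point shrinks as n grows.
f, x = np.cos, 0.7
for n in (5, 50, 500):
    print(n, abs(quasi_interpolant(f, n, x) - f(x)))
```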

10.
The estimation of multivariate regression functions from bounded i.i.d. data is considered. The L2 error with integration with respect to the design measure is used as an error criterion. The distribution of the design is assumed to be concentrated on a finite set. Neural network estimates are defined by minimizing the empirical L2 risk over various sets of feedforward neural networks. Nonasymptotic bounds on the L2 error of these estimates are presented. The results imply that neural networks are able to adapt to additive regression functions and to regression functions which are a sum of ridge functions, and hence are able to circumvent the curse of dimensionality in these cases.
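In the notation usual for this setting (the symbols are assumed, not quoted from the paper), the estimate and the error criterion referred to above are

```latex
\[
\hat m_n = \operatorname*{arg\,min}_{f \in \mathcal{F}_n}
\frac{1}{n}\sum_{i=1}^{n}\bigl(f(X_i) - Y_i\bigr)^{2},
\qquad
\int \bigl|\hat m_n(x) - m(x)\bigr|^{2}\,\mu(dx),
\]
```

where F_n is a set of feedforward neural networks, m is the true regression function and mu is the design measure.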

11.
12.
In this paper we propose a nonmonotone approach to recurrent neural networks training for temporal sequence processing applications. This approach allows learning performance to deteriorate in some iterations, nevertheless the network’s performance is improved over time. A self-scaling BFGS is equipped with an adaptive nonmonotone technique that employs approximations of the Lipschitz constant and is tested on a set of sequence processing problems. Simulation results show that the proposed algorithm outperforms the BFGS as well as other methods previously applied to these sequences, providing an effective modification that is capable of training recurrent networks of various architectures.
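A standard nonmonotone acceptance rule of the kind referred to above is the Grippo-Lampariello-Lucidi condition (the paper's adaptive, Lipschitz-based variant is not reproduced here): a step alpha_k along the search direction d_k is accepted when

```latex
\[
f(x_k + \alpha_k d_k) \le \max_{0 \le j \le \min(k,\,M)} f(x_{k-j})
+ c\,\alpha_k\,\nabla f(x_k)^{\top} d_k,
\qquad c \in (0,1),
\]
```

so the objective may rise relative to the most recent iterate as long as it stays below the worst of the last M+1 function values.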

13.
Training neural networks with noisy data as an ill-posed problem
This paper is devoted to the analysis of network approximation in the framework of approximation and regularization theory. It is shown that training neural networks and similar network approximation techniques are equivalent to least-squares collocation for a corresponding integral equation with mollified data. Results about convergence and convergence rates for exact data are derived based upon well-known convergence results about least-squares collocation. Finally, the stability properties with respect to errors in the data are examined and stability bounds are obtained, which yield rules for the choice of the number of network elements.

14.
In this paper, the technique of approximate partition of unity is used to construct a class of neural network operators with sigmoidal functions. Using the modulus of continuity of a function as a metric, …

15.
We propose a novel algorithm, based on physics-informed neural networks (PINNs), to efficiently approximate solutions of nonlinear dispersive PDEs such as the KdV-Kawahara, Camassa-Holm and Benjamin-Ono equations. The stability of solutions of these dispersive PDEs is leveraged to prove rigorous bounds on the resulting error. We present several numerical experiments to demonstrate that PINNs can approximate solutions of these dispersive PDEs very accurately.
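Schematically, and using the KdV equation u_t + u u_x + u_{xxx} = 0 as an example (one common normalization; boundary terms are omitted and the weighting of the terms is an assumption), a PINN minimizes a composite loss of the form

```latex
\[
\mathcal{L}(\theta) =
\frac{1}{N_r}\sum_{i=1}^{N_r}
\bigl|\partial_t u_\theta + u_\theta\,\partial_x u_\theta + \partial_x^{3} u_\theta\bigr|^{2}(x_i, t_i)
+
\frac{1}{N_0}\sum_{j=1}^{N_0}
\bigl|u_\theta(x_j, 0) - u_0(x_j)\bigr|^{2}
\]
```

over the parameters theta of the network u_theta, where the first sum runs over interior collocation points and the second over the initial data.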

16.
This paper is concerned with the delay-dependent exponential robust filtering problem for switched Hopfield neural networks with time-delay. A new delay-dependent switched exponential robust filter is proposed that results in an exponentially stable filtering error system with a guaranteed robust performance. The design of the switched exponential robust filter for these types of neural networks can be achieved by solving a linear matrix inequality (LMI), which can be easily facilitated using standard numerical packages. An illustrative example is given to demonstrate the effectiveness of the proposed filter.
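As an illustration of how an LMI of this general kind can be handled with standard numerical packages, the sketch below solves a much simpler Lyapunov-stability LMI with cvxpy. The matrices and tolerances are assumed example data; the paper's filter-design LMI is considerably more involved.

```python
import cvxpy as cp
import numpy as np

# Illustrative only: find P > 0 certifying stability of x' = A x via A'P + PA < 0.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])        # a Hurwitz matrix (eigenvalues -1 and -2)
eps = 1e-3

P = cp.Variable((2, 2), symmetric=True)
constraints = [P >> eps * np.eye(2),                       # P positive definite
               A.T @ P + P @ A << -eps * np.eye(2)]        # Lyapunov inequality
prob = cp.Problem(cp.Minimize(0), constraints)             # pure feasibility problem
prob.solve()
print(prob.status, P.value)
```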

17.
The supervisor and searcher cooperation framework (SSC), introduced in Refs. 1 and 2, provides an effective way to design efficient optimization algorithms combining the desirable features of the two existing ones. This work aims to develop efficient algorithms for a wide range of noisy optimization problems including those posed by feedforward neural networks training. It introduces two basic SSC algorithms. The first seems suited for generic problems. The second is motivated by neural networks training problems. It introduces also inexact variants of the two algorithms, which seem to possess desirable properties. It establishes general theoretical results about the convergence and speed of SSC algorithms and illustrates their appealing attributes through numerical tests on deterministic, stochastic, and neural networks training problems.

18.
Error Estimates for Interpolation Neural Networks in Metric Spaces
This paper studies interpolation and approximation by neural networks in metric spaces. A class of generalized activation functions is first introduced, and the existence of interpolation neural networks in metric spaces is discussed by a fairly concise method; error estimates for the approximation of continuous functions by interpolation neural networks are then given.

19.
Neural networks have been widely used as a promising method for time series forecasting. However, limited empirical studies on seasonal time series forecasting with neural networks yield mixed results. While some find that neural networks are able to model seasonality directly and prior deseasonalization is not necessary, others conclude just the opposite. In this paper, we investigate the issue of how to effectively model time series with both seasonal and trend patterns. In particular, we study the effectiveness of data preprocessing, including deseasonalization and detrending, on neural network modeling and forecasting performance. Both simulation and real data are examined and results are compared to those obtained from the Box–Jenkins seasonal autoregressive integrated moving average models. We find that neural networks are not able to capture seasonal or trend variations effectively with the unpreprocessed raw data and either detrending or deseasonalization can dramatically reduce forecasting errors. Moreover, a combined detrending and deseasonalization is found to be the most effective data preprocessing approach.
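A minimal sketch of the combined detrending and deseasonalization preprocessing discussed above, assuming an additive linear trend and monthly seasonality (the function names and the toy series are illustrative assumptions):

```python
import numpy as np

def detrend_deseasonalize(y, period=12):
    """Remove a linear trend and additive seasonal indices before feeding the
    series to a neural network; return the residuals plus everything needed
    to put the components back after forecasting."""
    t = np.arange(len(y), dtype=float)
    slope, intercept = np.polyfit(t, y, 1)                 # fitted linear trend
    detrended = y - (slope * t + intercept)
    seasonal = np.array([detrended[k::period].mean() for k in range(period)])
    residual = detrended - seasonal[np.arange(len(y)) % period]
    return residual, (slope, intercept, seasonal)

def restore(resid, t, params, period=12):
    """Add the trend and seasonal component back to model output at times t."""
    slope, intercept, seasonal = params
    return resid + slope * t + intercept + seasonal[t % period]

# Toy monthly series: trend + seasonality + noise.
rng = np.random.default_rng(0)
n = 120
t = np.arange(n)
y = 0.5 * t + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, n)
resid, params = detrend_deseasonalize(y, period=12)
print(resid.std(), np.abs(restore(resid, t, params) - y).max())   # second value ~ 0
```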

20.
The online gradient method has been widely used as a learning algorithm for training feedforward neural networks. A penalty term is often introduced into the training procedure to improve the generalization performance and to decrease the magnitude of the network weights. In this paper, some weight-boundedness and deterministic convergence theorems are proved for the online gradient method with a penalty for a BP neural network with one hidden layer, assuming that the training samples are supplied to the network in a fixed order within each epoch. The monotonicity of the error function with the penalty is also guaranteed during the training iteration. Simulation results for a 3-bit parity problem are presented to support our theoretical results.
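A minimal numpy sketch of online gradient training with a weight-decay penalty on a one-hidden-layer sigmoid network for the 3-bit parity problem is given below. The hyperparameters, seed and network width are arbitrary illustrative choices, and convergence on parity is not guaranteed for every initialization.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 3-bit parity: target is 1 when the number of ones in the input is odd.
X = np.array([[a, b, c] for a in (0, 1) for b in (0, 1) for c in (0, 1)], float)
Y = X.sum(axis=1) % 2

rng = np.random.default_rng(0)
H, lr, lam = 6, 0.5, 1e-4
W1 = rng.normal(0, 0.5, (3, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, H);      b2 = 0.0

for epoch in range(10000):
    for x, y in zip(X, Y):                       # fixed pattern order within each epoch
        h = sigmoid(x @ W1 + b1)
        o = sigmoid(h @ W2 + b2)
        delta_o = (o - y) * o * (1 - o)          # output delta for the loss 0.5*(o - y)^2
        delta_h = delta_o * W2 * h * (1 - h)     # hidden-layer deltas
        # Online gradient step with the weight-decay penalty term lam * w.
        W2 -= lr * (delta_o * h + lam * W2); b2 -= lr * delta_o
        W1 -= lr * (np.outer(x, delta_h) + lam * W1); b1 -= lr * delta_h

pred = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
print(pred, Y.astype(int))
```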

