Similar Documents
20 similar documents found.
1.
This paper presents an efficient approach based on a recurrent neural network for solving nonlinear optimization problems. More specifically, a modified Hopfield network is developed and its internal parameters are computed using the valid-subspace technique. These parameters guarantee convergence of the network to equilibrium points that represent an optimal feasible solution. The main advantage of the developed network is that it treats the optimization and constraint terms in separate stages, without interference between them. Moreover, the proposed approach does not require the specification of penalty or weighting parameters for its initialization. A study of the modified Hopfield model is also developed to analyze its stability and convergence. Simulation results are provided to demonstrate the performance of the proposed neural network.
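A minimal sketch of the two-stage idea described above, assuming a small equality-constrained quadratic program as the target; the gradient step, pseudoinverse projection, and step size are illustrative stand-ins, not the paper's valid-subspace construction.

```python
import numpy as np

# Hopfield-style dynamics for min 0.5 x^T Q x + c^T x  s.t.  A x = b, x >= 0.
# Each iteration alternates an energy-descent stage with a constraint stage
# that projects the state back onto {x : A x = b} and the positive orthant.

def hopfield_qp(Q, c, A, b, steps=5000, dt=1e-3):
    x = np.zeros(Q.shape[0])
    A_pinv = np.linalg.pinv(A)                 # projector ingredient: x -> x - A^+(Ax - b)
    for _ in range(steps):
        x = x - dt * (Q @ x + c)               # optimization (energy descent) stage
        x = x - A_pinv @ (A @ x - b)           # constraint ("valid subspace") stage
        x = np.clip(x, 0.0, None)              # keep the state in the positive orthant
    return x

Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, -4.0])
A = np.array([[1.0, 1.0]]); b = np.array([2.0])
print(hopfield_qp(Q, c, A, b))                 # approx [0.5, 1.5]
```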

2.
In this paper we show that any increasing functional of the first k eigenvalues of the Dirichlet Laplacian admits a (quasi-)open minimizer among the subsets of ℝ^N of unit measure. In particular, there exists such a minimizer which is bounded, where the bound depends on k and N, but not on the functional.

3.
Reformulations of a generalization of a second-order cone complementarity problem (GSOCCP) as optimization problems are introduced, which preserve differentiability. Equivalence results are proved in the sense that the global minimizers of the reformulations with zero objective value are solutions to the GSOCCP and vice versa. Since the optimization problems involved include only simple constraints, a whole range of minimization algorithms may be used to solve the equivalent problems. Taking into account that optimization algorithms usually seek stationary points, a theoretical result is established that ensures equivalence between stationary points of the reformulation and solutions to the GSOCCP. Numerical experiments are presented that illustrate the advantages and disadvantages of the reformulations. Supported by FAPESP (01/04597-4), CNPq, PRONEX-Optimization, FAEPEX-Unicamp.
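For intuition, a sketch of the classical natural-residual reformulation for a plain SOCCP (the paper treats a generalization and constructs differentiable merit functions; the toy affine map F below is assumed for illustration):

```python
import numpy as np

# SOCCP: find x in K with F(x) in K and <x, F(x)> = 0, where
# K = {(x0, xbar) : x0 >= ||xbar||}. The natural residual
# r(x) = x - P_K(x - F(x)) vanishes exactly at solutions, so
# minimizing ||r(x)||^2 is an equivalent optimization problem.

def proj_soc(z):
    z0, zbar = z[0], z[1:]
    nz = np.linalg.norm(zbar)
    if nz <= z0:                        # already inside the cone
        return z
    if nz <= -z0:                       # inside the polar cone: project to 0
        return np.zeros_like(z)
    t = (z0 + nz) / 2.0                 # projection onto the cone boundary
    return np.concatenate(([t], t * zbar / nz))

F = lambda x: x - np.array([1.0, 0.3, 0.0])   # toy affine map (assumed)
x = np.zeros(3)
for _ in range(200):
    r = x - proj_soc(x - F(x))          # natural residual
    x = x - 0.5 * r                     # damped fixed-point iteration
print(x, np.linalg.norm(x - proj_soc(x - F(x))))   # x -> (1, 0.3, 0), residual -> 0
```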

4.
We present and analyze novel hierarchical a posteriori error estimates for a self-adjoint elliptic obstacle problem. Under a suitable saturation assumption, we prove the efficiency and reliability of our hierarchical estimates. The proof is based upon some new observations on the efficiency of some hierarchical error indicators. These new observations allow us to remove an additional regularity condition on the underlying grid required in the previous analysis. Numerical computations confirm our theoretical findings.

5.
The problem considered is as follows: m resources are to be allocated to n activities, with resource i contributing linearly to the potential for activity j according to the coefficient E(i,j). The objective is to minimize some nonlinear function of the potentials. If the objective function is sufficiently well behaved, the problem can be solved in finitely many steps using the method described in this paper. The author thanks S. Toi Lawphongpanich for several helpful references and suggestions.
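A hedged illustration of the problem class, assuming a least-squares-type objective and using a general-purpose solver rather than the paper's finite-step method; all data below are invented:

```python
import numpy as np
from scipy.optimize import minimize

# Resource i contributes E[i, j] * x[i, j] to the potential of activity j;
# minimize a smooth function of the potentials subject to using each
# resource fully: sum_j x[i, j] = r[i], x >= 0.

m, n = 2, 3
rng = np.random.default_rng(0)
E = rng.uniform(0.5, 1.5, size=(m, n))
r = np.array([1.0, 2.0])                       # available amount of each resource
target = np.array([1.0, 1.0, 1.0])

def objective(x_flat):
    x = x_flat.reshape(m, n)
    potentials = (E * x).sum(axis=0)           # linear contribution per activity
    return ((potentials - target) ** 2).sum()  # a smooth function of the potentials

cons = [{"type": "eq", "fun": lambda x_flat, i=i: x_flat.reshape(m, n)[i].sum() - r[i]}
        for i in range(m)]
res = minimize(objective, np.full(m * n, 0.5), constraints=cons,
               bounds=[(0, None)] * (m * n))
print(res.x.reshape(m, n), res.fun)
```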

6.
Research on a Macroeconomic Forecasting Model System
To meet the practical needs of macroeconomic management in China, and aimed at medium- and long-term forecasting and planning for national and regional economies, this paper establishes a macroeconomic forecasting model system built around input-output models and artificial neural network models, combined with optimization techniques. The system has been applied to the estimation of macroeconomic indicators for a city over the Tenth Five-Year Plan period, and the forecast results were adopted by the government planning department in formulating the Tenth Five-Year Plan.
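A minimal sketch of the input-output (Leontief) core on which such a system rests; the coefficient matrix and demand vector below are invented for illustration and are not from the cited study:

```python
import numpy as np

# Leontief input-output relation: gross output x satisfies x = A x + d,
# i.e. x = (I - A)^{-1} d for technical-coefficient matrix A and final
# demand d; forecasting d (e.g. with a neural network) then yields the
# sectoral outputs the plan must support.

A = np.array([[0.2, 0.3],
              [0.1, 0.4]])      # inter-industry technical coefficients (illustrative)
d = np.array([100.0, 150.0])    # forecast final demand by sector (illustrative)
x = np.linalg.solve(np.eye(2) - A, d)
print(x)                        # gross output required of each sector
```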

7.
Constructing neural networks for function approximation is a classical and longstanding topic in approximation theory. In this paper, we aim at constructing deep neural networks with three hidden layers, using a sigmoidal activation function, to approximate smooth and sparse functions. Specifically, we prove that the constructed deep nets, with controllable magnitude of free parameters, can reach the optimal approximation rate in approximating both smooth and sparse functions. In particular, we prove that neural networks with three hidden layers can avoid the phenomenon of saturation, i.e., the phenomenon in which, for some neural network architectures, the approximation rate stops improving for functions of very high smoothness.
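A sketch of the architecture in question, a feedforward net with three sigmoidal hidden layers; the widths and (random) weights are placeholders, since the paper's point is the existence of parameter choices achieving optimal rates:

```python
import numpy as np

# A three-hidden-layer sigmoidal network with a linear output layer.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def three_layer_net(x, params):
    (W1, b1), (W2, b2), (W3, b3), (w_out, b_out) = params
    h = sigmoid(W1 @ x + b1)            # first hidden layer
    h = sigmoid(W2 @ h + b2)            # second hidden layer
    h = sigmoid(W3 @ h + b3)            # third hidden layer
    return w_out @ h + b_out            # linear output layer

rng = np.random.default_rng(1)
widths = [1, 16, 16, 16]                # input dimension 1, three hidden layers
params = [(rng.normal(size=(widths[i + 1], widths[i])), rng.normal(size=widths[i + 1]))
          for i in range(3)]
params.append((rng.normal(size=16), rng.normal()))
print(three_layer_net(np.array([0.3]), params))
```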

8.
We use sextic spline functions to develop a numerical method for the solution of systems of second-order boundary-value problems associated with obstacle, unilateral, and contact problems. We show that the approximate solutions obtained by the present method are better than those produced by other collocation, finite difference, and spline methods. A numerical example is given to illustrate the practical usefulness of our method.

9.
Randomize-then-optimize (RTO) is widely used for sampling from posterior distributions in Bayesian inverse problems. However, RTO can be computationally intensive for complex problems due to repeated evaluations of the expensive forward model and its gradient. In this work, we present a novel goal-oriented deep neural network (DNN) surrogate approach that substantially reduces the computational burden of RTO. In particular, we propose to draw the training points for the DNN surrogate from a local approximation of the posterior distribution, yielding a flexible and efficient sampling algorithm that converges to the direct RTO approach. We present a Bayesian inverse problem governed by elliptic PDEs to demonstrate the accuracy and efficiency of our DNN-RTO approach, which shows that DNN-RTO can significantly outperform traditional RTO.
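A toy sketch of the surrogate idea only (not the paper's goal-oriented construction or the RTO sampler itself): replace an expensive forward model with a network fitted on points near an assumed posterior mode. The forward model and mode below are invented.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def forward_model(theta):                  # stand-in for an expensive PDE solve
    return np.sin(3.0 * theta) + 0.5 * theta

theta_map = 0.4                            # approximate posterior mode (assumed known)
train_theta = theta_map + 0.3 * np.random.default_rng(2).standard_normal(200)
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
surrogate.fit(train_theta.reshape(-1, 1), forward_model(train_theta))

# The cheap surrogate can now be evaluated (and differentiated) inside
# repeated optimization-based sampling instead of the true model.
test = np.array([[0.35], [0.45]])
print(surrogate.predict(test), forward_model(test.ravel()))
```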

10.
Regularization is typically based on the choice of some parametric family of nearby solutions, and the choice of this family is a task in itself. A suitable parameter must then be chosen in order to find an approximation of good quality. We focus on this second task. There exist deterministic and stochastic models for describing noise and solutions in inverse problems. We establish a unified framework for treating different settings for the analysis of inverse problems, which allows us to prove the convergence and optimality of parameter choice schemes based on minimization in a generic way. We show that the well-known quasi-optimality criterion falls into this class. Furthermore, we present a new parameter choice method and prove its convergence using this newly established tool.
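A sketch of the quasi-optimality criterion mentioned above, applied to Tikhonov regularization on synthetic data; the test problem and parameter grid are assumptions for illustration:

```python
import numpy as np

# Quasi-optimality: compute regularized solutions x_k on a geometric grid
# of parameters alpha_k and pick the k minimizing ||x_{k+1} - x_k||.

rng = np.random.default_rng(3)
A = rng.normal(size=(50, 20)) / np.sqrt(50)
x_true = rng.normal(size=20)
y = A @ x_true + 0.05 * rng.normal(size=50)     # noisy data

alphas = np.geomspace(1e-6, 1e1, 40)
sols = [np.linalg.solve(A.T @ A + a * np.eye(20), A.T @ y) for a in alphas]
diffs = [np.linalg.norm(sols[k + 1] - sols[k]) for k in range(len(sols) - 1)]
k_star = int(np.argmin(diffs))                  # quasi-optimal index
print("chosen alpha:", alphas[k_star],
      "error:", np.linalg.norm(sols[k_star] - x_true))
```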

11.
12.
Implicit-explicit (IMEX) linear multistep methods combine an implicit linear multistep method with an explicit one. This paper discusses the error analysis of IMEX linear multistep methods for solving nonlinear stiff initial-value problems satisfying a one-sided Lipschitz condition, as well as a class of singularly perturbed initial-value problems. Finally, numerical examples verify the correctness of the theoretical results and the effectiveness of the methods for these two classes of problems.
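A minimal sketch of the simplest member of this family, first-order IMEX Euler for y' = λy + g(t, y), with the stiff linear part treated implicitly and g explicitly; the coefficients below are illustrative, and the paper's higher-order multistep schemes are not reproduced:

```python
import numpy as np

# IMEX Euler: (y_{n+1} - y_n) / h = lam * y_{n+1} + g(t_n, y_n),
# so each step solves the stiff part implicitly while the nonstiff
# term is evaluated at the old state.

lam = -1000.0                       # stiff coefficient
g = lambda t, y: np.sin(t)          # nonstiff term, handled explicitly

h, T = 1e-2, 2.0
t, y = 0.0, 1.0
while t < T - 1e-12:
    y = (y + h * g(t, y)) / (1.0 - h * lam)   # implicit in lam*y, explicit in g
    t += h
print(y)                            # stable even though |lam| * h >> 1
```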

13.
Conventional supervised learning in neural networks is carried out by performing unconstrained minimization of a suitably defined cost function. This approach has certain drawbacks, which can be overcome by incorporating additional knowledge in the training formalism. In this paper, two types of such additional knowledge are examined: network-specific knowledge (associated with the neural network irrespective of the problem whose solution is sought) and problem-specific knowledge (which helps to solve a specific learning task). A constrained optimization framework is introduced for incorporating these types of knowledge into the learning formalism. We present three examples of improvement in the learning behaviour of neural networks using additional knowledge in the context of our constrained optimization framework. The two network-specific examples are designed to improve convergence and learning speed in the broad class of feedforward networks, while the third, problem-specific example is related to the efficient factorization of 2-D polynomials using suitably constructed sigma-pi networks.
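A toy quadratic-penalty version of constraint-augmented training, shown on a linear model for brevity; the constraint, penalty weight, and step size are assumptions, not the paper's framework:

```python
import numpy as np

# Minimize the data loss L(w) while softly enforcing a network-specific
# constraint c(w) = 0 (here: keeping the squared weight norm at 4) via
# the penalty term mu * c(w)^2.

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5])

w = rng.normal(size=3)
mu = 10.0                                 # penalty weight for the constraint
for _ in range(2000):
    grad_loss = 2 * X.T @ (X @ w - y) / len(y)
    c = w @ w - 4.0                       # constraint value c(w)
    grad_pen = mu * 2 * c * (2 * w)       # gradient of mu * c(w)^2
    w -= 1e-3 * (grad_loss + grad_pen)
print(w, w @ w)                           # fitted weights, norm^2 near 4
```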

14.
For a strictly convex integrand f : ℝ^n → ℝ with linear growth, we discuss the variational problem among mappings u : ℝ^n ⊃ Ω → ℝ of Sobolev class W^1_1 with zero trace that satisfy, in addition, u ≥ ψ for a given function ψ with ψ|_∂Ω < 0. We introduce a natural dual problem which admits a unique maximizer σ. In further sections the smoothness of σ is investigated using a special J-minimizing sequence with limit u* ∈ C^{1,α}(Ω), for which the duality relation holds.

15.
16.
17.
We give an existence result for the obstacle parabolic equations

∂b(x,u)/∂t − div(a(x,t,u,∇u)) + div(φ(x,t,u)) = f in Q_T,

where b(x,u) is a bounded function of u, the term −div(a(x,t,u,∇u)) is a Leray–Lions type operator, and the function φ is a nonlinear lower-order term satisfying only a growth condition. The right-hand side f belongs to L^1(Q_T). The proof of existence of a solution is based on penalization methods.
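A 1-D stationary sketch of the penalization idea behind the proof, assuming the model problem −u″ = f with an obstacle ψ; the semi-implicit (active-set style) handling of the penalty term (1/ε)(ψ − u)₊ is one standard way to keep the iteration stable for small ε:

```python
import numpy as np

# Penalized obstacle problem: -u'' + (1/eps) * 1_{u < psi} * (u - psi) = f,
# u(0) = u(1) = 0. The indicator (active set) is updated between solves.

n, eps = 199, 1e-8
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
f = -8.0 * np.ones(n)                    # load pushing u down onto the obstacle
psi = -0.3 + 1.2 * (x - 0.5) ** 2        # obstacle, below the boundary values

A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h ** 2   # finite-difference -d^2/dx^2

u = np.zeros(n)
for _ in range(50):
    active = (u < psi).astype(float)     # where the penalty is switched on
    M = A + np.diag(active / eps)
    u_new = np.linalg.solve(M, f + active * psi / eps)
    if np.allclose(u_new, u):
        break
    u = u_new
print(float((u - psi).min()))            # ~0 up to O(eps): u sits on or above psi
```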

18.
Based on a modified Lagrangian function containing a control parameter, this paper establishes a modified Lagrangian algorithm for solving nonlinear constrained optimization problems. Under suitable conditions, it is proved that there exists a threshold for the control parameter such that, whenever the control parameter is below this threshold, the sequence of solutions generated by the algorithm converges locally to a Kuhn–Tucker point of the problem; an error bound on the solutions is also established. Finally, numerical results are given for several constrained optimization problems.
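For orientation, a generic augmented-Lagrangian iteration on a toy equality-constrained problem; this illustrates the family of methods studied, not the paper's modified Lagrangian or its control-parameter threshold:

```python
import numpy as np
from scipy.optimize import minimize

# min f(x) s.t. h(x) = 0 via repeated unconstrained minimization of
# L(x) = f(x) + lam * h(x) + (rho/2) * h(x)^2, with multiplier updates.

f = lambda z: (z[0] - 1) ** 2 + (z[1] - 2) ** 2
h = lambda z: z[0] + z[1] - 1          # single equality constraint

x, lam, rho = np.zeros(2), 0.0, 10.0
for _ in range(10):
    L = lambda z: f(z) + lam * h(z) + 0.5 * rho * h(z) ** 2
    x = minimize(L, x).x               # inner unconstrained minimization
    lam += rho * h(x)                  # multiplier update
print(x, lam)                          # KKT point: x = (0, 1), lambda = 2
```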

19.
Training neural networks with noisy data as an ill-posed problem
This paper is devoted to the analysis of network approximation in the framework of approximation and regularization theory. It is shown that training neural networks and similar network approximation techniques are equivalent to least-squares collocation for a corresponding integral equation with mollified data. Results about convergence and convergence rates for exact data are derived based upon well-known convergence results about least-squares collocation. Finally, the stability properties with respect to errors in the data are examined and stability bounds are obtained, which yield rules for the choice of the number of network elements.

20.
《Optimization》2012, 61(12): 1467–1490
Large outliers break down linear and nonlinear regression models. Robust regression methods allow one to filter out the outliers when building a model. By replacing the traditional least squares criterion with the least trimmed squares (LTS) criterion, in which half of the data is treated as potential outliers, one can fit accurate regression models to strongly contaminated data. High-breakdown methods have become well established in linear regression, but have only recently begun to be applied to nonlinear regression. In this work, we examine the problem of fitting artificial neural networks (ANNs) to contaminated data using the LTS criterion. We introduce a penalized LTS criterion which prevents unnecessary removal of valid data. Training ANNs leads to a challenging non-smooth global optimization problem. We compare the efficiency of several derivative-free optimization methods in solving it, and show that our approach identifies the outliers correctly when ANNs are used for nonlinear regression.
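A minimal sketch of the LTS criterion itself (the paper's penalized variant and the ANN training loop are not reproduced):

```python
import numpy as np

# Least trimmed squares: sort the squared residuals and sum only the
# smallest h of them, so up to n - h gross outliers cannot influence
# the fitted model.

def lts_loss(y_true, y_pred, h=None):
    r2 = np.sort((y_true - y_pred) ** 2)
    h = h if h is not None else len(r2) // 2   # default: trim half the data
    return r2[:h].sum()

y = np.array([1.0, 2.0, 3.0, 100.0])           # one gross outlier
pred = np.array([1.1, 2.0, 2.9, 3.9])
print(lts_loss(y, pred))                       # outlier residual is trimmed away
```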
