Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
Differential evolution algorithms represent a modern and efficient way of solving complicated optimization tasks. In this article we concentrate on the ability of differential evolution algorithms to attain the global minimum of the cost function. We demonstrate that, although often declared a global optimizer, the classic differential evolution algorithm does not in general guarantee convergence to the global minimum. To remedy this weakness we design a simple modification of the classic differential evolution algorithm. This modification limits possible premature convergence to local minima and ensures asymptotic global convergence. We also introduce concepts that are necessary for the subsequent proof of the asymptotic global convergence of the modified algorithm. We test the classic and modified algorithms by numerical experiments and compare their efficiency in finding the global minimum. The tests confirm that the modified algorithm is significantly more efficient with respect to global convergence than the classic algorithm.
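A minimal sketch of the kind of modification the abstract describes (not necessarily the authors' exact scheme): in classic DE/rand/1/bin, replacing each trial point with a uniformly random sample with small probability `eps` is one standard way to rule out premature collapse and obtain asymptotic global convergence. All names and parameter values below are illustrative.

```python
import numpy as np

def modified_de(f, bounds, pop_size=30, F=0.8, CR=0.9, eps=0.01,
                max_gen=500, rng=None):
    """DE/rand/1/bin with a small probability `eps` of replacing a trial
    point by a uniform random sample; the occasional random trial prevents
    premature collapse and yields asymptotic global convergence."""
    rng = np.random.default_rng(rng)
    lo, hi = np.asarray(bounds, float).T       # bounds: list of (low, high)
    dim = len(lo)
    pop = rng.uniform(lo, hi, (pop_size, dim))
    cost = np.array([f(x) for x in pop])
    for _ in range(max_gen):
        for i in range(pop_size):
            if rng.random() < eps:             # forced random trial point
                trial = rng.uniform(lo, hi)
            else:
                a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
                mutant = np.clip(a + F * (b - c), lo, hi)
                cross = rng.random(dim) < CR
                cross[rng.integers(dim)] = True  # at least one mutant gene
                trial = np.where(cross, mutant, pop[i])
            fx = f(trial)
            if fx <= cost[i]:                  # greedy one-to-one selection
                pop[i], cost[i] = trial, fx
    best = cost.argmin()
    return pop[best], cost[best]

# example: Rastrigin test function on [-5, 5]^2
best_x, best_f = modified_de(
    lambda x: np.sum(x * x - 10 * np.cos(2 * np.pi * x) + 10),
    [(-5, 5), (-5, 5)])
```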

2.
In this paper, we study strong convergence for monotone operators. We first introduce a hybrid-type algorithm for monotone operators. Next, we obtain a strong convergence theorem (Theorem 3.3) for finding a zero point of an inverse-strongly monotone operator in a Banach space. Finally, we apply our convergence theorem to the problem of finding a minimizer of a convex function.

3.
We consider the problem of finding solutions of systems of monotone equations. The Newton-type algorithm proposed in Ref. 1 has a very nice global convergence property in that the whole sequence of iterates generated by this algorithm converges to a solution, if it exists. Superlinear convergence of this algorithm is obtained under a standard nonsingularity assumption. The nonsingularity condition implies that the problem has a unique solution; thus, for a problem with more than one solution, such a nonsingularity condition cannot hold. In this paper, we show that the superlinear convergence of this algorithm still holds under a local error-bound assumption that is weaker than the standard nonsingularity condition. The local error-bound condition may hold even for problems with nonunique solutions. As an application, we obtain a Newton algorithm with very nice global and superlinear convergence for the minimum norm solution of linear programs.This research was supported by the Singapore-MIT Alliance and the Australian Research Council.
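For illustration, a sketch of the kind of regularized Newton step this analysis concerns, assuming the iteration of Ref. 1 resembles a Levenberg-Marquardt step with regularization parameter mu_k = ||F(x_k)||, which keeps the linear system solvable without assuming the Jacobian is nonsingular (exactly the situation the error-bound analysis addresses). The line-search/projection globalization of Ref. 1 is omitted, and the example function is hypothetical.

```python
import numpy as np

def regularized_newton(F, J, x0, tol=1e-10, max_iter=100):
    """Regularized Newton iteration for the monotone system F(x) = 0:
    solve (J(x_k) + mu_k * I) d = -F(x_k) with mu_k = ||F(x_k)||,
    then set x_{k+1} = x_k + d."""
    x = np.asarray(x0, float)
    for _ in range(max_iter):
        Fx = F(x)
        mu = np.linalg.norm(Fx)
        if mu < tol:
            break
        d = np.linalg.solve(J(x) + mu * np.eye(len(x)), -Fx)
        x = x + d
    return x

# example: F(x) = x**3 is monotone with a singular Jacobian at the root
x = regularized_newton(lambda v: v**3,
                       lambda v: np.diag(3 * v.ravel()**2),
                       np.array([1.0]))   # -> approximately [0.]
```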

4.
This paper studies algorithms for solving the nonlinear complementarity problem (NCP). The NCP is first transformed into a constrained global optimization problem (CGOP), and the level-value estimation algorithm [4,5] for solving the CGOP is then adapted directly to solve the NCP. We prove that the algorithm is convergent for the NCP, and numerical experiments demonstrate its effectiveness.
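A sketch of the reformulation step only, assuming the natural-residual merit function (one common choice); the level-value estimation algorithm of [4,5] is replaced here by a generic bound-constrained solver for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def ncp_merit(F):
    """Natural residual r(x) = min(x, F(x)): x* solves the NCP
    (x >= 0, F(x) >= 0, x.F(x) = 0) iff r(x*) = 0, so minimizing
    ||r(x)||^2 over x >= 0 is a constrained global optimization
    reformulation of the NCP."""
    def theta(x):
        r = np.minimum(x, F(x))
        return r @ r
    return theta

# illustrative NCP: F(x) = x - 1 has the unique solution x = 1
theta = ncp_merit(lambda x: x - 1.0)
res = minimize(theta, x0=np.array([5.0]), bounds=[(0, None)])
print(res.x)   # -> approximately [1.]
```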

5.
Lipschitzian optimization without the Lipschitz constant
We present a new algorithm for finding the global minimum of a multivariate function subject to simple bounds. The algorithm is a modification of the standard Lipschitzian approach that eliminates the need to specify a Lipschitz constant. This is done by carrying out simultaneous searches using all possible constants from zero to infinity. On nine standard test functions, the new algorithm converges in fewer function evaluations than most competing methods. The motivation for the new algorithm stems from a different way of looking at the Lipschitz constant. In particular, the Lipschitz constant is viewed as a weighting parameter that indicates how much emphasis to place on global versus local search. In standard Lipschitzian methods, this constant is usually large because it must equal or exceed the maximum rate of change of the objective function. As a result, these methods place a high emphasis on global search and exhibit slow convergence. In contrast, the new algorithm carries out simultaneous searches using all possible constants, and therefore operates at both the global and local level. Once the global part of the algorithm finds the basin of convergence of the optimum, the local part of the algorithm quickly and automatically exploits it. This accounts for the fast convergence of the new algorithm on the test functions.
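A toy one-dimensional sketch of this idea. The actual algorithm (DIRECT) works on multivariate boxes and identifies "potentially optimal" intervals via a convex-hull test; here a sweep over a finite grid of constants K stands in for that test, which is enough to show how small K drives local search and large K drives global search.

```python
import numpy as np

def direct_1d(f, a, b, n_iter=30):
    """Toy 1-D DIRECT: each interval (center, half-width) is scored by
    the Lipschitzian lower bound f(center) - K * halfwidth; sweeping K
    from small to large selects intervals that are best for SOME
    constant, so no single Lipschitz constant must be supplied."""
    ivals = [((a + b) / 2, (b - a) / 2)]
    vals = [f(ivals[0][0])]
    for _ in range(n_iter):
        chosen = {min(range(len(ivals)),
                      key=lambda i: vals[i] - K * ivals[i][1])
                  for K in np.logspace(-3, 3, 13)}
        for i in chosen:                       # trisect each chosen interval
            c, w = ivals[i]
            ivals[i] = (c, w / 3)              # middle third keeps vals[i]
            for cn in (c - 2 * w / 3, c + 2 * w / 3):
                ivals.append((cn, w / 3))
                vals.append(f(cn))
    k = int(np.argmin(vals))
    return ivals[k][0], vals[k]

# example: global minimum of a wiggly function on [-4, 4]
x, fx = direct_1d(lambda t: np.sin(3 * t) + 0.1 * t * t, -4, 4)
```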

6.
Forcing strong convergence of proximal point iterations in a Hilbert space
This paper concerns the convergence properties of the classical proximal point algorithm for finding zeroes of maximal monotone operators in an infinite-dimensional Hilbert space. It is well known that the proximal point algorithm converges weakly to a solution under very mild assumptions. However, it was shown by Güler [11] that the iterates may fail to converge strongly in the infinite-dimensional case. We propose a new proximal-type algorithm which does converge strongly, provided the problem has a solution. Moreover, our algorithm solves proximal point subproblems inexactly, with a constructive stopping criterion introduced in [31]. Strong convergence is forced by combining proximal point iterations with simple projection steps onto the intersection of two halfspaces containing the solution set. The additional cost of this extra projection step is essentially negligible, since it amounts, at most, to solving a linear system of two equations in two unknowns. Received January 6, 1998 / Revised version received August 9, 1999 / Published online November 30, 1999
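The projection step can be illustrated directly. Below is a sketch of the Euclidean projection onto the intersection of two halfspaces, assuming linearly independent normals when both constraints are active; as the abstract notes, the worst case amounts to a 2x2 linear system.

```python
import numpy as np

def project_two_halfspaces(x, a1, b1, a2, b2, tol=1e-12):
    """Project x onto {y : a1.y <= b1} intersected with {y : a2.y <= b2}.
    Standard KKT case analysis: feasible point, one active constraint,
    or both active (solve a 2x2 Gram system for the multipliers)."""
    r1, r2 = a1 @ x - b1, a2 @ x - b2
    if r1 <= 0 and r2 <= 0:                    # already feasible
        return x
    p1 = x - max(r1, 0) / (a1 @ a1) * a1       # project on first halfspace
    if a2 @ p1 - b2 <= tol:
        return p1
    p2 = x - max(r2, 0) / (a2 @ a2) * a2       # project on second halfspace
    if a1 @ p2 - b1 <= tol:
        return p2
    # both constraints active; assumes a1, a2 linearly independent
    G = np.array([[a1 @ a1, a1 @ a2],
                  [a1 @ a2, a2 @ a2]])
    lam = np.linalg.solve(G, np.array([r1, r2]))
    return x - lam[0] * a1 - lam[1] * a2
```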

7.
We propose a modification of the classical extragradient and proximal point algorithms for finding a zero of a maximal monotone operator in a Hilbert space. At each iteration of the method, an approximate extragradient-type step is performed using information obtained from an approximate solution of a proximal point subproblem. The algorithm is of a hybrid type, as it combines steps of the extragradient and proximal methods. Furthermore, the algorithm uses elements in the enlargement (proposed by Burachik, Iusem and Svaiter) of the operator defining the problem. One of the important features of our approach is that it allows significant relaxation of tolerance requirements imposed on the solution of proximal point subproblems. This yields a more practical proximal-algorithm-based framework. Weak global convergence and local linear rate of convergence are established under suitable assumptions. It is further demonstrated that the modified forward-backward splitting algorithm of Tseng falls within the presented general framework.

8.
Particle swarm optimization (PSO) is a widely used evolutionary algorithm. This paper presents a new particle swarm optimizer based on evolutionary games (EGPSO). We map particles searching for the optimal solution in the PSO algorithm to players pursuing maximum utility by choosing strategies in an evolutionary game, using replicator dynamics to model the behavior of the particles. To overcome premature convergence, a multi-start technique is introduced. Experimental results show that EGPSO overcomes premature convergence and has better convergence properties than traditional PSO.
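A sketch of the multi-start ingredient only, grafted onto a plain global-best PSO; the replicator-dynamics strategy update of EGPSO is not reproduced here, and all parameter values are illustrative.

```python
import numpy as np

def pso_multistart(f, bounds, n=30, iters=400, w=0.7, c1=1.5, c2=1.5,
                   stall=50, rng=None):
    """Global-best PSO with a multi-start rule: if the global best has
    not improved for `stall` iterations, the swarm is re-initialized
    (its memory of the best point is kept), helping the search escape
    premature convergence."""
    rng = np.random.default_rng(rng)
    lo, hi = np.asarray(bounds, float).T
    d = len(lo)
    x = rng.uniform(lo, hi, (n, d))
    v = np.zeros((n, d))
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g, gval, since = pbest[pval.argmin()].copy(), pval.min(), 0
    for _ in range(iters):
        r1, r2 = rng.random((n, d)), rng.random((n, d))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        if pval.min() < gval:
            gval, g, since = pval.min(), pbest[pval.argmin()].copy(), 0
        else:
            since += 1
            if since >= stall:                 # restart stagnant swarm
                x = rng.uniform(lo, hi, (n, d))
                v[:] = 0
                since = 0
    return g, gval
```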

9.
We propose a smoothing trust region filter algorithm for nonsmooth nonconvex least squares problems. We present convergence theorems of the proposed algorithm to a Clarke stationary point or a global minimizer of the objective function under certain conditions. Preliminary numerical experiments show the efficiency of the proposed algorithm for finding zeros of a system of polynomial equations with high degrees on the sphere and solving differential variational inequalities.

10.
This paper presents a weak convergence residual algorithm for finding a fixed point of a nonexpansive mapping in a real Hilbert space. To study the numerical behavior of the algorithm, an extensive series of numerical experiments is included. Our computational experiments show that the new algorithm is computationally efficient.

11.
An algorithm based on a combination of polyhedral and quadratic approximation is given for finding stationary points of unconstrained minimization problems with locally Lipschitz problem functions that are not necessarily convex or differentiable. Global convergence of the algorithm is established. Under additional assumptions, it is shown that the algorithm generates Newton iterations and that the convergence is superlinear. Some encouraging numerical experience is reported. This work was supported by the grant No. 201/96/0918 given by the Czech Republic Grant Agency.

12.
We associate a multiparameter spectral problem in a real Euclidean space with a variational problem of finding the minimum of a certain functional. We establish the equivalence of the spectral problem and the variational problem. On the basis of the gradient procedure, we propose a numerical algorithm for the determination of its eigenvalues and eigenvectors. The local convergence of the algorithm is proved.

13.
In this paper, we introduce and study random variational inclusions with random fuzzy and random relaxed cocoercive mappings. We define an iterative algorithm for finding approximate solutions of this class of variational inclusions and establish the convergence of the iterative sequences generated by the proposed algorithm. Our results improve and generalize many known corresponding results.

14.
Optimization, 2012, 61(2): 171-200
Column generation is an increasingly popular basic tool for the solution of large-scale mathematical programming problems. As problems being solved grow bigger, column generation may however become less efficient in its present form, where columns typically are not optimizing, and finding an optimal solution instead entails finding an optimal convex combination of a huge number of them. We present a class of column generation algorithms in which the columns defining the restricted master problem may be chosen to be optimizing in the limit, thereby reducing the total number of columns needed. This first article is devoted to the convergence properties of the algorithm class, and includes global (asymptotic) convergence results for differentiable minimization, finite convergence results with respect to the optimal face and the optimal solution, and extensions of these results to variational inequality problems. An illustration of its possibilities is made on a nonlinear network flow model, contrasting its convergence characteristics to that of the restricted simplicial decomposition (RSD) algorithm.
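As a concrete instance of the restricted-master/pricing loop described here, a sketch of the classic Gilmore-Gomory column generation scheme for the cutting-stock LP relaxation (not this paper's algorithm class); integer widths are assumed so the pricing subproblem is an integer knapsack, and scipy's HiGHS backend supplies the dual prices.

```python
import numpy as np
from scipy.optimize import linprog

def knapsack(widths, values, W):
    """Unbounded integer knapsack by dynamic programming;
    returns the value-maximizing cutting pattern."""
    best = np.zeros(W + 1)
    choice = np.full(W + 1, -1)
    for cap in range(1, W + 1):
        best[cap] = best[cap - 1]              # waste one unit of width
        for i, w in enumerate(widths):
            if w <= cap and best[cap - w] + values[i] > best[cap]:
                best[cap] = best[cap - w] + values[i]
                choice[cap] = i
    pattern, cap = np.zeros(len(widths)), W
    while cap > 0:
        if choice[cap] == -1:
            cap -= 1
        else:
            pattern[choice[cap]] += 1
            cap -= widths[choice[cap]]
    return pattern

def cutting_stock_lp(widths, demand, W, max_rounds=50):
    """Column generation: solve the restricted master LP over the known
    patterns, price a new pattern with the knapsack, and stop when no
    column has negative reduced cost."""
    m = len(widths)
    cols = [np.eye(m)[i] * (W // widths[i]) for i in range(m)]
    for _ in range(max_rounds):
        A = np.column_stack(cols)
        res = linprog(c=np.ones(A.shape[1]),
                      A_ub=-A, b_ub=-np.asarray(demand, float),
                      bounds=(0, None), method="highs")
        duals = -res.ineqlin.marginals         # prices for the demand rows
        pattern = knapsack(widths, duals, W)   # pricing subproblem
        if duals @ pattern <= 1 + 1e-9:        # reduced cost 1 - y.a >= 0
            return res.fun, cols, res.x
        cols.append(pattern)
    return res.fun, cols, res.x

# example: rolls of width 20; demands for piece widths 3, 5, 9
obj, cols, x = cutting_stock_lp([3, 5, 9], [25, 20, 15], W=20)
```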

15.
This paper introduces an algorithm for univariate optimization using linear lower bounding functions (LLBFs). An LLBF over an interval is a linear function which lies below the given function over the interval and matches the given function at one end point of the interval. We first present an algorithm using LLBFs for finding the nearest root of a function along a search direction. When the root-finding method is applied to the derivative of an objective function, it is an optimization algorithm which is guaranteed to locate the nearest extremum along a search direction. For univariate optimization, we show that this approach is a Newton-type method which is globally convergent with a superlinear convergence rate. Applications of this algorithm to global optimization and other optimization problems are also discussed.
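A sketch of the nearest-root idea under a simplifying assumption: with a known bound M on |g'| (used here in place of the paper's LLBF construction from function values), the linear lower bound |g(t)| >= |g(x)| - M*(t - x) guarantees that the step x -> x + |g(x)|/M can never jump past a root, so the first root reached is the nearest one.

```python
import math

def nearest_root(g, a, b, M, tol=1e-10, max_steps=10000):
    """Find the first root of g to the right of a on [a, b], assuming
    M >= sup |g'| on [a, b] (caller-supplied). Each step moves to the
    zero of the linear lower bound anchored at the current point, so
    no root can be skipped."""
    x = a
    for _ in range(max_steps):
        gx = g(x)
        if abs(gx) <= tol:
            return x                      # root located
        x += abs(gx) / M                  # safe step: no root in between
        if x > b:
            return None                   # no root in [a, b]
    return x

# example: first root of cos to the right of 0 (pi/2), using |cos'| <= 1
r = nearest_root(math.cos, 0.0, 4.0, M=1.0)
```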

16.
An interior proximal point algorithm for finding a solution of a linear program is presented. The distinguishing feature of this algorithm is the addition of a quadratic proximal term to the linear objective function. This perturbation has allowed us to obtain solutions with better feasibility. Computational results from an implementation of the algorithm are reported. We also establish global convergence and local linear convergence of the algorithm. This research was supported by National Science Foundation Grants DCR-85-21228 and CCR-87-23091 and by Air Force Office of Scientific Research Grants AFOSR-86-0172 and AFOSR-89-0410. It was conducted while the author was a Graduate Student at the Computer Sciences Department, University of Wisconsin, Madison, Wisconsin.

17.
The backward–backward algorithm is a tool for finding minima of a regularization of the sum of two convex functions in Hilbert spaces. We generalize this setting to Hadamard spaces and prove the convergence of an error-tolerant version of the backward–backward method.
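In the Euclidean special case the method is easy to state; below is a sketch (the paper's contribution is the Hadamard-space generalization and the error tolerance, neither of which appears here).

```python
import numpy as np

def backward_backward(prox_f, prox_g, x0, lam=1.0, n_iter=200):
    """Backward-backward iteration x_{k+1} = prox_{lam*g}(prox_{lam*f}(x_k))
    in R^n; the Hadamard-space version replaces these Euclidean proximal
    maps with their geodesic counterparts."""
    x = np.asarray(x0, float)
    for _ in range(n_iter):
        x = prox_g(prox_f(x, lam), lam)
    return x

# example: f, g are squared distances to two points, whose prox maps are
# convex combinations; the limit minimizes the regularized sum
prox_to = lambda p: (lambda x, lam: (x + lam * p) / (1 + lam))
a, b = np.array([0.0, 0.0]), np.array([2.0, 0.0])
x = backward_backward(prox_to(a), prox_to(b), np.array([5.0, 5.0]))
```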

18.
Based on three classical line-search techniques for finding a separating hyperplane, this paper proposes an adaptive line-search technique. Combining it with the spectral gradient projection method, we propose a spectral gradient projection algorithm for convex-constrained nonsmooth monotone systems of equations. The algorithm does not need to compute or store any matrix and is therefore suitable for solving large-scale nonsmooth nonlinear monotone systems of equations. Under fairly weak conditions, we prove the global convergence of the method and analyze its convergence rate. Numerical results show that the algorithm is effective and robust.
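A sketch of a spectral gradient projection iteration of this general type, with a single backtracking line search in place of the paper's adaptive choice among three; `proj_C` is the caller-supplied projection onto the convex constraint set, and a Barzilai-Borwein step provides the "spectral" scaling.

```python
import numpy as np

def sgp_monotone(F, proj_C, x0, sigma=1e-4, beta=0.5, n_iter=500, tol=1e-8):
    """Projection method for F(x) = 0 with F monotone on a convex set C:
    a line search finds z with F(z).(x - z) > 0, so the hyperplane through
    z with normal F(z) separates x from the solution set; the next iterate
    projects x onto that hyperplane and then onto C. Matrix-free."""
    x = np.asarray(x0, float)
    Fx = F(x)
    alpha = 1.0                                # spectral step length
    for _ in range(n_iter):
        if np.linalg.norm(Fx) <= tol:
            break
        d = -alpha * Fx
        t = 1.0
        for _ in range(60):                    # backtracking line search
            z = x + t * d
            Fz = F(z)
            if -(Fz @ d) >= sigma * t * (d @ d):
                break
            t *= beta
        if np.linalg.norm(Fz) <= tol:
            return z                           # z already solves F(z) = 0
        x_new = proj_C(x - (Fz @ (x - z)) / (Fz @ Fz) * Fz)
        F_new = F(x_new)
        s, y = x_new - x, F_new - Fx
        alpha = (s @ s) / max(s @ y, 1e-12)    # Barzilai-Borwein scaling
        x, Fx = x_new, F_new
    return x

# example: F(x) = x - 1 on the nonnegative orthant (solution x = 1)
x = sgp_monotone(lambda v: v - 1.0, lambda v: np.maximum(v, 0.0),
                 np.array([10.0, 10.0]))
```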

19.
Consider the utilization of a Lagrangian dual method which is convergent for consistent convex optimization problems. When it is used to solve an infeasible optimization problem, the inconsistency will manifest itself through the divergence of the sequence of dual iterates. Will the sequence of primal subproblem solutions then still yield relevant information regarding the primal program? We answer this question in the affirmative for a convex program and an associated subgradient algorithm for its Lagrange dual. We show that the primal–dual pair of programs corresponding to an associated homogeneous dual function is in turn associated with a saddle-point problem, in which, in the inconsistent case, the primal part amounts to finding a solution in the primal space such that the Euclidean norm of the infeasibility in the relaxed constraints is minimized, while the dual part amounts to identifying a feasible steepest ascent direction for the Lagrangian dual function. We present convergence results for a conditional \(\varepsilon\)-subgradient optimization algorithm applied to the Lagrangian dual problem, together with the construction of an ergodic sequence of primal subproblem solutions; this composite algorithm yields convergence of the primal–dual sequence to the set of saddle-points of the associated homogeneous Lagrangian function; for linear programs, convergence to the subset in which the primal objective is at its minimum is also achieved.
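A sketch of the ergodic primal construction for a box-constrained linear program, with a plain diminishing-step subgradient in place of the paper's conditional ε-subgradient scheme; the problem data and names are illustrative.

```python
import numpy as np

def dual_subgradient_ergodic(c, A, b, u, n_iter=2000):
    """Subgradient ascent on the Lagrangian dual of
        min c.x  s.t.  A x <= b,  0 <= x <= u,
    keeping the ergodic (running) average of the box-constrained
    subproblem solutions. Even when the dual iterates diverge (the
    inconsistent case), the ergodic primal sequence carries the
    information the paper analyzes."""
    y = np.zeros(len(b))                       # dual multipliers, y >= 0
    x_bar = np.zeros(len(c))                   # ergodic primal average
    for k in range(1, n_iter + 1):
        red = c + A.T @ y                      # reduced costs
        x = np.where(red < 0, u, 0.0)          # Lagrangian subproblem
        x_bar += (x - x_bar) / k               # running average
        y = np.maximum(0.0, y + (A @ x - b) / k)   # diminishing step 1/k
    return x_bar, y

# example: min -x1 - x2 s.t. x1 + x2 <= 1, 0 <= x <= 1
c, A, b, u = np.array([-1.0, -1.0]), np.array([[1.0, 1.0]]), \
             np.array([1.0]), np.ones(2)
x_bar, y = dual_subgradient_ergodic(c, A, b, u)   # x_bar -> (0.5, 0.5)
```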

20.
The proximal point algorithm, which is a well-known tool for finding minima of convex functions, is generalized from the classical Hilbert space framework into a nonlinear setting, namely, geodesic metric spaces of non-positive curvature. We prove that the sequence generated by the proximal point algorithm weakly converges to a minimizer, and also discuss a related question: convergence of the gradient flow.
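For reference, the classical Euclidean iteration being generalized, as a sketch: each subproblem is solved here by a generic smooth optimizer, while the paper replaces the Euclidean distance with the geodesic distance of the space.

```python
import numpy as np
from scipy.optimize import minimize

def proximal_point(f, x0, lam=1.0, n_iter=50):
    """Classical proximal point iteration
        x_{k+1} = argmin_y  f(y) + (1 / (2*lam)) * ||y - x_k||^2
    for a convex f on R^n."""
    x = np.asarray(x0, float)
    for _ in range(n_iter):
        obj = lambda y, xk=x: f(y) + np.sum((y - xk) ** 2) / (2 * lam)
        x = minimize(obj, x).x                 # solve the prox subproblem
    return x

# example: converges to the minimizer [1, -2]
x = proximal_point(lambda y: (y[0] - 1.0) ** 2 + (y[1] + 2.0) ** 2,
                   np.zeros(2))
```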
