Similar Documents (20 results)
1.
The theoretical foundation of integral global optimization has become widely known and well accepted [4],[24],[25]. However, more effort is needed to demonstrate the effectiveness of the integral global optimization algorithms. In this work we detail the implementation of the integral global minimization algorithms. We describe how the integral global optimization method handles nonconvex unconstrained, box-constrained, constrained, or discrete minimization problems. We illustrate the flexibility and efficiency of the integral global optimization method by presenting the performance of the algorithms on a collection of well-known test problems from the global optimization literature. We provide software which solves these test problems and other minimization problems. The computational results demonstrate that the integral global optimization algorithms are not only extremely flexible and reliable but also very efficient. Research supported in part by an NSERC grant and a Mount St Vincent University research grant.

2.
In this paper, we present constrained simulated annealing (CSA), an algorithm that extends conventional simulated annealing to look for constrained local minima of nonlinear constrained optimization problems. The algorithm is based on the theory of extended saddle points (ESPs), which shows the one-to-one correspondence between a constrained local minimum and an ESP of the corresponding penalty function. CSA finds ESPs by systematically controlling probabilistic descents in the problem-variable subspace of the penalty function and probabilistic ascents in the penalty subspace. Based on the decomposition of the necessary and sufficient ESP condition into multiple necessary conditions, we present constraint-partitioned simulated annealing (CPSA), which exploits the locality of constraints in nonlinear optimization problems. CPSA has much lower complexity than CSA because it partitions the constraints of a problem into significantly simpler subproblems, solves each independently, and resolves any violated global constraints across the subproblems. We prove that both CSA and CPSA asymptotically converge to a constrained global minimum with probability one in discrete optimization problems. This result extends conventional simulated annealing (SA), which guarantees asymptotic convergence in discrete unconstrained optimization, to discrete constrained optimization. Moreover, it establishes the condition under which optimal solutions can be found in constraint-partitioned nonlinear optimization problems. Finally, we evaluate CSA and CPSA by applying them to some continuous constrained optimization benchmarks and compare their performance to that of other penalty methods.
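The descent/ascent mechanism described above can be illustrated with a toy sketch. The penalty form L(x, λ) = f(x) + λ|g(x)|, the 50/50 choice between subspaces, the Gaussian proposal widths, and the logarithmic cooling schedule are all illustrative assumptions, not the paper's exact formulation:

```python
import math, random

def csa_sketch(f, g, x0, lam0=1.0, T0=1.0, steps=5000, seed=0):
    """Toy CSA: probabilistic descents in the variable x, probabilistic
    ascents in the penalty multiplier lam (illustrative parameters)."""
    rng = random.Random(seed)
    x, lam, T = x0, lam0, T0
    L = lambda x, lam: f(x) + lam * abs(g(x))       # penalty function, g(x) = 0 wanted
    for k in range(steps):
        if rng.random() < 0.5:                      # move in the variable subspace
            cand = x + rng.gauss(0.0, 0.1)
            dE = L(cand, lam) - L(x, lam)
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                x = cand                            # descent-biased acceptance
        else:                                       # move in the penalty subspace
            cand = max(0.0, lam + rng.gauss(0.0, 0.1))
            dE = L(x, cand) - L(x, lam)
            if dE >= 0 or rng.random() < math.exp(dE / T):
                lam = cand                          # ascent-biased acceptance
        T = T0 / math.log(k + 2)                    # slow (logarithmic) cooling
    return x, lam

# minimize (x - 2)^2 subject to x - 1 = 0; constrained minimum at x = 1
x, lam = csa_sketch(lambda x: (x - 2) ** 2, lambda x: x - 1, x0=0.0)
```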

3.
We present a new strategy for the constrained global optimization of expensive black box functions using response surface models. A response surface model is simply a multivariate approximation of a continuous black box function, used as a surrogate model for optimization when function evaluations are computationally expensive. Prior global optimization methods that utilize response surface models were limited to box-constrained problems, but the new method can easily incorporate general nonlinear constraints. In the proposed method, which we refer to as the Constrained Optimization using Response Surfaces (CORS) method, the next point for costly function evaluation is chosen to minimize the current response surface model subject to the given constraints and to additional constraints requiring the point to be some distance from previously evaluated points. The distance requirement is allowed to cycle, starting from a high value (global search) and ending with a low value (local search). The purpose of the distance constraint is to drive the method towards unexplored regions of the domain and to prevent premature convergence to a point that may not even be a local minimizer of the black box function. The new method can be shown to converge to the global minimizer of any continuous function on a compact set regardless of the response surface model used. Finally, we consider two implementations of the CORS method that utilize a radial basis function model (CORS-RBF) and apply them to the box-constrained Dixon–Szegö test functions and to a simple nonlinearly constrained test function. The results indicate that the CORS-RBF algorithms are competitive with existing global optimization algorithms for costly functions on the box-constrained test problems, and that they outperform other constrained global optimization algorithms on the nonlinearly constrained test problem.
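A minimal sketch of one CORS iteration follows. A brute-force random-candidate search stands in for the constrained minimization of the surrogate, the distance threshold is taken as a fraction beta of the domain diameter, and the "surrogate" in the usage example is a plain quadratic; in the actual method beta cycles from high (global search) to low (local search) values and the surrogate is a genuine response surface model:

```python
import numpy as np

def cors_step(surrogate, X_eval, beta, lb, ub, n_cand=4000, seed=0):
    """One CORS iteration (sketch): among random candidates staying at least
    beta * (domain diameter) away from all evaluated points, pick the one
    minimizing the current surrogate model."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    cand = rng.uniform(lb, ub, size=(n_cand, lb.size))
    dists = np.linalg.norm(cand[:, None, :] - X_eval[None, :, :], axis=2).min(axis=1)
    ok = cand[dists >= beta * np.linalg.norm(ub - lb)]
    if ok.size == 0:                         # distance requirement infeasible: relax it
        ok = cand
    return ok[np.argmin([surrogate(x) for x in ok])]

# toy usage: quadratic "surrogate", two already-evaluated points, a global phase
X = np.array([[0.2, 0.2], [0.8, 0.8]])
x_next = cors_step(lambda x: np.sum((x - 0.5) ** 2), X, beta=0.25,
                   lb=[0.0, 0.0], ub=[1.0, 1.0])
```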

4.
This paper proposes several globally convergent geometric optimization algorithms on Riemannian manifolds, which extend some existing geometric optimization techniques. Since any set of smooth constraints in the Euclidean space R^n (corresponding to constrained optimization) and the space R^n itself (corresponding to unconstrained optimization) are both special Riemannian manifolds, and since these algorithms are developed on general Riemannian manifolds, the techniques discussed in this paper provide a uniform framework for constrained and unconstrained optimization problems. Unlike some earlier works, the new algorithms impose fewer restrictions, both in the convergence results and in practice. For example, global minimization in the one-dimensional search is not required. All the algorithms addressed in this paper are globally convergent, and for some special Riemannian manifolds other than R^n they are very efficient. Convergence rates are obtained and applications are discussed. This paper is based on part of the Ph.D. thesis of the author under the supervision of Professor Tits, University of Maryland, College Park, Maryland. The author is indebted to him for invaluable suggestions on earlier versions of this paper. The author is grateful to the Associate Editor and anonymous reviewers, who pointed out a number of papers that have been included in the references and also made detailed suggestions that led to significant improvements of the paper. Finally, the author thanks Dr. S.T. Smith for making available his Ph.D. thesis.

5.
The aim of this paper is to show that the new continuously differentiable exact penalty functions recently proposed in the literature can play an important role in the field of constrained global optimization. In fact, they allow us to transfer ideas and results proposed in unconstrained global optimization to the constrained case. First, drawing inspiration from the unconstrained case and using the strong exactness properties of a particular continuously differentiable penalty function, we propose a sufficient condition for a local constrained minimum point to be global. Then we show that every constrained local minimum point satisfying the second-order sufficient conditions is an attraction point for a particular implementable minimization algorithm based on the considered penalty function. This result can be used to define new classes of global algorithms for the solution of general constrained global minimization problems. As an example, we describe a simulated annealing algorithm which produces a sequence of points converging in probability to a global minimum of the original constrained problem.

6.
Many real-life problems can be stated as continuous minimax optimization problems. Well-known applications in engineering, finance, optics, and other fields demonstrate the importance of having reliable methods to tackle continuous minimax problems. In this paper a new approach to the solution of continuous minimax problems over the reals is introduced, using tools based on modal intervals. Continuous minimax problems, with global optimization as a particular case, are stated as the computation of semantic extensions of continuous functions, one of the key concepts of modal intervals. Modal interval techniques allow such semantic extensions to be computed, in a guaranteed way, by means of an efficient algorithm. Several examples illustrate the behavior of the algorithms on unconstrained and constrained minimax problems.

7.
An overview of interval arithmetic tools and basic techniques is presented that can be used to construct deterministic global optimization algorithms. These tools are applicable to unconstrained and constrained optimization as well as to nonsmooth optimization and to problems over unbounded domains. Since almost all interval-based global optimization algorithms use branch-and-bound methods with iterated bisection of the problem domain, we embed our overview in such a setting.
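A one-dimensional sketch of the branch-and-bound-with-bisection setting the overview assumes: F is any natural interval extension returning a guaranteed enclosure of f over a box. The particular (deliberately crude) enclosure in the example and the tolerance are illustrative:

```python
import math

def interval_bb(F, f, lo, hi, tol=1e-3):
    """Interval branch-and-bound sketch (1-D). F maps a box [a, b] to a
    guaranteed enclosure [L, U] of f over it; boxes whose lower bound L
    cannot beat the incumbent are pruned, the rest are bisected."""
    best = f(lo)                              # incumbent upper bound on the minimum
    work = [(lo, hi)]
    while work:
        a, b = work.pop()
        L, _ = F(a, b)
        best = min(best, f((a + b) / 2))      # improve the incumbent at the midpoint
        if L > best - tol or b - a < tol:     # prune, or stop subdividing
            continue
        m = (a + b) / 2
        work += [(a, m), (m, b)]              # bisect the box
    return best

# f(x) = x^2 + sin(5x); enclosure = exact range of x^2 plus sin's range [-1, 1]
f = lambda x: x * x + math.sin(5 * x)
def F(a, b):
    sq_lo = 0.0 if a * b <= 0 else min(a * a, b * b)
    return sq_lo - 1.0, max(a * a, b * b) + 1.0   # crude but guaranteed enclosure
print(interval_bb(F, f, -2.0, 2.0))
```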

8.
We suggest a new heuristic for solving unconstrained continuous optimization problems, based on a generalized version of the variable neighborhood search (VNS) metaheuristic. Different neighborhoods and distributions, induced by different metrics, are ranked and used to generate random points in the shaking step. We also propose VNS for solving constrained optimization problems, where the constraints are handled using exterior point penalty functions within an algorithm that combines sequential and exact penalty transformations. An extensive computational analysis is given, including comparisons with a genetic algorithm and some other approaches on standard test functions. With our approach we obtain encouraging results.
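The shaking step can be sketched as follows. Uniform boxes of growing radius stand in for the ranked, metric-induced neighborhoods and distributions of the paper, a crude random-perturbation descent stands in for the local search, and the constraint-handling penalty machinery is omitted:

```python
import random

def vns_sketch(f, x0, radii=(0.1, 0.5, 1.0, 2.0), iters=100, seed=0):
    """VNS sketch: shake within neighborhoods of increasing size (uniform
    boxes here), run a crude local descent, move on improvement."""
    rng = random.Random(seed)
    def local_search(x):                      # random-perturbation descent stub
        for _ in range(50):
            y = [xi + rng.uniform(-0.01, 0.01) for xi in x]
            if f(y) < f(x):
                x = y
        return x
    x = local_search(list(x0))
    for _ in range(iters):
        k = 0
        while k < len(radii):
            y = [xi + rng.uniform(-radii[k], radii[k]) for xi in x]  # shaking step
            y = local_search(y)
            if f(y) < f(x):
                x, k = y, 0                   # improvement: restart neighborhood index
            else:
                k += 1                        # otherwise enlarge the neighborhood
    return x

x_best = vns_sketch(lambda v: (v[0] ** 2 - 2) ** 2 + v[1] ** 2, [3.0, 3.0])
```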

9.
A Single Component Mutation Evolutionary Programming
In this paper, a novel evolutionary programming algorithm is proposed for solving optimization problems with upper and lower bound constraints as well as linearly constrained optimization problems. The algorithm has two characteristics: first, only one component of the current solution is mutated in each iteration; second, it can solve linearly constrained optimization problems directly, without converting them into unconstrained problems. On both kinds of problems, the algorithm not only effectively finds optimal or near-optimal solutions but also reduces the number of function evaluations compared with other heuristic algorithms.
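A sketch of the single-component mutation idea for a box-constrained problem; the Gaussian mutation width, population size, and elitist (mu + lambda) selection are illustrative choices, and the paper's direct handling of general linear constraints is not reproduced:

```python
import random

def scm_ep_sketch(f, lb, ub, pop_size=30, gens=500, seed=0):
    """Evolutionary programming sketch in which each offspring differs from
    its parent in a single randomly chosen component, clipped to [lb, ub]."""
    rng = random.Random(seed)
    n = len(lb)
    pop = [[rng.uniform(lb[i], ub[i]) for i in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        offspring = []
        for x in pop:
            y = x[:]
            i = rng.randrange(n)                  # mutate one component only
            y[i] = min(max(y[i] + rng.gauss(0, 0.1 * (ub[i] - lb[i])), lb[i]), ub[i])
            offspring.append(y)
        pool = pop + offspring
        pool.sort(key=f)                          # elitist (mu + lambda) selection
        pop = pool[:pop_size]
    return pop[0]

best = scm_ep_sketch(lambda v: sum(t * t for t in v), [-5.0] * 3, [5.0] * 3)
```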

10.
Global optimization and simulated annealing
In this paper we are concerned with global optimization, which can be defined as the problem of finding points in a bounded subset of R^n at which some real-valued function f assumes its optimal (maximal or minimal) value. We present a stochastic approach based on the simulated annealing algorithm, closely following the formulation of simulated annealing as originally given for discrete optimization problems. The mathematical formulation is extended to continuous optimization problems, and we prove asymptotic convergence to the set of global optima. Furthermore, we discuss an implementation of the algorithm and compare its performance with other well-known algorithms. The performance evaluation is carried out on a standard set of test functions from the literature.
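A sketch of simulated annealing extended to a continuous box domain, as in the formulation above. The Gaussian proposal, the clipping to the box, and the logarithmic cooling schedule (the form under which asymptotic convergence results are typically stated) are illustrative choices:

```python
import math, random

def sa_continuous(f, lb, ub, T0=1.0, steps=20000, seed=0):
    """Continuous simulated annealing sketch on a box: Gaussian proposals
    clipped to [lb, ub], Metropolis acceptance, logarithmic cooling."""
    rng = random.Random(seed)
    n = len(lb)
    x = [rng.uniform(lb[i], ub[i]) for i in range(n)]
    fx = f(x)
    best_x, best_f = x, fx
    for k in range(steps):
        T = T0 / math.log(k + 2)                  # cooling schedule
        y = [min(max(x[i] + rng.gauss(0.0, 0.1), lb[i]), ub[i]) for i in range(n)]
        fy = f(y)
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / T):
            x, fx = y, fy                         # Metropolis acceptance rule
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

x_star, f_star = sa_continuous(lambda v: (v[0] ** 2 - 1) ** 2 + v[1] ** 2,
                               [-2, -2], [2, 2])
```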

11.
We propose a multi-time-scale quasi-Newton based smoothed functional (QN-SF) algorithm for stochastic optimization both with and without inequality constraints. The algorithm combines the smoothed functional (SF) scheme for estimating the gradient with the quasi-Newton method to solve the optimization problem. Newton algorithms typically update the Hessian at each instant and subsequently (a) project it to the space of positive definite and symmetric matrices, and (b) invert the projected Hessian; the latter operation is computationally expensive. In order to save computational effort, we propose a quasi-Newton SF (QN-SF) algorithm based on the Broyden–Fletcher–Goldfarb–Shanno (BFGS) update rule. In Bhatnagar (ACM Trans. Model. Comput. Simul. 18(1): 27–62, 2007), a Jacobi variant of Newton SF (JN-SF) was proposed and implemented to save computational effort. We compare our QN-SF algorithm with the gradient SF (G-SF) and JN-SF algorithms on two different problems: a simple stochastic function minimization problem and a problem of optimal routing in a queueing network. We observe from the experiments that the QN-SF algorithm performs significantly better than both G-SF and JN-SF on both problem settings. Next we extend the QN-SF algorithm to the case of constrained optimization; in this case too, QN-SF performs much better than JN-SF. Finally we present the proof of convergence for the QN-SF algorithm in both unconstrained and constrained settings.
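The core of any SF scheme is the smoothed-functional gradient estimate. A sketch of the standard one-sided Gaussian SF estimator follows; the smoothing parameter and sample count are illustrative, and the quasi-Newton (BFGS) machinery built on top of it is not shown:

```python
import numpy as np

def sf_gradient(f, x, beta=0.1, n_samples=64, seed=0):
    """One-sided Gaussian smoothed-functional gradient estimate:
    E[eta * (f(x + beta * eta) - f(x)) / beta] approximates the gradient
    of a Gaussian-smoothed version of f."""
    rng = np.random.default_rng(seed)
    fx = f(x)
    g = np.zeros_like(x, dtype=float)
    for _ in range(n_samples):
        eta = rng.standard_normal(x.size)
        g += eta * (f(x + beta * eta) - fx) / beta
    return g / n_samples

# quadratic test: the estimate should be close to the true gradient 2 * x
g_hat = sf_gradient(lambda v: float(v @ v), np.array([1.0, -2.0]))
```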

12.
The problem of minimizing a continuously differentiable convex function over an intersection of closed convex sets is ubiquitous in applied mathematics. It is particularly interesting when it is easy to project onto each separate set, but nontrivial to project onto their intersection. Algorithms based on Newton’s method such as the interior point method are viable for small to medium-scale problems. However, modern applications in statistics, engineering, and machine learning are posing problems with potentially tens of thousands of parameters or more. We revisit this convex programming problem and propose an algorithm that scales well with dimensionality. Our proposal is an instance of a sequential unconstrained minimization technique and revolves around three ideas: the majorization-minimization principle, the classical penalty method for constrained optimization, and quasi-Newton acceleration of fixed-point algorithms. The performance of our distance majorization algorithms is illustrated in several applications.
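The three ingredients combine as follows: penalize the squared distances to the sets, and majorize each squared distance at the current iterate by the squared distance to the projection of that iterate. A sketch for the toy objective f(x) = ||x - a||^2 / 2, where the MM update has a closed form; the paper's increasing penalty schedule and quasi-Newton acceleration are omitted:

```python
import numpy as np

def distance_majorization(a, projections, rho=10.0, iters=200):
    """Penalty + MM sketch for min 0.5*||x - a||^2 over an intersection of
    convex sets: each squared distance dist(x, C_i)^2 is majorized at x_k by
    ||x - P_i(x_k)||^2, so every MM step has a closed form."""
    a = np.asarray(a, float)
    x = a.copy()
    m = len(projections)
    for _ in range(iters):
        ps = [P(x) for P in projections]      # project onto each set separately
        # argmin of 0.5*||x - a||^2 + (rho/2) * sum_i ||x - p_i||^2
        x = (a + rho * sum(ps)) / (1.0 + rho * m)
        # (the paper also increases rho and applies quasi-Newton acceleration;
        #  both are omitted in this sketch)
    return x

# intersection of the unit ball with the halfspace {x : x[0] >= 0.5}
proj_ball = lambda x: x / max(1.0, float(np.linalg.norm(x)))
def proj_half(x):
    y = x.copy(); y[0] = max(y[0], 0.5); return y
x_hat = distance_majorization([2.0, 2.0], [proj_ball, proj_half])
```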

13.
《Optimization》2012,61(12):1457-1471
A modified Polak–Ribière–Polyak conjugate gradient algorithm which satisfies both the sufficient descent condition and the conjugacy condition is presented; these properties hold independently of the line search. The algorithm uses the standard Wolfe line search. Under standard assumptions, we show the global convergence of the algorithm. Numerical comparisons with other conjugate gradient algorithms on a set of 750 unconstrained optimization problems, some of them from the CUTE library, show that this computational scheme outperforms the classical Polak–Ribière–Polyak algorithm, as well as some other unconstrained optimization algorithms.
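For orientation, a sketch of a PRP-type conjugate gradient loop with a Wolfe line search. It uses the nonnegative (PRP+) variant of the classical beta and does not reproduce the paper's specific modification that enforces sufficient descent and conjugacy independently of the line search:

```python
import numpy as np
from scipy.optimize import line_search

def prp_cg(f, grad, x0, iters=200, tol=1e-8):
    """PRP-type conjugate gradient sketch with a standard Wolfe line search."""
    x = np.asarray(x0, float)
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        alpha = line_search(f, grad, x, d)[0]        # Wolfe line search
        if alpha is None:                            # line search failed: restart
            d, alpha = -g, 1e-4
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # PRP+ formula
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

x_min = prp_cg(lambda v: (v[0] - 1) ** 2 + 10 * (v[1] - v[0] ** 2) ** 2,
               lambda v: np.array([2 * (v[0] - 1) - 40 * v[0] * (v[1] - v[0] ** 2),
                                   20 * (v[1] - v[0] ** 2)]),
               np.array([-1.2, 1.0]))
```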

14.
The filled function method is an effective approach for the global optimization of multivariable, multi-extremal functions, and its key ingredient is the construction of the filled function. In this paper, without assuming Lipschitz continuity, a class of one-parameter filled functions is proposed for general unconstrained optimization problems. Their filling properties are discussed, and a filled function algorithm for solving constrained global optimization problems is designed. Numerical experiments show that the algorithm is effective.
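The filled-function loop alternates local minimization of f with minimization of a filled function centered at the current minimizer. The sketch below uses the classical one-parameter Q-function of Ge and Qin, Q(x) = -(f(x) - f(x1)) exp(A||x - x1||^2), as a stand-in; it is not the class of filled functions proposed in this paper:

```python
import numpy as np
from scipy.optimize import minimize

def filled_function_search(f, x0, bounds, A=3.0, rounds=10, sigma=0.3, seed=0):
    """Filled-function sketch using the classical one-parameter Q-function
    (Ge and Qin) as a stand-in for the paper's own one-parameter class."""
    rng = np.random.default_rng(seed)
    x1 = minimize(f, x0, method="L-BFGS-B", bounds=bounds).x  # local minimizer of f
    for _ in range(rounds):
        Q = lambda x: -(f(x) - f(x1)) * np.exp(A * np.sum((x - x1) ** 2))
        # minimize the filled function from a perturbed start to leave x1's basin
        y = minimize(Q, x1 + sigma * rng.standard_normal(x1.size),
                     method="L-BFGS-B", bounds=bounds).x
        x2 = minimize(f, y, method="L-BFGS-B", bounds=bounds).x  # re-minimize f
        if f(x2) < f(x1) - 1e-10:
            x1 = x2                                   # moved to a lower basin
    return x1

# two-basin example: local minimum near x = 0.66, global minimum near x = -0.75
f = lambda x: float(x[0] ** 4 - x[0] ** 2 + 0.2 * x[0])
x_glob = filled_function_search(f, np.array([1.0]), bounds=[(-2.0, 2.0)])
```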

15.
As a synchronous parallel framework, the parallel variable transformation (PVT) algorithm is effective for solving unconstrained optimization problems. In this paper, based on the idea that a constrained optimization problem can be converted into an equivalent differentiable unconstrained optimization problem by introducing the Fischer function, we propose an asynchronous PVT algorithm for solving large-scale linearly constrained convex minimization problems. The new algorithm allows a processor to terminate once it satisfies the termination condition, without waiting for the other processors, which enhances practical efficiency on large-scale optimization problems. Global convergence of the new algorithm is established under suitable assumptions; in particular, the linear rate of convergence does not depend on the number of processors.

16.
The self-scaling quasi-Newton method solves an unconstrained optimization problem by scaling the Hessian approximation matrix before it is updated at each iteration, so as to avoid possibly large eigenvalues in the Hessian approximations of the objective function. It has been proved in the literature that this method enjoys global and superlinear convergence when the objective function is convex (or even uniformly convex). We propose to solve unconstrained nonconvex optimization problems by a self-scaling BFGS algorithm with a nonmonotone line search. Nonmonotone line search has been recognized in numerical practice as a competitive approach for solving large-scale nonlinear problems. We consider two different nonmonotone line search forms and study the global convergence of the resulting nonmonotone self-scaling BFGS algorithms. We prove that, under conditions weaker than those in the literature, both forms of the self-scaling BFGS algorithm are globally convergent for unconstrained nonconvex optimization problems.
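A sketch combining the Oren–Luenberger self-scaling BFGS update with a Grippo–Lampariello–Lucidi style nonmonotone (backtracking Armijo) line search; the exact scaling factor and the two nonmonotone forms studied in the paper may differ from these illustrative choices:

```python
import numpy as np

def ss_bfgs_nonmonotone(f, grad, x0, iters=200, M=10, c1=1e-4, tol=1e-8):
    """Self-scaling BFGS sketch with a nonmonotone Armijo backtracking search:
    accept a step if f decreases relative to the max of the last M values."""
    x = np.asarray(x0, float)
    B = np.eye(x.size)                          # Hessian approximation
    hist = [f(x)]                               # recent f-values for the nonmonotone test
    for _ in range(iters):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = np.linalg.solve(B, -g)
        alpha, fref = 1.0, max(hist[-M:])
        while f(x + alpha * d) > fref + c1 * alpha * (g @ d):
            alpha *= 0.5                        # nonmonotone Armijo backtracking
            if alpha < 1e-12:
                break
        s = alpha * d
        y = grad(x + s) - g
        if s @ y > 1e-12:                       # keep B positive definite
            tau = (s @ y) / (s @ B @ s)         # Oren-Luenberger scaling factor
            Bs = B @ s
            B = tau * (B - np.outer(Bs, Bs) / (s @ Bs)) + np.outer(y, y) / (s @ y)
        x = x + s
        hist.append(f(x))
    return x

x_min = ss_bfgs_nonmonotone(
    lambda v: (v[0] - 1) ** 2 + 100 * (v[1] - v[0] ** 2) ** 2,
    lambda v: np.array([2 * (v[0] - 1) - 400 * v[0] * (v[1] - v[0] ** 2),
                        200 * (v[1] - v[0] ** 2)]),
    np.array([-1.2, 1.0]))
```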

17.
The trust region problem, minimization of a quadratic function subject to a spherical trust region constraint, occurs in many optimization algorithms. In a previous paper, the authors introduced an inexpensive approximate solution technique for this problem that involves the solution of a two-dimensional trust region problem. They showed that using this approximation in an unconstrained optimization algorithm leads to the same theoretical global and local convergence properties as are obtained using the exact solution of the trust region problem. This paper reports computational results showing that the two-dimensional minimization approach gives nearly optimal reductions in the n-dimensional quadratic model over a wide range of test cases. We also show that there is very little difference, in efficiency and reliability, between using the approximate or exact trust region step in solving standard test problems for unconstrained optimization. These results may encourage the application of similar approximate trust region techniques in other contexts. Research supported by ARO contract DAAG 29-84-K-0140, NSF grant DCR-8403483, and NSF cooperative agreement DCR-8420944.
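The two-dimensional idea: restrict the quadratic model to the span of the steepest-descent and Newton directions and solve the trust-region problem there. In the sketch below a polar grid search over the 2-D disk stands in for the exact two-dimensional solve:

```python
import numpy as np

def two_dim_tr_step(g, B, delta, n_r=40, n_theta=180):
    """Approximate trust-region step: minimize m(p) = g'p + 0.5 p'Bp over
    span{-g, -B^{-1} g} intersected with ||p|| <= delta (grid-search stand-in)."""
    basis = np.column_stack([-g, np.linalg.solve(B, -g)])
    V = np.linalg.qr(basis)[0]                # orthonormal basis of the subspace
    gt, Bt = V.T @ g, V.T @ B @ V             # model reduced to two dimensions
    m = lambda z: gt @ z + 0.5 * z @ Bt @ z
    best, best_val = np.zeros(2), 0.0
    for r in np.linspace(delta / n_r, delta, n_r):
        for t in np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False):
            z = r * np.array([np.cos(t), np.sin(t)])
            if m(z) < best_val:
                best, best_val = z, m(z)
    return V @ best                           # lift the step back to R^n

p = two_dim_tr_step(np.array([1.0, -2.0, 0.5]), np.diag([1.0, 4.0, 9.0]), delta=1.0)
```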

18.
We present a branch and bound algorithm for the global optimization of a twice differentiable nonconvex objective function with a Lipschitz continuous Hessian over a compact, convex set. The algorithm is based on applying cubic regularisation techniques to the objective function within an overlapping branch and bound algorithm for convex constrained global optimization. Unlike other branch and bound algorithms, lower bounds are obtained via nonconvex underestimators of the function. For a numerical example, we apply the proposed branch and bound algorithm to radial basis function approximations.
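The lower-bounding ingredient can be sketched in one dimension: when the second derivative is L_H-Lipschitz, the cubic m(x) = f(c) + f'(c)(x - c) + (1/2) f''(c)(x - c)^2 - (L_H/6)|x - c|^3 underestimates f, and its minimum over the box is a valid lower bound. A grid minimization of the cubic stands in for an exact solve:

```python
import numpy as np

def cubic_lower_bound(f, df, d2f, L_H, a, b, grid=1000):
    """Lower bound on f over [a, b] from a cubic underestimator built at the
    midpoint c, valid when f'' is L_H-Lipschitz on [a, b]."""
    c = 0.5 * (a + b)
    x = np.linspace(a, b, grid)
    m = (f(c) + df(c) * (x - c) + 0.5 * d2f(c) * (x - c) ** 2
         - (L_H / 6.0) * np.abs(x - c) ** 3)
    return m.min()                            # minimum of the underestimator

# f(x) = sin(x): |f'''| <= 1, so L_H = 1 is a valid Hessian Lipschitz constant
lb = cubic_lower_bound(np.sin, np.cos, lambda x: -np.sin(x), 1.0, 0.0, 2.0)
```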

19.
A successive unconstrained dual optimization (SUDO) method is developed to solve the best rank-one approximation problem, in the least-squares sense, for higher-order tensors. The constrained dual program of the tensor rank-one approximation is transformed into a sequence of unconstrained optimization problems, for which a fast gradient method is proposed. We introduce the steepest ascent direction, an initial step-length strategy, and a backtracking line search rule for each iteration. A proof of the global convergence of the SUDO algorithm is given. Preliminary numerical experiments show that our method outperforms the alternating least squares (ALS) method.
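For context, a sketch of the alternating least squares (ALS) baseline the abstract compares against, not the SUDO method itself: the best rank-one approximation lambda * a ⊗ b ⊗ c of a 3-way tensor is computed by cyclically updating each unit factor:

```python
import numpy as np

def rank_one_als(T, iters=100, seed=0):
    """Rank-one ALS sketch for a 3-way tensor: fix two factors, solve for the
    third by tensor contraction, normalize, and cycle."""
    rng = np.random.default_rng(seed)
    a, b, c = (rng.standard_normal(n) for n in T.shape)
    for _ in range(iters):
        a = np.einsum('ijk,j,k->i', T, b, c); a /= np.linalg.norm(a)
        b = np.einsum('ijk,i,k->j', T, a, c); b /= np.linalg.norm(b)
        c = np.einsum('ijk,i,j->k', T, a, b); c /= np.linalg.norm(c)
    lam = np.einsum('ijk,i,j,k->', T, a, b, c)   # optimal scale for unit factors
    return lam, a, b, c

lam, a, b, c = rank_one_als(np.random.default_rng(1).standard_normal((3, 4, 5)))
```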

20.
A constrained minimax problem is converted to the minimization of a sequence of unconstrained and continuously differentiable functions, in a manner similar to Morrison's method for constrained optimization. One can thus apply any efficient gradient minimization technique to perform the unconstrained minimization at each step of the sequence. Based on this approach, two algorithms are proposed: the first is simpler to program, and the second is faster in general. To show the efficiency of the algorithms even for unconstrained problems, examples are used to compare the two algorithms with recent methods in the literature. It is found that the second algorithm converges faster than the other methods. Several constrained examples are also tried and the results are presented.
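A sketch of the general idea of replacing a minimax problem by a sequence of smooth unconstrained minimizations. Here the log-sum-exp surrogate (1/p) log Σ exp(p f_i(x)) with increasing p stands in; this smoothing is not Morrison's construction, and constraints are omitted:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

def minimax_smooth(fs, x0, ps=(1.0, 10.0, 100.0, 1000.0)):
    """Minimax sketch: minimize the smooth surrogate
    (1/p) * log(sum_i exp(p * f_i(x))), warm-starting as p increases; the
    surrogate overestimates max_i f_i(x) by at most log(len(fs)) / p."""
    x = np.asarray(x0, float)
    for p in ps:
        surrogate = lambda x, p=p: logsumexp(p * np.array([f(x) for f in fs])) / p
        x = minimize(surrogate, x).x          # smooth, unconstrained subproblem
    return x

# min_x max(x^2, (x - 2)^2) has its minimax point at x = 1
fs = [lambda x: float(x[0] ** 2), lambda x: float((x[0] - 2.0) ** 2)]
x_star = minimax_smooth(fs, np.array([5.0]))
```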
