Similar Documents
20 similar documents found (search time: 31 ms)
1.
We give an approach for finding a global minimum of problems with equality and inequality constraints. Our approach is to construct an exact penalty function and prove that the global minimum points of this exact penalty function are the global minimum points of the original constrained problem. Thus we convert the constrained global optimization problem into an unconstrained global optimization problem. Furthermore, the integral approach for finding a global minimum of a class of discontinuous functions is used, and an implementable algorithm is given.
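
As a rough illustration of the idea of converting a constrained problem into an unconstrained one via an exact penalty (not the specific construction or the integral-based global search of this paper), the sketch below penalizes a made-up two-variable problem with one equality and one inequality constraint and then applies crude multistart local minimization; the objective, constraints, and penalty weight are assumptions.

    import numpy as np
    from scipy.optimize import minimize

    # Toy problem (assumed): minimize f subject to h(x) = 0 and g(x) <= 0.
    f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 2.0) ** 2
    h = lambda x: x[0] + x[1] - 1.0          # equality constraint h(x) = 0
    g = lambda x: x[0] ** 2 - 4.0            # inequality constraint g(x) <= 0

    rho = 50.0                               # penalty weight, assumed above the exactness threshold

    def exact_penalty(x):
        # Nonsmooth l1 exact penalty: for rho beyond a finite threshold, global
        # minimizers of this function are global minimizers of the constrained problem.
        return f(x) + rho * (abs(h(x)) + max(0.0, g(x)))

    # Crude global phase: multistart Nelder-Mead on the (nonsmooth) penalty function.
    rng = np.random.default_rng(0)
    best = min(
        (minimize(exact_penalty, rng.uniform(-5, 5, size=2), method="Nelder-Mead")
         for _ in range(30)),
        key=lambda r: r.fun,
    )
    print("approximate constrained global minimizer:", best.x)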

2.
A branch-and-bound method for nonconvex quadratic programming problems with bound constraints
This paper studies nonconvex quadratic programming problems with bound constraints. Taking the ball-constrained quadratic programming problem and the linearly constrained convex quadratic programming problem as subproblems, we employ, for each, an effective algorithm for finding its global optimal solution. We propose several tight and relaxed bounding strategies, give a branch-and-bound algorithm for finding a global optimal solution of the original problem, and prove its convergence; different bounding combinations yield different branch-and-bound algorithms. Finally, we briefly discuss branch-and-bound ideas for finding global optimal solutions of nonconvex quadratic programming problems over general bounded convex domains.
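
A generic rectangular branch-and-bound sketch for a small box-constrained nonconvex quadratic program is given below, to illustrate the branch/bound/prune pattern only; the lower bound here comes from simple interval arithmetic, not from the ball-constrained and convex-QP subproblem bounds described in the paper, and the data are made-up assumptions.

    import numpy as np
    from itertools import product

    Q = np.array([[2.0, -4.0], [-4.0, 1.0]])      # indefinite Hessian -> nonconvex QP (assumed data)
    c = np.array([1.0, -1.0])
    f = lambda x: 0.5 * x @ Q @ x + c @ x

    def lower_bound(l, u):
        # Bound each quadratic and linear term over the box by interval arithmetic.
        lb = 0.0
        for i in range(len(l)):
            for j in range(len(l)):
                p = [l[i] * l[j], l[i] * u[j], u[i] * l[j], u[i] * u[j]]
                lb += 0.5 * Q[i, j] * (min(p) if Q[i, j] >= 0 else max(p))
            lb += c[i] * (l[i] if c[i] >= 0 else u[i])
        return lb

    boxes = [(np.array([-2.0, -2.0]), np.array([2.0, 2.0]))]
    incumbent, best_x = np.inf, None
    while boxes:
        l, u = boxes.pop()
        if lower_bound(l, u) > incumbent - 1e-6:
            continue                                    # prune: no better point in this box
        for corner in product(*zip(l, u)):              # cheap upper bounds from the box corners
            xc = np.array(corner)
            if f(xc) < incumbent:
                incumbent, best_x = f(xc), xc
        if np.max(u - l) > 1e-3:                        # branch by bisecting the longest edge
            i = int(np.argmax(u - l))
            mid = 0.5 * (l[i] + u[i])
            u1, l2 = u.copy(), l.copy()
            u1[i], l2[i] = mid, mid
            boxes += [(l, u1), (l2, u)]
    print("approximate global minimum:", incumbent, "attained near", best_x)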

3.
This paper considers the nonlinearly constrained continuous global minimization problem. Based on the idea of the penalty function method, an auxiliary function, which has approximately the same global minimizers as the original problem, is constructed. An algorithm is developed to minimize the auxiliary function to find an approximate constrained global minimizer of the constrained global minimization problem. The algorithm can escape from previously found local minimizers, and can converge to an approximate global minimizer of the problem asymptotically with probability one. Numerical experiments show that it is better than some other well-known recent methods for constrained global minimization problems.

4.
In this article, we study convergence of the extragradient method for constrained convex minimization problems in a Hilbert space. Our goal is to obtain an ε-approximate solution of the problem in the presence of computational errors, where ε is a given positive number. Most results known in the literature establish convergence of optimization algorithms when computational errors are summable. In this article, the convergence of the extragradient method for solving convex minimization problems is established for nonsummable computational errors. We show that the extragradient method generates a good approximate solution if the sequence of computational errors is bounded from above by a constant.
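
A minimal finite-dimensional sketch of the extragradient iteration with bounded (nonsummable) gradient errors is given below; the quadratic objective, the unit-ball constraint set, and the size of the injected errors are assumptions for illustration only.

    import numpy as np

    A = np.array([[3.0, 1.0], [1.0, 2.0]])          # f(x) = 0.5 x'Ax - b'x (assumed convex data)
    b = np.array([1.0, -2.0])
    grad = lambda x: A @ x - b
    proj = lambda x: x if np.linalg.norm(x) <= 1 else x / np.linalg.norm(x)   # projection onto the unit ball C

    rng = np.random.default_rng(1)
    noisy_grad = lambda x: grad(x) + 1e-3 * rng.uniform(-1, 1, 2)             # errors bounded by a small constant

    x, step = np.zeros(2), 1.0 / np.linalg.norm(A, 2)                         # step of order 1/L
    for _ in range(200):
        y = proj(x - step * noisy_grad(x))          # prediction step
        x = proj(x - step * noisy_grad(y))          # correction (extragradient) step
    print("epsilon-approximate solution:", x)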

5.
The trust region problem, minimization of a quadratic function subject to a spherical trust region constraint, occurs in many optimization algorithms. In a previous paper, the authors introduced an inexpensive approximate solution technique for this problem that involves the solution of a two-dimensional trust region problem. They showed that using this approximation in an unconstrained optimization algorithm leads to the same theoretical global and local convergence properties as are obtained using the exact solution to the trust region problem. This paper reports computational results showing that the two-dimensional minimization approach gives nearly optimal reductions in the n-dimensional quadratic model over a wide range of test cases. We also show that there is very little difference, in efficiency and reliability, between using the approximate or exact trust region step in solving standard test problems for unconstrained optimization. These results may encourage the application of similar approximate trust region techniques in other contexts. Research supported by ARO contract DAAG 29-84-K-0140, NSF grant DCR-8403483, and NSF cooperative agreement DCR-8420944.
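
The sketch below restricts the trust region subproblem to the two-dimensional subspace spanned by the gradient and the Newton step, which is the general flavour of the approximation discussed here; the dense scan of the boundary circle used to solve the reduced 2-D problem, and the random model data, are simplifying assumptions rather than the authors' procedure.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 6
    B = rng.standard_normal((n, n)); B = 0.5 * (B + B.T) + n * np.eye(n)   # SPD model Hessian (assumed)
    g = rng.standard_normal(n)
    Delta = 0.8

    V, _ = np.linalg.qr(np.column_stack([g, np.linalg.solve(B, g)]))       # orthonormal basis of span{g, B^{-1}g}
    g2, B2 = V.T @ g, V.T @ B @ V                                          # reduced two-dimensional model

    alpha = -np.linalg.solve(B2, g2)                                       # unconstrained 2-D minimizer
    if np.linalg.norm(alpha) > Delta:
        thetas = np.linspace(0.0, 2 * np.pi, 2000)
        cand = Delta * np.column_stack([np.cos(thetas), np.sin(thetas)])
        vals = cand @ g2 + 0.5 * np.einsum("ij,jk,ik->i", cand, B2, cand)
        alpha = cand[np.argmin(vals)]                                      # best point on the trust region boundary

    p = V @ alpha                                                          # approximate trust region step in R^n
    print("step norm:", np.linalg.norm(p), "model reduction:", -(g @ p + 0.5 * p @ B @ p))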

6.
The problem of minimizing a continuously differentiable convex function over an intersection of closed convex sets is ubiquitous in applied mathematics. It is particularly interesting when it is easy to project onto each separate set, but nontrivial to project onto their intersection. Algorithms based on Newton’s method such as the interior point method are viable for small to medium-scale problems. However, modern applications in statistics, engineering, and machine learning are posing problems with potentially tens of thousands of parameters or more. We revisit this convex programming problem and propose an algorithm that scales well with dimensionality. Our proposal is an instance of a sequential unconstrained minimization technique and revolves around three ideas: the majorization-minimization principle, the classical penalty method for constrained optimization, and quasi-Newton acceleration of fixed-point algorithms. The performance of our distance majorization algorithms is illustrated in several applications.
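
A minimal distance-majorization sketch follows: projecting a point onto the intersection of a ball and a halfspace, two sets whose individual projections are trivial. Each MM step majorizes the squared distance to a set by the squared distance to the projection of the current iterate, and the penalty parameter is slowly increased; the quasi-Newton acceleration of the paper is omitted, and the sets, the point, and the penalty schedule are assumptions.

    import numpy as np

    y = np.array([3.0, 2.0])                                                              # point to project (assumed)
    proj_ball = lambda x: x if np.linalg.norm(x) <= 1.0 else x / np.linalg.norm(x)        # C1: unit ball
    proj_half = lambda x: x - max(0.0, x[0] + x[1] - 1.0) / 2.0 * np.array([1.0, 1.0])    # C2: x1 + x2 <= 1

    x, mu = y.copy(), 1.0
    for k in range(500):
        p1, p2 = proj_ball(x), proj_half(x)
        # Closed-form minimizer of 0.5||x - y||^2 + (mu/2)(||x - p1||^2 + ||x - p2||^2):
        x = (y + mu * (p1 + p2)) / (1.0 + 2.0 * mu)
        mu *= 1.02                                   # classical penalty-method ramp-up
    print("projection onto the intersection:", x)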

7.
We present a new strategy for the constrained global optimization of expensive black box functions using response surface models. A response surface model is simply a multivariate approximation of a continuous black box function which is used as a surrogate model for optimization in situations where function evaluations are computationally expensive. Prior global optimization methods that utilize response surface models were limited to box-constrained problems, but the new method can easily incorporate general nonlinear constraints. In the proposed method, which we refer to as the Constrained Optimization using Response Surfaces (CORS) Method, the next point for costly function evaluation is chosen to be the one that minimizes the current response surface model subject to the given constraints and to an additional constraint that the point lie at some minimum distance from previously evaluated points. The distance requirement is allowed to cycle, starting from a high value (global search) and ending with a low value (local search). The purpose of the constraint is to drive the method towards unexplored regions of the domain and to prevent the premature convergence of the method to some point which may not even be a local minimizer of the black box function. The new method can be shown to converge to the global minimizer of any continuous function on a compact set regardless of the response surface model that is used. Finally, we consider two particular implementations of the CORS method that utilize a radial basis function model (CORS-RBF) and apply them to the box-constrained Dixon–Szegö test functions and to a simple nonlinearly constrained test function. The results indicate that the CORS-RBF algorithms are competitive with existing global optimization algorithms for costly functions on the box-constrained test problems. The results also show that the CORS-RBF algorithms are better than other algorithms for constrained global optimization on the nonlinearly constrained test problem.
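
A highly simplified CORS-style loop is sketched below (an illustrative sketch, not the authors' CORS-RBF implementation): a cubic RBF surrogate with a linear tail is fit to the points evaluated so far, and the next expensive evaluation is chosen by minimizing the surrogate over random feasible candidates that also respect a cycling minimum-distance requirement; the black-box objective, the constraint, the distance cycle, and the use of random candidate sampling instead of a true constrained minimization of the surrogate are all assumptions.

    import numpy as np

    expensive_f = lambda x: (x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2 + 0.1 * np.sin(8 * x[0])
    feasible = lambda z: z[0] + z[1] <= 1.2            # general (here linear) constraint, assumed
    lb, ub = np.zeros(2), np.ones(2)

    def fit_rbf(X, y):
        # Cubic RBF phi(r) = r^3 with a linear polynomial tail, fit by a dense linear solve.
        r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
        P = np.hstack([np.ones((len(X), 1)), X])
        A = np.block([[r ** 3, P], [P.T, np.zeros((3, 3))]])
        coef = np.linalg.solve(A, np.concatenate([y, np.zeros(3)]))
        lam, b = coef[: len(X)], coef[len(X):]
        return lambda z: (np.linalg.norm(z - X, axis=1) ** 3) @ lam + b[0] + z @ b[1:]

    rng = np.random.default_rng(3)
    X = rng.uniform(lb, ub, size=(5, 2))               # small initial design
    y = np.array([expensive_f(x) for x in X])
    distance_cycle = [0.3, 0.1, 0.02, 0.0]             # global -> local search pattern (assumed values)
    for k in range(20):
        s = fit_rbf(X, y)
        dmin = distance_cycle[k % len(distance_cycle)]
        cand = rng.uniform(lb, ub, size=(2000, 2))
        ok = np.array([feasible(z) for z in cand])
        far = np.min(np.linalg.norm(cand[:, None, :] - X[None, :, :], axis=2), axis=1) >= dmin
        cand = cand[ok & far]
        if len(cand) == 0:
            continue
        x_new = cand[np.argmin([s(z) for z in cand])]  # surrogate minimizer among the candidates
        X, y = np.vstack([X, x_new]), np.append(y, expensive_f(x_new))
    print("best point found:", X[np.argmin(y)], "value:", y.min())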

8.
In this work we propose a Cauchy-like method for solving smooth unconstrained vector optimization problems. When the partial order under consideration is the one induced by the nonnegative orthant, we regain the steepest descent method for multicriteria optimization recently proposed by Fliege and Svaiter. We prove that every accumulation point of the generated sequence satisfies a certain first-order necessary condition for optimality, which extends to the vector case the well known “gradient equal zero” condition for real-valued minimization. Finally, under some reasonable additional hypotheses, we prove (global) convergence to a weak unconstrained minimizer.As a by-product, we show that the problem of finding a weak constrained minimizer can be viewed as a particular case of the so-called Abstract Equilibrium problem.  相似文献   
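
The sketch below computes the multicriteria steepest descent direction of Fliege and Svaiter, d(x) = argmin_d max_i ∇f_i(x)'d + 0.5||d||^2, via its epigraph reformulation, and stops when d(x) ≈ 0 (the vector analogue of the "gradient equal zero" condition); the two objectives and the fixed step size are assumptions for illustration.

    import numpy as np
    from scipy.optimize import minimize

    f1 = lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2
    f2 = lambda x: x[0] ** 2 + (x[1] + 1.0) ** 2
    grads = lambda x: np.array([[2 * (x[0] - 1.0), 2 * x[1]],
                                [2 * x[0], 2 * (x[1] + 1.0)]])

    def descent_direction(x):
        G = grads(x)
        # Epigraph form: minimize t + 0.5||d||^2  subject to  G d <= t (componentwise).
        obj = lambda z: z[-1] + 0.5 * np.dot(z[:-1], z[:-1])
        cons = [{"type": "ineq", "fun": lambda z, gi=gi: z[-1] - gi @ z[:-1]} for gi in G]
        return minimize(obj, np.zeros(3), constraints=cons, method="SLSQP").x[:-1]

    x = np.array([2.0, 2.0])
    for _ in range(100):
        d = descent_direction(x)
        if np.linalg.norm(d) < 1e-6:        # Pareto-critical: no common descent direction
            break
        x = x + 0.5 * d                      # fixed step; an Armijo search would be used in practice
    print("approximately Pareto-critical point:", x)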

9.
A class of algorithms for nonlinearly constrained optimization problems is proposed. The subproblems of the algorithms are linearly constrained quadratic minimization problems which contain an updated estimate of the Hessian of the Lagrangian. Under suitable conditions and updating schemes local convergence and a superlinear rate of convergence are established. The convergence proofs require among other things twice differentiable objective and constraint functions, while the calculations use only first derivative data. Rapid convergence has been obtained in a number of test problems by using a program based on the algorithms proposed here. Research supported by NSF Grant GJ-35292 at the University of Wisconsin.
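
A generic sketch of this algorithm class for a single equality constraint is given below: each iteration solves a linearly constrained quadratic subproblem (via its KKT system) built from an updated Hessian estimate of the Lagrangian, using first derivatives only. The test problem, the plain BFGS update, and the full unit step are assumptions, not the paper's specific updating schemes or safeguards.

    import numpy as np

    f_grad = lambda x: np.array([1.0, 1.0])                  # f(x) = x1 + x2 (assumed)
    c = lambda x: x[0] ** 2 + x[1] ** 2 - 2.0                # constraint c(x) = 0 (assumed)
    c_grad = lambda x: np.array([2 * x[0], 2 * x[1]])

    x, lam, H = np.array([-1.5, -0.5]), 0.0, np.eye(2)
    for _ in range(30):
        g, a = f_grad(x), c_grad(x)
        # QP subproblem: min 0.5 d'Hd + g'd  s.t.  a'd + c(x) = 0, solved through its KKT system.
        K = np.block([[H, a[:, None]], [a[None, :], np.zeros((1, 1))]])
        sol = np.linalg.solve(K, np.concatenate([-g, [-c(x)]]))
        d, lam = sol[:2], sol[2]
        # BFGS update of the Lagrangian Hessian estimate, built from first derivatives only.
        s = d
        y_vec = (f_grad(x + d) + lam * c_grad(x + d)) - (g + lam * a)
        if s @ y_vec > 1e-10:                                # skip the update if curvature is not positive
            H = H - np.outer(H @ s, H @ s) / (s @ H @ s) + np.outer(y_vec, y_vec) / (y_vec @ s)
        x = x + d                                             # full step; a line search or merit function is used in practice
    print("KKT point:", x, "multiplier:", lam)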

10.
In this paper, we address the global optimization of functions subject to bound and linear constraints without using derivatives of the objective function. We investigate the use of derivative-free models based on radial basis functions (RBFs) in the search step of direct-search methods of directional type. We also study the application of algorithms based on difference of convex (d.c.) functions programming to solve the resulting subproblems which consist of the minimization of the RBF models subject to simple bounds on the variables. Extensive numerical results are reported with a test set of bound and linearly constrained problems.

11.
The aim of this paper is to show that the new continuously differentiable exact penalty functions recently proposed in the literature can play an important role in the field of constrained global optimization. In fact they allow us to transfer ideas and results proposed in unconstrained global optimization to the constrained case. First, by drawing our inspiration from the unconstrained case and by using the strong exactness properties of a particular continuously differentiable penalty function, we propose a sufficient condition for a local constrained minimum point to be global. Then we show that every constrained local minimum point satisfying the second order sufficient conditions is an attraction point for a particular implementable minimization algorithm based on the considered penalty function. This result can be used to define new classes of global algorithms for the solution of general constrained global minimization problems. As an example, in this paper we describe a simulated annealing algorithm which produces a sequence of points converging in probability to a global minimum of the original constrained problem.
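
In the same spirit (annealing on a penalized objective), a minimal sketch follows; for brevity it uses a simple nondifferentiable l1 penalty rather than the continuously differentiable exact penalty this paper relies on, and the problem data, cooling schedule, and proposal scale are assumptions.

    import numpy as np

    f = lambda x: np.cos(3 * x[0]) + (x[0] - 0.5) ** 2 + (x[1] - 0.2) ** 2
    g = lambda x: x[0] ** 2 + x[1] ** 2 - 1.0               # inequality constraint g(x) <= 0 (assumed)
    penalized = lambda x, rho=20.0: f(x) + rho * max(0.0, g(x))

    rng = np.random.default_rng(4)
    x = rng.uniform(-2, 2, 2)
    best_x, best_val, T = x.copy(), penalized(x), 1.0
    for k in range(5000):
        cand = x + 0.3 * rng.standard_normal(2)              # random proposal
        delta = penalized(cand) - penalized(x)
        if delta < 0 or rng.random() < np.exp(-delta / T):   # Metropolis acceptance rule
            x = cand
        if penalized(x) < best_val:
            best_x, best_val = x.copy(), penalized(x)
        T *= 0.999                                           # geometric cooling
    print("approximate constrained global minimizer:", best_x)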

12.
This paper considers constrained and unconstrained parametric global optimization problems in a real Hilbert space. We assume that the gradient of the cost functional is Lipschitz continuous but not smooth. A suitable choice of parameters implies the linear or superlinear (supergeometric) convergence of the iterative method. From the numerical experiments, we conclude that our algorithm is faster than other existing algorithms for continuous but nonsmooth problems, when applied to unconstrained global optimization problems. However, because we solve 2n + 1 subproblems for a large number n of independent variables, our algorithm is somewhat slower than other algorithms when applied to constrained global optimization. This work was partially supported by the NATO Outreach Fellowship - Mathematics 219.33. We thank Professor Hans D. Mittelmann, Arizona State University, for cooperation and support.

13.
In this paper, we first examine how global optimality of non-convex constrained optimization problems is related to Lagrange multiplier conditions. We then establish Lagrange multiplier conditions for global optimality of general quadratic minimization problems with quadratic constraints. We also obtain necessary global optimality conditions, which are different from the Lagrange multiplier conditions for special classes of quadratic optimization problems. These classes include weighted least squares with ellipsoidal constraints, and quadratic minimization with binary constraints. We discuss examples which demonstrate that our optimality conditions can effectively be used for identifying global minimizers of certain multi-extremal non-convex quadratic optimization problems. The work of Z. Y. Wu was carried out while the author was at the Department of Applied Mathematics, University of New South Wales, Sydney, Australia.

14.
A barrier function method for minimax problems
This paper presents an algorithm based on barrier functions for solving semi-infinite minimax problems which arise in an engineering design setting. The algorithm bears a resemblance to some of the current interior penalty function methods used to solve constrained minimization problems. Global convergence is proven, and numerical results are reported which show that the algorithm is exceptionally robust, and that its performance is comparable to, and its structure simpler than, that of current first-order minimax algorithms. This research was supported by the National Science Foundation grant ECS-8517362, the Air Force Office of Scientific Research grant 86-0116, the California State MICRO program, and the United Kingdom Science and Engineering Research Council.
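
To illustrate the interior-penalty flavour on a finite (not semi-infinite) minimax problem, the sketch below writes min_x max_i f_i(x) in epigraph form and handles the epigraph constraints with a log barrier whose weight is driven to zero; the f_i, the barrier schedule, and the derivative-free inner solver (Nelder-Mead, for brevity) are assumptions, and a semi-infinite problem would additionally require discretizing the index set.

    import numpy as np
    from scipy.optimize import minimize

    fs = [lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2,
          lambda x: (x[0] + 1.0) ** 2 + x[1] ** 2,
          lambda x: x[0] ** 2 + (x[1] - 1.0) ** 2]

    def barrier(z, mu):
        # Epigraph variable t with log barrier on the constraints f_i(x) <= t.
        x, t = z[:-1], z[-1]
        slacks = np.array([t - fi(x) for fi in fs])
        if np.any(slacks <= 0):
            return np.inf                       # outside the interior of the epigraph
        return t - mu * np.sum(np.log(slacks))

    x = np.zeros(2)
    t = max(fi(x) for fi in fs) + 1.0           # strictly feasible start for the epigraph variable
    z, mu = np.append(x, t), 1.0
    for _ in range(8):
        z = minimize(lambda v: barrier(v, mu), z, method="Nelder-Mead").x
        mu *= 0.2                               # shrink the barrier weight
    print("minimax solution:", z[:-1], "minimax value:", z[-1])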

15.
A constrained minimax problem is converted to minimization of a sequence of unconstrained and continuously differentiable functions in a manner similar to Morrison's method for constrained optimization. One can thus apply any efficient gradient minimization technique to do the unconstrained minimization at each step of the sequence. Based on this approach, two algorithms are proposed, where the first one is simpler to program, and the second one is faster in general. To show the efficiency of the algorithms even for unconstrained problems, examples are taken to compare the two algorithms with recent methods in the literature. It is found that the second algorithm converges faster than the other methods. Several constrained examples are also tried and the results are presented.

16.
Optimization, 2012, 61(3): 403–419
In this article, the application of the electromagnetism-like method (EM) for solving constrained optimization problems is investigated. A number of penalty functions have been tested with EM in this investigation, and their merits and demerits have been discussed. We have also provided motivations for such an investigation. Finally, we have compared EM with two recent global optimization algorithms from the literature. We have shown that EM is a suitable alternative to these methods and that it has a role to play in solving constrained global optimization problems.

17.
We propose an algorithm for constrained global optimization to tackle non-convex nonlinear multivariate polynomial programming problems. The proposed Bernstein branch and prune algorithm is based on the Bernstein polynomial approach. We introduce several new features in this proposed algorithm to make the algorithm more efficient. We first present the Bernstein box consistency and Bernstein hull consistency algorithms to prune the search regions. We then give a Bernstein contraction algorithm to avoid the computation of Bernstein coefficients after the pruning operation. We also include a new Bernstein cut-off test based on the vertex property of the Bernstein coefficients. The performance of the proposed algorithm is numerically tested on 13 benchmark problems. The results of the tests show the proposed algorithm to be overall considerably superior to existing methods in terms of the chosen performance metrics.
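
A univariate illustration of the Bernstein range enclosure underlying such branch-and-prune methods is sketched below: the Bernstein coefficients of a polynomial on [0, 1] bound its range, and (vertex property) the lower bound is sharp exactly when the smallest coefficient sits at one of the two endpoint indices. The multivariate tensor form, the box/hull consistency steps, and the contraction operation of the paper are omitted, and the example polynomial is an assumption.

    import numpy as np
    from math import comb

    a = np.array([1.0, -4.0, 1.0, 3.0])              # p(x) = 1 - 4x + x^2 + 3x^3 in the power basis (assumed)
    n = len(a) - 1

    def bernstein_coeffs(a, n):
        # b_i = sum_{j<=i} [C(i, j) / C(n, j)] * a_j  on the interval [0, 1].
        return np.array([sum(comb(i, j) / comb(n, j) * a[j] for j in range(i + 1))
                         for i in range(n + 1)])

    b = bernstein_coeffs(a, n)
    print("Bernstein coefficients:", b)
    print("range enclosure on [0,1]:", (b.min(), b.max()))
    print("lower bound sharp (vertex test):", np.argmin(b) in (0, n))

    # Sanity check against a dense sample of p on [0, 1].
    xs = np.linspace(0.0, 1.0, 1001)
    px = np.polyval(a[::-1], xs)
    print("sampled range:", (px.min(), px.max()))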

18.
On piecewise quadratic Newton and trust region problems
Some recent algorithms for nonsmooth optimization require solutions to certain piecewise quadratic programming subproblems. Two types of subproblems are considered in this paper. The first type seeks the minimization of a continuously differentiable and strictly convex piecewise quadratic function subject to linear equality constraints. We prove that a nonsmooth version of Newton’s method is globally and finitely convergent in this case. The second type involves the minimization of a possibly nonconvex and nondifferentiable piecewise quadratic function over a Euclidean ball. Characterizations of the global minimizer are studied under various conditions. The results extend a classical result on the trust region problem. Partially supported by National University of Singapore under grant 930033.

19.
In this paper, we propose two interior proximal algorithms inspired by the logarithmic-quadratic proximal method. The first method we propose is for general linearly constrained quasiconvex minimization problems. For this method, we prove global convergence when the regularization parameters go to zero. The latter assumption can be dropped when the function is assumed to be pseudoconvex. We also obtain convergence results for quasimonotone variational inequalities, which are more general than monotone ones.

20.
The flower pollination algorithm (FPA) is a relatively new swarm optimization algorithm inspired by the pollination behaviour of flowering plants (phanerogams). Since it was proposed, it has received widespread attention and has been applied in various engineering fields. However, the FPA still has certain drawbacks, such as inadequate optimization precision and poor convergence. In this paper, an innovative flower pollination algorithm based on cloud mutation (CMFPA) is proposed, which adds information from all dimensions in the global optimization stage and uses the designed cloud mutation method to redistribute the population center. To verify the performance of the CMFPA in solving continuous optimization problems, we test it on twenty-four well-known functions, the composition functions of CEC2013, and all benchmark functions of CEC2017. The results demonstrate that the CMFPA has better performance compared with other state-of-the-art algorithms. In addition, the CMFPA is applied to five constrained optimization problems from practical engineering, and its performance is compared with state-of-the-art algorithms to further demonstrate the effectiveness and efficiency of the CMFPA.
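
For orientation only, a textbook FPA sketch follows, showing the switch between global pollination (Lévy-flight moves toward the current best) and local pollination (mixing two random flowers); the cloud mutation and full-dimension update that define the paper's CMFPA are not implemented here, and the objective, bounds, and parameters are assumptions.

    import numpy as np
    from math import gamma, pi, sin

    f = lambda x: np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10)   # Rastrigin test function
    dim, n_flowers, p_switch, lb, ub = 5, 25, 0.8, -5.12, 5.12

    def levy(size, rng, beta=1.5):
        # Mantegna's algorithm for Levy-stable step lengths.
        sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
                 (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
        u = rng.normal(0, sigma, size)
        v = rng.normal(0, 1, size)
        return u / np.abs(v) ** (1 / beta)

    rng = np.random.default_rng(5)
    pop = rng.uniform(lb, ub, (n_flowers, dim))
    fit = np.array([f(x) for x in pop])
    best = pop[np.argmin(fit)].copy()
    for _ in range(500):
        for i in range(n_flowers):
            if rng.random() < p_switch:                      # global pollination
                new = pop[i] + levy(dim, rng) * (best - pop[i])
            else:                                            # local pollination
                j, k = rng.choice(n_flowers, 2, replace=False)
                new = pop[i] + rng.random() * (pop[j] - pop[k])
            new = np.clip(new, lb, ub)
            if f(new) < fit[i]:
                pop[i], fit[i] = new, f(new)
        best = pop[np.argmin(fit)].copy()
    print("best value found:", fit.min(), "at", best)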
