Similar literature
20 similar records retrieved
1.
Based on the NEWUOA algorithm, a new derivative-free algorithm is developed, named LCOBYQA. The main aim of the algorithm is to find a minimizer $x^{*} \in\mathbb{R}^{n}$ of a non-linear function, whose derivatives are unavailable, subject to linear inequality constraints. The algorithm is based on a model of the given function constructed from a set of interpolation points. LCOBYQA is iterative: at each iteration it constructs a quadratic approximation (model) of the objective function that satisfies the interpolation conditions and leaves some freedom in the model. The remaining freedom is resolved by minimizing the Frobenius norm of the change to the second-derivative matrix of the model. The model is then minimized within a trust-region subproblem, solved by the conjugate gradient method, to obtain a new iterate. At times the new iterate is found from a model-improvement iteration, designed to improve the geometry of the interpolation points. Numerical results are presented which show that LCOBYQA works well and is very competitive with available model-based derivative-free algorithms.
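The trust-region step mentioned in this abstract (minimizing the quadratic model inside a ball by a conjugate gradient method) is commonly realized with a Steihaug-Toint truncated CG iteration. The sketch below illustrates that generic ingredient only, not the LCOBYQA implementation; the model gradient `g`, Hessian approximation `B`, and radius `delta` are assumed inputs.

```python
import numpy as np

def steihaug_cg(g, B, delta, tol=1e-8, max_iter=100):
    """Approximately minimize m(s) = g's + 0.5 s'Bs subject to ||s|| <= delta
    with the Steihaug-Toint truncated conjugate-gradient iteration."""
    n = g.size
    s = np.zeros(n)
    r = g.copy()          # residual = gradient of the model at s
    d = -r                # first search direction: steepest descent
    if np.linalg.norm(r) < tol:
        return s
    for _ in range(max_iter):
        Bd = B @ d
        dBd = d @ Bd
        if dBd <= 0:
            # Negative curvature: follow d to the trust-region boundary.
            return s + _boundary_step(s, d, delta) * d
        alpha = (r @ r) / dBd
        if np.linalg.norm(s + alpha * d) >= delta:
            # The step would leave the trust region: stop on the boundary.
            return s + _boundary_step(s, d, delta) * d
        s = s + alpha * d
        r_new = r + alpha * Bd
        if np.linalg.norm(r_new) < tol:
            return s
        beta = (r_new @ r_new) / (r @ r)
        d = -r_new + beta * d
        r = r_new
    return s

def _boundary_step(s, d, delta):
    """Positive tau with ||s + tau*d|| = delta (root of a quadratic in tau)."""
    a, b, c = d @ d, 2 * s @ d, s @ s - delta**2
    return (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
```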

2.
In this paper, we modify the derivative-free line search algorithm (DFL) proposed by Liuzzi et al. (SIAM J Optim 20(5):2614–2635, 2010) to minimize a continuously differentiable function of box-constrained or unconstrained variables subject to nonlinear constraints. The first-order derivatives of the objective function and of the constraints are assumed to be neither calculated nor explicitly approximated. Different line searches are used for the box-constrained and the unconstrained variables, and convergence to stationary points is proved accordingly. The computational behavior of the method has been evaluated on a set of test problems; performance and data profiles are used for comparison with DFL.
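As an illustration of the kind of derivative-free line search with a sufficient-decrease test and an expansion phase used in DFL-type methods, here is a minimal sketch; it is not the authors' exact procedure, and the constants, the objective `f`, and the direction `d` are placeholder assumptions.

```python
import numpy as np

def df_linesearch(f, x, d, alpha0=1.0, gamma=1e-6, delta=0.5, alpha_max=1e3):
    """Derivative-free line search along direction d.

    A step alpha is accepted if f(x + alpha*d) <= f(x) - gamma*alpha**2
    (a sufficient-decrease test that needs no derivatives); an accepted
    step is then expanded while the longer step still passes the test.
    Returns 0.0 if no acceptable step is found down to alpha ~ 0."""
    fx = f(x)
    alpha = alpha0
    # Shrink until the sufficient-decrease test holds (or give up).
    while alpha > 1e-12 and f(x + alpha * d) > fx - gamma * alpha**2:
        alpha *= delta
    if alpha <= 1e-12:
        return 0.0
    # Expansion phase: try progressively longer steps.
    while alpha / delta <= alpha_max and \
          f(x + (alpha / delta) * d) <= fx - gamma * (alpha / delta)**2:
        alpha /= delta
    return alpha

# Tiny usage example on a smooth function whose derivatives are never called.
if __name__ == "__main__":
    f = lambda x: (x[0] - 1.0)**2 + 2.0 * (x[1] + 0.5)**2
    x = np.array([3.0, 2.0])
    d = np.array([-1.0, 0.0])          # move along the first coordinate
    print("accepted step:", df_linesearch(f, x, d))
```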

3.
A new trust-region and affine-scaling algorithm for linearly constrained optimization is presented in this paper. Without any nondegeneracy assumption, we prove that every limit point of the sequence generated by the new algorithm satisfies the first-order necessary condition, and that at least one limit point of the sequence satisfies the second-order necessary condition. Some preliminary numerical experiments are reported. The work was done while visiting the Institute of Applied Mathematics, AMSS, CAS.

4.
5.
A DERIVATIVE-FREE ALGORITHM FOR UNCONSTRAINED OPTIMIZATION
In this paper a hybrid algorithm that combines the pattern search method and the genetic algorithm for unconstrained optimization is presented. The algorithm is a deterministic pattern search algorithm, but in the search step the trial points are produced in the manner of a genetic algorithm: at each iteration a finite set of points is generated by reproduction, crossover and mutation. In theory, the algorithm is globally convergent. Most notably, the numerical results show that it can find the global minimizer for some problems on which other pattern search algorithms fail.
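A minimal sketch of one such hybrid iteration is given below: the poll step is a standard coordinate pattern search, while the search step draws trial points from GA-style crossover and mutation. The population size, operators and step-update constants are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def ga_trial_points(population, n_trials=10, mut_scale=0.1):
    """Produce trial points by uniform crossover and Gaussian mutation."""
    trials = []
    for _ in range(n_trials):
        i, j = rng.choice(len(population), size=2, replace=False)
        a, b = population[i], population[j]
        mask = rng.random(a.size) < 0.5                   # uniform crossover
        child = np.where(mask, a, b)
        trials.append(child + mut_scale * rng.standard_normal(a.size))
    return trials

def hybrid_pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=500):
    """Pattern search whose SEARCH step uses GA-generated trial points."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    population = [x + rng.standard_normal(x.size) for _ in range(6)]
    directions = np.vstack([np.eye(x.size), -np.eye(x.size)])  # poll set
    for _ in range(max_iter):
        if step < tol:
            break
        improved = False
        # SEARCH step: free-form trial points from crossover and mutation.
        for t in ga_trial_points(population):
            if f(t) < fx:
                x, fx, improved = t, f(t), True
                break
        # POLL step: classical coordinate pattern around the incumbent.
        if not improved:
            for d in directions:
                t = x + step * d
                if f(t) < fx:
                    x, fx, improved = t, f(t), True
                    break
        step = 2.0 * step if improved else 0.5 * step     # expand / contract
        population = sorted(population + [x], key=f)[:6]  # keep the fittest
    return x, fx
```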

6.
A trust-region algorithm for solving the equality constrained optimization problem is presented. This algorithm computes trial steps in the Byrd-Omojokun way, but differs from the Byrd-Omojokun algorithm in the way the steps are evaluated. A global convergence theory for the new algorithm is presented; its main feature is that linear independence of the constraint gradients is not assumed. This research was supported in part by the Center for Research on Parallel Computation, by Grant NSF-CCR-91-20008, and by the REDI Foundation.

7.
We study an approach for minimizing a convex quadratic function subject to two quadratic constraints. This problem stems from computing a trust-region step for an SQP algorithm proposed by Celis, Dennis and Tapia (1985) for equality constrained optimization. Our approach is to reformulate the problem as a univariate nonlinear equation $\phi(\mu)=0$, where the function $\phi$ is continuous, at least piecewise differentiable and monotone, so that well-established root-finding methods can be readily applied. We also consider an extension of our approach to a class of non-convex quadratic functions and show that our approach is applicable to reduced-Hessian SQP algorithms. Numerical results are presented indicating that our algorithm is reliable, robust and has the potential to be used as a building block to construct trust-region algorithms for small-sized problems in constrained optimization. This research was performed while the author was on a postdoctoral appointment in the Department of Mathematical Sciences, Rice University, Houston, TX, USA, and was supported in part by AFOSR 85-0243 and DOE DEFG05-86ER 25017.
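Since the reformulated function is continuous and monotone, any safeguarded scalar root-finder applies. A minimal bisection sketch (with a placeholder $\phi$ and bracket, not the paper's specific function) is:

```python
def solve_monotone(phi, lo, hi, tol=1e-10, max_iter=200):
    """Find a root of a continuous, monotone scalar function phi on [lo, hi].

    Assumes phi(lo) and phi(hi) have opposite signs; plain bisection is used
    for robustness, since phi may be only piecewise differentiable."""
    f_lo = phi(lo)
    if f_lo == 0.0:
        return lo
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        f_mid = phi(mid)
        if abs(f_mid) < tol or hi - lo < tol:
            return mid
        # Keep the sub-interval whose endpoints still bracket the root.
        if (f_lo > 0) == (f_mid > 0):
            lo, f_lo = mid, f_mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example: a piecewise-smooth, monotone function with its root at 2.
print(solve_monotone(lambda t: t - 2.0 if t < 3 else 2.0 * (t - 2.5), 0.0, 10.0))
```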

8.
In this paper, we propose a derivative-free trust region algorithm for constrained minimization problems with separable structure, where derivatives of the objective function are not available and cannot be directly approximated. At each iteration, we construct a quadratic interpolation model of the objective function around the current iterate. The new iterates are generated by minimizing the augmented Lagrangian function of this model over the trust region. The filter technique is used to ensure the feasibility and optimality of the iterative sequence. Global convergence of the proposed algorithm is proved under some suitable assumptions.
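For concreteness, a minimal sketch of an augmented Lagrangian built on a quadratic model with linearized equality constraints is shown below; it is a generic illustration only and does not reproduce the paper's separable structure or its treatment of general constraints. The model data `g`, `B`, the constraint data `c`, `A`, the multipliers `lam`, and the penalty `rho` are assumed inputs.

```python
import numpy as np

def augmented_lagrangian_model(s, g, B, c, A, lam, rho):
    """Augmented Lagrangian of a quadratic model with linearized equality
    constraints c + A s = 0 (a generic sketch, not the paper's exact form).

    model(s)      = g's + 0.5 s'Bs
    constraint(s) = c + A s
    L_A(s)        = model + lam'constraint + 0.5*rho*||constraint||^2
    """
    model = g @ s + 0.5 * s @ (B @ s)
    viol = c + A @ s
    return model + lam @ viol + 0.5 * rho * (viol @ viol)
```

In the algorithm described above, a function of this kind is minimized over the trust region to generate the trial step.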

9.
10.
In this paper we propose a derivative-free optimization algorithm based on conditional moments for finding the maximizer of an objective function. The proposed algorithm does not require the calculation or approximation of derivatives of any order of the objective function. The step size at each iteration is determined adaptively according to the local geometry of the objective function and a pre-specified quantity representing the desired precision. Theoretical properties, including convergence of the method, are presented. Numerical experiments comparing the method with the Newton, quasi-Newton and trust region methods illustrate its effectiveness.

11.
In this paper we first establish a Lagrange multiplier condition characterizing a regularized Lagrangian duality for quadratic minimization problems with finitely many linear equality and quadratic inequality constraints, where the linear constraints are not relaxed in the regularized Lagrangian dual. In particular, in the case of a quadratic optimization problem with a single quadratic inequality constraint such as the linearly constrained trust-region problems, we show that the Slater constraint qualification (SCQ) is necessary and sufficient for the regularized Lagrangian duality in the sense that the regularized duality holds for each quadratic objective function over the constraints if and only if (SCQ) holds. A new theorem of the alternative for systems involving both equality constraints and two quadratic inequality constraints plays a key role. We also provide classes of quadratic programs, including a class of CDT-subproblems with linear equality constraints, where (SCQ) ensures regularized Lagrangian duality.

12.
Penalty and interior-point methods for nonlinear optimization problems have enjoyed great successes for decades. Penalty methods have proved to be effective for a variety of problem classes due to their regularization effects on the constraints. They have also been shown to allow for rapid infeasibility detection. Interior-point methods have become the workhorse in large-scale optimization due to their Newton-like qualities, both in terms of their scalability and convergence behavior. Each of these two strategies, however, has certain disadvantages that make its use either impractical or inefficient for certain classes of problems. The goal of this paper is to present a penalty-interior-point method that possesses the advantages of penalty and interior-point techniques, but does not suffer from their disadvantages. Numerous attempts have been made along these lines in recent years, each with varying degrees of success. The novel feature of the algorithm in this paper is that our focus is not only on the formulation of the penalty-interior-point subproblem itself, but on the design of updates for the penalty and interior-point parameters. The updates we propose are designed so that rapid convergence to a solution of the nonlinear optimization problem or an infeasible stationary point is attained. We motivate the convergence properties of our algorithm and illustrate its practical performance on large sets of problems, including sets of problems that exhibit degeneracy or are infeasible.

13.
In a recent paper (Ref. 1), the author proposed a trust-region algorithm for solving the problem of minimizing a nonlinear function subject to a set of equality constraints. The main feature of the algorithm is that the penalty parameter in the merit function can be decreased whenever it is warranted. He studied the behavior of the penalty parameter and proved several global and local convergence results. One of these results is that there exists a subsequence of the iterates generated by the algorithm that converges to a point that satisfies the first-order necessary conditions. In the current paper, we show that, for this algorithm, there exists a subsequence of iterates that converges to a point that satisfies both the first-order and the second-order necessary conditions. This research was supported by the Rice University Center for Research on Parallel Computation, Grant R31853, and the REDI Foundation.

14.
We present a new filter trust-region approach for solving unconstrained nonlinear optimization problems making use of the filter technique introduced by Fletcher and Leyffer to generate non-monotone iterations. We also use the concept of a multidimensional filter used by Gould et al. (SIAM J. Optim. 15(1):17–38, 2004) and introduce a new filter criterion showing good properties. Moreover, we introduce a new technique for reducing the size of the filter. For the algorithm, we present two different convergence analyses. First, we show that at least one of the limit points of the sequence of the iterates is first-order critical. Second, we prove the stronger property that all the limit points are first-order critical for a modified version of our algorithm. We also show that, under suitable conditions, all the limit points are second-order critical. Finally, we compare our algorithm with a natural trust-region algorithm and the filter trust-region algorithm of Gould et al. on the CUTEr unconstrained test problems (Gould et al., ACM Trans. Math. Softw. 29(4):373–394, 2003). Numerical results demonstrate the efficiency and robustness of our proposed algorithms.
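The filter idea reduces to a simple acceptance test: a trial entry is accepted only if it is not dominated, up to a small margin, by any entry already stored. The sketch below is a generic two-measure filter in the Fletcher-Leyffer spirit, not the multidimensional gradient-based filter of Gould et al.; the margin `gamma` and the example entries are assumptions.

```python
def filter_acceptable(entry, filter_entries, gamma=1e-4):
    """Generic filter acceptance test.

    `entry` and the stored `filter_entries` are tuples of measures to be
    driven down simultaneously (e.g. infeasibility and objective value).
    A trial entry is acceptable if, against every stored tuple, it improves
    at least one measure by a small relative margin gamma."""
    for stored in filter_entries:
        if all(e >= s - gamma * abs(s) for e, s in zip(entry, stored)):
            return False    # dominated by this stored tuple: reject
    return True

def filter_add(entry, filter_entries):
    """Add an entry and drop stored tuples that the new one dominates."""
    kept = [s for s in filter_entries
            if any(se < e for se, e in zip(s, entry))]
    kept.append(entry)
    return kept

# Usage: the filter starts empty, so the first trial entry is always accepted.
F = []
for trial in [(0.8, 3.0), (0.9, 3.5), (0.5, 2.0)]:
    if filter_acceptable(trial, F):
        F = filter_add(trial, F)
print(F)   # [(0.5, 2.0)]: (0.9, 3.5) was rejected, (0.8, 3.0) later dropped
```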

15.
In this paper, the feasible-type SQP method is improved, and a new SQP algorithm is presented to solve nonlinear inequality constrained optimization problems. Compared with existing SQP methods, at each iteration the search direction is obtained by solving only equality constrained quadratic programming subproblems and systems of linear equations. Under suitable conditions, global and superlinear convergence are established.
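The equality constrained quadratic programming subproblems mentioned here can be solved directly from their KKT conditions; a minimal dense sketch, with the QP data `H`, `g`, `A`, `b` as assumed inputs and no claim to match the paper's subproblem, is:

```python
import numpy as np

def solve_eqp(H, g, A, b):
    """Solve  min 0.5 d'Hd + g'd   s.t.  A d = b
    via the KKT system  [H  A'; A  0] [d; lam] = [-g; b].
    Assumes the KKT matrix is nonsingular (H positive definite on null(A),
    A of full row rank); a dense solve is used for clarity only."""
    n, m = H.shape[0], A.shape[0]
    K = np.block([[H, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([-g, b])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]          # direction d and multipliers lam

# Small usage example: minimizer of (d1-1)^2 + (d2-2)^2 on d1 + d2 = 1.
H = np.array([[2.0, 0.0], [0.0, 2.0]])
g = np.array([-2.0, -4.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
d, lam = solve_eqp(H, g, A, b)
print(d, lam)                        # d = [0, 1], lam = [2]
```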

16.
A trust region algorithm for equality constrained optimization
A trust region algorithm for equality constrained optimization is proposed that employs a differentiable exact penalty function. Under certain conditions global convergence and local superlinear convergence results are proved.

17.
A new SQP algorithm for equality constrained optimization problems is proposed. The algorithm derives an equality constrained quadratic programming subproblem from a quasi-Newton method applied to an augmented Lagrangian function, and thereby obtains a descent direction. The penalty parameter is adjusted automatically and is prevented from tending to infinity. To overcome the Maratos effect, the augmented Lagrangian function is used as the merit function, combined with a second-order correction step. Under suitable conditions, the algorithm is proved to be globally convergent with a superlinear convergence rate.
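A second-order correction step of the kind used against the Maratos effect can be sketched as a minimum-norm correction computed from the constraint values at the trial point; this is a generic illustration under the assumption of equality constraints `c` with Jacobian `A`, not the paper's exact construction.

```python
import numpy as np

def second_order_correction(c, A, x, d):
    """Generic second-order correction for equality constraints c(x) = 0.

    After a full step d, re-evaluate the constraints at x + d and compute
    the minimum-norm correction d_soc with A d_soc = -c(x + d), so that the
    corrected point x + d + d_soc removes most of the curvature-induced
    constraint violation that causes the Maratos effect."""
    residual = c(x + d)
    d_soc, *_ = np.linalg.lstsq(A, -residual, rcond=None)
    return d_soc

# Toy example: one nonlinear equality constraint c(x) = x0^2 + x1^2 - 1.
c = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0])
x = np.array([1.0, 0.0])                  # feasible point on the unit circle
A = np.array([[2.0 * x[0], 2.0 * x[1]]])  # Jacobian of c at x
d = np.array([0.0, 0.5])                  # tangential SQP-like step
print(second_order_correction(c, A, x, d))  # pulls the point back toward c = 0
```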

18.
A tolerant algorithm for linearly constrained optimization calculations
Two extreme techniques when choosing a search direction in a linearly constrained optimization calculation are to take account of all the constraints or to use an active set method that satisfies selected constraints as equations, the remaining constraints being ignored. We prefer an intermediate method that treats all inequality constraints with small residuals as inequalities with zero right hand sides and that disregards the other inequality conditions. Thus the step along the search direction is not restricted by any constraints with small residuals, which can help efficiency greatly, particularly when some constraints are nearly degenerate. We study the implementation, convergence properties and performance of an algorithm that employs this idea. The implementation considerations include the choice and automatic adjustment of the tolerance that defines the small residuals, the calculation of the search directions, and the updating of second derivative approximations. The main convergence theorem imposes no conditions on the constraints except for boundedness of the feasible region. The numerical results indicate that a Fortran implementation of our algorithm is much more reliable than the software that was tested by Hock and Schittkowski (1981). Therefore the algorithm seems to be very suitable for general use, and it is particularly appropriate for semi-infinite programming calculations that have many linear constraints that come from discretizations of continua.
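The idea of treating only the small-residual inequality constraints as binding can be illustrated by selecting those constraints and projecting the search direction onto their null space; the fixed tolerance and plain projection below are simplifications of the paper's automatically adjusted tolerance and second-derivative machinery.

```python
import numpy as np

def tolerant_direction(grad, A, b, x, tol):
    """Steepest-descent direction that respects small-residual constraints.

    For inequality constraints A x <= b, those with residual b - A x below
    `tol` are treated as equalities with zero right-hand side when computing
    the direction; the remaining constraints are ignored and do not restrict
    the step. A plain orthogonal projection is used here for clarity."""
    residuals = b - A @ x
    N = A[residuals < tol]               # normals of near-active constraints
    d = -grad
    if N.size:
        # Project d onto the null space of the near-active constraint normals.
        P = np.eye(len(grad)) - N.T @ np.linalg.pinv(N.T)
        d = P @ d
    return d

# Usage: x sits almost on the first constraint, far from the second.
A = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, 10.0])
x = np.array([0.999, 0.0])
print(tolerant_direction(np.array([1.0, -1.0]), A, b, x, tol=1e-2))  # [0., 1.]
```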

19.
We introduce a new trust-region method for unconstrained optimization where the radius update is computed using the model information at the current iterate rather than at the preceding one. The update is then performed according to how well the current model retrospectively predicts the value of the objective function at the last iterate. Global convergence to first- and second-order critical points is proved under classical assumptions, and preliminary numerical experiments on CUTEr problems indicate that the new method is very competitive.
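The retrospective radius update can be written down directly from this description: the ratio compares the actual objective change over the last step with the change predicted by the model built at the current iterate. A minimal sketch (the thresholds and factors are illustrative assumptions, not the paper's):

```python
def retrospective_radius(delta, f_prev, f_curr, model_curr, x_prev, x_curr,
                         eta1=0.05, eta2=0.9, shrink=0.5, grow=2.0):
    """Retrospective trust-region radius update (generic sketch).

    model_curr(x) is the quadratic model built at the *current* iterate;
    the ratio measures how well it retrospectively predicts the objective
    change over the step that was just taken."""
    predicted = model_curr(x_prev) - model_curr(x_curr)
    actual = f_prev - f_curr
    rho = actual / predicted if predicted != 0.0 else 0.0
    if rho < eta1:        # poor retrospective agreement: shrink the radius
        return shrink * delta
    if rho > eta2:        # very good agreement: allow a larger radius
        return grow * delta
    return delta          # otherwise keep the current radius
```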

20.
This paper presents a feasible direction algorithm for the minimization of a pseudoconvex function over a smooth, compact, convex set. We establish that each cluster point of the generated sequence is an optimal solution of the problem without introducing anti-jamming procedures. Each iteration of the algorithm involves as subproblems only one line search for a zero of a continuously differentiable convex function and one univariate function minimization on a compact interval.
