Similar References
20 similar references found.
1.
There are well-established rival theories about the economy. These have, in turn, led to the development of rival models purporting to represent the economic system. The models are large systems of discrete-time nonlinear dynamic equations. Observed data of the real system does not, in general, provide sufficient information for statistical methods to invalidate all but one of the rival models. In such a circumstance, there is uncertainty about which model to use in the formulation of policy. Prudent policy design would suggest that a model-based policy should take into account all the rival models. This is achieved as a pooling of the models. The pooling that yields the policy which is robust to model choice is formulated as a constrained min-max problem. The minimization is over the decision variables and the maximization is over the rival models. Only equality constraints are considered.

A successive quadratic programming algorithm is discussed for the solution of the min-max problem. The algorithm uses a stepsize strategy based on a differentiable penalty function for the constraints. Two alternative quadratic subproblems can be used. One is a quadratic min-max and the other a quadratic programming problem. The objective function of either subproblem includes a linear term which is dependent on the penalty function. The penalty parameter is determined at every iteration, using a strategy that ensures a descent property as well as the boundedness of the penalty term. The boundedness follows since the strategy is always satisfied for finite values of the parameter, which needs to be increased only a finite number of times.

The global and local convergence of the algorithm is established. The conditions, involving projected Hessian approximations, are discussed under which the algorithm achieves unit stepsizes and subsequently Q-superlinear convergence.
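The robust-pooling idea, minimizing over decisions the worst case over rival models, can be sketched in a few lines. This is a hypothetical illustration with two invented quadratic model losses evaluated on a grid, ignoring the equality constraints and the SQP machinery of the paper:

```python
import numpy as np

# Two invented rival model losses (assumed quadratics, for illustration only).
models = [
    lambda x: (x - 1.0) ** 2,        # loss under rival model A
    lambda x: 0.5 * (x + 1.0) ** 2,  # loss under rival model B
]

def robust_policy(models, grid):
    """Decision minimizing the worst-case loss over the rival models."""
    worst = np.array([max(m(x) for m in models) for x in grid])
    return grid[np.argmin(worst)]

grid = np.linspace(-3.0, 3.0, 6001)
x_star = robust_policy(models, grid)
```

The robust decision sits where the two losses cross (here near x ≈ 0.17): on one side model A is the binding worst case, on the other side model B, which is exactly the min-max structure the constrained algorithm exploits at scale.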

2.
A fundamental problem in constrained nonlinear optimization algorithms is the design of a satisfactory stepsize strategy which converges to unity. In this paper, we discuss stepsize strategies for Newton or quasi-Newton algorithms which require the solution of quadratic optimization subproblems. Five stepsize strategies are considered for three different subproblems, and the conditions under which the stepsizes will converge to unity are established. It is shown that these conditions depend critically on the convergence of the Hessian approximations used in the algorithms. The stepsize strategies are constructed using basic principles from which the conditions for unit stepsizes follow. Numerical results are discussed in an Appendix.

Paper presented to the XI Symposium on Mathematical Programming, Bonn, Germany, 1982. This work was completed while the author was visiting the European University in Florence where, in particular, Professors Fitoussi and Velupillai provided the opportunity for its completion. The author is grateful to Dr. L. C. W. Dixon for his helpful comments and criticisms on numerous versions of the paper, and to R. G. Becker for programming the algorithms in Section 3 and for helpful discussions concerning these algorithms.
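Why unit stepsizes matter can be seen in a toy example: a Newton iteration with an Armijo backtracking search that always tries t = 1 first, so accepted unit steps signal the onset of fast local convergence. The scalar test function is an assumption for illustration, not one of the paper's subproblem formulations:

```python
def armijo_newton(f, grad, hess, x, sigma=1e-4, iters=20):
    """Scalar Newton's method; the backtracking search tries t = 1 first
    and halves the step only when the Armijo decrease test fails."""
    steps = []
    for _ in range(iters):
        g, h = grad(x), hess(x)
        d = -g / h                      # Newton direction
        t = 1.0
        while f(x + t * d) > f(x) + sigma * t * g * d:
            t *= 0.5                    # backtrack only if needed
        steps.append(t)
        x += t * d
    return x, steps

# Assumed test function f(x) = x^4 + x^2 (illustrative only).
f = lambda x: x ** 4 + x ** 2
g = lambda x: 4 * x ** 3 + 2 * x
h = lambda x: 12 * x ** 2 + 2
x_star, steps = armijo_newton(f, g, h, x=2.0)
```

Near the minimizer the exact Hessian makes the unit step acceptable; the paper characterizes the conditions under which quasi-Newton Hessian approximations preserve this behaviour.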

3.
Usual global convergence results for sequential quadratic programming (SQP) algorithms with linesearch rely on some a priori assumptions about the generated sequences, such as boundedness of the primal sequence and/or of the dual sequence and/or of the sequence of values of a penalty function used in the linesearch procedure. Different convergence statements use different combinations of assumptions, but they all assume boundedness of at least one of the sequences mentioned above. In the given context boundedness assumptions are particularly undesirable, because even for non-pathological and well-behaved problems the associated penalty functions (whose descent is used to produce primal iterates) may not be bounded below for any value of the penalty parameter. Consequently, boundedness assumptions on the iterates are not easily justifiable. By introducing a very simple and computationally cheap safeguard in the linesearch procedure, we prove boundedness of the primal sequence in the case when the feasible set is nonempty, convex, and bounded. If, in addition, the Slater condition holds, we obtain a complete global convergence result without any a priori assumptions on the iterative sequences. The safeguard consists of not accepting a further increase of constraint violation at iterates which are infeasible beyond a chosen threshold, which can always be ensured by the proposed modified SQP linesearch criterion.

The author is supported in part by CNPq Grants 301508/2005-4, 490200/2005-2, 550317/2005-8, by PRONEX–Optimization, and by FAPERJ Grant E-26/151.942/2004.
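The safeguard itself is cheap to state in code. A minimal sketch of the acceptance test, with an assumed threshold parameter theta (the paper's actual threshold choice and merit criterion are not reproduced here):

```python
def safeguarded_accept(viol_current, viol_trial, armijo_ok, theta=1.0):
    """Modified linesearch acceptance: beyond the infeasibility threshold
    theta, any further increase of the constraint violation is refused;
    otherwise the usual (Armijo-type) merit criterion decides."""
    if viol_current > theta and viol_trial > viol_current:
        return False
    return armijo_ok
```

Because rejected trial points simply trigger further backtracking, iterates that are already badly infeasible can never drift to larger violations, which is what yields boundedness of the primal sequence on a bounded convex feasible set.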

4.
In Ref. 2, four algorithms of dual matrices for function minimization were introduced. These algorithms are characterized by the simultaneous use of two matrices and by the property that the one-dimensional search for the optimal stepsize is not needed for convergence. For a quadratic function, these algorithms lead to the solution in at most n+1 iterations, where n is the number of variables in the function. Since the one-dimensional search is not needed, the total number of gradient evaluations for convergence is at most n+2. In this paper, the above-mentioned algorithms are tested numerically by using five nonquadratic functions. In order to investigate the effects of the stepsize on the performances of these algorithms, four schemes for the stepsize factor are employed, two corresponding to small-step processes and two corresponding to large-step processes. The numerical results show that, in spite of the wide range employed in the choice of the stepsize factor, all algorithms exhibit satisfactory convergence properties and compare favorably with the corresponding quadratically convergent algorithms using one-dimensional searches for optimal stepsizes.

5.
王福胜  张瑞 《计算数学》2018,40(1):49-62
For minimax problems with inequality constraints, drawing on the norm-relaxed strongly sub-feasible SQP method for general constrained optimization, a new norm-relaxed strongly sub-feasible SQCQP algorithm is proposed. First, by choosing a suitable penalty function in the QCQP subproblem, the feasibility of the algorithm and the descent of the objective function F(x) are guaranteed, and the selection of the parameter α_k in the quadratic constraint term of the QCQP subproblem is simplified while preserving feasibility and convergence. Second, the stepsize selection is simple and reasonable. Finally, under suitable assumptions, the algorithm is shown to be globally and strongly convergent. Preliminary numerical results indicate that the algorithm is feasible and effective.

6.
A Sequential Quadratic Programming (in short, SQP) algorithm is presented for solving constrained nonlinear programming problems. The algorithm uses three stepsize strategies in order to achieve global and superlinear convergence. Switching rules are implemented that combine the merits and avoid the drawbacks of the three stepsize strategies. A penalty parameter is determined using an adaptive strategy that aims to achieve sufficient decrease of the activated merit function. Global convergence is established, and it is also shown that, locally, unit stepsizes are accepted. Therefore, superlinear convergence is not impeded under standard assumptions. Global convergence and convergence of the stepsizes are demonstrated on test problems from the Hock and Schittkowski collection.

7.
A globally convergent method for nonlinear programming
Recently developed Newton and quasi-Newton methods for nonlinear programming possess only local convergence properties. Adopting the concept of the damped Newton method in unconstrained optimization, we propose a stepsize procedure to maintain the monotone decrease of an exact penalty function. In so doing, the convergence of the method is globalized.

This research was supported in part by the National Science Foundation under Grant No. ENG-75-10486.

8.
In this paper we propose a recursive quadratic programming algorithm for nonlinear programming problems with inequality constraints that uses as merit function a differentiable exact penalty function. The algorithm incorporates an automatic adjustment rule for the selection of the penalty parameter and makes use of an Armijo-type line search procedure that avoids the need to evaluate second-order derivatives of the problem functions. We prove that the algorithm possesses global and superlinear convergence properties. Numerical results are reported.

9.
A working set SQCQP algorithm with simple nonmonotone penalty parameters
In this paper, we present a new sequential quadratically constrained quadratic programming (SQCQP) algorithm, in which a simple updating strategy of the penalty parameter is adopted. This strategy generates nonmonotone penalty parameters at early iterations and only uses the multiplier corresponding to the bound constraint of the quadratically constrained quadratic programming (QCQP) subproblem instead of the multipliers of the quadratic constraints, which will bring some numerical advantages. Furthermore, by using the working set technique, we remove the constraints of the QCQP subproblem that are locally irrelevant, and thus the computational cost could be reduced. Without assuming the convexity of the objective function or the constraints, the algorithm is proved to be globally, superlinearly and quadratically convergent. Preliminary numerical results show that the proposed algorithm is very promising when compared with the tested SQP algorithms.

10.
Conjugate gradient methods have been extensively used to locate unconstrained minimum points of real-valued functions. At present, there are several readily implementable conjugate gradient algorithms that do not require exact line search and yet are shown to be superlinearly convergent. However, these existing algorithms usually require several trials to find an acceptable stepsize at each iteration, and their inexact line search can be very time-consuming.

In this paper we present new readily implementable conjugate gradient algorithms that will eventually require only one trial stepsize to find an acceptable stepsize at each iteration. Making usual continuity assumptions on the function being minimized, we have established the following properties of the proposed algorithms. Without any convexity assumptions on the function being minimized, the algorithms are globally convergent in the sense that every accumulation point of the generated sequences is a stationary point. Furthermore, when the generated sequences converge to local minimum points satisfying second-order sufficient conditions for optimality, the algorithms eventually demand only one trial stepsize at each iteration, and their rate of convergence is n-step superlinear and n-step quadratic.

This research was supported in part by the National Science Foundation under Grant No. ENG 76-09913.
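The "one trial stepsize" effect can be imitated by starting each Armijo search from a multiple of the previously accepted stepsize, so that near the solution the first trial is usually accepted. The following Fletcher-Reeves sketch on an assumed diagonal quadratic is illustrative only; the cited algorithms and their acceptance rules differ:

```python
import numpy as np

A = np.diag([1.0, 4.0, 9.0])               # assumed SPD quadratic test problem
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x

def cg_reuse_step(f, grad, x, iters=300, c=1e-4):
    """Fletcher-Reeves CG whose Armijo search starts each iteration from
    twice the previously accepted stepsize; a restart to steepest descent
    guards against non-descent directions."""
    g = grad(x)
    d = -g
    t = 1.0
    for _ in range(iters):
        if g @ d >= 0:                     # restart if d is not a descent direction
            d = -g
        while f(x + t * d) > f(x) + c * t * (g @ d):
            t *= 0.5                       # extra trials, needed mostly early on
        x = x + t * d
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves coefficient
        d, g = -g_new + beta * d, g_new
        t = 2.0 * t                        # first trial of the next search
    return x

x_star = cg_reuse_step(f, grad, np.array([1.0, 1.0, 1.0]))
```

Reusing the previous stepsize is only a heuristic stand-in for the adaptive trial-stepsize rules of the paper, but it shows why the number of function evaluations per iteration can drop to one asymptotically.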

11.
Summary: This paper presents a readily implementable algorithm for solving constrained minimization problems involving (possibly nonsmooth) convex functions. The constraints are handled as in the successive quadratic approximation methods for smooth problems. An exact penalty function is employed for stepsize selection. A scheme for automatic limitation of penalty growth is given. Global convergence of the algorithm is established, as well as finite termination for piecewise linear problems. Numerical experience is reported.

Sponsored by Program CPBP 02.15.

12.
An augmented Lagrangian nonlinear programming algorithm has been developed. Its goals are to achieve robust global convergence and fast local convergence. Several unique strategies help the algorithm achieve these dual goals. The algorithm consists of three nested loops. The outer loop estimates the Kuhn-Tucker multipliers at a rapid linear rate of convergence. The middle loop minimizes the augmented Lagrangian functions for fixed multipliers. This loop uses the sequential quadratic programming technique with a box trust region stepsize restriction. The inner loop solves a single quadratic program. Slack variables and a constrained form of the fixed-multiplier middle-loop problem work together with curved line searches in the inner-loop problem to allow large penalty weights for rapid outer-loop convergence. The inner-loop quadratic programs include quadratic constraint terms, which complicate the inner loop, but speed the middle-loop progress when the constraint curvature is large.

The new algorithm compares favorably with a commercial sequential quadratic programming algorithm on five low-order test problems. Its convergence is more robust, and its speed is not much slower.

This research was supported in part by the National Aeronautics and Space Administration under Grant No. NAG-1-1009.

13.
In this paper, we consider the Extended Kalman Filter (EKF) for solving nonlinear least squares problems. EKF is an incremental iterative method based on the Gauss-Newton method that has nice convergence properties. Although EKF has the global convergence property under some conditions, the convergence rate is only sublinear under the same conditions. One of the reasons why EKF shows slow convergence is the lack of an explicit stepsize. In this paper, we propose a stepsize rule for EKF and establish global convergence of the algorithm under the boundedness of the generated sequence and appropriate assumptions on the objective function. A notable feature of the stepsize rule is that the stepsize is kept greater than or equal to 1 at each iteration, and increases at a linear rate in k under an additional condition. Therefore, we can expect that the proposed method converges faster than the original EKF. We report some numerical results, which demonstrate that the proposed method is promising.

14.
This paper studies convergence properties of regularized Newton methods for minimizing a convex function whose Hessian matrix may be singular everywhere. We show that if the objective function is LC², then the methods possess local quadratic convergence under a local error bound condition without the requirement of isolated nonsingular solutions. By using a backtracking line search, we globalize an inexact regularized Newton method. We show that the unit stepsize is accepted eventually. Limited numerical experiments are presented, which show the practical advantage of the method.
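A minimal sketch of such a regularized Newton step, with the regularization weight taken proportional to ‖g‖ so that it vanishes at a solution. The test function, constants, and the simple choice mu = ‖g‖ are illustrative assumptions, not the paper's method:

```python
import numpy as np

def regularized_newton(f, grad, hess, x, iters=60, c=1e-4):
    """Newton step with Levenberg-style regularization mu*I, mu = ||g||,
    plus a backtracking line search that tries the unit step first."""
    x = np.asarray(x, dtype=float)
    for _ in range(iters):
        g = grad(x)
        mu = np.linalg.norm(g)               # vanishes at a stationary point
        d = np.linalg.solve(hess(x) + mu * np.eye(len(x)), -g)
        t = 1.0
        while f(x + t * d) > f(x) + c * t * (g @ d):
            t *= 0.5
        x = x + t * d
    return x

# Assumed convex test function f(x) = ||x||^4, whose Hessian is singular
# at the solution x = 0, so a plain Newton solve would be ill-posed there.
f = lambda x: (x @ x) ** 2
grad = lambda x: 4.0 * (x @ x) * x
hess = lambda x: 8.0 * np.outer(x, x) + 4.0 * (x @ x) * np.eye(len(x))
x_star = regularized_newton(f, grad, hess, np.array([1.0, -2.0]))
```

The regularized system is always positive definite away from stationary points, so the step is well defined even though the Hessian degenerates at the minimizer; on this degenerate example the unit step is accepted throughout.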

15.
In this paper we introduce an augmented Lagrangian type algorithm for strictly convex quadratic programming problems with equality constraints. The new feature of the proposed algorithm is the adaptive precision control of the solution of auxiliary problems in the inner loop of the basic algorithm. Global convergence and boundedness of the penalty parameter are proved, and an error estimate is given that does not have any term that accounts for the inexact solution of the auxiliary problems. Numerical experiments illustrate the efficiency of the presented algorithm.
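The outer structure of such a method can be sketched on a small equality-constrained strictly convex QP. In this sketch the inner subproblem is solved exactly by one linear solve rather than to the paper's adaptive precision, and all problem data are invented:

```python
import numpy as np

def auglag_eq_qp(A, b, C, d, rho=10.0, iters=30):
    """Augmented Lagrangian loop for  min 0.5 x'Ax - b'x  s.t.  Cx = d,
    with A symmetric positive definite.  Each outer iteration minimizes
    the augmented Lagrangian in x, then updates the multipliers."""
    lam = np.zeros(C.shape[0])
    for _ in range(iters):
        H = A + rho * C.T @ C
        rhs = b - C.T @ lam + rho * C.T @ d
        x = np.linalg.solve(H, rhs)          # inner problem, solved exactly here
        lam = lam + rho * (C @ x - d)        # first-order multiplier update
    return x, lam

A = np.diag([2.0, 2.0]); b = np.array([2.0, 4.0])     # invented data
C = np.array([[1.0, 1.0]]); d = np.array([1.0])
x, lam = auglag_eq_qp(A, b, C, d)
```

For this instance the KKT conditions give x* = (0, 1) with multiplier 2, and the multiplier iteration contracts linearly; the paper's contribution is to keep this outer behaviour (with an error estimate) while the inner solves are only approximate.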

16.
Exact penalty function algorithm with simple updating of the penalty parameter
A new globally convergent algorithm for minimizing an objective function subject to equality and inequality constraints is presented. The algorithm determines a search direction by solving a quadratic programming subproblem, which always has an optimal solution, and uses an exact penalty function to compute the steplength along this direction through an Armijo-type scheme. The special structure of the quadratic subproblem is exploited to construct a new and simple method for updating the penalty parameter. This method may increase or reduce the value of the penalty parameter depending on some easily performed tests. A new method for updating the Hessian of the Lagrangian is presented, and a Q-superlinear rate of convergence is established.

This work was supported in part by the British Council and the Conselho Nacional de Desenvolvimento Cientifico & Tecnologico/CNPq, Rio de Janeiro, Brazil. The authors are very grateful to Mr. Lam Yeung for his invaluable assistance in computing the results and to a reviewer for constructive advice.

17.
A new, infeasible QP-free algorithm for nonlinear constrained optimization problems is proposed. The algorithm is based on a continuously differentiable exact penalty function and on an active-set strategy. After a finite number of iterations, the algorithm requires only the solution of two linear systems at each iteration. We prove that the algorithm is globally convergent toward the KKT points and that, if the second-order sufficiency condition and the strict complementarity condition hold, then the rate of convergence is superlinear or even quadratic. Moreover, we incorporate two automatic adjustment rules for the choice of the penalty parameter and make use of an approximated direction as the derivative of the merit function, so that only first-order derivatives of the objective and constraint functions are used.

18.
A quasi-Newton extension of the Goldstein-Levitin-Polyak (GLP) projected gradient algorithm for constrained optimization is considered. Essentially, this extension projects an unconstrained descent step on to the feasible region. The determination of the stepsize is divided into two stages. The first is a stepsize sequence, chosen from the range [1,2], converging to unity. This determines the size of the unconstrained step. The second is a stepsize chosen from the range [0,1] according to a stepsize strategy and determines the length of the projected step. Two such strategies are considered. The first bounds the objective function decrease by a conventional linear functional, whereas the second uses a quadratic functional as a bound.

The introduction of the unconstrained step provides the option of taking steps that are larger than unity. It is shown that unit steplengths and subsequently superlinear convergence rates are attained if the projection of the quasi-Newton Hessian approximation approaches the projection of the Hessian at the solution. Thus, the requirement in the GLP algorithm for a positive definite Hessian at the solution is relaxed. This allows the use of strictly positive definite Hessian approximations, thereby simplifying the quadratic subproblem involved, even if the Hessian at the solution is not strictly positive definite.

This research was funded by a Science and Engineering Research Council Advanced Fellowship. The author is also grateful to an anonymous referee for numerous constructive criticisms and comments.
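A stripped-down projected-gradient (rather than quasi-Newton) version of the two-stage step on a box constraint conveys the idea: t1 is taken from a sequence in [1,2] converging to unity, and t2 in [0,1] comes from a simple Armijo test on the projected step. The objective and box are invented for illustration:

```python
import numpy as np

def glp_step(f, grad, x, lo, hi, t1, c=1e-4):
    """One two-stage projected-gradient step: an unconstrained move of
    size t1 (possibly larger than unity), projected onto the box, then
    an Armijo search (t2 in [0,1]) along the projected direction."""
    g = grad(x)
    y = np.clip(x - t1 * g, lo, hi)        # stage 1: project the long step
    d = y - x
    t2 = 1.0                               # stage 2: shrink only if needed
    while t2 > 1e-12 and f(x + t2 * d) > f(x) + c * t2 * (g @ d):
        t2 *= 0.5
    return x + t2 * d

f = lambda x: (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2   # invented objective
grad = lambda x: np.array([2.0 * (x[0] - 2.0), 2.0 * (x[1] + 1.0)])
lo, hi = np.zeros(2), np.ones(2)                       # box [0,1]^2
x = np.array([0.5, 0.5])
for k in range(20):
    x = glp_step(f, grad, x, lo, hi, t1=1.0 + 1.0 / (k + 1))  # t1 -> 1
```

The unconstrained minimizer (2, -1) lies outside the box, and the iteration settles on the boundary point (1, 0); in the quasi-Newton extension the raw gradient in stage 1 is replaced by a quasi-Newton step.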

19.
We present a class of trust region algorithms that use neither a penalty function nor a filter for nonlinear inequality constrained optimization, and analyze their global and local convergence. In each iteration, the algorithms reduce the value of the objective function or the measure of constraint violation according to the relationship between optimality and feasibility. A sequence of steps focused on improving optimality is referred to as an f-loop, while a restoration phase focused on improving feasibility is called an h-loop. In an f-loop, the algorithms compute the trial step by solving a classic QP subproblem rather than using a composite-step strategy. Global convergence is ensured by requiring the constraint violation at each iteration not to exceed a progressively tighter bound. By using a second-order correction strategy based on an active-set identification technique, the Maratos effect is avoided and fast local convergence is shown. The preliminary numerical results are encouraging.

20.
Conjugate gradient methods are efficient methods for minimizing differentiable objective functions in large dimension spaces. However, converging line search strategies are usually not easy to choose, nor to implement. Sun and colleagues (Ann. Oper. Res. 103:161–173, 2001; J. Comput. Appl. Math. 146:37–45, 2002) introduced a simple stepsize formula. However, the associated convergence domain happens to be over-restrictive, since it precludes the optimal stepsize in the convex quadratic case. Here, we identify this stepsize formula with one iteration of the Weiszfeld algorithm in the scalar case. More generally, we propose to make use of a finite number of iterates of such an algorithm to compute the stepsize. In this framework, we establish a new convergence domain that incorporates the optimal stepsize in the convex quadratic case.

The authors thank the associate editor and the reviewer for helpful comments and suggestions. C. Labat is now in a postdoctoral position, Johns Hopkins University, Baltimore, MD, United States.
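The Weiszfeld algorithm referred to here is the classical fixed-point iteration for the geometric median: each iterate is a distance-weighted average of the data points. A minimal sketch of the iteration itself, on invented points (how its scalar form generates CG stepsizes is the subject of the paper and is not reproduced):

```python
import numpy as np

def weiszfeld(points, iters=100, eps=1e-12):
    """Weiszfeld iteration for the geometric median: repeatedly replace
    the estimate by the average of the points weighted by 1/distance."""
    y = points.mean(axis=0)
    for _ in range(iters):
        dist = np.linalg.norm(points - y, axis=1)
        w = 1.0 / np.maximum(dist, eps)      # guard against zero distance
        y = (w[:, None] * points).sum(axis=0) / w.sum()
    return y

pts = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0]])
median = weiszfeld(pts)
```

In the scalar setting, one such update reproduces the Sun et al. stepsize formula (per the identification made in the abstract), and running a few more updates is what enlarges the admissible convergence domain.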

