Similar Articles
1.
The sequential gradient-restoration algorithm (SGRA) was developed in the late 1960s for the solution of equality-constrained nonlinear programs and has been successfully implemented by Miele and coworkers on many large-scale problems. The algorithm consists of two major sequentially applied phases. The first is a gradient-type minimization in a subspace tangent to the constraint surface, and the second is a feasibility restoration procedure. In Part 1, the original SGRA algorithm is described and is compared with two other related methods: the gradient projection and the generalized reduced gradient methods. Next, the special case of linear equalities is analyzed. It is shown that, in this case, only the gradient-type minimization phase is needed, and the SGRA becomes identical to the steepest-descent method. Convergence proofs for the nonlinearly constrained case are given in Part 2. Partial support for this work was provided by the Fund for the Promotion of Research at Technion, Israel Institute of Technology, Haifa, Israel.
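For the linear-equality special case described above, where SGRA reduces to steepest descent in the tangent subspace and no restoration is needed, a minimal sketch (the problem data, tolerances, and iteration limit are illustrative, not from the paper):

```python
import numpy as np

# Projected steepest descent for min f(x) = x'Qx/2 - b'x  s.t.  Ax = c.
# With linear equalities, the gradient phase alone preserves feasibility,
# so the restoration phase is never invoked.
rng = np.random.default_rng(0)
n, q = 5, 2
A = rng.standard_normal((q, n))
c = rng.standard_normal(q)
Q = 2.0 * np.eye(n)                       # convex quadratic objective
b = rng.standard_normal(n)

# Orthogonal projector onto the tangent subspace {d : Ad = 0}
P = np.eye(n) - A.T @ np.linalg.solve(A @ A.T, A)

x = np.linalg.lstsq(A, c, rcond=None)[0]  # a feasible starting point
for _ in range(200):
    g = Q @ x - b                         # gradient of f
    d = -P @ g                            # projected steepest-descent direction
    if np.linalg.norm(d) < 1e-10:
        break
    alpha = (d @ d) / (d @ Q @ d)         # exact line search for a quadratic
    x = x + alpha * d

print(np.linalg.norm(A @ x - c))          # feasibility is preserved throughout
```

Because each direction lies in the null space of A, every iterate remains exactly feasible, which is precisely why the restoration phase drops out in this case.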

2.
In this paper, the problem of minimizing a nonlinear function f(x) subject to a nonlinear constraint φ(x) = 0 is considered, where f is a scalar, x is an n-vector, and φ is a q-vector, with q < n. A conjugate gradient-restoration algorithm similar to those developed by Miele et al. (Refs. 1 and 2) is employed. This particular algorithm consists of a sequence of conjugate gradient-restoration cycles. The conjugate gradient portion of each cycle is based upon a conjugate gradient algorithm that is derived for the special case of a quadratic function subject to linear constraints. This portion of the cycle involves a single step and is designed to decrease the value of the function while satisfying the constraints to first order. The restoration portion of each cycle involves one or more iterations and is designed to restore the norm of the constraint function to within a predetermined tolerance about zero. The conjugate gradient-restoration sequence is reinitialized with a simple gradient step every n − q or fewer cycles. At the beginning of each simple gradient step, a positive-definite preconditioning matrix is used to accelerate the convergence of the algorithm. The preconditioner chosen, H+, is the positive-definite reflection of the Hessian matrix H. The matrix H+ is defined herein to be a matrix whose eigenvectors are identical to those of the Hessian and whose eigenvalues are the moduli of the latter's eigenvalues. A singular-value decomposition is used to efficiently construct this matrix. The selection of the matrix H+ as the preconditioner is motivated by the fact that gradient algorithms exhibit excellent convergence characteristics on quadratic problems whose Hessians have small condition numbers. To this end, the transforming operator H+^(-1/2) produces a transformed Hessian with a condition number of one. A higher-order example, which has resulted from a new eigenstructure assignment formulation (Ref. 3), is used to illustrate the rapidity of convergence of the algorithm, along with two simpler examples.
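The construction of the positive-definite reflection H+ can be sketched as follows. An eigendecomposition is used here in place of the paper's singular-value decomposition (for a symmetric matrix the two yield the same factors up to signs), and the test matrix is an illustrative random one:

```python
import numpy as np

# Sketch of the positive-definite reflection H+ of an indefinite Hessian:
# same eigenvectors as H, eigenvalues replaced by their moduli.
rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
H = (M + M.T) / 2.0                     # symmetric, generally indefinite

w, V = np.linalg.eigh(H)                # H = V diag(w) V'
H_plus = V @ np.diag(np.abs(w)) @ V.T   # eigenvalues -> moduli

# Preconditioning with H+^(-1/2) yields a transformed Hessian whose
# eigenvalues all have modulus one, i.e. condition number one.
S = V @ np.diag(np.abs(w) ** -0.5) @ V.T
T = S @ H @ S
print(np.abs(np.linalg.eigvalsh(T)))    # all ones
```

Since H and H+ share eigenvectors, the transformed Hessian is V diag(w/|w|) V', whose eigenvalues are all ±1, which is the condition-number-one property the abstract refers to.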

3.
The problem of the thermal stability of a horizontal incompressible fluid layer with linear and nonlinear temperature distributions is solved by using the sequential gradient-restoration algorithm developed for optimal control problems. The hydrodynamic boundary conditions for the layer include a rigid or free upper surface and a rigid lower surface. The resulting disturbance equations are solved as a Bolza problem in the calculus of variations. The results of the study are compared with existing works in the literature. The authors acknowledge valuable discussions with Dr. A. Miele.

4.
Described here is the structure and theory for a sequential quadratic programming algorithm for solving sparse nonlinear optimization problems. Also provided are the details of a computer implementation of the algorithm along with test results. The algorithm maintains a sparse approximation to the Cholesky factor of the Hessian of the Lagrangian. The solution to the quadratic program generated at each step is obtained by solving a dual quadratic program using a projected conjugate gradient algorithm. An updating procedure is employed that does not destroy sparsity.

5.
In this paper, sequential gradient-restoration algorithms for optimal control problems are considered, and attention is focused on the restoration phase. It is shown that the Lagrange multipliers associated with the restoration phase not only solve the auxiliary minimization problem of the restoration phase, but are also endowed with a supplementary optimality property: they minimize a special functional, quadratic in the multipliers, subject to the multiplier differential equations and boundary conditions, for given state, control, and parameter. Dedicated to L. Cesari. This work was supported by a grant of the National Science Foundation.

6.
In a companion paper (Part 1, J. Optim. Theory Appl. 137(3), [2008]), we determined the optimal starting conditions for the rendezvous maneuver using an optimal control approach. In this paper, we study the same problem with a mathematical programming approach. Specifically, we consider the relative motion between a target spacecraft in a circular orbit and a chaser spacecraft moving in its proximity as described by the Clohessy-Wiltshire equations. We consider the class of multiple-subarc trajectories characterized by constant thrust controls in each subarc. Under these conditions, the Clohessy-Wiltshire equations can be integrated in closed form and in turn this leads to optimization processes of the mathematical programming type. Within the above framework, we study the rendezvous problem under the assumption that the initial separation coordinates and initial separation velocities are free except for the requirement that the initial chaser-to-target distance is given. In particular, we consider the rendezvous between the Space Shuttle (chaser) and the International Space Station (target). Once a given initial distance SS-to-ISS is preselected, the present work supplies not only the best initial conditions for the rendezvous trajectory, but simultaneously the corresponding final conditions for the ascent trajectory.
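The closed-form integrability of the Clohessy-Wiltshire equations can be illustrated with their homogeneous (zero-thrust) in-plane solution; the paper's multiple-subarc trajectories add constant-thrust particular terms to this. The mean motion and initial state below are illustrative values, not data from the paper:

```python
import math

# Closed-form in-plane Clohessy-Wiltshire solution for the position of a
# chaser relative to a target in circular orbit with mean motion n.
def cw_state(t, n, x0, y0, vx0, vy0):
    """In-plane position (x radial, y along-track) a time t after the start."""
    s, c = math.sin(n * t), math.cos(n * t)
    x = (4.0 - 3.0 * c) * x0 + (s / n) * vx0 + (2.0 / n) * (1.0 - c) * vy0
    y = (6.0 * (s - n * t) * x0 + y0
         + (2.0 / n) * (c - 1.0) * vx0 + (1.0 / n) * (4.0 * s - 3.0 * n * t) * vy0)
    return x, y

n = 0.0011                   # rad/s, roughly the ISS mean motion
x0, y0, vx0, vy0 = 100.0, -200.0, 0.1, 0.0
T = 2.0 * math.pi / n        # one orbital period

x, y = cw_state(T, n, x0, y0, vx0, vy0)
print(x, y)   # radial offset returns to x0; along-track drifts by -12*pi*x0
```

The secular terms in y show the characteristic along-track drift for a nonzero radial offset, which is what a rendezvous maneuver must cancel with thrust.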

7.
In this paper, a class of general nonlinear programming problems with inequality and equality constraints is discussed. First, the original problem is transformed into an associated, simpler equivalent problem with only inequality constraints. Then, inspired by the ideas of the sequential quadratic programming (SQP) method and the method of systems of linear equations (SLE), a new type of SQP algorithm for solving the original problem is proposed. At each iteration, the search direction is generated by combining two directions, which are obtained by solving an always-feasible quadratic programming (QP) subproblem and an SLE, respectively. Moreover, in order to overcome the Maratos effect, a higher-order correction direction is obtained by solving another SLE. The two SLEs have the same coefficient matrix, and only one of them needs to be solved after a finite number of iterations. Through a new line search technique, the proposed algorithm possesses global and superlinear convergence under suitable assumptions, without strict complementarity. Finally, comparative numerical results are reported to show that the proposed algorithm is effective and promising.

8.
The convergence analysis of a nonlinear Lagrange algorithm for solving nonlinear constrained optimization problems with both inequality and equality constraints is explored in detail. Estimates for the derivatives of the multiplier mapping and the solution mapping of the proposed algorithm are derived via the singular-value decomposition of matrices. Based on these estimates, the local convergence results and the rate of convergence of the algorithm are presented for penalty parameters below a threshold, under a set of suitable conditions on the problem functions. Furthermore, the condition number of the Hessian of the nonlinear Lagrange function with respect to the decision variables is analyzed, which is closely related to the efficiency of the algorithm. Finally, preliminary numerical results for several typical test problems are reported.

9.
In this work, a subsampled Levenberg-Marquardt algorithm is proposed for solving nonconvex finite-sum optimization problems. At each iteration, based on subsampled function values, gradients, and a simplified Hessian, a linear system is inexactly solved and the regularization parameter is updated in the manner of trust-region algorithms. Provided the sample size increases asymptotically, we prove that the generated sequence converges to a stationary point almost surely.
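The basic loop can be sketched for a nonlinear least-squares finite sum; the model, batch size, and regularization update below are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

# Subsampled Levenberg-Marquardt sketch: at each iteration a random batch
# supplies the residuals and Jacobian, the regularized normal equations are
# solved, and the parameter mu is adapted trust-region style.
rng = np.random.default_rng(2)
N = 1000
t = np.linspace(0.0, 1.0, N)
y = 2.0 * np.exp(-1.5 * t) + 0.5 + 0.01 * rng.standard_normal(N)

def residuals(x, idx):                      # model a*exp(b*t) + c minus data
    a, b, c = x
    return a * np.exp(b * t[idx]) + c - y[idx]

def jacobian(x, idx):
    a, b, c = x
    e = np.exp(b * t[idx])
    return np.column_stack([e, a * t[idx] * e, np.ones(idx.size)])

x, mu = np.array([1.0, -1.0, 0.0]), 1.0
for _ in range(80):
    idx = rng.choice(N, size=200, replace=False)          # subsampled batch
    r, J = residuals(x, idx), jacobian(x, idx)
    step = np.linalg.solve(J.T @ J + mu * np.eye(3), -J.T @ r)
    if np.sum(residuals(x + step, idx) ** 2) < np.sum(r ** 2):
        x, mu = x + step, max(mu / 2.0, 1e-8)             # accept, relax
    else:
        mu *= 4.0                                         # reject, regularize

print(x)   # approaches the generating parameters (2.0, -1.5, 0.5)
```

Accepting or rejecting the step on the sampled objective and inflating mu on rejection mirrors the trust-region-style update the abstract describes.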

10.
The problem of minimizing a function f(x) subject to the nonlinear constraint φ(x) = 0 is considered, where f is a scalar, x is an n-vector, and φ is a q-vector, with q < n. The sequential gradient-restoration algorithm (SGRA: Miele, [1, 2]) and the gradient-projection algorithm (GPA: Rosen, [3, 4]) are considered. These algorithms have one common characteristic: they are all composed of the alternate succession of gradient phases and restoration phases. However, they are different in several aspects, namely, (a) problem formulation, (b) structure of the gradient phase, and (c) structure of the restoration phase. First, a critical summary of SGRA and GPA is presented. Then, a comparison is undertaken by considering the speed of convergence and, above all, robustness (that is, the capacity of an algorithm to converge to a solution). The comparison is done through 16 numerical examples. In order to understand the separate effects of characteristics (a), (b), (c), six new experimental algorithms are generated by combining parts of Miele's algorithm with parts of Rosen's algorithm. Thus, the total number of algorithms investigated is eight. The numerical results show that Miele's method is on the average faster than Rosen's method. More importantly, regarding robustness, Miele's method compares favorably with Rosen's method. Through the examples, it is shown that Miele's advantage in robustness is more prominent as the curvature of the constraint increases. While this advantage is due to the combined effect of characteristics (a), (b), (c), it is characteristic (c) that plays the dominant role. Indeed, Miele's restoration provides a better search direction as well as better step-size control than Rosen's restoration.
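The restoration phase that the comparison singles out can be illustrated by its basic building block: a minimum-norm first-order correction back toward the constraint surface. The single constraint below is a toy example, not one of the paper's 16 test problems:

```python
import numpy as np

# Restoration sketch: from an infeasible point, repeatedly take the
# minimum-norm correction dx satisfying phi(x) + J dx = 0 to first order.
def phi(x):
    return np.array([x[0] ** 2 + x[1] ** 2 - 1.0])   # unit circle, q = 1 < n = 2

def jac(x):
    return np.array([[2.0 * x[0], 2.0 * x[1]]])

x = np.array([1.3, 0.9])                   # infeasible starting point
for _ in range(20):                        # restoration iterations
    v, J = phi(x), jac(x)
    if np.linalg.norm(v) < 1e-12:
        break
    # minimum-norm solution of J dx = -v:  dx = -J'(J J')^{-1} v
    dx = -J.T @ np.linalg.solve(J @ J.T, v)
    x = x + dx.ravel()

print(phi(x))                              # constraint norm restored to ~0
```

Because the correction is the least-change solution of the linearized constraint, the restored point stays close to where the gradient phase left off, which is the behavior the robustness comparison rewards.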

11.
In this paper we give a new convergence analysis of a projective scaling algorithm. We consider a long-step affine scaling algorithm applied to a homogeneous linear programming problem obtained from the original linear programming problem. This algorithm takes a fixed fraction λ≤2/3 of the way towards the boundary of the nonnegative orthant at each iteration. The iteration sequence for the original problem is obtained by pulling back the homogeneous iterates onto the original feasible region with a conical projection, which generates the same search direction as the original projective scaling algorithm at each iterate. The recent convergence results for the long-step affine scaling algorithm by the authors are applied to this algorithm to obtain some convergence results on the projective scaling algorithm. Specifically, we will show (i) polynomiality of the algorithm with complexities of O(nL) and O(n²L) iterations for λ<2/3 and λ=2/3, respectively; (ii) global convergence of the algorithm when the optimal face is unbounded; (iii) convergence of the primal iterates to a relative interior point of the optimal face; (iv) convergence of the dual estimates to the analytic center of the dual optimal face; and (v) convergence of the reduction rate of the objective function value to 1−λ.
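One long-step affine scaling iteration can be sketched as follows; the small LP and the starting point are illustrative, not from the paper:

```python
import numpy as np

# One affine scaling step for min c'x s.t. Ax = b, x >= 0, moving a fixed
# fraction lam <= 2/3 of the way to the boundary of the nonnegative orthant.
def affine_scaling_step(A, c, x, lam=2.0 / 3.0):
    X = np.diag(x)
    AX = A @ X
    w = np.linalg.solve(AX @ AX.T, AX @ (X @ c))   # dual estimates
    d = -X @ (X @ c - AX.T @ w)                    # primal direction, A d = 0
    neg = d < 0
    if not neg.any():
        return x + d                               # unbounded; step anyway
    t = np.min(-x[neg] / d[neg])                   # distance to the boundary
    return x + lam * t * d

# min x1 + x2  s.t.  x1 + x2 + x3 = 1,  x >= 0   (optimal value 0)
A = np.array([[1.0, 1.0, 1.0]])
c = np.array([1.0, 1.0, 0.0])
x = np.array([0.3, 0.3, 0.4])                      # strictly feasible start
for _ in range(60):
    x = affine_scaling_step(A, c, x)
print(c @ x)    # objective value decreases toward 0
```

On this example the per-iteration objective reduction factor approaches 1−λ = 1/3, matching result (v) of the abstract.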

12.
In this paper, we propose a BFGS (Broyden–Fletcher–Goldfarb–Shanno)-SQP (sequential quadratic programming) method for nonlinear inequality constrained optimization. At each step, the method generates a direction by solving a quadratic programming subproblem. A good feature of this subproblem is that it is always consistent. Moreover, we propose a practical update formula for the quasi-Newton matrix. Under mild conditions, we prove the global and superlinear convergence of the method. We also present some numerical results.
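The paper proposes its own practical update formula; for illustration only, here is the standard Powell-damped BFGS update commonly used in SQP methods to keep the quasi-Newton matrix positive definite even when the curvature condition s'y > 0 fails:

```python
import numpy as np

# Powell's damped BFGS update: blend y with Bs so the damped curvature
# s'r stays at least 0.2 * s'Bs, preserving positive definiteness.
def damped_bfgs(B, s, y):
    Bs = B @ s
    sBs = s @ Bs
    sy = s @ y
    theta = 1.0 if sy >= 0.2 * sBs else 0.8 * sBs / (sBs - sy)
    r = theta * y + (1.0 - theta) * Bs       # damped gradient difference
    return B - np.outer(Bs, Bs) / sBs + np.outer(r, r) / (s @ r)

B = np.eye(2)
s = np.array([1.0, 0.5])                     # step
y = np.array([-0.3, 0.2])                    # gradient difference, s'y < 0
B = damped_bfgs(B, s, y)
print(np.linalg.eigvalsh(B))                 # both eigenvalues positive
```

With s'y = −0.2 here, an undamped BFGS update would destroy positive definiteness; the damping keeps the updated matrix usable in the QP subproblem.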

13.
The cubic algorithm (Ref. 1) is a nongradient method for the solution of multi-extremal, nonconvex Lipschitzian optimization problems. The precision and complexity of this algorithm are studied, and improved computational schemes are proposed.
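The cubic algorithm itself is not reproduced here; as an illustration of nongradient Lipschitzian global optimization, the closely related Piyavskii-Shubert scheme shows the saw-tooth lower-bound idea (the test function and Lipschitz constant are illustrative):

```python
import math

# Piyavskii-Shubert: repeatedly sample where the piecewise-linear lower
# bound built from the Lipschitz constant L is smallest.
def shubert_minimize(f, a, b, L, iters=120):
    pts = [(a, f(a)), (b, f(b))]
    for _ in range(iters):
        pts.sort()
        best_x, best_lb = None, float("inf")
        for (x1, f1), (x2, f2) in zip(pts, pts[1:]):
            # intersection of the two Lipschitz cones over [x1, x2]
            xc = 0.5 * (x1 + x2) + (f1 - f2) / (2.0 * L)
            lb = 0.5 * (f1 + f2) - 0.5 * L * (x2 - x1)
            if lb < best_lb:
                best_x, best_lb = xc, lb
        pts.append((best_x, f(best_x)))
    return min(pts, key=lambda p: p[1])

# multi-extremal test function with |f'| <= 2 on [0, 2*pi]
x, fx = shubert_minimize(lambda t: math.sin(t) + math.sin(3.0 * t) / 3.0,
                         0.0, 2.0 * math.pi, L=2.0)
print(x, fx)   # global minimum value is about -0.943
```

The gap between the best sampled value and the smallest lower bound gives a certified precision bound, the quantity the abstract's precision analysis concerns.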

14.
Several optimization schemes have been known for convex optimization problems. However, numerical algorithms for solving nonconvex optimization problems are still underdeveloped. Significant progress beyond convexity was made by considering the class of functions representable as differences of convex functions. In this paper, we introduce a generalized proximal point algorithm to minimize the difference of a nonconvex function and a convex function. We also study convergence results of this algorithm under the main assumption that the objective function satisfies the Kurdyka–Łojasiewicz property.
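A classical DC proximal point step linearizes the concave part and solves the resulting strongly convex subproblem; a one-dimensional sketch with an illustrative split and proximal parameter (not the paper's generalized scheme):

```python
import numpy as np

# Proximal point sketch for f = g - h with g(x) = x^4 and h(x) = 2x^2,
# so f(x) = x^4 - 2x^2 is nonconvex with local minimizers at x = +/-1.
lam = 0.5           # proximal parameter
x = 2.0             # starting point
for _ in range(100):
    # Linearize h at x_k, then minimize g(x) - h'(x_k)*x + (x - x_k)^2/(2*lam).
    # Stationarity gives the cubic 4x^3 + x/lam = 4*x_k + x_k/lam, which has
    # a unique real root since the left-hand side is strictly increasing.
    roots = np.roots([4.0, 0.0, 1.0 / lam, -(4.0 * x + x / lam)])
    x = float(roots[np.argmin(np.abs(roots.imag))].real)
print(x)    # converges to the local minimizer x = 1
```

Each subproblem is strongly convex even though f is not, which is the structural advantage of the DC decomposition.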

15.
Convergence Properties of Two-Stage Stochastic Programming
This paper considers a procedure of two-stage stochastic programming in which the performance function to be optimized is replaced by its empirical mean. This procedure converts a stochastic optimization problem into a deterministic one for which many methods are available. Another strength of the method is that there is essentially no requirement on the distribution of the random variables involved. Exponential convergence for the probability of deviation of the empirical optimum from the true optimum is established using large deviation techniques. Explicit bounds on the convergence rates are obtained for the case of quadratic performance functions. Finally, numerical results are presented for the famous news vendor problem, which lends experimental evidence supporting exponential convergence.
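The empirical-mean substitution can be sketched on the newsvendor problem; the demand distribution, costs, and grid below are illustrative choices, not the paper's experiment:

```python
import numpy as np

# Sample average approximation for the newsvendor: replace the expected
# cost by its empirical mean over a demand sample and optimize the
# resulting deterministic function.
rng = np.random.default_rng(3)
c, p = 1.0, 3.0                              # unit cost and selling price
demand = rng.exponential(50.0, size=20000)   # empirical demand sample

def empirical_cost(q):
    return c * q - p * np.mean(np.minimum(q, demand))

qs = np.linspace(0.0, 300.0, 3001)
q_hat = qs[np.argmin([empirical_cost(q) for q in qs])]

# True optimum for exponential demand: critical fractile (p - c)/p
q_star = -50.0 * np.log(1.0 - (p - c) / p)
print(q_hat, q_star)    # empirical optimum close to the true optimum
```

The deviation of q_hat from q_star shrinks as the sample grows, with exponentially decaying probability of large deviations, which is the convergence behavior the abstract establishes.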

16.
For current sequential quadratic programming (SQP) type algorithms, two problems exist: (i) to obtain a search direction, one must solve one or more quadratic programming subproblems per iteration, and the computational cost is large, so these algorithms are not suitable for large-scale problems; (ii) SQP algorithms require that the related quadratic programming subproblem be solvable at each iteration, a requirement that is difficult to satisfy. By using an ε-active set procedure with a special penalty function as the merit function, a new algorithm of sequential systems of linear equations for general nonlinear optimization problems with an arbitrary initial point is presented. The new algorithm only needs to solve three systems of linear equations having the same coefficient matrix per iteration, and it has global convergence and local superlinear convergence. To some extent, the new algorithm can overcome the shortcomings of the SQP algorithms mentioned above. Project partly supported by the National Natural Science Foundation of China and the Tianyuan Foundation of China.

17.
In this paper, the Iri-Imai algorithm for solving linear and convex quadratic programming is extended to solve some other smooth convex programming problems. The globally linear convergence rate of this extended algorithm is proved, under the condition that the objective and constraint functions satisfy a certain type of convexity, called the harmonic convexity in this paper. A characterization of this convexity condition is given. The same convexity condition was used by Mehrotra and Sun to prove the convergence of a path-following algorithm. The Iri-Imai algorithm is a natural generalization of the original Newton algorithm to constrained convex programming. Other known convergent interior-point algorithms for smooth convex programming are mainly based on the path-following approach.

18.
For the unconstrained program (P): min_{x∈R^n} f(x), where f(x) is a continuously differentiable function from R^n to R^1, a supermemory gradient algorithm is designed. Dropping the assumption that the iterate sequence {x_k} is bounded, and using a generalized Armijo step-size search, the global convergence of the algorithm is discussed, and the algorithm is shown to possess rather strong convergence properties.

19.
This paper deals with a modified nonlinear inexact Uzawa (MNIU) method for solving the stabilized saddle point problem. The modified Uzawa method is an inexact inner-outer iteration with a variable relaxation parameter and has been discussed in the literature for uniform inner accuracy. This paper focuses on the general case in which the accuracy of the inner iteration can be variable, and studies the convergence of MNIU with variable inner accuracy based on a simple energy norm. Sufficient conditions for the convergence of MNIU are proposed. The convergence analysis not only greatly improves the existing convergence results for uniform inner accuracy in the literature, but also extends the convergence to variable inner accuracy, which has not previously been treated in the literature. Numerical experiments are given to show the efficiency of the MNIU algorithm.
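A basic inexact inner-outer Uzawa iteration for a stabilized saddle point system can be sketched as follows; the matrices, the Richardson inner solver, and the relaxation parameter are illustrative assumptions, not the MNIU scheme itself:

```python
import numpy as np

# Inexact Uzawa sketch for the stabilized saddle point system
#   [A  B'] [u]   [f]
#   [B -C ] [p] = [g]
# with a few Richardson sweeps as the (inexact) inner solver for A.
rng = np.random.default_rng(4)
n, m = 8, 3
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)      # SPD (1,1) block
B = rng.standard_normal((m, n))
C = 0.1 * np.eye(m)              # stabilization block
f, g = rng.standard_normal(n), rng.standard_normal(m)

L = np.linalg.norm(A, 2)         # step size for the inner Richardson sweeps
tau = 0.5                        # outer relaxation parameter
u, p = np.zeros(n), np.zeros(m)
for _ in range(1500):
    for _ in range(10):          # inexact inner solve of A u = f - B' p
        u = u + (f - A @ u - B.T @ p) / L
    p = p + tau * (B @ u - C @ p - g)        # outer Uzawa update

res = np.linalg.norm(A @ u + B.T @ p - f) + np.linalg.norm(B @ u - C @ p - g)
print(res)   # residual of the saddle point system
```

Varying the number of inner sweeps per outer step is exactly the variable-inner-accuracy regime whose convergence the paper analyzes.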

20.
This paper proposes a nonlinear Lagrange algorithm for solving nonconvex semidefinite programs. Under the second-order sufficient condition and the strict complementarity condition, a convergence theorem for the algorithm is proved. The convergence results show that the algorithm is locally convergent when the penalty parameter is less than a certain threshold; in addition, an error bound for the solution, depending on the penalty parameter, is given.

