Similar Literature
20 similar records found (search time: 15 ms)
1.
2.
The family of feasible methods for minimization with nonlinear constraints includes the nonlinear projected gradient method, the generalized reduced gradient method (GRG), and many variants of the sequential gradient restoration algorithm (SGRA). Generally speaking, an iteration of any of these methods proceeds in two phases. In the restoration phase, feasibility is restored by solving an auxiliary nonlinear problem, generally a nonlinear system of equations. In the minimization phase, optimality is improved by considering the objective function, or its Lagrangian, on the subspace tangent to the constraints. In this paper, minimal assumptions on the restoration phase and the minimization phase are stated that ensure that the resulting algorithm is globally convergent. The key point is the possibility of comparing two successive nonfeasible iterates by means of a suitable merit function that combines feasibility and optimality. The merit function allows one to work with a high degree of infeasibility in the first iterations of the algorithm. Global convergence is proved and a particular implementation of the model algorithm is described.
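A minimal sketch of the kind of merit function described above, combining the objective with an infeasibility measure. The specific merit function used in the paper is not reproduced here; the quadratic objective, linear constraint, and penalty weight `rho` are hypothetical choices for illustration only.

```python
import numpy as np

# Generic merit function: objective plus a penalty on constraint violation.
# (Illustrative only; not the paper's actual merit function.)
def merit(f, h, x, rho):
    return f(x) + rho * np.linalg.norm(h(x))

# Example: minimize f(x) = x0^2 + x1^2 subject to h(x) = x0 + x1 - 1 = 0.
f = lambda x: x[0]**2 + x[1]**2
h = lambda x: np.array([x[0] + x[1] - 1.0])

x_infeasible = np.array([2.0, 2.0])   # far from the constraint
x_feasible   = np.array([0.5, 0.5])   # satisfies h(x) = 0

# The merit function lets us compare two iterates even when one (or both)
# is infeasible: here the feasible point has the smaller merit value.
assert merit(f, h, x_feasible, rho=10.0) < merit(f, h, x_infeasible, rho=10.0)
```

The penalty weight `rho` trades off feasibility against optimality; a large `rho` forces progress toward the feasible set first.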

3.
For a simple nonsmooth minimization problem, the discrete minisum problem, an efficient hybrid method is presented. The method consists of an ‘inner algorithm’ (a Newton method) for solving the necessary optimality conditions and a gradient-type ‘outer algorithm’. In this way we combine the large convergence region of the gradient technique with the fast final convergence of the Newton method.

4.
In this work, we introduce two new Barzilai-Borwein-like step sizes for the classical gradient method for strictly convex quadratic optimization problems. The proposed step sizes employ second-order information in order to obtain faster gradient-type methods. Both step sizes are derived from two unconstrained optimization models that involve approximate information about the Hessian of the objective function. A convergence analysis of the proposed algorithm is provided. Some numerical experiments are performed in order to compare the efficiency and effectiveness of the proposed methods with similar methods in the literature. Experimentally, it is observed that our proposals accelerate the gradient method at nearly no extra computational cost, which makes them a good alternative for solving large-scale problems.
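For context, a sketch of the classical Barzilai-Borwein gradient method that the abstract builds on, applied to a strictly convex quadratic. The two new step sizes proposed in the paper are not reproduced here; this shows only the standard BB1 step on an assumed small example.

```python
import numpy as np

# Classical Barzilai-Borwein gradient method for min 0.5*x'Ax - b'x, A SPD.
def bb_gradient(A, b, x0, iters=50):
    x_prev = x0
    g_prev = A @ x_prev - b
    x = x_prev - 0.1 * g_prev            # first step: plain short gradient step
    for _ in range(iters):
        g = A @ x - b
        if np.linalg.norm(g) < 1e-10:    # converged; avoid a 0/0 step below
            break
        s, y = x - x_prev, g - g_prev
        alpha = (s @ s) / (s @ y)        # BB1 step (BB2 would be s@y / y@y)
        x_prev, g_prev = x, g
        x = x - alpha * g
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])   # SPD Hessian of the quadratic
b = np.array([1.0, 1.0])
x = bb_gradient(A, b, np.zeros(2))
assert np.allclose(A @ x, b)             # minimizer solves Ax = b
```

Note that `s @ y = s @ (A s) > 0` for SPD `A`, so the BB1 step is well defined away from the solution.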

5.
Rapid progress in information and computer technology allows the development of more advanced optimal control algorithms dealing with real-world problems. In this paper, which is Part 1 of a two-part sequence, a multiple-subarc gradient-restoration algorithm (MSGRA) is developed. We note that the original version of the sequential gradient-restoration algorithm (SGRA) was developed by Miele et al. in single-subarc form (SSGRA) during the years 1968–86; it has been applied successfully to solve a large number of optimal control problems of atmospheric and space flight. MSGRA is an extension of SSGRA, the single-subarc gradient-restoration algorithm. The primary reason for MSGRA is to enhance the robustness of gradient-restoration algorithms and to enlarge their field of applications. Indeed, MSGRA can be applied to optimal control problems involving multiple subsystems as well as discontinuities in the state and control variables at the interfaces between contiguous subsystems. Two features of MSGRA are increased automation and efficiency. The automation of MSGRA is enhanced via time normalization: the actual time domain is mapped into a normalized time domain such that the normalized time length of each subarc is 1. The efficiency of MSGRA is enhanced by using the method of particular solutions to solve the multipoint boundary-value problems associated with the gradient phase and the restoration phase of the algorithm. In a companion paper [Part 2 (Ref. 2)], MSGRA is applied to compute the optimal trajectory for a multistage launch vehicle design, specifically, a rocket-powered spacecraft ascending from the Earth's surface to a low Earth orbit (LEO). Single-stage, double-stage, and triple-stage configurations are considered and compared.

6.
A new line search algorithm for unconstrained optimization
Based on an analysis of various effective line search algorithms, a new line search algorithm for smooth unconstrained optimization problems is presented. For unconstrained problems whose objective function is twice continuously differentiable and bounded below, the algorithm has the same theoretical properties as the Wolfe-Powell line search algorithm. The algorithm requires at most two gradient evaluations per iteration, which saves computation when evaluating the gradient of the objective function is expensive. Numerical experiments show that the proposed algorithm is feasible and effective.

7.
We study methods for solving a class of quasivariational inequalities in Hilbert space in which the variable set is described by a translation of a fixed, closed, convex set. We consider a variant of the gradient-type projection method and an extragradient method. The possibilities for choosing the parameters of the gradient projection method are wider in this case than in the general case of a variable set. At each iteration, the extragradient method makes one trial step along the gradient, and the gradient evaluated at the resulting point is then used at the original point as the iteration direction. We establish sufficient conditions for the convergence of the proposed methods and derive a new estimate of their rates of convergence. The main result of this paper is the convergence analysis of the extragradient method.
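The extragradient step described above (trial step, then a corrected step using the gradient at the trial point) can be sketched as follows for the simple case of minimizing a smooth function over a box. The box constraint, step size `tau`, and quadratic objective are assumed for illustration; the paper works in the more general quasivariational setting.

```python
import numpy as np

def project_box(x, lo, hi):
    return np.clip(x, lo, hi)

# Extragradient iteration: trial step along the gradient, then a step from
# the original point using the gradient evaluated at the trial point.
def extragradient(grad, x, lo, hi, tau=0.1, iters=200):
    for _ in range(iters):
        y = project_box(x - tau * grad(x), lo, hi)   # trial step
        x = project_box(x - tau * grad(y), lo, hi)   # corrected step
    return x

# Example: minimize f(x) = ||x - c||^2 over the box [0,1]^2, c outside it.
c = np.array([2.0, -1.0])
grad = lambda x: 2.0 * (x - c)
x = extragradient(grad, np.array([0.5, 0.5]), 0.0, 1.0)
assert np.allclose(x, [1.0, 0.0])   # projection of c onto the box
```

The step size `tau` must be small relative to the gradient's Lipschitz constant (here L = 2, and tau = 0.1 < 1/L).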

8.
In this paper, we propose an imaging technique for the detection of porous inclusions in a stationary flow governed by the Stokes–Brinkmann equations. We introduce the velocity method to perform the shape deformation and derive the structure of the shape gradient of the cost functional based on the continuous adjoint method and the function space parametrization technique. Moreover, we present a gradient-type algorithm for the shape inverse problem. The numerical results demonstrate that the proposed algorithm is feasible and effective even for problems with quite high Reynolds numbers.

9.
Efficient generalized conjugate gradient algorithms, part 2: Implementation
In Part 1 of this paper (Ref. 1), a new generalized conjugate gradient algorithm was proposed and its convergence investigated. In this second part, the new algorithm is compared numerically with other modified conjugate gradient methods and with limited-memory quasi-Newton methods.

10.
This paper extends some theoretical properties of the conjugate gradient-type method FLR (Ref. 1) for iteratively solving indefinite linear systems of equations. The latter algorithm is a generalization of the conjugate gradient method of Hestenes and Stiefel (CG, Ref. 2). We develop a complete relationship between the FLR algorithm and the Lanczos process in the case of indefinite and possibly singular matrices. We then develop simple theoretical results for the FLR algorithm in order to construct an approximation of the Moore-Penrose pseudoinverse of an indefinite matrix. Our approach supplies the theoretical framework for applications within unconstrained optimization. This work was partially supported by the MIUR FIRB Research Program on Large-Scale Nonlinear Optimization and by the Ministero delle Infrastrutture e dei Trasporti in the framework of the Research Program on Safety. The author thanks Stefano Lucidi and Massimo Roma for fruitful discussions and the Associate Editor for effective comments.

11.
A constrained minimax problem is converted into the minimization of a sequence of unconstrained, continuously differentiable functions in a manner similar to Morrison's method for constrained optimization. One can thus apply any efficient gradient minimization technique to the unconstrained minimization at each step of the sequence. Based on this approach, two algorithms are proposed: the first is simpler to program, and the second is generally faster. To show the efficiency of the algorithms even for unconstrained problems, examples are used to compare the two algorithms with recent methods in the literature. It is found that the second algorithm converges faster than the other methods. Several constrained examples are also tried and the results are presented.

12.
In this paper, we present a new conjugate gradient (CG) based algorithm in the class of planar conjugate gradient methods. These methods aim at solving systems of linear equations whose coefficient matrix is indefinite and nonsingular. This is the case where the application of the standard CG algorithm of Hestenes and Stiefel (Ref. 1) may fail due to a possible division by zero. We give a complete proof of global convergence for a new planar method endowed with a general structure; furthermore, we describe some important features of our planar algorithm, which will be used within the optimization framework of the companion paper (Part 2, Ref. 2). Here, preliminary numerical results are reported. This work was supported by the MIUR FIRB Research Program on Large-Scale Nonlinear Optimization, Rome, Italy. The author acknowledges Luigi Grippo and Stefano Lucidi, who contributed considerably to the elaboration of this paper. The exchange of experiences with Massimo Roma was a constant help in the investigation. The author expresses his gratitude to the Associate Editor and the referees for suggestions and corrections.

13.
Relation between the memory gradient method and the Fletcher-Reeves method
The minimization of a function of unconstrained variables is considered using the memory gradient method. It is shown that, for the particular case of a quadratic function, the memory gradient algorithm and the Fletcher-Reeves algorithm are identical. This research was supported by the Office of Scientific Research, Office of Aerospace Research, United States Air Force, Grant No. AF-AFOSR-828-67. In more expanded form, it can be found in Ref. 1.
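The Fletcher-Reeves algorithm referred to above can be sketched as follows for a quadratic with exact line searches, where it terminates in at most n steps; the quadratic test problem is an assumed example.

```python
import numpy as np

# Fletcher-Reeves conjugate gradient on f(x) = 0.5*x'Ax - b'x with exact
# line searches; on a quadratic it terminates in at most n iterations.
def fletcher_reeves(A, b, x):
    g = A @ x - b
    d = -g
    for _ in range(len(b)):
        alpha = -(g @ d) / (d @ A @ d)    # exact line search for a quadratic
        x = x + alpha * d
        g_new = A @ x - b
        beta = (g_new @ g_new) / (g @ g)  # Fletcher-Reeves formula
        d = -g_new + beta * d
        g = g_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = fletcher_reeves(A, b, np.zeros(2))
assert np.allclose(A @ x, b)   # finite termination at the minimizer
```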

14.
For solving large-scale unconstrained minimization problems, the nonlinear conjugate gradient method is attractive due to its simplicity, low storage, efficiency, and nice convergence properties. Among the methods in this framework, the conjugate gradient descent algorithm CG_DESCENT is very popular; its generated directions descend automatically, and this nice property is independent of the line search used. In this paper, we generalize CG_DESCENT with two Barzilai–Borwein steplengths reused cyclically. We show that the resulting algorithm possesses an attractive sufficient descent property and converges globally under some mild conditions. We test the proposed algorithm on a large set of high-dimensional unconstrained problems from the CUTEr library. The numerical comparisons with the state-of-the-art algorithm CG_DESCENT illustrate that the proposed method is effective, competitive, and promising.

15.
We propose an efficient gradient-type algorithm to solve the fourth-order LLT denoising model for both gray-scale and vector-valued images. Based on the primal-dual formulation of the original nondifferentiable model, the new algorithm updates the primal and dual variables alternately using gradient descent/ascent flows. Numerical examples are provided to demonstrate the superiority of our algorithm.

16.
The continuous gradient projection method and the continuous gradient-type method in a space with a variable metric are studied for the numerical solution of quasi-variational inequalities, and conditions for the convergence of the proposed methods are established.

17.
A new dual gradient method is given to solve linearly constrained, strongly convex, separable mathematical programming problems. The dual problem can be decomposed into one-dimensional problems whose solutions can be computed extremely easily. The dual objective function is shown to have a Lipschitz continuous gradient, and therefore a gradient-type algorithm can be used for solving the dual problem. The primal optimal solution can be obtained from the dual optimal solution in a straightforward way. Convergence proofs and computational results are given.
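The dual decomposition described above can be illustrated on a toy separable problem; the specific objective, constraint, and step size are hypothetical choices, not the paper's scheme.

```python
import numpy as np

# Illustrative dual gradient ascent for the separable problem
#   minimize sum_i 0.5*(x_i - c_i)^2  subject to  sum_i x_i = d.
# For a fixed multiplier lam, the Lagrangian separates into one-dimensional
# subproblems with the closed-form solution x_i = c_i - lam.
def dual_gradient(c, d, steps=100):
    n, lam = len(c), 0.0
    for _ in range(steps):
        x = c - lam                     # solve the 1-D subproblems
        lam += (x.sum() - d) / n        # gradient ascent step on the dual
    return c - lam                      # primal solution from the dual one

c = np.array([1.0, 2.0, 3.0])
x = dual_gradient(c, d=3.0)
assert np.isclose(x.sum(), 3.0)        # primal feasibility recovered
```

Here the dual gradient is the constraint residual `x(lam).sum() - d`, which is Lipschitz in `lam`, matching the structure the abstract exploits.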

18.
Many derivative-free methods for constrained problems are not efficient for minimizing functions on “thin” domains. Other algorithms, like those based on augmented Lagrangians, deal with thin constraints using penalty-like strategies. When the constraints are computationally inexpensive but highly nonlinear, these methods spend many potentially expensive objective function evaluations owing to the difficulty of improving feasibility. An algorithm that handles this case efficiently is proposed in this paper. The main iteration is split into two steps: restoration and minimization. In the restoration step, the aim is to decrease infeasibility without evaluating the objective function. In the minimization step, the objective function f is minimized on a relaxed feasible set. A global minimization result is proved and computational experiments showing the advantages of this approach are presented.

19.
The spectral projected gradient (SPG) method is an algorithm for large-scale bound-constrained optimization introduced recently by Birgin, Martínez, and Raydan. It is based on Raydan's unconstrained generalization of the Barzilai-Borwein method for quadratics. The SPG algorithm turned out to be surprisingly effective for solving many large-scale minimization problems with box constraints. Therefore, it is natural to test its performance for solving the subproblems that appear in nonlinear programming methods based on augmented Lagrangians. In this work, augmented Lagrangian methods which use SPG as the underlying convex-constraint solver are introduced (ALSPG), and the methods are tested on two sets of problems. First, a meaningful subset of large-scale nonlinearly constrained problems from the CUTE collection is solved and the performance is compared with that of LANCELOT. Second, a family of location problems in the minimax formulation is solved against the package FFSQP.
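A minimal sketch of the core SPG idea: a projected gradient step whose length is the Barzilai-Borwein "spectral" steplength. The full SPG method also uses a nonmonotone line search, omitted here; the box-constrained quadratic example is an assumption for illustration.

```python
import numpy as np

# Projected gradient with the spectral (Barzilai-Borwein) steplength.
def spg(grad, proj, x, iters=100, alpha=1.0):
    g = grad(x)
    for _ in range(iters):
        x_new = proj(x - alpha * g)     # projected gradient step
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        if s @ y > 0:
            alpha = (s @ s) / (s @ y)   # spectral steplength update
        x, g = x_new, g_new
    return x

# Example: minimize ||x - c||^2 subject to the box 0 <= x <= 1.
c = np.array([1.5, -0.5, 0.3])
x = spg(lambda z: 2.0 * (z - c), lambda z: np.clip(z, 0.0, 1.0), np.zeros(3))
assert np.allclose(x, [1.0, 0.0, 0.3])  # componentwise projection of c
```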

20.
The matrix rank minimization problem arises in many engineering applications. As this problem is NP-hard, a nonconvex relaxation of matrix rank minimization, called Schatten-p quasi-norm minimization (0 < p < 1), has been developed to approximate the rank function closely. We study the performance of the projected gradient descent algorithm for solving the Schatten-p quasi-norm minimization (0 < p < 1) problem. Based on the matrix restricted isometry property (M-RIP), we give a convergence guarantee and error bound for this algorithm and show that the algorithm is robust to noise with an exponential convergence rate.
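To illustrate the projected-gradient structure for low-rank recovery, here is a hedged sketch using a simpler rank-r projection (truncated SVD) rather than the paper's Schatten-p quasi-norm machinery; the observation model and step size are assumed.

```python
import numpy as np

# Rank-r projection: keep only the r largest singular values.
def project_rank(X, r):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s[r:] = 0.0
    return (U * s) @ Vt

# Projected gradient descent for min 0.5*||X - M||_F^2 over rank-r matrices.
# (Illustrative only; the paper analyzes the Schatten-p relaxation under M-RIP.)
def projected_gradient(M, r, steps=100, eta=0.5):
    X = np.zeros_like(M)
    for _ in range(steps):
        X = project_rank(X - eta * (X - M), r)   # gradient is X - M
    return X

# Recover a rank-1 matrix from noise-free observations.
u, v = np.array([[1.0], [2.0]]), np.array([[3.0, 1.0]])
M = u @ v                            # rank-1 target
X = projected_gradient(M, r=1)
assert np.allclose(X, M)
```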
