Similar Literature
20 similar documents found.
1.
In this paper, we scale the quasi-Newton equation and propose a spectral scaling BFGS method. The method has a good self-correcting property and can improve the behavior of the BFGS method. Compared with the standard BFGS method, the single-step convergence rate of the spectral scaling BFGS method will not be inferior to that of the steepest descent method when minimizing an n-dimensional quadratic function. In addition, when the method with exact line search is applied to minimize an n-dimensional strictly convex function, it terminates within n steps. Under appropriate conditions, we show that the spectral scaling BFGS method with Wolfe line search is globally and R-linearly convergent for uniformly convex optimization problems. The reported numerical results show that the spectral scaling BFGS method outperforms the standard BFGS method.
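For intuition only, the sketch below shows one Oren-Luenberger-style scaled BFGS update in Python, assuming the scaling parameter tau = (y's)/(s's); the abstract does not state the paper's exact scaling, so the function name and the choice of tau are illustrative assumptions, not the authors' formula.

```python
import numpy as np

def spectral_scaling_bfgs_update(B, s, y):
    """One scaled BFGS update of the Hessian approximation B (sketch only).

    The previous approximation is multiplied by a spectral scaling parameter
    tau = (y's)/(s's) before the usual BFGS correction; the exact scaling
    used in the paper may differ.
    """
    sy = float(y @ s)
    if sy <= 1e-12:            # skip the update if curvature is not positive
        return B
    tau = sy / float(s @ s)    # spectral (Barzilai-Borwein-type) scaling factor
    Bs = B @ s
    return (tau * B
            - tau * np.outer(Bs, Bs) / float(s @ Bs)
            + np.outer(y, y) / sy)
```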

2.
Many iterative algorithms for optimization calculations use a second derivative approximation, B say, in order to calculate the search direction d = -B^{-1}∇f(x). In order to avoid inverting B we work with matrices Z, whose columns satisfy the conjugacy relations Z^T B Z = I. We present an update of Z that is compatible with members of the Broyden family that generate positive definite second derivative approximations. The algorithm requires only 3n^2 + O(n) flops for the update of Z and the calculation of d. The columns of the resultant Z matrices have interesting conjugacy and orthogonality properties with respect to previous second derivative approximations and function gradients, respectively. The update also provides a simple proof of Dixon's theorem. For the BFGS method we adapt the algorithm in order to obtain a null space method for linearly constrained calculations.
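Since Z^T B Z = I implies B^{-1} = Z Z^T (for square nonsingular Z), the search direction can be formed without inverting B. A minimal sketch of that step follows; the incremental update of Z itself, which is what the paper develops, is not reproduced here.

```python
import numpy as np

def search_direction(Z, grad):
    """Search direction d = -B^{-1} grad computed as -Z (Z^T grad).

    Assumes the columns of Z satisfy the conjugacy relations Z^T B Z = I,
    so that B^{-1} = Z Z^T; costs O(n^2) flops instead of a solve with B.
    """
    return -Z @ (Z.T @ grad)
```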

3.
In this paper we give a new convergence analysis of a projective scaling algorithm. We consider a long-step affine scaling algorithm applied to a homogeneous linear programming problem obtained from the original linear programming problem. This algorithm takes a fixed fraction λ ≤ 2/3 of the way towards the boundary of the nonnegative orthant at each iteration. The iteration sequence for the original problem is obtained by pulling back the homogeneous iterates onto the original feasible region with a conical projection, which generates the same search direction as the original projective scaling algorithm at each iterate. The recent convergence results for the long-step affine scaling algorithm by the authors are applied to this algorithm to obtain some convergence results on the projective scaling algorithm. Specifically, we show (i) polynomiality of the algorithm, with complexities of O(nL) and O(n^2 L) iterations for λ < 2/3 and λ = 2/3, respectively; (ii) global convergence of the algorithm when the optimal face is unbounded; (iii) convergence of the primal iterates to a relative interior point of the optimal face; (iv) convergence of the dual estimates to the analytic center of the dual optimal face; and (v) convergence of the reduction rate of the objective function value to 1 − λ.
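For orientation, here is a hedged sketch of a single long-step affine scaling step for min c'x s.t. Ax = b, x ≥ 0, moving a fixed fraction lam of the way to the boundary; the homogenization and conical projection described in the abstract are omitted, and all names are illustrative.

```python
import numpy as np

def affine_scaling_step(A, c, x, lam=2.0 / 3.0):
    """One long-step affine scaling step for min c'x s.t. Ax = b, x > 0 (sketch)."""
    D2 = np.diag(x ** 2)                               # scaling matrix D^2
    y = np.linalg.solve(A @ D2 @ A.T, A @ D2 @ c)      # dual estimates
    d = -D2 @ (c - A.T @ y)                            # affine scaling direction
    neg = d < 0
    if not np.any(neg):
        raise ValueError("problem appears unbounded along this direction")
    alpha = lam / np.max(-d[neg] / x[neg])             # fraction lam of the step to the boundary
    return x + alpha * d, y
```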

4.
Many iterative algorithms for optimization calculations form positive definite second derivative approximations, B say, automatically, but B is not stored explicitly because of the need to solve equations of the form Bd = -g. We consider working with matrices Z, whose columns satisfy the conjugacy conditions Z^T B Z = I. Particular attention is given to updating Z in a way that corresponds to revising B by the BFGS formula. A procedure is proposed that seems to be much more stable than the direct use of a product formula [1]. An extension to this procedure provides some automatic rescaling of the columns of Z, which avoids some inefficiencies due to a poor choice of the initial second derivative approximation. Our work is also relevant to active set methods for linear inequality constraints, to updating the Cholesky factorization of B, and to explaining some properties of the BFGS algorithm. Dedicated to Martin Beale, whose achievements, advice and encouragement were of great value to my research, especially in the field of conjugate direction methods.

5.
In this paper, we propose a new trust-region projected Hessian algorithm with a nonmonotonic backtracking interior point technique for linearly constrained optimization. By performing the QR decomposition of an affine scaling equality constraint matrix, the subproblem in the algorithm is transformed into the general trust-region subproblem defined by minimizing a quadratic function subject only to an ellipsoidal constraint. By using both the trust-region strategy and the line-search technique, each iteration switches to a backtracking interior point step generated by the trust-region subproblem. The global convergence and fast local convergence rates of the proposed algorithm are established under some reasonable assumptions. A nonmonotonic criterion is used to speed up the convergence in some ill-conditioned cases. Selected from Journal of Shanghai Normal University (Natural Science), 2003, 32(4): 7–13.

6.
The self-scaling quasi-Newton method solves an unconstrained optimization problem by scaling the Hessian approximation matrix before it is updated at each iteration, so as to avoid possibly large eigenvalues in the Hessian approximation matrices of the objective function. It has been proved in the literature that this method has global and superlinear convergence when the objective function is convex (or even uniformly convex). We propose to solve unconstrained nonconvex optimization problems by a self-scaling BFGS algorithm with nonmonotone line search. Nonmonotone line search has been recognized in numerical practice as a competitive approach for solving large-scale nonlinear problems. We consider two different nonmonotone line search forms and study the global convergence of these nonmonotone self-scaling BFGS algorithms. We prove that, under conditions weaker than those in the literature, both forms of the self-scaling BFGS algorithm are globally convergent for unconstrained nonconvex optimization problems.
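One standard nonmonotone backtracking condition of Grippo-Lampariello-Lucidi type is sketched below for illustration; whether it matches either of the two forms analyzed in the paper is an assumption, and all parameter names are illustrative.

```python
def nonmonotone_armijo(f, x, d, g, f_history, delta=1e-4, rho=0.5, max_backtracks=30):
    """Backtracking line search with the nonmonotone Armijo condition (sketch).

    Accepts a step length a such that
        f(x + a*d) <= max(recent f-values) + delta * a * g'd,
    where f_history holds the last few objective values.
    """
    f_ref = max(f_history)      # reference value over a memory window
    gd = float(g @ d)           # directional derivative, assumed negative
    a = 1.0
    for _ in range(max_backtracks):
        if f(x + a * d) <= f_ref + delta * a * gd:
            return a
        a *= rho
    return a
```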

7.
This paper proposes and analyzes an affine scaling trust-region method with a line search filter technique for solving nonlinear optimization problems subject to bounds on variables. At the current iteration, the trial step is generated by the general trust-region subproblem, which is defined by minimizing a quadratic function subject only to an affine scaling ellipsoidal constraint. Both the trust-region strategy and the line search filter technique switch to a trial backtracking step that is strictly feasible. Meanwhile, the proposed method does not depend on any external restoration procedure used in line search filter techniques. A new backtracking relevance condition, which is weaker than the switching condition, is given to obtain the global convergence of the algorithm. The global convergence and fast local convergence rate of this algorithm are established under reasonable assumptions. Preliminary numerical results indicate the practical viability and effectiveness of the proposed algorithm.

8.
Consider linear programs in dual standard form with n constraints and m variables. When typical interior-point algorithms are used for the solution of such problems, updating the iterates, using direct methods for solving the linear systems and assuming a dense constraint matrix A, requires O(nm^2) operations per iteration. When n ≫ m it is often the case that at each iteration most of the constraints are not very relevant for the construction of a good update and could be ignored to achieve computational savings. This idea was considered in the 1990s by Dantzig and Ye, Tone, Kaliski and Ye, den Hertog et al. and others. More recently, Tits et al. proposed a simple "constraint-reduction" scheme and proved global and local quadratic convergence for a dual-feasible primal-dual affine-scaling method modified according to that scheme. In the present work, similar convergence results are proved for a dual-feasible constraint-reduced variant of Mehrotra's predictor-corrector algorithm, under less restrictive nondegeneracy assumptions. These stronger results extend to primal-dual affine scaling as a limiting case. Promising numerical results are reported.
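The basic constraint-reduction idea can be sketched as follows: when forming the normal-equations matrix A D^2 A^T of a primal-dual method, only a working set Q of the q constraints judged most relevant (here, those with the smallest slacks) is used. This is an illustrative sketch under assumed conventions, not the precise selection rule of Tits et al. or of the paper.

```python
import numpy as np

def reduced_normal_matrix(A, x, s, q):
    """Constraint-reduced normal-equations matrix A_Q D_Q^2 A_Q^T (sketch only).

    A is m-by-n (one column per dual constraint), x and s are the current
    primal variables and dual slacks.  Using only the q smallest-slack
    constraints costs roughly O(q m^2) instead of O(n m^2).
    """
    Q = np.argsort(s)[:q]                 # working set: q most "active" constraints
    d2 = x[Q] / s[Q]                      # diagonal scaling restricted to Q
    AQ = A[:, Q]
    return AQ @ (d2[:, None] * AQ.T)      # A_Q D_Q^2 A_Q^T
```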

9.
We propose a new smoothing Newton method for solving the P0-matrix linear complementarity problem (P0-LCP) based on the CHKS smoothing function. Our algorithm solves only one linear system of equations and performs only one line search per iteration. It is shown to converge to a P0-LCP solution globally linearly and locally quadratically without the strict complementarity assumption at the solution. To the best of the authors' knowledge, this is the first one-step smoothing Newton method to possess both global linear and local quadratic convergence. Preliminary numerical results indicate that the proposed algorithm is promising.
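The CHKS (Chen-Harker-Kanzow-Smale) smoothing function is φ_μ(a, b) = a + b − sqrt((a − b)^2 + 4μ^2), and φ_0(a, b) = 0 exactly when a ≥ 0, b ≥ 0, ab = 0. Below is a minimal sketch of the smoothed residual whose Newton system would be solved once per iteration; the paper's concrete parameter updates and line search are omitted.

```python
import numpy as np

def chks(a, b, mu):
    """CHKS smoothing function, applied componentwise."""
    return a + b - np.sqrt((a - b) ** 2 + 4.0 * mu ** 2)

def lcp_residual(M, q, x, mu):
    """Smoothed residual for the LCP  x >= 0, Mx + q >= 0, x'(Mx + q) = 0.

    At mu = 0 a zero residual recovers an exact LCP solution (sketch only).
    """
    return chks(x, M @ x + q, mu)
```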

10.
We study a new trust region affine scaling method for general bound constrained optimization problems. At each iteration, we compute two trial steps. One is computed along a direction obtained by solving an appropriate quadratic model in an ellipsoidal region. This region is defined by an affine scaling technique and depends on both the distances of the current iterate to the boundaries and the trust region radius. To ensure convergence and to avoid iterates being trapped around nonstationary points, an auxiliary step is defined along a newly defined approximate projected gradient. By choosing, as the trial step generating the next iterate, whichever of the two steps achieves the greater reduction of the quadratic model, we prove that the iterates generated by the new algorithm are not bounded away from stationary points. Assuming in addition that the second-order sufficient condition holds at some nondegenerate stationary point, we prove Q-linear convergence of the objective function values. Preliminary numerical experience for problems with bound constraints from the CUTEr collection is also reported.

11.
For unconstrained programs with non-convex objective functions, this article gives a modified BFGS algorithm. The idea of the algorithm is to modify the approximate Hessian matrix so as to obtain a descent direction and to guarantee the effectiveness of the quasi-Newton iteration pattern. We prove the global convergence properties of the algorithm in association with the general form of line search, and prove the quadratic convergence rate of the algorithm under some conditions.

12.
In this paper, we consider the second-order cone complementarity problem with the P0-property. By introducing a smoothing parameter into the Fischer-Burmeister function, we present a smoothing Newton method for the second-order cone complementarity problem. The proposed algorithm solves only a linear system of equations and performs only one line search at each iteration. At the same time, the algorithm places no restrictions on its starting point and has global convergence. Under a nonsingularity assumption, we establish the locally quadratic convergence of the algorithm without the strict complementarity condition. Preliminary numerical results show that the algorithm is promising.
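For the scalar complementarity case, the smoothed Fischer-Burmeister function is φ_μ(a, b) = a + b − sqrt(a^2 + b^2 + 2μ^2); the second-order cone version used in the paper replaces the scalar square and square root by their Jordan-algebra counterparts. The scalar sketch below is given only for intuition and is an assumption about the smoothing, not the paper's exact function.

```python
import numpy as np

def smoothed_fischer_burmeister(a, b, mu):
    """Smoothed Fischer-Burmeister function (scalar/componentwise sketch).

    At mu = 0 it reduces to a + b - sqrt(a^2 + b^2), whose zeros
    characterize a >= 0, b >= 0, a*b = 0.
    """
    return a + b - np.sqrt(a ** 2 + b ** 2 + 2.0 * mu ** 2)
```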

13.
We propose a one-step smoothing Newton method for solving the nonlinear complementarity problem with a P0-function (P0-NCP) based on the smoothing symmetric perturbed Fischer function (for short, the SSPF-function). The proposed algorithm solves only one linear system of equations and performs only one line search per iteration. Without requiring any strict complementarity assumption at the P0-NCP solution, we show that the proposed algorithm converges globally and superlinearly under mild conditions. Furthermore, the algorithm has local quadratic convergence under suitable conditions. The main feature of our global convergence results is that we do not assume a priori the existence of an accumulation point. Compared with the previous literature, our algorithm has stronger convergence results under weaker conditions.

14.
15.
We extend the classical affine scaling interior trust region algorithm for the linearly constrained smooth minimization problem to the nonsmooth case where the gradient of the objective function is only locally Lipschitzian. We propose and analyze a new affine scaling trust-region method in association with a nonmonotonic interior backtracking line search technique for solving linearly constrained LC1 optimization, in which the first-order derivative of the objective function is locally Lipschitzian. The general trust region subproblem in the proposed algorithm is defined by minimizing an augmented affine scaling quadratic model, which requires both first- and second-order information of the objective function, subject only to an affine scaling ellipsoidal constraint in a null subspace of the augmented equality constraints. The global convergence and fast local convergence rate of the proposed algorithm are established under some reasonable conditions in which twice smoothness of the objective function is not required. Applications of the algorithm to some nonsmooth optimization problems are discussed.

16.
We present an extension of Karmarkar's linear programming algorithm for solving a more general group of optimization problems: convex quadratic programs. This extension is based on the iterated application of the objective augmentation and the projective transformation, followed by optimization over an inscribed ellipsoid centered at the current solution. It creates a sequence of interior feasible points that converge to the optimal feasible solution in O(Ln) iterations; each iteration can be computed in O(Ln^3) arithmetic operations, where n is the number of variables and L is the number of bits in the input. In this paper, we emphasize its convergence property, practical efficiency, and relation to the ellipsoid method.

17.
In this paper, we present a new smoothing Newton method for solving the monotone weighted linear complementarity problem (WCP). Our algorithm needs to solve only one linear system of equations and performs one line search per iteration. Any accumulation point of the iteration sequence generated by our algorithm is a solution of the WCP. Under suitable conditions, our algorithm has a local quadratic convergence rate. Numerical experiments show the feasibility and efficiency of the algorithm.

18.
In this paper, we propose a new affine scaling trust-region algorithm in association with a nonmonotonic interior backtracking line search technique for solving nonlinear equality systems subject to bounds on variables. The trust-region subproblem is defined by minimizing the squared Euclidean norm of a linear model, augmented with a quadratic affine scaling term, subject only to an ellipsoidal constraint. By using both the trust-region strategy and the interior backtracking line search technique, each iterate switches to a backtracking step generated by the general trust-region subproblem and satisfies strict interior-point feasibility through the backtracking line search. The global convergence and fast local convergence rate of the proposed algorithm are established under some reasonable conditions. The nonmonotonic criterion is intended to speed up convergence in some ill-conditioned cases. The results of numerical experiments are reported to show the effectiveness of the proposed algorithm.

19.
A pseudo Newton-Raphson algorithm for function minimization is presented. As in all such algorithms, an estimate of the inverse Hessian is calculated. In this case, the estimate is of the form XZX^T, where Z is a diagonal matrix; this feature permits the use of simple procedures to maintain the positive definiteness of Z, and hence of the restriction of XZX^T to the range of X. The algorithm is shown to have finite convergence for quadratic functions and asymptotic convergence for a fairly general class of functions. Some numerical results are presented, and the extension of the algorithm to deal with linear equality and inequality constraints is briefly discussed. R. Mamen acknowledges with gratitude the financial support afforded by an Athlone Fellowship and a National Research Council of Canada Post-Graduate Bursary. Dr. S. C. Chuang made useful comments on some of the proofs. Some of the results are closely related to those of Allwright (Ref. 1).
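A hypothetical sketch of the role the factored estimate plays: with the inverse-Hessian estimate H = X Z X^T and Z diagonal, positive definiteness on the range of X can be kept simply by forcing the diagonal entries of Z to stay positive. The clipping rule and names below are assumptions for illustration, not the paper's procedure.

```python
import numpy as np

def pseudo_newton_direction(X, z, grad, z_min=1e-8):
    """Search direction d = -(X Z X^T) grad with Z = diag(z) (sketch only).

    Clipping the diagonal entries of Z at z_min > 0 keeps the restriction
    of X Z X^T to the range of X positive definite.
    """
    z_pos = np.maximum(z, z_min)          # simple procedure to keep Z positive
    return -X @ (z_pos * (X.T @ grad))    # -X Z X^T grad without forming the matrix
```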

20.
This article proposes a new conjugate gradient method for unconstrained optimization by applying the Powell symmetrical technique in a defined sense. Using the Wolfe line search conditions, the global convergence property of the method is also obtained, based on the spectral analysis of the conjugate gradient iteration matrix and the Zoutendijk condition for steepest descent methods. Preliminary numerical results for a set of 86 unconstrained optimization test problems verify the performance of the algorithm and show that the Generalized Descent Symmetrical Hestenes-Stiefel algorithm is competitive with the Fletcher-Reeves (FR) and Polak-Ribière-Polyak (PRP+) algorithms.
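For reference, the classical Hestenes-Stiefel update that the proposed Generalized Descent Symmetrical Hestenes-Stiefel method builds on is sketched below; the paper's symmetrized descent modification is not reproduced here.

```python
import numpy as np

def hestenes_stiefel_direction(g_new, g_old, d_old):
    """Classical Hestenes-Stiefel conjugate gradient direction (sketch).

        beta = g_new'y / (d_old'y),  with  y = g_new - g_old,
        d_new = -g_new + beta * d_old.
    """
    y = g_new - g_old
    beta = float(g_new @ y) / float(d_old @ y)
    return -g_new + beta * d_old
```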
