Similar Documents
20 similar documents found.
1.
The paper resolves the problem concerning the rate of convergence of the working-set based MPRGP (modified proportioning with reduced gradient projection) algorithm with a long steplength in the reduced projected gradient step. The main results are a formula for the R-linear rate of convergence of MPRGP in terms of the spectral condition number of the Hessian matrix and a proof of the finite termination property for problems whose solution does not satisfy the strict complementarity condition. A bound on the R-linear rate of convergence of the projected gradient is also included. For shorter steplengths these results were proved earlier by Dostál and Schöberl. The efficiency of the longer steplength is illustrated by numerical experiments. The result is an important ingredient in developing scalable algorithms for the numerical solution of elliptic variational inequalities and substantiates the choice of parameters that turned out to be effective in numerical experiments.
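
As a point of reference for the algorithm named in this abstract, the following is a simplified Python sketch of an MPRGP-style iteration for a lower-bound-constrained quadratic program. It restarts the conjugate direction at every step and uses a simplified proportioning test, so it only illustrates the roles of the free gradient, the chopped gradient, and the fixed expansion steplength `alpha_bar`; it is not the paper's exact algorithm, and the names and tolerances are illustrative assumptions.

```python
import numpy as np

def mprgp(A, b, l, x0, alpha_bar, Gamma=1.0, tol=1e-8, max_it=1000):
    """Simplified MPRGP-style sketch for min 1/2 x'Ax - b'x  s.t.  x >= l.
    alpha_bar is the fixed steplength of the expansion (reduced projected
    gradient) step; the 'long' choice studied in the paper is
    alpha_bar in (0, 2/||A||]."""
    x = np.maximum(x0, l)
    g = A @ x - b
    for _ in range(max_it):
        free = x > l
        phi = np.where(free, g, 0.0)                       # free gradient
        beta = np.where(free, 0.0, np.minimum(g, 0.0))     # chopped gradient
        if np.linalg.norm(phi + beta) <= tol:              # projected gradient small
            break
        if beta @ beta <= Gamma**2 * (phi @ phi):
            # proportional iterate: minimize along the free gradient
            p = phi
            Ap = A @ p
            a_cg = (g @ p) / (p @ Ap)
            mask = p > 0
            a_f = np.min((x[mask] - l[mask]) / p[mask]) if mask.any() else np.inf
            if a_cg <= a_f:
                x, g = x - a_cg * p, g - a_cg * Ap          # plain (restarted) CG step
            else:
                # expansion: feasible half-step, then a fixed projected-gradient step
                x = np.maximum(x - a_f * p, l)
                g = A @ x - b
                phi = np.where(x > l, g, 0.0)
                x = np.maximum(x - alpha_bar * phi, l)
                g = A @ x - b
        else:
            # proportioning step: release indices by moving along the chopped gradient
            a_cg = (g @ beta) / (beta @ (A @ beta))
            x = x - a_cg * beta
            g = A @ x - b
    return x
```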

2.
The proportioning algorithm with projections has turned out to be an efficient algorithm for the iterative solution of large quadratic programming problems with simple bounds and box constraints. Important features of this active-set based algorithm are the adaptive precision control in the solution of auxiliary linear problems and the capability to add or remove many indices from the active set in one step. In this paper a modification of the algorithm is presented that makes it possible to establish its rate of convergence in terms of the spectral condition number of the Hessian matrix and to avoid any backtracking. The modified algorithm is shown to preserve the finite termination property of the original algorithm for problems that are not dual degenerate.

3.
Summary. We present an algorithm which combines standard active set strategies with the gradient projection method for the solution of quadratic programming problems subject to bounds. We show, in particular, that if the quadratic is bounded below on the feasible set then termination occurs at a stationary point in a finite number of iterations. Moreover, if all stationary points are nondegenerate, termination occurs at a local minimizer. A numerical comparison of the algorithm based on the gradient projection algorithm with a standard active set strategy shows that on mildly degenerate problems the gradient projection algorithm requires considerably fewer iterations and less time than the active set strategy. On nondegenerate problems the number of iterations typically decreases by at least a factor of 10. For strongly degenerate problems, the performance of the gradient projection algorithm deteriorates, but it still performs better than the active set method. Work supported in part by the Applied Mathematical Sciences subprogram of the Office of Energy Research of the U.S. Department of Energy under Contract W-31-109-Eng-38.
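
For illustration, a minimal sketch of the gradient projection phase described above might look as follows; the function and parameter names are illustrative, and the active-set (subspace minimization) phase that the paper combines with it is omitted.

```python
import numpy as np

def gradient_projection_qp(A, b, lo, hi, x0, max_it=500, tol=1e-8):
    """Sketch of a gradient projection phase for
    min 1/2 x'Ax - b'x  s.t.  lo <= x <= hi: take a steepest-descent step for
    the unconstrained quadratic and project it back onto the box.  The
    variables sitting at their bounds give a prediction of the active set."""
    x = np.clip(x0, lo, hi)
    for _ in range(max_it):
        g = A @ x - b                         # gradient of the quadratic
        denom = g @ (A @ g)
        if denom <= 0.0:                      # g == 0 (or A not SPD): stop
            break
        alpha = (g @ g) / denom               # exact minimizer along -g
        x_new = np.clip(x - alpha * g, lo, hi)
        if np.linalg.norm(x_new - x, np.inf) <= tol:
            x = x_new
            break
        x = x_new
    active = (x <= lo) | (x >= hi)            # current active-set prediction
    return x, active
```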

4.
We present an algorithm for very large-scale linearly constrained nonlinear programming (LCNP) based on a limited-storage quasi-Newton method. In large-scale programming, solving the reduced Newton equation at each iteration can be expensive and may not be justified when far from a local solution; moreover, the amount of storage required by the reduced Hessian matrix, and even the computing time for its quasi-Newton approximation, may be prohibitive. An alternative based on the reduced truncated-Newton methodology, which has proved satisfactory for large-scale problems, is not recommended for very large-scale problems since it requires an additional gradient evaluation and the solution of two systems of linear equations per minor iteration. We recommend a 2-step BFGS approximation of the inverse of the reduced Hessian matrix that does not require storing any matrix, since the matrix-vector product is the vector being approximated; it uses the reduced gradient and information from two previous iterations and the so-termed restart iteration. A diagonal direct BFGS preconditioning is used.
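
The 2-step BFGS idea can be pictured with the standard limited-memory two-loop recursion restricted to two stored pairs. The sketch below is generic: it is not the paper's exact recursion, it omits the reduced-space setting and the diagonal preconditioning, and the argument names are assumptions; it only shows how the product of the inverse-Hessian approximation with a gradient can be formed without storing any matrix.

```python
import numpy as np

def lbfgs_direction(grad, s_list, y_list):
    """Two-loop recursion computing d = -H_k * grad without forming H_k.
    With len(s_list) == len(y_list) == 2 this corresponds to a 2-step BFGS
    approximation of the inverse Hessian acting on the (reduced) gradient."""
    q = grad.copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):   # newest pair first
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    gamma = 1.0
    if s_list:                                             # standard scaling H0 = gamma*I
        s, y = s_list[-1], y_list[-1]
        gamma = (s @ y) / (y @ y)
    r = gamma * q
    for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):  # oldest pair first
        rho = 1.0 / (y @ s)
        beta = rho * (y @ r)
        r += (a - beta) * s
    return -r
```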

5.
The paper describes new conjugate gradient algorithms for large-scale nonconvex problems with box constraints. In order to speed up convergence the algorithms employ scaling matrices which transform the space of original variables into a space in which the Hessian matrices of the problem's functionals have more clustered eigenvalues. This is done by applying limited memory BFGS updating matrices. Once the scaling matrix is calculated, the next few conjugate gradient iterations are performed in the transformed space. The box constraints are treated efficiently by projection. We also present a limited memory quasi-Newton method which is a special version of our general algorithm. The presented algorithms have strong global convergence properties; in particular, they identify the constraints active at a solution in a finite number of iterations. We believe that they are competitive with the L-BFGS-B method and present some numerical results which support our claim.

6.
Techniques for estimating the condition number of a nonsingular matrix are developed. It is shown that Hager's 1-norm condition number estimator is equivalent to the conditional gradient algorithm applied to the problem of maximizing the 1-norm of a matrix-vector product over the unit sphere in the 1-norm. By changing the constraint in this optimization problem from the unit sphere to the unit simplex, a new formulation is obtained which is the basis for both conditional gradient and projected gradient algorithms. In the test problems, the spectral projected gradient algorithm yields condition number estimates at least as good as those obtained by the previous approach. Moreover, in some cases, the spectral gradient projection algorithm, with a careful choice of the parameters, yields improved condition number estimates.
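
Hager's estimator referred to above is well documented; a compact sketch of its conditional-gradient form is shown below, with `matvec`/`rmatvec` as illustrative callback names. The paper's simplex reformulation and its spectral projected gradient variant are not reproduced here.

```python
import numpy as np

def hager_norm1_estimate(matvec, rmatvec, n, max_it=10):
    """Sketch of Hager's 1-norm estimator: conditional-gradient-style ascent
    for max ||A x||_1 over the unit 1-norm ball.  `matvec(x)` returns A x and
    `rmatvec(x)` returns A^T x.  To estimate kappa_1(A) = ||A||_1 ||A^{-1}||_1,
    call it once with products by A and once with solves against A (e.g. via a
    prefactored LU)."""
    x = np.full(n, 1.0 / n)                # start at the center of the simplex
    est = 0.0
    for _ in range(max_it):
        y = matvec(x)
        est = np.abs(y).sum()              # current value of ||A x||_1
        xi = np.where(y >= 0, 1.0, -1.0)   # subgradient sign vector
        z = rmatvec(xi)                    # gradient of ||A x||_1 at x
        j = int(np.argmax(np.abs(z)))
        if np.abs(z[j]) <= z @ x:          # no vertex improves the linearization
            break
        x = np.zeros(n)
        x[j] = 1.0                         # move to the best vertex e_j
    return est
```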

7.
We present a general active set algorithm for the solution of a convex quadratic programming problem having a parametrized Hessian matrix. The parametric Hessian matrix is a positive semidefinite Hessian matrix plus a real parameter multiplying a symmetric matrix of rank one or two. The algorithm solves the problem for all parameter values in the open interval upon which the parametric Hessian is positive semidefinite. The algorithm is general in that any of several existing quadratic programming algorithms can be extended in a straightforward manner for the solution of the parametric Hessian problem. This research was supported by the Natural Sciences and Engineering Research Council under Grant No. A8189 and under a Postgraduate Scholarship, by an Ontario Graduate Scholarship, and by the University of Windsor Research Board under Grant No. 9432.

9.
The implementation of the recently proposed semi-monotonic augmented Lagrangian algorithm for the solution of large convex equality constrained quadratic programming problems is considered. It is proved that if the auxiliary problems are approximately solved by the conjugate gradient method, then the algorithm finds an approximate solution of the class of problems with uniformly bounded spectrum of the Hessian matrix in O(1) matrix-vector multiplications. If applied to the class of problems whose Hessian matrices are in addition either sufficiently sparse or can be expressed as a product of such sparse matrices, the cost of the solution is proportional to the dimension of the problems. Theoretical results are illustrated by numerical experiments. This research is supported by grants from the Ministry of Education No. S3086102, ET400300415 and MSM 6198910027.
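
A schematic of the overall structure, an augmented Lagrangian outer loop whose inner problems are solved inexactly by conjugate gradients, is sketched below. The fixed penalty parameter and the particular inner stopping rule are simplifying assumptions; the paper's semi-monotonic penalty update is not reproduced.

```python
import numpy as np

def augmented_lagrangian_qp(A, b, B, c, rho=1.0, M=1.0, tol=1e-6, max_out=50):
    """Hedged sketch of an augmented-Lagrangian method with inexact CG inner
    solves for  min 1/2 x'Ax - b'x  s.t.  Bx = c.  The inner accuracy is tied
    to the current feasibility error, in the spirit of the abstract above."""
    m, n = B.shape
    x, mu = np.zeros(n), np.zeros(m)

    def cg(H, rhs, x0, eps, max_it=200):
        # plain conjugate gradients for H x = rhs, stopped at ||residual|| <= eps
        x = x0.copy()
        r = rhs - H @ x
        p = r.copy()
        for _ in range(max_it):
            if np.linalg.norm(r) <= eps:
                break
            Hp = H @ p
            a = (r @ r) / (p @ Hp)
            x += a * p
            r_new = r - a * Hp
            p = r_new + ((r_new @ r_new) / (r @ r)) * p
            r = r_new
        return x

    H = A + rho * B.T @ B                           # Hessian of the augmented Lagrangian
    for _ in range(max_out):
        rhs = b - B.T @ mu + rho * B.T @ c
        feas = np.linalg.norm(B @ x - c)
        x = cg(H, rhs, x, eps=max(M * feas, tol))   # inexact inner solve
        feas_vec = B @ x - c
        if np.linalg.norm(feas_vec) <= tol and np.linalg.norm(H @ x - rhs) <= tol:
            break
        mu = mu + rho * feas_vec                    # multiplier update
    return x, mu
```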

10.
Summary. We propose an algorithm for the numerical solution of large-scale symmetric positive-definite linear complementarity problems. Each step of the algorithm combines an application of the successive overrelaxation method with projection (to determine an approximation of the optimal active set) with the preconditioned conjugate gradient method (to solve the reduced residual systems of linear equations). Convergence of the iterates to the solution is proved. In the experimental part we compare the efficiency of the algorithm with several other methods. As a test example we consider the obstacle problem with different obstacles. For problems with up to 24,000 variables, the algorithm finds the solution in fewer than 7 iterations, where each iteration requires about 10 matrix-vector multiplications. Received July 14, 1993 / Revised version received February 1994.
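
The projected SOR building block mentioned above can be sketched in a few lines. This is a generic version (Gauss-Seidel ordering, relaxation parameter omega) and not the paper's full combined method, in which such sweeps are used to guess the active set before switching to preconditioned CG on the free variables.

```python
import numpy as np

def projected_sor(A, b, omega=1.5, tol=1e-8, max_sweeps=10000):
    """Projected SOR sweeps for the LCP  x >= 0, Ax - b >= 0, x'(Ax - b) = 0
    with symmetric positive-definite A (equivalently min 1/2 x'Ax - b'x
    subject to x >= 0)."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_sweeps):
        x_old = x.copy()
        for i in range(n):
            r_i = b[i] - A[i, :] @ x          # residual of row i at the current x
            x[i] = max(0.0, x[i] + omega * r_i / A[i, i])
        if np.linalg.norm(x - x_old, np.inf) <= tol:
            break
    return x
```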

11.
In this paper, by means of an active set strategy, we present a projected spectral gradient algorithm for solving large-scale bound constrained optimization problems. A nice property of the active set estimation technique is that it can identify the active set at the optimal point without requiring the strict complementarity condition, which makes it potentially useful for solving degenerate optimization problems. Under appropriate conditions, we show that the proposed method is globally convergent. We also carry out numerical experiments using bound constrained problems from the CUTEr library. The numerical comparisons with SPG, TRON, and L-BFGS-B show that the proposed method is effective and promising.
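
A bare-bones projected spectral gradient iteration, without the paper's active set estimation and without a nonmonotone line search, might look like this; the safeguards on the Barzilai-Borwein steplength are illustrative choices.

```python
import numpy as np

def spg_box(grad, lo, hi, x0, max_it=1000, tol=1e-6):
    """Minimal projected spectral (Barzilai-Borwein) gradient iteration for
    min f(x) s.t. lo <= x <= hi, given only the gradient callback `grad`."""
    P = lambda z: np.clip(z, lo, hi)
    x = P(x0)
    g = grad(x)
    alpha = 1.0
    for _ in range(max_it):
        x_new = P(x - alpha * g)
        if np.linalg.norm(x_new - x, np.inf) <= tol:
            return x_new
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        alpha = (s @ s) / sy if sy > 1e-12 else 1.0    # BB1 spectral steplength
        alpha = min(max(alpha, 1e-10), 1e10)           # safeguard the steplength
        x, g = x_new, g_new
    return x
```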

12.
This study proposes a random effects model based on the inverse Gaussian process, where a mixture normal distribution is used to account for both unit-specific and subpopulation-specific heterogeneities. The proposed model can capture heterogeneities due to subpopulations within the same population or to units from different batches. A new Expectation-Maximization (EM) algorithm is developed for point estimation and the bias-corrected bootstrap is used for interval estimation. We show that the EM algorithm updates the parameters based on the gradient of the log-likelihood function via a projection matrix. In addition, the convergence rate depends on a condition number that can be obtained from the projection matrix and the Hessian matrix of the log-likelihood function. A simulation study is conducted to assess the proposed model and the inference methods, and two real degradation datasets are analyzed for illustration.

13.
In this paper, a primal-dual interior point method is proposed for general constrained optimization, which incorporates a penalty function and a new identification technique for the active set. At each iteration, the proposed algorithm only needs to solve two or three reduced systems of linear equations with the same coefficient matrix. The size of the linear systems can be decreased owing to the introduction of the working set, which is an estimate of the active set. The penalty parameter is updated automatically, and the uniform positive definiteness condition on the Hessian approximation of the Lagrangian is relaxed. The proposed algorithm possesses global and superlinear convergence under some mild conditions. Finally, some preliminary numerical results are reported.

14.
Newton-type methods for unconstrained optimization problems have been very successful when coupled with a modified Cholesky factorization to take into account the possible lack of positive definiteness in the Hessian matrix. In this paper we discuss the application of these methods to large problems that have a sparse Hessian matrix whose sparsity is known a priori. Quite often it is difficult, if not impossible, to obtain an analytic representation of the Hessian matrix. Determining the Hessian matrix by the standard method of finite differences is costly in terms of gradient evaluations for large problems. Automatic procedures that reduce the number of gradient evaluations by exploiting sparsity are examined and a new procedure is suggested. Once a sparse approximation to the Hessian matrix has been obtained, there still remains the problem of solving a sparse linear system of equations at each iteration. A modified Cholesky factorization can be used. However, many additional nonzeros (fill-in) may be created in the factors, and storage problems may arise. One way of approaching this problem is to ignore fill-in in a systematic manner. Such techniques are called partial factorization schemes. Various existing partial factorization schemes are analyzed and three new ones are developed. The above algorithms were tested on a set of problems. The overall conclusion was that these methods perform well in practice.
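
The idea of reducing gradient evaluations by exploiting sparsity can be illustrated with a simple greedy grouping of structurally orthogonal columns followed by grouped finite differences of the gradient. This generic sketch is not the specific procedure proposed in the paper; the function names, the boolean `pattern` argument, and the perturbation size `h` are assumptions.

```python
import numpy as np

def group_columns(pattern):
    """Greedy grouping of structurally orthogonal columns.  `pattern` is a
    boolean n-by-n array marking the known nonzero positions of the Hessian."""
    n = pattern.shape[0]
    groups, rows_used = [], []
    for j in range(n):
        rows_j = set(np.nonzero(pattern[:, j])[0])
        for grp, used in zip(groups, rows_used):
            if not (rows_j & used):          # no shared nonzero row: join this group
                grp.append(j)
                used |= rows_j
                break
        else:
            groups.append([j])
            rows_used.append(set(rows_j))
    return groups

def estimate_sparse_hessian(grad, x, pattern, h=1e-6):
    """Estimate a Hessian with known sparsity by finite differences of the
    gradient, using one extra gradient evaluation per column group instead of
    one per column."""
    n = len(x)
    H = np.zeros((n, n))
    g0 = grad(x)
    for group in group_columns(pattern):
        d = np.zeros(n)
        d[group] = h                          # perturb all columns of the group at once
        dg = (grad(x + d) - g0) / h
        for j in group:
            rows = np.nonzero(pattern[:, j])[0]
            H[rows, j] = dg[rows]             # rows of column j are unambiguous
    return 0.5 * (H + H.T)                    # symmetrize the estimate
```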

15.
A quasi-Newton extension of the Goldstein-Levitin-Polyak (GLP) projected gradient algorithm for constrained optimization is considered. Essentially, this extension projects an unconstrained descent step onto the feasible region. The determination of the stepsize is divided into two stages. The first is a stepsize sequence, chosen from the range [1,2] and converging to unity, which determines the size of the unconstrained step. The second is a stepsize chosen from the range [0,1] according to a stepsize strategy, which determines the length of the projected step. Two such strategies are considered. The first bounds the objective function decrease by a conventional linear functional, whereas the second uses a quadratic functional as a bound. The introduction of the unconstrained step provides the option of taking steps that are larger than unity. It is shown that unit steplengths, and subsequently superlinear convergence rates, are attained if the projection of the quasi-Newton Hessian approximation approaches the projection of the Hessian at the solution. Thus, the requirement in the GLP algorithm for a positive definite Hessian at the solution is relaxed. This allows the use of strictly positive definite Hessian approximations, thereby simplifying the quadratic subproblem involved, even if the Hessian at the solution is not strictly positive definite. This research was funded by a Science and Engineering Research Council Advanced Fellowship. The author is also grateful to an anonymous referee for numerous constructive criticisms and comments.
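
Reading from the abstract, one iteration of such a scheme could be sketched as below; the Armijo backtracking stands in for the first ("linear functional") stepsize strategy, the quasi-Newton inverse Hessian approximation `H_inv` is assumed to be supplied from outside, and `project` is the Euclidean projection onto the feasible region.

```python
import numpy as np

def glp_qn_step(x, g, H_inv, project, f, gamma=1.0, beta=0.5, sigma=1e-4):
    """One iteration of a projected quasi-Newton scheme: scale the
    unconstrained quasi-Newton step by gamma in [1, 2], project it onto the
    feasible set, then choose the second stepsize t in [0, 1] by backtracking."""
    d = project(x - gamma * (H_inv @ g)) - x     # projected quasi-Newton direction
    slope = g @ d                                # negative for a descent direction
    t = 1.0
    fx = f(x)
    while f(x + t * d) > fx + sigma * t * slope and t > 1e-12:
        t *= beta                                # backtrack within [0, 1]
    return x + t * d
```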

16.
An algorithm is presented for the numerical computation of choreographies in spaces of constant negative curvature in a hyperbolic cotangent potential, extending the ideas given in a companion paper [14] for computing choreographies in the plane in a Newtonian potential and on a sphere in a cotangent potential. Following an idea of Diacu, Pérez-Chavela and Reyes Victoria [9], we apply stereographic projection and study the problem in the Poincaré disk. Using approximation by trigonometric polynomials and optimization methods with an exact gradient and exact Hessian matrix, we find new choreographies, hyperbolic analogues of the ones presented in [14]. The algorithm proceeds in two phases: first, BFGS quasi-Newton iteration to get close to a solution, then Newton iteration for high accuracy.

17.
The convergence analysis of a nonlinear Lagrange algorithm for solving nonlinear constrained optimization problems with both inequality and equality constraints is explored in detail. Estimates for the derivatives of the multiplier mapping and the solution mapping of the proposed algorithm are obtained via the technique of singular value decomposition. Based on these estimates, local convergence results and the rate of convergence of the algorithm are presented when the penalty parameter is below a threshold, under a set of suitable conditions on the problem functions. Furthermore, the condition number of the Hessian of the nonlinear Lagrange function with respect to the decision variables is analyzed, which is closely related to the efficiency of the algorithm. Finally, preliminary numerical results for several typical test problems are reported.

18.
We present a partitioning group correction (PGC) algorithm, based on trust region and conjugate gradient techniques, for large-scale sparse unconstrained optimization. In large sparse optimization, computing the whole Hessian matrix and solving the Newton-like equations at each iteration can be considerably expensive when a trust region method is adopted. The method depends on a symmetric consistent partition of the columns of the Hessian matrix and on an inexact solution of the Newton-like equations by the conjugate gradient method. The current direction is allowed to exceed the trust region bound if it is a good descent direction. In addition, we study a method for dealing with sparse matrices that have a dense structural part. Good convergence properties are retained, and we contrast the computational behavior of our method with that of other algorithms. Our numerical tests show that the algorithm is promising and quite effective, and that its performance is comparable to or better than that of other algorithms available.
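
The inexact Newton-like solves mentioned above are usually built on a truncated conjugate gradient kernel; the standard Steihaug-style version is sketched below for orientation. It does not include the paper's partitioned group correction of the Hessian or its relaxed handling of the trust region bound.

```python
import numpy as np

def truncated_cg(H, g, delta, tol=1e-8, max_it=200):
    """Steihaug-style truncated CG for the trust-region subproblem
    min 1/2 d'Hd + g'd  subject to  ||d|| <= delta."""
    d = np.zeros_like(g)
    r = -g                                  # residual of H d + g = 0 at d = 0
    p = r.copy()
    for _ in range(max_it):
        if np.linalg.norm(r) <= tol:
            return d
        Hp = H @ p
        curv = p @ Hp
        if curv <= 0:                       # negative curvature: step to the boundary
            return d + _to_boundary(d, p, delta) * p
        alpha = (r @ r) / curv
        if np.linalg.norm(d + alpha * p) >= delta:
            return d + _to_boundary(d, p, delta) * p
        d = d + alpha * p
        r_new = r - alpha * Hp
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return d

def _to_boundary(d, p, delta):
    # positive tau with ||d + tau * p|| = delta
    a, b, c = p @ p, 2 * (d @ p), d @ d - delta**2
    return (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
```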

19.
Jian Jinbao, Acta Mathematica Sinica (《数学学报》), 2004, 47(4): 781-792
This paper studies nonlinear inequality constrained optimization problems without strict complementarity and establishes a new sequential systems of linear equations algorithm. At each iteration, the algorithm only needs to solve one system of linear equations or compute one generalized gradient projection, and it does not require the approximate Hessian of the Lagrangian function to be positive definite. Under rather weak assumptions, global convergence, strong convergence, superlinear convergence and the quadratic convergence rate of the algorithm are proved. Effective numerical experiments with the algorithm are also reported.

20.
Based on three classical line search techniques for finding a separating hyperplane, this paper proposes an adaptive line search technique. Combining it with the spectral gradient projection method, we propose a spectral gradient projection algorithm for convex constrained nonsmooth monotone systems of equations. The algorithm does not need to compute or store any matrix and is therefore suitable for solving large-scale nonsmooth nonlinear monotone systems of equations. Under fairly weak conditions, global convergence of the method is proved and its convergence rate is analyzed. Numerical results show that the algorithm is effective and robust.
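
A minimal sketch of a method in this family (matrix-free direction, a line search that produces a point on a separating hyperplane, then projection back onto the feasible set) is given below. The plain backtracking replaces the paper's adaptive line search, the Barzilai-Borwein steplength update is one common choice, and `project` is assumed to be the Euclidean projection onto the convex feasible set C.

```python
import numpy as np

def spectral_projection_method(F, project, x0, sigma=1e-4, beta=0.5,
                               tol=1e-6, max_it=5000):
    """Sketch of a spectral gradient projection method for a convex-constrained
    monotone system F(x) = 0: no matrix is ever formed or stored."""
    x = project(x0)
    Fx = F(x)
    alpha = 1.0                                   # spectral steplength
    for _ in range(max_it):
        if np.linalg.norm(Fx) <= tol:
            return x
        d = -alpha * Fx
        t = 1.0
        while True:                               # backtracking line search
            z = x + t * d
            Fz = F(z)
            if -(Fz @ d) >= sigma * t * (d @ d) or t < 1e-12:
                break
            t *= beta
        if np.linalg.norm(Fz) <= tol:
            return z
        # project x through the separating hyperplane {u : Fz'(u - z) = 0}, then onto C
        xi = (Fz @ (x - z)) / (Fz @ Fz)
        x_new = project(x - xi * Fz)
        Fx_new = F(x_new)
        s, y = x_new - x, Fx_new - Fx
        sy = s @ y
        alpha = (s @ s) / sy if sy > 1e-12 else 1.0   # BB-type spectral update
        x, Fx = x_new, Fx_new
    return x
```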
