Similar Literature
20 similar records found (search time: 828 ms)
1.
This paper presents modified approximate trust-region-type methods along three classes of preconditioned curvilinear paths for unconstrained optimization. A stable Bunch-Parlett factorization of the symmetric matrix makes it easy to form the curvilinear path of the trust-region subproblem, and a unit lower triangular matrix serves as the preconditioner for the optimal path and the modified gradient path. The preconditioner improves the eigenvalue distribution of the Hessian matrix, accelerating the convergence of the preconditioned conjugate gradient path. Trial steps are generated from the trust-region subproblem along the three classes of paths, and the trust-region strategy is combined with a nonmonotone line search technique to form the new backtracking step. Theoretical analysis shows that, under reasonable conditions, the proposed algorithm is globally convergent with a locally superlinear convergence rate; numerical results demonstrate the effectiveness of the algorithm.

2.
A tensor given by its canonical decomposition is approximated by another tensor (again, in the canonical decomposition) of fixed lower rank. For this problem, the structure of the Hessian matrix of the objective function is analyzed. It is shown that all the auxiliary matrices needed for constructing the quadratic model can be calculated so that the computational effort is a quadratic function of the tensor dimensionality (rather than a cubic function as in earlier publications). An economical version of the trust region Newton method is proposed in which the structure of the Hessian matrix is efficiently used for multiplying this matrix by vectors and for scaling the trust region. At each step, the subproblem of minimizing the quadratic model in the trust region is solved using the preconditioned conjugate gradient method, which is terminated if a negative curvature direction is detected for the Hessian matrix.
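The inner solver sketched in this abstract — conjugate gradients on the quadratic model, terminated when negative curvature is detected or the boundary is reached — is essentially the Steihaug-Toint method. A minimal NumPy sketch, assuming a user-supplied Hessian-vector callback `hess_vec`; names and tolerances are illustrative, not from the paper:

```python
import numpy as np

def steihaug_cg(grad, hess_vec, delta, tol=1e-8, max_iter=100):
    """Approximately minimize m(p) = g^T p + 0.5 p^T H p over ||p|| <= delta.

    Stops at the trust-region boundary when negative curvature is
    detected, as in the method sketched in the abstract above.
    """
    p = np.zeros_like(grad)
    r = -grad.copy()          # residual = -(g + H p); at p = 0 this is -g
    d = r.copy()              # search direction
    for _ in range(max_iter):
        Hd = hess_vec(d)
        curv = d @ Hd
        if curv <= 0:
            # Negative curvature: follow d to the boundary and stop.
            return p + _boundary_step(p, d, delta) * d
        alpha = (r @ r) / curv
        if np.linalg.norm(p + alpha * d) >= delta:
            return p + _boundary_step(p, d, delta) * d
        p += alpha * d
        r_new = r - alpha * Hd
        if np.linalg.norm(r_new) < tol:
            return p
        beta = (r_new @ r_new) / (r @ r)
        d = r_new + beta * d
        r = r_new
    return p

def _boundary_step(p, d, delta):
    # Positive root tau of ||p + tau d|| = delta.
    a, b, c = d @ d, 2 * (p @ d), p @ p - delta**2
    return (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
```

The negative-curvature branch is what allows the method to make progress with indefinite Hessians instead of failing.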

3.
When solving large complex optimization problems, the user is faced with three major problems. These are (i) the cost in human time of obtaining accurate expressions for the derivatives involved; (ii) the need to store second derivative information; and (iii), of lesser importance, the time taken to solve the problem on the computer. For many problems, a significant part of the latter can be attributed to solving Newton-like equations. In the algorithm described, the equations are solved using a conjugate direction method that needs the Hessian at the current point only when it is multiplied by a trial vector. In this paper, we present a method that finds this product using automatic differentiation while requiring only vector storage. The method takes advantage of any sparsity in the Hessian matrix and computes exact derivatives. It avoids the complexity of symbolic differentiation, the inaccuracy of numerical differentiation, the labor of finding analytic derivatives, and the need for matrix storage. When far from a minimum, an accurate solution to the Newton equations is not justified, so an approximate solution is obtained by using a version of Dembo and Steihaug's truncated Newton algorithm (Ref. 1). This paper was presented at the SIAM National Meeting, Boston, Massachusetts, 1986.

4.
By means of a conjugate gradient strategy, we propose a trust region method for unconstrained optimization problems. The search direction is an adequate combination of the conjugate gradient direction and the trust-region direction. The global convergence and the quadratic convergence of this method are established under suitable conditions. Numerical results show that the presented method is competitive with the trust region method and the conjugate gradient method.

5.
A class of trust region algorithms with a nonmonotone line search
By combining the nonmonotone Wolfe line search technique with the traditional trust region algorithm, we propose a new class of trust region algorithms for unconstrained optimization. The new algorithm solves the trust-region subproblem only once per iteration, and at every iteration the approximation of the Hessian satisfies the quasi-Newton condition and remains positive definite. Under certain conditions, the global convergence and strong convergence of the algorithm are proved. Numerical experiments show that the new algorithm inherits the advantages of the nonmonotone technique and, for solving certain...
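The core of the nonmonotone technique is to accept a step if it improves on the worst of the last M function values rather than on the most recent one. A minimal sketch of that acceptance rule with Armijo-style backtracking — the paper uses a nonmonotone Wolfe search, and `M`, `sigma`, and `beta` here are illustrative constants:

```python
from collections import deque
import numpy as np

def nonmonotone_armijo(f, x, d, g, history, sigma=1e-4, beta=0.5):
    """Backtracking line search that tests against the worst of the last
    M function values (the nonmonotone rule) instead of f(x) alone.

    Assumes d is a descent direction (g @ d < 0). `history` is a deque
    of recent f-values maintained by the caller.
    """
    f_ref = max(history)           # nonmonotone reference value
    t = 1.0
    gTd = g @ d
    while f(x + t * d) > f_ref + sigma * t * gTd:
        t *= beta
    return t

# Caller keeps: history = deque([f(x0)], maxlen=M) and appends each new f.
```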

6.
Newton-type methods for unconstrained optimization problems have been very successful when coupled with a modified Cholesky factorization to take into account the possible lack of positive-definiteness in the Hessian matrix. In this paper we discuss the application of these methods to large problems that have a sparse Hessian matrix whose sparsity is known a priori. Quite often it is difficult, if not impossible, to obtain an analytic representation of the Hessian matrix. Determining the Hessian matrix by the standard method of finite differences is costly in terms of gradient evaluations for large problems. Automatic procedures that reduce the number of gradient evaluations by exploiting sparsity are examined, and a new procedure is suggested. Once a sparse approximation to the Hessian matrix has been obtained, there still remains the problem of solving a sparse linear system of equations at each iteration. A modified Cholesky factorization can be used. However, many additional nonzeros (fill-in) may be created in the factors, and storage problems may arise. One way of approaching this problem is to ignore fill-in in a systematic manner. Such techniques are called partial factorization schemes. Various existing partial factorizations are analyzed and three new ones are developed. The above algorithms were tested on a set of problems. The overall conclusion was that these methods perform well in practice.
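The gradient-saving idea behind such automatic procedures is to perturb several structurally orthogonal columns of the Hessian at once, so that one gradient difference recovers a whole group of columns. A hedged CPR-style sketch with greedy grouping, assuming the sparsity pattern is given as a boolean matrix — this illustrates the classical scheme, not the paper's new procedure:

```python
import numpy as np

def group_columns(pattern):
    """Greedy CPR-style grouping: columns that share no nonzero row in the
    boolean n x n sparsity pattern can be estimated with one gradient call."""
    n = pattern.shape[1]
    groups, assigned = [], np.zeros(n, dtype=bool)
    for j in range(n):
        if assigned[j]:
            continue
        group, rows = [j], pattern[:, j].copy()
        assigned[j] = True
        for k in range(j + 1, n):
            if not assigned[k] and not (rows & pattern[:, k]).any():
                group.append(k)
                rows |= pattern[:, k]
                assigned[k] = True
        groups.append(group)
    return groups

def sparse_hessian_fd(grad, x, pattern, h=1e-6):
    """One forward gradient difference per group recovers all nonzeros."""
    n = x.size
    H = np.zeros((n, n))
    g0 = grad(x)
    for group in group_columns(pattern):
        e = np.zeros(n)
        e[group] = h                       # perturb the whole group at once
        dg = (grad(x + e) - g0) / h
        for j in group:
            rows = np.nonzero(pattern[:, j])[0]
            H[rows, j] = dg[rows]          # each row belongs to one column
    return H
```

The number of gradient evaluations drops from n to the number of groups, which for banded or otherwise sparse patterns is typically a small constant.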

7.
In this paper, we deal with conjugate gradient methods for solving nonlinear least squares problems. Several Newton-like methods have been studied for solving nonlinear least squares problems, which include the Gauss-Newton method, the Levenberg-Marquardt method and the structured quasi-Newton methods. On the other hand, conjugate gradient methods are appealing for general large-scale nonlinear optimization problems. By combining the structured secant condition and the idea of Dai and Liao (2001) [20], the present paper proposes conjugate gradient methods that make use of the structure of the Hessian of the objective function of nonlinear least squares problems. The proposed methods are shown to be globally convergent under some assumptions. Finally, some numerical results are given.
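For reference, the Dai-Liao conjugacy condition mentioned here yields the parameter β_k = g_{k+1}^T (y_k − t s_k) / (d_k^T y_k); the structured methods of this paper replace y_k by a structured gradient difference built from the least-squares residuals and Jacobians. A plain-y_k sketch, with `t` the Dai-Liao parameter:

```python
import numpy as np

def dai_liao_beta(g_new, d, s, y, t=0.1):
    """Dai-Liao (2001) parameter beta = g_{k+1}^T (y_k - t*s_k) / (d_k^T y_k).

    In the structured methods of the abstract, y would be a 'structured'
    gradient difference; here it is the plain difference g_{k+1} - g_k.
    """
    return g_new @ (y - t * s) / (d @ y)

# Next direction: d_new = -g_new + dai_liao_beta(g_new, d, s, y) * d
```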

8.
In this paper we propose a fundamentally different conjugate gradient method, in which the well-known parameter βk is computed by an approximation of the Hessian/vector product through finite differences. For search direction computation, the method uses a forward difference approximation to the Hessian/vector product in combination with a careful choice of the finite difference interval. For the step length computation we suggest an acceleration scheme able to improve the efficiency of the algorithm. Under common assumptions, the method is proved to be globally convergent. It is shown that for uniformly convex functions the convergence of the accelerated algorithm is still linear, but the reduction in function values is significantly improved. Numerical comparisons with conjugate gradient algorithms including CONMIN by Shanno and Phua [D.F. Shanno, K.H. Phua, Algorithm 500, minimization of unconstrained multivariate functions, ACM Trans. Math. Softw. 2 (1976) 87–94], SCALCG by Andrei [N. Andrei, Scaled conjugate gradient algorithms for unconstrained optimization, Comput. Optim. Appl. 38 (2007) 401–416; N. Andrei, Scaled memoryless BFGS preconditioned conjugate gradient algorithm for unconstrained optimization, Optim. Methods Softw. 22 (2007) 561–571; N. Andrei, A scaled BFGS preconditioned conjugate gradient algorithm for unconstrained optimization, Appl. Math. Lett. 20 (2007) 645–650], and new conjugacy condition and related new conjugate gradient by Li, Tang and Wei [G. Li, C. Tang, Z. Wei, New conjugacy condition and related new conjugate gradient methods for unconstrained optimization, J. Comput. Appl. Math. 202 (2007) 523–539] or truncated Newton TN by Nash [S.G. Nash, Preconditioning of truncated-Newton methods, SIAM J. on Scientific and Statistical Computing 6 (1985) 599–616] using a set of 750 unconstrained optimization test problems show that the suggested algorithm outperforms these conjugate gradient algorithms as well as TN.  相似文献   
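A minimal sketch of the forward-difference Hessian/vector product with a finite-difference interval balancing truncation and rounding error, in the spirit of this abstract — the paper's exact interval formula may differ:

```python
import numpy as np

def hess_vec_fd(grad, x, v, g=None):
    """Forward-difference Hessian/vector product: H v ~ (g(x + h v) - g(x)) / h.

    The interval h balances truncation error (smaller h) against rounding
    error (larger h); this is a common textbook choice, not necessarily
    the one used in the paper.
    """
    if g is None:
        g = grad(x)                       # reuse the gradient if available
    eps = np.finfo(float).eps
    h = 2.0 * np.sqrt(eps) * (1.0 + np.linalg.norm(x)) / np.linalg.norm(v)
    return (grad(x + h * v) - g) / h
```

With this product, β_k can be formed from Hessian/vector terms without ever storing the Hessian.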

9.
In this paper, we propose a three-term conjugate gradient method via the symmetric rank-one update. The basic idea is to exploit the good properties of the SR1 update in providing quality Hessian approximations to construct a conjugate gradient line search direction that requires no matrix storage and possesses the sufficient descent property. Numerical experiments on a set of standard unconstrained optimization problems show that the proposed method is superior to many well-known conjugate gradient methods in terms of efficiency and robustness.
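One concrete way to realize this idea is a memoryless SR1 inverse update applied to the negative gradient, which expands into a three-term direction in {g, s, y} using vector storage only. A hedged sketch — one natural reading of the abstract, not necessarily the authors' exact formula:

```python
import numpy as np

def three_term_sr1_direction(g_new, s, y):
    """Direction from a memoryless SR1 inverse approximation
    H = I + w w^T / (w^T y), with w = s - y, applied to -g_{k+1}.

    Expanding -H g_{k+1} gives a three-term direction in {g, s, y} with
    no matrix storage; the standard SR1 skip rule guards the denominator.
    """
    w = s - y
    denom = w @ y
    if abs(denom) < 1e-8 * np.linalg.norm(w) * np.linalg.norm(y):
        return -g_new                   # safeguard: fall back to steepest descent
    return -g_new - ((w @ g_new) / denom) * w
```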

10.
The trust region (TR) method is a class of effective optimization methods. The conic model can be regarded as a generalized quadratic model, and it possesses the good convergence properties of the quadratic model near the minimizer. The Barzilai-Borwein (BB) gradient method is also effective; it can be used for solving large scale optimization problems because it avoids the expensive computation and storage of matrices. In addition, the BB stepsize is easy to determine without large computational effort. In this paper, based on the conic trust region framework, we employ the generalized BB stepsize and propose a new nonmonotone adaptive trust region method based on a simple conic model for large scale unconstrained optimization. Unlike the traditional conic model, the Hessian approximation is a scalar matrix based on the generalized BB stepsize, which results in a simple conic model. By adding the nonmonotone and adaptive techniques to the simple conic model, the new method needs less storage and converges faster. The global convergence of the algorithm is established under certain conditions. Numerical results indicate that the new method is effective and attractive for large scale unconstrained optimization problems.
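The scalar Hessian approximation referred to here can be obtained by inverting a BB stepsize: for instance γ_k = s^T y / s^T s (the inverse of the long BB stepsize) gives B_k = γ_k I. A minimal sketch with a standard safeguarding interval — the paper's generalized BB stepsize may use a different combination:

```python
import numpy as np

def bb_scalar_hessian(s, y, lam_min=1e-10, lam_max=1e10):
    """Scalar Hessian approximation B_k = gamma_k * I from the BB idea:
    gamma = s^T y / s^T s, clipped into a safeguarding interval so the
    simple (conic) model stays bounded and positive."""
    gamma = (s @ y) / (s @ s)
    return np.clip(gamma, lam_min, lam_max)
```

Storing one scalar per iteration instead of an n x n matrix is what makes the resulting model attractive for large-scale problems.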

11.
In this paper we present a new memory gradient method with trust region for unconstrained optimization problems. The method combines the line search method and the trust region method to generate new iterates at each iteration and therefore enjoys the advantages of both. It makes full use of multi-step iterative information at each iteration and avoids the storage and computation of matrices associated with the Hessian of the objective function, so it is suitable for solving large scale optimization problems. We also design an implementable version of this method and analyze its global convergence under weak conditions. Because it uses more information from previous iterative steps, this idea enables us to design quickly convergent, effective, and robust algorithms. Numerical experiments show that the new method is effective, stable and robust in practical computation, compared with other similar methods.

12.
Convergence of nonmonotone trust region algorithms for convex constrained optimization
This paper proposes a new class of nonmonotone trust region algorithms for convex constrained optimization. Strong convergence of the algorithm is proved when the Hessian approximations {Bk} of the quadratic model are uniformly bounded, and weak convergence is proved when {Bk} grows at most linearly. This generalizes various existing trust region algorithms for constrained or convex constrained problems and improves the convergence results.

13.
This paper presents a methodology for using varying sample sizes in batch-type optimization methods for large-scale machine learning problems. The first part of the paper deals with the delicate issue of dynamic sample selection in the evaluation of the function and gradient. We propose a criterion for increasing the sample size based on variance estimates obtained during the computation of a batch gradient. We establish a complexity bound on the total cost of a gradient method. The second part of the paper describes a practical Newton method that uses a smaller sample to compute Hessian-vector products than to evaluate the function and the gradient, and that also employs a dynamic sampling technique. The focus of the paper shifts in the third part to L1-regularized problems designed to produce sparse solutions. We propose a Newton-like method that consists of two phases: a (minimalistic) gradient projection phase that identifies zero variables, and a subspace phase that applies a subsampled Hessian Newton iteration in the free variables. Numerical tests on speech recognition problems illustrate the performance of the algorithms.
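The variance-based criterion in the first part can be sketched as a "norm test": enlarge the sample when the estimated variance of the sampled gradient dominates its squared norm. A minimal NumPy sketch; the constant `theta` and the helper names are illustrative:

```python
import numpy as np

def should_grow_sample(per_example_grads, theta=0.9):
    """Variance ('norm') test for dynamic sampling: request a larger batch
    when the sample variance of the per-example gradients is too large
    relative to the squared norm of the batch gradient."""
    n = per_example_grads.shape[0]
    g = per_example_grads.mean(axis=0)                   # batch gradient
    var = ((per_example_grads - g) ** 2).sum() / (n - 1) # total variance
    return var / n > (theta ** 2) * (g @ g)

def next_sample_size(per_example_grads, theta=0.9):
    """Smallest size for which the test would (approximately) pass."""
    n = per_example_grads.shape[0]
    g = per_example_grads.mean(axis=0)
    var = ((per_example_grads - g) ** 2).sum() / (n - 1)
    return int(np.ceil(var / ((theta ** 2) * (g @ g))))
```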

14.
The trust region method is an optimization algorithm with guaranteed global convergence. To avoid computing the Hessian matrix, a truncated quasi-Newton trust region method based on quasi-Newton update formulas is constructed for nonlinear programming problems with linear equality constraints. The construction process and concrete steps of the truncated quasi-Newton trust region method are given first; the algorithm is then adapted to the characteristics of the variables and constraints of the stochastic user equilibrium model, and the results obtained under several quasi-Newton update formulas are compared with those of the Newton-type trust region method, showing that the trust region method based on the symmetric rank-one update formula is more suitable. Finally, some conclusions important for implementing the algorithm are drawn from the numerical examples, which provide a useful reference for implementing other variants of the trust region method.
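Since the symmetric rank-one (SR1) update came out best in this comparison, here is a minimal sketch of the standard safeguarded SR1 update of the Hessian approximation; the skip rule is the usual one, not necessarily the paper's:

```python
import numpy as np

def sr1_update(B, s, y, r=1e-8):
    """Symmetric rank-one update B+ = B + v v^T / (v^T s), v = y - B s,
    with the standard safeguard that skips the update when the
    denominator is tiny relative to ||s|| * ||v||."""
    v = y - B @ s
    denom = v @ s
    if abs(denom) < r * np.linalg.norm(s) * np.linalg.norm(v):
        return B                       # skip: update would be unstable
    return B + np.outer(v, v) / denom
```

Unlike BFGS, SR1 does not enforce positive definiteness, which is acceptable inside a trust region framework and often yields better curvature approximations.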

15.
A reduced preconditioned conjugate gradient path combined with a nonmonotone technique is used to solve nonlinear optimization problems with linear equality constraints. Based on generalized elimination, the original problem is transformed into an unconstrained optimization problem in the null space of the equality constraint matrix; the reduced preconditioned equations are obtained via an augmented system, and a conjugate gradient path is constructed to solve the quadratic model, yielding the search direction and steplength. Based on the good properties of the conjugate gradient path, the algorithm is proved, under reasonable assumptions, not only to be globally convergent but also to retain a fast superlinear convergence rate. Furthermore, numerical computations demonstrate the feasibility and effectiveness of the algorithm.
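The reduction step described here can be illustrated with an orthonormal null-space basis: write x = x_p + Z u with A x_p = b and A Z = 0, so the problem becomes unconstrained in u. A sketch using a full QR of A^T — the paper's generalized elimination and augmented system replace this construction; A is assumed to have full row rank:

```python
import numpy as np

def null_space_reduction(A, b):
    """Return a particular solution x_p of A x = b and an orthonormal
    basis Z of the null space of A, so that x = x_p + Z u satisfies the
    constraints for every u."""
    m, n = A.shape
    Q, R = np.linalg.qr(A.T, mode='complete')
    Z = Q[:, m:]                                        # null-space basis
    x_p = Q[:, :m] @ np.linalg.solve(R[:m, :m].T, b)    # particular solution
    return x_p, Z

# Reduced gradient and Hessian of the unconstrained problem in u:
#   Z.T @ grad(x),  Z.T @ H @ Z
```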

16.
This paper is devoted to globally convergent methods for solving large sparse systems of nonlinear equations with an inexact approximation of the Jacobian matrix. These methods include difference versions of the Newton method and various quasi-Newton methods. We propose a class of trust region methods together with a proof of their global convergence and describe an implementable globally convergent algorithm which can be used as a realization of these methods. Considerable attention is concentrated on the application of conjugate gradient-type iterative methods to the solution of linear subproblems. We prove that both the GMRES and the smoothed CGS well-preconditioned methods can be used for the construction of globally convergent trust region methods. The efficiency of our algorithm is demonstrated computationally using a large collection of sparse test problems.

17.
In this paper a new trust region method with a simple model for solving large-scale unconstrained nonlinear optimization is proposed. By employing generalized weak quasi-Newton equations, we derive several schemes for constructing variants of scalar matrices as the Hessian approximation used in the trust region subproblem. Under some reasonable conditions, global convergence of the proposed algorithm is established in the trust region framework. Numerical experiments on test problems with dimensions from 50 to 20,000 in the CUTEr library are reported to show the efficiency of the algorithm.
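One such scheme: with B_k = γ_k I, a weak quasi-Newton condition of the form s^T B s = 2(f_k − f_{k+1} + g_{k+1}^T s) determines γ_k from scalars already at hand. A hedged sketch of this one variant — the paper derives several, and its exact conditions may differ:

```python
import numpy as np

def scalar_hessian_weak_secant(f_old, f_new, g_new, s,
                               lam_min=1e-10, lam_max=1e10):
    """Scalar Hessian approximation gamma*I from a weak quasi-Newton
    condition s^T B s = 2*(f_k - f_{k+1} + g_{k+1}^T s), which follows
    from a second-order Taylor expansion of f at x_{k+1}."""
    gamma = 2.0 * (f_old - f_new + g_new @ s) / (s @ s)
    return np.clip(gamma, lam_min, lam_max)
```

With a scalar model the trust-region subproblem has a closed-form solution, which is what keeps the per-iteration cost low at dimensions up to 20,000.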

18.
This paper proposes a new nonmonotone adaptive trust region method for unconstrained optimization. The method determines the trust-region radius by means of a scalar matrix approximation of the Hessian of the objective function. Under standard conditions, global convergence and local superlinear convergence results are established for the new algorithm, and numerical experiments verify the effectiveness of the new nonmonotone method.

19.
Described here are the structure and theory for a sequential quadratic programming algorithm for solving sparse nonlinear optimization problems. Also provided are the details of a computer implementation of the algorithm, along with test results. The algorithm maintains a sparse approximation to the Cholesky factor of the Hessian of the Lagrangian. The solution to the quadratic program generated at each step is obtained by solving a dual quadratic program using a projected conjugate gradient algorithm. An updating procedure is employed that does not destroy sparsity.

20.
A new active-set-based algorithm is proposed that uses the conjugate gradient method to explore the face of the feasible region defined by the current iterate and the reduced gradient projection with a fixed steplength to expand the active set. The precision of approximate solutions of the auxiliary unconstrained problems is controlled by the norm of violation of the Karush-Kuhn-Tucker conditions at active constraints and by the scalar product of the reduced gradient with the reduced gradient projection. The modifications are exploited to find the rate of convergence in terms of the spectral condition number of the Hessian matrix, to prove the finite termination property even for problems whose solution does not satisfy the strict complementarity condition, and to avoid any backtracking at the cost of evaluating an upper bound for the spectral radius of the Hessian matrix. The performance of the algorithm is illustrated on the solution of inner obstacle problems. The result is an important ingredient in the development of scalable algorithms for the numerical solution of elliptic variational inequalities.
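For bound constraints l ≤ x ≤ u, the two ingredients named here — a fixed-steplength gradient projection to expand the active set and a KKT-violation measure to control precision — can be sketched as follows; the steplength `alpha` would come from the upper bound on the spectral radius of the Hessian, and the names are illustrative:

```python
import numpy as np

def projected_gradient_step(x, g, alpha, lo, hi):
    """Fixed-steplength gradient projection P(x - alpha*g) used to expand
    the active set; with alpha below the inverse of a bound on the
    spectral radius of the Hessian, no backtracking is needed."""
    return np.clip(x - alpha * g, lo, hi)

def kkt_violation(x, g, lo, hi, tol=1e-12):
    """Norm of the projected gradient: free components of g, plus the
    parts of g pointing outward at active bounds. Zero at a KKT point."""
    free = (x > lo + tol) & (x < hi - tol)
    v = np.where(free, g,
                 np.where(x <= lo + tol, np.minimum(g, 0.0),
                          np.maximum(g, 0.0)))
    return np.linalg.norm(v)
```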
