Similar Articles
 20 similar articles found (search time: 93 ms)
1.
To efficiently solve a large-scale unconstrained minimization problem with a dense Hessian matrix, this paper proposes to use an incomplete Hessian matrix to define a new modified Newton method, called the incomplete Hessian Newton method (IHN). A theoretical analysis shows that IHN is globally convergent and has a linear rate of convergence with a properly selected symmetric, positive definite incomplete Hessian matrix. It also shows that the Wolfe conditions hold in IHN with a line search step length of one. As an important application, an effective IHN and a modified IHN, called the truncated-IHN method (T-IHN), are constructed for solving a large-scale chemical database optimal projection mapping problem. T-IHN is shown to work well even with indefinite incomplete Hessian matrices. Numerical results confirm the theoretical results of IHN and demonstrate the promising potential of T-IHN as an efficient minimization algorithm.
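To make the construction concrete, here is a minimal Python sketch of an incomplete-Hessian Newton iteration of the kind described above, under the assumption that the incomplete Hessian is a symmetric positive definite band of the true Hessian; the tridiagonal choice, the toy quadratic, and all names are illustrative, not taken from the paper.

```python
import numpy as np

def ihn(x0, grad, incomplete_hess, tol=1e-8, max_iter=100):
    """Incomplete-Hessian Newton iteration (sketch): at each iterate solve
    B_k d_k = -g_k with a symmetric positive definite incomplete Hessian B_k
    and take a unit step (the abstract notes the Wolfe conditions then hold)."""
    x = x0.astype(float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        B = incomplete_hess(x)          # e.g. a banded/sparsified Hessian
        x = x + np.linalg.solve(B, -g)  # step length one
    return x

# Toy usage: dense SPD quadratic, incomplete Hessian = its tridiagonal band.
A = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 1.0],
              [0.5, 1.0, 5.0]])
b = np.array([1.0, -2.0, 0.5])
grad = lambda x: A @ x - b
tri = np.triu(np.tril(A, 1), -1)        # keep only the tridiagonal band
x_star = ihn(np.zeros(3), grad, lambda x: tri)
```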

2.
Minimization of a weighted nonlinear sum of squares of differences may be converted to the minimization of a sum of squares. The Gauss-Newton method is recalled, and the step length of the steepest descent method is determined by substituting the steepest descent direction into the Gauss-Newton formula. The existence of a minimum is shown.
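As a hedged illustration of this construction, the sketch below takes a steepest-descent step for f(x) = 0.5*||r(x)||^2 with the step length obtained by minimizing the Gauss-Newton model along the negative gradient; the exponential-fit example and all names are invented for the demo.

```python
import numpy as np

def sd_step_via_gn(residual, jac, x):
    """One steepest-descent step for f(x) = 0.5*||r(x)||^2, with the step
    length found by substituting the direction -g into the Gauss-Newton
    model, i.e. minimizing ||r - a*J g||^2 over the scalar a."""
    r, J = residual(x), jac(x)
    g = J.T @ r                                    # gradient of 0.5*||r||^2
    alpha = (g @ g) / np.linalg.norm(J @ g) ** 2   # exact minimizer over a
    return x - alpha * g

# Toy usage: fit y = c1*exp(c2*t) to samples of 2*exp(-t).
t = np.linspace(0.0, 1.0, 5)
y = 2.0 * np.exp(-t)
residual = lambda c: c[0] * np.exp(c[1] * t) - y
jac = lambda c: np.column_stack([np.exp(c[1] * t), c[0] * t * np.exp(c[1] * t)])
x = np.array([1.5, -0.5])
for _ in range(200):
    x = sd_step_via_gn(residual, jac, x)   # x approaches (2, -1)
```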

3.
Ukrainian Mathematical Journal - The asymptotic rate of convergence of the method of steepest descent is regarded as a function of the initial approximation. We study the level set of this rate,...

4.
We propose a new monotone algorithm for unconstrained optimization in the framework of the Barzilai-Borwein (BB) method and analyze the convergence properties of this new descent method. Motivated by the fact that the BB method does not guarantee descent in the objective function at each iteration, yet performs better than the steepest descent method, we attempt to find a stepsize formula that approximates the Hessian based on the quasi-Cauchy equation while possessing the monotone property at each iteration. Practical insights into the effectiveness of the proposed techniques are given by a numerical comparison with the BB method.
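For context, here is a minimal sketch of the plain (nonmonotone) BB iteration that this abstract takes as its baseline; the paper's monotone quasi-Cauchy stepsize itself is not reproduced.

```python
import numpy as np

def bb_method(x0, grad, alpha0=1e-2, tol=1e-8, max_iter=2000):
    """Plain Barzilai-Borwein gradient method: x+ = x - alpha*g with
    alpha = (s^T s)/(s^T y), s = x_k - x_{k-1}, y = g_k - g_{k-1}."""
    x, g = x0.astype(float), grad(x0)
    alpha = alpha0
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - alpha * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        alpha = (s @ s) / (s @ y) if s @ y > 0 else alpha0  # BB1 stepsize
        x, g = x_new, g_new
    return x

# Usage on an ill-conditioned quadratic f(x) = 0.5 * x^T diag(d) x.
d = np.linspace(1.0, 100.0, 20)
x_min = bb_method(np.ones(20), lambda x: d * x)
```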

5.
We propose a new gradient method for quadratic programming, named SDC, which alternates some steepest descent (SD) iterates with some gradient iterates that use a constant steplength computed through the Yuan formula. The SDC method exploits the asymptotic spectral behaviour of the Yuan steplength to foster a selective elimination of the components of the gradient along the eigenvectors of the Hessian matrix, i.e., to push the search into subspaces of smaller and smaller dimensions. The new method has global and R-linear convergence. Furthermore, numerical experiments show that it tends to outperform the Dai–Yuan method, which is one of the fastest gradient methods. In particular, SDC appears superior as the Hessian condition number and the accuracy requirement increase. Finally, if the number of consecutive SD iterates is not too small, the SDC method shows a monotonic behaviour.
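The alternation pattern can be sketched as follows on a convex quadratic. Note that the constant steplength used below, 1/lambda_max, is only a stand-in mimicking the limiting value that the selective-elimination argument suggests; the actual Yuan formula from the paper is not reproduced here.

```python
import numpy as np

def sdc_sketch(x0, A, b, h=4, m=3, tol=1e-8, max_cycles=300):
    """Structure of the SDC alternation for f(x) = 0.5 x^T A x - b^T x:
    h exact steepest descent (SD) steps, then m gradient steps with a
    constant steplength. The constant 1/lambda_max is a stand-in for the
    Yuan steplength computed in the paper."""
    x = x0.astype(float)
    const = 1.0 / np.linalg.eigvalsh(A).max()   # stand-in constant steplength
    for _ in range(max_cycles):
        for _ in range(h):                      # SD phase: exact steplength
            g = A @ x - b
            if np.linalg.norm(g) < tol:
                return x
            x -= ((g @ g) / (g @ (A @ g))) * g
        for _ in range(m):                      # constant-steplength phase
            g = A @ x - b
            if np.linalg.norm(g) < tol:
                return x
            x -= const * g
    return x

# Usage on a diagonal quadratic.
A = np.diag(np.linspace(1.0, 20.0, 10))
b = np.ones(10)
x_star = sdc_sketch(np.zeros(10), A, b)
```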

6.
An algorithm is presented that minimizes a continuously differentiable function in several variables subject to linear inequality constraints. At each step of the algorithm an arc is generated along which a move is performed until either a point yielding a sufficient descent in the function value is determined or a constraint boundary is encountered. The decision to delete a constraint from the list of active constraints is based upon periodic estimates of the Kuhn-Tucker multipliers. The curvilinear search paths are obtained by solving a linear approximation to the differential equation of the continuous steepest descent curve for the objective function on the equality-constrained region defined by the constraints that are required to remain binding. If the Hessian matrix of the objective function has certain properties and if the constraint gradients are linearly independent, the sequence generated by the algorithm converges to a point satisfying the Kuhn-Tucker optimality conditions at a rate that is at least quadratic.

7.
Xu Haiwen, Sun Liming. 《计算数学》 (Mathematica Numerica Sinica), 2017, 39(2): 200-212
The hybrid descent algorithm for convex optimization problems obtains a set of descent directions by using known information from the approximation conditions together with a randomized expansion of the prediction-correction step. The forward accelerated contraction algorithm, in turn, uses the technique of the Gauss-Seidel iteration and combines the ideas of the proximal point algorithm and the approximate proximal point algorithm to construct expansive descent directions. Drawing on the ideas of both the hybrid descent algorithm and the forward accelerated contraction algorithm, this paper uses the available approximation-rule information to improve the descent direction of the hybrid descent algorithm, obtaining an accelerated hybrid descent algorithm for a class of convex optimization problems. Convergence in probability of the algorithm is then proved using the Markov inequality, properties of convex functions, and basic properties of projections. A series of numerical experiments demonstrates the effectiveness and efficiency of the accelerated hybrid descent algorithm.

8.
Shape optimization based on the shape calculus is mostly performed numerically using steepest descent methods. This paper provides a novel framework for analyzing shape-Newton optimization methods by exploiting a Riemannian perspective. A Riemannian shape Hessian is defined that possesses often-sought properties such as symmetry, and quadratic convergence is obtained for Newton optimization methods.

9.
For the unconstrained programming of non-convex functions, this article gives a modified BFGS algorithm. The idea of the algorithm is to modify the approximate Hessian matrix so as to obtain a descent direction and to guarantee the effectiveness of the quasi-Newton iteration. We prove the global convergence of the algorithm in association with a general form of line search, and prove the quadratic convergence rate of the algorithm under some conditions.
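The paper's specific modification is not given in the abstract; as a point of reference, here is the standard BFGS update of the Hessian approximation with Powell damping, one common safeguard for keeping the approximation positive definite on non-convex problems.

```python
import numpy as np

def bfgs_update(B, s, y, damp=0.2):
    """Standard BFGS update of the Hessian approximation B, with Powell
    damping as one common safeguard when s^T y <= 0 on non-convex problems.
    (A generic reference implementation, not the paper's modification.)"""
    Bs = B @ s
    sBs = s @ Bs
    sy = s @ y
    if sy < damp * sBs:                         # curvature too weak: damp y
        theta = (1.0 - damp) * sBs / (sBs - sy)
        y = theta * y + (1.0 - theta) * Bs
        sy = s @ y                              # now sy = damp * sBs > 0
    return B - np.outer(Bs, Bs) / sBs + np.outer(y, y) / sy

# Usage: one update from B = I with a step s and gradient change y.
B = np.eye(2)
s = np.array([1.0, 0.0])
y = np.array([-0.5, 0.2])                       # s^T y < 0: damping kicks in
B_new = bfgs_update(B, s, y)                    # remains positive definite
```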

10.
Convergence properties of a class of multi-directional parallel quasi-Newton algorithms for the solution of unconstrained minimization problems are studied in this paper. At each iteration these algorithms generate several different quasi-Newton directions, and then apply line searches to determine step lengths along each direction simultaneously. The next iterate is obtained among these trial points by choosing the lowest point in the sense of function reductions. Different quasi-Newton updating formulas from the Broyden family are used to generate a main sequence of Hessian matrix approximations. Based on the BFGS and the modified BFGS updating formulas, the global and superlinear convergence results are proved. It is observed that all the quasi-Newton directions asymptotically approach the Newton direction in both direction and length when the iterate sequence converges to a local minimum of the objective function, and hence the result of superlinear convergence follows.

11.
This paper is concerned with the development of a parameter-free method, closely related to penalty function and multiplier methods, for solving constrained minimization problems. The method is developed via the quadratic programming model with equality constraints. The study starts with an investigation into the convergence properties of a so-called “primal-dual differential trajectory”, defined by the direction of steepest descent with respect to the variables x of the problem and the direction of steepest ascent with respect to the Lagrange multipliers λ associated with the Lagrangian function. It is shown that the trajectory converges to a stationary point (x*, λ*) corresponding to the solution of the equality-constrained problem. Subsequently, numerical procedures are proposed by means of which practical trajectories may be computed, and the convergence of these trajectories is analyzed. A computational algorithm is presented and its application is illustrated by means of simple but representative examples. The extension of the method to inequality-constrained problems is discussed, and a non-rigorous argument, based on the Kuhn-Tucker necessary conditions for a constrained minimum, is put forward, on which a practical procedure for determining the solution is based. The application of the method to inequality-constrained problems is illustrated by applying it to a couple of simple problems.
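A forward-Euler discretization of such a primal-dual trajectory is easy to sketch; the quadratic toy problem, the step size, and all names below are illustrative assumptions, not the paper's computational procedure.

```python
import numpy as np

def primal_dual_trajectory(x0, lam0, grad_f, h, jac_h, dt=0.01, steps=20000):
    """Forward-Euler discretization of the primal-dual trajectory for
    L(x, lam) = f(x) + lam^T h(x): steepest descent in x along -grad_x L,
    steepest ascent in lam along +grad_lam L = h(x)."""
    x, lam = x0.astype(float), lam0.astype(float)
    for _ in range(steps):
        gx = grad_f(x) + jac_h(x).T @ lam   # grad_x L
        x = x - dt * gx                     # descent in the primal variables
        lam = lam + dt * h(x)               # ascent in the multipliers
    return x, lam

# Toy usage: minimize f(x) = ||x||^2 subject to x1 + x2 = 1.
grad_f = lambda x: 2.0 * x
h = lambda x: np.array([x[0] + x[1] - 1.0])
jac_h = lambda x: np.array([[1.0, 1.0]])
x_opt, lam_opt = primal_dual_trajectory(np.zeros(2), np.zeros(1),
                                        grad_f, h, jac_h)
# converges to the stationary point x* = (0.5, 0.5), lam* = -1
```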

12.
Globally Convergent Algorithms for Unconstrained Optimization
A new globalization strategy for solving an unconstrained minimization problem is proposed, based on the idea of combining the Newton direction and the steepest descent direction within each iteration. Global convergence is guaranteed for an arbitrary initial point. The search direction in each iteration is chosen to be as close to the Newton direction as possible, and could be the Newton direction itself. Asymptotically the Newton step is taken in each iteration, and thus the local convergence is quadratic. Numerical experiments are also reported. A possible combination of a quasi-Newton direction with the steepest descent direction is also considered in our numerical experiments. The differences between the proposed strategy and a few other strategies are also discussed.
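One hedged way to realize "as close to the Newton direction as possible" is to bend the Newton direction toward the steepest descent direction until a sufficient-descent test holds; the bisection rule below is an illustration of the idea, not the paper's actual strategy.

```python
import numpy as np

def combined_direction(g, H, theta=1e-4):
    """Take the Newton direction when it is a sufficiently good descent
    direction; otherwise blend it toward -g (halving the Newton weight)
    until the angle test g^T d <= -theta*||g||*||d|| holds."""
    d_newton = np.linalg.solve(H, -g)   # assumes H is nonsingular
    d, t = d_newton, 1.0
    while g @ d > -theta * np.linalg.norm(g) * np.linalg.norm(d):
        t *= 0.5                        # move halfway toward -g
        d = t * d_newton + (1.0 - t) * (-g)
        if t < 1e-12:
            d = -g                      # fall back to steepest descent
            break
    return d

# Usage with an indefinite Hessian: the blend bends toward -g.
g = np.array([1.0, 1.0])
H = np.diag([1.0, -2.0])
d = combined_direction(g, H)            # a genuine descent direction
```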

13.
In maximizing a non-linear function G(θ), it is well known that the steepest descent method has a slow convergence rate. Here we propose a systematic procedure to obtain a 1–1 transformation on the variables θ, so that in the space of the transformed variables, the steepest descent method produces the solution faster. The final solution in the original space is obtained by taking the inverse transformation. We apply the procedure to maximizing the likelihood functions of some generalized distributions which are widely used in modeling count data. It is shown that for these distributions, the steepest descent method via transformations produces the solution very quickly. It is also observed that the proposed procedure can be used to speed up the convergence of first-derivative-based algorithms, such as the Polak-Ribière and Fletcher-Reeves conjugate gradient methods, as well.
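The mechanism can be sketched in a few lines: run steepest ascent in the transformed variables and map the result back through the transformation. The log reparametrization in the demo is an illustrative choice of the 1-1 transformation, not one of the paper's; under it the toy objective becomes a perfect quadratic, which is exactly why the transformed iteration is fast.

```python
import numpy as np

def ascent_in_transformed_space(phi0, grad_G, T, jac_T, lr=0.1, steps=2000):
    """Steepest ascent on G after a 1-1 reparametrization theta = T(phi):
    by the chain rule the ascent direction in phi-space is
    J_T(phi)^T grad G(T(phi))."""
    phi = phi0.astype(float)
    for _ in range(steps):
        phi = phi + lr * jac_T(phi).T @ grad_G(T(phi))
    return T(phi)   # map the solution back to the original space

# Toy usage: maximize G(theta) = -(log theta)^2, theta > 0, via theta = exp(phi).
grad_G = lambda th: np.array([-2.0 * np.log(th[0]) / th[0]])
T = lambda phi: np.exp(phi)
jac_T = lambda phi: np.diag(np.exp(phi))
theta_hat = ascent_in_transformed_space(np.array([1.5]), grad_G, T, jac_T)
# theta_hat is approximately 1; in phi-space the objective is just -phi^2
```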

14.
This paper extends the full convergence of the steepest descent method with a generalized Armijo search and a proximal regularization to minimization problems with quasiconvex objective functions on complete Riemannian manifolds. Previous convergence results are obtained as particular cases, and some examples in non-Euclidean spaces are given. In particular, our approach can be used to solve constrained minimization problems with nonconvex objective functions in Euclidean spaces if the set of constraints is a Riemannian manifold and the objective function is quasiconvex on this manifold.

15.
We propose a method that incorporates a non-Euclidean gradient descent step with a generic matrix sketching procedure for solving unconstrained, nonconvex matrix optimization problems in which the decision variable cannot be stored in memory due to its size and the objective function is the composition of a vector function with a linear operator. The method updates the sketch directly, without updating or storing the decision variable. Subsequence convergence, global convergence under the Kurdyka-Łojasiewicz property, and the rate of convergence are established.

16.
The problem of globalizing the Newton method when the actual Hessian matrix is not used at every iteration is considered. A stabilization technique is studied that employs a new line search strategy for ensuring global convergence under mild assumptions. Moreover, an implementable algorithmic scheme is proposed, in which the evaluation of the second derivatives is conditioned on the behavior of the algorithm during the minimization process and the local convexity properties of the objective function. This is done in order to obtain a significant computational saving while keeping the unavoidable degradation in convergence speed acceptable. The numerical results reported indicate that the method described may be employed advantageously in all applications where the computation of the Hessian matrix is highly time-consuming.

17.
In this paper, an improved feasible QP-free method is proposed to solve nonlinear inequality-constrained optimization problems. A new modified method is presented to obtain the revised feasible descent direction. In view of the computational cost, the most attractive feature of the new algorithm is that only one system of linear equations is required to obtain the revised feasible descent direction; thereby, per iteration, it is only necessary to solve three systems of linear equations with the same coefficient matrix. In particular, the proposed algorithm is still globally convergent without the positive definiteness assumption on the Hessian estimate. Under some suitable conditions, the superlinear convergence rate is obtained.

18.
Chen Jun, Sun Wenyu. 《东北数学》 (Northeastern Mathematical Journal), 2008, 24(1): 19-30
In this paper, we combine nonmonotone and adaptive techniques with the trust region method for unconstrained minimization problems. We define a new ratio of the actual descent to the predicted descent. Then, instead of the monotone sequence, the nonmonotone sequence of function values is employed. With the adaptive technique, the radius Δk of the trust region can be adjusted automatically to improve the efficiency of trust region methods. By means of the Bunch-Parlett factorization, we construct a method with an indefinite dogleg path for solving the trust region subproblem that can handle an indefinite approximate Hessian Bk. The convergence properties of the algorithm are established. Finally, detailed numerical results are reported to show that our algorithm is efficient.
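A hedged sketch of the nonmonotone, adaptive bookkeeping described above: the acceptance ratio compares the trial value against the maximum over recent function values rather than the current one, and the radius is adapted from that ratio. The thresholds and factors are conventional trust-region choices, not necessarily the paper's.

```python
def nonmonotone_tr_update(f_hist, f_trial, pred_red, radius,
                          eta1=0.25, eta2=0.75):
    """Nonmonotone trust-region bookkeeping: rho compares the trial value
    with max(f_hist), the maximum of the last few function values, and the
    trust region radius is enlarged or shrunk according to rho."""
    rho = (max(f_hist) - f_trial) / max(pred_red, 1e-16)
    accept = rho >= eta1
    if rho >= eta2:
        radius *= 2.0          # very successful step: enlarge the region
    elif rho < eta1:
        radius *= 0.5          # unsuccessful step: shrink the region
    return accept, radius

# Usage: keep, say, the last few function values in f_hist.
accept, radius = nonmonotone_tr_update([3.0, 2.5, 2.7], 2.4, 0.5, 1.0)
```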

19.
A new diagonal quasi-Newton updating algorithm for unconstrained optimization is presented. The elements of the diagonal matrix approximating the Hessian are determined as scaled forward-finite-difference directional derivatives of the components of the gradient. Under mild classical assumptions, the convergence of the algorithm is proved to be linear. Numerical experiments with 80 unconstrained optimization test problems of different structures and complexities, as well as five applications from the MINPACK-2 collection, show that the suggested algorithm is more efficient and more robust than the quasi-Newton diagonal algorithm retaining only the diagonal elements of the BFGS update, the weak quasi-Newton diagonal algorithm, the quasi-Cauchy diagonal algorithm, the diagonal approximation of the Hessian by the least-change secant updating strategy with minimization of the trace of the matrix, the Cauchy algorithm with Oren-Luenberger scaling in its complementary form (i.e. the Barzilai-Borwein algorithm), the steepest descent algorithm, and the classical BFGS algorithm. It is, however, inferior to the limited-memory BFGS algorithm (L-BFGS).
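A minimal sketch of the diagonal updating idea, under the assumption that the forward differences are taken along the step direction and scaled componentwise by it; the scaling details and safeguards below are guesses for illustration, not the paper's formulas.

```python
import numpy as np

def diag_hessian_fd(grad, x, d, h=1e-6, floor=1e-8):
    """Diagonal Hessian model: a forward finite difference of the gradient
    along d approximates (Hessian @ d); dividing componentwise by d
    (the scaling assumed here) estimates the diagonal entries."""
    dg = (grad(x + h * d) - grad(x)) / h
    b = dg / np.where(np.abs(d) > floor, d, floor)
    return np.maximum(b, floor)      # keep the diagonal model positive

def diag_qn(x0, grad, tol=1e-6, max_iter=200):
    """Diagonal quasi-Newton iteration: x+ = x - g / b (componentwise)."""
    x = x0.astype(float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        b = diag_hessian_fd(grad, x, -g)   # difference along the step direction
        x = x - g / b
    return x

# Usage on a separable convex function f(x) = sum(x_i^4 + x_i^2).
grad = lambda x: 4.0 * x**3 + 2.0 * x
x_star = diag_qn(np.full(5, 2.0), grad)    # converges to the origin
```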

20.
The standard saddle point method for the asymptotic expansion of integrals requires showing the existence of the steepest descent paths of the phase function and computing the coefficients of the expansion from a function implicitly defined by solving an inversion problem. This means that the method is not systematic, because the steepest descent paths depend on the phase function at hand and there is no general and explicit formula for the coefficients of the expansion (as in Watson's lemma, for example). We propose a more systematic variant of the method in which the computation of the steepest descent paths is trivial and almost universal: it depends only on the location and the order of the saddle points of the phase function. Moreover, this variant of the method generates an asymptotic expansion given in terms of a generalized (and universal) asymptotic sequence that avoids the computation of the standard coefficients, giving an explicit and systematic formula for the expansion that may easily be implemented in a symbolic manipulation program. As an illustrative example, the well-known asymptotic expansion of the Airy function is rederived almost trivially using this method. New asymptotic expansions of the Hankel function Hn(z) for large n and z are given as non-trivial examples.
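For orientation, the classical leading-order saddle point approximation that this variant systematizes is, in standard textbook form (stated for a single simple saddle point t0 of the phase, with p'(t0) = 0; this is the classical result, not the paper's generalized expansion):

\[ \int_C e^{z\,p(t)}\, q(t)\, dt \;\sim\; q(t_0)\, e^{z\,p(t_0)} \sqrt{\frac{2\pi}{-z\,p''(t_0)}}, \qquad z \to \infty . \]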
