Similar Documents
19 similar documents found.
1.
A subspace projected conjugate gradient method is proposed for solving large-scale bound constrained quadratic programming. The conjugate gradient method is used to update the variables with indices outside the active set, while the projected gradient method is used to update the active variables. At every iteration the search direction consists of two parts: one is a subspace truncated Newton direction, the other is a modified gradient direction. With the projected search the algorithm is suitable for large problems. The convergence of the method is proved, and some numerical tests with dimensions ranging from 5000 to 20000 are reported.
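
The split between active and free variables described above can be illustrated with a short sketch. The following Python/NumPy fragment is a minimal sketch, not the paper's algorithm: the active-set test, the unit step length in the projected-gradient part, and the fixed inner CG budget are all illustrative assumptions.

```python
import numpy as np

def projected_step(H, c, x, lo, hi, tol=1e-10, cg_iters=10):
    """One illustrative iteration for min 0.5 x'Hx + c'x, lo <= x <= hi:
    a projected-gradient step on the active variables and a few CG steps on the free ones."""
    g = H @ x + c                                                   # gradient of the quadratic
    active = ((x <= lo + tol) & (g > 0)) | ((x >= hi - tol) & (g < 0))
    free = ~active

    # Projected-gradient update on the active variables (unit step, then projection).
    x_new = x.copy()
    x_new[active] = np.clip(x[active] - g[active], lo[active], hi[active])
    if not free.any():
        return x_new

    # Conjugate gradient on the free subspace for H_FF d = -g_F.
    Hf, gf = H[np.ix_(free, free)], g[free]
    d, r = np.zeros(gf.size), -gf.copy()
    p = r.copy()
    for _ in range(cg_iters):
        if np.linalg.norm(r) < 1e-8:
            break
        Hp = Hf @ p
        alpha = (r @ r) / (p @ Hp)
        d += alpha * p
        r_new = r - alpha * Hp
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    x_new[free] = np.clip(x[free] + d, lo[free], hi[free])
    return x_new
```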

2.
In this paper, two PVD-type algorithms are proposed for solving inseparable linearly constrained optimization. Instead of computing the residual gradient function, the new algorithms use reduced gradients to construct the PVD directions in parallel computation, which greatly reduces the amount of computation per iteration and is closer to practical applications for solving large-scale nonlinear programming. Moreover, based on an active set computed by coordinate rotation at each iteration, a feasible descent direction can easily be obtained by the extended reduced gradient method. This direction is then used as the PVD direction, and a new PVD algorithm is proposed for general linearly constrained optimization. Global convergence is also proved.
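
For orientation, a reduced gradient for linearly constrained problems can be formed as below. This is a generic NumPy sketch in which the split into basic and nonbasic columns is assumed to be given; it is not the coordinate-rotation rule or the parallel PVD construction of the paper.

```python
import numpy as np

def reduced_gradient(grad, A, basic, nonbasic):
    """Reduced gradient of f on {x : A x = b} with respect to the nonbasic variables.

    Eliminating the basic variables via x_B = B^{-1}(b - N x_N) gives
    r = g_N - N^T B^{-T} g_B, the gradient of f seen in the reduced space."""
    B, N = A[:, basic], A[:, nonbasic]
    y = np.linalg.solve(B.T, grad[basic])   # y = B^{-T} g_B (multiplier estimate)
    return grad[nonbasic] - N.T @ y
```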

3.
The speeding-up and slowing-down (SUSD) direction is a novel direction, which is proved to converge to the gradient descent direction under some conditions. The authors propose the derivative-free optimization algorithm SUSD-TR, which combines the SUSD direction, based on the covariance matrix of interpolation points, with the solution of the trust-region subproblem of the interpolation model function at the current iteration. They analyze the optimization dynamics and convergence of the algorithm SUSD-TR. Details of the trial step and structure step are given. Numerical results show the algorithm's efficiency, and the comparison indicates that SUSD-TR greatly improves performance over the method that only moves along the SUSD direction. The algorithm is competitive with state-of-the-art mathematical derivative-free optimization algorithms.

4.
The simplified Newton method, at the expense of fast convergence, reduces the work required by Newton's method by reusing the initial Jacobian matrix. The composite Newton method attempts to balance the trade-off between cost and fast convergence by composing one Newton step with one simplified Newton step. Recently, Mehrotra suggested a predictor-corrector variant of the primal-dual interior point method for linear programming; it is currently the interior-point method of choice for linear programming. In this work we propose a predictor-corrector interior-point algorithm for convex quadratic programming. It is proved that the algorithm is equivalent to a level-1 perturbed composite Newton method. Computations in the algorithm do not require the initial primal and dual points to be feasible. Numerical experiments are reported.
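
The plain composite Newton step mentioned above, one Newton step followed by one simplified Newton step that reuses the same Jacobian factorization, can be sketched as follows. This is an illustrative SciPy/NumPy fragment for a generic nonlinear system F(x) = 0, not the perturbed interior-point variant analyzed in the paper.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def composite_newton_step(F, J, x):
    """One level-1 composite step for F(x) = 0: a Newton step followed by a
    simplified Newton step that reuses the Jacobian factorization of the first step."""
    lu, piv = lu_factor(J(x))                    # factor J(x0) once
    x1 = x - lu_solve((lu, piv), F(x))           # Newton step
    return x1 - lu_solve((lu, piv), F(x1))       # simplified Newton step, same factorization

# Example: F(x) = x^2 - 2 componentwise; each component converges toward sqrt(2).
F = lambda x: x**2 - 2.0
J = lambda x: np.diag(2.0 * x)
print(composite_newton_step(F, J, np.array([1.0, 3.0])))
```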

5.
An interior point scaling projected reduced Hessian method, combining a nonmonotonic backtracking technique with a trust region strategy, is proposed for nonlinear equality constrained optimization with nonnegativity constraints on the variables. In order to handle large problems, a pair of trust region subproblems in horizontal and vertical subspaces replaces the general full trust region subproblem. The horizontal trust region subproblem in the algorithm is only a general trust region subproblem, while the vertical trust region subproblem is defined by a parameter size of the vertical direction subject only to an ellipsoidal constraint. At each iteration, the trust region strategy and the line search technique switch to obtaining a backtracking step generated by the two trust region subproblems. By adopting the l1 penalty function as the merit function, the global convergence and fast local convergence rate of the proposed algorithm are established under some reasonable conditions. A nonmonotonic criterion and a second order correction step are used to overcome the Maratos effect and speed up convergence in some ill-conditioned cases.

6.
We extend the classical affine scaling interior trust region algorithm for linearly constrained smooth minimization to the nonsmooth case where the gradient of the objective function is only locally Lipschitzian. We propose and analyze a new affine scaling trust-region method, in association with a nonmonotonic interior backtracking line search technique, for solving linearly constrained LC1 optimization, in which the gradient of the objective function is locally Lipschitzian. The general trust region subproblem in the proposed algorithm is defined by minimizing an augmented affine scaling quadratic model, which requires both first and second order information of the objective function, subject only to an affine scaling ellipsoidal constraint in a null subspace of the augmented equality constraints. The global convergence and fast local convergence rate of the proposed algorithm are established under some reasonable conditions that do not require the objective function to be twice smooth. Applications of the algorithm to some nonsmooth optimization problems are discussed.

7.
In this review, we clarify the underlying ideas and the relations between various multigrid methods, ranging from subset decomposition to projected subspace decomposition and truncated multigrid. In addition, we present a novel globally convergent inexact active set method which is closely related to truncated multigrid. The numerical properties of the algorithms are carefully assessed by means of a degenerate problem and a problem with a complicated coincidence set.

8.
A revised conjugate gradient projection method for nonlinear inequality constrained optimization problems is proposed, in which the search direction is a combination of the conjugate projection gradient and a quasi-Newton direction. The method has two merits. First, the amount of computation is lower, because the gradient matrix only needs to be computed once at each iteration. Second, the algorithm is globally convergent and locally superlinearly convergent without the strict complementarity condition, under some mild assumptions. In addition, the search direction is explicit.

9.
Image restoration is often solved by minimizing an energy function consisting of a data-fidelity term and a regularization term. A convex regularization term can usually preserve the image edges well in the restored image. In this paper, we consider a class of convex and edge-preserving regularization functions, i.e., multiplicative half-quadratic regularizations, and we use the Newton method to solve the correspondingly reduced systems of nonlinear equations. At each Newton iterate, the preconditioned conjugate gradient method, incorporating a constraint preconditioner, is employed to solve the structured Newton equation, which has a symmetric positive definite coefficient matrix. Bounds on the eigenvalues of the preconditioned matrix are derived, which can be used to estimate the convergence speed of the preconditioned conjugate gradient method. Experimental results demonstrate that the new approach is efficient and that the quality of the restored images is reasonably good.
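
The inner solve described above, a preconditioned conjugate gradient iteration on a symmetric positive definite Newton system, can be reproduced with standard tools. The sketch below uses scipy.sparse.linalg.cg with a user-supplied preconditioner passed as a LinearOperator; the simple Jacobi (diagonal) preconditioner shown here is only a placeholder for the constraint preconditioner analyzed in the paper.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

def solve_newton_system(A, b):
    """Solve A d = b (A symmetric positive definite) by preconditioned CG.
    A Jacobi preconditioner stands in for the structured constraint preconditioner."""
    d_inv = 1.0 / A.diagonal()
    M = LinearOperator(A.shape, matvec=lambda v: d_inv * v)
    d, info = cg(A, b, M=M, maxiter=200)
    if info != 0:
        raise RuntimeError(f"PCG did not converge (info={info})")
    return d

# Example on a small SPD system (1-D Laplacian-like matrix).
n = 100
A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
d = solve_newton_system(A, b)
```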

10.
The augmented Lagrangian method is a classical method for solving constrained optimization. Recently, it has attracted much attention due to its applications to sparse optimization in compressive sensing and to low rank matrix optimization problems. However, most augmented Lagrangian methods use first order information to update the Lagrange multipliers, which leads to only linear convergence. In this paper, we study an update technique based on second order information and prove that superlinear convergence can be obtained. Theoretical properties of the update formula are given, and some implementation issues regarding the new update are also discussed.
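
For context, a minimal first-order augmented Lagrangian loop for equality constraints is sketched below (NumPy/SciPy; the inner solver, penalty update, and stopping rule are generic placeholders). The multiplier update shown is the classical first-order rule lambda <- lambda + rho*c(x); the paper's second-order update is not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, c, x0, rho=10.0, iters=20, tol=1e-8):
    """Minimal ALM for: min f(x) s.t. c(x) = 0, with first-order multiplier updates."""
    x, lam = np.asarray(x0, float), np.zeros(len(c(x0)))
    for _ in range(iters):
        La = lambda z: f(z) + lam @ c(z) + 0.5 * rho * np.dot(c(z), c(z))
        x = minimize(La, x, method="BFGS").x   # approximate inner minimization
        viol = c(x)
        lam = lam + rho * viol                 # first-order multiplier update
        if np.linalg.norm(viol) < tol:
            break
        rho *= 2.0                             # mildly increase the penalty
    return x, lam

# Example: min x0^2 + x1^2 s.t. x0 + x1 - 1 = 0; solution (0.5, 0.5), multiplier -1.
f = lambda x: x[0]**2 + x[1]**2
c = lambda x: np.array([x[0] + x[1] - 1.0])
x, lam = augmented_lagrangian(f, c, np.zeros(2))
```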

11.
In this paper we propose a subspace limited memory quasi-Newton method for solving large-scale optimization with simple bounds on the variables. The limited memory quasi-Newton method is used to update the variables with indices outside the active set, while the projected gradient method is used to update the active variables. The search direction consists of three parts: a subspace quasi-Newton direction, a subspace gradient direction, and a modified gradient direction. Our algorithm can be applied to large-scale problems, as there is no need to solve any subproblems. The global convergence of the method is proved and some numerical results are also given.
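
The limited memory quasi-Newton update used on the inactive variables is, in most implementations, the standard L-BFGS two-loop recursion. A generic NumPy sketch is given below; the stored pairs and the Barzilai-Borwein-style initial scaling are the usual choices, not necessarily the paper's exact subspace variant.

```python
import numpy as np

def lbfgs_direction(g, s_list, y_list):
    """L-BFGS two-loop recursion: returns an approximation of -H^{-1} g
    from stored pairs s_i = x_{i+1} - x_i and y_i = g_{i+1} - g_i."""
    q = g.copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):   # newest pair first
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        q -= a * y
        alphas.append((a, rho, s, y))
    if s_list:                                              # initial Hessian scaling
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    for a, rho, s, y in reversed(alphas):                   # oldest pair first
        b = rho * (y @ q)
        q += (a - b) * s
    return -q
```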


12.
1. Introduction. Consider the following NLP problem (1.1), where the functions f: R^n → R^1 and g_j: R^n → R^1, j ∈ J, are twice continuously differentiable. In particular, we discuss the case where the number of variables and the number of constraints in (1.1) are large and the second derivatives in (1.1) are sparse. There are some methods which can solve large-scale problems, e.g. Lancelot in [2] and TDSQPLM in [9], but they cannot take advantage of the sparse structure of the problem. A new efficient meth…

13.
A new active set Newton-type algorithm for the solution of inequality constrained minimization problems is proposed. The algorithm possesses the following favorable characteristics: (i) global convergence under mild assumptions; (ii) superlinear convergence of the primal variables without strict complementarity; (iii) a Newton-type direction computed by means of a truncated conjugate gradient method. Preliminary computational results are reported to show the viability of the approach on large scale problems having only a limited number of constraints.

14.
A new algorithm is presented for carrying out the large-scale unconstrained optimization required in variational data assimilation using the Newton method. The algorithm is referred to as the adjoint Newton algorithm. It is based on first- and second-order adjoint techniques, allowing one to obtain the Newton line search direction by integrating a tangent linear equations model backwards in time (starting from a final condition with negative time steps). The error introduced by approximating the Hessian (the matrix of second-order derivatives) of the cost function with respect to the control variables in quasi-Newton type algorithms is thus completely eliminated, while the storage problem related to the Hessian no longer exists since the explicit Hessian is not required in this algorithm. The adjoint Newton algorithm is applied to three one-dimensional models and to a two-dimensional limited-area shallow water equations model with both model-generated and First Global Geophysical Experiment data. We compare the performance of the adjoint Newton algorithm with that of the truncated Newton, adjoint truncated Newton, and LBFGS methods. Our numerical tests indicate that the adjoint Newton algorithm is very efficient and could find the minima within three or four iterations for the problems tested here. In the case of the two-dimensional shallow water equations model, the adjoint Newton algorithm improves upon the efficiencies of the truncated Newton and LBFGS methods by a factor of at least 14 in terms of the CPU time required to satisfy the same convergence criterion. The Newton, truncated Newton and LBFGS methods are general purpose unconstrained minimization methods. The adjoint Newton algorithm is only useful for optimal control problems where the model equations serve as strong constraints and the corresponding tangent linear model may be integrated backwards in time. When the backwards integration of the tangent linear model is ill-posed in the sense of Hadamard, the adjoint Newton algorithm may not work. Thus, the adjoint Newton algorithm must be used with some caution. A possible solution to avoid this weakness of the adjoint Newton algorithm is proposed.
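
The key point, obtaining Newton information without ever forming or storing the Hessian, can also be mimicked in a purely matrix-free way. The sketch below approximates Hessian-vector products by forward differences of the gradient; this is the standard Hessian-free device used in truncated Newton codes, not the first- and second-order adjoint machinery of the paper. Such an operator plugs directly into the truncated CG sketch after item 19.

```python
import numpy as np

def hessian_vector_product(grad, x, v, eps=1e-6):
    """Approximate H(x) v by a forward difference of gradients,
    so an inner CG solver never needs the explicit Hessian."""
    return (grad(x + eps * np.asarray(v)) - grad(x)) / eps

# Example: for f(x) = 0.5 * x @ A @ x the product should equal A @ v.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
grad = lambda x: A @ x
print(hessian_vector_product(grad, np.array([1.0, 2.0]), np.array([1.0, 0.0])))
```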

15.
In this paper, we present a new hybrid conjugate gradient algorithm for unconstrained optimization. The method is a convex combination of the Liu-Storey conjugate gradient method and the Fletcher-Reeves conjugate gradient method. We also prove that the search direction of any hybrid conjugate gradient method that is a convex combination of two conjugate gradient methods satisfies the famous Dai-Liao conjugacy condition and, at the same time, accords with the Newton direction under suitable conditions. Furthermore, this property does not depend on any line search. Next, we also prove that, modulo the value of the parameter t, the Newton direction condition is equivalent to the Dai-Liao conjugacy condition. The strong Wolfe line search conditions are used. The global convergence of the new method is proved. Numerical comparisons show that the present hybrid conjugate gradient algorithm is efficient.
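
The convex combination of the two conjugate gradient parameters can be written compactly. In the NumPy sketch below the standard formulas beta_LS = g_{k+1}^T y_k / (-d_k^T g_k) and beta_FR = ||g_{k+1}||^2 / ||g_k||^2 are combined with a weight theta in [0, 1]; the rule the paper uses to select theta (so that the direction matches the Dai-Liao condition and the Newton direction) is not reproduced here.

```python
import numpy as np

def hybrid_cg_direction(g_new, g_old, d_old, theta):
    """Search direction d_{k+1} = -g_{k+1} + beta_k d_k, where beta_k is a
    convex combination of the Liu-Storey and Fletcher-Reeves parameters."""
    y = g_new - g_old
    beta_ls = (g_new @ y) / (-(d_old @ g_old))      # Liu-Storey
    beta_fr = (g_new @ g_new) / (g_old @ g_old)     # Fletcher-Reeves
    beta = (1.0 - theta) * beta_ls + theta * beta_fr
    return -g_new + beta * d_old
```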

16.
An active set subspace Barzilai-Borwein gradient algorithm for large-scale bound constrained optimization is proposed. The active sets are estimated by an identification technique. The search direction consists of two parts: some of the components are simply defined, while the other components are determined by the Barzilai-Borwein gradient method. A nonmonotone line search strategy that guarantees global convergence is used. Preliminary numerical results show that the proposed method is promising and competitive with the well-known method SPG on a subset of bound constrained problems from the CUTEr collection. This work was supported by the 973 Project (grant 2004CB719402) and the NSF of China (grant 10471036).
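
The Barzilai-Borwein step lengths that drive the second group of components are standard; a minimal NumPy sketch (with the usual safeguards on the step omitted) is:

```python
import numpy as np

def bb_steps(x, x_prev, g, g_prev):
    """Barzilai-Borwein step lengths from s = x_k - x_{k-1}, y = g_k - g_{k-1}."""
    s, y = x - x_prev, g - g_prev
    alpha_bb1 = (s @ s) / (s @ y)   # "long" BB step
    alpha_bb2 = (s @ y) / (y @ y)   # "short" BB step
    return alpha_bb1, alpha_bb2

# The corresponding components of the new iterate would then be x - alpha * g.
```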

17.
The low-rank matrix recovery problem, which is important in fields such as image processing and signal and data analysis, has been studied extensively. Within the framework of the alternating direction method, this paper applies a nonmonotone technique and proposes a new algorithm for solving low-rank matrix recovery problems. At each iteration, the algorithm first updates the two blocks of variables of the low-rank part simultaneously with a single gradient step using a variable step size, and then updates the variables of the sparse part with the nonmonotone technique. Under certain assumptions, the global convergence of the new algorithm is proved. Finally, the effectiveness of the new algorithm is verified on random low-rank matrix recovery problems and on video foreground-background separation; the results also show that the nonmonotone technique greatly improves the efficiency of the algorithm.
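
For context, the two proximal building blocks that appear in most algorithms for this low-rank-plus-sparse model, singular value thresholding for the low-rank part and entrywise soft thresholding for the sparse part, are sketched below in NumPy. This is generic robust-PCA machinery, not the variable-step-size gradient update or the nonmonotone rule proposed in the paper.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: proximal operator of tau * (nuclear norm) at M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft_threshold(M, tau):
    """Entrywise soft thresholding: proximal operator of tau * (l1 norm) at M."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)
```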

18.
We present an algorithm for super-scale linearly constrained nonlinear programming (LCNP) based on Newton's method. In large-scale programming, solving the Newton equation at each iteration can be expensive and may not be justified when far from a local solution. For super-scale problems, the truncated Newton method (where an approximate solution is computed by the conjugate-gradient method) is recommended; a diagonal BFGS preconditioning of the gradient is used, so that the number of iterations needed to solve the equation is reduced. The procedure for updating that preconditioning is described for LCNP when the set of active constraints or the partition into basic, superbasic and nonbasic (structural) variables has changed.

19.
The truncated Newton algorithm was devised by Dembo and Steihaug (Ref. 1) for solving large sparse unconstrained optimization problems. When far from a minimum, an accurate solution to the Newton equations may not be justified. Dembo's method solves these equations by the conjugate direction method, but truncates the iteration when a required degree of accuracy has been obtained. We present favorable numerical results obtained with the algorithm and compare them with existing codes for large-scale optimization.
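
The truncation idea, stopping the inner conjugate gradient iteration once the relative residual falls below a forcing tolerance and bailing out on negative curvature, can be sketched as follows. This NumPy fragment uses the common forcing sequence min(0.5, sqrt(||g||)), which is one standard choice rather than necessarily the rule in Ref. 1, and accesses the Hessian only through matrix-vector products such as the finite-difference operator sketched after item 14.

```python
import numpy as np

def truncated_newton_direction(g, hess_vec, max_iter=100):
    """Approximately solve H d = -g by CG, truncated by a forcing tolerance
    and safeguarded against negative curvature."""
    tol = min(0.5, np.sqrt(np.linalg.norm(g))) * np.linalg.norm(g)
    d = np.zeros_like(g)
    r = -g.copy()
    p = r.copy()
    for _ in range(max_iter):
        Hp = hess_vec(p)
        curv = p @ Hp
        if curv <= 0.0:                       # negative curvature: fall back to
            return d if d.any() else -g       # progress so far, or steepest descent
        alpha = (r @ r) / curv
        d += alpha * p
        r_new = r - alpha * Hp
        if np.linalg.norm(r_new) <= tol:      # truncation test
            break
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return d
```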
