Similar Literature
20 similar records found.
1.
Interpolation-based trust-region methods are an important class of algorithms for Derivative-Free Optimization which rely on locally approximating an objective function by quadratic polynomial interpolation models, frequently built from fewer points than there are basis components. Often, in practical applications, the contribution of the problem variables to the objective function is such that many pairwise correlations between variables are negligible, implying, in the smooth case, a sparse structure in the Hessian matrix. To be able to exploit Hessian sparsity, existing optimization approaches require knowledge of the sparsity structure. The goal of this paper is to develop and analyze a method where the sparse models are constructed automatically. The sparse recovery theory developed recently in the field of compressed sensing characterizes conditions under which a sparse vector can be accurately recovered from few random measurements. Such a recovery is achieved by minimizing the ℓ1-norm of a vector subject to the measurement constraints. We suggest an approach for building sparse quadratic polynomial interpolation models by minimizing the ℓ1-norm of the entries of the model Hessian subject to the interpolation conditions. We show that this procedure recovers accurate models when the function Hessian is sparse, using relatively few randomly selected sample points. Motivated by this result, we develop a practical interpolation-based trust-region method using deterministic sample sets and minimum ℓ1-norm quadratic models. Our computational results show that the new approach exhibits promising numerical performance both in the general case and in the sparse one.
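Minimizing the ℓ1-norm of the Hessian entries subject to linear interpolation conditions is a linear program after the standard variable split. The sketch below (an illustration under assumed names and layout, not the authors' code) builds the model m(x) = c + g·x + ½ x'Hx from p sample points with scipy's LP solver:

```python
# A minimal sketch, not the paper's implementation: recover a sparse-Hessian
# quadratic model from few interpolation conditions by minimizing the
# l1-norm of the Hessian entries.  Helper names are illustrative assumptions.
import numpy as np
from scipy.optimize import linprog

def sparse_quadratic_model(Y, fvals):
    """Y: (p, n) sample points, fvals: (p,) sampled function values."""
    p, n = Y.shape
    iu, ju = np.triu_indices(n)
    nq = iu.size                                   # free (upper-triangle) Hessian entries
    rows = []
    for y in Y:
        # coefficient of h_ij in m(y): 0.5*y_i^2 on the diagonal, y_i*y_j off it
        quad = np.where(iu == ju, 0.5 * y[iu] * y[ju], y[iu] * y[ju])
        rows.append(np.concatenate(([1.0], y, quad, -quad)))   # h = h_plus - h_minus
    A_eq = np.array(rows)                          # interpolation conditions
    cost = np.concatenate((np.zeros(1 + n), np.ones(2 * nq)))  # min ||h||_1
    bounds = [(None, None)] * (1 + n) + [(0, None)] * (2 * nq)
    res = linprog(cost, A_eq=A_eq, b_eq=fvals, bounds=bounds, method="highs")
    c, g = res.x[0], res.x[1:1 + n]
    h = res.x[1 + n:1 + n + nq] - res.x[1 + n + nq:]
    H = np.zeros((n, n)); H[iu, ju] = h; H[ju, iu] = h
    return c, g, H
```

When the true Hessian is sparse, the sparse recovery result cited above is the reason relatively few rows of A_eq (sample points) suffice for an accurate model.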

2.
We propose a multi-time scale quasi-Newton based smoothed functional (QN-SF) algorithm for stochastic optimization both with and without inequality constraints. The algorithm combines the smoothed functional (SF) scheme for estimating the gradient with the quasi-Newton method to solve the optimization problem. Newton algorithms typically update the Hessian at each instant and subsequently (a) project it onto the space of symmetric positive definite matrices, and (b) invert the projected Hessian. The latter operation is computationally expensive. In order to save computational effort, we propose in this paper a quasi-Newton SF (QN-SF) algorithm based on the Broyden-Fletcher-Goldfarb-Shanno (BFGS) update rule. In Bhatnagar (ACM Trans. Model. Comput. Simul. 18(1): 27–62, 2007), a Jacobi variant of Newton SF (JN-SF) was proposed and implemented to save computational effort. We compare our QN-SF algorithm with the gradient SF (G-SF) and JN-SF algorithms on two different problems: first a simple stochastic function minimization problem, and then a problem of optimal routing in a queueing network. We observe from the experiments that the QN-SF algorithm performs significantly better than both the G-SF and JN-SF algorithms in both problem settings. Next we extend the QN-SF algorithm to the case of constrained optimization. In this case too, the QN-SF algorithm performs much better than the JN-SF algorithm. Finally, we present the proof of convergence for the QN-SF algorithm in both unconstrained and constrained settings.
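The BFGS rule the abstract refers to is most conveniently applied directly to the inverse Hessian approximation, which is exactly what lets the method skip both the projection and the inversion steps. A minimal sketch of that textbook update (variable names are illustrative, not from the paper):

```python
# One BFGS update of the inverse Hessian approximation Hinv, where
# s = x_new - x_old and y = grad_new - grad_old; the update preserves
# symmetry and positive definiteness whenever the curvature y's is positive.
import numpy as np

def bfgs_inverse_update(Hinv, s, y):
    rho = 1.0 / (y @ s)
    if not np.isfinite(rho) or rho <= 0:           # curvature condition fails:
        return Hinv                                 # skip the update
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ Hinv @ V.T + rho * np.outer(s, s)
```

The search direction is then a single matrix-vector product with Hinv applied to the SF gradient estimate, with no matrix factorization per iteration.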

3.
A Smoothing Newton Method for General Nonlinear Complementarity Problems
Smoothing Newton methods for the nonlinear complementarity problem NCP(F) often require F to be at least a P0-function in order to guarantee that the underlying Newton equation is solvable. Based on a special equation reformulation of NCP(F), we propose a new smoothing Newton method for general nonlinear complementarity problems. The introduction of Kanzow and Pieper's gradient step makes our algorithm globally convergent. Under certain conditions, our method achieves a fast local convergence rate. Extensive numerical results are also reported for all complementarity problems in the MCPLIB and GAMSLIB libraries with all available starting points.
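The paper's special equation reformulation is not spelled out in the abstract; purely as an illustration of the smoothing Newton template it builds on, the sketch below applies Newton steps to the smoothed Fischer-Burmeister reformulation φ_μ(x, F(x)) = 0 with φ_μ(a, b) = a + b − √(a² + b² + 2μ²), using a numerical Jacobian. Every name and parameter choice here is an assumption:

```python
# Generic smoothing Newton iteration (illustrative only, not the paper's
# reformulation): drive phi_mu(x, F(x)) to zero while mu -> 0.
import numpy as np

def smoothing_newton_ncp(F, x, mu=1.0, tol=1e-8, max_iter=100):
    def Phi(x, mu):
        a, b = x, F(x)
        return a + b - np.sqrt(a**2 + b**2 + 2 * mu**2)
    for _ in range(max_iter):
        r = Phi(x, mu)
        if np.linalg.norm(r) < tol and mu < tol:
            break
        n, eps = len(x), 1e-7
        J = np.empty((n, n))                 # forward-difference Jacobian of Phi(., mu)
        for j in range(n):
            e = np.zeros(n); e[j] = eps
            J[:, j] = (Phi(x + e, mu) - r) / eps
        x = x + np.linalg.solve(J, -r)       # Newton step on the smoothed system
        mu *= 0.5                            # drive the smoothing parameter to zero
    return x

# usage on a small monotone LCP: F(x) = Mx + q, solution x = (1/3, 1/3)
M = np.array([[2.0, 1.0], [1.0, 2.0]]); q = np.array([-1.0, -1.0])
x = smoothing_newton_ncp(lambda x: M @ x + q, np.ones(2))
```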

4.
In this paper, we propose a trust region method for minimizing a function whose Hessian matrix at the solutions may be singular. The global convergence of the method is obtained under mild conditions. Moreover, we show that if the objective function is an LC² function, the method possesses local superlinear convergence under the local error bound condition, without requiring an isolated nonsingular solution. This is the first regularized Newton method with a trust-region technique that possesses local superlinear (quadratic) convergence without the assumption that the Hessian of the objective function at the solution is nonsingular. Preliminary numerical experiments show the efficiency of the method. This work is partly supported by the National Natural Science Foundation of China (Grant Nos. 70302003, 10571106, 60503004, 70671100) and the Science Foundation of Beijing Jiaotong University (2007RC014).

5.
A technique for deriving formulas for the second derivatives of a composite function with constrained variables is proposed. The original system of constraint equations is associated with a linear system of equations whose solution is used to determine the Hessian of the function. The resulting formulas are applied to discrete problems obtained by approximating optimal control problems with Runge-Kutta methods of various orders. For a particular optimal control problem, the numerical results obtained by the gradient method and by Newton's method with the resulting formulas are described and analyzed in detail.

6.
In this paper we propose a fundamentally different conjugate gradient method, in which the well-known parameter β_k is computed by an approximation of the Hessian/vector product through finite differences. For the search direction computation, the method uses a forward difference approximation to the Hessian/vector product in combination with a careful choice of the finite difference interval. For the step length computation we suggest an acceleration scheme able to improve the efficiency of the algorithm. Under common assumptions, the method is proved to be globally convergent. It is shown that for uniformly convex functions the convergence of the accelerated algorithm is still linear, but the reduction in function values is significantly improved. Numerical comparisons with conjugate gradient algorithms, including CONMIN by Shanno and Phua [D.F. Shanno, K.H. Phua, Algorithm 500, minimization of unconstrained multivariate functions, ACM Trans. Math. Softw. 2 (1976) 87–94], SCALCG by Andrei [N. Andrei, Scaled conjugate gradient algorithms for unconstrained optimization, Comput. Optim. Appl. 38 (2007) 401–416; N. Andrei, Scaled memoryless BFGS preconditioned conjugate gradient algorithm for unconstrained optimization, Optim. Methods Softw. 22 (2007) 561–571; N. Andrei, A scaled BFGS preconditioned conjugate gradient algorithm for unconstrained optimization, Appl. Math. Lett. 20 (2007) 645–650], the new conjugacy condition and related conjugate gradient method of Li, Tang and Wei [G. Li, C. Tang, Z. Wei, New conjugacy condition and related new conjugate gradient methods for unconstrained optimization, J. Comput. Appl. Math. 202 (2007) 523–539], and the truncated Newton method TN of Nash [S.G. Nash, Preconditioning of truncated-Newton methods, SIAM J. Sci. Stat. Comput. 6 (1985) 599–616], using a set of 750 unconstrained optimization test problems, show that the suggested algorithm outperforms these conjugate gradient algorithms as well as TN.
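The key ingredient is the standard forward-difference approximation of the Hessian/vector product, ∇²f(x)d ≈ (∇f(x + εd) − ∇f(x)) / ε, which supplies curvature information using only gradient evaluations. A minimal sketch (the ε heuristic below is a common choice and an assumption, not necessarily the paper's interval):

```python
# Hessian/vector product via a forward gradient difference; no Hessian
# is ever formed.  The finite-difference interval eps is a heuristic.
import numpy as np

def hessian_vector_product(grad, x, d, eps=None):
    nd = np.linalg.norm(d)
    if nd == 0.0:
        return np.zeros_like(x)
    if eps is None:
        eps = 2.0 * np.sqrt(np.finfo(float).eps) * (1.0 + np.linalg.norm(x)) / nd
    return (grad(x + eps * d) - grad(x)) / eps
```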

7.
Optimization, 2012, 61(6): 871–878
In this paper the behaviour of the gradient projection method for quadratic programming problems with M-matrices and lower bounds on the variables is analysed. It is shown that the Chandrasekaran method for solving such problems is simply a realization of the Newton projection method when the starting point of the latter has components equal to the lower bounds on the variables.
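For this problem class the feasible set is a box bounded below, so the projection is a componentwise max. A minimal illustrative iteration for min ½x'Ax − b'x subject to x ≥ l (step size and stopping rule are simple assumptions, not the paper's):

```python
# Projected gradient for a bound-constrained convex QP; A is symmetric
# positive definite (e.g. an M-matrix), projection is componentwise max.
import numpy as np

def projected_gradient_qp(A, b, l, x0, tol=1e-10, max_iter=5000):
    x = np.maximum(x0, l)
    alpha = 1.0 / np.linalg.norm(A, 2)          # safe fixed step: 1 / lambda_max(A)
    for _ in range(max_iter):
        g = A @ x - b                            # gradient of the quadratic
        x_new = np.maximum(l, x - alpha * g)     # gradient step, then projection
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x
```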

8.
We formulate a locally superlinearly convergent projected Newton method for constrained minimization in a Cartesian product of balls. For discrete-time, N-stage, input-constrained optimal control problems with Bolza objective functions, we then show how the required scaled tangential component of the objective function gradient can be approximated efficiently with a differential dynamic programming scheme; the computational cost and the storage requirements of the resulting modified projected Newton algorithm increase linearly with the number of stages. In calculations performed for a specific control problem with 10 stages, the modified projected Newton algorithm is shown to be one to two orders of magnitude more efficient than a standard unscaled projected gradient method. This work was supported by the National Science Foundation, Grant No. DMS-85-03746.
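What makes the Cartesian product of balls convenient is that projection decomposes stage by stage: each stage's control is simply rescaled into its own ball when it lies outside. A short sketch (the array layout is an assumption):

```python
# Projection onto a Cartesian product of Euclidean balls, one per stage.
import numpy as np

def project_product_of_balls(U, radii):
    """U: (N, m) stage controls; radii: (N,) ball radius for each stage."""
    U = U.copy()
    norms = np.linalg.norm(U, axis=1)
    outside = norms > radii
    U[outside] *= (radii[outside] / norms[outside])[:, None]   # scale back onto the ball
    return U
```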

9.
We propose a modified sequential quadratic programming method for solving mixed-integer nonlinear programming problems. Under the assumption that the integer variables have a smooth influence on the model functions, i.e., that function values do not change drastically when incrementing or decrementing an integer value, successive quadratic approximations are applied. The algorithm is stabilized by a trust region method with Yuan's second order corrections. It is not assumed that the mixed-integer program is relaxable; in other words, function values are evaluated only at integer points. The Hessian of the Lagrangian function is approximated by a quasi-Newton update formula with respect to the continuous and integer variables. Numerical results are presented for a set of 80 mixed-integer test problems taken from the literature. The surprising result is that the number of function evaluations, the most important performance criterion in practice, is less than the number of function calls needed for solving the corresponding relaxed problem without integer variables.

10.
The aggregate function is a useful smoothing approximation to the max-function of finitely many smooth functions and has been used to solve minimax problems, linear and nonlinear programming, generalized complementarity problems, etc. The aggregate function is a single smooth but complicated function, so its gradient and Hessian are time-consuming to compute. In this paper, a truncated aggregate smoothing stabilized Newton method for solving minimax problems is presented. At each iteration, only a small subset of the components in the max-function are aggregated, hence the number of gradient and Hessian calculations is reduced dramatically. The subset is adaptively updated with truncation criteria, which involve only the computation of function values and not of gradients or Hessians, to guarantee global convergence and, for the inner iteration, locally quadratic convergence with as little computational cost as possible. Numerical results show the efficiency of the proposed algorithm.
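A concrete anchor for the aggregation idea (a sketch under assumed details, not the paper's truncation criterion): the max-function max_i f_i(x) is smoothed by the aggregate function F_p(x) = (1/p) log Σ_i exp(p f_i(x)), and truncation keeps only the components near the current maximum, the only ones whose weights are non-negligible:

```python
# Truncated aggregate smoothing of max_i f_i(x); the keep_gap rule is an
# illustrative assumption.  Shifting by fmax keeps the exponentials stable.
import numpy as np

def truncated_aggregate(fvals, p, keep_gap=None):
    """fvals: values f_i(x); returns (smoothed max, indices kept)."""
    fmax = fvals.max()
    if keep_gap is None:
        keep_gap = 20.0 / p                       # exp(-20) is negligible
    active = np.where(fvals >= fmax - keep_gap)[0]
    w = np.exp(p * (fvals[active] - fmax))        # weights of the kept components
    return fmax + np.log(w.sum()) / p, active
```

Only the indices in `active` then contribute to the gradient Σ λ_i ∇f_i(x) with λ_i proportional to w_i, which is where the savings in gradient and Hessian evaluations come from.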

11.
We propose an effective method for numerically determining the gradient vector and the Hessian matrix of a quadratic quality criterion in the parametric optimization of continuous-discrete stochastic dynamic control systems, based on the application of adjoint equations. Translated from Dinamicheskie Sistemy, No. 13, 1994, pp. 20–25.

12.
The solution of a nonlinear parabolic equation is studied from both the functional and the numerical points of view. Existence, uniqueness, and stability results are given. A boundary-control problem is then presented. Expressions for the gradient and the Hessian of the cost function are given in some detail, and the Newton method is compared with a gradient method and the Fletcher-Reeves method. Numerical results are given. The paper concludes with a discussion of the advantages of the Newton method as applied to a control problem.

13.
许任飞, 《经济数学》, 2004, 21(3): 258–262
This paper studies algorithms for solving unconstrained optimization problems that may have singular solutions. An important feature of this class of problems is that the Hessian matrix of the objective function may be singular everywhere. We propose a hybrid gradient / regularized Newton-type algorithm for solving such problems and establish its global convergence under certain conditions. Moreover, after a finite number of iterations the algorithm reduces to the regularized Newton method, so it is locally quadratically convergent.

14.
We show that, for an unconstrained optimization problem, the long-term optimal trajectory consists of a sequence of greatest descent directions followed by a Newton step in the final iteration. The greatest descent direction can be computed approximately by using a Levenberg-Marquardt-like formula. This supports the view that the Newton method approximates a Levenberg-Marquardt-like formula at a finite distance from the minimum point, instead of the standard view that the Levenberg-Marquardt formula is a way to approximate the Newton method. With the insight gained from this analysis, we develop a two-dimensional version of a Levenberg-Marquardt-like formula. We make use of the two numerically largest components of the gradient vector to define new search directions. In this way, we avoid the need to invert a high-dimensional matrix; this also reduces the storage requirements for the full Hessian matrix in problems with a large number of variables. The author thanks Mark Wu, Professors Sanyang Liu, Junmin Li, Shuisheng Zhou and Feng Ye for support and help in this research, as well as the referees for helpful comments.
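One possible reading of this two-dimensional device, with every detail below an assumption (including how the 2x2 Hessian entries would be obtained, e.g. by finite differences): restrict a Levenberg-Marquardt-like step to the coordinate plane spanned by the two numerically largest gradient components, so only a 2x2 system is ever solved or stored:

```python
# Illustrative 2-D Levenberg-Marquardt-like direction; hess_fn(x, i, j)
# is an assumed callable returning the scalar Hessian entry H_ij.
import numpy as np

def two_dim_lm_direction(g, hess_fn, x, lam):
    i, j = np.argsort(np.abs(g))[-2:]            # two numerically largest components
    H2 = np.array([[hess_fn(x, i, i) + lam, hess_fn(x, i, j)],
                   [hess_fn(x, j, i), hess_fn(x, j, j) + lam]])
    d2 = np.linalg.solve(H2, -np.array([g[i], g[j]]))   # 2x2 regularized system
    d = np.zeros_like(g)
    d[i], d[j] = d2                              # zero in all other coordinates
    return d
```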

15.
This paper studies convergence properties of regularized Newton methods for minimizing a convex function whose Hessian matrix may be singular everywhere. We show that if the objective function is LC², then the methods possess local quadratic convergence under a local error bound condition, without the requirement of isolated nonsingular solutions. By using a backtracking line search, we globalize an inexact regularized Newton method. We show that the unit stepsize is eventually accepted. Limited numerical experiments are presented, which show the practical advantage of the method.
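A minimal sketch of a regularized Newton iteration of the kind studied here (the constants and the choice of regularization proportional to the gradient norm are assumptions for illustration): shifting the Hessian keeps the linear system solvable even where the Hessian is singular, and Armijo backtracking globalizes the method.

```python
# Regularized Newton with backtracking for a smooth convex f; the shift
# c*||g|| makes H + shift*I positive definite even at singular Hessians.
import numpy as np

def regularized_newton(f, grad, hess, x, c=1.0, tol=1e-10, max_iter=200):
    for _ in range(max_iter):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        H = hess(x) + c * gnorm * np.eye(len(x))       # regularization ~ ||g||
        d = np.linalg.solve(H, -g)                     # d is a descent direction
        t, fx = 1.0, f(x)
        while f(x + t * d) > fx + 1e-4 * t * (g @ d):  # Armijo backtracking
            t *= 0.5
        x = x + t * d                                  # t = 1 accepted near a solution
    return x
```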

16.
In the Newton/log-barrier method, Newton steps are taken for the log-barrier function for a fixed value of the barrier parameter until a certain convergence criterion is satisfied. The barrier parameter is then decreased and the Newton process is repeated. A naive analysis indicates that Newton's method does not exhibit superlinear convergence to the minimizer of each instance of the log-barrier function until it reaches a very small neighborhood, namely within O(μ²) of the minimizer, where μ is the barrier parameter. By analyzing the structure of the barrier Hessian and gradient in terms of the subspace of active constraint gradients and the associated null space, we show that this neighborhood is in fact much larger, O(μ^σ) for any σ ∈ (1,2], thus explaining why reasonably fast local convergence can be attained in practice. Moreover, we show that the overall convergence rate of the Newton/log-barrier algorithm is superlinear in the number of function/derivative evaluations, provided that the nonlinear program is formulated with a linear objective and that the schedule for decreasing the barrier parameter is related in a certain way to the step length and convergence criteria for each Newton process. Received: October 10, 1997 / Accepted: September 10, 2000 / Published online: February 22, 2001
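A minimal sketch of the loop being analyzed, for min f(x) subject to c_i(x) ≤ 0 with barrier B(x; μ) = f(x) − μ Σ_i log(−c_i(x)); the schedules and the μ-dependent inner tolerance below are illustrative assumptions, not the paper's precise criteria:

```python
# Newton/log-barrier outer loop: solve each barrier subproblem by Newton
# steps, then shrink mu.  barrier_grad and barrier_hess are assumed
# callables returning the gradient and Hessian of B(.; mu).
import numpy as np

def newton_log_barrier(barrier_grad, barrier_hess, x, mu=1.0,
                       mu_factor=0.1, mu_min=1e-8, inner_tol_pow=1.5):
    while mu > mu_min:
        while True:                                     # inner Newton loop
            g = barrier_grad(x, mu)
            if np.linalg.norm(g) <= mu ** inner_tol_pow:  # criterion tied to mu
                break
            x = x + np.linalg.solve(barrier_hess(x, mu), -g)
        mu *= mu_factor                                 # tighten the barrier
    return x
```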

17.
18.
In applications such as signal processing and statistics, many problems involve finding sparse solutions to underdetermined linear systems of equations. These problems can be formulated as structured nonsmooth optimization problems, i.e., ℓ1-regularized linear least squares problems. In this paper, we propose a block coordinate gradient descent method (abbreviated as CGD) to solve the more general ℓ1-regularized convex minimization problem, i.e., the problem of minimizing an ℓ1-regularized convex smooth function. We establish a Q-linear convergence rate for our method when the coordinate block is chosen by a Gauss-Southwell-type rule to ensure sufficient descent. We propose efficient implementations of the CGD method and report numerical results for solving large-scale ℓ1-regularized linear least squares problems arising in compressed sensing and image deconvolution, as well as large-scale ℓ1-regularized logistic regression problems for feature selection in data classification. Comparison with several state-of-the-art algorithms specifically designed for solving large-scale ℓ1-regularized linear least squares or logistic regression problems suggests that an efficiently implemented CGD method may outperform these algorithms despite the fact that the CGD method is not specifically designed just to solve these special classes of problems.
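The single-coordinate special case makes the mechanism concrete: for min ½‖Ax − b‖² + λ‖x‖₁, each coordinate subproblem is a one-dimensional ℓ1-regularized quadratic solved exactly by soft-thresholding. The sketch below uses a cyclic coordinate order for brevity; the paper's Gauss-Southwell block rule differs, so this is an illustration of the update, not of the full method:

```python
# Coordinate descent for the lasso with an exact soft-threshold update
# per coordinate and a running residual r = b - A x.
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def cd_lasso(A, b, lam, n_sweeps=100):
    m, n = A.shape
    x = np.zeros(n)
    r = b.astype(float).copy()                     # residual for x = 0
    col_sq = (A * A).sum(axis=0)                   # ||a_j||^2 for each column
    for _ in range(n_sweeps):
        for j in range(n):
            if col_sq[j] == 0.0:
                continue
            aj = A[:, j]
            z = x[j] + aj @ r / col_sq[j]          # unregularized 1-D minimizer
            x_new = soft_threshold(z, lam / col_sq[j])
            r += aj * (x[j] - x_new)               # keep the residual consistent
            x[j] = x_new
    return x
```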

19.
We propose a new smoothing Newton method for solving the P0-matrix linear complementarity problem (P0-LCP), based on the CHKS smoothing function. Our algorithm solves only one linear system of equations and performs only one line search per iteration. It is shown to converge to a P0-LCP solution globally linearly and locally quadratically without the strict complementarity assumption at the solution. To the best of the author's knowledge, this is the first one-step smoothing Newton method to possess both global linear and local quadratic convergence. Preliminary numerical results indicate that the proposed algorithm is promising.
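For reference, the CHKS (Chen-Harker-Kanzow-Smale) smoothing function in its common form is φ(a, b, μ) = ½(a + b − √((a − b)² + 4μ²)), which tends to min(a, b) as μ → 0, so φ(x_i, (Mx + q)_i, μ) = 0 smooths the LCP complementarity conditions. A one-line check, not the algorithm itself:

```python
# CHKS smoothing function and a sanity check that it approximates min(a, b).
import numpy as np

def chks(a, b, mu):
    return 0.5 * (a + b - np.sqrt((a - b) ** 2 + 4.0 * mu ** 2))

assert abs(chks(2.0, 3.0, 1e-9) - 2.0) < 1e-6     # ~ min(2, 3) for tiny mu
```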

20.
The limiting factors of second-order methods for large-scale semidefinite optimization are the storage and factorization of the Newton matrix. For a particular algorithm based on the modified barrier method, we propose to use iterative solvers instead of the routinely used direct factorization techniques. The preconditioned conjugate gradient method proves to be a viable alternative for problems with a large number of variables and a modest size of the constrained matrix. We further propose to avoid explicit calculation of the Newton matrix, either by an implicit scheme in the matrix-vector product or by using a finite-difference formula. This leads to huge savings in memory requirements and, for certain problems, to a further speed-up of the algorithm. Dedicated to the memory of Jos Sturm.
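A minimal sketch of the "never form the Newton matrix" idea (names and the finite-difference variant are illustrative assumptions, not the authors' code): the conjugate gradient solver only needs matrix-vector products, which can be supplied implicitly through a linear operator.

```python
# Matrix-free Newton step: CG sees H(x) only through matvecs, here
# approximated by a forward difference on the gradient.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def newton_step_matrix_free(grad, x):
    g = grad(x)
    n = len(x)
    def hv(d):                                     # H(x) d by forward differences
        nd = np.linalg.norm(d)
        if nd == 0.0:
            return np.zeros(n)
        eps = np.sqrt(np.finfo(float).eps) * (1.0 + np.linalg.norm(x)) / nd
        return (grad(x + eps * d) - g) / eps
    H = LinearOperator((n, n), matvec=hv)
    d, info = cg(H, -g)                            # never forms or stores H
    return d
```

With a preconditioner passed to cg via its M argument, this is the preconditioned variant the abstract describes; memory usage stays at a few vectors of length n.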
