Similar Literature
20 similar documents retrieved (search time: 862 ms)
1.
On the Newton Interior-Point Method for Nonlinear Programming Problems
Interior-point methods have been developed largely for nonlinear programming problems. In this paper, we generalize the global Newton interior-point method introduced in Ref. 1 and we establish a global convergence theory for it, under the same assumptions as those stated in Ref. 1. The generalized algorithm gives the possibility of choosing different descent directions for a merit function so that difficulties due to small steplength for the perturbed Newton direction can be avoided. The particular choice of the perturbation enables us to interpret the generalized method as an inexact Newton method. Also, we suggest a more general criterion for backtracking, which is useful when the perturbed Newton system is not solved exactly. We include numerical experimentation on discrete optimal control problems.
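As a hedged illustration of the inexact-Newton reading mentioned in the abstract (our notation, not necessarily that of Ref. 1), the perturbed Newton system for the KKT map F typically has the form
\[ F'(v_k)\,\Delta v_k = -F(v_k) + r_k, \qquad r_k = \sigma_k \mu_k \hat e, \]
so the perturbation plays the role of an inexact-Newton residual, and global convergence can be argued under a forcing condition such as \( \|r_k\| \le \eta_k \|F(v_k)\| \) with \( \eta_k \) bounded away from 1.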

2.
The aim of this paper is the study of different approaches to combine and scale, in an efficient manner, descent information for the solution of unconstrained optimization problems. We consider the situation in which different directions are available in a given iteration, and we wish to analyze how to combine these directions in order to provide a method more efficient and robust than the standard Newton approach. In particular, we will focus on the scaling process that should be carried out before combining the directions. We derive some theoretical results regarding the conditions necessary to ensure the convergence of combination procedures following schemes similar to our proposals. Finally, we conduct some computational experiments to compare these proposals with a modified Newton's method and other procedures in the literature for the combination of information.

3.
This paper provides a modification to the Gauss-Newton method for nonlinear least squares problems. The new method is based on structured quasi-Newton methods which yield a good approximation to the second derivative matrix of the objective function. In particular, we propose BFGS-like and DFP-like updates in a factorized form which give descent search directions for the objective function. We prove local and q-superlinear convergence of our methods, and give results of computational experiments for the BFGS-like and DFP-like updates. This work was supported in part by the Grant-in-Aid for Encouragement of Young Scientists of the Japanese Ministry of Education: (A)61740133 and (A)62740137.
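For orientation only (standard notation, not necessarily the paper's): in nonlinear least squares one minimizes \( f(x) = \tfrac12\|r(x)\|_2^2 \), whose Hessian splits as
\[ \nabla^2 f(x) = J(x)^T J(x) + \sum_i r_i(x)\,\nabla^2 r_i(x), \]
with J the Jacobian of the residual r. Structured quasi-Newton methods of the kind described keep the Gauss-Newton part \( J^T J \) exact and update only an approximation \( A_k \) to the second term with a BFGS- or DFP-like formula, so the search direction solves \( (J_k^T J_k + A_k)\,d_k = -J_k^T r_k \).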

4.
In this paper, we present a primal-dual interior-point method for solving nonlinear programming problems. It employs a Levenberg-Marquardt (LM) perturbation to the Karush-Kuhn-Tucker (KKT) matrix to handle indefinite Hessians and a line search to obtain sufficient descent at each iteration. We show that the LM perturbation is equivalent to replacing the Newton step by a cubic regularization step with an appropriately chosen regularization parameter. This equivalence allows us to use the favorable theoretical results of Griewank (The modification of Newton’s method for unconstrained optimization by bounding cubic terms, 1981), Nesterov and Polyak (Math. Program., Ser. A 108:177–205, 2006), Cartis et al. (Math. Program., Ser. A 127:245–295, 2011; Math. Program., Ser. A 130:295–319, 2011), but its application at every iteration of the algorithm, as proposed by these papers, is computationally expensive. We propose a hybrid method: use a Newton direction with a line search on iterations with positive definite Hessians and a cubic step, found using a sufficiently large LM perturbation to guarantee a steplength of 1, otherwise. Numerical results are provided on a large library of problems to illustrate the robustness and efficiency of the proposed approach on both unconstrained and constrained problems.
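The stated equivalence can be sketched as follows (our notation). The cubic regularization step minimizes
\[ m(d) = g^T d + \tfrac12 d^T H d + \tfrac{\sigma}{3}\|d\|^3, \]
and its global minimizer \( d^* \) satisfies \( (H + \sigma\|d^*\|\,I)\,d^* = -g \) with \( H + \sigma\|d^*\| I \succeq 0 \); hence a Levenberg-Marquardt shift \( \lambda = \sigma\|d^*\| \) reproduces the same step, which is the correspondence the hybrid method above exploits.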

5.
In this paper, we present a convergence analysis of the inexact Newton method for solving discrete-time algebraic Riccati equations (DAREs) for large and sparse systems. The inexact Newton method requires, at each iteration, the solution of a symmetric Stein matrix equation. These linear matrix equations are solved approximately by the alternating direction implicit (ADI) or Smith methods. We give some new matrix identities that allow us to derive new theoretical convergence results for the obtained inexact Newton sequences. We show that under some conditions the approximate solutions satisfy desired properties such as d-stability. The theoretical results developed in this paper extend to the discrete case the analysis performed by Feitzinger et al. (2009) [8] for continuous-time algebraic Riccati equations. In the last section, we give some numerical experiments.
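For context, here is a Hewer-type formulation of the Newton step in our notation, which may differ from the paper's: for the DARE
\[ X = A^T X A - A^T X B (R + B^T X B)^{-1} B^T X A + Q, \]
each Newton step computes \( K_k = (R + B^T X_k B)^{-1} B^T X_k A \), sets \( A_k = A - B K_k \), and obtains \( X_{k+1} \) from the symmetric Stein equation
\[ X_{k+1} = A_k^T X_{k+1} A_k + K_k^T R K_k + Q, \]
which is the linear matrix equation that the ADI or Smith iterations solve only approximately in the inexact variant.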

6.
This paper is concerned with the open problem as to whether the DFP method with inexact line search converges globally to the minimum of a uniformly convex function. We study this problem by way of a Gauss-Newton approach rather than an ordinary Newton approach. We also propose a derivative-free line search that can be implemented conveniently by a backtracking process and has the attractive property that any iterative method with this line search generates a sequence of iterates that is approximately norm descent. Moreover, if the Jacobian matrices are uniformly nonsingular, then the generated sequence converges. Under appropriate conditions, we establish global and superlinear convergence of the proposed Gauss-Newton based DFP method, which supports the open problem positively.
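A minimal sketch of a derivative-free backtracking rule with the "approximately norm descent" property described above; the tolerances sigma and rho and the summable sequence eps_k are our illustrative choices, not necessarily the paper's.

import numpy as np

def norm_descent_backtracking(F, x, d, eps_k, sigma=1e-4, rho=0.5, max_back=30):
    # Accept the first alpha = rho**i satisfying
    #   ||F(x + alpha*d)|| <= (1 + eps_k)*||F(x)|| - sigma*||alpha*d||**2.
    # With a summable sequence eps_k, the iterates are approximately norm descent.
    Fx_norm = np.linalg.norm(F(x))
    alpha = 1.0
    for _ in range(max_back):
        if np.linalg.norm(F(x + alpha * d)) <= (1.0 + eps_k) * Fx_norm - sigma * alpha**2 * np.dot(d, d):
            return alpha
        alpha *= rho
    return alpha  # fall back to the smallest trial step if the test never passed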

7.
To guarantee global convergence of the standard (unmodified) PRP nonlinear conjugate gradient method for unconstrained optimization, the exact line search or some Armijo type line searches which force the PRP method to generate descent directions have been adopted. In this short note, we propose a non-descent PRP method in another way. We prove that the unmodified PRP method converges globally even for nonconvex minimization by the use of an approximate descent inexact line search.
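For reference, the unmodified PRP direction discussed here is (standard notation)
\[ d_0 = -g_0, \qquad d_k = -g_k + \beta_k^{\mathrm{PRP}} d_{k-1}, \qquad \beta_k^{\mathrm{PRP}} = \frac{g_k^T (g_k - g_{k-1})}{\|g_{k-1}\|^2}, \]
where \( g_k = \nabla f(x_k) \); without a descent-enforcing line search, \( d_k \) need not be a descent direction, which is exactly the situation the note addresses.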

8.
The limiting factors of second-order methods for large-scale semidefinite optimization are the storage and factorization of the Newton matrix. For a particular algorithm based on the modified barrier method, we propose to use iterative solvers instead of the routinely used direct factorization techniques. The preconditioned conjugate gradient method proves to be a viable alternative for problems with a large number of variables and modest size of the constrained matrix. We further propose to avoid explicit calculation of the Newton matrix either by an implicit scheme in the matrix–vector product or using a finite-difference formula. This leads to huge savings in memory requirements and, for certain problems, to further speed-up of the algorithm. Dedicated to the memory of Jos Sturm.
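A minimal sketch of the finite-difference idea mentioned above, i.e. forming Newton-matrix-vector products without assembling the matrix; the function name and the step size h are our illustrative assumptions.

def newton_matvec_fd(grad, x, v, h=1e-6):
    # Approximates (Newton matrix at x) @ v by a forward difference of the gradient,
    # (grad(x + h*v) - grad(x)) / h, so an iterative solver such as preconditioned CG
    # only needs gradient evaluations instead of an explicitly stored, factorized matrix.
    return (grad(x + h * v) - grad(x)) / h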

9.
In this paper, we make a modification to the Liu-Storey (LS) conjugate gradient method and propose a descent LS method. The method can generate sufficient descent directions for the objective function. This property is independent of the line search used. We prove that the modified LS method is globally convergent with the strong Wolfe line search. The numerical results show that the proposed descent LS method is efficient for the unconstrained problems in the CUTEr library.

10.
This paper concerns a short-update primal-dual interior-point method for linear optimization based on a new search direction. We apply a vector-valued function generated by a univariate function to the nonlinear equation of the system which defines the central path. The common way to obtain an equivalent form of the central path is to use the square root function. In this paper we consider a new function formed by the difference of the identity map and the square root function. We apply Newton’s method in order to get the new directions. Although the analysis is more difficult in this case, we prove that the complexity of the algorithm matches that of the best known methods for linear optimization.
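A plausible rendering of the construction described, in our notation: the central path of the primal-dual pair is characterized by \( Ax = b,\ A^T y + s = c,\ xs = \mu e \) with \( x, s > 0 \). Writing the last block equivalently as \( \psi(xs/\mu) = \psi(e) \) for a componentwise map \( \psi \), the usual choice is \( \psi(t) = \sqrt{t} \), whereas here \( \psi(t) = t - \sqrt{t} \); applying Newton's method to the system reformulated with this \( \psi \) yields the new search direction. The paper's exact scaling may differ from this sketch.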

11.
We propose a new family of Newton-type methods for the solution of constrained systems of equations. Under suitable conditions that do not include differentiability or local uniqueness of solutions, local quadratic convergence to a solution of the system of equations can be established. We show that, as particular instances of the method, we obtain inexact versions of both a recently introduced LP-based Newton method and a Levenberg-Marquardt algorithm for the solution of systems with nonisolated solutions, and we improve on corresponding existing results.
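For orientation, the LP-based Newton method alluded to (in the style of Facchinei, Fischer, and Herrich; our notation) computes the next iterate from the subproblem
\[ \min_{x \in \Omega,\ \gamma \ge 0} \ \gamma \quad \text{s.t.} \quad \|F(x_k) + G(x_k)(x - x_k)\| \le \gamma\,\|F(x_k)\|^2, \qquad \|x - x_k\| \le \gamma\,\|F(x_k)\|, \]
where \( G(x_k) \) is a suitable substitute for the Jacobian; with the max-norm and a polyhedral \( \Omega \) this is a linear program, and the inexact versions mentioned above relax how accurately it is solved.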

12.
We give a framework for the globalization of a nonsmooth Newton method. In part one we start by recalling B. Kummer’s approach to convergence analysis of a nonsmooth Newton method and state his results for local convergence. In part two we give a globalized version of this method. Our approach uses a path search idea to control the descent. After elaborating the single steps, we analyze and prove the global convergence and, respectively, the local superlinear or quadratic convergence of the algorithm. In the third part we illustrate the method for nonlinear complementarity problems.

13.
In this paper we present a new steepest-descent type algorithm for convex optimization problems. Our algorithm partitions the unknowns into sub-blocks and carries out a partial optimization over each sub-block. In quadratic optimization, our method uses a Newton technique to compute the step lengths for the descent directions resulting from the sub-blocks. Our optimization method is fully parallel and easily implementable; we first present it in a general linear algebra setting and then highlight its applicability to a parabolic optimal control problem, where the blocks of unknowns correspond to the time dependence of the control variable. In the latter problem, the parallel tasks turn the control “on” during a specific time window and “off” elsewhere. We show that our algorithm significantly reduces the computational time compared with established methods. Convergence analysis of the new optimal control algorithm is provided for an arbitrary choice of partition. Numerical experiments are presented to illustrate the efficiency and the rapid convergence of the method.
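One way to realize the Newton step-length computation described, in the quadratic case \( f(x) = \tfrac12 x^T A x - b^T x \) (our notation, offered only as an illustration; the paper's exact scheme may differ): if the block descent directions at the current iterate \( x_k \) are collected as the columns of a matrix \( D_k \), the step lengths \( \alpha \) that exactly minimize \( f(x_k + D_k \alpha) \) solve the small linear system
\[ (D_k^T A D_k)\,\alpha = D_k^T (b - A x_k). \]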

14.
In this paper, we propose a new, distinctive version of a generalized Newton method for solving nonsmooth equations. The iterative formula is not of the classical Newton type but an exponential one. Moreover, it uses matrices from the B-differential instead of the generalized Jacobian. We prove local convergence of the method and present some numerical examples.

15.
Using Cramer's rule, we give concise proofs of the Lagrange and Newton interpolation formulas, and at the same time obtain the LU factorization of the inverse of the Vandermonde matrix.
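For context (standard notation): the coefficients a of the interpolating polynomial \( p(x) = a_0 + a_1 x + \cdots + a_n x^n \) through the nodes \( (x_i, y_i) \) satisfy the Vandermonde system \( V a = y \) with \( V_{ij} = x_i^{\,j} \). Cramer's rule gives \( a_j = \det V_j / \det V \), where \( V_j \) is V with its j-th column replaced by y, and
\[ \det V = \prod_{0 \le i < j \le n} (x_j - x_i), \]
which is nonzero for distinct nodes; this is the starting point for the Cramer's-rule derivations mentioned above.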

16.
In this paper, we study the search directions of three important interior-point algorithms, namely, the primal-affine scaling method (with logarithmic barrier function), the dual-affine scaling method (with logarithmic barrier function), and the primal-dual interior-point method. From an algebraic point of view, we show that the search directions of these three algorithms are merely Newton directions along three different paths that lead to a solution of the Karush-Kuhn-Tucker conditions of a given linear programming problem. From a geometric point of view, we show that these directions can be obtained by solving certain well-defined subproblems. Both views provide a general platform for studying existing interior-point methods and deriving new interior-point algorithms. We illustrate the derivation of new interior-point algorithms by replacing the logarithmic barrier function with an entropic barrier function. The results are further generalized and discussed. This work is partially supported by the North Carolina Supercomputing Center 1990 Cray Grant Program sponsored by Cray Research.
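To fix notation for the directions discussed (standard primal-dual setting, not specific to this paper): the perturbed KKT (central-path) conditions of the linear program are
\[ Ax = b, \qquad A^T y + s = c, \qquad XSe = \mu e, \qquad x, s > 0, \]
with \( X = \mathrm{diag}(x) \) and \( S = \mathrm{diag}(s) \), and a Newton step along the corresponding path solves
\[ A\,\Delta x = b - Ax, \qquad A^T \Delta y + \Delta s = c - A^T y - s, \qquad S\,\Delta x + X\,\Delta s = \mu e - XSe. \]
Different paths (and different barrier or entropy functions) change the right-hand side of the third block, which is one way to see how distinct interior-point directions arise as Newton directions.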

17.
We propose a generalized Newton method for solving the system of nonlinear equations with linear complementarity constraints in the implicit or semi-implicit time-stepping scheme for differential linear complementarity systems (DLCS). We choose a specific solution from the solution set of the linear complementarity constraints to define a locally Lipschitz continuous right-hand-side function in the differential equation. Moreover, we present a simple formula to compute an element in the Clarke generalized Jacobian of the solution function. We show that the implicit or semi-implicit time-stepping scheme using the generalized Newton method can be applied to a class of DLCS including the nondegenerate matrix DLCS and hidden Z-matrix DLCS, and has a superlinear convergence rate. To illustrate our approach, we show that choosing the least-element solution from the solution set of the Z-matrix linear complementarity constraints can define a Lipschitz continuous right-hand-side function with a computable Lipschitz constant. The Lipschitz constant helps us to choose the step size of the time-stepping scheme and guarantee the convergence.

18.
We propose an algorithm, semismooth Newton coordinate descent (SNCD), for the elastic-net penalized Huber loss regression and quantile regression in high dimensional settings. Unlike existing coordinate descent type algorithms, the SNCD updates a regression coefficient and its corresponding subgradient simultaneously in each iteration. It combines the strengths of the coordinate descent and the semismooth Newton algorithm, and effectively solves the computational challenges posed by dimensionality and nonsmoothness. We establish the convergence properties of the algorithm. In addition, we present an adaptive version of the “strong rule” for screening predictors to gain extra efficiency. Through numerical experiments, we demonstrate that the proposed algorithm is very efficient and scalable to ultrahigh dimensions. We illustrate the application via a real data example. Supplementary materials for this article are available online.
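For concreteness, one common parameterization of the problem class (not necessarily the one used in the paper): the elastic-net penalized Huber regression solves
\[ \min_{\beta_0, \beta}\ \frac{1}{n}\sum_{i=1}^{n} \ell_\delta\big(y_i - \beta_0 - x_i^T \beta\big) + \lambda\Big(\alpha\|\beta\|_1 + \tfrac{1-\alpha}{2}\|\beta\|_2^2\Big), \qquad \ell_\delta(t) = \begin{cases} t^2/2, & |t| \le \delta,\\ \delta|t| - \delta^2/2, & |t| > \delta, \end{cases} \]
where the loss is differentiable but not twice differentiable and the penalty is nonsmooth, which is why the algorithm pairs coordinate descent with a semismooth Newton update of each coefficient and its subgradient.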

19.
In this article, we consider solvers for large-scale trust-region subproblems when the quadratic model is defined by a limited-memory symmetric rank-one (L-SR1) quasi-Newton matrix. We propose a solver that exploits the compact representation of L-SR1 matrices. Our approach makes use of both an orthonormal basis for the eigenspace of the L-SR1 matrix and the Sherman–Morrison–Woodbury formula to compute global solutions to trust-region subproblems. To compute the optimal Lagrange multiplier for the trust-region constraint, we use Newton’s method with a judicious initial guess that does not require safeguarding. A crucial property of this solver is that it is able to compute high-accuracy solutions even in the so-called hard case. Additionally, the optimal solution is determined directly by formula, not iteratively. Numerical experiments demonstrate the effectiveness of this solver.
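For background (standard trust-region theory, our notation): with model \( m(p) = g^T p + \tfrac12 p^T B p \) and constraint \( \|p\| \le \Delta \), a global solution satisfies
\[ (B + \sigma^* I)\,p^* = -g, \qquad \sigma^* \ge 0, \qquad \sigma^*(\|p^*\| - \Delta) = 0, \qquad B + \sigma^* I \succeq 0, \]
and the multiplier is commonly found by applying Newton's method to the secular equation \( \phi(\sigma) = 1/\|p(\sigma)\| - 1/\Delta = 0 \). The compact L-SR1 representation and the Sherman-Morrison-Woodbury formula make \( p(\sigma) \) and the eigenspace information needed for the hard case inexpensive to obtain, which is what the solver above exploits.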

20.
Solving the Transportation Problem by a Projection and Contraction Method of Levenberg-Marquardt Type
For solving linear variational inequalities (LVI), the projection and contraction method of Levenberg-Marquardt type needs fewer iterations than an elementary projection and contraction method. However, the method of Levenberg-Marquardt type has to calculate the inverse of a matrix and hence is unsuitable for large problems. In this paper, using the special structure of the constraint matrix, we present a PC method of Levenberg-Marquardt type for LVI arising from the transportation problem without calculating any inverse matrices. Several computational experiments are presented to indicate that the method is effective for solving the transportation problem.
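For orientation (standard notation for projection and contraction methods of He type, not necessarily the paper's): the LVI asks for \( u^* \in \Omega \) with \( (u - u^*)^T (M u^* + q) \ge 0 \) for all \( u \in \Omega \), which is equivalent to the residual condition
\[ e(u, \beta) := u - P_\Omega\!\big[\,u - \beta (M u + q)\,\big] = 0, \qquad \beta > 0, \]
where \( P_\Omega \) is the projection onto \( \Omega \). Projection and contraction methods use \( e(u^k, \beta) \) both as an error measure and to build the next iterate; for the transportation problem, the constraint matrix has the special incidence structure that the paper exploits to avoid explicit matrix inverses.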
