Similar Articles
20 similar articles found.
1.
An iterative method for the minimization of convex functions f : R^n → R, called the Newton Bracketing (NB) method, is presented. The NB method proceeds by using Newton iterations to improve upper and lower bounds on the minimum value. The NB method is valid for n = 1 and, in some cases, for n > 1 (sufficient conditions are given here). The NB method is applied to large-scale Fermat–Weber location problems.
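The abstract does not reproduce the NB iteration itself; the sketch below only illustrates the bracketing idea for n = 1, using a Newton step on f' to move the iterate while the tangent-line underestimator of a convex function supplies a lower bound on the minimum value. The test function and interval are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def newton_bracket_1d(f, df, d2f, a, b, x0, tol=1e-8, max_iter=50):
    """Illustrative 1-D bracketing of min f over [a, b] for convex f.

    Upper bound: best function value seen so far.
    Lower bound: minimum over [a, b] of the tangent line at the current
    iterate (a valid underestimator because f is convex)."""
    x = x0
    upper = f(x)
    lower = -np.inf
    for _ in range(max_iter):
        g = df(x)
        # Tangent-line underestimator, minimized at one of the endpoints.
        lower = max(lower, min(f(x) + g * (a - x), f(x) + g * (b - x)))
        if upper - lower < tol:
            break
        # Newton step on f', clipped to the interval.
        x = np.clip(x - g / d2f(x), a, b)
        upper = min(upper, f(x))
    return upper, lower

# Example: f(x) = (x - 1)^2 + exp(x) on [-2, 2].
f = lambda x: (x - 1.0) ** 2 + np.exp(x)
df = lambda x: 2.0 * (x - 1.0) + np.exp(x)
d2f = lambda x: 2.0 + np.exp(x)
print(newton_bracket_1d(f, df, d2f, -2.0, 2.0, x0=0.0))
```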

2.
The proximal point algorithm is classical and popular in the optimization community. In practice, inexact proximal point algorithms, which solve the involved proximal subproblems approximately subject to certain inexactness criteria, are the truly implementable versions. In this paper, we first propose an inexact proximal point algorithm with a new inexactness criterion for solving convex minimization and show its O(1/k) iteration complexity. Then we show that this inexact proximal point algorithm is eligible for acceleration by some influential acceleration schemes proposed by Nesterov. Accordingly, an accelerated inexact proximal point algorithm with an iteration complexity of O(1/k^2) is proposed.
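As a minimal, hedged illustration of an inexact proximal point iteration (not the paper's inexactness criterion or its Nesterov-accelerated variant), the sketch below solves each proximal subproblem approximately with a fixed number of gradient steps; the quadratic test problem and all step counts are placeholder assumptions.

```python
import numpy as np

def inexact_prox_point(grad_f, x0, lam=1.0, inner_step=0.05,
                       outer_iters=50, inner_iters=20):
    """x_{k+1} ~ argmin_x f(x) + (1/(2*lam)) * ||x - x_k||^2,
    with each proximal subproblem solved inexactly by a few gradient steps."""
    x = x0.copy()
    for _ in range(outer_iters):
        z = x.copy()
        for _ in range(inner_iters):
            g = grad_f(z) + (z - x) / lam      # gradient of the subproblem at z
            z = z - inner_step * g
        x = z                                   # accept the approximate proximal point
    return x

# Example: f(x) = 0.5 * x^T A x - b^T x with A symmetric positive definite.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M.T @ M + np.eye(5)
b = rng.standard_normal(5)
L = np.linalg.eigvalsh(A).max()                 # Lipschitz constant of grad f
x = inexact_prox_point(lambda v: A @ v - b, np.zeros(5),
                       lam=1.0, inner_step=1.0 / (L + 1.0))
print(np.linalg.norm(A @ x - b))                # optimality residual, close to zero
```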

3.
This paper studies convergence properties of regularized Newton methods for minimizing a convex function whose Hessian matrix may be singular everywhere. We show that if the objective function is LC^2, then the methods possess local quadratic convergence under a local error bound condition, without requiring isolated nonsingular solutions. By using a backtracking line search, we globalize an inexact regularized Newton method and show that the unit stepsize is eventually accepted. Limited numerical experiments are presented, which show the practical advantage of the method.
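A compact sketch in the spirit described: the Hessian is shifted by a multiple of the gradient norm so the Newton system remains solvable even where the Hessian is singular, and an Armijo backtracking line search globalizes the step. The regularization constant, line-search parameters, and test function (whose Hessian is singular at the solution) are illustrative choices, not the paper's.

```python
import numpy as np

def regularized_newton(f, grad, hess, x0, c=1.0, tol=1e-8, max_iter=100):
    x = x0.copy()
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        # Regularize: H + mu*I with mu proportional to ||g||,
        # so the linear system is solvable even if H is singular.
        mu = c * np.linalg.norm(g)
        d = np.linalg.solve(hess(x) + mu * np.eye(x.size), -g)
        # Armijo backtracking line search.
        t, fx = 1.0, f(x)
        while f(x + t * d) > fx + 1e-4 * t * (g @ d):
            t *= 0.5
        x = x + t * d
    return x

# Example: a convex function whose Hessian is singular at the minimizer (0, 0).
f = lambda x: (x[0] + x[1]) ** 4 + 0.5 * x[0] ** 2
grad = lambda x: np.array([4 * (x[0] + x[1]) ** 3 + x[0],
                           4 * (x[0] + x[1]) ** 3])
hess = lambda x: np.array([[12 * (x[0] + x[1]) ** 2 + 1, 12 * (x[0] + x[1]) ** 2],
                           [12 * (x[0] + x[1]) ** 2, 12 * (x[0] + x[1]) ** 2]])
print(regularized_newton(f, grad, hess, np.array([2.0, -1.0])))
```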

4.
Based on a smoothing function, an inexact smoothing Newton algorithm is proposed for solving high-dimensional monotone symmetric cone complementarity problems. Under suitable conditions, the algorithm is shown to be globally convergent and locally quadratically convergent. Numerical experiments confirm the feasibility and effectiveness of the algorithm for large-scale symmetric cone complementarity problems.
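The symmetric cone setting is technical; as a simplified, hedged illustration, the sketch below applies a smoothing Newton iteration to the linear complementarity problem over R^n_+ (the simplest symmetric cone), using the smoothed Fischer–Burmeister function and a simple homotopy that halves the smoothing parameter after each step. This is not the inexact algorithm of the paper.

```python
import numpy as np

# Smoothed Fischer-Burmeister function: phi_mu(a, b) = a + b - sqrt(a^2 + b^2 + 2*mu^2).
# At mu = 0, phi(a, b) = 0 iff a >= 0, b >= 0, a*b = 0 (complementarity).
def fb_smooth(a, b, mu):
    return a + b - np.sqrt(a ** 2 + b ** 2 + 2.0 * mu ** 2)

def smoothing_newton_lcp(M, q, mu=1.0, iters=30):
    """Illustrative smoothing Newton iteration for the LCP
    x >= 0, Mx + q >= 0, x^T (Mx + q) = 0."""
    x = np.ones(q.size)
    for _ in range(iters):
        y = M @ x + q
        Phi = fb_smooth(x, y, mu)
        # Jacobian of Phi(x, Mx + q) by the chain rule.
        r = np.sqrt(x ** 2 + y ** 2 + 2.0 * mu ** 2)
        Da = np.diag(1.0 - x / r)               # d phi / d a
        Db = np.diag(1.0 - y / r)               # d phi / d b
        J = Da + Db @ M
        x = x + np.linalg.solve(J, -Phi)        # full Newton step
        mu *= 0.5                               # drive the smoothing parameter to zero
    return x

# Example: a small monotone LCP with solution x = (0.5, 0).
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, 1.0])
x = smoothing_newton_lcp(M, q)
print(x, M @ x + q)
```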

5.
Motivated by the method of Martinez and Qi (Ref. 1), we propose in this paper a globally convergent inexact generalized Newton method for unconstrained optimization problems whose objective functions have Lipschitz continuous gradients but are not twice differentiable. The method is implementable, globally convergent, and produces monotonically decreasing function values. We prove that the method attains a locally superlinear, or even quadratic, convergence rate under mild conditions that do not assume convexity of the functions.

6.
We describe an infeasible interior point algorithm for convex minimization problems. The method uses quasi-Newton techniques for approximating the second derivatives and providing superlinear convergence. We propose a new feasibility control of the iterates by introducing shift variables and by penalizing them in the barrier problem. We prove global convergence under standard conditions on the problem data, without any assumption on the behavior of the algorithm.

7.
An Inexact Newton Method Derived from Efficiency Analysis
We consider solving an unconstrained optimization problem by Newton-PCG-like methods, in which the preconditioned conjugate gradient method is applied to solve the Newton equations. The main question investigated is how efficient Newton-PCG-like methods can be from a theoretical point of view. An algorithmic model with several parameters is established, and a lower bound on the efficiency measure of the model is derived as a function of the parameters. By maximizing this lower bound function, the parameters are specified and an implementable algorithm is obtained. The efficiency of the implementable algorithm is compared with Newton's method by theoretical analysis and numerical experiments. The results show that this algorithm is competitive. Mathematics Subject Classification: 90C30, 65K05. This work was supported by the National Science Foundation of China Grant No. 10371131, and Hong Kong Competitive Earmarked Research Grant CityU 1066/00P from the Hong Kong University Grant Council.
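A minimal Newton-CG sketch of the idea: the Newton equation is solved only approximately by a truncated conjugate gradient loop (here unpreconditioned; the paper's efficiency-optimized parameter choices and preconditioner are not reproduced). The test objective is an illustrative assumption.

```python
import numpy as np

def cg(A, b, iters=10, tol=1e-10):
    """Plain conjugate gradients for A x = b (A symmetric positive definite)."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def newton_cg(grad, hess, x0, outer=20, inner=10):
    x = x0.copy()
    for _ in range(outer):
        g = grad(x)
        if np.linalg.norm(g) < 1e-8:
            break
        # Inexact Newton step: truncate CG after `inner` iterations.
        d = cg(hess(x), -g, iters=inner)
        x = x + d
    return x

# Example: strongly convex objective f(x) = 0.5*||x||^2 + 0.25*sum(x_i^4).
grad = lambda x: x + x ** 3
hess = lambda x: np.diag(1.0 + 3.0 * x ** 2)
print(newton_cg(grad, hess, np.full(4, 2.0)))
```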

8.
9.
In this paper, we consider the problem of minimizing the sum of two convex functions subject to linear linking constraints. Classical alternating direction type methods usually assume that the two convex functions have relatively easy proximal mappings. However, many problems arising from statistics, image processing, and other fields have the structure that while one of the two functions has an easy proximal mapping, the other is smooth and convex but does not have an easy proximal mapping, so the classical alternating direction methods cannot be applied. To deal with this difficulty, we propose an alternating direction method based on extragradients. Under the assumption that the smooth function has a Lipschitz continuous gradient, we prove that the proposed method returns an ε-optimal solution within O(1/ε) iterations. We apply the proposed method to a new statistical model called fused logistic regression. Our numerical experiments show that the proposed method performs very well on the test problems. We also test its performance on the lasso problem arising from statistics and compare the results with several existing efficient solvers for this problem; the results are very encouraging.
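As a simplified, hedged stand-in for the extragradient-based idea (not the authors' exact alternating direction scheme), the sketch below combines an extragradient step on the smooth part with the proximal mapping of the nonsmooth part, applied to a small lasso instance; the data and stepsize rule are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal mapping of t * ||.||_1 (the 'easy prox' part)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def extragradient_prox(grad_f, prox_g, x0, step, iters=300):
    """Extragradient step on the smooth part f, prox step on the nonsmooth part g:
       x_half = prox_g(x - step * grad_f(x))
       x_new  = prox_g(x - step * grad_f(x_half))"""
    x = x0.copy()
    for _ in range(iters):
        x_half = prox_g(x - step * grad_f(x))
        x = prox_g(x - step * grad_f(x_half))
    return x

# Example: lasso, min 0.5*||Ax - b||^2 + lam*||x||_1.
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
lam = 0.5
grad_f = lambda x: A.T @ (A @ x - b)
L = np.linalg.eigvalsh(A.T @ A).max()          # Lipschitz constant of grad f
step = 1.0 / L
prox_g = lambda v: soft_threshold(v, lam * step)
print(extragradient_prox(grad_f, prox_g, np.zeros(10), step=step))
```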

10.
11.
《Optimization》2012,61(2):211-217
We study a certain class of min-max optimization problems and show that the solutions of such problems are, in some sense, degenerate. Two examples are given: minimization of the spectral radius of a symmetric matrix and minimization of the largest eigenvalue of a symmetric matrix.

12.
It is well known that the gradient-projection algorithm plays an important role in solving minimization problems. In this paper, we use the idea of regularization to establish a general method whose generated sequence converges strongly to a minimizer of a constrained convex minimization problem; under suitable conditions, this minimizer also solves a variational inequality.
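A minimal sketch of a regularized gradient-projection iteration: each step adds a vanishing Tikhonov term to the gradient before projecting onto the constraint set. The unit-ball constraint, the stepsize, and the regularization sequence 1/k are illustrative assumptions, not the paper's general method.

```python
import numpy as np

def project_ball(x, radius=1.0):
    """Euclidean projection onto the ball ||x|| <= radius."""
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else x * (radius / nrm)

def regularized_gradient_projection(grad_f, x0, step, iters=500):
    x = x0.copy()
    for k in range(1, iters + 1):
        eps_k = 1.0 / k                       # vanishing regularization (illustrative)
        # Gradient step on f(x) + (eps_k/2)*||x||^2, then project onto the set.
        x = project_ball(x - step * (grad_f(x) + eps_k * x))
    return x

# Example: minimize 0.5*||x - c||^2 over the unit ball; the minimizer is P_ball(c).
c = np.array([2.0, 0.0, 1.0])
x = regularized_gradient_projection(lambda v: v - c, np.zeros(3), step=0.5)
print(x, project_ball(c))
```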

13.
We show that if U* is a hypercover of a topological space X, then the natural map hocolim U* → X is a weak equivalence. This fact is used to construct topological realization functors for the A^1-homotopy theory of schemes over real and complex fields. In an appendix, we also prove a theorem about computing homotopy colimits of spaces that are not cofibrant. Mathematics Subject Classification (2000): 55U35, 14F20, 14F42. The second author was supported by an NSF Postdoctoral Research Fellowship.

14.
15.
We propose an exterior Newton method for strictly convex quadratic programming (QP) problems. This method is based on a dual formulation: a sequence of points is generated which monotonically decreases the dual objective function. We show that the generated sequence converges globally and quadratically to the solution (if the QP is feasible and certain nondegeneracy assumptions are satisfied). Measures for detecting infeasibility are provided. The major computation in each iteration is to solve a KKT-like system. Therefore, given an effective symmetric sparse linear solver, the proposed method is suitable for large sparse problems. Preliminary numerical results are reported.
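The dual exterior Newton iteration itself is not reproduced here; the sketch below only shows the kind of KKT-like linear system whose solution dominates each iteration, for an equality-constrained strictly convex QP (a dense toy instance; a sparse symmetric solver would replace numpy.linalg.solve on large problems).

```python
import numpy as np

def solve_eq_qp(Q, c, A, b):
    """Solve  min 0.5*x^T Q x + c^T x  s.t.  A x = b  (Q symmetric positive definite)
    via its KKT system:
        [ Q  A^T ] [x  ]   [-c]
        [ A   0  ] [lam] = [ b]
    """
    n, m = Q.shape[0], A.shape[0]
    K = np.block([[Q, A.T],
                  [A, np.zeros((m, m))]])
    rhs = np.concatenate([-c, b])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]                    # primal x, multipliers lambda

# Example: a small strictly convex QP with one equality constraint.
Q = np.array([[4.0, 1.0], [1.0, 3.0]])
c = np.array([1.0, 1.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x, lam = solve_eq_qp(Q, c, A, b)
print(x, lam, A @ x - b)
```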

16.
In this paper, we apply a new version of the Brézis, Nirenberg, and Stampacchia theorem and use pseudomonotonicity and coercivity conditions to establish an existence result for solutions of generalized vector equilibrium problems for multivalued bifunctions. The proper quasiconvexity of multivalued bifunctions is introduced, and existence theorems for generalized vector equilibrium problems related to multivalued mappings with the KKM property are obtained. The new results extend and modify various existence theorems for similar problems.

17.
We study the convergence of a general perturbation of the Newton method for solving a nonlinear system of equations. As an application, we show that the augmented Lagrangian successive quadratic programming is locally and q-quadratically convergent in the variable x to the solution of an equality constrained optimization problem, under a mild condition on the penalty parameter and the choice of the Lagrange multipliers.
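A hedged sketch of a perturbed (inexact) Newton iteration for a nonlinear system F(x) = 0: each step s satisfies ||J(x)s + F(x)|| <= eta*||F(x)||, with the perturbation modeled here by a scaled random vector. The forcing term eta, the perturbation model, and the test system are illustrative assumptions, not the paper's framework.

```python
import numpy as np

def perturbed_newton(F, J, x0, iters=30, eta=0.1, tol=1e-10):
    """Newton iteration for F(x) = 0 in which each step is only an approximate
    solution of the Newton system: ||J(x)*s + F(x)|| <= eta * ||F(x)||."""
    x = x0.copy()
    rng = np.random.default_rng(0)
    for _ in range(iters):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        d = np.linalg.solve(J(x), -Fx)
        # Model the inexactness: add a perturbation scaled so the linear residual
        # equals eta * ||F(x)||.
        pert = rng.standard_normal(d.size)
        pert *= eta * np.linalg.norm(Fx) / (np.linalg.norm(J(x) @ pert) + 1e-16)
        x = x + d + pert
    return x

# Example: F(x) = (x1^2 + x2 - 3, x1 + x2^2 - 5), with solution (1, 2).
F = lambda x: np.array([x[0] ** 2 + x[1] - 3.0, x[0] + x[1] ** 2 - 5.0])
J = lambda x: np.array([[2.0 * x[0], 1.0], [1.0, 2.0 * x[1]]])
print(perturbed_newton(F, J, np.array([1.0, 1.0])))
```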

18.
19.
This paper presents a readily implementable algorithm for solving constrained minimization problems involving (not necessarily smooth) convex functions. The algorithm minimizes an exact penalty function via the aggregate subgradient method for unconstrained minimization. A scheme for automatic limitation of penalty growth is given. The algorithm is globally convergent under mild assumptions.
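A minimal sketch of the exact-penalty idea with a plain subgradient method (the aggregate-subgradient machinery and the automatic penalty-growth scheme of the paper are not reproduced): minimize f(x) + c*max(0, g(x)) with a diminishing stepsize and return the best iterate found. The penalty parameter, stepsize rule, and test problem are illustrative assumptions.

```python
import numpy as np

def exact_penalty_subgradient(f, f_sub, g, g_sub, x0, c=2.0, iters=5000):
    """Minimize the exact penalty P(x) = f(x) + c * max(0, g(x))
    for the convex problem  min f(x)  s.t.  g(x) <= 0,
    using the plain subgradient method with a diminishing stepsize."""
    P = lambda x: f(x) + c * max(0.0, g(x))
    x = x0.copy()
    best_x, best_val = x.copy(), P(x)
    for k in range(1, iters + 1):
        # A subgradient of the penalty function at x.
        s = f_sub(x) + (c * g_sub(x) if g(x) > 0 else 0.0)
        x = x - s / np.sqrt(k)
        if P(x) < best_val:
            best_x, best_val = x.copy(), P(x)
    return best_x, best_val

# Example: min |x1| + |x2|  s.t.  x1 + x2 >= 1, i.e. g(x) = 1 - x1 - x2 <= 0.
f = lambda x: np.abs(x).sum()
f_sub = lambda x: np.sign(x)
g = lambda x: 1.0 - x[0] - x[1]
g_sub = lambda x: np.array([-1.0, -1.0])
print(exact_penalty_subgradient(f, f_sub, g, g_sub, np.array([2.0, -2.0])))
```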

20.
Chen Shuang, Pang Li-ping, Li Dan, Wang Jin-he. Acta Mathematicae Applicatae Sinica (English Series), 2019, 35(3): 591–606
For the variational inequality problem with symmetric cone constraints, we consider using an inexact modified Newton method to efficiently...
