1.
A special variable metric method is given for finding minima of convex functions that are not necessarily differentiable. Time-consuming quadratic programming subproblems do not need to be solved. Global convergence of the method is established. Some encouraging numerical experience is reported.
2.
Motivated by the method of Martinez and Qi (Ref. 1), we propose in this paper a globally convergent inexact generalized Newton method to solve unconstrained optimization problems in which the objective functions have Lipschitz continuous gradients but are not twice differentiable. The method is implementable, globally convergent, and produces monotonically decreasing function values. We prove that the method has a locally superlinear or even quadratic convergence rate under mild conditions that do not assume convexity of the functions.
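For orientation (my gloss, not taken from the paper): a generalized Newton iteration for such an $LC^1$ function $f$ replaces the Hessian by an element of the generalized Jacobian of the gradient,
$$x_{k+1} \;=\; x_k - V_k^{-1}\,\nabla f(x_k), \qquad V_k \in \partial(\nabla f)(x_k),$$
with the linear system solved only inexactly and the step safeguarded (e.g., by a line search) to obtain global convergence.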
3.
In the asymptotic analysis of the minimization problem for a nonsmooth convex function $f$ on a closed convex set $X$ in $\mathbb{R}^n$, one can consider the corresponding problem of minimizing a smooth convex function $F$ on $\mathbb{R}^n$, where $F$ denotes the Moreau–Yosida regularization of $f$. We study the interrelationship between the minimizing/stationary sequence for $f$ and that for $F$. An algorithm is given to generate iteratively a possibly unbounded sequence, which is shown to be a minimizing sequence of $f$ under certain regularity and uniform continuity assumptions.
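For concreteness, the Moreau–Yosida regularization referred to here is the standard one (with parameter $\lambda > 0$):
$$F(x) \;=\; \min_{y \in \mathbb{R}^n}\Big\{ f(y) + \tfrac{1}{2\lambda}\,\|y - x\|^2 \Big\},$$
which is convex and continuously differentiable with a Lipschitz gradient, and has the same infimum and the same minimizers as $f$.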
4.
In this article, we propose a strongly convergent variant of the projected subgradient method for constrained convex minimization problems in Hilbert spaces. The advantage of the proposed method is that it converges strongly whenever the problem has solutions, without additional assumptions. The method also has the following desirable property: the sequence converges to the solution of the problem that lies closest to the initial iterate.
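To fix notation, here is a minimal sketch of the classical projected subgradient iteration that such variants build on, written in finite dimensions for illustration; `proj_C` and `subgrad_f` are placeholders, and the paper's strong-convergence modification is not reproduced here.

```python
import numpy as np

def projected_subgradient(x0, subgrad_f, proj_C, n_iter=1000):
    """Classical projected subgradient method:
    x_{k+1} = P_C(x_k - a_k * g_k), g_k a subgradient of f at x_k,
    with diminishing, non-summable steps a_k = 1/(k+1)."""
    x = np.asarray(x0, dtype=float)
    for k in range(n_iter):
        g = subgrad_f(x)                 # any subgradient of f at x
        step = 1.0 / (k + 1)             # diminishing step size
        x = proj_C(x - step * g)         # project back onto the feasible set C
    return x

# Toy usage: minimize f(x) = ||x||_1 over the Euclidean unit ball.
if __name__ == "__main__":
    subgrad_f = lambda x: np.sign(x)                    # subgradient of the l1-norm
    proj_C = lambda x: x / max(1.0, np.linalg.norm(x))  # projection onto the unit ball
    print(projected_subgradient(np.array([2.0, -3.0]), subgrad_f, proj_C))
```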
5.
Globally Convergent Algorithms for Unconstrained Optimization
Yixun Shi, Computational Optimization and Applications, 2000, 16(3): 295–308
A new globalization strategy for solving an unconstrained minimization problem is proposed, based on the idea of combining Newton's direction and the steepest descent direction within each iteration. Global convergence is guaranteed from an arbitrary initial point. The search direction in each iteration is chosen to be as close to Newton's direction as possible, and may be Newton's direction itself. Asymptotically, the Newton step is taken in every iteration, so the local convergence is quadratic. Numerical experiments are also reported. Combining a quasi-Newton direction with the steepest descent direction is also considered in the numerical experiments. The differences between the proposed strategy and several other strategies are also discussed.
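A schematic of one such globalization (my illustration, not Shi's exact rule): take the Newton direction when it passes a descent test, and otherwise blend it with the steepest descent direction, preferring blends as close to pure Newton as possible.

```python
import numpy as np

def blended_direction(g, H, c=1e-4):
    """Pick a descent direction between the Newton direction and steepest descent.

    g: gradient at the current point; H: (approximate) Hessian.
    Returns d satisfying the angle condition g.T @ d <= -c * ||g|| * ||d||.
    """
    d_sd = -g                               # steepest descent direction
    try:
        d_n = np.linalg.solve(H, -g)        # Newton direction
    except np.linalg.LinAlgError:
        return d_sd
    # Try blends d(t) = t*d_n + (1-t)*d_sd, preferring t close to 1 (pure Newton).
    for t in (1.0, 0.9, 0.75, 0.5, 0.25, 0.0):
        d = t * d_n + (1.0 - t) * d_sd
        if g @ d <= -c * np.linalg.norm(g) * np.linalg.norm(d):
            return d                        # most Newton-like blend that passes
    return d_sd
```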
6.
Alexander J. Zaslavski, Numerical Functional Analysis and Optimization, 2013, 34(5): 616–633
We study the convergence of the projected subgradient method for constrained convex optimization in a Hilbert space. Our goal is to obtain an ε-approximate solution of the problem in the presence of computational errors, where ε is a given positive number. The results that we obtain are important in practice because computations always introduce numerical errors.
7.
Combining a subgradient selection technique with the ideas of the cutting-plane method and the method of strongly subfeasible directions, this paper proposes a strongly subfeasible directions algorithm for constrained optimization problems with nonsmooth objective functions. By designing a new subproblem for finding the search direction and constructing a new type of line search, the algorithm can accept an infeasible initial point while maintaining the strong subfeasibility of the iterates, and it avoids unduly large increases of the objective value outside the feasible region. The algorithm is globally convergent, and preliminary numerical experiments show that it is stable and effective.
8.
We deal with extended-valued nonsmooth convex vector optimization problems in infinite-dimensional spaces, where the solution set (the weakly efficient set) may be empty. We characterize the class of convex vector functions with the property that every scalarly stationary sequence is a weakly efficient sequence. We generalize the results obtained in the scalar case by Auslender and Crouzeix on asymptotically well-behaved convex functions and substantially improve the few results known in the vector case.
9.
10.
Paul Armand, Computational Optimization and Applications, 2003, 26(1): 5–34
We describe an infeasible interior point algorithm for convex minimization problems. The method uses quasi-Newton techniques for approximating the second derivatives and providing superlinear convergence. We propose a new feasibility control of the iterates by introducing shift variables and by penalizing them in the barrier problem. We prove global convergence under standard conditions on the problem data, without any assumption on the behavior of the algorithm.
11.
A Globally and Superlinearly Convergent SQP Algorithm for Nonlinear Constrained Optimization
Based on a continuously differentiable exact penalty function and a regularization technique for dealing with the inconsistency of subproblems in the SQP method, we present a new SQP algorithm for nonlinear constrained optimization problems. The proposed algorithm incorporates automatic adjustment rules for the choice of the parameters and makes use of an approximate directional derivative of the merit function to avoid the need to evaluate second-order derivatives of the problem functions. Under mild assumptions the algorithm is proved to be globally convergent, and in particular the superlinear convergence rate is established without assuming that the strict complementarity condition holds at the solution. Reported numerical results show that the proposed algorithm is promising.
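For orientation, the standard SQP subproblem reads as follows (the paper's regularized variant modifies it to cope with inconsistency): at the iterate $x_k$, a step $d_k$ is obtained from the quadratic program
$$\min_{d}\ \nabla f(x_k)^{\top} d + \tfrac12\, d^{\top} B_k d \quad \text{s.t.}\quad c_i(x_k) + \nabla c_i(x_k)^{\top} d \le 0, \ \ i = 1,\dots,m,$$
where $B_k$ approximates the Hessian of the Lagrangian; the linearized constraints may be inconsistent, which is the difficulty the regularization technique addresses.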
12.
A Globally Convergent Smoothing Newton Method for Nonsmooth Equations and Its Application to Complementarity Problems
The complementarity problem is theoretically and practically useful and has been used to formulate various equilibrium problems arising in economics and engineering. Recently, various equivalent equation formulations have been proposed for solving complementarity problems and seem attractive; however, such formulations have the difficulty that the equation arising from a complementarity problem is typically nonsmooth. In this paper, we propose a new smoothing Newton method for nonsmooth equations. In our method, we use an approximation function that is smooth when the approximation parameter is positive and coincides with the original nonsmooth function when the parameter is zero. We then apply Newton's method to the equation that is equivalent to the original nonsmooth equation and that includes the approximation parameter as a variable. The proposed method has the advantages that it only has to deal with a smooth function at each iteration and that it never requires a separate procedure to decrease the approximation parameter. We show that the sequence generated by the proposed method converges globally to a solution and that, under a semismoothness assumption, the convergence rate is superlinear. Moreover, we apply the method to nonlinear complementarity problems. Numerical results show that the proposed method is practically efficient.
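One standard smoothing function of this type (an illustration; not necessarily the one used in the paper) is the Chen–Harker–Kanzow–Smale function
$$\phi_\mu(a,b) \;=\; a + b - \sqrt{(a-b)^2 + 4\mu},$$
which is smooth for $\mu > 0$ and satisfies $\phi_0(a,b) = a + b - |a - b| = 2\min(a,b)$, so $\phi_0(a,b) = 0$ exactly when $a \ge 0$, $b \ge 0$, and $ab = 0$, i.e., when the complementarity conditions hold.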
13.
This paper introduces the notion of piecewise partially separable functions and studies their properties. We also consider some of the many applications of these functions. Finally, we consider the problem of minimizing piecewise partially separable functions and develop an algorithm, which exploits the structure of such functions, for its solution. We present the results of preliminary numerical experiments.
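Recall the standard notion that the paper's "piecewise" variant extends: a function is partially separable if it is a sum of element functions, each depending only on a small subset of the variables,
$$f(x) \;=\; \sum_{j=1}^{m} f_j(x_{J_j}), \qquad J_j \subset \{1,\dots,n\}, \ \ |J_j| \ll n,$$
and, roughly speaking, piecewise partially separable functions are built from finitely many such pieces via operations such as max and min.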
14.
15.
This work focuses on convergence analysis of the projected gradient method for solving constrained convex minimization problems in Hilbert spaces. We show that the sequence of points generated by the method employing the Armijo line search converges weakly to a solution of the considered convex optimization problem. Weak convergence is established by assuming convexity and Gâteaux differentiability of the objective function, whose Gâteaux derivative is supposed to be uniformly continuous on bounded sets. Furthermore, we propose some modifications of the classical projected gradient method in order to obtain strong convergence. The new variant has the following desirable properties: the sequence of generated points is entirely contained in a ball with diameter equal to the distance between the initial point and the solution set, and the whole sequence converges strongly to the solution of the problem that lies closest to the initial iterate. Convergence analysis of both methods is presented without a Lipschitz continuity assumption.
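A minimal sketch of the classical scheme the analysis concerns, written in finite dimensions for illustration: projected gradient with Armijo backtracking along the projection arc. Here `f`, `grad_f`, and `proj_C` are placeholders.

```python
import numpy as np

def projected_gradient_armijo(x0, f, grad_f, proj_C,
                              beta=0.5, sigma=1e-4, max_iter=500, tol=1e-8):
    """Projected gradient with Armijo search along the projection arc:
    x(t) = P_C(x - t * grad_f(x)); accept the largest t in {1, beta, beta^2, ...}
    with f(x(t)) <= f(x) + sigma * grad_f(x).T @ (x(t) - x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        t = 1.0
        while True:
            x_new = proj_C(x - t * g)
            if f(x_new) <= f(x) + sigma * g @ (x_new - x):
                break                      # Armijo condition satisfied
            t *= beta                      # backtrack
        if np.linalg.norm(x_new - x) < tol:
            return x_new                   # fixed point of the projection map
        x = x_new
    return x
```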
16.
Recently, Fukushima and Qi proposed a proximal Newton method for minimizing a nonsmooth convex function. An alternative global convergence proof for that method is presented in this paper. Global convergence is established without any additional assumption on the objective function. We also show that the infimum of a convex function is always equal to the infimum of its Moreau–Yosida regularization.
17.
An iterative method for the minimization of convex functions $f : \mathbb{R}^n \to \mathbb{R}$, called a Newton Bracketing (NB) method, is presented. The NB method proceeds by using Newton iterations to improve upper and lower bounds on the minimum value. The NB method is valid for $n = 1$, and in some cases for $n > 1$ (sufficient conditions are given here). The NB method is applied to large-scale Fermat–Weber location problems.
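The building block of such a scheme (my gloss; the paper's exact bracketing update is not reproduced here) is a directional Newton step for the level equation $f(x) = L$ along the gradient, where $L$ is the current lower bound:
$$x_{k+1} \;=\; x_k - \frac{f(x_k) - L}{\|\nabla f(x_k)\|^2}\,\nabla f(x_k),$$
i.e., the Polyak step with target value $L$; monitoring the progress of such steps lets the bracket $L \le \min f \le U$ be tightened.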
18.
19.
We propose a trust-region type method for a class of nonsmooth nonconvex optimization problems in which the objective function is the sum of a (possibly nonconvex) smooth function and a (possibly nonsmooth) convex function. The model function of our trust-region subproblem is always quadratic, and the linear term of the model is generated using abstract descent directions. Therefore, the trust-region subproblems can be easily constructed and efficiently solved by cheap, standard methods. When the accuracy of the model function at the solution of the subproblem is insufficient, we add a safeguard on the step sizes to improve the accuracy. For a class of functions that can be "truncated", an additional truncation step is defined and a step-size modification strategy is designed. The overall scheme converges globally, and we establish fast local convergence under suitable assumptions. In particular, using a connection with a smooth Riemannian trust-region method, we prove local quadratic convergence for partly smooth functions under a strict complementarity condition. Preliminary numerical results on a family of $\ell_1$-optimization problems are reported and demonstrate the efficiency of our approach.
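For concreteness (notation mine), the problem class is composite,
$$\min_{x}\ F(x) \;=\; f(x) + \varphi(x),$$
with $f$ smooth (possibly nonconvex) and $\varphi$ convex (possibly nonsmooth), and the trust-region subproblem minimizes a quadratic model
$$m_k(d) \;=\; F(x_k) + g_k^{\top} d + \tfrac12\, d^{\top} B_k d \quad \text{s.t.}\ \ \|d\| \le \Delta_k,$$
where the linear term $g_k$ is built from an abstract descent direction for $F$ at $x_k$ rather than from a gradient.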
20.
S. Yamada, T. Tanino, M. Inuiguchi, Journal of Optimization Theory and Applications, 2000, 107(2): 355–389
In this paper, we consider a reverse convex programming problem constrained by a convex set and a reverse convex set, which is defined by the complement of the interior of a compact convex set X. We propose an inner approximation method to solve the problem in the case where X is not necessarily a polytope. The algorithm utilizes an inner approximation of X by a sequence of polytopes to generate relaxed problems. It is shown that every accumulation point of the sequence of optimal solutions of the relaxed problems is an optimal solution of the original problem.