Similar Literature

20 similar documents retrieved.
1.
The purpose of this paper is to derive, in a unified way, second order necessary and sufficient optimality criteria for four types of nonsmooth minimization problems: the discrete minimax problem, the discrete ℓ1-approximation, the minimization of the exact penalty function, and the minimization of the classical exterior penalty function. Our results correct and supplement conditions obtained by various authors in recent papers.
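As a minimal illustration of the exact penalty idea discussed in this abstract (not the paper's own derivation), the following sketch minimizes a smooth objective under one inequality constraint by applying a subgradient method to the nondifferentiable ℓ1 exact penalty; the objective, constraint, and penalty weight are invented for the example.

```python
def exact_penalty_demo(c=4.0, iters=5000):
    """Minimize f(x) = x^2 subject to g(x) = 1 - x <= 0 via the exact
    penalty P(x) = f(x) + c*max(0, g(x)), using a subgradient method.
    For c > |f'(x*)| = 2 the penalty is exact: its unconstrained
    minimizer coincides with the constrained minimizer x* = 1."""
    x = 0.0
    best_x = x
    best_p = x * x + c * max(0.0, 1.0 - x)
    for k in range(1, iters + 1):
        # one subgradient of P at x (P is nondifferentiable at x = 1)
        sub = 2.0 * x + (-c if x < 1.0 else 0.0)
        x -= sub / k  # diminishing step sizes 1/k
        p = x * x + c * max(0.0, 1.0 - x)
        if p < best_p:
            best_p, best_x = p, x
    return best_x
```

Note that for c below the threshold 2 the penalty minimizer would no longer be feasible, which is why exactness of the penalty matters in such criteria.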

2.
An algorithm based on a combination of the polyhedral and quadratic approximation is given for finding stationary points of unconstrained minimization problems with locally Lipschitz problem functions that are not necessarily convex or differentiable. Global convergence of the algorithm is established. Under additional assumptions, it is shown that the algorithm generates Newton iterations and that the convergence is superlinear. Some encouraging numerical experience is reported. This work was supported by grant No. 201/96/0918 of the Czech Republic Grant Agency.

3.
In this paper we present an algorithm for solving nonlinear programming problems where the objective function contains a possibly nonsmooth convex term. The algorithm successively solves direction finding subproblems which are quadratic programming problems constructed by exploiting the special feature of the objective function. An exact penalty function is used to determine a step-size, once a search direction thus obtained is judged to yield a sufficient reduction in the penalty function value. The penalty parameter is adjusted to a suitable value automatically. Under appropriate assumptions, the algorithm is shown to produce an approximate optimal solution to the problem with any desirable accuracy in a finite number of iterations.

4.
5.
A coordinate gradient descent method for nonsmooth separable minimization
We consider the problem of minimizing the sum of a smooth function and a separable convex function. This problem includes as special cases bound-constrained optimization and smooth optimization with ℓ1-regularization. We propose a (block) coordinate gradient descent method for solving this class of nonsmooth separable problems. We establish global convergence and, under a local Lipschitzian error bound assumption, linear convergence for this method. The local Lipschitzian error bound holds under assumptions analogous to those for constrained smooth optimization, e.g., the convex function is polyhedral and the smooth function is (nonconvex) quadratic or is the composition of a strongly convex function with a linear mapping. We report numerical experience with solving the ℓ1-regularization of unconstrained optimization problems from Moré et al. (ACM Trans. Math. Softw. 7, 17–41, 1981) and from the CUTEr set (Gould and Orban, ACM Trans. Math. Softw. 29, 373–394, 2003). Comparison with L-BFGS-B and MINOS, applied to a reformulation of the ℓ1-regularized problem as a bound-constrained optimization problem, is also reported.
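A standard concrete instance of the coordinate descent idea for smooth-plus-ℓ1 problems is the lasso, where each coordinate subproblem has a closed-form soft-thresholding solution. The sketch below is a minimal cyclic coordinate descent for it, assuming dense list-of-lists data; it is an illustration of the problem class, not the paper's own (block) method.

```python
def soft_threshold(z, t):
    # proximal operator of t*|.|: shrink z toward zero by t
    return max(z - t, 0.0) if z > 0 else min(z + t, 0.0)

def lasso_cd(A, b, lam, iters=100):
    """Cyclic coordinate descent for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    Each coordinate minimization is exact via soft-thresholding."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        for j in range(n):
            col_sq = sum(A[i][j] ** 2 for i in range(m))
            if col_sq == 0.0:
                continue
            # correlation of column j with the residual excluding x[j]
            rho = sum(
                A[i][j] * (b[i] - sum(A[i][k] * x[k] for k in range(n) if k != j))
                for i in range(m)
            )
            x[j] = soft_threshold(rho, lam) / col_sq
    return x
```

With an orthonormal design each coordinate update is already optimal, so convergence is immediate; the error bound assumptions in the abstract govern the rate in the general case.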

6.
7.
We introduce a trust region algorithm for minimization of nonsmooth functions with linear constraints. At each iteration, the objective function is approximated by a model function that satisfies a set of assumptions stated recently by Qi and Sun in the context of unconstrained nonsmooth optimization. The trust region iteration begins with the resolution of an “easy problem”, as in recent works of Martínez and Santos and Friedlander, Martínez and Santos, for smooth constrained optimization. In practical implementations we use the infinity norm for defining the trust region, which fits well with the domain of the problem. We prove global convergence and report numerical experiments related to a parameter estimation problem. Supported by FAPESP (Grant 90/3724-6), FINEP and FAEP-UNICAMP. Supported by FAPESP (Grant 90/3724-6 and grant 93/1515-9).

8.
A readily implementable algorithm is given for minimizing a (possibly nondifferentiable and nonconvex) locally Lipschitz continuous function f subject to linear constraints. At each iteration a polyhedral approximation to f is constructed from a few previously computed subgradients and an aggregate subgradient, which accumulates the past subgradient information. This approximation and the linear constraints generate constraints in the search direction finding subproblem, which is a quadratic programming problem. Then a stepsize is found by an approximate line search. All the algorithm's accumulation points are stationary. Moreover, the algorithm converges when f happens to be convex.

9.
10.
We introduce a proximal alternating linearized minimization (PALM) algorithm for solving a broad class of nonconvex and nonsmooth minimization problems. Building on the powerful Kurdyka–Łojasiewicz property, we derive a self-contained convergence analysis framework and establish that each bounded sequence generated by PALM globally converges to a critical point. Our approach allows us to analyze various classes of nonconvex-nonsmooth problems and related nonconvex proximal forward–backward algorithms with semi-algebraic problem data, the latter property being shared by many functions arising in a wide variety of fundamental applications. A by-product of our framework also shows that our results are new even in the convex setting. As an illustration of the results, we derive a new and simple globally convergent algorithm for solving the sparse nonnegative matrix factorization problem.
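The PALM scheme alternates proximal gradient steps on each block, with step sizes set by the partial Lipschitz constants of the smooth coupling term. The following is a minimal two-scalar sketch on an invented nonconvex model F(x, y) = 0.5*(x*y − 2)² + λ|x| + λ|y|, assuming the standard soft-thresholding prox for the ℓ1 terms; it illustrates the iteration shape only, not the paper's analysis.

```python
def prox_l1(z, t):
    # proximal operator of t*|.|
    return max(z - t, 0.0) if z > 0 else min(z + t, 0.0)

def palm_demo(x=1.0, y=1.0, lam=0.1, iters=1000):
    """PALM on F(x, y) = 0.5*(x*y - 2)^2 + lam*|x| + lam*|y|:
    alternate proximal gradient steps in x and in y, each with a
    step size from the partial Lipschitz constant of the coupling."""
    for _ in range(iters):
        cx = y * y + 1e-6  # Lipschitz constant of grad_x of the smooth part
        x = prox_l1(x - (x * y - 2.0) * y / cx, lam / cx)
        cy = x * x + 1e-6  # Lipschitz constant of grad_y of the smooth part
        y = prox_l1(y - (x * y - 2.0) * x / cy, lam / cy)
    return x, y
```

At a critical point with x, y > 0 the first-order conditions read (xy − 2)y = −λ and (xy − 2)x = −λ, which the iterates satisfy to high accuracy.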

11.
An aggregate subgradient method for nonsmooth convex minimization
A class of implementable algorithms is described for minimizing any convex, not necessarily differentiable, function f of several variables. The methods require only the calculation of f and of one subgradient of f at designated points. They generalize Lemarechal's bundle method. More specifically, instead of using all previously computed subgradients in search direction finding subproblems that are quadratic programming problems, the methods use an aggregate subgradient which is recursively updated as the algorithms proceed. Each algorithm yields a minimizing sequence of points, and if f has any minimizers, then this sequence converges to a solution of the problem. Particular members of this algorithm class terminate when f is piecewise linear. The methods are easy to implement and have flexible storage requirements and computational effort per iteration that can be controlled by a user. Research sponsored by the Institute of Automatic Control, Technical University of Warsaw, Poland, under Project R.I.21.
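For contrast with the aggregate subgradient approach, the plain subgradient method below (a baseline sketch, not the paper's algorithm) uses only the current subgradient with normalized diminishing steps; bundle and aggregate methods improve on it precisely by reusing past subgradient information in a direction-finding subproblem. The test function is invented for the example.

```python
import math

def subgradient_descent(f_and_sub, x0, iters=3000):
    """Plain subgradient method: x_{k+1} = x_k - (1/k) * g_k / ||g_k||,
    where g_k is any subgradient at x_k. Tracks the best point found,
    since subgradient steps do not monotonically decrease f."""
    x = list(x0)
    best_f, best_x = f_and_sub(x)[0], list(x)
    for k in range(1, iters + 1):
        fx, g = f_and_sub(x)
        if fx < best_f:
            best_f, best_x = fx, list(x)
        norm = math.sqrt(sum(gi * gi for gi in g)) or 1.0
        x = [xi - gi / (norm * k) for xi, gi in zip(x, g)]
    return best_x, best_f

def f_poly(x):
    # piecewise-linear test: |x1 - 1| + |x2 + 2|, minimum 0 at (1, -2)
    g = [1.0 if x[0] >= 1.0 else -1.0, 1.0 if x[1] >= -2.0 else -1.0]
    return abs(x[0] - 1.0) + abs(x[1] + 2.0), g
```

On piecewise-linear functions like this one, the aggregate subgradient methods of the abstract terminate finitely, whereas the plain method only approaches the minimizer.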

12.
In this paper, we study a generalized quasi-variational inequality (GQVI for short) with two multivalued operators and two bifunctions in a Banach space setting. A coupling of the Tychonov fixed point principle and the Kakutani-Ky Fan theorem for multivalued maps is employed to prove a new existence theorem for the GQVI. We also study a nonlinear optimal control problem driven by the GQVI and give sufficient conditions ensuring the existence of an optimal control. Finally, we illustrate the appli...

13.
In this paper, we consider a class of optimal control problems governed by nonsmooth functional inequality constraints involving convolution. First, we transform it into an equivalent optimal control problem with smooth functional inequality constraints at the expense of doubling the dimension of the control variables. Then, using the Chebyshev polynomial approximation of the control variables, we obtain a semi-infinite quadratic programming problem. Finally, we use the dual parametrization technique to solve the problem.

14.
Newton's method for a class of nonsmooth functions
This paper presents and justifies a Newton iterative process for finding zeros of functions admitting a certain type of approximation. This class includes smooth functions as well as nonsmooth reformulations of variational inequalities. We prove for this method an analogue of the fundamental local convergence theorem of Kantorovich including optimal error bounds. The research reported here was sponsored by the National Science Foundation under Grants CCR-8801489 and CCR-9109345, by the Air Force Systems Command, USAF, under Grants AFOSR-88-0090 and F49620-93-1-0068, by the U. S. Army Research Office under Grant No. DAAL03-92-G-0408, and by the U. S. Army Space and Strategic Defense Command under Contract No. DASG60-91-C-0144. The U. S. Government has certain rights in this material, and is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon.
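A common instance of such Newton processes is the semismooth Newton method applied to a min-function reformulation of a complementarity problem. The sketch below uses an invented one-dimensional example, H(x) = min(x, x² − 1), whose zero x = 1 solves the complementarity problem x ≥ 0, x² − 1 ≥ 0, x(x² − 1) = 0; the derivative is replaced by an element of the B-differential. This is an illustration under those assumptions, not the paper's exact scheme.

```python
def semismooth_newton(x0=1.5, iters=20, tol=1e-10):
    """Newton iteration on H(x) = min(x, x^2 - 1), selecting the
    derivative of whichever branch is active as a B-differential
    element. Locks onto the smooth branch near the zero, giving
    locally superlinear convergence."""
    x = x0
    for _ in range(iters):
        hx = min(x, x * x - 1.0)
        if abs(hx) < tol:
            break
        d = 1.0 if x <= x * x - 1.0 else 2.0 * x  # active-branch derivative
        x -= hx / d
    return x
```

Once the iterate settles on the x² − 1 branch, the process is ordinary Newton on a smooth function, matching the Kantorovich-type local convergence described in the abstract.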

15.
A modified gradient method is developed for minimization of nonsmooth exact penalty functions. The complexity of each iteration in the proposed method is lower than in the original method. Translated from Vychislitel'naya i Prikladnaya Matematika, No. 73, pp. 108–112, 1992.

16.
17.
A nonsmooth Levenberg-Marquardt (LM) method with double parameter adjusting strategies is presented in this paper for solving vertical complementarity problems, based on the computation of an element of the B-differential of a vector-valued minimum function. At each iteration, the LM parameter is adjusted according to the norm of the vector-valued minimum function and to the ratio between the actual reduction and the predicted reduction. Under a local error bound condition, which is strictly weaker than the nonsingularity assumption, the local convergence rate is discussed. Finally, numerical tests indicate that the presented algorithm is effective.
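To make the iteration shape concrete, here is a minimal one-dimensional LM sketch on an invented minimum-function system H(x) = min(x, x³ − 8), whose zero is x = 2. It uses a B-differential element as the Jacobian and the simplest of the adjusting strategies mentioned above (μ = ‖H‖); the ratio-based adjustment is omitted, so this is only an illustration of the idea, not the paper's method.

```python
def nonsmooth_lm(x0=3.0, iters=50, tol=1e-10):
    """LM iteration on H(x) = min(x, x^3 - 8): solve the scalar
    analogue of (J^T J + mu I) d = -J^T H with mu = |H(x)|,
    J taken as the derivative of the active branch."""
    x = x0
    for _ in range(iters):
        h = min(x, x ** 3 - 8.0)
        if abs(h) < tol:
            break
        j = 1.0 if x <= x ** 3 - 8.0 else 3.0 * x * x  # B-differential element
        mu = abs(h)                                    # first adjusting rule
        x -= j * h / (j * j + mu)
        # a trust-region-style ratio test would further adjust mu here
    return x
```

Because μ → 0 as the residual vanishes, the step approaches a pure Newton step near the solution, which is where the local error bound condition drives the convergence rate.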

18.
A class of nonsmooth multiobjective programming problems is studied, in which each objective function is the sum of a cone-convex function and a differentiable function, and the constraint is a cone constraint in Euclidean space. Under a generalized Abadie constraint qualification, necessary optimality conditions for cone weakly efficient solutions of this class of problems are obtained by means of a generalized Farkas lemma and scalarization of the multiobjective functions.

19.
We introduce a new and very simple algorithm for a class of smooth convex constrained minimization problems: an iterative scheme related to sequential quadratically constrained quadratic programming methods, called the sequential simple quadratic method (SSQM). The computational simplicity of SSQM, which uses first-order information, makes it suitable for large scale problems. Theoretical results under standard assumptions are given proving that the whole sequence built by the algorithm converges to a solution and becomes feasible after a finite number of iterations. When, in addition, the objective function is strongly convex, an asymptotic linear rate of convergence is established.

20.
This paper presents a readily implementable algorithm for minimizing a locally Lipschitz continuous function that is not necessarily convex or differentiable. This extension of the aggregate subgradient method differs from one developed by the author in the treatment of nonconvexity. Subgradient aggregation allows the user to control the number of constraints in search direction finding subproblems and, thus, trade-off subproblem solution effort for rate of convergence. All accumulation points of the algorithm are stationary. Moreover, the algorithm converges when the objective function happens to be convex.
