Similar literature
20 similar articles retrieved (search time: 31 ms)
1.
We propose a bundle trust-region algorithm to minimize locally Lipschitz functions which are potentially nonsmooth and nonconvex. We prove global convergence of our method and show by way of an example that the classical convergence argument in trust-region methods based on the Cauchy point fails in the nonsmooth setting. Our method is tested experimentally on three problems in automatic control.

2.
In this paper, we design a numerical algorithm for solving a simple bilevel program where the lower-level program is a nonconvex minimization problem with a convex set constraint. We propose to solve a combined problem in which the first-order condition and the value function both appear in the constraints. Since the value function is in general nonsmooth, the combined problem is in general a nonsmooth and nonconvex optimization problem. We propose a smoothing augmented Lagrangian method for solving a general class of nonsmooth and nonconvex constrained optimization problems. We show that, if the sequence of penalty parameters is bounded, then any accumulation point is a Karush-Kuhn-Tucker (KKT) point of the nonsmooth optimization problem. The smoothing augmented Lagrangian method is used to solve the combined problem. Numerical experiments show that the algorithm is efficient for solving the simple bilevel program.
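
As a point of reference for the augmented Lagrangian machinery described above, the following is a minimal, illustrative sketch (not the authors' algorithm) of one way a smoothing augmented Lagrangian loop can be organized for a single smoothed equality constraint. The oracles `f`, `grad_f`, `g_mu`, `grad_g_mu` and all parameter schedules are assumptions made for illustration only.

```python
import numpy as np

def smoothing_augmented_lagrangian(f, grad_f, g_mu, grad_g_mu, x0,
                                   mu0=1.0, rho0=10.0, lam0=0.0,
                                   outer_iters=20, inner_iters=200, lr=1e-2):
    """Illustrative sketch: minimize f(x) s.t. g(x) = 0, where g is nonsmooth
    and g_mu(x, mu) is a smooth approximation of g controlled by mu -> 0."""
    x, lam, rho, mu = x0.astype(float).copy(), lam0, rho0, mu0
    for _ in range(outer_iters):
        # Inner loop: approximately minimize the smoothed augmented Lagrangian
        #   L(x) = f(x) + lam * g_mu(x, mu) + (rho / 2) * g_mu(x, mu)**2
        for _ in range(inner_iters):
            g_val = g_mu(x, mu)
            grad_L = grad_f(x) + (lam + rho * g_val) * grad_g_mu(x, mu)
            x -= lr * grad_L
        # Multiplier update, then tighten the penalty and smoothing parameters
        lam += rho * g_mu(x, mu)
        rho *= 2.0
        mu *= 0.5
    return x
```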

3.
We propose a trust-region type method for a class of nonsmooth nonconvex optimization problems where the objective function is the sum of a (possibly nonconvex) smooth function and a (possibly nonsmooth) convex function. The model function of our trust-region subproblem is always quadratic, and the linear term of the model is generated using abstract descent directions. Therefore, the trust-region subproblems can be easily constructed as well as efficiently solved by cheap and standard methods. When the accuracy of the model function at the solution of the subproblem is not sufficient, we add a safeguard on the stepsizes to improve the accuracy. For a class of functions that can be "truncated", an additional truncation step is defined and a stepsize modification strategy is designed. The overall scheme converges globally, and we establish fast local convergence under suitable assumptions. In particular, using a connection with a smooth Riemannian trust-region method, we prove local quadratic convergence for partly smooth functions under a strict complementarity condition. Preliminary numerical results on a family of $\ell_1$-optimization problems are reported and demonstrate the efficiency of our approach.

4.
In this paper, we prove new complexity bounds for methods of convex optimization based only on computation of the function value. The search directions of our schemes are normally distributed random Gaussian vectors. It appears that such methods usually need at most n times more iterations than the standard gradient methods, where n is the dimension of the space of variables. This conclusion is true for both nonsmooth and smooth problems. For the latter class, we present also an accelerated scheme with the expected rate of convergence \(O\Big ({n^2 \over k^2}\Big )\), where k is the iteration counter. For stochastic optimization, we propose a zero-order scheme and justify its expected rate of convergence \(O\Big ({n \over k^{1/2}}\Big )\). We give also some bounds for the rate of convergence of the random gradient-free methods to stationary points of nonconvex functions, for both smooth and nonsmooth cases. Our theoretical results are supported by preliminary computational experiments.
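
The random Gaussian search directions described above can be sketched as a simple zero-order scheme; the finite-difference smoothing parameter and step size below are illustrative choices, not the constants analyzed in the paper.

```python
import numpy as np

def gaussian_zero_order(f, x0, n_iters=5000, mu=1e-6, step=1e-3, seed=0):
    """Sketch of a random gradient-free method: the search direction is a
    standard Gaussian vector u, and the derivative of f along u is estimated
    by a forward finite difference with smoothing parameter mu."""
    rng = np.random.default_rng(seed)
    x = x0.astype(float).copy()
    for _ in range(n_iters):
        u = rng.standard_normal(x.shape)      # random Gaussian direction
        g = (f(x + mu * u) - f(x)) / mu       # finite-difference directional estimate
        x -= step * g * u                     # gradient-free step
    return x

# Example: minimize a smooth convex quadratic in R^10
f = lambda x: 0.5 * np.dot(x, x)
x_star = gaussian_zero_order(f, np.ones(10))
```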

5.
Distributed consensus optimization has received considerable attention in recent years, and several distributed consensus-based algorithms have been proposed for (nonsmooth) convex and (smooth) nonconvex objective functions. However, the behavior of these distributed algorithms on nonconvex, nonsmooth and stochastic objective functions is not understood. This class of functions and this distributed setting are motivated by several applications, including problems in machine learning and signal processing. This paper presents the first convergence analysis of the decentralized stochastic subgradient method for such classes of problems, over networks modeled as fixed undirected graphs.
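
A minimal sketch of a decentralized stochastic subgradient iteration of the kind analyzed above is given below, assuming a doubly stochastic mixing matrix `W` matching the fixed undirected graph and a user-supplied local stochastic subgradient oracle `stoch_subgrad`; these names and the step-size schedule are illustrative assumptions.

```python
import numpy as np

def decentralized_stochastic_subgradient(W, stoch_subgrad, x0, n_iters=1000, seed=0):
    """Sketch: each agent averages its neighbors' iterates through the mixing
    matrix W (doubly stochastic, consistent with the fixed undirected graph) and
    then steps along a stochastic subgradient of its own local objective."""
    rng = np.random.default_rng(seed)
    X = x0.astype(float).copy()                    # shape (n_agents, dim)
    for k in range(n_iters):
        alpha = 1.0 / np.sqrt(k + 1)               # diminishing step size
        G = np.stack([stoch_subgrad(i, X[i], rng) for i in range(X.shape[0])])
        X = W @ X - alpha * G                      # consensus step + subgradient step
    return X.mean(axis=0)                          # average of the agents' iterates
```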

6.
Optimization, 2012, 61(7): 1439-1469
In this article we use abstract convexity theory to unify and generalize many different concepts of nonsmooth analysis. We introduce the concepts of abstract codifferentiability, abstract quasidifferentiability and abstract convex (concave) approximations of a nonsmooth function mapping a topological vector space to an order complete topological vector lattice. We study basic properties of these notions, construct an elaborate calculus of abstract codifferentiable functions and discuss the continuity of the abstract codifferential. We demonstrate that many classical concepts of nonsmooth analysis, such as subdifferentiability and quasidifferentiability, are particular cases of the concepts of abstract codifferentiability and abstract quasidifferentiability. We also show that abstract convex and abstract concave approximations are a very convenient tool for the study of nonsmooth extremum problems. We use these approximations to obtain various necessary optimality conditions for nonsmooth nonconvex optimization problems with abstract codifferentiable or abstract quasidifferentiable objective functions and constraints. We then demonstrate how these conditions can be transformed into simpler and more constructive conditions in some particular cases.

7.
Gradient methods for minimizing composite functions
In this paper we analyze several new methods for solving optimization problems with the objective function formed as a sum of two terms: one is smooth and given by a black-box oracle, and another is a simple general convex function with known structure. Despite the absence of good properties of the sum, such problems, both in convex and nonconvex cases, can be solved with efficiency typical for the first part of the objective. For convex problems of the above structure, we consider primal and dual variants of the gradient method (with convergence rate $O\left({1 \over k}\right)$), and an accelerated multistep version with convergence rate $O\left({1 \over k^2}\right)$, where $k$ is the iteration counter. For nonconvex problems with this structure, we prove convergence to a point from which there is no descent direction. In contrast, we show that for general nonsmooth, nonconvex problems, even resolving the question of whether a descent direction exists from a point is NP-hard. For all methods, we suggest some efficient “line search” procedures and show that the additional computational work necessary for estimating the unknown problem class parameters can only multiply the complexity of each iteration by a small constant factor. We also present the results of preliminary computational experiments, which confirm the superiority of the accelerated scheme.
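
For the composite structure above, a minimal sketch of the basic (non-accelerated) gradient step is shown below for the common special case where the simple convex term is an $\ell_1$ penalty, so its proximal operator is soft thresholding; the LASSO-type example data are purely illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (the 'simple' convex term)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def composite_gradient(grad_smooth, L, lam, x0, n_iters=500):
    """Sketch of a composite gradient step for  min_x f(x) + lam * ||x||_1,
    with f smooth and an L-Lipschitz gradient: a gradient step on the smooth
    part followed by the prox of the simple part."""
    x = x0.copy()
    for _ in range(n_iters):
        x = soft_threshold(x - grad_smooth(x) / L, lam / L)
    return x

# Example: LASSO-type problem  min 0.5*||A x - b||^2 + lam*||x||_1
A = np.random.randn(50, 100); b = np.random.randn(50); lam = 0.1
L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the smooth gradient
x_hat = composite_gradient(lambda x: A.T @ (A @ x - b), L, lam, np.zeros(100))
```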

8.
Multiobjective DC optimization problems arise naturally, for example, in data classification and cluster analysis, playing a crucial role in data mining. In this paper, we propose a new multiobjective double bundle method designed for nonsmooth multiobjective optimization problems whose objective and constraint functions can be presented as a difference of two convex (DC) functions. The method is of the descent type and generalizes the ideas of the double bundle method to multiobjective and constrained problems. We utilize a special cutting plane model tailored to the DC improvement function such that both the convex and the concave behaviour of the function are captured. The method is proved to be finitely convergent to a weakly Pareto stationary point under mild assumptions. Finally, we consider some numerical experiments and compare the solutions produced by our method with those of a method designed for general nonconvex multiobjective problems. This is done in order to validate the use of a method aimed specifically at DC objectives instead of a general nonconvex method.

9.
Removing impulse noise is one of the important tasks in image restoration. We propose a class of nonsmooth nonconvex models to restore images degraded by blur and impulse noise; the model provides a flexible mechanism for incorporating prior information, such as box constraints or low-rank structure. To solve the proposed nonconvex problem, we adopt a proximal linearized minimization algorithm, and we apply the alternating direction method of multipliers to the subproblems arising in the algorithm. Under the assumption that the objective function satisfies the Kurdyka-Łojasiewicz property, we prove the global convergence of the proposed algorithm. Numerical experiments show that, in terms of both subjective and objective quality assessment, our method outperforms the $\ell_{1}$TV and nonconvex TV models.

10.
We propose an inexact proximal bundle method for constrained nonsmooth nonconvex optimization problems whose objective and constraint functions are known through oracles which provide inexact information. The errors in function and subgradient evaluations may be unknown, but they are assumed to be bounded. To handle the nonconvexity, we first use the redistributed idea, and we face additional difficulties by allowing inexactness in the available information. We further examine a modified improvement function to address the difficulties caused by the constraint functions. The numerical results show the good performance of our inexact method on a large class of nonconvex optimization problems. The approach is also assessed on semi-infinite programming problems, and some encouraging numerical experiences are reported.

11.
This paper proposes an algorithm for the unconstrained minimization of a class of nonsmooth and nonconvex functions that can be written as finite-max functions. A gradient- and function-based sampling method is proposed which, under special circumstances, either moves superlinearly to a minimizer of the problem of interest or improves the optimality certificate. Global and local convergence analyses are presented, as well as examples that illustrate the obtained theoretical results.

12.
In this paper, a new algorithm to locally minimize nonsmooth functions represented as a difference of two convex functions (DC functions) is proposed. The algorithm is based on the concept of the codifferential. It is assumed that a DC decomposition of the objective function is known a priori. We develop an algorithm to compute descent directions using only a few elements of the codifferential. The convergence of the minimization algorithm is studied, and it is compared with different versions of bundle methods using the results of numerical experiments.

13.
In view of the minimization of a nonsmooth nonconvex function f, we prove an abstract convergence result for descent methods satisfying a sufficient-decrease assumption and allowing a relative error tolerance. Our result guarantees the convergence of bounded sequences, under the assumption that the function f satisfies the Kurdyka–Łojasiewicz inequality. This assumption allows us to cover a wide range of problems, including nonsmooth semi-algebraic (or more generally tame) minimization. The specialization of our result to different kinds of structured problems provides several new convergence results for inexact versions of the gradient method, the proximal method, the forward–backward splitting algorithm, the gradient projection method and some proximal regularization of the Gauss–Seidel method in a nonconvex setting. Our results are illustrated through feasibility problems and iterative thresholding procedures for compressive sensing.
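
As an illustration of the kind of iterative thresholding procedure covered by this framework, the sketch below runs iterative hard thresholding on a toy compressive sensing model and checks a sufficient-decrease condition of the form used in such analyses; the model, constants and stopping rule are assumptions for illustration only.

```python
import numpy as np

def hard_threshold(v, s):
    """Keep the s largest-magnitude entries of v (projection onto s-sparse vectors)."""
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-s:]
    out[keep] = v[keep]
    return out

def iterative_hard_thresholding(A, b, s, n_iters=300, a=1e-8):
    """Sketch of a forward-backward-type descent scheme for the (nonconvex,
    semi-algebraic) compressive-sensing model
        min 0.5*||A x - b||^2   subject to   x being s-sparse,
    with an explicit sufficient-decrease check."""
    L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    obj = lambda z: 0.5 * np.linalg.norm(A @ z - b) ** 2
    for _ in range(n_iters):
        x_new = hard_threshold(x - A.T @ (A @ x - b) / L, s)
        # Sufficient decrease: F(x_k) - F(x_{k+1}) >= a * ||x_{k+1} - x_k||^2
        if obj(x) - obj(x_new) < a * np.linalg.norm(x_new - x) ** 2:
            break
        x = x_new
    return x
```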

14.
In this paper, we study the Kurdyka–Łojasiewicz (KL) exponent, an important quantity for analyzing the convergence rate of first-order methods. Specifically, we develop various calculus rules to deduce the KL exponent of new (possibly nonconvex and nonsmooth) functions formed from functions with known KL exponents. In addition, we show that the well-studied Luo–Tseng error bound, together with a mild assumption on the separation of stationary values, implies that the KL exponent is \(\frac{1}{2}\). The Luo–Tseng error bound is known to hold for a large class of concrete structured optimization problems, and thus we deduce the KL exponent of a large class of functions whose exponents were previously unknown. Building upon this and the calculus rules, we are then able to show that for many convex or nonconvex optimization models for applications such as sparse recovery, the objective function's KL exponent is \(\frac{1}{2}\). This includes the least squares problem with smoothly clipped absolute deviation regularization or minimax concave penalty regularization and the logistic regression problem with \(\ell _1\) regularization. Since many existing local convergence rate analyses for first-order methods in the nonconvex scenario rely on the KL exponent, our results enable us to obtain explicit convergence rates for various first-order methods when they are applied to a large variety of practical optimization models. Finally, we further illustrate how our results can be applied to establish local linear convergence of the proximal gradient algorithm and the inertial proximal algorithm with constant step sizes for some specific models that arise in sparse recovery.

15.
M. V. Dolgopolik, Optimization, 2017, 66(10): 1577-1622
In this article, we develop a general theory of exact parametric penalty functions for constrained optimization problems. The main advantage of the method of parametric penalty functions is that a parametric penalty function can be both smooth and exact, unlike the standard (i.e. non-parametric) exact penalty functions, which are always nonsmooth. We obtain several necessary and/or sufficient conditions for the exactness of parametric penalty functions, and for the zero duality gap property to hold true for these functions. We also prove some convergence results for the method of parametric penalty functions, and derive necessary and sufficient conditions for a parametric penalty function to have no stationary points outside the set of feasible points of the constrained optimization problem under consideration. In the second part of the paper, we apply the general theory of exact parametric penalty functions to a class of parametric penalty functions introduced by Huyer and Neumaier, and to smoothing approximations of nonsmooth exact penalty functions. The general approach adopted in this article allows us to unify and significantly sharpen many existing results on parametric penalty functions.

16.
Typically, exact information about the whole subdifferential is not available for intrinsically nonsmooth objective functions such as marginal functions. In such cases the semismoothness of the objective function cannot be proved or is even violated, and standard nonsmooth methods cannot be used. In this paper, we propose a new approach to developing a convergent descent method for this class of nonsmooth functions. The approach is based on continuous outer subdifferentials, which we introduce here. On this basis, we further present a conceptual optimization algorithm and prove its global convergence. This leads to a constructive approach that enables us to create a convergent descent method. Within the algorithmic framework, neither semismoothness nor the calculation of exact subgradients is required. This is in contrast to other approaches, which are usually based on the assumption of semismoothness of the objective function.

17.
Global convergence of a class of unconstrained minimization methods including the Fletcher-Reeves method
This paper studies the global convergence of a class of unconstrained optimization methods that includes the Fletcher-Reeves conjugate gradient method. Certain properties of the Fletcher-Reeves method play an important role in the convergence analysis. We give a simple proof that, under a Wolfe-type inexact line search, the methods in this class generate descent directions and are globally convergent for smooth nonconvex functions. The global convergence result is also extended to a generalized Wolfe-type inexact line search.
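
A minimal sketch of the Fletcher-Reeves conjugate gradient iteration with a Wolfe-type inexact line search (here borrowed from `scipy.optimize.line_search`) is given below; the restart rule, tolerances and example function are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.optimize import line_search   # Wolfe-type inexact line search

def fletcher_reeves(f, grad, x0, n_iters=200, tol=1e-8):
    """Illustrative sketch of the Fletcher-Reeves conjugate gradient method
    with a Wolfe-type inexact line search."""
    x = x0.astype(float).copy()
    g = grad(x)
    d = -g
    for _ in range(n_iters):
        if np.linalg.norm(g) < tol:
            break
        alpha = line_search(f, grad, x, d, gfk=g)[0]
        if alpha is None:                   # line search failed: restart with a small
            d, alpha = -g, 1e-4             # steepest-descent step (illustrative safeguard)
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta_fr = np.dot(g_new, g_new) / np.dot(g, g)   # Fletcher-Reeves parameter
        d = -g_new + beta_fr * d
        x, g = x_new, g_new
    return x

# Example: a smooth nonconvex Rosenbrock function in R^2
f = lambda z: (1 - z[0])**2 + 100 * (z[1] - z[0]**2)**2
grad = lambda z: np.array([-2*(1 - z[0]) - 400*z[0]*(z[1] - z[0]**2),
                           200*(z[1] - z[0]**2)])
x_min = fletcher_reeves(f, grad, np.array([-1.2, 1.0]))
```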

18.
In this paper, we analyze the convergence of the Peaceman-Rachford splitting method (PRSM) for a type of nonconvex and nonsmooth optimization with linear constraints, whose objective function is the sum of a proper lower semicontinuous function and a strongly convex differentiable function. When a suitable penalty parameter is chosen and the sequence of iterates is bounded, we show the global convergence of the PRSM. Furthermore, under the assumption that the associated function satisfies the Kurdyka-Łojasiewicz property, we prove the strong convergence of the PRSM. We also provide sufficient conditions guaranteeing the boundedness of the generated sequence. Finally, we present some preliminary numerical results to show the effectiveness of the PRSM and also give a comparison with the Douglas-Rachford splitting method.
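
To make the splitting concrete, the sketch below applies a scaled-dual Peaceman-Rachford iteration to a toy instance of the problem class above (a proper lower semicontinuous $\ell_0$ term plus a strongly convex quadratic, coupled by the linear constraint x - y = 0); the specific model and parameter values are illustrative assumptions rather than the setting of the paper.

```python
import numpy as np

def prsm_l0_denoise(b, lam=0.1, beta=1.0, n_iters=200):
    """Sketch of the Peaceman-Rachford splitting method (scaled dual form) on
        min_{x,y}  lam * ||x||_0 + 0.5 * ||y - b||^2   s.t.   x - y = 0,
    i.e. a proper lsc nonconvex term plus a strongly convex smooth term."""
    x = np.zeros_like(b)
    y = b.copy()
    u = np.zeros_like(b)                         # scaled multiplier
    thr = np.sqrt(2.0 * lam / beta)              # hard-threshold level of the l0 prox
    for _ in range(n_iters):
        v = y - u
        x = np.where(np.abs(v) > thr, v, 0.0)    # x-update: prox of lam*||.||_0
        u = u + (x - y)                          # first (half) multiplier update
        y = (b + beta * (x + u)) / (1.0 + beta)  # y-update: prox of the quadratic term
        u = u + (x - y)                          # second (half) multiplier update
    return x
```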

19.
N. Karmitsa, Optimization, 2016, 65(8): 1599-1614
Typically, practical nonsmooth optimization problems involve functions with hundreds of variables. Moreover, there are many practical problems where the computation of even one subgradient is either a difficult or an impossible task. In such cases, the usual subgradient-based optimization methods cannot be used. However, derivative-free methods are applicable, since they do not require explicit computation of subgradients. In this paper, we propose an efficient diagonal discrete gradient bundle method for derivative-free, possibly nonconvex, nonsmooth minimization. The convergence of the proposed method is proved for semismooth functions, which are not necessarily differentiable or convex. The method is implemented in Fortran 95, and the numerical experiments confirm the usability and efficiency of the method, especially in the case of large-scale problems.

20.
In this paper, we investigate the convergence of the proximal iteratively reweighted algorithm for a class of nonconvex and nonsmooth problems. Such problems include numerous models in the areas of signal processing and machine learning research. Two extensions of the algorithm are also studied. We provide a unified scheme for these three algorithms. Using the Kurdyka–Łojasiewicz property, we prove that the unified algorithm globally converges to a critical point of the objective function.
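
A minimal sketch of a proximal iteratively reweighted $\ell_1$ step is shown below for a nonconvex $\ell_p$-regularized least squares model; the choice of model, smoothing constant and weights is an illustrative assumption rather than the exact scheme studied in the paper.

```python
import numpy as np

def proximal_iteratively_reweighted(A, b, lam=0.1, p=0.5, eps=1e-3, n_iters=300):
    """Sketch of a proximal iteratively reweighted l1 scheme for the nonconvex model
        min_x  0.5 * ||A x - b||^2 + lam * sum_i (|x_i| + eps)^p,   0 < p < 1.
    Each iteration linearizes the concave penalty (giving per-coordinate weights)
    and then performs a weighted soft-thresholding step."""
    L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        w = p * (np.abs(x) + eps) ** (p - 1.0)     # reweighting from the concave penalty
        grad = A.T @ (A @ x - b)
        v = x - grad / L
        x = np.sign(v) * np.maximum(np.abs(v) - lam * w / L, 0.0)  # weighted soft threshold
    return x
```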

