Similar Documents
20 similar documents found (search time: 46 ms)
1.
We investigate the convergence of subgradient-oriented descent methods in nonsmooth nonconvex optimization. We prove subsequence convergence for functions with a strict standard model, and we show that convergence to a single critical point can be guaranteed if the Kurdyka–Łojasiewicz inequality is satisfied. We show, by way of an example, that the Kurdyka–Łojasiewicz inequality alone is not sufficient to guarantee convergence to critical points.
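As a loose illustration of the simplest member of this algorithm family, here is a minimal Python sketch of a plain subgradient method with diminishing steps. The problem instance (f(x) = ||x||_1) and all names are assumptions for the example, not the paper's subgradient-oriented framework.

```python
import numpy as np

def subgradient_descent(subgrad, x0, steps=500):
    """Plain subgradient method with diminishing steps t_k = 1/(k+1).

    `subgrad(x)` returns any subgradient of the objective at x.
    """
    x = np.asarray(x0, dtype=float)
    for k in range(steps):
        g = subgrad(x)
        x = x - g / (k + 1.0)          # diminishing steplength
    return x

# Example: f(x) = ||x||_1; sign(x) is a valid subgradient everywhere.
x_star = subgradient_descent(np.sign, x0=[3.0, -2.0])
print(x_star)                          # close to the minimizer at the origin
```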

2.
Several optimization schemes are known for convex optimization problems; numerical algorithms for nonconvex optimization problems, however, remain underdeveloped. Significant progress beyond convexity was made by considering the class of functions representable as differences of convex functions. In this paper, we introduce a generalized proximal point algorithm to minimize the difference of a nonconvex function and a convex function. We also establish convergence results for this algorithm under the main assumption that the objective function satisfies the Kurdyka–Łojasiewicz property.
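A minimal sketch of the underlying proximal point iteration, assuming a one-dimensional toy objective with DC structure and a numerical inner solve; the paper's generalized algorithm and its exact assumptions are not reproduced here.

```python
from scipy.optimize import minimize_scalar

def proximal_point(f, x0, lam=0.5, iters=50):
    """Proximal point iteration x_{k+1} = argmin f(y) + (y - x_k)^2 / (2*lam),
    with the (possibly nonconvex) inner problem solved numerically in 1D."""
    x = float(x0)
    for _ in range(iters):
        res = minimize_scalar(lambda y: f(y) + (y - x) ** 2 / (2 * lam),
                              bounds=(x - 10, x + 10), method="bounded")
        x = res.x
    return x

# Toy DC objective: f(y) = y^4/4 - y^2/2, i.e. (convex) - (convex).
x_star = proximal_point(lambda y: y ** 4 / 4 - y ** 2 / 2, x0=0.5)
print(x_star)                          # approaches the critical point y = 1
```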

3.
In this paper, we investigate the convergence of the proximal iteratively reweighted algorithm for a class of nonconvex and nonsmooth problems, which includes numerous models from signal processing and machine learning research. Two extensions of the algorithm are also studied, and we provide a unified scheme covering all three algorithms. Using the Kurdyka–Łojasiewicz property, we prove that the unified algorithm globally converges to a critical point of the objective function.
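One common instance of proximal iteratively reweighted minimization is reweighted soft-thresholding for a log-penalized least-squares model; the sketch below assumes that instance, with all parameter choices illustrative rather than taken from the paper.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_irw_l1(A, b, lam=0.1, eps=1e-2, iters=300):
    """Proximal iteratively reweighted l1 for
       min 0.5*||Ax - b||^2 + lam * sum(log(|x_i| + eps))."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        w = lam / (np.abs(x) + eps)    # weights from the concave penalty
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, w / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[:5] = 1.0
x_hat = prox_irw_l1(A, A @ x_true)     # recovers a sparse solution
```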

4.
In this paper we propose an alternating block version of a variable metric linesearch proximal gradient method. This algorithm addresses problems where the objective function is the sum of a smooth term, whose variables may be coupled, plus a separable part given by the sum of two or more convex, possibly nonsmooth functions, each depending on a single block of variables. Our approach is characterized by the possibility of performing several proximal gradient steps for updating every block of variables and by an Armijo backtracking linesearch for adaptively computing the steplength parameter. Under the assumptions that the objective function satisfies the Kurdyka–Łojasiewicz property at each point of its domain and that the gradient of the smooth part is locally Lipschitz continuous, we prove convergence of the sequence of iterates generated by the method. Numerical experience on an image blind deconvolution problem shows the improvements obtained by adopting a variable number of inner block iterations combined with a variable metric in the computation of the proximal operator.
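A rough sketch of one block update with Armijo-type backtracking, assuming the simplest possible separable nonsmooth term (the indicator of the nonnegative orthant, whose prox is clipping) and a toy coupled quadratic; the variable metric and the multiple-inner-iteration features of the paper's method are omitted.

```python
import numpy as np

def block_prox_grad_step(f, g_block, x, y, on_x, t0=1.0, beta=0.5):
    """One proximal gradient step on a single block with backtracking.
    The nonsmooth term is the indicator of {z >= 0}, so prox = clipping."""
    z = x if on_x else y
    grad = g_block(x, y)
    f_old = f(x, y)
    t = t0
    while True:
        z_new = np.maximum(z - t * grad, 0.0)            # prox = projection
        xn, yn = (z_new, y) if on_x else (x, z_new)
        # accept once the quadratic model with curvature 1/t majorizes f
        if f(xn, yn) <= f_old + grad @ (z_new - z) \
                + np.sum((z_new - z) ** 2) / (2 * t):
            return xn, yn
        t *= beta                                        # backtrack

# Smooth coupled term f(x, y) = 0.5*||x + y - c||^2 over x, y >= 0.
c = np.array([1.0, 2.0, -0.5])
f = lambda x, y: 0.5 * np.sum((x + y - c) ** 2)
g = lambda x, y: x + y - c            # gradient w.r.t. either block
x, y = np.ones(3), np.ones(3)
for _ in range(100):
    x, y = block_prox_grad_step(f, g, x, y, on_x=True)
    x, y = block_prox_grad_step(f, g, x, y, on_x=False)
```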

5.
The forward–backward splitting method (FBS) for minimizing a nonsmooth composite function can be interpreted as a (variable-metric) gradient method on a continuously differentiable function which we call the forward–backward envelope (FBE). This makes it possible to extend algorithms for smooth unconstrained optimization to nonsmooth (possibly constrained) problems. Since the FBE can be computed simply by evaluating forward–backward steps, the resulting methods rely on the same kind of black-box oracle as FBS. We propose an algorithmic scheme that enjoys the same global convergence properties as FBS when the problem is convex, or when the objective function possesses the Kurdyka–Łojasiewicz property at its critical points. Moreover, when quasi-Newton directions are used, the proposed method achieves superlinear convergence provided that the usual second-order sufficiency conditions on the FBE hold at the limit point of the generated sequence. Such conditions translate into milder requirements on the original function involving generalized second-order differentiability. We show that BFGS fits our framework and that the limited-memory variant L-BFGS is well suited for large-scale problems, greatly outperforming FBS and its accelerated version in practice, as well as ADMM and other problem-specific solvers. The analysis of superlinear convergence is based on an extension of the Dennis–Moré theorem to the proposed algorithmic scheme.
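The FBE itself is cheap to evaluate from one forward–backward step. The sketch below assumes the lasso-type instance f(x) = 0.5*||Ax - b||^2 and g = lam*||.||_1; the identity used is the standard one, but the instance and parameter values are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fbe(x, A, b, lam, gamma):
    """Forward-backward envelope of 0.5*||Ax-b||^2 + lam*||x||_1 at x:
       FBE(x) = f(x) + <grad f(x), T(x)-x> + ||T(x)-x||^2/(2*gamma) + g(T(x)),
       where T(x) = prox_{gamma*g}(x - gamma*grad f(x)) is one FBS step."""
    r = A @ x - b
    grad = A.T @ r
    T = soft_threshold(x - gamma * grad, gamma * lam)   # forward-backward step
    d = T - x
    return 0.5 * r @ r + grad @ d + d @ d / (2 * gamma) + lam * np.abs(T).sum()

# Usage: gamma should satisfy gamma < 1/L with L = ||A||^2 (here L = 4).
A = np.array([[1.0, 0.0], [0.0, 2.0]]); b = np.array([1.0, 1.0])
val = fbe(np.zeros(2), A, b, lam=0.1, gamma=0.2)
```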

6.
Optimization, 2012, 61(6): 1075–1105

In this paper, we consider a class of sparse inverse semidefinite quadratic programming problems, for which a nonconvex alternating direction method of multipliers is investigated. Under mild conditions, we establish convergence results for our algorithm, and the corresponding non-ergodic iteration complexity is also analyzed under the assumption that the potential function satisfies the Kurdyka–Łojasiewicz property. Numerical results show that our algorithm is well suited to solving the given sparse inverse semidefinite quadratic programming problems.
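The paper's model is specialized, but the generic three-step ADMM iteration structure can be illustrated on a convex stand-in. The sketch below is scaled-form ADMM for the lasso, an assumption chosen only to show the update pattern, not the paper's nonconvex algorithm.

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    """Scaled-form ADMM for min 0.5*||Ax-b||^2 + lam*||z||_1  s.t.  x = z.
    x-update: linear solve; z-update: soft threshold; u: scaled dual."""
    n = A.shape[1]
    AtA, Atb = A.T @ A, A.T @ b
    x = z = u = np.zeros(n)
    M = np.linalg.inv(AtA + rho * np.eye(n))   # cached x-subproblem solve
    for _ in range(iters):
        x = M @ (Atb + rho * (z - u))
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)
        u = u + x - z                          # dual ascent on the residual
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 60)); b = rng.standard_normal(30)
z = admm_lasso(A, b, lam=0.5)
```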

7.
Feasible Direction Interior-Point Technique for Nonlinear Optimization
We propose a feasible direction approach for the minimization, by interior-point algorithms, of a smooth function under smooth equality and inequality constraints. It consists of the iterative solution, in the primal and dual variables, of the Karush–Kuhn–Tucker first-order optimality conditions. At each iteration, a descent direction is defined by solving a linear system. In a second stage, the linear system is perturbed so as to deflect the descent direction and obtain a feasible descent direction. A line search is then performed to obtain a new interior point and ensure global convergence. Based on this approach, first-order, Newton, and quasi-Newton algorithms can be obtained. To introduce the method, we first consider the inequality-constrained problem and present a globally convergent basic algorithm. Particular first-order and quasi-Newton versions of this algorithm are also stated. Equality constraints are then included. The method, which is simple to code, does not require the solution of quadratic programs; it is neither a penalty method nor a barrier method. Several practical applications and numerical results show that our method is robust and efficient.

8.
Huang, Wen; Wei, Ke. Mathematical Programming, 2022, 194(1–2): 371–413

In the Euclidean setting, the proximal gradient method and its accelerated variants are a class of efficient algorithms for optimization problems with decomposable objectives. In this paper, we develop a Riemannian proximal gradient method (RPG) and its accelerated variant (ARPG) for similar problems constrained to a manifold. The global convergence of RPG is established under mild assumptions, and an O(1/k) convergence rate is also derived for RPG based on the notion of retraction convexity. If the objective function obeys the Riemannian Kurdyka–Łojasiewicz (KL) property, it is further shown that the sequence generated by RPG converges to a single stationary point. As in the Euclidean setting, a local convergence rate can be established if the objective function satisfies the Riemannian KL property with an exponent. Moreover, we show that the restriction of a semialgebraic function to the Stiefel manifold satisfies the Riemannian KL property, which covers, for example, the well-known sparse PCA problem. Numerical experiments on random and synthetic data are conducted to test the performance of the proposed RPG and ARPG.
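A minimal sketch of one Riemannian step on the unit sphere, assuming only a smooth objective: Riemannian gradient by tangent-space projection, then retraction by normalization. The Riemannian proximal mapping that RPG adds for the nonsmooth term is deliberately omitted here.

```python
import numpy as np

def riemannian_gradient_step(x, egrad, step=0.05):
    """One gradient step on the unit sphere S^{n-1}:
    project the Euclidean gradient onto the tangent space at x,
    then retract back to the sphere by normalization."""
    rgrad = egrad - (x @ egrad) * x    # tangent-space projection
    y = x - step * rgrad
    return y / np.linalg.norm(y)       # metric-projection retraction

# Example: minimize x^T C x on the sphere (an extreme eigenvector problem).
rng = np.random.default_rng(1)
C = rng.standard_normal((5, 5)); C = C + C.T
x = rng.standard_normal(5); x /= np.linalg.norm(x)
for _ in range(500):
    x = riemannian_gradient_step(x, 2 * C @ x)
```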


9.
We introduce a proximal alternating linearized minimization (PALM) algorithm for solving a broad class of nonconvex and nonsmooth minimization problems. Building on the powerful Kurdyka–Łojasiewicz property, we derive a self-contained convergence analysis framework and establish that each bounded sequence generated by PALM globally converges to a critical point. Our approach allows us to analyze various classes of nonconvex-nonsmooth problems and related nonconvex proximal forward–backward algorithms with semi-algebraic problem data, the latter property being shared by many functions arising in a wide variety of fundamental applications. A by-product of our framework also shows that our results are new even in the convex setting. As an illustration of the results, we derive a new and simple globally convergent algorithm for solving the sparse nonnegative matrix factorization problem.
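A hedged sketch of PALM on the nonnegative matrix factorization instance mentioned in the abstract (without the sparsity term): alternate proximal gradient steps per block, with steplengths from the block Lipschitz constants. The 1/L step rule is the usual recipe; the paper's exact constants may differ.

```python
import numpy as np

def palm_nmf(M, r, iters=200, seed=0):
    """PALM for min 0.5*||M - X @ Y||_F^2 with X, Y >= 0.
    The prox of the nonnegativity indicator is clipping at zero."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    X, Y = rng.random((m, r)), rng.random((r, n))
    for _ in range(iters):
        Lx = max(np.linalg.norm(Y @ Y.T, 2), 1e-12)   # block Lipschitz const
        X = np.maximum(X - ((X @ Y - M) @ Y.T) / Lx, 0.0)
        Ly = max(np.linalg.norm(X.T @ X, 2), 1e-12)
        Y = np.maximum(Y - (X.T @ (X @ Y - M)) / Ly, 0.0)
    return X, Y

M = np.abs(np.random.default_rng(2).standard_normal((20, 15)))
X, Y = palm_nmf(M, r=4)
```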

10.
In this article, by slightly modifying the search direction of the nonmonotone Hestenes–Stiefel method, a variant Hestenes–Stiefel conjugate gradient method is proposed that satisfies the sufficient descent condition independently of any line search. The algorithm also incorporates both gradient and function value information. We establish the global convergence of our method without assuming that the steplength is bounded away from zero. Numerical results illustrate that our method efficiently solves the test problems and is therefore promising.
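For context, the classical Hestenes–Stiefel update that the variant modifies looks as follows on a quadratic with exact steplengths; this is the textbook method, not the paper's modified direction.

```python
import numpy as np

def cg_hs(grad, hess_vec, x, iters=50, tol=1e-10):
    """Conjugate gradient with the classical Hestenes-Stiefel formula
    beta = g_{k+1}^T y_k / (d_k^T y_k),  y_k = g_{k+1} - g_k,
    on a quadratic, using the exact step alpha = -g^T d / (d^T Q d)."""
    g = grad(x)
    d = -g
    for _ in range(iters):
        Qd = hess_vec(d)
        alpha = -(g @ d) / (d @ Qd)
        x = x + alpha * d
        g_new = grad(x)
        if np.linalg.norm(g_new) < tol:
            break
        y = g_new - g
        d = -g_new + ((g_new @ y) / (d @ y)) * d   # Hestenes-Stiefel update
        g = g_new
    return x

Q = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x = cg_hs(lambda v: Q @ v - b, lambda v: Q @ v, np.zeros(2))
```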

11.
In this paper we develop random block coordinate descent methods for minimizing large-scale linearly constrained convex problems over networks. Since the problem contains coupled constraints, we devise an algorithm that updates in parallel, at each iteration, at least two random components of the solution, chosen according to a given probability distribution. These computations can be performed in a distributed fashion according to the structure of the network. The per-iteration complexity of the proposed methods is usually lower than that of the full gradient method when the number of nodes in the network is much larger than the number of updated components. On smooth convex problems, we prove that these methods exhibit a sublinear worst-case convergence rate in the expected value of the objective function, and that this rate depends linearly on the number of components to be updated. On smooth strongly convex problems we prove that our methods converge linearly. We also show how to choose the probabilities so that the randomized algorithms converge as fast as possible, which leads to solving a sparse semidefinite program. We then describe several applications that fit our framework, in particular the convex feasibility problem. Finally, numerical experiments illustrate the behaviour of our methods, showing in particular that updating more than two components in parallel accelerates the method.
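A minimal sketch of the idea with a single coupled constraint sum(x) = const: updating a random pair along e_i - e_j keeps feasibility. The step rule and the example problem are illustrative assumptions; the paper's distributed, network-structured scheme is considerably richer.

```python
import numpy as np

def random_pairwise_cd(grad, x, step=0.1, iters=5000, seed=0):
    """Randomized 2-coordinate descent for min f(x) s.t. sum(x) = const.
    Moving a random pair along e_i - e_j preserves the coupled constraint."""
    rng = np.random.default_rng(seed)
    n = x.size
    for _ in range(iters):
        i, j = rng.choice(n, size=2, replace=False)
        g = grad(x)
        delta = step * (g[i] - g[j]) / 2.0
        x[i] -= delta
        x[j] += delta                  # sum(x) is unchanged
    return x

# Example: min ||x - a||^2 subject to sum(x) = 1, from a feasible start.
a = np.array([0.5, 0.2, 0.9, 0.4])
x = random_pairwise_cd(lambda x: 2 * (x - a), np.full(4, 0.25))
```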

12.
We propose feasible descent methods for constrained minimization that do not make explicit use of the derivative of the objective function. The methods iteratively sample the objective function value along a finite set of feasible search arcs and decrease the sampling stepsize if an improved objective function value is not sampled. The search arcs are obtained by projecting search direction rays onto the feasible set, and the search directions are chosen so that a subset approximately generates the cone of first-order feasible variations at the current iterate. We show that these methods have desirable convergence properties under certain regularity assumptions on the constraints. In the case of linear constraints, the projections are redundant and the regularity assumptions hold automatically. Numerical experience with the methods in the linearly constrained case is reported. Received: November 12, 1999 / Accepted: April 6, 2001 / Published online: October 26, 2001
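For box constraints, where the projection onto the feasible set is trivial, the scheme reduces to something like projected compass search: sample along coordinate rays and halve the stepsize when nothing improves. The sketch below assumes that special case only.

```python
import numpy as np

def compass_search(f, x, lo, hi, step=1.0, tol=1e-6):
    """Derivative-free feasible descent for box constraints lo <= x <= hi.
    Samples f along +/- coordinate rays projected onto the box and halves
    the sampling stepsize whenever no sampled point improves the value."""
    n = x.size
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(n):
            for s in (+1.0, -1.0):
                y = x.copy()
                y[i] = np.clip(y[i] + s * step, lo[i], hi[i])  # projection
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5
    return x

x = compass_search(lambda v: (v[0] - 2) ** 2 + (v[1] + 1) ** 2,
                   np.zeros(2), lo=np.array([-1.0, -1.0]),
                   hi=np.array([1.0, 1.0]))   # box minimizer (1, -1)
```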

13.
Optimization, 2012, 61(12): 1383–1403
We consider a proximal algorithm with a quasi-distance applied to nonconvex and nonsmooth functions with analytic properties in a minimization problem. We show the behavioural importance of this proximal point model for the formation of habit in the decision sciences. Convergence of the sequence generated by our algorithm to a critical limit point is guaranteed under standard coercivity conditions and by using the Kurdyka–Łojasiewicz inequality. We present a definition of habit that accommodates this kind of convergence.

14.
In this paper, we propose two proximal-gradient algorithms for fractional programming problems in real Hilbert spaces, where the numerator is a proper, convex, lower semicontinuous function and the denominator is a smooth function, either concave or convex. In the iterative schemes, we perform a proximal step with respect to the nonsmooth numerator and a gradient step with respect to the smooth denominator. The algorithm for a concave denominator has the particularity that it generates sequences approaching both the set of (global) optimal solutions and the optimal objective value of the underlying fractional programming problem. For a convex denominator, the numerical scheme approaches the set of critical points of the objective function, provided the latter satisfies the Kurdyka–Łojasiewicz property.

15.
In this paper, we prove a theorem on convergence to a single point for descent minimization methods. When the objective function is differentiable, the convergence point is a stationary point. The theorem, however, is also applicable to nondifferentiable functions. It is then applied to prove convergence of some nongradient algorithms.

16.
Dong Li and Zhou Jinchuan, 《数学杂志》 (Journal of Mathematics), 2015, 35(1): 173–179
This paper studies unconstrained optimization problems. Using information from the current and previous iterates together with a curve search technique to generate the next iterate, we obtain a new descent method for unconstrained optimization. Global convergence of the algorithm is proved under rather weak conditions, and a linear convergence rate is established when the objective function is uniformly convex. Preliminary numerical experiments show that the algorithm is effective.

17.
Globally Convergent Algorithms for Unconstrained Optimization
A new globalization strategy for solving an unconstrained minimization problem is proposed, based on the idea of combining the Newton direction and the steepest descent direction within each iteration. Global convergence is guaranteed from an arbitrary initial point. The search direction in each iteration is chosen to be as close to the Newton direction as possible, and can be the Newton direction itself. Asymptotically the Newton step is taken in each iteration, so the local convergence is quadratic. Numerical experiments are also reported. A possible combination of a quasi-Newton direction with the steepest descent direction is also considered in our numerical experiments. The differences between the proposed strategy and a few other strategies are also discussed.
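One plausible reading of the blending idea, as a sketch: take d(t) = t*d_newton + (1-t)*d_steepest and backtrack on t until a sufficient-descent test holds, so the Newton direction itself is used whenever it already passes the test. The test constant and the fallback are assumptions of this illustration, not the paper's exact rule.

```python
import numpy as np

def blended_direction(g, H, c=0.1):
    """Pick a direction as close to the Newton direction as possible while
    keeping sufficient descent: backtrack on t in d(t) = t*dN + (1-t)*dSD
    until g @ d <= -c * ||g|| * ||d||."""
    d_sd = -g
    d_n = np.linalg.solve(H, -g)       # may be an ascent direction if H is indefinite
    t = 1.0
    while t > 1e-8:
        d = t * d_n + (1 - t) * d_sd
        if g @ d <= -c * np.linalg.norm(g) * np.linalg.norm(d):
            return d                   # equals the Newton direction when t = 1
        t *= 0.5
    return d_sd                        # fall back to steepest descent

H = np.array([[2.0, 0.0], [0.0, -1.0]])   # indefinite Hessian example
d = blended_direction(np.array([1.0, 1.0]), H)
```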

18.
In this paper, we consider the minimization of a class of nonconvex composite functions with difference-of-convex structure under linear constraints. While problems of this kind can in theory be solved by the celebrated alternating direction method of multipliers (ADMM), a direct application of ADMM often leads to difficult nonconvex subproblems. To address this issue, we propose to convexify the subproblems through a linearization technique, as done in the difference of convex functions algorithm (DCA). Assuming the Kurdyka–Łojasiewicz property, we prove that the resulting algorithm sequentially converges to a critical point. It turns out that in signal and image processing applications such as compressed sensing and image denoising, the proposed algorithm usually enjoys closed-form subproblem solutions and can thus be very efficient. We provide numerical experiments to demonstrate the effectiveness of our algorithm.

19.
In view of the minimization of a nonsmooth nonconvex function f, we prove an abstract convergence result for descent methods satisfying a sufficient-decrease assumption and allowing a relative error tolerance. Our result guarantees the convergence of bounded sequences under the assumption that the function f satisfies the Kurdyka–Łojasiewicz inequality. This assumption covers a wide range of problems, including nonsmooth semi-algebraic (or, more generally, tame) minimization. Specializing our result to different kinds of structured problems provides several new convergence results for inexact versions of the gradient method, the proximal method, the forward–backward splitting algorithm, the gradient projection method, and a proximal regularization of the Gauss–Seidel method in a nonconvex setting. Our results are illustrated through feasibility problems and iterative thresholding procedures for compressive sensing.
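The two abstract assumptions of such frameworks, sufficient decrease and a relative error bound on the subgradient, can be written as small verification predicates; the constants a and b below are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

def h1_sufficient_decrease(f_new, f_old, x_new, x_old, a=1e-3):
    """H1: f(x_{k+1}) + a * ||x_{k+1} - x_k||^2 <= f(x_k)."""
    return f_new + a * np.linalg.norm(x_new - x_old) ** 2 <= f_old

def h2_relative_error(subgrad_new, x_new, x_old, b=10.0):
    """H2: some w_{k+1} in the subdifferential at x_{k+1} satisfies
    ||w_{k+1}|| <= b * ||x_{k+1} - x_k||  (relative inexactness)."""
    return np.linalg.norm(subgrad_new) <= b * np.linalg.norm(x_new - x_old)
```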

20.
We present a new continuous approach, based on DC (difference of convex functions) programming and DC algorithms (DCA), to the problem of supply chain design at the strategic level, when production for a new market opportunity has to be launched among a set of qualified partners. A well-known formulation of this problem is a mixed integer linear program. In this paper, we reformulate the problem as a DC program by using an exact penalty technique. The proposed algorithm is a combination of DCA and a branch-and-bound scheme: it works in a continuous domain but produces mixed integer solutions. Numerical simulations on many empirical data sets show the efficiency of our approach compared with the standard branch-and-bound algorithm.
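The DCA building block used here is easy to sketch: for f = g - h with both parts convex, linearize h at the current point and minimize the convex remainder. The quadratic instance below is an assumption chosen so that the subproblem has a closed form; the exact penalty and the branch-and-bound layer are omitted.

```python
import numpy as np

def dca(Q, b, mu, x0, iters=100):
    """DCA for f(x) = 0.5*x^T (Q - mu*I) x - b^T x, split as g - h with
    g(x) = 0.5*x^T Q x - b^T x (convex for PSD Q), h(x) = 0.5*mu*||x||^2.
    Each step linearizes h at x_k and minimizes the convex remainder:
        x_{k+1} = argmin g(x) - <grad h(x_k), x>  =>  Q x = b + mu * x_k."""
    x = x0
    for _ in range(iters):
        x = np.linalg.solve(Q, b + mu * x)
    return x

Q = np.array([[4.0, 1.0], [1.0, 3.0]])     # positive definite, so g is convex
x = dca(Q, b=np.array([1.0, 2.0]), mu=1.0, x0=np.zeros(2))
```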
