Similar Documents
20 similar documents were retrieved.
1.
We propose a new first-order splitting algorithm for solving jointly the primal and dual formulations of large-scale convex minimization problems involving the sum of a smooth function with Lipschitzian gradient, a nonsmooth proximable function, and linear composite functions. This is a full splitting approach, in the sense that the gradient and the linear operators involved are applied explicitly without any inversion, while the nonsmooth functions are processed individually via their proximity operators. This work brings together and notably extends several classical splitting schemes, like the forward–backward and Douglas–Rachford methods, as well as the recent primal–dual method of Chambolle and Pock designed for problems with linear composite terms.
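To make the structure concrete, a common primal–dual update of this type for minimizing \(h(x)+f(x)+g(Lx)\), with \(h\) smooth, \(f\) and \(g\) proximable and \(L\) linear, reads as follows (a hedged sketch in generic notation, not necessarily the paper's exact scheme or parameter condition):

\[
\begin{aligned}
x^{k+1} &= \operatorname{prox}_{\tau f}\bigl(x^{k} - \tau\nabla h(x^{k}) - \tau L^{*}y^{k}\bigr),\\
y^{k+1} &= \operatorname{prox}_{\sigma g^{*}}\bigl(y^{k} + \sigma L(2x^{k+1} - x^{k})\bigr),
\end{aligned}
\]

where \(\tau,\sigma>0\) are step sizes subject to a condition of the form \(\tau\bigl(\sigma\|L\|^{2} + \beta/2\bigr)\le 1\), with \(\beta\) the Lipschitz constant of \(\nabla h\).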

2.
In this paper, we study the local linear convergence properties of a versatile class of Primal–Dual splitting methods for minimizing composite non-smooth convex optimization problems. Under the assumption that the non-smooth components of the problem are partly smooth relative to smooth manifolds, we present a unified local convergence analysis framework for these methods. More precisely, in our framework, we first show that (i) the sequences generated by Primal–Dual splitting methods identify a pair of primal and dual smooth manifolds in a finite number of iterations, and then (ii) enter a local linear convergence regime, which is characterized based on the structure of the underlying active smooth manifolds. We also show how our results for Primal–Dual splitting can be specialized to cover existing ones on Forward–Backward splitting and Douglas–Rachford splitting/ADMM (alternating direction method of multipliers). Moreover, based on the obtained local convergence analysis, several practical acceleration techniques are discussed. To exemplify the usefulness of the obtained results, we consider several concrete numerical experiments arising from fields including signal/image processing, inverse problems and machine learning. The demonstration not only verifies the local linear convergence behaviour of Primal–Dual splitting methods, but also confirms the insights on how to accelerate them in practice.

3.
The forward–backward splitting method (FBS) for minimizing a nonsmooth composite function can be interpreted as a (variable-metric) gradient method over a continuously differentiable function which we call the forward–backward envelope (FBE). This makes it possible to extend algorithms for smooth unconstrained optimization and apply them to nonsmooth (possibly constrained) problems. Since the FBE can be computed by simply evaluating forward–backward steps, the resulting methods rely on a black-box oracle similar to that of FBS. We propose an algorithmic scheme that enjoys the same global convergence properties as FBS when the problem is convex, or when the objective function possesses the Kurdyka–Łojasiewicz property at its critical points. Moreover, when using quasi-Newton directions the proposed method achieves superlinear convergence provided that the usual second-order sufficiency conditions on the FBE hold at the limit point of the generated sequence. Such conditions translate into milder requirements on the original function involving generalized second-order differentiability. We show that BFGS fits our framework and that the limited-memory variant L-BFGS is well suited for large-scale problems, greatly outperforming FBS and its accelerated version in practice, as well as ADMM and other problem-specific solvers. The analysis of superlinear convergence is based on an extension of the Dennis–Moré theorem to the proposed algorithmic scheme.
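For orientation, one standard way to write the forward–backward envelope of \(F = f + g\), with \(f\) smooth and \(g\) proximable, is (generic notation; a sketch rather than the paper's exact normalization):

\[
\varphi_{\gamma}(x) \;=\; \min_{z}\Bigl\{\, f(x) + \langle \nabla f(x),\, z - x\rangle + g(z) + \tfrac{1}{2\gamma}\|z - x\|^{2} \Bigr\},
\]

whose minimizer is precisely the forward–backward step \(\operatorname{prox}_{\gamma g}\bigl(x - \gamma\nabla f(x)\bigr)\), so evaluating \(\varphi_{\gamma}\) costs essentially one FBS step.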

4.
The Douglas–Rachford and alternating direction method of multipliers are two proximal splitting algorithms designed to minimize the sum of two proper lower semi-continuous convex functions whose proximity operators are easy to compute. The goal of this work is to understand the local linear convergence behaviour of Douglas–Rachford (resp. alternating direction method of multipliers) when the involved functions (resp. their Legendre–Fenchel conjugates) are moreover partly smooth. More precisely, when the two functions (resp. their conjugates) are partly smooth relative to their respective smooth submanifolds, we show that Douglas–Rachford (resp. alternating direction method of multipliers) (i) identifies these manifolds in finite time; (ii) enters a local linear convergence regime. When both functions are locally polyhedral, we show that the optimal convergence radius is given in terms of the cosine of the Friedrichs angle between the tangent spaces of the identified submanifolds. Under polyhedrality of both functions, we also provide conditions sufficient for finite convergence. The obtained results are illustrated by several concrete examples and supported by numerical experiments.
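For reference, a standard form of the Douglas–Rachford iteration for minimizing \(f + g\) with proximable \(f\) and \(g\) is (generic notation, hedged sketch):

\[
x^{k} = \operatorname{prox}_{\gamma f}(z^{k}),\qquad
z^{k+1} = z^{k} + \operatorname{prox}_{\gamma g}\bigl(2x^{k} - z^{k}\bigr) - x^{k},
\]

and in the locally polyhedral case discussed above the local linear rate is governed by \(\cos\theta_{F}\), where \(\theta_{F}\) is the Friedrichs angle between the tangent spaces of the identified submanifolds.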

5.
We introduce and study a geometric modification of the Douglas–Rachford method called the Circumcentered–Douglas–Rachford method. This method iterates by taking the intersection of bisectors of reflection steps for solving certain classes of feasibility problems. The convergence analysis is established for best approximation problems involving two (affine) subspaces and both our theoretical and numerical results compare favorably to the original Douglas–Rachford method. Under suitable conditions, it is shown that the linear rate of convergence of the Circumcentered–Douglas–Rachford method is at least the cosine of the Friedrichs angle between the (affine) subspaces, which is known to be the sharp rate for the Douglas–Rachford method. We also present a preliminary discussion on the Circumcentered–Douglas–Rachford method applied to the many set case and to examples featuring non-affine convex sets.
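A minimal sketch of the circumcenter computation that such a method builds on; the helper below is illustrative (names are mine, not the paper's) and assumes the given points are affinely independent, so that a circumcenter in their affine hull exists and is unique:

```python
import numpy as np

def circumcenter(points):
    """Circumcenter of points p_0, ..., p_m in R^n: the point of their
    affine hull equidistant from all of them (assumed to exist).

    Writing c = p_0 + V.T @ t, with the rows of V equal to p_j - p_0,
    equidistance gives the linear system  G t = 0.5 * diag(G),
    where G = V V^T is the Gram matrix.
    """
    p0 = np.asarray(points[0], dtype=float)
    V = np.array([np.asarray(p, dtype=float) - p0 for p in points[1:]])
    G = V @ V.T
    t = np.linalg.solve(G, 0.5 * np.diag(G))
    return p0 + V.T @ t

# In a Circumcentered-Douglas-Rachford step one would apply this to
# {x, R_A(x), R_B(R_A(x))}, with R_A, R_B the reflections across the sets.
```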

6.
The Douglas–Rachford algorithm is a popular method for finding zeros of sums of monotone operators. By its definition, the Douglas–Rachford operator is not symmetric with respect to the order of the two operators. In this paper we provide a systematic study of the two possible Douglas–Rachford operators. We show that the reflectors of the underlying operators act as bijections between the fixed point sets of the two Douglas–Rachford operators. Some elegant formulae arise under additional assumptions. Various examples illustrate our results.
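Concretely, with resolvents \(J_{A}=(\mathrm{Id}+A)^{-1}\) and reflectors \(R_{A}=2J_{A}-\mathrm{Id}\), the two orderings can be written, under one common convention (a sketch in generic notation):

\[
T_{A,B} \;=\; \tfrac{1}{2}\bigl(\mathrm{Id} + R_{B}R_{A}\bigr),
\qquad
T_{B,A} \;=\; \tfrac{1}{2}\bigl(\mathrm{Id} + R_{A}R_{B}\bigr),
\]

which in general have different fixed point sets even though both serve to locate zeros of \(A + B\).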

7.
Recently, several authors have shown local and global convergence rate results for Douglas–Rachford splitting under strong monotonicity, Lipschitz continuity, and cocoercivity assumptions. Most of these focus on the convex optimization setting. In the more general monotone inclusion setting, Lions and Mercier showed a linear convergence rate bound under the assumption that one of the two operators is strongly monotone and Lipschitz continuous. We show that this bound is not tight, meaning that no problem from the considered class converges exactly with that rate. In this paper, we present tight global linear convergence rate bounds for that class of problems. We also provide tight linear convergence rate bounds under the assumptions that one of the operators is strongly monotone and cocoercive, and that one of the operators is strongly monotone and the other is cocoercive. All our linear convergence results are obtained by proving the stronger property that the Douglas–Rachford operator is contractive.

8.

The optimisation of nonsmooth, nonconvex functions without access to gradients is a particularly challenging problem that is frequently encountered, for example in model parameter optimisation problems. Bilevel optimisation of parameters is a standard setting in areas such as variational regularisation problems and supervised machine learning. We present efficient and robust derivative-free methods called randomised Itoh–Abe methods. These are generalisations of the Itoh–Abe discrete gradient method, a well-known scheme from geometric integration, which has previously only been considered in the smooth setting. We demonstrate that the method and its favourable energy dissipation properties are well defined in the nonsmooth setting. Furthermore, we prove that whenever the objective function is locally Lipschitz continuous, the iterates almost surely converge to a connected set of Clarke stationary points. We present an implementation of the methods and apply them to various test problems. The numerical results indicate that the randomised Itoh–Abe methods can be superior to state-of-the-art derivative-free optimisation methods in solving nonsmooth problems while still remaining competitive in terms of efficiency.
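To convey the flavour of the Itoh–Abe discrete gradient update in a derivative-free setting, here is a crude sketch of one deterministic coordinate-wise step (the randomised variants discussed above would randomise the coordinate choice); all names, the step size tau and the bracketing heuristic are illustrative assumptions, not the paper's implementation:

```python
import numpy as np
from scipy.optimize import brentq

def itoh_abe_step(V, x, tau=0.1, span=1.0):
    """One coordinate-wise Itoh-Abe discrete-gradient step on the objective V.

    Coordinate i is updated by solving the scalar equation
        d**2 + tau * (V(x + d*e_i) - V(x)) = 0,   d = x_i_new - x_i,
    which enforces the discrete-gradient (energy-dissipation) property
    without evaluating any derivatives. Coordinates are updated in place,
    so later coordinates see the already-updated earlier ones.
    """
    x = np.asarray(x, dtype=float).copy()
    for i in range(x.size):
        xi_old, V_old = x[i], V(x)

        def residual(d):
            x[i] = xi_old + d
            r = d * d + tau * (V(x) - V_old)
            x[i] = xi_old
            return r

        # d = 0 is always a root; look for a nonzero (descent) root with a
        # crude two-sided bracketing heuristic.
        for a, b in [(1e-8, span), (-span, -1e-8)]:
            if residual(a) * residual(b) < 0:
                x[i] = xi_old + brentq(residual, a, b)
                break
    return x
```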


9.
In this work we propose a new splitting technique, namely Asymmetric Forward–Backward–Adjoint splitting, for solving monotone inclusions involving three terms: a maximally monotone operator, a cocoercive operator and a bounded linear operator. Our scheme cannot be recovered from existing operator splitting methods, while classical methods like Douglas–Rachford and Forward–Backward splitting are special cases of the new algorithm. Asymmetric preconditioning is the main feature of Asymmetric Forward–Backward–Adjoint splitting, which allows us to unify, extend and shed light on the connections between many seemingly unrelated primal-dual algorithms for solving structured convex optimization problems proposed in recent years. One important special case leads to a Douglas–Rachford type scheme that includes a third cocoercive operator.

10.
In this paper, we present two Douglas–Rachford inspired iteration schemes which can be applied directly to N-set convex feasibility problems in Hilbert space. Our main results are weak convergence of the methods to a point whose nearest point projections onto each of the N sets coincide. For affine subspaces, convergence is in norm. Initial results from numerical experiments, comparing our methods to the classical (product-space) Douglas–Rachford scheme, are promising.
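For context, a minimal sketch of the classical product-space Douglas–Rachford scheme that the paper compares against (the baseline, not the new direct N-set schemes); the projector functions onto each set are assumed to be supplied by the user:

```python
import numpy as np

def product_space_dr(projections, z0, iters=500):
    """Classical product-space Douglas-Rachford for the N-set feasibility
    problem: find x in the intersection of C_1, ..., C_N, given the
    projectors P_i onto each C_i.

    Pierra's product-space trick: in H^N, the product set C_1 x ... x C_N
    is projected onto componentwise, and the diagonal D = {(x, ..., x)} is
    projected onto by averaging; two-set DR is then run on (D, C).
    """
    Z = np.tile(np.asarray(z0, dtype=float), (len(projections), 1))

    def P_D(Z):                       # projection onto the diagonal
        return np.tile(Z.mean(axis=0), (len(Z), 1))

    def P_C(Z):                       # projection onto the product set
        return np.array([P(z) for P, z in zip(projections, Z)])

    for _ in range(iters):
        Y = P_D(Z)
        Z = Z + P_C(2 * Y - Z) - Y    # Douglas-Rachford update
    return P_D(Z)[0]                  # shadow point: candidate common point
```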

11.
This paper studies the continuity and, in a certain sense, the differential properties of the extremal function (also called the marginal function) of nonsmooth optimization problems containing a vector parameter. We give bounds on several directional derivatives of the optimal value function of nonconvex nonsmooth problems in which the objective function and the inequality constraints are Lipschitz functions, the equality constraints are continuously differentiable functions, and there is a closed convex constraint set C. This extends the one-sided perturbations in a single parameter treated in [4] and [1] to perturbations in a vector parameter, and can also be viewed as extending [2] from the class of smooth functions to the class of Lipschitz functions.
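In generic notation, the optimal value (marginal) function studied here has the form (a sketch; the precise assumptions are those stated above):

\[
v(u) \;=\; \inf\bigl\{\, f(x,u) \;:\; g_i(x,u) \le 0,\ \ h_j(x,u) = 0,\ \ x \in C \,\bigr\},
\]

and the results bound directional derivatives of \(v\), for instance a lower Dini directional derivative \(\liminf_{t\downarrow 0}\bigl(v(u+td)-v(u)\bigr)/t\) along a direction \(d\) of the vector parameter \(u\).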

12.
We study infinitesimal properties of nonsmooth (nondifferentiable) functions on smooth manifolds. The eigenvalue function of a matrix on the manifold of symmetric matrices gives a natural example of such a nonsmooth function.

A subdifferential calculus for lower semicontinuous functions is developed here for studying constrained optimization problems, nonclassical problems of calculus of variations, and generalized solutions of first-order partial differential equations on manifolds. We also establish criteria for monotonicity and invariance of functions and sets with respect to solutions of differential inclusions.



13.
In this paper, we study the generalized Douglas–Rachford algorithm and its cyclic variants, which include many projection-type methods such as the classical Douglas–Rachford algorithm and the alternating projection algorithm. Specifically, we establish several local linear convergence results for the algorithm in solving feasibility problems with finitely many closed, possibly nonconvex, sets under different assumptions. Our findings not only relax some regularity conditions but also improve linear convergence rates in the literature. In the presence of convexity, the linear convergence is global.

14.
In order to accelerate the Douglas–Rachford method we recently developed the circumcentered-reflection method, which, for the best approximation problem related to two affine subspaces, provides the iterate closest to the solution among all points obtained by successive reflections. We now prove that this is still the case when considering a family of finitely many affine subspaces. This property yields linear convergence and motivates embedding circumcenters within classical reflection- and projection-based methods for more general feasibility problems.

15.
M. V. Dolgopolik, Optimization, 2017, 66(10): 1577–1622
In this article, we develop a general theory of exact parametric penalty functions for constrained optimization problems. The main advantage of the method of parametric penalty functions is that a parametric penalty function can be both smooth and exact, unlike standard (i.e. non-parametric) exact penalty functions, which are always nonsmooth. We obtain several necessary and/or sufficient conditions for the exactness of parametric penalty functions, and for the zero duality gap property to hold true for these functions. We also prove some convergence results for the method of parametric penalty functions, and derive necessary and sufficient conditions for a parametric penalty function not to have any stationary points outside the feasible set of the constrained optimization problem under consideration. In the second part of the paper, we apply the general theory of exact parametric penalty functions to a class of parametric penalty functions introduced by Huyer and Neumaier, and to smoothing approximations of nonsmooth exact penalty functions. The general approach adopted in this article allows us to unify and significantly sharpen many existing results on parametric penalty functions.

16.
The Douglas–Rachford method is a splitting algorithm for finding a zero of the sum of two maximal monotone operators. Weak convergence of this method to a solution of the underlying monotone inclusion problem in the general case remained an open problem for 30 years, until it was proved by the author 7 years ago. That proof was cluttered with technicalities because we considered the inexact version with summable errors. In this short communication we present a streamlined proof of this result.

17.
We discuss recent positive experiences applying convex feasibility algorithms of Douglas–Rachford type to highly combinatorial and far from convex problems.

18.
Recently, the convergence of the Douglas–Rachford splitting method (DRSM) was established for minimizing the sum of a nonsmooth strongly convex function and a nonsmooth hypoconvex function under the assumption that the strong convexity constant \(\beta\) is larger than the hypoconvexity constant \(\omega\). Such an assumption, implying the strong convexity of the objective function, precludes many interesting applications. In this paper, we prove the convergence of the DRSM for the case \(\beta=\omega\), under relatively mild assumptions compared with some existing work in the literature.

19.
We propose a trust-region type method for a class of nonsmooth nonconvex optimization problems where the objective function is the sum of a (possibly nonconvex) smooth function and a (possibly nonsmooth) convex function. The model function of our trust-region subproblem is always quadratic and the linear term of the model is generated using abstract descent directions. Therefore, the trust-region subproblems can be easily constructed as well as efficiently solved by cheap and standard methods. When the accuracy of the model function at the solution of the subproblem is not sufficient, we add a safeguard on the stepsizes to improve the accuracy. For a class of functions that can be "truncated", an additional truncation step is defined and a stepsize modification strategy is designed. The overall scheme converges globally and we establish fast local convergence under suitable assumptions. In particular, using a connection with a smooth Riemannian trust-region method, we prove local quadratic convergence for partly smooth functions under a strict complementarity condition. Preliminary numerical results on a family of $\ell_1$-optimization problems are reported and demonstrate the efficiency of our approach.
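For orientation, a generic trust-region subproblem of the kind described reads (sketch in generic notation; here \(g_{k}\) stands for the linear term built from an abstract descent direction rather than from a gradient, and \(B_{k}\) is a symmetric model matrix):

\[
\min_{\|d\| \le \Delta_{k}} \; m_{k}(d) \;=\; \langle g_{k},\, d\rangle + \tfrac{1}{2}\langle B_{k} d,\, d\rangle ,
\]

which is a standard quadratic model over a trust region of radius \(\Delta_{k}\) and can be handled by off-the-shelf trust-region solvers.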

20.
Image deblurring techniques based on convex optimization formulations, such as total-variation deblurring, often use specialized first-order methods for large-scale nondifferentiable optimization. A key property exploited in these methods is spatial invariance of the blurring operator, which makes it possible to use the fast Fourier transform (FFT) when solving linear equations involving the operator. In this paper we extend this approach to two popular models for space-varying blurring operators, the Nagy–O’Leary model and the efficient filter flow model. We show how splitting methods derived from the Douglas–Rachford algorithm can be implemented with a low complexity per iteration, dominated by a small number of FFTs.
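As an illustration of the FFT shortcut mentioned above, here is a minimal sketch of the linear solve that typically dominates each splitting iteration when the blur is a spatially invariant, periodic convolution (the function name, the periodic-boundary assumption and the parameter rho are illustrative, not the paper's space-varying models):

```python
import numpy as np

def solve_regularized_normal_eq(kernel_fft, b, rho):
    """Solve (rho*I + K^T K) x = b when K is a periodic 2-D convolution.

    kernel_fft : FFT of the blur kernel, zero-padded to the image size
    b          : right-hand-side image
    rho        : positive splitting / regularisation parameter

    A periodic convolution is diagonalised by the FFT, so the normal-
    equation operator is diagonal in the Fourier domain and the solve
    costs two FFTs plus a pointwise division.
    """
    B = np.fft.fft2(b)
    denom = rho + np.abs(kernel_fft) ** 2    # rho + |K_hat|^2, elementwise
    return np.real(np.fft.ifft2(B / denom))

# Hypothetical usage, assuming kernel_padded and rhs_image are given:
# x = solve_regularized_normal_eq(np.fft.fft2(kernel_padded), rhs_image, 1.0)
```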
