Similar Literature
Found 20 similar documents (search time: 171 ms)
1.
We analyze the convergence rate of the alternating direction method of multipliers (ADMM) for minimizing the sum of two or more nonsmooth convex separable functions subject to linear constraints. Previous analysis of the ADMM typically assumes that the objective function is the sum of only two convex functions defined on two separable blocks of variables even though the algorithm works well in numerical experiments for three or more blocks. Moreover, there has been no rate of convergence analysis for the ADMM without strong convexity in the objective function. In this paper we establish the global R-linear convergence of the ADMM for minimizing the sum of any number of convex separable functions, assuming that a certain error bound condition holds true and the dual stepsize is sufficiently small. Such an error bound condition is satisfied for example when the feasible set is a compact polyhedron and the objective function consists of a smooth strictly convex function composed with a linear mapping, and a nonsmooth \(\ell _1\) regularizer. This result implies the linear convergence of the ADMM for contemporary applications such as LASSO without assuming strong convexity of the objective function.
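The two-block special case mentioned in this abstract can be illustrated on LASSO, min ½‖Ax − b‖² + λ‖z‖₁ subject to x = z. The following scaled-form ADMM is a minimal sketch under illustrative choices (fixed penalty `rho`, fixed iteration count); it is not the multi-block setting or the stepsize regime analyzed in the paper:

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    """Two-block scaled ADMM for min 0.5*||Ax - b||^2 + lam*||z||_1  s.t.  x = z."""
    n = A.shape[1]
    x = z = u = np.zeros(n)
    AtA = A.T @ A + rho * np.eye(n)   # matrix for the smooth x-update
    Atb = A.T @ b
    for _ in range(iters):
        # x-update: minimize the smooth quadratic block
        x = np.linalg.solve(AtA, Atb + rho * (z - u))
        # z-update: soft-thresholding (prox of the l1 term)
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
        # scaled dual update
        u = u + x - z
    return z
```

With A the identity, the iteration converges to the soft-thresholding of b, which makes the fixed point easy to check by hand.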

2.
We study the Proximal Alternating Predictor–Corrector (PAPC) algorithm introduced recently by Drori, Sabach and Teboulle [8] to solve nonsmooth structured convex–concave saddle point problems consisting of the sum of a smooth convex function, a finite collection of nonsmooth convex functions and bilinear terms. We introduce the notion of pointwise quadratic supportability, which is a relaxation of a standard strong convexity assumption and allows us to show that the primal sequence is R-linearly convergent to an optimal solution and the primal-dual sequence is globally Q-linearly convergent. We illustrate the proposed method on total variation denoising problems and on locally adaptive estimation in signal/image deconvolution and denoising with multiresolution statistical constraints.

3.
Necessary conditions are established for the continuity of finite sums of ridge functions defined on convex subsets E of the space Rn. It is shown that under some constraints imposed on the summed functions φi, in the case when E is open, the continuity of the sum implies the continuity of all φi. In the case when E is a convex body with nonsmooth boundary, a logarithmic estimate is obtained for the growth of the functions φi in the neighborhoods of the boundary points of their domains of definition. In addition, an example is constructed that demonstrates the accuracy of the estimate obtained.

4.
A coordinate gradient descent method for nonsmooth separable minimization
We consider the problem of minimizing the sum of a smooth function and a separable convex function. This problem includes as special cases bound-constrained optimization and smooth optimization with ℓ1-regularization. We propose a (block) coordinate gradient descent method for solving this class of nonsmooth separable problems. We establish global convergence and, under a local Lipschitzian error bound assumption, linear convergence for this method. The local Lipschitzian error bound holds under assumptions analogous to those for constrained smooth optimization, e.g., the convex function is polyhedral and the smooth function is (nonconvex) quadratic or is the composition of a strongly convex function with a linear mapping. We report numerical experience with solving the ℓ1-regularization of unconstrained optimization problems from Moré et al. (ACM Trans. Math. Softw. 7, 17–41, 1981) and from the CUTEr set (Gould and Orban in ACM Trans. Math. Softw. 29, 373–394, 2003). Comparison with L-BFGS-B and MINOS, applied to a reformulation of the ℓ1-regularized problem as a bound-constrained optimization problem, is also reported.
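A minimal single-coordinate sketch of such a coordinate prox-gradient step for ℓ1-regularized smooth minimization is shown below. The cyclic coordinate order, the single per-coordinate Lipschitz bound `L`, and the fixed iteration count are simplifying assumptions for illustration, not the block/line-search scheme of the paper:

```python
import numpy as np

def cgd_l1(grad_f, x0, lam, L, iters=100):
    """Cyclic coordinate gradient descent for min f(x) + lam*||x||_1.

    grad_f : callable returning the full gradient of the smooth part f
    L      : a per-coordinate Lipschitz bound on grad_f (assumed known)
    """
    x = x0.copy()
    for _ in range(iters):
        for j in range(x.size):
            g = grad_f(x)[j]                      # partial derivative at coordinate j
            v = x[j] - g / L                      # gradient step on coordinate j
            x[j] = np.sign(v) * max(abs(v) - lam / L, 0.0)  # coordinate soft-threshold
    return x
```

For the separable quadratic f(x) = ½‖x − c‖² with L = 1, each coordinate subproblem is solved exactly, so a single pass already yields the soft-thresholding of c.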

5.
We consider a class of unconstrained nonsmooth convex optimization problems, in which the objective function is the sum of a convex smooth function on an open subset of matrices and a separable convex function on a set of matrices. This problem includes the covariance selection problem that can be expressed as an ℓ1-penalized maximum likelihood estimation problem. In this paper, we propose a block coordinate gradient descent method (abbreviated as BCGD) for solving this class of nonsmooth separable problems with the coordinate block chosen by a Gauss-Seidel rule. The method is simple, highly parallelizable, and suited for large-scale problems. We establish global convergence and, under a local Lipschitzian error bound assumption, linear rate of convergence for this method. For the covariance selection problem, the method can terminate in O(n³/ε) iterations with an ε-optimal solution. We compare the performance of the BCGD method with the first-order methods proposed by Lu (SIAM J Optim 19:1807–1827, 2009; SIAM J Matrix Anal Appl 31:2000–2016, 2010) for solving the covariance selection problem on randomly generated instances. Our numerical experience suggests that the BCGD method can be efficient for large-scale covariance selection problems with constraints.

6.
A class of constrained nonsmooth convex optimization problems, namely problems with piecewise C2 convex objectives and smooth convex inequality constraints, is transformed into unconstrained nonsmooth convex programs with the help of an exact penalty function. The objective functions of these unconstrained programs are particular cases of functions with primal-dual gradient structure, which is connected with VU space decomposition. A VU space decomposition method for solving this unconstrained program is then presented. This method is proved to converge with local superlinear rate under certain assumptions. An illustrative example is given to show how this method works.

7.
In this paper, we introduce a new class of nonsmooth convex functions called SOS-convex semialgebraic functions extending the recently proposed notion of SOS-convex polynomials. This class of nonsmooth convex functions covers many common nonsmooth functions arising in applications such as the Euclidean norm, the maximum eigenvalue function and the least squares functions with ℓ1-regularization or elastic net regularization used in statistics and compressed sensing. We show that, under commonly used strict feasibility conditions, the optimal value and an optimal solution of SOS-convex semialgebraic programs can be found by solving a single semidefinite programming problem (SDP). We achieve the results by using tools from semialgebraic geometry, the convex-concave minimax theorem and a recently established Jensen-inequality-type result for SOS-convex polynomials. As an application, we show that robust SOS-convex optimization problems under restricted spectrahedron data uncertainty enjoy exact SDP relaxations. This extends the existing exact SDP relaxation result for restricted ellipsoidal data uncertainty and answers an open question in the literature on how to recover a robust solution of uncertain SOS-convex polynomial programs from its semidefinite programming relaxation in this broader setting.

8.
It is shown that, on a closed convex subset X of a real Hausdorff locally convex space E, a continuous linear functional x′ on E has an extremum at an extreme point of X, provided X contains no line and X ∩ (x′)⁻¹(λ0) is non-empty and weakly compact for some real λ0. It is also shown that any weakly locally compact closed convex subset of E that contains no line is the sum of its asymptotic cone and the closed convex hull of its extreme points.

9.
We propose a new subgradient method for the minimization of nonsmooth convex functions over a convex set. To speed up computations we use adaptive approximate projections that only require the iterates to move within a certain distance of the exact projections (a distance which decreases in the course of the algorithm). In particular, the iterates in our method can be infeasible throughout the whole procedure. Nevertheless, we provide conditions which ensure convergence to an optimal feasible point under suitable assumptions. One convergence result deals with step size sequences that are fixed a priori. Two other results handle dynamic Polyak-type step sizes depending on a lower or upper estimate of the optimal objective function value, respectively. Additionally, we briefly sketch two applications: optimization with convex chance constraints, and finding the minimum ℓ1-norm solution to an underdetermined linear system, an important problem in compressed sensing.
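As a toy illustration of the subgradient iteration with a Polyak-type step size, here is a sketch that uses an exact projection rather than the adaptive approximate projections of the paper; the problem instance in the usage note (minimizing the ℓ1-norm over a hyperplane) is a hypothetical example:

```python
import numpy as np

def projected_subgradient(subgrad, proj, x0, f, f_opt, iters=500):
    """Projected subgradient method with the dynamic Polyak step size
    t_k = (f(x_k) - f_opt) / ||g_k||^2, assuming f_opt (a lower estimate
    of the optimal value) is available. Returns the best iterate found."""
    x = proj(x0)
    best = x.copy()
    for _ in range(iters):
        g = subgrad(x)
        gnorm2 = float(np.dot(g, g))
        if gnorm2 == 0.0:          # zero subgradient: x is optimal
            return x
        t = (f(x) - f_opt) / gnorm2
        x = proj(x - t * g)        # subgradient step, then projection
        if f(x) < f(best):         # subgradient methods are not descent methods,
            best = x.copy()        # so track the best objective value seen
    return best
```

For example, minimizing ‖x‖₁ over the hyperplane {x : x₁ + x₂ = 1} (projection x ↦ x − (x₁ + x₂ − 1)/2) has optimal value 1, and the best iterate approaches it.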

10.
An algorithm is developed for finding the global minimum of a continuously differentiable function on a compact interval in R1. The function is assumed to be the sum of a convex and a concave function, each of which belongs to C1[a, b]. Any one-dimensional function with a bounded second derivative can be so written and, therefore, such functions generally have many local minima. The algorithm utilizes the structure of the objective to produce an ε-optimal solution by a sequence of simple one-dimensional convex programs.

11.
Recently, the convergence of the Douglas–Rachford splitting method (DRSM) was established for minimizing the sum of a nonsmooth strongly convex function and a nonsmooth hypoconvex function under the assumption that the strong convexity constant \(\beta \) is larger than the hypoconvexity constant \(\omega \). Such an assumption, implying the strong convexity of the objective function, precludes many interesting applications. In this paper, we prove the convergence of the DRSM for the case \(\beta =\omega \), under relatively mild assumptions compared with some existing work in the literature.

12.
In this paper, we study a nonlinear elliptic problem at resonance driven by the p-Laplacian and with a nonsmooth potential (hemivariational inequality). Our approach is variational and is based on the nonsmooth critical point theory for locally Lipschitz functions due to Chang. We prove a theorem guaranteeing the existence of one solution which is smooth and strictly positive. Then by strengthening the assumptions, we establish a multiplicity result providing the existence of at least two distinct solutions. Our hypotheses permit resonance and asymmetric behavior at +∞ and −∞. As a byproduct of our analysis we obtain a nonlinear and nonsmooth generalization of a result of Brézis–Nirenberg about H_0^1 versus C_0^1 minimizers of a smooth functional.

13.
Let X and Y be Banach spaces and ψ a continuous convex function on the unit interval [0,1] satisfying certain conditions. Let X ⊕ψ Y be the direct sum of X and Y equipped with the norm associated with ψ. We show that X ⊕ψ Y is uniformly convex if and only if X and Y are uniformly convex and ψ is strictly convex. As a corollary we obtain that the ℓp,q-direct sum (excluding p = q = 1 and p = q = ∞) is uniformly convex if and only if X and Y are, where ℓp,q is the Lorentz sequence space. These results extend the well-known fact for the ℓp-sum. Some other examples are also presented.

14.
We study a nonlinear eigenvalue problem with a nonsmooth potential. The subgradients of the potential are only positive near the origin (from above) and near +∞. Also the subdifferential is not necessarily monotone (i.e. the potential is not convex). Using variational techniques and the method of upper and lower solutions, we establish the existence of at least two strictly positive smooth solutions for all the parameters in an interval. Our approach uses the nonsmooth critical point theory for locally Lipschitz functions. A byproduct of our analysis is a generalization of a result of Brezis-Nirenberg (CRAS, 317 (1993)) on H_0^1 versus C_0^1 minimizers of a C1-functional.

15.
We consider a semilinear eigenvalue problem with a nonsmooth potential (hemivariational inequality). Using a nonsmooth analog of the local Ambrosetti–Rabinowitz condition (AR-condition), we show that the problem has a nontrivial smooth solution. In the scalar case, we show that we can relax the local AR-condition. Finally, for the resonant λ = λ1 problem, using the nonsmooth version of the local linking theorem, we show that the problem has at least two nontrivial solutions. Our approach is variational, using minimax methods from the nonsmooth critical point theory.

16.
Denote by pk(M) and υk(M) the number of k-gonal faces and k-valent vertices of the convex 3-polytope M, respectively. Completely solving a problem of B. Grünbaum, the following theorem is proved: Given sequences of nonnegative integers p = (p3, p4, …, pm) and υ = (υ3, υ4, …, υn) satisfying ∑k≥3 (6−k)pk + 2∑k≥3 (3−k)υk = 12, there exists a convex 3-polytope M with pk(M) = pk for all k ≠ 6 and υk(M) = υk for all k ≠ 3 if and only if the following does not hold for the sequences p, υ: ∑pi = 0 for i odd and ∑υi = 1 for i ≡ 0 (mod 3).

17.
Variational methods have become an important class of methods in signal and image restoration, a typical inverse problem. One important minimization model consists of the squared ℓ2 data fidelity (corresponding to Gaussian noise) and a regularization term constructed by a potential function composed with first-order difference operators. It is well known that total variation (TV) regularization, although it has achieved great successes, suffers from a contrast reduction effect. Using a typical signal, we show that, in fact, all convex regularizers and most nonconvex regularizers have this effect. With this motivation, we present a general truncated regularization framework. The potential function is a truncation of an existing nonsmooth potential function and is thus flat on (τ, +∞) for some positive τ. Some analysis in 1D theoretically demonstrates the good contrast-preserving ability of the framework. We also give optimization algorithms with convergence verification in 2D, where global minimizers of each subproblem (either convex or nonconvex) are calculated. Experiments numerically show the advantages of the framework.

18.
A characterization of d.c. functions f: Ω → R in terms of the quasidifferentials of f is obtained, where Ω is an open convex set in a real Banach space. Recall that f is called d.c. (difference of convex) if it can be represented as a difference of two finite convex functions. The relation of the obtained results with known characterizations is discussed, specifically the ones from [R. Ellaia, A. Hassouni, Characterization of nonsmooth functions through their generalized gradients, Optimization 22 (1991), 401-416] in the finite-dimensional case and [A. Elhilali Alaoui, Caractérisation des fonctions DC, Ann. Sci. Math. Québec 20 (1996), 1-13] in the case of a Banach space.

19.
Li, Dan; Shen, Jie; Lu, Yuan; Pang, Li-Ping; Xia, Zun-Quan. Acta Mathematicae Applicatae Sinica (English Series), 2019, 35(2): 435-443
Acta Mathematicae Applicatae Sinica, English Series - We consider the problems of minimizing the sum of a continuously differentiable convex function and a nonsmooth convex function in this paper....

20.
A class of iterative methods, called filling methods, for polyhedral approximation of convex compact bodies is introduced and studied. In contrast to augmentation methods, the vertices of the approximating polytope can lie not only on the boundary of the body but also inside it. Within the proposed class, Hausdorff or H-methods of filling are singled out, for which the convergence rates (asymptotic and at the initial stage of the approximation) are estimated. For the approximation of nonsmooth convex compact bodies, the resulting convergence rate estimates coincide with those for augmentation H-methods.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)