Similar Documents
20 similar documents found (search time: 31 ms)
1.
A customized Douglas-Rachford splitting method (DRSM) was recently proposed to solve two-block separable convex optimization problems with linear constraints and simple abstract constraints. The algorithm has an advantage over the well-known alternating direction method of multipliers (ADMM), the dual application of DRSM to the two-block convex minimization problem: its subproblems are unconstrained and are therefore more likely to admit closed-form solutions. In this paper, we continue along this line by considering the primal application of DRSM for the general case m ≥ 3, i.e., the multi-block separable convex minimization problem with linear constraints whose objective function is separable into m individual convex functions without coupled variables. The resulting method fully exploits the separable structure and enjoys decoupled subproblems that can be solved simultaneously. Both the exact and inexact versions of the new method are presented in a unified framework. Under mild conditions, we prove the global convergence of the algorithm. Preliminary numerical experiments on extracting the background from corrupted surveillance video verify the encouraging efficiency of the new algorithm.
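For readers unfamiliar with the splitting scheme behind this abstract, the following is a minimal numpy sketch of the classical two-operator Douglas-Rachford iteration on a toy problem; the proximal operators, step size, iteration count, and test data are illustrative assumptions, not the customized DRSM of the paper.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def douglas_rachford(prox_f, prox_g, y0, iters=200):
    # Classical DR iteration for min f(x) + g(x):
    #   x = prox_f(y);  z = prox_g(2x - y);  y <- y + z - x
    y = y0.copy()
    for _ in range(iters):
        x = prox_f(y)
        z = prox_g(2 * x - y)
        y = y + z - x
    return prox_f(y)

# Toy instance: min lam*||x||_1 + 0.5*||x - b||^2, whose exact
# solution is soft_threshold(b, lam).
b, lam = np.array([3.0, -0.2, 1.5]), 1.0
prox_f = lambda v: soft_threshold(v, lam)   # prox of lam*||.||_1 (unit step)
prox_g = lambda v: (v + b) / 2.0            # prox of 0.5*||. - b||^2 (unit step)
x = douglas_rachford(prox_f, prox_g, np.zeros(3))
print(x, soft_threshold(b, lam))            # both ~ [2.0, 0.0, 0.5]
```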

2.
We analyze the convergence rate of the alternating direction method of multipliers (ADMM) for minimizing the sum of two or more nonsmooth convex separable functions subject to linear constraints. Previous analysis of the ADMM typically assumes that the objective function is the sum of only two convex functions defined on two separable blocks of variables even though the algorithm works well in numerical experiments for three or more blocks. Moreover, there has been no rate of convergence analysis for the ADMM without strong convexity in the objective function. In this paper we establish the global R-linear convergence of the ADMM for minimizing the sum of any number of convex separable functions, assuming that a certain error bound condition holds true and the dual stepsize is sufficiently small. Such an error bound condition is satisfied for example when the feasible set is a compact polyhedron and the objective function consists of a smooth strictly convex function composed with a linear mapping, and a nonsmooth \(\ell _1\) regularizer. This result implies the linear convergence of the ADMM for contemporary applications such as LASSO without assuming strong convexity of the objective function.
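As a concrete illustration of the two-block setting analyzed here, below is a minimal numpy sketch of ADMM applied to LASSO; the penalty parameter rho, the iteration count, and the random data are demonstration-only assumptions.

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, iters=300):
    # Two-block ADMM for min 0.5*||Ax - b||^2 + lam*||z||_1  s.t.  x - z = 0
    n = A.shape[1]
    x = z = u = np.zeros(n)                       # u is the scaled dual variable
    M = np.linalg.inv(A.T @ A + rho * np.eye(n))  # cached factor for the x-subproblem
    for _ in range(iters):
        x = M @ (A.T @ b + rho * (z - u))         # x-update: ridge-type linear solve
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)  # soft threshold
        u = u + x - z                             # dual update on the constraint x = z
    return z

rng = np.random.default_rng(0)
A, b = rng.standard_normal((50, 20)), rng.standard_normal(50)
print(admm_lasso(A, b, lam=0.5))                  # sparse solution vector
```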

3.
Recently, the alternating direction method of multipliers (ADMM) has found many efficient applications in various areas; and it has been shown that the convergence is not guaranteed when it is directly extended to the multiple-block case of separable convex minimization problems where there are m ≥ 3 functions without coupled variables in the objective. This fact has given great impetus to investigate various conditions on both the model and the algorithm’s parameter that can ensure the convergence of the direct extension of ADMM (abbreviated as “e-ADMM”). Despite some results under very strong conditions (e.g., at least (m − 1) functions should be strongly convex) that are applicable to the generic case with a general m, some others concentrate on the special case of m = 3 under the relatively milder condition that only one function is assumed to be strongly convex. We focus on extending the convergence analysis from the case of m = 3 to the more general case of m ≥ 3. That is, we show the convergence of e-ADMM for the case of m ≥ 3 with the assumption of only (m − 2) functions being strongly convex; and establish its convergence rates in different scenarios such as the worst-case convergence rates measured by iteration complexity and the globally linear convergence rate under stronger assumptions. Thus the convergence of e-ADMM for the general case of m ≥ 4 is proved; this result seems to be still unknown even though it is intuitive given the known result of the case of m = 3. Even for the special case of m = 3, our convergence results turn out to be more general than the existing results that are derived specifically for the case of m = 3.
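A minimal sketch of the directly extended e-ADMM scheme on a toy problem with m = 3 strongly convex block objectives, consistent with the strong-convexity assumptions discussed above; the problem instance, penalty parameter rho, and iteration count are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def e_admm(As, cs, b, rho=1.0, iters=500):
    # Direct extension of ADMM (e-ADMM) for the toy separable problem
    #   min sum_i 0.5*||x_i - c_i||^2   s.t.  sum_i A_i x_i = b,
    # where every block objective is strongly convex.
    m = len(As)
    xs = [np.zeros(A.shape[1]) for A in As]
    lam = np.zeros(len(b))                       # Lagrange multiplier
    for _ in range(iters):
        for i in range(m):                       # Gauss-Seidel sweep over the blocks
            s = sum(As[j] @ xs[j] for j in range(m) if j != i)
            A = As[i]
            lhs = np.eye(A.shape[1]) + rho * A.T @ A
            rhs = cs[i] - A.T @ lam + rho * A.T @ (b - s)
            xs[i] = np.linalg.solve(lhs, rhs)    # closed-form block minimizer
        lam = lam + rho * (sum(A @ x for A, x in zip(As, xs)) - b)
    return xs

rng = np.random.default_rng(1)
As = [rng.standard_normal((4, 5)) for _ in range(3)]   # m = 3 blocks
cs = [rng.standard_normal(5) for _ in range(3)]
b = rng.standard_normal(4)
xs = e_admm(As, cs, b)
print(np.linalg.norm(sum(A @ x for A, x in zip(As, xs)) - b))  # feasibility residual -> ~0
```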

4.
The alternating direction method of multipliers (ADMM) has been proved to be effective for solving separable convex optimization subject to linear constraints. In this paper, we propose a generalized symmetric ADMM (GS-ADMM), which updates the Lagrange multiplier twice with suitable stepsizes, to solve the multi-block separable convex programming. This GS-ADMM partitions the data into two group variables so that one group consists of p block variables while the other has q block variables, where \(p \ge 1\) and \(q \ge 1\) are two integers. The two grouped variables are updated in a Gauss–Seidel scheme, while the variables within each group are updated in a Jacobi scheme, which makes it very attractive for a big data setting. By adding proper proximal terms to the subproblems, we specify the domain of the stepsizes to guarantee that GS-ADMM is globally convergent with a worst-case \({\mathcal {O}}(1/t)\) ergodic convergence rate. It turns out that our convergence domain of the stepsizes is significantly larger than the other convergence domains in the literature. Hence, the GS-ADMM is more flexible and attractive, allowing larger stepsizes for the dual variable. Besides, two special cases of GS-ADMM, which allow using zero penalty terms, are also discussed and analyzed. Compared with several state-of-the-art methods, preliminary numerical experiments on solving a sparse matrix minimization problem in statistical learning show that our proposed method is effective and promising.

5.
In this paper, we develop a parameterized proximal point algorithm (P-PPA) for solving a class of separable convex programming problems subject to linear and convex constraints. The proposed algorithm is proved to be globally convergent with a worst-case O(1/t) convergence rate, where t denotes the iteration number. By properly choosing the algorithm parameters, numerical experiments on solving a sparse optimization problem arising from statistical learning show that our P-PPA can perform significantly better than other state-of-the-art methods, such as the alternating direction method of multipliers and the relaxed proximal point algorithm.

6.
A parallel Nesterov algorithm for the unconstrained minimization of large-scale partially separable convex functions is presented. The problem is first transformed into a linearly constrained minimization of a separable function. A fast projected gradient (Nesterov) method is then applied to obtain a decomposition method with \(O(1/k^2)\) rate of convergence (where k is the iteration number). Preliminary numerical experiments show the efficiency of the proposed approach.
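The accelerated projected-gradient step at the core of such a scheme can be sketched as follows. This is the generic Nesterov/FISTA-style iteration with the standard momentum sequence, not the paper's parallel decomposition, and the box-constrained quadratic instance is an assumption for demonstration.

```python
import numpy as np

def fast_projected_gradient(grad, project, x0, L, iters=100):
    # Nesterov's accelerated projected gradient: O(1/k^2) objective error
    # for convex f with L-Lipschitz gradient over a closed convex set.
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_new = project(y - grad(y) / L)              # projected gradient step from y
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x

# Toy instance: min 0.5*||Qx - q||^2 over the box [0, 1]^n.
rng = np.random.default_rng(2)
Q, q = rng.standard_normal((30, 10)), rng.standard_normal(30)
L = np.linalg.norm(Q, 2) ** 2                 # Lipschitz constant of the gradient
grad = lambda x: Q.T @ (Q @ x - q)
project = lambda x: np.clip(x, 0.0, 1.0)      # projection onto the box
print(fast_projected_gradient(grad, project, np.zeros(10), L))
```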

7.
In this paper, we study an inexact version of the alternating direction method of multipliers (ADMM) for solving two-block separable linearly constrained convex optimization problems. Specifically, the two subproblems in the classic ADMM are allowed to be solved inexactly by certain relative error criteria, in the sense that only two parameters are needed to control the inexactness. The related convergence analysis is established under the assumption that the solution set of the KKT system of the problem is nonempty. Numerical results on solving a class of sparse signal recovery problems are also provided to demonstrate the efficiency of the proposed algorithm.

8.
We consider the linearly constrained separable convex minimization problem, whose objective function consists of the sum of \(m\) individual convex functions in the absence of any coupling variables. While augmented Lagrangian-based decomposition methods have been well developed in the literature for solving such problems, a noteworthy requirement of these methods is that an additional correction step is necessary to guarantee their convergence. This note shows that a straightforward Jacobian decomposition of the augmented Lagrangian method is globally convergent if the involved functions are further assumed to be strongly convex.

9.
In this paper, we propose and analyze an accelerated augmented Lagrangian method (denoted by AALM) for solving the linearly constrained convex programming. We show that the convergence rate of AALM is \(O(1/k^2)\) while the convergence rate of the classical augmented Lagrangian method (ALM) is \(O(1/k)\). Numerical experiments on the linearly constrained \(\ell_1\)-\(\ell_2\) minimization problem are presented to demonstrate the effectiveness of AALM.

10.
The problem of convex optimization is studied. Usually in convex optimization the minimization is over a d-dimensional domain, and very often the convergence rate of an optimization algorithm depends on the dimension d. The algorithms studied in this paper utilize dictionaries instead of the canonical basis used in coordinate descent algorithms. We show how this approach allows us to reduce the dimensionality of the problem. Also, we investigate which properties of a dictionary are beneficial for the convergence rate of typical greedy-type algorithms.
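As one concrete instance of a greedy-type algorithm over a dictionary, here is a short orthogonal-greedy sketch (matching-pursuit flavor) for a quadratic objective; the dictionary, data, and step count are illustrative assumptions, not the algorithms analyzed in the paper.

```python
import numpy as np

def greedy_over_dictionary(D, y, steps=2):
    # Greedy selection over a dictionary D (columns = atoms) for
    # min ||y - D x||^2: pick the atom most correlated with the residual,
    # then re-fit on the selected atoms (orthogonal greedy step).
    support, residual, coef = [], y.copy(), None
    for _ in range(steps):
        k = int(np.argmax(np.abs(D.T @ residual)))   # best atom for current residual
        if k not in support:
            support.append(k)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef          # new residual after re-fit
    return support, coef

rng = np.random.default_rng(3)
D = rng.standard_normal((40, 100))
D /= np.linalg.norm(D, axis=0)                       # normalized atoms
y = 2 * D[:, 7] - 1.5 * D[:, 42]                     # sparse combination of atoms
print(greedy_over_dictionary(D, y))                  # typically recovers atoms 7 and 42
```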

11.
We consider a quadratic programming (QP) problem (Π) of the form min \(x^T C x\) subject to \(Ax \le b\), \(x \ge 0\), where \({C\in {\mathbb R}^{n \times n}_+, {\rm rank}(C)=1}\) and \({A\in {\mathbb R}^{m \times n}, b\in {\mathbb R}^m}\). We present a fully polynomial time approximation scheme (FPTAS) for this problem by reformulating the QP (Π) as a parameterized LP and “rounding” the optimal solution. Furthermore, our algorithm returns an extreme point solution of the polytope. Therefore, our results apply directly to 0–1 problems for which the convex hull of feasible integer solutions is known, such as spanning trees, matchings, and submodular flows. They also apply to problems for which the convex hull of the dominant of the feasible integer solutions is known, such as s, t-shortest paths and s, t-min-cuts. For the above discrete problems, the quadratic program Π models the problem of obtaining an integer solution that minimizes the product of two linear non-negative cost functions.

12.
Let f be a function from \({\mathbb{R}_{+}}\) into itself. A classic theorem of K. Löwner says that f is operator monotone if and only if all matrices of the form \(\left[\frac{f(p_i) - f(p_j)}{p_i-p_j}\right]\) are positive semidefinite. We show that f is operator convex if and only if all such matrices are conditionally negative definite and that \(f(t) = t\,g(t)\) for some operator convex function g if and only if these matrices are conditionally positive definite. Elementary proofs are given for the most interesting special cases \(f(t) = t^r\) and \(f(t) = t \log t\). Several consequences are derived.
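These matrix criteria are easy to check numerically. The sketch below builds the Löwner divided-difference matrix for two classical examples, f(t) = √t (operator monotone, so the matrix should be PSD) and f(t) = t² (operator convex, so the matrix should be conditionally negative definite); the sample points p are arbitrary assumptions.

```python
import numpy as np

def loewner_matrix(f, df, p):
    # Divided-difference (Loewner) matrix: entries (f(p_i)-f(p_j))/(p_i-p_j),
    # with f'(p_i) on the diagonal.
    P, Q = np.meshgrid(p, p, indexing="ij")
    with np.errstate(divide="ignore", invalid="ignore"):
        M = (f(P) - f(Q)) / (P - Q)
    np.fill_diagonal(M, df(p))
    return M

p = np.array([0.5, 1.0, 2.0, 3.5, 5.0])
n = len(p)

# f(t) = sqrt(t) is operator monotone: its Loewner matrix is PSD.
M = loewner_matrix(np.sqrt, lambda t: 0.5 / np.sqrt(t), p)
print(np.linalg.eigvalsh(M).min())                    # >= 0 (up to rounding)

# f(t) = t^2 is operator convex: the matrix is conditionally negative
# definite, i.e. z^T M z <= 0 for every z with sum(z) = 0.
M2 = loewner_matrix(np.square, lambda t: 2.0 * t, p)
B = np.eye(n)[:, :-1] - np.eye(n)[:, 1:]              # basis of {z : sum(z) = 0}
print(np.linalg.eigvalsh(B.T @ M2 @ B).max())         # <= 0 (here exactly ~0)
```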

13.
The majority of first-order methods for large-scale convex–concave saddle point problems and variational inequalities with monotone operators are proximal algorithms. To make such an algorithm practical, the problem’s domain should be proximal-friendly, i.e., it should admit a strongly convex function whose linear perturbations are easy to minimize. As a by-product, such a domain admits a computationally cheap linear minimization oracle (LMO) capable of minimizing linear forms. There are, however, important situations where a cheap LMO is indeed available but the problem domain is not proximal-friendly, which motivates the search for algorithms based solely on an LMO. For smooth convex minimization, there exists a classical LMO-based algorithm: conditional gradient. In contrast, the similar techniques known to us for other problems with convex structure (nonsmooth convex minimization, convex–concave saddle point problems, even ones as simple as bilinear, and variational inequalities with monotone operators, even ones as simple as affine) are quite recent and utilize a common approach based on Fenchel-type representations of the associated objectives/vector fields. The goal of this paper is to develop alternative (and seemingly much simpler) decomposition techniques based on LMO for bilinear saddle point problems and for variational inequalities with affine monotone operators.
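For contrast with the proximal setup, the classical LMO-based method mentioned above, conditional gradient (Frank-Wolfe), can be sketched in a few lines; the simplex domain, quadratic objective, and step-size rule 2/(k+2) are standard textbook choices, not details from this paper.

```python
import numpy as np

def conditional_gradient(grad, lmo, x0, iters=200):
    # Frank-Wolfe: only a linear minimization oracle (LMO) over the domain
    # is needed -- no projections or proximal steps.
    x = x0.copy()
    for k in range(iters):
        s = lmo(grad(x))                   # point minimizing the linear form <g, .>
        x = x + (2.0 / (k + 2)) * (s - x)  # standard step size 2/(k+2)
    return x

# Toy instance: min 0.5*||x - c||^2 over the probability simplex.
rng = np.random.default_rng(4)
c = rng.standard_normal(8)
grad = lambda x: x - c
def lmo(g):                                # cheap LMO for the simplex: best vertex
    s = np.zeros_like(g)
    s[np.argmin(g)] = 1.0
    return s
x0 = np.ones(8) / 8
print(conditional_gradient(grad, lmo, x0))  # ~ Euclidean projection of c onto simplex
```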

14.
Let \(X_i = \{X_i(t),\ t \in T\}\) be i.i.d. copies of a centered Gaussian process \(X = \{X(t),\ t \in T\}\) with values in \({\mathbb{R}^d}\), defined on a separable metric space T. It is supposed that X is bounded. We consider the asymptotic behavior of the convex hulls
$ W_n = \mathrm{conv}\left\{ X_1(t), \ldots, X_n(t),\ t \in T \right\} $
and show that, with probability 1,
$ \lim_{n \to \infty} \frac{1}{\sqrt{2\ln n}}\, W_n = W $
(in the sense of Hausdorff distance), where the limit shape W is defined by the covariance structure of X: \(W = \mathrm{conv}\{K_t,\ t \in T\}\), \(K_t\) being the concentration ellipsoid of X(t). We also study the asymptotic behavior of the expectations \(E f(W_n)\), where f is a homogeneous functional.
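The √(2 ln n) normalization can be observed numerically in the simplest special case, with T a single point and X(t) a standard Gaussian vector in the plane, where the concentration ellipsoid is the unit disk; the directions and sample sizes below are arbitrary choices for the experiment.

```python
import numpy as np

# Monte Carlo check of the sqrt(2 ln n) scaling: for i.i.d. standard Gaussian
# points in R^2, the support function of conv{X_1, ..., X_n} in any unit
# direction, divided by sqrt(2 ln n), tends to 1 (the unit disk's support).
rng = np.random.default_rng(5)
for n in [10**3, 10**4, 10**5, 10**6]:
    X = rng.standard_normal((n, 2))
    h = [np.max(X @ np.array([np.cos(a), np.sin(a)]))   # support function of hull
         for a in np.linspace(0, np.pi, 4)]
    print(n, np.array(h) / np.sqrt(2 * np.log(n)))      # each entry -> 1
```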

15.
We consider the motion of an object t in the space \(\mathbb{R}^2\), where a solid bounded set G with piecewise smooth boundary hinders the motion and visibility. In a neighborhood of convex parts of the boundary, there are observers, which can hide from t in a shading set \(s(t) \subset \mathbb{R}^2 \setminus G\) in case of danger from t. We find characteristic properties of a trajectory T of the object that maximizes the value \(\min\{\rho(t, s(t)) : t \in T\}\).

16.
We study a projection-difference method for approximately solving the Cauchy problem u′(t) + A(t)u(t) + K(t)u(t) = h(t), u(0) = 0 for a linear differential-operator equation in a Hilbert space, where A(t) is a self-adjoint operator and K(t) is an operator subordinate to A(t). Time discretization is based on a three-level difference scheme, and space discretization is carried out by the Galerkin method. Under certain smoothness conditions on the function h(t), we obtain estimates for the convergence rate of the approximate solutions to the exact solution.

17.
The paper studies the global convergence of the Jacobi method for symmetric matrices of size 4. We prove global convergence for all 720 cyclic pivot strategies. Precisely, we show that the inequality \(S(A^{[t+3]}) \le \gamma S(A^{[t]})\), \(t \ge 1\), holds with a constant \(\gamma < 1\) that depends neither on the matrix A nor on the pivot strategy. Here, \(A^{[t]}\) stands for the matrix obtained from A after t full cycles of the Jacobi method, and S(A) is the off-diagonal norm of A. We show why three consecutive cycles have to be considered. The result has a direct application to the J-Jacobi method.
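A short sketch of one full cycle of the row-cyclic Jacobi method and of the off-diagonal norm S(A), with which the decrease of S across cycles can be observed empirically; the random symmetric 4×4 test matrix and cycle count are assumptions, and the rotation formulas are the standard textbook ones.

```python
import numpy as np

def off_norm(A):
    # S(A): Frobenius norm of the off-diagonal part of A.
    return np.sqrt(np.sum(A**2) - np.sum(np.diag(A)**2))

def jacobi_cycle(A):
    # One full cycle of the row-cyclic Jacobi method: annihilate each
    # off-diagonal pivot (p, q) once, in row-by-row order.
    A = A.copy()
    n = A.shape[0]
    for p in range(n - 1):
        for q in range(p + 1, n):
            if A[p, q] == 0.0:
                continue
            tau = (A[q, q] - A[p, p]) / (2.0 * A[p, q])
            t = np.sign(tau) / (abs(tau) + np.hypot(1.0, tau)) if tau else 1.0
            c = 1.0 / np.hypot(1.0, t)
            s = t * c
            G = np.eye(n)
            G[p, p] = G[q, q] = c
            G[p, q], G[q, p] = s, -s
            A = G.T @ A @ G                   # this rotation zeroes A[p, q]
    return A

rng = np.random.default_rng(6)
A = rng.standard_normal((4, 4))
A = (A + A.T) / 2                             # symmetric 4x4 test matrix
for cycle in range(6):
    print(cycle, off_norm(A))                 # off-norm decreases toward 0
    A = jacobi_cycle(A)
```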

18.
We consider the Euler-Maruyama discretization of the stochastic volatility model \(dS_t = \sigma_t S_t\,dW_t\), \(d\sigma_t = \omega \sigma_t\,dZ_t\), \(t \in [0, T]\), which has been widely used in financial practice, where \(W_t, Z_t\), \(t \in [0, T]\), are two uncorrelated standard Brownian motions. Using asymptotic analysis techniques, the moderate deviation principles for \(\log S_n\) (or \(\log |S_n|\) in case \(S_n\) is negative) are obtained as n → ∞ under different discretization schemes for the asset price process \(S_t\) and the volatility process \(\sigma_t\). Numerical simulations are presented to compare the convergence speeds of the different schemes.
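A minimal Euler-Maruyama sketch of this stochastic volatility model; all parameters (S0, sigma0, omega, T, step and sample counts) are illustrative assumptions. Note that the basic scheme can drive the discretized price negative, which is why the abstract considers log |S_n|.

```python
import numpy as np

def euler_maruyama_sv(S0=1.0, sigma0=0.2, omega=0.5, T=1.0, n=1000, rng=None):
    # Euler-Maruyama scheme for dS_t = sigma_t S_t dW_t, dsigma_t = omega sigma_t dZ_t
    # with independent (uncorrelated) Brownian motions W and Z.
    rng = rng or np.random.default_rng()
    dt = T / n
    S, sigma = S0, sigma0
    for _ in range(n):
        dW, dZ = rng.standard_normal(2) * np.sqrt(dt)  # independent increments
        S = S + sigma * S * dW
        sigma = sigma + omega * sigma * dZ
    return S

rng = np.random.default_rng(7)
samples = np.array([euler_maruyama_sv(rng=rng) for _ in range(2000)])
print(np.mean(samples), np.mean(samples < 0))  # sample mean ~ 1 (martingale);
                                               # a small fraction can go negative
```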

19.
We study the inverse problem of the reconstruction of the coefficient \(\rho(x, t) = \rho_0(x, t) + r(x)\) multiplying \(u_t\) in a nonstationary parabolic equation. Here \(\rho_0(x, t) \ge \rho_0 > 0\) is a given function, and \(r(x) \ge 0\) is an unknown function of the class \(L_\infty(\Omega)\). In addition to the initial and boundary conditions (the data of the direct problem), we pose the problem of nonlocal observation in the form \(\int_0^T u(x, t)\,d\mu(t) = \chi(x)\) with a known measure \(d\mu(t)\) and a known function \(\chi(x)\). We separately consider the case \(d\mu(t) = \omega(t)\,dt\) of integral observation with a smooth function \(\omega(t)\). We obtain sufficient conditions for the existence and uniqueness of the solution of the inverse problem, which have the form of ready-to-verify inequalities. We suggest an iterative procedure for finding the solution and prove its convergence. Examples of particular inverse problems for which the assumptions of our theorems hold are presented.

20.
We suppose K(w) to be the boundary of the closed convex hull of a sample path of \(Z_t(w)\), 0 ≤ t ≤ 1, of Brownian motion in d dimensions. A combinatorial result of Baxter and Barndorff-Nielsen on the convex hull of a random walk, and a limiting process utilizing results of P. Lévy on the continuity properties of \(Z_t(w)\), are used to show that the curvature of K(w) is concentrated on a metrically small set.
