Similar Literature
20 similar documents found.
1.
During the last decade, the state-of-the-art alternating direction method of multipliers (ADMM) has been used successfully to solve many two-block separable convex minimization problems arising in applied areas such as signal/image processing and statistical and machine learning. How to implement ADMM for three-block separable convex minimization problems, however, remains an interesting question: many objective functions in the above-mentioned areas are more conveniently decomposed into the sum of three convex functions, yet the straightforward extension of ADMM from the two-block case to the three-block case is not necessarily convergent. In this paper, we introduce a new algorithm, called the partially isochronous splitting algorithm (PISA), to implement ADMM for the three-block separable model. The main idea is to incorporate only one proximal term into the last subproblem of the extended ADMM, so that the resulting algorithm maximally inherits the promising properties of ADMM. A remarkable advantage over the extended ADMM is that two of the subproblems can be solved simultaneously, thereby exploiting the separable structure and parallel architectures. Theoretically, we establish the global convergence of the algorithm under standard conditions, as well as its O(1/t) convergence rate in both the ergodic and nonergodic senses, where t is the iteration counter. The computational competitiveness of the algorithm is demonstrated by numerical experiments on the well-tested robust principal component analysis model.
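For context, the straightforward three-block extension referred to above updates the blocks in Gauss–Seidel fashion: with \(\mathcal{L}_\beta\) the augmented Lagrangian of \(\min\{f_1(x_1)+f_2(x_2)+f_3(x_3) : A_1x_1+A_2x_2+A_3x_3=b\}\), one sweep reads as follows (a standard sketch; PISA's proximal modification of the last subproblem is not spelled out in the abstract):
\[
\begin{aligned}
x_1^{k+1} &= \operatorname*{argmin}_{x_1}\ \mathcal{L}_\beta(x_1,x_2^k,x_3^k,\lambda^k),\\
x_2^{k+1} &= \operatorname*{argmin}_{x_2}\ \mathcal{L}_\beta(x_1^{k+1},x_2,x_3^k,\lambda^k),\\
x_3^{k+1} &= \operatorname*{argmin}_{x_3}\ \mathcal{L}_\beta(x_1^{k+1},x_2^{k+1},x_3,\lambda^k),\\
\lambda^{k+1} &= \lambda^k-\beta\,(A_1x_1^{k+1}+A_2x_2^{k+1}+A_3x_3^{k+1}-b).
\end{aligned}
\]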

2.
Recently, the alternating direction method of multipliers (ADMM) has found many efficient applications in various areas; yet it has been shown that convergence is not guaranteed when ADMM is directly extended to the multiple-block case of separable convex minimization problems, in which the objective contains m ≥ 3 functions without coupled variables. This fact has given great impetus to investigating conditions on both the model and the algorithm's parameters that can ensure convergence of the direct extension of ADMM (abbreviated "e-ADMM"). Apart from some results under very strong conditions (e.g., at least (m − 1) functions should be strongly convex) that apply to the generic case with a general m, other works concentrate on the special case of m = 3 under the relatively milder condition that only one function is assumed to be strongly convex. We focus on extending the convergence analysis from the case of m = 3 to the more general case of m ≥ 3. That is, we show the convergence of e-ADMM for m ≥ 3 assuming that only (m − 2) of the functions are strongly convex, and we establish its convergence rates in different scenarios, including worst-case convergence rates measured by iteration complexity and a globally linear convergence rate under stronger assumptions. Thus the convergence of e-ADMM for the general case of m ≥ 4 is proved; this result seems to have been unknown, even though it is intuitive given the known result for m = 3. Even for the special case of m = 3, our convergence results turn out to be more general than the existing results derived specifically for that case.
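For reference, the augmented Lagrangian for the m-block model (standard notation, not specific to this paper) is
\[
\mathcal{L}_\beta(x_1,\dots,x_m,\lambda)=\sum_{i=1}^m f_i(x_i)-\lambda^{\top}\Big(\sum_{i=1}^m A_ix_i-b\Big)+\frac{\beta}{2}\Big\|\sum_{i=1}^m A_ix_i-b\Big\|^2,
\]
and e-ADMM minimizes \(\mathcal{L}_\beta\) over \(x_1,\dots,x_m\) one block at a time in Gauss–Seidel order, keeping the other blocks at their latest values, before updating \(\lambda\) exactly as in the three-block sweep sketched after item 1.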

3.
In this paper, we develop a parameterized proximal point algorithm (P-PPA) for solving a class of separable convex programming problems subject to linear and convex constraints. The proposed algorithm is provably globally convergent with a worst-case O(1/t) convergence rate, where t denotes the iteration number. With properly chosen algorithm parameters, numerical experiments on a sparse optimization problem arising in statistical learning show that P-PPA can perform significantly better than other state-of-the-art methods, such as the alternating direction method of multipliers and the relaxed proximal point algorithm.
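For context, the classical proximal point step that P-PPA parameterizes is (a standard sketch; the specific parameterized proximal metric used by P-PPA is not given in the abstract)
\[
x^{k+1}=\operatorname*{argmin}_{x}\ \Big\{\,f(x)+\frac{1}{2\mu}\,\|x-x^k\|^2\,\Big\},
\]
where \(\mu>0\) is the proximal parameter; replacing the squared Euclidean norm by a suitably parameterized quadratic is the kind of modification the name suggests.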

4.
The alternating direction method of multipliers (ADMM) has been proved to be effective for solving separable convex optimization subject to linear constraints. In this paper, we propose a generalized symmetric ADMM (GS-ADMM), which updates the Lagrange multiplier twice with suitable stepsizes, to solve multi-block separable convex programming. GS-ADMM partitions the data into two groups of variables, one consisting of p block variables and the other of q block variables, where \(p \ge 1\) and \(q \ge 1\) are two integers. The two groups are updated in a Gauss–Seidel scheme, while the variables within each group are updated in a Jacobi scheme, which makes the method very attractive for a big data setting; see the sketch below. By adding proper proximal terms to the subproblems, we specify the domain of the stepsizes that guarantees that GS-ADMM is globally convergent with a worst-case \({\mathcal {O}}(1/t)\) ergodic convergence rate. It turns out that our convergence domain of the stepsizes is significantly larger than other convergence domains in the literature. Hence, GS-ADMM is more flexible in choosing larger stepsizes for the dual variable. Besides, two special cases of GS-ADMM, which allow zero penalty terms, are also discussed and analyzed. Compared with several state-of-the-art methods, preliminary numerical experiments on a sparse matrix minimization problem in statistical learning show that the proposed method is effective and promising.
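Schematically, writing the two groups as \((x_1,\dots,x_p)\) and \((y_1,\dots,y_q)\), one GS-ADMM sweep has the following shape (a sketch of the update order only; the proximal terms and the exact stepsize domain specified in the paper are omitted):
\[
\begin{aligned}
&x_i^{k+1}=\operatorname*{argmin}_{x_i}\ \mathcal{L}_\beta(x_1^k,\dots,x_i,\dots,x_p^k,\,y^k,\,\lambda^k)\quad (i=1,\dots,p,\ \text{in parallel}),\\
&\lambda^{k+\frac12}=\lambda^k-\tau\beta\, r(x^{k+1},y^k),\\
&y_j^{k+1}=\operatorname*{argmin}_{y_j}\ \mathcal{L}_\beta(x^{k+1},\,y_1^k,\dots,y_j,\dots,y_q^k,\,\lambda^{k+\frac12})\quad (j=1,\dots,q,\ \text{in parallel}),\\
&\lambda^{k+1}=\lambda^{k+\frac12}-s\beta\, r(x^{k+1},y^{k+1}),
\end{aligned}
\]
where \(r(\cdot)\) denotes the linear-constraint residual and \(\tau, s\) are the two dual stepsizes.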

5.
In this paper, we study an inexact version of the alternating direction method of multipliers (ADMM) for solving two-block separable linearly constrained convex optimization problems. Specifically, the two subproblems in the classic ADMM are allowed to be solved inexactly according to certain relative error criteria, in the sense that only two parameters are needed to control the inexactness. The corresponding convergence analysis is established under the assumption that the solution set of the KKT system of the problem is nonempty. Numerical results on a class of sparse signal recovery problems are also provided to demonstrate the efficiency of the proposed algorithm.

6.
We analyze the convergence rate of the alternating direction method of multipliers (ADMM) for minimizing the sum of two or more nonsmooth convex separable functions subject to linear constraints. Previous analyses of the ADMM typically assume that the objective function is the sum of only two convex functions defined on two separable blocks of variables, even though the algorithm works well in numerical experiments for three or more blocks. Moreover, there has been no rate-of-convergence analysis for the ADMM without strong convexity in the objective function. In this paper we establish the global R-linear convergence of the ADMM for minimizing the sum of any number of convex separable functions, assuming that a certain error bound condition holds and that the dual stepsize is sufficiently small. Such an error bound condition is satisfied, for example, when the feasible set is a compact polyhedron and the objective function consists of a smooth strictly convex function composed with a linear mapping plus a nonsmooth \(\ell _1\) regularizer. This result implies the linear convergence of the ADMM for contemporary applications such as LASSO without assuming strong convexity of the objective function.
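Since the abstract singles out LASSO, here is a minimal numpy sketch of the standard two-block ADMM for \(\min_x \tfrac{1}{2}\|Ax-b\|^2+\mu\|x\|_1\) via the splitting x − z = 0 (the penalty rho, the regularization weight mu, and the iteration count below are illustrative assumptions, not values from the paper):

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding: the prox operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, b, mu, rho=1.0, iters=200):
    """Two-block ADMM for min 0.5*||Ax-b||^2 + mu*||z||_1  s.t.  x - z = 0."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)    # u is the scaled dual variable
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))  # factor once, reuse every iteration
    Atb = A.T @ b
    for _ in range(iters):
        q = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, q))  # x-update: ridge-type linear solve
        z = soft_threshold(x + u, mu / rho)              # z-update: prox of the l1 term
        u += x - z                                       # dual ascent on the constraint x = z
    return z

# Illustrative usage on synthetic sparse-recovery data
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100)
x_true[:5] = rng.standard_normal(5)
b = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = admm_lasso(A, b, mu=0.1)
```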

7.
The proximal point algorithm is classical and popular in the optimization community. In practice, inexact proximal point algorithms, which solve the involved proximal subproblems approximately subject to certain inexactness criteria, are the truly implementable versions. In this paper, we first propose an inexact proximal point algorithm with a new inexactness criterion for solving convex minimization, and show its O(1/k) iteration complexity. We then show that this inexact proximal point algorithm is eligible for acceleration by some influential acceleration schemes proposed by Nesterov. Accordingly, an accelerated inexact proximal point algorithm with an iteration complexity of \(O(1/k^2)\) is proposed.
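For context, a Nesterov-type acceleration of the proximal point step typically takes the following shape (a standard sketch in the spirit of Güler's accelerated PPA, with \(t_1=1\) and \(y^1=x^0\); the paper's inexactness criterion is layered on top of the subproblem solves):
\[
\begin{aligned}
x^{k} &= \operatorname*{argmin}_{x}\Big\{\,f(x)+\tfrac{1}{2\mu}\|x-y^{k}\|^{2}\,\Big\},\\
t_{k+1} &= \tfrac{1}{2}\Big(1+\sqrt{1+4t_k^{2}}\Big),\\
y^{k+1} &= x^{k}+\frac{t_k-1}{t_{k+1}}\,\big(x^{k}-x^{k-1}\big).
\end{aligned}
\]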

8.
We consider applying the Douglas–Rachford splitting method (DRSM) to the convex minimization problem with linear constraints and a separable objective function. The dual application of DRSM has been well studied in the literature, resulting in the well-known alternating direction method of multipliers (ADMM). In this paper, we show that the primal application of DRSM, in combination with an appropriate decomposition, can yield an efficient structure-exploiting algorithm for the model under consideration, whose subproblems can be easier than those of ADMM. Both the exact and inexact versions of this customized DRSM are studied, and their numerical efficiency is demonstrated by some preliminary numerical results.
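For reference, one standard form of the Douglas–Rachford iteration for \(\min_x f(x)+g(x)\) is (the paper's customized primal splitting for the linearly constrained separable model builds on this template)
\[
\begin{aligned}
y^{k} &= \operatorname{prox}_{\gamma f}(z^{k}),\\
z^{k+1} &= z^{k}+\operatorname{prox}_{\gamma g}\big(2y^{k}-z^{k}\big)-y^{k},
\end{aligned}
\]
with \(y^k\) converging, under standard assumptions, to a minimizer of \(f+g\).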

9.
A parallel Nesterov algorithm for solving the unconstrained minimization of large-scale partially separable convex functions is presented. The problem is first transformed into a linearly constrained minimization of a separable function. A fast projected gradient (Nesterov) method is then applied, yielding a decomposition method with an \(O(1/k^2)\) rate of convergence (where k is the iteration number). Preliminary numerical experiments show the efficiency of the proposed approach.

10.
We consider a scheduling problem where a set of n jobs has to be processed on a set of m machines and arbitrary precedence constraints between operations are given. Moreover, for any two operations i and j, values \(a_{ij}>0\) and \(a_{ji}>0\) may be given, where \(a_{ij}\) is the minimal difference between the starting times of operations i and j when operation i is processed first. Often the objective is to minimize the makespan, but we also consider arbitrary regular criteria. Even special cases of the classical job shop problem \(J//C_{\max}\) are NP-hard. Therefore, approximation or heuristic algorithms are necessary to handle large-dimension problems. Based on the mixed graph model, we give a heuristic decomposition algorithm for such a problem: the initial problem is partitioned into subproblems that can be solved exactly or approximately with a small error bound. These subproblems are obtained by relaxing a subset of the set of undirected edges of the mixed graph. The subproblems are solved successively, and a portion of the results obtained for one subproblem is kept for defining further subproblems. Numerical results for the algorithm presented here are given.

11.
The majority of first-order methods for large-scale convex–concave saddle point problems and variational inequalities with monotone operators are proximal algorithms. To make such an algorithm practical, the problem's domain should be proximal-friendly, i.e., admit a strongly convex function whose linear perturbations are easy to minimize. As a by-product, such a domain admits a computationally cheap linear minimization oracle (LMO) capable of minimizing linear forms. There are, however, important situations where a cheap LMO is indeed available but the problem domain is not proximal-friendly, which motivates the search for algorithms based solely on an LMO. For smooth convex minimization, there exists a classical algorithm using an LMO: the conditional gradient method. In contrast, the similar techniques known to us for other problems with convex structure (nonsmooth convex minimization, convex–concave saddle point problems, even ones as simple as bilinear, and variational inequalities with monotone operators, even ones as simple as affine) are quite recent and utilize a common approach based on Fenchel-type representations of the associated objectives/vector fields. The goal of this paper is to develop alternative (and seemingly much simpler) decomposition techniques based on an LMO for bilinear saddle point problems and for variational inequalities with affine monotone operators.
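For context, the classical conditional gradient (Frank–Wolfe) step over a domain D, in which the argmin below is exactly one LMO call, reads
\[
s^{k}=\operatorname*{argmin}_{s\in D}\ \langle \nabla f(x^{k}),\,s\rangle,\qquad
x^{k+1}=(1-\gamma_k)\,x^{k}+\gamma_k\,s^{k},\qquad \gamma_k=\tfrac{2}{k+2}.
\]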

12.
Generalizing both mixed-integer linear optimization and convex optimization, mixed-integer convex optimization possesses broad modeling power but has seen relatively few advances in general-purpose solvers in recent years. In this paper, we intend to provide a broadly accessible introduction to our recent work in developing algorithms and software for this problem class. Our approach is based on constructing polyhedral outer approximations of the convex constraints, resulting in a global solution by solving a finite number of mixed-integer linear and continuous convex subproblems. The key advance we present is to strengthen the polyhedral approximations by constructing them in a higher-dimensional space. In order to automate this extended formulation we rely on the algebraic modeling technique of disciplined convex programming (DCP), and for generality and ease of implementation we use conic representations of the convex constraints. Although our framework requires a manual translation of existing models into DCP form, after performing this transformation on the MINLPLIB2 benchmark library we were able to solve a number of unsolved instances and on many other instances achieve superior performance compared with state-of-the-art solvers like Bonmin, SCIP, and Artelys Knitro.

13.
We consider quadratic programs with pure general integer variables. The objective function is quadratic and convex and the constraints are linear. An exact solution approach is proposed, decomposed into two phases. In the first phase, the initial problem is reformulated into an equivalent problem with a separable objective function. This is done by means of a Gauss decomposition of the Hessian matrix of the initial problem and requires the addition of some continuous variables and constraints. In the second phase, the reformulated problem is linearized by approximating each squared term with a set of K linear functions that correspond to the tangents of a hyperbola at K points. We give a proof of the intuitive property that when K is large enough, the optimal value of the obtained linear program is very close to the optimal value of the two previous problems, the initial problem and the reformulated separable problem. The remainder is dedicated to the implementation of a branch-and-bound algorithm for the solution of the linearized problem, and its application to a set of instances. Several points are considered, among them the choice of the right value of the parameter K and the implementation of a sophisticated heuristic solution algorithm. The numerical comparison is done with CPLEX 12.2 since, in this case, both the initial problem and the problem reformulated by the first step can be solved by CPLEX. We show that with our approach, the total CPU time is divided by a factor ranging from 1.2 to 131.6 for instances with 40–60 variables.
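To illustrate the tangent-based linearization in the second phase: a differentiable convex function \(\varphi\) lies above each of its tangents, so picking K tangency points \(a_1,\dots,a_K\) gives the piecewise-linear underestimate (a generic sketch of the principle; the paper applies it to the squared terms via tangents of a hyperbola)
\[
\varphi(x)\;\ge\;\max_{1\le i\le K}\ \big\{\varphi(a_i)+\varphi'(a_i)\,(x-a_i)\big\},
\]
and the gap shrinks as K grows, which is the intuition behind the near-optimality of the linearized program for large K.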

14.
We propose two series of number-theoretic problems with explicitly marked-out parameters related to discongruences modulo m. We find parameter constraints that provide NP-completeness for every problem of each series. For any m > 2, we prove the NP-completeness of the problem of verifying the consistency of a system of linear discongruences modulo m in which every discongruence contains exactly three variables, including the case where the coefficients belong to {–1, 1}. For any m > 3, we prove the NP-completeness of the problem of verifying the consistency of a system of linear discongruences modulo m in which every discongruence contains exactly two variables. If P ≠ NP, then the term "2-discongruence" cannot be replaced by "1-discongruence" in the statements of the proven theorems.

15.
The optimal solution set of the interval linear programming problems
Several methods exist for solving the interval linear programming (ILP) problem. Most of them yield only the optimal value of the objective function of the ILP problem. In this paper we determine the optimal solution set of the ILP as the intersection of some regions, by the best and the worst case (BWC) methods, when the feasible solution components of the best problem are positive. First, we convert the ILP problem to a convex combination problem with coefficients \(0 \le \lambda_j, \mu_{ij}, \mu_i \le 1\), for i = 1, 2, . . . , m and j = 1, 2, . . . , n. If \(\mu_{ij} = \mu_i = \lambda_j = 0\) for each i, j, then the best problem is obtained (in the case of a minimization problem). We move from the best problem towards the worst problem by tiny variations of \(\lambda_j\), \(\mu_{ij}\), and \(\mu_i\) from 0 to 1, and solve each of the obtained problems. All of the optimal solutions form a region that we call the optimal solution set of the ILP. Our aim is to determine this optimal solution set from the best and the worst problem constraints. We prove several theorems establishing the validity of this optimal solution set.

16.
The elastic energy of a convex curve bounding a planar convex body \(\Omega \) is defined by \(E(\Omega )=\frac{1}{2}\,\int _{\partial \Omega } k^2(s)\,ds\), where k(s) is the curvature of the boundary. In this paper we are interested in the minimization of \(E(\Omega )\) under a constraint on the inradius of \(\Omega \). In contrast to all the other minimization problems involving this elastic energy (with perimeter, area, diameter, or circumradius constraints), for which the solution is always the disk, we prove here that the solution of this minimization problem is not the disk, and we completely characterize it in terms of elementary functions.

17.
We give a unified method to obtain the conservativeness of a class of Markov processes associated with lower bounded semi-Dirichlet forms on \(L^2(X;m)\), including symmetric diffusion processes, some non-symmetric diffusion processes, and jump-type Markov processes on X, where X is a locally compact separable metric space and m is a positive Radon measure on X with full topological support. Using the method, we give an example in each section in which conservativeness of the process is provided by the increasingness of the volume of certain sets (balls) and of the coefficients of the Markov process on those sets.

18.
We study the min-cost chain-constrained spanning-tree (MCCST) problem: find a min-cost spanning tree in a graph subject to degree constraints on a nested family of node sets. We devise the first polytime algorithm that finds a spanning tree that (i) violates the degree constraints by at most a constant factor and (ii) whose cost is within a constant factor of the optimum. Previously, only an algorithm for unweighted CCST was known (Olver and Zenklusen in Proceedings of the 16th IPCO, pp 324–335, 2013), which satisfied (i) but did not yield any cost bounds. This also yields the first result that obtains an O(1)-factor for both the cost approximation and the violation of degree constraints for any spanning-tree problem with general degree bounds on node sets in which an edge participates in a super-constant number of degree constraints. A notable feature of our algorithm is that we reduce MCCST to unweighted CCST (and then utilize Olver and Zenklusen, cited above) via a novel application of Lagrangian duality to simplify the cost structure of the underlying problem and obtain a decomposition into certain uniform-cost subproblems. We show that this Lagrangian-relaxation based idea is in fact applicable more generally and, for any cost-minimization problem with packing side-constraints, yields a reduction from the weighted to the unweighted problem. We believe that this reduction is of independent interest. As another application of our technique, we consider the k-budgeted matroid basis problem, where we build upon a recent rounding algorithm of Bansal and Nagarajan (Proceedings of IPCO 2016, arXiv:1512.02254) to obtain an improved \(n^{O(k^{1.5}/\epsilon )}\)-time algorithm that returns a solution that satisfies (any) one of the budget constraints exactly and incurs a \((1+\epsilon )\)-violation of the other budget constraints.

19.
Let m, m′, n be positive integers with m ≥ m′. Let \(\mathcal{A}\) be an mth-order n-dimensional tensor, and let \(\mathcal{B}\) be an m′th-order n-dimensional tensor. A scalar \(\lambda \in \mathbb{C}\) is called a \(\mathcal{B}\)-eigenvalue of \(\mathcal{A}\) if \(\mathcal{A}x^{m-1} = \lambda \mathcal{B}x^{m'-1}\) and \(\mathcal{B}x^{m'} = 1\) for some \(x \in \mathbb{C}^n \setminus \{0\}\). In this paper, we propose a linear homotopy method for solving this eigenproblem. We prove that the method finds all isolated \(\mathcal{B}\)-eigenpairs. Moreover, it is easy to implement. Numerical results are provided to show the efficiency of the proposed method.

20.
In this paper, we propose and analyze an accelerated augmented Lagrangian method (denoted by AALM) for solving linearly constrained convex programming. We show that the convergence rate of AALM is \(O(1/k^2)\), whereas the convergence rate of the classical augmented Lagrangian method (ALM) is O(1/k). Numerical experiments on the linearly constrained \(\ell_1\)-\(\ell_2\) minimization problem are presented to demonstrate the effectiveness of AALM.
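For reference, the classical ALM iteration being accelerated is, in its standard form for \(\min\{f(x): Ax=b\}\) (the Nesterov-type modification that AALM adds is not detailed in the abstract),
\[
x^{k+1}=\operatorname*{argmin}_{x}\ \Big\{\,f(x)-\langle\lambda^{k},Ax-b\rangle+\tfrac{\beta}{2}\|Ax-b\|^{2}\,\Big\},\qquad
\lambda^{k+1}=\lambda^{k}-\beta\,(Ax^{k+1}-b).
\]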
