Similar Documents
20 similar documents found (search took 31 ms)
1.
This paper focuses on a distributed optimization problem associated with a time-varying multi-agent network with quantized communication, where each agent has local access to its convex objective function, and the agents cooperatively minimize the sum of these functions over the network. Based on subgradient methods, we propose a distributed algorithm to solve this problem under the additional constraint that agents can only communicate quantized information through the network. We consider two kinds of quantizers and analyze the quantization effects on the convergence of the algorithm. Furthermore, we provide explicit error bounds on the convergence rates that highlight the dependence on the quantization levels. Finally, some simulation results on an ℓ1-regression problem are presented to demonstrate the performance of the algorithm. Copyright © 2016 John Wiley & Sons, Ltd.
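A minimal numerical sketch of the idea, assuming a ring of five agents with absolute-value objectives, a doubly stochastic mixing matrix, a uniform quantizer, and a diminishing stepsize (all illustrative choices, not the paper's exact scheme):

import numpy as np

def quantize(x, delta=0.01):
    # uniform quantizer with resolution delta (one of many possible quantizers)
    return delta * np.round(x / delta)

n, T = 5, 3000
a = np.array([1.0, 2.0, 3.0, 4.0, 5.0])     # agent i holds f_i(x) = |x - a[i]|
# ring topology: each agent averages itself with its two neighbors
W = 0.5 * np.eye(n) + 0.25 * (np.roll(np.eye(n), 1, axis=0) + np.roll(np.eye(n), -1, axis=0))
x = np.zeros(n)
for k in range(1, T + 1):
    q = quantize(x)                          # only quantized values cross the network
    x = W @ q - (1.0 / np.sqrt(k)) * np.sign(x - a)  # mix neighbors, then local subgradient step
print(x)  # all agents approach the median of a (3.0), up to a quantization-level error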

2.
We consider a distributed multi-agent network system where the goal is to minimize a sum of convex objective functions of the agents subject to a common convex constraint set. Each agent maintains an iterate sequence and communicates the iterates to its neighbors. Then, each agent combines weighted averages of the received iterates with its own iterate, and adjusts the iterate by using subgradient information (known with stochastic errors) of its own function and by projecting onto the constraint set.
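A sketch of one such iteration under illustrative assumptions (complete graph with uniform weights, quadratic local objectives, a box constraint, and additive Gaussian subgradient errors):

import numpy as np

rng = np.random.default_rng(0)
a = np.array([-2.0, 0.5, 1.5])            # agent i holds f_i(x) = 0.5 * (x - a[i])^2
n, T = len(a), 5000
W = np.full((n, n), 1.0 / n)              # uniform weights on a complete graph
x = np.zeros(n)
for k in range(1, T + 1):
    v = W @ x                             # combine weighted averages of received iterates
    g = (v - a) + 0.1 * rng.standard_normal(n)  # subgradient known with stochastic errors
    x = np.clip(v - (1.0 / k) * g, -1.0, 1.0)   # step, then project onto the box [-1, 1]
print(x)  # each agent approaches the constrained minimizer of the sum, here x* = 0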

3.
This paper deals with iterative gradient and subgradient methods with random feasibility steps for solving constrained convex minimization problems, where the constraint set is specified as the intersection of possibly infinitely many constraint sets. Each constraint set is assumed to be given as a level set of a convex but not necessarily differentiable function. The proposed algorithms are applicable to the situation where the whole constraint set of the problem is not known in advance, but it is rather learned in time through observations. Also, the algorithms are of interest for constrained optimization problems where the constraints are known but the number of constraints is either large or not finite. We analyze the proposed algorithm for the case when the objective function is differentiable with Lipschitz gradients and the case when the objective function is not necessarily differentiable. The behavior of the algorithm is investigated both for diminishing and non-diminishing stepsize values. The almost sure convergence to an optimal solution is established for diminishing stepsize. For non-diminishing stepsize, the error bounds are established for the expected distances of the weighted averages of the iterates from the constraint set, as well as for the expected sub-optimality of the function values along the weighted averages.
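A sketch with halfspace level sets sampled one at a time, so that the full constraint set is never used explicitly; the data and the Polyak-type feasibility step are illustrative:

import numpy as np

rng = np.random.default_rng(1)
# minimize f(x) = ||x - z||^2 over the intersection of 100 halfspaces A[j].x <= 1
z = np.array([2.0, 2.0])
A = rng.standard_normal((100, 2))
A /= np.linalg.norm(A, axis=1, keepdims=True)   # unit normals; the set contains the origin
x = np.zeros(2)
for k in range(1, 3000):
    x = x - (1.0 / k) * 2.0 * (x - z)    # gradient step on the objective
    j = rng.integers(100)                # random feasibility step on one observed constraint
    viol = A[j] @ x - 1.0
    if viol > 0:
        x = x - viol * A[j]              # Polyak projection; exact since ||A[j]|| = 1
print(x)  # approaches the projection of z onto the intersection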

4.
In this paper, we consider a generic inexact subgradient algorithm to solve a nondifferentiable quasi-convex constrained optimization problem. The inexactness stems from computation errors and noise, which arise from practical considerations and applications. Assuming that the computational errors and noise are deterministic and bounded, we study the effect of the inexactness on the subgradient method when the constraint set is compact or the objective function has a set of generalized weak sharp minima. In both cases, using the constant and diminishing stepsize rules, we describe convergence results in both objective values and iterates, and finite convergence to approximate optimality. We also investigate efficiency estimates of iterates and apply the inexact subgradient algorithm to solve the Cobb–Douglas production efficiency problem. The numerical results verify our theoretical analysis and show the high efficiency of our proposed algorithm, especially for large-scale problems.
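A one-dimensional sketch with a quasi-convex objective and a bounded deterministic perturbation of the subgradient direction (the noise model and the stepsize rule are illustrative):

import numpy as np

# f(x) = sqrt(|x - 3|) is quasi-convex; sign(x - 3) is a normalized quasi-subgradient direction
eps, x = 0.05, 10.0
for k in range(1, 5000):
    g = np.sign(x - 3.0) + eps * np.cos(k)   # computation error bounded by eps
    x -= (1.0 / k) * g                       # diminishing stepsize rule
print(x)  # converges to a neighborhood of 3 whose radius is driven by eps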

5.
This paper considers a distributed optimization problem encountered in a time-varying multi-agent network, where each agent has local access to its convex objective function, and cooperatively minimizes a sum of convex objective functions of the agents over the network. Based on the mirror descent method, we develop a distributed algorithm by utilizing the subgradient information with stochastic errors. We first analyze the effects of stochastic errors on the convergence of the algorithm and then provide an explicit bound on the convergence rate as a function of the error bound and the number of iterations. Our results show that the algorithm asymptotically converges to the optimal value of the problem within an error level when there are stochastic errors in the subgradient evaluations. The proposed algorithm can be viewed as a generalization of the distributed subgradient projection methods, since it utilizes the more general Bregman divergence instead of the Euclidean squared distance. Finally, some simulation results on a regularized hinge regression problem are presented to illustrate the effectiveness of the algorithm.
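A sketch of the method with the entropy mirror map on the probability simplex, where the Bregman update reduces to an exponentiated-gradient step; the network, weights, linear local objectives, and noise level are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(2)
n_agents, d, T = 4, 3, 3000
C = rng.standard_normal((n_agents, d))       # agent i holds f_i(x) = C[i] . x
W = np.full((n_agents, n_agents), 1.0 / n_agents)
X = np.full((n_agents, d), 1.0 / d)          # all iterates start at the simplex center
for k in range(1, T + 1):
    Y = W @ X                                # consensus step: mix neighbors' iterates
    G = C + 0.1 * rng.standard_normal((n_agents, d))  # subgradients with stochastic errors
    X = Y * np.exp(-(1.0 / np.sqrt(k)) * G)  # entropy-Bregman (mirror descent) update ...
    X /= X.sum(axis=1, keepdims=True)        # ... followed by renormalization onto the simplex
print(X.mean(axis=0))  # mass concentrates on the coordinate minimizing sum_i C[i]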

6.
We study subgradient methods for computing the saddle points of a convex-concave function. Our motivation comes from networking applications where dual and primal-dual subgradient methods have attracted much attention in the design of decentralized network protocols. We first present a subgradient algorithm for generating approximate saddle points and provide per-iteration convergence rate estimates on the constructed solutions. We then focus on Lagrangian duality, where we consider a convex primal optimization problem and its Lagrangian dual problem, and generate approximate primal-dual optimal solutions as approximate saddle points of the Lagrangian function. We present a variation of our subgradient method under the Slater constraint qualification and provide stronger estimates on the convergence rate of the generated primal sequences. In particular, we provide bounds on the amount of feasibility violation and on the primal objective function values at the approximate solutions. Our algorithm is particularly well-suited for problems where the subgradient of the dual function cannot be evaluated easily (equivalently, the minimum of the Lagrangian function at a dual solution cannot be computed efficiently), thus impeding the use of dual subgradient methods.
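A primal-dual subgradient sketch on the Lagrangian of the toy problem min x^2 subject to x >= 1, whose saddle point is (x*, mu*) = (1, 2); averaging the iterates yields the approximate saddle points the abstract refers to:

import numpy as np

# L(x, mu) = x^2 + mu * (1 - x); descend in x, ascend in mu, keep mu >= 0
T, alpha = 20000, 0.001
x, mu = 0.0, 0.0
xs, mus = [], []
for _ in range(T):
    x, mu = x - alpha * (2 * x - mu), max(0.0, mu + alpha * (1 - x))
    xs.append(x)
    mus.append(mu)
print(np.mean(xs), np.mean(mus))  # averaged iterates approach (1.0, 2.0)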

7.
We consider convex stochastic optimization problems under different assumptions on the properties of the available stochastic subgradient. It is known that, if the value of the objective function is available, one can obtain, in parallel, several independent approximate solutions in terms of the expected objective residual. Then, choosing the solution with the minimum function value, one can control the probability of large deviation of the objective residual. In contrast, in this short paper we address the situation when the value of the objective function is unavailable or is too expensive to calculate. Under a "light-tail" assumption on the stochastic subgradient, and in the general case with moderate large-deviation probability, we show that parallelization combined with averaging gives bounds on the probability of large deviation similar to those of a serial method. Thus, in these cases, one can benefit from parallel computations and reduce the computational time without loss in solution quality.
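A sketch of the parallelization-with-averaging scheme: m independent stochastic-subgradient runs are launched, no objective values are ever evaluated, and the outputs are averaged; the test problem and the stepsize are illustrative:

import numpy as np

def one_run(seed, T=4000):
    # one serial stochastic-subgradient trajectory for f(x) = |x - 1|
    rng = np.random.default_rng(seed)
    x, avg = 0.0, 0.0
    for k in range(1, T + 1):
        g = np.sign(x - 1.0) + rng.standard_normal()  # stochastic subgradient
        x -= g / np.sqrt(T)
        avg += (x - avg) / k                          # running average of the iterates
    return avg

estimates = [one_run(s) for s in range(16)]  # trivially parallelizable across workers
print(np.mean(estimates))  # averaging tightens the large-deviation behavior around x* = 1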

8.
In this article, we consider a mini-max multi-agent optimization problem where multiple agents cooperatively optimize a sum of local convex–concave functions, each of which is available to one specific agent in a network. To solve the problem, we propose a distributed optimization method by extending classical mirror descent algorithms to the distributed setting. We obtain convergence of the algorithm under mild conditions: the agent communication follows a directed graph and the related weight matrices are row stochastic. In particular, when the weight matrices are restricted to be doubly stochastic, we provide the explicit convergence rate of the algorithm by choosing the stepsize in a suitable way. The proposed algorithm can be viewed as a generalization of the subgradient projection methods, since it utilizes a customized Bregman divergence instead of the usual Euclidean squared distance. Finally, some simulation results on a matrix game are presented to illustrate the performance of the algorithm. © 2016 Wiley Periodicals, Inc. Complexity 21: 178–190, 2016
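A single-agent sketch of mirror descent on a convex-concave matrix-game objective, with the entropy mirror map on both strategy simplices; the paper's distributed consensus layer is omitted, and the game itself is an illustrative assumption:

import numpy as np

B = np.array([[0.0, 1.0, -1.0],
              [-1.0, 0.0, 1.0],
              [1.0, -1.0, 0.0]])     # rock-paper-scissors payoff; equilibrium is uniform
x = np.ones(3) / 3                   # minimizing player's mixed strategy
y = np.ones(3) / 3                   # maximizing player's mixed strategy
xa, ya = np.zeros(3), np.zeros(3)
T = 5000
for k in range(1, T + 1):
    eta = 1.0 / np.sqrt(k)
    gx, gy = B @ y, B.T @ x                  # gradients of the bilinear payoff x.B.y
    x = x * np.exp(-eta * gx); x /= x.sum()  # entropy mirror descent step
    y = y * np.exp(+eta * gy); y /= y.sum()  # entropy mirror ascent step
    xa += x; ya += y
print(xa / T, ya / T)  # averaged strategies approach the uniform equilibrium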

9.
In this paper, we study the influence of noise on subgradient methods for convex constrained optimization. The noise may be due to various sources, and is manifested in inexact computation of the subgradients and function values. Assuming that the noise is deterministic and bounded, we discuss the convergence properties for two cases: the case where the constraint set is compact, and the case where this set need not be compact but the objective function has a sharp set of minima (for example, when the function is polyhedral). In both cases, using several different stepsize rules, we prove convergence to the optimal value within some tolerance that is given explicitly in terms of the errors. In the first case, the tolerance is nonzero, but in the second case, the optimal value can be obtained exactly, provided the size of the error in the subgradient computation is below some threshold. We then extend these results to objective functions that are the sum of a large number of convex functions, in which case an incremental subgradient method can be used.
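A sketch of the incremental variant mentioned at the end, cycling through the components of f(x) = sum_i |x - a_i| with a small bounded error injected into each component subgradient (the error model is illustrative):

import numpy as np

a = np.array([1.0, 2.0, 4.0, 7.0, 9.0])
x, eps = 0.0, 0.02
for k in range(1, 3000):
    for i in range(len(a)):                              # one incremental pass per cycle
        g = np.sign(x - a[i]) + eps * (-1.0) ** (k + i)  # inexact component subgradient
        x -= (1.0 / k) * g
print(x)  # approaches the median of a (4.0) up to a tolerance driven by eps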

10.
To prove convergence of numerical methods for stiff initial value problems, stability is needed but also estimates for the local errors which are not affected by stiffness. In this paper global error bounds are derived for one-leg and linear multistep methods applied to classes of arbitrarily stiff, nonlinear initial value problems. It will be shown that under suitable stability assumptions the multistep methods are convergent for stiff problems with the same order of convergence as for nonstiff problems, provided that the stepsize variation is sufficiently regular.
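As a sketch, here is BDF2, a stiff-stable linear multistep method, on the Prothero-Robinson test problem y' = lam * (y - cos t) - sin t with exact solution cos t; the global error stays of size h^2 even for lam = -10^6 (BDF2 is one example of the class studied, not the paper's specific method):

import numpy as np

lam, h, N = -1e6, 0.01, 1000
t = h * np.arange(N + 1)
y = np.empty(N + 1)
y[0] = 1.0
y[1] = np.cos(t[1])                 # start-up value taken from the exact solution
for n in range(N - 1):
    # BDF2: y_{n+2} = (4*y_{n+1} - y_n)/3 + (2h/3) * f(t_{n+2}, y_{n+2});
    # the implicit equation is linear in y_{n+2}, so it is solved in closed form
    rhs = (4 * y[n + 1] - y[n]) / 3 + (2 * h / 3) * (-lam * np.cos(t[n + 2]) - np.sin(t[n + 2]))
    y[n + 2] = rhs / (1 - (2 * h / 3) * lam)
print(np.max(np.abs(y - np.cos(t))))  # small global error, unaffected by the stiffness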

11.
A readily implementable algorithm is given for minimizing a (possibly nondifferentiable and nonconvex) locally Lipschitz continuous function f subject to linear constraints. At each iteration a polyhedral approximation to f is constructed from a few previously computed subgradients and an aggregate subgradient, which accumulates the past subgradient information. This approximation and the linear constraints generate the constraints of the search-direction-finding subproblem, which is a quadratic programming problem. Then a stepsize is found by an approximate line search. All of the algorithm's accumulation points are stationary. Moreover, the algorithm converges when f happens to be convex.
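A simplified relative of this scheme is Kelley's cutting-plane method: it builds the same kind of polyhedral model from past subgradients, but solves an LP master problem over a box instead of the stabilized QP with an aggregate subgradient used here. A sketch on a convex nondifferentiable toy function (SciPy's linprog and the box bounds are assumptions):

import numpy as np
from scipy.optimize import linprog

f = lambda x: abs(x - 1.0) + 0.5 * abs(x + 2.0)          # minimizer x* = 1
g = lambda x: np.sign(x - 1.0) + 0.5 * np.sign(x + 2.0)  # one subgradient at x

cuts = []                    # (x_i, f(x_i), g_i) triples defining the polyhedral model
x = 4.0
for _ in range(30):
    cuts.append((x, f(x), g(x)))
    # master problem in (x, t): minimize t subject to t >= f_i + g_i * (x - x_i)
    A = [[gi, -1.0] for (xi, fi, gi) in cuts]
    b = [gi * xi - fi for (xi, fi, gi) in cuts]
    res = linprog(c=[0.0, 1.0], A_ub=A, b_ub=b, bounds=[(-5.0, 5.0), (None, None)])
    x = res.x[0]
print(x, f(x))  # x approaches the minimizer of the polyhedral model, hence of f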

12.
A stochastic subgradient algorithm for solving convex stochastic approximation problems is considered. In the algorithm, the stepsize coefficients are controlled on-line, on the basis of information gathered in the course of the computations, according to a new, complete feedback rule derived from the concept of a regularized improvement function. Convergence with probability 1 of the method is established. This work was supported by Project No. CPBP/02.15.
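A sketch of on-line stepsize feedback in this spirit, using a simple sign-correlation heuristic: grow the stepsize while successive stochastic subgradients agree, shrink it when they oscillate (the paper's actual rule is derived from a regularized improvement function and is not reproduced here):

import numpy as np

rng = np.random.default_rng(3)
x, step, g_prev = 8.0, 1.0, 0.0
for _ in range(5000):
    g = np.sign(x - 2.0) + 0.5 * rng.standard_normal()  # stochastic subgradient of |x - 2|
    step *= 1.05 if g * g_prev > 0 else 0.7             # feedback from gathered information
    x -= step * g
    g_prev = g
print(x, step)  # x hovers near 2.0 while the stepsize adapts downward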

13.
In this paper, we consider the split feasibility problem (SFP) in infinite-dimensional Hilbert spaces and propose some subgradient extragradient-type algorithms for finding a common element of the fixed-point set of a strict pseudocontraction mapping and the solution set of a split feasibility problem by adopting an Armijo-like stepsize rule. We derive convergence results under mild assumptions. Our results improve some known results from the literature. Copyright © 2016 John Wiley & Sons, Ltd.
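A sketch of the underlying CQ-type iteration for the split feasibility problem, with C the unit ball and Q a box; for brevity, a fixed stepsize gamma < 2/||A||^2 replaces the paper's Armijo-like backtracking, and the strict pseudocontraction's fixed-point constraint is omitted:

import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 2))

def P_C(x):                        # projection onto the unit ball C
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

def P_Q(y):                        # projection onto the box Q = [0, 1]^3
    return np.clip(y, 0.0, 1.0)

gamma = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.array([5.0, -5.0])
for _ in range(500):
    r = A @ x - P_Q(A @ x)           # residual of the Q-constraint
    x = P_C(x - gamma * (A.T @ r))   # gradient-projection (CQ) step
print(x, np.linalg.norm(A @ x - P_Q(A @ x)))  # residual tends to 0 when the SFP is solvable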

14.
In this paper, upper and lower bounds are established for the Dini directional derivatives of the marginal function of an inequality-constrained mathematical program with right-hand-side perturbations. A nonsmooth analogue of the Cottle constraint qualification is assumed, but the objective and constraint functions are not assumed to be differentiable, convex, or locally Lipschitzian. Our upper bound sharpens previous results from the locally Lipschitzian case by means of a subgradient smaller than the Clarke generalized gradient. Examples demonstrate, however, that a corresponding strengthening of the lower bound is not possible. Corollaries of this work include general criteria for exactness of penalty functions as well as information on the relationship between calmness and other constraint qualifications in nonsmooth optimization. The author is grateful for the helpful comments of a referee.
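For reference, the objects the abstract refers to are (standard definitions only; the paper's bounds additionally involve its constraint qualification and subgradient constructions):

v(b) = \inf\{\, f(x) : g_i(x) \le b_i,\ i = 1,\dots,m \,\}                % marginal function
D^{+}v(b; d) = \limsup_{t \downarrow 0} \frac{v(b + t d) - v(b)}{t}       % upper Dini derivative
D_{+}v(b; d) = \liminf_{t \downarrow 0} \frac{v(b + t d) - v(b)}{t}       % lower Dini derivative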

15.
We obtain local estimates of the distance to a set defined by equality constraints under assumptions which are weaker than those previously used in the literature. Specifically, we assume that the constraints mapping has a Lipschitzian derivative, and satisfies a certain 2-regularity condition at the point under consideration. This setting directly subsumes the classical regular case and the twice differentiable 2-regular case, for which error bounds are known, but it is significantly richer than either of these two cases. When applied to a certain equation-based reformulation of the nonlinear complementarity problem, our results yield an error bound under an assumption more general than b-regularity. The latter appears to be the weakest assumption under which a local error bound for complementarity problems was previously available. We also discuss an application of our results to the convergence rate analysis of the exterior penalty method for solving irregular problems. Received: February 2000 / Accepted: November 2000 / Published online: January 17, 2001

16.
Piecewise affine functions arise from Lagrangian duals of integer programming problems, and optimizing them provides good bounds for use in a branch and bound method. Methods such as the subgradient method and bundle methods assume only one subgradient is available at each point, but in many situations there is more information available. We present a new method for optimizing such functions, which is related to steepest descent, but uses an outer approximation to the subdifferential to avoid some of the numerical problems with the steepest descent approach. We provide convergence results for a class of outer approximations, and then develop a practical algorithm using such an approximation for the compact dual to the linear programming relaxation of the uncapacitated facility location problem. We make a numerical comparison of our outer approximation method with the projection method of Conn and Cornuéjols, and the bundle method of Schramm and Zowe. Received: September 10, 1998 / Revised version received: August 1999 / Published online: December 15, 1999

17.
Optimization, 2012, 61(11): 2099-2124
In this paper, we propose new subgradient extragradient methods for finding a solution of a strongly monotone equilibrium problem over the solution set of another monotone equilibrium problem, which is usually called a monotone bilevel equilibrium problem, in Hilbert spaces. The first proposed algorithm is based on the subgradient extragradient method presented by Censor et al. [Censor Y, Gibali A, Reich S. The subgradient extragradient method for solving variational inequalities in Hilbert space. J Optim Theory Appl. 2011;148:318–335]. The strong convergence of the algorithm is established under monotonicity assumptions on the cost bifunctions, with Lipschitz-type continuity conditions recently presented by Mastroeni in the auxiliary problem principle. We also present a modification of the algorithm for solving an equilibrium problem whose constraint domain is the common solution set of another equilibrium problem and a fixed point problem. Several numerical experiments are provided to illustrate the behaviour of the algorithms and to compare them with others.
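A sketch of the building block named in the abstract, here in its classical extragradient form for a monotone Lipschitz variational inequality VI(F, C); the operator, the box C, and the stepsize are illustrative, and the cited subgradient extragradient method replaces the second projection onto C by a cheaper halfspace projection:

import numpy as np

M = np.array([[0.0, 1.0], [-1.0, 0.0]])   # skew-symmetric, hence monotone and Lipschitz
q = np.array([-1.0, 0.5])
F = lambda x: M @ x + q                   # F(x*) = 0 at the solution x* = (0.5, 1)
P_C = lambda x: np.clip(x, -2.0, 2.0)     # projection onto the box C = [-2, 2]^2

tau = 0.4 / np.linalg.norm(M, 2)          # stepsize below 1/(Lipschitz constant)
x = np.array([2.0, 2.0])
for _ in range(2000):
    y = P_C(x - tau * F(x))               # predictor (extrapolation) step
    x = P_C(x - tau * F(y))               # corrector step evaluates F at the predictor
print(x, np.linalg.norm(x - P_C(x - F(x))))  # natural residual, ~0 at a solution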

18.
Consider a multiclass stochastic network with state-dependent service rates and arrival rates describing bandwidth-sharing mechanisms as well as admission control and/or load balancing schemes. Given Poisson arrivals and exponential service requirements, the number of customers in the network evolves as a multi-dimensional birth-and-death process on a finite subset of ℕ^k. We assume that the death (i.e., service) rates and the birth rates, which depend on the whole state of the system, satisfy a local balance condition. This makes the resulting network a Whittle network, and the stochastic process describing the state of the network is reversible with an explicit stationary distribution that is in fact insensitive to the service time distribution. Given routing constraints, we are interested in the optimal performance of such networks. We first construct bounds for generic performance criteria that can be evaluated using recursive procedures, these bounds being attained in the case of a unique arrival process. We then study the case of several arrival processes, focusing in particular on the instance with admission control only. Building on convexity properties, we characterize the optimal policy, and give criteria on the service rates for which our bounds are again attained.
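A sketch of the reversibility argument on a single birth-and-death chain: detailed balance, pi(n) * birth(n) = pi(n+1) * death(n+1), gives the stationary distribution in closed form; the state-dependent rates below are an illustrative instance, not the paper's multiclass model:

import numpy as np

K = 20
birth = lambda n: 4.0                 # Poisson arrivals, admitted while n < K
death = lambda n: 5.0 * min(n, 3)     # e.g. three exponential servers

w = np.ones(K + 1)
for n in range(K):
    w[n + 1] = w[n] * birth(n) / death(n + 1)   # detailed-balance recursion
pi = w / w.sum()                      # explicit stationary distribution
print(pi[K])                          # blocking probability under the admission control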

19.
Quasi-convex optimization is fundamental to the modelling of many practical problems in various fields such as economics, finance and industrial organization. Subgradient methods are practical iterative algorithms for solving large-scale quasi-convex optimization problems. In the present paper, focusing on quasi-convex optimization, we develop an abstract convergence theorem for a class of sequences, which satisfy a general basic inequality, under some suitable assumptions on parameters. The convergence properties in both function values and distances of iterates from the optimal solution set are discussed. The abstract convergence theorem covers relevant results of many types of subgradient methods studied in the literature, for either convex or quasi-convex optimization. Furthermore, we propose a new subgradient method, in which a perturbation of the successive direction is employed at each iteration. As an application of the abstract convergence theorem, we obtain the convergence results of the proposed subgradient method under the assumption of the Hölder condition of order p and by using the constant, diminishing or dynamic stepsize rules, respectively. A preliminary numerical study shows that the proposed method outperforms the standard, stochastic and primal-dual subgradient methods in solving the Cobb–Douglas production efficiency problem.
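A one-dimensional sketch of a subgradient step with a perturbation of the successive direction, offered as one natural (heavy-ball-style) instance of such a perturbation on a quasi-convex objective:

import numpy as np

# f(x) = sqrt(|x - 3|) is quasi-convex with Hölder-type growth around x* = 3
beta, x, d = 0.3, 10.0, 0.0
for k in range(1, 20000):
    g = np.sign(x - 3.0)      # normalized quasi-subgradient direction
    d = g + beta * d          # perturbation of the successive direction
    x -= (1.0 / k) * d        # diminishing stepsize rule
print(x)  # approaches 3.0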

20.
Consensus conditions for multi-agent systems under unbalanced topologies and stochastic disturbances
We study consensus conditions for multi-agent systems with general directed communication topologies and Gaussian communication noise. The directed topologies considered include not only balanced digraphs but also unbalanced ones; the latter are the focus of this paper. Using results on Markov chains, we obtain the communicating classes of the network nodes. By refining the analysis of the noise effects, we give consensus conditions for the system under different noise scenarios: (1) when the agents in a communicating class acquire information corrupted by noise, we give a necessary and sufficient condition for mean-square consensus, and prove that this condition also guarantees consensus with probability 1; (2) when the agents in a communicating class acquire noise-free information but the remaining agents acquire noisy information, we give a sufficient condition for mean-square consensus, and prove that this condition is also necessary in a certain sense; (3) when the whole system is noise-free, we give a necessary and sufficient condition for consensus.
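A sketch of a consensus recursion with noisy observations and decreasing gains a_k = 1/k, on a strongly connected but unbalanced (row-stochastic only) digraph; the weights and the noise level are illustrative:

import numpy as np

rng = np.random.default_rng(5)
W = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.6, 0.2],
              [0.0, 0.3, 0.7]])      # row-stochastic but not column-stochastic (unbalanced)
x = np.array([0.0, 5.0, 10.0])
for k in range(1, 20000):
    noisy = x + 0.5 * rng.standard_normal(3)   # states are acquired through noisy channels
    x = x + (1.0 / k) * (W @ noisy - x)        # stochastic-approximation consensus step
print(x)  # the states converge to a common (random) limit in mean square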
