Similar Literature
Found 20 similar documents.
1.
For convex feasibility problems defined by systems of nonlinear inequalities, this paper proposes a sequential block-iterative subgradient projection algorithm and a parallel block-iterative subgradient projection algorithm. The system of nonlinear inequalities is partitioned into several subsystems; the convex combination of the subgradient projections of the current iterate onto the individual sets of a subsystem is then taken as the approximate projection of the current iterate onto that subsystem. Convergence of both algorithms is proved under rather weak conditions.
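A minimal Python sketch of the parallel step just described, under illustrative assumptions (two hand-picked convex sets forming one "subsystem", equal weights); this is not the authors' code. The subgradient projection onto a level set {y : f(y) <= 0} moves the iterate onto the separating hyperplane induced by a subgradient:

```python
import numpy as np

def subgrad_proj(x, f, g):
    """Subgradient projection onto the level set {y : f(y) <= 0}:
    if f(x) > 0, move x onto the separating hyperplane induced by a
    subgradient g(x); otherwise x already belongs to the set."""
    fx = f(x)
    if fx <= 0:
        return x
    gx = g(x)
    return x - (fx / np.dot(gx, gx)) * gx

def parallel_block_step(x, block, weights):
    """Approximate projection onto one block (subsystem): a convex
    combination of subgradient projections onto its member sets."""
    return sum(w * subgrad_proj(x, f, g) for w, (f, g) in zip(weights, block))

# Toy subsystem: the unit ball {||x||^2 <= 1} and the halfspace {x[0] <= 0.5}.
block = [
    (lambda x: np.dot(x, x) - 1.0, lambda x: 2.0 * x),
    (lambda x: x[0] - 0.5,         lambda x: np.array([1.0, 0.0])),
]
x = np.array([2.0, 2.0])
for _ in range(50):
    x = parallel_block_step(x, block, weights=[0.5, 0.5])
print(x)  # approaches the intersection of the two sets
```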

2.
For the convex feasibility problem, this paper proposes an over-relaxed parallel approximate subgradient projection algorithm and an accelerated parallel approximate subgradient projection algorithm. Compared with sequential approximate subgradient projection algorithms, the parallel algorithms (which in each iteration simultaneously use projections onto the approximate subgradient hyperplanes of several convex sets) guarantee that the iterates converge to the point nearest to all of the convex sets. Over-relaxation and an acceleration technique with an extrapolation factor reduce data storage and speed up convergence. Convergence is proved under rather weak conditions, and numerical results confirm the effectiveness and advantages of the algorithms.

3.
This paper discusses the convergence of the family of subgradient projection algorithms proposed in [4].

4.
This paper considers a distributed optimization problem encountered in a time-varying multi-agent network, where each agent has local access to its convex objective function, and cooperatively minimizes a sum of convex objective functions of the agents over the network. Based on the mirror descent method, we develop a distributed algorithm by utilizing the subgradient information with stochastic errors. We first analyze the effects of stochastic errors on the convergence of the algorithm and then provide an explicit bound on the convergence rate as a function of the error bound and the number of iterations. Our results show that the algorithm asymptotically converges to the optimal value of the problem within an error level, when there are stochastic errors in the subgradient evaluations. The proposed algorithm can be viewed as a generalization of the distributed subgradient projection methods since it utilizes the more general Bregman divergence instead of the Euclidean squared distance. Finally, some simulation results on a regularized hinge regression problem are presented to illustrate the effectiveness of the algorithm.
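A hedged Python sketch of a distributed mirror descent step of this kind, assuming an entropic Bregman divergence (KL) on the probability simplex, linear local objectives c_i·x, and a uniform doubly stochastic mixing matrix W — all illustrative choices, not the paper's exact setup:

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim = 4, 3
W = np.full((n_agents, n_agents), 1.0 / n_agents)   # doubly stochastic mixing
c = rng.standard_normal((n_agents, dim))            # agent i minimizes c_i . x
X = np.full((n_agents, dim), 1.0 / dim)             # iterates on the simplex

for k in range(1, 2001):
    alpha = 1.0 / np.sqrt(k)                        # diminishing stepsize
    Z = W @ X                                       # consensus (mixing) step
    for i in range(n_agents):
        g = c[i] + 0.01 * rng.standard_normal(dim)  # subgradient + stochastic error
        # Entropic mirror step: argmin <g, x> + (1/alpha) * KL(x, z_i)
        x = Z[i] * np.exp(-alpha * g)
        X[i] = x / x.sum()

# All agents approach a common minimizer of sum_i c_i . x over the simplex.
print(X.round(3))
```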

5.
We generalize the subgradient optimization method for nondifferentiable convex programming to utilize conditional subgradients. Firstly, we derive the new method and establish its convergence by generalizing convergence results for traditional subgradient optimization. Secondly, we consider a particular choice of conditional subgradients, obtained by projections, which leads to an easily implementable modification of traditional subgradient optimization schemes. To evaluate the subgradient projection method, we consider its use in three applications: uncapacitated facility location, two-person zero-sum matrix games, and multicommodity network flows. Computational experiments show that the subgradient projection method performs better than traditional subgradient optimization; in some cases the difference is considerable. These results suggest that our simple modification may improve subgradient optimization schemes significantly. This finding is important as such schemes are very popular, especially in the context of Lagrangean relaxation.
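A minimal sketch of the projection-based choice for an assumed special case, box constraints: subgradient components that the subsequent projection onto the box would cancel anyway are dropped, shortening the direction and making the step more effective. The toy objective and stepsizes are illustrative:

```python
import numpy as np

def conditional_subgradient_step(x, g, lo, hi, alpha, tol=1e-12):
    """One iteration for min f(x) over the box [lo, hi], using a
    minimum-norm conditional subgradient: zero the components of g
    that push against an already-active bound (the projection would
    cancel them anyway), then step and project."""
    s = g.copy()
    s[(x <= lo + tol) & (g > 0)] = 0.0   # -g would push below an active lower bound
    s[(x >= hi - tol) & (g < 0)] = 0.0   # -g would push above an active upper bound
    return np.clip(x - alpha * s, lo, hi)

# Toy run: minimize ||x - c||_1 over [0, 1]^3 with c outside the box.
c = np.array([2.0, -1.0, 0.3])
lo, hi = np.zeros(3), np.ones(3)
x = np.full(3, 0.5)
for k in range(1, 201):
    g = np.sign(x - c)                   # a subgradient of the l1 objective
    x = conditional_subgradient_step(x, g, lo, hi, alpha=1.0 / k)
print(x)  # approaches (1, 0, 0.3)
```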

6.
In this paper, we present an extrapolated parallel subgradient projection method with a centering technique for the convex feasibility problem. The algorithm improves convergence because the centering technique reduces the oscillation of the iterate sequence. To simplify the convergence proof, we transform the parallel algorithm in the original space into a sequential one in a newly constructed product space; the convergence of the parallel algorithm is then derived from that of the sequential one under suitable conditions. Numerical results show that the new algorithm converges faster than existing algorithms.
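A hedged sketch of one extrapolated parallel step, with subgradient projections standing in for exact projections, a toy pair of sets, and the centering device omitted for brevity; the relaxation parameter lam is assumed to lie in (0, 2):

```python
import numpy as np

def extrapolated_parallel_step(x, sets, weights, lam=1.0):
    """One extrapolated parallel step: d = sum_i w_i (P_i x - x), where
    P_i is the subgradient projection onto the i-th set, lengthened by
    the extrapolation factor L = sum_i w_i ||P_i x - x||^2 / ||d||^2
    (L >= 1 by convexity of the squared norm)."""
    d = np.zeros_like(x)
    num = 0.0
    for (f, grad), w in zip(sets, weights):
        fx = f(x)
        if fx > 0:
            g = grad(x)
            step = -(fx / np.dot(g, g)) * g      # P_i x - x
            d += w * step
            num += w * np.dot(step, step)
    if not d.any():                              # x already satisfies every set
        return x
    return x + lam * (num / np.dot(d, d)) * d

# Toy usage: unit ball and halfspace x[0] <= 0.5, starting far outside.
sets = [(lambda x: np.dot(x, x) - 1.0, lambda x: 2.0 * x),
        (lambda x: x[0] - 0.5,         lambda x: np.array([1.0, 0.0]))]
x = np.array([3.0, 3.0])
for _ in range(30):
    x = extrapolated_parallel_step(x, sets, weights=[0.5, 0.5])
print(x)
```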

7.
In the present paper, we use subgradient projection algorithms for solving convex feasibility problems. We show that almost all iterates, generated by a subgradient projection algorithm in a Hilbert space, are approximate solutions. Moreover, we obtain an estimate of the number of iterates which are not approximate solutions. In the finite-dimensional case, we study the behavior of the subgradient projection algorithm in the presence of computational errors. Provided the computational errors are bounded, we prove that our subgradient projection algorithm generates a good approximate solution after a certain number of iterates.

8.
We introduce an explicit algorithm for solving nonsmooth equilibrium problems in finite-dimensional spaces. A particular iteration proceeds in two phases. In the first phase, an orthogonal projection onto the feasible set is replaced by projections onto suitable hyperplanes. In the second phase, a projected subgradient type iteration is replaced by a specific projection onto a halfspace. We prove, under suitable assumptions, convergence of the whole generated sequence to a solution of the problem. The proposed algorithm has a low computational cost per iteration, and some numerical results are reported.

9.
We study subgradient projection type methods for solving non-differentiable convex minimization problems and monotone variational inequalities. The methods can be viewed as a natural extension of subgradient projection type algorithms, and are based on using non-Euclidean projection-like maps, which generate interior trajectories. The resulting algorithms are easy to implement and rely on a single projection per iteration. We prove several convergence results and establish rate of convergence estimates under various and mild assumptions on the problem’s data and the corresponding step-sizes. We dedicate this paper to Boris Polyak on the occasion of his 70th birthday.

10.
In this paper, we establish a strong convergence theorem regarding a regularized variant of the projected subgradient method for nonsmooth, nonstrictly convex minimization in real Hilbert spaces. Only one projection step is needed per iteration and the involved stepsizes are controlled so that the algorithm is of practical interest. To this aim, we develop new techniques of analysis which can be adapted to many other non-Fejérian methods.
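A speculative Python sketch of a regularized projected subgradient iteration in this spirit — one projection per iteration, with a vanishing Tikhonov term steering the iterates toward the minimum-norm minimizer, which is the usual mechanism behind strong convergence. The schedules shown are illustrative assumptions, not the paper's controlled stepsizes:

```python
import numpy as np

def regularized_projected_subgradient(grad, proj, x0, iters=5000):
    """Sketch: x_{k+1} = P_C(x_k - a_k * (g_k + m_k * x_k)). The vanishing
    Tikhonov term m_k * x_k biases the iterates toward the minimum-norm
    minimizer; a_k and m_k below are illustrative schedules."""
    x = x0
    for k in range(1, iters + 1):
        a_k = 1.0 / np.sqrt(k)       # diminishing, non-summable stepsize
        m_k = 1.0 / k ** 0.25        # regularization vanishing more slowly
        x = proj(x - a_k * (grad(x) + m_k * x))
    return x

# Toy usage: f(x) = max(x[0] + x[1] - 1, 0) over the unit ball has a whole
# segment of minimizers; the regularized iteration singles out (0, 0).
g = lambda x: np.array([1.0, 1.0]) if x.sum() > 1 else np.zeros(2)
proj_ball = lambda x: x / max(1.0, np.linalg.norm(x))
print(regularized_projected_subgradient(g, proj_ball, np.array([1.0, 1.0])))
```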

11.
This paper deals with a continuous-time subgradient projection algorithm, shown to generate trajectories that accumulate at the solution set. Under a strong convexity assumption we show that convergence is exponential in norm. A sharpness condition yields convergence in finite time, and the necessary lapse is estimated. Invoking a constraint qualification and a non-degeneracy assumption, we demonstrate that optimally active constraints are identified in finite time.

12.
This paper presents an algorithm, named the adaptive projected subgradient method, that can asymptotically minimize a certain sequence of nonnegative convex functions over a closed convex set in a real Hilbert space. The proposed algorithm is a natural extension of Polyak's subgradient algorithm, for nonsmooth convex optimization problems with a fixed target value, to the case where the convex objective itself keeps changing throughout the process. The main theorem, showing the strong convergence of the algorithm as well as the asymptotic optimality of the sequence it generates, can serve as a unified guiding principle for a wide range of set-theoretic adaptive filtering schemes for nonstationary random processes. These include not only existing adaptive filtering techniques, e.g., NLMS, Projected NLMS, Constrained NLMS, APA, and the adaptive parallel outer projection algorithm, but also new techniques, e.g., the adaptive parallel min-max projection algorithm, and their embedded-constraint versions. Numerical examples show that the proposed techniques are well suited for robust adaptive signal processing problems.
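To make the NLMS connection concrete, here is a hedged sketch: a relaxed subgradient projection onto the time-varying zero-level set of Theta_k(x) = |a_k·x - b_k| reproduces the NLMS update. The streaming data, dimensions, and step size below are illustrative assumptions:

```python
import numpy as np

def apsm_nlms(stream, dim, mu=0.5):
    """Relaxed subgradient projection onto {x : a_k . x = b_k} for the
    time-varying costs Theta_k(x) = |a_k . x - b_k|: exactly the NLMS
    recursion, with relaxation mu assumed in (0, 2)."""
    x = np.zeros(dim)
    for a, b in stream:
        e = b - np.dot(a, x)                 # a-priori error
        x = x + mu * e * a / np.dot(a, a)    # relaxed projection step
    return x

# Toy usage: identify w_true from noisy streaming linear measurements.
rng = np.random.default_rng(1)
w_true = np.array([1.0, -2.0, 0.5])
stream = [(a, np.dot(a, w_true) + 0.01 * rng.standard_normal())
          for a in rng.standard_normal((500, 3))]
print(apsm_nlms(stream, dim=3))              # close to w_true
```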

13.
In this paper, we propose a projection subgradient method for solving a classical variational inequality problem over the set of solutions of mixed variational inequalities. Under the conditions that $T$ is a $\Theta$-pseudomonotone mapping and $A$ is a $\rho$-strongly pseudomonotone mapping, we prove the convergence of the algorithm constructed by the projection subgradient method. Our algorithm can be applied, for instance, to some mathematical programs with complementarity constraints.

14.
The multiple-sets split equality problem, a generalization of the split feasibility problem, has a variety of real-world applications, such as medical care, image reconstruction, and signal processing. It can model many inverse problems in which constraints are imposed on the solutions in the domains of two linear operators as well as in the operators’ ranges simultaneously. Although many algorithms exist for the split equality problem, there are only a few for the multiple-sets split equality problem. Hence, in this paper, we present a relaxed two-point projection method to solve the problem; under suitable conditions, we show weak convergence and remark on a strongly convergent variant in Hilbert space. The interest of our algorithm is that we recast the problem as an optimization problem and, based on this model, present a modified gradient projection algorithm that selects two different initial points in different sets (we therefore call it the two-point algorithm). During the iteration we employ subgradient projections rather than orthogonal projections, which makes the method implementable. Numerical experiments show that the algorithm is efficient.
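A hedged Python sketch of a scheme in this spirit for the two-set case (find x in C, y in Q with Ax = By): a gradient step on 0.5·||Ax - By||^2 from two initial points, with subgradient (relaxed) projections onto level sets in place of orthogonal projections. The sets, operators, and stepsize are toy assumptions:

```python
import numpy as np

def relaxed_split_equality(A, B, c, gc, q, gq, x, y, gamma, iters=500):
    """Two-point sketch: gradient step on the linking residual Ax - By,
    then subgradient projections onto C = {c <= 0} and Q = {q <= 0}."""
    def sproj(z, f, g):
        fz = f(z)
        if fz <= 0:
            return z
        gz = g(z)
        return z - (fz / np.dot(gz, gz)) * gz
    for _ in range(iters):
        r = A @ x - B @ y                       # residual of Ax = By
        x = sproj(x - gamma * A.T @ r, c, gc)
        y = sproj(y + gamma * B.T @ r, q, gq)
    return x, y

# Toy usage: C = Q = unit ball, A = B = I, so the problem asks for x = y.
I = np.eye(2)
ball = (lambda z: np.dot(z, z) - 1.0, lambda z: 2.0 * z)
x, y = relaxed_split_equality(I, I, *ball, *ball,
                              np.array([2.0, 0.0]), np.array([0.0, 2.0]),
                              gamma=0.4)
print(x, y)
```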

15.
In this paper, we propose a new algorithm for solving a bilevel equilibrium problem in a real Hilbert space. In contrast to most other projection-type algorithms, which require solving subproblems at each iteration, the subgradient method proposed in this paper requires only the computation, at each iteration, of two subgradients of convex functions and one projection onto a convex set. Hence, our algorithm has a low computational cost. We prove a strong convergence theorem for the proposed algorithm and apply it to solving the equilibrium problem over the fixed point set of a nonexpansive mapping. Some numerical experiments and comparisons are given to illustrate our results. Also, an application to Nash–Cournot equilibrium models of a semioligopolistic market is presented.

16.
Subgradient projectors play an important role in optimization and for solving convex feasibility problems. For every locally Lipschitz function, we can define a subgradient projector via generalized subgradients even if the function is not convex. The paper consists of three parts. In the first part, we study basic properties of subgradient projectors and give characterizations when a subgradient projector is a cutter, a local cutter, or a quasi-nonexpansive mapping. We present global and local convergence analyses of subgradient projectors. Many examples are provided to illustrate the theory. In the second part, we investigate the relationship between the subgradient projector of a prox-regular function and the subgradient projector of its Moreau envelope. We also characterize when a mapping is the subgradient projector of a convex function. In the third part, we focus on linearity properties of subgradient projectors. We show that, under appropriate conditions, a linear operator is a subgradient projector of a convex function if and only if it is a convex combination of the identity operator and a projection operator onto a subspace. In general, neither a convex combination nor a composition of subgradient projectors of convex functions is a subgradient projector of a convex function.
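A minimal sketch of the subgradient projector itself, together with a numerical spot-check of the cutter property for a convex example; the function and test points are illustrative:

```python
import numpy as np

def subgradient_projector(f, grad):
    """The subgradient projector G_f of f with subgradient selection grad:
    G_f(x) = x - (f(x) / ||g(x)||^2) * g(x) if f(x) > 0, else x."""
    def G(x):
        fx = f(x)
        if fx <= 0:
            return x
        g = grad(x)
        return x - (fx / np.dot(g, g)) * g
    return G

# For convex f, G_f is a cutter: every z with f(z) <= 0 satisfies
# ||G_f(x) - z|| <= ||x - z||.  A quick numerical check:
f = lambda x: np.dot(x, x) - 1.0                     # unit-ball level set
G = subgradient_projector(f, lambda x: 2.0 * x)
x, z = np.array([3.0, 4.0]), np.array([0.3, -0.4])   # f(z) <= 0
print(np.linalg.norm(G(x) - z) <= np.linalg.norm(x - z))   # True
```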

17.
Piecewise affine functions arise from Lagrangian duals of integer programming problems, and optimizing them provides good bounds for use in a branch-and-bound method. Methods such as the subgradient method and bundle methods assume only one subgradient is available at each point, but in many situations more information is available. We present a new method for optimizing such functions, which is related to steepest descent, but uses an outer approximation to the subdifferential to avoid some of the numerical problems with the steepest descent approach. We provide convergence results for a class of outer approximations, and then develop a practical algorithm using such an approximation for the compact dual to the linear programming relaxation of the uncapacitated facility location problem. We make a numerical comparison of our outer approximation method with the projection method of Conn and Cornuéjols, and the bundle method of Schramm and Zowe.

18.
We study subgradient methods for computing the saddle points of a convex-concave function. Our motivation comes from networking applications where dual and primal-dual subgradient methods have attracted much attention in the design of decentralized network protocols. We first present a subgradient algorithm for generating approximate saddle points and provide per-iteration convergence rate estimates on the constructed solutions. We then focus on Lagrangian duality, where we consider a convex primal optimization problem and its Lagrangian dual problem, and generate approximate primal-dual optimal solutions as approximate saddle points of the Lagrangian function. We present a variation of our subgradient method under the Slater constraint qualification and provide stronger estimates on the convergence rate of the generated primal sequences. In particular, we provide bounds on the amount of feasibility violation and on the primal objective function values at the approximate solutions. Our algorithm is particularly well-suited for problems where the subgradient of the dual function cannot be evaluated easily (equivalently, the minimum of the Lagrangian function at a dual solution cannot be computed efficiently), thus impeding the use of dual subgradient methods.
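A hedged sketch of an averaged primal-dual subgradient iteration for a convex-concave L(x, y): descend in x, ascend in y, and report the running averages, which are what carry the approximation guarantees (the raw iterates may cycle). The bilinear toy saddle function, box sets, and 1/sqrt(k) stepsize are illustrative assumptions:

```python
import numpy as np

def saddle_subgradient(gx, gy, px, py, x, y, iters=2000):
    """Primal-dual subgradient sketch for convex-concave L(x, y):
    gx, gy return subgradients of L in x and y; px, py project onto
    the primal and dual sets; the running averages are returned."""
    xs, ys = np.zeros_like(x), np.zeros_like(y)
    for k in range(1, iters + 1):
        a = 1.0 / np.sqrt(k)                       # diminishing stepsize
        x, y = px(x - a * gx(x, y)), py(y + a * gy(x, y))
        xs, ys = xs + x, ys + y
    return xs / iters, ys / iters

# Toy usage: L(x, y) = x * y on [-1, 1]^2 has its saddle point at (0, 0).
clip = lambda v: np.clip(v, -1.0, 1.0)
print(saddle_subgradient(lambda x, y: y, lambda x, y: x,
                         clip, clip, np.array([0.9]), np.array([-0.7])))
```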

19.
In this paper, we consider a generic inexact subgradient algorithm to solve a nondifferentiable quasi-convex constrained optimization problem. The inexactness stems from computation errors and noise, which come from practical considerations and applications. Assuming that the computational errors and noise are deterministic and bounded, we study the effect of the inexactness on the subgradient method when the constraint set is compact or the objective function has a set of generalized weak sharp minima. In both cases, using the constant and diminishing stepsize rules, we describe convergence results in both objective values and iterates, and finite convergence to approximate optimality. We also investigate efficiency estimates of iterates and apply the inexact subgradient algorithm to solve the Cobb–Douglas production efficiency problem. The numerical results verify our theoretical analysis and show the high efficiency of our proposed algorithm, especially for the large-scale problems.
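A speculative sketch of an inexact subgradient iteration with a bounded deterministic oracle error and a diminishing stepsize; the normalized direction is a common device for quasi-convex objectives, and the toy problem and noise model below are assumptions, not the paper's experiments. The iterates settle into a neighborhood of the solution set whose radius scales with the error bound:

```python
import numpy as np

def inexact_subgradient(grad_noisy, proj, x, iters=3000):
    """Inexact subgradient sketch: the oracle grad_noisy returns a
    subgradient corrupted by bounded error; step along the normalized
    direction with a diminishing, non-summable stepsize."""
    for k in range(1, iters + 1):
        d = grad_noisy(x)
        x = proj(x - (1.0 / np.sqrt(k)) * d / np.linalg.norm(d))
    return x

# Toy usage: min ||x - c||_1 over the unit ball, with a bounded oracle error.
rng = np.random.default_rng(2)
c = np.array([0.4, -0.2])
noisy = lambda x: np.sign(x - c) + 0.05 * rng.uniform(-1, 1, 2)
proj = lambda x: x / max(1.0, np.linalg.norm(x))
print(inexact_subgradient(noisy, proj, np.array([1.0, 0.0])))  # near c
```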

20.
In this paper a new algorithm for minimizing locally Lipschitz functions is developed. Descent directions in this algorithm are computed by solving a system of linear inequalities. The convergence of the algorithm is proved for quasidifferentiable semismooth functions. We present the results of numerical experiments with both regular and nonregular objective functions. We also compare the proposed algorithm with two different versions of the subgradient method using the results of numerical experiments. These results demonstrate the superiority of the proposed algorithm over the subgradient method.
