Similar Literature
20 similar documents retrieved (search time: 31 ms)
1.
In this paper, we first characterize finite convergence of an arbitrary iterative algorithm for solving the variational inequality problem (VIP), where finite convergence means that the algorithm finds an exact solution of the problem in a finite number of iterations. Using this result, we show that the well-known proximal point algorithm possesses finite convergence whenever the solution set of the VIP is weakly sharp. As an extension, we establish finite convergence of the inertial proximal method for solving the general variational inequality problem under the condition of weak g-sharpness.
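As a quick illustration of finite convergence of the exact proximal point algorithm at a weakly sharp solution (a minimal sketch with our own illustrative function, step size and stopping test, not the setting of the paper): for f(x) = |x|, whose minimizer 0 is weakly sharp, the proximal step is soft-thresholding and reaches the exact solution after finitely many iterations.

    import math

    def prox_abs(x, c):
        # proximal map of c*|x| (soft-thresholding), i.e. the resolvent (I + c*subdiff|.|)^{-1}
        return math.copysign(max(abs(x) - c, 0.0), x)

    x, c = 5.3, 1.0
    for k in range(1, 30):
        x = prox_abs(x, c)   # one exact proximal point step
        if x == 0.0:         # the weakly sharp minimizer of |x| is hit exactly
            print(f"exact solution reached at iteration {k}")
            break

Here the iterate decreases by c per step until it lands exactly on the solution, which is the finite-termination behaviour that weak sharpness guarantees in general.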

2.
The strictly contractive Peaceman-Rachford splitting method is an effective method for solving separable convex optimization problems, and the inertial proximal Peaceman-Rachford splitting method is one of its important variants. It is known that convergence of the inertial proximal Peaceman-Rachford splitting method can be ensured if the relaxation factor in the Lagrangian multiplier updates is kept small, meaning that the steps for the Lagrangian multiplier updates are shrunk conservatively. Although small steps play an important role in ensuring convergence, they should be avoided in practice whenever possible. In this article, we propose a relaxed inertial proximal Peaceman-Rachford splitting method, which admits a larger feasible set for the relaxation factor. Our method therefore allows larger steps in the Lagrangian multiplier updates. We establish the global convergence of the proposed algorithm under the same conditions as the inertial proximal Peaceman-Rachford splitting method. Numerical results on a sparse signal recovery problem in compressive sensing and a total-variation-based image denoising problem demonstrate the effectiveness of our method.
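To fix ideas about where the relaxation factor enters (a schematic template in our own notation, not quoted from the article): for min_{x,y} f(x) + g(y) subject to Ax + By = b with penalty beta > 0, the strictly contractive Peaceman-Rachford splitting method updates the multiplier twice per sweep,

    \begin{aligned}
    x^{k+1} &\in \arg\min_{x}\ f(x) + \tfrac{\beta}{2}\,\lVert Ax + By^{k} - b - \lambda^{k}/\beta \rVert^{2},\\
    \lambda^{k+1/2} &= \lambda^{k} - \alpha\beta\,(Ax^{k+1} + By^{k} - b),\\
    y^{k+1} &\in \arg\min_{y}\ g(y) + \tfrac{\beta}{2}\,\lVert Ax^{k+1} + By - b - \lambda^{k+1/2}/\beta \rVert^{2},\\
    \lambda^{k+1} &= \lambda^{k+1/2} - \alpha\beta\,(Ax^{k+1} + By^{k+1} - b),
    \end{aligned}

with relaxation factor \alpha \in (0,1) in the strictly contractive scheme. Enlarging the feasible set for \alpha, as the relaxed inertial variant proposes, means the two multiplier steps need not be shrunk as conservatively.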

3.
In this paper we present a relaxed inertial proximal algorithm for Ky Fan minimax inequalities. Based on the Opial lemma, we establish weak convergence to a solution of the problem when the Browder–Halpern contraction factor is removed from the algorithm (RIPAFAN). We then give a first strong convergence result by adding a strong monotonicity condition. Next, we drop the strong monotonicity assumption, reintroduce a Browder–Halpern contraction factor into the algorithm (RIPAFAN), and obtain strong convergence to a solution selected according to the contraction factor. Some examples are proposed. The first concerns convex minimization where the objective function is controlled only through a well-conditioning assumption. The second concerns monotone set-valued variational inequalities. The last example deals with the fixed point problem for a nonexpansive set-valued operator.

4.
In this article, we incorporate inertial terms in the hybrid proximal-extragradient algorithm and investigate the convergence properties of the resulting iterative scheme designed to find the zeros of a maximally monotone operator in real Hilbert spaces. The convergence analysis relies on extended Fejér monotonicity techniques combined with the celebrated Opial Lemma. We also show that the classical hybrid proximal-extragradient algorithm and the inertial versions of the proximal point, the forward-backward and the forward-backward-forward algorithms can be embedded into the framework of the proposed iterative scheme.
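For orientation, a generic inertial proximal template in our own notation (the article's scheme additionally allows inexact resolvent evaluations in the hybrid proximal-extragradient sense): for a maximally monotone operator A on a Hilbert space,

    y^{k} = x^{k} + \alpha_k\,(x^{k} - x^{k-1}), \qquad x^{k+1} = (\mathrm{Id} + c_k A)^{-1}(y^{k}),

where \alpha_k \ge 0 is the inertial parameter and c_k > 0 the proximal step size; choosing \alpha_k \equiv 0 recovers the classical proximal point algorithm.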

5.
In this paper we prove strong convergence of the Browder-Tikhonov regularization method and of the regularization inertial proximal point algorithm to a solution of nonlinear ill-posed equations involving m-accretive mappings in real, reflexive, strictly convex Banach spaces with a uniformly Gâteaux differentiable norm, without assuming a weakly sequentially continuous duality mapping.

6.
We propose a modification of the classical extragradient and proximal point algorithms for finding a zero of a maximal monotone operator in a Hilbert space. At each iteration of the method, an approximate extragradient-type step is performed using information obtained from an approximate solution of a proximal point subproblem. The algorithm is of a hybrid type, as it combines steps of the extragradient and proximal methods. Furthermore, the algorithm uses elements in the enlargement (proposed by Burachik, Iusem and Svaiter) of the operator defining the problem. One of the important features of our approach is that it allows significant relaxation of tolerance requirements imposed on the solution of proximal point subproblems. This yields a more practical proximal-algorithm-based framework. Weak global convergence and local linear rate of convergence are established under suitable assumptions. It is further demonstrated that the modified forward-backward splitting algorithm of Tseng falls within the presented general framework.

7.
This paper investigates an enhanced proximal algorithm with attractive practical features and convergence properties for solving non-smooth convex minimization problems, or for approximating zeroes of maximal monotone operators, in Hilbert spaces. The algorithm involves a recent inertial-type extrapolation technique, the use of enlargements of operators, and a recently proposed hybrid strategy that combines inexact computation of the proximal iteration with a projection. Compared with other existing related methods, the resulting algorithm inherits the good convergence properties of the inertial-type extrapolation and the relaxed projection strategy, as well as the relative error tolerance of the hybrid proximal-projection method. As a special case, an updated inexact Newton-proximal method is derived and global convergence results are established.

8.
We propose a proximal Newton method for solving nondifferentiable convex optimization problems. This method combines the generalized Newton method with Rockafellar's proximal point algorithm. At each step, the proximal point is found approximately, and the regularization matrix is preconditioned to overcome the inexactness of this approximation. We show that such preconditioning is possible within some accuracy and that the second-order differentiability properties of the Moreau-Yosida regularization are invariant with respect to this preconditioning. Based on these results, superlinear convergence is established under a semismoothness condition. This work is supported by the Australian Research Council.
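The differentiability properties mentioned here rest on the Moreau-Yosida regularization F_\lambda(x) = \min_y f(y) + \lVert y - x \rVert^2 / (2\lambda), which is differentiable with \nabla F_\lambda(x) = (x - \mathrm{prox}_{\lambda f}(x)) / \lambda even when f is nonsmooth. A small numerical check of that identity (the test function, \lambda, and the use of SciPy are our own illustrative choices):

    from scipy.optimize import minimize_scalar

    lam = 0.5
    f = abs                                                          # nonsmooth test function
    prox = lambda x: max(abs(x) - lam, 0.0) * (1 if x >= 0 else -1)  # prox of lam*|.|

    def envelope(x):
        # Moreau-Yosida envelope F_lam(x) = min_y f(y) + (y - x)^2 / (2*lam)
        res = minimize_scalar(lambda y: f(y) + (y - x) ** 2 / (2 * lam),
                              bounds=(-10.0, 10.0), method='bounded')
        return res.fun

    x, h = 1.7, 1e-5
    fd_grad = (envelope(x + h) - envelope(x - h)) / (2 * h)   # numerical derivative of the envelope
    an_grad = (x - prox(x)) / lam                             # (x - prox_{lam f}(x)) / lam
    print(round(fd_grad, 4), round(an_grad, 4))               # both approximately 1.0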

9.
A local convergence result for an abstract descent method is proved. The sequence of iterates is attracted by a local (or global) minimum, stays in its neighborhood, and converges within this neighborhood. This result allows algorithms to exploit local properties of the objective function. In particular, the abstract theory in this paper applies to the inertial forward–backward splitting method: iPiano, a generalization of the Heavy-ball method. Moreover, it reveals an equivalence between iPiano and inertial averaged/alternating proximal minimization and projection methods. Key for this equivalence is the attraction to a local minimum within a neighborhood and the fact that, for a prox-regular function, the gradient of the Moreau envelope is locally Lipschitz continuous and expressible in terms of the proximal mapping. In a numerical feasibility problem, the inertial alternating projection method significantly outperforms its non-inertial variants.
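A minimal iPiano-style sketch (the least-squares-plus-l1 test problem and all parameters below are our own illustrative assumptions): one forward gradient step with a Heavy-ball inertial term, followed by a backward proximal step.

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 5))
    b = rng.standard_normal(20)
    lam = 0.5

    grad_f = lambda x: A.T @ (A @ x - b)            # gradient of the smooth part 0.5*||Ax - b||^2
    prox_g = lambda x, t: np.sign(x) * np.maximum(np.abs(x) - t * lam, 0.0)  # prox of t*lam*||.||_1

    L = np.linalg.norm(A, 2) ** 2                   # Lipschitz constant of grad_f
    beta = 0.5                                      # inertial (Heavy-ball) parameter
    alpha = 1.8 * (1 - beta) / L                    # step size, within alpha < 2*(1 - beta)/L
    x_prev = x = np.zeros(5)
    for _ in range(500):
        y = x - alpha * grad_f(x) + beta * (x - x_prev)   # forward gradient step plus inertia
        x_prev, x = x, prox_g(y, alpha)                   # backward (proximal) step
    print(x)                                              # approximate l1-regularized least-squares solution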

10.
Huanhuan Cui, Optimization, 2017, 66(5): 793-809
The proximal point algorithm (PPA) is a classical method for finding zeros of maximal monotone operators. It is known that the algorithm has only weak convergence in a general Hilbert space. Recently, Wang, Wang and Xu proposed two modifications of the PPA and established strong convergence theorems for these two algorithms. However, these convergence theorems exclude an important case, namely the over-relaxed case. In this paper, we extend the above convergence theorems from the under-relaxed case to the over-relaxed case, which in turn improves the performance of the two algorithms. Preliminary numerical experiments show that the algorithms with over-relaxed parameters perform better than those with under-relaxed parameters.

11.
It is known, by Rockafellar (SIAM J Control Optim 14:877–898, 1976), that the proximal point algorithm (PPA) converges weakly to a zero of a maximal monotone operator in a Hilbert space, but in general it fails to converge strongly. Lehdili and Moudafi (Optimization 37:239–252, 1996) introduced the prox-Tikhonov regularization method for the PPA to generate a strongly convergent sequence and established a convergence property for it by using the technique of variational distance in the same space setting. In this paper, the prox-Tikhonov regularization method for the proximal point algorithm for finding a zero of an accretive operator in the framework of Banach spaces is proposed. Conditions which guarantee the strong convergence of this algorithm to a particular element of the solution set are provided. An inexact variant of this method with an error sequence is also discussed.

12.
申远, 李倩倩, 吴坚, 《计算数学》 (Mathematica Numerica Sinica), 2018, 40(1): 85-95
This paper considers a saddle-point problem arising in signal and image processing. Based on the idea of the proximal point algorithm, we modify the primal-dual algorithm by constructing a symmetric positive definite, variable proximal matrix, and obtain a new primal-dual algorithm. The new algorithm can be viewed as a proximal point algorithm, so its convergence is easy to analyze and requires no strong assumptions. Preliminary experimental results show that, when applied to image deblurring problems, the new algorithm produces results of higher quality than several mainstream efficient algorithms, with competitive computation time.
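One standard way to read a primal-dual scheme as a proximal point iteration, given here only as background in our own notation (it is not necessarily the variable proximal matrix constructed in the paper): for the saddle-point problem min_x max_y g(x) + <Kx, y> - h^*(y) with monotone KKT operator

    T(x, y) = \begin{pmatrix} \partial g(x) + K^{\top} y \\ \partial h^{*}(y) - Kx \end{pmatrix},

a classical primal-dual iteration of Chambolle-Pock type with step sizes \tau, \sigma satisfies

    M\,(u^{k} - u^{k+1}) \in T(u^{k+1}), \qquad M = \begin{pmatrix} \tfrac{1}{\tau} I & -K^{\top} \\ -K & \tfrac{1}{\sigma} I \end{pmatrix},

and M is symmetric positive definite whenever \tau \sigma \lVert K \rVert^{2} < 1, so convergence follows from proximal point theory in the norm induced by M.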

13.
In this article, we first introduce a modified inertial Mann algorithm and an inertial CQ-algorithm by combining the accelerated Mann algorithm and the CQ-algorithm, respectively, with inertial extrapolation. This strategy is intended to speed up the convergence of the given algorithms. We then establish convergence theorems for the two proposed algorithms; for the inertial CQ-algorithm, the conditions on the inertial parameters are very weak. Finally, numerical experiments are presented to illustrate that the modified inertial Mann algorithm and the inertial CQ-algorithm may have advantages over other methods in some cases.
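A toy run of an inertial Mann update (the nonexpansive operator, inertia and relaxation below are our own illustrative choices, not the parameter conditions of the paper): w_k = x_k + theta*(x_k - x_{k-1}), then x_{k+1} = (1 - alpha)*w_k + alpha*T(w_k).

    import numpy as np

    def T(x):
        # metric projection onto the line {x : x[0] + x[1] = 1}; nonexpansive, Fix T = that line
        a, c = np.array([1.0, 1.0]), 1.0
        return x - (a @ x - c) / (a @ a) * a

    theta, alpha = 0.3, 0.5          # inertial and relaxation parameters (illustrative)
    x_prev = x = np.array([3.0, -2.0])
    for _ in range(50):
        w = x + theta * (x - x_prev)                    # inertial extrapolation
        x_prev, x = x, (1 - alpha) * w + alpha * T(w)   # relaxed Mann step
    print(x, x.sum())                # x approaches a fixed point of T (coordinates sum to 1)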

14.
In this paper, we introduce an algorithm that combines the subgradient extragradient method with an inertial step for solving variational inequality problems in Hilbert spaces. The weak convergence of the algorithm is established under standard assumptions imposed on the cost operator. The proposed algorithm can be considered an improvement of the previously known inertial extragradient method in each computational step. The performance of the proposed algorithm is also illustrated by several preliminary numerical experiments.
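A rough sketch of the inertial subgradient extragradient idea on a small affine variational inequality (the operator, feasible set, step size and inertial parameter are our own illustrative assumptions): the second projection onto C is replaced by a projection onto a half-space containing C, which is what makes each computational step cheaper.

    import numpy as np

    M = np.array([[2.0, 1.0], [1.0, 2.0]])
    q = np.array([-1.0, -1.0])
    F = lambda x: M @ x + q                    # monotone, Lipschitz cost operator
    proj_C = lambda x: np.maximum(x, 0.0)      # projection onto C = nonnegative orthant
    lam, theta = 0.2, 0.1                      # step size (< 1/||M||) and inertial parameter

    def proj_halfspace(z, a, y):
        # projection onto the half-space T = {u : <a, u - y> <= 0}
        s = a @ (z - y)
        return z if s <= 0 or a @ a == 0 else z - (s / (a @ a)) * a

    x_prev = x = np.array([2.0, -1.0])
    for _ in range(300):
        w = x + theta * (x - x_prev)           # inertial extrapolation
        y = proj_C(w - lam * F(w))             # extragradient step: projection onto C
        a = (w - lam * F(w)) - y               # normal vector defining the half-space T_k containing C
        x_prev, x = x, proj_halfspace(w - lam * F(y), a, y)
    print(x)                                   # tends to the VI solution [1/3, 1/3]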

15.
The subject of this paper is the inexact proximal point algorithm of usual and Halpern type in non-positive curvature metric spaces. We study the convergence of the sequences given by the inexact proximal point algorithm with non-summable errors. We also prove the strong convergence of the Halpern proximal point algorithm to a minimum point of the convex function. The results extend several results in Hilbert spaces, Hadamard manifolds and non-positive curvature metric spaces.

16.
This paper deals with a general fixed point method which unifies relaxation factors and a two-step inertial-type extrapolation. These strategies are intended to improve the convergence of many existing algorithms. A convergence theorem, which improves the known ones, is established in this new setting.

17.
Approximate iterations in Bregman-function-based proximal algorithms
This paper establishes convergence of generalized Bregman-function-based proximal point algorithms when the iterates are computed only approximately. The problem being solved is modeled as a general maximal monotone operator and need not reduce to minimization of a function. The accuracy conditions on the iterates resemble those required for the classical “linear” proximal point algorithm, but are slightly stronger; they should be easier to verify or enforce in practice than the conditions given in earlier analyses of approximate generalized proximal methods. Subject to these practically enforceable accuracy restrictions, convergence is obtained under the same conditions currently established for exact Bregman-function-based proximal methods.

18.
In a finite-dimensional Euclidean space, we study the convergence of a proximal point method to a solution of the inclusion induced by a maximal monotone operator, in the presence of computational errors. Most results known in the literature establish the convergence of proximal point methods when computational errors are summable. In the present paper, the convergence of the method is established for nonsummable computational errors. We show that the proximal point method generates a good approximate solution if the sequence of computational errors is bounded from above by a constant.

19.
In a Hilbert space, we study the convergence of a proximal point method to a common zero of a finite family of maximal monotone operators, in the presence of computational errors. Most results known in the literature establish the convergence of proximal point methods when computational errors are summable. In the present paper, the convergence of the method is established for nonsummable computational errors. We show that the proximal point method generates a good approximate solution if the sequence of computational errors is bounded from above by a constant.

20.
This paper deals with the convergence analysis of a general fixed point method which unifies KM-type (Krasnoselskii–Mann) iteration and inertial-type extrapolation. This strategy is intended to speed up the convergence of algorithms in signal processing and image reconstruction that can be formulated as KM iterations. The convergence theorems established in this new setting improve known ones, and some applications are given regarding convex feasibility problems, subgradient methods, fixed point problems and monotone inclusions.
