Similar Documents
20 similar documents found.
1.
《Optimization》2012,61(1):3-17
Two inexact versions of a Bregman-function-based proximal method for finding a zero of a maximal monotone operator, suggested in [J. Eckstein (1998). Approximate iterations in Bregman-function-based proximal algorithms. Math. Programming, 83, 113–123; P. da Silva, J. Eckstein and C. Humes (2001). Rescaling and stepsize selection in proximal methods using separable generalized distances. SIAM J. Optim., 12, 238–261], are considered. For a wide class of Bregman functions, including the standard entropy kernel and all strongly convex Bregman functions, convergence of these methods is proved under an essentially weaker accuracy condition on the iterates than in the original papers.

Also the error criterion of a logarithmic–quadratic proximal method, developed in [A. Auslender, M. Teboulle and S. Ben-Tiba (1999). A logarithmic-quadratic proximal method for variational inequalities. Computational Optimization and Applications, 12, 31–40], is relaxed, and convergence results for the inexact version of the proximal method with entropy-like distance functions are described.

For the methods mentioned, as in [R.T. Rockafellar (1976). Monotone operators and the proximal point algorithm. SIAM J. Control Optim., 14, 877–898] for the classical proximal point algorithm, only summability of the sequence of error-vector norms is required.
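
As a concrete illustration of the summable-error criterion shared by these methods and Rockafellar's classical analysis, the following minimal Python sketch runs an inexact proximal point iteration for T = ∂f with f(x) = ‖x‖₁ (whose resolvent is soft-thresholding) and perturbs each step by an error whose norms are summable. The operator, error sequence and step length are illustrative assumptions, not taken from the cited papers.

```python
# Minimal sketch: inexact proximal point iteration with summable errors,
# shown for T = ∂f with f(x) = ||x||_1, whose resolvent is soft-thresholding.
# This only illustrates the summable-error criterion, not the Bregman-distance
# methods analysed in the entry.
import numpy as np

def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def inexact_ppa(x0, lam=1.0, iters=50):
    x = np.asarray(x0, dtype=float)
    for k in range(iters):
        exact_step = soft_threshold(x, lam)   # exact resolvent of lam*T
        e_k = 0.5 ** k * np.ones_like(x)      # error vectors with summable norms
        x = exact_step + e_k                  # inexact iterate
    return x

print(inexact_ppa(np.array([3.0, -2.0, 0.5])))
```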

2.
Abstract

In this article, we study Δ-convergence of iterations for a sequence of strongly quasi-nonexpansive mappings, as well as the strong convergence of the Halpern-type regularization of them, in Hadamard spaces. Then, we give some of their applications to iterative methods, convex and pseudo-convex minimization (the proximal point algorithm), fixed point theory and equilibrium problems. The results extend several new results in the literature and some of them seem new even in Hilbert spaces. One of our motivations is to unify some usual iterative methods in fixed point theory and proximal methods using the iterations generated by a sequence of strongly nonexpansive mappings.
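
For orientation, here is a minimal sketch of a Halpern-type iteration x_{n+1} = a_n u + (1 - a_n) T x_n in a Euclidean (hence Hilbert) setting, with T a nonexpansive projection; the Hadamard-space machinery and the sequences of strongly quasi-nonexpansive mappings studied in the entry are not reproduced. The map T, the anchor u and the coefficients a_n are my own toy choices.

```python
# Minimal sketch of a Halpern-type regularization in a Hilbert-space setting.
# T is the (nonexpansive) projection onto the unit ball; the iterates converge
# to the projection of the anchor u onto the fixed-point set of T.
import numpy as np

def proj_unit_ball(x):
    n = np.linalg.norm(x)
    return x if n <= 1 else x / n

def halpern(u, x0, iters=200):
    x = np.asarray(x0, dtype=float)
    for n in range(iters):
        a = 1.0 / (n + 2)                        # a_n -> 0, sum a_n = infinity
        x = a * u + (1 - a) * proj_unit_ball(x)  # Halpern step
    return x

print(halpern(u=np.array([2.0, 0.0]), x0=np.array([5.0, 5.0])))  # ~ (1, 0)
```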

3.
Abstract

We present an interior proximal method for solving constrained nonconvex optimization problems where the objective function is given by the difference of two convex functions (a DC function). To this end, we consider a linearized proximal method with a proximal distance as regularization. Convergence analysis is carried out for particular choices of the proximal distance, namely second-order homogeneous proximal distances and Bregman distances. Finally, some academic numerical results are presented for a constrained DC problem and for generalized Fermat–Weber location problems.
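
The following minimal sketch illustrates one linearized proximal step for a DC objective f = g - h with g, h convex, using the plain quadratic distance in place of a general proximal or Bregman distance. The toy functions g(x) = x^4/4 and h(x) = x^2/2 and all parameters are illustrative assumptions, not the constrained or location problems of the entry.

```python
# Minimal sketch of a linearized proximal (DCA-like) iteration for f = g - h:
# at each step, h is linearized at the current point and a quadratic-regularized
# subproblem in g is solved. Toy problem: g(x) = x**4/4, h(x) = x**2/2.
from scipy.optimize import minimize_scalar

def dc_linearized_prox(x0, lam=1.0, iters=30):
    x = x0
    for _ in range(iters):
        grad_h = x                                  # h'(x) for h(x) = x^2/2
        sub = lambda y, xk=x, g=grad_h: y**4 / 4 - g * y + (y - xk)**2 / (2 * lam)
        x = minimize_scalar(sub).x                  # proximal step on the linearized model
    return x

print(dc_linearized_prox(x0=0.3))   # stationary points of x^4/4 - x^2/2 are 0 and +/-1
```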

4.
《Optimization》2012,61(10):1701-1716
ABSTRACT

In this paper, a hybrid proximal algorithm with inertial effect is introduced to solve a split variational inclusion problem in real Hilbert spaces. Under mild conditions on the parameters, we establish weak convergence results for the proposed algorithm. Unlike the earlier iterative methods, we do not impose any conditions on the sequence generated by the proposed algorithm. Also, we extend our results to find a common solution of a split variational inclusion problem and a fixed-point problem. Finally, some numerical examples are given to discuss the convergence and superiority of the proposed iterative methods.
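
As a simplified illustration of the inertial effect, the sketch below runs a plain inertial proximal iteration x_{k+1} = J_{λA}(x_k + θ(x_k - x_{k-1})) for a single maximal monotone operator. The split structure, the hybrid step and the fixed-point extension of the entry are omitted, and the affine operator A(x) = Qx + b with its parameters is an assumed toy example.

```python
# Minimal sketch of an inertial proximal iteration for one maximal monotone
# operator A(x) = Qx + b (Q symmetric positive semidefinite), whose resolvent
# is a linear solve. The iterates approach the zero of A.
import numpy as np

def resolvent_affine(y, lam, Q, b):
    # J_{lam*A}(y) = (I + lam*Q)^{-1}(y - lam*b)
    return np.linalg.solve(np.eye(Q.shape[0]) + lam * Q, y - lam * b)

def inertial_prox(Q, b, x0, lam=1.0, theta=0.3, iters=100):
    x_prev = x = np.array(x0, dtype=float)
    for _ in range(iters):
        y = x + theta * (x - x_prev)                 # inertial extrapolation
        x_prev, x = x, resolvent_affine(y, lam, Q, b)
    return x

Q = np.array([[2.0, 0.0], [0.0, 1.0]]); b = np.array([-2.0, 1.0])
print(inertial_prox(Q, b, x0=[0.0, 0.0]))            # zero of Qx + b is (1, -1)
```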

5.
In this paper a new class of proximal-like algorithms for solving monotone inclusions of the form T(x)∋0 is derived. It is obtained by applying linear multi-step methods (LMM) of numerical integration in order to solve an associated differential inclusion, which can be viewed as a generalization of the steepest descent method for a convex function. It is proved that under suitable conditions on the parameters of the LMM, the generated sequence converges weakly to a point in the solution set $T^{-1}(0)$. The LMM is very similar to the classical proximal point algorithm in that both are based on approximately evaluating the resolvents of T. Consequently, LMM can be used to derive multi-step versions of many of the optimization methods based on the classical proximal point algorithm. The convergence analysis allows errors in the computation of the iterates, and two different error criteria are analyzed, namely, the classical scheme with summable errors and a recently proposed more constructive criterion.
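
To make the numerical-integration viewpoint concrete, the sketch below discretizes dx/dt ∈ -T(x) with a two-step BDF-type formula, so that each step reduces to a single resolvent (proximal) evaluation of T. The affine monotone operator, the step size and the starting procedure are illustrative assumptions rather than the general LMM class analysed in the entry.

```python
# Minimal sketch: a BDF2-style two-step discretization of dx/dt in -T(x).
# Each step solves x_{k+1} + (2h/3)*T(x_{k+1}) = (4x_k - x_{k-1})/3, i.e. one
# resolvent evaluation. Shown for T(x) = Qx + b, where the resolvent is a solve.
import numpy as np

def resolvent(y, s, Q, b):
    # (I + s*T)^{-1}(y) for T(x) = Qx + b
    return np.linalg.solve(np.eye(Q.shape[0]) + s * Q, y - s * b)

def bdf2_proximal(Q, b, x0, h=0.5, iters=60):
    x_prev = np.array(x0, dtype=float)
    x = resolvent(x_prev, h, Q, b)            # one implicit-Euler start step
    for _ in range(iters):
        rhs = (4 * x - x_prev) / 3            # BDF2 combination of past iterates
        x_prev, x = x, resolvent(rhs, 2 * h / 3, Q, b)
    return x                                   # approaches a zero of T

Q = np.array([[3.0, 1.0], [1.0, 2.0]]); b = np.array([1.0, -1.0])
print(bdf2_proximal(Q, b, x0=[0.0, 0.0]), np.linalg.solve(Q, -b))
```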

6.
Abstract

We propose two forward–backward proximal point type algorithms with inertial/memory effects for determining weakly efficient solutions to a vector optimization problem consisting in vector-minimizing with respect to a given closed convex pointed cone the sum of a proper cone-convex vector function with a cone-convex differentiable one, both mapping from a Hilbert space to a Banach one. Inexact versions of the algorithms, more suitable for implementation, are provided as well, while as a byproduct one can also derive a forward–backward method for solving the mentioned problem. Numerical experiments with the proposed methods are carried out in the context of solving a portfolio optimization problem.
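
The scalar backbone of such schemes is the forward–backward step x_{k+1} = prox_{tg}(x_k - t∇f(x_k)); the sketch below shows it with g the indicator of the nonnegative orthant, so the proximal step is a projection. The least-squares objective and data are toy assumptions; the cone-convex, Banach-space setting and the inertial variants of the entry are not reproduced.

```python
# Minimal sketch of the forward-backward (proximal gradient) iteration:
# forward gradient step on the smooth part, backward prox step on the
# nonsmooth part (here an indicator, so the prox is a projection).
import numpy as np

def forward_backward(A, b, iters=300):
    # minimizes f(x) = 0.5*||Ax - b||^2 subject to x >= 0
    t = 1.0 / np.linalg.norm(A, 2) ** 2        # step size 1/L with L = ||A||^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)               # forward (gradient) step
        x = np.maximum(x - t * grad, 0.0)      # backward (prox = projection) step
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = A @ np.array([1.0, 0.0, 2.0, 0.0, 0.5]) + 0.01 * rng.standard_normal(20)
print(forward_backward(A, b))
```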

7.
《Quaestiones Mathematicae》2013,36(4):443-452
Abstract

Proximal limit spaces are introduced, which fill the gap arising from the existence of proximity spaces, uniform spaces, and uniform limit spaces. It is shown that the proximal limit spaces can be considered as a bireflective subcategory of the topological category of uniform limit spaces. A limit space is induced by a proximal limit space if and only if it is an S1-limit space.

8.
We present a unified convergence rate analysis of iterative methods for solving the variational inequality problem. Our results are based on certain error bounds; they subsume and extend the linear and sublinear rates of convergence established in several previous studies. We also derive a new error bound for $\gamma$-strictly monotone variational inequalities. The class of algorithms covered by our analysis is fairly broad. It includes some classical methods for variational inequalities, e.g., the extragradient, matrix splitting, and proximal point methods. For these methods, our analysis gives estimates not only for linear convergence (which had been studied extensively), but also for sublinear convergence, depending on the properties of the solution. In addition, our framework includes a number of algorithms to which previous studies are not applicable, such as the infeasible projection methods, a separation-projection method, (inexact) hybrid proximal point methods, and some splitting techniques. Finally, our analysis covers certain feasible descent methods of optimization, for which similar convergence rate estimates have been recently obtained by Luo [14].
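
One of the classical methods covered by such analyses is Korpelevich's extragradient method; the following minimal sketch applies it to an assumed toy variational inequality with an affine monotone map over a box. The error-bound machinery and the rate estimates of the entry are not reproduced.

```python
# Minimal sketch of the extragradient method for a variational inequality
# VI(F, C): find x in C with <F(x), y - x> >= 0 for all y in C.
# Toy instance: affine monotone F(x) = Mx + q over the box C = [0, 1]^2.
import numpy as np

def proj_box(x, lo=0.0, hi=1.0):
    return np.clip(x, lo, hi)

def extragradient(M, q, x0, tau=0.1, iters=500):
    x = np.array(x0, dtype=float)
    F = lambda z: M @ z + q
    for _ in range(iters):
        y = proj_box(x - tau * F(x))     # predictor projection
        x = proj_box(x - tau * F(y))     # corrector projection
    return x

M = np.array([[2.0, 1.0], [-1.0, 2.0]])  # monotone: symmetric part is positive definite
q = np.array([-1.0, -1.0])
print(extragradient(M, q, x0=[0.5, 0.5]))
```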

9.
《Optimization》2012,61(11):1849-1868
ABSTRACT

The notions of perturbed optimization problem and perturbed distance function are introduced in Riemannian manifolds. Some characterizations of Fréchet and proximal subdifferentials of perturbed distance function in the context of Riemannian manifolds are given.  相似文献   

10.
This paper establishes a linear convergence rate for a class of epsilon-subgradient descent methods for minimizing certain convex functions on $\mathbb{R}^n$. Currently prominent methods belonging to this class include the resolvent (proximal point) method and the bundle method in proximal form (considered as a sequence of serious steps). Other methods, such as a variant of the proximal point method given by Correa and Lemaréchal, can also fit within this framework, depending on how they are implemented. The convex functions covered by the analysis are those whose conjugates have subdifferentials that are locally upper Lipschitzian at the origin, a property generalizing classical regularity conditions.
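
A standard calculation, not taken from the paper, indicates why a proximal (resolvent) step fits the epsilon-subgradient framework: with step size $t > 0$,
$$x_{k+1} = \operatorname{prox}_{t f}(x_k) \iff g_k := \tfrac{1}{t}(x_k - x_{k+1}) \in \partial f(x_{k+1}),$$
and the subgradient inequality at $x_{k+1}$ gives, for every $y$,
$$f(y) \ge f(x_k) + \langle g_k, y - x_k \rangle - \varepsilon_k, \qquad \varepsilon_k := f(x_k) - f(x_{k+1}) - \langle g_k, x_k - x_{k+1} \rangle \ge 0,$$
so $g_k \in \partial_{\varepsilon_k} f(x_k)$ and the proximal step $x_{k+1} = x_k - t g_k$ is an epsilon-subgradient descent step. The symbols $t$, $g_k$ and $\varepsilon_k$ are my own notation.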

11.
Abstract

In this paper, motivated by Moreau's proximal algorithm, we give several algorithms and related weak and strong convergence theorems for minimization problems under suitable conditions. These algorithms and convergence theorems are different from the results in the literature. In addition, we study algorithms and convergence theorems for the split feasibility problem in real Hilbert spaces. Finally, we give numerical results illustrating our main results.
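
For the split feasibility part, a commonly used scheme for this problem class is the CQ-type iteration x_{k+1} = P_C(x_k - γAᵀ(Ax_k - P_Q(Ax_k))); the sketch below shows it on an assumed toy instance and is not claimed to coincide with the algorithms or convergence theorems of the entry.

```python
# Minimal sketch of a CQ-type iteration for the split feasibility problem:
# find x in C such that Ax is in Q, given projections onto C and Q.
import numpy as np

def cq_algorithm(A, proj_C, proj_Q, x0, iters=500):
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2           # step size below 2/||A||^2
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        Ax = A @ x
        x = proj_C(x - gamma * A.T @ (Ax - proj_Q(Ax)))
    return x

A = np.array([[1.0, 2.0], [0.0, 1.0]])
proj_C = lambda x: np.clip(x, 0.0, None)               # C = nonnegative orthant
proj_Q = lambda y: np.clip(y, -1.0, 1.0)               # Q = box [-1, 1]^2
print(cq_algorithm(A, proj_C, proj_Q, x0=[2.0, 2.0]))
```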

12.
Abstract

This paper is devoted to the study of proximal distances defined over symmetric cones, which include the non-negative orthant, the second-order cone and the cone of positive semi-definite symmetric matrices. Specifically, our first aim is to provide two ways to build them. For this, we consider two classes of real-valued functions satisfying some assumptions. Then, we show that the corresponding spectrally defined functions define proximal distances. In addition, we present several examples and some properties of this distance. Taking into account these properties, we analyse the convergence of proximal-type algorithms for solving convex symmetric cone programming (SCP) problems, and we study the asymptotic behaviour of primal central paths associated with a proximal distance. Finally, for linear SCP problems, we provide a relationship between the proximal sequence and the primal central path.

13.
In this note, a small gap is corrected in the proof of H.K. Xu [Theorem 3.3, A regularization method for the proximal point algorithm, J. Glob. Optim. 36, 115–125 (2006)], and a strict restriction is also removed.

14.
The proximal point mapping is the basis of many optimization techniques for convex functions. By means of variational analysis, the concept of proximal mapping was recently extended to nonconvex functions that are prox-regular and prox-bounded. In such a setting, the proximal point mapping is locally Lipschitz continuous and its set of fixed points coincides with the critical points of the original function. This suggests that the many uses of proximal points, and their corresponding proximal envelopes (Moreau envelopes), will have a natural extension from convex optimization to nonconvex optimization. For example, the inexact proximal point methods for convex optimization might be redesigned to work for nonconvex functions. In order to begin the practical implementation of proximal points in a nonconvex setting, a first crucial step would be to design efficient methods of approximating nonconvex proximal points. This would provide a solid foundation on which future design and analysis for nonconvex proximal point methods could flourish. In this paper we present a methodology based on the computation of proximal points of piecewise affine models of the nonconvex function. These models can be built with only the knowledge obtained from a black box providing, for each point, the function value and one subgradient. Convergence of the method is proved for the class of nonconvex functions that are prox-bounded and lower-$\mathcal{C}^2$, and encouraging preliminary numerical testing is reported.
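
The sketch below illustrates one such model-based step in one dimension: a piecewise-affine (cutting-plane) model is assembled from black-box value/subgradient pairs and its proximal point is computed numerically. The nonconvex test function, the sample points and the prox parameter are illustrative assumptions, not the paper's method or data.

```python
# Minimal sketch: one proximal step on a piecewise-affine model built from a
# black-box oracle returning f(z) and a (sub)gradient at z. The proximal
# subproblem (convex model + quadratic) is unimodal, so a bounded 1-D
# minimizer suffices for this toy.
import numpy as np
from scipy.optimize import minimize_scalar

f  = lambda x: 0.5 * x**2 - np.cos(2 * x)     # nonconvex, prox-bounded, smooth toy function
df = lambda x: x + 2 * np.sin(2 * x)          # black-box (sub)gradient oracle

def model_prox_step(center, sample_pts, r=2.0):
    cuts = [(f(z), df(z), z) for z in sample_pts]                     # oracle calls
    model = lambda y: max(fz + gz * (y - z) for fz, gz, z in cuts)    # piecewise-affine model
    sub = lambda y: model(y) + 0.5 * r * (y - center) ** 2            # proximal subproblem
    res = minimize_scalar(sub, bounds=(center - 5, center + 5), method="bounded")
    return res.x

print(model_prox_step(center=2.0, sample_pts=[2.0, 1.0, 0.5]))
```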

15.
Sparse principal component analysis (PCA), an important variant of PCA, attempts to find sparse loading vectors when conducting dimension reduction. This paper considers the nonsmooth Riemannian optimization problem associated with the ScoTLASS model [1] for sparse PCA, which can impose orthogonality and sparsity simultaneously. A Riemannian proximal method is proposed in the work of Chen et al. [9] for the efficient solution of this optimization problem. In this paper, two acceleration schemes are introduced. First and foremost, we extend the FISTA method from the Euclidean space to the Riemannian manifold to solve sparse PCA, leading to the accelerated Riemannian proximal gradient method. Since the Riemannian optimization problem for sparse PCA is essentially nonconvex, a restarting technique is adopted to stabilize the accelerated method without sacrificing the fast convergence. Second, a diagonal preconditioner is proposed for the Riemannian proximal subproblem, which can further accelerate the convergence of the Riemannian proximal methods. Numerical evaluations establish the computational advantages of the proposed methods over the existing proximal gradient methods on a manifold. Additionally, a short result concerning the convergence of the Riemannian subgradients of a sequence is established, which, together with the result in the work of Chen et al. [9], can show the stationary point convergence of the Riemannian proximal methods.
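
As a flat-space reference point, here is a minimal Euclidean FISTA with a function-value restart, the prototype the entry transplants to the manifold setting. The retraction, the orthogonality constraint of sparse PCA and the diagonal preconditioner are not reproduced, and the LASSO-type toy problem is an assumption.

```python
# Minimal sketch: Euclidean FISTA with a function-value restart on a
# LASSO-type toy problem 0.5*||Ax - b||^2 + lam*||x||_1. The restart drops
# the momentum whenever the objective increases, stabilizing the iteration.
import numpy as np

def prox_l1(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista_restart(A, b, lam=0.1, iters=300):
    t = 1.0 / np.linalg.norm(A, 2) ** 2
    obj = lambda z: 0.5 * np.sum((A @ z - b) ** 2) + lam * np.sum(np.abs(z))
    x = y = np.zeros(A.shape[1]); theta = 1.0
    for _ in range(iters):
        x_new = prox_l1(y - t * A.T @ (A @ y - b), t * lam)     # proximal gradient step
        theta_new = (1 + np.sqrt(1 + 4 * theta**2)) / 2
        y = x_new + (theta - 1) / theta_new * (x_new - x)       # momentum extrapolation
        if obj(x_new) > obj(x):                                 # restart: drop momentum
            y, theta_new = x_new, 1.0
        x, theta = x_new, theta_new
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 8)); b = A @ (np.arange(8) % 3 == 0).astype(float)
print(fista_restart(A, b))
```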

16.
Abstract

In the present investigation, shooting methods are described for numerically solving nonlinear stochastic boundary-value problems. These stochastic shooting methods are analogous to standard shooting methods for numerical solution of ordinary deterministic boundary-value problems. It is shown that the shooting methods provide accurate approximations. An error analysis is performed and computational simulations are described.
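
For readers unfamiliar with the shooting idea, the deterministic prototype is sketched below: the unknown initial slope is adjusted until the terminal boundary condition is met. The stochastic ingredients of the entry are not reproduced; the boundary-value problem y'' = -y, y(0) = 0, y(π/2) = 1 is an illustrative choice with exact solution y = sin x.

```python
# Minimal sketch of a deterministic shooting method: integrate the IVP for a
# trial slope, measure the mismatch at the right boundary, and root-find on
# the slope.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def shoot(slope):
    rhs = lambda t, u: [u[1], -u[0]]                          # y'' = -y as a first-order system
    sol = solve_ivp(rhs, (0.0, np.pi / 2), [0.0, slope], rtol=1e-8)
    return sol.y[0, -1] - 1.0                                 # mismatch at the right boundary

slope = brentq(shoot, 0.0, 2.0)                               # slope giving zero mismatch
print(slope)                                                  # exact value is 1.0
```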

17.
Abstract

Several discontinuous Galerkin (DG) methods are introduced for solving a frictional contact problem with normal compliance, which is modeled as a quasi-variational inequality. Consistency, boundedness, and stability are established for the DG methods. Two numerical examples are presented to illustrate the performance of the DG methods.

18.
《Stochastic Analysis and Applications》2013,31(5):1295-1314
Abstract

In the present investigation, numerical methods are developed for approximate solution of stochastic boundary-value problems. In particular, shooting methods are examined for numerically solving systems of Stratonovich boundary-value problems. It is proved that these methods accurately approximate the solutions of stochastic boundary-value problems. An error analysis of these methods is performed. Computational simulations are given.

19.
Algorithms are proposed for the approximate calculation of the matrix product $\tilde{C} \approx C = A \cdot B$, where the matrices A and B are given by their tensor decompositions in either canonical or Tucker format of rank r. The matrix C is not calculated as a full array; instead, it is first represented by a similar decomposition with a redundant rank and is then reapproximated (compressed) within the prescribed accuracy to reduce the rank. The available reapproximation algorithms as applied to the above problem require that an array containing $r^{2d}$ elements be stored, where d is the dimension of the corresponding space. Due to the memory and speed limitations, these algorithms are inapplicable even for the typical values d = 3 and r ∼ 30. In this paper, methods are proposed that approximate the mode factors of C using individually chosen accuracy criteria. As an application, the three-dimensional Coulomb potential is calculated. It is shown that the proposed methods are efficient if r can be as large as several hundred and the reapproximation (compression) of C has low complexity compared to the preliminary calculation of the factors in the tensor decomposition of C with a redundant rank.
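
The redundant-rank effect can be seen already in a minimal two-factor stand-in for the canonical format: if A and B are each stored as sums of r Kronecker products, the exact product A·B has r² terms before any recompression. The sketch below only demonstrates this term count and an exactness check; the paper's compression step and factor-wise accuracy control are not reproduced.

```python
# Minimal sketch: matrices stored as sums of Kronecker products (a 2D stand-in
# for the canonical tensor format). Using (A1 x A2)(B1 x B2) = (A1 B1) x (A2 B2),
# the exact product of two rank-r representations has r*r terms ("redundant rank").
import numpy as np

rng = np.random.default_rng(2)
r, n = 3, 4
A_terms = [(rng.standard_normal((n, n)), rng.standard_normal((n, n))) for _ in range(r)]
B_terms = [(rng.standard_normal((n, n)), rng.standard_normal((n, n))) for _ in range(r)]

# exact product in factored form: r*r Kronecker terms before recompression
C_terms = [(A1 @ B1, A2 @ B2) for (A1, A2) in A_terms for (B1, B2) in B_terms]

A = sum(np.kron(A1, A2) for A1, A2 in A_terms)
B = sum(np.kron(B1, B2) for B1, B2 in B_terms)
C = sum(np.kron(C1, C2) for C1, C2 in C_terms)
print(len(C_terms), np.allclose(C, A @ B))     # 9 True
```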

20.
Standard analysis of multistep methods for ODEs assumes the application of an initialization routine that generates the starting points. Here a k-step method is considered directly as a mapping $\mathbb{R}^{kn} \to \mathbb{R}^{n}$. It is shown to approximate a mapping which is expressible directly in terms of the flow of the vector field. Some useful properties of that mapping are shown, and for strictly stable methods these are applied to the question of invariant circles near a hyperbolic periodic solution.
