Similar Literature
20 similar records retrieved (search time: 62 ms)
1.
《Optimization》2012,61(1):39-50
We extend the convergence analysis of a smoothing method [M. Fukushima and J.-S. Pang (2000). Convergence of a smoothing continuation method for mathematical programs with complementarity constraints. In: M. Théra and R. Tichatschke (Eds.), Ill-posed Variational Problems and Regularization Techniques, pp. 99–110. Springer, Berlin/Heidelberg.] to a general class of smoothing functions and show that a weak second-order necessary optimality condition holds at the limit point of a sequence of stationary points found by the smoothing method. We also show that convergence and stability results in [S. Scholtes (2001). Convergence properties of a regularization scheme for mathematical programs with complementarity constraints. SIAM J. Optim., 11, 918–936.] hold for a relaxation problem suggested by Scholtes [S. Scholtes (2003). Private communications.] using a class of smoothing functions. In addition, the relationship between two technical, yet critical, concepts in [M. Fukushima and J.-S. Pang (2000). Convergence of a smoothing continuation method for mathematical programs with complementarity constraints. In: M. Théra and R. Tichatschke (Eds.), Ill-posed Variational Problems and Regularization Techniques, pp. 99–110. Springer, Berlin/Heidelberg; S. Scholtes (2001). Convergence properties of a regularization scheme for mathematical programs with complementarity constraints. SIAM J. Optim., 11, 918–936.] for the convergence analysis of the smoothing and regularization methods is discussed and a counter-example is provided to show that the stability result in [S. Scholtes (2001). Convergence properties of a regularization scheme for mathematical programs with complementarity constraints. SIAM J. Optim., 11, 918–936.] cannot be extended to a weaker regularization.
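
As a rough illustration of the kind of smoothing the abstract refers to (not the authors' specific construction), the Python sketch below uses one classical member of such a class, the perturbed-min function φ_μ(a,b) = ½(a + b − √((a−b)² + 4μ²)), and shows that it recovers the complementarity condition min(a, b) = 0 as μ → 0; the numerical values are illustrative assumptions.

```python
import numpy as np

def smoothed_min(a, b, mu):
    """Perturbed-min smoothing of min(a, b): tends to min(a, b) as mu -> 0."""
    return 0.5 * (a + b - np.sqrt((a - b) ** 2 + 4.0 * mu ** 2))

# The complementarity constraint  a >= 0, b >= 0, a*b = 0  is equivalent to
# min(a, b) = 0; a smoothing method replaces it by smoothed_min(a, b, mu) = 0
# and drives mu -> 0, solving a sequence of smooth problems.
a, b = 0.7, 0.0          # a complementary pair (b is the active side)
for mu in [1.0, 1e-1, 1e-2, 1e-4, 1e-8]:
    print(f"mu = {mu:8.1e}   phi_mu(a, b) = {smoothed_min(a, b, mu): .6f}")
# phi_mu(a, b) tends to min(a, b) = 0, recovering the original constraint.
```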

2.
《Optimization》2012,61(7):1085-1105
We analyse proximal-type minimization methods with generalized Bregman functions by considering a general scheme based on the one studied by Kiwiel [K.C. Kiwiel, Proximal minimization methods with generalized Bregman functions, SIAM J. Control Optim. 35(4) (1997), pp. 1142–1168.] and on successive approximation methods. We apply this scheme to construct methods for generalized fractional programmes.

3.
To permit the stable solution of ill-posed problems, the Proximal Point Algorithm (PPA) was introduced by Martinet (RIRO 4:154–159, 1970) and further developed by Rockafellar (SIAM J Control Optim 14:877–898, 1976). Later on, the usual proximal distance function was replaced by the more general class of Bregman(-like) functions and related distances; see e.g. Chen and Teboulle (SIAM J Optim 3:538–543, 1993), Eckstein (Math Program 83:113–123, 1998), Kaplan and Tichatschke (Optimization 56(1–2):253–265, 2007), and Solodov and Svaiter (Math Oper Res 25:214–230, 2000). An adequate use of such generalized non-quadratic distance kernels makes it possible to obtain an interior-point effect, that is, the auxiliary problems may be treated as unconstrained ones. In the above-mentioned works, and in nearly all other works related to this topic, it was assumed that the operator of the considered variational inequality is maximal monotone and paramonotone. The approaches of El-Farouq (JOTA 109:311–326, 2001) and Schaible et al. (Taiwan J Math 10(2):497–513, 2006) only need pseudomonotonicity (in the sense of Karamardian, JOTA 18:445–454, 1976); however, they make use of other restrictive assumptions which, on the one hand, contradict the desired interior-point effect and, on the other hand, imply uniqueness of the solution of the problem. The present work discusses the Bregman algorithm under significantly weaker assumptions, namely pseudomonotonicity and an additional assumption much less restrictive than the ones used by El-Farouq and Schaible et al. We show that convergence results known from the monotone case still hold true; some of them are sharpened or are even new. An interior-point effect is obtained, and for the generated subproblems we allow inexact solutions by means of a unified use of a summable-error criterion and an error criterion of fixed-relative-error type (this combination is also new in the literature).
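
The following minimal Python sketch illustrates the interior-point effect of a non-quadratic Bregman kernel on a toy problem. It uses the entropy kernel and an inexactly solved subproblem; it is only a generic sketch under these assumptions, not the algorithm analyzed above, and the box bounds passed to the inner solver are merely a numerical safeguard for the logarithm, not part of the method.

```python
import numpy as np
from scipy.optimize import minimize

def kl_bregman(x, y):
    """Bregman distance of the negative entropy h(x) = sum x*log(x):
    D_h(x, y) = sum(x*log(x/y) - x + y)."""
    return np.sum(x * np.log(x / y) - x + y)

def bregman_prox_step(f, xk, lam):
    """One (inexactly solved) proximal step
    x_{k+1} ~ argmin_x f(x) + (1/lam) * D_h(x, xk);
    the entropy kernel keeps the iterates in the open positive orthant."""
    obj = lambda x: f(x) + kl_bregman(x, xk) / lam
    res = minimize(obj, xk, method="L-BFGS-B",
                   bounds=[(1e-12, None)] * len(xk))  # guards log() only
    return res.x

# Toy problem: minimize f over x >= 0; the unconstrained minimizer (1, -2)
# is infeasible, so the constrained solution sits on the boundary at (1, 0).
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
x = np.array([2.0, 1.0])          # strictly positive starting point
for k in range(30):
    x = bregman_prox_step(f, x, lam=1.0)
print(x)   # approaches (1, 0) from the interior; no projection was needed
```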

4.
In this paper we introduce general iterative methods for finding zeros of a maximal monotone operator in a Hilbert space which unify two previously studied iterative methods: relaxed proximal point algorithm [H.K. Xu, Iterative algorithms for nonlinear operators, J. London Math. Soc. 66 (2002) 240–256] and inexact hybrid extragradient proximal point algorithm [R.S. Burachik, S. Scheimberg, B.F. Svaiter, Robustness of the hybrid extragradient proximal-point algorithm, J. Optim. Theory Appl. 111 (2001) 117–136]. The paper establishes both weak convergence and strong convergence of the methods under suitable assumptions on the algorithm parameters.
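
A minimal sketch of the relaxed proximal point iteration x_{k+1} = (1 − t) x_k + t J_c(x_k) that the abstract builds on, assuming an affine monotone operator so that the resolvent J_c = (I + cA)^{-1} is a plain linear solve. The operator, parameters and iteration count are illustrative assumptions, not part of the cited methods.

```python
import numpy as np

# Maximal monotone operator A(x) = M x - b with M symmetric positive
# semidefinite; the zeros of A are the solutions of M x = b.
M = np.array([[2.0, 0.0], [0.0, 0.0]])   # singular: a whole line of zeros
b = np.array([2.0, 0.0])

def resolvent(x, c):
    """J_c(x) = (I + c*A)^{-1}(x), a linear solve for this affine A."""
    n = len(x)
    return np.linalg.solve(np.eye(n) + c * M, x + c * b)

x = np.array([5.0, 3.0])
c, t = 1.0, 0.5                           # proximal parameter, relaxation
for k in range(60):
    x = (1.0 - t) * x + t * resolvent(x, c)   # relaxed PPA step
print(x, M @ x - b)                       # x is (close to) a zero of A
```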

5.
In this paper, the problem of identifying the active constraints of constrained nonlinear programming and minimax problems at an isolated local solution is discussed. The correct identification of active constraints can improve the local convergence behavior of algorithms and considerably simplify algorithms for inequality constrained problems, so it is a useful adjunct to nonlinear optimization algorithms. Facchinei et al. [F. Facchinei, A. Fischer, C. Kanzow, On the accurate identification of active constraints, SIAM J. Optim. 9 (1998) 14-32] introduced an effective technique which can identify the active set in a neighborhood of a solution for nonlinear programming. In this paper, we first sharpen this result so that it is better suited to infeasible algorithms such as the strongly sub-feasible direction method and the penalty function method. We then present an identification technique for the active constraints of constrained minimax problems without strict complementarity and linear independence. Some numerical results illustrating the identification technique are reported.
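
A schematic sketch, in the spirit of the identification rule of Facchinei et al., of how an active set can be estimated from an approximate KKT pair: take the constraints whose values exceed minus an identification radius built from the KKT residual. The toy problem, the residual map and the choice ρ = √(residual) are illustrative assumptions, not the paper's exact technique.

```python
import numpy as np

# Toy problem: min (x1-2)^2 + (x2-1)^2  s.t.  g1 = x1 - 1 <= 0,  g2 = x2 - 5 <= 0.
# Solution x* = (1, 1) with multipliers lam* = (2, 0); only g1 is active.
grad_f = lambda x: np.array([2.0 * (x[0] - 2.0), 2.0 * (x[1] - 1.0)])
g      = lambda x: np.array([x[0] - 1.0, x[1] - 5.0])
grad_g = np.array([[1.0, 0.0], [0.0, 1.0]])      # rows are the gradients of g_i

def kkt_residual(x, lam):
    """Norm of a standard KKT residual map for the inequality-constrained problem."""
    stat = grad_f(x) + grad_g.T @ lam            # stationarity
    comp = np.minimum(lam, -g(x))                # complementarity / feasibility
    return np.linalg.norm(np.concatenate([stat, comp]))

def estimated_active_set(x, lam):
    """Identify the constraints with g_i(x) >= -rho, rho = sqrt(KKT residual)."""
    rho = np.sqrt(kkt_residual(x, lam))
    return [i for i, gi in enumerate(g(x)) if gi >= -rho]

x_approx, lam_approx = np.array([1.05, 0.9]), np.array([1.8, 0.0])
print(estimated_active_set(x_approx, lam_approx))   # -> [0], i.e. only g1
```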

6.
This paper proposes a descent method to solve a class of structured monotone variational inequalities. The descent directions are constructed from the iterates generated by a prediction-correction method [B.S. He, Y. Xu, X.M. Yuan, A logarithmic-quadratic proximal prediction-correction method for structured monotone variational inequalities, Comput. Optim. Appl. 35 (2006) 19-46], which is based on the logarithmic-quadratic proximal method. In addition, the optimal step-sizes along these descent directions are identified to accelerate the convergence of the new method. Finally, some numerical results for solving traffic equilibrium problems are reported.

7.
It is known, by Rockafellar (SIAM J Control Optim 14:877–898, 1976), that the proximal point algorithm (PPA) converges weakly to a zero of a maximal monotone operator in a Hilbert space, but it may fail to converge strongly. Lehdili and Moudafi (Optimization 37:239–252, 1996) introduced the prox-Tikhonov regularization method for the PPA to generate a strongly convergent sequence and established a convergence property for it by using the technique of variational distance in the same space setting. In this paper, a prox-Tikhonov regularization method for the proximal point algorithm for finding a zero of an accretive operator in the framework of Banach spaces is proposed. Conditions which guarantee the strong convergence of this algorithm to a particular element of the solution set are provided. An inexact variant of this method with an error sequence is also discussed.
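
A finite-dimensional sketch of the prox-Tikhonov idea: at step k apply the resolvent of the regularized operator A + ε_k I with ε_k → 0. The affine operator and parameter choices are assumptions made for illustration; the point is that the regularized iteration singles out the minimum-norm zero, whereas the plain PPA would only converge to some zero.

```python
import numpy as np

# A(x) = M x with M positive semidefinite and singular, so the zero set of A
# is the whole null space of M (here: the x2-axis).
M = np.diag([1.0, 0.0])

def reg_resolvent(x, c, eps):
    """Resolvent of the Tikhonov-regularized operator A + eps*I:
    (I + c*(A + eps*I))^{-1} x."""
    return np.linalg.solve(np.eye(2) + c * (M + eps * np.eye(2)), x)

x = np.array([4.0, 3.0])
for k in range(1, 4000):
    eps_k = 1.0 / k                   # regularization parameter, eps_k -> 0
    x = reg_resolvent(x, c=1.0, eps=eps_k)
# With eps = 0 (plain PPA) the x2-component would never change and the
# iterates would converge to the zero (0, 3) instead.
print(x)   # tends to the minimum-norm zero (0, 0), not merely to some zero
```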

8.
The authors' paper, Dempe et al. [Necessary optimality conditions in pessimistic bilevel programming. Optimization. 2014;63:505–533], was the first to provide detailed optimality conditions for pessimistic bilevel optimization. The results there were based on the concept of the two-level optimal value function introduced and analysed in Dempe et al. [Sensitivity analysis for two-level value functions with applications to bilevel programming. SIAM J. Optim. 22 (2012), 1309–1343] for the case of optimistic bilevel programs. One of the basic assumptions in both of these papers is that the functions involved in the problems are at least continuously differentiable. Motivated by the fact that many real-world applications of optimization involve functions that are non-differentiable at some points of their domain, the main goal of the current paper is to extend the two-level value function approach by deriving new necessary optimality conditions for both optimistic and pessimistic versions of bilevel programming with non-smooth data.

9.
In this paper, we introduce an iterative sequence for finding a solution of a maximal monotone operator in a uniformly convex Banach space. We first prove a strong convergence theorem, using the notion of generalized projection. Assuming that the duality mapping is weakly sequentially continuous, we then prove a weak convergence theorem, which extends the previous results of Rockafellar [SIAM J. Control Optim. 14 (1976), 877–898] and Kamimura and Takahashi [J. Approx. Theory 106 (2000), 226–240]. Finally, we apply our convergence theorem to the convex minimization problem and the variational inequality problem.

10.
11.
In this paper, we construct an iterative scheme and prove a strong convergence theorem for the generated sequence to an approximate solution of a multiple-sets split feasibility problem in a p-uniformly convex and uniformly smooth real Banach space. Some numerical experiments are given to study the efficiency and implementation of our iteration method. Our result complements the results of F. Wang (A new algorithm for solving the multiple-sets split feasibility problem in Banach spaces, Numerical Functional Anal. Optim. 35 (2014), 99–110), F. Schöpfer et al. (An iterative regularization method for the solution of the split feasibility problem in Banach spaces, Inverse Problems 24 (2008), 055008) and many important recent results in this direction.
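
For orientation, here is the classical Hilbert-space (here: R^n) gradient treatment of a small multiple-sets split feasibility problem, where all metric projections are explicit. This is only a baseline sketch with illustrative sets and step size, not the Banach-space algorithm of the paper.

```python
import numpy as np

# Multiple-sets split feasibility: find x in C1 ∩ C2 with A x in Q.
A = np.array([[1.0, 1.0]])                       # linear operator R^2 -> R^1

proj_C1 = lambda x: np.clip(x, 0.0, 2.0)         # box [0, 2]^2
proj_C2 = lambda x: x if np.linalg.norm(x) <= 1.5 else 1.5 * x / np.linalg.norm(x)
proj_Q  = lambda y: np.clip(y, 1.0, 1.2)         # interval [1, 1.2]

def grad(x):
    """Gradient of the proximity function
    p(x) = 1/2 sum_i ||x - P_Ci x||^2 + 1/2 ||A x - P_Q(A x)||^2."""
    gx = (x - proj_C1(x)) + (x - proj_C2(x))
    gx = gx + A.T @ (A @ x - proj_Q(A @ x))
    return gx

x = np.array([3.0, -2.0])
gamma = 0.3                                      # step size, below 2/Lipschitz
for k in range(500):
    x = x - gamma * grad(x)                      # gradient step on p
print(x, A @ x)   # x approximately satisfies all three constraints at once
```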

12.
A classical model of Newton iterations which takes into account some error terms is given by the quasi-Newton method, which assumes perturbed Jacobians at each step. Its high convergence orders were characterized by Dennis and Moré [Math. Comp. 28 (1974), 549-560]. The inexact Newton method constitutes another such model, since it assumes that at each step the linear systems are only approximately solved; the high convergence orders of these iterations were characterized by Dembo, Eisenstat and Steihaug [SIAM J. Numer. Anal. 19 (1982), 400-408]. We have recently considered the inexact perturbed Newton method [J. Optim. Theory Appl. 108 (2001), 543-570] which assumes that at each step the linear systems are perturbed and then they are only approximately solved; we have characterized the high convergence orders of these iterates in terms of the perturbations and residuals.

In the present paper we show that these three models are in fact equivalent, in the sense that each one may be used to characterize the high convergence orders of the other two. We also study the relationship in the case of linear convergence and we deduce a new convergence result.
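
A small sketch of the inexact Newton model referred to above: at each step the Newton system is solved only approximately, with the residual kept below a forcing term times ‖F(x_k)‖. The test system, the forcing sequence and the explicit perturbation used to model the inexact inner solve are illustrative assumptions.

```python
import numpy as np

def F(x):
    """Small nonlinear system with a root at (1, 1)."""
    return np.array([x[0] ** 2 + x[1] ** 2 - 2.0,
                     np.exp(x[0] - 1.0) - x[1]])

def J(x):
    """Jacobian of F."""
    return np.array([[2.0 * x[0], 2.0 * x[1]],
                     [np.exp(x[0] - 1.0), -1.0]])

x = np.array([1.5, 0.7])
for k in range(20):
    Fx = F(x)
    nrm = np.linalg.norm(Fx)
    if nrm < 1e-12:
        break
    eta_k = min(0.1, nrm)                       # forcing term, shrinks with ||F||
    # Inexact inner solve: the Newton system J(x) s = -F(x) is satisfied only
    # up to a residual r_k with ||r_k|| <= eta_k * ||F(x_k)|| (modelled here
    # by an explicit perturbation of the right-hand side).
    r = 0.9 * eta_k * nrm * np.array([1.0, 1.0]) / np.sqrt(2.0)
    s = np.linalg.solve(J(x), -Fx + r)
    x = x + s
print(x, np.linalg.norm(F(x)))                  # converges to the root (1, 1)
```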



13.
《Optimization》2012,61(5):553-573
Implicit and explicit viscosity methods for finding common solutions of equilibrium and hierarchical fixed-point problems are presented. These methods are used to solve systems of equilibrium problems and variational inequalities in which the operators involved are complements of nonexpansive mappings. The results here are along the lines of the corresponding results of Moudafi [Krasnoselski-Mann iteration for hierarchical fixed-point problems, Inverse Probl. 23 (2007), pp. 1635–1640; Weak convergence theorems for nonexpansive mappings and equilibrium problems, to appear in JNCA], Moudafi and Maingé [Towards viscosity approximations of hierarchical fixed-points problems, Fixed Point Theory Appl. Art ID 95453 (2006), 10 pp.; Strong convergence of an iterative method for hierarchical fixed point problems, Pac. J. Optim. 3 (2007), pp. 529–538; Coupling viscosity methods with the extragradient algorithm for solving equilibrium problems, to appear in JNCA], Yao and Liou [Weak and strong convergence of Krasnosel'skiĭ–Mann iteration for hierarchical fixed point problems, Inverse Probl. 24 (2008), 015015, 8 pp.], S. Takahashi and W. Takahashi [Viscosity approximation methods for equilibrium problems and fixed point problems in Hilbert spaces, J. Math. Anal. Appl. 331 (2006), pp. 506–515], Xu [Viscosity method for hierarchical fixed point approach to variational inequalities, preprint], Combettes and Hirstoaga [Equilibrium programming in Hilbert spaces, J. Nonlinear Convex Anal. 6 (2005), pp. 117–136] and Plubtieng and Punpaeng [A general iterative method for equilibrium problems and fixed point problems in Hilbert spaces, J. Math. Anal. Appl. 336 (2007), pp. 455–469].
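
The basic viscosity approximation scheme underlying these works, x_{n+1} = α_n f(x_n) + (1 − α_n) T(x_n) with a contraction f, a nonexpansive T and α_n → 0, can be sketched in a few lines. The particular T (a projection, whose fixed-point set is a ball) and the contraction f are illustrative assumptions, not taken from the cited papers.

```python
import numpy as np

# Nonexpansive map T: projection onto the closed ball of radius 1 around (3, 0);
# its fixed-point set is the whole ball.
center = np.array([3.0, 0.0])
def T(x):
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= 1.0 else center + d / n

# Contraction f used for the viscosity selection; f(x) = x/2 pulls towards the
# origin, so the scheme selects the fixed point of T closest to the origin.
f = lambda x: 0.5 * x

x = np.array([-4.0, 5.0])
for n in range(1, 20001):
    alpha = 1.0 / (n + 1)                  # alpha_n -> 0, sum alpha_n = infinity
    x = alpha * f(x) + (1.0 - alpha) * T(x)
print(x)   # converges to (2, 0), the minimum-norm fixed point of T
```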

14.
In this paper we propose a nonmonotone trust region method. Unlike traditional nonmonotone trust region methods, the nonmonotone technique applied in our method is based on the nonmonotone line search technique proposed by Zhang and Hager [A nonmonotone line search technique and its application to unconstrained optimization, SIAM J. Optim. 14(4) (2004) 1043–1056] rather than on the one presented by Grippo et al. [A nonmonotone line search technique for Newton's method, SIAM J. Numer. Anal. 23(4) (1986) 707–716]. Thus the method only requires a special weighted average of the successive function values to be nonincreasing. Global and superlinear convergence of the method are proved under suitable conditions. Preliminary numerical results show that the method is efficient for unconstrained optimization problems.
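
The Zhang–Hager weighted average mentioned above is defined by Q_{k+1} = η Q_k + 1 and C_{k+1} = (η Q_k C_k + f(x_{k+1}))/Q_{k+1}. The sketch below shows these updates inside a plain gradient method with a nonmonotone Armijo test against C_k; the trust-region machinery of the paper is deliberately omitted, and the test function and parameters are illustrative assumptions.

```python
import numpy as np

# Rosenbrock test function and its gradient.
f = lambda x: 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2
grad = lambda x: np.array([-400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
                           200.0 * (x[1] - x[0] ** 2)])

x = np.array([-1.2, 1.0])
eta, delta = 0.85, 1e-4                 # averaging weight, Armijo constant
C, Q = f(x), 1.0                        # Zhang-Hager reference value and weight
for k in range(20000):
    g = grad(x)
    if np.linalg.norm(g) < 1e-6:
        break
    # Backtracking line search against C (not against f(x_k)): the step is
    # accepted as soon as f decreases sufficiently relative to the weighted
    # average of past function values, so individual f-values may increase.
    t = 1.0
    while f(x - t * g) > C - delta * t * (g @ g):
        t *= 0.5
    x = x - t * g
    # Zhang-Hager update of the weighted average.
    Q_new = eta * Q + 1.0
    C = (eta * Q * C + f(x)) / Q_new
    Q = Q_new
print(x, f(x))                          # slowly approaches the minimizer (1, 1)
```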

15.
In this paper, we investigate the strong convergence of an inexact proximal-point algorithm. It is known that the proximal-point algorithm converges weakly to a solution of a maximal monotone operator, but may fail to converge strongly. Solodov and Svaiter (Math. Program. 87:189–202, 2000) introduced a new proximal-type algorithm to generate a strongly convergent sequence and established a convergence result in Hilbert space. Subsequently, Kamimura and Takahashi (SIAM J. Optim. 13:938–945, 2003) extended the Solodov and Svaiter result to the setting of uniformly convex and uniformly smooth Banach space. On the other hand, Rockafellar (SIAM J. Control Optim. 14:877–898, 1976) gave an inexact proximal-point algorithm which is more practical than the exact one. Our purpose is to extend the Kamimura and Takahashi result to a new inexact proximal-type algorithm. Moreover, this result is applied to the problem of finding the minimizer of a convex function on a uniformly convex and uniformly smooth Banach space. L.C. Zeng's research was partially supported by the Teaching and Research Award Fund for Outstanding Young Teachers in Higher Education Institutions of MOE, China and by the Dawn Program Foundation in Shanghai. J.C. Yao's research was partially supported by the National Science Council of the Republic of China.
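
A Hilbert-space (here: R^n) sketch of the underlying inexact proximal point model, x_{k+1} = prox_{c f}(x_k) + e_k with summable error norms, for a convex quadratic whose proximal map is a linear solve. The Banach-space and projection machinery of the entry is not reproduced; problem data and the error sequence are illustrative assumptions.

```python
import numpy as np

# Convex objective f(x) = 1/2 x^T P x - q^T x; its proximal map is a linear solve.
P = np.array([[3.0, 1.0], [1.0, 2.0]])
q = np.array([1.0, 1.0])
x_star = np.linalg.solve(P, q)                 # exact minimizer, for reference

def prox(x, c):
    """prox_{c f}(x) = argmin_z f(z) + ||z - x||^2 / (2c)."""
    return np.linalg.solve(c * P + np.eye(2), c * q + x)

x = np.array([10.0, -10.0])
for k in range(1, 200):
    e_k = np.array([1.0, 1.0]) / k ** 2        # summable error sequence
    x = prox(x, c=1.0) + e_k                   # inexact proximal step
print(x, x_star)                               # still converges to the minimizer
```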

16.
Deconvolution: a wavelet frame approach (total citations: 1; self-citations: 0; citations by others: 1)
This paper is devoted to analyzing deconvolution algorithms based on wavelet frame approaches, which have already appeared in Chan et al. (SIAM J. Sci. Comput. 24(4), 1408–1432, 2003; Appl. Comput. Harmon. Anal. 17, 91–115, 2004a; Int. J. Imaging Syst. Technol. 14, 91–104, 2004b) as wavelet frame based high resolution image reconstruction methods. We first give a complete formulation of deconvolution in terms of multiresolution analysis and its approximation, which completes the formulation given in Chan et al. (SIAM J. Sci. Comput. 24(4), 1408–1432, 2003; Appl. Comput. Harmon. Anal. 17, 91–115, 2004a; Int. J. Imaging Syst. Technol. 14, 91–104, 2004b). This formulation converts deconvolution into a problem of filling in the missing coefficients of wavelet frames that satisfy certain minimization properties. These missing coefficients are recovered iteratively, together with a built-in denoising scheme that removes noise in the data set so that the noise does not blow up during the iteration. This approach has already been proven to be efficient in solving various problems in high-resolution image reconstruction, as shown by the simulation results given in Chan et al. (SIAM J. Sci. Comput. 24(4), 1408–1432, 2003; Appl. Comput. Harmon. Anal. 17, 91–115, 2004a; Int. J. Imaging Syst. Technol. 14, 91–104, 2004b). However, an analysis of convergence, of the stability of the algorithms and of the minimization properties of the solutions was absent in those papers. This paper establishes the theoretical foundation of this wavelet frame approach. In particular, a proof of convergence, an analysis of the stability of the algorithms and a study of the minimization property of the solutions are given.
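
As a much-simplified stand-in for the iterate-and-threshold structure described above, the sketch below runs an ISTA-type soft-thresholding deconvolution in one dimension, with the canonical basis playing the role of the sparsifying system. The signal, kernel, noise level and regularization weight are illustrative assumptions; this is not the wavelet-frame algorithm analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground truth: a sparse 1-D signal; blur: circular convolution with a short kernel.
n = 128
x_true = np.zeros(n)
x_true[[20, 50, 90]] = [1.0, -0.7, 0.5]
h = np.zeros(n); h[:3] = [0.25, 0.5, 0.25]
h_hat = np.fft.fft(h)
blur     = lambda v: np.real(np.fft.ifft(h_hat * np.fft.fft(v)))            # H v
blur_adj = lambda v: np.real(np.fft.ifft(np.conj(h_hat) * np.fft.fft(v)))   # H^T v
y = blur(x_true) + 0.01 * rng.standard_normal(n)        # noisy, blurred data

soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)   # denoising step

lam = 0.01
L = np.max(np.abs(h_hat)) ** 2          # Lipschitz constant of the data-fit gradient
x = np.zeros(n)
for k in range(500):
    g = blur_adj(blur(x) - y)           # gradient of 1/2 ||H x - y||^2
    x = soft(x - g / L, lam / L)        # gradient step, then thresholding
print(np.round(x[[20, 50, 90]], 2))     # the true spike locations carry the dominant coefficients
```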

17.
Numerical Algorithms - We investigate the techniques and ideas used in Shefi and Teboulle (SIAM J Optim 24(1), 269–297, 2014) in the convergence analysis of two proximal ADMM algorithms for...
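
For reference, here is a simple semi-proximal ADMM instance (a proximal term (σ/2)‖x − x_k‖² added to the x-subproblem) applied to the lasso problem. It shows the x-, z- and multiplier-updates that proximal ADMM variants build on, but it is not one of the specific algorithms analyzed by Shefi and Teboulle; problem data and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Lasso:  min_x 1/2 ||A x - b||^2 + lam ||x||_1,
# split as f(x) + g(z) subject to x - z = 0.
m, n = 40, 60
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[:5] = rng.standard_normal(5)
b = A @ x_true + 0.01 * rng.standard_normal(m)

lam, rho, sigma = 0.1, 1.0, 0.5            # l1 weight, ADMM penalty, proximal weight
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
Q = A.T @ A + (rho + sigma) * np.eye(n)    # matrix of the (proximal) x-update
for k in range(300):
    # x-update: argmin f(x) + rho/2 ||x - z + u||^2 + sigma/2 ||x - x_k||^2
    x = np.linalg.solve(Q, A.T @ b + rho * (z - u) + sigma * x)
    # z-update: the proximal map of lam*||.||_1 is soft-thresholding
    z = soft(x + u, lam / rho)
    # scaled dual (multiplier) update
    u = u + x - z
print(np.linalg.norm(x - z))               # primal residual, driven (near) to zero
```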

18.
We provide two types of semilocal convergence theorems for approximating a solution of an equation in a Banach space setting using an inexact Newton method [I.K. Argyros, Relation between forcing sequences and inexact Newton iterates in Banach spaces, Computing 63 (2) (1999) 134–144; I.K. Argyros, A new convergence theorem for the inexact Newton method based on assumptions involving the second Fréchet-derivative, Comput. Appl. Math. 37 (7) (1999) 109–115; I.K. Argyros, Forcing sequences and inexact Newton iterates in Banach space, Appl. Math. Lett. 13 (1) (2000) 77–80; I.K. Argyros, Local convergence of inexact Newton-like iterative methods and applications, Comput. Math. Appl. 39 (2000) 69–75; I.K. Argyros, Computational Theory of Iterative Methods, in: C.K. Chui, L. Wuytack (Eds.), Studies in Computational Mathematics, vol. 15, Elsevier Publ. Co., New York, USA, 2007; X. Guo, On semilocal convergence of inexact Newton methods, J. Comput. Math. 25 (2) (2007) 231–242]. By using more precise majorizing sequences than before [X. Guo, On semilocal convergence of inexact Newton methods, J. Comput. Math. 25 (2) (2007) 231–242; Z.D. Huang, On the convergence of inexact Newton method, J. Zhejiang University, Nat. Sci. Ed. 30 (4) (2003) 393–396; L.V. Kantorovich, G.P. Akilov, Functional Analysis, Pergamon Press, Oxford, 1982; X.H. Wang, Convergence on the iteration of Halley family in weak condition, Chinese Sci. Bull. 42 (7) (1997) 552–555; T.J. Ypma, Local convergence of inexact Newton methods, SIAM J. Numer. Anal. 21 (3) (1984) 583–590], we provide, under the same computational cost and under the same or weaker hypotheses: finer error bounds on the distances involved; and at least as precise information on the location of the solution. Moreover, if the splitting method is used, we show that a smaller number of inner/outer iterations can be obtained.

19.
Zhao Ting, Liu Hongwei, Liu Zexian. 《Numerical Algorithms》2021,87(4):1501-1534

In this paper, two new subspace minimization conjugate gradient methods based on p-regularization models are proposed, in which a special scaled norm in the p-regularization model is analyzed. Different choices of this scaled norm lead to different solutions of the p-regularized subproblem. Based on the analysis of the solutions in a two-dimensional subspace, we derive new directions satisfying the sufficient descent condition. With a modified nonmonotone line search, we establish the global convergence of the proposed methods under mild assumptions; R-linear convergence of the proposed methods is also analyzed. Numerical results show that, for the CUTEr library, the proposed methods are superior to the four conjugate gradient methods proposed by Hager and Zhang (SIAM J. Optim. 16(1):170–192, 2005), Dai and Kou (SIAM J. Optim. 23(1):296–320, 2013), Liu and Liu (J. Optim. Theory Appl. 180(3):879–906, 2019) and Li et al. (Comput. Appl. Math. 38(1), 2019), respectively.


20.
We study the local convergence of a proximal point method in a metric space in the presence of computational errors. We show that the proximal point method generates a good approximate solution if the sequence of computational errors is bounded from above by some constant. The principal assumption is a local error bound condition, introduced by Hager and Zhang (SIAM J Control Optim 46:1683–1704, 2007), which relates the growth of an objective function to the distance to the set of minimizers.
