Similar Articles
20 similar articles found.
1.
We study various error measures for approximate solutions of proximal point regularizations of the variational inequality problem, and of the closely related problem of finding a zero of a maximal monotone operator. A new merit function is proposed for proximal point subproblems associated with the latter. This merit function is based on Burachik-Iusem-Svaiter's concept of ε-enlargement of a maximal monotone operator. For variational inequalities, we establish a precise relationship between the regularized gap function, which is a natural error measure in this context, and our new merit function. Some error bounds are derived using both merit functions for the corresponding formulations of the proximal subproblem. We further use the regularized gap function to devise a new inexact proximal point algorithm for solving monotone variational inequalities. This inexact proximal point method preserves all the desirable global and local convergence properties of the classical exact/inexact method, while providing a constructive error tolerance criterion suitable for practical applications. The use of other tolerance rules is also discussed. Received: April 28, 1999 / Accepted: March 24, 2000 / Published online July 20, 2000
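As a concrete illustration, the exact proximal point iteration for a zero of T is x^{k+1} = (I + cT)^{-1}(x^k). A minimal sketch, assuming T = ∇f for a convex quadratic f, so each resolvent is a single linear solve (the merit functions and inexactness criteria discussed above are not modeled):

```python
import numpy as np

# Proximal point sketch for T = grad f with f(x) = 0.5*x^T A x - b^T x,
# A symmetric positive definite. The resolvent (I + c*T)^{-1} x solves
# (I + c*A) z = x + c*b, so each subproblem here is one linear system.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
c = 1.0                                  # proximal regularization parameter
x = np.zeros(2)
for _ in range(100):
    x = np.linalg.solve(np.eye(2) + c * A, x + c * b)

x_star = np.linalg.solve(A, b)           # the unique zero of T
assert np.allclose(x, x_star, atol=1e-6)
```

Each subproblem is solved exactly here, so any of the error measures above would evaluate to zero after the solve; inexact variants replace the exact solve by an approximate one controlled by such a measure.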

2.
Forcing strong convergence of proximal point iterations in a Hilbert space
This paper concerns the convergence properties of the classical proximal point algorithm for finding zeroes of maximal monotone operators in an infinite-dimensional Hilbert space. It is well known that the proximal point algorithm converges weakly to a solution under very mild assumptions. However, it was shown by Güler [11] that the iterates may fail to converge strongly in the infinite-dimensional case. We propose a new proximal-type algorithm which does converge strongly, provided the problem has a solution. Moreover, our algorithm solves proximal point subproblems inexactly, with a constructive stopping criterion introduced in [31]. Strong convergence is forced by combining proximal point iterations with simple projection steps onto the intersection of two halfspaces containing the solution set. The additional cost of this extra projection step is essentially negligible, since it amounts, at most, to solving a linear system of two equations in two unknowns. Received January 6, 1998 / Revised version received August 9, 1999 / Published online November 30, 1999
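The extra projection step can be sketched directly: projecting a point onto the intersection of two halfspaces requires, at worst, a 2×2 linear system for the two multipliers. The function name and halfspace data below are illustrative, not the paper's notation:

```python
import numpy as np

def project_two_halfspaces(p, a1, b1, a2, b2):
    """Project p onto {x: a1@x <= b1} ∩ {x: a2@x <= b2}.
    Assumes the intersection is nonempty and a1, a2 are linearly
    independent when both constraints are active."""
    def proj_one(p, a, b):
        viol = a @ p - b
        return p - (viol / (a @ a)) * a if viol > 0 else p
    if a1 @ p <= b1 and a2 @ p <= b2:
        return p                          # already feasible
    # try projecting onto each single halfspace first
    for (a, b, ao, bo) in ((a1, b1, a2, b2), (a2, b2, a1, b1)):
        q = proj_one(p, a, b)
        if ao @ q <= bo + 1e-12:
            return q
    # both constraints active: solve the 2x2 Gram system for multipliers
    G = np.array([[a1 @ a1, a1 @ a2], [a1 @ a2, a2 @ a2]])
    rhs = np.array([a1 @ p - b1, a2 @ p - b2])
    lam = np.linalg.solve(G, rhs)
    return p - lam[0] * a1 - lam[1] * a2
```

For example, projecting (2, 2) onto {x ≤ 0} ∩ {y ≤ 0} hits the both-active branch and returns the origin.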

3.
We consider a class of stochastic nonlinear programs for which an approximation to a locally optimal solution is specified in terms of a fractional reduction of the initial cost error. We show that such an approximate solution can be found by approximately solving a sequence of sample average approximations. The key issue in this approach is the determination of the required sequence of sample average approximations, as well as the number of iterations to be carried out on each sample average approximation in this sequence. We show that one can express this requirement as an idealized optimization problem whose cost function is the computing work required to obtain the desired error reduction. The specification of this idealized optimization problem requires exact knowledge of a few problem and algorithm parameters. Since the exact values of these parameters are not known, we use estimates, which can be updated as the computation progresses. We illustrate our approach using two numerical examples from structural engineering design.
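A toy sketch of the sample average approximation idea, assuming the stochastic program min_x E[(x − ξ)²], whose SAA subproblems have the closed-form solution mean(ξ_1, …, ξ_N); the paper's work-minimizing schedule for choosing the sample sizes is not modeled:

```python
import numpy as np

# Sample average approximation (SAA) sketch for min_x E[(x - xi)^2]:
# the SAA problem min_x (1/N) sum_i (x - xi_i)^2 has the closed-form
# minimizer x_N = mean(xi_i), which approximates the true solution E[xi].
rng = np.random.default_rng(0)
true_mean = 3.0

def saa_solution(N):
    xi = rng.normal(true_mean, 1.0, size=N)
    return xi.mean()                     # minimizer of the sample average

coarse = saa_solution(10)                # cheap but inaccurate
fine = saa_solution(100_000)             # larger sample, smaller error
assert abs(fine - true_mean) < 0.05
```

In the paper's scheme, a sequence of such subproblems with growing sample sizes is solved approximately, trading sampling error against computing work.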

4.
This paper analyzes the rate of local convergence of the Log-Sigmoid nonlinear Lagrange method for nonconvex nonlinear second-order cone programming. Under the componentwise strict complementarity condition, the constraint nondegeneracy condition and the second-order sufficient condition, we show that the sequence of iterates generated by the proposed method converges locally to a local solution when the penalty parameter is below a threshold, and that the error bound of the solution is proportional to the penalty parameter. Finally, we report numerical results to show the efficiency of the method.

5.
This paper addresses the convergence of two nonmonotone Levenberg-Marquardt algorithms for the nonlinear complementarity problem. Under some mild assumptions, and requiring only the solution of a linear system at each iteration, the nonmonotone Levenberg-Marquardt algorithms are shown to be globally convergent.
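Each Levenberg-Marquardt iteration indeed requires only one linear solve, (JᵀJ + μI)d = −JᵀF. A sketch of a plain (monotone, for simplicity) variant on a toy smooth system, rather than the NCP reformulation used in the paper:

```python
import numpy as np

# Generic Levenberg-Marquardt sketch for F(x) = 0: each iteration
# solves only the linear system (J^T J + mu*I) d = -J^T F.
# F and J below form a toy smooth example with solution x* = (1, 1).
def F(x):
    return np.array([x[0] ** 2 + x[1] ** 2 - 2.0, x[0] - x[1]])

def J(x):
    return np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])

x, mu = np.array([2.0, 0.5]), 1e-3
for _ in range(50):
    Jx, Fx = J(x), F(x)
    d = np.linalg.solve(Jx.T @ Jx + mu * np.eye(2), -Jx.T @ Fx)
    x = x + d

assert np.allclose(x, [1.0, 1.0], atol=1e-6)
```

The nonmonotone versions in the paper additionally relax the requirement that the merit function decrease at every step.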

6.
In this paper, the problem of identifying the active constraints for constrained nonlinear programming and minimax problems at an isolated local solution is discussed. The correct identification of active constraints can improve the local convergence behavior of algorithms and considerably simplify algorithms for inequality constrained problems, so it is a useful adjunct to nonlinear optimization algorithms. Facchinei et al. [F. Facchinei, A. Fischer, C. Kanzow, On the accurate identification of active constraints, SIAM J. Optim. 9 (1998) 14-32] introduced an effective technique which can identify the active set in a neighborhood of a solution for nonlinear programming. In this paper, we first extend this result to make it more suitable for infeasible algorithms such as the strongly sub-feasible direction method and the penalty function method. Then, we present an identification technique of active constraints for constrained minimax problems without strict complementarity and linear independence. Some numerical results illustrating the identification technique are reported.
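A heavily simplified sketch of the identification idea: estimate the active set as the constraints whose values lie within an identification radius ρ(x) that goes to zero more slowly than the distance to the solution. Here ρ is the square root of a projected-gradient KKT residual; the identification function actually used by Facchinei et al. differs in detail:

```python
import numpy as np

# Simplified active-set identification sketch in the spirit of
# Facchinei-Fischer-Kanzow, for
#   min (x1-2)^2 + (x2-1)^2  s.t.  x <= u,  u = (1, 2),
# with constraints g(x) = x - u. Estimate the active set as
# A(x) = {i : g_i(x) >= -rho(x)}, rho(x) = sqrt(KKT residual).
u = np.array([1.0, 2.0])
grad = lambda x: np.array([2 * (x[0] - 2.0), 2 * (x[1] - 1.0)])

def active_set(x):
    # projected-gradient residual: zero exactly at a KKT point
    r = np.linalg.norm(x - np.minimum(x - grad(x), u))
    rho = np.sqrt(r)
    return set(np.flatnonzero(x - u >= -rho))

# Near the solution x* = (1, 1), only constraint 0 (x1 <= 1) is active:
assert active_set(np.array([1.01, 0.98])) == {0}
assert active_set(np.array([1.0, 1.0])) == {0}
```

The square root is the key device: near the solution, ρ(x) dominates the O(distance) size of the nearly-active constraint values, so the estimate is exact in a whole neighborhood.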

7.
We develop and analyze a new affine scaling Levenberg–Marquardt method with a nonmonotone interior backtracking line search technique for solving bound-constrained semismooth equations under local error bound conditions. The affine scaling Levenberg–Marquardt equation is based on minimizing the squared Euclidean norm of a linear model augmented by a quadratic affine scaling term, so that the computed solution respects the bound constraints on the variables. The global convergence results are developed in a very general setting in which trial directions are computed by a semismooth Levenberg–Marquardt method and a backtracking line search technique projects trial steps onto the feasible interior set. Close to the solution set, the affine scaling interior Levenberg–Marquardt algorithm is shown to converge locally Q-superlinearly, depending on the quality of the semismooth and Levenberg–Marquardt parameters, under an error bound assumption that is much weaker than the standard nonsingularity condition, namely the BD-regularity condition in the nonsmooth case. The nonmonotone criterion can speed up convergence when the contours of the objective function exhibit large curvature.

8.
Based on the notion of the ε-subgradient, we present a unified technique to establish convergence properties of several methods for nonsmooth convex minimization problems. Starting from the technical results, we obtain the global convergence of: (i) the variable metric proximal methods presented by Bonnans, Gilbert, Lemaréchal, and Sagastizábal, (ii) some algorithms proposed by Correa and Lemaréchal, and (iii) the proximal point algorithm given by Rockafellar. In particular, we prove that the Rockafellar–Todd phenomenon does not occur for each of the above mentioned methods. Moreover, we explore the convergence rate of {||x_k||} and {f(x_k)} when {x_k} is unbounded and {f(x_k)} is bounded for the nonsmooth minimization methods (i), (ii), and (iii). Accepted 15 October 1996

9.
This paper develops a new error criterion for the approximate minimization of augmented Lagrangian subproblems. This criterion is practical since it is readily testable given only a gradient (or subgradient) of the augmented Lagrangian. It is also “relative” in the sense of relative error criteria for proximal point algorithms: in particular, it uses a single relative tolerance parameter, rather than a summable parameter sequence. Our analysis first describes an abstract version of the criterion within Rockafellar’s general parametric convex duality framework, and proves a global convergence result for the resulting algorithm. Specializing this algorithm to a standard formulation of convex programming produces a version of the classical augmented Lagrangian method with a novel inexact solution condition for the subproblems. Finally, we present computational results drawn from the CUTE test set—including many nonconvex problems—indicating that the approach works well in practice.
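A sketch of the classical augmented Lagrangian method with inexact subproblem minimization, using a plain gradient-norm stopping test as a crude absolute stand-in for the paper's relative criterion (the problem, step size, and tolerances below are illustrative):

```python
import numpy as np

# Augmented Lagrangian sketch for min ||x||^2 s.t. x1 + x2 = 1.
# Inner minimization is inexact gradient descent, stopped when the
# gradient of the augmented Lagrangian is small. Solution: x* = (0.5, 0.5).
h = lambda x: x[0] + x[1] - 1.0          # equality constraint
lam, c = 0.0, 10.0                       # multiplier and penalty parameter
x = np.zeros(2)
for _ in range(30):                      # outer (multiplier) iterations
    for _ in range(1000):                # inexact inner minimization
        grad = 2 * x + (lam + c * h(x)) * np.ones(2)
        if np.linalg.norm(grad) <= 1e-6:
            break
        x = x - 0.04 * grad
    lam = lam + c * h(x)                 # classical multiplier update

assert np.allclose(x, [0.5, 0.5], atol=1e-5)
```

The paper's criterion replaces the fixed inner tolerance with a test that is relative to the progress of the outer iteration, so only one tolerance parameter is needed.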

10.
Optimization, 2012, 61(1):3-17
Two inexact versions of a Bregman-function-based proximal method for finding a zero of a maximal monotone operator, suggested in [J. Eckstein (1998). Approximate iterations in Bregman-function-based proximal algorithms. Math. Programming, 83, 113–123; P. da Silva, J. Eckstein and C. Humes (2001). Rescaling and stepsize selection in proximal methods using separable generalized distances. SIAM J. Optim., 12, 238–261], are considered. For a wide class of Bregman functions, including the standard entropy kernel and all strongly convex Bregman functions, convergence of these methods is proved under an essentially weaker accuracy condition on the iterates than in the original papers.

Also the error criterion of a logarithmic–quadratic proximal method, developed in [A. Auslender, M. Teboulle and S. Ben-Tiba (1999). A logarithmic-quadratic proximal method for variational inequalities. Computational Optimization and Applications, 12, 31–40], is relaxed, and convergence results for the inexact version of the proximal method with entropy-like distance functions are described.

For the methods mentioned, as in [R.T. Rockafellar (1976). Monotone operators and the proximal point algorithm. SIAM J. Control Optim., 14, 877–898] for the classical proximal point algorithm, only summability of the sequence of error vector norms is required.

11.
This paper deals with a general nonlinear complementarity problem, where the underlying functions are assumed to be continuous. Based on a nonlinear complementarity function, the problem is transformed into a system of nonsmooth equations. Then, two kinds of approximate Newton methods for the nonsmooth equations are developed and their convergence is proved. Finally, numerical tests are reported.

12.
For solving unconstrained minimization problems, quasi-Newton methods are popular iterative methods. The secant condition, which employs only gradient information, is imposed on these methods. Several researchers have paid attention to other secant conditions to get a better approximation of the Hessian matrix of the objective function. Recently, Zhang et al. [New quasi-Newton equation and related methods for unconstrained optimization, J. Optim. Theory Appl. 102 (1999) 147–167] and Zhang and Xu [Properties and numerical performance of quasi-Newton methods with modified quasi-Newton equations, J. Comput. Appl. Math. 137 (2001) 269–278] proposed a modified secant condition which uses both gradient and function value information in order to get higher order accuracy in approximating the second curvature of the objective function. They showed the local and q-superlinear convergence of the BFGS-like and DFP-like updates based on their proposed secant condition. In this paper, we incorporate one parameter into this secant condition to switch smoothly between the standard secant condition and the secant condition of Zhang et al. We consider a modified Broyden family which includes the BFGS-like and the DFP-like updates proposed by Zhang et al., and we prove the local and q-superlinear convergence of our method.
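A sketch of a BFGS-like update built on such a parameterized secant condition: a parameter t interpolates between the standard condition (t = 0) and a Zhang-et-al.-style modified condition (t = 1) that also uses function values. The exact form of θ below should be treated as an assumption; note that θ vanishes for quadratic objectives, so a quartic term is used to keep it nonzero:

```python
import numpy as np

# BFGS-like update enforcing the modified secant condition
# B_new @ s = y_t, with y_t = y + t*(theta/(s@s))*s. For t=0 this is
# the standard secant condition B_new @ s = y.
def modified_bfgs_update(B, s, y, f_old, f_new, g_old, g_new, t=1.0):
    theta = 6.0 * (f_old - f_new) + 3.0 * (g_old + g_new) @ s
    y_t = y + t * (theta / (s @ s)) * s
    Bs = B @ s
    return (B - np.outer(Bs, Bs) / (s @ Bs)
              + np.outer(y_t, y_t) / (y_t @ s))

# Verify the secant condition on a toy nonquadratic function:
f = lambda x: x[0] ** 4 + x[1] ** 2
g = lambda x: np.array([4 * x[0] ** 3, 2 * x[1]])
x0, x1 = np.array([1.0, 1.0]), np.array([0.2, 0.4])
s, y = x1 - x0, g(x1) - g(x0)
B1 = modified_bfgs_update(np.eye(2), s, y, f(x0), f(x1), g(x0), g(x1))
theta = 6.0 * (f(x0) - f(x1)) + 3.0 * (g(x0) + g(x1)) @ s
assert np.allclose(B1 @ s, y + (theta / (s @ s)) * s)
```

Since the rank-two correction replaces B s by y_t exactly, the condition B_new s = y_t holds by construction for any t.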

13.
We present a simple and unified technique to establish the convergence of various minimization methods. These include the (conceptual) proximal point method, as well as implementable forms such as bundle algorithms, including the classical subgradient relaxation algorithm with divergent series. An important research work of Phil Wolfe's concerned convex minimization. This paper is dedicated to him, on the occasion of his 65th birthday, in appreciation of his creative and pioneering work.
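The classical subgradient relaxation algorithm with divergent series can be sketched in a few lines: steps α_k with Σα_k = ∞ and α_k → 0, applied here to f(x) = ||x||_1, whose minimizer is the origin:

```python
import numpy as np

# Subgradient method with divergent-series steps alpha_k = 1/k on
# f(x) = ||x||_1; np.sign(x) is a valid subgradient of the l1-norm.
x = np.array([3.0, -2.0])
best = np.abs(x).sum()                   # best objective value seen
for k in range(1, 20001):
    g = np.sign(x)                       # subgradient at x
    x = x - (1.0 / k) * g
    best = min(best, np.abs(x).sum())

assert best < 1e-2
```

Tracking the best value seen is the standard device for subgradient methods, since the objective need not decrease monotonically along the iterates.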

14.
Summary. In this paper we consider two aspects of the problem of designing efficient numerical methods for the approximation of semilinear boundary value problems. First we consider the use of two and multilevel algorithms for approximating the discrete solution. Secondly we consider adaptive mesh refinement based on feedback information from coarse level approximations. The algorithms are based on an a posteriori error estimate, where the error is estimated in terms of computable quantities only. The a posteriori error estimate is used for choosing appropriate spaces in the multilevel algorithms, mesh refinements, as a stopping criterion and finally it gives an estimate of the total error. Received April 8, 1997 / Revised version received July 27, 1998 / Published online September 24, 1999

15.
In this paper we consider the power utility maximization problem under partial information in a continuous semimartingale setting. Investors construct their strategies using the available information, which possibly may not even include the observation of the asset prices. Resorting to stochastic filtering, the problem is transformed into an equivalent one, which is formulated in terms of observable processes. The value process, related to the equivalent optimization problem, is then characterized as the unique bounded solution of a semimartingale backward stochastic differential equation (BSDE). This yields a unified characterization for the value process related to the power and exponential utility maximization problems, the latter arising as a particular case. The convergence of the corresponding optimal strategies is obtained by means of BSDEs. Finally, we study some particular cases where the value process admits an explicit expression.

16.
We analyze an algorithm for the problem min f(x) s.t. x ≥ 0 suggested, without convergence proof, by Eggermont. The iterative step is given by x_j^{k+1} = x_j^k (1 − α_k ∇f(x^k)_j), with α_k > 0 determined through a line search. This method can be seen as a natural extension of the steepest descent method for unconstrained optimization, and we establish convergence properties similar to those known for steepest descent, namely weak convergence to a KKT point for a general f, weak convergence to a solution for a convex f, and full convergence to the solution for a strictly convex f. Applying this method to a maximum likelihood estimation problem, we obtain an additively overrelaxed version of the EM Algorithm. We extend the full convergence results known for EM to this overrelaxed version by establishing local Fejér monotonicity to the solution set. Research for this paper was partially supported by CNPq grant No 301280/86.
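A sketch of this iteration with a fixed small step in place of the line search (the step size and test problem are illustrative). Note how the multiplicative form keeps the iterates nonnegative and drives coordinates with positive reduced gradient to the boundary:

```python
import numpy as np

# Multiplicative iteration x_j <- x_j * (1 - a * grad f(x)_j) for
# min f(x) s.t. x >= 0, with a fixed step a = 0.1 replacing the line
# search. f(x) = 0.5*||x - c||^2 with c = (2, -1): the KKT point is
# x* = (2, 0) -- the second coordinate ends up on the boundary.
c = np.array([2.0, -1.0])
grad = lambda x: x - c
x = np.array([1.0, 1.0])                 # strictly positive start
for _ in range(2000):
    x = x * (1.0 - 0.1 * grad(x))

assert np.allclose(x, [2.0, 0.0], atol=1e-4)
```

At the limit, coordinates with zero gradient are interior and coordinates with positive gradient are zero, which is exactly the KKT system for the nonnegativity constraint.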

17.
Recently, numerical solutions of stochastic differential equations have received a great deal of attention, and numerical approximation schemes are invaluable tools for exploring their properties. In this paper, we introduce a class of stochastic age-dependent (vintage) capital systems with Poisson jumps. We also give the discrete approximate solution with an implicit Euler scheme in time discretization. Using Gronwall's lemma and the Burkholder-Davis-Gundy inequality, some criteria are obtained for the exponential stability of numerical solutions to the stochastic age-dependent capital system with Poisson jumps. It is proved that the numerical approximate solutions converge to the analytic solutions of the equations under the given conditions, and information on the order of approximation is provided. These error bounds imply strong convergence as the timestep tends to zero. A numerical example is used to illustrate the theoretical results.
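An explicit Euler-Maruyama sketch with Poisson jumps for a scalar jump-diffusion, as a toy stand-in for the paper's age-dependent system (which is treated with an implicit scheme): for dX = μX dt + σX dW + γX dN with jump intensity λ, the mean satisfies E[X_T] = X₀ e^{(μ+λγ)T}, which the simulation can be checked against:

```python
import numpy as np

# Euler-Maruyama with Poisson jumps for dX = mu*X dt + sigma*X dW
# + gamma*X dN. Monte Carlo mean is compared with the analytic mean
# E[X_T] = X0 * exp((mu + lam*gamma) * T).
rng = np.random.default_rng(1)
mu, sigma, gamma, lam = 0.05, 0.2, -0.1, 1.0
T, steps, paths = 1.0, 100, 20_000
dt = T / steps
X = np.ones(paths)
for _ in range(steps):
    dW = rng.normal(0.0, np.sqrt(dt), paths)      # Brownian increments
    dN = rng.poisson(lam * dt, paths)             # jump counts per step
    X = X + mu * X * dt + sigma * X * dW + gamma * X * dN

assert abs(X.mean() - np.exp((mu + lam * gamma) * T)) < 0.05
```

Halving dt (and comparing against a path driven by the same noise) is the standard empirical way to observe the strong convergence order mentioned in the abstract.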

18.
A splitting method for two monotone operators A and B is an algorithm that attempts to converge to a zero of the sum A + B by solving a sequence of subproblems, each of which involves only the operator A, or only the operator B. Prior algorithms of this type can all in essence be categorized into three main classes, the Douglas/Peaceman-Rachford class, the forward-backward class, and the little-used double-backward class. Through a certain “extended” solution set in a product space, we construct a fundamentally new class of splitting methods for pairs of general maximal monotone operators in Hilbert space. Our algorithms are essentially standard projection methods, using splitting decomposition to construct separators. We prove convergence through Fejér monotonicity techniques, but showing Fejér convergence of a different sequence to a different set than in earlier splitting methods. Our projective algorithms converge under more general conditions than prior splitting methods, allowing the proximal parameter to vary from iteration to iteration, and even from operator to operator, while retaining convergence for essentially arbitrary pairs of operators. The new projective splitting class also contains noteworthy preexisting methods either as conventional special cases or excluded boundary cases. Dedicated to Clovis Gonzaga on the occasion of his 60th birthday.
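The Douglas-Rachford class named above can be sketched on a one-dimensional problem: each half-step touches only one of the two operators, through its proximal map. Here A = ∂|·| and B = x ↦ x − b (the gradient of 0.5(x − b)²); the parameter γ and problem data are illustrative:

```python
import numpy as np

# Douglas-Rachford splitting for min |x| + 0.5*(x - b)^2, i.e. a zero
# of A + B with A = subdifferential of |x| and B(x) = x - b. Each
# half-step uses only one operator via its proximal map; the minimizer
# is the soft-threshold value x* = b - 1 = 2 here.
gam, b, z = 1.0, 3.0, 0.0
prox_f = lambda v: np.sign(v) * max(abs(v) - gam, 0.0)   # prox of gam*|x|
prox_g = lambda v: (v + gam * b) / (1.0 + gam)           # prox of gam*g
for _ in range(200):
    x = prox_f(z)                        # subproblem involving only A
    y = prox_g(2 * x - z)                # subproblem involving only B
    z = z + y - x                        # governing-sequence update

assert abs(prox_f(z) - 2.0) < 1e-6
```

The governing sequence z, rather than x itself, is the Fejér-monotone object in the classical analysis, which is the pattern the abstract contrasts with its new projective class.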

19.
This paper concerns developing a numerical method of the Newton type to solve systems of nonlinear equations described by nonsmooth continuous functions. We propose and justify a new generalized Newton algorithm based on graphical derivatives, which have never been used to derive a Newton-type method for solving nonsmooth equations. Based on advanced techniques of variational analysis and generalized differentiation, we establish the well-posedness of the algorithm, its local superlinear convergence, and its global convergence of the Kantorovich type. Our convergence results hold with no semismoothness and Lipschitzian assumptions, which is illustrated by examples. The algorithm and main results obtained in the paper are compared with well-recognized semismooth and B-differentiable versions of Newton’s method for nonsmooth Lipschitzian equations.

20.
In this paper, a simple feasible SQP method for nonlinear inequality constrained optimization is presented. At each iteration, only one QP subproblem needs to be solved. After solving a system of linear equations, a new feasible descent direction is designed. The Maratos effect is avoided by using a high-order correction direction. Under some suitable conditions, global and superlinear convergence can be established. Finally, numerical experiments show that the method is effective.
