Similar Documents
20 similar documents found (search time: 15 ms)
1.
This paper considers local convergence and rate of convergence results for algorithms for minimizing the composite function F(x) = f(x) + h(c(x)), where f and c are smooth but h(c) may be nonsmooth. Local convergence at a second order rate is established for the generalized Gauss–Newton method when h is convex and globally Lipschitz and the minimizer is strongly unique. Local convergence at a second order rate is established for a generalized Newton method when the minimizer satisfies nondegeneracy, strict complementarity and second order sufficiency conditions. Assuming the minimizer satisfies these conditions, necessary and sufficient conditions for a superlinear rate of convergence for curvature approximating methods are established. Necessary and sufficient conditions for a two-step superlinear rate of convergence are also established when only reduced curvature information is available. All these local convergence and rate of convergence results are directly applicable to nonlinear programming problems. This work was done while the author was a Research Fellow at the Mathematical Sciences Research Centre, Australian National University.
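For orientation, one standard form of the generalized Gauss–Newton subproblem for this composite problem is sketched below; the model matrix B_k is an assumption here, and the paper's precise variant may differ.

```latex
% A common form of the generalized Gauss--Newton subproblem at iterate x_k
% (a sketch; B_k and the exact variant used in the paper may differ):
\min_{d}\; f(x_k) + \nabla f(x_k)^{T} d + \tfrac{1}{2}\, d^{T} B_k\, d
          + h\bigl(c(x_k) + \nabla c(x_k)^{T} d\bigr)
```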

2.
We consider the convergence rate of the proximal point algorithm (PPA) for finding a minimizer of proper lower semicontinuous convex functions. In the Hilbert space setting, Güler showed that the big-O rate of the PPA can be improved to little-o when the sequence generated by the algorithm converges strongly to a minimizer. In this paper, we establish the little-o rate of the PPA in Banach spaces without requiring this assumption. We then apply the result to obtain new convergence rate results for sequences of alternating and averaged projections.
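As a point of reference, a minimal sketch of the basic PPA iteration on a one-dimensional nonsmooth convex function follows; the objective, step parameter, and iteration count are illustrative and not taken from the paper.

```python
# A minimal sketch of the proximal point algorithm (PPA): the iteration
# x_{k+1} = argmin_y f(y) + (y - x_k)^2 / (2*lam) for a convex f.
# The objective and parameters below are illustrative, not from the paper.
from scipy.optimize import minimize_scalar

def prox(f, x, lam):
    """Proximal map of f with parameter lam."""
    return minimize_scalar(lambda y: f(y) + (y - x) ** 2 / (2 * lam)).x

f = lambda x: abs(x - 1.0)   # proper lsc convex, nonsmooth at its minimizer
x, lam = 5.0, 0.5
for k in range(50):
    x = prox(f, x, lam)
print(x)                      # converges to the minimizer x* = 1
```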

3.
An asynchronous parallel Newton method
A parallel Newton method is described for the minimization of a twice continuously differentiable uniformly convex function F(x). The algorithm generates a sequence {x_j} which converges superlinearly to the global minimizer of F(x).

4.
It is shown that algorithms for minimizing an unconstrained function F(x), x ∈ E^n, which are solely methods of conjugate directions can be expected to exhibit only an n-step or (n–1)-step superlinear rate of convergence to an isolated local minimizer. This is contrasted with quasi-Newton methods, which can be expected to exhibit every-step superlinear convergence. Similar statements about a quadratic rate of convergence hold when a Lipschitz condition is placed on the second derivatives of F(x). Research was supported in part by the Army Research Office, Contract Number DAHC 19-69-C-0017, and the Office of Naval Research, Contract Number N00014-71-C-0116 (NR 047-99).

5.
A new globalization procedure for solving a nonlinear system of equations F(x) = 0 is proposed, based on the idea of combining the Newton step and the steepest descent step within each iteration. Starting with an arbitrary initial point, the procedure converges either to a solution of the system or to a local minimizer of f(x) = (1/2)F(x)^T F(x). Each iteration is chosen to be as close to a Newton step as possible and could be the Newton step itself. Asymptotically the Newton step will be taken in each iteration and thus the convergence is quadratic. Numerical experiments yield positive results. Further generalizations of this procedure are also discussed in this paper.
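The flavor of such a globalization can be sketched as follows; this is a generic Newton/steepest-descent safeguard on the merit function f, not the paper's exact blending rule, and the test system is illustrative.

```python
# A generic sketch of globalizing Newton's method for F(x) = 0 with a
# steepest-descent fallback on the merit function f(x) = 0.5*||F(x)||^2.
# This is not the paper's exact procedure; the test system is illustrative.
import numpy as np

def F(x):
    return np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])

def J(x):
    return np.array([[2*x[0], 1.0], [1.0, 2*x[1]]])

def merit(x):
    return 0.5 * F(x) @ F(x)

x = np.array([2.0, 1.0])
for k in range(100):
    Fx = F(x)
    if Fx @ Fx < 1e-20:
        break
    g = J(x).T @ Fx                      # gradient of the merit function
    try:
        d = np.linalg.solve(J(x), -Fx)   # prefer the Newton step
    except np.linalg.LinAlgError:
        d = -g
    if g @ d >= 0:                       # not a descent direction for f
        d = -g                           # fall back to steepest descent
    t = 1.0                              # backtracking keeps f decreasing
    while merit(x + t*d) > merit(x) + 1e-4 * t * (g @ d) and t > 1e-12:
        t *= 0.5
    x = x + t*d
print(x)                                  # approaches the root (1, 2)
```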

6.
A method is described for minimizing a continuously differentiable function F(x) of n variables subject to linear inequality constraints. It can be applied under the same general assumptions as any method of feasible directions. If F(x) is twice continuously differentiable and the Hessian matrix of F(x) has certain properties, then the algorithm generates a sequence of points which converges superlinearly to the unique minimizer of F(x). No computation of second-order derivatives is required. Sponsored by the United States Army under Contract No. DA-31-124-ARO-D-462 and by the National Science Foundation under Research Grant GP.33033.

7.
In the Newton/log-barrier method, Newton steps are taken for the log-barrier function for a fixed value of the barrier parameter until a certain convergence criterion is satisfied. The barrier parameter is then decreased and the Newton process is repeated. A naive analysis indicates that Newton's method does not exhibit superlinear convergence to the minimizer of each instance of the log-barrier function until it reaches a very small neighborhood, namely within O(μ²) of the minimizer, where μ is the barrier parameter. By analyzing the structure of the barrier Hessian and gradient in terms of the subspace of active constraint gradients and the associated null space, we show that this neighborhood is in fact much larger – O(μ^σ) for any σ ∈ (1,2] – thus explaining why reasonably fast local convergence can be attained in practice. Moreover, we show that the overall convergence rate of the Newton/log-barrier algorithm is superlinear in the number of function/derivative evaluations, provided that the nonlinear program is formulated with a linear objective and that the schedule for decreasing the barrier parameter is related in a certain way to the step length and convergence criteria for each Newton process. Received: October 10, 1997 / Accepted: September 10, 2000 / Published online February 22, 2001
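A minimal sketch of the outer/inner structure (barrier parameter schedule outside, damped Newton inside) is shown below for a small linear program; the problem data, the μ schedule, and the stopping rules are illustrative, not the paper's.

```python
# A minimal sketch of a Newton/log-barrier loop for min c^T x s.t. A x <= b,
# with a linear objective as in the paper's setting. Problem data, the mu
# schedule, and the stopping rules are illustrative.
import numpy as np

c = np.array([1.0, 2.0])
A = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]])
b = np.array([0.0, 0.0, 4.0])                  # x >= 0 and x1 + x2 <= 4

def phi(x, mu):                                # log-barrier function
    s = b - A @ x
    return np.inf if np.any(s <= 0) else c @ x - mu * np.sum(np.log(s))

x, mu = np.array([1.0, 1.0]), 1.0
while mu > 1e-8:
    for _ in range(50):                        # inner Newton iterations
        s = b - A @ x
        g = c + mu * A.T @ (1.0 / s)           # barrier gradient
        H = mu * A.T @ np.diag(1.0 / s**2) @ A # barrier Hessian
        d = np.linalg.solve(H, -g)
        t = 1.0                                # damping keeps phi decreasing
        while phi(x + t*d, mu) > phi(x, mu) and t > 1e-12:
            t *= 0.5
        x = x + t*d
        if np.linalg.norm(g) < mu:             # inner convergence criterion
            break
    mu *= 0.1                                  # decrease the barrier parameter
print(x)                                        # approaches the solution (0, 0)
```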

8.
Let H be a real Hilbert space and let T: H → 2^H be a maximal monotone operator. In this paper, we first introduce two algorithms for approximating solutions of maximal monotone operators. One of them generates a strongly convergent sequence with limit v ∈ T^{-1}0. The other concerns the weak convergence of the proximal point algorithm. Next, using these results, we consider the problem of finding a minimizer of a convex function. Our methods are motivated by Halpern's iteration and Mann's iteration.
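To illustrate the Halpern-type scheme mentioned here, a minimal sketch follows; the operator (the subdifferential of f(x) = |x|, whose resolvent is soft-thresholding), the anchor u, and the weights a_n are all illustrative choices, not the paper's.

```python
# A minimal sketch of a Halpern-type iteration x_{n+1} = a_n*u + (1-a_n)*J(x_n),
# where J is the resolvent of T = subdifferential of f(x) = |x|, i.e. the
# soft-thresholding operator. Operator, anchor, and weights are illustrative.
import numpy as np

def resolvent(x, lam=1.0):
    """J = (I + lam*T)^{-1} for T = subdifferential of |.|: soft-thresholding."""
    return np.sign(x) * max(abs(x) - lam, 0.0)

u, x = 3.0, 10.0                  # anchor point u and starting point x_0
for n in range(2000):
    a = 1.0 / (n + 2)             # a_n -> 0 with sum a_n = infinity
    x = a * u + (1 - a) * resolvent(x)
print(x)                           # converges strongly to 0, the zero of T
```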

9.
Problems in signal detection and image recovery can sometimes be formulated as a convex feasibility problem (CFP) of finding a vector in the intersection of a finite family of closed convex sets. Algorithms for this purpose typically employ orthogonal or generalized projections onto the individual convex sets. The simultaneous multiprojection algorithm of Censor and Elfving for solving the CFP, in which different generalized projections may be used at the same time, has been shown to converge for the case of nonempty intersection; still open is the question of its convergence when the intersection of the closed convex sets is empty. Motivated by the geometric alternating minimization approach of Csiszár and Tusnády and the product space formulation of Pierra, we derive a new simultaneous multiprojection algorithm that employs generalized projections of Bregman to solve the convex feasibility problem or, in the inconsistent case, to minimize a proximity function that measures the average distance from a point to all convex sets. We assume that the Bregman distances involved are jointly convex, so that the proximity function itself is convex. When the intersection of the convex sets is empty, but the closure of the proximity function has a unique global minimizer, the sequence of iterates converges to this unique minimizer. Special cases of this algorithm include the Expectation Maximization Maximum Likelihood (EMML) method in emission tomography and a new convergence result for an algorithm that solves the split feasibility problem.
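For intuition, a minimal sketch of a simultaneous projection step with ordinary orthogonal projections is given below; the paper's algorithm uses generalized Bregman projections, and the halfspaces here are illustrative.

```python
# A minimal sketch of a simultaneous (averaged) projection method for the
# convex feasibility problem with orthogonal projections; the paper's method
# uses more general Bregman projections. The sets below are illustrative.
import numpy as np

def project_halfspace(x, a, beta):
    """Orthogonal projection onto {y : a^T y <= beta}."""
    viol = a @ x - beta
    return x - (viol / (a @ a)) * a if viol > 0 else x

constraints = [(np.array([1.0, 0.0]), 1.0),    # x1 <= 1
               (np.array([0.0, 1.0]), 1.0),    # x2 <= 1
               (np.array([-1.0, -1.0]), 0.0)]  # x1 + x2 >= 0

x = np.array([5.0, -3.0])
for k in range(200):
    # project onto all sets simultaneously and average the results
    x = np.mean([project_halfspace(x, a, beta) for a, beta in constraints],
                axis=0)
print(x)   # a point of the intersection; when the intersection is empty,
           # the iterates instead minimize the average squared distance
```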

10.
This paper presents a trust region algorithm for nonlinear optimization with linear inequality constraints. The global convergence of the algorithm is proved. Local quadratic convergence is obtained for a strong local minimizer.

11.
This paper describes and analyzes an algorithm which computes an interval of length t in which a minimizer (or a maximizer) of a periodic bimodal function h is located, using a minimal number of evaluations of the function h. A dynamic programming approach is used to demonstrate the optimality of the algorithm.

12.
Recently an affine-scaling, interior-point algorithm ASL was developed for box-constrained optimization problems with a single linear constraint (Gonzalez-Lima et al., SIAM J. Optim. 21:361–390, 2011). This note extends the algorithm to handle more general polyhedral constraints. With a line search, the resulting algorithm ASP maintains the global and R-linear convergence properties of ASL. In addition, it is shown that the unit-step version of the algorithm (without line search) is locally R-linearly convergent at a nondegenerate local minimizer where the second-order sufficient optimality conditions hold. For a quadratic objective function, a sublinear convergence property is obtained without assuming either nondegeneracy or the second-order sufficient optimality conditions.

13.
This paper presents a family of projected descent direction algorithms with inexact line search for solving large-scale minimization problems subject to simple bounds on the decision variables. The global convergence of algorithms in this family is ensured by conditions on the descent directions and line search. Whenever a sequence constructed by an algorithm in this family enters a sufficiently small neighborhood of a local minimizer satisfying standard second-order sufficiency conditions, it gets trapped and converges to this local minimizer. Furthermore, in this case, the active constraint set at the local minimizer is identified in a finite number of iterations. This fact is used to ensure that the rate of convergence to a local minimizer, satisfying standard second-order sufficiency conditions, depends only on the behavior of the algorithm in the unconstrained subspace. As a particular example, we present projected versions of the modified Polak–Ribière conjugate gradient method and the limited-memory BFGS quasi-Newton method that retain the convergence properties associated with those algorithms applied to unconstrained problems.
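The core projected-descent step can be sketched as follows; this uses a plain projected gradient direction with Armijo-style backtracking, whereas the paper's family also covers conjugate gradient and L-BFGS directions. The objective and bounds are illustrative.

```python
# A minimal sketch of projected descent with inexact line search for
# min f(x) s.t. l <= x <= u. The objective and bounds are illustrative.
import numpy as np

f      = lambda x: (x[0] - 3.0)**2 + (x[1] + 2.0)**2
grad_f = lambda x: np.array([2*(x[0] - 3.0), 2*(x[1] + 2.0)])
l, u   = np.array([0.0, 0.0]), np.array([2.0, 2.0])

def project(x):
    """Projection onto the box [l, u] -- a componentwise clip."""
    return np.clip(x, l, u)

x = np.array([1.0, 1.0])
for k in range(100):
    g = grad_f(x)
    t = 1.0                            # inexact (Armijo-style) backtracking
    while f(project(x - t*g)) > f(x) - 1e-4 * t * (g @ g) and t > 1e-12:
        t *= 0.5
    x_new = project(x - t*g)
    if np.linalg.norm(x_new - x) < 1e-10:
        break
    x = x_new
print(x)   # the constrained minimizer (2, 0); the bounds x1 <= 2 and
           # x2 >= 0 are active and identified after finitely many steps
```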

14.
In this paper, we modify the proximal point algorithm in CAT(0) spaces to find a common element of the set of fixed points of nonlinear multivalued mappings and the set of minimizers of a convex function, and we prove Δ-convergence of the proposed algorithm. A numerical example is presented to illustrate the convergence result. Our results improve and extend the corresponding results in the literature.

15.
The symmetric tensor decomposition problem is a fundamental problem in many fields, which makes it appealing for investigation. In general, a greedy algorithm is used for tensor decomposition: first find the largest singular value and singular vector, subtract the corresponding component from the tensor, then repeat the process. In this article, we focus on designing one effective algorithm and giving its convergence analysis. We introduce an exceedingly simple and fast algorithm for the rank-one approximation step of symmetric tensor decomposition. Through variable splitting, we solve the symmetric tensor decomposition problem by minimizing a multiconvex optimization problem, using an alternating gradient descent algorithm. Although we focus on symmetric tensors in this article, the method can be extended to nonsymmetric tensors in some cases. Additionally, we give some theoretical analysis of our alternating gradient descent algorithm: we prove that it converges linearly to a global minimizer. We also provide numerical results to show the effectiveness of the algorithm.
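A minimal sketch of alternating block updates for a rank-one fit after variable splitting is shown below; each gradient step uses the exact step length for its quadratic subproblem, which reduces to a block least-squares update. The test tensor and iteration count are illustrative, not from the paper.

```python
# A minimal sketch of rank-one approximation of a symmetric 3rd-order tensor
# after variable splitting: fit T by x (x) y (x) z, updating one block at a
# time. Each gradient step uses the exact quadratic step length, equivalently
# a block least-squares update. Test data are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 4
v = rng.standard_normal(n)
T = np.einsum('i,j,k->ijk', v, v, v)          # symmetric rank-one test tensor

x, y, z = (rng.standard_normal(n) for _ in range(3))
for k in range(100):
    x = np.einsum('ijk,j,k->i', T, y, z) / ((y @ y) * (z @ z))
    y = np.einsum('ijk,i,k->j', T, x, z) / ((x @ x) * (z @ z))
    z = np.einsum('ijk,i,j->k', T, x, y) / ((x @ x) * (y @ y))

residual = np.linalg.norm(T - np.einsum('i,j,k->ijk', x, y, z))
print(residual)                                # ~0 for this rank-one tensor
```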

16.
We study the convergence of a diagonal process for minimizing a closed proper convex function f, in which a proximal point iteration is applied to a sequence of functions approximating f. We prove that, when the approximation is sufficiently fast, and also when it is sufficiently slow, the sequence generated by the method converges toward a minimizer of f. Comparison to previous work is provided through examples in penalty methods for linear programming and Tikhonov regularization.

17.
Local convergence analysis for partitioned quasi-Newton updates
Summary: This paper considers local convergence properties of inexact partitioned quasi-Newton algorithms for the solution of certain non-linear equations and, in particular, the optimization of partially separable objective functions. Using the bounded deterioration principle, one obtains local and linear convergence, which implies Q-superlinear convergence under the usual conditions on the quasi-Newton updates. For the optimization case, these conditions are shown to be satisfied by any sequence of updates within the convex Broyden class, even if some Hessians are singular at the minimizer. Finally, local and Q-superlinear convergence is established for an inexact partitioned variable metric method under mild assumptions on the initial Hessian approximations. Work supported by a research grant of the Deutsche Forschungsgemeinschaft, Bonn, and carried out at the Department of Applied Mathematics and Theoretical Physics, Cambridge (United Kingdom).

18.
Let F(x,y) be a function of the vector variables x ∈ R^n and y ∈ R^m. One possible scheme for minimizing F(x,y) is to successively alternate minimizations in one vector variable while holding the other fixed. Local convergence analysis is done for this vector (grouped variable) version of coordinate descent, and assuming certain regularity conditions, it is shown that such an approach is locally convergent to a minimizer and that the rate of convergence in each vector variable is linear. Examples where the algorithm is useful in clustering and mixture density decomposition are given, and global convergence properties are briefly discussed. This research was supported in part by NSF Grant No. IST-84-07860. The authors are indebted to Professor R. A. Tapia for his help in improving this paper.
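The scheme is easy to see on a small example; the scalar test function below and its closed-form block minimizers are illustrative, and the iterates exhibit the linear per-block rate described here.

```python
# A minimal sketch of grouped coordinate descent: alternately minimize
# F(x, y) in x with y fixed, then in y with x fixed. The scalar test function
# and its closed-form block minimizers are illustrative.
F = lambda x, y: (x - y)**2 + (x - 1.0)**2 + y**2

x, y = 5.0, -3.0
for k in range(60):
    x = (y + 1.0) / 2.0      # argmin_x F(x, y), from dF/dx = 0
    y = x / 2.0              # argmin_y F(x, y), from dF/dy = 0
print(x, y)                   # converges linearly to the minimizer (2/3, 1/3)
```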

19.
We propose a class of self-adaptive proximal point methods suitable for degenerate optimization problems where multiple minimizers may exist, or where the Hessian may be singular at a local minimizer. If the proximal regularization parameter has the form μ_k = β‖∇f(x_k)‖^η, where η ∈ [0,2) and β > 0 is a constant, we obtain convergence to the set of minimizers that is linear for η = 0 and β sufficiently small, superlinear for η ∈ (0,1), and at least quadratic for η ∈ [1,2). Two different acceptance criteria for an approximate solution to the proximal problem are analyzed. These criteria are expressed in terms of the gradient of the proximal function, the gradient of the original function, and the iteration difference. With either acceptance criterion, the convergence results are analogous to those of the exact iterates. Preliminary numerical results are presented using some ill-conditioned CUTE test problems. This material is based upon work supported by the National Science Foundation under Grant Nos. 0203270, 0619080, and 0620286.
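A minimal sketch of one outer step with this adaptive regularization weight follows; the test objective (with singular Hessian at the minimizer), the constants β and η, and the exact subproblem solve are illustrative, and the paper's inexact acceptance criteria are omitted.

```python
# A minimal sketch of a self-adaptive proximal point step with regularization
# weight mu_k = beta * ||grad f(x_k)||^eta. The objective, beta, eta, and the
# exact subproblem solve are illustrative; the paper's inexact acceptance
# criteria are omitted.
import numpy as np
from scipy.optimize import minimize

f      = lambda x: x[0]**4 + x[1]**4            # degenerate: singular Hessian at 0
grad_f = lambda x: np.array([4*x[0]**3, 4*x[1]**3])
beta, eta = 0.1, 1.5                             # eta in [1,2): quadratic-rate regime

x = np.array([1.0, -1.0])
for k in range(30):
    mu = beta * np.linalg.norm(grad_f(x))**eta   # self-adaptive regularization
    sub = lambda y, xk=x, mu=mu: f(y) + 0.5 * mu * np.sum((y - xk)**2)
    x = minimize(sub, x).x                       # solve the proximal subproblem
print(x)                                          # approaches the minimizer (0, 0)
```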

20.
A method of conjugate directions, the projection method, for solving unconstrained minimization problems is presented. Under the assumption of uniform strict convexity, the method is shown to converge to the global minimizer of the unconstrained problem and to have an (n – 1)-step superlinear rate of convergence. With a Lipschitz condition on the second derivatives, the rate of convergence is shown to be a modified n-step quadratic one. This research was supported in part by the Army Research Office, Contract No. DAHC 19-69-C-0017, and the Office of Naval Research, Contract No. N00014-71-C-0116 (NR-047-099).
