Similar Articles
Found 20 similar articles (search time: 31 ms)
1.
A method of conjugate directions, the projection method, for solving unconstrained minimization problems is presented. Under the assumption of uniform strict convexity, the method is shown to converge to the global minimizer of the unconstrained problem and to have an (n – 1)-step superlinear rate of convergence. With a Lipschitz condition on the second derivatives, the rate of convergence is shown to be a modified n-step quadratic one. This research was supported in part by the Army Research Office, Contract No. DAHC 19-69-C-0017, and the Office of Naval Research, Contract No. N00014-71-C-0116(NR-047-099).

2.
A local convergence analysis of Newton’s method for solving nonlinear equations, under a majorant condition, is presented in this paper. Without assuming convexity of the derivative of the majorant function, which relaxes the Lipschitz condition on the operator under consideration, we establish convergence, the largest region of uniqueness of the solution, the optimal convergence radius, and results on the convergence rate. In addition, two special cases of the general theory are presented as applications.
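As a concrete reference point for the abstract above, here is a minimal Python/NumPy sketch of the classical local Newton iteration the analysis concerns; the majorant-condition machinery of the paper is not reproduced, and the test system is an arbitrary illustration.

```python
import numpy as np

def newton(F, J, x0, tol=1e-10, max_iter=50):
    """Classical Newton iteration x_{k+1} = x_k - J(x_k)^{-1} F(x_k).

    F : callable returning the residual vector F(x)
    J : callable returning the Jacobian matrix J(x)
    """
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            return x, k
        # Solve the linear Newton system J(x) s = -F(x)
        s = np.linalg.solve(J(x), -Fx)
        x = x + s
    return x, max_iter

# Example: solve x^2 - 2 = 0, y - x = 0 starting near (1.5, 1.5)
F = lambda v: np.array([v[0]**2 - 2.0, v[1] - v[0]])
J = lambda v: np.array([[2.0 * v[0], 0.0], [-1.0, 1.0]])
root, iters = newton(F, J, [1.5, 1.5])
```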

3.
This paper is concerned with convex composite minimization problems in a Hilbert space. In these problems, the objective is the sum of two closed, proper, and convex functions where one is smooth and the other admits a computationally inexpensive proximal operator. We analyze a family of generalized inertial proximal splitting algorithms (GIPSA) for solving such problems. We establish weak convergence of the generated sequence when the minimum is attained. Our analysis unifies and extends several previous results. We then focus on \(\ell _1\)-regularized optimization, which is the ubiquitous special case where the nonsmooth term is the \(\ell _1\)-norm. For certain parameter choices, GIPSA is amenable to a local analysis for this problem. For these choices we show that GIPSA achieves finite “active manifold identification”, i.e., convergence in a finite number of iterations to the optimal support and sign, after which GIPSA reduces to minimizing a local smooth function. We prove local linear convergence under either restricted strong convexity or a strict complementarity condition. We determine the rate in terms of the inertia, stepsize, and local curvature. Our local analysis is applicable to certain recent variants of the Fast Iterative Shrinkage–Thresholding Algorithm (FISTA), for which we establish active manifold identification and local linear convergence. Based on our analysis we propose a momentum restart scheme in these FISTA variants to obtain the optimal local linear convergence rate while maintaining desirable global properties.
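The following is a minimal Python sketch of a generic inertial proximal gradient step of the kind GIPSA generalizes, applied to the \(\ell _1\)-regularized least-squares special case discussed above. The fixed inertia parameter `alpha` and the 1/L stepsize are illustrative choices, not the paper's admissible parameter ranges.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inertial_proximal_gradient(grad_f, L, lam, x0, alpha=0.5, n_iter=500):
    """Generic inertial proximal gradient step for min f(x) + lam*||x||_1:
        y_k     = x_k + alpha * (x_k - x_{k-1})    # inertial extrapolation
        x_{k+1} = prox_{(lam/L)||.||_1}(y_k - (1/L) grad_f(y_k))
    grad_f : gradient of the smooth term; L : its Lipschitz constant.
    """
    x_prev = x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        y = x + alpha * (x - x_prev)
        x_prev, x = x, soft_threshold(y - grad_f(y) / L, lam / L)
    return x

# Example: LASSO-type problem with f(x) = 0.5*||Ax - b||^2
rng = np.random.default_rng(0)
A, b = rng.standard_normal((40, 100)), rng.standard_normal(40)
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of A^T(Ax - b)
x_hat = inertial_proximal_gradient(lambda x: A.T @ (A @ x - b), L, 0.1,
                                   np.zeros(100))
```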

4.
We consider a class of unconstrained nonsmooth convex optimization problems, in which the objective function is the sum of a convex smooth function on an open subset of matrices and a separable convex function on a set of matrices. This problem includes the covariance selection problem, which can be expressed as an \(\ell _1\)-penalized maximum likelihood estimation problem. In this paper, we propose a block coordinate gradient descent method (abbreviated as BCGD) for solving this class of nonsmooth separable problems, with the coordinate block chosen by a Gauss-Seidel rule. The method is simple, highly parallelizable, and suited for large-scale problems. We establish global convergence and, under a local Lipschitzian error bound assumption, a linear rate of convergence for this method. For the covariance selection problem, the method can terminate in \(O(n^3/\epsilon)\) iterations with an \(\epsilon\)-optimal solution. We compare the performance of the BCGD method with the first-order methods proposed by Lu (SIAM J Optim 19:1807–1827, 2009; SIAM J Matrix Anal Appl 31:2000–2016, 2010) for solving the covariance selection problem on randomly generated instances. Our numerical experience suggests that the BCGD method can be efficient for large-scale covariance selection problems with constraints.
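A hedged illustration of the update pattern: the paper's BCGD operates on matrix variables for covariance selection, but the same Gauss-Seidel block prox-gradient sweep can be shown on a vector \(\ell _1\) problem. The block partition, the blockwise Lipschitz constants, and the test data below are assumptions for illustration.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def bcgd(A, b, lam, n_blocks=10, n_sweeps=50):
    """Block coordinate gradient descent sketch for
        min_x 0.5*||Ax - b||^2 + lam*||x||_1
    with blocks swept in a fixed Gauss-Seidel (cyclic) order.
    """
    n = A.shape[1]
    x = np.zeros(n)
    blocks = np.array_split(np.arange(n), n_blocks)
    for _ in range(n_sweeps):
        for idx in blocks:                      # Gauss-Seidel sweep
            A_j = A[:, idx]
            L_j = np.linalg.norm(A_j, 2) ** 2   # blockwise Lipschitz constant
            g_j = A_j.T @ (A @ x - b)           # block gradient of smooth part
            # prox-gradient step restricted to the current block
            x[idx] = soft_threshold(x[idx] - g_j / L_j, lam / L_j)
    return x

rng = np.random.default_rng(1)
A, b = rng.standard_normal((50, 120)), rng.standard_normal(50)
x_hat = bcgd(A, b, lam=0.1)
```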

5.
In this paper, we study the generalized Douglas–Rachford algorithm and its cyclic variants, which include many projection-type methods such as the classical Douglas–Rachford algorithm and the alternating projection algorithm. Specifically, we establish several local linear convergence results for the algorithm in solving feasibility problems with finitely many closed, possibly nonconvex sets under different assumptions. Our findings not only relax some regularity conditions but also improve linear convergence rates in the literature. In the presence of convexity, the linear convergence is global.
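For reference, a minimal sketch of the classical Douglas–Rachford iteration for a two-set convex feasibility problem, the simplest member of the family analyzed above; the two projectors and the example sets are illustrative.

```python
import numpy as np

def douglas_rachford(proj_A, proj_B, x0, n_iter=200):
    """Classical Douglas-Rachford iteration for: find x in A ∩ B,
    given the projectors onto A and B:
        x_{k+1} = x_k + P_B(2 P_A(x_k) - x_k) - P_A(x_k)
    In the convex, consistent case the shadow sequence P_A(x_k)
    converges globally to a point of A ∩ B.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        a = proj_A(x)
        x = x + proj_B(2.0 * a - x) - a
    return proj_A(x)

# Example: A = unit ball, B = hyperplane {x : x_0 + x_1 = 1} in R^2
proj_ball = lambda x: x / max(1.0, np.linalg.norm(x))
n_vec = np.array([1.0, 1.0])
proj_hyp = lambda x: x - (n_vec @ x - 1.0) / (n_vec @ n_vec) * n_vec
point = douglas_rachford(proj_ball, proj_hyp, np.array([3.0, -2.0]))
```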

6.
The convergence of the Gauss-Newton algorithm for solving discrete nonlinear approximation problems is analyzed for general norms and families of functions. A quantitative global convergence theorem and several theorems on the rate of local convergence are derived. A general stepsize control procedure and two regularization principles are incorporated. Examples indicate the limits of the convergence theorems.
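A minimal Gauss-Newton sketch for the familiar least-squares (Euclidean norm) case; the abstract's general norms, stepsize control, and regularization principles are not reproduced, and the curve-fitting example is an illustration.

```python
import numpy as np

def gauss_newton(r, J, x0, tol=1e-10, max_iter=100):
    """Plain Gauss-Newton for min 0.5*||r(x)||^2.
    The step s_k solves the linearized problem min_s ||J(x_k) s + r(x_k)||,
    computed here via a linear least-squares solve for stability.
    """
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        rx, Jx = r(x), J(x)
        # Gauss-Newton step from the linearized residual
        s, *_ = np.linalg.lstsq(Jx, -rx, rcond=None)
        x = x + s
        if np.linalg.norm(s) < tol:
            return x, k
    return x, max_iter

# Example: fit y = exp(c*t) to data with unknown c
t = np.linspace(0.0, 1.0, 20)
y = np.exp(1.3 * t)
r = lambda c: np.exp(c[0] * t) - y
J = lambda c: (t * np.exp(c[0] * t)).reshape(-1, 1)
c_hat, iters = gauss_newton(r, J, np.array([0.0]))
```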

7.
In this paper, the Iri-Imai algorithm for solving linear and convex quadratic programming is extended to solve some other smooth convex programming problems. The global linear convergence rate of this extended algorithm is proved, under the condition that the objective and constraint functions satisfy a certain type of convexity, called harmonic convexity in this paper. A characterization of this convexity condition is given. The same convexity condition was used by Mehrotra and Sun to prove the convergence of a path-following algorithm. The Iri-Imai algorithm is a natural generalization of the original Newton algorithm to constrained convex programming. Other known convergent interior-point algorithms for smooth convex programming are mainly based on the path-following approach.

8.
We present a practical implementation of an optimal first-order method, due to Nesterov, for large-scale total variation regularization in tomographic reconstruction, image deblurring, etc. The algorithm applies to μ-strongly convex objective functions with L-Lipschitz continuous gradient. In the framework of Nesterov, both μ and L are assumed known, an assumption that is seldom satisfied in practice. We propose to incorporate mechanisms to estimate locally sufficient μ and L during the iterations. The mechanisms also allow for application to non-strongly convex functions. We discuss the convergence rate and iteration complexity of several first-order methods, including the proposed algorithm, and we use a 3D tomography problem to compare the performance of these methods. In numerical simulations we demonstrate the advantage, in terms of faster convergence, of estimating the strong convexity parameter μ when solving ill-conditioned problems to high accuracy, in comparison with an optimal method for non-strongly convex problems and a first-order method with Barzilai-Borwein step size selection.
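For contrast with the paper's adaptive scheme, here is a sketch of Nesterov's constant-momentum method in the case where μ and L are assumed known, which is exactly the assumption the paper seeks to remove; the quadratic test problem is illustrative.

```python
import numpy as np

def nesterov_strongly_convex(grad_f, mu, L, x0, n_iter=300):
    """Nesterov's accelerated gradient for mu-strongly convex f with
    L-Lipschitz gradient, assuming mu and L are known:
        beta    = (sqrt(L) - sqrt(mu)) / (sqrt(L) + sqrt(mu))
        y_k     = x_k + beta * (x_k - x_{k-1})
        x_{k+1} = y_k - (1/L) * grad_f(y_k)
    """
    beta = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))
    x_prev = x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        y = x + beta * (x - x_prev)
        x_prev, x = x, y - grad_f(y) / L
    return x

# Example: strongly convex quadratic f(x) = 0.5 * x^T Q x - b^T x
Q = np.diag([1.0, 10.0, 100.0])              # mu = 1, L = 100
b = np.array([1.0, 1.0, 1.0])
x_star = nesterov_strongly_convex(lambda x: Q @ x - b, 1.0, 100.0,
                                  np.zeros(3))
```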

9.
We obtain local estimates of the distance to a set defined by equality constraints under assumptions which are weaker than those previously used in the literature. Specifically, we assume that the constraints mapping has a Lipschitzian derivative, and satisfies a certain 2-regularity condition at the point under consideration. This setting directly subsumes the classical regular case and the twice differentiable 2-regular case, for which error bounds are known, but it is significantly richer than either of these two cases. When applied to a certain equation-based reformulation of the nonlinear complementarity problem, our results yield an error bound under an assumption more general than b-regularity. The latter appears to be the weakest assumption under which a local error bound for complementarity problems was previously available. We also discuss an application of our results to the convergence rate analysis of the exterior penalty method for solving irregular problems.

10.
We study the solution of nonlinear equations by an iterative method of Aitken type, which has the interpolation nodes controlled by the Newton method. We obtain a local convergence result which shows that the q-convergence order of this method is 6 and its efficiency index is $\sqrt[5]{6}$, which is higher than the efficiency index of the Aitken or Newton methods. Monotone sequences are obtained for initial approximations farther from the solution, if they satisfy the Fourier condition and the nonlinear mapping satisfies monotonicity and convexity assumptions on the domain.
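The order-6 scheme of the abstract is not reproduced here; the sketch below only illustrates the underlying idea, a divided-difference (Aitken/secant-type) correction taken through an interpolation node generated by a Newton step.

```python
def aitken_newton_sketch(f, df, x, tol=1e-12, max_iter=20):
    """Illustration only: a secant (divided-difference) correction through
    a node produced by a Newton step. This conveys the 'interpolation
    nodes controlled by Newton' idea but is NOT the order-6 method of
    the abstract.
    """
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        z = x - fx / df(x)            # Newton step supplies the next node
        fz = f(z)
        if fz == fx:                  # nodes coincide; accept the Newton point
            return z
        # secant/Aitken-type correction through the nodes x and z
        x = z - fz * (z - x) / (fz - fx)
    return x

# Example: solve x^3 - 2 = 0 starting from x = 1.5
root = aitken_newton_sketch(lambda x: x**3 - 2.0, lambda x: 3.0 * x**2, 1.5)
```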

11.
Block coordinate update (BCU) methods enjoy low per-update computational complexity because each update touches only one or a few block variables among a possibly large number of blocks. They are also easily parallelized and have therefore been particularly popular for solving problems involving large-scale datasets and/or variables. In this paper, we propose a primal–dual BCU method for solving linearly constrained convex programs with multi-block variables. The method is an accelerated version of a primal–dual algorithm proposed by the authors, which applies randomization in selecting block variables to update and establishes an \(O(1/t)\) convergence rate under a convexity assumption. We show that the rate can be accelerated to \(O(1/t^2)\) if the objective is strongly convex. In addition, if one block variable is independent of the others in the objective, we show that the algorithm can be modified to achieve a linear rate of convergence. The numerical experiments show that the accelerated method performs stably with a single set of parameters, while the original method needs to tune the parameters for different datasets in order to achieve a comparable level of performance.

12.
We consider a general class of convex optimization problems in which one seeks to minimize a strongly convex function over a closed and convex set which is by itself an optimal set of another convex problem. We introduce a gradient-based method, called the minimal norm gradient method, for solving this class of problems, and establish the convergence of the sequence generated by the algorithm as well as a rate of convergence of the sequence of function values. The paper ends with several illustrative numerical examples.

13.
14.
A Newton approach is proposed for solving variable order smooth constrained vector optimization problems. A concept of strong convexity is presented, and its properties are analyzed. It is then shown that the Newton direction is well defined and that the algorithm converges. Moreover, the rate of convergence is obtained under ordering structures satisfying a mild hypothesis.

15.
Extended Linear-Quadratic Programming (ELQP) problems were introduced by Rockafellar and Wets for various models in stochastic programming and multistage optimization. Several numerical methods with linear convergence rates have been developed for solving fully quadratic ELQP problems, where the primal and dual coefficient matrices are positive definite. We present a two-stage sequential quadratic programming (SQP) method for solving ELQP problems arising in stochastic programming. The first stage algorithm realizes global convergence and the second stage algorithm realizes superlinear local convergence under a condition called B-regularity. B-regularity is milder than the fully quadratic condition; the primal coefficient matrix need not be positive definite. Numerical tests are given to demonstrate the efficiency of the algorithm. Solution properties of the ELQP problem under B-regularity are also discussed. Supported by the Australian Research Council.

16.
Huang, Wen; Wei, Ke. Mathematical Programming 2022, 194(1–2):371–413.

In the Euclidean setting the proximal gradient method and its accelerated variants are a class of efficient algorithms for optimization problems with decomposable objective. In this paper, we develop a Riemannian proximal gradient method (RPG) and its accelerated variant (ARPG) for similar problems constrained to a manifold. The global convergence of RPG is established under mild assumptions, and an O(1/k) convergence rate is also derived for RPG based on the notion of retraction convexity. If the objective function obeys the Riemannian Kurdyka–Łojasiewicz (KL) property, it is further shown that the sequence generated by RPG converges to a single stationary point. As in the Euclidean setting, a local convergence rate can be established if the objective function satisfies the Riemannian KL property with an exponent. Moreover, we show that the restriction of a semialgebraic function onto the Stiefel manifold satisfies the Riemannian KL property, which covers, for example, the well-known sparse PCA problem. Numerical experiments on random and synthetic data are conducted to test the performance of the proposed RPG and ARPG.
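A loose illustration of the ingredients of a Riemannian proximal gradient step (Riemannian gradient, proximal operator, retraction) on the sparse PCA problem mentioned above. The crude "prox then retract" update below is an assumption for illustration only; it is not the tangent-space proximal subproblem that RPG actually solves.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sphere_prox_gradient_sketch(A, lam, x0, step=0.1, n_iter=500):
    """Illustrative manifold proximal-gradient-style sweep for sparse PCA,
        min_{||x||_2 = 1}  -x^T A x + lam * ||x||_1 .
    Ingredients only (Riemannian gradient, prox, retraction); RPG solves
    a tangent-space proximal subproblem instead of the crude
    'prox then retract' used here.
    """
    x = np.asarray(x0, dtype=float)
    x /= np.linalg.norm(x)
    for _ in range(n_iter):
        egrad = -2.0 * A @ x                    # Euclidean gradient of -x^T A x
        rgrad = egrad - (x @ egrad) * x         # project onto tangent space at x
        y = soft_threshold(x - step * rgrad, step * lam)  # ambient prox step
        n = np.linalg.norm(y)
        x = y / n if n > 0 else x               # retraction: back to the sphere
    return x

rng = np.random.default_rng(2)
M = rng.standard_normal((30, 10))
A = M.T @ M                                     # sample covariance-like matrix
x_sparse = sphere_prox_gradient_sketch(A, lam=0.5, x0=rng.standard_normal(10))
```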

17.
In this paper, a class of finely discretized Semi-Infinite Programming (SIP) problems is discussed. Combining the idea of the norm-relaxed Method of Feasible Directions (MFD) with the technique of updating the discretization index set, we present a new algorithm for solving the Discretized Semi-Infinite (DSI) problems arising from SIP. At each iteration, the iterate is feasible for the discretized problem, and an improved search direction is computed by solving only one direction-finding subproblem, i.e., a quadratic program, in which appropriate constraints are chosen to reduce the computational cost. A high-order correction direction can be obtained by solving another quadratic programming subproblem with only equality constraints. Under weak conditions such as the Mangasarian–Fromovitz Constraint Qualification (MFCQ), the proposed algorithm possesses weak global convergence. Moreover, superlinear convergence is obtained under the Linear Independence Constraint Qualification (LICQ) and other assumptions. Finally, some preliminary numerical experiments are reported.

18.
An implicit iterative method is applied to solving linear ill-posed problems with perturbed operators. It is proved that the optimal convergence rate can be obtained after a suitable choice of the number of iterations. A generalized Morozov discrepancy principle is proposed for such problems, and the optimal convergence rate can then also be obtained by an a posteriori strategy. The convergence results show that the algorithm is a robust regularization method.
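A hedged sketch of one implicit iterative scheme of this general type, an iterated-Tikhonov-style iteration with discrepancy-principle stopping; the paper's specific scheme and its perturbed-operator analysis are not reproduced, and the test problem is synthetic.

```python
import numpy as np

def implicit_iteration(A, b, alpha=1.0, delta=None, tau=1.1, max_iter=500):
    """Iterated-Tikhonov-type implicit iteration for ill-posed Ax = b:
        (A^T A + alpha I) x_{k+1} = A^T b + alpha x_k ,
    stopped by Morozov's discrepancy principle ||A x_k - b|| <= tau*delta
    when a noise level delta is supplied.
    """
    n = A.shape[1]
    x = np.zeros(n)
    M = A.T @ A + alpha * np.eye(n)
    Atb = A.T @ b
    for k in range(max_iter):
        if delta is not None and np.linalg.norm(A @ x - b) <= tau * delta:
            return x, k                         # a posteriori stopping
        x = np.linalg.solve(M, Atb + alpha * x)
    return x, max_iter

# Example: ill-conditioned system with noisy data
rng = np.random.default_rng(3)
U, _ = np.linalg.qr(rng.standard_normal((50, 50)))
A = U @ np.diag(np.logspace(0, -6, 50)) @ U.T   # rapidly decaying spectrum
x_true = rng.standard_normal(50)
noise = 1e-4 * rng.standard_normal(50)
b = A @ x_true + noise
x_reg, stop_k = implicit_iteration(A, b, alpha=1e-2,
                                   delta=np.linalg.norm(noise))
```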

19.
We present a first-order algorithm for solving semi-infinite generalized min-max problems which consist of minimizing a function \(f_0(x) = F(f_1(x), \ldots, f_m(x))\), where F is a smooth function and each \(f_i\) is the maximum of an infinite number of smooth functions. In Section 3.3 of [17], Polak presents a methodology for solving infinite dimensional problems by expanding them into an infinite sequence of consistent finite dimensional approximating problems, and then using a master algorithm that selects an appropriate subsequence of these problems and applies a number of iterations of a finite dimensional optimization algorithm to each of these problems, sequentially. Our algorithm was constructed within this framework; it calls an algorithm by Kiwiel as a subroutine. The number of iterations of the Kiwiel algorithm to be applied to the approximating problems is determined by a test that ensures that the overall scheme retains the rate of convergence of the Kiwiel algorithm. Under reasonable assumptions we show that all the accumulation points of sequences constructed by our algorithm are stationary, and, under an additional strong convexity assumption, that the Kiwiel algorithm converges at least linearly, and that our algorithm also converges at least linearly, with the same rate constant bounds as Kiwiel's.

20.
We study the asymptotic rate of convergence of the alternating Hermitian/skew-Hermitian iteration for solving saddle-point problems arising in the discretization of elliptic partial differential equations. By a careful analysis of the iterative scheme at the continuous level we determine optimal convergence parameters for the model problem of the Poisson equation written in div-grad form. We show that the optimized convergence rate for small mesh parameter h is asymptotically \(1 - O(h^{1/2})\). Furthermore we show that when the splitting is used as a preconditioner for a Krylov method, a different optimization leading to two clusters in the spectrum gives an optimal, h-independent, convergence rate. The theoretical analysis is supported by numerical experiments.
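For reference, a sketch of the standard Hermitian/skew-Hermitian splitting (HSS) iteration for Ax = b; the shift `alpha` and the test matrix are illustrative, whereas the paper derives optimal parameters for the Poisson model problem and uses the splitting as a Krylov preconditioner.

```python
import numpy as np

def hss_iteration(A, b, alpha, x0=None, tol=1e-10, max_iter=1000):
    """Hermitian/skew-Hermitian splitting (HSS) iteration for Ax = b,
    with A = H + S, H = (A + A^H)/2, S = (A - A^H)/2:
        (alpha I + H) x_{k+1/2} = (alpha I - S) x_k       + b
        (alpha I + S) x_{k+1}   = (alpha I - H) x_{k+1/2} + b
    Converges for any alpha > 0 when H is positive definite.
    """
    n = A.shape[0]
    H = 0.5 * (A + A.conj().T)
    S = 0.5 * (A - A.conj().T)
    I = np.eye(n)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    for k in range(max_iter):
        x_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ x + b)
        x = np.linalg.solve(alpha * I + S, (alpha * I - H) @ x_half + b)
        if np.linalg.norm(A @ x - b) < tol * np.linalg.norm(b):
            return x, k
    return x, max_iter

# Example: nonsymmetric system with positive definite symmetric part
rng = np.random.default_rng(4)
M = rng.standard_normal((10, 10))
A = np.diag(np.arange(1.0, 11.0)) + 0.5 * (M - M.T)  # PD part + skew part
b = rng.standard_normal(10)
x_sol, iters = hss_iteration(A, b, alpha=3.0)
```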

