Similar Documents
20 similar documents found (search time: 15 ms)
1.
A family of variable metric proximal methods   (Cited by 5; self-citations: 0; others: 5)
We consider conceptual optimization methods combining two ideas: the Moreau–Yosida regularization in convex analysis, and quasi-Newton approximations of smooth functions. We outline several approaches based on this combination, and establish their global convergence. Then we study theoretically the local convergence properties of one of these approaches, which uses quasi-Newton updates of the objective function itself. Also, we obtain a globally and superlinearly convergent BFGS proximal method. At each step of our study, we single out the assumptions that are useful to derive the result concerned.
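To make the combination concrete, here is a minimal sketch (not the paper's quasi-Newton variant) of smoothing a nonsmooth convex function via its Moreau–Yosida envelope and then applying a plain smooth method to the envelope; the example function f(x) = ||x||_1 and all names are illustrative.

```python
import numpy as np

# Illustration: minimize f(x) = ||x||_1 via its Moreau-Yosida envelope
#   e_lam(x) = min_y f(y) + ||y - x||^2 / (2*lam).
# The envelope is smooth and shares minimizers with f; its gradient is
#   grad e_lam(x) = (x - prox_{lam*f}(x)) / lam,
# so any smooth method can be applied to it (plain gradient descent here;
# the paper uses quasi-Newton updates instead).

def prox_l1(x, lam):
    """Proximal operator of lam*||.||_1 (soft thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def moreau_gradient_descent(x0, lam=1.0, step=1.0, iters=50):
    x = x0.copy()
    for _ in range(iters):
        g = (x - prox_l1(x, lam)) / lam   # gradient of the envelope
        x = x - step * g                  # smooth step on the envelope
    return x

print(moreau_gradient_descent(np.array([3.0, -0.5, 2.0])))
# converges toward the minimizer 0 of ||x||_1
```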

2.
3.
We consider a class of unconstrained nonsmooth convex optimization problems, in which the objective function is the sum of a convex smooth function on an open subset of matrices and a separable convex function on a set of matrices. This problem includes the covariance selection problem, which can be expressed as an ℓ1-penalized maximum likelihood estimation problem. In this paper, we propose a block coordinate gradient descent method (abbreviated as BCGD) for solving this class of nonsmooth separable problems, with the coordinate block chosen by a Gauss-Seidel rule. The method is simple, highly parallelizable, and suited for large-scale problems. We establish global convergence and, under a local Lipschitzian error bound assumption, a linear rate of convergence for this method. For the covariance selection problem, the method can terminate in O(n³/ε) iterations with an ε-optimal solution. We compare the performance of the BCGD method with the first-order methods proposed by Lu (SIAM J Optim 19:1807–1827, 2009; SIAM J Matrix Anal Appl 31:2000–2016, 2010) for solving the covariance selection problem on randomly generated instances. Our numerical experience suggests that the BCGD method can be efficient for large-scale covariance selection problems with constraints.
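The following is a generic illustration (not the paper's covariance-selection code) of a coordinate gradient descent sweep with a cyclic Gauss-Seidel rule on a quadratic-plus-ℓ1 model problem; each coordinate subproblem reduces to soft thresholding.

```python
import numpy as np

# Block coordinate gradient descent with a cyclic (Gauss-Seidel) rule for
#   min_x 0.5 * x'Ax - b'x + lam * ||x||_1,  A symmetric positive definite.
# Each coordinate step minimizes a quadratic model plus the separable
# l1 term, which reduces to soft thresholding.

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def bcgd_l1(A, b, lam, iters=100):
    n = len(b)
    x = np.zeros(n)
    for _ in range(iters):
        for j in range(n):          # Gauss-Seidel sweep over coordinates
            g = A[j] @ x - b[j]     # j-th partial derivative of the smooth part
            L = A[j, j]             # coordinate-wise curvature
            x[j] = soft(x[j] - g / L, lam / L)
    return x
```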

4.
A coordinate gradient descent method for nonsmooth separable minimization   (Cited by 1; self-citations: 0; others: 1)
We consider the problem of minimizing the sum of a smooth function and a separable convex function. This problem includes as special cases bound-constrained optimization and smooth optimization with ℓ1-regularization. We propose a (block) coordinate gradient descent method for solving this class of nonsmooth separable problems. We establish global convergence and, under a local Lipschitzian error bound assumption, linear convergence for this method. The local Lipschitzian error bound holds under assumptions analogous to those for constrained smooth optimization, e.g., the convex function is polyhedral and the smooth function is (nonconvex) quadratic or is the composition of a strongly convex function with a linear mapping. We report numerical experience with solving the ℓ1-regularization of unconstrained optimization problems from Moré et al. (ACM Trans. Math. Softw. 7, 17–41, 1981) and from the CUTEr set (Gould and Orban, ACM Trans. Math. Softw. 29, 373–394, 2003). Comparison with L-BFGS-B and MINOS, applied to a reformulation of the ℓ1-regularized problem as a bound-constrained optimization problem, is also reported.
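As a sketch of the bound-constrained reformulation mentioned at the end of the abstract (not the authors' CGD method), one can split x = u − v with u, v ≥ 0 and hand the result to an off-the-shelf bounded solver such as SciPy's L-BFGS-B; the smooth term f below is a made-up example.

```python
import numpy as np
from scipy.optimize import minimize

# Reformulation:  min_x f(x) + lam*||x||_1
#   becomes      min_{u,v>=0} f(u - v) + lam * sum(u + v),
# which L-BFGS-B can handle directly via simple bounds.

target = np.array([1.0, -2.0, 0.0])

def f(x):                         # example smooth term: least squares
    return 0.5 * np.sum((x - target)**2)

def grad_f(x):
    return x - target

lam, n = 0.5, 3

def obj(z):
    u, v = z[:n], z[n:]
    return f(u - v) + lam * np.sum(z)

def grad(z):
    g = grad_f(z[:n] - z[n:])
    return np.concatenate([g + lam, -g + lam])

res = minimize(obj, np.zeros(2 * n), jac=grad, method="L-BFGS-B",
               bounds=[(0, None)] * (2 * n))
x = res.x[:n] - res.x[n:]         # recover the original variable
```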

5.
We consider a proximal gradient algorithm with line search for convex optimization problems whose objective is the sum of a smooth loss function and a nonsmooth regularizer, and analyze its convergence. We prove that the algorithm converges R-linearly when the gradient is locally Lipschitz continuous and, for the case where the nonsmooth part is the sparse group (block) LASSO regularizer, we show that an error bound condition holds, yielding a linear convergence rate. Finally, numerical results verify the effectiveness of the method.
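A minimal sketch of the kind of algorithm analyzed here: proximal gradient with a backtracking line search for a smooth term plus an ℓ1 regularizer (the paper treats the sparse group LASSO; plain ℓ1 is used below to keep the proximal map one line).

```python
import numpy as np

# Proximal gradient with backtracking line search for
#   min_x  g(x) + lam*||x||_1,   g smooth convex.

def prox_l1(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_grad_ls(g, grad_g, x0, lam, t0=1.0, beta=0.5, iters=100):
    x, t = x0.copy(), t0
    for _ in range(iters):
        gx, d = g(x), grad_g(x)
        while True:                       # backtracking on the step size t
            z = prox_l1(x - t * d, t * lam)
            # sufficient-decrease test against the quadratic upper model
            if g(z) <= gx + d @ (z - x) + np.sum((z - x)**2) / (2 * t):
                break
            t *= beta
        x = z
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]]); b = np.array([1.0, -1.0])
g = lambda x: 0.5 * x @ A @ x - b @ x
grad_g = lambda x: A @ x - b
print(prox_grad_ls(g, grad_g, np.zeros(2), lam=0.1))
```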

6.
A family of variable metric methods for convex constrained optimization was introduced recently by Birgin, Martínez and Raydan. One of the members of this family is the inexact spectral projected gradient (ISPG) method for minimization with convex constraints. At each iteration of these methods a strictly convex quadratic function with convex constraints must be (inexactly) minimized. In the case of the ISPG method it was shown that, in some important applications, iterative projection methods can be used for this minimization. In this paper the particular case in which the convex domain is a polytope described by a finite set of linear inequalities is considered. For solving the linearly constrained convex quadratic subproblem a dual approach is adopted, by means of which subproblems become (not necessarily strictly) convex quadratic minimization problems with box constraints. These subproblems are solved by means of an active-set box-constraint quadratic optimizer with a proximal-point type unconstrained algorithm for minimization within the current faces. Convergence results and numerical experiments are presented.
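For intuition, here is a bare-bones spectral projected gradient iteration on a box (a simple convex set whose projection is a clip; the paper's contribution concerns general polytopes handled through a dual active-set QP solver). The Barzilai-Borwein step length is what makes the method "spectral".

```python
import numpy as np

# Spectral projected gradient on a box: project the gradient step onto
# {x : lo <= x <= hi}, with the Barzilai-Borwein step length s's / s'y.

def project_box(x, lo, hi):
    return np.clip(x, lo, hi)

def spg_box(grad, x0, lo, hi, iters=100, alpha=1.0):
    x = x0.copy()
    g = grad(x)
    for _ in range(iters):
        x_new = project_box(x - alpha * g, lo, hi)
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        if s @ y > 0:                 # safeguarded BB step length
            alpha = (s @ s) / (s @ y)
        x, g = x_new, g_new
    return x
```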

7.
We develop a one-parameter family of variable metric updates by considering a fundamental decomposition of the Hessian that underlies variable metric algorithms. The relationship with other variable metric updates is discussed. Considerations based on the condition of the Hessian inverse approximation indicate particular choices of the parameter, and these are discussed in the second half of this paper. (Work performed under the auspices of the U.S. Energy Research and Development Administration.)
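The classical one-parameter (Broyden) family of inverse-Hessian updates is a standard concrete instance of such a family: φ = 0 gives DFP and φ = 1 gives BFGS. This sketch illustrates the notion of a parameterized variable metric update, not the particular decomposition developed in the paper.

```python
import numpy as np

# Broyden family of inverse-Hessian updates, parameterized by phi:
#   phi = 0 -> DFP,  phi = 1 -> BFGS.
# s = x_{k+1} - x_k,  y = grad_{k+1} - grad_k,  H approximates the
# inverse Hessian; a curvature condition s'y > 0 is assumed.

def broyden_family_update(H, s, y, phi):
    sy = s @ y
    Hy = H @ y
    yHy = y @ Hy
    # rank-two DFP part
    H_dfp = H - np.outer(Hy, Hy) / yHy + np.outer(s, s) / sy
    # correction direction interpolating toward BFGS
    v = s / sy - Hy / yHy
    return H_dfp + phi * yHy * np.outer(v, v)
```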

8.
9.
It has become customary to compare the performance of unconstrained optimization algorithms on families of extended symmetric test functions. In this paper, results are presented which indicate that the performance of the variable metric algorithm on such functions is greatly distorted by rounding errors that destroy the special nature of these functions. A simple method of overcoming this difficulty is demonstrated, and it confirms the theoretical result that the number of iterations required to solve such problems is independent of the dimension. (This research was supported by the Science and Engineering Research Council.)

10.
This paper deals with a new variable metric algorithm for stochastic optimization problems. The essence of this is as follows: two stochastic quasigradient algorithms work simultaneously, the first in the main space, the second with respect to the matrices that modify the space variables. Almost sure convergence of the algorithm is proved for the case of a convex (possibly nonsmooth) objective function.

11.
A new four-point implicit block multistep method is developed for solving systems of first-order ordinary differential equations with variable step size. The method computes the numerical solution at four equally spaced points simultaneously. The stability of the proposed method is investigated. The Gauss-Seidel approach is used for the implementation of the proposed method in the PE(CE)^m mode. The method is presented in a simple form of Adams type, and all coefficients are stored in the code in order to avoid the calculation of divided difference and integration coefficients. Numerical examples are given to illustrate the efficiency of the proposed method.
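The PE(CE)^m implementation mode the abstract refers to alternates prediction, evaluation, and correction. As a stand-in for the four-point block method (whose coefficients are not given here), the sketch below shows one PECE step with a 2-step Adams-Bashforth predictor and a 2-step Adams-Moulton corrector at fixed step size.

```python
# One PECE step for y' = f(t, y), given f_n = f(t, y) and
# f_prev = f(t - h, y_prev).  Returns (y_{n+1}, f_{n+1}).

def pece_step(f, t, y, f_n, f_prev, h):
    # P: predict with the 2-step Adams-Bashforth formula
    y_pred = y + h / 2.0 * (3.0 * f_n - f_prev)
    # E: evaluate the derivative at the predicted point
    f_pred = f(t + h, y_pred)
    # C: correct with the 2-step Adams-Moulton formula
    y_new = y + h / 12.0 * (5.0 * f_pred + 8.0 * f_n - f_prev)
    # E: final evaluation, reused as f_n at the next step
    return y_new, f(t + h, y_new)
```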

12.
We study an inexact proximal stochastic gradient (IPSG) method for convex composite optimization, whose objective function is the sum of an average of a large number of smooth convex functions and a convex, but possibly nonsmooth, function. Variance reduction techniques are incorporated in the method to reduce the stochastic gradient variance. The main feature of the IPSG algorithm is that it allows the proximal subproblems to be solved inexactly while still retaining global convergence with desirable complexity bounds. Different subproblem stopping criteria are proposed. Global convergence and component gradient complexity bounds are derived for both the strongly convex and the generally convex case. Preliminary numerical experiments show the overall efficiency of the IPSG algorithm.
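A prox-SVRG-style step conveys the structure (the paper's IPSG additionally allows the proximal subproblem to be solved inexactly; here the ℓ1 prox is exact). The problem and names below are illustrative.

```python
import numpy as np

# Variance-reduced proximal stochastic gradient for
#   min_x (1/m) * sum_i 0.5*(a_i'x - b_i)^2 + lam*||x||_1.

def prox_l1(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_svrg(A, b, lam, step, epochs=10, inner=None):
    m, n = A.shape
    inner = inner or m
    x = np.zeros(n)
    rng = np.random.default_rng(0)
    for _ in range(epochs):
        x_snap = x.copy()
        mu = A.T @ (A @ x_snap - b) / m           # full gradient at snapshot
        for _ in range(inner):
            i = rng.integers(m)
            gi = A[i] * (A[i] @ x - b[i])         # component gradient at x
            gi_snap = A[i] * (A[i] @ x_snap - b[i])
            v = gi - gi_snap + mu                 # variance-reduced gradient
            x = prox_l1(x - step * v, step * lam) # (here: exact prox)
    return x
```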

13.
In this paper, we discuss a variable metric proximal-descent algorithm for finding a zero of a given maximal monotone operator. At each iteration, it first implements a proximal step and then a descent step to locate the new iterate. In the proximal step, the regularization parameter is replaced by a positive definite matrix, which may vary from iteration to iteration. Under standard assumptions, we prove global convergence without requiring symmetry of the matrix. Some key aspects of the algorithm's applications are discussed. Preliminary numerical experiments show its efficiency in practical implementations.

14.
We consider an extended second-order cone linear complementarity problem (SOCLCP), including the generalized SOCLCP, the horizontal SOCLCP, the vertical SOCLCP, and the mixed SOCLCP as special cases. In this paper, we present some simple second-order cone constrained and unconstrained reformulation problems, and under mild conditions prove the equivalence between the stationary points of these optimization problems and the solutions of the extended SOCLCP. Particularly, we develop a proximal gradient descent method for solving the second-order cone constrained problems. This method is very simple and at each iteration makes only one Euclidean projection onto second-order cones. We establish global convergence and, under a local Lipschitzian error bound assumption, linear rate of convergence. Numerical comparisons are made with the limited-memory BFGS method for the unconstrained reformulations, which verify the effectiveness of the proposed method.
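The "one Euclidean projection onto second-order cones" per iteration has a well-known closed form, shown below inside a generic projected gradient loop (a sketch, not the paper's exact method).

```python
import numpy as np

# Projection onto the second-order cone K = {(x0, xbar) : ||xbar|| <= x0}.

def project_soc(x):
    x0, xbar = x[0], x[1:]
    nx = np.linalg.norm(xbar)
    if nx <= x0:            # already inside the cone
        return x.copy()
    if nx <= -x0:           # in the polar cone: project to the origin
        return np.zeros_like(x)
    t = (x0 + nx) / 2.0     # boundary case: closed-form projection
    return np.concatenate(([t], t * xbar / nx))

def projected_gradient(grad, x0, step=0.1, iters=100):
    x = x0.copy()
    for _ in range(iters):
        x = project_soc(x - step * grad(x))
    return x
```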

15.
Mathematical Programming - Many applications require minimizing the sum of smooth and nonsmooth functions. For example, basis pursuit denoising problems in data science require minimizing a measure...

16.
Computational Optimization and Applications - We investigate the convergence of the proximal gradient method applied to control problems with non-smooth and non-convex control cost. Here, we focus...

17.
Support vector machines (SVMs) training may be posed as a large quadratic program (QP) with bound constraints and a single linear equality constraint. We propose a (block) coordinate gradient descent method for solving this problem and, more generally, linearly constrained smooth optimization. Our method is closely related to decomposition methods currently popular for SVM training. We establish global convergence and, under a local error bound assumption (which is satisfied by the SVM QP), linear rate of convergence for our method when the coordinate block is chosen by a Gauss-Southwell-type rule to ensure sufficient descent. We show that, for the SVM QP with n variables, this rule can be implemented in O(n) operations using Rockafellar’s notion of conformal realization. Thus, for SVM training, our method requires only O(n) operations per iteration and, in contrast to existing decomposition methods, achieves linear convergence without additional assumptions. We report our numerical experience with the method on some large SVM QP arising from two-class data classification. Our experience suggests that the method can be efficient for SVM training with nonlinear kernel.
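A generic two-variable working-set step for the SVM dual conveys the decomposition idea (the paper's Gauss-Southwell-type selection rule and the conformal-realization implementation are omitted); the direction y_i·e_i − y_j·e_j keeps the single equality constraint satisfied.

```python
import numpy as np

# One working-set step for the SVM dual QP
#   min_a 0.5*a'Qa - sum(a)   s.t.  y'a = 0,  0 <= a <= C,
# moving along d = y_i*e_i - y_j*e_j, which preserves y'a (y in {-1,+1}).

def two_variable_step(Q, y, a, C, i, j):
    g = Q @ a - 1.0                       # gradient of the dual objective
    slope = y[i] * g[i] - y[j] * g[j]     # directional derivative along d
    curv = Q[i, i] + Q[j, j] - 2.0 * y[i] * y[j] * Q[i, j]
    if curv <= 0:
        return a
    t = -slope / curv                     # unconstrained minimizer along d
    # clip t so that both coordinates stay in [0, C]
    lo, hi = -np.inf, np.inf
    for k, s in ((i, y[i]), (j, -y[j])):
        if s > 0:
            lo, hi = max(lo, -a[k] / s), min(hi, (C - a[k]) / s)
        else:
            lo, hi = max(lo, (C - a[k]) / s), min(hi, -a[k] / s)
    t = np.clip(t, lo, hi)
    a = a.copy()
    a[i] += t * y[i]
    a[j] -= t * y[j]
    return a
```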

18.
The Kriging surrogate model is frequently employed when applying evolutionary computation to real-world problems. Here, a Kriging model with a coordinate transformation of the design space is proposed to improve the approximation accuracy for objective functions with correlated design variables. The coordinate transformation is conducted to extract significant trends in the objective function and to identify a suitable coordinate system based on one of two criteria: the likelihood function, or estimated gradients of the objective function with respect to each design variable. Compared with the ordinary Kriging model, the proposed methods show higher accuracy in the approximation of various test functions. The likelihood-based method shows higher accuracy than the gradient-based one when the number of design variables is less than six. The latter achieves higher accuracy than the ordinary Kriging model even for high-dimensional functions and is applied to an airfoil design problem with spline curves as an example with correlated design variables, where it performs better both in approximation accuracy and in the capability to explore the optimal solution.
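Mechanically, the idea is to fit the Kriging (Gaussian process) model in rotated coordinates. The sketch below uses PCA as a stand-in rotation, whereas the paper selects the transformation from the likelihood or from estimated gradients; all library and data choices are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(40, 2))
y = (X[:, 0] + X[:, 1])**2          # objective with correlated variables

# Stand-in coordinate transformation (the paper chooses it by likelihood
# or estimated gradients, not PCA).
rot = PCA(n_components=2).fit(X)
Z = rot.transform(X)

# Kriging with an anisotropic RBF kernel in the rotated coordinates.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=[1.0, 1.0]))
gp.fit(Z, y)
pred = gp.predict(rot.transform([[0.5, 0.5]]))
```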

19.
We present a novel variational approach to gradient-flow evolution in metric spaces. In particular, we advance a functional defined on entire trajectories, whose minimizers converge to curves of maximal slope for geodesically convex energies. The crucial step of the argument is the reformulation of the variational approach in terms of a dynamic programming principle, and the use of the corresponding Hamilton–Jacobi equation. The result is applicable to a large class of nonlinear evolution PDEs including nonlinear drift-diffusion, Fokker–Planck, and heat flows on metric-measure spaces.
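For background, curves of maximal slope are commonly obtained as limits of the minimizing-movement (implicit Euler) scheme; the display below states that standard scheme and the maximal-slope inequality, not the paper's trajectory functional.

```latex
% Standard background, not the paper's functional: minimizing-movement
% (implicit Euler) scheme for the gradient flow of an energy E on a
% metric space (X, d), with time step \tau > 0:
\[
  x^{n+1} \in \operatorname*{arg\,min}_{x \in X}
    \Bigl\{\, E(x) + \frac{d(x, x^n)^2}{2\tau} \,\Bigr\},
\]
% and the defining inequality of a curve of maximal slope u:
\[
  \frac{\mathrm{d}}{\mathrm{d}t}\, E(u(t))
    \le -\tfrac{1}{2}\, |u'|^2(t) - \tfrac{1}{2}\, |\partial E|^2(u(t)),
\]
% where |u'| is the metric derivative and |\partial E| the local slope.
```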

20.
This paper discusses a three-point implicit block multistep method for the direct solution of general third-order initial value problems of ordinary differential equations using variable step size. The method is based on a pair of explicit and implicit Adams-type formulas implemented in PE(CE)^t mode; to avoid calculating divided difference and integration coefficients, all coefficients are stored in the code. The method approximates the numerical solution at three equally spaced points simultaneously. The Gauss-Seidel approach is used for the implementation of the proposed method. The local truncation error of the proposed scheme is studied. Numerical examples are given to illustrate the efficiency of the method.
