Similar Literature
Found 20 similar records (search time: 15 ms)
1.
A new nonmonotone algorithm is proposed and analyzed for unconstrained nonlinear optimization. The nonmonotone techniques applied in this algorithm are based on the estimate sequence proposed by Nesterov (Introductory Lectures on Convex Optimization: A Basic Course, 2004) for convex optimization. Under proper assumptions, global convergence of this algorithm is established for minimizing a general nonlinear objective function with Lipschitz continuous derivatives. For convex objective functions, the algorithm maintains the optimal convergence rate of convex optimization. In the numerical experiments, the algorithm is specified by employing safeguarded nonlinear conjugate gradient search directions. Numerical results show that the nonmonotone algorithm performs significantly better than the corresponding monotone algorithm on the unconstrained optimization problems in the CUTEr library (Bongartz et al. in ACM Trans. Math. Softw. 21:123–160, 1995).
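The paper's nonmonotone technique is built on Nesterov's estimate sequence; as a simpler illustration of the general idea, the sketch below uses the classical max-type nonmonotone Armijo rule (a trial step is accepted whenever it improves on the worst of the last few function values) with steepest-descent directions. All names and parameter choices here are illustrative, not the paper's.

```python
import numpy as np

def nonmonotone_gradient_descent(f, grad, x0, memory=5, c=1e-4,
                                 max_iter=500, tol=1e-8):
    """Gradient descent with a max-type nonmonotone Armijo line search:
    a trial step is accepted when it improves on the worst of the last
    `memory` function values, so individual iterations may increase f."""
    x = np.asarray(x0, dtype=float)
    history = [f(x)]                    # recent function values
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -g                          # steepest-descent direction
        f_ref = max(history[-memory:])  # nonmonotone reference value
        t = 1.0
        while f(x + t * d) > f_ref + c * t * g.dot(d):
            t *= 0.5                    # backtrack
        x = x + t * d
        history.append(f(x))
    return x

# Example: minimize the convex quadratic f(x) = 0.5 * ||x||^2.
x_star = nonmonotone_gradient_descent(lambda v: 0.5 * v.dot(v),
                                      lambda v: v,
                                      np.array([3.0, -4.0]))
```

Relative to a monotone Armijo rule, only the reference value `f_ref` changes; with `memory=1` the method reduces to ordinary monotone backtracking.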

2.
We present a new filter trust-region approach for solving unconstrained nonlinear optimization problems, making use of the filter technique introduced by Fletcher and Leyffer to generate non-monotone iterations. We also use the concept of a multidimensional filter introduced by Gould et al. (SIAM J. Optim. 15(1):17–38, 2004) and propose a new filter criterion with good properties, along with a new technique for reducing the size of the filter. We present two different convergence analyses for the algorithm. First, we show that at least one of the limit points of the sequence of iterates is first-order critical. Second, we prove the stronger property that all limit points are first-order critical for a modified version of our algorithm, and that, under suitable conditions, all limit points are second-order critical. Finally, we compare our algorithm with a natural trust-region algorithm and the filter trust-region algorithm of Gould et al. on the CUTEr unconstrained test problems (Gould et al. in ACM Trans. Math. Softw. 29(4):373–394, 2003). Numerical results demonstrate the efficiency and robustness of the proposed algorithms.

3.
An improvement is made to an adaptive automatic quadrature method due to Ninomiya (J. Inf. Process. 3:162–170, 1980), based on the Newton–Cotes rule, by incorporating the doubly-adaptive algorithm due to Favati, Lotti and Romani (ACM Trans. Math. Softw. 17:207–217, 1991; ACM Trans. Math. Softw. 17:218–232, 1991). We compare the performance of the present method with several others on various test problems, including Kahaner's (Computation of numerical quadrature formulas. In: Rice, J.R. (ed.) Mathematical Software, 229–259. Academic, Orlando, FL, 1971).
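Ninomiya's scheme is built on Newton–Cotes formulas; the sketch below instead uses the simplest adaptive rule of that family, recursive Simpson bisection with a Richardson error estimate, to illustrate how an adaptive quadrature concentrates work where the error estimate is large. Function names and tolerances are illustrative, not from the paper.

```python
import math

def adaptive_simpson(f, a, b, tol=1e-10):
    """Adaptive quadrature: recursively bisect any subinterval whose
    Richardson error estimate for Simpson's rule exceeds its share of
    the tolerance, so work concentrates where the integrand is hard."""
    def simpson(lo, hi):
        mid = 0.5 * (lo + hi)
        return (hi - lo) / 6.0 * (f(lo) + 4.0 * f(mid) + f(hi))

    def refine(lo, hi, whole, tol):
        mid = 0.5 * (lo + hi)
        left, right = simpson(lo, mid), simpson(mid, hi)
        err = (left + right - whole) / 15.0  # Richardson error estimate
        if abs(err) <= tol:
            return left + right + err        # extrapolated value
        return (refine(lo, mid, left, 0.5 * tol)
                + refine(mid, hi, right, 0.5 * tol))

    return refine(a, b, simpson(a, b), tol)

# Example: integrate sin on [0, pi]; the exact value is 2.
val = adaptive_simpson(math.sin, 0.0, math.pi)
```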

4.
In this paper we propose a fundamentally different conjugate gradient method, in which the well-known parameter βk is computed by approximating the Hessian/vector product through finite differences. For the search direction computation, the method uses a forward difference approximation to the Hessian/vector product together with a careful choice of the finite difference interval. For the step-length computation we suggest an acceleration scheme able to improve the efficiency of the algorithm. Under common assumptions, the method is proved to be globally convergent. It is shown that for uniformly convex functions the convergence of the accelerated algorithm is still linear, but the reduction in function values is significantly improved. Numerical comparisons with conjugate gradient algorithms including CONMIN by Shanno and Phua [D.F. Shanno, K.H. Phua, Algorithm 500, minimization of unconstrained multivariate functions, ACM Trans. Math. Softw. 2 (1976) 87–94], SCALCG by Andrei [N. Andrei, Scaled conjugate gradient algorithms for unconstrained optimization, Comput. Optim. Appl. 38 (2007) 401–416; N. Andrei, Scaled memoryless BFGS preconditioned conjugate gradient algorithm for unconstrained optimization, Optim. Methods Softw. 22 (2007) 561–571; N. Andrei, A scaled BFGS preconditioned conjugate gradient algorithm for unconstrained optimization, Appl. Math. Lett. 20 (2007) 645–650], the method of Li, Tang and Wei based on a new conjugacy condition [G. Li, C. Tang, Z. Wei, New conjugacy condition and related new conjugate gradient methods for unconstrained optimization, J. Comput. Appl. Math. 202 (2007) 523–539], and the truncated Newton method TN by Nash [S.G. Nash, Preconditioning of truncated-Newton methods, SIAM J. Sci. Stat. Comput. 6 (1985) 599–616], using a set of 750 unconstrained optimization test problems, show that the suggested algorithm outperforms these conjugate gradient algorithms as well as TN.
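The key ingredient, a forward-difference approximation to the Hessian/vector product, can be sketched as follows. The interval heuristic below is a common textbook choice, labeled as an assumption; the paper's carefully derived interval is not reproduced here.

```python
import numpy as np

def hessian_vector_fd(grad, x, v, eps=None):
    """Forward-difference approximation of the Hessian/vector product:
    H(x) v ~ (grad(x + eps*v) - grad(x)) / eps, at the cost of one
    extra gradient evaluation and no explicit Hessian."""
    x = np.asarray(x, dtype=float)
    v = np.asarray(v, dtype=float)
    if eps is None:
        # Simple heuristic interval balancing truncation and rounding
        # error (an illustrative choice, not the paper's).
        eps = 2.0 * np.sqrt(np.finfo(float).eps) * (1.0 + np.linalg.norm(x))
        eps /= max(np.linalg.norm(v), 1e-30)
    return (grad(x + eps * v) - grad(x)) / eps

# Example: f(x) = 0.5 x'Ax has gradient Ax and constant Hessian A.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
hv = hessian_vector_fd(lambda z: A @ z,
                       np.array([1.0, 2.0]), np.array([1.0, 0.0]))
```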

5.
This paper presents a coordinate gradient descent approach for minimizing the sum of a smooth function and a nonseparable convex function. We find a search direction by solving a subproblem obtained by a second-order approximation of the smooth function and adding a separable convex function. Under a local Lipschitzian error bound assumption, we show that the algorithm possesses global and local linear convergence properties. We also give some numerical tests (including image recovery examples) to illustrate the efficiency of the proposed method.
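For a concrete instance of the direction subproblem, take a diagonal second-order model diag(h) of the smooth part and let the added separable convex term be an ℓ1 penalty; the subproblem then has a closed-form soft-thresholding solution. The choice of ℓ1 as the separable term and all names below are illustrative assumptions, not the paper's general setting.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal map of t*|.|: shrink each entry of z toward zero by t."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def descent_direction(x, g, h, lam):
    """Direction subproblem with a diagonal second-order model diag(h)
    of the smooth part and the separable term lam*||.||_1:
        d = argmin_d  g'd + 0.5 * d' diag(h) d + lam * ||x + d||_1.
    The minimizer is given coordinate-wise by soft-thresholding."""
    return soft_threshold(x - g / h, lam / h) - x

# A single step: the gradient pulls x[0] upward, the l1 term shrinks it.
d = descent_direction(np.array([0.0]), np.array([-3.0]),
                      np.array([1.0]), 1.0)
```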

6.
We consider a class of unconstrained nonsmooth convex optimization problems in which the objective function is the sum of a convex smooth function on an open subset of matrices and a separable convex function on a set of matrices. This problem includes the covariance selection problem, which can be expressed as an ℓ1-penalized maximum likelihood estimation problem. In this paper, we propose a block coordinate gradient descent method (abbreviated as BCGD) for solving this class of nonsmooth separable problems, with the coordinate block chosen by a Gauss–Seidel rule. The method is simple, highly parallelizable, and suited for large-scale problems. We establish global convergence and, under a local Lipschitzian error bound assumption, a linear rate of convergence for this method. For the covariance selection problem, the method can terminate in O(n^3/ε) iterations with an ε-optimal solution. We compare the performance of the BCGD method with the first-order methods proposed by Lu (SIAM J Optim 19:1807–1827, 2009; SIAM J Matrix Anal Appl 31:2000–2016, 2010) for solving the covariance selection problem on randomly generated instances. Our numerical experience suggests that the BCGD method can be efficient for large-scale covariance selection problems with constraints.

7.
We show that the exact worst-case performance of fixed-step first-order methods for unconstrained optimization of smooth (possibly strongly) convex functions can be obtained by solving convex programs. Finding the worst-case performance of a black-box first-order method is formulated as an optimization problem over a set of smooth (strongly) convex functions and initial conditions. We develop closed-form necessary and sufficient conditions for smooth (strongly) convex interpolation, which provide a finite representation for those functions. This allows us to reformulate the worst-case performance estimation problem as an equivalent finite dimension-independent semidefinite optimization problem, whose exact solution can be recovered up to numerical precision. Optimal solutions to this performance estimation problem provide both worst-case performance bounds and explicit functions matching them, as our smooth (strongly) convex interpolation procedure is constructive. Our work builds on that of Drori and Teboulle (Math Program 145(1–2):451–482, 2014), who introduced and solved relaxations of the performance estimation problem for smooth convex functions. We apply our approach to different fixed-step first-order methods with several performance criteria, including objective function accuracy and gradient norm. We conjecture several numerically supported worst-case bounds on the performance of the fixed-step gradient, fast gradient and optimized gradient methods, both in the smooth convex and the smooth strongly convex cases, and deduce tight estimates of the optimal step size for the gradient method.

8.
An optimal method for stochastic composite optimization
This paper considers an important class of convex programming (CP) problems, namely stochastic composite optimization (SCO), whose objective function is given by the sum of general nonsmooth and smooth stochastic components. Since SCO covers non-smooth, smooth and stochastic CP as special cases, a valid lower bound on the rate of convergence for solving these problems is known from the classic complexity theory of convex programming. Note, however, that optimization algorithms achieving this lower bound had not previously been developed. In this paper, we show that the simple mirror-descent stochastic approximation method exhibits the best-known rate of convergence for solving these problems. Our major contribution is to introduce the accelerated stochastic approximation (AC-SA) algorithm, based on Nesterov's optimal method for smooth CP (Nesterov in Doklady AN SSSR 269:543–547, 1983; Nesterov in Math Program 103:127–152, 2005), and to show that the AC-SA algorithm achieves the aforementioned lower bound on the rate of convergence for SCO. To the best of our knowledge, it is also the first universally optimal algorithm in the literature for solving non-smooth, smooth and stochastic CP problems. We illustrate the significant advantages of the AC-SA algorithm over existing methods in the context of solving a special but broad class of stochastic programming problems.
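For contrast with the accelerated AC-SA scheme, here is a minimal sketch of the plain (unaccelerated) stochastic approximation baseline it improves upon: Euclidean mirror descent with O(1/√k) step sizes and iterate averaging. The one-dimensional test problem and all parameters are illustrative assumptions.

```python
import random

def averaged_sgd(stoch_grad, x0, steps=20000, gamma0=0.5, seed=0):
    """Plain stochastic approximation (mirror descent in Euclidean
    geometry) with O(1/sqrt(k)) step sizes and iterate averaging;
    this is the unaccelerated baseline that AC-SA improves upon."""
    rng = random.Random(seed)
    x, avg = float(x0), 0.0
    for k in range(1, steps + 1):
        g = stoch_grad(x, rng)          # noisy gradient sample
        x -= gamma0 / k ** 0.5 * g      # robust step-size schedule
        avg += (x - avg) / k            # running average of iterates
    return avg

# Example: minimize E[0.5*(x - xi)^2] with xi ~ N(1, 0.1^2); x* = 1.
x_hat = averaged_sgd(lambda x, rng: x - rng.gauss(1.0, 0.1), 0.0)
```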

9.
We extend the classical affine scaling interior trust-region algorithm for linearly constrained smooth minimization to the nonsmooth case where the gradient of the objective function is only locally Lipschitzian. We propose and analyze a new affine scaling trust-region method, combined with a nonmonotonic interior backtracking line search technique, for solving linearly constrained LC^1 optimization problems, in which the second-order derivative of the objective function is not explicitly required to be locally Lipschitzian. The general trust-region subproblem in the proposed algorithm is defined by minimizing an augmented affine scaling quadratic model, which carries both first- and second-order information about the objective function, subject only to an affine scaling ellipsoidal constraint in a null subspace of the augmented equality constraints. The global convergence and fast local convergence rate of the proposed algorithm are established under reasonable conditions that do not require the objective function to be twice smooth. Applications of the algorithm to some nonsmooth optimization problems are discussed.

10.
Optimization 61(8):1173–1197, 2012
We consider a class of derivative-free descent methods for solving the second-order cone complementarity problem (SOCCP). The algorithm is based on the Fischer–Burmeister (FB) unconstrained minimization reformulation of the SOCCP and uses a convex combination of the negative partial gradients of the FB merit function ψ_FB as the search direction. We establish global convergence of the algorithm under monotonicity and the uniform Jordan P-property, and show that under strong monotonicity the sequence of merit function values converges to zero at a linear rate. In particular, the rate of convergence depends on the structure of the second-order cones. Numerical comparisons are also made with the limited-memory BFGS method used by Chen and Tseng (An unconstrained smooth minimization reformulation of the second-order cone complementarity problem, Math. Program. 104(2005), pp. 293–327), which confirm the theoretical results and the effectiveness of the algorithm.
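In the scalar case the second-order cone reduces to the nonnegative half-line, and the Fischer–Burmeister reformulation is easy to state: φ(a, b) = a + b − √(a² + b²) vanishes exactly when a ≥ 0, b ≥ 0 and ab = 0. A minimal sketch of this scalar case follows; the vector-valued SOC version replaces squares and square roots by Jordan-algebra operations, which are not reproduced here.

```python
import math

def fischer_burmeister(a, b):
    """FB function phi(a, b) = a + b - sqrt(a^2 + b^2); it vanishes
    exactly when a >= 0, b >= 0 and a*b = 0 (complementarity)."""
    return a + b - math.sqrt(a * a + b * b)

def fb_merit(a, b):
    """Merit function 0.5 * phi^2, whose unconstrained minimization
    (down to value zero) solves the scalar complementarity condition."""
    return 0.5 * fischer_burmeister(a, b) ** 2
```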

11.
Let C be a nonempty closed convex subset of a 2-uniformly convex and uniformly smooth Banach space E, and let {A_n} (n ∈ ℕ) be a family of monotone and Lipschitz continuous mappings of C into E*. In this article, we consider an improved gradient method, via the hybrid method in mathematical programming [10], for solving the variational inequality problem for {A_n}, and prove strong convergence theorems. We obtain several results that improve well-known results in real 2-uniformly convex and uniformly smooth Banach spaces and in real Hilbert spaces.

12.
In this paper, we introduce a new class of nonsmooth convex functions called SOS-convex semialgebraic functions, extending the recently proposed notion of SOS-convex polynomials. This class of nonsmooth convex functions covers many common nonsmooth functions arising in applications, such as the Euclidean norm, the maximum eigenvalue function, and least squares functions with ℓ1-regularization or elastic net regularization used in statistics and compressed sensing. We show that, under commonly used strict feasibility conditions, the optimal value and an optimal solution of SOS-convex semialgebraic programs can be found by solving a single semidefinite programming problem (SDP). We achieve these results by using tools from semialgebraic geometry, the convex–concave minimax theorem, and a recently established Jensen-type inequality for SOS-convex polynomials. As an application, we show that robust SOS-convex optimization problems under restricted spectrahedron data uncertainty enjoy exact SDP relaxations. This extends the existing exact SDP relaxation result for restricted ellipsoidal data uncertainty and answers an open question in the literature on how to recover a robust solution of uncertain SOS-convex polynomial programs from its semidefinite programming relaxation in this broader setting.

13.
Optimization 61(11):1963–2001, 2012

This paper introduces the Fejér-monotone hybrid steepest descent method (FM-HSDM), a new member of the HSDM family of algorithms, for solving affinely constrained minimization tasks in real Hilbert spaces, where convex smooth and non-smooth losses compose the objective function. FM-HSDM offers sequences of estimates which converge weakly, and under certain hypotheses strongly, to solutions of the task at hand. In contrast to its HSDM precursors, FM-HSDM enjoys Fejér monotonicity, its step-size parameter stays constant across iterations to promote convergence speed-ups of the sequence of estimates to a minimizer, and only Lipschitz continuity, not strong monotonicity, of the derivative of the smooth-loss function is needed to ensure convergence. FM-HSDM utilizes fixed-point theory, variational inequalities and affine-nonexpansive mappings to accommodate affine constraints in a more versatile way than state-of-the-art primal–dual techniques and the alternating direction method of multipliers do. Recursions can be tuned to achieve a low computational footprint, well-suited for large-scale optimization tasks, without compromising convergence guarantees. Results on the rate of convergence to an optimal point are also presented. Finally, numerical tests on synthetic data validate the theoretical findings.

14.
In applications such as signal processing and statistics, many problems involve finding sparse solutions to under-determined linear systems of equations. These problems can be formulated as structured nonsmooth optimization problems, i.e., the minimization of ℓ1-regularized linear least squares objectives. In this paper, we propose a block coordinate gradient descent method (abbreviated as CGD) to solve the more general ℓ1-regularized convex minimization problem, i.e., the problem of minimizing an ℓ1-regularized convex smooth function. We establish a Q-linear convergence rate for our method when the coordinate block is chosen by a Gauss–Southwell-type rule to ensure sufficient descent. We propose efficient implementations of the CGD method and report numerical results for solving large-scale ℓ1-regularized linear least squares problems arising in compressed sensing and image deconvolution, as well as large-scale ℓ1-regularized logistic regression problems for feature selection in data classification. Comparison with several state-of-the-art algorithms specifically designed for solving large-scale ℓ1-regularized linear least squares or logistic regression problems suggests that an efficiently implemented CGD method may outperform these algorithms even though the CGD method is not specifically designed for these special problem classes.
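A minimal sketch of coordinate descent with a Gauss–Southwell-type selection rule for ℓ1-regularized least squares follows, illustrating the closed-form soft-thresholding coordinate steps. This is a simplified single-coordinate variant with illustrative names and parameters, not the paper's block implementation.

```python
import numpy as np

def cgd_lasso(A, b, lam, max_iter=1000, tol=1e-10):
    """Coordinate descent for min 0.5*||Ax - b||^2 + lam*||x||_1,
    updating at each iteration the single coordinate whose closed-form
    soft-thresholding step is largest (a Gauss-Southwell-type rule)."""
    m, n = A.shape
    x = np.zeros(n)
    h = np.sum(A * A, axis=0)           # per-coordinate curvature A_j'A_j
    r = A @ x - b                       # residual
    for _ in range(max_iter):
        g = A.T @ r                     # gradient of the smooth part
        z = x - g / h
        x_new = np.sign(z) * np.maximum(np.abs(z) - lam / h, 0.0)
        d = x_new - x                   # candidate coordinate steps
        j = int(np.argmax(np.abs(d)))   # Gauss-Southwell-type choice
        if abs(d[j]) < tol:
            break                       # no coordinate can move: done
        x[j] += d[j]
        r += d[j] * A[:, j]             # cheap residual update
    return x

# Example with A = I: the solution is b soft-thresholded by lam.
x_sol = cgd_lasso(np.eye(2), np.array([3.0, 0.5]), 1.0)
```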

15.
We consider a class of smoothing methods for minimization problems where the feasible set is convex but the objective function is not convex, not differentiable and perhaps not even locally Lipschitz at the solutions. Such optimization problems arise from wide applications including image restoration, signal reconstruction, variable selection, optimal control, stochastic equilibrium and spherical approximations. In this paper, we focus on smoothing methods for solving such optimization problems, which use the structure of the minimization problems and compositions of smoothing functions for the plus function (x)_+ = max(x, 0). Many existing optimization algorithms and codes can be used in the inner iteration of the smoothing methods. We present properties of the smoothing functions and the gradient consistency of the subdifferential associated with a smoothing function. Moreover, we describe how to update the smoothing parameter in the outer iteration of the smoothing methods to guarantee convergence of the smoothing methods to a stationary point of the original minimization problem.
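One common smoothing function for the plus function is the CHKS-type function below, which converges uniformly to (x)_+ as μ → 0 and whose derivative stays in [0, 1]. The paper treats a general class of such smoothings; this is just one illustrative member.

```python
import math

def smoothed_plus(x, mu):
    """CHKS-type smoothing of the plus function (x)_+ = max(x, 0):
    0.5*(x + sqrt(x^2 + 4*mu^2)) tends to (x)_+ as mu -> 0."""
    return 0.5 * (x + math.sqrt(x * x + 4.0 * mu * mu))

def smoothed_plus_grad(x, mu):
    """Derivative of the smoothing; it lies in [0, 1] and approaches
    the subdifferential of (x)_+ as mu -> 0 (gradient consistency)."""
    return 0.5 * (1.0 + x / math.sqrt(x * x + 4.0 * mu * mu))
```

Driving μ to zero across outer iterations, as the abstract describes, recovers the nonsmooth problem in the limit while every inner problem stays smooth.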

16.
The interval-bounded generalized trust region subproblem (GTRS) consists in minimizing a general quadratic objective, q_0(x) → min, subject to an upper- and lower-bounded general quadratic constraint, ℓ ≤ q_1(x) ≤ u. This means that there are no definiteness assumptions on either quadratic function. We first study characterizations of optimality for this implicitly convex problem under a constraint qualification and show that it can be assumed without loss of generality. We next classify the GTRS into easy case and hard case instances, and demonstrate that the upper- and lower-bounded general problem can be reduced to an equivalent equality constrained problem after identifying suitable generalized eigenvalues and possibly solving a sparse system. We then discuss how the Rendl–Wolkowicz algorithm proposed in Fortin and Wolkowicz (Optim. Methods Softw. 19(1):41–67, 2004) and Rendl and Wolkowicz (Math. Program. 77(2, Ser. B):273–299, 1997) can be extended to solve the resulting equality constrained problem, highlighting the connection between the GTRS and the problem of finding minimum generalized eigenvalues of a parameterized matrix pencil. Finally, we present numerical results to illustrate this algorithm.

17.
In this paper, we analyse the convergence rate of the sequence of objective function values of a primal-dual proximal-point algorithm recently introduced in the literature for solving a primal convex optimization problem, whose objective is the sum of linearly composed infimal convolutions and of nonsmooth and smooth convex functions, together with its Fenchel-type dual. The theoretical part is illustrated by numerical experiments in image processing.

18.
A trust region algorithm for minimization of locally Lipschitzian functions
Qi, Liqun; Sun, Jie. Mathematical Programming 66(1–3):25–43, 1994
The classical trust region algorithm for smooth nonlinear programs is extended to the nonsmooth case where the objective function is only locally Lipschitzian. At each iteration, an objective function that carries both first and second order information is minimized over a trust region. The term that carries the first order information is an iteration function that may not explicitly depend on subgradients or directional derivatives. We prove that the algorithm is globally convergent. This convergence result extends the result of Powell for minimization of smooth functions, the result of Yuan for minimization of composite convex functions, and the result of Dennis, Li and Tapia for minimization of regular functions. In addition, compared with the recent model of Pang, Han and Rangaraj for minimization of locally Lipschitzian functions using a line search, this algorithm has the same convergence property without assuming positive definiteness and uniform boundedness of the second order term. Applications of the algorithm to various nonsmooth optimization problems are discussed. (Acknowledgements: the work was supported in part by the Australian Research Council; part of it was carried out during a visit to the Department of Applied Mathematics at the University of New South Wales.)
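The paper extends the classical smooth trust-region method; as background, here is a minimal sketch of that smooth prototype using the Cauchy point (the model minimizer along the steepest-descent direction inside the region) and the standard ratio test for accepting steps and resizing the region. Parameter values are the usual textbook ones, not the paper's.

```python
import numpy as np

def trust_region_cauchy(f, grad, hess, x0, delta0=1.0,
                        max_iter=200, tol=1e-6):
    """Classical trust-region method for a smooth objective, using the
    Cauchy point: the quadratic model is minimized along -grad inside
    the region, and the region is resized via the ratio rho of actual
    to predicted decrease."""
    x, delta = np.asarray(x0, dtype=float), delta0
    for _ in range(max_iter):
        g, H = grad(x), hess(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        gHg = g @ H @ g
        if gHg > 0:
            t = min(gnorm ** 2 / gHg, delta / gnorm)
        else:
            t = delta / gnorm              # negative curvature: go to boundary
        s = -t * g                         # Cauchy step
        pred = -(g @ s + 0.5 * s @ H @ s)  # predicted decrease (> 0)
        rho = (f(x) - f(x + s)) / pred     # actual / predicted decrease
        if rho > 0.25:
            x = x + s                      # accept the step
            if rho > 0.75:
                delta *= 2.0               # model is good: expand region
        else:
            delta *= 0.25                  # model is poor: shrink region
    return x

# Example: convex quadratic 0.5*x'Ax - b'x with minimizer A^{-1} b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_min = trust_region_cauchy(lambda v: 0.5 * v @ A @ v - b @ v,
                            lambda v: A @ v - b,
                            lambda v: A,
                            np.zeros(2))
```

The paper's contribution replaces the smooth first-order term here with an iteration function suited to locally Lipschitzian objectives.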

19.
This paper shows that the optimal subgradient algorithm (OSGA), which uses first-order information to solve convex optimization problems with optimal complexity, can be used to efficiently solve arbitrary bound-constrained convex optimization problems. This is done by constructing an explicit method, as well as an inexact scheme, for solving the bound-constrained rational subproblem required by OSGA. This leads to an efficient implementation of OSGA for large-scale problems in applications arising from signal and image processing, machine learning and statistics. Numerical experiments demonstrate the promising performance of OSGA on such problems. A software package implementing OSGA for bound-constrained convex problems is available.

20.
The aim of this paper is to develop an efficient algorithm for solving a class of unconstrained nondifferentiable convex optimization problems in finite dimensional spaces. To this end we first formulate its Fenchel dual problem and regularize it in two steps into a differentiable, strongly convex problem with Lipschitz continuous gradient. The doubly regularized dual problem is then solved via a fast gradient method, accelerating the resulting convergence scheme. The theoretical results are finally applied to an ℓ1-regularization problem arising in image processing.
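A minimal sketch of the fast gradient method for the function class produced by the double regularization (smooth, strongly convex, with Lipschitz continuous gradient), using Nesterov's constant momentum coefficient. The test quadratic and all names are illustrative, not the paper's dual problem.

```python
import numpy as np

def fast_gradient(grad, x0, L, mu, max_iter=500, tol=1e-10):
    """Nesterov's fast gradient method for an L-smooth, mu-strongly
    convex function, with constant momentum coefficient
    beta = (sqrt(L) - sqrt(mu)) / (sqrt(L) + sqrt(mu))."""
    x = y = np.asarray(x0, dtype=float)
    beta = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))
    for _ in range(max_iter):
        g = grad(y)
        if np.linalg.norm(g) < tol:
            break
        x_next = y - g / L                # gradient step at extrapolated point
        y = x_next + beta * (x_next - x)  # momentum extrapolation
        x = x_next
    return x

# Example: quadratic with Hessian eigenvalues mu = 1 and L = 10.
A = np.diag([1.0, 10.0])
b = np.array([1.0, 1.0])
sol = fast_gradient(lambda v: A @ v - b, np.zeros(2), L=10.0, mu=1.0)
```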


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)