Similar Documents
20 similar documents found.
1.
Gradient methods for minimizing composite functions
In this paper we analyze several new methods for solving optimization problems whose objective function is a sum of two terms: one is smooth and given by a black-box oracle, and the other is a simple general convex function with known structure. Despite the absence of good properties of the sum, such problems, both in convex and nonconvex cases, can be solved with efficiency typical of the first part of the objective. For convex problems of the above structure, we consider primal and dual variants of the gradient method (with convergence rate $O\left(\frac{1}{k}\right)$), and an accelerated multistep version with convergence rate $O\left(\frac{1}{k^2}\right)$, where $k$ is the iteration counter. For nonconvex problems with this structure, we prove convergence to a point from which there is no descent direction. In contrast, we show that for general nonsmooth, nonconvex problems, even resolving the question of whether a descent direction exists from a point is NP-hard. For all methods, we suggest some efficient "line search" procedures and show that the additional computational work needed to estimate the unknown problem-class parameters multiplies the complexity of each iteration by at most a small constant factor. We also present the results of preliminary computational experiments, which confirm the superiority of the accelerated scheme.
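As a concrete (and deliberately simplified) illustration of the composite setting, the sketch below implements the basic $O(1/k)$ proximal gradient step for $g(x)+h(x)$ with $h=\lambda\|x\|_1$, whose prox is soft-thresholding; the least-squares instance, the fixed step $1/L$, and all names are assumptions made for the example, not the paper's exact methods.

```python
import numpy as np

def soft_threshold(v, t):
    """Prox operator of t * ||.||_1 (the 'simple' nonsmooth term)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_gradient(grad_g, prox_h, x0, L, iters=500):
    """Basic O(1/k) scheme: x+ = prox_{h/L}(x - grad_g(x)/L)."""
    x = x0
    for _ in range(iters):
        x = prox_h(x - grad_g(x) / L, 1.0 / L)
    return x

# Toy instance: g(x) = 0.5*||Ax - b||^2 with L = lambda_max(A'A), h = lam*||x||_1.
rng = np.random.default_rng(0)
A, b, lam = rng.standard_normal((20, 50)), rng.standard_normal(20), 0.1
L = np.linalg.eigvalsh(A.T @ A).max()
x = prox_gradient(lambda x: A.T @ (A @ x - b),
                  lambda v, t: soft_threshold(v, lam * t),
                  np.zeros(50), L)
```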

2.
Portfolio selection with higher moments is an NP-hard nonconvex polynomial optimization problem. In this paper, we propose an efficient local optimization approach based on DC (Difference of Convex functions) programming, called DCA (DC Algorithm), which solves the nonconvex program via a sequence of convex ones. Thanks to the proposed special DC decomposition, DCA constructs, at each iteration, a convex quadratic subproblem that can be solved in explicit form. Computational results show that DCA almost always converges to global optimal solutions when compared with global optimization methods (Gloptipoly, branch-and-bound), and that it outperforms several standard local optimization algorithms.
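A generic DCA iteration can be sketched in a few lines. The toy instance below (an indefinite quadratic over a box, with a proximal-style DC decomposition chosen for illustration) mirrors the structure described above, in that each convex subproblem has an explicit solution, but it is not the paper's portfolio model:

```python
import numpy as np

def dca_box_quadratic(A, rho, x0, lo=-1.0, hi=1.0, iters=200):
    """DCA for f(x) = 0.5 x'Ax over a box, written as g - h with
    g(x) = 0.5*rho*||x||^2 and h(x) = 0.5 x'(rho*I - A)x, both convex
    when rho >= lambda_max(A).  Each iteration linearizes h and solves
    the resulting convex subproblem in closed form (a clipped scaling)."""
    x = x0
    for _ in range(iters):
        y = rho * x - A @ x            # a (sub)gradient of h at x
        x = np.clip(y / rho, lo, hi)   # argmin over the box of g(x) - y'x
    return x

rng = np.random.default_rng(1)
M = rng.standard_normal((8, 8))
A = (M + M.T) / 2                      # indefinite symmetric matrix
rho = np.linalg.eigvalsh(A).max() + 1.0
x_star = dca_box_quadratic(A, rho, rng.standard_normal(8))
```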

3.
In this paper, a nonsmooth bundle algorithm is presented for minimizing the maximum eigenvalue of a nonconvex smooth matrix-valued mapping. The bundle method uses an oracle that separately computes function and subgradient information for the convex (eigenvalue) part, and function and derivative values for the smooth mapping. With this information, at each iteration we replace the smooth inner mapping by its Taylor linearization around the current serious step. To solve the resulting convex eigenvalue subproblem with affine mapping faster, we adopt a second-order bundle method based on $\mathcal{UV}$-decomposition theory. A backtracking test yields a better approximation of the objective function. Quadratic convergence of this special bundle method is established under some additional assumptions. We then apply the method to a particular instance of nonconvex eigenvalue optimization, namely bilinear matrix inequality problems.
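The eigenvalue oracle mentioned above can be made concrete: for an affine symmetric mapping $A(x)=A_0+\sum_i x_i A_i$, a subgradient of $\lambda_{\max}(A(x))$ comes from a leading eigenvector. The sketch below illustrates this standard fact, not the paper's bundle machinery; all names are illustrative:

```python
import numpy as np

def lambda_max_and_subgrad(A0, As, x):
    """For the affine mapping A(x) = A0 + sum_i x[i]*As[i] (symmetric),
    return lambda_max(A(x)) and a subgradient: when the top eigenvalue
    is simple with unit eigenvector v, component i equals v' As[i] v."""
    A = A0 + sum(xi * Ai for xi, Ai in zip(x, As))
    w, V = np.linalg.eigh(A)
    v = V[:, -1]                       # eigenvector of the largest eigenvalue
    g = np.array([v @ Ai @ v for Ai in As])
    return w[-1], g

rng = np.random.default_rng(2)
sym = lambda M: (M + M.T) / 2
A0 = sym(rng.standard_normal((5, 5)))
As = [sym(rng.standard_normal((5, 5))) for _ in range(3)]
val, g = lambda_max_and_subgrad(A0, As, np.zeros(3))
```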

4.
In this paper a new approach for the global solution of nonconvex MINLP (Mixed Integer NonLinear Programming) problems containing signomial (generalized geometric) expressions is proposed and illustrated. By applying different variable transformation techniques and a discretization scheme, a lower-bounding convex MINLP problem can be derived. The convexified MINLP problem can be solved with standard methods. The key element of the approach is that all transformations are applied termwise, so all convex parts of the problem are left unaffected by the transformations. The method is illustrated on four example problems.

5.
6.
For the problem of minimizing a sum of ratios of convex functions over a nonconvex region, a deterministic method is given for finding its global optimal solution. The method is based on a branch-and-bound framework. First, by introducing new variables, the original problem is equivalently transformed into a d.c. programming problem; then subgradients and convex envelopes are used to construct a relaxed linear program, so that the key lower-bounding task is reduced to a sequence of linear programming problems. These linear programs are easy to solve and of fixed size, which makes them easier to implement and to apply in practice. Branching uses simplicial bisection, which not only guarantees exhaustiveness but also keeps the linear programs smaller. Theoretical analysis and numerical experiments show that the proposed algorithm is feasible and effective.
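The simplicial bisection used for branching admits a short sketch: split a simplex through the midpoint of its longest edge, which is what yields the exhaustiveness claimed above. The vertex-rows representation below is an assumption made for illustration:

```python
import numpy as np
from itertools import combinations

def bisect_simplex(V):
    """Split the simplex whose vertices are the rows of V at the midpoint
    of its longest edge; returns the two sub-simplices.  Repeated bisection
    drives the simplex diameters to zero, giving exhaustive branching."""
    i, j = max(combinations(range(len(V)), 2),
               key=lambda e: np.linalg.norm(V[e[0]] - V[e[1]]))
    mid = (V[i] + V[j]) / 2.0
    S1, S2 = V.copy(), V.copy()
    S1[i], S2[j] = mid, mid
    return S1, S2

V = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # a 2-D simplex
S1, S2 = bisect_simplex(V)
```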

7.
This paper considers planar location problems with rectilinear distance and barriers, where the objective function is any convex, nondecreasing function of distance. Such problems have a nonconvex feasible region and a nonconvex objective function. Based on an equivalent problem with modified barriers, derived in a companion paper [3], the nonconvex feasible set is partitioned into a network and rectangular cells. The rectangular cells are further partitioned into a polynomial number of convex subcells, called convex domains, on which the distance function, and hence the objective function, is convex. The problem is then solved over the network and the convex domains for an optimal solution. Bounds are given that reduce the number of convex domains to be examined; this number is bounded above by a polynomial in the size of the problem.

8.
A rigorous decomposition approach is presented for solving separable mixed-integer nonlinear programs in which the participating functions are nonconvex. The proposed algorithms solve an alternating sequence of Relaxed Master Problems (mixed-integer linear programs) and two nonlinear programming problems (NLPs). The algorithms generate a sequence of valid nondecreasing lower bounds and upper bounds, and converge in a finite number of iterations. A Primal Bounding Problem is introduced: a convex NLP solved at each iteration to derive valid outer approximations of the nonconvex functions in the continuous space. Two decomposition algorithms are presented in this work. On finite termination, the first yields the global solution of the original nonconvex MINLP, and the second finds a rigorous bound on the global solution. Convergence and optimality properties, and refinements of the algorithms for efficient implementation, are presented. Finally, numerical results on example problems are compared with currently available algorithms, illustrating the potential benefits of the proposed approach.

9.
A class of nonconvex minimization problems can be classified as hidden convex minimization problems: a nonconvex minimization problem is called hidden convex if there exists an equivalent transformation under which it becomes a convex minimization problem. This paper derives sufficient conditions, independent of any particular transformation, for identifying such seemingly nonconvex minimization problems that are in fact equivalent to convex ones. Global optimality can therefore be achieved for this class of hidden convex optimization problems by local search methods. The results extend the reach of convex minimization by identifying nonconvex representations of problems that are equivalent to convex ones.
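A classical example of hidden convexity (a textbook illustration, not taken from the paper): a posynomial-type objective is nonconvex in $x$ but becomes convex under the log transform $y=\log x$, so a plain local method finds a global minimizer:

```python
import numpy as np

# Nonconvex in x > 0:  p(x) = x1/x2 + x2/x1  (minimized wherever x1 = x2).
# Under y = log(x), f(y) = exp(y1 - y2) + exp(y2 - y1) is convex (a sum of
# exponentials of affine functions), so plain gradient descent is global.
def f(y):
    return np.exp(y[0] - y[1]) + np.exp(y[1] - y[0])

def df(y):
    d = np.exp(y[0] - y[1]) - np.exp(y[1] - y[0])
    return np.array([d, -d])

y = np.array([2.0, -1.0])
for _ in range(200):
    y -= 0.1 * df(y)
x = np.exp(y)   # maps back to a global minimizer of p (x1 == x2, p = 2)
```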

10.
In this paper, we consider an optimization problem that minimizes a convex function over the weakly efficient set of a multiobjective programming problem. To solve such a problem, we propose an inner approximation algorithm in which two kinds of convex subproblems are solved successively. These convex subproblems are fairly easy to solve, which makes the proposed algorithm practically useful. The algorithm always terminates after finitely many iterations by slightly compromising the weak efficiency with respect to the multiobjective programming problem. Moreover, for the subproblem solved at each iteration of the algorithm, we suggest a procedure for eliminating redundant constraints.

11.
Minimax optimal design of sonar transducer arrays can be formulated as a nonlinear program with many convex quadratic constraints and a nonconvex quadratic efficiency constraint. The variables of this problem are a scaling and a phase shift applied to the output of each sensor. The problem is solved by applying Lagrangian relaxation to the convex quadratic constraints. Extensive computational experience shows that this approach can efficiently find near-optimal solutions of problems with up to 391 variables and 579 constraints.

12.
The method of centers is a well-known method for solving nonlinear programming problems having inequality constraints. Pironneau and Polak have recently presented a new version of this method. In the new method, the direction of search is obtained, at each iteration, by solving a convex quadratic programming problem. This direction-finding subprocedure is essentially insensitive to the dimension of the space on which the problem is defined. Moreover, the method of Pironneau and Polak is known to converge linearly for finite-dimensional convex programs for which the objective function has a positive-definite Hessian near the solution (and for which the functions involved are twice continuously differentiable). In the present paper, the method and a completely implementable version of it are shown to converge linearly for a very general class of finite-dimensional problems; the class is determined by a second-order sufficiency condition and includes both convex and nonconvex problems. The arguments employed here are based on the indirect sufficiency method of Hestenes. Furthermore, the arguments can be modified to prove linear convergence for a certain class of infinite-dimensional convex problems, thus providing an answer to a conjecture made by Pironneau and Polak.

13.
This paper addresses an algorithm for minimizing the sum of a convex function and a product of two linear functions over a polytope. It is shown that this nonconvex minimization problem can be solved by solving a sequence of convex programming problems. The basic idea of the algorithm is to embed the original problem into a problem in a higher-dimensional space and to apply a parametric programming (path-following) approach. It is also shown that the same idea applies to a generalized linear fractional programming problem whose objective function is the sum of a convex function and a linear fractional function.
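The "sequence of convex programs" idea can be sketched in a special case (assumptions: $f(x)=\tfrac12\|x\|^2$, no polytope, and a crude grid over the parameter in place of the paper's path following): fixing $t=d^{\top}x$ leaves a convex subproblem that is solvable here from its KKT system.

```python
import numpy as np

def inner_value(t, c, d):
    """Convex subproblem for fixed t:  min 0.5||x||^2 + t*(c'x)  s.t. d'x = t.
    KKT system: x = -(t*c + mu*d), with mu chosen so that d'x = t."""
    mu = -t * (1.0 + c @ d) / (d @ d)
    x = -(t * c + mu * d)
    return 0.5 * x @ x + t * (c @ x), x

rng = np.random.default_rng(3)
c, d = rng.standard_normal(6), rng.standard_normal(6)
# Outer sweep over t = d'x -- a crude stand-in for path following; the
# sweep assumes the minimizer lies in the scanned range.
ts = np.linspace(-5, 5, 1001)
t_best = min(ts, key=lambda t: inner_value(t, c, d)[0])
val, x_best = inner_value(t_best, c, d)
```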

14.
In this paper, we present a global optimization method for solving nonconvex mixed integer nonlinear programming (MINLP) problems. A convex overestimation of the feasible region is obtained by replacing the nonconvex constraint functions with convex underestimators. For signomial functions, single-variable power and exponential transformations are used to obtain the convex underestimators. For more general nonconvex functions, two versions of the so-called αBB underestimator, valid for twice-differentiable functions, are integrated into the reformulation framework. However, in contrast to branch-and-bound type algorithms, no direct branching is performed; instead, a piecewise convex reformulation is used to convexify the entire problem in an extended variable space, and the reformulated problem is then solved by a convex MINLP solver. As the piecewise linear approximations are made finer, the solutions of the convexified and overestimated problems form a sequence converging to a global optimal solution. The result is an easily implementable algorithm for solving a very general class of optimization problems.

15.
The self-scaling quasi-Newton method solves an unconstrained optimization problem by scaling the Hessian approximation matrix before it is updated at each iteration, so as to avoid possibly large eigenvalues in the Hessian approximations of the objective function. It has been proved in the literature that this method is globally and superlinearly convergent when the objective function is convex (or even uniformly convex). We propose to solve unconstrained nonconvex optimization problems by a self-scaling BFGS algorithm with nonmonotone line search, an approach recognized in numerical practice as competitive for solving large-scale nonlinear problems. We consider two different nonmonotone line search forms and study the global convergence of the resulting nonmonotone self-scaling BFGS algorithms. We prove that, under a condition weaker than those in the literature, both forms of the algorithm are globally convergent for unconstrained nonconvex optimization problems.
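For concreteness, here is a compact sketch combining an Oren–Luenberger-type self-scaling factor with a Grippo–Lampariello–Lucidi-style nonmonotone Armijo test in an inverse-BFGS update; the constants, the placement of the scaling, and the test problem are assumptions made for illustration, not the authors' exact algorithms:

```python
import numpy as np

def nonmonotone_self_scaling_bfgs(f, grad, x0, M=10, iters=100,
                                  sigma=1e-4, beta=0.5):
    n = len(x0)
    x, H = x0.astype(float), np.eye(n)
    hist = [f(x)]                                   # recent f-values for the test
    for _ in range(iters):
        g = grad(x)
        p = -H @ g
        # Nonmonotone Armijo: compare against the max of the last M values.
        t, fmax = 1.0, max(hist[-M:])
        while f(x + t * p) > fmax + sigma * t * (g @ p):
            t *= beta
        s = t * p
        x_new = x + s
        y = grad(x_new) - g
        if s @ y > 1e-12:                           # curvature safeguard
            tau = (s @ y) / (y @ H @ y)             # self-scaling factor
            H *= tau                                # scale before updating
            rho = 1.0 / (s @ y)
            V = np.eye(n) - rho * np.outer(s, y)
            H = V @ H @ V.T + rho * np.outer(s, s)  # inverse BFGS update
        x = x_new
        hist.append(f(x))
    return x

# Toy nonconvex test problem (Rosenbrock), a stand-in for the paper's suite.
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                           200*(x[1] - x[0]**2)])
x = nonmonotone_self_scaling_bfgs(f, grad, np.array([-1.2, 1.0]))
```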

16.
We present an interior point approach to the zero–one integer programming feasibility problem based on the minimization of a nonconvex potential function. Given a polytope defined by a set of linear inequalities, this procedure generates a sequence of strict interior points of this polytope, such that each consecutive point reduces the value of the potential function. An integer solution (not necessarily feasible) is generated at each iteration by a rounding scheme. The direction used to determine the new iterate is computed by solving a nonconvex quadratic program on an ellipsoid. We illustrate the approach by considering a class of difficult set covering problems that arise from computing the 1-width of the incidence matrix of Steiner triple systems.  相似文献   

17.
Linearly constrained optimization problems with simple bounds are considered in the present work. First, a preconditioned spectral gradient method is defined for the case in which no simple bounds are present. This algorithm can be viewed as a quasi-Newton method in which the approximate Hessians satisfy a weak secant equation. The spectral choice of steplength is embedded into the Hessian approximation and the whole process is combined with a nonmonotone line search strategy. The simple bounds are then taken into account by placing them in an exponential penalty term that modifies the objective function. The exponential penalty scheme defines the outer iterations of the process. Each outer iteration involves the application of the previously defined preconditioned spectral gradient method for linear equality constrained problems. Therefore, an equality constrained convex quadratic programming problem needs to be solved at every inner iteration. The associated extended KKT matrix remains constant unless the process is reinitiated. In ordinary inner iterations, only the right-hand side of the KKT system changes. Therefore, suitable sparse factorization techniques can be applied and exploited effectively. Encouraging numerical experiments are presented.
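A rough sketch of two of the ingredients: an exponential penalty for the simple bounds, and the spectral (Barzilai–Borwein) steplength, here safeguarded by a plain monotone Armijo test rather than the paper's nonmonotone strategy and preconditioning. The penalty form and all constants are assumptions made for the example:

```python
import numpy as np

def exp_penalty_objective(f, grad_f, lo, hi, mu):
    """Objective f plus exponential penalty terms for the simple bounds
    lo <= x <= hi; smaller mu enforces the bounds more sharply across
    outer iterations."""
    def F(x):
        return f(x) + mu * (np.exp((lo - x) / mu).sum()
                            + np.exp((x - hi) / mu).sum())
    def dF(x):
        return grad_f(x) - np.exp((lo - x) / mu) + np.exp((x - hi) / mu)
    return F, dF

def spectral_gradient(F, dF, x0, iters=200, eps=1e-10):
    """Barzilai-Borwein steplength lam = s's/s'y, safeguarded by plain
    Armijo backtracking (the paper combines the spectral step with a
    nonmonotone line search instead)."""
    x, g, lam = x0.astype(float), dF(x0), 1.0
    for _ in range(iters):
        t = lam
        while F(x - t * g) > F(x) - 1e-4 * t * (g @ g):
            t *= 0.5
        x_new = x - t * g
        g_new = dF(x_new)
        s, y = x_new - x, g_new - g
        lam = (s @ s) / (s @ y) if s @ y > eps else 1.0
        x, g = x_new, g_new
    return x

f = lambda x: 0.5 * x @ x - x.sum()      # unconstrained minimum at x = 1
df = lambda x: x - 1.0
F, dF = exp_penalty_objective(f, df, lo=np.zeros(3), hi=0.5*np.ones(3), mu=0.1)
x = spectral_gradient(F, dF, np.full(3, 0.25))  # pushed toward the bound 0.5
```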

18.
This article presents a simplicial branch-and-duality-bound algorithm for globally solving the sum of convex–convex ratios problem with a nonconvex feasible region. To our knowledge, little progress has been made on globally solving this problem so far. The algorithm uses a branch-and-bound scheme in which Lagrange duality theory is used to obtain the lower bounds. As a result, the lower-bounding subproblems during the search are ordinary linear programs that can be solved very efficiently. The algorithm is proved to be globally convergent. Finally, numerical experiments are reported to demonstrate the feasibility of the proposed algorithm.

19.
We propose a method for support vector machine classification using indefinite kernels. Instead of directly minimizing or stabilizing a nonconvex loss function, our algorithm simultaneously computes support vectors and a proxy kernel matrix used in forming the loss. This can be interpreted as a penalized kernel learning problem in which indefinite kernel matrices are treated as noisy observations of a true Mercer kernel. Our formulation keeps the problem convex, and relatively large problems can be solved efficiently using projected gradient or analytic center cutting plane methods. We compare the performance of our technique with other methods on several standard data sets.
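The "proxy kernel" viewpoint can be illustrated in its simplest fixed form (the paper instead optimizes over the proxy inside a penalized convex program): project the indefinite kernel matrix onto the PSD cone in Frobenius norm by clipping negative eigenvalues. The sigmoid similarity below is only a stand-in example:

```python
import numpy as np

def nearest_psd_kernel(K):
    """Project a symmetric (possibly indefinite) kernel matrix onto the
    PSD cone in Frobenius norm by clipping negative eigenvalues -- the
    simplest 'proxy kernel' obtainable from an indefinite similarity."""
    w, V = np.linalg.eigh((K + K.T) / 2)
    return V @ np.diag(np.maximum(w, 0.0)) @ V.T

rng = np.random.default_rng(4)
X = rng.standard_normal((30, 5))
K = np.tanh(X @ X.T - 1.0)             # sigmoid kernels can be indefinite
K_psd = nearest_psd_kernel(K)
assert np.linalg.eigvalsh(K_psd).min() >= -1e-10
```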

20.
We propose a trust-region type method for a class of nonsmooth nonconvex optimization problems in which the objective function is the sum of a (possibly nonconvex) smooth function and a (possibly nonsmooth) convex function. The model function of our trust-region subproblem is always quadratic, and the linear term of the model is generated using abstract descent directions; therefore, the trust-region subproblems can be easily constructed and efficiently solved by cheap standard methods. When the accuracy of the model function at the solution of the subproblem is insufficient, we add a safeguard on the stepsizes to improve the accuracy. For a class of functions that can be "truncated", an additional truncation step is defined and a stepsize modification strategy is designed. The overall scheme converges globally, and we establish fast local convergence under suitable assumptions. In particular, using a connection with a smooth Riemannian trust-region method, we prove local quadratic convergence for partly smooth functions under a strict complementarity condition. Preliminary numerical results on a family of $\ell_1$-optimization problems are reported and demonstrate the efficiency of our approach.
