Similar Documents
20 similar documents found.
1.
DC programming and its DC algorithm (DCA) address the problem of minimizing a function f = g - h (with g, h lower semicontinuous proper convex functions on R^n) over the whole space. Based on local optimality conditions and DC duality, DCA has been successfully applied to a wide variety of nondifferentiable nonconvex optimization problems, for which it quite often produces global solutions and proves more robust and more efficient than related standard methods, especially in the large-scale setting. The computational efficiency of DCA motivates a deeper and more complete study of DC programming, through the special class of DC programs, called polyhedral DC programs, in which either g or h is polyhedral convex. DC duality is investigated in a simpler way that is more convenient for the study of optimality conditions. New practical results on local optimality are presented. We emphasize regularization techniques in DC programming, used to construct suitable equivalent DC programs for nondifferentiable nonconvex optimization problems, and point out significant new questions that remain to be answered. A deeper insight into DCA is given, which sheds new light on the algorithm and may partly explain its efficiency. Finally, DC models of real-world nonconvex optimization problems are reported.
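To make the generic DCA iteration concrete (choose y_k in the subdifferential of h at x_k, then minimize the convex function g(x) - <y_k, x>), the following minimal Python sketch runs the scheme on a toy DC decomposition with g(x) = 0.5*||x - c||^2 and h(x) = ||x||_1; the test function, starting point and stopping rule are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Minimal DCA sketch on the toy DC function f = g - h with
# g(x) = 0.5*||x - c||^2 and h(x) = ||x||_1 (illustrative choices).
def dca(c, x0, max_iter=50, tol=1e-10):
    x = x0.astype(float)
    for _ in range(max_iter):
        y = np.sign(x)          # y_k: a subgradient of h at x_k
        x_new = c + y           # argmin_x g(x) - <y, x>, closed form here
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

c = np.array([0.3, -2.0, 1.5])
print(dca(c, x0=np.zeros(3)))   # a critical point of f = g - h
```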

2.
Nonlinear image deblurring procedures based on probabilistic considerations have been widely investigated in the literature. This approach models the deblurring task as a large-scale optimization problem with a nonlinear convex objective function and non-negativity constraints on the variables. Interior point methods have proved in recent years to be very reliable for nonlinear programs. In this paper we propose an inexact Newton interior point (IP) algorithm designed for the solution of the deblurring problem. The numerical experience compares the IP method with a state-of-the-art alternative, the Lucy-Richardson algorithm, and shows a significant improvement in processing time.
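For reference, the Lucy-Richardson benchmark mentioned above is a simple multiplicative fixed-point scheme; here is a minimal sketch assuming a normalized point spread function and non-negative data. Names and parameters are illustrative, not those of the paper's implementation.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, n_iter=30, eps=1e-12):
    x = np.full_like(blurred, blurred.mean())       # positive initial guess
    psf_flip = psf[::-1, ::-1]                      # adjoint of the blur
    for _ in range(n_iter):
        denom = fftconvolve(x, psf, mode='same')
        ratio = blurred / np.maximum(denom, eps)    # guard against division by zero
        x = x * fftconvolve(ratio, psf_flip, mode='same')
    return x

psf = np.ones((5, 5)) / 25.0                        # normalized box blur
truth = np.zeros((32, 32)); truth[16, 16] = 1.0
restored = richardson_lucy(fftconvolve(truth, psf, mode='same'), psf)
```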

3.
Numerical methods are proposed for solving finite-dimensional convex problems with inequality constraints satisfying the Slater condition. A method based on solving the dual of the regularized original problem is proposed and justified for problems in which the sum of the objective function and the constraint functions is strictly uniformly convex. Conditions for the convergence of this method are derived, and convergence rate estimates are obtained for convergence with respect to the functional, convergence with respect to the argument to the set of optimizers, and convergence to the g-normal solution. For more general convex finite-dimensional minimization problems with inequality constraints, two methods with finite-step inner algorithms are proposed, based on the projected gradient and conditional gradient algorithms. The paper focuses on finite-dimensional problems obtained by approximating infinite-dimensional ones, in particular optimal control problems for systems with lumped or distributed parameters.
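As a toy instance of the dual approach described above, the following sketch maximizes the dual function of a small projection problem by projected gradient ascent; the minimizer of the Lagrangian is available in closed form. The problem data and step size are illustrative, not from the paper.

```python
import numpy as np

# Dual method sketch for min 0.5*||x - a||^2 s.t. b'x <= c (Slater holds
# whenever some point satisfies b'x < c); all data below is illustrative.
def dual_method(a, b, c, step=0.1, n_iter=500):
    lam = 0.0
    for _ in range(n_iter):
        x = a - lam * b                            # argmin of the Lagrangian
        lam = max(0.0, lam + step * (b @ x - c))   # projected ascent on lam >= 0
    return a - lam * b, lam

a = np.array([2.0, 1.0]); b = np.array([1.0, 1.0])
x, lam = dual_method(a, b, c=2.0)
print(x, lam)   # -> approx [1.5, 0.5] and lam approx 0.5
```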

4.
In this paper we present penalty and barrier methods for solving general convex semidefinite programming problems. More precisely, the constraint set is described by a convex operator that takes its values in the cone of negative semidefinite symmetric matrices. This class of methods extends penalty and barrier methods for convex optimization to this setting. We provide implementable stopping rules and prove the convergence of the primal and dual paths obtained by these methods under minimal assumptions. The two-parameter approach for penalty methods is also extended. As in ordinary convex programming, we prove that after a finite number of steps all iterates are feasible.
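A minimal sketch of a log-barrier scheme of this kind follows, for a toy problem with constraint operator G(x) = G0 + sum_i x_i*Gi required to be negative semidefinite, using the barrier phi(x) = -log det(-G(x)). The crude backtracking gradient descent, the parameters and the toy data are illustrative assumptions, not the paper's method.

```python
import numpy as np

def barrier_sdp(c, G0, Gs, x0, t0=1.0, mu=5.0, n_outer=6, n_inner=100):
    G = lambda x: G0 + sum(xi * Gi for xi, Gi in zip(x, Gs))
    feasible = lambda x: np.linalg.eigvalsh(-G(x)).min() > 0
    obj = lambda x, t: t * (c @ x) - np.linalg.slogdet(-G(x))[1]
    x, t = np.asarray(x0, float), t0
    assert feasible(x), "x0 must be strictly feasible"
    for _ in range(n_outer):
        for _ in range(n_inner):
            Minv = np.linalg.inv(-G(x))
            # grad of the barrier: d/dx_i [-log det(-G(x))] = tr((-G)^{-1} Gi)
            grad = t * c + np.array([np.trace(Minv @ Gi) for Gi in Gs])
            s = 1.0
            while not feasible(x - s * grad) or obj(x - s * grad, t) > obj(x, t):
                s *= 0.5
                if s < 1e-14:
                    break
            x = x - s * grad
        t *= mu                                    # tighten the barrier
    return x

# toy: min x s.t. diag(x - 1, -x - 1) <= 0, i.e. -1 <= x <= 1; optimum -1
c = np.array([1.0]); G0 = np.diag([-1.0, -1.0]); Gs = [np.diag([1.0, -1.0])]
print(barrier_sdp(c, G0, Gs, x0=np.zeros(1)))      # -> close to -1
```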

5.
A convergent decomposition algorithm for support vector machines
In this work we consider nonlinear minimization problems with a single linear equality constraint and box constraints. In particular we are interested in solving problems in which the number of variables is so large that traditional optimization methods cannot be applied directly. Many interesting real-world problems lead to large-scale constrained problems with this structure. For example, the special subclass of problems with convex quadratic objective function plays a fundamental role in the training of Support Vector Machines, a technique for machine learning problems. For this particular subclass of convex quadratic problems, several convergent decomposition methods, based on the solution of a sequence of smaller subproblems, have been proposed. In this paper we define a new globally convergent decomposition algorithm that differs from previous methods in the rule for choosing the subproblem variables and in the presence of a proximal point modification in the objective function of the subproblems. In particular, the new rule for sequentially selecting the subproblems appears well suited to tackling large-scale problems, while the introduction of the proximal point term allows us to ensure global convergence of the algorithm in the general case of a nonconvex objective function. Furthermore, we report some preliminary numerical results on support vector classification problems with up to 100,000 variables.
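The flavor of such a decomposition scheme (a two-variable working set chosen by a maximal-violating-pair rule, plus a proximal term in the subproblem) can be sketched as follows; the selection rule, the proximal weight tau and the generic subproblem solver are illustrative stand-ins for the paper's exact choices.

```python
import numpy as np
from scipy.optimize import minimize

# Proximal decomposition sketch for the SVM dual
#   min 0.5*a'Qa - sum(a)  s.t.  y'a = 0,  0 <= a <= C  (illustrative).
def decomposition_svm(Q, y, C=1.0, tau=1e-3, max_iter=200, tol=1e-6):
    n = len(y)
    a = np.zeros(n)
    for _ in range(max_iter):
        grad = Q @ a - np.ones(n)
        up = [(-y[t] * grad[t], t) for t in range(n)
              if (a[t] < C and y[t] > 0) or (a[t] > 0 and y[t] < 0)]
        low = [(-y[t] * grad[t], t) for t in range(n)
               if (a[t] < C and y[t] < 0) or (a[t] > 0 and y[t] > 0)]
        m_up, i = max(up)
        m_low, j = min(low)
        if m_up - m_low < tol:                 # approximate KKT conditions hold
            break
        W = [i, j]
        aW0 = a[W].copy()
        def sub(z):                            # small subproblem with proximal term
            aa = a.copy(); aa[W] = z
            return 0.5 * aa @ Q @ aa - aa.sum() + 0.5 * tau * np.sum((z - aW0)**2)
        cons = {'type': 'eq', 'fun': lambda z: y[W] @ z - y[W] @ aW0}
        a[W] = minimize(sub, aW0, bounds=[(0.0, C)] * 2,
                        constraints=[cons], method='SLSQP').x
    return a

X = np.array([[1.0, 1.0], [2.0, 2.0], [-1.0, -1.0], [-2.0, -1.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])
Q = (y[:, None] * y[None, :]) * (X @ X.T)      # linear-kernel Gram matrix
print(decomposition_svm(Q, y))
```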

6.
UTA (UTilités Additives) type methods for constructing nondecreasing additive utility functions were first proposed by Jacquet-Lagrèze and Siskos in 1982 for handling multicriteria ranking problems. In this article, by UTA functions we mean functions constructed by UTA type methods. Our purpose is to propose an algorithm for globally maximizing UTA functions over a class of linear/convex multiple objective programming problems. The algorithm is based on a branch and bound scheme, in which the branching procedure is performed by a so-called I-rectangular bisection in the objective (outcome) space, and the bounding procedure by solving certain convex or linear programs. Preliminary computational experiments show that the algorithm works well when the number of objective functions in the multiple objective optimization problem under consideration is much smaller than the number of variables.

7.
The problem of finding the Euclidean distance between two convex polyhedra can be reduced to the combinatorial optimization problem of finding the minimum distance between their faces. This paper presents a global optimality criterion for this problem. As an application of this criterion, an algorithm (QLDPA) for the fast computation of the distance between convex and bounded polyhedra is proposed. Computer experiments show its fast performance, especially when the total number of vertices is large.
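In the vertex representation, a generic (non-QLDPA) way to compute the distance is to solve the underlying small quadratic program over convex-combination weights; a hedged sketch with an off-the-shelf solver and illustrative data follows.

```python
import numpy as np
from scipy.optimize import minimize

# Distance between two polytopes given by vertex lists V1 (m x d), V2 (n x d):
# minimize ||sum_i w_i v1_i - sum_j u_j v2_j||^2 over simplex weights w, u.
def polytope_distance(V1, V2):
    m, n = len(V1), len(V2)
    obj = lambda w: np.sum((w[:m] @ V1 - w[m:] @ V2) ** 2)
    cons = [{'type': 'eq', 'fun': lambda w: w[:m].sum() - 1.0},
            {'type': 'eq', 'fun': lambda w: w[m:].sum() - 1.0}]
    w0 = np.concatenate([np.full(m, 1.0 / m), np.full(n, 1.0 / n)])
    res = minimize(obj, w0, bounds=[(0.0, None)] * (m + n),
                   constraints=cons, method='SLSQP')
    return np.sqrt(res.fun)

A = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
B = np.array([[2.0, 2.0], [3.0, 2.0], [2.0, 3.0]])
print(polytope_distance(A, B))   # -> approx 2.121 for these two triangles
```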

8.
Interior point methods for optimization have been around for more than 25 years now. Their presence has shaken up the field of optimization. Interior point methods for linear and (convex) quadratic programming display several features which make them particularly attractive for very large scale optimization. Among the most impressive of them are their low-degree polynomial worst-case complexity and an unrivalled ability to deliver optimal solutions in an almost constant number of iterations which depends very little, if at all, on the problem dimension. Interior point methods are competitive when dealing with small problems of dimensions below one million constraints and variables and are beyond competition when applied to large problems of dimensions going into millions of constraints and variables. In this survey we discuss several issues related to interior point methods, including the proof of the worst-case complexity result, the reasons for their amazingly fast practical convergence, and the features responsible for their ability to solve very large problems. The ever-growing sizes of optimization problems impose new requirements on optimization methods and software. In the final part of this paper we therefore address a redesign of interior point methods that allows them to work in a matrix-free regime and makes them well suited to solving even larger problems.
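As one concrete ingredient behind the survey's discussion, here is a minimal sketch of a primal-dual path-following iteration for linear programming, solving the Newton system of the perturbed KKT conditions through the normal equations; the fixed centering parameter and step rule are simplifications for illustration.

```python
import numpy as np

# Primal-dual interior point sketch for  min c'x  s.t.  Ax = b, x >= 0,
# with dual variables (lam, s); all parameters are illustrative.
def ipm_lp(A, b, c, n_iter=30, sigma=0.2):
    m, n = A.shape
    x, s, lam = np.ones(n), np.ones(n), np.zeros(m)
    for _ in range(n_iter):
        mu = x @ s / n
        r_p = b - A @ x                       # primal residual
        r_d = c - A.T @ lam - s               # dual residual
        r_c = sigma * mu - x * s              # centering residual
        D = x / s
        dlam = np.linalg.solve((A * D) @ A.T, r_p + A @ (D * r_d - r_c / s))
        dx = D * (A.T @ dlam - r_d) + r_c / s
        ds = (r_c - s * dx) / x
        alpha = 1.0                           # fraction-to-the-boundary rule
        for v, dv in ((x, dx), (s, ds)):
            neg = dv < 0
            if neg.any():
                alpha = min(alpha, 0.95 * np.min(-v[neg] / dv[neg]))
        x, lam, s = x + alpha * dx, lam + alpha * dlam, s + alpha * ds
    return x

A = np.array([[1.0, 1.0, 1.0]]); b = np.array([1.0]); c = np.array([-1.0, -2.0, 0.0])
print(ipm_lp(A, b, c))   # -> approx [0, 1, 0]
```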

9.
Global optimization problems involving the minimization of a product of convex functions on a convex set are addressed in this paper. Elements of convex analysis are used to obtain a suitable representation of the convex multiplicative problem in the outcome space, where its global solution is reduced to the solution of a sequence of quasiconcave minimizations on polytopes. Computational experiments illustrate the performance of the global optimization algorithm proposed.

10.
An inverse problem of determining a coefficient in an elliptic equation is considered. This problem is ill-posed in the sense of Hadamard, and Tikhonov's regularization method is used to solve it in a stable way. This method requires globally solving nonconvex optimization problems, for which solution methods have been very little studied in the inverse problems community. It is proved that the objective function of the corresponding optimization problem for our inverse problem can be represented as the difference of two convex functions (d.c. functions), so that the difference of convex functions algorithm (DCA), in combination with a branch-and-bound technique, can be used to solve it globally. Numerical examples are presented which show the efficiency of the method.

11.
A piecewise linear homotopy algorithm for quadratic programming problems
This paper develops a piecewise linear homotopy algorithm for quadratic programming problems. The algorithm can be viewed as a variant of the exterior penalty function method. For any quadratic programming problem satisfying the convergence conditions of the exterior penalty method, the algorithm obtains a stable solution after a finite number of cycles. Extensive numerical experiments on random convex quadratic programming problems indicate that its computational efficiency is high, and that under certain conditions it may be a polynomial-time algorithm.
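Since the algorithm is presented as a variant of the exterior penalty function method, a sketch of the classical exterior quadratic penalty method for a QP may help fix ideas; the Newton-type inner solver, the parameters and the toy data below are illustrative assumptions, not the paper's homotopy algorithm.

```python
import numpy as np

# Exterior penalty sketch for  min 0.5*x'Hx + p'x  s.t.  Ax <= b  (H positive
# definite assumed); penalized objective: f(x) + (rho/2)*||max(0, Ax - b)||^2.
def exterior_penalty_qp(H, p, A, b, rho=1.0, growth=10.0, n_outer=8):
    x = np.zeros(len(p))
    for _ in range(n_outer):
        for _ in range(50):                   # Newton steps on the penalized problem
            viol = np.maximum(A @ x - b, 0.0)
            grad = H @ x + p + rho * A.T @ viol
            if np.linalg.norm(grad) < 1e-10:
                break
            act = (A @ x - b) > 0             # active (violated) constraints
            hess = H + rho * A[act].T @ A[act]
            x = x - np.linalg.solve(hess, grad)
        rho *= growth                         # tighten the penalty
    return x

H = np.eye(2); p = np.zeros(2)
A = np.array([[-1.0, 0.0]]); b = np.array([-1.0])   # encodes x1 >= 1
print(exterior_penalty_qp(H, p, A, b))              # -> approx [1, 0]
```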

12.
In this paper, we introduce a novel projected steepest descent iterative method with frozen derivative. The classical projected steepest descent method requires the derivative of the nonlinear operator at each iterate; the method of this paper requires it only at an initial point. We exhibit the convergence analysis of our method by assuming conditional stability of the inverse problem on a convex and compact set. Further, by assuming conditional stability on a nested family of convex and compact subsets, we develop a multi-level method. To enhance the accuracy of approximation between neighboring levels, we couple the method with the growth of the stability constants; together with a suitable discrepancy criterion, this ensures that the algorithm proceeds from level to level and terminates within finitely many steps. Finally, we discuss an inverse problem to which our methods are applicable.
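A minimal sketch of the frozen-derivative idea follows: the projected steepest descent update reuses the Jacobian evaluated at the initial point in every iteration. The forward operator, step size and projection below are illustrative choices, not the paper's test problem.

```python
import numpy as np

def frozen_projected_descent(F, J0, y, project, x0, step=0.1, n_iter=200):
    x = x0.copy()
    for _ in range(n_iter):
        x = project(x - step * (J0.T @ (F(x) - y)))   # J0 = F'(x0), never updated
    return x

# toy example: recover x >= 0 from a mildly nonlinear observation
F = lambda x: x + 0.1 * x**3
J0 = np.eye(3)                                  # F'(x0) at x0 = 0
project = lambda x: np.clip(x, 0.0, None)       # projection onto the orthant
y = F(np.array([0.5, 1.0, 0.2]))
print(frozen_projected_descent(F, J0, y, project, x0=np.zeros(3)))
```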

13.
Optimization, 2012, 61(2): 265-288
In this article, we investigate possibilities for accelerating the double smoothing (DS) technique when solving unconstrained nondifferentiable convex optimization problems. The approach regularizes, in two steps, the Fenchel dual of the problem to be solved into an optimization problem whose objective is differentiable and strongly convex with Lipschitz continuous gradient. The doubly regularized dual problem is then solved via a fast gradient method. The aim of this article is to show how the properties of the functions in the objective of the primal problem influence the implementation of the DS approach and its rate of convergence. The theoretical results are applied to linear inverse problems by making use of different regularization functionals.
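The fast gradient method applied to the doubly regularized dual is, in its standard form for smooth strongly convex functions, a gradient step plus a fixed momentum term; a minimal sketch under these assumptions follows, with an illustrative quadratic test problem.

```python
import numpy as np

# Accelerated gradient sketch for an L-smooth, mu-strongly convex function.
def fast_gradient(grad, L, mu, x0, n_iter=100):
    theta = np.sqrt(mu / L)
    beta = (1.0 - theta) / (1.0 + theta)     # constant momentum coefficient
    x, y = x0.copy(), x0.copy()
    for _ in range(n_iter):
        x_new = y - grad(y) / L              # gradient step with step size 1/L
        y = x_new + beta * (x_new - x)       # extrapolation (momentum)
        x = x_new
    return x

A = np.diag([1.0, 10.0]); b = np.array([1.0, -2.0])
x = fast_gradient(lambda y: A @ (y - b), L=10.0, mu=1.0, x0=np.zeros(2))
print(x)   # -> approx b, the minimizer of 0.5*(y - b)'A(y - b)
```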

14.
This article is concerned with two global optimization problems (P1) and (P2). Each of these problems is a fractional programming problem involving the maximization of a ratio of a convex function to a convex function, where at least one of the convex functions is a quadratic form. First, the article presents and validates a number of theoretical properties of these problems. Included among these properties is the result that, under a mild assumption, any globally optimal solution for problem (P1) must belong to the boundary of its feasible region. Also among these properties is a result that shows that problem (P2) can be reformulated as a convex maximization problem. Second, the article presents for the first time an algorithm for globally solving problem (P2). The algorithm is a branch and bound algorithm in which the main computational effort involves solving a sequence of convex programming problems. Convergence properties of the algorithm are presented, and computational issues that arise in implementing the algorithm are discussed. Preliminary indications are that the algorithm can be expected to provide a practical approach for solving problem (P2), provided that the number of variables is not too large.

15.
The mirror descent algorithm (MDA) was introduced by Nemirovsky and Yudin for solving convex optimization problems. This method exhibits an efficiency estimate that is only mildly dependent on the dimension of the decision variables, and is thus suitable for solving very large scale optimization problems. We present a new derivation and analysis of this algorithm. We show that the MDA can be viewed as a nonlinear projected-subgradient type method, derived from using a general distance-like function in place of the usual squared Euclidean distance. Within this interpretation, we derive convergence and efficiency estimates in a simple way. We then propose an entropic mirror descent algorithm for convex minimization over the unit simplex, with a global efficiency estimate proven to be only mildly dependent on the dimension of the problem.
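The entropic mirror descent update on the unit simplex has a closed form: a multiplicative step followed by renormalization. A minimal sketch follows, with an illustrative linear objective and the classical diminishing step sizes.

```python
import numpy as np

def entropic_mirror_descent(subgrad, x0, n_iter=500):
    x = x0.copy()
    for k in range(1, n_iter + 1):
        t = 1.0 / np.sqrt(k)                 # diminishing step size
        w = x * np.exp(-t * subgrad(x))      # multiplicative (entropy) update
        x = w / w.sum()                      # renormalize onto the simplex
    return x

# minimize <c, x> over the simplex: mass concentrates on the smallest c_i
c = np.array([0.7, 0.2, 1.0])
print(entropic_mirror_descent(lambda x: c, np.full(3, 1.0 / 3.0)))
```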

16.
Optimal kernel selection in twin support vector machines
In twin support vector machines (TWSVMs), one determines a pair of non-parallel planes by solving two related SVM-type problems, each of which is smaller than the problem arising in a conventional SVM. However, as with other classification methods, the performance of the TWSVM classifier depends on the choice of kernel. In this paper we treat the kernel selection problem for TWSVM as an optimization problem over the convex set of finitely many basic kernels, and formulate it as an iterative alternating optimization problem. The efficacy of the proposed classification algorithm is demonstrated on several UCI machine learning benchmark datasets.

17.
In this paper, we characterize a class of feasible direction methods in nonlinear programming through the concept of partial linearization of the objective function. Based at a feasible point, the objective is replaced by an arbitrary convex and continuously differentiable function, and the approximation error is accounted for by a first-order term. A new feasible point is defined through a line search with respect to the original objective, toward the solution of the approximate problem. Global convergence results are given for exact and approximate line searches, and possible interpretations are made. We present some instances of the general algorithm and discuss extensions to nondifferentiable programming. The author wishes to thank Drs. K. Holmberg, T. Larsson, and A. Migdalas for their helpful comments.
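When the approximating convex function is chosen identically zero, partial linearization reduces to the classical conditional gradient (Frank-Wolfe) method; a minimal sketch of that instance on the unit simplex follows, with an illustrative quadratic objective and the standard 2/(k+2) step sizes in place of a line search.

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, n_iter=200):
    x = x0.copy()
    for k in range(n_iter):
        s = np.zeros_like(x)
        s[np.argmin(grad(x))] = 1.0          # vertex minimizing the linearization
        x = x + (2.0 / (k + 2.0)) * (s - x)  # open-loop step size
    return x

# minimize 0.5*||x - target||^2 over the simplex (target already lies in it)
target = np.array([0.2, 0.5, 0.3])
print(frank_wolfe_simplex(lambda x: x - target, np.array([1.0, 0.0, 0.0])))
```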

18.
In this paper, we consider optimization problems in locally convex Hausdorff topological vector spaces with objectives given as the difference of two composite functions and constraints described by an arbitrary (possibly infinite) number of convex inequalities. Using the epigraph technique, we introduce new constraint qualifications which completely characterize the Farkas lemma and the duality between the primal problem and its dual. Applications to conic programming with DC composite functions are also given.

19.
Nonlinear rescaling and proximal-like methods in convex optimization
The nonlinear rescaling principle (NRP) consists of transforming the objective function and/or the constraints of a given constrained optimization problem into another problem which is equivalent to the original one in the sense that their sets of optimal solutions coincide. A nonlinear transformation, parameterized by a positive scalar parameter and based on a smooth scaling function, is used to transform the constraints. Methods based on NRP consist of sequential unconstrained minimization of the classical Lagrangian for the equivalent problem, followed by an explicit formula updating the Lagrange multipliers. We first show that NRP leads naturally to proximal methods with an entropy-like kernel, defined by the conjugate of the scaling function, and establish that the two methods are dually equivalent for convex constrained minimization problems. We then study the convergence properties of the nonlinear rescaling algorithm and of the corresponding entropy-like proximal methods for convex constrained optimization problems. Special cases of the nonlinear rescaling algorithm are presented; in particular, a new class of exponential penalty-modified barrier function methods is introduced. Partially supported by the National Science Foundation under Grants DMS-9201297 and DMS-9401871. Partially supported by NASA Grant NAG3-1397 and NSF Grant DMS-9403218.
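One exponential-multiplier instance of the scheme can be sketched as follows: minimize the classical Lagrangian of the rescaled problem, then update the multipliers by the explicit formula lambda_i <- lambda_i * psi'(k*g_i(x)). The scaling psi(t) = (exp(k*t) - 1)/k, the generic inner solver and the toy problem are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np
from scipy.optimize import minimize

# Exponential multiplier sketch for  min f(x)  s.t.  g_i(x) <= 0.
def exp_multiplier_method(f, g, x0, m, k=5.0, n_outer=20):
    lam, x = np.ones(m), x0.copy()
    for _ in range(n_outer):
        L = lambda z: f(z) + lam @ ((np.exp(k * g(z)) - 1.0) / k)
        x = minimize(L, x, method='Nelder-Mead').x   # inner minimization
        lam = lam * np.exp(k * g(x))                 # explicit multiplier update
    return x, lam

# toy problem: min (x1-2)^2 + (x2-1)^2  s.t.  x1 + x2 - 2 <= 0
f = lambda z: (z[0] - 2.0)**2 + (z[1] - 1.0)**2
g = lambda z: np.array([z[0] + z[1] - 2.0])
x, lam = exp_multiplier_method(f, g, np.zeros(2), m=1)
print(x, lam)   # -> approx [1.5, 0.5] with multiplier approx 1.0
```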

20.
Ming Tian & Si-Wen Jiao, Optimization, 2016, 65(11): 2007-2024
In this article, we provide a general iterative method for solving an equilibrium problem together with a constrained convex minimization problem. Using the idea of the regularized gradient-projection algorithm (RGPA), we find a common element that is also a solution of a variational inequality problem. Strong convergence theorems are then obtained under suitable conditions.
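A minimal sketch of the regularized gradient-projection idea follows: a projection step on the gradient of f(x) + (eps_k/2)*||x||^2 with eps_k tending to zero. The constraint set, step sizes and schedule are illustrative; the equilibrium-problem component of the paper's method is not modeled here.

```python
import numpy as np

def rgpa(grad_f, project, x0, L, n_iter=300):
    x = x0.copy()
    for k in range(1, n_iter + 1):
        eps = 1.0 / k                              # vanishing regularization
        x = project(x - (grad_f(x) + eps * x) / (L + eps))
    return x

# minimize 0.5*||x - a||^2 over the box [0, 1]^3
a = np.array([1.5, -0.3, 0.4])
x = rgpa(lambda x: x - a, lambda x: np.clip(x, 0.0, 1.0), np.zeros(3), L=1.0)
print(x)   # -> approx [1.0, 0.0, 0.4], the projection of a onto the box
```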
