Similar Literature
20 similar documents found.
1.
This paper addresses the minimization of the product of p convex functions on a convex set. It is shown that this nonconvex problem can be converted to a concave minimization problem with p variables, whose objective function value is determined by solving a convex minimization problem. An outer approximation method is proposed for obtaining a global minimum of the resulting problem. Computational experiments indicate that this algorithm is reasonably efficient when p is less than 4. This research was partly supported by a Grant-in-Aid for Scientific Research of the Ministry of Education, Science and Culture, Grant Nos. (C)03832018 and (C)04832010.
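To make the conversion concrete, here is one standard parametrization of such a reduction (a sketch in our notation, assuming each f_i is convex and positive on the feasible set X; the paper's master problem may differ in detail):

```latex
\min_{x \in X} \prod_{i=1}^{p} f_i(x)
  \;=\; \min_{\xi \in \Xi} h(\xi),
\qquad
h(\xi) \;=\; \min_{x \in X} \frac{1}{p} \sum_{i=1}^{p} \xi_i\, f_i(x)^{p},
\qquad
\Xi \;=\; \Bigl\{ \xi > 0 \;:\; \prod_{i=1}^{p} \xi_i = 1 \Bigr\}.
```

By the arithmetic-geometric mean inequality the inner objective dominates the product, with equality attainable for a suitable ξ, so the two sides agree; each value h(ξ) is obtained from a convex program, and h, as a pointwise minimum of functions linear in ξ, is concave, giving the concave minimization problem in p variables described above.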

2.
The problems of (bi-)proportional rounding of a nonnegative vector or matrix, respectively, are written as particular separable convex integer minimization problems. Allowing any convex (separable) objective function, we use the notions of vector and matrix apportionment problems. As a broader class of problems we consider separable convex integer minimization under linear equality restrictions Ax = b with any totally unimodular coefficient matrix A. By total unimodularity, Fenchel duality applies despite the integer restrictions on the variables. The biproportional algorithm of Balinski and Demange (Math Program 45:193–210, 1989) is generalized and derived from the dual optimization problem. Also, a primal augmentation algorithm is stated. Finally, for the smaller class of matrix apportionment problems we discuss the alternating scaling algorithm, which is a discrete variant of the well-known Iterative Proportional Fitting procedure.
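As a sketch of the problem class in question (notation ours), the apportionment problems take the form of a separable convex integer program over a totally unimodular system:

```latex
\min_{x \in \mathbb{Z}^{n}} \;\sum_{i=1}^{n} f_i(x_i)
\qquad \text{s.t.} \qquad A x = b,
```

with each f_i convex and A totally unimodular; vector apportionment corresponds to a single sum constraint, while matrix apportionment corresponds to A encoding the row- and column-sum constraints of a nonnegative matrix.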

3.
In this paper we consider the solution of certain convex integer minimization problems via greedy augmentation procedures. We show that a greedy augmentation procedure that employs only directions from certain Graver bases needs only polynomially many augmentation steps to solve the given problem. We extend these results to convex N-fold integer minimization problems and to convex 2-stage stochastic integer minimization problems. Finally, we present some applications of convex N-fold integer minimization problems for which our approach provides polynomial time solution algorithms.
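A minimal sketch of a best-improvement Graver augmentation loop of the kind analyzed here, assuming the relevant Graver directions and a feasibility test are supplied explicitly (computing the Graver basis itself, which is the hard part, is not shown):

```python
def greedy_graver_augmentation(f, x, graver_basis, is_feasible, max_alpha=1000):
    """Best-improvement augmentation along Graver directions.

    f            -- convex (separable) objective, callable on integer tuples
    x            -- feasible integer starting point (tuple of ints)
    graver_basis -- iterable of integer directions g with A g = 0
    is_feasible  -- predicate checking the bound constraints of a candidate
    """
    x = tuple(x)
    while True:
        best_val, best_point = f(x), None
        for g in graver_basis:
            for sign in (1, -1):
                for alpha in range(1, max_alpha + 1):
                    cand = tuple(xi + sign * alpha * gi for xi, gi in zip(x, g))
                    if not is_feasible(cand):
                        break  # with box bounds, longer steps stay infeasible
                    val = f(cand)
                    if val < best_val:
                        best_val, best_point = val, cand
        if best_point is None:
            return x  # no improving Graver step: stop (optimal for this class)
        x = best_point
```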

4.
The matrix rank minimization problem has applications in many fields, such as system identification, optimal control, low-dimensional embedding, etc. As this problem is NP-hard in general, its convex relaxation, the nuclear norm minimization problem, is often solved instead. Recently, Ma, Goldfarb and Chen proposed a fixed-point continuation algorithm for solving the nuclear norm minimization problem (Math. Program., 2009). By incorporating an approximate singular value decomposition technique in this algorithm, the solution to the matrix rank minimization problem is usually obtained. In this paper, we study the convergence/recoverability properties of the fixed-point continuation algorithm and its variants for matrix rank minimization. Heuristics for determining the rank of the matrix when its true rank is not known are also proposed. Some of these algorithms are closely related to greedy algorithms in compressed sensing. Numerical results for these algorithms for solving affinely constrained matrix rank minimization problems are reported.
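A minimal sketch of the shrinkage (proximal gradient / singular value thresholding) iteration at the core of fixed-point continuation, specialized to a sampling-type linear operator; the parameters mu and step are illustrative assumptions, and the paper's continuation strategy and approximate SVD are omitted:

```python
import numpy as np

def svt(Y, tau):
    """Singular value shrinkage: the prox operator of tau * nuclear norm at Y."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def fpc_nuclear(M_obs, mask, mu=1.0, step=1.0, iters=500):
    """Fixed-point (proximal gradient) iteration for
       min_X  mu * ||X||_*  +  0.5 * ||mask * (X - M_obs)||_F^2,
    i.e. nuclear norm minimization with an entry-sampling linear operator."""
    M_obs = np.asarray(M_obs, dtype=float)
    X = np.zeros_like(M_obs)
    for _ in range(iters):
        grad = mask * (X - M_obs)        # gradient of the quadratic data-fit term
        X = svt(X - step * grad, step * mu)
    return X
```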

5.
Optimization, 61(9):1367–1385, 2012
The gradient-projection algorithm (GPA) plays an important role in solving constrained convex minimization problems. Based on Marino and Xu's method [G. Marino and H.-K. Xu, A general method for nonexpansive mappings in Hilbert space, J. Math. Anal. Appl. 318 (2006), pp. 43–52], we combine the GPA and the averaged mapping approach to propose, for the first time in this article, implicit and explicit composite iterative algorithms for finding a common solution of an equilibrium problem and a constrained convex minimization problem. Under suitable conditions, strong convergence theorems are obtained.
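For reference, the underlying gradient-projection iteration in its standard form (the article's composite schemes add an equilibrium-problem component on top of this):

```latex
x_{n+1} \;=\; P_{C}\bigl(x_{n} - \lambda \nabla f(x_{n})\bigr),
\qquad 0 < \lambda < \tfrac{2}{L},
```

where P_C is the metric projection onto the constraint set C and L is a Lipschitz constant of ∇f; for such λ the operator P_C(I - λ∇f) is averaged, which is the property the averaged mapping approach exploits.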

6.
In this paper, we propose a decomposition algorithm for convex differentiable minimization. This algorithm at each iteration solves a variational inequality problem obtained by adding to the gradient of the cost function a strongly proximal-related function. A line search is then performed in the direction of the solution to this variational inequality (with respect to the original cost). If the constraint set is a Cartesian product of m sets, the variational inequality decomposes into m coupled variational inequalities, which can be solved in either a Jacobi manner or a Gauss-Seidel manner. This algorithm also applies to the minimization of a strongly convex (possibly nondifferentiable) cost subject to linear constraints. As special cases, we obtain the GP-SOR algorithm of Mangasarian and De Leone, a diagonalization algorithm of Feijoo and Meyer, the coordinate descent method, and the dual gradient method. This algorithm is also closely related to a splitting algorithm of Gabay and a gradient projection algorithm of Goldstein and of Levitin-Poljak, and has interesting applications to separable convex programming and to solving traffic assignment problems. This work was partially supported by the US Army Research Office Contract No. DAAL03-86-K-0171 and by the National Science Foundation Grant No. ECS-85-19058. The author thanks the referees for their many helpful comments, particularly for suggesting the use of a general function H instead of that given by (4).

7.
We consider the problem of minimizing the sum of a convex function and of p ≥ 1 fractions subject to convex constraints. The numerators of the fractions are positive convex functions, and the denominators are positive concave functions. Thus, each fraction is quasi-convex. We give a brief discussion of the problem and prove that, in spite of its special structure, the problem is NP-complete even when only p = 1 fraction is involved. We then show how the problem can be reduced to the minimization of a function of p variables whose values are given by the solution of certain convex subproblems. Based on this reduction, we propose an algorithm for computing the global minimum of the problem by means of an interior-point method for convex programs.
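One natural way to carry out this reduction (a sketch in our notation, under the stated convexity, concavity and positivity assumptions) is to introduce one bounding variable per fraction:

```latex
\min_{t \in \mathbb{R}^{p},\, t \ge 0} \;\varphi(t),
\qquad
\varphi(t) \;=\; \min_{x \in X}
  \Bigl\{\, f(x) + \sum_{i=1}^{p} t_i \;:\; g_i(x) - t_i\, h_i(x) \le 0,\; i = 1,\dots,p \,\Bigr\},
```

where g_i and h_i denote the numerators and denominators. For fixed t ≥ 0 each constraint g_i(x) - t_i h_i(x) ≤ 0 is convex (g_i convex, h_i concave and positive), so φ(t) is the value of a convex subproblem and the outer minimization involves only the p variables t.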

8.
The majority of first-order methods for large-scale convex–concave saddle point problems and variational inequalities with monotone operators are proximal algorithms. To make such an algorithm practical, the problem's domain should be proximal-friendly, that is, admit a strongly convex function whose linear perturbations are easy to minimize. As a by-product, such a domain admits a computationally cheap linear minimization oracle (LMO) capable of minimizing linear forms. There are, however, important situations where a cheap LMO is indeed available but the problem domain is not proximal-friendly, which motivates the search for algorithms based solely on an LMO. For smooth convex minimization, there exists a classical algorithm using an LMO: conditional gradient. In contrast, the similar techniques known to us for other problems with convex structure (nonsmooth convex minimization, convex–concave saddle point problems, even ones as simple as bilinear, and variational inequalities with monotone operators, even ones as simple as affine) are quite recent and rely on a common approach based on Fenchel-type representations of the associated objectives/vector fields. The goal of this paper was to develop alternative (and seemingly much simpler) decomposition techniques based on the LMO for bilinear saddle point problems and for variational inequalities with affine monotone operators.
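For context, the classical LMO-based method referred to here, conditional gradient (Frank-Wolfe) for smooth convex minimization, fits in a few lines; the oracle interface and the open-loop step size below are illustrative assumptions:

```python
def conditional_gradient(grad, lmo, x0, iters=200):
    """Frank-Wolfe: minimize a smooth convex f over a compact convex set
    accessed only through a linear minimization oracle (LMO).

    grad -- callable returning the gradient of f at x
    lmo  -- lmo(c) returns a point of the domain minimizing <c, s> over s
    x0   -- feasible starting point, e.g. any point returned by the LMO
    """
    x = list(x0)
    for k in range(iters):
        s = lmo(grad(x))                     # cheapest point w.r.t. the linearization
        gamma = 2.0 / (k + 2.0)              # standard open-loop step size
        x = [(1 - gamma) * xi + gamma * si for xi, si in zip(x, s)]
    return x
```

The point of the paper is that comparably simple LMO-based schemes for bilinear saddle point problems and affine monotone variational inequalities are much less standard.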

9.
A version of the facility location problem (the well-known p-median minimization problem) and its generalization, the problem of minimizing a supermodular set function, are studied. These problems are NP-hard, and they are approximately solved by a gradient algorithm that is a discrete analog of the steepest descent algorithm. A priori bounds on the worst-case behavior of the gradient algorithm for the problems under consideration are obtained. As a consequence, a bound on the performance guarantee of the gradient algorithm for the p-median minimization problem is obtained in terms of the production and transportation cost matrix.
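A minimal sketch of a greedy (discrete steepest-descent) heuristic for the p-median minimization problem, with an explicit production/transportation cost matrix; it illustrates the algorithm class whose worst-case behavior is bounded, not necessarily the exact variant analyzed:

```python
def greedy_p_median(cost, p):
    """Greedily open p facilities, each time choosing the one that
    reduces the total service cost the most.

    cost -- cost[i][j] = cost of serving client j from facility i
    p    -- number of facilities (medians) to open
    """
    m, n = len(cost), len(cost[0])
    opened = set()
    best = [float("inf")] * n                # cheapest service cost seen per client

    def total_if_added(i):
        return sum(min(best[j], cost[i][j]) for j in range(n))

    for _ in range(p):
        i_star = min((i for i in range(m) if i not in opened), key=total_if_added)
        opened.add(i_star)
        best = [min(best[j], cost[i_star][j]) for j in range(n)]
    return opened, sum(best)
```

The function returns the set of opened facilities and the resulting total assignment cost.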

10.
In this paper a minimization problem with a convex objective function subject to a separable convex inequality constraint "≤" and bounded variables (box constraints) is considered. We propose an iterative algorithm for solving this problem based on line search, and convergence of this algorithm is proved. At each iteration, a separable convex programming problem with the same constraint set is solved using the Karush-Kuhn-Tucker conditions. Convex minimization problems subject to a linear equality or linear inequality "≥" constraint and bounds on the variables are also considered. A numerical illustration is included in support of the theory.
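A sketch of the separable subproblem structure exploited at each iteration (notation ours; the outer objective need not itself be separable):

```latex
\min_{x} \;\sum_{i=1}^{n} f_i(x_i)
\qquad \text{s.t.} \qquad
\sum_{i=1}^{n} g_i(x_i) \le b,
\qquad a_i \le x_i \le d_i,\; i = 1,\dots,n,
```

with f_i, g_i convex; for a fixed multiplier λ ≥ 0 on the coupling constraint, the Karush-Kuhn-Tucker conditions decouple into independent one-dimensional problems of minimizing f_i(x_i) + λ g_i(x_i) over [a_i, d_i], which is what makes each separable subproblem cheap to solve.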

11.
The problem of minimizing a smooth convex function over a specific cone in IR^n is frequently encountered in nonparametric statistics. For that type of problem we suggest an algorithm and show that this algorithm converges to the solution of the minimization problem. Moreover, a simulation study is presented, showing the superiority of our algorithm compared to the EM algorithm in the interval censoring, case 2, setting.

12.
In recent years, second-order sufficient conditions for an isolated local minimizer of convex composite optimization problems have been established. In this paper, second-order optimality conditions are obtained for a global minimizer of convex composite problems with a non-finite-valued convex function and a twice strictly differentiable function, by introducing a generalized representation condition. This result is applied to a minimization problem with a closed convex set constraint, which is shown to satisfy the basic constraint qualification. In particular, second-order necessary and sufficient conditions for a solution of a variational inequality problem with convex composite inequality constraints are obtained.

13.
A numerical algorithm for minimizing a convex function on the set-theoretic intersection of a spherical surface and a convex compact set is proposed. The idea behind the algorithm is to reduce the original minimization problem to a sequence of convex programming problems. Necessary extremum conditions are examined, and the convergence of the algorithm is analyzed.

14.
The purpose of this article is to study the iterative approximation of solutions to multiple-set split feasibility problems in p-uniformly convex real Banach spaces that are also uniformly smooth. We propose an iterative algorithm for solving multiple-set split feasibility problems and prove a strong convergence theorem for the sequence generated by our algorithm under some appropriate conditions in p-uniformly convex real Banach spaces that are also uniformly smooth.
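For reference, the multiple-set split feasibility problem has the standard form (C_i and Q_j closed convex sets, A a bounded linear operator between the two spaces):

```latex
\text{find } x \in \bigcap_{i=1}^{r} C_i \subseteq E_1
\qquad \text{such that} \qquad
A x \in \bigcap_{j=1}^{s} Q_j \subseteq E_2,
```

where here E_1 and E_2 are the p-uniformly convex, uniformly smooth real Banach spaces of the paper.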

15.
A class of nonconvex minimization problems can be classified as hidden convex minimization problems. A nonconvex minimization problem is called a hidden convex minimization problem if there exists an equivalent transformation under which it becomes a convex minimization problem. Sufficient conditions that are independent of the transformation are derived in this paper for identifying such a class of seemingly nonconvex minimization problems that are equivalent to convex minimization problems. Thus, global optimality can be achieved for this class of hidden convex optimization problems by using local search methods. The results presented in this paper extend the reach of convex minimization by identifying nonconvex problems that admit an equivalent convex representation.
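A small standard example of the phenomenon, offered only to illustrate the notion of hidden convexity (it is not taken from the paper): a posynomial objective is nonconvex in x, but the change of variables x_i = e^{y_i} yields an equivalent convex problem.

```latex
\min_{x > 0} \;\sum_{k} c_k \prod_{i} x_i^{a_{ki}}
\quad\longleftrightarrow\quad
\min_{y} \;\sum_{k} c_k\, e^{a_k^{\top} y},
\qquad c_k > 0,
```

and the transformed objective is convex as a nonnegative combination of convex exponentials, so a local search on the transformed problem delivers a global minimum of the original one.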

16.
This article presents a new global solution algorithm for Convex Multiplicative Programming called the Outcome Space Algorithm. To solve a given convex multiplicative program (PD), the algorithm solves instead an equivalent quasiconcave minimization problem in the outcome space of the original problem. To help accomplish this, the algorithm uses branching, bounding and outer approximation by polytopes, all in the outcome space of problem (PD). The algorithm economizes the computations that it requires by working in the outcome space, by avoiding the need to compute new vertices in the outer approximation process, and, except for one convex program per iteration, by requiring for its execution only linear programming techniques and simple algebra.
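A sketch of the outcome-space reformulation (our notation; the functions f_i are assumed convex and positive on X, as is standard for convex multiplicative programs):

```latex
\min_{x \in X} \;\prod_{i=1}^{p} f_i(x)
\;=\;
\min_{y \in \mathcal{Y}} \;\prod_{i=1}^{p} y_i,
\qquad
\mathcal{Y} \;=\; \bigl\{ y \in \mathbb{R}^{p} \;:\; y_i \ge f_i(x),\; i = 1,\dots,p,\ \text{for some } x \in X \bigr\},
```

so the search takes place in the low-dimensional space of objective values, where the product is quasiconcave on the positive orthant; this is what allows branching, bounding and polytope outer approximation to be carried out entirely over the outcome set.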

17.
We present new strongly polynomial algorithms for special cases of convex separable quadratic minimization over submodular constraints. The main results are: an O(NM log(N²/M)) algorithm for the problem Network, defined on a network with M arcs and N nodes; an O(n log n) algorithm for the Tree problem on n variables; an O(n log n) algorithm for the Nested problem; and a linear-time algorithm for the Generalized Upper Bound problem. These algorithms are the best known so far for these problems. The status of the general problem and open questions are presented as well. This research has been supported in part by ONR grant N00014-91-J-1241.

18.
Global optimization problems involving the minimization of a product of convex functions on a convex set are addressed in this paper. Elements of convex analysis are used to obtain a suitable representation of the convex multiplicative problem in the outcome space, where its global solution is reduced to the solution of a sequence of quasiconcave minimizations on polytopes. Computational experiments illustrate the performance of the global optimization algorithm proposed.

19.
We present a new method for minimizing the sum of a convex function and a product of k nonnegative convex functions over a convex set. This problem is reduced to a k-dimensional quasiconcave minimization problem which is solved by a conical branch-and-bound algorithm. Comparative computational results are provided on test problems from the literature.

20.
Global Optimization Algorithm for the Nonlinear Sum of Ratios Problem
This article presents a branch-and-bound algorithm for globally solving the nonlinear sum of ratios problem (P). The algorithm economizes the required computations by conducting the branch-and-bound search in a space of dimension p, rather than n, where p is the number of ratios in the objective function of problem (P) and n is the number of decision variables in problem (P). To implement the algorithm, the main computations involve solving a sequence of convex programming problems for which standard algorithms are available.
