Related Articles
20 related articles found.
1.
In this paper, we present parallel bundle-based decomposition algorithms to solve a class of structured large-scale convex optimization problems. An example in this class of problems is the block-angular linear programming problem. By dualizing, we transform the original problem to an unconstrained nonsmooth concave optimization problem which is in turn solved by using a modified bundle method. Further, this dual problem consists of a collection of smaller independent subproblems which give rise to the parallel algorithms. We discuss the implementation on the CRYSTAL multi-computer. Finally, we present computational experience with block-angular linear programming problems and observe that more than 70% efficiency can be obtained using up to eleven processors for one group of test problems, and more than 60% efficiency can be obtained for relatively smaller problems using up to five processors for another group of problems.
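As a rough illustration of the dualization step described above, the sketch below (mine, not the authors' code) applies a plain projected-subgradient ascent, rather than a bundle method, to the Lagrangian dual of a tiny block-angular LP; the problem data, the step sizes and the use of scipy.optimize.linprog are all illustrative assumptions. Each block subproblem is an independent LP, which is what makes distribution across processors possible.

import numpy as np
from scipy.optimize import linprog

# Two blocks: min c_k^T x_k  s.t.  A_k x_k <= b_k, x_k >= 0,
# coupled by sum_k D_k x_k <= d (here: x12 + x21 <= 3).  Data is made up.
blocks = [
    dict(c=np.array([-1.0, -2.0]), A=np.array([[1.0, 1.0]]), b=np.array([4.0]),
         D=np.array([[0.0, 1.0]])),
    dict(c=np.array([-3.0, -1.0]), A=np.array([[1.0, 2.0]]), b=np.array([6.0]),
         D=np.array([[1.0, 0.0]])),
]
d = np.array([3.0])          # right-hand side of the coupling constraint
lam = np.zeros(1)            # dual multiplier for the dualized coupling constraint

for it in range(300):
    x_blocks, subgrad = [], -d.copy()
    for blk in blocks:
        # Independent subproblem: min (c_k + D_k^T lam)^T x_k over the block polyhedron.
        res = linprog(blk["c"] + blk["D"].T @ lam, A_ub=blk["A"], b_ub=blk["b"],
                      bounds=[(0, None)] * len(blk["c"]), method="highs")
        x_blocks.append(res.x)
        subgrad += blk["D"] @ res.x        # subgradient of the concave dual function
    lam = np.maximum(0.0, lam + subgrad / (it + 1.0))   # projected ascent step

print("dual multiplier:", lam)
print("block solutions:", x_blocks)

In the paper, it is this inner loop over the blocks that would be distributed over the processors, while the dual update itself is handled by the (more sophisticated) modified bundle method.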

2.
We show the importance of exploiting the complementary convex structure for efficiently solving a wide class of specially structured nonconvex global optimization problems. Roughly speaking, a specific feature of these problems is that their nonconvex nucleus can be transformed into a complementary convex structure which can then be shifted to a subspace of much lower dimension than the original underlying space. This approach leads to quite efficient algorithms for many problems of practical interest, including linear and convex multiplicative programming problems, concave minimization problems with few nonlinear variables, bilevel linear optimization problems, etc.

3.
In this paper, we present a global optimization method for solving nonconvex mixed integer nonlinear programming (MINLP) problems. A convex overestimation of the feasible region is obtained by replacing the nonconvex constraint functions with convex underestimators. For signomial functions, single-variable power and exponential transformations are used to obtain the convex underestimators. For more general nonconvex functions, two versions of the so-called αBB-underestimator, valid for twice-differentiable functions, are integrated into the actual reformulation framework. However, in contrast to what is done in branch-and-bound type algorithms, no direct branching is performed in the actual algorithm. Instead, a piecewise convex reformulation is used to convexify the entire problem in an extended variable space, and the reformulated problem is then solved by a convex MINLP solver. As the piecewise linear approximations are made finer, the solutions to the convexified and overestimated problems form a sequence converging to a global optimal solution. The result is an easily implementable algorithm for solving a very general class of optimization problems.
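To make the αBB idea above concrete, here is a minimal one-dimensional sketch (my own illustration, not the paper's implementation): the underestimator f(x) + α(xL − x)(xU − x) never exceeds f on the box and is convex whenever α is at least half the magnitude of the most negative second derivative there. The example function, the sampled (non-rigorous) bound on the second derivative and the use of scipy.optimize.minimize_scalar are assumptions made for the sketch.

import numpy as np
from scipy.optimize import minimize_scalar

f   = lambda x: np.sin(x) + 0.1 * x**2        # nonconvex on the box [0, 2*pi]
d2f = lambda x: -np.sin(x) + 0.2              # its second derivative
xL, xU = 0.0, 2.0 * np.pi

grid  = np.linspace(xL, xU, 1000)
alpha = max(0.0, -0.5 * d2f(grid).min())      # alpha >= -0.5 * min f'' (sampled, not rigorous, bound)
under = lambda x: f(x) + alpha * (xL - x) * (xU - x)   # alpha-BB convex underestimator

res = minimize_scalar(under, bounds=(xL, xU), method="bounded")
print("alpha:", round(alpha, 3))
print("lower bound from convex underestimator:", round(res.fun, 4))
print("true global minimum (grid check)      :", round(f(grid).min(), 4))
assert np.all(under(grid) <= f(grid) + 1e-12)          # underestimation property on the box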

4.
In this paper, we design a numerical algorithm for solving a simple bilevel program where the lower level program is a nonconvex minimization problem with a convex set constraint. We propose to solve a combined problem in which the first-order condition and the value function are both present in the constraints. Since the value function is in general nonsmooth, the combined problem is in general a nonsmooth and nonconvex optimization problem. We propose a smoothing augmented Lagrangian method for solving a general class of nonsmooth and nonconvex constrained optimization problems. We show that, if the sequence of penalty parameters is bounded, then any accumulation point is a Karush-Kuhn-Tucker (KKT) point of the nonsmooth optimization problem. The smoothing augmented Lagrangian method is used to solve the combined problem. Numerical experiments show that the algorithm is efficient for solving the simple bilevel program.
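For readers unfamiliar with the augmented Lagrangian machinery this method builds on, the following sketch shows a plain (non-smoothing) augmented Lagrangian loop on a small smooth nonconvex problem. It is only a generic illustration of the outer multiplier and penalty updates, not the paper's method; the Rosenbrock-on-a-circle test problem, the penalty schedule and the use of scipy.optimize.minimize are my own assumptions.

import numpy as np
from scipy.optimize import minimize

f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2   # nonconvex objective
h = lambda x: x[0]**2 + x[1]**2 - 1.0                     # equality constraint h(x) = 0

mu, rho = 0.0, 10.0                                       # multiplier and penalty parameter
x = np.array([0.5, 0.5])
for k in range(30):
    aug = lambda z, mu=mu, rho=rho: f(z) + mu * h(z) + 0.5 * rho * h(z)**2   # augmented Lagrangian
    x = minimize(aug, x, method="BFGS").x                 # inner unconstrained solve (warm-started)
    mu += rho * h(x)                                      # first-order multiplier update
    if abs(h(x)) < 1e-8:
        break
    rho *= 2.0                                            # tighten the penalty while still infeasible

print("x* ~", np.round(x, 6), "  h(x*) =", float(h(x)), "  f(x*) =", float(f(x)))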

5.
We study valid inequalities for optimization models that contain both binary indicator variables and separable concave constraints. These models reduce to a mixed-integer linear program (MILP) when the concave constraints are ignored, or to a nonconvex global optimization problem when the binary restrictions are ignored. In algorithms designed to solve these problems to global optimality, cutting planes to strengthen the relaxation are traditionally obtained using valid inequalities for the MILP only. We propose a technique to obtain valid inequalities that are based on both the MILP constraints and the concave constraints. We begin by characterizing the convex hull of a four-dimensional set consisting of a single binary indicator variable, a single concave constraint, and two linear inequalities. Using this analysis, we demonstrate how valid inequalities for the single node flow set and for the lot-sizing polyhedron can be “tilted” to give valid inequalities that also account for separable concave functions of the arc flows. We present computational results demonstrating the utility of the new inequalities for nonlinear transportation problems and for lot-sizing problems with concave costs. To our knowledge, this is one of the first works that simultaneously convexifies both nonconvex functions and binary variables to strengthen the relaxations of practical mixed-integer nonlinear programs.

6.
The subject of this article is a class of global optimization problems, in which the variables can be divided into two groups such that, in each group, the functions involved have the same structure (e.g. linear, convex or concave, etc.). Based on the decomposition idea of Benders (Ref. 1), a corresponding master problem is defined on the space of one of the two groups of variables. The objective function of this master problem is in fact the optimal value function of a nonlinear parametric optimization problem. To solve the resulting master problem, a branch-and-bound scheme is proposed, in which the estimation of the lower bounds is performed by applying the well-known weak duality theorem in Lagrange duality. The results of this article concentrate on two subjects: investigating the convergence of the general algorithm and solving dual problems of some special classes of nonconvex optimization problems. Based on results in sensitivity and stability theory and in parametric optimization, conditions for the convergence are established by investigating the so-called dual properness property and the upper semicontinuity of the objective function of the master problem. The general algorithm is then discussed in detail for some nonconvex problems including concave minimization problems with a special structure, general quadratic problems, optimization problems on the efficient set, and linear multiplicative programming problems.

7.
We provide a unifying geometric framework for the analysis of general classes of duality schemes and penalty methods for nonconvex constrained optimization problems. We present a separation result for nonconvex sets via general concave surfaces. We use this separation result to provide necessary and sufficient conditions for establishing strong duality between geometric primal and dual problems. Using the primal function of a constrained optimization problem, we apply our results both in the analysis of duality schemes constructed using augmented Lagrangian functions, and in establishing necessary and sufficient conditions for the convergence of penalty methods.

8.
李博, 杜杰, 万立娟. 《数学杂志》 (Journal of Mathematics), 2016, 36(4): 851-858
This paper studies convexification methods and optimality conditions for a class of nonconvex optimization problems. By constructing a parameterized function transformation, the objective function, which possesses a sub-positive-definiteness (次正定) property, is convexified, and necessary and sufficient conditions for a global optimal solution of this class of nonconvex optimization problems are obtained, extending the application of convexification methods to solving global optimization problems.

9.
Image space analysis of generalized fractional programs
The solution of a particular nonconvex program is usually very dependent on the structure of the problem. In this paper we identify classes of nonconvex problems involving either sums or products of ratios of linear terms which may be treated by analysis in a transformed space. In each class, the image space is defined by a mapping which associates a new variable with each original ratio of linear terms. In the image space, optimization is easy in certain directions, and the overall solution may be realized by sequentially optimizing in these directions. In addition to these ratio problems, we also show how to use image space analysis to treat the subclass of problems whose objective is to optimize a product of linear terms. For each class of nonconvex problems, we present an algorithm that locates global solutions by computing both upper and lower bounds on the solution and then solving a sequence of linear programming subproblems. We also demonstrate the algorithms described in this paper by solving several example problems.
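A small building block when working in such an image space is bounding each ratio of linear terms over the feasible polytope; a classical way to do this with ordinary linear programs is the Charnes-Cooper transformation, sketched below. This is only a generic illustration (the example data and the scipy.optimize.linprog calls are my own assumptions), not the full algorithm of the paper.

import numpy as np
from scipy.optimize import linprog

def ratio_bounds(c, c0, d, d0, A, b):
    """Min and max of (c.x + c0)/(d.x + d0) over {A x <= b, x >= 0}, assuming the
    denominator is positive there, via the Charnes-Cooper change of variables
    y = t*x, t = 1/(d.x + d0)."""
    A_ub = np.hstack([A, -b.reshape(-1, 1)])          # A y - b t <= 0
    b_ub = np.zeros(A.shape[0])
    A_eq = np.append(d, d0).reshape(1, -1)            # d.y + d0 t = 1 (normalization)
    b_eq = np.array([1.0])
    obj  = np.append(c, c0)
    lo = linprog(obj,  A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, method="highs")
    hi = linprog(-obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, method="highs")
    return lo.fun, -hi.fun

# Illustrative data: bound the ratio (x1 + 2*x2 + 1)/(x1 + x2 + 2) over a small polytope.
A = np.array([[1.0, 1.0], [-1.0, 2.0]])
b = np.array([4.0, 3.0])
print(ratio_bounds(np.array([1.0, 2.0]), 1.0, np.array([1.0, 1.0]), 2.0, A, b))

Bounds of this kind are one natural way to delimit the region over which the new image-space ratio variables range.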

10.
Traditionally, minimum cost transshipment problems have been simplified to linear cost problems, which are not practical in real applications. Recently, some advanced local search algorithms have been developed that can directly solve concave cost bipartite network problems. However, they are not applicable to general transshipment problems. Moreover, the effectiveness of these modified local search algorithms for solving general concave cost transshipment problems is doubtful. In this research, we propose a global search algorithm for solving concave cost transshipment problems. Efficient methods for encoding, generating initial populations, selection, crossover and mutation are proposed, according to the problem characteristics. To evaluate the effectiveness of the proposed global search algorithm, four advanced local search algorithms based on the threshold accepting algorithm, the great deluge algorithm, and the tabu search algorithm are also developed and used for comparison purposes. To assist with the comparison of the proposed algorithms, a randomized network generator is designed to produce test problems. All the tests are performed on a personal computer. The results indicate that the proposed global search algorithm is more effective than the four advanced local search algorithms for solving concave cost transshipment problems.

11.
The purpose of this article is to develop a branch-and-bound algorithm, based on duality bounds, for the general quadratically constrained quadratic programming problem that has the following properties: (i) duality bounds are computed by solving ordinary linear programs; (ii) they are at least as good as the lower bounds obtained by solving relaxed problems, in which each nonconvex function is replaced by its convex envelope; (iii) standard convergence properties of branch-and-bound algorithms for nonconvex global optimization problems are guaranteed. Numerical results of preliminary computational experiments for the case of one quadratic constraint are reported.

12.
A new algorithm to solve nonconvex NLP problems is presented. It is based on the solution of two problems. The reformulated problem RP is a suitable reformulation of the original problem and involves convex terms and concave univariate terms. The main problem MP is a nonconvex NLP that outer-approximates the feasible region and underestimates the objective function. MP involves convex terms and terms which are the products of concave univariate functions and new variables. Fixing the variables in the concave terms, a convex NLP that overestimates the feasible region and underestimates the objective function is obtained from the MP. Like most deterministic global optimization algorithms, bounds on all the variables in the nonconvex terms must be provided. MP forces the objective value to improve and minimizes the difference between the upper and lower bounds of all the variables, either to zero or to a positive value. In the first case, a feasible solution of the original problem is reached and the objective function is improved. In general terms, the second case corresponds to an infeasible solution of the original problem due to the existence of gaps in some variables. A branching procedure is performed in order either to prove that there is no better solution or to reduce the domain, eliminating the local solution of MP that was found. The MP solution indicates a key point at which to branch. A bound reduction technique is implemented to accelerate convergence. Computational results demonstrate that the algorithm compares very favorably to other approaches when applied to test problems and process design problems. It is typically faster and produces very accurate results.

13.
We demonstrate how the size of certain global optimization problems can be substantially reduced by using dualization and polyhedral annexation techniques. The results are applied to develop efficient algorithms for solving concave minimization problems with a low degree of nonlinearity. This class includes, in particular, nonconvex optimization problems involving products or quotients of affine functions in the objective function. This work was completed while the author was visiting the Department of Mathematics of Linköping University.

14.
Robinson has proposed the bundle-based decomposition algorithm to solve a class of structured large-scale convex optimization problems. In this method, the original problem is transformed (by dualization) to an unconstrained nonsmooth concave optimization problem which is in turn solved by using a modified bundle method. In this paper, we give a posteriori error estimates on the approximate primal optimal solution and on the duality gap. We describe the implementation and present computational experience with a special case of this class of problems, namely, block-angular linear programming problems. We observe that the method is efficient in obtaining the approximate optimal solution and compares favorably with MINOS and an advanced implementation of the Dantzig-Wolfe decomposition method.

15.
A standard quadratic programming problem (StQP) consists of minimizing a (nonconvex) quadratic form over the standard simplex. For solving a StQP we present an exact and a heuristic algorithm, both based on new theoretical results for quadratic and convex optimization problems. With these results, a StQP is reduced to a constrained nonlinear minimum weight clique problem in an associated graph. Such a clique problem, which does not seem to have been studied before, is then solved with an exact and a heuristic algorithm. Some computational experience shows that our algorithms are able to solve StQP instances at least one order of magnitude larger than those reported in the literature.
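As a point of comparison for such exact and heuristic approaches, a very simple classical heuristic for StQPs is replicator dynamics: shift the matrix so it becomes entrywise positive (which does not change the minimizers over the simplex) and iterate the discrete replicator equation. The sketch below is my own generic illustration, not the clique-based algorithms of the paper; the random test matrix, the shift and the iteration count are assumptions.

import numpy as np

rng = np.random.default_rng(0)
n = 30
Q = rng.standard_normal((n, n))
Q = 0.5 * (Q + Q.T)                 # symmetric, generally indefinite quadratic form

A = Q.max() + 1.0 - Q               # entrywise positive; on the simplex x'Ax = const - x'Qx
x = np.full(n, 1.0 / n)             # start at the barycentre of the simplex
for _ in range(2000):
    Ax = A @ x
    x = x * Ax / (x @ Ax)           # replicator step: stays nonnegative and sums to one

print("heuristic StQP value x'Qx:", float(x @ Q @ x))
print("simplex residual         :", abs(float(x.sum()) - 1.0))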

16.
Forward–backward and Douglas–Rachford splitting are methods for structured nonsmooth optimization. With the aim of using smooth optimization techniques for nonsmooth problems, the forward–backward and Douglas–Rachford envelopes were recently proposed. Under specific problem assumptions, these envelope functions have favorable smoothness and convexity properties and their stationary points coincide with the fixed points of the underlying algorithm operators. This allows such nonsmooth optimization problems to be solved by minimizing the corresponding smooth convex envelope function. In this paper, we present a general envelope function that unifies and generalizes the existing ones. We provide properties of the general envelope function that sharpen corresponding known results for the special cases. We also present a new interpretation of the underlying methods as majorization–minimization algorithms applied to their respective envelope functions.
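To ground the discussion, the sketch below runs the classical forward–backward (proximal gradient) iteration on a small lasso problem and evaluates the forward–backward envelope at the iterates; it is only the textbook special case, written by me for illustration, not the generalized envelope of the paper. The random data, regularization weight and step size are assumptions.

import numpy as np

rng = np.random.default_rng(1)
A, b, lam = rng.standard_normal((40, 100)), rng.standard_normal(40), 0.1

f      = lambda x: 0.5 * np.sum((A @ x - b)**2)            # smooth part
grad_f = lambda x: A.T @ (A @ x - b)
g      = lambda x: lam * np.sum(np.abs(x))                 # nonsmooth part
prox_g = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)   # soft threshold

gamma = 0.9 / np.linalg.norm(A, 2)**2                      # step below 1/L with L = ||A||_2^2

def fbe(x):
    # Forward-backward envelope: f(x) - (gamma/2)*||grad f(x)||^2 plus the Moreau
    # envelope of g evaluated at the forward point z = x - gamma*grad f(x).
    z = x - gamma * grad_f(x)
    xp = prox_g(z, gamma)
    return f(x) - 0.5 * gamma * np.sum(grad_f(x)**2) + g(xp) + np.sum((xp - z)**2) / (2 * gamma)

x = np.zeros(100)
for k in range(301):
    if k % 100 == 0:
        print(f"iter {k:3d}   f+g = {f(x) + g(x):10.4f}   FBE = {fbe(x):10.4f}")
    x = prox_g(x - gamma * grad_f(x), gamma)               # forward-backward step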

17.
Three parallel space-decomposition minimization (PSDM) algorithms, based on the parallel variable transformation (PVT) and the parallel gradient distribution (PGD) algorithms (O.L. Mangasarian, SIAM Journal on Control and Optimization, vol. 33, no. 6, pp. 1916–1925), are presented for solving convex or nonconvex unconstrained minimization problems. The PSDM algorithms decompose the variable space into subspaces and distribute these decomposed subproblems among parallel processors. It is shown that if all decomposed subproblems are uncoupled from each other, they can be solved independently. Otherwise, the parallel algorithms presented in this paper can be used. Numerical experiments show that these parallel algorithms can save processor time, particularly for medium- and large-scale problems. Up to six parallel processors are connected by Ethernet networks to solve four large-scale minimization problems. The results are compared with those obtained by using sequential algorithms run on a single processor. An application of the PSDM algorithms to the training of multilayer Adaptive Linear Neurons (Madaline) and a new parallel architecture for such parallel training are also presented.
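The uncoupled case mentioned above is easy to picture: when the objective is separable across the decomposed subspaces, each subproblem can be minimized independently and the full-space solution reassembled afterwards. The sketch below illustrates only that case, with threads standing in for the parallel processors; the separable test function, the block split and the use of scipy.optimize.minimize are my own assumptions, not the paper's setup.

from concurrent.futures import ThreadPoolExecutor
import numpy as np
from scipy.optimize import minimize

def subproblem(z):
    # One independent two-variable block (a Rosenbrock term); blocks do not interact.
    return (1 - z[0])**2 + 100 * (z[1] - z[0]**2)**2

starts = [np.array([-1.2, 1.0]), np.array([0.0, 0.0]), np.array([2.0, 2.0])]

with ThreadPoolExecutor() as pool:
    results = list(pool.map(lambda z0: minimize(subproblem, z0, method="BFGS"), starts))

x_star = np.concatenate([r.x for r in results])   # reassemble the full-space solution
print("objective:", sum(r.fun for r in results))
print("x* ~", np.round(x_star, 4))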

18.
This paper presents a canonical duality theory for solving quadratic minimization problems subject to either box or integer constraints. Results show that under Gao and Strang's general global optimality condition, these well-known nonconvex and discrete problems can be converted into smooth concave maximization dual problems over closed convex feasible spaces without a duality gap, and can be solved by well-developed optimization methods. Both existence and uniqueness of these canonical dual solutions are presented. Based on a second-order canonical dual perturbation, the discrete integer programming problem is shown to be equivalent to a continuous unconstrained Lipschitzian optimization problem, which can be solved by certain deterministic techniques. In particular, an analytical solution is obtained under certain conditions. A fourth-order canonical dual perturbation algorithm is presented and applications are illustrated. Finally, the implications of the canonical duality theory for the popular semidefinite programming method are revealed.

19.
The self-scaling quasi-Newton method solves an unconstrained optimization problem by scaling the Hessian approximation matrix before it is updated at each iteration, in order to avoid possibly large eigenvalues in the Hessian approximation matrices of the objective function. It has been proved in the literature that this method has global and superlinear convergence when the objective function is convex (or even uniformly convex). We propose to solve unconstrained nonconvex optimization problems by a self-scaling BFGS algorithm with nonmonotone line search. Nonmonotone line search has been recognized in numerical practice as a competitive approach for solving large-scale nonlinear problems. We consider two different nonmonotone line search forms and study the global convergence of these nonmonotone self-scaling BFGS algorithms. We prove that, under a condition weaker than that in the literature, both forms of the self-scaling BFGS algorithm are globally convergent for unconstrained nonconvex optimization problems.
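For concreteness, here is a compact sketch of one common self-scaling BFGS variant combined with a max-type nonmonotone Armijo line search; the Oren-Luenberger scaling factor, the memory length, the backtracking schedule and the Rosenbrock test function are illustrative choices of mine and not necessarily the two line search forms analyzed in the paper.

import numpy as np

def self_scaling_bfgs(f, grad, x0, iters=200, memory=10, c1=1e-4):
    x = np.asarray(x0, dtype=float)
    n = x.size
    H = np.eye(n)                                    # inverse Hessian approximation
    g = grad(x)
    hist = [f(x)]                                    # recent values for the nonmonotone test
    for _ in range(iters):
        p = -H @ g
        f_ref, alpha = max(hist[-memory:]), 1.0      # nonmonotone Armijo reference value
        while f(x + alpha * p) > f_ref + c1 * alpha * (g @ p) and alpha > 1e-12:
            alpha *= 0.5                             # backtracking
        x_new = x + alpha * p
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        if sy > 1e-12:                               # curvature condition keeps H positive definite
            tau = sy / (y @ H @ y)                   # Oren-Luenberger self-scaling factor
            rho = 1.0 / sy
            V = np.eye(n) - rho * np.outer(s, y)
            H = tau * (V @ H @ V.T) + rho * np.outer(s, s)   # scaled BFGS update of H
        x, g = x_new, g_new
        hist.append(f(x))
        if np.linalg.norm(g) < 1e-8:
            break
    return x, f(x)

# Illustrative nonconvex test: the two-dimensional Rosenbrock function.
f    = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
print(self_scaling_bfgs(f, grad, [-1.2, 1.0]))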

20.
It is shown that, for very general classes of nonconvex global optimization problems, the duality gap obtained by solving a corresponding Lagrangian dual is reduced to zero in the limit when this dualization is combined with suitably refined partitioning of the feasible set. A similar result holds for partly convex problems where exhaustive partitioning is applied only in the space of nonconvex variables. Applications include branch-and-bound approaches for linearly constrained problems where convex envelopes can be computed, certain generalized bilinear problems, linearly constrained optimization of the sum of ratios of affine functions, and concave minimization under reverse convex constraints.
