Similar literature
A total of 20 similar documents were found.
1.
One of the challenging optimization problems is determining the minimizer of a nonlinear programming problem that has binary variables. A vexing difficulty is the rate at which the work required to solve such problems grows as the number of discrete variables increases. Any such problem with bounded discrete variables, especially binary variables, may be transformed to that of finding a global optimum of a problem in continuous variables. However, the transformed problems usually have astronomically large numbers of local minimizers, making them harder to solve than typical global optimization problems. Despite this apparent disadvantage, we show that the approach is not futile if we use smoothing techniques. The method we advocate first convexifies the problem and then solves a sequence of subproblems, whose solutions form a trajectory that leads to the solution. To illustrate how well the algorithm performs, we report computational results of applying it to problems taken from the literature and to new test problems with known optimal solutions.
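As a hedged illustration of the continuous reformulation idea only (not the authors' convexification and trajectory-following method), the sketch below relaxes x in {0,1}^n to the box [0,1]^n and adds the penalty mu * sum x_i(1 - x_i), which vanishes exactly at binary points; the penalty weight is increased over a sequence of subproblems, each warm-started from the previous solution. The toy objective, the schedule of mu values, and the use of L-BFGS-B are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def continuation_binary(f, grad, n, mus=(0.0, 1.0, 10.0, 100.0), x0=None, seed=0):
    """Solve a sequence of box-constrained relaxations of min f(x), x in {0,1}^n.

    Each subproblem adds the concave penalty mu * sum(x_i * (1 - x_i)), which is
    zero exactly at binary points; increasing mu traces a trajectory of solutions,
    each warm-started from the previous one."""
    rng = np.random.default_rng(seed)
    x = rng.random(n) if x0 is None else np.asarray(x0, dtype=float)
    bounds = [(0.0, 1.0)] * n
    for mu in mus:
        obj = lambda z, mu=mu: f(z) + mu * np.sum(z * (1.0 - z))
        jac = lambda z, mu=mu: grad(z) + mu * (1.0 - 2.0 * z)
        x = minimize(obj, x, jac=jac, bounds=bounds, method="L-BFGS-B").x
    return np.round(x)  # final rounding to the nearest binary point

# Assumed toy instance: a separable quadratic whose continuous minimizer is fractional.
c = np.array([0.3, 0.8, 0.45, 0.6])
f = lambda x: np.sum((x - c) ** 2)
grad = lambda x: 2.0 * (x - c)
print(continuation_binary(f, grad, n=4))  # expected: [0. 1. 0. 1.]
```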

2.
We propose primal–dual path-following Mehrotra-type predictor–corrector methods for solving convex quadratic semidefinite programming (QSDP) problems of the form min { (1/2)⟨X, Q(X)⟩ + ⟨C, X⟩ : A(X) = b, X ⪰ 0 }, where Q is a self-adjoint positive semidefinite linear operator on the space of n × n symmetric matrices, b ∈ R^m, and A is a linear map from that space to R^m. At each interior-point iteration, the search direction is computed from a dense symmetric indefinite linear system (called the augmented equation) of dimension m + n(n + 1)/2. Such linear systems are typically very large and can only be solved by iterative methods. We propose three classes of preconditioners for the augmented equation, and show that the corresponding preconditioned matrices have favorable asymptotic eigenvalue distributions for fast convergence under suitable nondegeneracy assumptions. Numerical experiments on a variety of QSDPs with n up to 1600 are performed and the computational results show that our methods are efficient and robust. Research supported in part by Academic Research Grant R146-000-076-112.

3.
In this paper, we consider the problem of computing an optimal branch decomposition of a graph. Branch decompositions and branchwidth were introduced by Robertson and Seymour in their series of papers that proved the Graph Minors Theorem. Branch decompositions have proven to be useful in solving many NP-hard problems, such as the traveling salesman, independent set, and ring routing problems, by means of combinatorial algorithms that operate on branch decompositions. We develop an implicit enumeration algorithm for the optimal branch decomposition problem and examine its performance on a set of classical graph instances.

4.
5.
Ant colony optimization is a metaheuristic that has been applied to a variety of combinatorial optimization problems. In this paper, an ant colony optimization approach is proposed to deal with the multidimensional knapsack problem. It is an extension of the MAX-MIN Ant System, which imposes lower and upper trail limits on pheromone values to avoid stagnation. In order to choose the lower trail limit, we provide a new method which takes into account the influence of heuristic information. Furthermore, a local search procedure is proposed to improve the solutions constructed by ants. Computational experiments on benchmark problems are carried out. The results show that the proposed algorithm can compete efficiently with other promising approaches to the problem.
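A minimal sketch, assuming fixed trail limits and a profit-per-weight heuristic, of a MAX-MIN-style ant construction for the multidimensional knapsack problem; the paper's rule for choosing the lower trail limit and its local search are omitted, and the parameter values and tiny instance are invented for illustration.

```python
import numpy as np

def mmas_mkp(profits, weights, capacities, n_ants=20, n_iter=200,
             rho=0.98, alpha=1.0, beta=2.0, seed=0):
    """MAX-MIN-style ant system sketch for the multidimensional knapsack problem."""
    rng = np.random.default_rng(seed)
    n = len(profits)
    # Heuristic information: profit per unit of aggregate resource consumption.
    eta = profits / (weights.sum(axis=0) + 1e-12)
    tau_max, tau_min = 1.0, 0.01          # fixed trail limits for this sketch
    tau = np.full(n, tau_max)
    best_x, best_val = np.zeros(n, dtype=int), 0.0

    for _ in range(n_iter):
        iter_best_x, iter_best_val = None, -1.0
        for _ in range(n_ants):
            x = np.zeros(n, dtype=int)
            load = np.zeros_like(capacities, dtype=float)
            candidates = set(range(n))
            while candidates:
                feas = [j for j in candidates
                        if np.all(load + weights[:, j] <= capacities)]
                if not feas:
                    break
                score = (tau[feas] ** alpha) * (eta[feas] ** beta)
                j = int(rng.choice(feas, p=score / score.sum()))
                x[j] = 1
                load += weights[:, j]
                candidates.remove(j)
            val = float(profits @ x)
            if val > iter_best_val:
                iter_best_x, iter_best_val = x, val
        if iter_best_val > best_val:
            best_x, best_val = iter_best_x, iter_best_val
        # Evaporate and let only the iteration-best ant deposit pheromone.
        tau = np.clip(rho * tau + iter_best_x * (iter_best_val / profits.sum()),
                      tau_min, tau_max)
    return best_x, best_val

# Tiny assumed instance: 2 resources, 5 items.
profits = np.array([10., 13., 7., 8., 6.])
weights = np.array([[3., 4., 2., 3., 1.],   # resource 1 consumption
                    [2., 3., 3., 1., 2.]])  # resource 2 consumption
capacities = np.array([8., 6.])
print(mmas_mkp(profits, weights, capacities))
```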

6.
For solving large-scale unconstrained minimization problems, the nonlinear conjugate gradient method is attractive due to its simplicity, low storage requirements, efficiency, and good convergence properties. Among the methods in this framework, the conjugate gradient descent algorithm CG_DESCENT is very popular: the directions it generates are automatically descent directions, and this property is independent of the line search used. In this paper, we generalize CG_DESCENT with two Barzilai–Borwein steplengths reused cyclically. We show that the resulting algorithm retains the attractive sufficient descent property and converges globally under mild conditions. We test the proposed algorithm on a large set of high-dimensional unconstrained problems from the CUTEr library. Numerical comparisons with the state-of-the-art algorithm CG_DESCENT show that the proposed method is effective, competitive, and promising.
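A hedged sketch of the ingredients mentioned above: the Hager-Zhang (CG_DESCENT) search direction, whose beta is truncated from below, driven by a Barzilai-Borwein trial steplength that is reused cyclically. The plain Armijo backtracking safeguard and the toy test function are assumptions, not the paper's exact steplength scheme.

```python
import numpy as np

def cg_descent_bb(f, grad, x0, cycle=4, tol=1e-6, max_iter=2000):
    """Sketch: Hager-Zhang (CG_DESCENT) direction with a cyclically reused
    Barzilai-Borwein trial step and an Armijo backtracking safeguard."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    alpha_bb = 1.0 / max(np.linalg.norm(g), 1.0)
    for k in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        gd = g @ d
        if gd >= 0.0:                      # numerical safeguard: restart with steepest descent
            d, gd = -g, -(g @ g)
        alpha, fx = alpha_bb, f(x)
        for _ in range(60):                # Armijo backtracking from the BB trial step
            if f(x + alpha * d) <= fx + 1e-4 * alpha * gd:
                break
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        dy = d @ y
        if abs(dy) > 1e-12:                # Hager-Zhang beta with the usual lower truncation
            beta_hz = ((y - 2.0 * d * (y @ y) / dy) @ g_new) / dy
            beta = max(beta_hz, -1.0 / (np.linalg.norm(d) * min(0.01, np.linalg.norm(g))))
        else:
            beta = 0.0
        d = -g_new + beta * d
        if k % cycle == 0 and s @ y > 1e-12:
            alpha_bb = (s @ s) / (s @ y)   # BB1 step, reused for the next `cycle` iterations
        x, g = x_new, g_new
    return x, f(x)

# Toy test (an assumption, not a CUTEr run): 10-dimensional extended Rosenbrock.
def rosen(x):
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

def rosen_grad(x):
    g = np.zeros_like(x)
    g[:-1] = -400.0 * x[:-1] * (x[1:] - x[:-1] ** 2) - 2.0 * (1.0 - x[:-1])
    g[1:] += 200.0 * (x[1:] - x[:-1] ** 2)
    return g

print(cg_descent_bb(rosen, rosen_grad, np.zeros(10))[1])  # far below the starting value f(0) = 9
```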

7.
In this paper, we present an interior-point path-following algorithm for computing a Leontief economy equilibrium, that is, an exchange market equilibrium with Leontief utility functions, a problem known to be PPAD-complete. It is known that an equilibrium corresponds to a solution of a system of complementarities, so we construct a smooth homotopy interior-point path to tackle this system. We prove that there always exists a continuously differentiable path leading to a complementary solution of the nonlinear system and, at the same time, to a Leontief economy equilibrium associated with that solution. We also report preliminary computational results to show the effectiveness of the path-following Newton method.

8.
《Optimization》2012,61(12):1457-1471
A modified Polak–Ribière–Polyak conjugate gradient algorithm which satisfies both the sufficient descent condition and the conjugacy condition is presented. These properties are independent of the line search. The algorithm uses the standard Wolfe line search. Under standard assumptions, we show the global convergence of the algorithm. Numerical comparisons with conjugate gradient algorithms on a set of 750 unconstrained optimization problems, some of them from the CUTE library, show that this computational scheme outperforms the known Polak–Ribière–Polyak algorithm, as well as some other unconstrained optimization algorithms.
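A minimal sketch of one well-known descent-modified PRP variant, the three-term direction of Zhang et al., which satisfies g'd = -||g||^2 independently of the line search; the article's own modification, which additionally enforces the conjugacy condition, may differ in detail. The Wolfe line search uses scipy.optimize.line_search, and the test function is an assumption.

```python
import numpy as np
from scipy.optimize import line_search

def three_term_prp(f, grad, x0, tol=1e-6, max_iter=1000):
    """Descent-modified PRP sketch: the three-term direction guarantees
    g'd = -||g||^2 regardless of the steplength; Wolfe conditions are
    enforced with scipy's line_search."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        alpha = line_search(f, grad, x, d, gfk=g, c1=1e-4, c2=0.4)[0]
        if alpha is None:                  # Wolfe search failed: fall back to a small step
            alpha = 1e-4
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g
        gg = g @ g
        beta = (g_new @ y) / gg            # PRP coefficient
        theta = (g_new @ d) / gg
        d = -g_new + beta * d - theta * y  # three-term direction: g_new @ d == -||g_new||^2
        x, g = x_new, g_new
    return x, f(x)

# Toy smooth nonconvex test in 20 variables (an assumption, not one of the 750 problems).
fun = lambda x: np.sum(x ** 2) + 3.0 * np.sin(np.sum(x))
gfun = lambda x: 2.0 * x + 3.0 * np.cos(np.sum(x)) * np.ones_like(x)
print(three_term_prp(fun, gfun, np.full(20, 0.3))[1])
```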

9.
First principles approaches to the protein structure prediction problem must search through an enormous conformational space to identify low-energy, near-native structures. In this paper, we describe the formulation of the tertiary structure prediction problem as a nonlinear constrained minimization problem, where the goal is to minimize the energy of a protein conformation subject to constraints on torsion angles and interatomic distances. The core of the proposed algorithm is a hybrid global optimization method that combines the benefits of the αBB deterministic global optimization approach with conformational space annealing. These global optimization techniques employ a local minimization strategy that combines torsion angle dynamics and rotamer optimization to identify and improve the selection of initial conformations, and then applies a sequential quadratic programming approach to further minimize the energy of the protein conformations subject to constraints. The proposed algorithm demonstrates the ability to identify both lower-energy protein structures and larger ensembles of low-energy conformations.

10.
We study the applicability of the Peaceman–Rachford (PR) splitting method for solving nonconvex optimization problems. When applied to minimizing the sum of a strongly convex Lipschitz differentiable function and a proper closed function, we show that if the strongly convex function has a large enough strong convexity modulus and the step-size parameter is chosen below a computable threshold, then any cluster point of the generated sequence, if one exists, gives a stationary point of the optimization problem. We also give sufficient conditions guaranteeing boundedness of the sequence generated. We then discuss one way to split the objective so that the proposed method can be suitably applied to solving optimization problems with a coercive objective that is the sum of a (not necessarily strongly) convex Lipschitz differentiable function and a proper closed function; this setting covers a large class of nonconvex feasibility problems and constrained least squares problems. Finally, we illustrate the proposed algorithm numerically.
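A hedged sketch of the Peaceman-Rachford iteration z <- R_g(R_f(z)) with R_h = 2 prox_{gamma h} - I, applied to a toy sparse least-squares problem: f is a strongly convex regularized least-squares term and g is the (nonconvex) indicator of an l0 ball, whose prox is hard thresholding. The step-size, the strong convexity modulus, and the data are assumptions; the paper's threshold condition is not checked here.

```python
import numpy as np

def prox_f(v, A, b, mu, gamma):
    """Prox of f(x) = 0.5*||Ax - b||^2 + 0.5*mu*||x||^2 (strongly convex, smooth)."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + (mu + 1.0 / gamma) * np.eye(n),
                           A.T @ b + v / gamma)

def prox_g(v, s):
    """Prox of the indicator of {x : ||x||_0 <= s}: keep the s largest entries."""
    x = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-s:]
    x[idx] = v[idx]
    return x

def peaceman_rachford(A, b, mu=1.0, gamma=0.1, s=3, n_iter=300):
    """Peaceman-Rachford splitting sketch: z <- (2 prox_g - I)(2 prox_f - I)(z)."""
    z = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = prox_f(z, A, b, mu, gamma)
        w = 2.0 * x - z                 # reflection through the prox of f
        y = prox_g(w, s)
        z = 2.0 * y - w                 # reflection through the prox of g
    return y                            # cluster points are stationary under the paper's conditions

# Assumed toy data: noisy measurements of a 3-sparse vector.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[[1, 4, 7]] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(30)
print(np.round(peaceman_rachford(A, b), 2))
```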

11.
The subgradient method is both a heavily employed and widely studied algorithm for non-differentiable optimization. Nevertheless, there are some basic properties of subgradient optimization that, while “well known” to specialists, seem to be rather poorly known in the larger optimization community. This note concerns two such properties, both applicable to subgradient optimization using the divergent series steplength rule. The first involves convergence of the iterative process, and the second deals with the construction of primal estimates when subgradient optimization is applied to maximize the Lagrangian dual of a linear program. The two topics are related in that convergence of the iterates is required to prove correctness of the primal construction scheme. Dedicated to B.T. Polyak on the occasion of his 70th birthday.
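A minimal sketch of subgradient optimization with the divergent series steplength rule alpha_k = a/(k+1) (alpha_k -> 0, sum alpha_k = infinity), together with the running average of iterates as a simple primal estimate; the note's construction for Lagrangian duals of linear programs is more specific than this. The toy l1 objective is an assumption.

```python
import numpy as np

def subgradient_divergent_series(subgrad, x0, a=1.0, n_iter=5000):
    """Plain subgradient method with the divergent series steplength
    alpha_k = a / (k + 1): steps vanish but their sum diverges, which
    drives the iterates toward a minimizer for convex objectives."""
    x = np.asarray(x0, dtype=float)
    x_avg = x.copy()
    for k in range(n_iter):
        g = subgrad(x)
        x = x - (a / (k + 1)) * g
        x_avg = (k * x_avg + x) / (k + 1)   # running average: a simple primal estimate
    return x, x_avg

# Toy nondifferentiable objective (an assumption): f(x) = ||x - c||_1.
c = np.array([1.0, -2.0, 0.5])
subgrad = lambda x: np.sign(x - c)          # a subgradient of the l1 distance
x_last, x_avg = subgradient_divergent_series(subgrad, np.zeros(3))
print(np.round(x_last, 3), np.round(x_avg, 3))  # both approach c = (1, -2, 0.5)
```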

12.
We introduce an entropy-like proximal algorithm for the problem of minimizing a closed proper convex function subject to symmetric cone constraints. The algorithm is based on a distance-like function that extends the Kullback-Leibler relative entropy to the setting of symmetric cones. As with proximal algorithms for convex programming with nonnegative orthant cone constraints, we show that, under some mild assumptions, the sequence generated by the proposed algorithm is bounded and every accumulation point is a solution of the considered problem. In addition, we present a dual application of the proposed algorithm to the symmetric cone linear program, leading to a multiplier method which is shown to possess properties similar to those of the exponential multiplier method (Tseng and Bertsekas, Math. Program. 60:1–19, 1993).

13.
In this paper, we introduce an additional constraint to the one-dimensional variable sized bin packing problem. In practice, some items have to be packed separately in different bins due to specific requirements, so these items are labelled with different types. A bin can accept items of any type while it is empty, but thereafter only items of the same type as those it already contains. We model the problem as a type-constrained and variable sized bin packing problem (TVSBPP) and solve it via a branch and bound method. An efficient backtracking procedure is proposed to improve the efficiency of the algorithm.
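A hedged greedy illustration of the type constraint only (not the paper's branch and bound): bins are opened in one of several available sizes, an empty bin accepts any item type, and a used bin accepts only its own type. The first-fit-decreasing rule, the bin-size choice, and the instance are assumptions.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Bin:
    capacity: float
    item_type: Optional[str] = None            # fixed by the first item packed
    used: float = 0.0
    items: List[Tuple[str, float]] = field(default_factory=list)

    def fits(self, size: float, typ: str) -> bool:
        return self.item_type in (None, typ) and self.used + size <= self.capacity

    def add(self, size: float, typ: str) -> None:
        self.item_type = typ
        self.used += size
        self.items.append((typ, size))

def first_fit_decreasing_tvsbpp(items, bin_sizes):
    """Greedy first-fit-decreasing heuristic for the TVSBPP type constraint:
    an empty bin accepts any type, a used bin only its own type; when no open
    bin fits, a new bin of the smallest sufficient capacity is opened."""
    bins = []
    for typ, size in sorted(items, key=lambda it: -it[1]):
        target = next((b for b in bins if b.fits(size, typ)), None)
        if target is None:
            cap = min((c for c in bin_sizes if c >= size), default=None)
            if cap is None:
                raise ValueError(f"item {(typ, size)} fits in no available bin size")
            target = Bin(cap)
            bins.append(target)
        target.add(size, typ)
    return bins

# Assumed toy instance: two item types, bins available in capacities 6 and 10.
items = [("A", 4), ("B", 5), ("A", 3), ("B", 2), ("A", 6), ("B", 4)]
for b in first_fit_decreasing_tvsbpp(items, bin_sizes=[6, 10]):
    print(b.capacity, b.item_type, b.items)
```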

14.
A branch and bound global optimization method, αBB, for general continuous optimization problems involving nonconvexities in the objective function and/or constraints is presented. The nonconvexities are categorized as being either of special structure or generic. A convex relaxation of the original nonconvex problem is obtained by (i) replacing all nonconvex terms of special structure (i.e. bilinear, fractional, signomial) with customized tight convex lower bounding functions and (ii) utilizing the α parameter as defined in [17] to underestimate nonconvex terms of generic structure. The proposed branch and bound type algorithm attains finite convergence to the global minimum through the successive subdivision of the original region and the subsequent solution of a series of nonlinear convex minimization problems. The global optimization method αBB is implemented in C and tested on a variety of example problems.
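A sketch of the α underestimation step for generic nonconvex terms: L(x) = f(x) + α Σ_i (x_i^L − x_i)(x_i^U − x_i) is convex on the box whenever α ≥ max{0, −(1/2) λ_min(∇²f(x))} over the box. αBB obtains a rigorous α (e.g., via interval arithmetic on the Hessian); the sketch below merely samples the Hessian, so its α is only an estimate. A bilinear term is used here only as an easy twice-differentiable example; in αBB itself bilinear terms would receive the customized convex envelope instead.

```python
import numpy as np

def alpha_from_sampled_hessian(hess, lo, hi, n_samples=200, seed=0):
    """Estimate alpha >= max(0, -0.5 * lambda_min(H(x))) over the box.
    alphaBB computes a rigorous bound (e.g., interval arithmetic); here we
    only sample the box, so the result is merely an estimate."""
    rng = np.random.default_rng(seed)
    lam_min = np.inf
    for _ in range(n_samples):
        x = lo + rng.random(len(lo)) * (hi - lo)
        lam_min = min(lam_min, np.linalg.eigvalsh(hess(x)).min())
    return max(0.0, -0.5 * lam_min)

def underestimator(f, alpha, lo, hi):
    """Convex lower bounding function L(x) = f(x) + alpha * sum((lo - x) * (hi - x)).
    The added term is nonpositive on the box and vanishes at its vertices."""
    return lambda x: f(x) + alpha * np.sum((lo - x) * (hi - x))

# Toy nonconvex term (an assumption): f(x, y) = x*y on the box [-1, 1] x [-1, 1].
f = lambda x: x[0] * x[1]
hess = lambda x: np.array([[0.0, 1.0], [1.0, 0.0]])
lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
alpha = alpha_from_sampled_hessian(hess, lo, hi)
L = underestimator(f, alpha, lo, hi)
print(alpha, L(np.zeros(2)))   # alpha = 0.5; L(0, 0) = 0 + 0.5 * (-1 - 1) = -1.0 <= f(0, 0)
```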

15.
Despite the lack of theoretical and practical convergence support, the Nelder–Mead (NM) algorithm is widely used to solve unconstrained optimization problems. It is a derivative-free algorithm that iteratively attempts to replace the worst point of a simplex with a better one. The present paper proposes a way to extend the NM algorithm to inequality constrained optimization. This is done through a search step of the mesh adaptive direct search (Mads) algorithm, inspired by the NM algorithm. The proposed algorithm does not suffer from the NM lack of convergence, but instead inherits the full Mads convergence analysis. Numerical experiments show a significant improvement in the quality of the solutions produced using this search step.

16.
INDSCAL (INdividual Differences SCALing) is a useful technique for investigating both common and unique aspects of K similarity data matrices. The model postulates a common stimulus configuration in a low-dimensional Euclidean space, while representing differences among the K data matrices by differential weighting of dimensions by different data sources. Since Carroll and Chang proposed their algorithm for INDSCAL, several issues have been raised: non-symmetric solutions, negative saliency weights, and the degeneracy problem. Orthogonal INDSCAL (O-INDSCAL) which imposes orthogonality constraints on the matrix of stimulus configuration has been proposed to overcome some of these difficulties. Two algorithms have been proposed for O-INDSCAL, one by Ten Berge, Knol, and Kiers, and the other by Trendafilov. In this paper, an acceleration technique called minimal polynomial extrapolation is incorporated in Ten Berge et al.’s algorithm. Simulation studies are conducted to compare the performance of the three algorithms (Ten Berge et al.’s original algorithm, the accelerated algorithm, and Trendafilov’s). Possible extensions of the accelerated algorithm to similar situations are also suggested.

17.
We present an algorithm that determines Sequential Tail Value at Risk (STVaR) for path-independent payoffs in a binomial tree. STVaR is a dynamic version of Tail-Value-at-Risk (TVaR) characterized by the property that risk levels at any moment must be in the range of risk levels later on. The algorithm consists of a finite sequence of backward recursions that is guaranteed to arrive at the solution of the corresponding dynamic optimization problem. The algorithm makes concrete how STVaR differs from TVaR over the remaining horizon, and from recursive TVaR, which amounts to Dynamic Programming. Algorithmic aspects are compared with the cutting-plane method. Time consistency and comonotonicity properties are illustrated by applying the algorithm on elementary examples.

18.
Grimmer, Benjamin; Lu, Haihao; Worah, Pratik; Mirrokni, Vahab 《Mathematical Programming》2023,201(1-2):373-407
Mathematical Programming - Minimax optimization has become a central tool in machine learning with applications in robust optimization, reinforcement learning, GANs, etc. These applications are...

19.
In this paper, a modification of the BFGS algorithm for unconstrained nonconvex optimization is proposed. The idea of the algorithm is to modify the approximate Hessian matrix so as to obtain a descent direction and to guarantee the effectiveness of the new quasi-Newton iteration equation B_{k+1}s_k = y_k^*, where y_k^* = y_k + t_k‖g(x_k)‖s_k. The global convergence of the algorithm under a general line search rule is proved.
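A hedged sketch of a BFGS update built from the modified secant pair B_{k+1}s_k = y_k^* with y_k^* = y_k + t_k‖g(x_k)‖s_k; the choice of t_k below (a Li-Fukushima-style rule that forces s_k'y_k^* > 0), the Armijo backtracking line search, and the toy double-well test function are assumptions and need not match the paper's exact rules.

```python
import numpy as np

def modified_bfgs(f, grad, x0, tol=1e-6, max_iter=500):
    """Modified BFGS sketch for nonconvex problems: the update uses
    y* = y + t * ||g|| * s in place of y, with t chosen so that s'y* > 0
    and the Hessian approximation stays positive definite."""
    x = np.asarray(x0, dtype=float)
    B = np.eye(x.size)
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        d = np.linalg.solve(B, -g)            # quasi-Newton direction
        # Simple Armijo backtracking (a stand-in for the general line search rule).
        alpha, fx, gd = 1.0, f(x), g @ d
        for _ in range(50):
            if f(x + alpha * d) <= fx + 1e-4 * alpha * gd:
                break
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        gnorm = np.linalg.norm(g)
        t = 1.0 + max(0.0, -(s @ y) / (s @ s)) / gnorm
        y_star = y + t * gnorm * s            # modified secant pair: B_{k+1} s = y*
        Bs = B @ s
        B = B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y_star, y_star) / (s @ y_star)
        x, g = x_new, g_new
    return x, f(x)

# Toy nonconvex test (an assumption): a separable, tilted double-well function.
fun = lambda x: np.sum((x ** 2 - 1.0) ** 2 + 0.1 * x)
gfun = lambda x: 4.0 * x * (x ** 2 - 1.0) + 0.1
x_star, f_star = modified_bfgs(fun, gfun, 0.5 * np.ones(5))
print(np.round(x_star, 3), round(f_star, 4))   # each coordinate settles in a nearby well (a local minimizer)
```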

20.
We present an exact algorithm for mean-risk optimization subject to a budget constraint, where decision variables may be continuous or integer. The risk is measured by the covariance matrix and weighted by an arbitrary monotone function, which allows risk aversion to be modeled in a very individual way. We address this class of convex mixed-integer minimization problems by designing a branch-and-bound algorithm, where at each node the continuous relaxation is solved by a non-monotone Frank–Wolfe type algorithm with away-steps. Experimental results on portfolio optimization problems show that our approach can outperform the MISOCP solver of CPLEX 12.6 for instances where a linear risk-weighting function is considered.
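A minimal sketch of the Frank-Wolfe method with away steps on the continuous relaxation min x'Σx − λμ'x over the simplex {x ≥ 0, Σx_i = 1} (a budget constraint); the branch-and-bound wrapper, the integer variables, the general monotone risk-weighting function, and the non-monotone variant used in the paper are all omitted, and the toy data are assumptions.

```python
import numpy as np

def frank_wolfe_away(Sigma, mu, lam=1.0, n_iter=500, tol=1e-8):
    """Frank-Wolfe with away steps for min x'Sigma x - lam*mu'x over the simplex.
    Exact line search is used since the objective is quadratic."""
    n = len(mu)
    x = np.ones(n) / n
    weights = {i: 1.0 / n for i in range(n)}    # convex-combination weights over vertices e_i
    for _ in range(n_iter):
        grad = 2.0 * Sigma @ x - lam * mu
        s = int(np.argmin(grad))                # Frank-Wolfe vertex
        v = max(weights, key=lambda j: grad[j])  # away vertex (worst active vertex)
        d_fw, d_aw = -x.copy(), x.copy()
        d_fw[s] += 1.0                           # e_s - x
        d_aw[v] -= 1.0                           # x - e_v
        if -grad @ d_fw < tol:                   # small Frank-Wolfe gap: near-optimal
            break
        if -grad @ d_fw >= -grad @ d_aw:         # Frank-Wolfe step
            d, gamma_max, fw_step = d_fw, 1.0, True
        else:                                    # away step
            a_v = weights[v]
            d, gamma_max, fw_step = d_aw, a_v / max(1.0 - a_v, 1e-12), False
        gamma = min(gamma_max, max(0.0, -(grad @ d) / (2.0 * d @ Sigma @ d)))
        x = x + gamma * d
        if fw_step:
            weights = {j: (1.0 - gamma) * w for j, w in weights.items()}
            weights[s] = weights.get(s, 0.0) + gamma
        else:
            weights = {j: (1.0 + gamma) * w for j, w in weights.items()}
            weights[v] -= gamma
        weights = {j: w for j, w in weights.items() if w > 1e-12}
    return x

# Assumed toy data: 5 assets with a random covariance matrix and random expected returns.
rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5))
Sigma = B @ B.T / 5.0 + 0.1 * np.eye(5)
mu = rng.random(5)
x = frank_wolfe_away(Sigma, mu, lam=0.5)
print(np.round(x, 3), round(float(x @ Sigma @ x - 0.5 * mu @ x), 4))
```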
