Similar Documents
10 similar documents found (search time: 125 ms)
1.
In this paper, we are concerned with the development of parallel algorithms for solving some classes of nonconvex optimization problems. We present an introductory survey of parallel algorithms that have been used to solve structured problems (partially separable and large-scale block-structured problems), as well as algorithms based on parallel local searches for solving general nonconvex problems. Indefinite quadratic programming, posynomial optimization, and the general global concave minimization problem can be solved using these approaches. In addition, for the minimum concave cost network flow problem, we present new parallel search algorithms for large-scale problems. Computational results of an efficient implementation on a multi-transputer system are presented.
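
A minimal sketch of the parallel local-search idea using Python's multiprocessing (illustrative only; the paper's implementation targets a multi-transputer system, and the toy objective and coordinate-descent routine below are our own assumptions): independent local searches run in parallel from random starting points, and the best local minimum found is kept.

# Parallel multistart local search -- a hedged sketch, not the paper's code.
from multiprocessing import Pool
import random

def local_search(seed):
    """Hypothetical local search: coordinate descent on a toy
    indefinite quadratic f(x) = x0^2 - x1^2 + x0*x1 over [-1, 1]^2."""
    rng = random.Random(seed)
    x = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
    f = lambda x: x[0] ** 2 - x[1] ** 2 + x[0] * x[1]
    step = 0.1
    while step > 1e-4:
        improved = False
        for i in range(2):
            for d in (+step, -step):
                y = list(x)
                y[i] = max(-1.0, min(1.0, y[i] + d))
                if f(y) < f(x):
                    x, improved = y, True
        if not improved:
            step /= 2
    return f(x), x

if __name__ == "__main__":
    with Pool(4) as pool:                     # independent searches in parallel
        results = pool.map(local_search, range(16))
    print(min(results))                       # best local minimum found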

2.
Optimization, 2012, 61(10): 1661–1686
Optimization over the efficient set of a multi-objective optimization problem is a mathematical model for the problem of selecting a most preferred solution that arises in multiple criteria decision-making to account for trade-offs between objectives within the set of efficient solutions. In this paper, we consider a particular case of this problem, namely that of optimizing a linear function over the image of the efficient set in objective space of a convex multi-objective optimization problem. We present both primal and dual algorithms for this task. The algorithms are based on recent algorithms for solving convex multi-objective optimization problems in objective space, with suitable modifications to exploit specific properties of the problem of optimization over the efficient set. We first present the algorithms for the case in which the underlying problem is a multi-objective linear programme. We then extend them to be able to solve problems with an underlying convex multi-objective optimization problem. We compare the new algorithms with several state-of-the-art algorithms from the literature on a set of randomly generated instances to demonstrate that they are considerably faster than the competitors.
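
In standard notation (ours; the paper's own symbols may differ), write f : X → R^p for the objective map of the convex multi-objective problem and Y_N for the nondominated subset of its image. The problem treated here can then be written as

    \min_{y \in \mathbb{R}^p} \; c^{\top} y
    \quad \text{s.t.} \quad
    y \in \mathcal{Y}_N := \{\, \bar{y} \in f(X) : \nexists\, y \in f(X) \text{ with } y \leq \bar{y},\ y \neq \bar{y} \,\}.

Note that Y_N is in general a nonconvex subset of the boundary of f(X) even when the underlying problem is convex, which is what makes this a global optimization problem.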

3.
Over the last few decades, several methods have been proposed for handling functional constraints while solving optimization problems using evolutionary algorithms (EAs). However, the presence of equality constraints makes the feasible space very small compared to the entire search space. As a consequence, the handling of equality constraints has long been a difficult issue for evolutionary optimization methods. This paper presents a Hybrid Evolutionary Algorithm (HEA) for solving optimization problems with both equality and inequality constraints. In HEA, we propose a new local search technique with special emphasis on equality constraints. The basic concept of the new technique is to reach a point on the equality constraint from the current position of an individual solution, and then to explore the constraint landscape. We believe this new concept will influence the future research direction for constrained optimization using population-based algorithms. The proposed algorithm is tested on a set of standard benchmark problems. The results show that the proposed technique works very well on those benchmark problems.
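
A rough sketch of the local-search idea as we read it (the step rules, projection scheme, and toy instance are our assumptions, not the paper's HEA): first drive a candidate onto the equality constraint h(x) = 0 with Newton-like corrections, then take random tangent steps along the constraint, re-projecting after each move.

# Hedged sketch: reach the equality constraint, then explore along it.
import numpy as np

def project_onto_constraint(x, h, grad_h, tol=1e-8, max_iter=50):
    """Newton-like correction toward the manifold h(x) = 0."""
    for _ in range(max_iter):
        v = h(x)
        if abs(v) < tol:
            break
        g = grad_h(x)
        x = x - v * g / np.dot(g, g)          # minimum-norm Newton step
    return x

def explore_on_constraint(x, f, h, grad_h, step=0.1, iters=100, rng=None):
    """Random tangent moves followed by re-projection."""
    rng = rng or np.random.default_rng(0)
    x = project_onto_constraint(x, h, grad_h)
    for _ in range(iters):
        g = grad_h(x)
        d = rng.normal(size=x.size)
        d -= np.dot(d, g) / np.dot(g, g) * g  # project onto tangent space
        y = project_onto_constraint(x + step * d, h, grad_h)
        if f(y) < f(x):
            x = y
    return x

# Toy instance: minimize f on the circle x0^2 + x1^2 = 1.
f = lambda x: x[0] + 2 * x[1]
h = lambda x: x[0] ** 2 + x[1] ** 2 - 1
grad_h = lambda x: 2 * x
print(explore_on_constraint(np.array([2.0, 0.0]), f, h, grad_h))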

4.
The traditional numerical analysis considers optimization algorithms which guarantee some accuracy for all functions to be optimized. This includes exact algorithms. Limiting the maximal error requires a computational effort that in many cases increases exponentially with the size of the problem (Horst and Pardalos, 1995, Handbook of Global Optimization, Kluwer). That limits practical applications of the worst-case analysis. An alternative is the average-case analysis, where the average error is made as small as possible (Calvin and Glynn, 1997, J. Appl. Prob., 32: 157). The average is taken over a set of functions to be optimized. The average-case analysis is called the Bayesian Approach (BA) (Diaconis, 1988, Statistical Decision Theory and Related Topics, Springer; Mockus and Mockus, 1987, Theory of Optimal Decisions, Nauk, Lithuania). Application of BA to optimization of heuristics is called the Bayesian Heuristic Approach (BHA) (Mockus, 2000, A Set of Examples of Global and Discrete Optimization, Kluwer). In this paper a short presentation of the basic ideas of BHA (described in detail in Mockus (1989), Bayesian Approach to Global Optimization, Kluwer and Mockus (2000), A Set of Examples of Global and Discrete Optimization, Kluwer) is given using the knapsack problem as an example. The application potential is illustrated by the school scheduling example. In addition, a new heuristic algorithm for solving a bimatrix game problem is investigated. The results are applied in solving real-life optimization problems and also serve as examples for graduate-level distance studies of the theory of games and markets in the Internet environment.
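
A toy illustration of the BHA viewpoint on the knapsack example (heavily simplified; the real BHA fits a Bayesian average-case model rather than the plain grid search used here, and the instance is our own): a randomized greedy heuristic has a greediness parameter tau, and the heuristic itself is optimized by tuning tau on the instance.

# Hedged sketch: optimize a heuristic's parameter, the core BHA idea.
import random

values  = [10, 13, 7, 8, 9, 4]
weights = [ 5,  7, 3, 4, 5, 2]
capacity = 12

def randomized_greedy(tau, seed):
    """Pick items with probability ~ (value/weight)**tau; skip items
    that no longer fit."""
    rng = random.Random(seed)
    items, total_v, total_w = list(range(len(values))), 0, 0
    while items:
        w = [(values[i] / weights[i]) ** tau for i in items]
        i = rng.choices(items, weights=w)[0]
        items.remove(i)
        if total_w + weights[i] <= capacity:
            total_w += weights[i]
            total_v += values[i]
    return total_v

best = max((max(randomized_greedy(tau, s) for s in range(20)), tau)
           for tau in (0.0, 0.5, 1.0, 2.0, 4.0))
print("best value", best[0], "at tau =", best[1])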

5.
Predicting the native structure of proteins is one of the most challenging problems in molecular biology. The goal is to determine the three-dimensional structure from the one-dimensional amino acid sequence. De novo prediction algorithms seek to do this by developing a representation of the protein's structure, an energy potential, and some optimization algorithm that finds the structure with minimal energy. Bee Colony Optimization (BCO) is a relatively new approach to solving optimization problems based on the foraging behaviour of bees. Several variants of BCO have been suggested in the literature. We have devised a new variant that unifies the existing variants and is much more flexible with respect to replacing the various elements of BCO. In particular, this applies to the choice of the local search as well as the method for generating scout locations and performing the waggle dance. We apply our BCO method to generate good solutions to the protein structure prediction problem. The results show that BCO generally finds better solutions than simulated annealing, which so far has been the metaheuristic of choice for this problem.
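
A generic bee-colony-style skeleton on a one-dimensional stand-in energy function (our simplification, not the paper's unified variant): scouts sample random sites, foragers refine them with local search, and the waggle dance is reduced to fitness-proportional recruitment of bees to the best sites.

# Hedged BCO skeleton; all numeric choices are illustrative assumptions.
import random

energy = lambda x: (x - 1.3) ** 2 + 0.3 * abs(x)   # stand-in for a potential

def local_search(x, rng, step=0.05, tries=10):
    for _ in range(tries):
        y = x + rng.uniform(-step, step)
        if energy(y) < energy(x):
            x = y
    return x

rng = random.Random(0)
sites = [rng.uniform(-5, 5) for _ in range(10)]     # scouts
for _ in range(50):
    sites = [local_search(x, rng) for x in sites]   # foragers refine sites
    sites.sort(key=energy)
    # "waggle dance": clone the best sites, re-scout the worst
    sites = sites[:5] + sites[:3] + [rng.uniform(-5, 5) for _ in range(2)]
print(min(sites, key=energy))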

6.
This paper is a follow-up to the author’s previous paper on convex optimization. In that paper we began the process of adjusting greedy-type algorithms from nonlinear approximation for finding sparse solutions of convex optimization problems. We modified there the three most popular greedy algorithms in nonlinear approximation in Banach spaces (the Weak Chebyshev Greedy Algorithm, the Weak Greedy Algorithm with Free Relaxation, and the Weak Relaxed Greedy Algorithm) for solving convex optimization problems. We continue to study sparse approximate solutions to convex optimization problems. It is known that in many engineering applications researchers are interested in an approximate solution of an optimization problem as a linear combination of elements from a given system of elements. There is an increasing interest in building such sparse approximate solutions using different greedy-type algorithms. In this paper we concentrate on greedy algorithms that provide expansions, which means that the approximant at the mth iteration is equal to the sum of the approximant from the previous, (m − 1)th, iteration and one element from the dictionary with an appropriate coefficient. The problem of greedy expansions of elements of a Banach space is well studied in nonlinear approximation theory. At first glance the setting of a problem of expansion of a given element and the setting of the problem of expansion in an optimization problem are very different. However, it turns out that the same technique can be used for solving both problems. We show how the technique developed in nonlinear approximation theory, in particular the greedy expansions technique, can be adjusted for finding a sparse solution of an optimization problem given by an expansion with respect to a given dictionary.
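
In the standard notation of greedy expansions (consistent with the nonlinear approximation literature, not quoted from the paper), with E the convex objective, D the dictionary, and E'(x) the derivative of E at x, the mth iterate has the form

    x_m = x_{m-1} + c_m \varphi_m, \qquad
    \varphi_m \in \arg\max_{\varphi \in \mathcal{D}} \langle -E'(x_{m-1}), \varphi \rangle,

so after m steps the approximant is an m-term linear combination of dictionary elements, i.e. an m-sparse approximate solution; the rule for choosing the coefficients c_m is what distinguishes the different expansion algorithms.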

7.
In Ref. 1, a new superlinearly convergent algorithm of sequential systems of linear equations (SSLE) for nonlinear optimization problems with inequality constraints was proposed. At each iteration, this new algorithm only needs to solve four systems of linear equations having the same coefficient matrix, which is far less computation than existing SQP algorithms require. Moreover, unlike the quadratic programming subproblems of the SQP algorithms (which may not have a solution), the subproblems of the SSLE algorithm are always solvable. In Ref. 2, it is shown that the new algorithm can also be used to deal with nonlinear optimization problems having both equality and inequality constraints, by solving an auxiliary problem. But the algorithm of Ref. 2 has to perform a pivoting operation to adjust the penalty parameter at each iteration. In this paper, we improve the work of Ref. 2 and present a new algorithm of sequential systems of linear equations for general nonlinear optimization problems. This new algorithm preserves the advantages of the SSLE algorithms, while at the same time overcoming the aforementioned shortcomings. Some numerical results are also reported.
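
The computational point, that several systems sharing one coefficient matrix are cheap to solve together, can be illustrated generically (this is plain linear algebra, not the SSLE algorithm itself): factor the matrix once, then each extra right-hand side costs only a pair of triangular solves.

# One O(n^3) factorization, four O(n^2) solves -- a hedged illustration.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 50)) + 50 * np.eye(50)   # well-conditioned matrix
rhs = [rng.normal(size=50) for _ in range(4)]     # four right-hand sides

lu_piv = lu_factor(A)                             # factor A once
solutions = [lu_solve(lu_piv, b) for b in rhs]    # reuse it for each system
for b, x in zip(rhs, solutions):
    assert np.allclose(A @ x, b)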

8.
Global Minimization Algorithms for Hölder Functions
This paper deals with the one-dimensional global optimization problem where the objective function satisfies a Hölder condition over a closed interval. A direct extension of the popular Piyavskii method, proposed for Lipschitz functions, to Hölder optimization requires an a priori estimate of the Hölder constant and the solution of an equation of degree N at each iteration. In this paper a new scheme is introduced. Three algorithms are proposed for solving one-dimensional Hölder global optimization problems. All of them work without solving equations of degree N. The case (very often arising in applications) in which a Hölder constant is not given a priori is considered. It is shown that local information about the objective function used inside the global procedure can accelerate the search significantly. Numerical experiments show quite promising performance of the new algorithms.
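
A discretized sketch of the Piyavskii-type idea for Hölder functions (the grid stands in for the exact minimizer of the lower envelope, whose computation requires the degree-N equation mentioned above; H, alpha, and the test function are our own assumptions): each sample x_i yields the valid lower bound f(x) >= f(x_i) - H*|x - x_i|**alpha, and we repeatedly sample where the envelope of these bounds is lowest.

# Hedged sketch of a sawtooth-cover scheme for a Hölder function.
import numpy as np

f = lambda x: np.sin(3 * x) + 0.5 * np.abs(x - 0.4) ** 0.5   # Hölder test fn
H, alpha = 6.0, 0.5                      # assumed known Hölder data
grid = np.linspace(0.0, 2.0, 2001)

xs = [0.0, 2.0]                          # sampled points
for _ in range(30):
    env = np.max([f(x) - H * np.abs(grid - x) ** alpha for x in xs],
                 axis=0)                 # lower envelope on the grid
    xs.append(float(grid[np.argmin(env)]))   # sample where bound is lowest
print(min(f(x) for x in xs))             # best value found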

9.
The evolutionary metaheuristic called scatter search has been applied successfully to optimization problems for several years. In this paper, we apply the scatter search technique to the well-known 0–1 multidimensional knapsack problem. We propose a new relaxation-based diversification generator, which produces an initial population with elite solutions. The computational results obtained for a set of classic and correlated instances clearly show that (1) this generator can also be used as a heuristic for solving the multidimensional knapsack problem; and (2) using the population produced by our generator as a starting point for the scatter search algorithm leads to better performance. We also enhance the scatter search algorithm by integrating memory and by using adapted intensification phases. Overall, the results are interesting and competitive with other population-based algorithms, such as genetic algorithms.
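
A sketch of what a relaxation-based diversification generator might look like (our reading; the instance, thresholds, and repair rule are assumptions, not the paper's method): solve the LP relaxation of the 0–1 multidimensional knapsack, then round the fractional optimum at several thresholds, repairing infeasible roundings by dropping items.

# Hedged sketch: LP-relaxation rounding as a diversification generator.
import numpy as np
from scipy.optimize import linprog

v = np.array([10, 13, 7, 8, 9, 4])            # item values
W = np.array([[5, 7, 3, 4, 5, 2],             # two knapsack constraints
              [4, 6, 2, 5, 3, 3]])
c = np.array([12, 11])

lp = linprog(-v, A_ub=W, b_ub=c, bounds=[(0, 1)] * len(v))
x_frac = lp.x                                 # fractional LP optimum

population = []
for t in (0.3, 0.5, 0.7, 0.9):                # rounding thresholds
    x = (x_frac >= t).astype(int)
    while np.any(W @ x > c):                  # repair: drop least valuable
        packed = np.flatnonzero(x)
        x[packed[np.argmin(v[packed])]] = 0
    population.append((v @ x, x.copy()))
print(max(population, key=lambda p: p[0]))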

10.
In this paper, several parameter-dependent scalarization approaches for solving nonlinear multi-objective optimization problems are discussed. It is shown that they can be considered as special cases of a scalarization problem by Pascoletti and Serafini (or a modification of this problem). Based on these connections, theoretical results as well as a new algorithm for adaptively controlling the choice of parameters to generate almost equidistant approximations of the efficient set, recently developed for the Pascoletti-Serafini scalarization, can be applied to these problems. For instance, for such well-known scalarizations as the ε-constraint or the normal boundary intersection problem, algorithms for adaptively generating high-quality approximations are derived.
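
For reference, the Pascoletti-Serafini scalarization in its usual form (standard in the literature; notation ours): given a reference point a ∈ R^p, a direction r ∈ R^p, and the ordering cone K (typically R^p_+), solve

    \min_{t \in \mathbb{R},\; x \in \Omega} \; t
    \quad \text{s.t.} \quad a + t\,r - f(x) \in K.

Varying the parameters (a, r) traces out the efficient set, and particular choices recover, for example, the ε-constraint and normal boundary intersection problems as special cases, which is the connection the paper exploits.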
