Similar Documents
20 similar documents found (search time: 46 ms)
1.
We suggest some modifications to the controlled random search (CRS) algorithm for global optimization. We introduce new trial point generation schemes in CRS, in particular point generation schemes using linear interpolation and mutation. Central to our modifications is the probabilistic adaptation of point generation schemes within the CRS algorithm. A numerical study is carried out using a set of 50 test problems, many of which are inspired by practical applications. Numerical experiments indicate that the resulting algorithms are considerably better than the previous versions. Thus, they offer a reasonable alternative to many currently available stochastic algorithms, especially for problems requiring direct search type methods.
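A minimal sketch of CRS-style trial-point generation of the kind described above, assuming the population is held in a NumPy array; the interpolation/mutation rules and the scheme probability `p_interp` are illustrative placeholders rather than the authors' exact update rules.

```python
import numpy as np

def crs_trial_point(pop, f_vals, p_interp=0.5, rng=np.random.default_rng()):
    """Generate one CRS-style trial point (illustrative sketch).

    With probability `p_interp` the trial point comes from a linear
    interpolation (reflection) of randomly chosen population points;
    otherwise it is a mutation of the current best point. The scales and
    probabilities here are illustrative, not the paper's adaptive rules.
    """
    n = pop.shape[1]
    best = pop[np.argmin(f_vals)]
    if rng.random() < p_interp:
        # linear interpolation scheme: reflect one random point through the
        # centroid of n other randomly chosen points (classic CRS move);
        # requires a population of size at least n + 1
        idx = rng.choice(len(pop), size=n + 1, replace=False)
        centroid = pop[idx[:-1]].mean(axis=0)
        return 2.0 * centroid - pop[idx[-1]]
    # mutation scheme: perturb the best point with a random step whose scale
    # follows the current population spread
    spread = pop.std(axis=0)
    return best + rng.normal(scale=0.5 * spread + 1e-12, size=n)
```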

2.
This paper presents some simple technical conditions that guarantee the convergence of a general class of adaptive stochastic global optimization algorithms. By imposing some conditions on the probability distributions that generate the iterates, these stochastic algorithms can be shown to converge to the global optimum in a probabilistic sense. These results also apply to global optimization algorithms that combine local and global stochastic search strategies, as well as to algorithms that combine deterministic and stochastic search strategies. This makes the results applicable to a wide range of global optimization algorithms that are useful in practice. Moreover, this paper provides convergence conditions involving the conditional densities of the random vector iterates that are easy to verify in practice. It also provides some convergence conditions in the special case when the iterates are generated by elliptical distributions such as the multivariate Normal and Cauchy distributions. These results are then used to prove the convergence of some practical stochastic global optimization algorithms, including an evolutionary programming algorithm. In addition, this paper introduces the notion of a stochastic algorithm being probabilistically dense in the domain of the function and shows that, under simple assumptions, this is equivalent to seeing any point in the domain with probability 1. This, in turn, is equivalent to almost sure convergence to the global minimum. Finally, some simple results on convergence rates are also proved.
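As a hedged illustration of the "probabilistically dense" idea, the sketch below mixes a local Gaussian proposal with a global uniform proposal over a box: because every iterate has positive probability of being drawn anywhere in the domain, the best-so-far value converges to the global minimum with probability 1. The test function, bounds, and parameters are illustrative choices, not taken from the paper.

```python
import numpy as np

def stochastic_search(f, bounds, iters=10_000, p_global=0.1, sigma=0.1, seed=0):
    """Best-so-far stochastic search mixing local and global proposals."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    x_best = rng.uniform(lo, hi)
    f_best = f(x_best)
    for _ in range(iters):
        if rng.random() < p_global:            # global uniform proposal
            x = rng.uniform(lo, hi)
        else:                                   # local Gaussian proposal
            x = np.clip(x_best + rng.normal(scale=sigma * (hi - lo)), lo, hi)
        fx = f(x)
        if fx < f_best:
            x_best, f_best = x, fx
    return x_best, f_best

# usage: minimize a 2-D Rastrigin-type test function on the box [-5, 5]^2
f = lambda x: np.sum(x**2 - 10 * np.cos(2 * np.pi * x) + 10)
print(stochastic_search(f, [(-5, 5), (-5, 5)]))
```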

3.
An approach to non-convex multi-objective optimization problems is considered where only the values of the objective functions are required by the algorithm. The proposed approach is a generalization of the probabilistic branch-and-bound approach, which is well suited to complicated problems of single-objective global optimization. In the present paper the concept of probabilistic branch-and-bound based multi-objective optimization algorithms is discussed, and some illustrations are presented.

4.
Designing estimation of distribution algorithms (EDAs) for continuous optimization is an emerging focus in the evolutionary computation field. This paper proposes an improved population-based incremental learning algorithm using a histogram probabilistic model for continuous optimization. Histogram models are advantageous in describing the solution distribution of complex and multimodal continuous problems. The algorithm uses a sub-dividing strategy to guarantee the accuracy of optimal solutions. Experimental results show that the proposed algorithm is effective and obtains better performance than fast evolutionary programming (FEP) and recently published EDAs on most test functions.
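A rough sketch of one generation of a histogram-based EDA of the kind the abstract describes; the elite fraction, bin count, and smoothing constant are illustrative defaults, and the paper's sub-dividing refinement strategy is only noted in a comment.

```python
import numpy as np

def histogram_eda_step(pop, f_vals, bounds, n_bins=20, elite_frac=0.3,
                       rng=np.random.default_rng()):
    """One generation of a histogram-based EDA (illustrative sketch).

    Build a per-coordinate histogram from the elite solutions and sample a
    new population from it; the paper additionally sub-divides promising
    bins to refine the optimum, which is not reproduced here.
    """
    lo, hi = np.asarray(bounds, dtype=float).T
    n_pop, dim = pop.shape
    elite = pop[np.argsort(f_vals)[: max(1, int(elite_frac * n_pop))]]
    new_pop = np.empty_like(pop)
    for d in range(dim):
        counts, edges = np.histogram(elite[:, d], bins=n_bins, range=(lo[d], hi[d]))
        probs = (counts + 1e-6) / (counts + 1e-6).sum()      # smoothed bin probabilities
        bins = rng.choice(n_bins, size=n_pop, p=probs)       # pick a bin per sample
        new_pop[:, d] = rng.uniform(edges[bins], edges[bins + 1])  # sample within bins
    return new_pop
```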

5.
Genetic algorithms are stochastic search algorithms that have been applied to optimization problems. In this paper we analyze the run-time complexity of a genetic algorithm when we are interested in one of a set of distinguished solutions. One such case occurs when multiple optima exist. We define the worst case scenario and derive a probabilistic worst case bound on the number of iterations required to find one of these multiple solutions of interest.
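For orientation, bounds of this flavour often reduce to a geometric-trials argument. The inequality below is a generic illustration under an assumed per-iteration success probability p; it is not the paper's actual bound.

```latex
% Generic geometric-trials bound (illustrative; not the paper's result).
% Assume each iteration independently finds one of the distinguished
% solutions with probability at least p. Then
\Pr[\text{no distinguished solution found after } t \text{ iterations}]
   \;\le\; (1-p)^{t} \;\le\; e^{-pt},
\qquad\text{so } t \;\ge\; \frac{\ln(1/\delta)}{p}
\text{ iterations suffice for success probability at least } 1-\delta.
```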

6.
Stuart, A. M. (1997). Numerical Algorithms 14(1–3): 227–260.
The numerical solution of initial value problems for ordinary differential equations is frequently performed by means of adaptive algorithms with user-input tolerance τ. The time-step is then chosen according to an estimate, based on small time-step heuristics, designed to ensure that an approximation to the local error committed is bounded by τ. A question of natural interest is to determine how the global error behaves with respect to the tolerance τ. This has obvious practical interest and also leads to an interesting problem in mathematical analysis. The primary difficulties arising in the analysis are that: (i) the time-step selection mechanisms used in practice are discontinuous as functions of the specified data; (ii) the small time-step heuristics underlying the control of the local error can break down in some cases. In this paper an analysis is presented which incorporates these two difficulties. For a mathematical model of an error per unit step or error per step adaptive Runge–Kutta algorithm, it may be shown that, in a certain probabilistic sense with respect to a measure on the space of initial data, the small time-step heuristics are valid with probability one, leading to a probabilistic convergence result for the global error as τ→0. The probabilistic approach is only valid in dimension m>1; this observation is consistent with recent analysis concerning the existence of spurious steady solutions of software codes, which highlights the difference between the cases m=1 and m>1. The breakdown of the small time-step heuristics can be circumvented by making minor modifications to the algorithm, leading to a deterministic convergence proof for the global error of such algorithms as τ→0. An underlying theory is developed, and the deterministic and probabilistic convergence results are proved as particular applications of this theory.
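The sketch below shows a generic error-per-step controller with an embedded Euler/Heun pair, the kind of tolerance-driven step-size selection the analysis addresses; the safety factor and controller exponent are standard textbook choices, not taken from the paper.

```python
import numpy as np

def adaptive_heun_euler(f, t0, y0, t_end, tol, h0=1e-2, safety=0.9):
    """Error-per-step adaptive integration with an embedded Euler/Heun pair.

    The local error estimate is the difference between the first- and
    second-order solutions; a step is accepted when the estimate is below
    the user tolerance `tol`.
    """
    t, y, h = t0, np.asarray(y0, dtype=float), h0
    while t < t_end:
        h = min(h, t_end - t)
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y_euler = y + h * k1                  # first-order solution
        y_heun = y + 0.5 * h * (k1 + k2)      # second-order solution
        err = np.linalg.norm(y_heun - y_euler)
        if err <= tol:                        # accept the step
            t, y = t + h, y_heun
        # error-per-step controller: exponent 1/(p+1) with lower order p = 1
        h *= safety * (tol / max(err, 1e-16)) ** 0.5
    return t, y

# usage: y' = -y, y(0) = 1, integrate to t = 1 with tolerance 1e-6
print(adaptive_heun_euler(lambda t, y: -y, 0.0, [1.0], 1.0, 1e-6))
```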

7.
Traditionally, minimum cost transshipment problems have been simplified as linear cost problems, which are not practical in real applications. Some advanced local search algorithms have been developed to solve concave cost bipartite network problems. These have been found to be more effective than the traditional linear approximation methods and local search methods. Recently, a genetic algorithm and an ant colony system algorithm were employed to develop two global search algorithms for solving concave cost transshipment problems. These two global search algorithms were found to be more effective than the advanced local search algorithms for solving concave cost transshipment problems. Although the particle swarm optimization algorithm has been used to obtain good results in many applications, to the best of our knowledge it has not yet been applied to minimum concave cost network flow problems. Thus, in this study, we employ an arc-based particle swarm optimization algorithm, coupled with genetic algorithm and threshold accepting techniques as well as concave cost network heuristics, to develop a hybrid global search algorithm for efficiently solving minimum cost network flow problems with concave arc costs. The proposed algorithm is evaluated by solving several randomly generated network flow problems. The results indicate that the proposed algorithm is more effective than several other recently designed methods, such as local search algorithms, genetic algorithms and ant colony system algorithms, for solving minimum cost network flow problems with concave arc costs.
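For context, the sketch below shows only the standard continuous PSO velocity/position update; the paper's algorithm is arc-based and additionally incorporates genetic-algorithm and threshold-accepting components, none of which are reproduced here.

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain continuous particle swarm optimization (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    p_best, p_val = x.copy(), np.array([f(p) for p in x])
    g_best = p_best[np.argmin(p_val)]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # velocity update: inertia + cognitive pull + social pull
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        improved = vals < p_val
        p_best[improved], p_val[improved] = x[improved], vals[improved]
        g_best = p_best[np.argmin(p_val)]
    return g_best, p_val.min()
```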

8.
Multiplicative programming problems (MPPs) are global optimization problems known to be NP-hard. In this paper, we employ algorithms developed to compute the entire set of nondominated points of multi-objective linear programmes (MOLPs) to solve linear MPPs. First, we improve our own objective space cut and bound algorithm for convex MPPs in the special case of linear MPPs by solving only one linear programme in each iteration, instead of two as in the previous version. We call this algorithm, which is based on Benson’s outer approximation algorithm for MOLPs, the primal objective space algorithm. Then, based on the dual variant of Benson’s algorithm, we propose a dual objective space algorithm for solving linear MPPs. The dual algorithm also requires solving only one linear programme in each iteration. We prove the correctness of the dual algorithm and use computational experiments comparing our algorithms to a recent global optimization algorithm for linear MPPs from the literature, as well as two general global optimization solvers, to demonstrate the superiority of the new algorithms in terms of computation time. Thus, we demonstrate that the use of multi-objective optimization techniques can be beneficial for solving difficult single objective global optimization problems.

9.
We present a probabilistic analysis of integer linear programs (ILPs). More specifically, we study ILPs in a so-called smoothed analysis in which it is assumed that first an adversary specifies the coefficients of an integer program and then (some of) these coefficients are randomly perturbed, e.g., using a Gaussian or a uniform distribution with small standard deviation. In this probabilistic model, we investigate structural properties of ILPs and apply them to the analysis of algorithms. For example, we prove a lower bound on the slack of the optimal solution. As a result of our analysis, we are able to specify the smoothed complexity of classes of ILPs in terms of their worst case complexity. This way, we obtain polynomial smoothed complexity for packing and covering problems with any fixed number of constraints. Previous results of this kind were restricted to the case of binary programs.
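One common way to formalize smoothed complexity (in the Spielman–Teng style this line of work follows) is sketched below; the norm bound and Gaussian perturbation model are the usual conventions and may differ in detail from the paper's ILP-specific setup. Polynomial smoothed complexity then means this quantity is bounded by a polynomial in the instance size and 1/σ.

```latex
% One common formalization of smoothed complexity (illustrative):
% maximize over adversarial instances I of bounded norm the expected
% running time T after a Gaussian perturbation of the coefficients.
C_{\mathrm{smooth}}(n,\sigma)
  \;=\; \max_{\|I\|\le 1}\;
        \mathbb{E}_{G}\bigl[\, T(I + \sigma G) \,\bigr],
\qquad G \text{ a matrix of independent standard Gaussian entries.}
```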

10.
Successive linear programming (SLP) algorithms solve nonlinear optimization problems via a sequence of linear programs. We present an approach for a special class of nonlinear programming problems which arise in multiperiod coal blending. The class of nonlinear programming problems and the solution approach considered in this paper are quite different from previous work. The algorithm is very simple, easy to apply, and can be applied to as large a problem as the linear programming code can handle. The quality of the solution produced by the proposed algorithm is discussed, and results for some test problems in a real-world environment are provided.

11.
CONOPT: A GRG code for large sparse dynamic nonlinear optimization problems
The paper presents CONOPT, an optimization system for static and dynamic large-scale nonlinearly constrained optimization problems. The system is based on the GRG algorithm. All computations involving the Jacobian of the constraints use sparse-matrix algorithms from linear programming, modified to deal with the nonlinearity and to take maximum advantage of the periodic structure in dynamic models. The paper presents the main features of the system, especially the inversion routines and their data structures, the dynamic setting of tolerances in Newton’s algorithm, and the user features in the overall packaging. The difficulties with implementing a practical GRG algorithm are described in detail. Computational experience with some medium to large models is presented, indicating the viability of CONOPT for certain real-life problems, particularly those involving almost as many constraints as variables. The views and interpretations in this document are those of the author and should not be attributed to the World Bank, to its affiliated organizations, or to any individual acting on their behalf.

12.
Experimental Evaluation of Heuristic Optimization Algorithms: A Tutorial
Heuristic optimization algorithms seek good feasible solutions to optimization problems in circumstances where the complexity of the problem or the limited time available for solution does not allow exact solution. Although worst case and probabilistic analysis of algorithms have produced insight on some classic models, most of the heuristics developed for large optimization problems must be evaluated empirically—by applying procedures to a collection of specific instances and comparing the observed solution quality and computational burden. This paper focuses on the methodological issues that must be confronted by researchers undertaking such experimental evaluations of heuristics, including experimental design, sources of test instances, measures of algorithmic performance, analysis of results, and presentation in papers and talks. The questions are difficult, and there are no clear right answers. We seek only to highlight the main issues, present alternative ways of addressing them under different circumstances, and caution about pitfalls to avoid.

13.
In this paper, a new superlinearly convergent algorithm of sequential systems of linear equations (SSLE) for nonlinear optimization problems with inequality constraints is proposed. Since the new algorithm only needs to solve several systems of linear equations having the same coefficient matrix per iteration, its computational cost per iteration is much lower than that of existing SQP algorithms. Moreover, for SQP type algorithms there exist so-called inconsistent problems, i.e., quadratic programming subproblems of the SQP algorithms may not have a solution at some iterations, but this phenomenon does not occur with SSLE algorithms because the related systems of linear equations always have solutions. Some numerical results are reported.
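The computational point, that the linear systems within one iteration share a single coefficient matrix, can be illustrated by reusing one factorization across right-hand sides. The matrix and right-hand sides below are random placeholders, not the actual SSLE subproblem data.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Several linear systems with one shared coefficient matrix: factorize once,
# then repeat only cheap triangular solves for each right-hand side.
rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n)) + n * np.eye(n)         # well-conditioned placeholder
rhs_list = [rng.standard_normal(n) for _ in range(4)]   # e.g. a few systems per iteration

lu, piv = lu_factor(A)                                   # O(n^3) factorization, done once
solutions = [lu_solve((lu, piv), b) for b in rhs_list]   # O(n^2) per additional system
print(np.allclose(A @ solutions[0], rhs_list[0]))        # sanity check: True
```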

14.
This contribution gives an overview of the state of the art and recent advances in mixed integer optimization for solving planning and design problems in the process industry. In some case studies specific aspects are stressed and the typical difficulties of real world problems are addressed. Mixed integer linear optimization is widely used to solve supply chain planning problems. Some of the complicating features, such as origin tracing and shelf life constraints, are discussed in more detail. If properly done, the planning models can also be used for product and customer portfolio analysis. We also stress the importance of multi-criteria optimization and correct modeling for optimization under uncertainty. Stochastic programming for continuous LP problems is now part of most optimization packages, and there is encouraging progress in the field of stochastic MILP and robust MILP. Process and network design problems often lead to nonconvex mixed integer nonlinear programming models. If the time to compute the solution is not bounded, there are already commercial solvers available that can compute the global optima of such problems within hours. If time is more restricted, then tailored solution techniques are required.

15.
Stochastic dominance relations are well studied in statistics, decision theory and economics. Recently, there has been significant interest in introducing dominance relations into stochastic optimization problems as constraints. In the discrete case, stochastic optimization models involving second order stochastic dominance constraints can be solved by linear programming. However, problems involving first order stochastic dominance constraints are potentially hard due to the non-convexity of the associated feasible regions. In this paper we consider a mixed 0–1 linear programming formulation of a discrete first order constrained optimization model and present a relaxation based on second order constraints. We derive some valid inequalities and restrictions by employing the probabilistic structure of the problem. We also generate cuts that are valid inequalities for the disjunctive relaxations arising from the underlying combinatorial structure of the problem by applying the lift-and-project procedure. We describe three heuristic algorithms to construct feasible solutions, based on conditional second order constraints, variable fixing, and conditional value at risk. Finally, we present numerical results for several instances of a real world portfolio optimization problem. This research was supported by the NSF awards DMS-0603728 and DMI-0354678.

16.
In Ref. 1, a new superlinearly convergent algorithm of sequential systems of linear equations (SSLE) for nonlinear optimization problems with inequality constraints was proposed. At each iteration, this new algorithm only needs to solve four systems of linear equations having the same coefficient matrix, which is much less than the amount of computation required for existing SQP algorithms. Moreover, unlike the quadratic programming subproblems of the SQP algorithms (which may not have a solution), the subproblems of the SSLE algorithm are always solvable. In Ref. 2, it is shown that the new algorithm can also be used to deal with nonlinear optimization problems having both equality and inequality constraints, by solving an auxiliary problem. But the algorithm of Ref. 2 has to perform a pivoting operation to adjust the penalty parameter per iteration. In this paper, we improve the work of Ref. 2 and present a new algorithm of sequential systems of linear equations for general nonlinear optimization problems. This new algorithm preserves the advantages of the SSLE algorithms, while at the same time overcoming the aforementioned shortcomings. Some numerical results are also reported.

17.
Weak sharp minimality is a notion that emerged in optimization, whose utility is widely recognized in the convergence analysis of algorithms for solving extremum problems as well as in the study of the perturbation behavior of such problems. In this article, some dual constructions of nonsmooth analysis, mainly related to quasidifferential calculus and its recent developments, are employed in formulating sufficient conditions for global weak sharp minimality. They extend to nonconvex functions a condition which is known to be valid in the convex case. A distinguishing feature of the results proposed here is that they avoid assuming the Asplund property on the underlying space.
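For reference, the standard (Burke–Ferris style) definition of global weak sharp minimality that such sufficient conditions target is:

```latex
% The solution set S of \min_{x \in C} f(x) is a set of weak sharp minima
% with modulus \alpha > 0 if
f(x) \;\ge\; \inf_{C} f \;+\; \alpha \,\operatorname{dist}(x, S)
\qquad \text{for all } x \in C .
```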

18.
We initiate the study of a new measure of approximation. This measure compares the performance of an approximation algorithm to the random assignment algorithm. This is a useful measure for optimization problems where the random assignment algorithm is known to give essentially the best possible polynomial time approximation. In this paper, we focus on this measure for the optimization problems Max-Lin-2, in which we need to maximize the number of satisfied linear equations in a system of linear equations modulo 2, and Max-k-Lin-2, a special case of the above problem in which each equation has at most k variables. The main techniques we use, in our approximation algorithms and inapproximability results for this measure, are from Fourier analysis and derandomization.
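A hedged sketch of the random-assignment baseline against which the new measure compares: a uniformly random 0/1 assignment satisfies each non-trivial equation modulo 2 with probability 1/2, so it satisfies half the equations in expectation. The equation encoding used below is an assumed illustrative format.

```python
import numpy as np

def random_assignment_maxlin2(equations, n_vars, seed=0):
    """Random assignment baseline for Max-Lin-2 (illustrative).

    Each equation is a pair (vars, rhs) meaning sum of x[v] for v in vars
    equals rhs (mod 2). A uniformly random assignment satisfies each
    non-trivial equation with probability 1/2.
    """
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, size=n_vars)
    satisfied = sum((sum(x[v] for v in vars_) % 2) == rhs for vars_, rhs in equations)
    return x, satisfied

# usage: the system x0+x1=1, x1+x2=0, x0+x2=1 (mod 2) over 3 variables
eqs = [([0, 1], 1), ([1, 2], 0), ([0, 2], 1)]
print(random_assignment_maxlin2(eqs, 3))
```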

19.
Two of the main approaches in multiple criteria optimization are optimization over the efficient set and utility function programming. These are nonconvex optimization problems in which local optima can differ from global optima. Existing global optimization methods for solving such problems work well only for problems of moderate dimensions. In this article, we propose some ways to reduce the number of criteria and the dimension of a linear multiple criteria optimization problem. Using the concept of so-called representative and extreme criteria, which is motivated by the concept of redundant (or nonessential) objective functions of Gal and Leberling, we can reduce the number of criteria without altering the set of efficient solutions. Furthermore, by using linearly independent criteria, the linear multiple criteria optimization problem under consideration can be transformed into an equivalent linear multiple criteria optimization problem in the space of linearly independent criteria. This equivalence is understood in the sense that efficient solutions of each problem can be derived from efficient solutions of the other by some affine transformation. As a result, such criteria and dimension reduction techniques could help to increase the efficiency of existing algorithms and to develop new methods for handling global optimization problems arising from multiple objective optimization.
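As an illustration of the dimension-reduction idea, the sketch below selects a maximal set of linearly independent criteria (rows of a criteria matrix) via rank-revealing QR; the paper's construction additionally preserves the efficient set through an affine transformation, which is not reproduced here, and the tolerance is an assumed parameter.

```python
import numpy as np
from scipy.linalg import qr

def independent_criteria(C, tol=1e-10):
    """Return indices of a maximal set of linearly independent criteria.

    Rank-revealing QR with column pivoting applied to C.T picks out rows of
    C that span the criterion space; the remaining criteria are linear
    combinations of the selected ones.
    """
    _, R, piv = qr(C.T, pivoting=True)
    rank = int(np.sum(np.abs(np.diag(R)) > tol * abs(R[0, 0])))
    return sorted(int(i) for i in piv[:rank])

# usage: the third criterion equals the sum of the first two, so the rank is 2
C = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 3.0]])
print(independent_criteria(C))   # prints the indices of 2 independent criteria
```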

20.
In elliptic cone optimization problems, we minimize a linear objective function over the intersection of an affine linear manifold with the Cartesian product of so-called elliptic cones. We present some general classes of optimization problems that can be cast as elliptic cone programmes, such as second-order cone programmes and circular cone programmes. We also describe some real-world applications of this class of optimization problems. We study and analyse the Jordan algebraic structure of the elliptic cones. Then, we present a glimpse of the duality theory associated with elliptic cone optimization. A primal–dual path-following interior-point algorithm is derived for elliptic cone optimization problems. We prove the polynomial convergence of the proposed algorithm by showing that the logarithmic barrier is a strongly self-concordant barrier. Numerical examples show that the path-following algorithm is efficient.
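For reference, the second-order (Lorentz) cone, one of the special cases the abstract mentions, is:

```latex
% The second-order (Lorentz) cone in R^n:
\mathcal{L}^{n} \;=\; \bigl\{\, (x_{0},\bar{x}) \in \mathbb{R}\times\mathbb{R}^{\,n-1}
     : \|\bar{x}\|_{2} \le x_{0} \,\bigr\}.
% A second-order cone programme minimizes a linear objective over the
% intersection of an affine set with a Cartesian product of such cones.
```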
