Similar Documents
10 similar documents found (search time: 125 ms)
1.
In this paper two algorithms, of the feasible-directions and dual feasible-directions type, are presented for optimization problems with equality and inequality constraints. An associated problem, having only inequality constraints, is defined, and shown to be equivalent to the original problem if a certain parameter is sufficiently large. The algorithms solve the associated problem, but incorporate a method for automatically increasing this parameter in order to ensure global convergence to a solution to the original problem. Any feasible-directions algorithm can be similarly modified to enable it to handle equality constraints. Research sponsored by the US Army Research Office-Durham, Contract DAHCO4-73-C-0025, and the National Science Foundation Grant GK-37572.
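
As a concrete illustration of recasting an equality-constrained problem as one with only inequality constraints that becomes equivalent once a penalty parameter is large enough, the sketch below uses a classical exact l1 penalty with slack variables on a small quadratic example. This is a generic construction for illustration, not necessarily the associated problem defined in the paper; the test problem, the parameter values tried, and the use of SciPy's SLSQP solver are all assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative construction (not the paper's): replace h(x) = 0 by an exact l1
# penalty c * |h(x)|, written with a slack variable t so that only smooth
# inequality constraints remain. For c above a finite threshold the associated
# problem and the original equality-constrained problem share their solutions.

f = lambda x: x[0]**2 + x[1]**2          # original objective
h = lambda x: x[0] + x[1] - 1.0          # equality constraint h(x) = 0
g = lambda x: x[0]                       # inequality constraint g(x) = x0 >= 0

def solve_associated(c):
    # Variables z = (x0, x1, t); minimize f(x) + c*t  s.t.  -t <= h(x) <= t, g(x) >= 0, t >= 0.
    obj = lambda z: f(z[:2]) + c * z[2]
    cons = [{"type": "ineq", "fun": lambda z: z[2] - h(z[:2])},   # h(x) <= t
            {"type": "ineq", "fun": lambda z: z[2] + h(z[:2])},   # -t <= h(x)
            {"type": "ineq", "fun": lambda z: g(z[:2])},          # original inequality
            {"type": "ineq", "fun": lambda z: z[2]}]              # t >= 0
    res = minimize(obj, x0=[0.0, 0.0, 1.0], constraints=cons, method="SLSQP")
    return res.x[:2], abs(h(res.x[:2]))

# For small c the equality constraint is violated; for large enough c it is satisfied.
for c in (0.5, 2.0, 10.0):
    x, viol = solve_associated(c)
    print(f"c = {c:5.1f}  x = {np.round(x, 4)}  |h(x)| = {viol:.2e}")
```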

2.
The self-scaling quasi-Newton method solves an unconstrained optimization problem by scaling the Hessian approximation matrix before it is updated at each iteration, in order to avoid possibly large eigenvalues in the Hessian approximation matrices of the objective function. It has been proved in the literature that this method is globally and superlinearly convergent when the objective function is convex (or even uniformly convex). We propose to solve unconstrained nonconvex optimization problems by a self-scaling BFGS algorithm with nonmonotone line search. Nonmonotone line search has been recognized in numerical practice as a competitive approach for solving large-scale nonlinear problems. We consider two different nonmonotone line search forms and study the global convergence of these nonmonotone self-scaling BFGS algorithms. We prove that, under conditions weaker than those in the literature, both forms of the self-scaling BFGS algorithm are globally convergent for unconstrained nonconvex optimization problems.
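
The two ingredients in this abstract, self-scaling of the quasi-Newton matrix and a nonmonotone acceptance test, can be sketched in a few lines. The code below is a generic illustration rather than the authors' algorithm or their exact line search conditions: it applies an Oren-Luenberger-type scaling factor to the inverse Hessian approximation before the BFGS update and accepts steps against the maximum of the most recent function values; the test function and all parameter values are assumptions.

```python
import numpy as np

def self_scaling_bfgs(f, grad, x0, max_iter=200, memory=10, tol=1e-6):
    # Generic self-scaling BFGS with a max-based nonmonotone Armijo search (sketch only).
    n = len(x0)
    x = np.asarray(x0, dtype=float)
    H = np.eye(n)                            # inverse Hessian approximation
    g = grad(x)
    fvals = [f(x)]                           # recent values for the nonmonotone test
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        d = -H @ g                           # quasi-Newton search direction
        f_ref = max(fvals[-memory:])         # reference value: max of recent f's
        t = 1.0
        while f(x + t * d) > f_ref + 1e-4 * t * (g @ d):
            t *= 0.5                         # backtrack until the nonmonotone Armijo test holds
        s = t * d
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        sy = s @ y
        if sy > 1e-12:                       # skip the update if curvature is not positive
            tau = sy / (y @ H @ y)           # Oren-Luenberger-type self-scaling factor
            H = tau * H                      # scale H *before* the BFGS update
            rho = 1.0 / sy
            V = np.eye(n) - rho * np.outer(s, y)
            H = V @ H @ V.T + rho * np.outer(s, s)   # standard inverse BFGS update
        x, g = x_new, g_new
        fvals.append(f(x))
    return x

# Illustrative nonconvex test problem (Rosenbrock function).
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
print(self_scaling_bfgs(f, grad, [-1.2, 1.0]))
```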

3.
A new algorithm for solving nonconvex, equality-constrained optimization problems with separable structure is proposed in the present paper. A new augmented Lagrangian function is derived, and an iterative method is presented. The proposed Lagrangian function preserves separability when the original problem is separable, and linear convergence of the new algorithm is also established. Unlike earlier algorithms for nonconvex decomposition, the convergence ratio of this method can be made arbitrarily small. Furthermore, it is feasible to extend this method to algorithms suited for inequality-constrained optimization problems. An example is included to illustrate the method. This research was supported in part by the National Science Foundation under NSF Grant No. ECS-85-06249.
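
For orientation, the sketch below shows the classical (non-separable) augmented Lagrangian, or method-of-multipliers, framework for equality constraints that this line of work builds on. It is not the new separability-preserving Lagrangian of the paper; the toy problem, penalty value, and use of SciPy's BFGS inner solver are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Textbook method of multipliers for min f(x) s.t. h(x) = 0 (illustration only).

def f(x):            # illustrative nonconvex objective
    return x[0]**4 + x[1]**2 - x[0] * x[1]

def h(x):            # single equality constraint h(x) = 0
    return np.array([x[0] + x[1] - 1.0])

def augmented_lagrangian(f, h, x0, rho=10.0, iters=20):
    x = np.asarray(x0, dtype=float)
    lam = np.zeros(len(h(x)))                # Lagrange multiplier estimate
    for _ in range(iters):
        # Inner minimization of L(x) = f(x) + lam^T h(x) + (rho/2) ||h(x)||^2.
        L = lambda x: f(x) + lam @ h(x) + 0.5 * rho * np.sum(h(x)**2)
        x = minimize(L, x, method="BFGS").x
        lam = lam + rho * h(x)               # first-order multiplier update
        if np.linalg.norm(h(x)) < 1e-8:      # stop once feasibility is reached
            break
    return x, lam

x_star, lam_star = augmented_lagrangian(f, h, x0=[0.0, 0.0])
print(x_star, lam_star)
```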

4.
Large neighborhood search (LNS) is a combination of constraint programming (CP) and local search (LS) that has proved to be a very effective tool for solving complex optimization problems. However, applying LNS to real-world problems remains an art that requires a great deal of expertise. In this paper, we show how adaptive techniques can be used to create algorithms that adjust their behavior to suit the problem instance being solved. We present three design principles towards this goal: cost-based neighborhood heuristics, growing neighborhood sizes, and the application of learning algorithms to combine portfolios of neighborhood heuristics. Our results show that applying these principles gives strong performance on a challenging set of job shop scheduling problems. More importantly, we are able to achieve robust solving performance across problem sets and time limits. This material is based upon work supported by the Science Foundation Ireland under Grant No. 00/PI.1/C075, the Embark Initiative of the Irish Research Council for Science, Engineering and Technology under Grant PD2002/21, and ILOG, S.A.
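
The three design principles can be illustrated with a generic adaptive LNS skeleton. The sketch below works on a small random TSP rather than the authors' CP-based job shop setting, and a simple multiplicative weight rule stands in for the learning component; all heuristics, parameters, and the instance itself are assumptions made for illustration.

```python
import math, random

random.seed(0)
n = 40
pts = [(random.random(), random.random()) for _ in range(n)]
dist = lambda i, j: math.dist(pts[i], pts[j])

def tour_cost(tour):
    return sum(dist(tour[k], tour[(k + 1) % len(tour)]) for k in range(len(tour)))

def destroy_random(tour, k):
    # Uniform removal of k cities.
    removed = random.sample(tour, k)
    return [c for c in tour if c not in removed], removed

def destroy_worst(tour, k):
    # Cost-based removal: drop the k cities with the most expensive incident edges.
    cost = {tour[i]: dist(tour[i - 1], tour[i]) + dist(tour[i], tour[(i + 1) % len(tour)])
            for i in range(len(tour))}
    removed = sorted(tour, key=lambda c: -cost[c])[:k]
    return [c for c in tour if c not in removed], removed

def repair_greedy(partial, removed):
    # Cheapest-insertion repair of the removed cities.
    tour = partial[:]
    for c in removed:
        best_pos = min(range(len(tour)),
                       key=lambda i: dist(tour[i - 1], c) + dist(c, tour[i])
                                     - dist(tour[i - 1], tour[i]))
        tour.insert(best_pos, c)
    return tour

destroys = [destroy_random, destroy_worst]   # portfolio of neighborhood heuristics
weights = [1.0, 1.0]                         # adaptive weights over the portfolio
tour, k, stall = list(range(n)), 3, 0
for _ in range(2000):
    d = random.choices(range(len(destroys)), weights=weights)[0]
    partial, removed = destroys[d](tour, k)
    cand = repair_greedy(partial, removed)
    if tour_cost(cand) < tour_cost(tour):
        tour, stall = cand, 0
        weights[d] += 0.2                    # reward the heuristic that found an improvement
    else:
        stall += 1
        weights[d] = max(0.1, weights[d] * 0.995)
    if stall > 100:                          # grow the neighborhood size when search stagnates
        k, stall = min(k + 1, n // 2), 0

print(round(tour_cost(tour), 3))
```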

5.
The method of partitioned random search has been proposed in recent years to obtain as good a solution as possible to the global optimization problem. A practical algorithm has been developed and applied to real-life problems. However, the design of this algorithm was based mainly on intuition, and the theoretical foundation of the method is an important issue in the development of efficient algorithms for such problems. In this paper, we generalize previous theoretical results and propose a sequential sampling policy for partitioned random search for global optimization with sampling cost and a discounting factor. A proof of the optimality of the proposed sequential sampling policy is given using the theory of optimal stopping.
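
A minimal sketch of partitioned random search is given below: the domain is split into subregions, each is screened with a few samples, and further samples are drawn sequentially from the currently most promising subregion. The paper's optimal stopping policy with sampling cost and discounting is replaced here by a naive fixed budget, and the objective and all parameters are assumptions.

```python
import math, random

random.seed(1)
f = lambda x: (x - 0.3) ** 2 + 0.05 * math.sin(25 * x)   # illustrative objective to minimize

def partitioned_random_search(f, lo=0.0, hi=1.0, n_parts=10, init_samples=5, budget=200):
    # Split [lo, hi] into n_parts equal subregions.
    edges = [lo + (hi - lo) * i / n_parts for i in range(n_parts + 1)]
    parts = [(edges[i], edges[i + 1]) for i in range(n_parts)]
    # Screening phase: a few uniform samples in every partition.
    best_in_part = []
    for a, b in parts:
        xs = [random.uniform(a, b) for _ in range(init_samples)]
        best_in_part.append(min((f(x), x) for x in xs))
    spent = n_parts * init_samples
    # Sequential phase: keep sampling the currently most promising partition.
    # (The paper derives an optimal stopping rule here; a fixed budget is used instead.)
    while spent < budget:
        i = min(range(n_parts), key=lambda j: best_in_part[j][0])
        a, b = parts[i]
        x = random.uniform(a, b)
        best_in_part[i] = min(best_in_part[i], (f(x), x))
        spent += 1
    return min(best_in_part)

fx, x = partitioned_random_search(f)
print(f"best f = {fx:.4f} at x = {x:.4f}")
```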

6.
The weighted median problem arises as a subproblem in certain multivariate optimization problems, including L1 approximation. Three algorithms for the weighted median problem are presented and the relationships between them are discussed. We report on computational experience with these algorithms and on their use in the context of multivariate L1 approximation. This work was supported in part by National Science Foundation Grant CCR-8713893 and in part by a grant from The City University of New York PSC-CUNY Research Award program.
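
Although the paper compares three algorithms, the simplest standard approach, a sort-based O(n log n) weighted median, is easy to state and serves as a baseline. The sketch below, with an illustrative data set, is not taken from the paper.

```python
def weighted_median(values, weights):
    """Sort-based weighted median: the smallest value v such that the total
    weight of elements <= v reaches half of the total weight. This is the
    simple O(n log n) baseline; faster selection-based variants exist."""
    pairs = sorted(zip(values, weights))
    half = sum(weights) / 2.0
    acc = 0.0
    for v, w in pairs:
        acc += w
        if acc >= half:
            return v

# In L1 (least absolute deviations) fitting of a constant, the minimizer of
# sum_i w_i * |x_i - c| over c is exactly a weighted median of the x_i.
xs = [1.0, 2.5, 4.0, 7.0, 9.0]
ws = [0.1, 0.2, 0.4, 0.2, 0.1]
print(weighted_median(xs, ws))   # -> 4.0
```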

7.
A derivative-free simulated annealing driven multi-start algorithm for continuous global optimization is presented. We first propose a trial point generation scheme in continuous simulated annealing that eliminates the need for gradient-based trial point generation. We then suitably embed the multi-start procedure within the simulated annealing algorithm. We modify the derivative-free pattern search method and use it as the local search in the multi-start procedure. We study the convergence properties of the algorithm and test its performance on a set of 50 problems. Numerical results are presented which show the robustness of the algorithm. Numerical comparisons with a gradient-based simulated annealing algorithm and three population-based global optimization algorithms show that the new algorithm could offer a reasonable alternative to many currently available global optimization algorithms, especially for problems requiring a 'direct search' type algorithm.
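
A generic sketch of the overall scheme follows: a derivative-free simulated annealing loop run from several random starts, with each run's final point refined by a simple coordinate pattern (compass) search. The trial-point rule, cooling schedule, test function, and all parameters are assumptions and not the authors' exact method.

```python
import math, random

random.seed(2)

def pattern_search(f, x, step=0.5, tol=1e-4):
    # Simple coordinate pattern search (compass search) used as the local refinement.
    n, fx = len(x), f(x)
    while step > tol:
        improved = False
        for i in range(n):
            for s in (+step, -step):
                y = x[:]
                y[i] += s
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5                    # shrink the mesh when no poll point improves
    return x, fx

def sa_multistart(f, bounds, iters=300, starts=5, T0=1.0):
    best = None
    for _ in range(starts):                # multi-start outer loop
        x = [random.uniform(a, b) for a, b in bounds]
        fx, T = f(x), T0
        for _ in range(iters):
            # Derivative-free trial point: uniform perturbation scaled by the temperature.
            y = [min(max(xi + T * random.uniform(-1, 1), a), b)
                 for xi, (a, b) in zip(x, bounds)]
            fy = f(y)
            if fy < fx or random.random() < math.exp(-(fy - fx) / max(T, 1e-12)):
                x, fx = y, fy              # Metropolis acceptance rule
            T *= 0.99                      # geometric cooling
        x, fx = pattern_search(f, x)       # local refinement of the annealed point
        if best is None or fx < best[1]:
            best = (x, fx)
    return best

# Illustrative 2-D multimodal test function (Rastrigin).
rastrigin = lambda x: 10 * len(x) + sum(xi**2 - 10 * math.cos(2 * math.pi * xi) for xi in x)
print(sa_multistart(rastrigin, bounds=[(-5.12, 5.12)] * 2))
```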

8.
To address the slow convergence, low accuracy, and poor modeling efficiency that traditional Kriging models suffer from in multivariable (high-dimensional) global optimization, caused by the large number of hyperparameters, an efficient global optimization method based on the partial least squares (PLS) transformation and the Kriging model is proposed. First, a partial least squares Gaussian kernel function is constructed. Second, the differential evolution algorithm is used to find new sample points that maximize the expected improvement criterion. Then, by combining different kernel functions with expected improvement criteria, four efficient global optimization algorithms are constructed and compared. Finally, numerical examples show that the PLS-based Kriging global optimization method outperforms standard global optimization algorithms in both convergence accuracy and convergence speed when solving high-dimensional global optimization problems.
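
The expected-improvement step described in this abstract can be sketched as follows: fit a Kriging (Gaussian process) surrogate, then let differential evolution search for the point maximizing expected improvement. The partial least squares kernel of the paper is not reproduced; a plain RBF kernel from scikit-learn stands in for it, and the toy objective, design size, and iteration counts are assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# One efficient-global-optimization (EGO) style loop: Kriging surrogate + EI
# maximized by differential evolution. Sketch only; the paper's PLS kernel is omitted.

rng = np.random.default_rng(0)
f = lambda x: np.sum(x**2 - 2 * np.cos(3 * x), axis=-1)   # toy objective
dim, bounds = 4, [(-2.0, 2.0)] * 4

X = rng.uniform(-2, 2, size=(20, dim))                    # initial design
y = f(X)

for it in range(10):                                      # EGO iterations
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=np.ones(dim)),
                                  normalize_y=True).fit(X, y)
    y_best = y.min()

    def neg_ei(x):
        # Negative expected improvement at x (to be minimized).
        mu, sigma = gp.predict(x.reshape(1, -1), return_std=True)
        mu, sigma = mu[0], max(sigma[0], 1e-12)
        z = (y_best - mu) / sigma
        return -((y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z))

    # Differential evolution searches for the EI maximizer, as in the abstract.
    res = differential_evolution(neg_ei, bounds, maxiter=50, seed=it)
    X = np.vstack([X, res.x])
    y = np.append(y, f(res.x))

print("best value found:", y.min())
```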

9.
A novel metaheuristics approach for continuous global optimization
This paper proposes a novel metaheuristics approach to find the global optimum of continuous global optimization problems with box constraints. The approach combines the characteristics of modern metaheuristics such as scatter search (SS), genetic algorithms (GAs), and tabu search (TS), and is named the hybrid scatter genetic tabu (HSGT) search. The development of the HSGT search, parameter settings, experimentation, and efficiency of the HSGT search are discussed. The HSGT search has been tested against a simulated annealing algorithm, a GA under the name GENOCOP, and a modified version of a hybrid scatter genetic (HSG) search on 19 well-known test functions. Applications to neural network training are also examined. From the computational results, the HSGT search proved quite effective in identifying the global optimum solution, which makes it a promising approach for solving the general nonlinear optimization problem.
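
In the spirit of the HSGT description, the sketch below combines a scatter-search-style blend of reference solutions, a GA-style mutation, and a tabu list of recently visited (rounded) points. The actual HSGT operators, parameter settings, and reference-set management are not reproduced; everything in the code is an illustrative assumption.

```python
import math, random

random.seed(4)

def rastrigin(x):
    # Illustrative multimodal benchmark.
    return 10 * len(x) + sum(xi ** 2 - 10 * math.cos(2 * math.pi * xi) for xi in x)

def hsgt_like(f, dim=5, ref_size=10, iters=2000, lo=-5.12, hi=5.12):
    # Reference set of candidate solutions, kept sorted by objective value.
    ref = sorted(([random.uniform(lo, hi) for _ in range(dim)] for _ in range(ref_size)), key=f)
    tabu = set()                                  # recently visited (rounded) points
    for _ in range(iters):
        a, b = random.sample(ref, 2)
        lam = random.random()
        child = [lam * ai + (1 - lam) * bi for ai, bi in zip(a, b)]   # scatter-search blend
        if random.random() < 0.3:                                     # GA-style mutation
            i = random.randrange(dim)
            child[i] = min(max(child[i] + random.gauss(0, 0.3), lo), hi)
        key = tuple(round(c, 1) for c in child)                       # coarse tabu key
        if key in tabu:
            continue                              # tabu search: skip recently visited points
        tabu.add(key)
        if len(tabu) > 500:
            tabu.pop()                            # crude bounded tabu memory
        if f(child) < f(ref[-1]):                 # replace the worst reference solution
            ref[-1] = child
            ref.sort(key=f)
    return ref[0], f(ref[0])

x_best, f_best = hsgt_like(rastrigin)
print(round(f_best, 4))
```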

10.
Metaheuristic optimization algorithms have become a popular choice for solving complex and intricate problems that are otherwise difficult to solve by traditional methods. In the present study an attempt is made to review hybrid optimization techniques in which one main algorithm is a well-known metaheuristic: particle swarm optimization (PSO). Hybridization is a method of combining two (or more) techniques in a judicious manner such that the resulting algorithm contains the positive features of both (or all) of them. Depending on the algorithms used, we make three classifications: (i) hybridization of PSO with genetic algorithms, (ii) hybridization of PSO with differential evolution, and (iii) hybridization of PSO with other techniques, where other techniques include various local and global search methods. Besides giving the review, we also show a comparison of three hybrid PSO algorithms: hybrid differential evolution particle swarm optimization (DE-PSO), adaptive mutation particle swarm optimization (AMPSO), and hybrid genetic algorithm particle swarm optimization (GA-PSO) on a test suite of nine conventional benchmark problems.
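
As an illustration of this kind of hybridization, the sketch below runs a standard PSO swarm and applies a DE/rand/1/bin step to the personal-best population. It is a generic DE-PSO-style hybrid, not necessarily the DE-PSO variant compared in the paper, and all parameters and the benchmark function are assumptions.

```python
import random

random.seed(3)

def sphere(x):                       # simple benchmark objective
    return sum(xi * xi for xi in x)

def de_pso(f, dim=10, swarm=30, iters=300, w=0.7, c1=1.5, c2=1.5, F=0.5, CR=0.9):
    X = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    V = [[0.0] * dim for _ in range(swarm)]
    P = [x[:] for x in X]                       # personal-best positions
    pf = [f(x) for x in X]                      # personal-best values
    g = min(range(swarm), key=lambda i: pf[i])  # global-best index
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                # Standard PSO velocity and position update.
                V[i][d] = (w * V[i][d]
                           + c1 * random.random() * (P[i][d] - X[i][d])
                           + c2 * random.random() * (P[g][d] - X[i][d]))
                X[i][d] += V[i][d]
            fx = f(X[i])
            if fx < pf[i]:
                P[i], pf[i] = X[i][:], fx
        # DE/rand/1/bin mutation applied to the personal-best population.
        for i in range(swarm):
            a, b, c = random.sample([j for j in range(swarm) if j != i], 3)
            trial = [P[a][d] + F * (P[b][d] - P[c][d]) if random.random() < CR else P[i][d]
                     for d in range(dim)]
            ft = f(trial)
            if ft < pf[i]:
                P[i], pf[i] = trial, ft
        g = min(range(swarm), key=lambda i: pf[i])
    return P[g], pf[g]

x_best, f_best = de_pso(sphere)
print(round(f_best, 6))
```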
