Similar Literature
1.
Several papers in the scientific literature use metaheuristics to solve continuous global optimization problems. To this end, some metaheuristics originally proposed for combinatorial optimization, such as the Greedy Randomized Adaptive Search Procedure (GRASP), Tabu Search and Simulated Annealing, among others, have been adapted to the continuous setting. Continuous-GRASP (C-GRASP), proposed by Hirsch et al., is one example of this group of metaheuristics. C-GRASP is an adaptation of GRASP for continuous global optimization under box constraints; it is a simple-to-implement, derivative-free and widely applicable method. However, as Hedar observes, due to its random construction C-GRASP may fail to detect promising search directions, especially in the vicinity of minima, which can result in slow convergence. To mitigate this problem, in this paper we propose a set of methods to direct the search in C-GRASP, called Directed Continuous-GRASP (DC-GRASP). The proposal combines the ability of C-GRASP to diversify the search over the space with efficient local search strategies that accelerate its convergence. We compare DC-GRASP with C-GRASP and other metaheuristics from the literature on a set of standard test problems whose global minima are known. Computational results show the effectiveness and efficiency of the proposed methods, as well as their ability to accelerate the convergence of C-GRASP.
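The construction/local-search split described above can be illustrated with a small sketch. This is not the authors' DC-GRASP, nor Hirsch et al.'s exact C-GRASP (which uses a greedy randomized construction and a grid that is refined over time); it is only a minimal C-GRASP-flavoured loop, with all names and parameters chosen for illustration:

```python
import random

def c_grasp_like(f, lower, upper, iters=200, h=0.1, seed=0):
    """A loose sketch of a C-GRASP-style search: random construction over the
    box followed by a greedy coordinate line search on a grid of step h."""
    rng = random.Random(seed)
    n = len(lower)
    best_x, best_f = None, float("inf")
    for _ in range(iters):
        # Construction phase: sample a random point inside the box.
        x = [rng.uniform(lower[i], upper[i]) for i in range(n)]
        fx = f(x)
        # Local phase: greedy coordinate moves of size h until no improvement.
        improved = True
        while improved:
            improved = False
            for i in range(n):
                for step in (-h, h):
                    y = list(x)
                    y[i] = min(max(y[i] + step, lower[i]), upper[i])
                    fy = f(y)
                    if fy < fx:
                        x, fx, improved = y, fy, True
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

# Example: minimize the 2-D sphere function over the box [-5, 5]^2.
x, fx = c_grasp_like(lambda v: v[0] ** 2 + v[1] ** 2, [-5, -5], [5, 5])
```

Because the local phase moves on a fixed h-grid, the result is only accurate to about h/2 per coordinate; the real C-GRASP shrinks the grid as the search progresses.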

2.
Much research in Artificial Intelligence (AI) has focused on exploring potential applications of intelligent systems. In most cases, these studies attempt to model human intelligence by mimicking the structure and function of the brain, but they ignore an important aspect of human learning and decision making: artificial emotion. In this paper, we present a new unconstrained global optimization method, a hybrid chaos optimization algorithm with artificial emotion (HCOAAE), which avoids becoming trapped in local minima and improves convergence on large-space, high-dimensional optimization problems. The main purpose of the artificial emotion is to mimic the human decision-making process: it chooses the most suitable parameters of HCOAAE and decides whether to change the current search strategy in the next iteration. Numerical simulations on 13 benchmark functions of different dimensions are used to test the performance of HCOAAE. Experimental results show that the proposed method significantly outperforms existing methods in terms of convergence speed, computational effectiveness, and numerical stability.
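HCOAAE itself is not specified in the abstract, but its chaotic ingredient can be sketched: chaos optimization methods typically iterate a chaotic map, such as the logistic map at r = 4, to generate well-spread trial points. The routine below illustrates only that single ingredient; it is an assumption for illustration, not the authors' algorithm:

```python
def chaotic_search(f, lower, upper, n_iter=500, z0=0.7):
    """Minimal sketch of a chaotic search: the logistic map z <- 4 z (1 - z)
    produces a well-spread sequence in [0, 1], which is mapped onto the
    search interval and scored against the objective."""
    z = z0
    best_x, best_f = None, float("inf")
    for _ in range(n_iter):
        z = 4.0 * z * (1.0 - z)          # logistic map, fully chaotic at r = 4
        x = lower + z * (upper - lower)  # map the chaotic variable into the box
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

# Example: search for the minimum of (x - 2)^2 on [-5, 5].
x, fx = chaotic_search(lambda t: (t - 2.0) ** 2, -5.0, 5.0)
```

In a full chaos optimization algorithm this coarse global sweep is usually followed by a second, contracted chaotic search around the incumbent; the "artificial emotion" layer of HCOAAE then steers parameters such as the contraction rate.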

3.
Three optimization problems with high dimensionality and many local minima are investigated under five different optimization algorithms: DIRECT, simulated annealing, Spall's SPSA algorithm, the KNITRO package, and QNSTOP, a new algorithm developed at Indiana University.

4.
This note introduces TRAVEL, a software package designed to produce probably good solutions to the Travelling Salesman Problem. The system is menu-driven, allows the user to choose among various methods, and features animated graphical displays of these procedures as they run.

5.
Optimization, 2012, 61(6): 641-663
In the present article rather general penalty/barrier methods are considered that define a local, continuously differentiable primal-dual path. The class of penalty/barrier terms includes most of the usual techniques, such as logarithmic barriers, SUMT, quadratic loss functions and exponential penalties, and the optimization problem may contain inequality as well as equality constraints. Convergence of the corresponding general primal-dual path-following method is shown for local minima that satisfy strong second-order sufficiency conditions together with the linear independence constraint qualification (LICQ) and strict complementarity. A basic tool in the analysis of these methods is an estimate of the radius of convergence of Newton's method as a function of the penalty/barrier parameter. Without using self-concordance properties, convergence bounds are derived by direct estimation of the solutions of the Newton equations. Parameter selection rules are proposed that guarantee local convergence of the considered penalty/barrier techniques with only a finite number of Newton steps at each parameter level. Numerical examples illustrate the practical behavior of the proposed class of methods.

6.
Many real-life problems can be modeled as nonlinear discrete optimization problems. Such problems often have multiple local minima and thus require global optimization methods. Due to the high complexity of these problems, heuristic global optimization techniques are usually required when solving large-scale discrete or mixed-discrete optimization problems. One of the more recent global optimization tools is the discrete filled function method. Nine variations of the discrete filled function method from the literature are identified, and a review of the theoretical properties of each is given. Some of the most promising filled functions are tested on various benchmark problems, and numerical results are given for comparison.

7.
In this paper, we study the influence of noise on subgradient methods for convex constrained optimization. The noise may come from various sources and manifests itself in inexact computation of the subgradients and function values. Assuming that the noise is deterministic and bounded, we discuss the convergence properties for two cases: the case where the constraint set is compact, and the case where this set need not be compact but the objective function has a sharp set of minima (for example, when the function is polyhedral). In both cases, using several different stepsize rules, we prove convergence to the optimal value within some tolerance that is given explicitly in terms of the errors. In the first case the tolerance is nonzero, but in the second case the optimal value can be obtained exactly, provided the size of the error in the subgradient computation is below some threshold. We then extend these results to objective functions that are sums of a large number of convex functions, in which case an incremental subgradient method can be used.
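The setting can be made concrete with a toy projected subgradient run in which the subgradient is corrupted by bounded noise, in the spirit of the abstract's sharp-minima case (here f(x) = |x| is polyhedral with a sharp minimum at 0). All parameters and the noise model are illustrative:

```python
import random

def noisy_subgradient(sub_f, project, x0, n_iter=2000, seed=0):
    """Projected subgradient method with the diminishing stepsize 1/(k+1);
    sub_f returns a possibly inexact subgradient at the current point."""
    rng = random.Random(seed)
    x = x0
    for k in range(n_iter):
        g = sub_f(x, rng)              # inexact subgradient (bounded error)
        x = project(x - g / (k + 1.0)) # step, then project onto the feasible set
    return x

# Example: minimize f(x) = |x| over [-10, 10]; the true subgradient sign(x)
# is corrupted by bounded noise of size 0.1, standing in for inexact computation.
sub = lambda x, rng: (1.0 if x > 0 else -1.0 if x < 0 else 0.0) + rng.uniform(-0.1, 0.1)
proj = lambda x: min(max(x, -10.0), 10.0)
x_final = noisy_subgradient(sub, proj, 5.0)
```

With the sharp minimum and an error (0.1) below the sharpness threshold (the subgradient magnitude 1), the iterates settle near the exact minimizer rather than merely within a noise-sized tolerance, which is the abstract's second case.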

8.

We consider the minimization of a convex objective function subject to the set of minima of another convex function, under the assumption that both functions are twice continuously differentiable. We approach this optimization problem from a continuous perspective by means of a second-order dynamical system with Hessian-driven damping and a penalty term corresponding to the constraining function. By constructing appropriate energy functionals, we prove weak convergence of the trajectories generated by this differential equation to a minimizer of the optimization problem, as well as convergence of the objective function values along the trajectories. The investigations rely on Lyapunov analysis in combination with the continuous version of the Opial Lemma. In case the objective function is strongly convex, we can even show strong convergence of the trajectories.

9.
李冲, 王兴华, 张文红. 《计算数学》, 2002, 24(4): 469-478
This paper studies the convergence of the Gauss-Newton method for solving the composite convex optimization problem

min F(x) := h(f(x)), x ∈ X,   (P)

where f is a Fréchet-differentiable nonlinear mapping from a Banach space X to a Banach space Y, and h is a convex functional defined on Y. Composite convex optimization has received wide attention in recent years and has become a mainstream direction in nonsmooth optimization theory. It has been applied broadly in many important problems and techniques, such as nonlinear inclusions, minimax problems and penalty function methods [1-5]. It also provides a new unified framework that has led to fresh developments in the theoretical analysis of numerical methods for optimization problems, and it is a convenient tool for studying first- and second-order optimality conditions on bounded domains [3,5,6,7].

10.
By far the most efficient methods for global optimization are based on starting a local optimization routine from an appropriate subset of uniformly distributed starting points. As the number of local optima is frequently unknown in advance, deciding when to stop the sequence of sampling and searching is a crucial problem. By viewing a set of observed minima as a sample from a generalized multinomial distribution whose cells correspond to the local optima of the objective function, we obtain the posterior distribution of the number of local optima and of the relative sizes of their regions of attraction. This information is used to construct sequential Bayesian stopping rules which find the optimal trade-off between reliability and computational effort.
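A minimal multistart loop with one classical sequential stopping rule can make this concrete. The rule below uses the well-known nonparametric posterior estimate of the total number of local optima, roughly w(n - 1)/(n - w - 2) after n starts have found w distinct minima, as a stand-in for the paper's own rules; the descent routine and test function are likewise illustrative:

```python
import random

def multistart_with_stopping(f, local_min, lower, upper, max_starts=200, seed=0):
    """Multistart local search with a Bayesian-style stopping rule: stop once
    the posterior estimate of the number of minima, w(n-1)/(n-w-2), is within
    half a unit of the number w already observed."""
    rng = random.Random(seed)
    minima = set()
    for n in range(1, max_starts + 1):
        x0 = rng.uniform(lower, upper)
        minima.add(round(local_min(f, x0), 3))  # round to merge identical minima
        w = len(minima)
        if n - w - 2 > 0 and w * (n - 1) / (n - w - 2) < w + 0.5:
            break
    return sorted(minima), n

# A toy descent routine using a central-difference gradient.
def descend(f, x, h=1e-3, step=1e-2, iters=5000):
    for _ in range(iters):
        g = (f(x + h) - f(x - h)) / (2 * h)
        x -= step * g
    return x

# Double-well objective with local minima at x = -1 and x = 1.
f = lambda x: (x * x - 1.0) ** 2
minima, n_starts = multistart_with_stopping(f, descend, -2.0, 2.0)
```

The rule stops quickly once repeated starts keep rediscovering the same minima, trading a small risk of missing a tiny region of attraction for a large saving in local searches.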

11.
We consider the problem of multiple fitting of linearly parametrized curves, which arises in many computer vision problems such as road scene analysis. Data extracted from images usually contain non-Gaussian noise and outliers, which makes classical estimation methods ineffective. In this paper, we first introduce a family of robust probability density functions which appears to be well suited to many real-world problems. Such noise models are also suitable for defining continuation heuristics to escape shallow local minima, and their robustness is characterized in terms of the breakdown point. Second, the usual Iteratively Reweighted Least Squares (IRLS) robust estimator is extended to the problem of robustly estimating sets of linearly parametrized curves. The resulting non-convex optimization problem is tackled within a Lagrangian approach, leading to the so-called Simultaneous Robust Multiple Fitting (SRMF) algorithm, whose global convergence to a local minimum is proved using results from constrained optimization theory.
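The reweighting idea behind IRLS can be sketched for a single line y = a x + b with Huber-style weights; the paper's SRMF algorithm extends this to several curves fitted simultaneously. The weight function, threshold c, and data below are all chosen for illustration:

```python
def irls_line(xs, ys, c=1.0, n_iter=100):
    """IRLS for one line y = a x + b with Huber-style weights
    w = min(1, c / |r|): large residuals are progressively downweighted,
    and each iteration solves the weighted normal equations in closed form."""
    a, b = 0.0, 0.0
    for _ in range(n_iter):
        r = [y - (a * x + b) for x, y in zip(xs, ys)]
        w = [1.0 if abs(ri) <= c else c / abs(ri) for ri in r]
        # Weighted least squares for (a, b) via 2x2 normal equations.
        sw = sum(w)
        swx = sum(wi * x for wi, x in zip(w, xs))
        swxx = sum(wi * x * x for wi, x in zip(w, xs))
        swy = sum(wi * y for wi, y in zip(w, ys))
        swxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
        det = sw * swxx - swx ** 2
        a = (sw * swxy - swx * swy) / det
        b = (swxx * swy - swx * swxy) / det
    return a, b

# Points on y = 2x + 1 plus one gross outlier; the reweighting suppresses it.
xs = [0, 1, 2, 3, 4, 5]
ys = [1, 3, 5, 7, 9, 40]   # last value is an outlier (true value would be 11)
a, b = irls_line(xs, ys)
```

An ordinary least squares fit to these data would be pulled far off by the outlier; under the Huber-type weights its influence is capped, so the estimate stays close to the true line.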

12.
In the last two decades, the mathematical programming community has witnessed spectacular advances in interior point methods and robust optimization. These advances have recently begun to significantly impact various fields of applied science and engineering where computational efficiency is essential. This paper focuses on two such fields: digital signal processing and communication. In the past, the widely used optimization methods in both fields were the gradient descent and least squares methods, both of which are known to suffer from the usual headaches of stepsize selection, algorithm initialization and local minima. With the recent advances in conic and robust optimization, the opportunity is ripe to use the newly developed interior point optimization techniques and highly efficient software tools to help advance the fields of signal processing and digital communication. This paper surveys recent successes in applying interior point and robust optimization to core problems in these two fields. The successful applications considered include adaptive filtering, robust beamforming, design and analysis of multi-user communication systems, channel equalization, and decoding and detection. Throughout, the emphasis is on how to exploit hidden convexity, convex reformulation of semi-infinite constraints, analysis of convergence, complexity and performance, as well as efficient practical implementation.

13.
SOR-like Methods for Augmented Systems
Several SOR-like methods are proposed for solving augmented systems, which arise in many scientific computing applications, for example constrained optimization and the finite element method for solving the Stokes equation. The convergence of these algorithms and the choice of optimal parameters are studied. Convergence and divergence regions for some of the algorithms are given, and the new algorithms are applied to solve the Stokes equations.
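For reference, the point version of SOR that these methods generalize looks as follows; the augmented-system variants of the paper adapt this scalar splitting to the 2x2 block (saddle-point) structure. The system solved here is an illustrative diagonally dominant example, not one from the paper:

```python
def sor(A, b, omega=1.2, n_iter=100):
    """Plain SOR iteration: sweep through the unknowns, updating
    x_i <- (1 - w) x_i + (w / a_ii) (b_i - sum_{j != i} a_ij x_j),
    where already-updated values are used within the same sweep."""
    n = len(b)
    x = [0.0] * n
    for _ in range(n_iter):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (1 - omega) * x[i] + omega * (b[i] - s) / A[i][i]
    return x

# A small symmetric positive definite system with exact solution (1, 2, 3).
A = [[4.0, 1.0, 0.0],
     [1.0, 4.0, 1.0],
     [0.0, 1.0, 4.0]]
b = [6.0, 12.0, 14.0]
x = sor(A, b)
```

For symmetric positive definite matrices, SOR converges for any relaxation parameter in (0, 2); the augmented systems of the paper have an indefinite saddle-point structure, which is precisely why the splitting and the parameter analysis must be redone.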

14.
Interval methods have shown their ability to locate and prove the existence of a global optimum in a safe and rigorous way. Unfortunately, these methods are rather slow. Efficient solvers for optimization problems are based on linear relaxations, but the latter are unsafe and may overestimate or, worse, underestimate the global minimum itself. This paper introduces QuadOpt, an efficient and safe framework to rigorously bound the global optimum as well as its location. QuadOpt uses consistency techniques to speed up the initial convergence of the interval narrowing algorithms. A lower bound is computed from a linear relaxation of the constraint system and the objective function. All these computations are based on a safe and rigorous implementation of linear programming techniques. First experimental results are very promising.

15.
This paper deals with the problem of packing circles and non-convex polygons, which can be both translated and rotated, into a strip with prohibited regions. Using the Φ-function technique, a mathematical model of the problem is constructed and its characteristics are investigated. Based on these characteristics, a solution approach is offered. The approach includes the following methods: an optimization method over groups of variables to construct starting points, a modification of Zoutendijk's feasible direction method to search for local minima, and a special non-exhaustive search of local minima to find an approximation to the global minimum. A number of numerical results are given and compared with the best known ones.

16.

Quasi-convex optimization is fundamental to the modelling of many practical problems in various fields such as economics, finance and industrial organization. Subgradient methods are practical iterative algorithms for solving large-scale quasi-convex optimization problems. In the present paper, focusing on quasi-convex optimization, we develop an abstract convergence theorem for a class of sequences, which satisfy a general basic inequality, under some suitable assumptions on parameters. The convergence properties in both function values and distances of iterates from the optimal solution set are discussed. The abstract convergence theorem covers relevant results of many types of subgradient methods studied in the literature, for either convex or quasi-convex optimization. Furthermore, we propose a new subgradient method, in which a perturbation of the successive direction is employed at each iteration. As an application of the abstract convergence theorem, we obtain the convergence results of the proposed subgradient method under the assumption of the Hölder condition of order p and by using the constant, diminishing or dynamic stepsize rules, respectively. A preliminary numerical study shows that the proposed method outperforms the standard, stochastic and primal-dual subgradient methods in solving the Cobb–Douglas production efficiency problem.

17.
Global optimization is a field of mathematical programming dealing with finding the global (absolute) minima of multi-dimensional multiextremal functions. Problems of this kind, where the objective function is non-differentiable, satisfies the Lipschitz condition with an unknown Lipschitz constant, and is given as a "black box", are very often encountered in engineering optimization applications. Because of the presence of multiple local minima and the absence of differentiability, traditional optimization techniques that use gradients and work with problems having only one minimum cannot be applied in this case. These real-life applied problems are attacked here by employing one of the most abstract mathematical objects: space-filling curves. A practical derivative-free deterministic method is proposed that reduces the dimensionality of the problem by using space-filling curves and works simultaneously with all possible estimates of the Lipschitz and Hölder constants. A smart adaptive balancing of local and global information collected during the search is performed at each iteration. Conditions ensuring convergence of the new method to the global minima are established. Results of numerical experiments on 1000 randomly generated test functions show a clear superiority of the new method w.r.t. the popular method DIRECT and other competitors.

18.
In a general Hilbert framework, we consider continuous gradient-like dynamical systems for constrained multiobjective optimization involving nonsmooth convex objective functions. Based on the Yosida regularization of the subdifferential operators involved in the system, we obtain the existence of strong global trajectories. We prove a descent property for each objective function, and the convergence of trajectories to weak Pareto minima. This approach provides a dynamical endogenous weighting of the objective functions, a key property for applications in cooperative games, inverse problems, and numerical multiobjective optimization.

19.
A Fast and Globally Convergent Learning Algorithm for BP Neural Networks
The error back-propagation (BP) algorithm has seen many successful applications in training multilayer neural networks. However, BP also has shortcomings, such as slow convergence and a tendency to become trapped in local minima. This paper proposes a fast and globally convergent learning algorithm for BP neural networks and gives a detailed analysis and proof of the global convergence of the optimization algorithm. Empirical results show that the proposed algorithm is more efficient and more accurate than standard BP.

20.
Differential evolution algorithms represent an up-to-date and efficient way of solving complicated optimization tasks. In this article we concentrate on the ability of differential evolution algorithms to attain the global minimum of the cost function. We demonstrate that, although often declared a global optimizer, the classic differential evolution algorithm does not in general guarantee convergence to the global minimum. To remedy this weakness we design a simple modification of the classic algorithm that limits premature convergence to local minima and ensures asymptotic global convergence. We also introduce the concepts necessary for the subsequent proof of the asymptotic global convergence of the modified algorithm. We test the classic and the modified algorithm by numerical experiments and compare their efficiency in finding the global minimum. The tests confirm that the modified algorithm is significantly more efficient with respect to global convergence than the classic algorithm.
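The classic algorithm referred to above is typically DE/rand/1/bin, which can be sketched compactly. As the abstract stresses, this unmodified scheme carries no global-convergence guarantee; all parameter values below are common illustrative defaults, not the article's settings:

```python
import math
import random

def de_rand_1_bin(f, bounds, pop_size=20, F=0.7, CR=0.9, gens=150, seed=1):
    """Classic DE/rand/1/bin: mutation v = a + F (b - c) from three distinct
    random individuals, binomial crossover with rate CR, greedy selection."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(ind) for ind in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # guarantees at least one mutated gene
            trial = [pop[a][d] + F * (pop[b][d] - pop[c][d])
                     if (rng.random() < CR or d == jrand) else pop[i][d]
                     for d in range(dim)]
            # Clip the trial vector back into the box.
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            ft = f(trial)
            if ft <= fit[i]:  # greedy selection keeps the better of the two
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]

# Example: the 2-D Rastrigin function, a standard multimodal test problem
# with global minimum 0 at the origin and many surrounding local minima.
rastrigin = lambda v: 10 * len(v) + sum(
    t * t - 10 * math.cos(2 * math.pi * t) for t in v)
x, fx = de_rand_1_bin(rastrigin, [(-5.12, 5.12)] * 2)
```

Because selection is purely greedy, the whole population can contract into one basin and stay there; modifications of the kind the article proposes keep injecting diversity so that, asymptotically, every basin is revisited.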
