Similar Documents
 20 similar documents found (search time: 31 ms)
1.
Stochastic optimization/approximation algorithms are widely used to recursively estimate the optimum of a suitable function, or its root, under noisy observations when this optimum or root is constant or evolves randomly according to slowly time-varying continuous sample paths. In comparison, this paper analyzes the asymptotic properties of stochastic optimization/approximation algorithms for recursively estimating the optimum or root when it evolves rapidly with nonsmooth (jump-changing) sample paths. The resulting problem falls into the category of regime-switching stochastic approximation algorithms with two time scales. Motivated by emerging applications in wireless communications and system identification, we analyze the asymptotic behavior of such algorithms. Our analysis assumes that the noisy observations contain a (nonsmooth) jump process modeled by a discrete-time Markov chain whose transition frequency varies much faster than the adaptation rate of the stochastic optimization algorithm. Using stochastic averaging, we prove convergence of the algorithm. The rate of convergence is obtained via bounds on the estimation errors and diffusion approximations. Remarks on improving the convergence rate through iterate averaging, and on limit mean dynamics represented by differential inclusions, are also presented. The research of G. Yin was supported in part by the National Science Foundation under DMS-0603287, in part by the National Security Agency under MSPF-068-029, and in part by the National Natural Science Foundation of China under #60574069. The research of C. Ion was supported in part by the Wayne State University Rumble Fellowship. The research of V. Krishnamurthy was supported in part by NSERC (Canada).
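
A minimal sketch of the tracking setup described above, under assumed details: the minimizer of a quadratic loss jumps according to a fast two-state Markov chain while a slow stochastic approximation recursion tracks it (all constants are hypothetical). Because the chain switches much faster than the step size adapts, the iterate settles near the optimum averaged under the chain's stationary distribution, which is the stochastic-averaging effect the analysis formalizes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-state Markov chain driving the jump-varying optimum.
states = np.array([-1.0, 2.0])          # the two possible optima
P = np.array([[0.6, 0.4], [0.3, 0.7]])  # transition matrix (assumed)
pi = np.array([3 / 7, 4 / 7])           # its stationary distribution
mean_opt = pi @ states                  # averaged optimum the SA tracks

mu = 1e-3                               # slow adaptation rate
s, theta = 0, 0.0                       # Markov state and SA iterate
for n in range(50_000):
    s = rng.choice(2, p=P[s])           # fast regime switching
    grad = (theta - states[s]) + 0.1 * rng.standard_normal()  # noisy gradient
    theta -= mu * grad                  # stochastic approximation step

print(f"final iterate {theta:.3f}  vs averaged optimum {mean_opt:.3f}")
```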

2.
This paper is devoted to the study of partition-based deterministic algorithms for global optimization of Lipschitz-continuous functions that do not require knowledge of the Lipschitz constant. First we introduce a general scheme for a partition-based algorithm. Then we focus on the selection strategy, designed to exploit the information available about the objective function. We propose two strategies. The first is based on knowledge of the global optimum value of the objective function. In this case the selection strategy guarantees convergence of every infinite sequence of trial points to global minimum points. The second does not require any a priori knowledge of the objective function and instead exploits information gathered as the algorithm progresses. In this case, from a theoretical point of view, we can guarantee the so-called everywhere dense convergence of the algorithm.
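
A one-dimensional sketch of the first strategy under stated assumptions: intervals are scored by how close their center value is to the known optimum value f* relative to their width (a hypothetical scoring rule, not the paper's exact criterion), and the best-scoring interval is trisected.

```python
import math

def partition_minimize(f, a, b, f_star, n_iter=60):
    """Partition-based scheme using the known optimal value f_star:
    score each interval by (f(center) - f_star) / width and trisect
    the best-scoring one."""
    intervals = [(a, b, f((a + b) / 2))]
    best_x, best_f = (a + b) / 2, intervals[0][2]
    for _ in range(n_iter):
        idx = min(range(len(intervals)),
                  key=lambda i: (intervals[i][2] - f_star)
                                / (intervals[i][1] - intervals[i][0]))
        lo, hi, _ = intervals.pop(idx)      # most promising interval
        third = (hi - lo) / 3
        for j in range(3):                  # trisect, evaluate new centers
            l, h = lo + j * third, lo + (j + 1) * third
            fc = f((l + h) / 2)
            intervals.append((l, h, fc))
            if fc < best_f:
                best_x, best_f = (l + h) / 2, fc
    return best_x, best_f

f = lambda x: -math.exp(-10 * (x - 0.5) ** 2)   # optimum f* = -1 at x = 0.5
print(partition_minimize(f, 0.0, 1.0, f_star=-1.0))
```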

3.
In elliptic cone optimization problems, we minimize a linear objective function over the intersection of an affine linear manifold with the Cartesian product of so-called elliptic cones. We present some general classes of optimization problems that can be cast as elliptic cone programmes, such as second-order cone programmes and circular cone programmes. We also describe some real-world applications of this class of optimization problems. We study and analyse the Jordan algebraic structure of the elliptic cones. Then, we present a glimpse of the duality theory associated with elliptic cone optimization. A primal–dual path-following interior-point algorithm is derived for elliptic cone optimization problems. We prove the polynomial convergence of the proposed algorithm by showing that the logarithmic barrier is a strongly self-concordant barrier. Numerical examples show that the path-following algorithm is efficient.
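
As a concrete instance, here is the standard logarithmic barrier for the second-order cone, one member of the elliptic cone family named above, together with its gradient; a minimal numpy sketch of the barrier evaluation only, not the paper's interior-point algorithm.

```python
import numpy as np

def soc_barrier(x):
    """Logarithmic barrier F(x) = -log(x0^2 - ||x_bar||^2) for the
    second-order cone {x : x0 >= ||x_bar||}; returns the barrier value
    and its gradient (standard formulas)."""
    x0, xbar = x[0], x[1:]
    d = x0 ** 2 - xbar @ xbar
    if x0 <= 0 or d <= 0:
        raise ValueError("x is not in the interior of the cone")
    grad = np.concatenate(([-2 * x0 / d], 2 * xbar / d))
    return -np.log(d), grad

val, grad = soc_barrier(np.array([2.0, 1.0, 0.5]))
print(val, grad)
```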

4.
A genetic trust region method    Cited: 5 (self: 0, others: 5)
Zhong Shounan, Gao Fei, Ji Changming. Journal of Mathematics, 2001, 21(4): 468-472
This paper combines the genetic algorithm, with its capacity for parallel computation, and the globally convergent trust region method into a hybrid search method, providing an effective algorithm for complex multimodal optimization problems; convergence of the algorithm is proved.
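
A minimal sketch of such a hybrid under assumed details (population size, blend crossover, refinement of the incumbent only): a small real-coded GA explores globally while a trust-region solver from scipy refines the best individual each generation.

```python
import numpy as np
from scipy.optimize import minimize, rosen

rng = np.random.default_rng(1)

pop = rng.uniform(-2, 2, size=(30, 2))           # initial population
for gen in range(20):
    fit = np.array([rosen(x) for x in pop])
    order = np.argsort(fit)
    parents = pop[order[:10]]                    # truncation selection
    kids = []
    for _ in range(len(pop) - len(parents)):
        a, b = parents[rng.choice(10, 2, replace=False)]
        kids.append(0.5 * (a + b) + 0.1 * rng.standard_normal(2))  # blend + mutate
    pop = np.vstack([parents, kids])
    # trust-region refinement of the incumbent (local, globally convergent)
    pop[0] = minimize(rosen, pop[0], method="trust-constr").x

print("best:", pop[0], rosen(pop[0]))
```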

5.
Convergence of the BFGS algorithm for nonconvex optimization problems    Cited: 1 (self: 0, others: 1)
The BFGS algorithm is one of the best-known numerical algorithms in unconstrained optimization. Whether the BFGS algorithm is globally convergent for nonconvex functions is an open problem. This paper considers the BFGS algorithm under Wolfe line search with a nonconvex objective function and gives a sufficient condition for the algorithm to converge.
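
A textbook BFGS loop with a Wolfe line search (scipy's line_search enforces the Wolfe conditions), applied to a hypothetical nonconvex function; it illustrates the iteration the abstract studies, not the paper's sufficient condition.

```python
import numpy as np
from scipy.optimize import line_search

def bfgs_wolfe(f, grad, x0, tol=1e-8, max_iter=200):
    """Minimal BFGS where H approximates the inverse Hessian and the
    step length satisfies the Wolfe conditions."""
    n = len(x0)
    x, H = np.asarray(x0, float), np.eye(n)
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        p = -H @ g                                  # quasi-Newton direction
        alpha = line_search(f, grad, x, p)[0]       # Wolfe step length
        if alpha is None:
            alpha = 1e-3                            # fallback step
        x_new = x + alpha * p
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        if s @ y > 1e-12:                           # curvature condition holds
            rho = 1.0 / (s @ y)
            V = np.eye(n) - rho * np.outer(s, y)
            H = V @ H @ V.T + rho * np.outer(s, s)  # BFGS inverse update
        x, g = x_new, g_new
    return x

# hypothetical nonconvex test function (indefinite Hessian near the origin)
f = lambda x: x[0]**4 / 4 + x[1]**2 / 2 - x[0] * x[1] + x[0] - x[1]
grad = lambda x: np.array([x[0]**3 - x[1] + 1, x[1] - x[0] - 1])
print(bfgs_wolfe(f, grad, np.array([2.0, 2.0])))
```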

6.
Several optimization schemes are known for convex optimization problems. However, numerical algorithms for solving nonconvex optimization problems are still underdeveloped. Significant progress beyond convexity was made by considering the class of functions representable as differences of convex functions. In this paper, we introduce a generalized proximal point algorithm to minimize the difference of a nonconvex function and a convex function. We also study convergence results for this algorithm under the main assumption that the objective function satisfies the Kurdyka–Łojasiewicz property.
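
A one-dimensional sketch of a DC-type proximal point step under assumed details: to minimize f = g - h with g(x) = x^4 and h(x) = x^2 (a polynomial objective, hence satisfying the Kurdyka–Łojasiewicz property), h is linearized at the current point and a proximal subproblem is solved exactly. The proximal parameter is hypothetical.

```python
from scipy.optimize import minimize_scalar

g = lambda x: x ** 4          # kept exactly in the subproblem
dh = lambda x: 2 * x          # gradient of the convex part h(x) = x**2
lam = 0.5                     # proximal parameter (assumed)

x = 0.3                       # starting point
for k in range(50):
    # proximal subproblem: g(z) - <dh(x), z> + (z - x)^2 / (2 lam)
    sub = lambda z, xk=x: g(z) - dh(xk) * z + (z - xk) ** 2 / (2 * lam)
    x = minimize_scalar(sub).x

print("stationary point:", x, "f(x) =", g(x) - x ** 2)  # expect x ~ 0.707
```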

7.
Among penalty-based approaches for constrained optimization, augmented Lagrangian (AL) methods are better in at least three ways: (i) they have theoretical convergence properties, (ii) they distort the original objective function minimally, thereby providing a better function landscape for search, and (iii) they can compute the optimal Lagrange multiplier for each constraint as a by-product. Instead of keeping a constant penalty parameter throughout the optimization process, these algorithms update the parameters (called multipliers) adaptively, so that the corresponding penalized function dynamically shifts its optimum from the unconstrained minimum point to the constrained minimum point over the iterations. The flip side, however, is that the overall algorithm requires a serial application of a number of unconstrained optimization tasks, a process that is usually time-consuming and tends to be computationally expensive. In this paper, we devise a genetic algorithm based parameter update strategy for a particular AL method. The proposed strategy updates critical parameters adaptively based on population statistics. Occasionally, a classical optimization method is used to improve the GA-obtained solution, giving the resulting hybrid procedure its theoretical convergence property. The GAAL method is applied to a number of constrained test problems taken from the evolutionary algorithms (EAs) literature. The number of function evaluations required by GAAL is found in most problems to be smaller than that needed by a number of existing evolutionary constraint handling methods. The GAAL method is found to be accurate, computationally fast, and reliable over multiple runs. Besides solving the problems, the proposed GAAL method also finds the optimal Lagrange multiplier associated with each constraint as an added benefit, a matter that is important for sensitivity analysis of the optimized solution but has not received adequate attention in past evolutionary constrained optimization studies.
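
A sketch of the augmented Lagrangian outer loop with the standard multiplier update; GAAL's inner solves use a GA with population-statistics-based parameter updates, but a classical solver stands in here to keep the example short. The toy problem and all constants are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# Problem: minimize (x-2)^2 + (y-2)^2  subject to  g(x,y) = x + y - 2 <= 0.
f = lambda z: (z[0] - 2) ** 2 + (z[1] - 2) ** 2
g = lambda z: z[0] + z[1] - 2

lam, rho = 0.0, 10.0          # multiplier and penalty parameter
z = np.zeros(2)
for it in range(10):
    # AL for an inequality constraint (constant term in lam dropped)
    AL = lambda z: f(z) + max(0.0, lam + rho * g(z)) ** 2 / (2 * rho)
    z = minimize(AL, z).x                     # inner unconstrained solve
    lam = max(0.0, lam + rho * g(z))          # multiplier update

print("solution:", z, "multiplier:", lam)     # expect (1, 1), lambda = 2
```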

8.
The basic Harris Hawks optimization algorithm cannot take full advantage of the hawks' information-sharing capability during the cooperative search for prey, and it struggles to balance exploration and exploitation. These factors limit the Harris Hawks optimization algorithm, causing premature convergence and a tendency to fall into local optima. To this end, an improved Harris Hawks optimization algorithm based on information exchange is proposed for optimizing continuous functions and applied to engineering problems. First, an individual Harris hawk obtains information from the shared area of cooperative foraging and the location area of its collaborators, thereby realizing information exchange and sharing. Second, a nonlinear escaping-energy factor with chaos disturbance is designed to better balance the algorithm's local and global search. Finally, a numerical experiment is conducted with four benchmark test functions and five CEC-2017 real-parameter numerical optimization problems as well as seven practical engineering problems. The results show that the proposed algorithm outperforms the basic Harris Hawks optimization algorithm and other intelligent optimization algorithms in terms of convergence rate, solution accuracy, and robustness.
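
The exact schedule is paper-specific; the sketch below is a hypothetical nonlinear escaping-energy factor perturbed by a logistic chaotic map, of the kind the abstract describes, together with the standard HHO rule that |E| > 1 triggers exploration.

```python
import numpy as np

def escaping_energy(T, E0=1.0, seed=0.7):
    """Hypothetical schedule: quadratic decay modulated by a logistic
    chaotic sequence mapped into [-1, 1]."""
    c = seed
    E = np.empty(T)
    for t in range(T):
        c = 4.0 * c * (1.0 - c)                   # logistic map, chaotic regime
        decay = 2.0 * (1.0 - (t / T) ** 2)        # nonlinear (quadratic) decay
        E[t] = E0 * decay * (2.0 * c - 1.0)       # chaos disturbance
    return E

E = escaping_energy(500)
# |E| > 1 triggers exploration, |E| <= 1 exploitation (standard HHO rule)
print("exploration fraction:", np.mean(np.abs(E) > 1.0))
```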

9.
We consider a class of nonsmooth convex optimization problems in which the objective function is the composition of a strongly convex differentiable function with a linear mapping, regularized by the group reproducing kernel norm. This class of problems arises naturally from applications in group Lasso, a popular technique for variable selection. An effective approach to such problems is the proximal gradient method. In this paper we derive efficient algorithms for this class of convex problems and analyze the convergence of the algorithm and its subalgorithm.
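
A minimal proximal gradient sketch for a group Lasso least-squares problem: the proximal operator of the group norm is block soft-thresholding, applied after each gradient step. The data, group structure, and regularization weight are hypothetical.

```python
import numpy as np

def prox_group(v, groups, t):
    """Proximal operator of t * sum_g ||v_g||_2: block soft-thresholding."""
    out = v.copy()
    for g in groups:
        nrm = np.linalg.norm(v[g])
        out[g] = 0.0 if nrm <= t else (1 - t / nrm) * v[g]
    return out

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 6))
x_true = np.array([1.0, -2.0, 0.5, 0.0, 0.0, 0.0])   # second group inactive
b = A @ x_true + 0.1 * rng.standard_normal(50)
groups = [np.arange(0, 3), np.arange(3, 6)]          # two groups of three
lam = 2.0
L = np.linalg.norm(A, 2) ** 2                        # Lipschitz constant of the gradient

x = np.zeros(6)
for _ in range(500):                                 # proximal gradient loop
    grad = A.T @ (A @ x - b)                         # gradient of 0.5||Ax - b||^2
    x = prox_group(x - grad / L, groups, lam / L)

print(np.round(x, 3))    # the second group should be (near) exactly zero
```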

10.
In this paper we present a chaos-based evolutionary algorithm (EA) for solving nonlinear programming problems, named the chaotic genetic algorithm (CGA). CGA integrates a genetic algorithm (GA) with a chaotic local search (CLS) strategy to accelerate the search for the optimum and speed up convergence to the global solution. Integrating the global search of the genetic algorithm with the CLS procedure should offer the advantages of both optimization methods while offsetting their disadvantages. In this way, the method is intended to enhance global convergence and prevent the search from sticking at a local solution. The inherent characteristics of chaos can enhance optimization algorithms, enabling them to escape local solutions and converge faster to the global solution. Twelve chaotic maps have been analyzed in the proposed approach. Simulation results on the CEC'2005 test set show that chaotic mapping can be an effective strategy for improving the performance of EAs.
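
A minimal chaotic local search sketch under assumed details: the incumbent is perturbed by a logistic chaotic sequence mapped into a shrinking neighbourhood. The paper analyzes twelve maps; the logistic map stands in here as one example.

```python
import numpy as np

def chaotic_local_search(f, x, lo, hi, n=200, radius=0.1, seed=0.37):
    """Perturb the incumbent with a logistic chaotic sequence scaled by
    a shrinking neighbourhood; keep only improving moves."""
    c, best, fbest = seed, x.copy(), f(x)
    for k in range(n):
        c = 4.0 * c * (1.0 - c)                      # logistic chaotic map
        shrink = radius * (1.0 - k / n)              # shrinking neighbourhood
        cand = np.clip(best + shrink * (2 * c - 1) * (hi - lo), lo, hi)
        if f(cand) < fbest:
            best, fbest = cand, f(cand)
    return best, fbest

sphere = lambda x: float(np.sum(x ** 2))
x0 = np.array([0.4, -0.3])
print(chaotic_local_search(sphere, x0, lo=-1.0, hi=1.0))
```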

11.
A local linear embedding module for evolutionary computation optimization    Cited: 1 (self: 0, others: 1)
A Local Linear Embedding (LLE) module enhances the performance of two Evolutionary Computation (EC) algorithms employed as search tools in global optimization problems. The LLE module uses the stochastic sampling of the data space inherent in Evolutionary Computation to reconstruct an approximate mapping from the data space back into the parameter space. This makes it possible to map the target data vector directly into the parameter space to obtain a rough estimate of the global optimum, which is then added to the EC generation. Iterating this process considerably improves EC convergence. Thirteen standard test functions and two real-world optimization problems serve to benchmark the performance of the method. In most of our tests, optimization aided by the LLE mapping outperforms standard implementations of a genetic algorithm and a particle swarm optimization. The number and ranges of functions we tested suggest that the proposed algorithm can be considered a valid alternative to traditional EC tools in more general applications. The performance improvement in the early stage of convergence also suggests that this hybrid implementation could succeed as an initial global search to select candidates for subsequent local optimization.
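
A sketch of the LLE-style back-mapping under assumed details: the target data vector is expressed as an LLE-weighted combination of its k nearest sampled data vectors, and applying the same weights to those samples' parameters gives the parameter-space estimate. The forward model and all constants are hypothetical.

```python
import numpy as np

def lle_estimate(params, data, target, k=5, reg=1e-6):
    """Standard LLE reconstruction weights for the target among its k
    nearest data vectors, then mapped back to parameter space."""
    d = np.linalg.norm(data - target, axis=1)
    idx = np.argsort(d)[:k]                             # k nearest data vectors
    Z = data[idx] - target                              # centred neighbours
    G = Z @ Z.T + reg * np.trace(Z @ Z.T) * np.eye(k)   # regularized Gram matrix
    w = np.linalg.solve(G, np.ones(k))
    w /= w.sum()                                        # weights sum to one
    return w @ params[idx]                              # parameter-space estimate

rng = np.random.default_rng(2)
theta = rng.uniform(-1, 1, size=(100, 2))               # sampled parameters
data = np.stack([theta[:, 0] ** 2 + theta[:, 1],        # forward model (assumed)
                 theta[:, 0] - theta[:, 1]], axis=1)
target = np.array([0.0, 0.0])                           # data vector to hit
print("estimated parameters:", lle_estimate(theta, data, target))
```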

12.
Optimization, 2012, 61(6): 733-763
We present a non-monotone trust region algorithm for unconstrained optimization. Using the filter technique of Fletcher and Leyffer, we introduce a new filter acceptance criterion and use it to define reference iterations dynamically. In contrast with earlier filter criteria, the new criterion ensures that the size of the filter is finite. We also show a correlation between problem dimension and filter size. We prove global convergence of the proposed algorithm to first- and second-order critical points under suitable assumptions. Significantly, the global convergence analysis does not require the common assumption that the sequence of objective function values at reference iterations is monotone, as assumed by standard non-monotone trust region algorithms. Numerical experiments on the CUTEr problems indicate that the new algorithm is competitive with some representative non-monotone trust region algorithms.
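
A generic filter sketch, not the paper's bounded-size criterion: filter entries are (objective value, gradient norm) pairs, a trial point is acceptable when no stored entry dominates it up to a small margin, and dominated entries are purged on insertion. The margin gamma is hypothetical.

```python
def acceptable(filt, f, g, gamma=1e-4):
    """Trial (f, g) is acceptable if it improves on every filter entry
    in at least one component, with a small envelope margin."""
    return all(f < fi - gamma * gi or g < gi - gamma * gi
               for fi, gi in filt)

def add_to_filter(filt, f, g):
    # drop entries dominated by the new pair, then insert it
    filt[:] = [(fi, gi) for fi, gi in filt if fi < f or gi < g]
    filt.append((f, g))

filt = []
add_to_filter(filt, 10.0, 5.0)
add_to_filter(filt, 8.0, 6.0)
print(acceptable(filt, 7.0, 5.5))   # True: dominated by no entry
print(acceptable(filt, 11.0, 7.0))  # False: dominated by both entries
```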

13.
Population-based algorithms have been used in many real-world problems, and the bat algorithm (BA) is one of the state-of-the-art approaches among them. Because of the "super bat", BA can converge quickly, but on the other hand it easily falls into local optima. For typical BA variants, therefore, the exploration and exploitation abilities are not strong enough, and it is hard to find a precise result. In this paper, we propose a novel bat algorithm based on cross-boundary learning (CBL) and a uniform explosion strategy (UES), named BABLUE for short, to avoid this contradiction and achieve both fast convergence and high quality. Unlike previous opposition-based learning, the proposed CBL can expand the search area of the population and thus maintain global exploration during fast convergence. To enhance the local exploitation of the proposed algorithm, we propose UES, which achieves almost the same search precision as the fireworks explosion algorithm while consuming fewer computational resources. BABLUE is tested in numerous experiments on unimodal, multimodal, one-dimensional, high-dimensional, and discrete problems, and then compared with other typical intelligent optimization algorithms. The results show that the proposed algorithm outperforms the other algorithms.
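
For context, here is the basic bat algorithm update the paper builds on (CBL and UES are paper-specific refinements not reproduced here): each bat adjusts a random frequency, moves relative to the best bat, and with some probability does a local random walk around the incumbent with decaying loudness. All constants are the usual textbook defaults.

```python
import numpy as np

rng = np.random.default_rng(3)

def bat_step(X, V, best, f_min=0.0, f_max=2.0, r=0.5, A=0.5):
    """One iteration of the basic bat algorithm (textbook form)."""
    n, dim = X.shape
    freq = f_min + (f_max - f_min) * rng.random((n, 1))  # random frequencies
    V = V + (X - best) * freq                            # velocity update
    Xn = X + V                                           # position update
    walk = rng.random(n) > r                             # pulse-rate test
    Xn[walk] = best + A * rng.standard_normal((walk.sum(), dim))
    return Xn, V

obj = lambda X: np.sum(X ** 2, axis=1)                   # sphere benchmark
X = rng.uniform(-5, 5, (20, 2))
V = np.zeros_like(X)
for t in range(200):
    best = X[np.argmin(obj(X))]
    X, V = bat_step(X, V, best, A=0.5 * 0.99 ** t)       # decaying loudness
print("best value:", obj(X).min())
```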

14.
Dynamic optimization and multi-objective optimization have each gained increasing attention from the research community during the last decade. However, few studies have been reported on dynamic multi-objective optimization (dMO), and few effective dMO methods have been proposed. In this paper, we fill these gaps by developing new dMO test problems and a new effective dMO algorithm. In the newly designed dMO problems, the Pareto-optimal decision values (i.e., Pareto-optimal solutions: POS), or both the POS and the Pareto-optimal objective values (i.e., Pareto-optimal front: POF), change with time. A new multi-strategy ensemble multi-objective evolutionary algorithm (MS-MOEA) is proposed to tackle the challenges of dMO. In MS-MOEA, convergence is accelerated by a new offspring-creation mechanism powered by adaptive genetic and differential operators (GDM); a Gaussian mutation operator is employed to cope with premature convergence; and a memory-like strategy is proposed to obtain a better starting population when a change takes place. To show the advantages of the proposed algorithm, we experimentally compare MS-MOEA with several algorithms equipped with traditional restart strategies. The results suggest that such a multi-strategy ensemble approach is promising for dealing with dMO problems.

15.
Multiplicative programming problems (MPPs) are global optimization problems known to be NP-hard. In this paper, we employ algorithms developed to compute the entire set of nondominated points of multi-objective linear programmes (MOLPs) to solve linear MPPs. First, we improve our own objective space cut and bound algorithm for convex MPPs in the special case of linear MPPs by solving only one linear programme in each iteration, instead of two as in the previous version. We call this algorithm, which is based on Benson's outer approximation algorithm for MOLPs, the primal objective space algorithm. Then, based on the dual variant of Benson's algorithm, we propose a dual objective space algorithm for solving linear MPPs, which also requires solving only one linear programme in each iteration. We prove the correctness of the dual algorithm and use computational experiments, comparing our algorithms with a recent global optimization algorithm for linear MPPs from the literature as well as two general global optimization solvers, to demonstrate the superiority of the new algorithms in terms of computation time. We thus demonstrate that multi-objective optimization techniques can be beneficial for solving difficult single-objective global optimization problems.

16.
Generalized hill climbing algorithms provide a framework for modeling several local search algorithms for hard discrete optimization problems. This paper introduces and analyzes generalized hill climbing algorithm performance measures that reflect how effectively an algorithm has performed to date in visiting a global optimum and how effectively it may perform in the future in visiting such a solution. These measures are also used to obtain a necessary condition for asymptotic convergence (in probability) to a global optimum, which is then used to show that a common formulation of threshold accepting does not converge. The measures assume particularly simple forms when applied to specific search strategies such as Monte Carlo search and threshold accepting.
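
For reference, the common formulation of threshold accepting analyzed here: accept any neighbour whose deterioration is below a threshold that decreases over time. The abstract's point is that this formulation fails the necessary convergence condition, so the sketch (with a hypothetical rugged objective) shows the algorithm analyzed, not a recommended method.

```python
import random

def threshold_accepting(f, neighbour, x0, thresholds):
    """Accept any move whose deterioration f(y) - f(x) is below the
    current threshold; thresholds decrease over the schedule."""
    x, best = x0, x0
    for T in thresholds:                       # decreasing threshold schedule
        for _ in range(100):
            y = neighbour(x)
            if f(y) - f(x) < T:                # accept bounded deterioration
                x = y
                if f(x) < f(best):
                    best = x
    return best

random.seed(0)
f = lambda x: (x - 3) ** 2 + 2 * abs((x * 7) % 2 - 1)   # rugged 1D objective
neighbour = lambda x: x + random.uniform(-0.5, 0.5)
print(threshold_accepting(f, neighbour, 0.0, [2.0, 1.0, 0.5, 0.1, 0.0]))
```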

17.
We explore data-driven methods for gaining insight into the dynamics of a two-population genetic algorithm (GA), which has been effective in tests on constrained optimization problems. We track and compare one population of feasible solutions and another population of infeasible solutions. Feasible solutions are selected and bred to improve their objective function values; infeasible solutions are selected and bred to reduce their constraint violations. Interbreeding between populations is completely indirect, that is, only through offspring that happen to migrate to the other population. We introduce an empirical measure of distance and apply it between individuals and between population centroids to monitor the progress of evolution. We find that the centroids of the two populations approach each other and stabilize, which is a valuable characterization of convergence. We find that the infeasible population influences, and sometimes dominates, the genetic material of the optimum solution. Since the infeasible population is not evaluated by the objective function, it is free to explore boundary regions, where the optimum is likely to be found. Roughly speaking, the No Free Lunch theorems for optimization show that all black-box algorithms (such as genetic algorithms) have the same average performance over the set of all problems; as such, our algorithm would, on average, be no better than random search or any other black-box search method. However, we provide two general theorems giving conditions that render the No Free Lunch results null for the class of constrained optimization problems we study. The approach taken here thereby escapes the No Free Lunch implications, per se.

18.
An interval algorithm for a class of min-max-min problems    Cited: 4 (self: 0, others: 4)
We discuss a class of unconstrained min-max-min problems composed of continuously differentiable functions. By constructing an interval extension of the objective function and a deletion rule for regions containing no solution, we establish an interval algorithm for solving min-max-min problems, prove its convergence, and give numerical examples. The theoretical proof and numerical results show that the method is reliable and effective.
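
A minimal interval branch-and-bound sketch of the deletion idea on a plain minimization problem (not a min-max-min instance): regions whose interval lower bound exceeds the best known value are deleted. The interval extension is hand-derived for the toy objective f(x) = x^2 - x.

```python
def f(x):
    return x * x - x

def f_interval(lo, hi):
    """Interval extension of x^2 - x on [lo, hi] via interval arithmetic."""
    sq_lo = 0.0 if lo <= 0.0 <= hi else min(lo * lo, hi * hi)
    sq_hi = max(lo * lo, hi * hi)
    return sq_lo - hi, sq_hi - lo          # [x^2] - [x]

work, best = [(-2.0, 2.0)], f(2.0)
while work:
    lo, hi = work.pop()
    mid = 0.5 * (lo + hi)
    best = min(best, f(mid))               # update incumbent at midpoint
    flo, _ = f_interval(lo, hi)
    if flo > best or hi - lo < 1e-6:       # deletion test / width tolerance
        continue
    work += [(lo, mid), (mid, hi)]         # bisect the surviving region

print("global minimum approx:", best)      # true minimum is -0.25 at x = 0.5
```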

19.
The self-scaling quasi-Newton method solves an unconstrained optimization problem by scaling the Hessian approximation matrix before it is updated at each iteration, to avoid possibly large eigenvalues in the Hessian approximations of the objective function. It has been proved in the literature that this method has global and superlinear convergence when the objective function is convex (or even uniformly convex). We propose to solve unconstrained nonconvex optimization problems by a self-scaling BFGS algorithm with nonmonotone line search. Nonmonotone line search has been recognized in numerical practice as a competitive approach for solving large-scale nonlinear problems. We consider two different nonmonotone line search forms and study the global convergence of these nonmonotone self-scaling BFGS algorithms. We prove that, under a condition weaker than those in the literature, both forms of the self-scaling BFGS algorithm are globally convergent for unconstrained nonconvex optimization problems.
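
One self-scaling BFGS update in its common (Oren–Luenberger) textbook form: the Hessian approximation is first scaled by tau = (y^T s)/(s^T B s) to curb large eigenvalues, then the standard BFGS update is applied. This shows the scaled update only, not the paper's nonmonotone line searches.

```python
import numpy as np

def self_scaling_bfgs_update(B, s, y):
    """Scale the Hessian approximation B by tau = (y^T s)/(s^T B s),
    then apply the standard BFGS update; requires y^T s > 0."""
    Bs = B @ s
    tau = (y @ s) / (s @ Bs)          # self-scaling factor
    B = tau * B
    Bs = tau * Bs                     # rescale the cached product
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)

B = np.eye(2)
s = np.array([0.5, -0.2])             # step difference
y = np.array([0.7, -0.1])             # gradient difference (y^T s > 0)
B = self_scaling_bfgs_update(B, s, y)
print(B, np.linalg.eigvalsh(B))       # stays symmetric positive definite
```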

20.
To remedy the tendency of the traditional shark optimization algorithm to converge prematurely and become trapped in local optima when solving high-dimensional objective functions, this paper proposes a lateral-mutation shark optimization algorithm with a sine-based control factor. Using the properties of the sine curve together with an adaptive inertia weight, the method alleviates the decline in late-iteration global search ability that randomly chosen control factor values can cause in the traditional algorithm, improving global convergence in later iterations; a lateral mutation strategy applied at the best shark position further strengthens the algorithm's ability to escape local optima. The improved algorithm is tested on several shifted unimodal, multimodal, and fixed-dimension test functions. Experimental results show that, compared with various other optimization algorithms, the proposed LSSO algorithm achieves higher convergence accuracy and search speed.
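
The exact LSSO schedule is paper-specific; below is a hypothetical sine-based control factor of the kind the abstract describes, paired with a linearly decaying inertia weight. The sine form keeps the factor smooth and bounded over the run instead of randomly chosen, staying near its maximum early on and decaying toward its minimum by the end.

```python
import numpy as np

def sine_control_factor(t, T, c_max=2.0, c_min=0.2):
    """Hypothetical sine-shaped decay from c_max (t = 0) to c_min (t = T)."""
    return c_min + (c_max - c_min) * np.sin(np.pi / 2 * (1 - t / T))

def adaptive_inertia(t, T, w_max=0.9, w_min=0.4):
    """Common linear inertia-weight decay (assumed form)."""
    return w_max - (w_max - w_min) * t / T

T = 100
for t in (0, 50, 99):
    print(t, round(sine_control_factor(t, T), 3),
          round(adaptive_inertia(t, T), 3))
```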
