Similar Articles
20 similar articles found.
1.
The filled function method is considered an efficient way to find the global minimum of multidimensional functions. A number of filled functions have been proposed recently, most of which have one or two adjustable parameters; however, there is no efficient criterion for choosing these parameters appropriately. In this paper, we propose a filled function with no parameters. The function contains neither exponential nor logarithmic terms, which makes it superior to traditional ones. Theoretical properties of the filled function are investigated, and an algorithm that does not compute gradients while minimizing the filled function is presented. Numerical experiments demonstrate the efficiency of the proposed filled function.
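As a rough illustration of the overall two-phase filled-function scheme (a minimal sketch only: it uses SciPy's Nelder-Mead as the local solver and a classical one-parameter filled function of Ge's type as a stand-in, whereas the function proposed in the paper is parameter-free and gradient-free; all names and parameter values below are illustrative assumptions), the driver might look like this in Python:

import numpy as np
from scipy.optimize import minimize

def filled_function_search(f, x0, r=1.0, rho=1.0, n_cycles=5, seed=0):
    # Two-phase filled-function scheme (illustrative sketch).
    # Phase 1: local minimization of f.
    # Phase 2: minimization of an auxiliary "filled" function built at the
    #          current minimizer, used to escape its basin of attraction.
    rng = np.random.default_rng(seed)
    x_best = minimize(f, x0, method="Nelder-Mead").x
    f_best = f(x_best)
    for _ in range(n_cycles):
        def F(x):
            # Classical Ge-type filled function used as a stand-in here;
            # it assumes r + f(x) > 0 on the region of interest.
            return np.exp(-np.linalg.norm(x - x_best) ** 2 / rho ** 2) / (r + f(x))
        x_start = x_best + 0.1 * rng.standard_normal(np.shape(x_best))
        x_escape = minimize(F, x_start, method="Nelder-Mead").x   # phase 2
        x_cand = minimize(f, x_escape, method="Nelder-Mead").x    # phase 1 again
        if f(x_cand) < f_best - 1e-12:
            x_best, f_best = x_cand, f(x_cand)
        else:
            break   # no lower basin found from this minimizer
    return x_best, f_best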

2.
In this paper, the non-quasi-Newton family with inexact line search applied to unconstrained optimization problems is studied. A new update formula for the non-quasi-Newton family is proposed. It is proved that the resulting algorithm with either a Wolfe-type or an Armijo-type line search converges globally and Q-superlinearly if the function to be minimized has a Lipschitz continuous gradient.

3.
In this paper, a new filled function with better properties is proposed for identifying a global minimum point of a general class of nonlinear programming problems within a closed bounded domain. An algorithm for unconstrained global optimization is developed from the new filled function. Theoretical and numerical properties of the proposed filled function are investigated. The implementation of the algorithm on seven test problems is reported, with satisfactory numerical results.

4.
A parallel asynchronous Newton algorithm for unconstrained optimization
A new approach to the solution of unconstrained optimization problems is introduced. It is based on the exploitation of parallel computation techniques and, in particular, on an asynchronous communication model for data exchange among concurrent processes. The proposed approach arises by interpreting the Newton method as being composed of a set of iterative and independent tasks that can be mapped onto a parallel computing system for execution. Numerical experiments on the resulting algorithm have been carried out to compare parallel versions using synchronous and asynchronous communication mechanisms, in order to assess the benefits of the proposed approach on a variety of parallel computing architectures. It is pointed out that the proposed asynchronous Newton algorithm is preferable for medium- and large-scale problems, in the context of both distributed and shared memory architectures. This research work was partially supported by the National Research Council of Italy, within the special project Sistemi Informatici e Calcolo Parallelo, under CNR Contract No. 90.00675.PF69.

5.
A hybrid constrained optimization algorithm based on pattern search and genetic algorithms
In recent years, derivative-free methods have again attracted considerable attention. Because they do not require gradient computations, they are particularly suitable for problems in which gradient information is unavailable or can only be obtained at great computational cost. This paper constructs a hybrid optimization algorithm based on pattern search and a genetic algorithm: in the search step of the pattern search method, a genetic-algorithm-like procedure is used to generate a finite set of points. The algorithm is globally convergent.
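As an illustration of this kind of hybrid (a minimal sketch only: the genetic-algorithm flavour of the search step is reduced here to random sampling around the incumbent, and the function and parameter names are hypothetical, not those of the paper), a generalized pattern search loop with a population-based search step might look as follows:

import numpy as np

def hybrid_pattern_search(f, x0, delta=1.0, shrink=0.5, pop_size=20,
                          sigma=1.0, tol=1e-6, max_iter=200, rng=None):
    # Search step: evaluate a finite population of candidate points around x.
    # Poll step: the usual 2n coordinate directions of pattern search;
    # the mesh size delta is shrunk when neither step improves f.
    rng = rng or np.random.default_rng()
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    n = x.size
    while delta > tol and max_iter > 0:
        max_iter -= 1
        cands = x + sigma * delta * rng.standard_normal((pop_size, n))
        fc = np.array([f(c) for c in cands])
        if fc.min() < fx:                 # search step succeeded
            x, fx = cands[fc.argmin()], fc.min()
            continue
        improved = False
        for d in np.vstack([np.eye(n), -np.eye(n)]):   # poll step
            f_trial = f(x + delta * d)
            if f_trial < fx:
                x, fx = x + delta * d, f_trial
                improved = True
                break
        if not improved:
            delta *= shrink               # refine the mesh
    return x, fx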

6.
Optimization, 2012, 61(4): 549-570
The best spectral conjugate gradient algorithm of Birgin and Martínez (2001, A spectral conjugate gradient method for unconstrained optimization, Applied Mathematics and Optimization, 43, 117–128), which is mainly a scaled variant of Perry (1977, A class of conjugate gradient algorithms with a two-step variable metric memory, Discussion Paper 269, Center for Mathematical Studies in Economics and Management Science, Northwestern University), is modified so as to overcome the lack of positive definiteness of the matrix defining the search direction. This modification is based on the quasi-Newton BFGS updating formula. The computational scheme is embedded into the Beale–Powell restart philosophy. The parameter scaling the gradient is selected as the spectral gradient or in an anticipative way, by means of a formula using the function values at two successive points. Under very mild conditions it is shown that, for strongly convex functions, the algorithm is globally convergent. Computational results and performance profiles for a set of 700 unconstrained optimization problems show that this new scaled nonlinear conjugate gradient algorithm substantially outperforms known conjugate gradient methods, including the spectral conjugate gradient SCG of Birgin and Martínez, the scaled Fletcher–Reeves and Polak–Ribière algorithms, and CONMIN of Shanno and Phua (1976, Algorithm 500: Minimization of unconstrained multivariate functions, ACM Transactions on Mathematical Software, 2, 87–94).

7.
In this paper we present a new memory gradient method with trust region for unconstrained optimization problems. The method combines line search and trust region techniques to generate new iterates at each iteration and therefore has the advantages of both approaches. It makes full use of previous multi-step iterative information at each iteration and avoids the storage and computation of matrices associated with the Hessian of the objective function, so it is suitable for solving large-scale optimization problems. We also design an implementable version of this method and analyze its global convergence under weak conditions. Since the method uses more information from previous iterative steps, it enables the design of quickly convergent, effective and robust algorithms. Numerical experiments show that the new method is effective, stable and robust in practical computation, compared with other similar methods.

8.
The cyclic Barzilai–Borwein method for unconstrained optimization
In the cyclic Barzilai–Borwein (CBB) method, the same Barzilai–Borwein (BB) stepsize is reused for m consecutive iterations. It is proved that CBB is locally linearly convergent at a local minimizer with positive definite Hessian. Numerical evidence indicates that when m > n/2 ≥ 3, where n is the problem dimension, CBB is locally superlinearly convergent. In the special case m = 3 and n = 2, it is proved that the convergence rate is, in general, no better than linear. An implementation of the CBB method, called adaptive cyclic Barzilai–Borwein (ACBB), combines a nonmonotone line search with an adaptive choice of the cycle length m. In numerical experiments using the CUTEr test problem library, ACBB performs better than the existing BB gradient algorithm, while it is competitive with the well-known PRP+ conjugate gradient algorithm.
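For reference, a minimal cyclic BB gradient iteration might be sketched as below (Python with NumPy assumed). The BB1 stepsize s's / s'y is the standard one; the cycle handling, safeguards and the small quadratic example are illustrative simplifications, not the ACBB implementation of the paper:

import numpy as np

def cbb(grad, x0, m=4, tol=1e-6, max_iter=1000, alpha0=1e-3):
    # Cyclic Barzilai-Borwein sketch: the BB1 stepsize alpha = s's / s'y
    # is recomputed only once every m iterations and reused in between.
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    alpha = alpha0
    for k in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        x_new = x - alpha * g
        g_new = grad(x_new)
        if k % m == m - 1:              # end of a cycle: refresh the stepsize
            s, y = x_new - x, g_new - g
            sy = s @ y
            if sy > 1e-12:
                alpha = (s @ s) / sy    # BB1 stepsize
        x, g = x_new, g_new
    return x

# Toy usage on a convex quadratic f(x) = 0.5 x'Ax (illustrative only):
A = np.diag([1.0, 10.0, 100.0])
grad = lambda x: A @ x
x_star = cbb(grad, np.ones(3))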

9.
In this paper, an unconstrained minimization algorithm is defined in which a nonmonotone line search technique is employed in association with a truncated Newton algorithm. Numerical results obtained for a set of standard test problems are reported, indicating that the proposed algorithm is highly effective in the solution of ill-conditioned as well as large-dimensional problems.

10.
A class of nonmonotone trust region algorithms is presented for unconstrained optimization. Under suitable conditions, the global and Q-quadratic convergence of the algorithms is proved. Several rules for choosing trial steps and trust region radii are also discussed. Project supported by the National Natural Science Foundation of China (Grant No. 19136012).
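A minimal sketch of a nonmonotone trust region loop of this general flavour is given below (not the specific rules analysed in the paper: the Cauchy-point subproblem solver, the reference-value window M and the radius-update constants are illustrative assumptions):

import numpy as np

def nonmonotone_trust_region(f, grad, hess, x0, M=5, delta=1.0,
                             eta1=0.1, eta2=0.75, max_iter=200, tol=1e-6):
    # The acceptance ratio compares the model decrease with the decrease
    # measured from the maximum of the last M accepted function values.
    x = np.asarray(x0, dtype=float)
    hist = [f(x)]
    for _ in range(max_iter):
        g, B = grad(x), hess(x)
        gnorm = np.linalg.norm(g)
        if gnorm <= tol:
            break
        # Cauchy-point trial step: minimize the quadratic model along -g
        # inside the trust region (a stand-in for a full subproblem solver).
        gBg = g @ B @ g
        tau = min(delta / gnorm, (g @ g) / gBg) if gBg > 0 else delta / gnorm
        s = -tau * g
        pred = -(g @ s + 0.5 * s @ B @ s)     # predicted model decrease
        f_ref = max(hist[-M:])                # nonmonotone reference value
        rho = (f_ref - f(x + s)) / pred if pred > 0 else -1.0
        if rho >= eta1:                       # accept the trial step
            x = x + s
            hist.append(f(x))
        if rho >= eta2:                       # radius update
            delta *= 2.0
        elif rho < eta1:
            delta *= 0.5
    return x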

11.
This paper presents a hybrid trust region algorithm for unconstrained optimization problems. It can be regarded as a combination of ODE-based methods, line search and trust region techniques. A feature of the proposed method is that at each iteration a system of linear equations is solved only once to obtain a trial step. Further, when the trial step is not accepted, the method performs an inexact line search along it instead of solving a new linear system. Under reasonable assumptions, the algorithm is proven to be globally and superlinearly convergent. Numerical results are also reported that show the efficiency of the proposed method.

12.
Optimization, 2012, 61(2): 249-263
New algorithms for solving unconstrained optimization problems are presented, based on the idea of combining two types of descent directions: the anti-gradient direction and either the Newton or a quasi-Newton direction. The use of the latter directions allows one to improve the convergence rate. Global and superlinear convergence properties of these algorithms are established. Numerical experiments on unconstrained test problems are reported, and the proposed algorithms are compared with some existing similar methods. This comparison demonstrates the efficiency of the proposed combined methods.
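The combination idea can be illustrated with the following minimal sketch (assumed, simplified logic rather than the algorithms of the paper): take the Newton direction when it exists and is a descent direction, otherwise fall back to the anti-gradient, and globalize with Armijo backtracking:

import numpy as np

def combined_descent(f, grad, hess, x0, c=1e-4, tol=1e-6, max_iter=200):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:
            break
        try:
            d = np.linalg.solve(hess(x), -g)   # Newton direction
            if g @ d >= 0:                     # not a descent direction
                d = -g
        except np.linalg.LinAlgError:
            d = -g                             # anti-gradient fallback
        t = 1.0
        # Armijo backtracking line search on the chosen descent direction.
        while f(x + t * d) > f(x) + c * t * (g @ d) and t > 1e-12:
            t *= 0.5
        x = x + t * d
    return x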

13.
In this paper, a new descent algorithm for solving unconstrained optimization problems is presented. Its search direction is a descent direction, and the line search procedure can be avoided except at the first iteration. It is globally convergent under mild conditions. The search direction of the new algorithm is then generalized, and convergence of the corresponding algorithm is also proved. Numerical results show that the algorithm is efficient on the given test problems.

14.
Efficient line search algorithm for unconstrained optimization
A new line search algorithm for smooth unconstrained optimization is presented that requires only one gradient evaluation with an inaccurate line search and at most two gradient evaluations with an accurate line search. It terminates in finitely many operations and shares the same theoretical properties as standard line search rules such as the Armijo–Goldstein–Wolfe–Powell rules. The algorithm is especially appropriate when gradient evaluations are very expensive relative to function evaluations. The authors would like to thank Margaret Wright and Jorge Moré for valuable comments on earlier versions of this paper.
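For comparison, the standard weak Wolfe conditions that such rules are measured against can be written as a small test (a generic sketch, not the one-gradient-evaluation scheme of the paper; the constants c1 and c2 are typical defaults):

import numpy as np

def satisfies_wolfe(f, grad, x, d, t, c1=1e-4, c2=0.9):
    # Weak Wolfe conditions for a trial steplength t along direction d.
    fx, gx = f(x), grad(x)
    x_t = x + t * d
    armijo = f(x_t) <= fx + c1 * t * (gx @ d)        # sufficient decrease
    curvature = grad(x_t) @ d >= c2 * (gx @ d)       # curvature condition
    return armijo and curvature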

15.
In this paper, an adaptive nonmonotone line search method for unconstrained minimization problems is proposed. At every iteration, the new algorithm selects only one of two directions, a Newton-type direction or a negative curvature direction, along which to perform the line search. The nonmonotone technique is included in the backtracking line search when the Newton-type direction is the search direction. Furthermore, if the negative curvature direction is the search direction, we increase the steplength under certain conditions. Global convergence to a stationary point satisfying the second-order optimality conditions is established. Some numerical results which show the efficiency of the new algorithm are reported.
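Only the nonmonotone backtracking test itself is sketched below, in the spirit of the Grippo-Lampariello-Lucidi rule with illustrative defaults (the Newton-type and negative-curvature machinery of the paper is not reproduced):

def nonmonotone_backtracking(f, x, d, g, f_hist, M=10, c=1e-4, beta=0.5, t0=1.0):
    # Nonmonotone Armijo backtracking: sufficient decrease is required with
    # respect to the maximum of the last M function values rather than f(x).
    # d is assumed to be a descent direction, i.e. g @ d < 0.
    f_ref = max(f_hist[-M:])
    t = t0
    while f(x + t * d) > f_ref + c * t * (g @ d) and t > 1e-12:
        t *= beta
    return t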

16.
The development of efficient algorithms that provide all the local minima of a function is crucial to solve certain subproblems in many optimization methods. A “multi-local” optimization procedure using inexact line searches is presented, and numerical experiments are also reported. An application of the method to a semi-infinite programming procedure is included. This work was partially supported by Ministerio de Educación y Ciencia, Spain, DGICYT grant PB93-0703. Author (*) was supported by the Consellería d'Educació i Ciència of the Generalitat Valenciana.

17.
This paper proposes the hybrid NM-PSO algorithm, based on the Nelder–Mead (NM) simplex search method and particle swarm optimization (PSO), for unconstrained optimization. NM-PSO is very easy to implement in practice since it does not require gradient computation. The modification of both the Nelder–Mead simplex search method and particle swarm optimization is intended to produce faster and more accurate convergence. The main purpose of the paper is to demonstrate how standard particle swarm optimizers can be improved by incorporating a hybridization strategy. On a suite of 20 test problems taken from the literature, computational results from a comprehensive experimental study, preceded by an investigation of parameter selection, show that the hybrid NM-PSO approach outperforms three other relevant search techniques (the original NM simplex search method, the original PSO, and the guaranteed convergence particle swarm optimization (GCPSO)) in terms of solution quality and convergence rate. In a later part of the comparative experiment, the NM-PSO algorithm is compared with several state-of-the-art cooperative PSO (CPSO) procedures from the literature. The comparison still largely favors NM-PSO in terms of accuracy, robustness and number of function evaluations. As evidenced by the overall assessment based on these two kinds of computational experience, the new algorithm proves extremely effective and efficient at locating best-practice optimal solutions for unconstrained optimization.
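The particle swarm building block that NM-PSO hybridises with simplex moves is the standard velocity/position update, sketched here with typical textbook coefficients (not necessarily those used in the paper):

import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
    # One standard PSO update: inertia plus attraction towards the
    # particle's personal best (pbest) and the swarm's global best (gbest).
    rng = rng or np.random.default_rng()
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v_new, v_new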

18.
An algorithm called DE-PSO is proposed which incorporates concepts from differential evolution (DE) and particle swarm optimization (PSO), updating particles not only by DE operators but also by the mechanisms of PSO. The proposed algorithm is tested on several benchmark functions. Numerical comparisons with different hybrid meta-heuristics demonstrate its effectiveness and efficiency.
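The DE side of such a hybrid is typically the classical DE/rand/1/bin operator, sketched below with common default parameters (the actual coupling with the PSO update inside DE-PSO is not reproduced here):

import numpy as np

def de_rand_1_bin(pop, i, F=0.5, CR=0.9, rng=None):
    # DE/rand/1 mutation followed by binomial crossover for individual i.
    rng = rng or np.random.default_rng()
    n_pop, dim = pop.shape
    r1, r2, r3 = rng.choice([j for j in range(n_pop) if j != i], 3, replace=False)
    mutant = pop[r1] + F * (pop[r2] - pop[r3])
    cross = rng.random(dim) < CR
    cross[rng.integers(dim)] = True        # ensure at least one component crosses
    trial = np.where(cross, mutant, pop[i])
    return trial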

19.
Optimization, 2012, 61(6): 733-763
We present a non-monotone trust region algorithm for unconstrained optimization. Using the filter technique of Fletcher and Leyffer, we introduce a new filter acceptance criterion and use it to define reference iterations dynamically. In contrast with earlier filter criteria, the new criterion ensures that the size of the filter is finite. We also show a correlation between the problem dimension and the filter size. We prove global convergence of the proposed algorithm to first- and second-order critical points under suitable assumptions. Significantly, the global convergence analysis does not require the common assumption that the sequence of objective function values at reference iterations is monotone, as assumed by standard non-monotone trust region algorithms. Numerical experiments on the CUTEr problems indicate that the new algorithm is competitive with some representative non-monotone trust region algorithms.

20.
Filled functions for unconstrained global optimization
This paper is concerned with filled function techniques for unconstrained global minimization of a continuous function of several variables. More general forms of filled functions are presented for smooth and non-smooth optimization problems. These functions have either one or two adjustable parameters. Conditions on functions and on the values of parameters are given so that the constructed functions have the desired properties of filled functions.
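For orientation, the properties usually required of a filled function $P$ at a local minimizer $x_1^*$ of $f$ can be summarized as follows (a commonly used definition, stated here up to minor variations between papers):

(i) $x_1^*$ is a strict local maximizer of $P$;
(ii) $P$ has no stationary point in $S_1 = \{\, x : f(x) \ge f(x_1^*),\ x \ne x_1^* \,\}$;
(iii) if $x_1^*$ is not a global minimizer of $f$, then $P$ has a local minimizer in $S_2 = \{\, x : f(x) < f(x_1^*) \,\}$.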
