20 similar references were retrieved.
1.
Gilding the Lily: A Variant of the Nelder-Mead Algorithm Based on Golden-Section Search
We propose a variant of the Nelder-Mead algorithm derived from a reinterpretation of univariate golden-section direct search. In the univariate case, convergence of the variant can be analyzed analogously to golden-section search. In the multivariate case, we modify the variant by replacing strict descent with fortified descent and maintaining the interior angles of the simplex bounded away from zero. Convergence of the modified variant can be analyzed by applying results for a fortified-descent simplicial search method. Some numerical experience with the variant is reported.
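As a point of reference for the univariate building block the authors reinterpret, here is a minimal sketch of classical golden-section direct search; the function `f`, the bracket `[a, b]`, and the tolerance are illustrative assumptions, not taken from the paper.

```python
import math

def golden_section_search(f, a, b, tol=1e-8):
    """Minimize a unimodal function f on [a, b] by golden-section search.

    At every step the bracket shrinks by the constant factor 1/phi,
    which is the property the Nelder-Mead variant reinterprets.
    """
    invphi = (math.sqrt(5.0) - 1.0) / 2.0          # 1/phi ~= 0.618
    x1 = b - invphi * (b - a)
    x2 = a + invphi * (b - a)
    f1, f2 = f(x1), f(x2)
    while b - a > tol:
        if f1 <= f2:                               # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = b - invphi * (b - a)
            f1 = f(x1)
        else:                                      # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + invphi * (b - a)
            f2 = f(x2)
    return 0.5 * (a + b)

# Example: the minimizer of (x - 2)^2 is recovered to high accuracy.
print(golden_section_search(lambda x: (x - 2.0) ** 2, 0.0, 5.0))
```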
2.
To address the sensitivity to initial values and the tendency to become trapped in local extrema when the fuzzy C-means algorithm is used for image segmentation, a fuzzy C-means image segmentation algorithm based on a hybrid simplex algorithm is proposed. The algorithm exploits the low computational cost and fast search speed of the Nelder-Mead simplex algorithm together with the strong adaptivity and good global search ability of particle swarm optimization, takes the result of the hybrid simplex algorithm as the input to the fuzzy C-means algorithm, and applies it to image segmentation. Experimental results show that the proposed algorithm improves segmentation quality while also increasing running speed.
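For context, a minimal sketch of the standard fuzzy C-means update (membership and center steps) that such hybrid initialization schemes feed into; the fuzzifier m = 2, the data shapes, and the random initialization are assumptions, and the hybrid simplex/PSO initialization itself is not reproduced here.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, eps=1e-6, rng=None):
    """Standard fuzzy C-means on data X (n_samples x n_features).

    Returns cluster centers and the membership matrix U (n_samples x c).
    The result depends strongly on initialization, which is what the
    hybrid simplex/PSO scheme in the paper is meant to improve.
    """
    rng = np.random.default_rng(rng)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)              # rows sum to one
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # squared distances to each center, kept strictly positive
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1) + 1e-12
        U_new = 1.0 / (d2 ** (1.0 / (m - 1.0)))
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < eps:
            U = U_new
            break
        U = U_new
    return centers, U
```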
3.
M. B. Subrahmanyam, Journal of Optimization Theory and Applications, 1989, 62(2): 311-319
The simplex algorithm of Nelder and Mead is extended to handle nonlinear optimization problems with constraints. To prevent the simplex from collapsing into a subspace near the constraints, a delayed reflection is introduced for those points moving into the infeasible region. Numerical experience indicates that the proposed algorithm yields good results in the presence of both inequality and equality constraints, even when the constraint region is narrow. We note that it may be possible to modify and improve the algorithm by trying out variants.
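For orientation, a minimal sketch of one reflection/expansion/contraction step of the standard, unconstrained Nelder-Mead simplex on numpy arrays; the delayed-reflection constraint handling proposed in the paper is not shown, and the coefficients are the usual textbook choices, assumed here for illustration (only an inside contraction is included for brevity).

```python
import numpy as np

def nelder_mead_step(simplex, fvals, f, alpha=1.0, gamma=2.0, rho=0.5, sigma=0.5):
    """One iteration of standard Nelder-Mead on an (n+1) x n simplex array."""
    order = np.argsort(fvals)
    simplex, fvals = simplex[order], fvals[order]
    best, worst = simplex[0], simplex[-1]
    centroid = simplex[:-1].mean(axis=0)           # centroid of all but the worst point

    xr = centroid + alpha * (centroid - worst)     # reflection
    fr = f(xr)
    if fvals[0] <= fr < fvals[-2]:
        simplex[-1], fvals[-1] = xr, fr
    elif fr < fvals[0]:                            # expansion
        xe = centroid + gamma * (xr - centroid)
        fe = f(xe)
        simplex[-1], fvals[-1] = (xe, fe) if fe < fr else (xr, fr)
    else:                                          # (inside) contraction
        xc = centroid + rho * (worst - centroid)
        fc = f(xc)
        if fc < fvals[-1]:
            simplex[-1], fvals[-1] = xc, fc
        else:                                      # shrink toward the best point
            simplex[1:] = best + sigma * (simplex[1:] - best)
            fvals[1:] = [f(x) for x in simplex[1:]]
    return simplex, fvals
```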
4.
唐建国, 数学的实践与认识 (Mathematics in Practice and Theory), 2006, 36(4): 135-143
To let each constraint of a linear program carry part or all of the information contained in the full original constraint set, the constraints are "snowballed" to obtain a new set of constraints equivalent to the original ones, and the steepest objective-function-decrease algorithm is then applied to the linear program formed by the new constraints. Random numerical examples of moderate size show that the algorithm reaches the optimal solution in only m iterations, where m is the number of constraints.
5.
This work deals with the solution of ill-conditioned unconstrained minimization problems by means of pattern search methods. To this end, the usual requirement of monotonic reduction of the objective function is relaxed and nonmonotone pattern search methods are proposed, which maintain the convergence properties of their monotone counterparts. Numerical experimentation on several well-known ill-conditioned functions is reported. The results highlight a class of pattern search methods which benefit very much by the introduction of nonmonotone strategies.
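A minimal sketch of the kind of nonmonotone acceptance rule alluded to here, in the spirit of Grippo-Lampariello-Lucidi: a trial point is accepted if it improves on the maximum of the last M objective values rather than on the current one. The memory length M, the constant, and the step measure are illustrative assumptions, not the paper's exact rule.

```python
from collections import deque

def nonmonotone_accept(f_trial, history, gamma=1e-4, step_measure=0.0):
    """Accept a trial value if it lies sufficiently below the worst of the
    last M stored objective values (history is a bounded deque)."""
    f_ref = max(history)                       # reference value over the memory
    return f_trial <= f_ref - gamma * step_measure

# usage sketch: keep a memory of the last M = 5 accepted values
history = deque([10.0, 9.5, 9.8, 9.2, 9.6], maxlen=5)
if nonmonotone_accept(9.4, history, step_measure=0.01):
    history.append(9.4)                        # 9.4 < max(history) = 10.0, so it is accepted
```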
6.
The Adjoint Newton Algorithm for Large-Scale Unconstrained Optimization in Meteorology Applications
A new algorithm is presented for carrying out large-scale unconstrained optimization required in variational data assimilation using the Newton method. The algorithm is referred to as the adjoint Newton algorithm. The adjoint Newton algorithm is based on the first- and second-order adjoint techniques allowing us to obtain the Newton line search direction by integrating a tangent linear equations model backwards in time (starting from a final condition with negative time steps). The error present in approximating the Hessian (the matrix of second-order derivatives) of the cost function with respect to the control variables in the quasi-Newton type algorithm is thus completely eliminated, while the storage problem related to the Hessian no longer exists since the explicit Hessian is not required in this algorithm. The adjoint Newton algorithm is applied to three one-dimensional models and to a two-dimensional limited-area shallow water equations model with both model-generated and First Global Geophysical Experiment data. We compare the performance of the adjoint Newton algorithm with that of truncated Newton, adjoint truncated Newton, and LBFGS methods. Our numerical tests indicate that the adjoint Newton algorithm is very efficient and could find the minima within three or four iterations for problems tested here. In the case of the two-dimensional shallow water equations model, the adjoint Newton algorithm improves upon the efficiencies of the truncated Newton and LBFGS methods by a factor of at least 14 in terms of the CPU time required to satisfy the same convergence criterion. The Newton, truncated Newton and LBFGS methods are general purpose unconstrained minimization methods. The adjoint Newton algorithm is only useful for optimal control problems where the model equations serve as strong constraints and their corresponding tangent linear model may be integrated backwards in time. When the backwards integration of the tangent linear model is ill-posed in the sense of Hadamard, the adjoint Newton algorithm may not work. Thus, the adjoint Newton algorithm must be used with some caution. A possible solution to avoid the current weakness of the adjoint Newton algorithm is proposed.
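The adjoint machinery itself cannot be reproduced in a few lines, but as a point of comparison with the truncated Newton methods mentioned in the abstract, here is a minimal sketch of obtaining an approximate Newton direction from Hessian-vector products alone via conjugate gradients; `grad` and `hessvec` are assumed callables, not part of the paper.

```python
import numpy as np

def newton_direction_cg(grad, hessvec, x, max_iter=50, tol=1e-8):
    """Approximately solve H(x) d = -grad(x) using only Hessian-vector
    products (the same ingredient a second-order adjoint supplies)."""
    g = grad(x)
    d = np.zeros_like(g)
    r = -g                                     # residual of H d = -g at d = 0
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Hp = hessvec(x, p)
        curv = p @ Hp
        if curv <= 0:                          # negative curvature: stop early
            break
        alpha = rs / curv
        d += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return d if d.any() else -g                # fall back to steepest descent
```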
7.
Increasing the Efficiency of Selected Grid Search Procedures Through the Introduction of a Best Step Mechanism
In this paper, several new developments are suggested in order to increase the efficiency of grid search procedures used within vertex enumeration solution algorithms based on both the classical cobweb technique (a two-constraint approach) and a single-constraint approach. The gains in efficiency (and robustness) are demonstrated by incorporating the developments into a grid search procedure (which is part of a heuristic solution algorithm used to obtain solutions to a non-linear bi-level multi-period model of an aluminium smelter) and applying it to a test suite of problems. The classical cobweb technique is shown to have some serious problems when used in this application but, with the step-size improvements suggested in the paper, becomes very competitive. The developments are also incorporated into a single-constraint-based approach, which does not suffer from the cobweb-specific difficulties. The improvements considered include the use of a step mechanism, initially with a step size calculated a priori and finally with a tailored (best) one. The latter development, when applied to the cobweb approach, requires approximating the two constraints (and hence the intersection of their graphs) by second-degree polynomials.
8.
This paper constructs a gradient-free algorithm for solving unconstrained nonlinear optimization problems. The algorithm is based on the traditional pattern search method; after each unsuccessful iteration, it makes full use of the information from the existing iterates to construct an approximate descent direction and generate a new iterate. Under fairly weak conditions, the algorithm is globally convergent. Numerical experiments show that, compared with the traditional pattern search method, the computational cost is markedly reduced.
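For reference, a minimal sketch of the traditional coordinate pattern (compass) search that the paper takes as its starting point; the approximate-descent-direction construction from past iterates, which is the paper's contribution, is not reproduced, and the initial step size and shrink factor are assumptions.

```python
import numpy as np

def compass_search(f, x0, step=1.0, shrink=0.5, tol=1e-8, max_iter=10000):
    """Classical pattern search: poll the 2n coordinate directions and
    shrink the step only after an unsuccessful poll."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    n = x.size
    for _ in range(max_iter):
        improved = False
        for i in range(n):
            for s in (+1.0, -1.0):
                trial = x.copy()
                trial[i] += s * step
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
                    break
            if improved:
                break
        if not improved:
            step *= shrink                     # unsuccessful poll: refine the mesh
            if step < tol:
                break
    return x, fx
```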
9.
L. Grippo, F. Lampariello, S. Lucidi, Journal of Optimization Theory and Applications, 1988, 56(3): 385-406
In this paper, acceptability criteria for the stepsize and global convergence conditions are established for unconstrained minimization methods employing only function values. On the basis of these results, the convergence of an implementable line search algorithm is proved and some global stabilization schemes are described. The authors would like to thank the anonymous referees for their useful suggestions.
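A minimal sketch of a derivative-free stepsize acceptability test of the kind analyzed here: since no gradients are available, sufficient decrease is measured against the squared step length. The constants, the halving loop, and the assumption that `f` accepts a sequence of coordinates are illustrative, not the paper's exact criteria.

```python
def derivative_free_linesearch(f, x, d, alpha0=1.0, gamma=1e-4, max_halvings=50):
    """Backtrack until f(x + alpha*d) <= f(x) - gamma * alpha**2 * ||d||^2,
    a sufficient-decrease test that uses function values only."""
    fx = f(x)
    d_norm2 = sum(di * di for di in d)
    alpha = alpha0
    for _ in range(max_halvings):
        trial = [xi + alpha * di for xi, di in zip(x, d)]
        if f(trial) <= fx - gamma * alpha * alpha * d_norm2:
            return alpha, trial
        alpha *= 0.5
    return 0.0, list(x)                        # declare failure: keep the current point
```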
10.
L. Grippo, F. Lampariello, S. Lucidi, Journal of Optimization Theory and Applications, 1990, 64(3): 495-510
In this paper, we define an unconstrained optimization algorithm employing only first-order derivatives, in which a nonmonotone stabilization technique is used in conjunction with a quasidiscrete Newton method for the computation of the search direction. Global and superlinear convergence is proved, and numerical results are reported.
11.
In 1952, Hestenes and Stiefel first established, along with the conjugate-gradient algorithm, fundamental relations which exist between conjugate direction methods for function minimization on the one hand and Gram-Schmidt processes relative to a given positive-definite, symmetric matrix on the other. This paper is based on a recent reformulation of these relations by Hestenes which yields the conjugate Gram-Schmidt (CGS) algorithm. CGS includes a variety of function minimization routines, one of which is the conjugate-gradient routine. This paper gives the basic equations of CGS, including the form applicable to minimizing general nonquadratic functions of n variables. Results of numerical experiments of one form of CGS on five standard test functions are presented. These results show that this version of CGS is very effective. The preparation of this paper was sponsored in part by the US Army Research Office, Grant No. DH-ARO-D-31-124-71-G18. The authors wish to thank Mr. Paul Speckman for the many computer runs made using these algorithms. They served as a good check on the results which they had obtained earlier. Special thanks must go to Professor M. R. Hestenes whose constant encouragement and assistance made this paper possible.
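As background, a minimal sketch of the 1952 Hestenes-Stiefel conjugate-gradient iteration for a quadratic with symmetric positive-definite A, the special case of CGS mentioned here; the general nonquadratic CGS machinery of the paper is not reproduced.

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=None):
    """Minimize (1/2) x^T A x - b^T x (equivalently solve A x = b) for
    symmetric positive-definite A, as in Hestenes and Stiefel (1952)."""
    n = b.size
    x = np.zeros(n) if x0 is None else x0.astype(float)
    r = b - A @ x                              # residual = negative gradient
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter or n):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p              # A-conjugate to the previous directions
        rs = rs_new
    return x
```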
12.
This paper proposes a new hybrid conjugate gradient algorithm for solving unconstrained optimization problems. By constructing a new formula for β_k and, from it, a method for determining the search direction that differs from the traditional approach, the new algorithm satisfies the descent condition automatically, and this property is independent of both the line search and the convexity of the objective function. Global convergence of the new algorithm is proved under fairly weak conditions, and numerical results also demonstrate its effectiveness.
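For comparison, the classical choices of $\beta_k$ from which a new formula of this kind departs are, in the usual notation with $g_k = \nabla f(x_k)$, $y_{k-1} = g_k - g_{k-1}$, and search direction $d_k = -g_k + \beta_k d_{k-1}$ (the paper's own $\beta_k$ is not reproduced here):

$$
\beta_k^{\mathrm{FR}} = \frac{\|g_k\|^2}{\|g_{k-1}\|^2}, \qquad
\beta_k^{\mathrm{PRP}} = \frac{g_k^{\top} y_{k-1}}{\|g_{k-1}\|^2}, \qquad
\beta_k^{\mathrm{HS}} = \frac{g_k^{\top} y_{k-1}}{d_{k-1}^{\top} y_{k-1}}.
$$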
13.
Jian He, Layne T. Watson, Naren Ramakrishnan, Clifford A. Shaffer, Alex Verstak, Jing Jiang, Kyung Bae, William H. Tranter, Computational Optimization and Applications, 2002, 23(1): 5-25
The DIRECT (DIviding RECTangles) algorithm of Jones, Perttunen, and Stuckman (Journal of Optimization Theory and Applications, vol. 79, no. 1, pp. 157–181, 1993), a variant of Lipschitzian methods for bound constrained global optimization, has proved effective even in higher dimensions. However, the performance of a DIRECT implementation in real applications depends on the characteristics of the objective function, the problem dimension, and the desired solution accuracy. Implementations with static data structures often fail in practice, since it is difficult to predict memory resource requirements in advance. This is especially critical in multidisciplinary engineering design applications, where the DIRECT optimization is just one small component of a much larger computation, and any component failure aborts the entire design process. To make the DIRECT global optimization algorithm efficient and robust on large-scale, multidisciplinary engineering problems, a set of dynamic data structures is proposed here to balance the memory requirements with execution time, while simultaneously adapting to arbitrary problem size. The focus of this paper is on design issues of the dynamic data structures, and related memory management strategies. Numerical computing techniques and modifications of Jones' original DIRECT algorithm in terms of stopping rules and box selection rules are also explored. Performance studies are done for synthetic test problems with multiple local optima. Results for application to a site-specific system simulator for wireless communications systems (S4W) are also presented to demonstrate the effectiveness of the proposed dynamic data structures for an implementation of DIRECT.
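By way of illustration only (the paper's actual data structures are more elaborate and are not reproduced here), a minimal sketch of dynamic bookkeeping for DIRECT-style boxes: boxes are grouped by diameter, each group kept as a heap ordered by function value, so candidates for subdivision can be read off without preallocating static arrays. All names and fields below are assumptions.

```python
import heapq
from collections import defaultdict

class BoxStore:
    """Grow-as-needed storage for DIRECT hyperrectangles, keyed by diameter."""

    def __init__(self):
        self._by_diameter = defaultdict(list)  # diameter -> heap of (f, counter, box)
        self._counter = 0                      # tie-breaker so heapq never compares boxes

    def add(self, diameter, fvalue, box):
        heapq.heappush(self._by_diameter[diameter], (fvalue, self._counter, box))
        self._counter += 1

    def best_per_diameter(self):
        """One (diameter, best f, box) entry per group; box selection scans
        these representatives when choosing boxes to subdivide."""
        return [(d, h[0][0], h[0][2]) for d, h in sorted(self._by_diameter.items()) if h]

    def pop(self, diameter):
        fvalue, _, box = heapq.heappop(self._by_diameter[diameter])
        if not self._by_diameter[diameter]:
            del self._by_diameter[diameter]    # drop empty groups to bound memory
        return fvalue, box
```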
14.
A convergent minimization algorithm made up of repetitive line searches is considered in ℝ^n. It is shown that the uniform nonsingularity of the matrices consisting of n successive normalized search directions guarantees a speed of convergence which is at least n-step Q-linear. Consequences are given for multistep methods, including Powell's 1964 procedure for function minimization without calculating derivatives as well as Zangwill's modifications of this procedure. The authors wish to thank the Namur Department of Mathematics, especially its optimization group, for many discussions and encouragement. They also thank the reviewers for many helpful suggestions.
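For readers unfamiliar with the terminology, n-step Q-linear convergence to a minimizer $x^*$ means, stated in its standard form (the paper's precise constants are not reproduced):

$$
\|x_{k+n} - x^*\| \;\le\; c\,\|x_k - x^*\| \quad \text{for all sufficiently large } k, \text{ for some } c \in (0, 1).
$$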
15.
A Steepest Objective-Function Decrease Algorithm for Linear Programming
Building on the dual simplex method, a steepest objective-function-decrease algorithm for linear programming is proposed. It avoids computing an initial feasible basis or an initial basis, takes fast global decrease of the objective function as the basis-selection criterion, and merges basis selection with the basis-exchange iteration, thereby greatly reducing the number of iterations. Numerical examples demonstrate the effectiveness and superiority of the algorithm.
16.
D. G. Hull, Journal of Optimization Theory and Applications, 2002, 113(1): 1-4
Several authors have created one-parameter families of variable metric methods for function minimization. These families contain the methods known as Davidon–Fletcher–Powell, Broyden–Fletcher–Goldfarb–Shanno, and symmetric rank one. It is shown here that the same one-parameter families of methods are obtained from the Huang update by requiring the update to be symmetric.
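For reference, the familiar one-parameter (Broyden) family containing DFP and BFGS can be written for the inverse-Hessian approximation $H_k$ as follows, with $s_k = x_{k+1} - x_k$ and $y_k = g_{k+1} - g_k$; the Huang update treated in the paper is a more general family, and the symmetry argument itself is not reproduced here:

$$
H_{k+1}^{\phi} \;=\; H_{k+1}^{\mathrm{DFP}} + \phi\, v_k v_k^{\top},
\qquad
v_k = \bigl(y_k^{\top} H_k y_k\bigr)^{1/2}
\left( \frac{s_k}{s_k^{\top} y_k} - \frac{H_k y_k}{y_k^{\top} H_k y_k} \right),
$$

where $\phi = 0$ gives DFP, $\phi = 1$ gives BFGS, and

$$
H_{k+1}^{\mathrm{DFP}} = H_k - \frac{H_k y_k y_k^{\top} H_k}{y_k^{\top} H_k y_k} + \frac{s_k s_k^{\top}}{s_k^{\top} y_k}.
$$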
17.
Three parallel space-decomposition minimization (PSDM) algorithms, based on the parallel variable transformation (PVT) and the parallel gradient distribution (PGD) algorithms (O.L. Mangasarian, SIAM Journal on Control and Optimization, vol. 33, no. 6, pp. 1916–1925), are presented for solving convex or nonconvex unconstrained minimization problems. The PSDM algorithms decompose the variable space into subspaces and distribute these decomposed subproblems among parallel processors. It is shown that if all decomposed subproblems are uncoupled from each other, they can be solved independently. Otherwise, the parallel algorithms presented in this paper can be used. Numerical experiments show that these parallel algorithms can save processor time, particularly for medium and large-scale problems. Up to six parallel processors are connected by Ethernet networks to solve four large-scale minimization problems. The results are compared with those obtained by using sequential algorithms run on a single processor. An application of the PSDM algorithms to the training of multilayer Adaptive Linear Neurons (Madaline) and a new parallel architecture for such parallel training are also presented.
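As a simplified illustration of the space-decomposition idea (sequentialized here for clarity; the paper's parallel synchronization, step-combination rules, and Madaline application are not reproduced), the sketch below splits the variables into index blocks and minimizes over each block while the others are held fixed. The use of scipy.optimize.minimize is an assumption made for compactness.

```python
import numpy as np
from scipy.optimize import minimize

def block_decomposition_minimize(f, x0, blocks, sweeps=10):
    """Minimize f by cycling over index blocks; each subproblem optimizes
    only the variables in one block.  In a parallel variant the block
    subproblems would be distributed across processors and their results
    combined at a synchronization step."""
    x = np.asarray(x0, dtype=float)
    for _ in range(sweeps):
        for idx in blocks:
            idx = np.asarray(idx)

            def sub(z, idx=idx):               # objective restricted to one block
                y = x.copy()
                y[idx] = z
                return f(y)

            res = minimize(sub, x[idx], method="BFGS")
            x[idx] = res.x
    return x

# usage sketch: 4 variables split into two 2-dimensional subspaces
# x = block_decomposition_minimize(f, np.zeros(4), blocks=[[0, 1], [2, 3]])
```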
18.
P. Armand, Journal of Optimization Theory and Applications, 2007, 132(2): 287-305
This paper proposes a line search technique to satisfy a relaxed form of the strong Wolfe conditions in order to guarantee the descent condition at each iteration of the Polak-Ribière-Polyak conjugate gradient algorithm. It is proved that this line search algorithm preserves the usual convergence properties of any descent algorithm. In particular, it is shown that the Zoutendijk condition holds under mild assumptions. It is also proved that the resulting conjugate gradient algorithm is convergent under a strong convexity assumption. For the nonconvex case, a globally convergent modification is proposed. Numerical tests are presented. This paper is based on an earlier work presented at the International Symposium on Mathematical Programming in Lausanne in 1997. The author thanks J. C. Gilbert for his advice and M. Albaali for some recent discussions which motivated him to write this paper. Special thanks to G. Liu, J. Nocedal, and R. Waltz for the availability of the software CG+ and to one of the referees who indicated to him the paper of Grippo and Lucidi (Ref. 1).
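For context, the (unrelaxed) strong Wolfe conditions and the Polak-Ribière-Polyak direction referred to above can be sketched as follows; the relaxed curvature condition the paper actually uses to guarantee descent is not reproduced, and the constants are the usual illustrative choices.

```python
import numpy as np

def strong_wolfe_satisfied(f, grad, x, d, alpha, c1=1e-4, c2=0.1):
    """Check the standard strong Wolfe conditions at steplength alpha."""
    g0 = grad(x)
    slope0 = g0 @ d
    x_new = x + alpha * d
    armijo = f(x_new) <= f(x) + c1 * alpha * slope0
    curvature = abs(grad(x_new) @ d) <= c2 * abs(slope0)
    return armijo and curvature

def prp_direction(g_new, g_old, d_old):
    """Polak-Ribiere-Polyak conjugate gradient direction."""
    beta = g_new @ (g_new - g_old) / (g_old @ g_old)
    return -g_new + beta * d_old
```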
19.
Christophe Meyer, Journal of Global Optimization, 2000, 18(4): 357-365
In 1964, in a seminal paper, Tuy proposed a simple algorithm for concave minimization over a polytope. This algorithm was shown to cycle some years later. Recently however it has been shown that despite this possibility of cycling, Tuy's algorithm always finds the optimal solution of the problem. We present a modification of it which simplifies the cycle detection.
20.
Adaptive Two-Point Stepsize Gradient Algorithm
Combined with the nonmonotone line search, the two-point stepsize gradient method has been applied successfully to large-scale unconstrained optimization. However, the numerical performance of the algorithm depends heavily on M, one of the parameters in the nonmonotone line search, even for ill-conditioned problems. This paper proposes an adaptive nonmonotone line search. The two-point stepsize gradient method is shown to be globally convergent with this adaptive nonmonotone line search. Numerical results show that the adaptive nonmonotone line search is especially suitable for the two-point stepsize gradient method.
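For reference, a minimal sketch of the two-point (Barzilai-Borwein) stepsize that the adaptive nonmonotone line search is paired with; the adaptive rule for the memory parameter M, which is the paper's contribution, is not shown, and the safeguards below are illustrative assumptions.

```python
import numpy as np

def bb_stepsize(x, x_prev, g, g_prev, lo=1e-10, hi=1e10):
    """Two-point (Barzilai-Borwein) stepsize alpha = s^T s / s^T y,
    computed from the last step s and the last gradient change y."""
    s = x - x_prev
    y = g - g_prev
    sy = s @ y
    if sy <= 0:                                # lost curvature information: fall back
        return 1.0
    return float(np.clip((s @ s) / sy, lo, hi))

# One gradient iteration then uses:  x_next = x - bb_stepsize(...) * g,
# with the trial point accepted or rejected by a nonmonotone test against
# the maximum of the last M objective values.
```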