Similar Documents
A total of 20 similar documents were found.
1.
《Optimization》2012,61(4):549-570
The best spectral conjugate gradient algorithm of Birgin and Martínez (2001, A spectral conjugate gradient method for unconstrained optimization, Applied Mathematics and Optimization, 43, 117–128), which is mainly a scaled variant of Perry (1977, A class of conjugate gradient algorithms with a two-step variable metric memory, Discussion Paper 269, Center for Mathematical Studies in Economics and Management Science, Northwestern University), is modified so as to overcome the lack of positive definiteness of the matrix defining the search direction. This modification is based on the quasi-Newton BFGS updating formula. The computational scheme is embedded into the restart philosophy of Beale–Powell. The parameter scaling the gradient is selected either as a spectral gradient or in an anticipative way, by means of a formula using the function values at two successive points. Under very mild conditions it is shown that, for strongly convex functions, the algorithm is globally convergent. Computational results and performance profiles for a set of 700 unconstrained optimization problems show that this new scaled nonlinear conjugate gradient algorithm substantially outperforms known conjugate gradient methods, including the spectral conjugate gradient SCG of Birgin and Martínez, the scaled Fletcher–Reeves and Polak–Ribière algorithms, and CONMIN of Shanno and Phua (1976, Algorithm 500: Minimization of unconstrained multivariate functions, ACM Transactions on Mathematical Software, 2, 87–94).

2.
A trajectory-based method for solving constrained nonlinear optimization problems is proposed. The method is an extension of a trajectory-based method for unconstrained optimization. The optimization problem is transformed into a system of second-order differential equations with the aid of the augmented Lagrangian. Several novel contributions are made, including a new penalty parameter updating strategy, an adaptive step size routine for numerical integration and a scaling mechanism. A new criterion is suggested for the adjustment of the penalty parameter. Global convergence properties of the method are established.
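As a rough illustration of the core idea only (not the authors' scheme, which adds a penalty updating strategy, an adaptive step size routine and a scaling mechanism), the sketch below follows a damped second-order trajectory driven by the gradient of a classical augmented Lagrangian on a small equality-constrained toy problem; all function names and parameter values are illustrative assumptions.

```python
import numpy as np

# Toy problem: minimize f(x) = x1^2 + 2*x2^2  subject to  h(x) = x1 + x2 - 1 = 0.
f_grad = lambda x: np.array([2.0 * x[0], 4.0 * x[1]])
h      = lambda x: x[0] + x[1] - 1.0
h_grad = lambda x: np.array([1.0, 1.0])

def aug_lag_grad(x, lam, mu):
    """Gradient of the classical augmented Lagrangian L = f + lam*h + (mu/2)*h^2."""
    return f_grad(x) + (lam + mu * h(x)) * h_grad(x)

def trajectory_solve(x0, lam=0.0, mu=10.0, dt=0.01, damping=2.0,
                     outer_iters=20, inner_steps=2000):
    """Damped second-order dynamics  x'' = -damping*x' - grad L(x), discretized with a
    simple explicit scheme, plus a first-order multiplier update between outer iterations
    (illustrative only)."""
    x, v = np.asarray(x0, float), np.zeros_like(x0, dtype=float)
    for _ in range(outer_iters):
        for _ in range(inner_steps):
            a = -damping * v - aug_lag_grad(x, lam, mu)
            v = v + dt * a
            x = x + dt * v
        lam = lam + mu * h(x)          # first-order multiplier update
    return x, lam

x_star, lam_star = trajectory_solve(np.array([5.0, -3.0]))
print(x_star, h(x_star))               # x_star ~ (2/3, 1/3), constraint nearly satisfied
```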

3.
In this work we present and analyze a new scaled conjugate gradient algorithm and its implementation, based on an interpretation of the secant equation and on the inexact Wolfe line search conditions. The best spectral conjugate gradient algorithm SCG by Birgin and Martínez (2001), which is mainly a scaled variant of Perry's (1977), is modified in such a manner as to overcome the lack of positive definiteness of the matrix defining the search direction. This modification is based on the quasi-Newton BFGS updating formula. The computational scheme is embedded in the restart philosophy of Beale–Powell. The parameter scaling the gradient is selected as a spectral gradient or in an anticipative manner, by means of a formula using the function values at two successive points. Under very mild conditions it is shown that, for strongly convex functions, the algorithm is globally convergent. Preliminary computational results, for a set consisting of 500 unconstrained optimization test problems, show that this new scaled conjugate gradient algorithm substantially outperforms the spectral conjugate gradient SCG algorithm. The author was awarded the Romanian Academy Grant 168/2003.
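As a hedged sketch of the two scaling choices mentioned above, the snippet below computes a spectral (Barzilai-Borwein-type) scaling and an "anticipative" scaling reconstructed from a second-order Taylor estimate using the function values at two successive points; the latter is a plausible reading of the abstract, not necessarily the authors' exact formula, and all names and sample data are illustrative.

```python
import numpy as np

def spectral_scaling(s, y):
    """Spectral (Barzilai-Borwein-type) scaling: theta = s's / s'y."""
    return float(s @ s) / float(s @ y)

def anticipative_scaling(f_prev, f_curr, g_prev, s):
    """Scaling built from function values at two successive points.
    Reconstruction from the Taylor expansion
        f_curr ~ f_prev + g_prev's + 0.5*s'Bs   =>   s'Bs ~ 2*(f_curr - f_prev - g_prev's),
    so theta ~ s's / (s'Bs) approximates the reciprocal Rayleigh quotient of the Hessian.
    This mirrors the idea described in the abstract; the exact formula may differ."""
    curvature = 2.0 * (f_curr - f_prev - float(g_prev @ s))
    return float(s @ s) / curvature if curvature > 0 else 1.0

# Example on f(x) = 0.5*x'Ax with A = diag(1, 10); both choices coincide on a quadratic.
A = np.diag([1.0, 10.0])
f = lambda x: 0.5 * x @ A @ x
g = lambda x: A @ x
x_prev, x_curr = np.array([1.0, 1.0]), np.array([0.8, 0.2])
s, y = x_curr - x_prev, g(x_curr) - g(x_prev)
print(spectral_scaling(s, y), anticipative_scaling(f(x_prev), f(x_curr), g(x_prev), s))
```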

4.
Orthonormal matrices are a class of well-conditioned matrices with the least spectral condition number. Here, it is first shown that a recently proposed choice for the parameter of the Dai–Liao nonlinear conjugate gradient method makes the search direction matrix as close as possible to an orthonormal matrix in the Frobenius norm. Then, through a brief singular value analysis, it is shown that another recently proposed choice for the Dai–Liao parameter improves the spectral condition number of the search direction matrix. Thus, the theoretical justifications of the two choices for the Dai–Liao parameter are enhanced. Finally, some comparative numerical results are reported.
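For context, a minimal sketch of the Dai-Liao search direction that both parameter choices plug into is given below; the two adaptive choices for the parameter t analyzed in the paper are not reproduced, and the sample quantities are made up.

```python
import numpy as np

def dai_liao_direction(g_new, d_prev, s_prev, y_prev, t):
    """Dai-Liao search direction  d_new = -g_new + beta * d_prev  with
    beta = (g_new'y - t * g_new's) / (d_prev'y).  The parameter t is the Dai-Liao
    parameter; the specific adaptive choices studied in the paper are not reproduced."""
    denom = float(d_prev @ y_prev)
    beta = (float(g_new @ y_prev) - t * float(g_new @ s_prev)) / denom
    return -g_new + beta * d_prev

# Tiny illustration with made-up quantities:
g_new  = np.array([0.3, -0.1])
d_prev = np.array([-1.0, 0.5])
s_prev = 0.2 * d_prev                 # previous step s = alpha * d
y_prev = np.array([0.4, 0.9])         # gradient difference
print(dai_liao_direction(g_new, d_prev, s_prev, y_prev, t=1.0))
```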

5.
We study the estimation of the correction parameter for the grey Verhulst model whose initial-value correction term is αz(1)(1), where α is the correction parameter. Since the related literature offers no ready-made formula for α, an unconstrained optimization model is formulated and solved by minimizing the difference between the first-order accumulated generating sequence of the original data and the simulated sequence. This yields a simple and effective formula for the initial-value correction parameter α and completes the initial-value-corrected grey Verhulst model established in the related literature. Finally, a numerical example verifies that the correction-parameter formula can effectively improve the accuracy of the initial-value-corrected grey Verhulst model.

6.
In this paper we construct families of compactly supported nonseparable interpolating refinable functions with arbitrary smoothness (or regularity). The symbols for the newly constructed scaling functions are given by a simple formula related to the Bernstein polynomials. The emphasis of the paper is to show that under an easy-to-verify geometric condition these families satisfy Cohen's condition, and they have arbitrarily high regularity. Furthermore, the constructed scaling functions satisfy, under the same geometrical condition, the Strang–Fix conditions of arbitrarily high order, which implies that the corresponding interpolating schemes have arbitrarily high accuracy.

7.
Using a strict bound due to Spedicato on the condition number of bordered positive-definite matrices, we show that the scaling parameter in the ABS class for linear systems can always be chosen so that the bound for a certain update matrix is globally minimized. Moreover, if the scaling parameter is so chosen at every iteration, then the condition number itself is globally minimized. The resulting class of optimally conditioned algorithms contains as a special case the class of optimally stable algorithms in the sense of Broyden. This work was done in the framework of research supported by MPI, Rome, Italy, 60% Program.

8.
A fractal interpolation function is constructed by means of an iterated function system. Starting from an analysis of the iteration process, some properties and characteristics of this self-affine fractal interpolation function are obtained. For vertical scaling factors d with 1/2 < d < 1, the existence of the maximum is proved and the maximum of this class of fractal interpolation functions is computed.

9.
A novel smooth nonlinear augmented Lagrangian for solving minimax problems with inequality constraints is proposed in this paper; it has positive properties that the classical Lagrangian and the penalty function fail to possess. The corresponding algorithm mainly consists of minimizing the nonlinear augmented Lagrangian function and updating the Lagrange multipliers and the controlling parameter. It is demonstrated that, under mild conditions, the algorithm converges Q-superlinearly when the controlling parameter is less than a threshold. Furthermore, the condition number of the Hessian of the nonlinear augmented Lagrangian function is studied, which is very important for the efficiency of the algorithm. The theoretical results are further validated by preliminary numerical experiments on several test problems, reported at the end, which show that the nonlinear augmented Lagrangian is promising.
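For orientation, the sketch below shows the classical (PHR) augmented Lagrangian for inequality constraints and its first-order multiplier update, i.e. the kind of construction the proposed smooth nonlinear augmented Lagrangian is meant to improve upon; the paper's own Lagrangian is not reproduced, and the toy problem and parameter values are assumptions.

```python
import numpy as np

def phr_augmented_lagrangian(f, g_list, x, lam, c):
    """Classical (PHR) augmented Lagrangian for constraints g_i(x) <= 0:
        L(x, lam, c) = f(x) + (1/(2c)) * sum_i ( max(0, lam_i + c*g_i(x))^2 - lam_i^2 ).
    The paper's smooth *nonlinear* augmented Lagrangian replaces this classical
    construction; it is not reproduced here."""
    val = f(x)
    for lam_i, g_i in zip(lam, g_list):
        val += (max(0.0, lam_i + c * g_i(x)) ** 2 - lam_i ** 2) / (2.0 * c)
    return val

def multiplier_update(g_list, x, lam, c):
    """First-order multiplier update lam_i <- max(0, lam_i + c*g_i(x))."""
    return np.array([max(0.0, lam_i + c * g_i(x)) for lam_i, g_i in zip(lam, g_list)])

# Tiny illustration: f(x) = (x-2)^2 with the constraint x <= 1, i.e. g(x) = x - 1 <= 0.
f = lambda x: (x - 2.0) ** 2
g = [lambda x: x - 1.0]
lam, c = np.array([0.0]), 10.0
print(phr_augmented_lagrangian(f, g, 1.5, lam, c), multiplier_update(g, 1.5, lam, c))
```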

10.
《Optimization》2012,61(3):375-389
In this paper we consider two alternative choices for the factor used to scale the initial Hessian approximation before updating by a member of the Broyden family of updates for quasi-Newton optimization methods. Through extensive computational experiments carried out on a set of standard test problems from the CUTE collection, using efficient implementations of the quasi-Newton method, we show that the proposed new scaling factors are better, in terms of efficiency achieved (number of iterations, number of function and gradient evaluations), than the standard choice proposed in the literature.
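A minimal sketch of where such a scaling factor enters is given below, using the standard Shanno-Phua/Oren-Luenberger choice gamma = (y's)/(y'y) for the initial inverse-Hessian approximation followed by one BFGS update; the paper's two alternative factors are not reproduced here, and the sample quadratic is an assumption.

```python
import numpy as np

def scaled_bfgs_h0_update(s, y):
    """Scale the initial inverse-Hessian approximation H0 = gamma*I with the standard
    factor gamma = (y's)/(y'y) (Shanno-Phua / Oren-Luenberger choice), then apply one
    BFGS update.  This only shows where such a scaling factor enters the iteration."""
    n = s.size
    gamma = float(y @ s) / float(y @ y)          # standard scaling of H0
    H = gamma * np.eye(n)
    rho = 1.0 / float(y @ s)
    V = np.eye(n) - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)    # BFGS update of the inverse Hessian

# Example on a quadratic with Hessian A: the updated H should satisfy H @ y == s.
A = np.array([[3.0, 0.5], [0.5, 1.0]])
s = np.array([0.4, -0.2])
y = A @ s
H1 = scaled_bfgs_h0_update(s, y)
print(np.allclose(H1 @ y, s))                    # secant equation holds -> True
```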

11.
The Hammerstein–Wiener model can describe a large number of complicated industrial processes. In this paper, a novel identification method for the neuro-fuzzy based Hammerstein–Wiener model is presented. A neuro-fuzzy system with a correlation-analysis-based, non-iterative parameter updating algorithm is proposed to model the static nonlinearity of Hammerstein–Wiener processes. As a result, the proposed method not only avoids the inevitable restrictions on the static nonlinear function encountered when using the polynomial approach, but also overcomes the problems of initialization and convergence of the model parameters, which are usually handled by trial-and-error procedures in the existing iterative algorithms used for the identification of the Hammerstein–Wiener model. In addition, combined separable signals are adopted to identify the Hammerstein–Wiener process, so that the identification problem of the linear model is separated from that of the nonlinear parts. Moreover, one part of the input signals is extended to more general signals, such as binary signals, Gaussian signals or other modulated signals. Examples are used to illustrate the effectiveness of the proposed method.

12.
A modified Levenberg–Marquardt method for solving singular systems of nonlinear equations was proposed by Fan [J Comput Appl Math. 2003;21:625–636]. Using trust region techniques, the global and quadratic convergence of the method were proved. In this paper, to improve that method, we introduce a new Levenberg–Marquardt parameter and incorporate a new nonmonotone technique into it. The global and quadratic convergence of the new method is proved under the local error bound condition. Numerical results show that the new algorithm is efficient and promising.
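A minimal sketch of a Levenberg-Marquardt iteration of this type is given below, with the common Fan-type parameter choice lambda = mu*||F(x)||; the paper's new parameter and its nonmonotone acceptance rule are not reproduced, and the test system is an illustrative assumption.

```python
import numpy as np

def lm_step(F, J, x, mu=1.0):
    """One Levenberg-Marquardt trial step for F(x) = 0:
    solve (J'J + lam*I) d = -J'F with lam = mu * ||F(x)||, a common parameter choice in
    Fan-type LM methods (the paper's new parameter and nonmonotone rule differ)."""
    Fx, Jx = F(x), J(x)
    lam = mu * np.linalg.norm(Fx)
    d = np.linalg.solve(Jx.T @ Jx + lam * np.eye(x.size), -Jx.T @ Fx)
    return x + d

# Tiny system that is singular at the solution: F(x) = (x1^2, x1 + x2^2).
F = lambda x: np.array([x[0] ** 2, x[0] + x[1] ** 2])
J = lambda x: np.array([[2.0 * x[0], 0.0], [1.0, 2.0 * x[1]]])
x = np.array([1.0, 1.0])
for _ in range(30):
    x = lm_step(F, J, x)
print(x, np.linalg.norm(F(x)))      # iterates approach the solution near the origin
```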

13.
This paper proposes nonlinear Lagrangians based on modified Fischer–Burmeister NCP functions for solving nonlinear programming problems with inequality constraints. The convergence theorem shows that the sequence of points generated by this nonlinear Lagrange algorithm is locally convergent when the penalty parameter is less than a threshold, under a set of suitable conditions on the problem functions, and the error bound of the solution, depending on the penalty parameter, is also established. It is shown that the condition number of the nonlinear Lagrangian Hessian at the optimal solution is proportional to the controlling penalty parameter. Moreover, the paper develops the dual algorithm associated with the proposed nonlinear Lagrangians. The reported numerical results suggest that the dual algorithm based on the proposed nonlinear Lagrangians is effective for solving some nonlinear optimization problems.
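For reference, the sketch below evaluates the basic Fischer-Burmeister NCP function, whose zero set encodes complementarity; the modified FB functions and the nonlinear Lagrangians built from them in the paper are not reproduced.

```python
import numpy as np

def fischer_burmeister(a, b):
    """Fischer-Burmeister NCP function: phi(a, b) = sqrt(a^2 + b^2) - a - b.
    phi(a, b) = 0  iff  a >= 0, b >= 0 and a*b = 0, so it encodes complementarity.
    The paper builds its nonlinear Lagrangians from *modified* FB functions,
    which are not reproduced here."""
    return np.sqrt(a ** 2 + b ** 2) - a - b

# Complementarity check: (a, b) = (2, 0) satisfies it; (1, 1) and (-1, 0) do not.
for a, b in [(2.0, 0.0), (1.0, 1.0), (-1.0, 0.0)]:
    print((a, b), fischer_burmeister(a, b))
```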

14.
We study the mutation operation of the differential evolution algorithm. In particular, we study the effect of the scaling parameter of the differential vector in mutation. We derive the probability density function of points generated by mutation and thereby identify some drawbacks of the scaling parameter. We also visualize these drawbacks using simulation. We then propose a crossover rule, called the preferential crossover rule, to reduce the drawbacks. The preferential crossover rule uses points from an auxiliary population set. We also introduce a variable scaling parameter in mutation. Motivations for these changes are provided. A numerical study is carried out using 50 test problems, many of which are inspired by practical applications. Numerical results suggest that the proposed modification considerably reduces the number of function evaluations and the CPU time.
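A minimal sketch of the standard DE/rand/1 mutation, in which the scaling parameter F enters, together with binomial crossover is given below; the preferential crossover rule and the variable scaling parameter proposed in the paper are not reproduced, and all parameter values are illustrative.

```python
import numpy as np

def de_rand_1_mutation(pop, i, F, rng):
    """DE/rand/1 mutation: v = x_r1 + F * (x_r2 - x_r3), with r1, r2, r3 distinct
    indices different from i.  F is the scaling parameter studied in the abstract."""
    candidates = [j for j in range(len(pop)) if j != i]
    r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
    return pop[r1] + F * (pop[r2] - pop[r3])

def binomial_crossover(x, v, CR, rng):
    """Standard binomial crossover between target x and mutant v."""
    mask = rng.random(x.size) < CR
    mask[rng.integers(x.size)] = True          # ensure at least one component from v
    return np.where(mask, v, x)

rng = np.random.default_rng(0)
pop = rng.uniform(-5.0, 5.0, size=(10, 2))     # population of 10 two-dimensional points
trial = binomial_crossover(pop[0], de_rand_1_mutation(pop, 0, F=0.8, rng=rng), CR=0.9, rng=rng)
print(trial)
```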

15.
This article investigates averaging effects associated with a fine-grained boundary. A simple diffusion occurs everywhere except at a large number of small “holes” in the medium, at which an appropriately scaled mixed boundary condition is applied. The scaling considered is fitting for boundary conditions resulting from thin layer approximations in which the layer thickness scales with the diameter of the hole. Probabilistic methods associated with the Feynman-Kac formula are applied to find the limiting behavior, and the perforated domain and complex boundary condition are replaced by a straightforward attenuating term.

16.
A new nonlinear conjugate gradient method is proposed to solve large-scale unconstrained optimization problems. The direction is given by a search direction matrix, which contains a positive parameter. The value of the parameter is calculated by minimizing an upper bound of the spectral condition number of the matrix defining the direction, in order to cluster all of its singular values. The new search direction satisfies the sufficient descent condition. Under some mild assumptions, the global convergence of the proposed method is proved for uniformly convex functions and for general functions. Numerical experiments on the CUTEr library and the test problem collection given by Andrei show that the proposed method is superior to M1 proposed by Babaie-Kafaki and Ghanbari (Eur. J. Oper. Res. 234(3), 625–630, 2014), CG_DESCENT(5.3), and CGOPT.

17.
The demand for computational efficiency and reduced cost presents a big challenge for the development of more applicable and practical approaches in the field of uncertainty model updating. In this article, a computationally efficient approach for stochastic model updating, combining the Stochastic Response Surface Method (SRSM) and Monte Carlo inverse error propagation, is developed based on a surrogate model. This stochastic surrogate model is determined using the Hermite polynomial chaos expansion and a regression-based efficient collocation method. The paper addresses the critical issue of the effectiveness and efficiency of the presented method. Its efficiency comes from the fact that a large number of computationally demanding full model simulations are no longer essential; instead, the updating of parameter mean values and variances is carried out on the stochastic surrogate model, expressed as an explicit mathematical expression. A three degree-of-freedom numerical model and a double-hat structure formed by a number of bolted joints are employed to illustrate the implementation of the method. Using the Monte Carlo-based method as the benchmark, the effectiveness and efficiency of the proposed method are verified.
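As a hedged sketch of the surrogate-building ingredient only, the code below fits a one-dimensional Hermite polynomial chaos expansion by least-squares regression on collocation samples of a hypothetical model; the stochastic model-updating loop and the structures used in the paper are not reproduced, and the model function is an assumption.

```python
import numpy as np

def hermite_design_matrix(xi, degree):
    """Probabilists' Hermite polynomials He_n(xi) via the recurrence
    He_0 = 1, He_1 = xi, He_{n+1} = xi*He_n - n*He_{n-1}."""
    cols = [np.ones_like(xi), xi]
    for n in range(1, degree):
        cols.append(xi * cols[n] - n * cols[n - 1])
    return np.column_stack(cols[: degree + 1])

def fit_pce_surrogate(model, xi_samples, degree=3):
    """Regression-based polynomial chaos expansion of a scalar model output in one
    standard-normal input xi: the PCE coefficients are obtained by least squares on
    collocation samples.  Only the surrogate-building step is sketched here."""
    Psi = hermite_design_matrix(xi_samples, degree)
    y = np.array([model(x) for x in xi_samples])
    coeffs, *_ = np.linalg.lstsq(Psi, y, rcond=None)
    return lambda xi: hermite_design_matrix(np.atleast_1d(xi), degree) @ coeffs

# Hypothetical "expensive" model of a response depending on one uncertain parameter:
model = lambda xi: np.exp(0.3 * xi) + 0.1 * xi ** 2
rng = np.random.default_rng(1)
xi_train = rng.standard_normal(50)
surrogate = fit_pce_surrogate(model, xi_train, degree=3)
print(surrogate(0.5)[0], model(0.5))           # surrogate approximates the full model
```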

18.
In this paper we re-examine some of the available methods for pricing out the columns in the simplex method and point out their potential advantages and disadvantages. In particular, we show that a simple formula for updating the pricing vector can be used with some advantage in the standard product form simplex algorithm and with very considerable advantage in two recent developments: P.M.J. Harris' dynamic scaling method and the Forrest–Tomlin method for maintaining triangular factors of the basis.

19.
《Optimization》2012,61(11):2277-2287
Two adaptive choices for the parameter of the Dai–Liao conjugate gradient (CG) method are suggested. One is obtained by minimizing the distance between the search directions of the Dai–Liao method and a three-term CG method proposed by Zhang et al., and the other by minimizing the Frobenius condition number of the search direction matrix. Brief global convergence analyses are given. Numerical results are reported; they demonstrate the effectiveness of the suggested adaptive choices.

20.
This paper presents a new method for steplength selection in the framework of spectral gradient methods. The steplength formula is based on an interpolation scheme as well as some modified secant equations. The corresponding algorithm selects the initial positive steplength per iteration according to the satisfaction of the secant condition, and then a backtracking procedure along the negative gradient is performed. Numerical experience shows that this algorithm favorably improves the efficiency of the standard Barzilai–Borwein method as well as some other recently modified Barzilai–Borwein approaches.
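For comparison, a minimal sketch of the standard Barzilai-Borwein baseline mentioned above is given below: a gradient method whose initial steplength per iteration is the BB ratio (s's)/(s'y), safeguarded by Armijo backtracking along the negative gradient; the interpolation/modified-secant steplength of the paper is not reproduced, and the test function is an assumption.

```python
import numpy as np

def bb_gradient_method(f, grad, x0, iters=100, alpha0=1.0, c=1e-4, shrink=0.5):
    """Gradient method with a Barzilai-Borwein initial steplength
    alpha_k = (s's)/(s'y) and Armijo backtracking along the negative gradient.
    This is the standard BB baseline the paper's method is compared against."""
    x, g = np.asarray(x0, float), grad(x0)
    alpha = alpha0
    for _ in range(iters):
        # Backtracking line search along -g, starting from the BB steplength.
        t = alpha
        while f(x - t * g) > f(x) - c * t * float(g @ g):
            t *= shrink
        x_new = x - t * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        alpha = float(s @ s) / float(s @ y) if float(s @ y) > 1e-12 else alpha0
        x, g = x_new, g_new
    return x

# Ill-conditioned quadratic test: f(x) = 0.5 * x'Ax with A = diag(1, 100).
A = np.diag([1.0, 100.0])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
print(bb_gradient_method(f, grad, np.array([1.0, 1.0])))   # approaches the origin
```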
