Similar Articles
20 similar articles found.
1.
We consider pruning steps used in a branch-and-bound algorithm for verified global optimization. A first-order pruning step was given by Ratz using automatic computation of a first-order slope tuple (Ratz, Automatic Slope Computation and its Application in Nonsmooth Global Optimization. Shaker Verlag, Aachen, 1998; J. Global Optim. 14: 365–393, 1999). In this paper, we introduce a second-order pruning step which is based on automatic computation of a second-order slope tuple. We add this second-order pruning step to the algorithm of Ratz. Furthermore, we compare the new algorithm with the algorithm of Ratz by considering some test problems for verified global optimization on a floating-point computer. This paper contains some results from the author’s dissertation [29].
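As a minimal illustration of how a pruning step discards subboxes in branch-and-bound (using a hand-derived interval lower bound for one test function, not the slope-tuple machinery of the paper; all names and tolerances here are hypothetical), consider:

```python
# Branch-and-bound with a pruning step for verified global minimization of
# f(x) = x^4 - 4x^2 on [-3, 3]. The interval lower bound below is derived by
# hand for this one function (an illustrative stand-in for automatic slope or
# interval arithmetic).

def f(x):
    return x ** 4 - 4.0 * x ** 2

def lower_bound(a, b):
    """Guaranteed lower bound of f on [a, b] via interval evaluation."""
    lo2 = 0.0 if a <= 0.0 <= b else min(a * a, b * b)   # range of x^2
    hi2 = max(a * a, b * b)
    return lo2 * lo2 - 4.0 * hi2   # min of (x^2)^2 minus max of 4 x^2

def branch_and_bound(a, b, tol=1e-6):
    best = min(f(a), f(b), f(0.5 * (a + b)))   # current upper bound
    work, result = [(a, b)], []
    while work:
        lo, hi = work.pop()
        if lower_bound(lo, hi) > best:   # pruning step: box cannot contain min
            continue
        mid = 0.5 * (lo + hi)
        best = min(best, f(mid))         # midpoint test tightens the bound
        if hi - lo < tol:
            result.append((lo, hi))      # box accepted as a candidate region
        else:
            work += [(lo, mid), (mid, hi)]
    return best, result

best, boxes = branch_and_bound(-3.0, 3.0)
print(round(best, 4))   # → -4.0 (global minimum, attained at x = ±√2)
```

The pruning test discards any box whose guaranteed lower bound already exceeds the best function value found so far; first- and second-order pruning steps refine exactly this bound.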

2.
We consider optimality systems of Karush-Kuhn-Tucker (KKT) type, which arise, for example, as primal-dual conditions characterizing solutions of optimization problems or variational inequalities. In particular, we discuss error bounds and Newton-type methods for such systems. An exhaustive comparison of various regularity conditions which arise in this context is given. We obtain a new error bound under an assumption which we show to be strictly weaker than assumptions previously used for KKT systems, such as quasi-regularity or semistability (equivalently, the R0-property). Error bounds are useful, among other things, for identifying active constraints and developing efficient local algorithms. We propose a family of local Newton-type algorithms. This family contains some known active-set Newton methods, as well as some new methods. Regularity conditions required for local superlinear convergence compare favorably with convergence conditions of nonsmooth Newton methods and sequential quadratic programming methods. Received: December 10, 2001 / Accepted: July 28, 2002 Published online: February 14, 2003 Key words. KKT system – regularity – error bound – active constraints – Newton method Mathematics Subject Classification (1991): 90C30, 65K05

3.
We review the modern approaches to the synthesis of robust H∞ controllers that ensure optimal damping of oscillations in dynamical systems under uncertainty. In the synthesis method based on Riccati equations, these many-parameter equations can be solved only when the parameters are contained in a bounded parallelepiped with given boundaries. The synthesis of a robust H∞ output control for systems with unknown bounded parameters is reducible to the solution of an optimization problem constrained by a system of linear matrix inequalities. The proposed controller synthesis algorithms are implemented using standard MATLAB procedures. The efficiency of the proposed methods and algorithms is demonstrated in application to optimal damping of oscillations in a parametrically excited pendulum. Translated from Nelineinaya Dinamika i Upravlenie, No. 4, pp. 87–104, 2004.

4.
Recent progress in unconstrained nonlinear optimization without derivatives   (total citations: 6; self-citations: 0; cited by others: 6)
We present an introduction to a new class of derivative free methods for unconstrained optimization. We start by discussing the motivation for such methods and why they are in high demand by practitioners. We then review the past developments in this field, before introducing the features that characterize the newer algorithms. In the context of a trust region framework, we focus on techniques that ensure a suitable “geometric quality” of the considered models. We then outline the class of algorithms based on these techniques, as well as their respective merits. We finally conclude the paper with a discussion of open questions and perspectives. Current reports available by anonymous ftp from the directory “pub/reports” on thales.math.fundp.ac.be. WWW: http://www.fundp.ac.be/~phtoint/pht/publications.html.
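The simplest derivative-free relative of these methods is direct search. A minimal compass-search sketch (not one of the model-based trust-region methods the abstract surveys, and with hypothetical step parameters) illustrates optimization from function values alone:

```python
# Compass (coordinate/pattern) search: poll along ±coordinate directions,
# accept any improving point, halve the step when no direction improves.
# A geometry-free baseline that model-based derivative-free methods improve on.

def compass_search(f, x0, step=1.0, tol=1e-8, max_iter=100000):
    x, fx = list(x0), f(x0)
    n, it = len(x0), 0
    while step > tol and it < max_iter:
        improved = False
        for i in range(n):
            for s in (+step, -step):
                y = list(x)
                y[i] += s
                fy = f(y)
                if fy < fx:              # accept an improving poll point
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5                  # no poll point improved: shrink
        it += 1
    return x, fx

f = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2
x, fx = compass_search(f, [0.0, 0.0])    # converges to the minimizer (1, -2)
```

Such methods need no derivatives at all, but many function evaluations; the model-based algorithms reviewed above reduce that cost by maintaining interpolation models of suitable geometric quality.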

5.
We propose and analyze a class of penalty-function-free nonmonotone trust-region methods for nonlinear equality constrained optimization problems. The algorithmic framework yields global convergence without using a merit function and allows nonmonotonicity independently for both the constraint violation and the value of the Lagrangian function. Similar to the Byrd–Omojokun class of algorithms, each step is composed of a quasi-normal and a tangential step. Both steps are required to satisfy a decrease condition for their respective trust-region subproblems. The proposed mechanism for accepting steps combines nonmonotone decrease conditions on the constraint violation and/or the Lagrangian function, which leads to a flexibility and acceptance behavior comparable to filter-based methods. We establish the global convergence of the method. Furthermore, transition to quadratic local convergence is proved. Numerical tests are presented that confirm the robustness and efficiency of the approach. Received: December 14, 2000 / Accepted: August 30, 2001 Published online: September 27, 2002 Key words. nonmonotone trust-region methods – sequential quadratic programming – penalty function – global convergence – equality constraints – local convergence – large-scale optimization Mathematics Subject Classification (2000): 65K05, 90C30

6.
This is a summary of the author’s PhD thesis, supervised by Yaroslav D. Sergeyev and defended on May 5, 2006, at the University of Rome “La Sapienza”. The thesis is written in English and is available from the author upon request. In this work, the global optimization problem of a multidimensional “black-box” function satisfying the Lipschitz condition over a hyperinterval with an unknown Lipschitz constant is considered. The objective function is assumed hard to evaluate. A new efficient diagonal scheme for constructing fast algorithms for solving this problem is examined and illustrated by developing several powerful global optimization methods. A deep theoretical study is performed which highlights the benefit of the approach introduced over traditionally used diagonal algorithms. Theoretical conclusions are confirmed by results of extensive numerical experiments.

7.
Current sequential quadratic programming (SQP) algorithms suffer from two problems: (i) to obtain a search direction, one or more quadratic programming subproblems must be solved at every iteration, which is computationally expensive and makes these methods unsuitable for large-scale problems; (ii) SQP algorithms require the associated quadratic programming subproblems to be solvable at every iteration, a condition that is difficult to guarantee. By using an ε-active set procedure with a special penalty function as the merit function, a new algorithm of sequential systems of linear equations for general nonlinear optimization problems with an arbitrary initial point is presented. The new algorithm only needs to solve three systems of linear equations with the same coefficient matrix per iteration, and it is globally convergent and locally superlinearly convergent. To some extent, the new algorithm overcomes the shortcomings of the SQP algorithms mentioned above. Project partly supported by the National Natural Science Foundation of China and the Tianyuan Foundation of China.
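The computational kernel such methods substitute for QP subproblems is a system of linear equations. A minimal sketch of one Newton/KKT linear system for an equality-constrained quadratic model follows (an illustrative toy problem, not the paper's three-system iteration):

```python
# One Newton/KKT step for: minimize x1^2 + x2^2 subject to x1 + x2 = 1.
# A single linear system yields the step d and multiplier estimate lmbda.
import numpy as np

H = np.array([[2.0, 0.0], [0.0, 2.0]])   # Hessian of the objective
A = np.array([[1.0, 1.0]])               # constraint Jacobian
x = np.array([0.0, 0.0])                 # current iterate
g = H @ x                                # gradient at x
c = A @ x - np.array([1.0])              # constraint residual

# KKT system:  [H  A^T] [d    ]   [-g]
#              [A   0 ] [lmbda] = [-c]
K = np.block([[H, A.T], [A, np.zeros((1, 1))]])
sol = np.linalg.solve(K, np.concatenate([-g, -c]))
d, lmbda = sol[:2], sol[2:]
x_new = x + d
print(x_new)   # → [0.5 0.5], the constrained minimizer
```

Solving a few such systems with a fixed coefficient matrix per iteration is cheaper than solving a full QP subproblem, which is the efficiency argument made in the abstract.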

8.
We introduce and study decompositions of finite sets as well as coverings of their convex hulls, and use these objects to develop various estimates of and formulas for the “hull-volume” of the sets (i.e., the volume of their convex hull). We apply our results to the convergence analysis of the “iterate-sets” associated with each iteration of a reduce-or-retreat optimization method (including pattern-search methods like Nelder–Mead as well as model-based methods).

9.
In this work we consider computing and continuing connecting orbits in parameter dependent dynamical systems. We give details of algorithms for computing connections between equilibria and periodic orbits, and between periodic orbits. The theoretical foundation for these techniques is given by the seminal work of Beyn in 1994, “On well-posed problems for connecting orbits in dynamical systems”, where a numerical technique is also proposed. Our algorithms consist of splitting the computation of the connection from that of the periodic orbit(s). To set up appropriate boundary conditions, we follow the algorithmic approach used by Demmel, Dieci, and Friedman, for the case of connecting orbits between equilibria, and we construct and exploit the smooth block Schur decomposition of the monodromy matrices associated to the periodic orbits. Numerical examples illustrate the performance of the algorithms. This revised version was published online in July 2006 with corrections to the Cover Date.

10.
We develop and test two novel computational approaches for predicting the mean linear response of a chaotic dynamical system to a small change in external forcing via the fluctuation–dissipation theorem. Unlike the earlier work in developing fluctuation–dissipation theorem-type computational strategies for chaotic nonlinear systems with forcing and dissipation, the new methods are based on the theory of Sinai–Ruelle–Bowen probability measures, which commonly describe the equilibrium state of such dynamical systems. The new methods take into account the fact that the dynamics of chaotic nonlinear forced-dissipative systems often reside on chaotic fractal attractors, where the classical quasi-Gaussian formula of the fluctuation–dissipation theorem often fails to produce satisfactory response prediction, especially in dynamical regimes with weak and moderate degrees of chaos. A simple new low-dimensional chaotic nonlinear forced-dissipative model is used to study the response of both linear and nonlinear functions to small external forcing in a range of dynamical regimes with an adjustable degree of chaos. We demonstrate that the two new methods are remarkably superior to the classical fluctuation–dissipation formula with quasi-Gaussian approximation in weakly and moderately chaotic dynamical regimes, for both linear and nonlinear response functions. One straightforward algorithm gives excellent results for short-time response while the other algorithm, based on systematic rational approximation, improves the intermediate and long time response predictions.

11.
In the present paper, we propose a new multipoint type global optimization model using a chaotic dynamic model and a synchronization phenomenon in nonlinear dynamic systems for a continuously differentiable optimization problem. We first improve the Discrete Gradient Chaos Model (DGCM), which drives each search point’s autonomous movement, based on theoretical analysis. We then derive a new coupling structure called PD type coupling in order to obtain stable synchronization of all search points with the chaotic dynamic model in a discrete time system. Finally, we propose a new multipoint type global optimization model, in which each search point moves autonomously by improved DGCM and their trajectories are synchronized to elite search points by the PD type coupling model. The proposed model properly achieves diversification and intensification, which are reported to be important strategies for global optimization in the Meta-heuristics research field. Through application to proper benchmark problems [Liang et al. Novel composition test functions for numerical global optimization. In: Proceedings of Swarm Intelligence Symposium, 2005 (SIS 2005), pp. 68–75 (2005); Liang et al. Nat. Comput. 5(1), 83–96, 2006] (in which the drawbacks of typical benchmark problems are improved) with 100 or 1000 variables, we confirm that the proposed model is more effective than other gradient-based methods.

12.
Multiplicative programming problems (MPPs) are global optimization problems known to be NP-hard. In this paper, we employ algorithms developed to compute the entire set of nondominated points of multi-objective linear programmes (MOLPs) to solve linear MPPs. First, we improve our own objective space cut and bound algorithm for convex MPPs in the special case of linear MPPs by only solving one linear programme in each iteration, instead of two as the previous version indicates. We call this algorithm, which is based on Benson’s outer approximation algorithm for MOLPs, the primal objective space algorithm. Then, based on the dual variant of Benson’s algorithm, we propose a dual objective space algorithm for solving linear MPPs. The dual algorithm also requires solving only one linear programme in each iteration. We prove the correctness of the dual algorithm and use computational experiments comparing our algorithms to a recent global optimization algorithm for linear MPPs from the literature as well as two general global optimization solvers to demonstrate the superiority of the new algorithms in terms of computation time. Thus, we demonstrate that the use of multi-objective optimization techniques can be beneficial to solve difficult single objective global optimization problems.

13.
Trust-region methods are globally convergent techniques widely used, for example, in connection with Newton’s method for unconstrained optimization. One of the most commonly used iterative approaches for solving trust-region subproblems is the Steihaug–Toint method, which is based on conjugate gradient iterations and seeks a solution on Krylov subspaces. This paper contains new theoretical results concerning properties of Lagrange multipliers obtained on these subspaces. AMS subject classification (2000): 65F20
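The Steihaug–Toint method itself is compact; the following is a standard truncated-CG sketch of it (the Lagrange-multiplier analysis of the paper is not reproduced here, and the test matrices are hypothetical):

```python
# Steihaug-Toint truncated CG for the trust-region subproblem:
#   minimize g.p + 0.5 p.H.p  subject to  ||p|| <= delta.
# CG runs until it converges, hits the boundary, or meets negative curvature.
import numpy as np

def to_boundary(p, d, delta):
    """Positive root tau of ||p + tau d|| = delta."""
    pd, dd, pp = p @ d, d @ d, p @ p
    return (-pd + np.sqrt(pd * pd + dd * (delta ** 2 - pp))) / dd

def steihaug_toint(H, g, delta, tol=1e-10):
    p = np.zeros_like(g)
    r = g.copy()                     # residual of the Newton system H p = -g
    d = -r
    if np.linalg.norm(r) < tol:
        return p
    while True:
        Hd = H @ d
        dHd = d @ Hd
        if dHd <= 0.0:               # negative curvature: go to the boundary
            return p + to_boundary(p, d, delta) * d
        alpha = (r @ r) / dHd
        if np.linalg.norm(p + alpha * d) >= delta:   # step leaves the region
            return p + to_boundary(p, d, delta) * d
        p = p + alpha * d
        r_new = r + alpha * Hd
        if np.linalg.norm(r_new) < tol:
            return p                 # interior solution of the Newton system
        d = -r_new + ((r_new @ r_new) / (r @ r)) * d
        r = r_new

H = np.array([[4.0, 1.0], [1.0, 3.0]])
g = np.array([1.0, 2.0])
p_int = steihaug_toint(H, g, delta=10.0)   # interior: full Newton step
p_bnd = steihaug_toint(H, g, delta=0.1)    # trust-region bound is active
```

With a large radius the method returns the unconstrained Newton step; with a small radius the returned step lies exactly on the trust-region boundary.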

14.
We present a unified convergence rate analysis of iterative methods for solving the variational inequality problem. Our results are based on certain error bounds; they subsume and extend the linear and sublinear rates of convergence established in several previous studies. We also derive a new error bound for γ-strictly monotone variational inequalities. The class of algorithms covered by our analysis is fairly broad. It includes some classical methods for variational inequalities, e.g., the extragradient, matrix splitting, and proximal point methods. For these methods, our analysis gives estimates not only for linear convergence (which had been studied extensively), but also sublinear, depending on the properties of the solution. In addition, our framework includes a number of algorithms to which previous studies are not applicable, such as the infeasible projection methods, a separation-projection method, (inexact) hybrid proximal point methods, and some splitting techniques. Finally, our analysis covers certain feasible descent methods of optimization, for which similar convergence rate estimates have been recently obtained by Luo [14]. Received: April 17, 2001 / Accepted: December 10, 2002 Published online: April 10, 2003 Research of the author is partially supported by CNPq Grant 200734/95–6, by PRONEX-Optimization, and by FAPERJ. Key Words. Variational inequality – error bound – rate of convergence Mathematics Subject Classification (2000): 90C30, 90C33, 65K05
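Of the classical methods covered by such analyses, the extragradient method is the easiest to sketch. Here it is applied to a box-constrained variational inequality with a hypothetical strongly monotone operator F(x) = x − a, whose solution is simply the projection of a onto the box:

```python
# Extragradient method for VI(F, C): find x* in C with F(x*).(x - x*) >= 0
# for all x in C. Each iteration takes a predictor step and a corrector step,
# both followed by projection onto the feasible set C (a box here).
import numpy as np

def project_box(x, lo, hi):
    return np.minimum(np.maximum(x, lo), hi)

def extragradient(F, x0, lo, hi, gamma=0.5, iters=200):
    x = x0.astype(float)
    for _ in range(iters):
        y = project_box(x - gamma * F(x), lo, hi)   # predictor step
        x = project_box(x - gamma * F(y), lo, hi)   # corrector step
    return x

a = np.array([2.0, 0.5])
F = lambda x: x - a          # strongly monotone; the VI solution is P_C(a)
lo, hi = np.zeros(2), np.ones(2)
x_star = extragradient(F, np.zeros(2), lo, hi)      # converges to (1.0, 0.5)
```

For monotone Lipschitz operators the step size gamma must stay below the reciprocal of the Lipschitz constant; the error-bound conditions discussed in the abstract are what turn such convergence into linear or sublinear rate estimates.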

15.
In this paper we face a classical global optimization problem—minimization of a multiextremal multidimensional Lipschitz function over a hyperinterval. We introduce two new diagonal global optimization algorithms unifying the power of the following three approaches: efficient univariate information global optimization methods, diagonal approach for generalizing univariate algorithms to the multidimensional case, and local tuning on the behaviour of the objective function (estimates of the local Lipschitz constants over different subregions) during the global search. Global convergence conditions of a new type are established for the diagonal information methods. The new algorithms demonstrate quite satisfactory performance in comparison with the diagonal methods using only global information about the Lipschitz constant.
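The univariate geometric methods underlying the diagonal approach can be illustrated by the classical Piyavskii–Shubert scheme, which assumes a known global Lipschitz constant L (the local-tuning methods of the paper estimate local constants instead; the test function and iteration budget below are hypothetical):

```python
# Piyavskii-Shubert minimization of a Lipschitz function on [a, b]:
# maintain a saw-tooth lower envelope from the evaluated points and always
# refine the subinterval whose envelope minimum is lowest.
import math

def piyavskii(f, a, b, L, iters=60):
    pts = sorted([(a, f(a)), (b, f(b))])
    for _ in range(iters):
        best_i, best_lb, best_x = None, float("inf"), None
        for i in range(len(pts) - 1):
            (x0, f0), (x1, f1) = pts[i], pts[i + 1]
            lb = 0.5 * (f0 + f1) - 0.5 * L * (x1 - x0)   # envelope minimum
            if lb < best_lb:
                best_x = 0.5 * (x0 + x1) + (f0 - f1) / (2.0 * L)
                best_i, best_lb = i, lb
        pts.insert(best_i + 1, (best_x, f(best_x)))      # evaluate there
    xm, fm = min(pts, key=lambda p: p[1])
    return xm, fm, best_lb       # incumbent point, value, global lower bound

f = lambda x: math.sin(x) + math.sin(3.0 * x)   # multiextremal test function
x, fx, lb = piyavskii(f, 0.0, 6.0, L=4.0)       # L bounds |f'| on [0, 6]
```

The gap between the incumbent value and the saw-tooth lower bound certifies how far the search can still be from the global minimum, which is the kind of information the diagonal algorithms extend to many dimensions.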

16.
When dealing with extremely hard global optimization problems, i.e. problems with a large number of variables and a huge number of local optima, heuristic procedures are the only possible choice. In this situation, lacking any possibility of guaranteeing global optimality for most problem instances, it is quite difficult to establish rules for discriminating among different algorithms. We think that in order to judge the quality of new global optimization methods, different criteria might be adopted, e.g.:
1.  efficiency – measured in terms of the computational effort necessary to obtain the putative global optimum
2.  robustness – measured in terms of “percentage of successes”, i.e. the number of times the algorithm, re-started with different seeds or starting points, is able to end up at the putative global optimum
3.  discovery capability – measured in terms of the possibility that an algorithm discovers, for the first time, a putative optimum for a given problem which is better than the best known up to now.
Of course the third criterion cannot be considered compulsory, as it might be the case that, for a given problem, the best known putative global optimum is indeed the global one, so that no algorithm will ever be able to discover a better one. In this paper we present a computational framework based on a population-based stochastic method in which different candidate solutions for a single problem are maintained in a population which evolves in such a way as to guarantee sufficient diversity among solutions. This diversity enforcement is obtained through the definition of a dissimilarity measure whose definition depends on the specific problem class. We show in the paper that, for some well known and particularly hard test classes, the proposed method satisfies the above criteria, in that it is both much more efficient and more robust than other published approaches. Moreover, for the very hard problem of determining the minimum energy conformation of a cluster of particles which interact through a short-range Morse potential, our approach was able to discover four new putative optima.
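A minimal sketch of the diversity-enforcement idea, with Euclidean distance as a stand-in dissimilarity measure and the Rastrigin function as a stand-in hard multiextremal problem (population size, mutation scale, and iteration budget are all hypothetical):

```python
# Population-based stochastic search with a diversity rule: a mutated
# candidate competes only with its *nearest* population member, so members
# occupying distant regions of the search space are not displaced.
import math, random

def rastrigin(x):
    return sum(xi * xi - 10.0 * math.cos(2.0 * math.pi * xi) + 10.0 for xi in x)

def diverse_search(f, dim=2, pop_size=10, iters=4000, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(pop_size)]
    fit = [f(p) for p in pop]
    for _ in range(iters):
        parent = pop[rng.randrange(pop_size)]
        cand = [xi + rng.gauss(0.0, 0.3) for xi in parent]
        fc = f(cand)
        # dissimilarity rule: similar solutions displace each other,
        # distant ones coexist, so the population stays spread out
        j = min(range(pop_size), key=lambda k: math.dist(cand, pop[k]))
        if fc < fit[j]:
            pop[j], fit[j] = cand, fc
    return pop, fit

pop, fit = diverse_search(rastrigin)
```

Because replacement is restricted to the nearest member, the population cannot collapse into a single basin, which is the intuition behind the dissimilarity-based diversity enforcement described in the abstract.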

17.
In this work, we present a new set-oriented numerical method for the numerical solution of multiobjective optimization problems. These methods are global in nature and allow one to approximate the entire set of (global) Pareto points. After proving convergence of an associated abstract subdivision procedure, we use this result as a basis for the development of three different algorithms. We also consider appropriate combinations of them in order to improve the total performance. Finally, we illustrate the efficiency of these techniques via academic examples plus a real technical application, namely the optimization of an active suspension system for cars. The authors thank Joachim Lückel for his suggestion to get into the interesting field of multiobjective optimization. Katrin Baptist as well as Frank Scharfeld helped the authors with fruitful discussions. This work was partly supported by the Deutsche Forschungsgemeinschaft within SFB 376 and SFB 614.
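The object such set-oriented methods approximate is the set of nondominated points. A brute-force Pareto filter on sampled objective vectors (not the subdivision procedure of the paper; the bi-objective example is hypothetical) makes the target concrete:

```python
# Pareto filter: keep the objective vectors not dominated by any other.
# For minimization, u dominates v if u <= v componentwise with at least
# one strict inequality.

def dominates(u, v):
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def pareto_filter(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]

# bi-objective example: f1(x) = x^2, f2(x) = (x - 2)^2 sampled on a grid;
# the Pareto set of this pair is exactly x in [0, 2]
xs = [i / 10.0 for i in range(-20, 41)]
objs = [(x * x, (x - 2.0) ** 2) for x in xs]
front = pareto_filter(objs)
```

The brute-force filter costs quadratic time in the number of samples; the subdivision algorithms in the paper instead refine boxes so that only regions near the Pareto set are ever sampled densely.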

18.
We explore in this paper certain rich geometric properties hidden behind quadratic 0–1 programming. Especially, we derive new lower bounding methods and variable fixation techniques for quadratic 0–1 optimization problems by investigating geometric features of the ellipse contour of a (perturbed) convex quadratic function. These findings further lead to some new optimality conditions for quadratic 0–1 programming. Integrating these novel solution schemes into a proposed solution algorithm of a branch-and-bound type, we obtain promising preliminary computational results.

19.
In this paper we present a new memory gradient method with trust region for unconstrained optimization problems. The method combines line search and trust region techniques to generate new iterates and therefore enjoys the advantages of both approaches. It makes full use of multi-step iterative information at each iteration and avoids the storage and computation of matrices associated with the Hessian of the objective function, so that it is suitable for large-scale optimization problems. We also design an implementable version of this method and analyze its global convergence under weak conditions. Because it exploits more information from previous iterative steps, this idea enables the design of fast convergent, effective, and robust algorithms. Numerical experiments show that the new method is effective, stable and robust in practical computation, compared with other similar methods.
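The one-step "memory" recurrence can be sketched as follows: the new direction mixes the current negative gradient with the previous search direction, and no Hessian is ever stored. With an exact line search on a quadratic test problem this recurrence reduces to a Fletcher–Reeves-type conjugate gradient iteration (shown only to illustrate the direction update, not the paper's trust-region mechanism; the test matrix is hypothetical):

```python
# Memory-gradient direction update d_k = -g_k + beta_k d_{k-1} on the
# quadratic f(x) = 0.5 x.A.x - b.x, using an exact line search.
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 2.0]])   # SPD test problem
b = np.array([1.0, 1.0])

def memory_gradient(A, b, x0, iters=10):
    x = np.asarray(x0, dtype=float)
    g = A @ x - b                         # gradient of the quadratic
    d = -g
    for _ in range(iters):
        if g @ g < 1e-18:
            break                         # gradient vanished: done
        t = -(g @ d) / (d @ A @ d)        # exact line search along d
        x = x + t * d
        g_new = A @ x - b
        # one-step "memory": reuse the previous direction with a
        # Fletcher-Reeves-type weight, instead of restarting from -g
        d = -g_new + ((g_new @ g_new) / (g @ g)) * d
        g = g_new
    return x

x = memory_gradient(A, b, np.zeros(2))    # converges to the solution of Ax = b
```

Only vectors from previous steps enter the update, which is why such methods avoid Hessian storage and scale to large problems.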

20.
There has been significant progress in the development of numerical methods for the determination of optimal trajectories for continuous dynamic systems, especially in the last 20 years. In the 1980s, the principal contribution was new methods for discretizing the continuous system and converting the optimization problem into a nonlinear programming problem. This has been a successful approach that has yielded optimal trajectories for very sophisticated problems. In the last 15–20 years, researchers have applied a qualitatively different approach, using evolutionary algorithms or metaheuristics, to solve similar parameter optimization problems. Evolutionary algorithms use the principle of “survival of the fittest” applied to a population of individuals representing candidate solutions for the optimal trajectories. Metaheuristics optimize by iteratively acting to improve candidate solutions, often using stochastic methods. In this paper, the advantages and disadvantages of these recently developed methods are described and an attempt is made to answer the question of what is now the best extant numerical solution method.
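A minimal "survival of the fittest" sketch: a (1+1) evolution strategy with 1/5th-rule-style step-size adaptation, applied to a toy discretized trajectory problem (a double integrator steered by piecewise-constant controls; the dynamics, weights, and ES parameters are all hypothetical illustrations, not any method from the paper):

```python
# (1+1)-ES: one parent, one mutated offspring, keep the fitter; the mutation
# step size grows on success and shrinks on failure (1/5th-rule style).
import random

def trajectory_cost(u, dt=0.1):
    """Cost of steering a double integrator to position 1 with zero final
    velocity, plus a small penalty on control effort."""
    pos, vel, effort = 0.0, 0.0, 0.0
    for ui in u:                      # piecewise-constant control parameters
        vel += ui * dt
        pos += vel * dt
        effort += ui * ui * dt
    return (pos - 1.0) ** 2 + vel ** 2 + 0.01 * effort

def one_plus_one_es(f, n, sigma=0.5, iters=3000, seed=7):
    rng = random.Random(seed)
    x = [0.0] * n
    fx = f(x)
    for _ in range(iters):
        y = [xi + sigma * rng.gauss(0.0, 1.0) for xi in x]
        fy = f(y)
        if fy < fx:                   # keep the fitter of parent and child
            x, fx = y, fy
            sigma *= 1.5              # success: enlarge the mutation step
        else:
            sigma *= 0.9              # failure: shrink it
    return x, fx

u, cost = one_plus_one_es(trajectory_cost, n=10)
```

Here the "individual" is the full vector of discretized control parameters, mirroring the discretize-then-optimize formulation described above; the method needs only cost evaluations, no gradients of the dynamics.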

