Similar Literature
20 similar documents found:
1.
Some algorithms for unconstrained differentiable optimization problems involve evaluating quantities related to high-order derivatives. The cost of these evaluations depends strongly on the technique used to obtain the derivatives and on characteristics of the objective function: its size, structure, and complexity. Functions with banded Hessians are the special case we study in this paper. Because of their partial separability, the cost of obtaining their high-order derivatives, computed efficiently by automatic differentiation, makes high-order Chebyshev methods more attractive for banded systems than for dense functions. These methods are appealingly efficient, as their convergence order can be improved without significantly increasing their algorithmic cost. This paper analyses the per-iteration complexity of high-order Chebyshev methods applied to sparse functions with banded Hessians. The main result can be summarized as follows: the per-iteration complexity of a high-order Chebyshev method is of the same order as that of evaluating the objective function itself. This theoretical analysis is verified by numerical illustrations.
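For reference, the sketch below shows the classical third-order Chebyshev iteration in its simplest one-dimensional form, applied to the stationarity condition f'(x) = 0. This is a hypothetical illustration with hand-coded derivatives; the paper treats the n-dimensional banded-Hessian case, with derivative tensors obtained by automatic differentiation.

```python
# One-dimensional sketch of the third-order Chebyshev iteration for f'(x) = 0.
# Hypothetical illustration: derivatives are supplied by hand rather than by
# automatic differentiation as in the paper.

def chebyshev_minimize(df, d2f, d3f, x, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        g, h, t = df(x), d2f(x), d3f(x)
        if abs(g) < tol:
            break
        # Newton step scaled by a third-order correction factor.
        x = x - (g / h) * (1.0 + 0.5 * g * t / h**2)
    return x

# Example: minimize f(x) = x**4 + x**2 - 3*x (strictly convex).
xmin = chebyshev_minimize(lambda x: 4*x**3 + 2*x - 3,
                          lambda x: 12*x**2 + 2,
                          lambda x: 24*x,
                          x=1.0)
print(xmin)
```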

2.
The simplicial homology global optimisation (SHGO) algorithm is a general-purpose global optimisation algorithm based on applications of simplicial integral homology and combinatorial topology. SHGO approximates the homology groups of a complex built on a hypersurface homeomorphic to a complex on the objective function. This provides both approximations of locally convex subdomains in the search space, through Sperner's lemma, and a useful visual tool for characterising and efficiently solving higher-dimensional black- and grey-box optimisation problems. The complex is built using sampling points within the feasible search space as vertices. The algorithm specialises in efficiently finding all the local minima of objective functions with expensive evaluations, making it especially suitable for applications such as energy-landscape exploration. SHGO was initially developed as an improvement on the topographical global optimisation (TGO) method. It is proven that SHGO always outperforms TGO in function evaluations if the objective function is Lipschitz smooth. In this paper SHGO is applied to non-convex problems with linear and box constraints, with bounds placed on the variables. Numerical experiments on linearly constrained test problems show that SHGO gives competitive results compared to TGO, the recently developed Lc-DISIMPL algorithm, and the PSwarm, LGO and DIRECT-L1 algorithms. Furthermore, SHGO is compared with the TGO, basinhopping (BH) and differential evolution (DE) global optimisation algorithms over a large selection of black-box problems with bounds placed on the variables from the SciPy benchmarking test suite. A Python implementation of the SHGO and TGO algorithms, published under an MIT license, is available at https://bitbucket.org/upiamcompthermo/shgo/.
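SHGO has since been included in SciPy as scipy.optimize.shgo (SciPy 1.2 and later); a minimal usage sketch on a standard multimodal test function with box constraints:

```python
import numpy as np
from scipy.optimize import shgo

# Eggholder: a standard multimodal test function on [-512, 512]^2.
def eggholder(x):
    return (-(x[1] + 47.0) * np.sin(np.sqrt(abs(0.5 * x[0] + x[1] + 47.0)))
            - x[0] * np.sin(np.sqrt(abs(x[0] - (x[1] + 47.0)))))

bounds = [(-512.0, 512.0), (-512.0, 512.0)]
result = shgo(eggholder, bounds, n=64, sampling_method='sobol')

print(result.x, result.fun)   # best minimiser found and its value
print(result.xl)              # all local minimisers located by SHGO
```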

3.
We present an efficient approach to solving resource allocation problems with a single resource, a convex separable objective function, a convex separable resource-usage constraint, and variables that are bounded below and above. Through a combination of function evaluations and median searches, information is obtained on whether or not the upper and lower bounds are binding. Once this information is available for all upper and lower bounds, it remains to determine the optimum of a smaller problem with unbounded variables. This can be done through a multiplier search procedure. The information gathered allows for alternative approaches to the multiplier search, which can reduce the complexity of this procedure.
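To make the multiplier-search step concrete, here is a minimal sketch for the quadratic special case (continuous quadratic knapsack) using plain bisection on the multiplier; the paper's approach additionally uses median searches to fix the binding bounds before this step. The data values are hypothetical.

```python
import numpy as np

# Quadratic special case: minimize sum(0.5*a_i*x_i**2 - b_i*x_i)
# subject to sum(x) = R and l <= x <= u.  The KKT conditions give
# x_i(lam) = clip((b_i - lam)/a_i, l_i, u_i), nonincreasing in lam,
# so the multiplier can be located by bisection.

def allocate(a, b, l, u, R, tol=1e-10):
    def x_of(lam):
        return np.clip((b - lam) / a, l, u)
    lo, hi = (b - a * u).min(), (b - a * l).max()   # bracket on the multiplier
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        if x_of(lam).sum() > R:
            lo = lam          # allocation too large: raise the multiplier
        else:
            hi = lam
    return x_of(0.5 * (lo + hi))

a = np.array([1.0, 2.0, 4.0])
b = np.array([3.0, 2.0, 5.0])
print(allocate(a, b, l=np.zeros(3), u=np.ones(3), R=2.0))
```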

4.
This paper presents a meta-algorithm for approximating the Pareto optimal set of costly black-box multiobjective optimization problems given a limited number of objective function evaluations. The key idea is to switch among different algorithms during the optimization search based on the predicted performance of each algorithm at that point in the search. Algorithm performance is modeled using a machine learning technique trained on the available information. The predicted best algorithm is then selected to run for a limited number of evaluations. The proposed approach is tested on several benchmark problems, and the results are compared against those obtained by running any one of the candidate algorithms alone.

5.
We study a colourful generalization of the linear programming feasibility problem, comparing the algorithms introduced by Bárány and Onn with new methods. This is a challenging problem on the borderline of tractability; its complexity is an open question. We perform benchmarking on generic and ill-conditioned problems, as well as on recently introduced highly structured problems. We show that some algorithms can cycle or converge slowly, and we provide extensive numerical experiments showing that others perform much better than complexity arguments predict. We conclude that the most efficient method is the multi-update algorithm we propose.

6.
We propose regularized cutting-plane methods for solving mixed-integer nonlinear programming problems with nonsmooth convex objective and constraint functions. The proposed methods iteratively search for trial points in certain localizer sets, constructed from linearizations of the functions involved. New trial points can be chosen in several ways; for instance, by minimizing a regularized cutting-plane model when function evaluations are costly. When dealing with hard-to-evaluate functions, the goal is to solve the optimization problem using as few function evaluations as possible. Numerical experiments comparing the proposed algorithms with classical methods in this area show the effectiveness of our approach.
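For orientation, the sketch below implements the classical (unregularized) Kelley cutting-plane loop for a nonsmooth convex function on a box; the methods of the paper add a regularization term to the cutting-plane model and handle mixed-integer variables and constraints, neither of which is reproduced here.

```python
import numpy as np
from scipy.optimize import linprog

# Kelley cutting-plane loop: each evaluated point contributes the cut
# t >= f(x_k) + g_k @ (x - x_k), i.e. g_k @ x - t <= g_k @ x_k - f(x_k);
# the next trial point minimizes t over the accumulated cuts and the box.

def kelley(f, subgrad, lb, ub, x0, iters=25):
    x = np.asarray(x0, dtype=float)
    n = len(x)
    A, rhs = [], []
    for _ in range(iters):
        fx, g = f(x), subgrad(x)
        A.append(np.append(g, -1.0))
        rhs.append(g @ x - fx)
        res = linprog(c=np.append(np.zeros(n), 1.0),
                      A_ub=np.array(A), b_ub=np.array(rhs),
                      bounds=[(lb[i], ub[i]) for i in range(n)] + [(None, None)])
        x = res.x[:n]
    return x, f(x)

# Example: nonsmooth convex f(x) = |x0 - 1| + |x1 + 0.5| on [-2, 2]^2.
f = lambda x: abs(x[0] - 1.0) + abs(x[1] + 0.5)
subgrad = lambda x: np.array([np.sign(x[0] - 1.0), np.sign(x[1] + 0.5)])
print(kelley(f, subgrad, lb=[-2, -2], ub=[2, 2], x0=[0.0, 0.0]))
```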

7.
This note proposes an alternative procedure for identifying violated subtour elimination constraints (SECs) in branch-and-cut algorithms for elementary shortest path problems. The procedure is also applicable to other routing problems on directed graphs, such as variants of travelling salesman or shortest Hamiltonian path problems. It is based on computing the strong components of the support graph, has a better worst-case time complexity than the standard maximum-flow-based separation of SECs, and is easier to implement.
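A minimal sketch of this separation idea, on hypothetical data: build the support graph of the current fractional solution and test each nontrivial strongly connected component as a candidate subtour elimination cut.

```python
import networkx as nx

# Support-graph separation: arcs with positive LP value form a digraph; each
# strongly connected component S with |S| >= 2 is tested against the SEC
# sum_{(i,j): i,j in S} x_ij <= |S| - 1.

def separate_secs(arcs, x, eps=1e-6):
    support = nx.DiGraph((i, j) for (i, j) in arcs if x[i, j] > eps)
    for comp in nx.strongly_connected_components(support):
        S = set(comp)
        if len(S) < 2:
            continue
        lhs = sum(x[i, j] for (i, j) in arcs if i in S and j in S)
        if lhs > len(S) - 1 + eps:
            yield S, lhs

# Hypothetical fractional solution containing a 2-cycle on {1, 2}.
x = {(0, 1): 1.0, (1, 2): 1.0, (2, 1): 1.0, (2, 3): 0.0}
for S, lhs in separate_secs(list(x), x):
    print("violated SEC on", S, "with lhs", lhs)
```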

8.
The purpose of this paper is to introduce and study a new class of combinatorial optimization problems in which the objective function is the algebraic sum of a bottleneck cost function (Min-Max) and a linear cost function (Min-Sum). General algorithms for solving such problems are described and general complexity results are derived. A number of example applications involving matchings, paths and cutsets, matroid bases, and matroid intersection problems are examined, and the general complexity results are specialized to each of them. The interest of these problems stems in particular from their strong relation to other important and difficult combinatorial problems, such as weighted edge coloring of a graph, optimum weighted covering with matroid bases, and optimum weighted partitioning with matroid intersections. Another important area of application of the algorithms given in the paper is bicriterion analysis involving a Min-Max criterion and a Min-Sum one.
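A natural scheme for such problems, sketched below on a shortest-path instance with hypothetical data, is a thresholding loop: for each candidate bottleneck value c, solve the Min-Sum subproblem restricted to elements of bottleneck cost at most c, and keep the best combined value. This is an illustration of the general idea, not the paper's specific algorithms.

```python
import networkx as nx

# Threshold scheme for minimizing (bottleneck cost) + (linear cost) of a path:
# the optimum is attained at some threshold c equal to an element's bottleneck
# cost, so enumerating thresholds and solving Min-Sum subproblems suffices.

def minmax_plus_minsum_path(edges, s, t):
    """edges: list of (u, v, linear_cost, bottleneck_cost)."""
    best = None
    for c in sorted({b for (_, _, _, b) in edges}):
        G = nx.DiGraph()
        G.add_weighted_edges_from((u, v, w) for (u, v, w, b) in edges if b <= c)
        if G.has_node(s) and G.has_node(t) and nx.has_path(G, s, t):
            val = c + nx.shortest_path_length(G, s, t, weight='weight')
            best = val if best is None else min(best, val)
    return best

edges = [(0, 1, 1.0, 5.0), (1, 2, 1.0, 5.0),   # cheap path, high bottleneck
         (0, 2, 4.0, 1.0)]                     # costly path, low bottleneck
print(minmax_plus_minsum_path(edges, 0, 2))    # min(5 + 2, 1 + 4) = 5
```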

9.
Solving multi-objective problems requires the evaluation of two or more conflicting objective functions, which often demands substantial computational power. This demand increases rapidly when estimating objective function values for dynamic, stochastic problems, since each evaluation requires a number of observations, and there may be many evaluations. Computer-simulation applications of real-world optimisation often suffer from this phenomenon. Evolutionary algorithms are often applied to multi-objective problems. In this article, the cross-entropy method is proposed as an alternative, since it has been proven to converge quickly on single-objective optimisation problems. We adapted the basic cross-entropy method to multi-objective optimisation and applied the proposed algorithm to known test problems. This was followed by an application to a dynamic, stochastic problem where a computer simulation model provides the objective function set. The results show that acceptable results can be obtained with relatively few evaluations.
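For context, a minimal sketch of the basic single-objective cross-entropy method with Gaussian sampling is shown below; the article's multi-objective adaptation builds on this loop but is not reproduced here.

```python
import numpy as np

# Basic cross-entropy method: repeatedly sample a Gaussian population, keep
# the elite fraction, and refit the sampling distribution to the elites.

def cross_entropy(f, mu, sigma, n_samples=100, n_elite=10, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    for _ in range(iters):
        X = rng.normal(mu, sigma, size=(n_samples, len(mu)))
        elite = X[np.argsort([f(x) for x in X])[:n_elite]]
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-12
    return mu

f = lambda x: (x[0] - 1.0)**2 + (x[1] + 2.0)**2
print(cross_entropy(f, mu=[0.0, 0.0], sigma=[3.0, 3.0]))   # ~ [1, -2]
```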

10.
In this paper, two nonmonotone Levenberg–Marquardt algorithms for unconstrained nonlinear least-squares problems with zero or small residual are presented. These algorithms allow the sequence of objective function values to be nonmonotone, which accelerates the iteration progress, especially when the objective function is ill-conditioned. Some global convergence properties of the proposed algorithms are proved under mild conditions that do not require positive definiteness of the approximate Hessian \(J(x)^{T}J(x)\). Stronger global convergence properties and the local superlinear convergence of the first algorithm are also proved. Finally, a set of numerical results is reported showing that the proposed algorithms are promising and superior to the monotone Levenberg–Marquardt algorithm in terms of the numbers of gradient and function evaluations.
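To illustrate the nonmonotone idea, the sketch below accepts a Levenberg–Marquardt trial step whenever it improves on the maximum of the last M objective values, rather than on the most recent value alone; this is a generic illustration, not the paper's exact update rules.

```python
import numpy as np

# Nonmonotone LM: solve (J^T J + mu*I) p = -J^T r for the trial step and
# accept it if the new objective beats the worst of the last M accepted
# values (a standard nonmonotone watchdog test).

def nonmonotone_lm(r, J, x, mu=1.0, M=5, iters=200, tol=1e-10):
    x = np.asarray(x, dtype=float)
    hist = [0.5 * np.linalg.norm(r(x))**2]
    for _ in range(iters):
        rx, Jx = r(x), J(x)
        g = Jx.T @ rx
        if np.linalg.norm(g) < tol:
            break
        p = np.linalg.solve(Jx.T @ Jx + mu * np.eye(len(x)), -g)
        f_new = 0.5 * np.linalg.norm(r(x + p))**2
        if f_new < max(hist[-M:]):       # nonmonotone acceptance test
            x = x + p
            mu = max(0.5 * mu, 1e-12)
            hist.append(f_new)
        else:
            mu *= 4.0                    # rejected: increase damping
    return x

# Zero-residual example: r(x) = (10*(x1 - x0**2), 1 - x0), minimum at (1, 1).
r = lambda x: np.array([10.0 * (x[1] - x[0]**2), 1.0 - x[0]])
J = lambda x: np.array([[-20.0 * x[0], 10.0], [-1.0, 0.0]])
print(nonmonotone_lm(r, J, x=[-1.2, 1.0]))
```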

11.
Rollout algorithms are innovative methods, recently proposed by Bertsekas et al. [3], for solving NP-hard combinatorial optimization problems. Their main advantage is their capability to magnify the effectiveness of any given heuristic algorithm. However, one of the main limitations of rollout algorithms on large-scale problems is their computational complexity. Versions of rollout algorithms aimed at reducing the computational complexity in sequential environments were proposed in our previous work [9]. In this paper, we show that a further reduction can be accomplished with parallel technologies. Indeed, rollout algorithms have very appealing characteristics that make them suitable for efficient and effective implementations in parallel environments, which extends their range of practical applications. We propose two strategies for parallelizing rollout algorithms and analyse their performance under a shared-memory paradigm. The computational experiments were carried out on an SGI Origin 2000 with 8 processors, on two classical combinatorial optimization problems. The numerical results show that a good reduction of the execution time can be obtained by exploiting parallel computing systems.
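A minimal (sequential) sketch of the rollout idea on a TSP instance with hypothetical data: each feasible next city is scored by the tour length obtained when a base heuristic (here, nearest neighbour) completes the tour from it, and the best-scoring move is committed.

```python
# Rollout over a nearest-neighbour base heuristic for the TSP. The rollout
# tour is at least as good as running the base heuristic from the start.

def nn_complete(dist, cur, todo):
    """Base heuristic: finish the tour greedily, then return to city 0."""
    length, todo = 0.0, set(todo)
    while todo:
        nxt = min(todo, key=lambda j: dist[cur][j])
        length += dist[cur][nxt]
        cur = nxt
        todo.discard(nxt)
    return length + dist[cur][0]

def rollout_tsp(dist):
    n = len(dist)
    tour = [0]
    while len(tour) < n:
        rest = set(range(n)) - set(tour)
        # Score of moving to j = arc cost + heuristic completion cost from j.
        best = min(rest, key=lambda j: dist[tour[-1]][j]
                                       + nn_complete(dist, j, rest - {j}))
        tour.append(best)
    length = sum(dist[tour[i]][tour[i + 1]] for i in range(n - 1))
    return tour, length + dist[tour[-1]][0]

dist = [[0, 2, 9, 10],
        [1, 0, 6, 4],
        [15, 7, 0, 8],
        [6, 3, 12, 0]]
print(rollout_tsp(dist))
```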

12.
Global Minimization Algorithms for Hölder Functions
This paper deals with the one-dimensional global optimization problem where the objective function satisfies a Hölder condition over a closed interval. A direct extension of the popular Piyavskii method, proposed for Lipschitz functions, to Hölder optimization requires an a priori estimate of the Hölder constant and the solution of an equation of degree N at each iteration. In this paper a new scheme is introduced. Three algorithms are proposed for solving one-dimensional Hölder global optimization problems, all of which work without solving equations of degree N. The case, which arises very often in applications, where a Hölder constant is not given a priori is also considered. It is shown that local information about the objective function used inside the global procedure can accelerate the search significantly. Numerical experiments show quite promising performance of the new algorithms.
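For reference, Piyavskii's algorithm in the Lipschitz base case (Hölder exponent 1) is sketched below; there the next trial point has a closed form, whereas the general Hölder case leads to the degree-N equations that the paper's algorithms avoid.

```python
import math

# Piyavskii's algorithm for a Lipschitz function on [a, b]: maintain the
# saw-tooth lower bound max_i f(x_i) - L*|x - x_i| and evaluate f at the
# point where this bound is lowest; in the Lipschitz case that point lies
# at the intersection of two adjacent cones and has a closed form.

def piyavskii(f, a, b, L, n_evals=40):
    xs, fs = [a, b], [f(a), f(b)]
    for _ in range(n_evals - 2):
        best_i, best_lb, best_t = 0, math.inf, None
        for i in range(len(xs) - 1):
            t = 0.5 * (xs[i] + xs[i + 1]) + (fs[i] - fs[i + 1]) / (2.0 * L)
            lb = 0.5 * (fs[i] + fs[i + 1]) - 0.5 * L * (xs[i + 1] - xs[i])
            if lb < best_lb:
                best_i, best_lb, best_t = i, lb, t
        xs.insert(best_i + 1, best_t)       # refine the most promising interval
        fs.insert(best_i + 1, f(best_t))
    k = min(range(len(xs)), key=fs.__getitem__)
    return xs[k], fs[k]

# Classic test function; its Lipschitz constant is bounded by 1 + 10/3.
f = lambda x: math.sin(x) + math.sin(10.0 * x / 3.0)
print(piyavskii(f, 2.7, 7.5, L=4.5))
```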

13.
This paper addresses the solution of bound-constrained optimization problems using algorithms that require only the availability of objective function values and no derivative information. We refer to these as derivative-free algorithms. Fueled by a growing number of applications in science and engineering, the development of derivative-free optimization algorithms has long been studied and has found renewed interest in recent years. Along with many derivative-free algorithms, many software implementations have appeared. The paper presents a review of derivative-free algorithms, followed by a systematic comparison of 22 implementations on a test set of 502 problems. The test bed includes convex and nonconvex problems, and smooth as well as nonsmooth problems. The algorithms were tested under the same conditions and ranked under several criteria, including their ability to find near-global solutions for nonconvex problems, to improve a given starting point, and to refine a near-optimal solution. A total of 112,448 problem instances were solved. We find that the ability of all these solvers to obtain good solutions diminishes with increasing problem size. For the problems used in this study, TOMLAB/MULTIMIN, TOMLAB/GLCCLUSTER, MCS and TOMLAB/LGO are better, on average, than the other derivative-free solvers in terms of solution quality within 2,500 function evaluations. These global solvers outperform local solvers even on convex problems. Finally, TOMLAB/OQNLP, NEWUOA, and TOMLAB/MULTIMIN show superior performance in refining a near-optimal solution.

14.
Greedy algorithms which use only function evaluations are applied to convex optimization in a general Banach space \(X\). Along with algorithms that use exact evaluations, algorithms with approximate evaluations are treated. A priori upper bounds for the convergence rate of the proposed algorithms are given. These bounds depend on the smoothness of the objective function and the sparsity or compressibility (with respect to a given dictionary) of a point in \(X\) where the minimum is attained.

15.
Quantum algorithms and complexity have recently been studied not only for discrete but also for some numerical problems. Most attention so far has been paid to the integration and approximation problems, for which quantum computers have been shown, in many important cases, to offer a speed-up over deterministic and randomized algorithms on a classical computer. In this paper, we deal with the randomized and quantum complexity of initial-value problems. For this nonlinear problem, we show that both randomized and quantum algorithms yield a speed-up over deterministic algorithms. Upper bounds on the complexity in the randomized and quantum settings are established by constructing algorithms with a suitable cost, where the construction is based on integral information. Lower bounds follow from the respective bounds for the integration problem.

16.
The computation of Brouwer fixed points is a central tool in economic modeling. Although there have been several algorithms for computing a fixed point of a Brouwer map, starting with Scarf's algorithm of 1965, the question of worst-case complexity was not addressed. It has been conjectured that Scarf's algorithm has typical behavior that is polynomial in the dimension. Here we show that any algorithm for computing a Brouwer fixed point of a function based on function evaluations (a class that includes all known general-purpose algorithms) must in the worst case perform a number of function evaluations that is exponential in both the number of digits of accuracy and the dimension. Our lower bounds are very close to the known upper bounds.

17.
Evolutionary multi-objective optimization algorithms aim at finding an approximation of the Pareto set. For hard-to-solve problems with many conflicting objectives, the number of function evaluations required to represent the Pareto front can be large and time-consuming. Parallel computing can reduce the wall-clock time of such algorithms. Previous studies tackled the parallelization of particular evolutionary algorithms. In this research, we focus on improving one of the most time-consuming procedures, non-dominated sorting, which is used in state-of-the-art multi-objective genetic algorithms. Three parallel versions of the non-dominated sorting procedure are developed: (1) a multicore version (based on Pthreads); (2) a GPU version (based on the CUDA interface); and (3) a hybrid version (based on Pthreads and CUDA). The user can select the option that computes the non-dominated sorting procedure most efficiently on the available hardware. Results show that GPU computing provides a substantial performance improvement. The hybrid approach performs best when a good load balance is established among the cores and the GPU.
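For reference, a minimal sequential sketch of the fast non-dominated sorting procedure (as used in NSGA-II) is shown below; the paper parallelizes this step with Pthreads and CUDA.

```python
# Fast non-dominated sorting (minimization): each point records which points
# it dominates and how many points dominate it; fronts are peeled off in
# order of domination count.

def dominates(p, q):
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def non_dominated_sort(points):
    n = len(points)
    dominated = [[] for _ in range(n)]   # dominated[i]: indices i dominates
    counts = [0] * n                     # counts[i]: how many points dominate i
    for i in range(n):
        for j in range(n):
            if i != j and dominates(points[i], points[j]):
                dominated[i].append(j)
                counts[j] += 1
    fronts, current = [], [i for i in range(n) if counts[i] == 0]
    while current:
        fronts.append(current)
        nxt = []
        for i in current:
            for j in dominated[i]:
                counts[j] -= 1
                if counts[j] == 0:
                    nxt.append(j)
        current = nxt
    return fronts

pts = [(1, 4), (2, 2), (4, 1), (3, 3), (4, 4)]
print(non_dominated_sort(pts))   # [[0, 1, 2], [3], [4]]
```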

18.
We study the complexity of approximating the smallest eigenvalue of -Δ+q with Dirichlet boundary conditions on the d-dimensional unit cube. Here Δ is the Laplacian, and the function q is non-negative and has continuous first-order partial derivatives. We consider deterministic and randomized classical algorithms, as well as quantum algorithms using quantum queries of two types: bit queries and power queries. We seek algorithms that solve the problem with accuracy \(\varepsilon\). We exhibit lower and upper bounds on the problem complexity. The upper bounds follow from the cost of particular algorithms. The classical deterministic algorithm is optimal, where optimality is understood modulo constant factors that depend on d. The randomized algorithm uses an optimal number of function evaluations of q when d≤2. The classical algorithms have cost exponential in d, since they need to solve an eigenvalue problem involving a matrix whose size is exponential in d. We show that the cost of quantum algorithms is not exponential in d, regardless of the type of queries they use. Power queries enjoy a clear advantage over bit queries and lead to an optimal-complexity algorithm.
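To make the classical eigenvalue computation concrete, here is a minimal sketch for d = 1: discretize -u'' + q u on (0, 1) with Dirichlet boundary conditions and compute the smallest eigenvalue of the resulting tridiagonal matrix; for general d the matrix size grows like m^d, which is the exponential cost mentioned above. The potential q is a hypothetical example.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# Finite-difference discretization of -u'' + q*u = lambda*u on (0, 1) with
# u(0) = u(1) = 0, on m interior grid points.
m = 400
h = 1.0 / (m + 1)
x = np.linspace(h, 1.0 - h, m)
q = x * (1.0 - x)                        # smooth, non-negative potential
diag = 2.0 / h**2 + q                    # diagonal of the discretized -Delta + q
off = -np.ones(m - 1) / h**2             # off-diagonals

# Smallest eigenvalue only (index range (0, 0)).
lam = eigh_tridiagonal(diag, off, select='i', select_range=(0, 0),
                       eigvals_only=True)
print(lam[0])                            # ~ pi**2 plus a small shift from q
```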

19.
In this paper, a high-order feasible interior-point algorithm for a class of nonmonotone (P-matrix) linear complementarity problems, based on large neighborhoods of the central path, is presented, and its iteration complexity is discussed. These algorithms are implicitly associated with a large neighborhood whose size may depend on the dimension of the problem, and their complexity bound depends on the size of the neighborhood. It is well known that the complexity of large-step algorithms is greater than that of short-step ones. By using high-order power series (hence the name high-order algorithms), the iteration complexity can be reduced. We show that the upper complexity bound for our high-order algorithms equals that for short-step algorithms.

20.
We outline a relatively new research agenda aiming at building a new approximation paradigm by matching two distinct domains: polynomial approximation, and the exact solution of NP-hard problems by algorithms with guaranteed, non-trivial upper complexity bounds. We show how one can design approximation algorithms achieving ratios that are “forbidden” in polynomial time (unless a very unlikely complexity conjecture holds), with worst-case complexity much lower than that of an exact computation.
