Similar literature
20 similar documents found (search time: 31 ms)
1.
In this paper we face a classical global optimization problem: minimization of a multiextremal multidimensional Lipschitz function over a hyperinterval. We introduce two new diagonal global optimization algorithms unifying the power of three approaches: efficient univariate information global optimization methods, the diagonal approach for generalizing univariate algorithms to the multidimensional case, and local tuning on the behaviour of the objective function (estimates of the local Lipschitz constants over different subregions) during the global search. Global convergence conditions of a new type are established for the diagonal information methods. The new algorithms demonstrate quite satisfactory performance in comparison with diagonal methods that use only global information about the Lipschitz constant.

2.
In this article we develop a global optimization algorithm for quasiconvex programming where the objective function is a Lipschitz function which may have “flat parts”. We adapt the Extended Cutting Angle method to quasiconvex functions, which significantly reduces the number of iterations and objective function evaluations, and consequently the total computing time. Applications of such an algorithm to mathematical programming problems in which the objective function is derived from economic systems and location problems are described. Computational results are presented.

3.
In this paper, global optimization (GO) Lipschitz problems are considered where the multi-dimensional multiextremal objective function is determined over a hyperinterval. An efficient one-dimensional GO method using local tuning on the behavior of the objective function is generalized to the multi-dimensional case by the diagonal approach using two partition strategies. Global convergence conditions are established for the obtained diagonal geometric methods. Results of a wide numerical comparison show a strong acceleration reached by the new methods working with estimates of the local Lipschitz constants over different subregions of the search domain in comparison with the traditional approach.

4.
In this paper, we suggest a penalty method to transform a combinatorial optimization problem with linear constraints into a global optimization problem with linear constraints. The paper also addresses pump operation optimization in a water system, a topic of practical importance. A model based on a simplified flow-volume balance is formulated to determine the optimal ON/OFF pump operation settings, and the resulting problem is solved with the reduced gradient method. As the case study shows, this global solution approach offers benefits for practical application to a real system.
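As a rough sketch of the penalty idea described above (an illustration only, not the authors' model): binary ON/OFF pump decisions can be relaxed to the interval [0, 1] and pushed back towards 0/1 values by a penalty term such as ρ·Σ xᵢ(1−xᵢ), which vanishes exactly at binary schedules. All names and data below (costs, flow rates, demand) are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative only: relax binary pump decisions x_i in {0, 1} to [0, 1]
# and add the penalty rho * sum x_i * (1 - x_i), which is zero only at
# binary points, so a large rho drives the solution towards an ON/OFF schedule.
cost = np.array([3.0, 2.0, 4.0])     # hypothetical pumping costs per pump
rates = np.array([2.0, 3.0, 4.0])    # hypothetical pump flow rates
demand = 5.0                         # hypothetical flow-balance requirement
rho = 50.0                           # penalty weight

def penalized_objective(x):
    return cost @ x + rho * np.sum(x * (1.0 - x))

constraints = [{"type": "ineq", "fun": lambda x: rates @ x - demand}]
res = minimize(penalized_objective, x0=np.full(3, 0.4),
               bounds=[(0.0, 1.0)] * 3, constraints=constraints)
print(np.round(res.x, 3))  # a (locally optimal) near-binary schedule
```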

5.
A characterization of weakly efficient, efficient and properly efficient solutions of multiobjective optimization problems is given in terms of a scalar optimization problem by using a special “distance” function. The concept of well-posedness for this special scalar problem is then linked with the properly efficient solutions of the multiobjective problem.

6.
Advanced Genetic Programming Based Machine Learning
A Genetic Programming based approach for solving classification problems is presented in this paper. Classification is understood as the act of placing an object into a set of categories, based on the object's properties; classification algorithms are designed to learn a function which maps a vector of object features into one of several classes. This is done by analyzing a set of input-output examples (“training samples”) of the function. Here we present a method based on the theory of Genetic Algorithms and Genetic Programming that interprets classification problems as optimization problems: each presented instance of the classification problem is interpreted as an instance of an optimization problem, and a solution is found by a heuristic optimization algorithm. The major new aspects presented in this paper are advanced algorithmic concepts as well as suitable genetic operators for this problem class (mainly the creation of new hypotheses by merging already existing ones, and their detailed evaluation). The experimental part of the paper documents the results produced using new hybrid variants of Genetic Algorithms, along with the investigated parameter settings. Graphical analysis is done using a novel multiclass classifier analysis concept based on the theory of Receiver Operating Characteristic curves. The work described in this paper was done within the Translational Research Project L282 “GP-Based Techniques for the Design of Virtual Sensors” sponsored by the Austrian Science Fund (FWF).

7.
In this paper new global optimization algorithms are proposed for solving problems where the objective function is univariate and has Lipschitzean first derivatives. To solve this problem, smooth auxiliary functions, which are adaptively improved during the course of the search, are constructed. Three new algorithms are introduced: the first uses the exact a priori known Lipschitz constant for the derivatives; the second, for the case when this constant is unknown, estimates it during the course of the search; and the third uses neither the exact global Lipschitz constant nor its estimate, but instead adaptively estimates the local Lipschitz constants in different sectors of the search region during the course of optimization. Convergence conditions of the methods are investigated from a general viewpoint and some numerical results are also given.

8.
9.
This paper presents a computationally effective heuristic method which produces good-quality solutions for large-scale set covering problems with thousands of constraints and about one million variables. The need to solve such large-scale problems arises from a crew scheduling problem of mass transit agencies where the number of work shifts required has to be minimized. This problem may be formulated as a large-scale non-unicost set covering problem whose rows are trips to be performed while columns stand for round trips. The proposed method is mainly based on Lagrangian relaxation and subgradient optimization. After the number of rows and columns is reduced by logical tests, “greedy” heuristic algorithms provide upper and lower bounds which are continuously improved to produce good-quality solutions. Computational results, regarding randomly generated problems and real-life problems concerning crew scheduling at the Italian Railways Company, show that good-quality solutions can be obtained at an acceptable computational cost. This work was supported by the project “Progetto Finalizzato Transporti 2” of the National Research Council of Italy (C.N.R.), contract No. 94.01436PF74, and by “Ferrovie dello Stato S.p.A.”
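The Lagrangian relaxation and subgradient machinery is not reproduced here, but the kind of “greedy” covering heuristic mentioned above can be sketched as follows; the cost-per-newly-covered-row rule is a standard choice rather than necessarily the authors' exact variant, and the instance data are made up.

```python
def greedy_set_cover(costs, cover):
    """Standard greedy heuristic for the (non-unicost) set covering problem.

    costs[j] -- cost of column j (e.g. a candidate round trip)
    cover[j] -- set of rows (trips) covered by column j
    Returns a list of selected columns covering all rows.
    """
    uncovered = set().union(*cover)
    selected = []
    while uncovered:
        # Pick the column with the best cost per newly covered row.
        j = min((j for j in range(len(costs)) if cover[j] & uncovered),
                key=lambda j: costs[j] / len(cover[j] & uncovered))
        selected.append(j)
        uncovered -= cover[j]
    return selected

# Tiny made-up instance: 5 rows (trips), 4 candidate columns (round trips).
costs = [3.0, 2.0, 4.0, 1.0]
cover = [{0, 1, 2}, {1, 3}, {2, 3, 4}, {4}]
print(greedy_set_cover(costs, cover))   # -> [0, 3, 1], a feasible cover
```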

10.
Global Minimization Algorithms for Hölder Functions
This paper deals with the one-dimensional global optimization problem where the objective function satisfies a Hölder condition over a closed interval. A direct extension of the popular Piyavskii method, proposed for Lipschitz functions, to Hölder optimization requires an a priori estimate of the Hölder constant and the solution of an equation of degree N at each iteration. In this paper a new scheme is introduced. Three algorithms are proposed for solving one-dimensional Hölder global optimization problems. All of them work without solving equations of degree N. The case (very often arising in applications) when a Hölder constant is not given a priori is considered. It is shown that local information about the objective function used inside the global procedure can accelerate the search significantly. Numerical experiments show quite promising performance of the new algorithms.
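For background, the classical Piyavskii saw-tooth scheme that this entry extends can be sketched for the Lipschitz case (Hölder exponent 1); here the constant L is assumed known a priori, which is exactly the assumption the algorithms of the entry are designed to relax, and the test function is an arbitrary example.

```python
import numpy as np

def piyavskii(f, a, b, L, n_iter=50):
    """Classical Piyavskii method: minimize f on [a, b], f Lipschitz with constant L.

    The saw-tooth lower envelope max_i (f(x_i) - L*|x - x_i|) attains its minimum
    over each subinterval at the intersection of two neighbouring "teeth";
    the next trial point is the global minimizer of that envelope.
    """
    xs, fs = [a, b], [f(a), f(b)]
    for _ in range(n_iter):
        order = np.argsort(xs)
        xs = [xs[i] for i in order]
        fs = [fs[i] for i in order]
        best_val, best_point = np.inf, None
        for i in range(len(xs) - 1):
            x_l, x_r, f_l, f_r = xs[i], xs[i + 1], fs[i], fs[i + 1]
            point = 0.5 * (x_l + x_r) + (f_l - f_r) / (2.0 * L)  # teeth intersection
            val = 0.5 * (f_l + f_r) - 0.5 * L * (x_r - x_l)      # envelope value there
            if val < best_val:
                best_val, best_point = val, point
        xs.append(best_point)
        fs.append(f(best_point))
    k = int(np.argmin(fs))
    return xs[k], fs[k]

# Example: a multiextremal test function on [-1, 2]; L = 4 overestimates max|f'| = 3.5.
x_star, f_star = piyavskii(lambda x: np.sin(3 * x) + 0.5 * x, -1.0, 2.0, L=4.0)
print(round(x_star, 4), round(f_star, 4))   # approximately (-0.579, -1.276)
```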

11.
Acceptable moves for the “worthwhile-to-move” incremental principle are such that “advantages-to-move” are higher than some fraction of “costs-to-move”. When combined with optimization, this principle gives rise to adaptive local search proximal algorithms. Convergence results are given in two distinctive cases, namely low local costs-to-move and high local costs-to-move. In the latter case, one obtains a dynamic cognitive approach to Ekeland's ϵ-variational principle. Introduction of costs-to-move in the algorithms yields robustness and stability properties.

12.
We report some experience with optimization methods applied to an inverse light scattering problem for spherical, homogeneous particles. Such particles can be identified from experimental data using a least squares global optimization method. However, if there is significant noise in the data, the “best” solution may not correspond well to the “actual” particle. We suggest a way in which the original least squares solution may be improved by using a constrained optimization calculation which considers the position of peaks in the data. This approach is applied first to multi-angle data with varying amounts of artificially introduced noise and then to examples of single-particle experimental data patterns characterized by high noise levels.

13.
In this paper, we study the application of non-monotone derivative-free optimization algorithms to wireless local area network (WLAN) planning, which can be modeled as an unconstrained minimization problem. We wish to determine the access point (AP) positions that maximize coverage in order to provide connectivity to static and mobile users. As the objective function of the optimization model is not everywhere differentiable, previous research has discarded gradient methods and employed heuristics such as neighborhood search (NS) and simulated annealing (SA). In this paper, we show that the model fulfills the conditions required by recently proposed non-monotone derivative-free (DF) algorithms. Unlike SA, DF has guaranteed convergence. The numerical tests reveal that a tailored DF implementation (termed “zone search”) outperforms NS and SA. This work is a collaboration between U. of Vigo, Spain, and USB, Venezuela.
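For orientation, a generic derivative-free poll step of the compass-search type is sketched below; this is not the non-monotone method with guaranteed convergence, nor the “zone search” implementation studied in the paper, and the toy coverage objective and all data are purely illustrative.

```python
import numpy as np

def compass_search(f, x0, step=1.0, tol=1e-6, max_evals=20000):
    """Generic (monotone) compass search: poll the 2n coordinate directions
    and halve the step when no poll point improves the incumbent."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    evals = 1
    while step > tol and evals < max_evals:
        improved = False
        for d in range(x.size):
            for s in (+1.0, -1.0):
                trial = x.copy()
                trial[d] += s * step
                f_trial = f(trial)
                evals += 1
                if f_trial < fx:
                    x, fx, improved = trial, f_trial, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5
    return x, fx

# Toy stand-in for a coverage-type objective: place one AP to minimize the
# total distance to two user hot spots (made-up data).
users = np.array([[0.0, 0.0], [4.0, 2.0]])
objective = lambda p: sum(np.linalg.norm(p - u) for u in users)
ap, value = compass_search(objective, [10.0, 10.0])
print(np.round(ap, 3), round(value, 3))   # a point near the segment joining the users
```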

14.
The problem of optimally designing a trajectory for a space mission is considered in this paper. Actual mission design is a complex, multi-disciplinary and multi-objective activity with relevant economic implications. In this paper we consider some simplified models proposed by the European Space Agency as test problems for global optimization (the GTOP database). We show that many trajectory optimization problems can be quite efficiently solved by means of relatively simple global optimization techniques relying on standard methods for local optimization. Our approach has been able to find trajectories which in many cases outperform those already known. We also conjecture that this problem displays a “funnel structure” similar, in some sense, to that of molecular optimization problems.

15.
We study distributed algorithms for solving global optimization problems in which the objective function is the sum of local objective functions of agents and the constraint set is given by the intersection of local constraint sets of agents. We assume that each agent knows only his own local objective function and constraint set, and exchanges information with the other agents over a randomly varying network topology to update his information state. We assume a state-dependent communication model over this topology: communication is Markovian with respect to the states of the agents, and the probability with which the links are available depends on the states of the agents. We study a projected multi-agent subgradient algorithm under state-dependent communication. The state-dependence of the communication introduces significant challenges and couples the study of information exchange with the analysis of subgradient steps and projection errors. We first show that the multi-agent subgradient algorithm used with a constant stepsize may cause the agent estimates to diverge with probability one. Under some assumptions on the stepsize sequence, we provide convergence rate bounds on a “disagreement metric” between the agent estimates. Our bounds are time-nonhomogeneous in the sense that they depend on the initial starting time. Despite this, we show that the agent estimates reach an almost sure consensus and converge to the same optimal solution of the global optimization problem with probability one under different assumptions on the local constraint sets and the stepsize sequence.
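A minimal sketch of one possible projected multi-agent subgradient iteration is given below with a fixed, doubly stochastic averaging matrix; the randomly varying, state-dependent communication model of the paper and its disagreement analysis are not reproduced, and all names and data are hypothetical.

```python
import numpy as np

# Toy problem: three agents minimize sum_i f_i(x) with f_i(x) = 0.5*(x - t_i)^2,
# each agent i keeping x inside its own interval X_i; the X_i intersect.
targets = np.array([0.0, 2.0, 4.0])              # hypothetical local data t_i
boxes = [(-1.0, 3.0), (0.5, 5.0), (1.0, 6.0)]    # local constraint sets X_i
W = np.array([[0.50, 0.25, 0.25],                # fixed doubly stochastic weights,
              [0.25, 0.50, 0.25],                # a stand-in for the random,
              [0.25, 0.25, 0.50]])               # state-dependent links of the paper

def project(v, box):                             # projection onto an interval
    lo, hi = box
    return min(max(v, lo), hi)

x = np.array([-1.0, 5.0, 6.0])                   # initial agent estimates
for k in range(1, 2001):
    alpha = 1.0 / k                              # diminishing stepsize
    mixed = W @ x                                # averaging (consensus) step
    grads = mixed - targets                      # gradient of each local f_i
    x = np.array([project(mixed[i] - alpha * grads[i], boxes[i]) for i in range(3)])

print(np.round(x, 3))   # the three estimates cluster near the optimum x* = 2
```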

16.
Representations of solutions of boundary value problems for simple domains are widely used in Monte Carlo algorithms [2]. In particular, widespread use is made of such a representation for the ball. It allows one to formally write an integral equation of the second kind for the required function in an arbitrary domain with a regular boundary. Moreover, with the involvement of the joining conditions [1], one can envisage the construction of a random process that “solves” the problem. However, the “walk in spheres” process, which solves the first boundary value problem for the Poisson equation, results in ɛ-biased estimators, and so the introduction of a regularization parameter is required. The authors investigate in detail the “walk in hemispheres” method, which was proposed earlier by A. S. Sipin [10] without full justification. The use of the Green's function for the hemisphere makes it possible to obtain estimators for the first and the third boundary value problems, as well as for the problem with a discontinuous derivative; for a broad class of domains, these estimators are shown to be unbiased. The proposed algorithms feature a high degree of parallelism. Results of solving model problems are presented.
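For reference, the standard ε-biased “walk in spheres” estimator mentioned above can be sketched as follows for the Laplace equation in the plane; the “walk in hemispheres” refinement studied by the authors (which removes the bias) is not shown, and the test problem is illustrative.

```python
import math, random

def walk_on_spheres(x, y, g, dist_to_boundary, eps=1e-3, n_paths=20000):
    """Classical 'walk on spheres' estimator of u(x, y) for the Dirichlet problem
    Laplace(u) = 0 in D, u = g on the boundary: jump uniformly on the largest
    circle inside D until within eps of the boundary (hence the eps-bias)."""
    total = 0.0
    for _ in range(n_paths):
        px, py = x, y
        while True:
            r = dist_to_boundary(px, py)
            if r < eps:                      # close enough: read the boundary data
                total += g(px, py)
                break
            theta = random.uniform(0.0, 2.0 * math.pi)
            px += r * math.cos(theta)
            py += r * math.sin(theta)
    return total / n_paths

# Unit disk with boundary data g(x, y) = x; the harmonic extension is u(x, y) = x.
dist = lambda x, y: 1.0 - math.hypot(x, y)
g = lambda x, y: x
print(round(walk_on_spheres(0.3, 0.4, g, dist), 3))   # approximately 0.3
```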

17.
A new dual problem for convex generalized fractional programs with no duality gap is presented, and it is shown how this dual problem can be efficiently solved using a parametric approach. The resulting algorithm can be seen as “dual” to the Dinkelbach-type algorithm for generalized fractional programs, since it approximates the optimal objective value of the dual (primal) problem from below. Convergence results for this algorithm are derived, and an easy condition to achieve superlinear convergence is also established. Moreover, under some additional assumptions the algorithm also recovers at the same time an optimal solution of the primal problem. We also consider a variant of this new algorithm, based on scaling the “dual” parametric function. The numerical results, in the case of quadratic-linear ratios and linear constraints, show that the performance of the new algorithm and its scaled version is superior to that of the Dinkelbach-type algorithms. From the computational results it also appears that, contrary to the primal approach, the “dual” approach is less influenced by scaling. This research was carried out at the Econometric Institute, Erasmus University, Rotterdam, the Netherlands, and was supported by J.N.I.C.T. (Portugal) under contract BD/707/90-RM.
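As background for the Dinkelbach-type algorithms that the proposed method is compared against, a minimal sketch of the classical single-ratio Dinkelbach iteration is given below; the example functions, interval and tolerance are illustrative only.

```python
from scipy.optimize import minimize_scalar

# Classical Dinkelbach scheme for max f(x)/g(x) over [lo, hi] with g > 0:
# repeatedly solve the parametric subproblem max f(x) - lam*g(x) and update lam.
f = lambda x: x + 1.0
g = lambda x: x * x + 1.0
lo, hi = 0.0, 2.0

lam, x = 0.0, lo
for _ in range(25):
    res = minimize_scalar(lambda t: -(f(t) - lam * g(t)),
                          bounds=(lo, hi), method="bounded")
    x, value = res.x, -res.fun
    if abs(value) < 1e-10:     # F(lam) = max f - lam*g = 0  <=>  lam is optimal
        break
    lam = f(x) / g(x)          # update the ratio estimate

print(round(x, 4), round(lam, 4))   # x* = sqrt(2) - 1 ~ 0.4142, ratio ~ 1.2071
```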

18.
We consider a global optimization problem for Lipschitz-continuous functions with an unknown Lipschitz constant. Our approach is based on the well-known DIRECT (DIviding RECTangles) algorithm and motivated by the diagonal partitioning strategy. One of the main advantages of the diagonal partitioning scheme is that the objective function is evaluated at two points in each hyper-rectangle and, therefore, more comprehensive information about the objective function is considered than with the central sampling strategy used in most DIRECT-type algorithms. In this paper, we introduce a new DIRECT-type algorithm, which we call BIRECT (BIsecting RECTangles). In this algorithm, a bisection is used instead of the trisection typical for diagonal-based and DIRECT-type algorithms. Bisection is preferable to trisection because of the shapes of the hyper-rectangles, but the usual evaluation of the objective function at the center or at the endpoints of the diagonal is not favorable for bisection. In the proposed algorithm the objective function is evaluated at two points on the diagonal that are equidistant from each other and from the endpoints of the diagonal. This sampling strategy enables reuse of the sampling points in descendant hyper-rectangles. The developed algorithm gives very competitive numerical results compared to the DIRECT algorithm and its well-known modifications.
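The two geometric ingredients named above, sampling at 1/3 and 2/3 of a box's diagonal and bisection perpendicular to the longest side, can be sketched as follows; the rule by which BIRECT chooses each child's diagonal so that sampling points are reused, and the selection of potentially optimal hyper-rectangles, are not implemented here.

```python
import numpy as np

def diagonal_samples(lo, hi):
    """Two points on the diagonal of the box [lo, hi], equidistant from each
    other and from the diagonal's endpoints (i.e. at 1/3 and 2/3 of the diagonal)."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    return lo + (hi - lo) / 3.0, lo + 2.0 * (hi - lo) / 3.0

def bisect_longest_side(lo, hi):
    """Split the box into two halves perpendicular to its longest side
    (bisection, rather than the trisection common in DIRECT-type methods)."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    d = int(np.argmax(hi - lo))
    mid = 0.5 * (lo[d] + hi[d])
    hi_left, lo_right = hi.copy(), lo.copy()
    hi_left[d], lo_right[d] = mid, mid
    return (lo, hi_left), (lo_right, hi)

# Unit square: the samples land at (1/3, 1/3) and (2/3, 2/3), and the first
# bisection splits the square along its first coordinate at x = 0.5.
p1, p2 = diagonal_samples([0.0, 0.0], [1.0, 1.0])
(left_lo, left_hi), (right_lo, right_hi) = bisect_longest_side([0.0, 0.0], [1.0, 1.0])
print(np.round(p1, 3), np.round(p2, 3))
print(left_lo, left_hi, "|", right_lo, right_hi)
```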

19.
In this paper, the global optimization problem with a multiextremal objective function satisfying the Lipschitz condition over a hypercube is considered. An algorithm is proposed that belongs to the class of information methods introduced by R.G. Strongin. Knowledge of the Lipschitz constant is not assumed. The local tuning on the behavior of the objective function and a new technique, named the local improvement, are used in order to accelerate the search. Two methods are presented: the first one deals with one-dimensional problems and the second with multidimensional ones (by using Peano-type space-filling curves for reduction of the dimension of the problem). Convergence conditions for both algorithms are given. Numerical experiments executed on more than 600 functions show quite a promising performance of the new techniques.

20.
Multidimensional optimization problems are considered where the objective function and the constraints are multiextremal non-differentiable Lipschitz functions (with unknown Lipschitz constants) and the feasible region is a finite collection of robust nonconvex subregions. Both the objective function and the constraints may be partially defined. To solve such problems, an algorithm is proposed that uses Peano space-filling curves and the index scheme to reduce the original problem to a Hölder one-dimensional one. Local tuning on the behaviour of the objective function and constraints is used during the work of the global optimization procedure in order to accelerate the search. The method uses neither penalty coefficients nor additional variables. Convergence conditions are established. Numerical experiments confirm the good performance of the technique. This research was supported by the following grants: FIRB RBNE01WBBB, FIRB RBAU01JYPN, and RFBR 01-01-00587. Key words: global optimization, multiextremal constraints, local tuning, index approach.
