Similar Documents
20 similar documents found.
1.
The Wiener process is a widely used statistical model for stochastic global optimization. The so-called P-algorithm, one of the first optimization algorithms based on a statistical model, was built on the Wiener process. Despite many advantages, this process does not provide a realistic model for many optimization problems, particularly from the point of view of local behavior. In the present paper, a version of the P-algorithm is constructed based on a stochastic process with smooth sampling functions. It is shown that, in such a case, the algorithm has a better convergence rate than in the case of the Wiener process. A similar convergence rate is proved for a combination of the Wiener model-based P-algorithm with quadratic fit-based local search.
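To make the selection rule concrete, here is a minimal sketch of one step of the classical one-dimensional Wiener-process P-algorithm, i.e., the baseline model the abstract improves upon, not the smooth-process version proposed in the paper. Conditioned on the observations, the Wiener process is Gaussian on each interval between sorted evaluation points, with linearly interpolated mean and a parabolic variance profile; the next point maximizes the probability of improving the record value by ε. The function name, the grid-search maximization, and the default parameters are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def p_algorithm_next_point(xs, ys, eps=0.1, sigma2=1.0, grid=200):
    """One step of the Wiener-process P-algorithm (illustrative sketch).

    xs, ys : evaluation points and observed values (any order).
    Picks the point maximizing P(xi(x) <= y_min - eps), where xi is the
    Wiener process conditioned on the observations.
    """
    order = np.argsort(xs)
    xs, ys = np.asarray(xs, float)[order], np.asarray(ys, float)[order]
    target = ys.min() - eps
    best_x, best_p = None, -np.inf
    for i in range(len(xs) - 1):
        h = xs[i + 1] - xs[i]
        t = np.linspace(xs[i], xs[i + 1], grid)[1:-1]         # interval interior
        mean = ys[i] + (ys[i + 1] - ys[i]) * (t - xs[i]) / h  # conditional mean
        var = sigma2 * (t - xs[i]) * (xs[i + 1] - t) / h      # conditional variance
        p = norm.cdf((target - mean) / np.sqrt(var))          # improvement probability
        j = int(np.argmax(p))
        if p[j] > best_p:
            best_x, best_p = float(t[j]), float(p[j])
    return best_x
```

For example, p_algorithm_next_point([0.0, 0.5, 1.0], [1.0, 0.2, 0.7]) returns a point inside the subinterval that is most promising under the model.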

2.
Algorithms based on statistical models compete favorably with other global optimization algorithms, as shown by extensive testing results. A theoretical inadequacy of previously used statistical models for smooth objective functions was eliminated by the authors, who, in a recent paper, constructed a P-algorithm for a statistical model of smooth functions. In the present paper, a modification of that P-algorithm with an improved convergence rate is described.

3.
We consider the problem of minimizing a nondifferentiable function that is the pointwise maximum over a compact family of continuously differentiable functions. We suppose that a certain convex approximation to the objective function can be evaluated. An iterative method is given whose successive search directions are approximate solutions of semi-infinite quadratic programming problems, calculated via a new generalized proximity algorithm. Inexact line searches ensure global convergence of the method to stationary points. This work was supported by Project No. CPBP-02.15/2.1.1.

4.
Whether or not the general asymmetric variational inequality problem can be formulated as a differentiable optimization problem has been an open question. This paper gives an affirmative answer to this question. We provide a new optimization-problem formulation of the variational inequality problem and show that its objective function is continuously differentiable whenever the mapping involved in the latter problem is continuously differentiable. We also show that, under appropriate assumptions on the latter mapping, any stationary point of the optimization problem is a global optimal solution and hence solves the variational inequality problem. We discuss descent methods for solving the equivalent optimization problem and comment on systems of nonlinear equations and nonlinear complementarity problems.
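As an illustration of such a differentiable reformulation, the sketch below evaluates a regularized gap function of the kind the abstract describes (in the spirit of Fukushima-style constructions), specialized to a variational inequality VI(F, S) whose feasible set S is a box, so that the inner maximizer has a closed form via clipping. The function names, the box feasible set, and the parameter alpha are assumptions made for the example.

```python
import numpy as np

def regularized_gap(x, F, lo, hi, alpha=1.0):
    """Regularized gap function for VI(F, S) with S = [lo, hi] (a box).

        theta(x) = max_{y in S} F(x).(x - y) - (alpha/2) ||x - y||^2

    The maximizer is y(x) = clip(x - F(x)/alpha, lo, hi); theta(x) >= 0
    on S, and theta(x) = 0 exactly at solutions of the VI.
    """
    Fx = F(x)
    y = np.clip(x - Fx / alpha, lo, hi)  # closed-form inner maximizer for a box
    d = x - y
    return Fx @ d - 0.5 * alpha * (d @ d), y
```

Minimizing theta over S with a descent method then recovers a solution of the variational inequality under assumptions of the kind discussed in the abstract.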

5.
This paper presents a method for minimizing the sum of a possibly nonsmooth convex function and a continuously differentiable function. As in the convex case developed by the author, the algorithm is a descent method which generates successive search directions by solving quadratic programming subproblems. An inexact line search ensures global convergence of the method to stationary points.
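The paper's quadratic programming subproblems are not reproduced in the abstract; purely as a minimal illustration of a descent scheme on the same composite problem class, the sketch below runs a proximal-gradient iteration for the special case where the nonsmooth convex part is lam * ||x||_1, whose proximal map is soft-thresholding. The fixed step size and iteration count are arbitrary assumptions.

```python
import numpy as np

def prox_gradient_l1(grad_g, lam, x0, step=0.1, iters=500):
    """Minimize g(x) + lam * ||x||_1 with g smooth (illustrative only;
    this is a proximal-gradient method, not the paper's QP-based one)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        z = x - step * grad_g(x)  # gradient step on the smooth part
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # prox step
    return x
```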

6.
This paper presents an algorithm for global optimization problems whose objective function is Lipschitz continuous but not necessarily differentiable. The proposed algorithm consists of local and global search procedures, which are based on and inspired by the quasisecant method, respectively. The aim of the global search procedure is to identify "promising" basins in the search space. Once a promising basin is identified, the search skips from an exhausted area to that basin, and the local search procedure is then applied there. It is proved that the algorithm converges to the global minimum if the local minima are finitely many and isolated. The method is tested on academic benchmarks; its numerical performance and comparisons show that it is efficient and robust. Finally, the method is applied to solve the sensor localization problem.

7.
In this paper, we extend ordinary discrete-type facility location problems to continuous-type ones. Unlike the discrete-type facility location problem, in which the objective function is not everywhere differentiable, the objective function in the continuous-type problem is strictly convex and continuously differentiable. An algorithm without line search for solving continuous-type facility location problems is proposed, and its global convergence and linear convergence rate are proved. Numerical experiments illustrate that, in some sense, the suggested algorithm requires less computation and converges faster than the gradient method and the conjugate direction method.

8.
The convergence rate of a rectangular-partition-based algorithm is considered. A hyper-rectangle for subdivision is selected at each step according to a criterion rooted in the statistical-models-based theory of global optimization; only the objective function values are used to compute the selection criterion. The convergence rate is analyzed assuming that the objective functions are twice continuously differentiable and defined on the unit cube in d-dimensional Euclidean space. An asymptotic bound on the convergence rate is established. The results of numerical experiments are included.

9.
A Radial Basis Function Method for Global Optimization
We introduce a method that aims to find the global minimum of a continuous nonconvex function on a compact subset of ℝ^d. It is assumed that function evaluations are expensive and that no additional information is available. Radial basis function interpolation is used to define a utility function. The maximizer of this function is the next point where the objective function is evaluated. We show that, for most types of radial basis functions considered in this paper, convergence can be achieved without further assumptions on the objective function. Moreover, it turns out that our method is closely related to a statistical global optimization method, the P-algorithm. A general framework for both methods is presented. Finally, a few numerical examples show that, on the set of Dixon-Szegö test functions, our method yields favourable results in comparison to other global optimization methods.
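A minimal sketch of the surrogate underlying such methods is given below: a cubic radial basis function interpolant with a linear polynomial tail, fitted by solving the standard augmented linear system. The full method additionally maximizes a utility function (e.g., a bumpiness measure) to choose the next evaluation point, which is omitted here; the names and the choice of the cubic basis are assumptions.

```python
import numpy as np

def cubic_rbf_fit(X, y):
    """Fit s(x) = sum_i lam_i ||x - x_i||^3 + b.x + b0 interpolating (X, y).

    X : (n, d) array of evaluated points, y : (n,) array of values.
    Returns the interpolant s as a callable (illustrative sketch).
    """
    X, y = np.asarray(X, float), np.asarray(y, float)
    n, d = X.shape
    Phi = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2) ** 3
    P = np.hstack([X, np.ones((n, 1))])          # linear polynomial tail
    A = np.block([[Phi, P], [P.T, np.zeros((d + 1, d + 1))]])
    coef = np.linalg.solve(A, np.concatenate([y, np.zeros(d + 1)]))
    lam, b = coef[:n], coef[n:]

    def s(x):
        x = np.asarray(x, float)
        r = np.linalg.norm(x - X, axis=1)        # distances to the centers
        return lam @ r ** 3 + b[:d] @ x + b[-1]

    return s
```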

10.
In this paper, we consider the least l2-norm solution of a possibly inconsistent system of nonlinear inequalities. The objective function of the problem is only once continuously differentiable. By introducing a new smoothing function, the problem is approximated by a family of parameterized optimization problems with twice continuously differentiable objective functions. A Levenberg-Marquardt algorithm is then proposed to solve the parameterized smooth optimization problems. It is proved that the algorithm either terminates finitely at a solution of the original inequality problem or generates an infinite sequence; in the latter case, the sequence converges to a least l2-norm solution of the inequality problem. Local quadratic convergence of the algorithm is established under certain conditions.
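The paper's new smoothing function is not given in the abstract; as a stand-in, the sketch below uses the well-known Chen-Harker-Kanzow-Smale (CHKS) smoothing of max(0, t) to show how the once-differentiable residual ||max(0, g(x))||^2 becomes twice continuously differentiable for mu > 0.

```python
import numpy as np

def chks_plus(t, mu):
    """Smooth approximation of max(0, t): infinitely differentiable for
    mu > 0 and converging to max(0, t) as mu -> 0 (CHKS function; the
    paper's smoothing function may differ)."""
    return 0.5 * (t + np.sqrt(t * t + 4.0 * mu * mu))

def smoothed_residual(x, g, mu):
    """Twice continuously differentiable surrogate for ||max(0, g(x))||^2,
    usable as the objective of a Levenberg-Marquardt-type method."""
    return float(np.sum(chks_plus(g(x), mu) ** 2))
```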

11.
This paper introduces an algorithm for univariate optimization using linear lower bounding functions (LLBFs). An LLBF over an interval is a linear function that lies below the given function on the interval and matches it at one endpoint. We first present an algorithm using LLBFs for finding the nearest root of a function along a search direction. When this root-finding method is applied to the derivative of an objective function, it becomes an optimization algorithm guaranteed to locate the nearest extremum along a search direction. For univariate optimization, we show that this approach is a Newton-type method which is globally convergent with a superlinear convergence rate. Applications of the algorithm to global optimization and other optimization problems are also discussed.
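A minimal sketch of the LLBF root-finding step, under the assumptions that f(a) > 0 and that a lower bound m < 0 on f' over [a, b] is available: the line L(x) = f(a) + m(x - a) lies below f and matches it at a, so stepping to the zero of L can never overshoot the nearest root to the right of a. All names and tolerances are illustrative.

```python
def llbf_nearest_root(f, a, b, m, tol=1e-10, max_iter=100_000):
    """Find the nearest root of f to the right of a (illustrative sketch).

    Assumes f(a) > 0 and m <= f'(x) on [a, b] with m < 0, so that at each
    iterate x0 the line L(x) = f(x0) + m*(x - x0) is an LLBF and its zero
    x0 - f(x0)/m never jumps past a root of f.
    """
    x = a
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) <= tol:
            return x               # converged to the nearest root
        x = x - fx / m             # zero of the current LLBF (moves right)
        if x > b:
            return None            # no root in [a, b]
    return x
```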

12.
Many real applications can be formulated as nonlinear minimization problems with a single linear equality constraint and box constraints. We are interested in solving problems where the number of variables is so huge that basic operations, such as evaluating the objective function or updating its gradient, are very time consuming. Thus, for the considered class of problems (including dense quadratic programs), traditional optimization methods cannot be applied directly. In this paper, we define a decomposition algorithm model which employs, at each iteration, a descent search direction selected from a suitable set of sparse feasible directions. The algorithm is characterized by an acceptance rule for the updated point which, on the one hand, permits choosing the variables to be modified with a certain degree of freedom and, on the other hand, does not require the exact solution of any subproblem. Global convergence of the algorithm model is proved under the assumptions that the objective function is continuously differentiable and that the points of the level set have at least one component strictly between the lower and upper bounds. Numerical results on large-scale quadratic problems arising in the training of support vector machines show the effectiveness of an implemented decomposition scheme derived from the general algorithm model.
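As a minimal illustration of such a sparse feasible direction, the sketch below performs one two-variable update in the spirit of SMO-type decomposition (this is not the paper's acceptance rule): the direction e_i - e_j preserves the single linear equality constraint sum(x) = const, and a Newton step along it is clipped so both variables stay inside their boxes. The names and the curvature argument are assumptions; for a quadratic with Hessian H, the curvature along e_i - e_j is H_ii + H_jj - 2*H_ij.

```python
import numpy as np

def two_variable_step(x, grad, curv, lo, hi, i, j):
    """One step along the sparse feasible direction e_i - e_j
    (illustrative sketch of a decomposition-type update).

    grad : gradient of f at x;  curv : curvature of f along e_i - e_j.
    """
    g_dir = grad[i] - grad[j]                   # directional derivative
    t = -g_dir / max(curv, 1e-12)               # unconstrained Newton step
    t = min(t, hi[i] - x[i], x[j] - lo[j])      # keep x_i <= hi_i, x_j >= lo_j
    t = max(t, lo[i] - x[i], x[j] - hi[j])      # keep x_i >= lo_i, x_j <= hi_j
    x_new = np.array(x, dtype=float)
    x_new[i] += t
    x_new[j] -= t                               # sum(x) is unchanged
    return x_new
```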

13.
This paper presents a method for finding the minimum of a class of nonconvex and nondifferentiable functions consisting of the sum of a convex function and a continuously differentiable function. The algorithm is a descent method which generates successive search directions by solving successive convex subproblems. The algorithm is shown to converge to a critical point. The authors wish to express their appreciation to the referees for their careful review and helpful comments.

14.
In the present paper, we propose a new multipoint-type global optimization model for continuously differentiable optimization problems, using a chaotic dynamic model and a synchronization phenomenon in nonlinear dynamic systems. We first improve the Discrete Gradient Chaos Model (DGCM), which drives the autonomous movement of each search point, based on theoretical analysis. We then derive a new coupling structure, called PD-type coupling, in order to obtain stable synchronization of all search points with the chaotic dynamic model in a discrete-time system. Finally, we propose a new multipoint-type global optimization model in which each search point moves autonomously according to the improved DGCM and their trajectories are synchronized to elite search points by the PD-type coupling model. The proposed model properly achieves diversification and intensification, which are reported to be important strategies for global optimization in the metaheuristics research field. Through application to suitable benchmark problems [Liang et al. Novel composition test functions for numerical global optimization. In: Proceedings of the Swarm Intelligence Symposium (SIS 2005), pp. 68–75 (2005); Liang et al. Nat. Comput. 5(1), 83–96, 2006] (in which the drawbacks of typical benchmark problems are remedied) with 100 or 1000 variables, we confirm that the proposed model is more effective than other gradient-based methods.

15.
The construction of global optimization algorithms using statistical models and radial basis function models is discussed. A new method of data smoothing using radial basis functions and a least-squares approach is presented. It is shown that the P-algorithm for global optimization in the presence of noise, based on a statistical model, coincides with the corresponding radial basis algorithm.

16.
The aim of this paper is to show that the new continuously differentiable exact penalty functions recently proposed in the literature can play an important role in the field of constrained global optimization. In fact, they allow us to transfer ideas and results from unconstrained global optimization to the constrained case. First, drawing our inspiration from the unconstrained case and using the strong exactness properties of a particular continuously differentiable penalty function, we propose a sufficient condition for a local constrained minimum point to be global. Then we show that every constrained local minimum point satisfying the second-order sufficient conditions is an attraction point for a particular implementable minimization algorithm based on the considered penalty function. This result can be used to define new classes of global algorithms for the solution of general constrained global minimization problems. As an example, we describe a simulated annealing algorithm which produces a sequence of points converging in probability to a global minimum of the original constrained problem.

17.
The filled function method is considered an efficient approach to solving global optimization problems. In this paper, a new filled function method is proposed. Its main idea is as follows: once a minimizer of the objective function has been found, a new continuously differentiable filled function with only one parameter is constructed; a minimizer of the filled function is then located in a lower basin of the objective function, from which a better minimizer of the objective function can be found. This process is repeated until the global optimal solution is found. Numerical experiments show the efficiency of the proposed filled function method.
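The paper's one-parameter filled function is not reproduced in the abstract; to illustrate the mechanism, the sketch below constructs the classical Ge-type filled function, which uses two parameters r and rho. For suitable parameter values, the current minimizer x* of f becomes a local maximizer of P, and P retains minimizers only in basins where f falls below f(x*), so minimizing P from a point near x* escapes the current basin.

```python
import numpy as np

def ge_filled_function(f, x_star, r=1.0, rho=1.0):
    """Classical Ge-type filled function (illustrative; the paper's
    one-parameter construction differs):

        P(x) = exp(-||x - x_star||^2 / rho^2) / (r + f(x)),

    where r is chosen so that r + f(x) > 0 on the region of interest.
    Returns P as a callable to be minimized from a point near x_star.
    """
    x_star = np.asarray(x_star, float)

    def P(x):
        x = np.asarray(x, float)
        return np.exp(-np.sum((x - x_star) ** 2) / rho ** 2) / (r + f(x))

    return P
```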

18.
A well-recognized one-dimensional global optimization method is generalized to the multidimensional case. The generalization is based on a multidimensional statistical model of multimodal functions, constructed by generalizing the computationally favorable properties of a popular one-dimensional model, the Wiener process. A simplicial partition of the feasible region is essential for the construction of the model. The basic idea of the proposed method is to search where improvements of the objective function are most probable; the probability of improvement is evaluated with respect to the statistical model. Some results of computational experiments are presented.

19.
Nowadays, solving nonsmooth (not necessarily differentiable) optimization problems plays a very important role in many areas of industrial application. Most of the algorithms developed so far deal only with nonsmooth convex functions. In this paper, we propose a new algorithm for solving nonsmooth optimization problems that are not assumed to be convex. The algorithm combines the traditional cutting-plane method with some features of bundle methods and with the search-direction calculation of the feasible-direction interior-point algorithm (Herskovits, J. Optim. Theory Appl. 99(1):121–146, 1998). The algorithm generates a sequence of interior points of the epigraph of the objective function, whose accumulation points are solutions to the original problem. We prove global convergence of the method for locally Lipschitz continuous functions and give some preliminary results from numerical experiments.

20.
We present a branch-and-bound algorithm for the global optimization of a twice differentiable nonconvex objective function with a Lipschitz continuous Hessian over a compact, convex set. The algorithm is based on applying cubic regularisation techniques to the objective function within an overlapping branch-and-bound algorithm for convex constrained global optimization. Unlike in other branch-and-bound algorithms, lower bounds are obtained via nonconvex underestimators of the function. As a numerical example, we apply the proposed algorithm to radial basis function approximations.
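The lower bounds can be made concrete with the standard third-order Taylor estimate: if the Hessian is Lipschitz with constant L on the region, then f(x) >= f(x0) + g0.(x - x0) + 0.5 (x - x0).H0.(x - x0) - (L/6) ||x - x0||^3, where g0 and H0 are the gradient and Hessian at x0. The sketch below returns this (generally nonconvex) cubic underestimator; how the paper embeds such bounds in its overlapping branch-and-bound scheme is not reproduced here, and all names are illustrative.

```python
import numpy as np

def cubic_underestimator(f0, g0, H0, L):
    """Cubic lower bound on f around x0, valid when the Hessian of f is
    L-Lipschitz on the region (a standard Taylor bound, not the paper's
    full bounding procedure).

    f0, g0, H0 : value, gradient, and Hessian of f at x0.
    Returns q such that f(x) >= q(x - x0) for all x in the region.
    """
    g0, H0 = np.asarray(g0, float), np.asarray(H0, float)

    def q(dx):  # dx = x - x0
        dx = np.asarray(dx, float)
        cubic = (L / 6.0) * np.linalg.norm(dx) ** 3
        return f0 + g0 @ dx + 0.5 * dx @ H0 @ dx - cubic

    return q
```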
