Similar Articles — 20 results found
1.
In this paper we propose a new simplicial partition-based deterministic algorithm for global optimization of Lipschitz-continuous functions that requires no knowledge of the Lipschitz constant. Our algorithm is motivated by the well-known Direct algorithm, which evaluates the objective function on a set of points chosen to cover the most promising subregions of the feasible region. Almost all previous modifications of the Direct algorithm use hyper-rectangular partitions. However, other types of partitions may be more suitable for some optimization problems. Simplicial partitions may be preferable when the initial feasible region is either already a simplex or may be covered by one or a manageable number of simplices. Therefore, in this paper we propose and investigate simplicial versions of the partition-based algorithm. With simplicial partitions, the definition of a potentially optimal subregion cannot be the same as in the rectangular version, so we propose and investigate two definitions of potentially optimal simplices: one involves the function values at the vertices of the simplex, and the other uses the function value at the centroid of the simplex. We compare the performance of the algorithms under the two definitions experimentally. The experiments show that the proposed simplicial algorithm is very competitive with the Direct algorithm on standard test problems and performs particularly well when the search space and the numbers of local and global optimizers can be reduced by exploiting symmetries of the objective function.
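To make the two definitions concrete, here is a minimal Python sketch (not the authors' code) of the raw quantities a simplicial Direct-type method can compare when ranking subregions: a merit based on vertex function values, a merit based on the centroid value, and the simplex diameter against which either merit is traded off. The paper's exact selection rule is not reproduced here.

import numpy as np

def simplex_measures(vertices, f):
    # vertices: (n+1, n) array of simplex vertices; f: objective function.
    # Returns the two candidate merit values described in the abstract plus
    # the simplex diameter; how value and diameter are traded off to pick
    # "potentially optimal" simplices is a detail of the paper omitted here.
    V = np.asarray(vertices, dtype=float)
    vertex_merit = min(f(v) for v in V)            # uses vertex function values
    centroid_merit = f(V.mean(axis=0))             # uses the centroid value
    diameter = max(np.linalg.norm(V[i] - V[j])
                   for i in range(len(V)) for j in range(i + 1, len(V)))
    return vertex_merit, centroid_merit, diameter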

2.
马玉敏, 蔡邢菊. 《计算数学》2022, 44(2): 272-288
The augmented Lagrangian method is an effective algorithm for solving convex optimization problems with linear constraints. The linearized augmented Lagrangian method linearizes the quadratic penalty term of the augmented Lagrangian function and adds a proximal regularization term, so that the subproblems become easy to solve; a proper choice of the regularization coefficient is crucial for the convergence and the convergence rate of the algorithm. A larger coefficient guarantees convergence but tends to force small step sizes, while a smaller coefficient allows larger step sizes but may cause the algorithm to diverge. In this paper we consider convex optimization problems with linear equality or inequality constraints. Using adaptive techniques, we design a class of indefinite linearized augmented Lagrangian methods: the regularization coefficient is chosen adaptively from the information at the current iterate so that, while convergence is preserved, the admissible range of subproblem step sizes is made as large as possible, thereby accelerating convergence. We prove the global convergence of the algorithm and illustrate its effectiveness with numerical experiments.
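To illustrate the mechanics just described, the following Python sketch runs a linearized augmented Lagrangian iteration on the toy problem min 0.5*||x - c||^2 subject to Ax = b, where the linearized subproblem has a closed form; the rule used below to adapt the regularization coefficient r is a naive placeholder for exposition, not the paper's adaptive scheme.

import numpy as np

def linearized_alm(A, b, c, beta=1.0, r=1.0, iters=500, tol=1e-8):
    # Linearized ALM sketch for  min 0.5*||x - c||^2  s.t.  Ax = b.
    # The quadratic penalty (beta/2)*||Ax - b||^2 is linearized at x_k and a
    # proximal term (r/2)*||x - x_k||^2 is added, so the subproblem is solved
    # in closed form.  The update rule for r below is a placeholder (assumption).
    m, n = A.shape
    x, y = np.zeros(n), np.zeros(m)
    for _ in range(iters):
        g = A.T @ (y + beta * (A @ x - b))       # gradient of linearized terms
        x_new = (c - g + r * x) / (1.0 + r)      # closed-form prox subproblem
        y = y + beta * (A @ x_new - b)           # multiplier update
        step = np.linalg.norm(x_new - x)
        r = max(0.5 * r, 0.1) if step < 1.0 else min(2.0 * r, 10.0)
        if step < tol:
            return x_new, y
        x = x_new
    return x, y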

3.
In this paper we propose a new algorithm for solving difficult large-scale global optimization problems. We draw our inspiration from the well-known DIRECT algorithm which, by exploiting the behavior of the objective function, produces a set of points that tries to cover the most interesting regions of the feasible set. Unfortunately, it is well known that this strategy suffers as the dimension of the problem increases. As a first step, we define a multi-start algorithm that uses DIRECT as a deterministic generator of starting points. The new algorithm then consists of repeatedly applying this multi-start algorithm to suitable modifications of the variable space that exploit the information gained during the optimization process. The efficiency of the new algorithm is demonstrated by extensive numerical experiments involving both standard test problems and the optimization of the Morse potential of molecular clusters.

4.
We propose a new globalization strategy that can be used in unconstrained optimization algorithms to support rapid convergence from remote starting points. Our approach is based on using multiple points at each iteration to build a sequence of representative models of the objective function. Using the new information gathered from those multiple points, a local step is gradually improved by updating its direction as well as its length. We give a global convergence result and also provide the parallel implementation details, accompanied by a numerical study. Our numerical study shows that the proposed algorithm is a promising alternative as a globalization strategy.

5.
We investigate the global convergence of a factorized distribution algorithm (FDA) with truncation selection. Like conventional genetic algorithms, FDAs maintain and successively improve a population of solutions. In FDAs, a distribution model is built from statistical information extracted from a set of selected solutions in the current population, and the model thus built is then used to generate new solutions for the next generation. The variable-dependence structure of the distribution model in FDAs is determined by the variable-interaction structure of the objective function. We prove that the FDA with truncation selection converges globally for optimization of a class of additively decomposable functions (ADFs). Our results imply that the utilization of appropriately selected dependence relationships is sufficient to guarantee the global convergence of estimation of distribution algorithms (EDAs) for optimization of ADFs. © 2004 Wiley Periodicals, Inc. Complexity 9: 17-23, 2004
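As a concrete (and deliberately simplified) instance of the algorithms discussed above, the Python sketch below implements a fully factorized EDA with truncation selection on binary strings, i.e., a univariate model p(x) = prod_i p(x_i); a genuine FDA would instead factor the model according to the variable-interaction structure of the ADF. All parameter values are illustrative assumptions.

import numpy as np

def univariate_eda(f, n_bits, pop=100, frac=0.3, gens=50, rng=None):
    # Maximizes f over binary strings; a univariate distribution model is
    # refit each generation from the truncation-selected top fraction.
    rng = rng or np.random.default_rng(0)
    p = np.full(n_bits, 0.5)                            # univariate marginals
    for _ in range(gens):
        X = (rng.random((pop, n_bits)) < p).astype(int) # sample new population
        scores = np.apply_along_axis(f, 1, X)
        top = X[np.argsort(scores)[-int(frac * pop):]]  # truncation selection
        p = top.mean(axis=0).clip(0.05, 0.95)           # refit, clipped margins
    return (p > 0.5).astype(int)

# Example on OneMax, a simple additively decomposable function:
best = univariate_eda(lambda x: x.sum(), n_bits=20)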

6.
In this work, we show that, given a nonlinear programming problem, it is possible to construct a family of dynamical systems, defined on the feasible set of the given problem, such that: (a) the equilibrium points are the unknown critical points of the problem, which are asymptotically stable; (b) each dynamical system admits the objective function of the problem as a Lyapunov function; and (c) explicit formulas are available that do not involve the unknown critical points of the problem. The construction of the family of dynamical systems is based on the Control Lyapunov Function methodology, which is used in mathematical control theory for the construction of stabilizing feedback. The knowledge of a dynamical system with these properties allows the construction of algorithms that guarantee global convergence to the set of critical points.

7.
We consider a global optimization problem for Lipschitz-continuous functions with an unknown Lipschitz constant. Our approach is based on the well-known DIRECT (DIviding RECTangles) algorithm and motivated by the diagonal partitioning strategy. One of the main advantages of the diagonal partitioning scheme is that the objective function is evaluated at two points in each hyper-rectangle; therefore, more comprehensive information about the objective function is gathered than with the central sampling strategy used in most DIRECT-type algorithms. In this paper, we introduce a new DIRECT-type algorithm, which we call BIRECT (BIsecting RECTangles). In this algorithm, a bisection is used instead of the trisection typical of diagonal-based and DIRECT-type algorithms. Bisection is preferable to trisection because of the shapes of the resulting hyper-rectangles, but the usual evaluation of the objective function at the center or at the endpoints of the diagonal is not favorable for bisection. In the proposed algorithm, the objective function is evaluated at two points on the diagonal that are equidistant from each other and from the endpoints of the diagonal. This sampling strategy enables reuse of the sampling points in descendant hyper-rectangles. The developed algorithm gives very competitive numerical results compared to the DIRECT algorithm and its well-known modifications.
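The sampling geometry is easy to state precisely: the two points sit at 1/3 and 2/3 of the diagonal, so the endpoint-to-point and point-to-point distances coincide. A small Python sketch of this geometry and of bisection along the longest side follows; it illustrates only the sampling and subdivision steps, not the full BIRECT selection logic.

import numpy as np

def birect_sample_points(lo, hi):
    # Two points on the diagonal of the box [lo, hi], at 1/3 and 2/3 of the
    # diagonal: equidistant from each other and from the diagonal's endpoints.
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    return lo + (hi - lo) / 3.0, lo + 2.0 * (hi - lo) / 3.0

def bisect_box(lo, hi):
    # Bisect along the longest side (tie-breaking by first index is a
    # simplifying assumption).  Each parent sampling point falls inside one
    # of the two children, which is what enables point reuse.
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    i = int(np.argmax(hi - lo))
    mid = 0.5 * (lo[i] + hi[i])
    hi_left, lo_right = hi.copy(), lo.copy()
    hi_left[i], lo_right[i] = mid, mid
    return (lo, hi_left), (lo_right, hi)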

8.
Yang Minghan, Milzarek Andre, Wen Zaiwen, Zhang Tong. Mathematical Programming 2022, 194(1-2): 257-303

In this paper, a novel stochastic extra-step quasi-Newton method is developed to solve a class of nonsmooth nonconvex composite optimization problems. We assume that the gradient of the smooth part of the objective function can only be approximated by stochastic oracles. The proposed method combines general stochastic higher-order steps derived from an underlying proximal-type fixed-point equation with additional stochastic proximal gradient steps to guarantee convergence. Based on suitable bounds on the step sizes, we establish global convergence to stationary points in expectation, and an extension of the approach using variance reduction techniques is discussed. Motivated by large-scale and big data applications, we investigate a stochastic coordinate-type quasi-Newton scheme that generates cheap and tractable stochastic higher-order directions. Finally, numerical results on large-scale logistic regression and deep learning problems show that our proposed algorithm compares favorably with other state-of-the-art methods.


9.
We present a modification of the DIRECT (DIviding RECTangles) algorithm, called DIRECT-G, to solve a box-constrained global optimization problem arising in the detection of gravitational waves emitted by coalescing binary systems of compact objects. This is a hard problem, since the objective function is highly nonlinear and expensive to evaluate, has a huge number of local extrema, and its derivatives are unavailable. DIRECT samples the feasible domain over a set of points that becomes dense in the limit, thus ensuring everywhere-dense convergence; however, it becomes ineffective on significant instances of the problem under consideration, because it tends to produce a uniform coverage of the feasible domain, oversampling regions that are far from the optimal solution. We modify DIRECT by incorporating information provided by a suitable discretization of the feasible domain, based on signal theory, which takes into account the variability of the objective function. Numerical experiments show that DIRECT-G largely outperforms DIRECT and the grid search, the latter being the reference algorithm in the astrophysics community. Furthermore, DIRECT-G is comparable with a genetic algorithm specifically developed for the problem; however, DIRECT-G inherits the convergence properties of DIRECT, whereas the genetic algorithm has no guarantee of convergence.

10.
The penalty function is an important tool for solving constrained optimization problems in areas such as industrial design and management. In this paper, we study the exactness of an objective penalty function for inequality-constrained optimization and an algorithm based on it. In terms of exactness, this objective penalty function is at least as good as traditional exact penalty functions; in the case of a global solution, its exactness shows a significant advantage. We prove a necessary and sufficient stability condition for determining whether the objective penalty function is exact at a global solution. Based on the objective penalty function, an algorithm is developed for finding a global solution to an inequality-constrained optimization problem, and its global convergence is proved under some conditions. Furthermore, we prove a necessary and sufficient calmness condition for the exactness of the objective penalty function at a local solution, and present an algorithm for finding a local solution, with its convergence proved under some conditions. Finally, numerical experiments show that a satisfactory approximate optimal solution can be obtained by the proposed algorithm.
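For readers unfamiliar with penalty methods, the Python sketch below shows the generic mechanics of penalizing inequality constraints — a plain quadratic-penalty loop, used here only as a stand-in; the paper's objective penalty function is a different construction with exactness guarantees and is not reproduced.

import numpy as np
from scipy.optimize import minimize

def quadratic_penalty_solve(f, gs, x0, rho=1.0, growth=10.0, outer=8):
    # Textbook quadratic-penalty loop for  min f(x)  s.t.  g_i(x) <= 0:
    # minimize f + rho * sum(max(0, g_i)^2) and increase rho between rounds.
    x = np.asarray(x0, float)
    for _ in range(outer):
        def P(z):
            viol = np.array([max(0.0, g(z)) for g in gs])
            return f(z) + rho * np.sum(viol ** 2)
        x = minimize(P, x, method="Nelder-Mead").x   # derivative-free inner solve
        rho *= growth                                # tighten the penalty
    return x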

11.
In this paper we develop, analyze, and test a new algorithm for the global minimization of a function subject to simple bounds without the use of derivatives. The underlying algorithm is a pattern search method, more specifically a coordinate search method, which guarantees convergence to stationary points from arbitrary starting points. In the optional search phase of pattern search we apply a particle swarm scheme to globally explore the possible nonconvexity of the objective function. Our extensive numerical experiments showed that the resulting algorithm is highly competitive with other global optimization methods also based on function values. Support for Luís N. Vicente was provided by Centro de Matemática da Universidade de Coimbra and by FCT under grant POCI/MAT/59442/2004.

12.
In this paper we propose an algorithm that uses only the values of the objective function and constraints for solving one-dimensional global optimization problems where both the objective function and the constraints are Lipschitzian and nonlinear. The constrained problem is reduced to an unconstrained one by the index scheme. To solve the reduced problem, a new method is proposed with local tuning on the behavior of the objective function and constraints over different sectors of the search region. Sufficient conditions for global convergence are established, and results of numerical experiments are presented.

13.
In this paper we consider the problem of minimizing a nonlinear function using partial derivative knowledge: the objective function is such that its derivatives with respect to a pre-specified block of variables cannot be computed. To solve the problem, we propose a block decomposition method that takes advantage of both derivative-free and derivative-based iterations to account for the features of the objective function. Under standard assumptions, we prove global convergence of the method to stationary points of the problem.

14.
Polyak's subgradient algorithm for nondifferentiable optimization problems requires prior knowledge of the optimal value of the objective function to find an optimal solution. In this paper we extend the convergence properties of Polyak's subgradient algorithm with a fixed target value to the more general case of variable target values. We then provide a target-value updating scheme that finds an optimal solution without prior knowledge of the optimal objective value, prove its convergence, and report computational results. Most of this research was performed when the first author was visiting the Decision and Information Systems Department, College of Business, Arizona State University.
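A compact Python sketch of the idea follows: the classical Polyak step uses t_k = (f(x_k) - target) / ||g_k||^2, and here the target is f_best - delta with delta reduced when no improvement occurs. The specific reduction test below is a simple placeholder, not the paper's exact updating scheme.

import numpy as np

def polyak_variable_target(f, subgrad, x0, delta=1.0, iters=200, shrink=0.5):
    # Subgradient method with variable target values (minimization).
    x = np.asarray(x0, float)
    f_best, x_best = f(x), x.copy()
    for _ in range(iters):
        fx = f(x)
        if fx < f_best:
            f_best, x_best = fx, x.copy()
        else:
            delta *= shrink                   # placeholder target update
        g = np.asarray(subgrad(x), float)
        gn2 = float(g @ g)
        if gn2 == 0.0:
            break                             # 0 is a subgradient: x is optimal
        target = f_best - delta
        x = x - (fx - target) / gn2 * g       # Polyak step toward the target
    return x_best, f_best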

15.
In this paper we classify augmented Lagrangian functions into three unified classes. Based on two unified formulations, we construct two convergent augmented Lagrangian methods that do not require the global solvability of the Lagrangian relaxation and whose global convergence properties require neither the boundedness of the multiplier sequence nor any constraint qualification. In particular, when the sequence of iteration points does not converge, we give a sufficient and necessary condition for the convergence of the objective values of the iteration points. We further derive two multiplier algorithms that require the same convergence condition and possess the same properties as the proposed convergent augmented Lagrangian methods. The existence of a global saddle point is crucial to guarantee the success of a dual search; in the second half of this paper we generalize the existence theorems for a global saddle point in the literature under the framework of the unified classes of augmented Lagrangian functions.

16.
Differential evolution algorithms represent a modern and efficient way of solving complicated optimization tasks. In this article we concentrate on the ability of differential evolution algorithms to attain the global minimum of the cost function. We demonstrate that, although often described as a global optimizer, the classic differential evolution algorithm does not in general guarantee convergence to the global minimum. To remedy this weakness, we design a simple modification of the classic differential evolution algorithm that limits premature convergence to local minima and ensures asymptotic global convergence. We also introduce the concepts needed for the subsequent proof of the asymptotic global convergence of the modified algorithm. We test the classic and modified algorithms in numerical experiments and compare their efficiency in finding the global minimum. The tests confirm that the modified algorithm is significantly more efficient with respect to global convergence than the classic algorithm.
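As background for the discussion above, here is a Python sketch of the classic DE/rand/1/bin scheme, plus one common anti-stagnation tweak: with small probability the trial point is replaced by a uniform random point, which keeps every region reachable and is a standard way to obtain asymptotic global convergence. The article's own modification may differ; all parameters are illustrative.

import numpy as np

def de_rand_1_bin(f, bounds, pop=30, F=0.8, CR=0.9, gens=300,
                  p_uniform=0.02, rng=None):
    # bounds: sequence of (low, high) pairs, one per dimension.
    rng = rng or np.random.default_rng(0)
    lo, hi = np.asarray(bounds, float).T
    n = len(lo)
    X = lo + rng.random((pop, n)) * (hi - lo)
    fX = np.apply_along_axis(f, 1, X)
    for _ in range(gens):
        for i in range(pop):
            a, b, c = rng.choice([j for j in range(pop) if j != i], 3, replace=False)
            v = np.clip(X[a] + F * (X[b] - X[c]), lo, hi)   # mutation
            cross = rng.random(n) < CR
            cross[rng.integers(n)] = True
            u = np.where(cross, v, X[i])                    # binomial crossover
            if rng.random() < p_uniform:                    # global-convergence tweak
                u = lo + rng.random(n) * (hi - lo)
            fu = f(u)
            if fu <= fX[i]:                                 # greedy selection
                X[i], fX[i] = u, fu
    k = int(np.argmin(fX))
    return X[k], fX[k]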

17.
In this article, we consider two classes of discrete bilevel optimization problems that have the peculiarity that the lower-level variables do not affect the upper-level constraints. In the first case, the objective functions are linear and the variables are discrete at both levels; in the second case, only the lower-level variables are discrete, and the objective function of the lower level is linear while that of the upper level can be nonlinear. Algorithms for computing global optimal solutions using Branch and Cut and approximation of the optimal value function of the lower level are suggested. Their convergence is shown, and we illustrate each algorithm with an example.

18.
王福胜, 张瑞. 《计算数学》2018, 40(1): 49-62
For minimax problems with inequality constraints, drawing on the norm-relaxed strongly sub-feasible SQP method for general constrained optimization, we propose a new norm-relaxed strongly sub-feasible SQCQP algorithm for inequality-constrained minimax problems. First, by choosing a suitable penalty function in the QCQP subproblem, the feasibility of the algorithm and the descent property of the objective function F(x) are guaranteed; at the same time, the selection of the parameter α_k in the quadratic constraint term of the QCQP subproblem is simplified while preserving feasibility and convergence. Second, the step-size selection is simple and reasonable. Finally, under suitable assumptions, the algorithm is shown to be globally and strongly convergent. Preliminary numerical results indicate that the algorithm is feasible and effective.

19.
The augmented Lagrangian function is one of the most important tools for solving constrained optimization problems. In this article, we study an augmented Lagrangian objective penalty function and a modified augmented Lagrangian objective penalty function for inequality-constrained optimization problems. First, we prove the dual properties of the augmented Lagrangian objective penalty function, which are at least as good as those of the traditional Lagrangian function. Under some conditions, the saddle point of the augmented Lagrangian objective penalty function satisfies the first-order Karush-Kuhn-Tucker condition; in particular, for convex programming a saddle point exists when the Karush-Kuhn-Tucker condition holds. Second, we prove the dual properties of the modified augmented Lagrangian objective penalty function: for a global optimal solution, when the exactness of the modified augmented Lagrangian objective penalty function holds, its saddle point exists. A necessary and sufficient stability condition for determining whether the modified augmented Lagrangian objective penalty function is exact at a global solution is proved. Based on the modified augmented Lagrangian objective penalty function, an algorithm is developed for finding a global solution to an inequality-constrained optimization problem, and its global convergence is proved under some conditions. Furthermore, a necessary and sufficient calmness condition for the exactness of the modified augmented Lagrangian objective penalty function at a local solution is proved, and an algorithm for finding a local solution is presented, with its convergence proved under some conditions.

20.
This article proposes a variable selection method termed “subtle uprooting” for linear regression. Variable selection is formulated as a single optimization problem by approximating the cardinality term in the information criterion with a smooth function. A technical maneuver is then employed to enforce sparsity of the parameter estimates while maintaining smoothness of the objective function. To solve the resulting smooth nonconvex optimization problem, a modified Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm with established global and super-linear convergence is adopted. Both simulated experiments and an empirical example are provided for assessment and illustration. Supplementary materials for this article are available online.
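The general recipe — replace the cardinality term of an information criterion by a smooth surrogate and hand the result to a BFGS-type solver — can be sketched in a few lines of Python. The surrogate b^2/(b^2 + eps) and the BIC-like weighting below are illustrative assumptions; they are not the article's “subtle uprooting” construction or its modified BFGS method.

import numpy as np
from scipy.optimize import minimize

def smooth_l0_regression(X, y, lam=None, eps=1e-3):
    # Minimize  n*log(RSS/n) + lam * sum_j b_j^2/(b_j^2 + eps),  where the
    # second term smoothly approximates ||b||_0 as eps -> 0.  With
    # lam = log(n) this mimics a BIC-type criterion (assumption).
    n, p = X.shape
    lam = np.log(n) if lam is None else lam

    def obj(b):
        resid = y - X @ b
        return n * np.log(resid @ resid / n) + lam * np.sum(b**2 / (b**2 + eps))

    b0 = np.linalg.lstsq(X, y, rcond=None)[0]    # least-squares warm start
    return minimize(obj, b0, method="BFGS").x    # scipy's standard BFGS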
