Similar Literature
 A total of 20 similar documents were found (search time: 234 ms).
1.
In this article, we aim to extend the firefly algorithm (FA) to solve bound constrained mixed-integer nonlinear programming (MINLP) problems. An exact penalty continuous formulation of the MINLP problem is used. The continuous penalty problem is obtained by relaxing the integrality constraints and adding to the objective function a penalty term that penalizes violation of the integrality constraints. Two penalty terms are proposed: one based on the hyperbolic tangent function and the other on the inverse hyperbolic sine function. We prove that both penalties can be used to define the continuous penalty problem, in the sense that it is equivalent to the MINLP problem. The solutions of the penalty problem are obtained using a variant of the metaheuristic FA for global optimization. Numerical experiments on a set of benchmark problems analyze the quality of the obtained solutions and the convergence speed. We show that, in terms of convergence speed, the firefly penalty-based algorithm compares favourably with the penalty algorithm when the deterministic DIRECT solver or the simulated annealing solver is invoked.
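For intuition, a plausible form of the relaxed penalty problem is sketched below; the abstract does not spell out the exact penalty terms, so the distance measure and the parameters μ and λ are assumptions:

```latex
\min_{x \in [\ell,\, u]} \; f(x) \;+\; \mu \sum_{i \in I} \varphi\!\big(\,|x_i - \mathrm{round}(x_i)|\,\big),
\qquad
\varphi(t) = \tanh(\lambda t) \ \ \text{or}\ \ \varphi(t) = \operatorname{arcsinh}(\lambda t),
```

where I indexes the integer variables, μ > 0 is the penalty parameter, and λ > 0 controls how sharply deviations from integrality are penalized.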

2.
A New Class of Exact Penalty Functions for Smooth Optimization Problems with Equality Constraints
The penalty function method is one of the main approaches for converting a constrained optimization problem into an unconstrained one. A penalty function that involves no gradient information of the objective or constraint functions is called a simple penalty function. For traditional exact penalty functions, a simple penalty function is necessarily nonsmooth, and a smooth one is necessarily not simple. For equality constrained optimization problems, we propose a new class of simple penalty functions that control the penalty term through an additional variable. We prove the smoothness and exactness of this penalty function and give a penalty-function algorithm for solving equality constrained optimization problems. Numerical results show that the algorithm is viable for equality constrained optimization problems.

3.
The aim of this paper is to show that the new continuously differentiable exact penalty functions recently proposed in the literature can play an important role in the field of constrained global optimization. In fact, they allow us to transfer ideas and results proposed in unconstrained global optimization to the constrained case. First, drawing our inspiration from the unconstrained case and using the strong exactness properties of a particular continuously differentiable penalty function, we propose a sufficient condition for a local constrained minimum point to be global. Then we show that every constrained local minimum point satisfying the second-order sufficient conditions is an attraction point for a particular implementable minimization algorithm based on the considered penalty function. This result can be used to define new classes of global algorithms for the solution of general constrained global minimization problems. As an example, we describe a simulated annealing algorithm that produces a sequence of points converging in probability to a global minimum of the original constrained problem.

4.
Penalty functions are an important tool for solving constrained optimization problems in areas such as industrial design and management. In this paper, we study the exactness of, and an algorithm based on, an objective penalty function for inequality constrained optimization. In terms of exactness, this objective penalty function is at least as good as traditional exact penalty functions; in particular, for a global solution, its exactness shows a significant advantage. The sufficient and necessary stability condition used to determine whether the objective penalty function is exact for a global solution is proved. Based on the objective penalty function, an algorithm is developed for finding a global solution to an inequality constrained optimization problem, and its global convergence is proved under some conditions. Furthermore, the sufficient and necessary calmness condition for the exactness of the objective penalty function at a local solution is proved. An algorithm for finding a local solution is also presented, with its convergence proved under some conditions. Finally, numerical experiments show that a satisfactory approximate optimal solution can be obtained by the proposed algorithm.
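For orientation only (the abstract does not define the function, so the specific form below is an assumption based on a commonly used construction): an objective penalty function replaces the usual penalty multiplier on the constraints with a parameter M that targets the optimal objective value, for example

```latex
F(x, M) \;=\; \big(f(x) - M\big)^2 \;+\; \sum_{i=1}^{m} \big[\max\{0,\; g_i(x)\}\big]^2 ,
```

and the algorithm updates M iteratively; exactness then means that, for a suitable M, unconstrained minimizers of F(·, M) are exactly the solutions of the constrained problem.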

5.
Variable selection is an important aspect of high-dimensional statistical modeling, particularly in regression and classification. In the regularization framework, various penalty functions are used to perform variable selection by putting relatively large penalties on small coefficients. The L1 penalty is a popular choice because of its convexity, but it produces biased estimates for the large coefficients. The L0 penalty is attractive for variable selection because it directly penalizes the number of nonzero coefficients. However, the optimization involved is discontinuous and nonconvex, and therefore very challenging to implement; moreover, its solution may not be stable. In this article, we propose a new penalty that combines the L0 and L1 penalties. We implement this new penalty by developing a global optimization algorithm using mixed integer programming (MIP). We compare the combined penalty with several other penalties on simulated examples as well as real applications. The results show that the new penalty outperforms both the L0 and L1 penalties in terms of variable selection while maintaining good prediction accuracy.
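As a sketch of how such a combined penalty can be cast as a mixed integer program (the article's concrete formulation may differ; the big-M linearization below is an assumption), consider a linear model with coefficient vector β:

```latex
\min_{\beta,\, z}\; \|y - X\beta\|_2^2 \;+\; \lambda_0 \sum_{j=1}^{p} z_j \;+\; \lambda_1 \sum_{j=1}^{p} |\beta_j|
\quad\text{s.t.}\quad |\beta_j| \le M z_j,\quad z_j \in \{0,1\},\quad j=1,\dots,p,
```

where the binary variable z_j indicates whether β_j is nonzero, so the sum over z realizes the L0 term and the absolute-value sum realizes the L1 term.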

6.
Optimization, 2012, 61(7): 873–900
This article presents a numerical study of MIDACO, a new global optimization software package for mixed integer nonlinear programming (MINLP) based on ant colony optimization and the oracle penalty method. Extensive and rigorous numerical tests are performed on a set of 100 nonconvex MINLP benchmark problems from the open literature. Results obtained by MIDACO are compared directly with results from a recent study of state-of-the-art deterministic MINLP software on the same test set, and further comparisons with an established MINLP solver are also undertaken. The study shows that MIDACO is not only competitive with the established MINLP software but can even outperform it in terms of the number of global optimal solutions found. Moreover, the parallelization capabilities of MIDACO make it competitive with deterministic software even in terms of the number of (serially processed) function evaluations, while its black-box capabilities offer an intriguing new robustness for MINLP.

7.
In this paper, we present constrained simulated annealing (CSA), an algorithm that extends conventional simulated annealing to look for constrained local minima of nonlinear constrained optimization problems. The algorithm is based on the theory of extended saddle points (ESPs), which shows a one-to-one correspondence between a constrained local minimum and an ESP of the corresponding penalty function. CSA finds ESPs by systematically controlling probabilistic descents in the problem-variable subspace of the penalty function and probabilistic ascents in the penalty subspace. Based on the decomposition of the necessary and sufficient ESP condition into multiple necessary conditions, we present constraint-partitioned simulated annealing (CPSA), which exploits the locality of constraints in nonlinear optimization problems. CPSA has much lower complexity than CSA because it partitions the constraints of a problem into significantly simpler subproblems, solves each independently, and resolves the violated global constraints across the subproblems. We prove that both CSA and CPSA asymptotically converge to a constrained global minimum with probability one in discrete optimization problems. This result extends conventional simulated annealing (SA), which guarantees asymptotic convergence in discrete unconstrained optimization, to discrete constrained optimization; moreover, it establishes the condition under which optimal solutions can be found in constraint-partitioned nonlinear optimization problems. Finally, we evaluate CSA and CPSA by applying them to some continuous constrained optimization benchmarks and compare their performance to that of other penalty methods.
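To make the descent/ascent mechanism concrete, here is a minimal sketch (an illustration only, not the published CSA: the actual neighborhoods, cooling schedule, and acceptance rules are specified in the paper and are not reproduced here; f, g, x0, and lam0 are hypothetical inputs):

```python
import math
import random

def csa(f, g, x0, lam0, T0=1.0, cooling=0.95, iters_per_temp=100, n_temps=50):
    """Minimal sketch of constrained simulated annealing.

    f    -- objective function of a point (list of floats)
    g    -- list of equality-constraint functions, g_i(x) = 0 desired
    x0   -- initial point
    lam0 -- initial penalty multipliers, one per constraint
    """
    def penalty(x, lam):
        # L1-type penalty: objective plus multiplier-weighted constraint violations
        return f(x) + sum(l * abs(gi(x)) for l, gi in zip(lam, g))

    x, lam, T = list(x0), list(lam0), T0
    for _ in range(n_temps):
        for _ in range(iters_per_temp):
            if random.random() < 0.5:
                # probabilistic descent in the problem-variable subspace
                cand = [xi + random.gauss(0.0, 0.1) for xi in x]
                delta = penalty(cand, lam) - penalty(x, lam)
                if delta <= 0 or random.random() < math.exp(-delta / T):
                    x = cand
            else:
                # probabilistic ascent in the penalty (multiplier) subspace
                j = random.randrange(len(lam))
                cand = lam[:]
                cand[j] = max(0.0, cand[j] + random.gauss(0.0, 0.1))
                delta = penalty(x, cand) - penalty(x, lam)
                if delta >= 0 or random.random() < math.exp(delta / T):
                    lam = cand
        T *= cooling  # geometric cooling schedule
    return x, lam
```

The key point is that trial moves in the variable subspace are accepted when they decrease the penalty function, while trial moves in the multiplier subspace are accepted when they increase it, mirroring the saddle-point structure described above.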

8.
In this article, a smoothing objective penalty function for inequality constrained optimization problems is presented. The article proves that this type of smoothing objective penalty function has good properties for solving inequality constrained optimization problems. Moreover, based on the penalty function, an algorithm is presented for solving inequality constrained optimization problems, and its convergence is proved under some conditions. Two numerical experiments show that a satisfactory approximate optimal solution can be obtained by the proposed algorithm.

9.
For inequality constrained optimization problems, two function forms are given that smooth a lower-order exact penalty function by means of quadratic functions, yielding modified smoothed penalty functions. It is proved that, under certain conditions and for a sufficiently large penalty parameter, a global optimal solution of the modified smoothed penalty problem is a global optimal solution of the original optimization problem. Two numerical examples illustrate the effectiveness of the proposed smoothing method.

10.
Global Optimization Requires Global Information
There are many global optimization algorithms which do not use global information. We broaden previous results, showing limitations on such algorithms even when they are allowed to run forever. We show that deterministic algorithms must sample a dense set to find the global optimum value and can never be guaranteed to converge only to global optimizers. Analogous results show that introducing a stochastic element does not overcome these limitations; simulated annealing as used in practice is one example. Our results show that there are functions for which the probability of success is arbitrarily small.

11.
This paper is a critical survey of interval optimization methods aimed at computing global optima of multivariable functions. To overcome some drawbacks of traditional deterministic interval techniques, we outline ways of constructing stochastic (randomized) algorithms in interval global optimization, in particular those based on the ideas of random search and simulated annealing.

12.
In this paper, we reformulate a nonlinear semidefinite programming problem into an optimization problem with a matrix equality constraint. We apply a lower-order penalization approach to the reformulated problem. Necessary and sufficient conditions that guarantee the global (local) exactness of the lower-order penalty functions are derived. Convergence results of the optimal values and optimal solutions of the penalty problems to those of the original semidefinite program are established. Since the penalty functions may not be smooth or even locally Lipschitz, we invoke the Ekeland variational principle to derive necessary optimality conditions for the penalty problems. Under certain conditions, we show that any limit point of a sequence of stationary points of the penalty problems is a KKT stationary point of the original semidefinite program.

13.
In this two-part study, we develop a unified approach to the analysis of the global exactness of various penalty and augmented Lagrangian functions for constrained optimization problems in finite-dimensional spaces. This approach allows one to verify in a simple and straightforward manner whether a given penalty/augmented Lagrangian function is exact, i.e., whether the problem of unconstrained minimization of this function is equivalent (in some sense) to the original constrained problem, provided the penalty parameter is sufficiently large. Our approach is based on the so-called localization principle, which reduces the study of global exactness to a local analysis of a chosen merit function near globally optimal solutions. In turn, such local analysis can be performed with the use of optimality conditions and constraint qualifications. In this first paper, we introduce the concept of global parametric exactness and derive the localization principle in parametric form. With the use of this version of the localization principle, we recover existing simple necessary and sufficient conditions for the global exactness of linear penalty functions and for the existence of augmented Lagrange multipliers of the Rockafellar–Wets augmented Lagrangian. We also present completely new necessary and sufficient conditions for the global exactness of general nonlinear penalty functions and of a continuously differentiable penalty function for nonlinear second-order cone programming problems. We briefly discuss how one can construct a continuously differentiable exact penalty function for nonlinear semidefinite programming problems as well.

14.
This paper proposes a self-adaptive penalty function and presents a penalty-based algorithm for solving nonsmooth and nonconvex constrained optimization problems. We prove that the general constrained optimization problem is equivalent to a bound constrained problem in the sense that they have the same global solutions. The global minimizer of the penalty function subject to a set of bound constraints may be obtained by a population-based metaheuristic. Further, a hybrid self-adaptive penalty firefly algorithm, with a local intensification search, is designed, and its convergence analysis is established. The numerical experiments and a comparison with other penalty-based approaches show the effectiveness of the new self-adaptive penalty algorithm in solving constrained global optimization problems.
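As a rough illustration of how a penalty fitness can drive a firefly search (a sketch only: the paper's self-adaptive penalty and local intensification step are not reproduced, and mu, beta0, gamma, and alpha are assumed fixed parameters):

```python
import math
import numpy as np

def firefly_penalty(f, g, lb, ub, pop=25, iters=200, mu=100.0,
                    beta0=1.0, gamma=1.0, alpha=0.2, seed=0):
    """Rough sketch of a penalty-based firefly algorithm.

    f      -- objective function of a point (1-D numpy array)
    g      -- list of inequality constraints, g_i(x) <= 0 desired
    lb, ub -- lower/upper bound vectors
    mu     -- fixed penalty parameter (the paper adapts this; fixed here)
    """
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = lb.size
    X = rng.uniform(lb, ub, size=(pop, dim))

    def fitness(x):
        # static penalty: objective plus squared constraint violations
        viol = sum(max(0.0, gi(x)) ** 2 for gi in g)
        return f(x) + mu * viol

    fit = np.array([fitness(x) for x in X])
    for _ in range(iters):
        for i in range(pop):
            for j in range(pop):
                if fit[j] < fit[i]:  # firefly i is attracted to brighter firefly j
                    r2 = float(np.sum((X[i] - X[j]) ** 2))
                    beta = beta0 * math.exp(-gamma * r2)
                    step = alpha * (rng.random(dim) - 0.5) * (ub - lb)
                    X[i] = np.clip(X[i] + beta * (X[j] - X[i]) + step, lb, ub)
                    fit[i] = fitness(X[i])
        alpha *= 0.98  # gradually damp the random step
    best = int(np.argmin(fit))
    return X[best], fit[best]
```

Brighter (lower-penalty) fireflies attract the others; the attraction decays with distance through the exp(-gamma r^2) factor, while the random step keeps the population exploring.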

15.
A class of generalized variable penalty formulations for solving nonlinear programming problems is presented. The method poses a sequence of unconstrained optimization problems with mechanisms to control the quality of the approximation of the Hessian matrix, which is expressed in terms of the constraint functions and their first derivatives. The unconstrained problems are solved using a modified Newton algorithm. The method is particularly applicable to solution techniques in which an approximate analysis step has to be used (e.g., constraint approximations), which often results in violation of the constraints. The generalized penalty formulation contains two floating parameters, which are used to meet the penalty requirements and to control the errors in the approximation of the Hessian matrix. A third parameter is used to vary the class of standard barrier or quasibarrier functions, forming a branch of the variable penalty formulation. Several possibilities for choosing such floating parameters are discussed. The numerical effectiveness of the algorithm is demonstrated on a relatively large set of test examples.

16.
A Global Optimization Algorithm for Nonlinear Programming with Equality Constraints
For nonlinear programming problems with multiple equality constraints, a global optimization algorithm is proposed. Based on a feasible-set strategy, the method combines an improved simulated annealing method with a deterministic local algorithm. Convergence of the algorithm is proved, and numerical results demonstrate its effectiveness and correctness.

17.
This paper gives a smoothing approximation of a lower-order exact penalty function for inequality constrained optimization problems. An algorithm is proposed that obtains an approximate global solution of the original optimization problem by searching for a global solution of the smoothed penalty problem. Several numerical examples are given to illustrate the effectiveness of the proposed smoothing method.

18.
We present a method for deriving the optimal solution of a class of mathematical programming problems, associated with discrete-event systems and in particular with queueing models, while using a single sample path (a single simulation experiment) from the underlying process. Our method, called the score function method, is based on a probability measure transformation derived from the efficient score process, generating statistical counterparts to conventional deterministic optimization procedures (e.g. Lagrange multipliers, penalty functions, etc.). Applications of our method to the optimization of various discrete-event systems are presented, and numerical results are given.
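As a pointer to the underlying identity (the setting in the paper is broader, so this is only the basic single-sample-path idea), for an expected performance measure ℓ(θ) = E_{f(·;θ)}[L(X)] the score function method estimates the gradient from one simulation run as

```latex
\nabla_\theta \,\ell(\theta)
= \mathbb{E}_{f(\cdot;\theta)}\!\big[\, L(X)\, \nabla_\theta \ln f(X;\theta) \,\big]
\;\approx\; \frac{1}{N}\sum_{k=1}^{N} L(X_k)\, \nabla_\theta \ln f(X_k;\theta),
```

so the same sample path yields both the performance estimate and its sensitivities, which can then drive statistical analogues of Lagrange-multiplier or penalty-function procedures.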

19.
In this paper, an algorithm based on a barrier objective penalty function for inequality constrained optimization is studied, and the concept of the stability of the barrier objective penalty function is introduced. It is proved that an approximate optimal solution to an inequality constrained optimization problem may be obtained by solving a barrier objective penalty function when that function is stable. Under some conditions, the stability of the barrier objective penalty function is proved for convex programming; in particular, the logarithmic barrier function of convex programming is stable. Based on the barrier objective penalty function, an algorithm is developed for finding an approximate optimal solution to an inequality constrained optimization problem, and its convergence is proved under some conditions. Finally, numerical experiments show that the barrier objective penalty function algorithm has better convergence than the classical barrier function algorithm.
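For reference (the paper's barrier objective penalty function combines a barrier with the objective penalty idea and is not spelled out in this abstract), the classical logarithmic barrier function mentioned for convex programming has the form

```latex
B(x, \mu) \;=\; f(x) \;-\; \mu \sum_{i=1}^{m} \ln\!\big(-g_i(x)\big), \qquad \mu > 0,
```

defined on the strict interior {x : g_i(x) < 0 for all i}; driving μ toward 0 produces a sequence of unconstrained minimizers approaching the constrained solution.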

20.
The global solution of bilevel dynamic optimization problems is discussed. An overview of a deterministic algorithm for bilevel programs with nonconvex functions participating is given, followed by a summary of deterministic algorithms for the global solution of optimization problems with nonlinear ordinary differential equations embedded. Improved formulations for scenario-integrated optimization are proposed as bilevel dynamic optimization problems. Solution procedures for some of the problems are given, while for others open challenges are discussed. Illustrative examples are given.
