1.
Convergence and Application of a Decomposition Method Using Duality Bounds for Nonconvex Global Optimization
N.V. Thoai 《Journal of Optimization Theory and Applications》2002,113(1):165-193
The subject of this article is a class of global optimization problems, in which the variables can be divided into two groups such that, in each group, the functions involved have the same structure (e.g. linear, convex or concave, etc.). Based on the decomposition idea of Benders (Ref. 1), a corresponding master problem is defined on the space of one of the two groups of variables. The objective function of this master problem is in fact the optimal value function of a nonlinear parametric optimization problem. To solve the resulting master problem, a branch-and-bound scheme is proposed, in which the estimation of the lower bounds is performed by applying the well-known weak duality theorem in Lagrange duality. The results of this article concentrate on two subjects: investigating the convergence of the general algorithm and solving dual problems of some special classes of nonconvex optimization problems. Based on results in sensitivity and stability theory and in parametric optimization, conditions for the convergence are established by investigating the so-called dual properness property and the upper semicontinuity of the objective function of the master problem. The general algorithm is then discussed in detail for some nonconvex problems including concave minimization problems with a special structure, general quadratic problems, optimization problems on the efficient set, and linear multiplicative programming problems.
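The weak-duality lower bound mentioned in this abstract can be illustrated with a toy one-dimensional instance (this sketch is not the paper's algorithm; the functions f, g and the multiplier grid are hypothetical choices for illustration):

```python
# Toy instance: minimize f(x) subject to g(x) <= 0 on the box [-2, 2].
def f(x):
    return x**4 - 3 * x**2 + x

def g(x):
    return -x - 0.5          # feasible iff x >= -0.5

grid = [-2 + 4 * i / 10_000 for i in range(10_001)]

def dual_bound(lam):
    # Weak duality: for any lam >= 0, the box-minimum of the Lagrangian
    # f + lam * g is a valid lower bound on the constrained optimum.
    return min(f(x) + lam * g(x) for x in grid)

# Maximizing over a few multipliers tightens the bound (still a lower bound).
lower = max(dual_bound(lam) for lam in (0.0, 0.5, 1.0, 2.0, 4.0))

# Reference constrained optimum by feasible grid search.
best = min(f(x) for x in grid if g(x) <= 0)
```

For a nonconvex f a duality gap between `lower` and `best` may remain; in a branch-and-bound scheme the bound is recomputed on ever smaller boxes, which shrinks that gap.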
2.
In this work, we propose an optimization framework for designing under uncertainty that considers both robustness and reliability issues. This approach is generic enough to be applicable to engineering design problems involving nonconvex objective and constraint functions defined in terms of random variables that follow any distribution. The problem formulation employs an Inverse Reliability Strategy that uses percentile performance to address both robustness objectives and reliability constraints. Robustness is achieved through a design objective that evaluates performance variation as a percentile difference between the right and left tails of the specified goals. Reliability requirements are formulated as Inverse Reliability constraints that are based on equivalent percentile performance levels. The general proposed approach first approximates the formulated problem via a Gaussian Kriging model. This is then used to evaluate the percentile performance characteristics of the different measures inherent in the problem formulation for various design variable settings via a Most Probable Point of Inverse Reliability search algorithm. By using these percentile evaluations in concert with the response surface methodology, a polynomial programming approximation is generated. The resulting problem formulation is finally solved to global optimality using the Reformulation-Linearization Technique (RLT) approach. We demonstrate this overall proposed approach by applying it to solve the problem of reducing piston slap, an undesirable engine noise due to the secondary motion of a piston within a cylinder.
3.
Unconstrained and constrained global optimization of polynomial functions in one variable
In Floudas and Visweswaran (1990), a new global optimization algorithm (GOP) was proposed for solving constrained nonconvex problems involving quadratic and polynomial functions in the objective function and/or constraints. In this paper, the application of this algorithm to the special case of polynomial functions of one variable is discussed. The special nature of polynomial functions enables considerable simplification of the GOP algorithm. The primal problem is shown to reduce to a simple function evaluation, while the relaxed dual problem is equivalent to the simultaneous solution of two linear equations in two variables. In addition, the one-to-one correspondence between the x and y variables in the problem enables the iterative improvement of the bounds used in the relaxed dual problem. The simplified approach is illustrated through a simple example that shows the significant improvement in the underestimating function obtained from the application of the modified algorithm. The application of the algorithm to several unconstrained and constrained polynomial function problems is demonstrated.
4.
For a class of global optimization (maximization) problems with a separable non-concave objective function and a linear constraint, a computationally efficient heuristic has been developed. The concave relaxation of the global optimization problem is introduced, and an algorithm for solving this relaxation to optimality is presented. The optimal solution of the relaxation problem is shown to provide an upper bound for the optimal value of the objective function of the original global optimization problem. An easily checked sufficient optimality condition is formulated under which the optimal solution of the concave relaxation problem is also optimal for the corresponding non-concave problem. A heuristic algorithm for solving the considered global optimization problem is developed. The considered problem models a wide class of optimal distributions of a unidimensional resource over subsystems to provide maximum total output in a multicomponent system. In the presented computational experiments, the heuristic algorithm generated solutions that either met the optimality conditions or had objective function values with a negligible deviation from optimality (less than a tenth of a percent over the entire range of problems tested).
5.
The push for better understanding and design of complex systems requires the solution of challenging optimization problems with large numbers of decision variables. This note presents principled results demonstrating the scalable solution of a difficult test function on instances over a billion variables using a parallel implementation of a genetic algorithm (GA). The problem addressed is a noisy, blind problem over a vector of binary decision variables. Noise is added equaling a tenth of the deterministic objective function variance of the problem, thereby making it difficult for simple hillclimbers to find the optimal solution. The genetic algorithm used, the compact GA, is able to find the optimum in the presence of noise quickly, reliably, and accurately, and the solution scalability follows known convergence theories. These results on a noisy problem, together with other results on problems involving varying modularity, hierarchy, and overlap, foreshadow routine solution of billion-variable problems across the landscape of complexity science. © 2007 Wiley Periodicals, Inc. Complexity 12: 27–29, 2007
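The compact GA named in this abstract maintains a probability vector rather than a population. A minimal sketch on a small noisy OneMax instance (the problem size N, virtual population size POP, and iteration cap are illustrative choices, not the paper's billion-variable setup):

```python
import random

random.seed(0)

N = 32     # number of binary decision variables (toy size)
POP = 50   # virtual population size; each update moves a probability by 1/POP

def onemax(bits):
    return sum(bits)

def noisy_fitness(bits):
    # Additive Gaussian noise with variance equal to a tenth of the
    # deterministic OneMax variance (N/4), mirroring the abstract's setup.
    return onemax(bits) + random.gauss(0.0, (0.1 * N / 4) ** 0.5)

def sample(p):
    return [1 if random.random() < pi else 0 for pi in p]

p = [0.5] * N
for _ in range(50_000):
    a, b = sample(p), sample(p)
    if noisy_fitness(a) < noisy_fitness(b):
        a, b = b, a                      # a is the tournament winner
    for i in range(N):
        if a[i] != b[i]:                 # shift p_i toward the winner's bit
            p[i] += 1.0 / POP if a[i] == 1 else -1.0 / POP
            p[i] = min(1.0, max(0.0, p[i]))
    if all(pi in (0.0, 1.0) for pi in p):
        break

solution = [round(pi) for pi in p]
```

Because selection is biased toward 1-bits, the probability vector drifts toward the all-ones optimum despite the noise; a larger POP trades speed for reliability.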
6.
A deterministic global optimization algorithm is proposed for generalized geometric programming (GGP) problems, a class of optimization problems with wide applications in practice, such as engineering design and robust stability analysis of nonlinear systems. Using an exponential transformation and linear lower-bound estimates of the objective and constraint functions, a relaxed linear program (RLP) of the GGP is constructed. Through successive refinement of the feasible region of the RLP and the solution of a sequence of RLPs, the algorithm is proved to converge to the global optimum of the GGP. The algorithm is applied to a chemical engineering design problem, and the numerical experiments show that the method is feasible.
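The linear lower-bound estimate under the exponential transformation can be checked numerically (this is only the underestimation step, not the paper's full RLP construction; the exponents `a` and expansion point `y0` are hypothetical):

```python
import math
import random

# After the substitution x_i = exp(y_i), a monomial  prod x_i**a_i  becomes
# exp(sum a_i * y_i), which is convex in y. A convex function lies above
# every tangent plane, so the tangent supplies a linear lower estimate.

def monomial(y, a):
    return math.exp(sum(ai * yi for ai, yi in zip(a, y)))

def tangent(y, y0, a):
    f0 = monomial(y0, a)               # gradient of exp(a.y) at y0 is f0 * a
    return f0 + f0 * sum(ai * (yi - y0i) for ai, yi, y0i in zip(a, y, y0))

a, y0 = [1.5, -0.5], [0.2, -0.3]       # illustrative exponents / expansion point
random.seed(1)
ok = all(
    tangent(y, y0, a) <= monomial(y, a) + 1e-12
    for y in ([random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(1000))
)
```

Collecting one such tangent per nonlinear term yields the linear relaxation whose feasible region is then subdivided.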
7.
Many engineering optimization problems frequently encounter discrete variables as well as continuous variables and the presence of nonlinear discrete variables considerably adds to the solution complexity. Very few of the existing methods can find a globally optimal solution when the objective functions are non-convex and non-differentiable. In this paper, we present a mixed-variable evolutionary programming (MVEP) technique for solving these nonlinear optimization problems which contain integer, discrete, zero-one and continuous variables. The MVEP provides an improvement in global search reliability in a mixed-variable space and converges steadily to a good solution. An approach to handle various kinds of variables and constraints is discussed. Some examples of mixed-variable optimization problems in the literature are tested, which demonstrate that the proposed approach is superior to current methods for finding the best solution, in terms of both solution quality and algorithm robustness.
8.
9.
In this paper we propose a new algorithm for solving difficult large-scale global optimization problems. We draw our inspiration from the well-known DIRECT algorithm which, by exploiting the objective function behavior, produces a set of points that tries to cover the most interesting regions of the feasible set. Unfortunately, it is well known that this strategy suffers when the dimension of the problem increases. As a first step, we define a multi-start algorithm using DIRECT as a deterministic generator of starting points. The new algorithm then consists in repeatedly applying this multi-start algorithm on suitable modifications of the variable space that exploit the information gained during the optimization process. The efficiency of the new algorithm is demonstrated by extensive numerical experiments involving both standard test problems and the optimization of the Morse potential of molecular clusters.
10.
Ossama Abdelkhalik 《Journal of Optimization Theory and Applications》2013,156(2):450-468
This paper introduces the biologically inspired concept of hidden genes genetic algorithms; they search for optimal solutions to global optimization problems of multimodal objective functions with a variable number of design variables. A fixed chromosome length is assumed for all solutions in the population. Each chromosome is divided into effective and ineffective segments. The effective segment includes the design variables for that solution. The ineffective segment includes only hidden genes. Hidden genes are excluded from objective function evaluations. The effect of the hidden genes on the convergence of the genetic algorithm is studied. Two test cases are presented.
11.
For discontinuous unconstrained global optimization problems, a logarithmic variation integral is constructed and applied to their study and solution. Analytical properties of the logarithmic variation integral function are given, together with optimality conditions and a conceptual algorithm for the global optimization problem. Combined with Monte Carlo techniques, numerical experiments are carried out on three concrete instances with discontinuous objective functions in n = 100 variables; the computational results demonstrate the feasibility and effectiveness of the proposed method.
12.
Mixed-integer nonlinear programming (MINLP) problems involving general constraints and objective functions with continuous and integer variables occur frequently in engineering design, the chemical process industry, and management. Although many optimization approaches have been developed for MINLP problems, these methods can only handle signomial terms with positive variables or find a local solution. Therefore, this study proposes a novel method for solving a signomial MINLP problem with free variables to obtain a globally optimal solution. The signomial MINLP problem is first transformed into another one containing only positive variables. The transformed problem is then reformulated as a convex mixed-integer program by convexification strategies and piecewise linearization techniques. A global optimum of the signomial MINLP problem can finally be found within a tolerable error. Numerical examples are presented to demonstrate the effectiveness of the proposed method.
13.
Marian G. Marcovecchio María L. Bergamini Pio A. Aguirre 《Journal of Global Optimization》2006,34(3):339-368
A new algorithm to solve nonconvex NLP problems is presented. It is based on the solution of two problems. The reformulated problem (RP) is a suitable reformulation of the original problem and involves convex terms and concave univariate terms. The main problem (MP) is a nonconvex NLP that outer-approximates the feasible region and underestimates the objective function; it involves convex terms and terms that are products of concave univariate functions and new variables. Fixing the variables in the concave terms, a convex NLP that overestimates the feasible region and underestimates the objective function is obtained from the MP. Like most deterministic global optimization algorithms, bounds on all the variables in the nonconvex terms must be provided. MP forces the objective value to improve and drives the difference between the upper and lower bounds of all the variables either to zero or to a positive value. In the first case, a feasible solution of the original problem is reached and the objective function is improved. In general terms, the second case corresponds to an infeasible solution of the original problem due to the existence of gaps in some variables. A branching procedure is then performed in order to either prove that there is no better solution or reduce the domain, eliminating the local solution of MP that was found; the MP solution indicates a key point at which to branch. A bound reduction technique is implemented to accelerate convergence. Computational results demonstrate that the algorithm compares very favorably with other approaches when applied to test problems and process design problems: it is typically faster and produces very accurate results.
14.
Index branch-and-bound algorithm for Lipschitz univariate global optimization with multiextremal constraints
Yaroslav D. Sergeyev Domenico Famularo Paolo Pugliese 《Journal of Global Optimization》2001,21(3):317-341
In this paper, Lipschitz univariate constrained global optimization problems where both the objective function and constraints can be multiextremal are considered. The constrained problem is reduced to a discontinuous unconstrained problem by the index scheme without introducing additional parameters or variables. A Branch-and-Bound method that does not use derivatives for solving the reduced problem is proposed. The method either determines the infeasibility of the original problem or finds lower and upper bounds for the global solution. Not all the constraints are evaluated during every iteration of the algorithm, providing a significant acceleration of the search. Convergence conditions of the new method are established. Extensive numerical experiments are presented.
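The index scheme itself is more involved, but the derivative-free Lipschitz bounding underlying such methods can be sketched on an unconstrained multiextremal example (the test function and the Lipschitz constant below are illustrative, not from the paper):

```python
import heapq
import math

def lipschitz_minimize(f, a, b, L, tol=1e-4, max_iter=100_000):
    """Best-first branch-and-bound for an L-Lipschitz f on [a, b].

    On [lo, hi] the Lipschitz cones from both endpoints give the bound
    min f >= (f(lo) + f(hi)) / 2 - L * (hi - lo) / 2.
    """
    fa, fb = f(a), f(b)
    best = min(fa, fb)
    heap = [((fa + fb) / 2 - L * (b - a) / 2, a, b, fa, fb)]
    for _ in range(max_iter):
        bound, lo, hi, flo, fhi = heapq.heappop(heap)
        if best - bound < tol:          # smallest lower bound is tight enough
            break
        mid = (lo + hi) / 2
        fmid = f(mid)
        best = min(best, fmid)
        for l, h, fl, fh in ((lo, mid, flo, fmid), (mid, hi, fmid, fhi)):
            heapq.heappush(heap, ((fl + fh) / 2 - L * (h - l) / 2, l, h, fl, fh))
    return best

def f(x):
    return math.sin(x) + math.sin(3 * x)

best = lipschitz_minimize(f, 0.0, 2 * math.pi, L=4.0)  # |f'| <= |cos x| + 3 <= 4
ref = min(f(2 * math.pi * i / 100_000) for i in range(100_001))
```

The popped bound is always the smallest over all live intervals, so `best - bound` certifies the current gap to the global minimum.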
15.
This paper is a study of the one-dimensional global optimization problem for continuously differentiable functions. We propose a variant of the so-called P-algorithm, originally proposed for a Wiener process model of an unknown objective function. The original algorithm has proven to be quite effective for global search, though it is not efficient for the local component of the optimization search if the objective function is smooth near the global minimizer. In this paper we construct a P-algorithm for a stochastic model of continuously differentiable functions, namely the once-integrated Wiener process. This process is continuously differentiable, but nowhere does it have a second derivative. We prove convergence properties of the algorithm.
16.
A convergent decomposition algorithm for support vector machines
S. Lucidi L. Palagi A. Risi M. Sciandrone 《Computational Optimization and Applications》2007,38(2):217-234
In this work we consider nonlinear minimization problems with a single linear equality constraint and box constraints. In particular, we are interested in solving problems where the number of variables is so huge that traditional optimization methods cannot be directly applied. Many interesting real-world problems lead to large-scale constrained problems with this structure. For example, the special subclass of problems with a convex quadratic objective function plays a fundamental role in the training of Support Vector Machines, a widely used machine learning technique. For this particular subclass of convex quadratic problems, several convergent decomposition methods, based on the solution of a sequence of smaller subproblems, have been proposed. In this paper we define a new globally convergent decomposition algorithm that differs from the previous methods in the rule for choosing the subproblem variables and in the presence of a proximal point modification in the objective function of the subproblems. In particular, the new rule for sequentially selecting the subproblems appears to be suited to tackling large-scale problems, while the introduction of the proximal point term allows us to ensure global convergence of the algorithm for the general case of a nonconvex objective function. Furthermore, we report some preliminary numerical results on support vector classification problems with up to 100 thousand variables.
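The smallest decomposition subproblem for this constraint structure updates two variables at a time, SMO-style. A minimal sketch (this illustrates the generic pairwise step, not the authors' selection rule or proximal modification; the quadratic model min ½xᵀQx − eᵀx and the data below are hypothetical):

```python
def pairwise_pass(Q, x, C):
    """One sweep of pairwise updates for
    min 0.5*x'Qx - sum(x)  s.t.  sum(x) fixed, 0 <= x_k <= C.
    Holding x_i + x_j constant preserves the equality constraint, and the
    resulting 1-D convex subproblem has a closed-form clipped solution.
    """
    n = len(x)

    def grad(k):
        return sum(Q[k][m] * x[m] for m in range(n)) - 1.0

    for i in range(n):
        for j in range(i + 1, n):
            s = x[i] + x[j]
            eta = Q[i][i] + Q[j][j] - 2 * Q[i][j]   # curvature along the pair
            if eta <= 1e-12:
                continue
            t = x[i] - (grad(i) - grad(j)) / eta    # unclipped minimizer for x_i
            lo, hi = max(0.0, s - C), min(C, s)     # box + equality feasibility
            x[i] = min(hi, max(lo, t))
            x[j] = s - x[i]
    return x

def objective(Q, x):
    n = len(x)
    return 0.5 * sum(x[i] * Q[i][j] * x[j]
                     for i in range(n) for j in range(n)) - sum(x)

Q = [[2.0, 0.5, 0.0], [0.5, 2.0, 0.5], [0.0, 0.5, 2.0]]  # illustrative PSD matrix
x0 = [0.5, 0.5, 0.5]
before = objective(Q, x0)
x1 = pairwise_pass(Q, list(x0), C=1.0)
after = objective(Q, x1)
```

Each pairwise step solves its restriction exactly, so the objective is monotonically non-increasing while both constraints stay satisfied.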
17.
Zhiqing Meng Rui Shen Chuangyin Dang Min Jiang 《Numerical Functional Analysis & Optimization》2013,34(11):1471-1492
The augmented Lagrangian function is one of the most important tools for solving constrained optimization problems. In this article, we study an augmented Lagrangian objective penalty function and a modified augmented Lagrangian objective penalty function for inequality constrained optimization problems. First, we prove the dual properties of the augmented Lagrangian objective penalty function, which are at least as good as those of the traditional Lagrangian function. Under some conditions, the saddle point of the augmented Lagrangian objective penalty function satisfies the first-order Karush-Kuhn-Tucker condition; for convex programming, the Karush-Kuhn-Tucker condition is in turn sufficient for the existence of a saddle point. Second, we prove the dual properties of the modified augmented Lagrangian objective penalty function. For a global optimal solution, when the exactness of the modified augmented Lagrangian objective penalty function holds, its saddle point exists. Necessary and sufficient stability conditions for the modified augmented Lagrangian objective penalty function to be exact at a global solution are proved. Based on the modified augmented Lagrangian objective penalty function, an algorithm is developed to find a global solution of an inequality constrained optimization problem, and its global convergence is proved under some conditions. Furthermore, a necessary and sufficient calmness condition for the exactness of the modified augmented Lagrangian objective penalty function at a local solution is proved, and an algorithm for finding a local solution is presented, with its convergence proved under some conditions.
18.
A novel value-estimation function method for global optimization problems with inequality constraints is proposed in this paper. The value-estimation function formulation is an auxiliary unconstrained optimization problem with a univariate parameter that represents an estimated optimal value of the objective function of the original optimization problem. A solution is optimal to the original problem if and only if it is also optimal to the auxiliary unconstrained optimization with the parameter set at the optimal objective value of the original problem, which turns out to be the unique root of a basic value-estimation function. A logarithmic-exponential value-estimation function formulation is further developed to acquire computational tractability and efficiency. The optimal objective value of the original problem as well as the optimal solution are sought iteratively by applying either a generalized Newton method or a bisection method to the logarithmic-exponential value-estimation function formulation. The convergence properties of the solution algorithms guarantee the identification of an approximate optimal solution of the original problem, up to any predetermined degree of accuracy, within a finite number of iterations.
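A simplified min-max value-estimation construction in the same spirit (not the paper's logarithmic-exponential formulation; the instance, bracket, and grid minimizer are illustrative assumptions) shows how the optimal value appears as the root located by bisection:

```python
# For  min f(x)  s.t.  g(x) <= 0,  define
#   v(r) = min_x max(f(x) - r, g(x)).
# Then v(r) > 0 for every r below the optimal value and v(r) <= 0 at and
# above it, so the optimal objective value is found by bisection on r.

def f(x):
    return x * x

def g(x):
    return 1.0 - x        # feasible iff x >= 1, so the optimum is f(1) = 1

grid = [-3 + 6 * i / 20_000 for i in range(20_001)]

def v(r):
    # Inner unconstrained minimization, here by grid search for clarity.
    return min(max(f(x) - r, g(x)) for x in grid)

lo, hi = 0.0, 10.0        # bracket chosen so that v(lo) > 0 >= v(hi)
for _ in range(50):
    mid = (lo + hi) / 2
    if v(mid) > 0:
        lo = mid
    else:
        hi = mid
estimate = hi             # approximate optimal objective value
```

A minimizer of the auxiliary problem at r = `estimate` then approximates the optimal solution itself.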
19.
This two-part paper considers the classical problem of finding a truss design with minimal compliance subject to a given external force and a volume bound. The design variables describe the cross-section areas of the bars. While this problem is well studied for continuous bar areas, we treat here the case of discrete areas. This problem is of major practical relevance if the truss must be built from pre-produced bars with given areas. As a special case, we consider the design problem for a single bar area, i.e., a 0/1-problem.
In contrast to the heuristic methods considered in other approaches, Part I of the paper together with Part II presents an algorithmic framework for the calculation of a global optimizer of the underlying large-scale mixed-integer design problem. This framework is given by a convergent branch-and-bound algorithm based on solving a sequence of nonconvex continuous relaxations. The main point of the paper and of the approach is that the relaxed nonlinear optimization problem can be formulated as a quadratic program (QP). Here the paper generalizes and extends the available theory from the literature. Although the Hessian of this QP is indefinite, it is possible to circumvent the non-convexity and to calculate global optimizers. Moreover, the QPs to be treated in the branch-and-bound search tree differ from each other only in the objective function.
In Part I we give an introduction to the problem and collect all the theory and related proofs for the treatment of the original problem formulation and the continuous relaxed problems. The implementation details and convergence proof of the branch-and-bound methodology, together with large-scale numerical examples, are presented in Part II.