Similar Literature
20 similar documents found.
1.
This paper considers constrained and unconstrained parametric global optimization problems in a real Hilbert space. We assume that the gradient of the cost functional is Lipschitz continuous but not smooth. A suitable choice of parameters implies the linear or superlinear (supergeometric) convergence of the iterative method. From the numerical experiments, we conclude that our algorithm is faster than other existing algorithms for continuous but nonsmooth problems when applied to unconstrained global optimization problems. However, because we solve 2n + 1 subproblems for a large number n of independent variables, our algorithm is somewhat slower than other algorithms when applied to constrained global optimization. This work was partially supported by the NATO Outreach Fellowship - Mathematics 219.33. We thank Professor Hans D. Mittelmann, Arizona State University, for cooperation and support.

2.
Penalty algorithms for constrained minimax problems typically involve a sequence of unconstrained approximates which is pointwise monotone in each variable. This paper generalizes convergence results to a wider class of algorithms while imposing conditions which are close to being minimal. Sponsored in part by a grant from the Norwegian Council of Scientific and Technological Research.

3.
This paper proposes a way to combine the Mesh Adaptive Direct Search (MADS) algorithm, which extends the Generalized Pattern Search (GPS) algorithm, with the Variable Neighborhood Search (VNS) metaheuristic, for nonsmooth constrained optimization. The resulting algorithm retains the convergence properties of MADS, and allows the far-reaching exploration features of VNS to move away from local solutions. The paper also proposes a generic way to use surrogate functions in the VNS search. Numerical results illustrate the advantages and limitations of this method.

4.
In this work, we combine outer-approximation (OA) and bundle method algorithms for dealing with mixed-integer non-linear programming (MINLP) problems with nonsmooth convex objective and constraint functions. As the convergence analysis of OA methods relies strongly on the differentiability of the involved functions, OA algorithms may fail to solve general nonsmooth convex MINLP problems. In order to obtain OA algorithms that are convergent regardless of the structure of the convex functions, we solve the underlying OA non-linear subproblems by a specialized bundle method that provides the information necessary to cut off previously visited (non-optimal) integer points. This property is crucial for proving (finite) convergence of OA algorithms. We illustrate the numerical performance of the proposal on a class of hybrid robust and chance-constrained problems that involve a random variable with finite support.

5.
Nowadays, solving nonsmooth (not necessarily differentiable) optimization problems plays a very important role in many areas of industrial applications. Most of the algorithms developed so far deal only with nonsmooth convex functions. In this paper, we propose a new algorithm for solving nonsmooth optimization problems that are not assumed to be convex. The algorithm combines the traditional cutting plane method with some features of bundle methods and the search-direction calculation of the feasible direction interior point algorithm (Herskovits, J. Optim. Theory Appl. 99(1):121–146, 1998). The algorithm generates a sequence of points interior to the epigraph of the objective function. The accumulation points of this sequence are solutions to the original problem. We prove the global convergence of the method for locally Lipschitz continuous functions and give some preliminary results from numerical experiments.

6.
This paper defines a new filter method and proposes a filter QP-free infeasible method for solving smooth inequality-constrained optimization problems. Using multipliers and a piecewise linear nonlinear complementarity (NCP) function, we construct a nonsmooth system of equations that is equivalent to the first-order KKT conditions of the original constrained problem. On this basis, Newton and quasi-Newton iterations are used to obtain a solution satisfying the KKT optimality conditions, with a filter line-search method employed within the iterations. The algorithm is shown to be implementable and globally convergent. Moreover, under rather weak conditions the method can be shown to converge superlinearly.
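For reference, an NCP (nonlinear complementarity) function \(\varphi:\mathbb{R}^2\to\mathbb{R}\) is any function with \(\varphi(a,b)=0\) if and only if \(a\ge 0\), \(b\ge 0\), \(ab=0\). The abstract does not specify which piecewise linear NCP function is used; the standard piecewise linear example is \(\varphi_{\min}(a,b)=\min(a,b)\). With it, the first-order KKT conditions of \(\min f(x)\) subject to \(g_i(x)\le 0\), \(i=1,\dots,m\), become the nonsmooth system

\[\nabla f(x)+\sum_{i=1}^{m}\lambda_i\nabla g_i(x)=0,\qquad \varphi_{\min}\bigl(\lambda_i,\,-g_i(x)\bigr)=0,\quad i=1,\dots,m,\]

to which Newton or quasi-Newton steps can then be applied, as the abstract describes.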

7.
We present a new strategy for the constrained global optimization of expensive black box functions using response surface models. A response surface model is simply a multivariate approximation of a continuous black box function, used as a surrogate model for optimization in situations where function evaluations are computationally expensive. Prior global optimization methods that utilize response surface models were limited to box-constrained problems, but the new method can easily incorporate general nonlinear constraints. In the proposed method, which we refer to as the Constrained Optimization using Response Surfaces (CORS) method, the next point for costly function evaluation is chosen to be the one that minimizes the current response surface model subject to the given constraints and to additional constraints that the point be at some distance from previously evaluated points. The distance requirement is allowed to cycle, starting from a high value (global search) and ending with a low value (local search). The purpose of the distance constraint is to drive the method towards unexplored regions of the domain and to prevent premature convergence to a point that may not even be a local minimizer of the black box function. The new method can be shown to converge to the global minimizer of any continuous function on a compact set regardless of the response surface model that is used. Finally, we consider two particular implementations of the CORS method that utilize a radial basis function model (CORS-RBF) and apply them to the box-constrained Dixon–Szegö test functions and to a simple nonlinearly constrained test function. The results indicate that the CORS-RBF algorithms are competitive with existing global optimization algorithms for costly functions on the box-constrained test problems. The results also show that the CORS-RBF algorithms outperform other algorithms for constrained global optimization on the nonlinearly constrained test problem.
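A minimal Python sketch of the CORS selection step described above, assuming a cubic RBF surrogate with a linear tail and a candidate-sampling approximation of the constrained subproblem; the paper solves the subproblem as a true constrained optimization and also handles general nonlinear constraints, so every name and parameter below is illustrative only:

    import numpy as np

    def rbf_fit(X, fvals):
        # Cubic RBF interpolant with linear tail: s(x) = sum_i w_i ||x - x_i||^3 + b^T x + c
        n, d = X.shape
        r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
        P = np.hstack([X, np.ones((n, 1))])
        A = np.block([[r ** 3, P], [P.T, np.zeros((d + 1, d + 1))]])
        coef = np.linalg.solve(A, np.concatenate([fvals, np.zeros(d + 1)]))
        w, tail = coef[:n], coef[n:]
        return lambda x: w @ np.linalg.norm(x - X, axis=1) ** 3 + tail[:d] @ x + tail[d]

    def cors_next_point(X, fvals, lb, ub, beta, n_cand=5000, rng=None):
        # Minimize the surrogate over candidates that keep a distance of at least
        # beta * (box diagonal) from all previously evaluated points.
        rng = np.random.default_rng(rng)
        s = rbf_fit(X, fvals)
        delta = beta * np.linalg.norm(ub - lb)
        C = rng.uniform(lb, ub, size=(n_cand, len(lb)))
        dist = np.linalg.norm(C[:, None, :] - X[None, :, :], axis=2).min(axis=1)
        far = C[dist >= delta]
        if len(far) == 0:                  # threshold too tight: take the farthest candidate
            return C[np.argmax(dist)]
        return far[np.argmin([s(x) for x in far])]

In a full implementation, beta cycles through a decreasing sequence ending near zero, so that the search alternates between global exploration and local refinement as the abstract describes.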

8.
Nonsmooth optimization via quasi-Newton methods
We investigate the behavior of quasi-Newton algorithms applied to minimize a nonsmooth function f, not necessarily convex. We introduce an inexact line search that generates a sequence of nested intervals containing a set of points of nonzero measure that satisfy the Armijo and Wolfe conditions if f is absolutely continuous along the line. Furthermore, the line search is guaranteed to terminate if f is semi-algebraic. It seems quite difficult to establish a convergence theorem for quasi-Newton methods applied to such general classes of functions, so we give a careful analysis of a special but illuminating case, the Euclidean norm, in one variable using the inexact line search and in two variables assuming that the line search is exact. In practice, we find that when f is locally Lipschitz and semi-algebraic with bounded sublevel sets, the BFGS (Broyden–Fletcher–Goldfarb–Shanno) method with the inexact line search almost always generates sequences whose cluster points are Clarke stationary and with function values converging R-linearly to a Clarke stationary value. We give references documenting the successful use of BFGS in a variety of nonsmooth applications, particularly the design of low-order controllers for linear dynamical systems. We conclude with a challenging open question.
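A minimal Python sketch of an inexact line search enforcing the Armijo and weak Wolfe conditions by bracketing and bisection, in the spirit of the abstract; this is an illustration under standard assumptions (f and grad supplied by the caller, d a descent direction), not the authors' implementation:

    import numpy as np

    def weak_wolfe_line_search(f, grad, x, d, c1=1e-4, c2=0.9, t=1.0, max_iter=50):
        f0 = f(x)
        g0 = grad(x) @ d                  # directional derivative; assumed negative
        lo, hi = 0.0, np.inf
        for _ in range(max_iter):
            if f(x + t * d) > f0 + c1 * t * g0:
                hi = t                    # Armijo (sufficient decrease) fails: shrink
            elif grad(x + t * d) @ d < c2 * g0:
                lo = t                    # weak Wolfe (curvature) fails: expand
            else:
                return t                  # both conditions hold
            t = 2.0 * lo if np.isinf(hi) else 0.5 * (lo + hi)
        return t                          # budget exhausted; return last trial step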

9.
This paper investigates the global convergence of trust region (TR) methods for solving nonsmooth minimization problems. For a class of nonsmooth objective functions called regular functions, conditions are found on the TR local models that imply three fundamental convergence properties. These conditions are shown to be satisfied by appropriate forms of Fletcher's TR method for solving constrained optimization problems, Powell and Yuan's TR method for solving nonlinear fitting problems, Zhang, Kim and Lasdon's successive linear programming method for solving constrained problems, Duff, Nocedal and Reid's TR method for solving systems of nonlinear equations, and El Hallabi and Tapia's TR method for solving systems of nonlinear equations. Thus our results can be viewed as a unified convergence theory for TR methods for nonsmooth problems. Research supported by AFOSR 89-0363, DOE DEFG05-86ER25017 and ARO 9DAAL03-90-G-0093.

10.
This paper presents an algorithmic solution, the adaptive projected subgradient method, to the problem of asymptotically minimizing a certain sequence of non-negative continuous convex functions over the fixed point set of a strongly attracting nonexpansive mapping in a real Hilbert space. The method generalizes Polyak's subgradient algorithm for the convexly constrained minimization of a fixed nonsmooth function. By generating a strongly convergent and asymptotically optimal point sequence, the proposed method not only offers unifying principles for many projection-based adaptive filtering algorithms but also equips adaptive filtering methods with the armory of set-theoretic estimation, by allowing a variety of a priori information about the estimandum to be exploited, for example in the form of multiple intersecting closed convex sets.
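As context for the generalization described above, a minimal Python sketch of Polyak's projected subgradient method for a fixed nonsmooth convex function (the special case the adaptive method extends); the optimal value f_star is assumed known here, and the projection operator is supplied by the caller:

    import numpy as np

    def polyak_projected_subgradient(f, subgrad, project, x0, f_star, n_iter=1000):
        x = x0
        for _ in range(n_iter):
            g = subgrad(x)
            if np.allclose(g, 0.0):
                break                                        # 0 in the subdifferential: optimal
            x = project(x - (f(x) - f_star) / (g @ g) * g)   # Polyak step, then project
        return x

For example, for minimization over the unit box one would take project = lambda x: np.clip(x, 0.0, 1.0).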

11.
This paper describes two new harmony search (HS) meta-heuristic algorithms for engineering optimization problems with continuous design variables. The key difference between these algorithms and the traditional HS method lies in the way the bandwidth (bw) is adjusted. The bandwidth is a very important factor for the efficiency of harmony search algorithms and can be used to tune their rate of convergence to the optimal solution. The first algorithm, proposed harmony search (PHS), introduces a new definition of the bandwidth. The second algorithm, improving proposed harmony search (IPHS), is designed to enhance the accuracy and convergence rate of the PHS algorithm. In IPHS, a non-uniform mutation operation is introduced, which combines the Yang bandwidth and the PHS bandwidth. Various engineering optimization problems, including mathematical function minimization problems and structural engineering optimization problems, are presented to demonstrate the effectiveness and robustness of these algorithms. In all cases, the solutions obtained using IPHS agree with or improve on those obtained by other methods.
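For contrast with PHS/IPHS, a minimal Python sketch of the classical harmony search improvisation loop with a fixed bandwidth; the HMCR, PAR, and bw values are illustrative defaults, not the adaptive bandwidth rules proposed in the paper:

    import numpy as np

    def harmony_search(f, lb, ub, hms=20, hmcr=0.9, par=0.3, bw=0.05, n_iter=2000, rng=None):
        rng = np.random.default_rng(rng)
        d = len(lb)
        hm = rng.uniform(lb, ub, size=(hms, d))        # harmony memory
        vals = np.array([f(x) for x in hm])
        for _ in range(n_iter):
            new = np.empty(d)
            for j in range(d):
                if rng.random() < hmcr:                # memory consideration
                    new[j] = hm[rng.integers(hms), j]
                    if rng.random() < par:             # pitch adjustment via bandwidth bw
                        new[j] += bw * (ub[j] - lb[j]) * rng.uniform(-1.0, 1.0)
                else:                                  # random re-initialization
                    new[j] = rng.uniform(lb[j], ub[j])
            new = np.clip(new, lb, ub)
            v = f(new)
            worst = np.argmax(vals)
            if v < vals[worst]:                        # replace the worst harmony
                hm[worst], vals[worst] = new, v
        best = np.argmin(vals)
        return hm[best], vals[best]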

12.
We introduce a new barrier function to build new interior-point algorithms for solving optimization problems with bounded variables. First, we show that this function is a (3/2)n-self-concordant barrier for the unit hypercube [0,1]^n, thus assuring the polynomial complexity of the related algorithms. Second, using the Hessian metric of that barrier, we present new explicit algorithms from the point of view of Riemannian geometry applications. Third, we prove that the central path defined by the new barrier for a certain class of linearly constrained convex problems maintains most of the properties of the central path defined by the usual logarithmic barrier. We also present a primal long-step path-following algorithm with complexity similar to that of the classical barrier. Finally, we introduce a new proximal-point Bregman-type algorithm to solve linear problems in [0,1]^n and prove its convergence. P.R. Oliveira was partially supported by CNPq/Brazil.

13.
In this paper, we prove new complexity bounds for methods of convex optimization based only on computation of the function value. The search directions of our schemes are normally distributed random Gaussian vectors. It appears that such methods usually need at most n times more iterations than the standard gradient methods, where n is the dimension of the space of variables. This conclusion is true for both nonsmooth and smooth problems. For the latter class, we also present an accelerated scheme with the expected rate of convergence \(O\Big ({n^2 \over k^2}\Big )\), where k is the iteration counter. For stochastic optimization, we propose a zero-order scheme and justify its expected rate of convergence \(O\Big ({n \over k^{1/2}}\Big )\). We also give some bounds for the rate of convergence of the random gradient-free methods to stationary points of nonconvex functions, for both smooth and nonsmooth cases. Our theoretical results are supported by preliminary computational experiments.
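A minimal Python sketch of the random gradient-free oracle the abstract describes: a forward-difference estimate along a random Gaussian direction, used in place of the gradient; the smoothing parameter mu and step size h below are illustrative:

    import numpy as np

    def gaussian_gradient_free_step(f, x, h, mu=1e-6, rng=None):
        rng = np.random.default_rng(rng)
        u = rng.standard_normal(x.shape)         # random Gaussian search direction
        g = (f(x + mu * u) - f(x)) / mu * u      # finite-difference directional estimate
        return x - h * g                         # gradient-free descent step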

14.
In this paper we introduce into nonsmooth optimization theory in Banach algebras a new class of mathematical programming problems, which generalizes the notion of smooth KT-(p,r)-invexity. In fact, this paper focuses on the optimality conditions for optimization problems in Banach algebras, regarding the generalized KT-(p,r)-invexity notion and Kuhn–Tucker points.

15.
This paper describes two optimal subgradient algorithms for solving structured large-scale convex constrained optimization. More specifically, the first algorithm is optimal for smooth problems with Lipschitz continuous gradients and for Lipschitz continuous nonsmooth problems, and the second algorithm is optimal for Lipschitz continuous nonsmooth problems. In addition, we consider two classes of problems: (i) a convex objective with a simple closed convex domain, where the orthogonal projection onto this feasible domain is efficiently available; and (ii) a convex objective with a simple convex functional constraint. If we equip our algorithms with an appropriate prox-function, then the associated subproblem can be solved either in a closed form or by a simple iterative scheme, which is especially important for large-scale problems. We report numerical results for some applications to show the efficiency of the proposed schemes.

16.
Optimization, 2012, 61(6): 945–962
Typically, practical optimization problems involve nonsmooth functions of hundreds or thousands of variables. As a rule, the variables in such problems are restricted to certain meaningful intervals. In this article, we propose an efficient adaptive limited memory bundle method for large-scale nonsmooth, possibly nonconvex, bound constrained optimization. The method combines the nonsmooth variable metric bundle method and the smooth limited memory variable metric method, while the constraint handling is based on the projected gradient method and the dual subspace minimization. The preliminary numerical experiments to be presented confirm the usability of the method.

17.
We propose a family of retrospective optimization (RO) algorithms for optimizing stochastic systems with both integer and continuous decision variables. The algorithms are continuous search procedures embedded in an RO framework using dynamic simplex interpolation (RODSI). By decreasing the dimension of the simplex (corresponding to the continuous variables), the retrospective solutions move closer to an optimizer of the objective function. We present convergence results of RODSI algorithms for stochastic “convex” systems. Numerical results show that a simple implementation of RODSI algorithms significantly outperforms random search algorithms such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO).

18.
We consider a class of nonsmooth convex optimization problems where the objective function is the composition of a strongly convex differentiable function with a linear mapping, regularized by the group reproducing kernel norm. This class of problems arises naturally from applications in group Lasso, a popular technique for variable selection. An effective approach to solving such problems is the proximal gradient method. In this paper we derive efficient algorithms for this class of convex problems and theoretically analyze the convergence of the algorithm and its subalgorithm.
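A minimal Python sketch of a proximal gradient iteration for a group-Lasso-type problem min_x loss(x) + lam * sum_G ||x_G||_2: the proximal operator of the group norm is blockwise soft-thresholding. The loss gradient, group index sets, and step size are supplied by the caller (step = 1/L for an L-Lipschitz gradient is the standard choice):

    import numpy as np

    def group_soft_threshold(x, groups, t):
        # prox of t * sum_G ||x_G||_2: shrink the norm of each group by t
        out = x.copy()
        for G in groups:
            nrm = np.linalg.norm(x[G])
            out[G] = 0.0 if nrm <= t else (1.0 - t / nrm) * x[G]
        return out

    def proximal_gradient_group_lasso(grad_loss, x0, groups, lam, step, n_iter=500):
        x = x0
        for _ in range(n_iter):
            x = group_soft_threshold(x - step * grad_loss(x), groups, step * lam)
        return x

For least squares one would take grad_loss = lambda x: A.T @ (A @ x - b), with groups given as index arrays such as [np.arange(0, 3), np.arange(3, 7)].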

19.
This paper deals with new variable-metric algorithms for nonsmooth optimization problems, the so-called adaptive algorithms. The essence of these algorithms is that two gradient algorithms work simultaneously: the first in the main space and the second in the space of the matrices that modify the main variables. The convergence of these algorithms is proved for several cases. The results of numerical experiments are also given.

20.
We extend the classical affine scaling interior trust region algorithm for linearly constrained smooth minimization to the nonsmooth case where the gradient of the objective function is only locally Lipschitzian. We propose and analyze a new affine scaling trust-region method, combined with a nonmonotone interior backtracking line-search technique, for solving linearly constrained LC^1 optimization problems, in which the first-order derivative of the objective function is required to be locally Lipschitzian. The general trust region subproblem in the proposed algorithm is defined by minimizing an augmented affine scaling quadratic model, which requires both first- and second-order information of the objective function, subject only to an affine scaling ellipsoidal constraint in a null subspace of the augmented equality constraints. The global convergence and fast local convergence rate of the proposed algorithm are established under reasonable conditions, without requiring twice differentiability of the objective function. Applications of the algorithm to some nonsmooth optimization problems are discussed.
