Similar Articles
20 similar articles found.
1.
N. Karmitsa, Optimization, 2016, 65(8): 1599–1614
Typically, practical nonsmooth optimization problems involve functions of hundreds of variables. Moreover, in many practical problems the computation of even one subgradient is difficult or impossible. In such cases, the usual subgradient-based optimization methods cannot be used, but derivative-free methods remain applicable since they do not require explicit computation of subgradients. In this paper, we propose an efficient diagonal discrete gradient bundle method for derivative-free, possibly nonconvex, nonsmooth minimization. Convergence of the proposed method is proved for semismooth functions, which are not necessarily differentiable or convex. The method is implemented in Fortran 95, and numerical experiments confirm its usability and efficiency, especially for large-scale problems.

2.
Optimization, 2012, 61(6): 945–962
Typically, practical optimization problems involve nonsmooth functions of hundreds or thousands of variables. As a rule, the variables in such problems are restricted to certain meaningful intervals. In this article, we propose an efficient adaptive limited memory bundle method for large-scale nonsmooth, possibly nonconvex, bound-constrained optimization. The method combines the nonsmooth variable metric bundle method and the smooth limited memory variable metric method, while the constraint handling is based on the projected gradient method and dual subspace minimization. Preliminary numerical experiments confirm the usability of the method.
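The projected gradient portion of the constraint handling is easy to illustrate: projection onto bound constraints is a componentwise clipping operation. A minimal sketch (the function name and the use of NumPy are ours, not the paper's):

```python
import numpy as np

def project_onto_bounds(x, lower, upper):
    """Project x onto the box {z : lower <= z <= upper},
    the elementary operation behind projected gradient steps
    for bound-constrained problems."""
    return np.minimum(np.maximum(x, lower), upper)

# Example: one projected (sub)gradient step with step size t
# x_new = project_onto_bounds(x - t * g, lower, upper)
```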

3.
Many practical optimization problems involve nonsmooth (that is, not necessarily differentiable) functions of thousands of variables. In the paper [Haarala, Miettinen, Mäkelä, Optimization Methods and Software, 19 (2004), pp. 673–692] we described an efficient method for large-scale nonsmooth optimization. In this paper, we introduce a new variant of this method and prove its global convergence for locally Lipschitz continuous objective functions, which are not necessarily differentiable or convex. In addition, we give some encouraging results from numerical experiments.

4.
This paper investigates the global convergence of trust region (TR) methods for solving nonsmooth minimization problems. For a class of nonsmooth objective functions called regular functions, conditions are found on the TR local models that imply three fundamental convergence properties. These conditions are shown to be satisfied by appropriate forms of Fletcher's TR method for solving constrained optimization problems, Powell and Yuan's TR method for solving nonlinear fitting problems, Zhang, Kim and Lasdon's successive linear programming method for solving constrained problems, Duff, Nocedal and Reid's TR method for solving systems of nonlinear equations, and El Hallabi and Tapia's TR method for solving systems of nonlinear equations. Thus our results can be viewed as a unified convergence theory for TR methods for nonsmooth problems. Research supported by AFOSR 89-0363, DOE DEFG05-86ER25017 and ARO 9DAAL03-90-G-0093.
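For readers unfamiliar with the common mechanism behind all of these methods, a generic trust region step tests how well the local model predicted the actual reduction and updates the radius accordingly. A sketch of this standard loop, not of any particular method above (the threshold 0.75 and the factors 2.0/0.5 are conventional illustrative choices, not the paper's):

```python
def trust_region_step(f, model, x, s, delta, eta=0.1):
    """One generic trust region acceptance test.
    model(s) approximates f(x + s); s solves the TR subproblem
    with ||s|| <= delta. Returns the new iterate and radius."""
    actual = f(x) - f(x + s)             # actual reduction
    predicted = model(0 * s) - model(s)  # reduction predicted by the model
    rho = actual / predicted             # agreement ratio
    if rho >= eta:                       # sufficient agreement: accept step
        x = x + s
        if rho >= 0.75:
            delta *= 2.0                 # model is good: enlarge region
    else:
        delta *= 0.5                     # poor agreement: shrink region
    return x, delta
```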

5.
The aim of this paper is to propose a new multiple subgradient descent bundle method for solving unconstrained convex nonsmooth multiobjective optimization problems. In contrast to many existing multiobjective optimization methods, our method treats the objective functions as they are, without employing scalarization in the classical sense. The main idea is to find descent directions for every objective function separately by utilizing the proximal bundle approach, and then to combine them into a common descent direction for all objectives. In addition, we prove that the method is convergent and finds weakly Pareto optimal solutions. Finally, some numerical experiments are reported.
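A classical way to form a common descent direction from per-objective subgradients, which conveys the idea even though the paper's proximal bundle construction is more refined, is to take the minimum-norm element of their convex hull. A hedged sketch (solver choice and names are ours):

```python
import numpy as np
from scipy.optimize import minimize

def common_descent_direction(subgradients):
    """Given one subgradient per objective (rows of G), find the
    minimum-norm convex combination; its negative is a direction of
    simultaneous descent whenever the minimum norm is positive."""
    G = np.asarray(subgradients, dtype=float)
    m = G.shape[0]
    res = minimize(
        lambda lam: 0.5 * np.dot(lam @ G, lam @ G),  # squared norm of combination
        x0=np.full(m, 1.0 / m),                      # start at uniform weights
        bounds=[(0.0, 1.0)] * m,
        constraints=[{"type": "eq", "fun": lambda lam: lam.sum() - 1.0}],
    )
    return -(res.x @ G)  # candidate common descent direction
```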

6.
In this paper we present a new approach for constructing subgradient schemes for different types of nonsmooth problems with convex structure. Our methods are primal-dual in the sense that they are always able to generate a feasible approximation to the optimum of an appropriately formulated dual problem. Besides other advantages, this useful feature provides the methods with a reliable stopping criterion. The proposed schemes differ from the classical approaches (divergent series methods, mirror descent methods) by the presence of two control sequences. The first sequence is responsible for aggregating the support functions in the dual space, and the second one establishes a dynamically updated scale between the primal and dual spaces. This additional flexibility allows us to guarantee boundedness of the sequence of primal test points even when the feasible set is unbounded (however, we always assume uniform boundedness of the subgradients). We present variants of the subgradient schemes for nonsmooth convex minimization, minimax problems, saddle point problems, variational inequalities, and stochastic optimization. In all situations our methods are proved to be optimal from the viewpoint of worst-case black-box lower complexity bounds.
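In the simplest Euclidean, unconstrained setting such a scheme reduces to weighted dual averaging. The following is our simplified sketch, with prox function d(x) = ||x − x0||²/2 and concrete illustrative choices of the two control sequences (not the paper's exact parameters):

```python
import numpy as np

def dual_averaging(subgrad, x0, iters=1000):
    """Weighted dual averaging sketch: the first control sequence (a_k)
    aggregates subgradients in the dual space, the second (beta_k) sets
    a growing scale between the dual and primal spaces. With the prox
    function d(x) = ||x - x0||^2 / 2, the primal update has closed form."""
    x0 = np.asarray(x0, dtype=float)
    x = x0.copy()
    s = np.zeros_like(x0)                # aggregated dual information
    x_sum, a_sum = np.zeros_like(x0), 0.0
    for k in range(1, iters + 1):
        a_k = 1.0                        # aggregation weight (first sequence)
        beta_k = np.sqrt(k)              # primal-dual scale (second sequence)
        s += a_k * subgrad(x)
        x = x0 - s / beta_k              # argmin_x <s, x> + beta_k * d(x)
        x_sum += a_k * x
        a_sum += a_k
    return x_sum / a_sum                 # weighted average of the test points
```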

7.
Typically, exact information about the whole subdifferential is not available for intrinsically nonsmooth objective functions such as marginal functions. In these cases, the semismoothness of the objective function cannot be proved or is even violated, so standard nonsmooth methods cannot be used. In this paper, we propose a new approach for developing a convergent descent method for this class of nonsmooth functions. The approach is based on continuous outer subdifferentials, which we introduce here. On this basis, we formulate a conceptual optimization algorithm and prove its global convergence. This yields a constructive framework enabling us to create a convergent descent method. Within the algorithmic framework, neither semismoothness nor the calculation of exact subgradients is required. This is in contrast to other approaches, which are usually based on the assumption of semismoothness of the objective function.

8.
A new derivative-free method is developed for solving unconstrained nonsmooth optimization problems. The method is based on the notion of a discrete gradient. It is demonstrated that discrete gradients can be used to approximate subgradients of a broad class of nonsmooth functions, and that they can be applied to find descent directions of nonsmooth functions. Preliminary results of numerical experiments with unconstrained nonsmooth optimization problems are presented, along with a comparison of the proposed method with the nonsmooth optimization solver DNLP from CONOPT-GAMS and the derivative-free optimization solver CONDOR.
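The discrete gradient is built from finite differences along suitably chosen directions; the actual construction (due to Bagirov and co-authors) is more elaborate, but a crude coordinate-wise sketch of the underlying idea is:

```python
import numpy as np

def finite_difference_estimate(f, x, h=1e-6):
    """Crude forward-difference vector. For smooth f this approximates
    the gradient; discrete gradients refine this idea so that the
    estimates converge to elements of the subdifferential for a broad
    class of nonsmooth functions."""
    x = np.asarray(x, dtype=float)
    g = np.empty_like(x)
    fx = f(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - fx) / h
    return g
```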

9.
In this paper, the rotated cone fitting problem is considered. When the measured data are generally accurate and the surface must be fitted within an expected error bound, it is more appropriate to use the l∞ norm than the l2 norm. Fitting rotated cones in the l∞ norm requires minimizing, under bound constraints, the maximum of nonsmooth functions involving both absolute value and square root terms. Although this is a low-dimensional problem, in some practical applications it is necessary to fit large...

10.
In this paper, the rotated cone fitting problem is considered. When the measured data are generally accurate and the surface must be fitted within an expected error bound, it is more appropriate to use the l∞ norm than the l2 norm. Fitting rotated cones in the l∞ norm requires minimizing, under bound constraints, the maximum of nonsmooth functions involving both absolute value and square root terms. Although this is a low-dimensional problem, in some practical applications a large number of cones must be fitted repeatedly; moreover, when a large amount of measured data is fitted to one rotated cone, the number of components in the maximum function is large. It is therefore necessary to develop efficient solution methods. To solve such optimization problems efficiently, a truncated smoothing Newton method is presented. First, by applying an aggregate smoothing technique to the maximum function as well as to the absolute value function, and a smoothing function to the square root function, a monotonic and uniformly smooth approximation to the objective function is constructed. Using this smooth approximation, a smoothing Newton method can be used to solve the problem. Then, to reduce the computational cost, a truncated aggregate smoothing technique is applied, yielding the truncated smoothing Newton method, in which only a small subset of the component functions is aggregated at each iteration point, so that the computational cost is considerably reduced.
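The aggregate (log-sum-exp type) smoothing of a finite maximum and the usual smoothings of the absolute value and square root can be written, for a smoothing parameter μ > 0, roughly as follows; this is our reconstruction of the standard formulas, not necessarily the exact variants used in the paper:

```latex
% Aggregate (log-sum-exp) smoothing of a finite maximum:
f_\mu(x) = \mu \ln \sum_{j=1}^{m} \exp\!\bigl( f_j(x)/\mu \bigr),
\qquad
\max_j f_j(x) \;\le\; f_\mu(x) \;\le\; \max_j f_j(x) + \mu \ln m .

% Standard smoothings of the absolute value and the square root:
|t| \approx \sqrt{t^2 + \mu^2},
\qquad
\sqrt{s} \approx \sqrt{s + \mu} .
```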

11.
In this paper, a functional inequality constrained optimization problem is studied using a discretization method and an adaptive scheme. The problem is discretized by partitioning the interval of the independent parameter. Two methods for treating the discretized optimization problem are investigated. In the first method, the discretized problem is converted into an optimization problem with a single nonsmooth equality constraint. Since this equality constraint is nonsmooth and does not satisfy the usual constraint qualification condition, relaxation and smoothing techniques are used to approximate it via a smooth inequality constraint. This leads to a sequence of approximate smooth optimization problems with one constraint, and an adaptive scheme is incorporated to facilitate the computation of the sum in the inequality constraint. The second method applies an adaptive scheme directly to the discretized problem, yielding a sequence of optimization problems with a small number of inequality constraints. Convergence analysis for both methods is established. Numerical examples show that each of the two proposed methods has its own advantages and disadvantages over the other.
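A rough illustration of the first method, under our assumption that the single equality constraint aggregates the pointwise constraint violations over the discretization grid and that max{0, ·} is smoothed in one standard way:

```latex
% Functional constraint: g(x,t) <= 0 for all t in [a,b].
% Discretize at grid points t_1, ..., t_N and aggregate the violations
% into a single (nonsmooth) equality constraint:
\sum_{i=1}^{N} \max\{0,\; g(x, t_i)\} \;=\; 0 .
% Relax and smooth (one standard choice, with parameters \mu, \varepsilon > 0):
\sum_{i=1}^{N} \tfrac{1}{2}\Bigl( g(x, t_i) + \sqrt{g(x, t_i)^2 + \mu^2} \Bigr) \;\le\; \varepsilon .
```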

12.
A convex nonsmooth optimization problem is replaced by a sequence of line search problems along recursively updated rays. Convergence of the method is proved, and relations to existing methods are discussed.

13.
In this paper we present a new memory gradient method with trust region for unconstrained optimization problems. The method combines line search and trust region techniques to generate new iterates at each iteration, and therefore enjoys the advantages of both. Because it makes full use of multi-step information from previous iterations while avoiding the storage and computation of matrices associated with the Hessian of the objective function, it is suitable for large-scale optimization problems, and the extra information from past steps enables fast, effective, and robust algorithms. We also design an implementable version of the method and analyze its global convergence under weak conditions. Numerical experiments show that the new method is effective, stable, and robust in practical computation compared with other similar methods.

14.
New Bundle Methods for Solving Lagrangian Relaxation Dual Problems
Bundle methods have been used frequently to solve nonsmooth optimization problems. In these methods, subgradient directions from past iterations are accumulated in a bundle, and a trial direction is obtained by performing quadratic programming based on the information contained in the bundle. A line search is then performed along the trial direction, generating a serious step if the function value is sufficiently improved, or a null step otherwise. Bundle methods have been used to maximize the nonsmooth dual function in Lagrangian relaxation for integer optimization problems, where the subgradients are obtained by minimizing the performance index of the relaxed problem. This paper improves bundle methods by making good use of near-minimum solutions that are obtained while solving the relaxed problem. The bundle information is thus enriched, leading to better search directions and fewer null steps. Furthermore, a simplified bundle method is developed, in which a fuzzy rule is used to combine directions from near-minimum solutions linearly, replacing quadratic programming and line search. When the simplified bundle method is specialized to an important class of problems where the relaxed problem can be solved by dynamic programming, fuzzy dynamic programming is developed to obtain near-optimal solutions and their weights for the linear combination efficiently. This method is then applied to job shop scheduling problems, leading to better performance than previously reported in the literature.
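The serious/null step logic described above is the core of every proximal bundle iteration. The following is a naive, self-contained sketch for a convex objective; a real implementation would solve the quadratic subproblem with a dedicated QP solver rather than the generic stand-in used here, and all names are ours:

```python
import numpy as np
from scipy.optimize import minimize

def proximal_bundle(f, subgrad, x0, t=1.0, m=0.1, iters=50, tol=1e-8):
    """Minimal proximal bundle sketch (convex f). The bundle stores
    linearizations f(y_i) + <g_i, x - y_i>; the trial point minimizes
    their max plus a proximal term. Serious step if the actual decrease
    reaches a fraction m of the decrease predicted by the model."""
    x = np.asarray(x0, dtype=float)
    bundle = [(x.copy(), f(x), subgrad(x))]
    for _ in range(iters):
        def model(z):  # piecewise-linear cutting-plane model of f
            return max(fy + g @ (z - y) for y, fy, g in bundle)
        res = minimize(lambda z: model(z) + np.sum((z - x) ** 2) / (2 * t),
                       x, method="Nelder-Mead")  # stand-in for the QP
        y = res.x
        predicted = f(x) - model(y)          # model-predicted decrease
        if predicted < tol:
            break
        if f(x) - f(y) >= m * predicted:     # sufficient decrease: serious step
            x = y
        bundle.append((y, f(y), subgrad(y)))  # a null step still enriches the bundle
    return x
```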

15.
A new approximation method is presented for directly minimizing a composite nonsmooth function that is locally Lipschitzian. The method approximates only the generalized gradient vector, enabling us to directly use well-developed smooth optimization algorithms for solving composite nonsmooth optimization problems. The generalized gradient vector is approximated on each design variable coordinate by using only the active components of the subgradient vectors; its usability is then validated numerically via the Pareto optimum concept. To show the performance of the proposed method, we solve four academic composite nonsmooth optimization problems and two dynamic response optimization problems with multiple criteria. Specifically, the optimization results of the two dynamic response problems are compared with those obtained by three typical multicriteria optimization strategies: the weighting method, the distance method, and the min–max method, which introduces an artificial design variable to replace the max-value cost function with additional inequality constraints. The comparisons show that the proposed approximation method gives more accurate and efficient results than the other methods.
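The min–max strategy mentioned last admits a compact statement: the nonsmooth cost max_i f_i(x) is replaced by an artificial variable β and inequality constraints,

```latex
\min_{x,\,\beta} \; \beta
\qquad \text{subject to} \qquad
f_i(x) \le \beta , \quad i = 1, \dots, m ,
```

which is smooth whenever each f_i is, at the cost of one extra variable and m constraints.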

16.
An efficient algorithm for solving nonlinear programs with noisy equality constraints is introduced and analyzed. The unknown exact constraints are replaced by surrogates based on the bundle idea, a well-known strategy from nonsmooth optimization. This concept allows us to compute the surrogates quickly by solving simple quadratic optimization problems, to control the memory needed by the algorithm, and to prove differentiability properties of the surrogate functions. The latter aspect allows us to invoke a sequential quadratic programming method. The overall algorithm is of the quasi-Newton type. Besides convergence theorems, qualification results are given and numerical test runs are discussed.

17.
Distributed consensus optimization has received considerable attention in recent years, and several distributed consensus-based algorithms have been proposed for (nonsmooth) convex and (smooth) nonconvex objective functions. However, the behavior of these distributed algorithms on nonconvex, nonsmooth, and stochastic objective functions is not well understood. This class of functions and the distributed setting are motivated by several applications, including problems in machine learning and signal processing. This paper presents the first convergence analysis of the decentralized stochastic subgradient method for such classes of problems, over networks modeled as fixed, undirected graphs.
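A generic form of the decentralized stochastic subgradient iteration analyzed in this line of work alternates a consensus (mixing) step with a local stochastic subgradient step. A sketch under our assumptions (doubly stochastic mixing matrix W, diminishing step sizes; the paper's exact update may differ):

```python
import numpy as np

def decentralized_subgradient(W, stoch_subgrads, X0, steps=1000):
    """Each agent i averages its neighbors' iterates through the mixing
    matrix W (respecting the fixed undirected graph), then takes a step
    along its own stochastic subgradient."""
    X = np.asarray(X0, dtype=float)           # row i = iterate of agent i
    for k in range(1, steps + 1):
        G = np.vstack([g(X[i]) for i, g in enumerate(stoch_subgrads)])
        X = W @ X - (1.0 / np.sqrt(k)) * G    # mix, then local subgradient step
    return X.mean(axis=0)                     # network average as output
```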

18.
We consider a convexification method for a class of nonsmooth monotone functions. Specifically, we prove that a semismooth monotone function can be converted into a convex function via certain convexification transformations. The results derived in this paper lay a theoretical basis for extending the reach of convexification methods in monotone optimization to nonsmooth situations. Communicated by X. Q. Yang. This research was partially supported by the National Natural Science Foundation of China under Grants 70671064 and 60473097 and by the Research Grants Council of Hong Kong under Grant CUHK 4214/01E.

19.
Global Interval Methods for Local Nonsmooth Optimization
An interval method for determining local solutions of nonsmooth unconstrained optimization problems is discussed. The objective function is assumed to be locally Lipschitz and to have appropriate interval inclusions. The method consists of two parts, a local search and a global continuation and termination. The local search is a globally convergent descent algorithm showing similarities to ε-bundle methods. While ε-bundle methods use polytopes as inner approximations of the ε-subdifferentials, which are the main tools of almost all bundle concepts, our method uses axis-parallel boxes as outer approximations of the ε-subdifferentials. The boxes are determined almost automatically with inclusion techniques of interval arithmetic. The dimension of the boxes equals the dimension of the problem and remains constant during the whole computation. Unlike polytopes, the boxes require no methodical or computational effort to adapt them to the latest state of the computation, nor to simplify them when the number of vertices becomes too large. The second part of the method applies interval techniques of global optimization to the approximate local solution obtained from the first part in order to determine guaranteed error bounds or to improve the solution if necessary. We present prototype algorithms for both parts of the method as well as a complete convergence theory for them, and demonstrate how outer approximations can be obtained.
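Interval arithmetic, the tool used to compute the outer box enclosures, evaluates an expression over intervals so that the result is guaranteed to contain the true range. A minimal sketch (our illustration, far simpler than the inclusion techniques used in the paper):

```python
class Interval:
    """Minimal interval arithmetic: enough to produce outer enclosures
    (componentwise, axis-parallel boxes) of the ranges of simple
    expressions, the basic tool behind interval inclusions."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))
    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# Example: an outer enclosure of x*(x+1) over x in [-1, 2]
x = Interval(-1.0, 2.0)
print(x * (x + Interval(1.0, 1.0)))  # [-3, 6] encloses the true range [-0.25, 6]
```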

20.
We propose an inexact proximal bundle method for constrained nonsmooth nonconvex optimization problems whose objective and constraint functions are known through oracles that provide inexact information. The errors in function and subgradient evaluations might be unknown, but they are bounded. To handle the nonconvexity, we use the redistributed idea, with the additional difficulty of inexactness in the available information. We further employ a modified improvement function to address the difficulties caused by the constraint functions. The numerical results show the good performance of our inexact method on a large class of nonconvex optimization problems. The approach is also assessed on semi-infinite programming problems, and some encouraging numerical experience is reported.
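For context, the classical (unmodified) improvement function for a constrained problem min{f(x) : c(x) ≤ 0}, built at the current stability center x̂, is

```latex
H(x; \hat x) \;=\; \max\bigl\{\, f(x) - f(\hat x),\; c(x) \,\bigr\} ,
\qquad c(x) = \max_j c_j(x) ,
```

whose minimization either drives the objective below f(x̂) or reduces the constraint violation; the paper's modified variant additionally accounts for the inexact oracle information.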

