Similar Literature
20 related documents found.
1.
Nonsmooth optimization problems are divided into two categories. The first is composite nonsmooth problems, where the generalized gradient can be approximated by information available at the current point. The second is basic nonsmooth problems, where the generalized gradient must be approximated using information calculated at previous iterates. Methods for minimizing composite nonsmooth problems, where the nonsmooth function is made up of a finite number of smooth functions, and in particular max functions, are considered. A descent method which uses an active set strategy, a nonsmooth line search, and a quasi-Newton approximation to the reduced Hessian of a Lagrangian function is presented. The theoretical properties of the method are discussed and favorable numerical experience on a wide range of test problems is reported. This work was carried out at the University of Dundee from 1976–1979 and at the University of Kentucky at Lexington from 1979–1980. The provision of facilities in both universities is gratefully acknowledged, as well as the support of NSF Grant No. ECS-79-23272 for the latter period. The first author also wishes to acknowledge financial support from a George Murray Scholarship from the University of Adelaide and a University of Dundee Research Scholarship for the former period.
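For orientation, the finite-max composite structure considered here can be written generically (a generic statement of the problem class, not the paper's exact notation) as
\[
\min_{x \in \mathbb{R}^n} \; f(x) = \max_{1 \le i \le m} f_i(x),
\]
with each \(f_i\) smooth; more generally, the composite form is \(f(x) = h\bigl(c(x)\bigr)\) with \(c\) smooth and \(h\) convex and polyhedral (a max function being the prototypical case).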

2.
Yang, Minghan; Milzarek, Andre; Wen, Zaiwen; Zhang, Tong. Mathematical Programming, 2022, 194(1-2): 257-303

In this paper, a novel stochastic extra-step quasi-Newton method is developed to solve a class of nonsmooth nonconvex composite optimization problems. We assume that the gradient of the smooth part of the objective function can only be approximated by stochastic oracles. The proposed method combines general stochastic higher-order steps derived from an underlying proximal-type fixed-point equation with additional stochastic proximal gradient steps to guarantee convergence. Based on suitable bounds on the step sizes, we establish global convergence to stationary points in expectation, and an extension of the approach using variance reduction techniques is discussed. Motivated by large-scale and big data applications, we investigate a stochastic coordinate-type quasi-Newton scheme that makes it possible to generate cheap and tractable stochastic higher-order directions. Finally, numerical results on large-scale logistic regression and deep learning problems show that our proposed algorithm compares favorably with other state-of-the-art methods.
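For context, a standard fixed-point characterization of stationarity for a composite objective \(\psi(x) = f(x) + \varphi(x)\), with \(f\) smooth and \(\varphi\) nonsmooth, has the form (the precise equation exploited in the paper may differ)
\[
x^{*} = \operatorname{prox}_{\lambda \varphi}\bigl(x^{*} - \lambda \nabla f(x^{*})\bigr), \qquad \lambda > 0,
\]
where \(\operatorname{prox}_{\lambda\varphi}(z) = \arg\min_{y} \bigl\{\varphi(y) + \tfrac{1}{2\lambda}\|y - z\|^{2}\bigr\}\).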


3.
The complementarity problem is theoretically and practically useful, and has been used to study and formulate various equilibrium problems arising in economics and engineering. Recently, various equivalent equation formulations have been proposed for solving complementarity problems and seem attractive. However, such formulations have the difficulty that the equation arising from complementarity problems is typically nonsmooth. In this paper, we propose a new smoothing Newton method for nonsmooth equations. In our method, we use an approximation function that is smooth when the approximation parameter is positive, and which coincides with the original nonsmooth function when the parameter is zero. Then, we apply Newton's method to the equation that is equivalent to the original nonsmooth equation and that includes the approximation parameter as a variable. The proposed method has the advantage that it only has to deal with a smooth function at each iteration and that it never requires a procedure for decreasing the approximation parameter. We show that the sequence generated by the proposed method is globally convergent to a solution and that, under a semismoothness assumption, its convergence rate is superlinear. Moreover, we apply the method to nonlinear complementarity problems. Numerical results show that the proposed method is practically efficient.
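One commonly used smoothing of the complementarity conditions (a possible choice; the specific approximation function used in the paper may differ) is the smoothed Fischer–Burmeister function
\[
\phi_{\mu}(a, b) = a + b - \sqrt{a^{2} + b^{2} + 2\mu^{2}},
\]
which is smooth for \(\mu > 0\) and reduces to the nonsmooth Fischer–Burmeister function \(\phi_{0}(a,b) = a + b - \sqrt{a^{2} + b^{2}}\) at \(\mu = 0\); the zeros of \(\phi_{0}\) are exactly the pairs with \(a \ge 0\), \(b \ge 0\), \(ab = 0\).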

4.
In this paper, a subgradient-type method for solving nonsmooth multiobjective optimization problems on Riemannian manifolds is proposed and analyzed. This method extends, to the multicriteria case, the classical subgradient method for real-valued minimization proposed by Ferreira and Oliveira (J. Optim. Theory Appl. 97:93–104, 1998). The sequence generated by the method converges to a Pareto optimal point of the problem, provided that the sectional curvature of the manifold is nonnegative and the multicriteria function is convex.
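A typical iteration of a subgradient method on a Riemannian manifold \(M\) (a generic form; the exact update analyzed in the paper may differ) maps a subgradient from the tangent space back to the manifold via the exponential map:
\[
x_{k+1} = \exp_{x_k}\!\bigl(-t_k\, g_k\bigr), \qquad g_k \in \partial f(x_k) \subset T_{x_k} M,
\]
with step sizes \(t_k > 0\) chosen, for example, so that \(\sum_k t_k = \infty\) and \(\sum_k t_k^{2} < \infty\).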

5.
The forward–backward splitting method (FBS) for minimizing a nonsmooth composite function can be interpreted as a (variable-metric) gradient method over a continuously differentiable function which we call the forward–backward envelope (FBE). This makes it possible to extend algorithms for smooth unconstrained optimization and apply them to nonsmooth (possibly constrained) problems. Since the FBE can be computed by simply evaluating forward–backward steps, the resulting methods rely on a similar black-box oracle as FBS. We propose an algorithmic scheme that enjoys the same global convergence properties as FBS when the problem is convex, or when the objective function possesses the Kurdyka–Łojasiewicz property at its critical points. Moreover, when quasi-Newton directions are used, the proposed method achieves superlinear convergence provided that the usual second-order sufficiency conditions on the FBE hold at the limit point of the generated sequence. Such conditions translate into milder requirements on the original function involving generalized second-order differentiability. We show that BFGS fits our framework and that the limited-memory variant L-BFGS is well suited for large-scale problems, greatly outperforming FBS or its accelerated version in practice, as well as ADMM and other problem-specific solvers. The analysis of superlinear convergence is based on an extension of the Dennis–Moré theorem for the proposed algorithmic scheme.
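As a rough illustration of the forward–backward oracle that both FBS and FBE-based methods evaluate (a minimal sketch under assumed names, not the paper's algorithm), a single forward–backward step for min f(x) + λ‖x‖₁ looks like this:

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t*||.||_1 (soft-thresholding).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def forward_backward_step(x, grad_f, gamma, lam):
    # Forward (gradient) step on the smooth part f, then
    # backward (proximal) step on the nonsmooth part lam*||.||_1.
    return soft_threshold(x - gamma * grad_f(x), gamma * lam)

# Example: f(x) = 0.5*||A x - b||^2, so grad_f(x) = A^T (A x - b).
A = np.array([[1.0, 0.5], [0.2, 1.0]])
b = np.array([1.0, -0.5])
grad_f = lambda x: A.T @ (A @ x - b)
x = np.zeros(2)
for _ in range(200):
    x = forward_backward_step(x, grad_f, gamma=0.5, lam=0.1)
print(x)
```

The point of the FBE construction is that repeated evaluation of exactly this kind of step also yields a smooth surrogate on which quasi-Newton machinery can operate.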

6.
Optimization, 2012, 61(3): 283-304
Given a convex vector optimization problem with respect to a closed ordering cone, we show the connectedness of the efficient and properly efficient sets. The Arrow–Barankin–Blackwell theorem is generalized to nonconvex vector optimization problems, and the connectedness results are extended to convex transformable vector optimization problems. In particular, we show the connectedness of the efficient set if the target function f is continuously transformable, and of the properly efficient set if f is differentiably transformable. Moreover, we show the connectedness of the efficient and properly efficient sets for quadratic quasiconvex multicriteria optimization problems.

7.
In this paper, we consider nonsmooth vector variational-like inequalities and nonsmooth vector optimization problems. By using the scalarization method, we define nonsmooth variational-like inequalities by means of the Clarke generalized directional derivative and study their relations with the vector optimization problems and the scalarized optimization problems. Some existence results for solutions of our nonsmooth variational-like inequalities are presented under dense pseudomonotonicity or pseudomonotonicity assumptions.
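The Clarke generalized directional derivative referred to here is the standard one: for a locally Lipschitz function \(f\),
\[
f^{\circ}(x; d) = \limsup_{y \to x,\; t \downarrow 0} \frac{f(y + t d) - f(y)}{t}.
\]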

8.
In this work we propose a Cauchy-like method for solving smooth unconstrained vector optimization problems. When the partial order under consideration is the one induced by the nonnegative orthant, we regain the steepest descent method for multicriteria optimization recently proposed by Fliege and Svaiter. We prove that every accumulation point of the generated sequence satisfies a certain first-order necessary condition for optimality, which extends to the vector case the well-known "gradient equals zero" condition for real-valued minimization. Finally, under some reasonable additional hypotheses, we prove (global) convergence to a weak unconstrained minimizer. As a by-product, we show that the problem of finding a weak constrained minimizer can be viewed as a particular case of the so-called Abstract Equilibrium problem.
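In the multicriteria steepest descent method of Fliege and Svaiter that this work builds on, the search direction at \(x\) is typically obtained from a subproblem of the form (a standard formulation; details may vary)
\[
d(x) = \arg\min_{d \in \mathbb{R}^n} \; \max_{i = 1, \dots, m} \nabla f_i(x)^{\top} d + \tfrac{1}{2}\|d\|^{2},
\]
and \(x\) is Pareto critical precisely when \(d(x) = 0\), which is the vector-valued analogue of the "gradient equals zero" condition.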

9.

In this study, we consider two classes of multicriteria two-stage stochastic programs in finite probability spaces with multivariate risk constraints. The first-stage problem features multivariate stochastic benchmarking constraints based on a vector-valued random variable representing multiple and possibly conflicting stochastic performance measures associated with the second-stage decisions. In particular, the aim is to ensure that the decision-based random outcome vector of interest is preferable to a specified benchmark with respect to the multivariate polyhedral conditional value-at-risk or a multivariate stochastic order relation. In this case, the classical decomposition methods cannot be used directly due to the complicating multivariate stochastic benchmarking constraints. We propose an exact unified decomposition framework for solving these two classes of optimization problems and show its finite convergence. We apply the proposed approach to a stochastic network design problem in the context of pre-disaster humanitarian logistics and conduct a computational study concerning the threat of hurricanes in the Southeastern part of the United States. The numerical results provide practical insights about our modeling approach and show that the proposed algorithm is computationally scalable.
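For reference, the scalar conditional value-at-risk underlying such risk constraints (the multivariate polyhedral CVaR used in the paper is a more involved, vector-valued construction) can be written, for a loss random variable \(X\) and level \(\alpha \in (0,1)\), as
\[
\mathrm{CVaR}_{\alpha}(X) = \min_{\eta \in \mathbb{R}} \left\{ \eta + \frac{1}{1-\alpha}\, \mathbb{E}\bigl[(X - \eta)_{+}\bigr] \right\}.
\]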


10.
A Brief Survey of Gradient Methods
Sun Cong, Zhang Ya. 运筹学学报 (Operations Research Transactions), 2021, 25(3): 119-132
Gradient methods are a class of first-order methods for solving optimization problems. Because of their simple form and low computational cost per iteration, they are widely used for large-scale problems. This paper systematically reviews the iterative schemes and theoretical framework of gradient methods for smooth unconstrained problems. The most important parameter of a gradient method is the step size, whose choice directly determines the convergence properties and convergence rate of the method. The ideas behind step-size construction and the corresponding convergence results are presented from four perspectives: line-search frameworks, approximation techniques, stochastic techniques, and alternating and repeated step sizes. Extensions are also briefly discussed, including gradient methods for nonsmooth and constrained problems, acceleration techniques, and stochastic gradient methods.
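As an illustration of one widely used step-size rule covered by such surveys (a minimal sketch with assumed function names, not taken from the paper itself), the Barzilai–Borwein step size reuses the most recent iterate and gradient differences:

```python
import numpy as np

def bb_gradient_method(grad, x0, max_iter=100, alpha0=1e-3, tol=1e-8):
    # Gradient method with the (long) Barzilai-Borwein step size
    # alpha_k = (s^T s) / (s^T y), where s = x_k - x_{k-1}, y = g_k - g_{k-1}.
    x, g = x0.copy(), grad(x0)
    alpha = alpha0
    for _ in range(max_iter):
        x_new = x - alpha * g
        g_new = grad(x_new)
        if np.linalg.norm(g_new) < tol:
            return x_new
        s, y = x_new - x, g_new - g
        sty = s @ y
        alpha = (s @ s) / sty if sty > 1e-16 else alpha0
        x, g = x_new, g_new
    return x

# Example: minimize the convex quadratic 0.5 x^T A x - b^T x.
A = np.diag([1.0, 10.0, 100.0])
b = np.ones(3)
x_star = bb_gradient_method(lambda x: A @ x - b, np.zeros(3))
print(x_star)  # approx. A^{-1} b = [1, 0.1, 0.01]
```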

11.
Optimization, 2012, 61(12): 1491-1509
Typically, practical nonsmooth optimization problems involve functions with hundreds of variables. Moreover, there are many practical problems where the computation of even one subgradient is either a difficult or an impossible task. In such cases, derivative-free methods are the better (or only) choice, since they do not use explicit computation of subgradients. However, these methods require a large number of function evaluations even for moderately large problems. In this article, we propose an efficient derivative-free limited memory discrete gradient bundle method for nonsmooth, possibly nonconvex optimization. The convergence of the proposed method is proved for locally Lipschitz continuous functions, and the numerical experiments to be presented confirm the usability of the method, especially for medium-size and large-scale problems.

12.
Optimization, 2012, 61(6): 945-962
Typically, practical optimization problems involve nonsmooth functions of hundreds or thousands of variables. As a rule, the variables in such problems are restricted to certain meaningful intervals. In this article, we propose an efficient adaptive limited memory bundle method for large-scale nonsmooth, possibly nonconvex, bound-constrained optimization. The method combines the nonsmooth variable metric bundle method and the smooth limited memory variable metric method, while the constraint handling is based on the projected gradient method and dual subspace minimization. The preliminary numerical experiments to be presented confirm the usability of the method.

13.
We present an approximate bundle method for solving nonsmooth equilibrium problems. An inexact cutting-plane linearization of the objective function is established at each iteration, which is actually an approximation produced by an oracle that gives inaccurate values for the functions and subgradients. The errors in function and subgradient evaluations are bounded and they need not vanish in the limit. A descent criterion adapted to the setting of inexact oracles is put forward to measure the current descent behavior. The sequence generated by the algorithm converges to approximately critical points of the equilibrium problem under proper assumptions. As a special illustration, the proposed algorithm is utilized to solve generalized variational inequality problems. The numerical experiments show that the algorithm is effective in solving nonsmooth equilibrium problems.
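The inexact cutting-plane model referred to here has the standard bundle form (modulo the paper's specific handling of the oracle errors): given approximate values \(f_j \approx f(x_j)\) and approximate subgradients \(g_j\), the model at iteration \(k\) is
\[
\check{f}_k(x) = \max_{j \le k} \bigl\{ f_j + g_j^{\top}(x - x_j) \bigr\},
\]
which underestimates \(f\) only up to the (bounded, possibly nonvanishing) oracle errors.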

14.
A new derivative-free method is developed for solving unconstrained nonsmooth optimization problems. This method is based on the notion of a discrete gradient. It is demonstrated that discrete gradients can be used to approximate subgradients of a broad class of nonsmooth functions. It is also shown that discrete gradients can be applied to find descent directions of nonsmooth functions. The preliminary results of numerical experiments with unconstrained nonsmooth optimization problems, as well as a comparison of the proposed method with the nonsmooth optimization solver DNLP from CONOPT-GAMS and the derivative-free optimization solver CONDOR, are presented.
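As a rough illustration of approximating (sub)gradients from function values only (a naive forward-difference sketch in the same spirit, not the actual discrete-gradient construction of the paper):

```python
import numpy as np

def finite_difference_gradient(f, x, h=1e-6):
    # Approximate a gradient/subgradient of f at x using only function values.
    # For a nonsmooth f this returns an element that, for suitable h, is close
    # to the subdifferential, which is the spirit of discrete gradients.
    g = np.zeros_like(x)
    fx = f(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - fx) / h
    return g

# Example on a nonsmooth function: f(x) = |x_1| + 2*|x_2|.
f = lambda x: abs(x[0]) + 2.0 * abs(x[1])
print(finite_difference_gradient(f, np.array([0.5, -1.0])))  # approx. [1, -2]
```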

15.
The majority of engineering optimization problems (design, identification, design of controlled systems, optimization of large-scale systems, operational development of prototypes, and so on) are essentially multicriteria. The correct determination of the feasible solution set is a major challenge in engineering optimization problems. In order to construct the feasible solution set, a method called PSI (Parameter Space Investigation) has been created and successfully integrated into various fields of industry, science, and technology. Owing to the PSI method, it has become possible to formulate and solve a wide range of multicriteria optimization problems. In addition to giving an overview of the PSI method, this paper also describes the methods for approximation of the feasible and Pareto optimal solution sets, identification, decomposition, and aggregation of the large-scale systems.

16.
In this paper, we design a numerical algorithm for solving a simple bilevel program where the lower level program is a nonconvex minimization problem with a convex set constraint. We propose to solve a combined problem where the first order condition and the value function are both present in the constraints. Since the value function is in general nonsmooth, the combined problem is in general a nonsmooth and nonconvex optimization problem. We propose a smoothing augmented Lagrangian method for solving a general class of nonsmooth and nonconvex constrained optimization problems. We show that, if the sequence of penalty parameters is bounded, then any accumulation point is a Karush-Kuhn-Tucker (KKT) point of the nonsmooth optimization problem. The smoothing augmented Lagrangian method is used to solve the combined problem. Numerical experiments show that the algorithm is efficient for solving the simple bilevel program.
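For orientation, the classical augmented Lagrangian for an equality-constrained problem \(\min f(x)\ \text{s.t.}\ h(x) = 0\) (the smoothing variant proposed in the paper adapts this machinery to the nonsmooth setting) is
\[
L_{\rho}(x, \lambda) = f(x) + \lambda^{\top} h(x) + \frac{\rho}{2}\, \|h(x)\|^{2},
\]
with the multiplier update \(\lambda \leftarrow \lambda + \rho\, h(x)\) after each (approximate) minimization in \(x\).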

17.
We study the mini-batch stochastic block coordinate descent algorithm (mSBD) for a class of problems that arises widely in machine learning: structured stochastic optimization problems, where "structured" means that the feasible region of the problem has a block structure and the nonsmooth regularization part of the objective function is separable across the blocks of variables. The basic mSBD and a variant of it are given for solving non-composite and composite problems, respectively. For non-composite problems, the convergence properties of the algorithm are analyzed without the assumption of uniformly bounded gradient variance; for composite problems, convergence of the algorithm is obtained without the usual assumption of Lipschitz continuity of the gradient. Finally, numerical experiments verify the effectiveness of mSBD.
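A minimal sketch of the mini-batch stochastic block coordinate descent idea (illustrative only, with assumed names; the mSBD algorithm and its assumptions in the paper are more refined):

```python
import numpy as np

def msbd_sketch(grad_block, x, blocks, n_samples, batch_size=32,
                step=0.01, n_iter=1000, rng=None):
    # At each iteration: sample a mini-batch of data indices and a random
    # coordinate block, then update only that block using a stochastic
    # gradient estimate computed on the mini-batch.
    rng = rng or np.random.default_rng(0)
    for _ in range(n_iter):
        batch = rng.choice(n_samples, size=batch_size, replace=False)
        j = rng.integers(len(blocks))        # pick one variable block
        idx = blocks[j]
        x[idx] -= step * grad_block(x, idx, batch)
    return x

# Example: least squares with the variables split into two blocks.
rng = np.random.default_rng(1)
A, b = rng.normal(size=(200, 4)), rng.normal(size=200)
def grad_block(x, idx, batch):
    r = A[batch] @ x - b[batch]                  # mini-batch residual
    return A[batch][:, idx].T @ r / len(batch)   # block of the gradient
x = msbd_sketch(grad_block, np.zeros(4), [np.arange(0, 2), np.arange(2, 4)],
                n_samples=200)
```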

18.
This paper presents a parameterized Newton method using generalized Jacobians and a Broyden-like method for solving nonsmooth equations. The former ensures that the method is well-defined even when the generalized Jacobian is singular. The latter is constructed by using an approximation function which can be formed for nonsmooth equations arising from partial differential equations and nonlinear complementarity problems. The approximation function method generalizes the splitting function method for nonsmooth equations. Locally superlinear convergence results are proved for the two methods. Numerical examples are given to compare the two methods with some other methods. This work is supported by the Australian Research Council.
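For background, the basic (unparameterized) Newton step for a nonsmooth equation \(F(x) = 0\) with generalized Jacobians picks some \(V_k \in \partial F(x_k)\) and solves
\[
V_k\, d_k = -F(x_k), \qquad x_{k+1} = x_k + d_k;
\]
the parameterized method of the paper modifies this step so that it remains well-defined even when \(V_k\) is singular.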

19.
A new generalized Polak-Ribière conjugate gradient algorithm is proposed for unconstrained optimization, and its numerical and theoretical properties are discussed. The new method is, in fact, a particular type of two-dimensional Newton method and is based on a finite-difference approximation to the product of a Hessian and a vector.
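Two ingredients mentioned here are standard and easy to sketch (a generic illustration with assumed names, not the paper's exact two-dimensional Newton construction): the Polak–Ribière coefficient and a finite-difference Hessian–vector product.

```python
import numpy as np

def polak_ribiere_beta(g_new, g_old):
    # Polak-Ribiere coefficient: beta_k = g_{k+1}^T (g_{k+1} - g_k) / ||g_k||^2.
    return g_new @ (g_new - g_old) / (g_old @ g_old)

def hessian_vector_product(grad, x, v, eps=1e-6):
    # Finite-difference approximation of H(x) v without forming the Hessian:
    # H(x) v ~= (grad(x + eps*v) - grad(x)) / eps.
    return (grad(x + eps * v) - grad(x)) / eps

# Example on a quadratic with known Hessian A.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
grad = lambda x: A @ x
print(hessian_vector_product(grad, np.ones(2), np.array([1.0, -1.0])))  # ~A @ [1, -1]
```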

20.
In this paper, an adaptive trust region algorithm that uses Moreau–Yosida regularization is proposed for solving nonsmooth unconstrained optimization problems. The proposed algorithm combines a modified secant equation with the BFGS update formula and an adaptive trust region radius, where the new trust region radius utilizes not only the function information but also the gradient information. The global convergence and the local superlinear convergence of the proposed algorithm are proven under suitable conditions. Finally, preliminary numerical comparisons with some existing algorithms show that the proposed algorithm is quite promising for solving nonsmooth unconstrained optimization problems.
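The Moreau–Yosida regularization of a convex nonsmooth function \(f\), which makes such smooth trust-region machinery applicable, is the standard construction
\[
F_{\lambda}(x) = \min_{y} \Bigl\{ f(y) + \frac{1}{2\lambda}\, \|y - x\|^{2} \Bigr\}, \qquad \lambda > 0,
\]
which is continuously differentiable with \(\nabla F_{\lambda}(x) = \bigl(x - p_{\lambda}(x)\bigr)/\lambda\), where \(p_{\lambda}(x)\) denotes the unique minimizer (the proximal point).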
