Similar literature: 20 matching documents found.
1.
In nonlinear problems, the Hasofer–Lind–Rackwitz–Fiessler algorithm of the first order reliability method sometimes suffers from non-convergence. A new Hasofer–Lind–Rackwitz–Fiessler algorithm incorporating the Barzilai–Borwein step is investigated in this paper to speed up the rate of convergence while behaving in a stable manner. The algorithm is essentially built on the global Barzilai–Borwein gradient method and proceeds in two stages. The first stage, implemented by the traditional steepest descent method with specific decayed step sizes, prepares a good initial point for the global Barzilai–Borwein gradient algorithm in the second stage, which takes the merit function as the objective to locate the most probable failure point. The efficiency and convergence of the proposed method and several other reliability analysis methods are presented and discussed in detail through several numerical examples. It is found that the proposed method is stable and very efficient for nonlinear problems, except highly nonlinear ones, and is even more accurate than the descent direction method with step sizes following the fixed exponential decay strategy.
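The key ingredient named in this abstract is the Barzilai–Borwein step size inside a gradient iteration, preceded by a few steepest-descent warm-up steps. The Python sketch below only illustrates that generic two-stage structure under stated assumptions; the function name, the warm-up decay rule, and the safeguard constants are illustrative, and this is not the paper's exact HLRF variant.

```python
import numpy as np

def bb_gradient_descent(grad, x0, n_warmup=5, max_iter=200, tol=1e-8):
    """Two-stage gradient scheme: a few steepest-descent steps with a simple
    decaying step size, then Barzilai-Borwein (BB1) step sizes.  A generic
    sketch, not the paper's HLRF-specific algorithm."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    # Stage 1: steepest descent with a simple decaying step size (assumption).
    for k in range(n_warmup):
        x = x - (1.0 / (k + 1)) * g
        g = grad(x)
    # Stage 2: Barzilai-Borwein steps.
    x_prev, g_prev = x, g
    x = x - 1e-3 * g                      # one small step to initialize s and y
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        s, y = x - x_prev, g - g_prev
        alpha = (s @ s) / (s @ y) if abs(s @ y) > 1e-16 else 1e-3   # BB1 step size
        x_prev, g_prev = x, g
        x = x - alpha * g
    return x
```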

2.
Yang Minghan, Milzarek Andre, Wen Zaiwen, Zhang Tong. Mathematical Programming, 2022, 194(1-2): 257-303

In this paper, a novel stochastic extra-step quasi-Newton method is developed to solve a class of nonsmooth nonconvex composite optimization problems. We assume that the gradient of the smooth part of the objective function can only be approximated by stochastic oracles. The proposed method combines general stochastic higher-order steps, derived from an underlying proximal-type fixed-point equation, with additional stochastic proximal gradient steps to guarantee convergence. Based on suitable bounds on the step sizes, we establish global convergence to stationary points in expectation, and an extension of the approach using variance reduction techniques is discussed. Motivated by large-scale and big data applications, we investigate a stochastic coordinate-type quasi-Newton scheme that allows cheap and tractable stochastic higher-order directions to be generated. Finally, numerical results on large-scale logistic regression and deep learning problems show that our proposed algorithm compares favorably with other state-of-the-art methods.
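The stochastic proximal gradient step that this method augments with extra quasi-Newton steps is a standard building block. Below is a minimal Python sketch of that baseline step for an l1-regularized composite objective; the oracle interface, step size, and regularizer are assumptions made for illustration, and this is not the paper's full extra-step scheme.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def stochastic_prox_grad(stoch_grad, x0, lam, step=0.1, max_iter=1000):
    """Plain stochastic proximal gradient for min f(x) + lam*||x||_1, where
    stoch_grad(x) returns a noisy estimate of grad f(x).  Only the baseline
    step the extra-step quasi-Newton method builds on, not the full algorithm."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = stoch_grad(x)                           # stochastic gradient oracle
        x = soft_threshold(x - step * g, step * lam)
    return x
```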


3.
A new approximation method is presented for directly minimizing a composite nonsmooth function that is locally Lipschitzian. This method approximates only the generalized gradient vector, enabling well-developed smooth optimization algorithms to be used directly for solving composite nonsmooth optimization problems. This generalized gradient vector is approximated on each design variable coordinate by using only the active components of the subgradient vectors; its usability is then validated numerically by the Pareto optimum concept. To show the performance of the proposed method, we solve four academic composite nonsmooth optimization problems and two dynamic response optimization problems with multiple criteria. Specifically, the optimization results of the two dynamic response optimization problems are compared with those obtained by three typical multicriteria optimization strategies: the weighting method, the distance method, and the min–max method, which introduces an artificial design variable in order to replace the max-value cost function with additional inequality constraints. The comparisons show that the proposed approximation method gives more accurate and efficient results than the other methods.

4.
Since the appearance of the Barzilai–Borwein (BB) step size strategy for unconstrained optimization problems, it has received increasing attention from researchers. It has been applied to various classes of nonlinear optimization problems and has recently been extended to optimization problems with bound constraints. In this paper, we further extend the BB step sizes to more general variational inequality (VI) problems, i.e., we adopt them in projection methods. Under the condition that the underlying mapping of the VI problem is strongly monotone and Lipschitz continuous, and that the modulus of strong monotonicity and the Lipschitz constant satisfy some further conditions, we establish the global convergence of the projection methods with BB step sizes. A series of numerical examples is presented, demonstrating that the proposed methods are convergent under mild conditions and are more efficient than some classical projection-like methods.
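The basic projection iteration with a BB-type step size described here can be sketched as follows. This is a schematic Python illustration assuming a user-supplied mapping F and projection operator; the strong-monotonicity and Lipschitz conditions required for the paper's convergence theory are not enforced in the sketch.

```python
import numpy as np

def projected_bb(F, project, x0, max_iter=500, tol=1e-8, alpha0=1.0):
    """Projection method x_{k+1} = P_C(x_k - alpha_k F(x_k)) with a
    Barzilai-Borwein-type step built from s = x_k - x_{k-1} and
    y = F(x_k) - F(x_{k-1}).  A sketch, not the paper's exact method."""
    x_prev = np.asarray(x0, dtype=float)
    F_prev = F(x_prev)
    x = project(x_prev - alpha0 * F_prev)
    for _ in range(max_iter):
        Fx = F(x)
        s, y = x - x_prev, Fx - F_prev
        sy = s @ y
        alpha = (s @ s) / sy if sy > 1e-16 else alpha0   # BB1-type step size
        x_prev, F_prev = x, Fx
        x = project(x - alpha * Fx)
        if np.linalg.norm(x - x_prev) < tol:             # step length as stopping test
            break
    return x
```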

5.
A successive unconstrained dual optimization (SUDO) method is developed to solve the best rank-one approximation problem for high-order tensors in the least-squares sense. The constrained dual program of the tensor rank-one approximation is transformed into a sequence of unconstrained optimization problems, for which a fast gradient method is proposed. We introduce the steepest ascent direction, an initial step length strategy, and a backtracking line search rule for each iteration. A proof of the global convergence of the SUDO algorithm is given. Preliminary numerical experiments show that our method outperforms the alternating least squares (ALS) method.
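For reference, the alternating least squares (ALS) baseline that SUDO is compared against can be written down compactly for a third-order tensor. The Python sketch below is the standard ALS rank-one iteration, shown only as the comparison baseline; it is not the SUDO algorithm.

```python
import numpy as np

def rank_one_als(T, max_iter=100, tol=1e-10):
    """Alternating least squares for the best rank-one approximation
    T ~ lam * a (x) b (x) c of a third-order tensor T.  Baseline method only."""
    I, J, K = T.shape
    rng = np.random.default_rng(0)
    a, b, c = rng.standard_normal(I), rng.standard_normal(J), rng.standard_normal(K)
    a, b, c = a / np.linalg.norm(a), b / np.linalg.norm(b), c / np.linalg.norm(c)
    lam = 0.0
    for _ in range(max_iter):
        a = np.einsum('ijk,j,k->i', T, b, c); a /= np.linalg.norm(a)
        b = np.einsum('ijk,i,k->j', T, a, c); b /= np.linalg.norm(b)
        c = np.einsum('ijk,i,j->k', T, a, b)
        lam_new = np.linalg.norm(c); c /= lam_new
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam, a, b, c
```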

6.
Conjugate gradient methods have attracted attention because they can be directly applied to large-scale unconstrained optimization problems. In order to incorporate second-order information of the objective function into conjugate gradient methods, Dai and Liao (2001) proposed a conjugate gradient method based on the secant condition. However, their method does not necessarily generate a descent search direction. On the other hand, Hager and Zhang (2005) proposed another conjugate gradient method which always generates a descent search direction.

7.
By giving a special choice of the parameter in the memory gradient algorithm for unconstrained programming problems, a memory gradient Goldstein–Levitin–Polyak projected descent direction of the objective function is obtained, and a memory gradient Goldstein–Levitin–Polyak projection algorithm is thereby constructed for nonlinear programming problems with convex constraints. Under exact one-dimensional step size search, and without assuming boundedness of the iterate sequence, the global convergence of the algorithm is analyzed and some rather deep convergence results are obtained. Memory gradient Goldstein–Levitin–Polyak projection algorithms combined with the FR, PR, and HS conjugate gradient algorithms are also given, thereby extending the classical conjugate gradient algorithms to nonlinear programming problems with convex constraints. Numerical examples show that the new algorithm is more effective than the gradient projection algorithm.

8.
Based on two modified secant equations proposed by Yuan, and by Li and Fukushima, we extend the approach proposed by Andrei and introduce two hybrid conjugate gradient methods for unconstrained optimization problems. Our methods are hybridizations of the Hestenes–Stiefel and Dai–Yuan conjugate gradient methods. Under proper conditions, we show that one of the proposed algorithms is globally convergent for uniformly convex functions and the other is globally convergent for general functions. To enhance the performance of the line search procedure, we propose a new approach for computing the initial value of the step length used to initiate the line search. We compare implementations of our algorithms with two efficient representative hybrid conjugate gradient methods proposed by Andrei, using unconstrained optimization test problems from the CUTEr collection. Numerical results show that, in the sense of the performance profile introduced by Dolan and Moré, the proposed hybrid algorithms are competitive and in some cases more efficient.
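A common way to hybridize the Hestenes–Stiefel and Dai–Yuan formulas is to truncate one against the other. The Python sketch below shows that generic hybridization; it assumes the plain gradient difference y, whereas the paper replaces y by vectors from modified secant equations, so this is only an illustration of the hybrid structure.

```python
import numpy as np

def hybrid_hs_dy_beta(g_new, g_old, d_old):
    """Hybrid Hestenes-Stiefel / Dai-Yuan conjugate gradient parameter
    beta = max(0, min(beta_HS, beta_DY)).  Generic sketch only."""
    y = g_new - g_old
    denom = d_old @ y
    if abs(denom) < 1e-16:
        return 0.0
    beta_hs = (g_new @ y) / denom        # Hestenes-Stiefel
    beta_dy = (g_new @ g_new) / denom    # Dai-Yuan
    return max(0.0, min(beta_hs, beta_dy))

# Direction update used with such a beta:  d_new = -g_new + beta * d_old
```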

9.
Memory gradient methods are used for unconstrained optimization, especially large-scale problems. The first memory gradient methods were proposed by Miele and Cantrell (1969) and Cragg and Levy (1969). In this paper, we present a new memory gradient method which generates a descent search direction for the objective function at every iteration. We show that our method converges globally to the solution if the Wolfe conditions are satisfied within the framework of the line search strategy. Our numerical results show that the proposed method is efficient on standard test problems if the parameter included in the method is chosen well.

10.
Conjugate gradient methods have been widely used as schemes to solve large-scale unconstrained optimization problems. The search directions of the conventional methods are defined using the gradient of the objective function. This paper proposes two nonlinear conjugate gradient methods whose search directions additionally take into account information about the objective function itself. We prove that they converge globally and compare them numerically with conventional methods. The results show that, with a slight modification to the direction, one of our methods performs as well as the best conventional method employing the Hestenes–Stiefel formula.

11.
It is well known that trust region methods are very effective for optimization problems. In this article, a new adaptive trust region method is presented for solving unconstrained optimization problems. The proposed method combines a modified secant equation with the BFGS updated formula and an adaptive trust region radius, where the new trust region radius makes use of not only the function information but also the gradient information. Under suitable conditions, global convergence is proved, and we demonstrate the local superlinear convergence of the proposed method. The numerical results indicate that the proposed method is very efficient.
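As background for how a trust region radius is typically adapted, the sketch below shows the standard textbook update based on the agreement ratio between actual and predicted reduction. It is a generic rule with assumed thresholds, not the adaptive radius proposed in the paper, which also uses gradient and function information.

```python
def trust_region_radius_update(rho, delta, delta_max=10.0):
    """Standard trust-region radius update from the agreement ratio
    rho = (actual reduction) / (predicted reduction).  Textbook rule only."""
    if rho < 0.25:
        return 0.25 * delta                   # poor model agreement: shrink
    if rho > 0.75:
        return min(2.0 * delta, delta_max)    # good agreement: expand (capped)
    return delta                              # otherwise keep the radius
```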

12.
In this paper, a new gradient-related algorithm for solving large-scale unconstrained optimization problems is proposed. The new algorithm is a line search method. The basic idea is to choose a combination of the current gradient and some previous search directions as the new search direction, and to find a step size by using various inexact line searches. Using more information at the current iteration may improve the performance of the algorithm; this motivates us to seek new gradient algorithms which may be more effective than standard conjugate gradient methods. The concept of uniformly gradient-related directions is useful for analyzing the global convergence of the new algorithm. The global convergence and linear convergence rate of the new algorithm are investigated under various weak conditions. Numerical experiments show that the new algorithm converges more stably and is superior to other similar methods in many situations.
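The "gradient-related" idea of combining the current negative gradient with a previous direction, while safeguarding descent, can be sketched as follows. The combination weight, the descent test, and the fallback rule are illustrative assumptions; the paper's precise combination of several previous directions may differ.

```python
import numpy as np

def gradient_related_direction(g, d_prev, theta=0.5, c=1e-4):
    """Combine the steepest-descent direction with the previous search
    direction and fall back to -g if the combination is not sufficiently
    a descent direction.  Schematic only; pair it with any inexact line
    search (e.g. Armijo backtracking) to form a complete iteration."""
    d = -g + theta * d_prev
    if g @ d > -c * (g @ g):   # insufficient descent: revert to steepest descent
        d = -g
    return d
```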

13.
Two iterative algorithms are presented in this paper to compute the minimal norm least squares solution of a general linear matrix equation, which includes the well-known Sylvester matrix equation and Lyapunov matrix equation as special cases. The first algorithm is based on the gradient-based searching principle and the other one can be viewed as its dual form. Necessary and sufficient conditions on the step sizes in these two algorithms are proposed to guarantee convergence of the algorithms for arbitrary initial conditions. A sufficient condition that is easy to compute is also given. Moreover, two methods are proposed to choose the optimal step sizes such that the convergence speeds of the algorithms are maximized. The first method minimizes the spectral radius of the iteration matrix, and an explicit expression for the optimal step size is obtained. The second method minimizes the sum of the squared Frobenius norms of the error matrices produced by the algorithm, and it is shown that the optimal step size exists uniquely and lies in an interval. Several numerical examples are given to illustrate the efficiency of the proposed approach.
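For the Sylvester special case AX + XB = C, a gradient-based iteration of the kind referred to here takes a step along the negative gradient of the squared residual. The Python sketch below assumes a user-supplied fixed step size mu; the paper derives the admissible range and the optimal choices of such step sizes, which this sketch does not.

```python
import numpy as np

def sylvester_gradient_iteration(A, B, C, mu, max_iter=5000, tol=1e-10):
    """Gradient-based iteration for A X + X B = C, descending on
    ||A X + X B - C||_F^2 with a fixed step size mu.  Sketch only."""
    X = np.zeros((A.shape[1], B.shape[0]))
    for _ in range(max_iter):
        R = A @ X + X @ B - C                  # current residual
        if np.linalg.norm(R, 'fro') < tol:
            break
        G = A.T @ R + R @ B.T                  # gradient of the squared residual (up to a factor 2)
        X = X - mu * G
    return X
```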

14.
A constrained minimax problem is converted to the minimization of a sequence of unconstrained and continuously differentiable functions, in a manner similar to Morrison's method for constrained optimization. One can thus apply any efficient gradient minimization technique to perform the unconstrained minimization at each step of the sequence. Based on this approach, two algorithms are proposed: the first is simpler to program, and the second is faster in general. To show the efficiency of the algorithms even for unconstrained problems, examples are used to compare the two algorithms with recent methods in the literature. It is found that the second algorithm converges faster than the other methods. Several constrained examples are also tried and the results are presented.
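To make the idea of replacing a minimax objective by differentiable functions concrete, the sketch below uses log-sum-exp smoothing of a finite max. This is a standard smoothing trick shown purely for illustration; it is not Morrison-type construction used in the paper.

```python
import numpy as np

def smooth_max(fvals, mu):
    """Log-sum-exp smoothing of max_i f_i:
    mu * log(sum_i exp(f_i / mu)), computed with a shift for stability.
    A standard smooth surrogate of the max, not the paper's construction."""
    fvals = np.asarray(fvals, dtype=float)
    m = fvals.max()                      # shift to avoid overflow in exp
    return m + mu * np.log(np.sum(np.exp((fvals - m) / mu)))
```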

15.
We introduce a novel approach for analyzing the worst-case performance of first-order black-box optimization methods. We focus on smooth unconstrained convex minimization over the Euclidean space. Our approach relies on the observation that by definition, the worst-case behavior of a black-box optimization method is by itself an optimization problem, which we call the performance estimation problem (PEP). We formulate and analyze the PEP for two classes of first-order algorithms. We first apply this approach on the classical gradient method and derive a new and tight analytical bound on its performance. We then consider a broader class of first-order black-box methods, which among others, include the so-called heavy-ball method and the fast gradient schemes. We show that for this broader class, it is possible to derive new bounds on the performance of these methods by solving an adequately relaxed convex semidefinite PEP. Finally, we show an efficient procedure for finding optimal step sizes which results in a first-order black-box method that achieves best worst-case performance.

16.
Filter methods were initially designed for nonlinear programming problems by Fletcher and Leyffer. In this paper we propose a secant algorithm with a line search filter method for nonlinear equality constrained optimization. The algorithm attains global convergence under reasonable conditions. By using the Lagrangian function value in the filter, we establish that the proposed algorithm can overcome the Maratos effect without a second-order correction step, so that fast local superlinear convergence to a second-order sufficient local solution is achieved. Preliminary numerical results are presented to confirm the robustness and efficiency of our approach.
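The acceptance test at the heart of a line-search filter method checks that a trial point is not dominated by any filter entry in the (constraint violation, objective) plane. The sketch below shows the commonly used form of that test with small margins; the margins and the use of the Lagrangian value in the filter, as in the paper, are not reproduced here.

```python
def filter_acceptable(theta_new, f_new, filter_entries, gamma=1e-5):
    """Generic line-search filter acceptance test: the trial point
    (theta_new, f_new) is acceptable if, for every stored filter entry
    (theta, f), it sufficiently reduces either the constraint violation
    or the objective.  A generic sketch, not the paper's exact rule."""
    return all(theta_new <= (1 - gamma) * theta or f_new <= f - gamma * theta
               for theta, f in filter_entries)
```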

17.
刘亚君, 刘新为 《计算数学》2016, 38(1): 96-112
Gradient methods are an important class of methods for unconstrained optimization, and their numerical performance is closely tied to the choice of step size. Noting that the BB step size implicitly carries second-order information about the objective function, this paper combines the BB method with the trust region method: the reciprocal of the BB step size is used to approximate the Hessian matrix of the objective function, while the trust region subproblem is used to choose the step size of the gradient method more flexibly, yielding monotone and nonmonotone trust region BB methods for unconstrained optimization. Under suitable assumptions, the global convergence of the algorithms is proved. Numerical experiments show that, compared with existing BB-type methods for unconstrained optimization, the decrease of e_k = ‖x_k − x*‖ in the nonmonotone trust region BB method exhibits a clearer staircase pattern and monotonicity, and the method therefore converges faster.
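When the model Hessian is the scalar matrix (1/alpha_BB) I, as described in this abstract, the trust region subproblem has a closed-form solution: the full BB step if it fits inside the radius, otherwise the gradient scaled to the boundary. The sketch below illustrates that single step; the safeguard and the monotone/nonmonotone acceptance rules of the paper's full algorithms are not included.

```python
import numpy as np

def tr_bb_step(g, s, y, delta):
    """One step with model Hessian (1/alpha_BB)*I, alpha_BB = s's / s'y.
    The subproblem  min g'd + (1/(2*alpha))||d||^2  s.t. ||d|| <= delta
    is solved exactly by d = -min(alpha, delta/||g||) * g.  Sketch only."""
    sy = s @ y
    alpha = (s @ s) / sy if sy > 1e-16 else 1.0      # BB step size with a crude safeguard
    gnorm = np.linalg.norm(g)
    t = min(alpha, delta / gnorm) if gnorm > 0 else 0.0
    return -t * g
```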

18.
Following the approach proposed by Dai and Liao, we introduce two nonlinear conjugate gradient methods for unconstrained optimization problems. One of our proposed methods is based on a modified version of the secant equation proposed by Zhang, Deng and Chen, and by Zhang and Xu, and the other is based on the modified BFGS update proposed by Yuan. An interesting feature of our methods is that they take both gradient and function values into account. Under proper conditions, we show that one of the proposed methods is globally convergent for general functions and that the other is globally convergent for uniformly convex functions. To enhance the performance of the line search procedure, we also propose a new approach for computing the initial step length used to initiate the procedure. We compare implementations of our methods with the efficient conjugate gradient methods proposed by Dai and Liao, and by Hestenes and Stiefel. Numerical test results show the efficiency of our proposed methods.
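The Dai–Liao family that both methods extend uses the conjugacy parameter beta = (g'y − t g's)/(d'y). The sketch below computes that plain parameter; the paper's methods replace y by vectors from modified secant equations or a modified BFGS update, which is not reproduced here.

```python
import numpy as np

def dai_liao_beta(g_new, s, y, d_prev, t=0.1):
    """Dai-Liao conjugate gradient parameter
    beta = (g_{k+1}'y_k - t * g_{k+1}'s_k) / (d_k'y_k),
    with s_k = x_{k+1} - x_k and y_k = g_{k+1} - g_k.  Plain version only;
    the parameter t is an assumed illustrative value."""
    denom = d_prev @ y
    if abs(denom) < 1e-16:
        return 0.0
    return (g_new @ y - t * (g_new @ s)) / denom
```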

19.
Local search methods for continuous optimization problems tend to be sensitive to the choice of step sizes along their search directions. This paper presents the Local Search with Groups of Step Sizes (LSGSS) method, a derivative-free method that reactively updates groups of promising step sizes for each problem coordinate. The experiments demonstrate that LSGSS finds the best solutions on the large-scale benchmark problems considered when compared to classical methods.
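A bare-bones derivative-free coordinate search that tries a small group of candidate step sizes along each coordinate conveys the flavor of per-coordinate step-size groups. The sketch below is such a schematic baseline; the reactive group-update rules that define LSGSS itself are deliberately omitted.

```python
import numpy as np

def coordinate_search(f, x0, steps=(1.0, 0.1, 0.01), max_iter=200):
    """Derivative-free coordinate search: for each coordinate, try each step
    size in both directions and keep any improvement.  Schematic baseline,
    not the LSGSS algorithm."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for h in steps:
                for sgn in (+1.0, -1.0):
                    trial = x.copy()
                    trial[i] += sgn * h
                    ft = f(trial)
                    if ft < fx:
                        x, fx, improved = trial, ft, True
        if not improved:                 # no coordinate/step improved: stop
            break
    return x
```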

20.
By minimizing two different upper bounds of the matrix which generates the search directions of the nonlinear conjugate gradient method proposed by Dai and Liao, two modified conjugate gradient methods are proposed. Under proper conditions, it is briefly shown that the methods are globally convergent when the line search fulfills the strong Wolfe conditions. Numerical comparisons between implementations of the proposed methods and the conjugate gradient methods proposed by Hager and Zhang, and by Dai and Kou, are made on a set of unconstrained optimization test problems from the CUTEr collection. The results show the efficiency of the proposed methods in the sense of the performance profile introduced by Dolan and Moré.

