1.
We show that strong differentiability at solutions is not necessary for superlinear convergence of quasi-Newton methods for solving nonsmooth equations. We improve the superlinear convergence result of Ip and Kyparisis for general quasi-Newton methods as well as for the Broyden method. We also give an example for which the Newton method diverges but the Broyden method converges superlinearly.
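The Broyden method discussed above can be sketched as follows. This is a minimal dense-matrix illustration, not the authors' code; the test system and starting point are chosen only for demonstration.

```python
import numpy as np

def broyden(F, x0, B0=None, tol=1e-10, max_iter=50):
    """Broyden's method for F(x) = 0 with the rank-one Jacobian update."""
    x = np.asarray(x0, dtype=float)
    B = np.eye(len(x)) if B0 is None else np.asarray(B0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        s = np.linalg.solve(B, -Fx)              # quasi-Newton step: B s = -F(x)
        x_new = x + s
        y = F(x_new) - Fx                        # change in the residual
        B += np.outer(y - B @ s, s) / (s @ s)    # Broyden rank-one update
        x = x_new
    return x

# Illustrative smooth 2-D system: x1 + 0.1*x2^2 = 1, x2 + 0.1*x1^2 = 1
F = lambda x: np.array([x[0] + 0.1 * x[1]**2 - 1.0,
                        x[1] + 0.1 * x[0]**2 - 1.0])
root = broyden(F, [0.0, 0.0])
```

Starting from the identity Jacobian approximation, the method needs only one function evaluation per iteration, which is the usual motivation for preferring it over Newton's method when the Jacobian is expensive.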
2.
We consider the symmetric rank-one quasi-Newton formula. The hereditary properties of this formula do not require quasi-Newton directions of search; therefore, it is easy to use in constrained optimization algorithms, since no explicit projections of either the Hessian approximations or the parameter changes are required. Moreover, the entire Hessian approximation is available at each iteration for determining the direction of search, which need not be a quasi-Newton direction. Theoretical difficulties, however, exist: even for a positive-definite quadratic function with no constraints, the symmetric rank-one update may fail to be defined at some iteration. In this paper, we first demonstrate that such failures of definition correspond either to losses of independence in the directions of search being generated or to near-singularity of the Hessian approximation being generated. We then describe a procedure that guarantees that these updates are well defined for any nonsingular quadratic function. This procedure has been incorporated into an algorithm for minimizing a function subject to box constraints. Box constraints arise naturally in the minimization of a function with many minima or of a function that is defined only in some subregion of the space.
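A sketch of the symmetric rank-one (SR1) update, together with the hereditary property on a quadratic, may clarify the failure mode the abstract describes. The skip threshold `r` is a common safeguard, not a detail taken from the paper.

```python
import numpy as np

def sr1_update(B, s, y, r=1e-8):
    """SR1 update of a Hessian approximation B from step s and gradient
    change y. Returns B unchanged when the denominator s^T (y - B s) is
    too small -- the "failure of definition" analysed in the paper."""
    v = y - B @ s
    denom = v @ s
    if abs(denom) < r * np.linalg.norm(s) * np.linalg.norm(v):
        return B                          # update undefined: skip it
    return B + np.outer(v, v) / denom

# Hereditary property on a quadratic with Hessian H: after n linearly
# independent, well-defined updates, B reproduces H exactly.
H = np.array([[3.0, 1.0], [1.0, 2.0]])
B = np.eye(2)
for s in (np.array([1.0, 0.0]), np.array([0.0, 1.0])):
    B = sr1_update(B, s, H @ s)           # y = H s for a quadratic
```

Note that, unlike BFGS, the SR1 formula needs no curvature condition `y^T s > 0` and its hereditary properties hold for arbitrary (not necessarily quasi-Newton) search directions, which is exactly the feature the abstract exploits for constrained problems.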
3.
We present a unified technique for updating approximations to Jacobian or Hessian matrices when any linear structure can be imposed. The updates are derived by variational means, where an operator-weighted Frobenius norm is used, and are finally expressed as solutions of linear equations and/or unconstrained extrema. A certain behavior of the solutions is discussed for certain perturbations of the operator and the constraints. Multiple secant relations are then considered. For the nonsparse case, an explicit family of updates is obtained, including Broyden, DFP, and BFGS. For the case where some of the matrix elements are prescribed, explicit solutions are obtained if certain conditions are satisfied. When symmetry is assumed, we show, in addition, the connection with the DFP and BFGS updates. This work was partially supported by a grant from Control Data.
4.
This work concerns the derivation of formulae for updating quasi-Newton matrices used in algorithms for computing approximate minima of smooth unconstrained functions. The paper concentrates strictly on the techniques used to derive update formulae. It demonstrates a technique in which problems of finding matrices in ℝ^{n×n} of minimum Frobenius norm are converted, via vector representations of these matrices in ℝ^{n²} and ℝ^{n(n+1)/2}, into equivalent l₂-minimization problems. These problems are dealt with more directly, and the paper demonstrates how this technique may be used to handle weighted sparse updates.
5.
Using curvature information obtained at the previous step in place of the curvature of the quadratic model on the segment from x_k to x_{k+1}, we derive a BFGS-like algorithm whose convergence properties are similar to those of BFGS, and we reveal the relationship between the new algorithm and self-scaling quasi-Newton methods. Numerical experiments on standard test functions selected from the CUTE collection compare the new algorithm with the standard BFGS algorithm and other modified BFGS algorithms. The results show that the new algorithm behaves somewhat like a self-scaling quasi-Newton algorithm.
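For reference, the standard BFGS update used as the baseline in comparisons like the one above can be sketched as follows. The modified curvature information of the new algorithm is not reproduced, since the abstract does not give its formula.

```python
import numpy as np

def bfgs_update(B, s, y):
    """Standard BFGS update of the Hessian approximation B from step s
    and gradient change y; skipped when the curvature condition
    y^T s > 0 fails, which preserves positive definiteness."""
    sy = y @ s
    if sy <= 0:
        return B                          # curvature condition fails: skip
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / sy

# The update enforces the secant condition B_new s = y and keeps B
# symmetric positive definite.
B0 = 2.0 * np.eye(2)
s, y = np.array([1.0, 0.0]), np.array([3.0, 1.0])
B1 = bfgs_update(B0, s, y)
```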
6.
We introduce a quasi-Newton update for nonlinear equations whose Jacobian has sparse triangular factors, and we consider its application, through an algorithm of Deuflhard, to the solution of boundary value problems by multiple shooting.
7.
Variable metric methods from the Broyden family are well known and commonly used for unconstrained minimization. These methods have good theoretical and practical convergence properties, which depend on the selection of free parameters. Using extensive computational experiments, we demonstrate the influence of both the Biggs stabilization parameter and the Oren scaling parameter on 12 individual variable metric updates, two of which are new. The paper focuses on a class of variable metric updates belonging to the so-called preconvex part of the Broyden family; these methods outperform the more familiar BFGS method. We also experimentally demonstrate the efficiency of the controlled scaling strategy for problems of sufficient size and sparsity.
8.
Variable metric bundle methods: From conceptual to implementable forms
To minimize a convex function, we combine Moreau-Yosida regularizations, quasi-Newton matrices and bundling mechanisms. First we develop conceptual forms using "reversal" quasi-Newton formulae and state their global and local convergence. Then, to produce implementable versions, we incorporate a bundle strategy together with a "curve-search". No convergence results are given for the implementable versions; however, some numerical illustrations show their good behaviour even on large-scale problems.
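The Moreau-Yosida regularization mentioned above can be illustrated on the simplest nonsmooth convex function, f(t) = |t|, where everything is available in closed form. This toy example is ours, not the paper's; it shows why the regularized function is smooth enough for quasi-Newton machinery.

```python
import numpy as np

def moreau_yosida_abs(x, mu):
    """Moreau-Yosida regularisation of f(t) = |t|:
        F_mu(x) = min_y  |y| + (1/(2*mu)) * (y - x)**2.
    The minimiser is the soft-threshold (the proximal point), and F_mu
    turns out to be the smooth Huber function: quadratic near 0,
    linear with slope 1 away from 0."""
    p = np.sign(x) * np.maximum(np.abs(x) - mu, 0.0)   # prox_{mu f}(x)
    return np.abs(p) + (p - x) ** 2 / (2.0 * mu)
```

For |x| ≤ mu this gives x²/(2 mu), and for |x| > mu it gives |x| − mu/2, so F_mu is continuously differentiable even though f is not.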
9.
Optimality (or KKT) systems arise as primal-dual stationarity conditions for constrained optimization problems. Under suitable constraint qualifications, local minimizers satisfy the KKT equations but, unfortunately, many other stationary points (including, perhaps, maximizers) also solve these nonlinear systems. For this reason, nonlinear-programming solvers make strong use of the minimization structure, and the naive use of nonlinear-system solvers in optimization may lead to spurious solutions. Nevertheless, in the basin of attraction of a minimizer, nonlinear-system solvers may be quite efficient. In this paper, quasi-Newton methods for solving nonlinear systems are used as accelerators of nonlinear-programming (augmented Lagrangian) algorithms with equality constraints. A periodically restarted memoryless symmetric rank-one (SR1) correction method is introduced for that purpose. Convergence results are given, and numerical experiments confirming that the acceleration is effective are presented. This work was supported by FAPESP, CNPq, PRONEX-Optimization (CNPq / FAPERJ), FAEPEX, UNICAMP.
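The basic augmented Lagrangian iteration for equality constraints, which the abstract's accelerator wraps around, can be sketched on a tiny quadratic program where the inner minimization is solvable exactly. The problem and parameters are illustrative; a general implementation would solve the inner subproblem with an unconstrained quasi-Newton method.

```python
import numpy as np

def aug_lagrangian_qp(a, b, rho=10.0, iters=20):
    """Augmented-Lagrangian (Powell-Hestenes-Rockafellar) iteration for
        min ||x||^2   s.t.   a^T x = b.
    The inner minimisation of
        L(x, lam) = ||x||^2 + lam*(a^T x - b) + (rho/2)*(a^T x - b)^2
    is quadratic, so we solve its stationarity condition
        (2 I + rho a a^T) x = (rho b - lam) a
    exactly, then apply the first-order multiplier update."""
    n = len(a)
    lam = 0.0
    x = np.zeros(n)
    for _ in range(iters):
        x = np.linalg.solve(2.0 * np.eye(n) + rho * np.outer(a, a),
                            (rho * b - lam) * a)   # inner minimisation
        lam += rho * (a @ x - b)                   # multiplier update
    return x, lam

a, b = np.array([1.0, 1.0]), 1.0
x, lam = aug_lagrangian_qp(a, b)
# KKT solution of this problem: x = (0.5, 0.5), lam = -1
```

For this problem the multiplier error contracts by a factor 1/(1 + rho) per outer iteration, which is the linear outer-loop rate that quasi-Newton acceleration of the KKT system aims to improve.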
10.
Augmented Lagrangian methods for large-scale optimization usually require efficient algorithms for minimization with box constraints. Active-set box-constraint methods, in turn, employ unconstrained optimization algorithms for minimization inside the faces of the box. Several approaches may be employed for computing internal search directions in the large-scale case. In this paper, a minimal-memory quasi-Newton approach with secant preconditioners is proposed, taking into account the structure of augmented Lagrangians that come from the popular Powell–Hestenes–Rockafellar scheme. A combined algorithm that uses either the quasi-Newton formula or a truncated-Newton procedure, depending on the presence of active constraints in the penalty-Lagrangian function, is also suggested. Numerical experiments using the CUTE collection are presented.