346 results found (search time: 62 ms).
1.
An algorithm for minimizing a nonlinear function subject to nonlinear inequality constraints is described. It applies sequential quadratic programming techniques to a sequence of barrier problems, and uses trust regions to ensure the robustness of the iteration and to allow the direct use of second order derivatives. This framework permits primal and primal-dual steps, but the paper focuses on the primal version of the new algorithm. An analysis of the convergence properties of this method is presented. Received: May 1996 / Accepted: August 18, 2000 / Published online: October 18, 2000
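The barrier-subproblem structure described above can be illustrated on a toy one-dimensional problem. The sketch below is not the paper's algorithm (it replaces trust regions with simple step damping and uses the hypothetical problem min x² s.t. x ≥ 1), but it shows the primal pattern: approximately minimize each log-barrier function with Newton steps, then shrink the barrier parameter.

```python
def barrier_solve(mu0=1.0, shrink=0.2, tol=1e-8):
    """Minimize x^2 subject to x >= 1 via a primal log-barrier method.

    Each outer iteration approximately minimizes the barrier function
    phi_mu(x) = x^2 - mu*log(x - 1) with damped Newton steps, then
    shrinks mu.  (A sketch only: the paper's trust-region safeguard
    is replaced here by simple step halving to stay feasible.)
    """
    x, mu = 2.0, mu0                 # strictly feasible start
    while mu > tol:
        for _ in range(50):          # inner Newton loop on phi_mu
            g = 2*x - mu/(x - 1)     # phi_mu'(x)
            h = 2 + mu/(x - 1)**2    # phi_mu''(x)
            step = -g/h
            t = 1.0
            while x + t*step <= 1:   # damp to remain strictly feasible
                t *= 0.5
            x += t*step
            if abs(g) < 1e-10:
                break
        mu *= shrink                 # tighten the barrier
    return x
```

As mu decreases, the barrier minimizers trace the central path toward the constrained solution x = 1.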
2.
A general alternative theorem for convexlike functions is given. This permits the establishment of optimality conditions for convexlike programming problems in which both inequality and equality constraints are considered. It is shown that the main results of the paper contain, in particular, those of Craven, Giannessi, Jeyakumar, Hayashi and Komiya, Simons, Zălinescu, and a recent result of Tamminen.
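To fix ideas, the classical linear member of this family of alternative theorems is Gordan's theorem; the convexlike results generalize this either/or pattern to nonlinear functions:

```latex
% Gordan's theorem: the linear special case of the
% alternative theorems generalized by the paper.
\[
\text{Exactly one of the following holds:}\quad
(\mathrm{I})\ \exists\, x \in \mathbb{R}^n:\ Ax < 0,
\qquad
(\mathrm{II})\ \exists\, y \ge 0,\ y \ne 0:\ A^{\mathsf T} y = 0 .
\]
```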
3.
Inexact spectral projected gradient methods on convex sets
A new method is introduced for large-scale convex constrained optimization. The general model algorithm involves, at each iteration, the approximate minimization of a convex quadratic on the feasible set of the original problem, and global convergence is obtained by means of nonmonotone line searches. A specific algorithm, the Inexact Spectral Projected Gradient method (ISPG), is implemented using inexact projections computed by Dykstra's alternating projection method and generates interior iterates. The ISPG method is a generalization of the Spectral Projected Gradient method (SPG), but can be used when projections are difficult to compute. Numerical results for constrained least-squares rectangular matrix problems are presented.
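Dykstra's alternating projection method, the projection engine named in the abstract, is short enough to sketch. Given the individual projections onto two convex sets, it converges to the projection onto their intersection (a plain alternating projection would only find *some* intersection point). The example data below is hypothetical: the probability simplex written as the intersection of the nonnegative orthant and an affine hyperplane.

```python
import numpy as np

def dykstra(z, proj_a, proj_b, iters=500):
    """Dykstra's alternating projection of z onto the intersection of
    two convex sets, each given only through its own projection map.
    This is the kind of inexact-projection engine ISPG can use when
    the joint projection onto the feasible set is hard to compute."""
    x = z.copy()
    p = np.zeros_like(z)   # correction (dual) term for set A
    q = np.zeros_like(z)   # correction (dual) term for set B
    for _ in range(iters):
        y = proj_a(x + p)
        p = x + p - y
        x = proj_b(y + q)
        q = y + q - x
    return x

# Hypothetical example: project onto the simplex {x >= 0} ∩ {sum(x) = 1}.
proj_orthant = lambda v: np.maximum(v, 0.0)
proj_plane   = lambda v: v - (v.sum() - 1.0) / v.size  # affine hyperplane
x = dykstra(np.array([1.5, -0.5]), proj_orthant, proj_plane)
```

For this 2-D instance the iterates converge linearly to the true Euclidean projection (1, 0).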
4.
We derive compact representations of BFGS and symmetric rank-one matrices for optimization. These representations allow us to efficiently implement limited memory methods for large constrained optimization problems. In particular, we discuss how to compute projections of limited memory matrices onto subspaces. We also present a compact representation of the matrices generated by Broyden's update for solving systems of nonlinear equations.

These authors were supported by the Air Force Office of Scientific Research under Grant AFOSR-90-0109, the Army Research Office under Grant DAAL03-91-0151, and the National Science Foundation under Grants CCR-8920519 and CCR-9101795. This author was supported by the U.S. Department of Energy under Grant DE-FG02-87ER25047-A001, and by National Science Foundation Grants CCR-9101359 and ASC-9213149.
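The compact representation for BFGS expresses the k-fold updated matrix as B₀ plus a single low-rank outer-product correction, which is what makes subspace projections of limited-memory matrices cheap. A sketch, checked numerically against the ordinary recursive update (the matrices S, Y of step and gradient-difference pairs below are hypothetical test data):

```python
import numpy as np

def bfgs_recursive(B0, S, Y):
    """Standard BFGS updates applied pair by pair (columns of S, Y)."""
    B = B0.copy()
    for s, y in zip(S.T, Y.T):
        Bs = B @ s
        B = B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)
    return B

def bfgs_compact(B0, S, Y):
    """Compact representation of the same matrix:
        B = B0 - [B0 S, Y] W^{-1} [B0 S, Y]^T,
        W = [[S^T B0 S, L], [L^T, -D]],
    with L the strictly lower triangular part of S^T Y and D its
    diagonal -- the outer-product form described in the abstract."""
    StY = S.T @ Y
    L = np.tril(StY, -1)
    D = np.diag(np.diag(StY))
    U = np.hstack([B0 @ S, Y])
    W = np.block([[S.T @ B0 @ S, L], [L.T, -D]])
    return B0 - U @ np.linalg.solve(W, U.T)
```

For k stored pairs, the compact form needs only a 2k-by-2k solve, independent of the problem dimension n.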
5.
Using the SQP method, generalized projection techniques, and the idea of strongly sub-feasible direction methods, a new rapidly convergent algorithm with an arbitrary initial point is established for inequality constrained optimization. At each iteration the algorithm solves only one quadratic subprogram, which always admits a feasible solution, or uses generalized projection to compute a "first-order" strongly sub-feasible descent auxiliary search direction; the step size is produced by a combination of curve search and line search. Under rather mild conditions, the algorithm possesses global convergence, strong convergence, superlinear convergence, and quadratic convergence. Effective numerical experiments are reported.
6.
An SQP feasible method for nonlinearly constrained optimization
This paper presents a feasible method with one-step superlinear convergence for nonlinear programming problems. Since every iterate lies inside the feasible region, and each iteration requires only the solution of one quadratic subprogram and the computation of one inverse matrix, the algorithm has good practical value. Global convergence and one-step superlinear convergence are also proved under rather weak conditions.
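The defining property above, that every iterate stays feasible, can be seen in a much simpler member of the feasible-method family. The sketch below is not the paper's SQP algorithm: it uses projected gradient steps (projection onto a halfspace) instead of QP subproblems, on the hypothetical problem min x₁² + x₂² s.t. x₁ + x₂ ≥ 1.

```python
import numpy as np

def feasible_projected_gradient(x, steps=200, t=0.25):
    """Minimize x1^2 + x2^2 subject to x1 + x2 >= 1 while keeping every
    iterate feasible.  A projected-gradient sketch of the 'feasible
    method' idea, far simpler than the SQP method in the abstract."""
    a, b = np.array([1.0, 1.0]), 1.0
    for _ in range(steps):
        v = x - t * 2*x                  # gradient step on the objective
        gap = b - a @ v
        if gap > 0:                      # project back onto {a^T x >= b}
            v = v + a * gap / (a @ a)
        x = v
    return x
```

Starting from the feasible point (2, 0), the iterates remain feasible throughout and converge to the solution (0.5, 0.5) on the constraint boundary.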
7.
On a subproblem of trust region algorithms for constrained optimization
We study a subproblem that arises in some trust region algorithms for equality constrained optimization. It is the minimization of a general quadratic function with two special quadratic constraints. Properties of such subproblems are given. It is proved that the Hessian of the Lagrangian has at most one negative eigenvalue, and an example is presented to show that the Hessian may have a negative eigenvalue when one constraint is inactive at the solution.

Research supported by a Research Fellowship of Fitzwilliam College, Cambridge, and by a research grant from the Chinese Academy of Sciences.
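For contrast with the two-constraint subproblem studied above, the classical one-ball trust-region subproblem can be solved by a simple safeguarded search on the multiplier. The sketch below handles only the single-constraint case (and ignores the so-called hard case), so it illustrates the baseline the paper's two-constraint analysis extends.

```python
import numpy as np

def trs(H, g, delta, tol=1e-10):
    """Solve  min 0.5 x^T H x + g^T x  s.t.  ||x|| <= delta
    (the classical one-ball trust-region subproblem; the paper above
    treats the harder two-constraint variant).  Bisects on lam in
    (H + lam I) x = -g with lam >= max(0, -lambda_min(H)); the
    degenerate 'hard case' is ignored in this sketch."""
    n = len(g)
    lam_min = np.linalg.eigvalsh(H)[0]
    lo = max(0.0, -lam_min) + 1e-12
    x = np.linalg.solve(H + lo*np.eye(n), -g)
    if np.linalg.norm(x) <= delta:      # constraint inactive (or at lo)
        return x
    hi = lo + 1.0                        # bracket the multiplier
    while np.linalg.norm(np.linalg.solve(H + hi*np.eye(n), -g)) > delta:
        hi *= 2.0
    while hi - lo > tol:                 # bisection on lam
        mid = 0.5*(lo + hi)
        x = np.linalg.solve(H + mid*np.eye(n), -g)
        if np.linalg.norm(x) > delta:
            lo = mid
        else:
            hi = mid
    return x
```

Increasing lam shrinks ||x(lam)|| monotonically, which is what makes the bisection valid.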
8.
In this paper, a recursive quadratic programming algorithm for solving equality constrained optimization problems is proposed and studied. The line search functions used are approximations to Fletcher's differentiable exact penalty function. Global convergence and local superlinear convergence results are proved, and some numerical results are given.
9.
A new, robust recursive quadratic programming algorithm model based on a continuously differentiable merit function is introduced. The algorithm is globally and superlinearly convergent, uses automatic rules for choosing the penalty parameter, and can efficiently cope with the possible inconsistency of the quadratic search subproblem. The properties of the algorithm are studied under weak a priori assumptions; in particular, the superlinear convergence rate is established without requiring strict complementarity. The behavior of the algorithm is also investigated in the case where not all of the assumptions are met. The focus of the paper is on theoretical issues; nevertheless, the analysis carried out and the solutions proposed pave the way to new and more robust RQP codes than those presently available.
10.
In this paper, some Newton and quasi-Newton algorithms for the solution of inequality constrained minimization problems are considered. All the algorithms described produce sequences {x_k} converging q-superlinearly to the solution. Furthermore, under mild assumptions, a q-quadratic convergence rate in x is also attained. Other features of these algorithms are that only the solution of linear systems of equations is required at each iteration and that the strict complementarity assumption is never invoked. First, the superlinear or quadratic convergence rate of a Newton-like algorithm is proved. Then, a simpler version of this algorithm is studied, and it is shown that it is superlinearly convergent. Finally, quasi-Newton versions of the previous algorithms are considered and, provided the sequence defined by the algorithms converges, a characterization of superlinear convergence extending the result of Boggs, Tolle, and Wang is given.

This research was supported by the National Research Program "Metodi di Ottimizzazione per le Decisioni", MURST, Roma, Italy.
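The q-quadratic rate referred to above means each error is roughly proportional to the square of the previous one. This is easy to observe in the simplest unconstrained setting (this is only an illustration of the rate, not of the paper's constrained algorithms):

```python
import math

def newton_errors(x0=2.0, iters=6):
    """Newton's method on g(x) = x^2 - 2, recording |x_k - sqrt(2)|.
    Illustrates q-quadratic convergence in one dimension: the number
    of correct digits roughly doubles at every step."""
    root = math.sqrt(2.0)
    x, errs = x0, []
    for _ in range(iters):
        x = x - (x*x - 2.0) / (2.0*x)   # Newton step on g
        errs.append(abs(x - root))
    return errs
```

Printing the error sequence shows it dropping like e, e², e⁴, e⁸, … until machine precision.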
Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号