22.
《Optimization》2012,61(9):1387-1400
Although the Hestenes–Stiefel (HS) method is well known, little research exists on its convergence rate when an inexact line search is used. Recently, Zhang, Zhou and Li [Some descent three-term conjugate gradient methods and their global convergence, Optim. Methods Softw. 22 (2007), pp. 697–711] proposed a three-term Hestenes–Stiefel method for unconstrained optimization problems. In this article, we investigate the convergence rate of this method. We show that, under reasonable conditions, the three-term HS method with the Wolfe line search is n-step superlinearly and even quadratically convergent if a restart technique is used. Numerical results are reported that verify the theoretical results and show the method to be more efficient than previous ones.
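The three-term direction analysed above has a simple closed form: d_{k+1} = -g_{k+1} + beta_k d_k - theta_k y_k with beta_k = g_{k+1}^T y_k / d_k^T y_k and theta_k = g_{k+1}^T d_k / d_k^T y_k, which gives g_{k+1}^T d_{k+1} = -||g_{k+1}||^2 regardless of the line search. A minimal Python sketch (an Armijo backtracking search stands in for the Wolfe conditions analysed in the paper, and all names are illustrative):

```python
import numpy as np

def three_term_hs(f, g, x0, tol=1e-6, max_iter=1000):
    """Three-term Hestenes-Stiefel CG (sketch). `f` and `g` return the
    objective value and gradient; Armijo backtracking replaces the
    Wolfe line search used in the paper's analysis."""
    x = np.asarray(x0, dtype=float)
    gk = g(x)
    d = -gk
    for _ in range(max_iter):
        if np.linalg.norm(gk) < tol:
            break
        alpha = 1.0
        while f(x + alpha * d) > f(x) + 1e-4 * alpha * (gk @ d):
            alpha *= 0.5                      # backtrack until Armijo holds
        x_new = x + alpha * d
        g_new = g(x_new)
        y = g_new - gk
        dy = d @ y
        if abs(dy) < 1e-12:                   # degenerate step: restart
            d = -g_new
        else:
            beta = (g_new @ y) / dy           # Hestenes-Stiefel parameter
            theta = (g_new @ d) / dy          # third-term coefficient
            d = -g_new + beta * d - theta * y  # g_new @ d = -||g_new||^2
        x, gk = x_new, g_new
    return x
```

On a strictly convex quadratic the descent property above makes every step productive even with the inexact search.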
23.
C. Brezinski 《Journal of Computational and Applied Mathematics》2012,236(8):2063-2077
For the solution of full-rank ill-posed linear systems, a new approach based on the Arnoldi algorithm is presented. Working with regularized systems, the method theoretically reconstructs the true solution by computing a suitable matrix function. In this sense, the method can be regarded as an iterative refinement process. Numerical experiments arising from integral equations and interpolation theory are presented. Finally, the method is extended to work with standard Tikhonov regularization when the right-hand side is contaminated by noise.
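The building block of such Arnoldi-based regularization methods is the factorization A V_k = V_{k+1} Hbar_k, which reduces a regularized solve to a small projected problem. The sketch below shows the standard Arnoldi-Tikhonov construction, not the paper's specific matrix-function refinement; function names and the regularization parameter are illustrative:

```python
import numpy as np

def arnoldi(A, b, k):
    """Arnoldi process with modified Gram-Schmidt:
    A @ V[:, :k] = V @ H (H of size (k+1) x k)."""
    n = len(b)
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:              # invariant subspace found
            return V[:, :j + 1], H[:j + 1, :j + 1]
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

def arnoldi_tikhonov(A, b, k, lam):
    """Tikhonov-regularized solution restricted to the Krylov subspace:
    min ||H y - beta e1||^2 + lam ||y||^2, then x = V y."""
    V, H = arnoldi(A, b, k)
    rhs = np.zeros(H.shape[0])
    rhs[0] = np.linalg.norm(b)
    y = np.linalg.solve(H.T @ H + lam * np.eye(H.shape[1]), H.T @ rhs)
    return V[:, :H.shape[1]] @ y
```

The projected problem has dimension k regardless of n, which is what makes the outer refinement affordable.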
24.
Peter Benner Grece El Khoury Miloud Sadkane 《Numerical Linear Algebra with Applications》2014,21(5):645-665
A squared Smith type algorithm for solving large-scale discrete-time Stein equations is developed. The algorithm uses restarted Krylov spaces to compute approximations of the squared Smith iterations in low-rank factored form. Fast convergence results when very few iterations of the alternating direction implicit method are applied to the Stein equation beforehand. The convergence of the algorithm is discussed and its performance is demonstrated by several test examples. Copyright © 2013 John Wiley & Sons, Ltd.
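The iteration being approximated is easy to state in dense form: for the Stein equation X - A X A^T = Q, the squared Smith scheme doubles the number of accumulated terms of X = sum_k A^k Q (A^T)^k at every step by squaring A. A dense prototype (the paper's contribution is the low-rank factored, restarted-Krylov version of this; names are illustrative):

```python
import numpy as np

def squared_smith(A, Q, tol=1e-12, max_sq=50):
    """Dense squared Smith iteration for the Stein equation
    X - A X A^T = Q; converges when the spectral radius of A is < 1."""
    X = Q.copy()
    Ak = A.copy()
    for _ in range(max_sq):
        X_new = X + Ak @ X @ Ak.T   # doubles the number of Smith terms
        Ak = Ak @ Ak                # square the coefficient matrix
        if np.linalg.norm(X_new - X) <= tol * np.linalg.norm(X_new):
            return X_new
        X = X_new
    return X
```

The error after m squarings behaves like rho(A)^(2^m), which is the "squared" convergence the name refers to.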
25.
Zheng Zhaochang, Ren Gexue (Department of Engineering Mechanics, Tsinghua University, Beijing) 《Acta Mechanica Solida Sinica》1996,9(2):95-103
Based on Arnoldi's method, a version of the generalized Arnoldi algorithm has been developed for the reduction of gyroscopic eigenvalue problems. By utilizing the skew symmetry of the system matrix, a very simple recurrence scheme, named the gyroscopic Arnoldi reduction algorithm, has been obtained, which is even simpler than the Lanczos algorithm for symmetric eigenvalue problems. Complex arithmetic is completely avoided. A restart technique is used to give the reduction algorithm iterative characteristics. It has been found that the restart technique is not only effective for the convergence of multiple eigenvalues but also furnishes the reduction algorithm with a means to check for and compute missed eigenvalues. Combined with the restart technique, the algorithm is practical for large-scale gyroscopic eigenvalue problems. Numerical examples are given to demonstrate the effectiveness of the proposed method.
26.
Mohammedi R. Abdel-Aziz 《Numerical Functional Analysis & Optimization》2013,34(3-4):319-336
An algorithm for solving the problem of minimizing a quadratic function subject to an ellipsoidal constraint is introduced. This algorithm is based on the implicitly restarted Lanczos method to construct a basis for the Krylov subspace, in conjunction with a model trust region strategy to choose the step. The trial step is computed on the small-dimensional subspace that lies inside the trust region. One of the main advantages of this algorithm is the way the Krylov subspace is terminated: we introduce a termination condition that allows the gradient to be decreased on that subspace. A convergence theory for this algorithm is presented. It is shown that the algorithm is globally convergent and should cope well with large-scale minimization problems. The theory is sufficiently general that it holds for any algorithm that projects the problem onto a lower-dimensional subspace.
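The paper's algorithm couples a Lanczos basis with a trust-region step; the classical Steihaug-Toint truncated CG is the simplest member of this family of Krylov trust-region solvers and illustrates the same projection-plus-boundary idea (a sketch, not the paper's method; names are illustrative):

```python
import numpy as np

def steihaug_cg(H, g, delta, tol=1e-10, max_iter=100):
    """Truncated CG for the trust-region subproblem
    min 0.5 p^T H p + g^T p  s.t.  ||p|| <= delta."""
    p = np.zeros_like(g)
    r = -g.copy()            # residual of H p = -g
    d = r.copy()
    if np.linalg.norm(r) < tol:
        return p
    for _ in range(max_iter):
        Hd = H @ d
        dHd = d @ Hd
        if dHd <= 0:         # negative curvature: step to the boundary
            return p + _boundary_tau(p, d, delta) * d
        alpha = (r @ r) / dHd
        p_next = p + alpha * d
        if np.linalg.norm(p_next) >= delta:   # leaving the region
            return p + _boundary_tau(p, d, delta) * d
        r_next = r - alpha * Hd
        if np.linalg.norm(r_next) < tol:
            return p_next
        beta = (r_next @ r_next) / (r @ r)
        d = r_next + beta * d
        p, r = p_next, r_next
    return p

def _boundary_tau(p, d, delta):
    # positive root of ||p + tau d|| = delta
    a, b, c = d @ d, 2 * (p @ d), p @ p - delta ** 2
    return (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
```

When the trust region is inactive this reduces to plain CG on the Newton system, which is why the subspace termination rule matters.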
27.
The FEAST eigenvalue algorithm is a subspace iteration algorithm that uses contour integration to obtain the eigenvectors of a matrix for the eigenvalues that are located in any user-defined region in the complex plane. By computing small numbers of eigenvalues in specific regions of the complex plane, FEAST is able to naturally parallelize the solution of eigenvalue problems by solving for multiple eigenpairs simultaneously. The traditional FEAST algorithm is implemented by directly solving collections of shifted linear systems of equations; in this paper, we describe a variation of the FEAST algorithm that uses iterative Krylov subspace algorithms for solving the shifted linear systems inexactly. We show that this iterative FEAST algorithm (which we call IFEAST) is mathematically equivalent to a block Krylov subspace method for solving eigenvalue problems. By using Krylov subspaces indirectly through solving shifted linear systems, rather than directly using them in projecting the eigenvalue problem, it becomes possible to use IFEAST to solve eigenvalue problems using very large dimension Krylov subspaces without ever having to store a basis for those subspaces. IFEAST thus combines the flexibility and power of Krylov methods, requiring only matrix-vector multiplication for solving eigenvalue problems, with the natural parallelism of the traditional FEAST algorithm. We discuss the relationship between IFEAST and more traditional Krylov methods and provide numerical examples illustrating its behavior.
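The FEAST filter is simple to prototype for a real symmetric matrix: approximate the spectral projector by a quadrature rule on a circle, at the cost of one shifted solve per node, then apply Rayleigh-Ritz to the filtered block. The sketch uses dense direct solves; replacing `np.linalg.solve` with an inexact Krylov solver is precisely the IFEAST variation described above. The midpoint rule on the full circle and all parameter names are illustrative:

```python
import numpy as np

def feast_filter(A, Y, center, radius, nquad=16):
    """Approximate spectral projector applied to Y via contour quadrature.
    Each node costs one shifted linear solve (direct here; IFEAST would
    solve these systems inexactly with a Krylov method)."""
    n = A.shape[0]
    Q = np.zeros(Y.shape, dtype=complex)
    for j in range(nquad):
        theta = 2 * np.pi * (j + 0.5) / nquad
        z = center + radius * np.exp(1j * theta)
        Q += (radius * np.exp(1j * theta) / nquad) * \
             np.linalg.solve(z * np.eye(n) - A, Y)
    return Q.real  # A real symmetric: conjugate nodes make the sum real

def feast(A, center, radius, m0=4, iters=8, seed=0):
    """Subspace iteration with the contour filter + Rayleigh-Ritz."""
    rng = np.random.default_rng(seed)
    Y = rng.standard_normal((A.shape[0], m0))
    for _ in range(iters):
        Q, _ = np.linalg.qr(feast_filter(A, Y, center, radius))
        lam, W = np.linalg.eigh(Q.T @ A @ Q)   # Rayleigh-Ritz step
        Y = Q @ W
    inside = np.abs(lam - center) < radius      # keep eigenvalues in the disc
    return lam[inside], Y[:, inside]
```

Because the quadrature turns the projector into a rational filter, eigenvalues inside the contour are amplified and the subspace iteration converges to exactly the wanted eigenpairs.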
28.
Yan-fei Wang 《Acta Mathematicae Applicatae Sinica (English Series)》2003,19(1):31-40
This paper presents a restarted conjugate gradient iterative algorithm for solving ill-posed problems. The damped Morozov discrepancy principle is used as a stopping rule. Numerical experiments are given to illustrate the efficiency of the method.
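A concrete reading of the recipe: run a CG-type iteration on the least-squares problem and stop as soon as the residual falls to the noise level. The sketch below uses plain CGLS without restarts and the undamped discrepancy rule ||A x - b|| <= tau * delta; it illustrates the stopping principle, not the paper's damped variant, and all names are assumptions:

```python
import numpy as np

def cgls_discrepancy(A, b, delta, tau=1.01, max_iter=200):
    """CGLS (CG on the normal equations) stopped by Morozov's
    discrepancy principle, where delta bounds the noise norm."""
    x = np.zeros(A.shape[1])
    r = b.copy()             # residual b - A x
    s = A.T @ r
    p = s.copy()
    norm_s2 = s @ s
    for _ in range(max_iter):
        if np.linalg.norm(r) <= tau * delta:   # discrepancy reached
            break
        q = A @ p
        alpha = norm_s2 / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        norm_s2_new = s @ s
        p = s + (norm_s2_new / norm_s2) * p
        norm_s2 = norm_s2_new
    return x
```

Stopping at the noise level is what prevents the semiconvergent iteration from fitting the noise.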
29.
The problem of finding interior eigenvalues of a large nonsymmetric matrix is examined. A procedure for extracting approximate eigenpairs from a subspace is discussed. It is related to the Rayleigh–Ritz procedure, but is designed for finding interior eigenvalues. Harmonic Ritz values and other approximate eigenvalues are generated. This procedure can be applied to the Arnoldi method, to preconditioning methods, and to other methods for nonsymmetric eigenvalue problems that use the Rayleigh–Ritz procedure. The subject of estimating the boundary of the entire spectrum is briefly discussed, and the importance of preconditioning for interior eigenvalue problems is mentioned. © 1998 John Wiley & Sons, Ltd.
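The harmonic extraction mentioned above can be written down directly: with an orthonormal basis V of the subspace and W = (A - sigma I)V, harmonic Ritz values satisfy (W^T W) y = (theta - sigma)(W^T V) y, which biases the extraction toward eigenvalues near the interior target sigma. A sketch (Arnoldi builds the basis here; sigma and all names are illustrative):

```python
import numpy as np

def harmonic_ritz(A, v0, k, sigma):
    """Harmonic Ritz values of A on a k-dimensional Krylov subspace,
    targeting interior eigenvalues near the shift sigma."""
    n = len(v0)
    # orthonormal Krylov basis via Arnoldi (modified Gram-Schmidt)
    V = np.zeros((n, k))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(k - 1):
        w = A @ V[:, j]
        for i in range(j + 1):
            w -= (V[:, i] @ w) * V[:, i]
        V[:, j + 1] = w / np.linalg.norm(w)
    W = A @ V - sigma * V
    # harmonic projection: (W^T W) y = (theta - sigma) (W^T V) y
    M = np.linalg.solve(W.T @ V, W.T @ W)
    return sigma + np.linalg.eigvals(M)
```

Unlike ordinary Ritz values, these cannot masquerade as spurious interior approximations, which is the point of the procedure.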
30.
Zhongxiao Jia 《Numerical Linear Algebra with Applications》1996,3(6):491-512
The incomplete orthogonalization method (IOM(q)), a truncated version of the full orthogonalization method (FOM) proposed by Saad, has been used for solving large unsymmetric linear systems. However, no convergence analysis has been given. In this paper, IOM(q) is analysed in detail from a theoretical point of view. A number of important results are derived showing how the departure of the matrix A from symmetry affects the basis vectors generated by IOM(q), and some relationships between the residuals of IOM(q) and FOM are established. The results show that IOM(q) behaves much like FOM once the basis vectors it generates are well conditioned. However, it is proved that IOM(q) may generate an ill-conditioned basis for a general unsymmetric matrix, so that IOM(q) may fail to converge or at least cannot behave like FOM. Owing to the mathematical equivalence between IOM(q) and the truncated ORTHORES(q) developed by Young and Jea, insights are also given into the convergence of the latter. A possible strategy is proposed for choosing the parameter q involved in IOM(q). Numerical experiments are reported to show the convergence behaviour of IOM(q) and of its restarted version.
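IOM(q) differs from FOM in a single line: each new Krylov vector is orthogonalized against only the q most recent basis vectors, trading global orthogonality for O(qn) work per step. A sketch (for symmetric A, q = 2 recovers the Lanczos recurrence, which is why the basis stays well conditioned in that case; names are illustrative):

```python
import numpy as np

def iom(A, b, q, m):
    """IOM(q): truncated FOM for A x = b. Each Krylov vector is
    orthogonalized against only the last q basis vectors; the
    approximate solution solves the projected system H_m y = beta e1."""
    n = len(b)
    beta = np.linalg.norm(b)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(max(0, j - q + 1), j + 1):   # only the last q vectors
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:       # lucky breakdown: solution is exact
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m)
    e1[0] = beta
    y = np.linalg.solve(H[:m, :m], e1)   # FOM-like projected solve
    return V[:, :m] @ y
```

With q = 2 the matrix H[:m, :m] is tridiagonal, so the banded structure of H directly reflects the truncation depth.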