1.
In this paper, we address the accuracy of the results for the overdetermined full-rank linear least-squares problem. We recall theoretical results obtained in (SIAM J. Matrix Anal. Appl. 2007; 29(2):413–433) on the conditioning of the least-squares solution and of its individual components when the matrix perturbations are measured in the Frobenius or spectral norm. We then define computable estimates for these condition numbers and interpret them in terms of statistical quantities when the regression matrix and the right-hand side are perturbed. In particular, we show that in the classical linear statistical model, the ratio of the variance of one component of the solution to the variance of the right-hand side is exactly the condition number of that solution component when only perturbations of the right-hand side are considered. We explain how to compute the variance-covariance matrix and the least-squares conditioning using the libraries LAPACK (LAPACK Users' Guide (3rd edn). SIAM: Philadelphia, 1999) and ScaLAPACK (ScaLAPACK Users' Guide. SIAM: Philadelphia, 1997), and we give the corresponding computational cost. Finally, we present a small historical numerical example that was used by Laplace (Théorie Analytique des Probabilités. Mme Ve Courcier, 1820; 497–530) for computing the mass of Jupiter, and a physical application in the area of space geodesy. Copyright © 2008 John Wiley & Sons, Ltd.
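As a rough illustration of the statistical interpretation discussed in this abstract, the following NumPy sketch builds the variance-covariance matrix σ²(AᵀA)⁻¹ of the least-squares solution from a thin QR factorization and reads off a componentwise sensitivity estimate for perturbations of the right-hand side only. The random test problem and the variable names are illustrative assumptions; this is not the LAPACK/ScaLAPACK code described in the paper.

```python
# Minimal NumPy sketch (not the LAPACK/ScaLAPACK implementation described above):
# variance-covariance matrix and componentwise conditioning of a full-rank
# least-squares solution, assuming the classical model b = A x + e, Var(e) = sigma^2 I.
import numpy as np

rng = np.random.default_rng(0)
m, n, sigma = 100, 4, 1e-2
A = rng.standard_normal((m, n))
x_true = np.arange(1.0, n + 1.0)
b = A @ x_true + sigma * rng.standard_normal(m)

# Solve the least-squares problem via the thin QR factorization A = Q R.
Q, R = np.linalg.qr(A)
x = np.linalg.solve(R, Q.T @ b)

# Variance-covariance matrix of x: C = sigma^2 (A^T A)^{-1} = sigma^2 R^{-1} R^{-T}.
Rinv = np.linalg.inv(R)
C = sigma**2 * (Rinv @ Rinv.T)

# Componentwise sensitivity to perturbations of b only: sqrt of the diagonal of
# (A^T A)^{-1}, i.e. sqrt(C_ii) / sigma, mirroring the statistical ratio above.
cond_components = np.sqrt(np.diag(C)) / sigma
print("solution x        :", x)
print("std dev of x_i    :", np.sqrt(np.diag(C)))
print("componentwise conditioning estimates:", cond_components)
```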
2.
The representation tree lies at the heart of the algorithm of Multiple Relatively Robust Representations for computing orthogonal eigenvectors of a symmetric tridiagonal matrix without Gram–Schmidt orthogonalization. A representation tree describes the incremental shift relations between relatively robust representations of eigenvalue clusters of an unreduced tridiagonal matrix, which are needed to strongly separate close eigenvalues in the relative sense. At the bottom of the representation tree, each leaf defines a relatively isolated eigenvalue to high relative accuracy. The shape of the representation tree plays a pivotal role for complexity and available parallelism: a deeper tree consisting of multiple levels of nodes involves tasks associated with more work (i.e., eigenvalue refinement to resolve eigenvalue clusters) and less parallelism (i.e., a longer critical path as well as potential data movement and synchronization). An ideal, embarrassingly parallel tree, on the other hand, consists of a root and leaves only. As highly parallel hybrid graphics processing unit/multicore platforms with large memory become available as commodity hardware, exploiting parallelism in traditional algorithms becomes key to modernizing the components of standard software libraries such as LAPACK. This paper focuses on LAPACK's Multiple Relatively Robust Representations algorithm and investigates the critical case where a representation tree contains a long sequential chain of large (fat) nodes that hamper parallelism. This key problem needs to be addressed as it concerns all sorts of computing environments: distributed computing, symmetric multiprocessors, and hybrid graphics processing unit/multicore architectures. We present an improved representation tree that often offers a significantly shorter critical path and a finer computational granularity of smaller tasks that are easier to schedule. In a study of selected synthetic and application matrices, we show that, on average, a 75% reduction in the length of the critical path and an 82% reduction in task granularity can be achieved. Copyright © 2011 John Wiley & Sons, Ltd.
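For context, here is a minimal SciPy sketch of what the algorithm computes: eigenvalues and orthogonal eigenvectors of a symmetric tridiagonal matrix via LAPACK's MRRR driver, which SciPy exposes through scipy.linalg.eigh_tridiagonal with lapack_driver='stemr'. The random test matrix and the orthogonality check are illustrative assumptions, not part of the paper's study of representation trees.

```python
# Minimal SciPy sketch: orthogonal eigenvectors of a symmetric tridiagonal matrix
# with LAPACK's MRRR driver (?stemr), the routine built on the representation tree
# discussed above.  Test data and checks are illustrative only.
import numpy as np
from scipy.linalg import eigh_tridiagonal

rng = np.random.default_rng(1)
n = 500
d = rng.standard_normal(n)          # diagonal entries
e = rng.standard_normal(n - 1)      # off-diagonal entries

# lapack_driver='stemr' selects the Multiple Relatively Robust Representations path.
w, V = eigh_tridiagonal(d, e, lapack_driver='stemr')

# Orthogonality achieved without Gram-Schmidt: || V^T V - I || should be tiny.
orth_err = np.linalg.norm(V.T @ V - np.eye(n))
print("largest eigenvalue  :", w[-1])
print("orthogonality error :", orth_err)
```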
3.
Theory, algorithms, and LAPACK-style software for computing a pair of deflating subspaces with specified eigenvalues of a regular matrix pair (A, B), together with error bounds for the computed quantities (eigenvalues and eigenspaces), are presented. The reordering of specified eigenvalues is performed with a direct orthogonal transformation method with guaranteed numerical stability. Each swap of two adjacent diagonal blocks in the real generalized Schur form, where at least one of them corresponds to a complex conjugate pair of eigenvalues, involves solving a generalized Sylvester equation and constructing two orthogonal transformation matrices from certain eigenspaces associated with the diagonal blocks. The swapping of two 1×1 blocks is performed using orthogonal (unitary) Givens rotations. The error bounds are based on estimates of condition numbers for eigenvalues and eigenspaces. The software computes reciprocal values of a condition number for an individual eigenvalue (or a cluster of eigenvalues), a condition number for an eigenvector (or eigenspace), and spectral projectors onto a selected cluster; by computing reciprocal values we avoid overflow. Changes in eigenvectors and eigenspaces are measured by their change in angle. The condition numbers yield both asymptotic and global error bounds. The asymptotic bounds are only accurate for small perturbations (E, F) of (A, B), while the global bounds work for all (E, F) up to a certain bound, whose size is determined by the conditioning of the problem. It is also shown how these upper bounds can be estimated. Fortran 77 software that implements our algorithms for reordering eigenvalues, computing (left and right) deflating subspaces with specified eigenvalues, and estimating condition numbers is presented. Computational experiments that illustrate the accuracy, efficiency, and reliability of our software are also described.
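Reordering of the real generalized Schur form of this kind is available in high-level wrappers; the sketch below uses SciPy's ordqz (an assumption of this summary, not the Fortran 77 software described in the paper) to move the eigenvalues inside the unit circle to the leading positions, so that the leading columns of Q and Z span the corresponding pair of deflating subspaces.

```python
# Minimal SciPy sketch (illustrative; not the software described above):
# reorder the real generalized Schur form of a regular pencil (A, B) so that the
# eigenvalues inside the unit circle come first.
import numpy as np
from scipy.linalg import ordqz

rng = np.random.default_rng(2)
n = 6
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

# sort='iuc' selects eigenvalues with |alpha/beta| < 1 (inside the unit circle).
AA, BB, alpha, beta, Q, Z = ordqz(A, B, sort='iuc', output='real')

k = np.count_nonzero(np.abs(alpha) < np.abs(beta))   # number of selected eigenvalues
print("selected eigenvalues:", (alpha / beta)[:k])

# Residuals of the reordered decomposition: A = Q AA Z^T, B = Q BB Z^T.
print("||A - Q AA Z^T|| =", np.linalg.norm(A - Q @ AA @ Z.T))
print("||B - Q BB Z^T|| =", np.linalg.norm(B - Q @ BB @ Z.T))
```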
4.
The null space method is a standard method for solving the linear least squares problem subject to equality constraints (the LSE problem). We show that three variants of the method, including one used in LAPACK that is based on the generalized QR factorization, are numerically stable. We derive two perturbation bounds for the LSE problem: one of standard form that is not attainable, and a bound that yields the condition number of the LSE problem to within a small constant factor. By combining the backward error analysis and perturbation bounds we derive an approximate forward error bound suitable for practical computation. Numerical experiments are given to illustrate the sharpness of this bound.
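To make the method concrete, here is a minimal NumPy reconstruction of the textbook null space method for the LSE problem min ||b - Ax||₂ subject to Bx = d. The function name lse_null_space and the random test data are illustrative assumptions; this is a sketch of the basic variant, not the LAPACK generalized QR code analysed in the paper.

```python
# Minimal NumPy sketch of the null space method for the equality-constrained
# least-squares (LSE) problem  min ||b - A x||_2  subject to  B x = d,
# with B (p x n) of full row rank.  Illustrative reconstruction only.
import numpy as np

def lse_null_space(A, b, B, d):
    p, n = B.shape
    # QR factorization of B^T:  B^T = Q [R; 0],  so  B = [R^T 0] Q^T.
    Q, Rfull = np.linalg.qr(B.T, mode='complete')
    R = Rfull[:p, :p]
    Q1, Q2 = Q[:, :p], Q[:, p:]
    # Writing x = Q1 y1 + Q2 y2, the constraint B x = d fixes y1:  R^T y1 = d.
    y1 = np.linalg.solve(R.T, d)
    # Unconstrained least-squares problem in the null space of B determines y2.
    y2, *_ = np.linalg.lstsq(A @ Q2, b - A @ (Q1 @ y1), rcond=None)
    return Q1 @ y1 + Q2 @ y2

rng = np.random.default_rng(3)
m, n, p = 12, 6, 2
A, B = rng.standard_normal((m, n)), rng.standard_normal((p, n))
b, d = rng.standard_normal(m), rng.standard_normal(p)

x = lse_null_space(A, b, B, d)
print("constraint residual ||B x - d|| =", np.linalg.norm(B @ x - d))
```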
5.
Complex symmetric matrices whose real and imaginary parts are positive definite are shown to have a growth factor bounded by 2 for LU factorization. This result adds to the classes of matrices for which it is known to be safe not to pivot in LU factorization. Block factorization with the pivoting strategy of Bunch and Kaufman is also considered, and it is shown that for such matrices only 1×1 pivots are used and the same growth factor bound of 2 holds, but that interchanges that destroy band structure may be made. The latter results hold whether the pivoting strategy uses the usual absolute value or the modification employed in LINPACK and LAPACK.
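As a quick empirical check of the class of matrices in question, the following NumPy sketch builds A = B + iC with symmetric positive definite B and C, runs Gaussian elimination without pivoting, and measures the growth factor. The construction of the test matrix and the helper growth_factor_no_pivoting are illustrative assumptions, not the paper's analysis.

```python
# Minimal NumPy sketch: measure the growth factor of LU without pivoting for a
# complex symmetric matrix A = B + iC with symmetric positive definite B and C.
import numpy as np

def growth_factor_no_pivoting(A):
    """Gaussian elimination without pivoting; return max_k max|a_ij^(k)| / max|a_ij|."""
    U = np.array(A, dtype=complex)
    n = U.shape[0]
    max0 = np.abs(U).max()
    growth = 1.0
    for k in range(n - 1):
        # Schur complement update of the trailing submatrix.
        U[k+1:, k+1:] -= np.outer(U[k+1:, k] / U[k, k], U[k, k+1:])
        U[k+1:, k] = 0.0
        growth = max(growth, np.abs(U).max() / max0)
    return growth

rng = np.random.default_rng(4)
n = 50
M1, M2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
B = M1 @ M1.T + n * np.eye(n)      # symmetric positive definite real part
C = M2 @ M2.T + n * np.eye(n)      # symmetric positive definite imaginary part
A = B + 1j * C                     # complex symmetric (A = A^T, not Hermitian)

# Per the bound discussed above, the measured growth factor should not exceed 2.
print("growth factor:", growth_factor_no_pivoting(A))
```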
