Similar Documents
20 similar documents found (search time: 31 ms)
1.
This work is concerned with the numerical solution of large-scale linear matrix equations. The most straightforward approach computes the solution X from an mn × mn linear system, typically limiting the feasible values of m, n to a few hundred at most. Our new approach exploits the fact that X can often be well approximated by a low-rank matrix. It combines greedy low-rank techniques with Galerkin projection and preconditioned gradients. In turn, only linear systems of size m × m and n × n need to be solved. Moreover, these linear systems inherit the sparsity of the coefficient matrices, which allows us to address linear matrix equations as large as m = n = O(10^5). Numerical experiments demonstrate that the proposed methods perform well for generalized Lyapunov equations. Even for the case of standard Lyapunov equations, our methods can be advantageous, as we do not need to assume that C has low rank. Copyright © 2015 John Wiley & Sons, Ltd.
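The low-rank approximability that the method exploits can be observed directly on a small example. The sketch below (an arbitrary stable test matrix, not the authors' benchmark) solves a standard Lyapunov equation with SciPy and checks the numerical rank of the solution:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# small stable test matrix (hypothetical data, for illustration only)
n = 100
rng = np.random.default_rng(0)
A = -2.0 * np.eye(n) + np.diag(np.ones(n - 1), 1)   # upper bidiagonal, eigenvalues -2
c = rng.standard_normal((n, 1))                     # rank-1 right-hand-side factor
X = solve_continuous_lyapunov(A, -c @ c.T)          # solves A X + X A^T = -c c^T
s = np.linalg.svd(X, compute_uv=False)
numerical_rank = int(np.sum(s > 1e-8 * s[0]))       # far smaller than n
```

Although X is a dense n × n matrix, its singular values decay rapidly, which is exactly what makes greedy low-rank approximation of X feasible.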

2.
Let k(·,·) be a continuous kernel defined on Ω × Ω, with Ω a compact subset of ℝ^d, d ≥ 1, and consider the integral operator K from C(Ω) into C(Ω) (the set of continuous functions on Ω) defined by (Ku)(x) = ∫_Ω k(x, y) u(y) dy. K is a compact operator, and therefore its spectrum forms a bounded sequence having zero as its unique accumulation point. Here, we first consider in detail the approximation of K by the rectangle formula in the case where Ω = [0,1] and the step is h = 1/n. The related linear application can be represented as a matrix A_n of size n. In accordance with the compact character of the continuous operator, we prove that {A_n} ~_σ 0 and {A_n} ~_λ 0, that is, the considered sequence has singular values and eigenvalues clustered at zero. Moreover, the cluster is strong, in perfect analogy with the compactness of K. Several generalizations are sketched, with special attention to the general case of pure sampling sequences, and a few examples and numerical experiments are critically discussed, including the use of GMRES and preconditioned GMRES for large linear systems coming from the numerical approximation of integral equations of the above form with kernel k and datum g(x). Copyright © 2014 John Wiley & Sons, Ltd.
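The clustering at zero is easy to reproduce: discretize a smooth kernel on [0,1] by the rectangle rule and count how many singular values of the resulting matrix exceed a threshold (the kernel below is an arbitrary smooth choice, not one from the paper):

```python
import numpy as np

n = 200
h = 1.0 / n
x = np.arange(n) * h                                  # rectangle-rule nodes, step h = 1/n
K = np.exp(-np.abs(x[:, None] - x[None, :]))          # smooth test kernel k(x, y)
An = h * K                                            # matrix representing the discretized operator
sv = np.linalg.svd(An, compute_uv=False)
big = int(np.sum(sv > 1e-3))                          # only a handful exceed the threshold
```

Almost all singular values of A_n fall below any fixed threshold as n grows, mirroring the compactness of the continuous operator.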

3.
Let n = (n1, n2, …, nk) and α = (α1, α2, …, αk) be integer k-tuples with αi ∈ {1, 2, …, ni − 1} for all i = 1, 2, …, k. Multilevel block α-circulants are (k + 1)-level block matrices, where the first k levels have the block αi-circulant structure and the entries in the (k + 1)-st level are unstructured rectangular matrices of the same size. When k = 1, Trench discussed, in his paper "Inverse problems for unilevel block α-circulants," the Procrustes problems and inverse problems of unilevel block α-circulants and their approximations. But the results are not complete for the case gcd(α, n) > 1 (i.e., gcd(α1, n1) > 1). In this paper, we also discuss Procrustes problems for multilevel block α-circulants. Our results further make up for this deficiency when k = 1. Under suitable conditions, inverse eigenproblems for this kind of matrix are also solved. By using the related results, we can design an artificial Hopfield neural network system that possesses the prescribed equilibria, where the Jacobian matrix of this system has the constrained multilevel α-circulant structure. Finally, some examples are employed to illustrate the effectiveness of the proposed results. Copyright © 2016 John Wiley & Sons, Ltd.
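For orientation, the simplest special case (k = 1, α = 1) is an ordinary circulant matrix, whose action is a circular convolution computable via the FFT; the sketch below verifies this classical fact on a small example:

```python
import numpy as np

n = 8
c = np.arange(1.0, n + 1.0)                  # first column of the circulant
C = np.empty((n, n))
for j in range(n):
    C[:, j] = np.roll(c, j)                  # column j is the j-fold cyclic shift
v = np.random.default_rng(1).standard_normal(n)
direct = C @ v                               # O(n^2) matrix-vector product
fast = np.fft.ifft(np.fft.fft(c) * np.fft.fft(v)).real   # O(n log n) via FFT
```

The α-circulant and multilevel cases studied in the paper generalize this structure, replacing the cyclic shift by an α-stride shift on each level.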

4.
Standard numerical algorithms, such as the fast multipole method or ℋ-matrix schemes, rely on low-rank approximations of the underlying kernel function. For high-frequency problems, the ranks grow rapidly as the mesh is refined, and standard techniques are no longer attractive. Directional compression techniques solve this problem by using decompositions based on plane waves. Taking advantage of hierarchical relations between these waves' directions, an efficient approximation is obtained. This paper is dedicated to directional ℋ²-matrices that employ local low-rank approximations to handle directional representations efficiently. The key result is an algorithm that takes an arbitrary matrix and finds a quasi-optimal approximation of this matrix as a directional ℋ²-matrix using a prescribed block tree. The algorithm can reach any given accuracy, and the storage requirements of the approximation are governed by the matrix dimension n, the wave number κ, and the local rank k; in particular, the complexity is almost linear in n if κ is constant, and remains favorable for high-frequency problems characterized by κ² ∼ n. Because the algorithm can be applied to arbitrary matrices, it can serve as the foundation of fast techniques for constructing preconditioners.

5.
As proposed by R. H. Chan and M. K. Ng (1993), linear systems of the form T[f]x = b, where T[f] denotes the n × n Toeplitz matrix generated by the function f, can be solved using iterative solvers with T[f⁻¹] as a preconditioner. This article aims at generalizing this approach to the case of Toeplitz-block matrices and matrix-valued generating functions F. We prove that if F is Hermitian positive definite, most eigenvalues of the preconditioned matrix T[F⁻¹]T[F] are clustered around one. Numerical experiments demonstrate the performance of this preconditioner.
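The scalar case can be illustrated numerically: build T[f] and T[1/f] for a positive symbol from their Fourier coefficients and inspect the spectrum of the preconditioned matrix. The symbol 2 + cos θ below is an arbitrary positive example, not one from the article:

```python
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_from_symbol(f, n, m=4096):
    """Toeplitz matrix T[f] whose entries are the Fourier coefficients of f."""
    theta = 2.0 * np.pi * np.arange(m) / m
    a = np.fft.fft(f(theta)) / m           # a[k] ~ (1/2pi) * integral of f(t) e^{-ikt}
    return toeplitz(np.real(a[:n]))        # f real and even => real coefficients

n = 64
f = lambda t: 2.0 + np.cos(t)              # positive generating function
Tf = toeplitz_from_symbol(f, n)
P = toeplitz_from_symbol(lambda t: 1.0 / f(t), n)   # preconditioner T[1/f]
ev = np.linalg.eigvals(P @ Tf).real
outliers = int(np.sum(np.abs(ev - 1.0) > 0.1))      # most eigenvalues cluster at 1
```

All eigenvalues stay inside [min f · min(1/f), max f · max(1/f)], and only a handful deviate visibly from one, which is what makes the preconditioned iteration converge fast.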

6.
The rational Arnoldi process is a popular method for the computation of a few eigenvalues of a large non-Hermitian matrix and for the approximation of matrix functions. The method is particularly attractive when the rational functions that determine the process have only a few distinct poles, because then only a few factorizations of matrices of the form A − zjI have to be computed. We discuss recursion relations for orthogonal bases of rational Krylov subspaces determined by rational functions with few distinct poles. These recursion formulas yield a new implementation of the rational Arnoldi process. Applications of the rational Arnoldi process to the approximation of matrix functions as well as to the computation of eigenvalues and pseudospectra of A are described. The new implementation is compared to several available implementations.
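A minimal illustration of the single-pole case: shift-and-invert Arnoldi builds an orthonormal basis of the rational Krylov space K_m((A − z₁I)⁻¹, b), and its Ritz values, mapped back through the pole, approximate the eigenvalues of A nearest z₁ (toy diagonal matrix, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(6)
n, m = 100, 10
A = np.diag(np.arange(1.0, n + 1.0))        # toy matrix with known spectrum 1..100
b = rng.standard_normal(n)
z1 = 3.2                                    # single pole near the wanted eigenvalues
S = np.linalg.inv(A - z1 * np.eye(n))       # in practice: one LU factorization, reused
Q = np.zeros((n, m + 1)); H = np.zeros((m + 1, m))
Q[:, 0] = b / np.linalg.norm(b)
for j in range(m):                          # standard Arnoldi applied to S
    w = S @ Q[:, j]
    for i in range(j + 1):
        H[i, j] = Q[:, i] @ w
        w -= H[i, j] * Q[:, i]
    H[j + 1, j] = np.linalg.norm(w)
    Q[:, j + 1] = w / H[j + 1, j]
theta = np.linalg.eigvals(H[:m, :m])
ritz = z1 + 1.0 / theta                     # map Ritz values of S back to eigenvalues of A
```

After only m = 10 steps, the eigenvalues of A closest to the pole (here 3 and 4) are resolved accurately; the paper's recursions organize exactly this kind of computation when several distinct poles are used.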

7.
In this paper, we propose and analyze numerical algorithms for the fast solution of periodic elliptic problems in random media in ℝ^d, d = 2, 3. Both the two-dimensional (2D) and three-dimensional (3D) elliptic problems are considered for jumping equation coefficients built as a checkerboard-type configuration of bumps randomly distributed on a large L × L or L × L × L lattice, respectively. The finite element discretization procedure on a 3D n × n × n uniform tensor grid is described in detail, and the Kronecker tensor-product approach is proposed for fast generation of the stiffness matrix. We introduce tensor techniques for the construction of a low-Kronecker-rank spectrally equivalent preconditioner in the periodic setting, to be used in the framework of the preconditioned conjugate gradient iteration. The discrete 3D periodic Laplacian pseudo-inverse is first diagonalized in the Fourier basis, and then the diagonal matrix is reshaped into a fully populated third-order tensor of size n × n × n. The latter is approximated by a low-rank canonical tensor by using the multigrid Tucker-to-canonical tensor transform. As an example, we apply the presented solver in the numerical analysis of a stochastic homogenization method, where the 3D elliptic equation has to be solved many hundreds of times, and where for every random sampling of the equation coefficient one has to construct a new stiffness matrix and right-hand side. The computational characteristics of the presented solver in terms of the lattice parameter L and the grid size n^d in both the 2D and 3D cases are illustrated in numerical tests. Our solver can be used in various applications where the elliptic problem has to be solved for a number of different coefficients, for example, in many-particle dynamics, protein docking problems, or stochastic modeling.
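The Kronecker tensor-product structure used for fast stiffness-matrix generation can be sketched for the simplest model operator: the 3D finite-difference Laplacian on an n × n × n grid is a sum of three Kronecker products of 1D matrices (a generic sketch, not the paper's FEM assembly):

```python
import numpy as np
import scipy.sparse as sp

n = 10
I = sp.identity(n, format="csr")
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")  # 1D stencil
# Kronecker-sum assembly of the 3D operator on the n^3 grid
A3 = sp.kron(sp.kron(T, I), I) + sp.kron(sp.kron(I, T), I) + sp.kron(sp.kron(I, I), T)
# its eigenvalues are sums of 1D eigenvalues 2 - 2cos(j*pi/(n+1)), j = 1..n
lam_min = 3.0 * (2.0 - 2.0 * np.cos(np.pi / (n + 1)))
```

Only the small 1D factors are ever formed explicitly; the n³ × n³ operator is assembled (or applied) through Kronecker products, which is what makes repeated stiffness-matrix generation cheap.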

8.
In this paper, we consider efficient algorithms for solving the algebraic equation 𝒜^α u = f, 0 < α < 1, where 𝒜 is a properly scaled symmetric and positive definite matrix obtained from finite difference or finite element approximations of second-order elliptic problems in ℝ^d, d = 1, 2, 3. The solution is then written as u = 𝒜^{−β}(𝒜^{β−α} f) with β a positive integer. The approximate solution method we propose and study is based on the best uniform rational approximation of the function t^{β−α} for 0 < t ≤ 1 and on the assumption that one has at hand an efficient method (e.g., multigrid, multilevel, or other fast algorithms) for solving equations such as (𝒜 + cI)u = f, c ≥ 0. The provided numerical experiments confirm the efficiency of the proposed algorithms.
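The object being approximated can be made concrete on a small dense example: for an SPD matrix scaled so that its spectrum lies in (0, 1], the reference solution of 𝒜^α u = f is available through the eigendecomposition. This is the benchmark that the rational-approximation solver avoids computing for large problems:

```python
import numpy as np

n = 50
T = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Laplacian, SPD
A = T / 4.0                                              # scale: spectrum in (0, 1]
alpha = 0.5
f = np.ones(n)
w, V = np.linalg.eigh(A)
u = V @ ((V.T @ f) / w**alpha)           # reference solution of A^alpha u = f
recovered = V @ ((V.T @ u) * w**alpha)   # apply A^alpha to check consistency
```

The eigendecomposition costs O(n³) and is infeasible at scale; replacing t^{β−α} by its best uniform rational approximation turns the same computation into a short sequence of shifted solves (𝒜 + cI)u = f.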

9.
The preconditioned inverse iteration is an efficient method to compute the smallest eigenpair of a symmetric positive definite matrix M. Here we use this method to find the smallest eigenvalues of a hierarchical matrix. The storage complexity of the data-sparse ℋ-matrices is almost linear. We use ℋ-arithmetic to precondition with an approximate inverse of M or an approximate Cholesky decomposition of M. In general, ℋ-arithmetic is of linear-polylogarithmic complexity, so the computation of one eigenvalue is cheap. We extend the ideas to the computation of inner eigenvalues by computing an invariant subspace S of (M − μI)² by subspace preconditioned inverse iteration. The eigenvalues of the generalized matrix Rayleigh quotient μ_M(S) are the desired inner eigenvalues of M. The idea of using (M − μI)² instead of M is known as the folded spectrum method. As we rely on the positive definiteness of the shifted matrix, we cannot simply apply shifted inverse iteration instead. Numerical results substantiate the convergence properties and show that the computation of the eigenvalues is superior to existing algorithms for non-sparse matrices. Copyright © 2012 John Wiley & Sons, Ltd.
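The folded spectrum idea is easy to demonstrate with dense linear algebra: inverse iteration on (M − μI)² converges to the eigenvector whose eigenvalue lies closest to the shift μ (random SPD test matrix with a known spectrum; an ℋ-matrix code would replace the dense solves by preconditioned ones):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
lam = np.linspace(1.0, 10.0, n)
M = (Q * lam) @ Q.T                      # SPD matrix with known eigenvalues lam
mu = 4.03                                # shift targeting an inner eigenvalue
B = (M - mu * np.eye(n)) @ (M - mu * np.eye(n))   # folded matrix (M - mu I)^2
x = rng.standard_normal(n)
for _ in range(50):                      # inverse iteration on the folded matrix
    x = np.linalg.solve(B, x)
    x /= np.linalg.norm(x)
theta = x @ M @ x                        # Rayleigh quotient: inner eigenvalue near mu
```

Folding maps the eigenvalue nearest μ to the smallest eigenvalue of the SPD matrix B, so positive-definiteness-based iterations apply even though M − μI itself is indefinite.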

10.
We consider linear vibrational systems described by a system of second-order differential equations of the form Mẍ + Dẋ + Kx = 0, where M and K are positive definite matrices representing mass and stiffness, respectively. The damping matrix D is assumed to be positive semidefinite. We are interested in finding an optimal damping matrix that will damp a certain (critical) part of the eigenfrequencies. For this, we use an optimization criterion based on the minimization of the average total energy of the system. This is equivalent to the minimization of the trace of the solution of the corresponding Lyapunov equation AX + XAᵀ = −GGᵀ, where A is the matrix obtained from linearizing the second-order differential equation and G depends on the critical part of the eigenfrequencies to be damped. The main result is an efficient approximation, with a corresponding error bound, for the trace of the solution of the Lyapunov equation obtained through dimension reduction, which includes the influence of the right-hand side GGᵀ and allows us to control the accuracy of the trace approximation. This trace approximation yields a greatly accelerated optimization algorithm for determining the optimal damping. Copyright © 2011 John Wiley & Sons, Ltd.
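Once the Lyapunov equation is solved, the optimization criterion itself is a one-liner; the sketch below evaluates trace(X) for a small stable test matrix (arbitrary data, not a real vibrational system), which is the quantity the paper approximates cheaply through dimension reduction:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

n = 20
A = -np.eye(n) + 0.5 * np.diag(np.ones(n - 1), -1)   # stable test linearization
G = np.ones((n, 2))                                  # excites the "critical" modes
X = solve_continuous_lyapunov(A, -G @ G.T)           # A X + X A^T = -G G^T
energy = np.trace(X)                                 # average total energy criterion
```

The dense solve costs O(n³) per damping candidate; since the optimization evaluates trace(X) for many candidate damping matrices, the paper's reduced-order trace approximation is what makes the outer optimization practical.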

11.
We compare some alternative methods for computing solutions of underdetermined linear systems Ax = b. Each method involves solving an associated system with a different nonsingular coefficient matrix. We obtain bounds on the condition numbers of these nonsingular matrices and test the methods on numerical examples. We discuss implications for computing eigenvector derivatives and make some recommendations.
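One natural "associated nonsingular system" can be written down immediately when A has full row rank: solving AAᵀy = b and setting x = Aᵀy reproduces the minimum-norm solution that a least-squares solver returns (a generic illustration, not necessarily one of the paper's methods):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 7))                  # underdetermined: 3 equations, 7 unknowns
b = rng.standard_normal(3)
x_lstsq = np.linalg.lstsq(A, b, rcond=None)[0]   # minimum-norm solution
y = np.linalg.solve(A @ A.T, b)                  # associated nonsingular 3x3 system
x_normal = A.T @ y
```

The catch, and the subject of the paper's analysis, is that the condition number of the associated matrix (here AAᵀ, which squares cond(A)) differs between formulations.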

12.
We generalize the matrix Kronecker product to tensors and propose the tensor Kronecker product singular value decomposition, which decomposes a real k-way tensor into a linear combination of tensor Kronecker products with an arbitrary number d of factors. We show how to construct this decomposition, where each factor is also a k-way tensor, thus including matrices (k = 2) as a special case. The problem is readily solved by reshaping and permuting the original tensor into a d-way tensor, followed by an orthogonal polyadic decomposition. Moreover, we introduce the new notion of general symmetric tensors (encompassing symmetric, persymmetric, centrosymmetric, Toeplitz, and Hankel tensors, etc.) and prove that when the tensor is structured, its Kronecker factors inherit this structure as well.
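In the matrix special case (k = 2, d = 2) the construction reduces to the classical Van Loan–Pitsianis rearrangement: reshaping A so that each block becomes a row turns the nearest-Kronecker-product problem into a rank-1 SVD truncation, as the sketch below checks on an exact Kronecker product:

```python
import numpy as np

rng = np.random.default_rng(3)
m1, n1, m2, n2 = 3, 3, 2, 2
B = rng.standard_normal((m1, n1))
C = rng.standard_normal((m2, n2))
A = np.kron(B, C)
# rearrangement: row (i, j) of R holds the (i, j) block of A, flattened
R = A.reshape(m1, m2, n1, n2).transpose(0, 2, 1, 3).reshape(m1 * n1, m2 * n2)
U, s, Vt = np.linalg.svd(R)                     # R has rank 1 for an exact product
Bh = (np.sqrt(s[0]) * U[:, 0]).reshape(m1, n1)
Ch = (np.sqrt(s[0]) * Vt[0]).reshape(m2, n2)
```

The tensor version of the paper follows the same pattern: a reshape-and-permute step followed by a (polyadic) decomposition of the rearranged object.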

13.
The local convergence of alternating optimization methods with overrelaxation for low-rank matrix and tensor problems is established. The analysis is based on the linearization of the method, which takes the form of an SOR iteration for a positive semidefinite Hessian and can be studied in the corresponding quotient geometry of equivalent low-rank representations. In the matrix case, the optimal relaxation parameter for accelerating the local convergence can be determined from the convergence rate of the standard method. This result relies on a version of Young's SOR theorem for positive semidefinite 2 × 2 block systems.
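The matrix-case conclusion mirrors classical SOR theory, which can be demonstrated on the model Poisson problem: Young's formula ω* = 2/(1 + √(1 − ρ_J²)), derived from the rate ρ_J of the standard (Jacobi) method, sharply accelerates the ω = 1 iteration (a classical illustration, not the paper's low-rank setting):

```python
import numpy as np

n = 30
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # model Poisson matrix
b = np.ones(n)
x_star = np.linalg.solve(A, b)
rho_j = np.cos(np.pi / (n + 1))                      # Jacobi spectral radius
omega_opt = 2.0 / (1.0 + np.sqrt(1.0 - rho_j**2))    # Young's optimal parameter

def sor_sweeps(omega, tol=1e-8, maxit=10000):
    x = np.zeros(n)
    for k in range(1, maxit + 1):
        for i in range(n):
            s = b[i] - A[i] @ x + A[i, i] * x[i]     # b_i - sum_{j != i} a_ij x_j
            x[i] = (1.0 - omega) * x[i] + omega * s / A[i, i]
        if np.linalg.norm(x - x_star) < tol:
            return k                                 # sweeps needed to reach tol
    return maxit

it_gs, it_opt = sor_sweeps(1.0), sor_sweeps(omega_opt)
```

The paper's contribution is a version of exactly this relationship (optimal ω from the standard rate) for the semidefinite block systems arising from linearized alternating low-rank optimization.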

14.
We consider the B‐spline isogeometric analysis approximation of the Laplacian eigenvalue problem −Δu = λu over the d‐dimensional hypercube (0,1)d. By using tensor‐product arguments, we show that the eigenvalue–eigenvector structure of the resulting discretization matrix is completely determined by the eigenvalue–eigenvector structure of the matrix arising from the isogeometric analysis approximation, based on B‐splines of degree p, of the unidimensional problem. Here, n is the mesh fineness parameter, and the size of this matrix is N(n,p) = n + p − 2. In previous works, it was established that the normalized sequence of these matrices enjoys an asymptotic spectral distribution described by a function ep(θ), the so‐called spectral symbol. The contributions of this paper can be summarized as follows. We prove an asymptotic expansion of the eigenvalues in terms of the symbol, in which:
  1. the eigenvalues are arranged in ascending order;
  2. the expansion coefficients form a sequence of functions from [0,π] to ℝ, which depend only on p;
  3. h = 1/n and θj,n = jπh for j = 1,…,n; and
  4. the remainder satisfies a bound with a constant depending only on the expansion order α and on p. We also provide a proof of this expansion for α = 0 and j = 1,…,N(n,p) − (4p − 2), where 4p − 2 represents a theoretical estimate of the number of outliers.
  5. We show through numerical experiments that, for p ≥ 3 and k ≥ 1, there exists a point θ( p,k) ∈ (0,π) such that the k‐th expansion function vanishes on [0,θ( p,k)]. Moreover, as suggested by the numerics of this paper, the infimum of θ( p,k) over all k ≥ 1, say yp, is strictly positive, and the truncated expansion holds numerically whenever θj,n < θ( p), where θ( p) is a point in (0,yp] which grows with p.
  6. For p ≥ 3, based on the asymptotic expansion in items 1–4, we propose a parallel interpolation–extrapolation algorithm for computing the eigenvalues, excluding the outliers. The performance of the algorithm is illustrated through numerical experiments. Note that, by item 5, the algorithm is actually not necessary for computing the eigenvalues corresponding to points θj,n < θ( p).

15.
We describe a randomized Krylov-subspace method for estimating the spectral condition number of a real matrix A, or for indicating that it is numerically rank deficient. The main difficulty in estimating the condition number is the estimation of the smallest singular value σ_min of A. Our method estimates this value by solving a consistent linear least squares problem with a known solution using a specific Krylov-subspace method called LSQR. In this method, the forward error tends to concentrate in the direction of a right singular vector corresponding to σ_min. Extensive experiments show that the method estimates the condition number of a wide array of matrices well. It can sometimes estimate the condition number when running a dense singular value decomposition would be impractical due to the computational cost or the memory requirements. The method uses very little memory (it inherits this property from LSQR), and it works equally well on square and rectangular matrices.
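SciPy's LSQR already maintains a running condition estimate along related lines, and the forward-error ratio ‖Ae‖/‖e‖ used below is always a lower bound for... rather, always at least σ_min. The sketch (arbitrary ill-conditioned test matrix, not the paper's exact algorithm) solves a consistent system with known solution and inspects both quantities:

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(4)
A = rng.standard_normal((200, 50)) @ np.diag(np.logspace(0, -3, 50))  # ill-conditioned
w = np.ones(50)                          # known solution of the consistent system
out = lsqr(A, A @ w, atol=1e-14, btol=1e-14, iter_lim=10000)
x, acond = out[0], out[6]                # acond: LSQR's running condition estimate
err = x - w                              # forward error
sigma_min_lower = np.linalg.norm(A @ err) / np.linalg.norm(err)  # >= sigma_min always
```

Because the forward error concentrates in the small singular directions, the ratio tends to land near σ_min rather than merely above it, which is the mechanism the paper exploits.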

16.
In the present article, we consider a class of elliptic partial differential equations with Dirichlet boundary conditions, where the operator is −div(a( x )∇·), with a continuous and positive over Ω, Ω being an open and bounded subset of R d, d ≥ 1. For the numerical approximation, we consider the classical P k Finite Elements, in the case of Friedrichs–Keller triangulations, leading, as usual, to sequences of matrices of increasing size. The new results concern the spectral analysis of the resulting matrix-sequences in the direction of the global distribution in the Weyl sense, with a concise overview of localization, clustering, extremal eigenvalues, and asymptotic conditioning. We study in detail the case of constant coefficients on Ω=(0,1)2 and we give a brief account of the more involved case of variable coefficients and more general domains. Tools are drawn from the Toeplitz technology and from the rather new theory of Generalized Locally Toeplitz sequences. Numerical results are shown as practical evidence of the theoretical findings.
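For the 1D constant-coefficient model the link to Toeplitz symbols is explicit: the scaled P1 stiffness matrix is the Toeplitz matrix generated by f(θ) = 2 − 2cos θ, and its eigenvalues sample f exactly on a uniform grid, which is the simplest instance of the Weyl-type distribution results the article generalizes:

```python
import numpy as np

n = 100
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # h * (P1 stiffness), 1D model
ev = np.sort(np.linalg.eigvalsh(K))
theta = np.arange(1, n + 1) * np.pi / (n + 1)           # uniform grid in (0, pi)
symbol_samples = 2.0 - 2.0 * np.cos(theta)              # f(theta) = 2 - 2cos(theta)
```

In the variable-coefficient and multidimensional settings this exact sampling is lost, but the Generalized Locally Toeplitz machinery recovers the same symbol-based description asymptotically.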

17.
In this paper, we consider the exact/approximate general joint block diagonalization (GJBD) problem of a matrix set {A_i}_{i=0}^p (p ≥ 1), where a nonsingular matrix W (often referred to as a diagonalizer) needs to be found such that the matrices W^H A_i W are all exactly/approximately block-diagonal with as many diagonal blocks as possible. We show that a diagonalizer of the exact GJBD problem can be given by W = [x1, x2, …, xn]Π, where Π is a permutation matrix and the xi are eigenvectors of the matrix polynomial P(λ) = Σ_{i=0}^p λ^i A_i, satisfying that [x1, x2, …, xn] is nonsingular and the geometric multiplicity of each eigenvalue λi corresponding with xi is equal to 1. In addition, the equivalence of all solutions to the exact GJBD problem is established. Moreover, a theoretical proof is given to show why the approximate GJBD problem can be solved similarly to the exact GJBD problem. Based on the theoretical results, a three-stage method is proposed, and numerical results show the merits of the method.
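The flavor of the eigenvector-based construction can be seen in the simplest instance p = 1 with a symmetric-definite pair, where the generalized eigenvectors diagonalize both matrices by congruence (all diagonal blocks of size 1); this is only a special case of the general GJBD setting:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(7)
n = 5
M = rng.standard_normal((n, n))
A1 = M @ M.T + n * np.eye(n)            # symmetric positive definite
S = rng.standard_normal((n, n))
A0 = S + S.T                            # symmetric
lam, W = eigh(A0, A1)                   # generalized eigenvectors of A0 x = lam A1 x
D0 = W.T @ A0 @ W                       # becomes diag(lam)
D1 = W.T @ A1 @ W                       # becomes the identity
```

With repeated eigenvalues (geometric multiplicity greater than 1), only block diagonalization is achievable, which is precisely the situation the GJBD framework addresses.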

18.
For studying the spectral properties of a nonnormal matrix A ∈ ℂ^{n×n}, information about its spectrum σ(A) alone is usually not enough. Effects of perturbations on σ(A) can be studied by computing ε-pseudospectra, i.e., the level sets of the resolvent norm function g(z) = ‖(zI − A)^{−1}‖₂. The computation of ε-pseudospectra requires determining the smallest singular values σ_min(zI − A) for all z on a portion of the complex plane. In this work, we propose a reduced basis approach to pseudospectra computation, which provides highly accurate estimates of pseudospectra in the region of interest, in particular for pseudospectra estimates in isolated parts of the spectrum containing few eigenvalues of A. It incorporates the sampled singular vectors of zI − A for different values of z and implicitly exploits their smoothness properties. It provides rigorous upper and lower bounds for the pseudospectra in the region of interest. In addition, we propose a domain splitting technique for tackling numerically more challenging examples. We present a comparison of our algorithms to several existing approaches on a number of numerical examples, showing that our approach provides a significant improvement in terms of computational time.
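The baseline that the reduced-basis method accelerates is the grid computation of σ_min(zI − A); the toy below samples it for a small nonnormal matrix, where nonnormality makes the ε = 0.1 pseudospectrum much larger than two ε-disks around the eigenvalues 1 and 2:

```python
import numpy as np

A = np.array([[1.0, 5.0], [0.0, 2.0]])        # nonnormal: strong off-diagonal coupling
xs = np.linspace(0.0, 3.0, 40)
ys = np.linspace(-1.5, 1.5, 41)
smin = np.empty((41, 40))
for i, yv in enumerate(ys):
    for j, xv in enumerate(xs):
        z = xv + 1j * yv
        smin[i, j] = np.linalg.svd(z * np.eye(2) - A, compute_uv=False)[-1]
inside = smin < 0.1                           # grid points in the 0.1-pseudospectrum
```

Every grid point requires a smallest-singular-value computation; the reduced-basis approach replaces these by cheap projected problems with rigorous two-sided bounds.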

19.
We present a successive projection method for solving the unbalanced Procrustes problem: given matrices A ∈ R^{n×n} and B ∈ R^{n×k}, n > k, minimize the residual ‖AQ − B‖F subject to the orthonormality constraint QᵀQ = Ik on the variable Q ∈ R^{n×k}. The presented algorithm consists of solving k least squares problems with quadratic constraints and an expanded balance problem at each sweep. We give a detailed convergence analysis. Numerical experiments reported in this paper show that our new algorithm is superior to other existing methods.
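For contrast with the unbalanced case, the balanced problem k = n has a closed-form answer: the minimizer of ‖AQ − B‖_F over orthogonal Q is Q = UVᵀ from the SVD of AᵀB, which makes a handy correctness check:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 6
A = rng.standard_normal((n, n))
Q_true, _ = np.linalg.qr(rng.standard_normal((n, n)))
B = A @ Q_true                          # consistent data: the optimal residual is 0
U, s, Vt = np.linalg.svd(A.T @ B)
Q = U @ Vt                              # closed-form balanced Procrustes solution
```

No such closed form is available when n > k, which is why iterative schemes like the paper's successive projection method are needed.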

20.
This paper describes and develops a fast and accurate path-following algorithm that computes the field of values boundary curve ∂F(A) for every conceivable complex or real square matrix A. It relies on the matrix flow decomposition algorithm that finds a proper block-diagonal flow representation for the associated Hermitian matrix flow 𝓕_A(t) = cos(t)H + sin(t)K under unitary similarity, if that is possible. Here 𝓕_A(t) is the 1-parameter-varying linear combination of the Hermitian and skew parts H = (A + A*)/2 and K = (A − A*)/(2i) of A. For indecomposable matrix flows, 𝓕_A(t) has just one block, and the ZNN-based field of values algorithm works with 𝓕_A(t) directly. For decomposing flows 𝓕_A(t), the algorithm decomposes the given matrix A unitarily into block-diagonal form U*AU = diag(A_j) with j > 1 diagonal blocks A_j whose individual sizes add up to the size of A. It then computes the field of values boundaries separately for each diagonal block A_j using the path-following ZNN eigenvalue method. The convex hull of all sub-field of values boundary points ∂F(A_j) finally determines the field of values boundary curve correctly for decomposing matrices A. The algorithm removes standard restrictions for path-following FoV methods, which generally cannot deal with decomposing matrices A due to possible eigencurve crossings of 𝓕_A(t). Tests and numerical comparisons are included.
Our ZNN-based method is coded for sequential and parallel computations. Both versions run very accurately and fast when compared with Johnson's Francis QR eigenvalue and Bendixson rectangle based method, which computes global eigenanalyses of 𝓕_A(t_k) for large discrete sets of angles t_k ∈ [0, 2π] more slowly.
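Johnson's classical recipe, which the paper builds on, is short to state: for each angle t, an eigenvector x for the largest eigenvalue of 𝓕_A(t) = cos(t)H + sin(t)K yields the boundary point x*Ax of F(A). A minimal dense sketch (the ZNN path-following machinery replaces the per-angle eigensolves):

```python
import numpy as np

def fov_boundary(A, m=180):
    """Boundary points of F(A) via extreme eigenvectors of cos(t)H + sin(t)K."""
    H = (A + A.conj().T) / 2.0           # Hermitian part
    K = (A - A.conj().T) / 2.0j          # skew part divided by i (Hermitian)
    pts = []
    for t in np.linspace(0.0, 2.0 * np.pi, m, endpoint=False):
        w, V = np.linalg.eigh(np.cos(t) * H + np.sin(t) * K)
        x = V[:, -1]                     # eigenvector of the largest eigenvalue
        pts.append(np.vdot(x, A @ x))    # x* A x lies on the boundary of F(A)
    return np.array(pts)

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # F(A) is the closed disk of radius 1/2
pts = fov_boundary(A)
```

For decomposable A this per-angle approach can jump between crossing eigencurves; the paper's flow decomposition handles each diagonal block separately and takes the convex hull of the per-block boundaries instead.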


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司) 京ICP备09084417号