Similar Documents
20 similar documents found.
1.
One of the scalability bottlenecks for the large-scale usage of Gaussian processes is the computation of the maximum likelihood estimates of the parameters of the covariance matrix. The classical approach requires a Cholesky factorization of the dense covariance matrix at each optimization iteration. In this work, we present an estimating equations approach for the parameters of zero-mean Gaussian processes. The distinguishing feature of this approach is that no linear system needs to be solved with the covariance matrix. Our approach requires solving an optimization problem for which the main computational expense in calculating its objective and gradient is the evaluation of traces of products of the covariance matrix with itself and with its derivatives. For many problems, this is an O(n log n) effort, and it is never larger than O(n^2). We prove that when the covariance matrix has a bounded condition number, our approach has the same convergence rate as maximum likelihood, in the sense that the Godambe information matrix of the resulting estimator is at least as large as a fixed fraction of the Fisher information matrix. We demonstrate the effectiveness of the proposed approach on two synthetic examples, one of which involves more than 1 million data points.
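The abstract does not spell out how the traces are computed; one standard matrix-free technique for such quantities is Hutchinson's stochastic trace estimator, which needs only matrix-vector products (often O(n log n) for structured covariance matrices). Below is a minimal sketch under assumed ingredients — a squared-exponential kernel on a 1-D grid and a hypothetical length-scale parameter ell — illustrating the general idea, not the authors' exact scheme.

```python
import numpy as np

def hutchinson_trace(matvec, n, num_probes=100, rng=None):
    """Estimate tr(M) from matrix-vector products z -> M z using
    Rademacher probe vectors: E[z^T M z] = tr(M)."""
    rng = np.random.default_rng(rng)
    total = 0.0
    for _ in range(num_probes):
        z = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe
        total += z @ matvec(z)                # one sample of z^T M z
    return total / num_probes

# Illustrative covariance matrix and its derivative with respect to a
# hypothetical length-scale parameter ell (assumed kernel choice).
n, ell = 500, 0.2
x = np.linspace(0.0, 1.0, n)
D = np.abs(x[:, None] - x[None, :])
K = np.exp(-((D / ell) ** 2))                 # squared-exponential kernel
dK = K * (2.0 * D**2 / ell**3)                # dK / d(ell)

# Trace of a product of K with a derivative, via matvecs only.
est = hutchinson_trace(lambda z: K @ (dK @ z), n)
print(est, np.trace(K @ dK))                  # estimate vs. exact
```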

2.
This article proposes a new algorithm for cross-validation of the best linear unbiased predictor. The algorithm relies on a new technique for downdating the inverse of a Cholesky factor. Given n data points, the new algorithm has complexity O(n^3), compared with O(n^4) for the more traditional delete-one-and-recalculate method.

3.
In this paper we develop a fast collocation method for boundary integral equations of the second kind using trigonometric polynomials. We propose a convenient way to compress the dense matrix representation of a compact integral operator with a smooth kernel under the Fourier basis and the corresponding collocation functionals. The compression leads to a sparse matrix with only O(n log^2 n) nonzero entries, where 2n+1 denotes the order of the matrix. We thus obtain a fast Fourier-collocation method, and we prove that it gives the optimal convergence order up to a logarithmic factor. Moreover, we design a fast scheme for solving the corresponding truncated linear system, and establish that this algorithm preserves the quasi-optimal convergence of the approximate solution while requiring only O(n log^3 n) multiplications.

4.
We study asymptotically fast multiplication algorithms for matrix pairs of arbitrary dimensions, and optimize the exponents of their arithmetic complexity bounds. For a large class of input matrix pairs, we improve the known exponents. We also show some applications of our results: (i) we decrease from O(n^2 + n^{1+o(1)} log q) to O(n^{1.9998} + n^{1+o(1)} log q) the known arithmetic complexity bound for the univariate polynomial factorization of degree n over a finite field with q elements; (ii) we decrease from 2.837 to 2.7945 the known exponent of the work and arithmetic processor bounds for fast deterministic (NC) parallel evaluation of the determinant, the characteristic polynomial, and the inverse of an n×n matrix, as well as for the solution to a nonsingular linear system of n equations; (iii) we decrease from O(m^{1.575} n) to O(m^{1.5356} n) the known bound for computing basic solutions to a linear programming problem with m constraints and n variables.

5.
We consider the numerical solution of the generalized Lyapunov and Stein equations in R^n, arising respectively from stochastic optimal control in continuous and discrete time. Generalizing the Smith method, our algorithms converge quadratically and have an O(n^3) computational complexity per iteration and an O(n^2) memory requirement. For large-scale problems, when the relevant matrix operators are "sparse", our algorithm for generalized Stein (or Lyapunov) equations may achieve a complexity and memory requirement of O(n) (or similar to that of the solution of the linear systems associated with the sparse matrix operators). These efficient algorithms can be applied within Newton's method for the solution of the rational Riccati equations. This contrasts favourably with the naive Newton algorithms of O(n^6) complexity or the slower modified Newton methods of O(n^3) complexity. Convergence and error analyses are presented, together with numerical examples.
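For the standard (non-generalized) Stein equation X = A X A^T + B, the quadratically convergent Smith iteration being generalized here has a compact form, sketched below; after k steps it holds the partial sum of 2^k terms of X = Σ_j A^j B (A^j)^T, assuming the spectral radius of A is below one. Each step costs O(n^3), matching the abstract.

```python
import numpy as np

def squared_smith_stein(A, B, tol=1e-12, max_iter=60):
    """Solve X = A X A^T + B by the squared Smith iteration;
    converges quadratically when rho(A) < 1."""
    X, Ak = B.copy(), A.copy()
    for _ in range(max_iter):
        X_new = X + Ak @ X @ Ak.T    # doubles the number of summed terms
        Ak = Ak @ Ak                 # square the propagator
        if np.linalg.norm(X_new - X, "fro") <= tol * np.linalg.norm(X_new, "fro"):
            return X_new
        X = X_new
    return X

rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n))
A *= 0.9 / max(abs(np.linalg.eigvals(A)))     # enforce rho(A) < 1
B = rng.standard_normal((n, n)); B = B @ B.T  # symmetric PSD right-hand side
X = squared_smith_stein(A, B)
print(np.linalg.norm(X - A @ X @ A.T - B))    # residual ~ 0
```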

6.
Least squares approximation is a technique to find an approximate solution to a system of linear equations that has no exact solution. In a typical setting, one lets n be the number of constraints and d be the number of variables, with n ≫ d. Then, existing exact methods find a solution vector in O(nd^2) time. We present two randomized algorithms that provide accurate relative-error approximations to the optimal value and the solution vector of a least squares approximation problem more rapidly than existing exact algorithms. Both of our algorithms preprocess the data with the randomized Hadamard transform. One then uniformly samples constraints at random and solves the smaller problem on those constraints; the other performs a sparse random projection and solves the smaller problem on those projected coordinates. In both cases, solving the smaller problem provides relative-error approximations, and, if n is sufficiently larger than d, the approximate solution can be computed in O(nd ln d) time.
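The sampling-based variant can be sketched directly: flip random signs, mix with a Hadamard matrix, and solve the least squares problem on a uniform subsample of the mixed rows. The sketch below forms the Hadamard matrix densely for clarity (a real implementation would apply a fast Walsh-Hadamard transform in O(n log n)); all sizes are illustrative.

```python
import numpy as np
from scipy.linalg import hadamard

def sketched_lstsq(A, b, num_samples, rng=None):
    """Approximate argmin_x ||Ax - b|| via a subsampled randomized
    Hadamard transform: sign flips, Hadamard mixing, row sampling."""
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    m = 1 << (n - 1).bit_length()             # pad row count to a power of two
    Ap = np.zeros((m, A.shape[1])); Ap[:n] = A
    bp = np.zeros(m); bp[:n] = b
    d = rng.choice([-1.0, 1.0], size=m)       # random diagonal sign matrix D
    H = hadamard(m) / np.sqrt(m)              # orthonormal Hadamard (dense, for clarity)
    HA, Hb = H @ (d[:, None] * Ap), H @ (d * bp)
    rows = rng.choice(m, size=num_samples, replace=False)
    x, *_ = np.linalg.lstsq(HA[rows], Hb[rows], rcond=None)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((1000, 20)); b = rng.standard_normal(1000)
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)
x_approx = sketched_lstsq(A, b, num_samples=200, rng=rng)
# Relative-error quality: ratio of achieved to optimal residual (>= 1).
print(np.linalg.norm(A @ x_approx - b) / np.linalg.norm(A @ x_exact - b))
```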

7.
A procedure is developed that enables the encoding of a subjective n-dimensional joint normal probability density function through the assessment of its marginal means and variances and n(n−1)/2 conditional means. The new method is based on the theory of conjugate directions for quadratic forms, and it exploits the fact that normal distributions have quadratic equal-likelihood surfaces. Unlike previous approaches, this new method enables easy detection and resolution of inconsistencies in the assessments that could lead to an indefinite estimate of the covariance matrix.

8.
We present real, complex, and quaternionic versions of a simple randomized polynomial time algorithm to approximate the permanent of a nonnegative matrix and, more generally, the mixed discriminant of positive semidefinite matrices. The algorithm provides an unbiased estimator which, with high probability, approximates the true value within a factor of O(c^n), where n is the size of the matrix (matrices) and where c ≈ 0.28 for the real version, c ≈ 0.56 for the complex version, and c ≈ 0.76 for the quaternionic version. We discuss possible extensions of our method as well as applications of mixed discriminants to problems of combinatorial counting. ©1999 John Wiley & Sons, Inc. Random Struct. Alg., 14, 29–61, 1999
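An estimator of this type has a compact description (the Gaussian variant of the Godsil-Gutman scheme, which I assume matches the abstract's real version): per(A) = E[det(G ∘ √A)^2], where G has i.i.d. standard normal entries and ∘ is the entrywise product. A minimal sketch follows, compared against a brute-force permanent on a small matrix; the estimator is unbiased but heavy-tailed, so agreement is only rough.

```python
import numpy as np
from itertools import permutations

def permanent_estimate(A, num_samples=20000, rng=None):
    """Unbiased randomized estimator (real Gaussian version):
    per(A) = E[det(G * sqrt(A))^2], '*' taken entrywise."""
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    G = rng.standard_normal((num_samples, n, n))
    dets = np.linalg.det(G * np.sqrt(A))      # batched determinants
    return float(np.mean(dets ** 2))

def permanent_exact(A):
    """Brute-force permanent, feasible for small matrices only."""
    n = A.shape[0]
    return sum(np.prod(A[np.arange(n), p]) for p in permutations(range(n)))

A = np.random.default_rng(2).uniform(size=(6, 6))
print(permanent_estimate(A), permanent_exact(A))
```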

9.
The QR algorithm is one of the classical methods for computing the eigendecomposition of a matrix. Applied to a dense n × n matrix, it requires O(n^3) operations per iteration step. To reduce this complexity for a symmetric matrix to O(n), the original matrix is first reduced to tridiagonal form using orthogonal similarity transformations. In the report (Report TW360, May 2003), a reduction of a symmetric matrix to a similar semiseparable one is described. In this paper, a QR algorithm for computing the eigenvalues of semiseparable matrices is designed in which each iteration step requires O(n) operations. Hence, combined with the reduction to semiseparable form, the eigenvalues of symmetric matrices can be computed via intermediate semiseparable matrices instead of tridiagonal ones. The eigenvectors of the intermediate semiseparable matrix are computed by applying inverse iteration to this matrix, using an O(n) system solver for semiseparable matrices. A combination of the previous steps leads to an algorithm for computing the eigenvalue decomposition of semiseparable matrices. Combined with the reduction of a symmetric matrix to semiseparable form, this algorithm can also be used to calculate the eigenvalue decomposition of symmetric matrices. The presented algorithm has the same order of complexity as the tridiagonal approach, but larger lower-order terms. Numerical experiments illustrate the complexity and the numerical accuracy of the proposed method. Copyright © 2005 John Wiley & Sons, Ltd.

10.
In this paper, we consider solving non-convolution type integral equations by the preconditioned conjugate gradient method. The fast dense matrix method is a fast multiplication scheme that provides a dense discretization matrix A approximating a given integral equation. The dense matrix A can be constructed in O(n) operations and requires only O(n) storage, where n is the size of the matrix. Moreover, the matrix-vector multiplication Ax can be done in O(n log n) operations. Thus, if the conjugate gradient method is used to solve the discretized system, the cost per iteration is O(n log n) operations. However, for some integral equations, such as Fredholm integral equations of the first kind, the system will be ill-conditioned and the convergence rate of the method will therefore be slow. In these cases, preconditioning is required to speed up the convergence. A good choice of preconditioner is the optimal circulant preconditioner, the minimizer of ||C − A||_F in the Frobenius norm over all circulant matrices C. It can be obtained by taking arithmetic averages of the entries of A along its diagonals, so the cost of constructing the preconditioner is O(n^2) operations for general dense matrices. In this paper, we develop an O(n log n) method for constructing the preconditioner for dense matrices A obtained from the fast dense matrix method. Applications of these ideas to boundary integral equations from potential theory are given. These equations are ill-conditioned, whereas their optimal circulant preconditioned equations are well-conditioned. The accuracy of the approximation A, the fast construction of the preconditioner, and the fast convergence of the preconditioned systems are illustrated by numerical examples.
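For a general dense matrix the construction is exactly as stated: average the entries of A along each wrapped diagonal. A minimal O(n^2) dense sketch follows; the paper's O(n log n) construction for matrices from the fast dense matrix method is not reproduced here.

```python
import numpy as np

def optimal_circulant(A):
    """T. Chan's optimal circulant preconditioner: the circulant C
    minimizing ||C - A||_F. First-column entry c[k] is the average of
    all A[i, j] with (i - j) mod n == k."""
    n = A.shape[0]
    c = np.zeros(n)
    for i in range(n):
        for j in range(n):
            c[(i - j) % n] += A[i, j]
    c /= n
    # Assembled densely for the demo; in practice one stores only c
    # and applies C^{-1} with FFTs in O(n log n).
    return np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])

A = np.random.default_rng(3).standard_normal((6, 6))
C = optimal_circulant(A)
# Any other circulant is at least as far from A in Frobenius norm.
E = optimal_circulant(np.random.default_rng(7).standard_normal((6, 6)))
print(np.linalg.norm(C - A), "<=", np.linalg.norm(C + 0.1 * E - A))
```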

11.
A fast solution algorithm is proposed for solving block banded block Toeplitz systems with non-banded Toeplitz blocks. The algorithm constructs the circulant transformation of a given Toeplitz system and then, by means of the Sherman-Morrison-Woodbury formula, transforms its inverse to an inverse of the original matrix. The block circulant matrix with Toeplitz blocks is converted to a block diagonal matrix with Toeplitz blocks, and the resulting Toeplitz systems are solved by means of a fast Toeplitz solver. When fast Toeplitz solvers are used, the computational complexity is ξ(m,n,k) = O(mn^3) + O(k^3 n^3) flops, where the matrix has m block rows and m block columns, n is the order of the blocks, and 2k+1 is the bandwidth. The validity of the approach is illustrated by numerical experiments.
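The Sherman-Morrison-Woodbury step admits a generic sketch: if A alone is fast to invert (here, the circulant transformation, invertible via FFTs) and UV is a low-rank correction, then (A + UV)x = b reduces to a few applications of A^{-1} plus one small dense solve. The diagonal stand-in for A below is purely illustrative.

```python
import numpy as np

def smw_solve(solve_A, U, V, b):
    """Solve (A + U V) x = b given a fast solver for A alone, via
    (A + UV)^{-1} = A^{-1} - A^{-1} U (I + V A^{-1} U)^{-1} V A^{-1}."""
    Ainv_b = solve_A(b)
    Ainv_U = solve_A(U)
    capacitance = np.eye(U.shape[1]) + V @ Ainv_U   # small k x k system
    return Ainv_b - Ainv_U @ np.linalg.solve(capacitance, V @ Ainv_b)

rng = np.random.default_rng(6)
n, k = 200, 4
d = rng.uniform(1.0, 2.0, n)                  # stand-in: diagonal, trivially invertible A
solve_A = lambda y: y / d if y.ndim == 1 else y / d[:, None]
U = rng.standard_normal((n, k)); V = rng.standard_normal((k, n))
b = rng.standard_normal(n)
x = smw_solve(solve_A, U, V, b)
print(np.linalg.norm((np.diag(d) + U @ V) @ x - b))   # residual ~ 0
```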

12.
The splitting-integrating method is proposed for normalizing digital images and patterns in n dimensions under inverse transformation. This method is much simpler than other approaches because no solutions of nonlinear algebraic equations are required. Also, the splitting-integrating method produces images free from the superfluous holes and blanks that often occur when digitized images are transformed by other methods. The splitting-integrating method has been applied successfully to pattern recognition and image processing, but no error analysis has been provided so far. Because the image greyness is represented as an integral value, we can derive, by numerical analysis, error bounds for the approximate greyness solutions, showing that when piecewise constant and multi-linear interpolations are used, convergence rates O(1/N) and O(1/N^2) can be obtained respectively, where N is a division number such that a pixel in the n-dimensional image is split into N^n subpixels. Moreover, numerical and graphical experiments are carried out for a sample of binary images in two dimensions, to confirm the convergence rates derived.

13.
We consider a new method for sparse covariance matrix estimation which is motivated by previous results for the so-called Stein-type estimators. Stein proposed a method for regularizing the sample covariance matrix by shrinking together the eigenvalues; the amount of shrinkage is chosen to minimize an unbiased estimate of the risk (UBEOR) under the entropy loss function. The resulting estimator has been shown in simulations to yield significant risk reductions over the maximum likelihood estimator. Our method extends the UBEOR minimization problem by adding an ℓ1 penalty on the entries of the estimated covariance matrix, which encourages a sparse estimate. For a multivariate Gaussian distribution, zeros in the covariance matrix correspond to marginal independences between variables. Unlike the ℓ1-penalized Gaussian likelihood function, our penalized UBEOR objective is convex and can be minimized via a simple block coordinate descent procedure. We demonstrate via numerical simulations and an analysis of microarray data from breast cancer patients that our proposed method generally outperforms other methods for sparse covariance matrix estimation and can be computed efficiently even in high dimensions.

14.
The peeling of a d-dimensional set of points is usually performed with successive calls to a convex hull algorithm; the optimal worst-case convex hull algorithm, known to have an O(n log n) execution time, may require O(n^2 log n) time to peel the whole set. An O(nm) convex hull algorithm, m being the number of extremal points, is shown to peel every set in O(n^2) time, and this is proved to be optimal. An implementation of this algorithm is given for planar sets and spatial sets, but the latter achieves only an approximate O(n^2) performance.
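The baseline scheme the abstract improves on is easy to state: compute the hull, strip its vertices as one layer, and repeat on what remains. A minimal sketch using scipy.spatial.ConvexHull follows (degenerate leftovers, e.g. collinear points, are not handled).

```python
import numpy as np
from scipy.spatial import ConvexHull

def peel(points):
    """Convex-hull peeling by repeated hull calls: with an
    O(n log n) hull routine this is the O(n^2 log n) baseline."""
    pts = np.asarray(points, dtype=float)
    layers = []
    while len(pts) > pts.shape[1]:            # need at least d+1 points in R^d
        hull = ConvexHull(pts)
        layers.append(pts[hull.vertices])
        pts = np.delete(pts, hull.vertices, axis=0)
    if len(pts):
        layers.append(pts)                    # leftover innermost points
    return layers

pts = np.random.default_rng(4).standard_normal((200, 2))
for depth, layer in enumerate(peel(pts)):
    print(f"layer {depth}: {len(layer)} points")
```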

15.
Quadratic models of objective functions are highly useful in many optimization algorithms. They are updated regularly to include new information about the objective function, such as the difference between two gradient vectors. We consider the case, however, when each model interpolates some function values, so an update is required when a new function value replaces an old one. We let the number of interpolation conditions, m say, be such that there is freedom in each new quadratic model that is taken up by minimizing the Frobenius norm of the second derivative matrix of the change to the model. This variational problem is expressed as the solution of an (m+n+1)×(m+n+1) system of linear equations, where n is the number of variables of the objective function. Further, the inverse of the matrix of the system provides the coefficients of the quadratic Lagrange functions of the current interpolation problem. A method is presented for updating all these coefficients in O((m+n)^2) operations, which allows the model to be updated too. An extension of the method is also described that suppresses the constant terms of the Lagrange functions. These techniques have a useful stability property that is investigated in some numerical experiments.
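The variational problem has a standard linear-algebra form: writing the model as Q(x) = c + g^T x + ½ Σ_j λ_j (x_j^T x)^2, the minimum-Frobenius-norm conditions and the interpolation constraints together form a symmetric (m+n+1)×(m+n+1) system with leading block A_ij = ½(x_i^T x_j)^2. The sketch below solves that system from scratch; the paper's contribution, updating the inverse of this matrix in O((m+n)^2) operations, is not reproduced here.

```python
import numpy as np

def min_frobenius_quadratic(X, f):
    """Minimum-Frobenius-norm quadratic interpolant of (x_j, f_j):
    minimize ||Hessian||_F subject to Q(x_j) = f_j, via the KKT system
    [[A, P], [P^T, 0]] [lambda; c; g] = [f; 0] with
    A_ij = (x_i^T x_j)^2 / 2 and constraint rows P = [1, x_j^T]."""
    m, n = X.shape
    A = 0.5 * (X @ X.T) ** 2
    P = np.hstack([np.ones((m, 1)), X])
    W = np.zeros((m + n + 1, m + n + 1))
    W[:m, :m], W[:m, m:], W[m:, :m] = A, P, P.T
    sol = np.linalg.solve(W, np.concatenate([f, np.zeros(n + 1)]))
    lam, c, g = sol[:m], sol[m], sol[m + 1:]
    return lambda x: c + g @ x + 0.5 * lam @ (X @ x) ** 2

rng = np.random.default_rng(5)
n, m = 3, 2 * 3 + 1                           # e.g. m = 2n + 1 interpolation points
X = rng.standard_normal((m, n))
f = np.array([np.sin(x).sum() for x in X])
Q = min_frobenius_quadratic(X, f)
print(max(abs(Q(x) - fx) for x, fx in zip(X, f)))   # interpolation error ~ 0
```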

16.
In this paper, we consider an approximate block diagonalization algorithm for an n×n real Hankel matrix in which the successive transformation matrices are upper triangular Toeplitz matrices, and propose a new fast approach to compute the factorization in O(n^2) operations. The method builds on the revised Bini method (Lin et al., Theor Comp Sci 315:511–523, 2004). To motivate our approach, we also propose an approximate factorization variant of the customary fast method based on Schur complementation, adapted to the n×n real Hankel matrix. All algorithms have been implemented in Matlab, and numerical results are included to illustrate the effectiveness of our approach.

17.
The semi-linear equation −u_xx − εu_yy = f(x, y, u) with Dirichlet boundary conditions is solved by an O(h^4) finite difference method, which has local truncation error O(h^2) at the mesh points neighboring the boundary and O(h^4) at most interior mesh points. It is proved that the finite difference method is O(h^4) uniformly convergent as h → 0. The method takes the form of a system of algebraic equations with a nine-diagonal sparse matrix. The system of algebraic equations is solved by an implicit iterative method combined with Gauss elimination. A Mathematica module is designed for the purpose of testing and using the method. To illustrate the method, the equation of twisting a springy rod is solved. © 2000 John Wiley & Sons, Inc. Numer Methods Partial Differential Eq 16: 395–407, 2000

18.
The method of fast inverse using nested dissection (FIND) was proposed to calculate the diagonal entries of the inverse of a large sparse symmetric matrix. In this paper, we show how the FIND algorithm can be generalized to calculate off-diagonal entries of the inverse that correspond to 'short' geometric distances within the computational mesh of the original matrix. The idea is to extend the downward pass in FIND, which eliminates all nodes outside of each node cluster; our extended downward pass eliminates all nodes outside of each 'node cluster pair', for a subset of all node cluster pairs. The complexity of computing an entry (i, j) depends on how far it is from the main diagonal. In the extension of the algorithm, all entries of the inverse that correspond to vertex pairs that are geometrically closer than a predefined length limit l are calculated. More precisely, let α be the total number of nodes in a two-dimensional square mesh. We show that our algorithm can compute O(α^{3/2 + 2ε}) entries of the inverse in O(α^{3/2 + 2ε}) time, where l = O(α^{1/4 + ε}) and 0 ≤ ε ≤ 1/4. Numerical examples are given to illustrate the efficiency of the proposed method. Copyright © 2012 John Wiley & Sons, Ltd.

19.
The numerical implementation of the extended-to-the-limit sparse LDL^T factorization solution methods for three-dimensional self-adjoint elliptic partial differential equations [3] is given. Two FORTRAN routines are supplied for the approximate (or exact) factorization of the coefficient matrix and for the solution of the resulting finite difference equations. The amount of fill-in can be controlled by the user through the parameters R1 and R2, the limiting case being when the matrix is factorized exactly.

20.
Parallel iterative methods are powerful for solving large systems of linear equations (LEs). Existing research on parallel computing focuses mainly on sparse systems or systems with other special structure, and most of it is based on parallel implementations of classical relaxation methods such as Gauss-Seidel, SOR, and AOR, which can be carried out efficiently on multiprocessor systems. In this paper, we propose a novel parallel splitting operator method in which we divide the coefficient matrix into two or three parts. We then convert the original problem (LEs) into a monotone (linear) variational inequality problem (VI) with separable structure. Finally, an inexact parallel splitting augmented Lagrangian method is proposed to solve the VI. To avoid dealing with the matrix inverse operator, we introduce proper inexact terms in the subproblems, so that the complexity of each iteration of the proposed method is O(n^2). In addition, the proposed method does not require any special structure in the system of LEs under consideration. Convergence of the proposed methods for two and three separable operators, respectively, is proved. Numerical computations are provided to show the applicability and robustness of the proposed methods.
