Similar Articles
 20 similar articles found (search time: 31 ms)
1.
The adaptive sparse pseudospectral approximation method is the latest development in the family of generalized polynomial chaos methods and, compared with other methods, offers high accuracy and fast computation. It still has two shortcomings, however: (1) its termination criterion estimates the approximation error with low accuracy, and (2) it applies only to single-output problems. This paper proposes new adaptive sparse pseudospectral approximation methods that handle multi-output problems and achieve higher approximation accuracy. We first propose a new termination criterion and an adaptive sparse pseudospectral approximation method based on it, and prove, in the form of a proposition, that the new criterion estimates the approximation error more accurately than the existing one, so that the resulting approximation more closely matches the prescribed accuracy. We then propose, based on a unified index-set strategy and the new termination criterion, an adaptive sparse pseudospectral approximation method for multi-output problems; because it fully reuses the samples of all output variables, it is computationally more efficient than directly extending the single-output method to multi-output problems. Several numerical examples verify the effectiveness and correctness of the proposed methods.
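A minimal sketch of the non-intrusive pseudospectral projection that underlies generalized polynomial chaos methods (the adaptive sparse index-set selection and the termination criterion discussed in the abstract are not shown). The function, node count, and truncation degree below are illustrative choices only; for x ~ N(0,1), f(x) = x² expands exactly as He₀ + He₂ in probabilists' Hermite polynomials, so the computed coefficients can be checked.

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

# Gauss-Hermite nodes/weights for the weight exp(-x^2/2); normalize the
# weights so they integrate against the standard normal density.
nodes, weights = He.hermegauss(8)
weights = weights / np.sqrt(2 * np.pi)

f = nodes**2                               # model evaluations at the nodes

# Pseudospectral projection: c_k = E[f(x) He_k(x)] / E[He_k(x)^2],
# with E[He_k^2] = k! for probabilists' Hermite polynomials.
coeffs = []
for k in range(4):
    Hk = He.hermeval(nodes, [0] * k + [1])  # evaluate He_k at the nodes
    coeffs.append(np.sum(weights * f * Hk) / math.factorial(k))

print(np.round(coeffs, 6))   # ≈ [1, 0, 1, 0]
```

Eight quadrature nodes integrate polynomials up to degree 15 exactly, so the projection here is exact up to round-off.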

2.
Many physical processes appear to exhibit fractional order behavior that may vary with time or space. The continuum of order in the fractional calculus allows the order of the fractional operator to be treated as a variable. Numerical methods for variable fractional order partial differential equations, together with stability and convergence analysis of the associated schemes, are quite limited and difficult to derive. This motivates us to develop efficient implicit numerical methods, with stability and convergence analysis, for the space-time variable fractional order diffusion equation on a finite domain. It is worth mentioning that we use the Coimbra definition of the variable-order time fractional derivative, which is more efficient from the numerical standpoint and preferable for modeling dynamical systems. An implicit Euler approximation is proposed and the stability and convergence of the numerical scheme are investigated. Finally, numerical examples are provided to show that the implicit Euler approximation is computationally efficient.
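The variable-order fractional scheme itself is too involved for a short sketch, but the implicit time-stepping structure can be illustrated on the classical diffusion equation u_t = u_xx; the fractional scheme replaces the second-difference operator with a weighted sum coming from the fractional derivative, while the implicit solve per step is the same. All grid sizes below are illustrative.

```python
import numpy as np

# Implicit (backward) Euler for u_t = u_xx on [0, 1], zero boundary values.
nx, nt = 50, 100
dx, dt = 1.0 / (nx + 1), 0.01
x = np.linspace(dx, 1 - dx, nx)            # interior grid points
u = np.sin(np.pi * x)                      # initial condition

# Each step solves (I - dt * D2) u^{n+1} = u^n, D2 = second difference / dx^2.
r = dt / dx**2
A = (np.diag((1 + 2 * r) * np.ones(nx))
     + np.diag(-r * np.ones(nx - 1), 1)
     + np.diag(-r * np.ones(nx - 1), -1))
for _ in range(nt):
    u = np.linalg.solve(A, u)

# Unconditional stability: the numerical solution decays, never blows up.
print(np.max(np.abs(u)))
```

In practice the tridiagonal system would be solved with a banded solver; the dense solve here keeps the sketch short.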

3.
An approach to constructing a Pareto front approximation to computationally expensive multiobjective optimization problems is developed. The approximation is constructed as a sub-complex of a Delaunay triangulation of a finite set of Pareto optimal outcomes to the problem. The approach is based on the concept of inherent nondominance. Rules for checking the inherent nondominance of complexes are developed and applying the rules is demonstrated with examples. The quality of the approximation is quantified with error estimates. Due to its properties, the Pareto front approximation works as a surrogate to the original problem for decision making with interactive methods.
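The inherent-nondominance rules for simplicial complexes build on the plain pairwise notion of Pareto dominance, which can be sketched directly (minimization assumed; the points below are illustrative, not from the article):

```python
import numpy as np

def dominates(a, b):
    """True if outcome a dominates b: a <= b componentwise with a != b."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a <= b) and np.any(a < b))

def nondominated(points):
    """Return the subset of points not dominated by any other point."""
    pts = [np.asarray(p) for p in points]
    return [p for p in pts
            if not any(dominates(q, p) for q in pts if q is not p)]

front = nondominated([(1, 4), (2, 2), (4, 1), (3, 3)])
print(front)   # (3, 3) is dominated by (2, 2) and drops out
```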

4.
Summary. We present an approximate-inertial-manifold-based postprocess to enhance Chebyshev or Legendre spectral Galerkin methods. We prove that the postprocess improves the order of convergence of the Galerkin solution, yielding the same accuracy as the nonlinear Galerkin method. Numerical experiments show that the new method is computationally more efficient than Galerkin and nonlinear Galerkin methods. New approximation results for Chebyshev polynomials are presented. Received January 5, 1998 / Revised version received September 7, 1999 / Published online June 8, 2000

5.
Nonlinear state-space models driven by differential equations have been widely used in science. Their statistical inference generally requires computing the mean and covariance matrix of some nonlinear function of the state variables, which can be done in several ways. For example, such computations may be approximately done by Monte Carlo, which is rather computationally expensive. Linear approximation by the first-order Taylor expansion is a fast alternative. However, the approximation error becomes non-negligible with strongly nonlinear functions. Unscented transformation was proposed to overcome these difficulties, but it lacks theoretical justification. In this paper, we derive some theoretical properties of the unscented transformation and contrast it with the method of linear approximation. Particularly, we derive the convergence rate of the unscented transformation.
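A minimal unscented-transformation sketch for y = g(x) with x ~ N(mu, P), using the standard symmetric sigma-point set with a scaling parameter kappa (variable names are illustrative, not from the article). For a linear map the transform reproduces the mean and covariance exactly, which gives an easy check.

```python
import numpy as np

def unscented_transform(g, mu, P, kappa=1.0):
    """Propagate N(mu, P) through g with 2n+1 symmetric sigma points."""
    n = len(mu)
    L = np.linalg.cholesky((n + kappa) * P)          # L @ L.T = (n+k) P
    sigma = np.vstack([mu, mu + L.T, mu - L.T])      # rows are sigma points
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)                       # weights sum to 1
    Y = np.array([g(s) for s in sigma])
    mean = w @ Y
    diff = Y - mean
    cov = (w[:, None] * diff).T @ diff
    return mean, cov

# Linear test case: the transform is exact, m = A mu and C = A P A^T.
A = np.array([[2.0, 0.0], [1.0, 1.0]])
mu = np.array([1.0, -1.0])
P = np.eye(2)
m, C = unscented_transform(lambda x: A @ x, mu, P)
print(m, C)
```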

6.
The computation of integrals in higher dimensions and on general domains, when no explicit cubature rules are known, can be "easily" addressed by means of the quasi-Monte Carlo method. The method, simple in its formulation, becomes computationally inefficient as the space dimension grows and the integration domain becomes particularly complex. In this paper we present two new approaches to quasi-Monte Carlo cubature based on nonnegative least squares and approximate Fekete points. The main idea is to use fewer points, and especially good points, for solving the system of the moments. Good points are here intended as points with good interpolation properties, due to the strict connection between interpolation and cubature. Numerical experiments show that, on average, just a tenth of the points is needed to maintain the same approximation order as the quasi-Monte Carlo method. The method has been satisfactorily applied to 2- and 3-dimensional problems on quite complex domains.
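The baseline quasi-Monte Carlo method that the paper starts from can be sketched with a Halton sequence (the nonnegative-least-squares and approximate-Fekete point reduction are not shown; the integrand and point count are illustrative):

```python
import numpy as np

def van_der_corput(n, base):
    """n-th term of the van der Corput sequence in the given base."""
    q, denom = 0.0, 1.0
    while n > 0:
        denom *= base
        n, r = divmod(n, base)
        q += r / denom
    return q

def halton(n_points, primes=(2, 3)):
    """First n_points of the 2-D Halton sequence in [0,1]^2."""
    return np.array([[van_der_corput(i + 1, b) for b in primes]
                     for i in range(n_points)])

# Quasi-Monte Carlo estimate of the integral of f(x, y) = x*y over
# [0,1]^2; the exact value is 1/4.
pts = halton(2000)
estimate = np.mean(pts[:, 0] * pts[:, 1])
print(estimate)
```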

7.
Thresholding-based iterative algorithms face a trade-off between effectiveness and optimality. Some are effective but involve sub-matrix inversions at every iteration. For systems of large size, such algorithms can be computationally expensive or even prohibitive. The null space tuning algorithm with hard thresholding and feedbacks (NST+HT+FB) offers a means to expedite its procedure through a suboptimal feedback, in which sub-matrix inversion is replaced by an eigenvalue-based approximation. The resulting suboptimal feedback scheme becomes exceedingly effective for large system recovery problems. An adaptive algorithm based on thresholding, suboptimal feedback and null space tuning (AdptNST+HT+subOptFB), which requires no prior knowledge of the sparsity level, is also proposed and analyzed. Convergence analysis is the focus of this article.
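The NST+HT+FB family builds on the basic hard-thresholded iteration for sparse recovery from y = Ax. A minimal iterative hard thresholding (IHT) sketch, with the operator scaled so that the residual is guaranteed not to grow, might look like the following (sizes and seed are illustrative; the null-space tuning and feedback steps of the abstract are not shown):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 100, 200, 5

# Sensing matrix with orthonormal rows, scaled so ||A|| < 1 (this keeps
# the plain unit-step IHT iteration from diverging).
Q, _ = np.linalg.qr(rng.standard_normal((n, m)))
A = 0.9 * Q.T

x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

x = np.zeros(n)
for _ in range(300):
    g = x + A.T @ (y - A @ x)            # gradient step on ||y - Ax||^2 / 2
    idx = np.argsort(np.abs(g))[-k:]     # hard threshold: keep k largest
    x = np.zeros(n)
    x[idx] = g[idx]

print(np.linalg.norm(y - A @ x), np.linalg.norm(y))
```

Here the sparsity level k is given; the AdptNST+HT+subOptFB algorithm of the abstract is precisely about removing that assumption.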

8.
We present a new approach to the approximation of nonlinear operators in probability spaces. The approach is based on a combination of a specific iterative procedure and the solution of the best approximation problem with a quadratic approximant. We show that the combination of these new techniques allows us to build a computationally efficient and flexible method. The algorithm of the method and its application to the optimal filtering of stochastic signals are given.

9.
When conducting Bayesian inference, delayed-acceptance (DA) Metropolis–Hastings (MH) algorithms and DA pseudo-marginal MH algorithms can be applied when it is computationally expensive to calculate the true posterior or an unbiased estimate thereof, but a computationally cheap approximation is available. A first accept-reject stage is applied, with the cheap approximation substituted for the true posterior in the MH acceptance ratio. Only for those proposals that pass through the first stage is the computationally expensive true posterior (or unbiased estimate thereof) evaluated, with a second accept-reject stage ensuring that detailed balance is satisfied with respect to the intended true posterior. In some scenarios, there is no obvious computationally cheap approximation. A weighted average of previous evaluations of the computationally expensive posterior provides a generic approximation to the posterior. If only the k-nearest neighbors have nonzero weights then evaluation of the approximate posterior can be made computationally cheap provided that the points at which the posterior has been evaluated are stored in a multi-dimensional binary tree, known as a KD-tree. The contents of the KD-tree are potentially updated after every computationally intensive evaluation. The resulting adaptive, delayed-acceptance [pseudo-marginal] Metropolis–Hastings algorithm is justified both theoretically and empirically. Guidance on tuning parameters is provided and the methodology is applied to a discretely observed Markov jump process characterizing predator–prey interactions and an ODE system describing the dynamics of an autoregulatory gene network. Supplementary material for this article is available online.
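The two-stage accept-reject structure can be sketched on a toy target. Below, the "expensive" log-density is a standard normal and the "cheap" surrogate is deliberately mis-scaled; in the article the cheap stage would instead be a k-nearest-neighbour average stored in a KD-tree. All densities and tuning values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_post(x):            # "expensive" target (stand-in)
    return -0.5 * x**2

def log_approx(x):          # cheap, deliberately imperfect surrogate
    return -0.5 * (x / 1.1)**2

x, chain = 0.0, []
for _ in range(20000):
    prop = x + rng.normal(scale=1.0)     # symmetric random-walk proposal
    # Stage 1: screen the proposal with the cheap approximation.
    if np.log(rng.uniform()) < log_approx(prop) - log_approx(x):
        # Stage 2: evaluate the expensive density only for survivors;
        # this ratio corrects stage 1 so detailed balance holds with
        # respect to the true target.
        a2 = ((log_post(prop) - log_post(x))
              - (log_approx(prop) - log_approx(x)))
        if np.log(rng.uniform()) < a2:
            x = prop
    chain.append(x)

print(np.mean(chain), np.var(chain))   # should be near 0 and 1
```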

10.
Maximum likelihood (ML) estimation is a popular method for parameter estimation when modeling discrete or count observations, but unfortunately it may be sensitive to outliers. Alternative robust methods such as minimum Hellinger distance (MHD) have been proposed for estimation. However, in the multivariate case the MHD method leads to computer-intensive estimation, especially when the joint probability density function is complicated. In this paper, a Hellinger-type distance measure based on the probability generating function is proposed as a tool for quick and robust parameter estimation. The proposed method yields consistent estimators, performs well for simulated and real data, and can be computationally much faster than ML or MHD estimation.
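The mechanics of probability-generating-function-based estimation can be sketched on a Poisson sample: match the empirical PGF (1/n) Σ s^{X_i} to the model PGF exp(λ(s−1)) over a grid of s in (0, 1). For brevity this sketch minimizes a squared distance rather than the article's Hellinger-type distance, and the grids and sample are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
data = rng.poisson(3.0, size=5000)          # true lambda = 3

s = np.linspace(0.1, 0.9, 17)               # evaluation points for the PGF
emp_pgf = np.mean(s[None, :] ** data[:, None], axis=0)

# Grid search over lambda; the Poisson PGF is exp(lambda * (s - 1)).
lams = np.linspace(0.5, 6.0, 551)
crit = [np.sum((emp_pgf - np.exp(l * (s - 1)))**2) for l in lams]
lam_hat = lams[int(np.argmin(crit))]
print(lam_hat)   # should be close to 3
```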

11.
The challenges of understanding the impacts of air pollution require detailed information on the state of air quality. While many modeling approaches attempt to treat this problem, physically based deterministic methods are often overlooked due to their costly computational requirements and complicated implementation. In this work we extend a non-intrusive Reduced Basis Data Assimilation method (known as PBDW state estimation) to large pollutant dispersion case studies relying on the equations involved in chemical transport models for air quality modeling, with the goal of rendering methods based on parameterized partial differential equations (PDEs) feasible in air quality applications that require quasi-real-time approximation and correction of model error in imperfect models. Reduced basis methods (RBM) aim to compute a cheap and accurate approximation of a physical state using approximation spaces made of a suitable sample of solutions to the model. One key to these techniques is the decomposition of the computational work into an expensive one-time offline stage and a low-cost parameter-dependent online stage. Traditional RBMs require modifying the assembly routines of the computational code, an intrusive procedure that may be impossible for operational model codes. We propose a less intrusive reduced order method using data assimilation of measured pollution concentrations, adapted to the scale of, and specifically applied to, exterior pollutant dispersion as found in urban air quality studies. Common statistical data assimilation techniques used in these applications require large historical data sets or time-consuming iterative methods; the method proposed here avoids both disadvantages. In the case studies presented in this work, the method corrects for unmodeled physics and treats cases of unknown parameter values, all while significantly reducing online computational time.

12.
In this article, we analyze approximate methods for undertaking a principal components analysis (PCA) on large datasets. PCA is a classical dimension reduction method that involves the projection of the data onto the subspace spanned by the leading eigenvectors of the covariance matrix. This projection can be used either for exploratory purposes or as an input for further analysis, for example, regression. If the data have billions of entries or more, the computational and storage requirements for saving and manipulating the design matrix in fast memory are prohibitive. Recently, the Nyström and column-sampling methods have appeared in the numerical linear algebra community for the randomized approximation of the singular value decomposition of large matrices. However, their utility for statistical applications remains unclear. We compare these approximations theoretically by bounding the distance between the induced subspaces and the desired, but computationally infeasible, PCA subspace. Additionally, we show empirically, through simulations and a real data example involving a corpus of emails, the trade-off of approximation accuracy and computational complexity.
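The Nyström idea itself fits in a few lines: approximate a large positive semi-definite matrix from a random sample of its columns. The test matrix below is exactly low rank, so the approximation should be essentially exact; all sizes and the seed are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 300, 40

# A rank-5 PSD "covariance-like" matrix to approximate.
X = rng.standard_normal((n, 5)) @ rng.standard_normal((5, n))
K = X @ X.T / n

idx = rng.choice(n, m, replace=False)
C = K[:, idx]                         # sampled columns
W = K[np.ix_(idx, idx)]               # intersection block
K_nys = C @ np.linalg.pinv(W) @ C.T   # Nystrom approximation

rel_err = np.linalg.norm(K - K_nys) / np.linalg.norm(K)
print(rel_err)   # near machine precision for an exactly low-rank K
```

Eigenvectors of K_nys can then stand in for the PCA subspace without ever forming, or decomposing, the full matrix.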

13.
Kernels are important in developing a variety of numerical methods, such as approximation, interpolation, neural networks, machine learning and meshless methods for solving engineering problems. A common problem of these kernel-based methods is to calculate inverses of kernel matrices generated by a kernel function and a set of points. Due to the denseness of these matrices, finding their inverses is computationally costly. To overcome this difficulty, we introduce in this paper an approximation of the kernel matrices by appropriate multilevel circulant matrices so that the fast Fourier transform can be applied to reduce the computational cost. Convergence analysis for the proposed approximation is established based on certain decay properties of the kernels.
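The payoff of a circulant approximation is that the DFT diagonalizes any circulant matrix: if c is the first column of C, then the eigenvalues of C are fft(c), and Cx = b is solved in O(n log n) as x = ifft(fft(b)/fft(c)) instead of O(n³). A one-level sketch (the kernel column and size are illustrative; the paper uses multilevel circulant matrices):

```python
import numpy as np

n = 128
d = np.minimum(np.arange(n), n - np.arange(n))   # wrap-around distances
c = np.exp(-d / 8.0)                             # circulant kernel column

# Dense circulant matrix, built explicitly only to verify the FFT route:
# column j of C is c rolled by j, i.e. C[i, j] = c[(i - j) mod n].
C = np.array([np.roll(c, j) for j in range(n)]).T
b = np.sin(2 * np.pi * np.arange(n) / n)

x_fft = np.real(np.fft.ifft(np.fft.fft(b) / np.fft.fft(c)))
x_dense = np.linalg.solve(C, b)
print(np.max(np.abs(x_fft - x_dense)))           # agreement to round-off
```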

14.
Summary  In contingency-table inference, when the cell counts are too small for asymptotic approximations, exact conditional methods are used, but they are often computationally impractical for large tables, so sampling methods are used instead. Permutation-based Monte Carlo sampling can again become impractical for large tables, and the existing Markov chain method, which updates only a few cells of the table at each iteration, is inefficient. Here we consider a Markov chain in which a sub-table of user-specified size is updated at each iteration, achieving high sampling efficiency. Some theoretical properties of the chain and its applications to commonly used tables are discussed. As an illustration, the method is applied to the exact test of Hardy-Weinberg equilibrium in population genetics.
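The classical building block for such chains is the margin-preserving move: pick a 2×2 subtable and add ±1 in a checkerboard pattern, which leaves all row and column sums unchanged. The abstract's chain updates a larger user-chosen sub-table per step; the sketch below shows only the smallest move, on an illustrative 3×3 table, without the Metropolis weighting a particular target distribution would require.

```python
import numpy as np

rng = np.random.default_rng(3)
T = np.array([[3, 2, 1],
              [1, 4, 2],
              [2, 1, 5]])
row_sums, col_sums = T.sum(1).copy(), T.sum(0).copy()

for _ in range(1000):
    i, j = rng.choice(3, 2, replace=False)   # two distinct rows
    k, l = rng.choice(3, 2, replace=False)   # two distinct columns
    move = np.zeros_like(T)
    move[i, k] = move[j, l] = 1              # checkerboard +1/-1 pattern
    move[i, l] = move[j, k] = -1
    if np.all(T + move >= 0):                # stay inside the table polytope
        T = T + move

print(T)   # margins are identical to those of the starting table
```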

15.
As the computation and analysis of nonlinear systems become increasingly important in fields such as automatic control, electronics, and electric power systems, nonlinear theory has become an attractive field for academic research. In this paper we derive, for the first time, solutions of the state equation of a nonlinear system using the inverse operator method (IOM). The corresponding algorithm and the operator expression of the solutions are obtained. A worked computational example is given, comparing the IOM with the Runge-Kutta method. Our investigation shows that the IOM has distinct advantages over the usual approximation methods: it is computationally convenient, converges rapidly, and provides accurate solutions without requiring perturbation, linearization, or the massive computations inherent in discretization methods such as finite differences. The IOM thus provides an effective method for solving nonlinear systems and has potential value in nonlinear computation.

16.
Datasets in the fields of climate and environment are often very large and irregularly spaced. To model such datasets, the widely used Gaussian process models in spatial statistics face tremendous challenges due to the prohibitive computational burden. Various approximation methods have been introduced to reduce the computational cost. However, most of them rely on unrealistic assumptions for the underlying process and retaining statistical efficiency remains an issue. We develop a new approximation scheme for maximum likelihood estimation. We show how the composite likelihood method can be adapted to provide different types of hierarchical low rank approximations that are both computationally and statistically efficient. The improvement of the proposed method is explored theoretically; the performance is investigated by numerical and simulation studies; and the practicality is illustrated through applying our methods to two million measurements of soil moisture in the area of the Mississippi River basin, which facilitates a better understanding of the climate variability. Supplementary material for this article is available online.

17.
This paper is devoted to globally convergent methods for solving large sparse systems of nonlinear equations with an inexact approximation of the Jacobian matrix. These methods include difference versions of the Newton method and various quasi-Newton methods. We propose a class of trust region methods together with a proof of their global convergence and describe an implementable globally convergent algorithm which can be used as a realization of these methods. Considerable attention is concentrated on the application of conjugate gradient-type iterative methods to the solution of linear subproblems. We prove that both the well-preconditioned GMRES and smoothed CGS methods can be used for the construction of globally convergent trust region methods. The efficiency of our algorithm is demonstrated computationally by using a large collection of sparse test problems.

18.
For optimization problems with computationally demanding objective functions and subgradients, inexact subgradient methods (IXS) have been introduced by using successive approximation schemes within subgradient optimization methods (Au et al., 1994). In this paper, we develop alternative solution procedures when the primal-dual information of IXS is utilized. This approach is especially useful when the projection operation onto the feasible set is difficult. We also demonstrate its applicability to stochastic linear programs.

19.
This article presents a method for generating samples from an unnormalized posterior distribution f(·) using Markov chain Monte Carlo (MCMC) in which the evaluation of f(·) is very difficult or computationally demanding. Commonly, a less computationally demanding, perhaps local, approximation to f(·) is available, say f̂x(·). An algorithm is proposed to generate an MCMC that uses such an approximation to calculate acceptance probabilities at each step of a modified Metropolis–Hastings algorithm. Once a proposal is accepted using the approximation, f(·) is calculated with full precision ensuring convergence to the desired distribution. We give sufficient conditions for the algorithm to converge to f(·) and give both theoretical and practical justifications for its usage. Typical applications are in inverse problems using physical data models where computing time is dominated by complex model simulation. We outline Bayesian inference and computing for inverse problems. A stylized example is given of recovering resistor values in a network from electrical measurements made at the boundary. Although this inverse problem has appeared in studies of underground reservoirs, it has primarily been chosen for pedagogical value because model simulation has precisely the same computational structure as a finite element method solution of the complete electrode model used in conductivity imaging, or "electrical impedance tomography." This example shows a dramatic decrease in CPU time, compared to a standard Metropolis–Hastings algorithm.

20.
Surrogate modeling is widely used in many engineering problems. Data sets often have Cartesian product structure (for instance, factorial designs of experiments with missing points), in which case the size of the data set can be very large. One of the most popular approximation algorithms, Gaussian process regression, can then hardly be applied due to its computational complexity. In this paper a computationally efficient approach for constructing Gaussian process regression for data sets with Cartesian product structure is presented. Efficiency is achieved by exploiting the special structure of the data set and operations with tensors. The proposed algorithm has low computational as well as memory complexity compared to existing algorithms. We also introduce a regularization procedure that takes the anisotropy of the data set into account and avoids degeneracy of the regression model.
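The identity that makes Gaussian process regression cheap on Cartesian-product grids is the Kronecker structure of a product kernel: K = K₁ ⊗ K₂, so eigendecompositions of the small factors yield the eigendecomposition of the large matrix (eigenvalues of the Kronecker product are all products of factor eigenvalues). A sketch with illustrative grid sizes and length-scales:

```python
import numpy as np

def rbf(x, ell):
    """Squared-exponential kernel matrix on a 1-D set of points."""
    return np.exp(-0.5 * (x[:, None] - x[None, :])**2 / ell**2)

x1 = np.linspace(0, 1, 8)
x2 = np.linspace(0, 1, 9)
K1, K2 = rbf(x1, 0.3), rbf(x2, 0.3)

K = np.kron(K1, K2)                    # full 72x72 kernel on the grid

# Eigenvalues of the Kronecker product are the pairwise products of the
# factor eigenvalues; no 72x72 decomposition is needed in practice.
e1, e2 = np.linalg.eigvalsh(K1), np.linalg.eigvalsh(K2)
eigs_kron = np.sort(np.outer(e1, e2).ravel())
eigs_full = np.sort(np.linalg.eigvalsh(K))
print(np.max(np.abs(eigs_kron - eigs_full)))
```

The same structure lets the GP solve (K + σ²I)⁻¹y be carried out with factor-sized decompositions and tensor reshapes, which is the source of the complexity reduction claimed in the abstract.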


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号