Similar Documents
20 similar documents found (search time: 31 ms)
1.
We propose a new algorithm for approximating the non-asymptotic second moment of the marginal likelihood estimate, or normalizing constant, provided by a particle filter. The computational cost of the new method is O(M) per time step, independently of the number of particles N in the particle filter, where M is a parameter controlling the quality of the approximation. This is in contrast to O(M N) for a simple averaging technique using M i.i.d. replicates of a particle filter with N particles. We establish that the approximation delivered by the new algorithm is unbiased, strongly consistent and, under standard regularity conditions, increasing M linearly with time is sufficient to prevent growth of the relative variance of the approximation, whereas for the simple averaging technique it can be necessary to increase M exponentially with time in order to achieve the same effect. This makes the new algorithm useful as part of strategies for estimating Monte Carlo variance. Numerical examples illustrate performance in the context of a stochastic Lotka–Volterra system and a simple AR(1) model.
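For orientation, here is a minimal sketch of the O(MN) baseline the abstract compares against: M i.i.d. replicates of a bootstrap particle filter on a toy linear-Gaussian AR(1) model, with the replicated likelihood estimates averaged to approximate the (relative) second moment. The model, parameter values and helper names are illustrative only, and the paper's O(M) algorithm itself is not reproduced here.

# Minimal sketch of the O(M N) baseline: estimate the second moment of a particle
# filter's marginal-likelihood estimate by simple averaging over M i.i.d. replicates.
# Toy AR(1) model with Gaussian observations; all parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar1(T, phi=0.8, sigma_x=1.0, sigma_y=0.5):
    """Simulate a latent AR(1) state and noisy Gaussian observations."""
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = phi * x[t - 1] + rng.normal(0.0, sigma_x)
    return x + rng.normal(0.0, sigma_y, size=T)

def bootstrap_pf_loglik(y, N=100, phi=0.8, sigma_x=1.0, sigma_y=0.5):
    """Bootstrap particle filter; returns the log marginal-likelihood estimate."""
    particles = rng.normal(0.0, sigma_x, size=N)
    loglik = 0.0
    for t, yt in enumerate(y):
        if t > 0:  # propagate through the AR(1) transition
            particles = phi * particles + rng.normal(0.0, sigma_x, size=N)
        logw = -0.5 * ((yt - particles) / sigma_y) ** 2 - np.log(sigma_y * np.sqrt(2 * np.pi))
        c = logw.max()
        w = np.exp(logw - c)
        loglik += c + np.log(w.mean())                                # incremental likelihood factor
        particles = particles[rng.choice(N, size=N, p=w / w.sum())]  # multinomial resampling
    return loglik

y = simulate_ar1(T=50)
M = 200
logZ = np.array([bootstrap_pf_loglik(y) for _ in range(M)])  # M i.i.d. replicates
Z = np.exp(logZ - logZ.max())                                # rescaled to avoid underflow
relative_second_moment = np.mean(Z ** 2) / np.mean(Z) ** 2   # E[Z^2] / E[Z]^2, scale-free
print(relative_second_moment)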

2.
In the following article, we investigate a particle filter for approximating Feynman–Kac models with indicator potentials and we use this algorithm within Markov chain Monte Carlo (MCMC) to learn static parameters of the model. Examples of such models include approximate Bayesian computation (ABC) posteriors associated with hidden Markov models (HMMs) or rare-event problems. Such models require the use of advanced particle filter or MCMC algorithms to perform estimation. One of the drawbacks of existing particle filters is that they may "collapse," in that the algorithm may terminate early, due to the indicator potentials. In this article, using a newly developed special case of the locally adaptive particle filter, we use an algorithm that can deal with this latter problem, while introducing a random cost per time step. In particular, we show how this algorithm can be used within MCMC, using particle MCMC. It is established that, when not taking into account computational time, when the new MCMC algorithm is applied to a simplified model it has a lower asymptotic variance in comparison to a standard particle MCMC algorithm. Numerical examples are presented for ABC approximations of HMMs.

3.
In this article, we propose a new second-order infeasible primal-dual path-following algorithm for symmetric cone optimization. The algorithm further improves the complexity bound of a wide infeasible primal-dual path-following algorithm. The theory of Euclidean Jordan algebras is used to carry out our analysis. The convergence is shown for a commutative class of search directions. In particular, the complexity bound is O(r^{5/4} log ε^{-1}) for the Nesterov-Todd direction, and O(r^{7/4} log ε^{-1}) for the xs and sx directions, where r is the rank of the associated Euclidean Jordan algebra and ε is the required precision. If the starting point is strictly feasible, then the corresponding bounds can be reduced by a factor of r^{3/4}. Some preliminary numerical results are provided as well.

4.
Particle methods are typically O(N^2), where N is the number of computational elements. We present an O(N) particle method for the equations for the conservation of potential vorticity. This method is based on the idea of grouping the particles. The necessary expansions and truncation errors are given. The accuracy and speed of the method are presented for both scalar and vector machines. © 1993 John Wiley & Sons, Inc.

5.
Starting from first-principle many-body quantum dynamics, we show that the dynamics of Bose-Einstein condensates can be approximated by the time-dependent nonlinear Gross-Pitaevskii equation, giving a bound on the rate of the convergence. Initial data are constructed on the bosonic Fock space applying an appropriate Bogoliubov transformation on a coherent state with expected number of particles N. The Bogoliubov transformation plays a crucial role; it produces the correct microscopic correlations among the particles. Our analysis shows that, on the level of the one-particle reduced density, the form of the initial data is preserved by the many-body evolution, up to a small error that vanishes as N^{-1/2} in the limit of large N. © 2015 Wiley Periodicals, Inc.
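For reference, the time-dependent Gross-Pitaevskii equation referred to here has, in the scaling convention usual in this literature, the form below; a_0 denotes the scattering length of the interaction potential, and the constant in front of the cubic nonlinearity depends on the chosen normalization.

% Time-dependent Gross-Pitaevskii equation for the condensate wave function \varphi_t.
% The coupling constant 8\pi a_0 (a_0 = scattering length) is the convention common
% in the Gross-Pitaevskii scaling limit; other normalizations differ by constants.
i\,\partial_t \varphi_t = -\Delta \varphi_t + 8\pi a_0\, |\varphi_t|^2 \varphi_t,
\qquad \|\varphi_0\|_{L^2} = 1.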

6.
A new computational algorithm based on a fast way of computing integral operators describing the coagulation process is proposed for a mathematical model of coagulating particles. Using this algorithm, the computational complexity of each time step of an explicit difference scheme can be substantially reduced. For each step, the complexity of execution is reduced from O(NM^2) arithmetic operations to O(NMR ln M), where N is the number of mesh points along the physical coordinates of particles, M is the number of mesh points in a grid corresponding to sizes of coagulating particles, and R is the rank of a matrix corresponding to the values of the coagulation kernel at mesh points. Using this approach, computations can be greatly accelerated, provided that the kernel rank R is small.
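A minimal numerical sketch of the low-rank idea behind this speedup (the kernel, grid and size distribution are toy choices, not the paper's model): if the coagulation kernel admits a rank-R factorization K(v, w) = sum_a U(v, a) W(w, a), the O(M^2) sums sum_j K(v_i, v_j) n_j collapse to R inner products and cost O(MR).

# Illustrative sketch of the low-rank trick: replace the dense coagulation kernel K
# with a rank-R factorization so that the O(M^2) sums become O(M R).
# Kernel, grid and distribution are toy choices, not the paper's model.
import numpy as np

M = 2000
v = np.linspace(0.01, 1.0, M)            # particle-size grid
n = np.exp(-v)                           # toy size distribution

# The additive kernel K(v, w) = v + w has exact rank R = 2: K = U @ W.T
U = np.stack([v, np.ones(M)], axis=1)    # shape (M, 2)
W = np.stack([np.ones(M), v], axis=1)    # shape (M, 2)

# Direct evaluation of s_i = sum_j K(v_i, v_j) * n_j: O(M^2)
K = v[:, None] + v[None, :]
s_direct = K @ n

# Low-rank evaluation: O(M R)
s_lowrank = U @ (W.T @ n)

print(np.allclose(s_direct, s_lowrank))  # True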

7.
Oversampled functions can be evaluated using generalized sinc series and filter functions connected with these series. A standard filter function is given by exp((ζ^2 − 1)^{-1}). We show that the Fourier transform of this filter possesses the convergence order O(exp(−√x)), improving an estimate given in [10]. Moreover, we define a family of filter functions with convergence order O(x · exp(−x^σ)) with σ arbitrarily close to 1.
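A small illustrative sketch of how a bump-type filter of the kind described here localizes a truncated sinc series for an oversampled band-limited signal; the test signal, oversampling factor and truncation radius K are toy choices, not the paper's setup.

# Illustrative sketch: the standard bump filter exp((zeta^2 - 1)^(-1)) used to window a
# truncated sinc series of an oversampled band-limited signal. The signal, oversampling
# factor and truncation radius K are toy choices, not the paper's setup.
import numpy as np

def bump(zeta):
    """C-infinity bump filter exp((zeta^2 - 1)^(-1)) on |zeta| < 1, scaled so bump(0) = 1."""
    zeta = np.asarray(zeta, dtype=float)
    out = np.zeros_like(zeta)
    inside = np.abs(zeta) < 1.0
    out[inside] = np.exp(1.0 + 1.0 / (zeta[inside] ** 2 - 1.0))
    return out

def windowed_sinc_eval(t, samples, h, K):
    """Evaluate the truncated sinc series at time t using the 2K+1 nearest samples,
    localized by the bump window."""
    k0 = int(np.round(t / h))
    ks = np.arange(k0 - K, k0 + K + 1)
    ks = ks[(ks >= 0) & (ks < len(samples))]
    u = (t - ks * h) / h
    return np.sum(samples[ks] * np.sinc(u) * bump(u / (K + 1.0)))

# Band-limited toy signal, sampled about five times faster than its Nyquist rate
f = lambda t: np.cos(2 * np.pi * 0.1 * t) + 0.5 * np.sin(2 * np.pi * 0.05 * t)
h = 1.0
samples = f(np.arange(200) * h)
print(windowed_sinc_eval(37.3, samples, h, K=20), f(37.3))  # should agree to several digits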

8.
This paper proposes a method for estimating the computational complexity of the N-vehicle exploration problem by analyzing its input conditions. The N-vehicle problem is first formulated as determining the optimal ordering within the set of permutations of 1 to N, so the complexity of the problem is N!, where N is the input scale. To balance the accuracy and efficiency of general algorithms, rather than simply putting forward a uniform approximation algorithm as usual, this paper presents a systematic algorithm design and discusses the correspondence between the complexity of the problem and its input conditions. This is a new technique for analyzing the computation of NP problems. The method of establishing this correspondence is then presented. We finally carry out simulations to verify the advantages of the method: 1) it decreases the computation needed in enumeration; 2) it efficiently obtains the computational complexity for any N-vehicle case; 3) it guides algorithm design for any N-vehicle case according to the complexity estimated by the method.

9.
Recently Glover, Klingman and Phillips proposed the Partitioning Shortest Path (PSP) algorithm. The PSP algorithm includes as variants most of the known algorithms for the shortest path problem. In a subsequent paper, together with Schneider, they proposed several variants of the PSP and conducted computational tests. Three of the variants were the first polynomially bounded shortest path algorithms to maintain sharp labels as defined by Shier and Witzgall. Two of these variants had computational complexity O(|N|^2 |A|), the other O(|N|^3). In this note, we add a new step to the PSP algorithm, resulting in new variants that also scan from sharp labels and have computational complexity O(|N|^3) for two of them and O(|N|^2) for the other. This new step also provides a test for the early detection of negative length cycles.

10.
Sahu, D.R., Cho, Y.J., Dong, Q.L., Kashyap, M.R., Li, X.H. Numerical Algorithms (2021) 87(3): 1075-1095

The split feasibility problem is to find a point x* with the property that x* ∈ C and Ax* ∈ Q, where C and Q are nonempty closed convex subsets of real Hilbert spaces X and Y, respectively, and A is a bounded linear operator from X to Y. The split feasibility problem models inverse problems arising from phase retrieval and intensity-modulated radiation therapy. In this paper, we introduce a new inertial relaxed CQ algorithm for solving the split feasibility problem in real Hilbert spaces and establish weak convergence of the proposed CQ algorithm under certain mild conditions. Our result is a significant improvement of recent results related to the split feasibility problem.
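For orientation, here is a minimal sketch of the classical (non-inertial, non-relaxed) CQ iteration on which such methods build, x_{k+1} = P_C(x_k - γ A^T(Ax_k - P_Q(Ax_k))), applied to a toy problem in which C and Q are boxes; the paper's inertial and relaxation steps are not reproduced, and the matrix, sets and step size are illustrative.

# Minimal sketch of Byrne's basic CQ iteration for the split feasibility problem
#   find x with x in C and A x in Q,
# using x_{k+1} = P_C(x_k - gamma * A^T (A x_k - P_Q(A x_k))).
# This is NOT the inertial relaxed variant of the paper; the instance is a toy example.
import numpy as np

rng = np.random.default_rng(1)
m, n = 5, 8
A = rng.standard_normal((m, n))

proj_C = lambda x: np.clip(x, -1.0, 1.0)            # C = [-1, 1]^n (a box)
x_feas = rng.uniform(-1.0, 1.0, size=n)             # used only to make the instance feasible
lo, hi = A @ x_feas - 0.1, A @ x_feas + 0.1
proj_Q = lambda y: np.clip(y, lo, hi)               # Q = a small box around A @ x_feas

gamma = 1.0 / np.linalg.norm(A, 2) ** 2             # step size in (0, 2 / ||A||^2)

x = np.zeros(n)
for _ in range(2000):
    Ax = A @ x
    grad = A.T @ (Ax - proj_Q(Ax))                  # gradient of (1/2) ||Ax - P_Q(Ax)||^2
    x = proj_C(x - gamma * grad)

print(np.linalg.norm(A @ x - proj_Q(A @ x)))        # feasibility residual; decreases toward 0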


11.
GAUSSIAN PIVOTING METHOD FOR SOLVING LINEAR COMPLEMENTARITY PROBLEM
In this paper, a new direct algorithm for solving the linear complementarity problem with a Z-matrix is proposed. The algorithm either produces a solution or shows that none exists after at most n steps (where n is the dimension of the problem), and its computational complexity is at most n^3/3 + O(n^2).

12.
Mean-shift is an iterative procedure often used as a nonparametric clustering algorithm that defines clusters based on the modal regions of a density function. The algorithm is conceptually appealing and makes assumptions neither about the shape of the clusters nor about their number. However, with a complexity of O(n^2) per iteration, it does not scale well to large datasets. We propose a novel algorithm which performs density-based clustering much more quickly than mean-shift, yet delivers virtually identical results. This algorithm combines subsampling and a stochastic approximation procedure to achieve a potential complexity of O(n) at each step. Its convergence is established. Its performance is evaluated using simulations and applications to image segmentation, where the algorithm was tens or hundreds of times faster than mean-shift, while introducing only negligible clustering error. The algorithm can be combined with existing approaches to further accelerate clustering.
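A compact sketch of the plain O(n^2)-per-iteration mean-shift update this refers to, with a Gaussian kernel; the data and bandwidth are illustrative, and the paper's subsampling/stochastic-approximation acceleration is not reproduced here.

# Plain mean-shift with a Gaussian kernel: every point is shifted to the weighted
# mean of all data points, which is the O(n^2)-per-iteration cost the paper avoids.
# Data and bandwidth are illustrative.
import numpy as np

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.3, size=(100, 2)),
               rng.normal(3, 0.3, size=(100, 2))])    # two toy clusters

def mean_shift(X, bandwidth=0.5, n_iter=30):
    Y = X.copy()                                      # points being shifted
    for _ in range(n_iter):
        # pairwise squared distances between shifted points and the data: O(n^2)
        d2 = ((Y[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        W = np.exp(-0.5 * d2 / bandwidth ** 2)        # Gaussian kernel weights
        Y = (W @ X) / W.sum(axis=1, keepdims=True)    # shift to the weighted mean
    return Y

modes = mean_shift(X)
# Points that converged to (almost) the same mode belong to the same cluster.
n_clusters = len(np.unique(np.round(modes, 1) + 0.0, axis=0))
print(n_clusters)                                     # expected: 2 for this toy data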

13.
CoSaMP: Iterative signal recovery from incomplete and inaccurate samples
Compressive sampling offers a new paradigm for acquiring signals that are compressible with respect to an orthonormal basis. The major algorithmic challenge in compressive sampling is to approximate a compressible signal from noisy samples. This paper describes a new iterative recovery algorithm called CoSaMP that delivers the same guarantees as the best optimization-based approaches. Moreover, this algorithm offers rigorous bounds on computational cost and storage. It is likely to be extremely efficient for practical problems because it requires only matrix-vector multiplies with the sampling matrix. For compressible signals, the running time is just O(N log^2 N), where N is the length of the signal.
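A condensed sketch of the CoSaMP iteration described here (form the signal proxy, merge the largest proxy coordinates with the current support, solve a least-squares problem on the merged support, prune to the s largest entries, update the residual); the problem sizes, sparsity level and stopping rule are illustrative.

# Condensed CoSaMP sketch on a toy sparse-recovery instance. Sizes, sparsity and the
# stopping rule are illustrative; the noiseless setting keeps the example simple.
import numpy as np

rng = np.random.default_rng(3)
N, m, s = 256, 128, 8
Phi = rng.standard_normal((m, N)) / np.sqrt(m)       # sampling matrix
x_true = np.zeros(N)
x_true[rng.choice(N, s, replace=False)] = rng.standard_normal(s)
u = Phi @ x_true                                     # (noiseless) samples

x = np.zeros(N)
r = u.copy()
for _ in range(20):
    proxy = Phi.T @ r                                # signal proxy
    omega = np.argsort(np.abs(proxy))[-2 * s:]       # 2s largest proxy coordinates
    T = np.union1d(omega, np.flatnonzero(x))         # merge with the current support
    b = np.zeros(N)
    b[T] = np.linalg.lstsq(Phi[:, T], u, rcond=None)[0]   # least squares on the support T
    keep = np.argsort(np.abs(b))[-s:]                # prune to the s largest entries
    x = np.zeros(N)
    x[keep] = b[keep]
    r = u - Phi @ x                                  # update the residual
    if np.linalg.norm(r) < 1e-10 * np.linalg.norm(u):
        break

print(np.linalg.norm(x - x_true))                    # ~0 for this well-conditioned instance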

14.
In this paper, we first optimize the structure of the Wei–Xiao–Chen algorithm for the linear complexity of sequences over GF(q) with period N = 2p^n, where p and q are odd primes and q is a primitive root modulo p^2. Second, a union cost is proposed, from which an efficient algorithm for computing the k-error linear complexity of a sequence with period 2p^n over GF(q) is derived, again with p and q odd primes and q a primitive root modulo p^2. Third, we prove the validity of the proposed algorithm, and also show that there exists an error sequence e^N, whose Hamming weight is not greater than k, such that the linear complexity of (s + e)^N reaches the k-error linear complexity c. We also present a numerical example to illustrate the algorithm. Finally, we present the minimum value k for which the k-error linear complexity is strictly less than the linear complexity.

15.

This paper presents a novel algorithm for efficient online estimation of the filter derivatives in general hidden Markov models. The algorithm, which has a linear computational complexity and very limited memory requirements, is furnished with a number of convergence results, including a central limit theorem with an asymptotic variance that can be shown to be uniformly bounded in time. Using the proposed filter derivative estimator, we design a recursive maximum likelihood algorithm updating the parameters according to the gradient of the one-step predictor log-likelihood. The efficiency of this online parameter estimation scheme is illustrated in a simulation study.


16.
In this article, a spatial two-grid finite element (TGFE) algorithm is used to solve a two-dimensional nonlinear space-time fractional diffusion model and improve the computational efficiency. First, the second-order backward difference scheme is used to formulate the time approximation, where the time-fractional derivative is approximated by the weighted and shifted Grünwald difference operator. In order to reduce the computation time of the standard FE method, a TGFE algorithm is developed. The specific algorithm is to iteratively solve a nonlinear system on the coarse grid and then to solve a linear system on the fine grid. We prove the stability of the TGFE scheme and derive an a priori error estimate with the convergence result O(τ^2 + h^{r+1-η} + H^{2r+2-2η}). Finally, through a two-dimensional numerical calculation, we improve the computational efficiency and reduce the computation time by the TGFE algorithm.

17.
Quadratic Spline Collocation (QSC) methods of optimal order of convergence have been recently developed for the solution of elliptic Partial Differential Equations (PDEs). In this paper, linear solvers based on Fast Fourier Transforms (FFTs) are developed for the solution of the QSC equations. The complexity of the FFT solvers is O(N^2 log N), where N is the grid size in one dimension. These direct solvers can handle PDEs with coefficients in one variable or constant, and Dirichlet, Neumann, alternating Dirichlet-Neumann or periodic boundary conditions, along at least one direction of a rectangular domain. General variable-coefficient PDEs are handled by preconditioned iterative solvers. The preconditioner is the QSC matrix arising from a constant-coefficient PDE. The convergence analysis of the preconditioner is presented. It is shown that, under certain conditions, the convergence rate is independent of the grid size. The preconditioner is solved by FFT techniques and integrated with one-step or acceleration methods, giving rise to asymptotically almost optimal linear solvers with complexity O(N^2 log N). Numerical experiments verify the effectiveness of the solvers and preconditioners, even on problems more general than the analysis assumes. The development and analysis of FFT solvers and preconditioners is extended to QSC equations corresponding to systems of elliptic PDEs.

18.
A two-way chasing algorithm to reduce a diagonal plus a symmetric semi-separable matrix to a symmetric tridiagonal one and an algorithm to reduce a diagonal plus an unsymmetric semi-separable matrix to a bidiagonal one are considered. Both algorithms are fast and stable, requiring a computational cost of O(N^2), where N is the order of the considered matrix.

19.
The study of simple stochastic games (SSGs) was initiated by Condon for analyzing the computational power of randomized space-bounded alternating Turing machines. The game is played by two players, MAX and MIN, on a directed multigraph, and when the play terminates at a sink vertex s, MAX wins from MIN a payoff p(s) ∈ [0,1]. Condon proved that the problem SSG-VALUE (given a SSG, determine whether the expected payoff won by MAX is greater than 1/2 when both players use their optimal strategies) is in NP ∩ coNP. However, the exact complexity of this problem remains open, as it is not known whether the problem is in P or is hard for some natural complexity class. In this paper, we study the computational complexity of a strategy improvement algorithm by Hoffman and Karp for this problem. The Hoffman–Karp algorithm converges to optimal strategies of a given SSG, but no non-trivial bounds were previously known on its running time. We prove a bound of O(2^n/n) on the convergence time of the Hoffman–Karp algorithm, and a bound of O(2^{0.78n}) on a randomized variant. These are the first non-trivial upper bounds on the convergence time of these strategy improvement algorithms.

20.
The topic of this paper is the study of four real, linear, possibly constrained minimum norm approximation problems, which arise in connection with the design of linear-phase nonrecursive digital filters and are distinguished by the type of used trigonometric approximation functions. In the case of unconstrained minimax designs these problems are normally solved by the Parks–McClellan algorithm, which is an application of the second algorithm of Remez to these problems and which is one of the most popular tools in filter design. In this paper the four types of approximation problems are investigated for all Lp and lp norms, respectively. It is especially proved that the assumptions for the Remez algorithm are satisfied in all four cases, which has been claimed, but is not obvious for three of them. Furthermore, results on the existence and uniqueness of solutions and on the convergence and the rate of convergence of the approximation errors are derived.
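For context, here is a typical unconstrained minimax (equiripple) linear-phase FIR design with the Parks–McClellan algorithm, i.e. the Remez exchange applied to filter design, using SciPy's implementation; the filter length and band edges are illustrative choices, not taken from the paper.

# Typical unconstrained minimax (equiripple) linear-phase FIR design via the
# Parks-McClellan / Remez exchange algorithm in SciPy. Filter length and band
# edges are illustrative.
import numpy as np
from scipy.signal import remez, freqz

fs = 1000.0                                    # sampling rate in Hz
# Low-pass design: passband 0-100 Hz, stopband 150-500 Hz (transition band in between).
taps = remez(numtaps=101,
             bands=[0, 100, 150, 0.5 * fs],
             desired=[1, 0],
             fs=fs)

w, h = freqz(taps, worN=2048, fs=fs)           # frequency response of the designed filter
print(taps[:5])                                # the 101 coefficients are symmetric (linear phase)
print(20 * np.log10(np.abs(h[w > 150])).max()) # stopband attenuation in dB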
