1.
A broad class of implicit and partially implicit time discretizations of the Langevin diffusion is considered and used as proposals for the Metropolis–Hastings algorithm. Ergodic properties of our proposed schemes are studied. We show that introducing implicitness in the discretization leads to a process that often inherits the convergence rate of the continuous-time process. This contrasts with the behavior of the naive, or Euler–Maruyama, discretization, which can behave badly even in simple cases. We also show that our proposed chains, when used as proposals for the Metropolis–Hastings algorithm, preserve the geometric ergodicity of their implicit Langevin schemes and thus behave better than the local linearization of the Langevin diffusion. We illustrate the behavior of our proposed schemes with examples. Our results are described in detail in one dimension only, although extensions to higher dimensions are also described and illustrated.
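To make the construction concrete, here is a minimal one-dimensional sketch of a theta-implicit Euler step used as a Metropolis–Hastings proposal. The standard normal target, the theta-method parametrization, and all function names are illustrative assumptions, not the paper's exact scheme; the point is that for a linear drift the implicit equation solves in closed form, so the proposal density is available exactly.

import numpy as np

# Illustrative sketch: theta-implicit Euler step for the Langevin
# diffusion dX = (1/2)(log pi)'(X) dt + dB with target pi = N(0, 1),
# used as a Metropolis-Hastings proposal. theta = 0 is the explicit
# (Euler-Maruyama) scheme, theta = 1 the fully implicit one.

def log_pi(x):
    return -0.5 * x ** 2  # log-density of N(0,1), up to a constant

def proposal_params(x, h, theta):
    # Solve y = x - (h/2) * (theta*y + (1-theta)*x) + sqrt(h)*Z for y:
    # the proposal is Gaussian with the mean and sd below.
    mean = x * (1.0 - (1.0 - theta) * h / 2.0) / (1.0 + theta * h / 2.0)
    sd = np.sqrt(h) / (1.0 + theta * h / 2.0)
    return mean, sd

def mh_implicit_langevin(n_iter, x0=0.0, h=0.5, theta=1.0, rng=None):
    rng = rng or np.random.default_rng(0)
    x, chain = x0, np.empty(n_iter)
    for i in range(n_iter):
        m_x, s = proposal_params(x, h, theta)
        y = rng.normal(m_x, s)
        m_y, _ = proposal_params(y, h, theta)
        # The proposal sd does not depend on the state, so the Gaussian
        # normalizing constants cancel in the acceptance ratio.
        log_q_xy = -0.5 * ((y - m_x) / s) ** 2
        log_q_yx = -0.5 * ((x - m_y) / s) ** 2
        log_alpha = log_pi(y) - log_pi(x) + log_q_yx - log_q_xy
        if np.log(rng.uniform()) < log_alpha:
            x = y
        chain[i] = x
    return chain

samples = mh_implicit_langevin(5000)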
2.
We study an important problem faced by blood centers: selecting screening tests for donated blood to reduce the risk of "transfusion-transmitted infectious diseases" (TTIs), including the human immunodeficiency virus (HIV), hepatitis viruses, human T-cell lymphotropic virus, syphilis, West Nile virus, and Chagas disease. This decision has a significant impact on health care quality in both developed and developing countries. The budget-constrained decision-maker needs to construct a portfolio of screening tests, from a set of available tests, each with given efficacy (sensitivity and specificity) and cost, to administer to each unit of donated blood so as to minimize the "risk" of a TTI for blood classified as "infection-free." While doing this, it is critical, for a viable blood system, that the decision-maker does not falsely (i.e., through screening error) discard too much of the infection-free blood ("waste"). We construct mathematical models of this decision problem, considering the objective functions (minimization of the TTI risk or of the weighted TTI risk) and the constraints (on budget and wasted blood) relevant in practice. Our work generates insights on the test selection problem. We show, for example, that a reduction in risk does not necessarily come at the expense of an increase in waste. This underscores the importance of considering these different metrics in decision-making through an optimization-based model. Our work also highlights the importance of generating region-specific testing schemes that explicitly account for regional prevalence and co-infection rates, along with the impacts of the infections on society and on individuals.
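As a rough illustration of the portfolio structure only (not the paper's model: the test names, parameter values, single-infection setup, and conditional-independence assumption are all mine), a brute-force search over test subsets already exhibits the risk/waste/budget trade-off.

import itertools
import numpy as np

# Hypothetical sketch: pick a subset of tests, each with sensitivity
# se, specificity sp and cost, for one infection with prevalence p.
# A unit is discarded if ANY selected test is positive. Assuming the
# tests are conditionally independent given infection status:
#   risk  = P(infected | all selected tests negative)
#   waste = P(not infected, at least one test positive)

def risk_and_waste(tests, p):
    se = np.array([t["se"] for t in tests])
    sp = np.array([t["sp"] for t in tests])
    all_neg_inf = np.prod(1 - se)   # infected unit passes every test
    all_neg_cln = np.prod(sp)       # clean unit passes every test
    released = p * all_neg_inf + (1 - p) * all_neg_cln
    risk = p * all_neg_inf / released
    waste = (1 - p) * (1 - all_neg_cln)
    return risk, waste

def best_portfolio(tests, p, budget, max_waste):
    best = None
    for r in range(1, len(tests) + 1):
        for subset in itertools.combinations(tests, r):
            if sum(t["cost"] for t in subset) > budget:
                continue
            risk, waste = risk_and_waste(subset, p)
            if waste <= max_waste and (best is None or risk < best[0]):
                best = (risk, waste, [t["name"] for t in subset])
    return best

tests = [  # made-up efficacies and costs, for illustration only
    {"name": "ELISA", "se": 0.995, "sp": 0.990, "cost": 3.0},
    {"name": "NAT",   "se": 0.999, "sp": 0.995, "cost": 10.0},
    {"name": "Rapid", "se": 0.950, "sp": 0.980, "cost": 1.0},
]
print(best_portfolio(tests, p=0.005, budget=12.0, max_waste=0.03))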
3.
Threshold autoregressive (AR) and autoregressive moving average (ARMA) processes with continuous time parameter have been discussed in several recent papers by Brockwell et al. (1991, Statist. Sinica, 1, 401–410), Tong and Yeung (1991, Statist. Sinica, 1, 411–430), Brockwell and Hyndman (1992, International Journal of Forecasting, 8, 157–173) and Brockwell (1994, J. Statist. Plann. Inference, 39, 291–304). A threshold ARMA process with boundary width 2δ > 0 is easy to define in terms of the unique strong solution of a stochastic differential equation whose coefficients are piecewise linear and Lipschitz. The positive boundary width is a convenient mathematical device to smooth out the coefficient changes at the boundary and hence to ensure the existence and uniqueness of the strong solution of the stochastic differential equation from which the process is derived. In this paper we give a direct definition of a threshold ARMA process with δ = 0 in the important case when only the autoregressive coefficients change with the level of the process. (This of course includes all threshold AR processes with constant scale parameter.) The idea is to express the distributions of the process in terms of the weak solution of a certain stochastic differential equation. It is shown that the joint distributions of this solution with δ = 0 are the weak limits, as δ ↓ 0, of the distributions of the solution with δ > 0. The sense in which the approximating sequence of processes used by Brockwell and Hyndman (1992) converges to this weak solution is also investigated. Some numerical examples illustrate the value of the latter approximation in comparison with the more direct representation of the process obtained from the Cameron–Martin–Girsanov formula. It is used in particular to fit continuous-time threshold models to the sunspot and Canadian lynx series. Research partially supported by National Science Foundation Research Grants DMS 9105745 and 9243648.
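For concreteness, a minimal instance of this construction is a continuous-time threshold AR(1) with a single threshold r; the notation below is illustrative rather than the paper's:

\[
dX(t) = \mu\big(X(t)\big)\,dt + \sigma\,dW(t),
\qquad
\mu(x) =
\begin{cases}
a_1 x + b_1, & x \le r - \delta,\\
\text{linear interpolation in } x, & r - \delta < x < r + \delta,\\
a_2 x + b_2, & x \ge r + \delta.
\end{cases}
\]

For δ > 0 the drift is Lipschitz, so a unique strong solution exists; at δ = 0 the drift jumps at r, and the process must instead be defined through a weak solution, as in the abstract above.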
4.
A series of new characterizations by zero-regression properties is derived for the distributions in the class of natural exponential families with power variance functions. This class of distributions was introduced by Bar-Lev and Enis (1986) in the context of an investigation of reproducible exponential families. The class is broad and includes the following families: normal, Poisson-type, gamma, all families generated by stable distributions with characteristic exponent in the unit interval (among these are the inverse Gaussian, modified Bessel-type, and Whittaker-type distributions), and families of compound Poisson distributions generated by gamma variates. The characterizations by zero-regression properties are obtained in a unified approach and are based on certain relations which hold among the cumulants of the distributions in this class. Some remarks are made indicating how the techniques used here can be extended to obtain characterizations of general exponential families. The work of this author was performed while he was a visitor in the Department of Statistics, State University of New York at Buffalo.
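For orientation, the defining property of this class is that the variance is a power of the mean. In one common parametrization (the paper's may differ),

\[
V(\mu) = \alpha\,\mu^{\gamma},
\]

where γ = 0 gives the normal family, γ = 1 the Poisson-type families, γ = 2 the gamma, γ = 3 the inverse Gaussian, and 1 < γ < 2 the compound Poisson distributions generated by gamma variates mentioned in the abstract.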
5.
We consider a class of Langevin diffusions with state-dependent volatility. The volatility of the diffusion is chosen so as to make the stationary distribution of the diffusion, with respect to its natural clock, a heated version of the stationary density of interest. The motivation behind this construction is the desire to construct uniformly ergodic diffusions with the required stationary densities. Discrete-time algorithms are constructed from discretisations of these diffusions via Hastings accept–reject mechanisms, and the properties of these algorithms are investigated.
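As background, a standard one-dimensional construction (consistent in spirit with the abstract, though the authors' specific choice of σ may differ) makes a target density π stationary for any smooth volatility σ by pairing it with the drift

\[
dX_t = b(X_t)\,dt + \sigma(X_t)\,dB_t,
\qquad
b(x) = \tfrac{1}{2}\,\sigma^2(x)\,\big(\log \pi(x)\big)' + \sigma(x)\,\sigma'(x),
\]

which reduces to the usual Langevin drift when σ ≡ 1. The state-dependent volatility is then a free design parameter that can be tuned, for instance to obtain uniform ergodicity.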
6.
The Metropolis–Hastings algorithm for estimating a distribution π is based on choosing a candidate Markov chain and then accepting or rejecting moves of the candidate to produce a chain known to have π as the invariant measure. The traditional methods use candidates essentially unconnected to π. We show that the class of candidate distributions developed in Part I (Stramer and Tweedie 1999), which self-target towards the high-density areas of π, produces Metropolis–Hastings algorithms with convergence rates that appear to be considerably better than those known for the traditional candidate choices, such as random walk. We illustrate this behavior for examples with exponential and polynomial tails, and for a logistic regression model using a Gibbs sampling algorithm. The detailed results are given in one dimension, but we indicate how they may extend successfully to higher dimensions.
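For reference, with candidate transition density q, a proposed move from x to y is accepted with the usual Metropolis–Hastings probability

\[
\alpha(x, y) = \min\!\left\{ 1,\; \frac{\pi(y)\, q(y, x)}{\pi(x)\, q(x, y)} \right\},
\]

which leaves π invariant for essentially any q; the self-targeting idea enters only through the choice of q.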
7.
The problem of formal likelihood-based (either classical or Bayesian) inference for discretely observed multidimensional diffusions is particularly challenging. In principle, it involves data augmentation of the observations to give representations of the entire diffusion trajectory. Most currently proposed methodology splits broadly into two classes: either the discretization of idealized approaches for the continuous-time diffusion setup, or the application of standard finite-dimensional methodologies to a discretization of the diffusion model. The connections between these approaches have not been well studied. This article provides a unified framework that brings together these approaches, demonstrating connections, and in some cases surprising differences. As a result, we provide, for the first time, theoretical justification for the various methods of imputing missing data. The inference problems are particularly challenging for irreducible diffusions, and our framework is correspondingly more complex in that case; we therefore treat the reducible and irreducible cases separately within the article. Supplementary materials for the article are available online.
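To fix ideas, the simplest discretization-based route (a generic sketch, not the article's full framework) replaces the unknown transition density over an inter-observation interval Δ by a product of Gaussian Euler densities over m imputed sub-steps:

\[
p_{\Delta}(y \mid x) \;\approx\; \int \prod_{k=1}^{m}
\mathcal{N}\!\Big( u_k ;\; u_{k-1} + \mu(u_{k-1})\tfrac{\Delta}{m},\;
\Sigma(u_{k-1})\tfrac{\Delta}{m} \Big)\, du_1 \cdots du_{m-1},
\]

with u_0 = x and u_m = y, where μ and Σ are the drift and diffusion coefficients. Data augmentation treats the intermediate points u_1, …, u_{m-1} as missing data to be imputed, typically by MCMC.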
8.
We describe algorithms for estimating a given measure π known up to a constant of proportionality, based on a large class of diffusions (extending the Langevin model) for which π is invariant. We show that under weak conditions one can choose from this class in such a way that the diffusions converge at an exponential rate to π, and one can even ensure that convergence is independent of the starting point of the algorithm. When convergence is less than exponential, we show that it is often polynomial at verifiable rates. We then consider methods of discretizing the diffusion in time, and find methods which inherit the convergence rates of the continuous-time process. These contrast with the behavior of the naive, or Euler, discretization, which can behave badly even in simple cases. Our results are described in detail in one dimension only, although extensions to higher dimensions are also briefly described.
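The failure mode of the naive discretization is easy to demonstrate. The sketch below (target, step sizes, and starting points are my illustrative choices, not the paper's examples) applies Euler–Maruyama to the Langevin diffusion for the light-tailed target π(x) ∝ exp(−x⁴): the continuous-time diffusion is ergodic, but the Euler drift step x ↦ x − 2hx³ grows in magnitude once |x| > h^{−1/2}, so the chain can explode.

import numpy as np

def ula_path(x0, h, n, rng):
    """Euler-Maruyama (unadjusted Langevin) chain for pi(x) ~ exp(-x^4).

    The Langevin drift is (1/2)(log pi)'(x) = -2x**3, so one Euler step
    is x -> x - 2*h*x**3 + sqrt(h)*N(0, 1).
    """
    x, path = x0, [x0]
    for _ in range(n):
        x = x - 2.0 * h * x ** 3 + np.sqrt(h) * rng.normal()
        path.append(x)
        if not np.isfinite(x) or abs(x) > 1e6:
            break  # chain has effectively exploded
    return np.array(path)

rng = np.random.default_rng(1)
print(ula_path(x0=4.0, h=0.1, n=50, rng=rng)[-3:])    # past 1/sqrt(h): diverges fast
print(ula_path(x0=0.0, h=0.001, n=50, rng=rng)[-3:])  # near the mode: looks stable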
9.
Discretized simulation is widely used to approximate the transition density of discretely observed diffusions. A recently proposed importance sampler, namely the modified Brownian bridge, has gained much attention for its high efficiency relative to other samplers. It is unclear for this sampler, however, how to balance the trade-off between the number of imputed values and the number of Monte Carlo simulations under a given computing budget. This paper provides an asymptotically efficient allocation of computing resources to the importance sampling approach with the modified Brownian bridge as importance sampler. The optimal trade-off is established by investigating two types of errors: the Euler discretization error and the Monte Carlo error. The main results are illustrated with two simulated examples.
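The flavor of the trade-off can be seen from a back-of-the-envelope balancing argument; the rates used here are the standard ones for a weak-order-one Euler scheme and plain Monte Carlo, and the paper's precise constants and conditions will differ. With M imputed sub-intervals, N Monte Carlo draws, and computing budget K ≈ MN,

\[
\mathrm{error}(M, N) \;\asymp\; \frac{a}{M} + \frac{b}{\sqrt{N}},
\qquad N = K / M
\;\Longrightarrow\;
M^{\star} \propto K^{1/3},\quad N^{\star} \propto K^{2/3},
\]

obtained by setting the derivative in M of a/M + b\sqrt{M/K} to zero. Under these assumptions most of a growing budget should go to additional Monte Carlo replicates rather than to a finer imputation grid, with overall error decaying like K^{-1/3}.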