1.
We study a non-iterative sampling algorithm (IBF) and an MCMC algorithm for change-point estimation in the Ailamujia distribution. Within a Bayesian framework, noninformative priors are chosen, and the posterior distribution of the change-point location and the full conditional distributions of the parameters are derived; the implementation steps of the IBF algorithm and the MCMC method are described in detail. Finally, simulation experiments show that both algorithms estimate the change-point location effectively, and that the IBF algorithm is computationally faster than the MCMC method.
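The abstract describes the samplers only in words; as a hedged illustration, the following Python sketch implements a Gibbs-type MCMC sampler for a single change point, assuming the commonly used Ailamujia density f(x; θ) = 4θ²x e^{-2θx} (a Gamma(2, 2θ) density) and flat priors, so that the full conditionals of θ₁ and θ₂ are Gamma and the change-point location has a discrete full conditional. The IBF algorithm from the paper is not reproduced, and all function names and settings are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate data from the assumed Ailamujia parameterization f(x; theta) = 4*theta^2*x*exp(-2*theta*x),
# which is a Gamma(shape=2, rate=2*theta) distribution.
def r_ailamujia(theta, size):
    return rng.gamma(shape=2.0, scale=1.0 / (2.0 * theta), size=size)

n, true_k = 120, 70
x = np.concatenate([r_ailamujia(0.5, true_k), r_ailamujia(2.0, n - true_k)])

def gibbs_changepoint(x, n_iter=5000):
    """Gibbs sampler for one change point k and the two rates (theta1, theta2), flat priors."""
    n = len(x)
    cumsum = np.cumsum(x)
    total = cumsum[-1]
    k = n // 2
    draws_k = np.empty(n_iter, dtype=int)
    for it in range(n_iter):
        # theta_j | k : Gamma(2*m + 1, rate = 2 * segment sum) under a flat prior
        s1, m1 = cumsum[k - 1], k
        s2, m2 = total - s1, n - k
        theta1 = rng.gamma(2 * m1 + 1, 1.0 / (2.0 * s1))
        theta2 = rng.gamma(2 * m2 + 1, 1.0 / (2.0 * s2))
        # k | theta1, theta2 : discrete full conditional on {1, ..., n-1}
        ks = np.arange(1, n)
        s_head = cumsum[ks - 1]
        logp = (2 * ks * np.log(theta1) + 2 * (n - ks) * np.log(theta2)
                - 2 * theta1 * s_head - 2 * theta2 * (total - s_head))
        logp -= logp.max()
        p = np.exp(logp)
        k = rng.choice(ks, p=p / p.sum())
        draws_k[it] = k
    return draws_k

draws = gibbs_changepoint(x)
print("posterior mode of change point:", np.bincount(draws[1000:]).argmax())
```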
2.
3.
李素芳  张虎  吴芳 《运筹与管理》2019,28(10):89-99
Traditional panel cointegration tests are sensitive to outliers during modeling and depend on a subjective choice of null hypothesis. To address these problems, this paper uses dynamic common factors to characterize the latent cross-sectional dependence structure of panel data and proposes a Bayesian quantile panel cointegration test based on this dynamic-factor structure. Combining the conditional posterior distributions of the parameters at each major quantile level, a Gibbs sampling algorithm incorporating the Kalman filter is designed to carry out the test, and Monte Carlo simulation experiments verify its feasibility and effectiveness. An empirical study using panel data on financial development and economic growth across Chinese provinces finds a cointegration relationship between the two at all major quantile levels. The results show that the Bayesian quantile panel cointegration test avoids the misjudgments that traditional panel cointegration methods can make under different null-hypothesis settings, is robust to outliers, and provides comprehensive and accurate parameter estimates and cointegration test results.
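The full procedure (dynamic common factors, Kalman-filter steps, panel cointegration testing) is beyond a short sketch. Purely to illustrate one building block mentioned in the abstract, namely Gibbs sampling from conditional posteriors at a fixed quantile level, the following sketch implements Bayesian quantile regression via the asymmetric-Laplace normal-exponential mixture; it is not the authors' algorithm, and all names and prior settings are hypothetical.

```python
import numpy as np
from scipy.stats import geninvgauss

rng = np.random.default_rng(1)

def bayes_quantile_gibbs(y, X, p=0.5, n_iter=3000):
    """Gibbs sampler for Bayesian quantile regression at level p, using the
    asymmetric-Laplace likelihood written as a normal-exponential mixture."""
    n, d = X.shape
    theta = (1 - 2 * p) / (p * (1 - p))
    tau2 = 2.0 / (p * (1 - p))
    beta = np.zeros(d)
    v = np.ones(n)
    B0_inv = np.eye(d) / 100.0          # weak N(0, 100 I) prior on beta
    draws = np.empty((n_iter, d))
    for it in range(n_iter):
        # beta | v : multivariate normal
        w = 1.0 / (tau2 * v)
        prec = B0_inv + (X.T * w) @ X
        cov = np.linalg.inv(prec)
        mean = cov @ (X.T @ (w * (y - theta * v)))
        beta = rng.multivariate_normal(mean, cov)
        # v_i | beta : generalized inverse Gaussian GIG(1/2, chi_i, psi)
        resid = y - X @ beta
        chi = np.maximum(resid ** 2 / tau2, 1e-12)
        psi = theta ** 2 / tau2 + 2.0
        v = np.sqrt(chi / psi) * geninvgauss.rvs(0.5, np.sqrt(chi * psi), random_state=rng)
        draws[it] = beta
    return draws

# Toy usage with simulated data (hypothetical example).
n = 200
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
y = X @ np.array([1.0, 2.0]) + rng.standard_normal(n)
beta_draws = bayes_quantile_gibbs(y, X, p=0.25)
print("posterior mean at the 0.25 quantile:", beta_draws[500:].mean(axis=0))
```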
4.
Classical coupling constructions arrange for copies of the same Markov process started at two different initial states to become equal as soon as possible. In this paper, we consider an alternative coupling framework in which one seeks to arrange for two different Markov (or other stochastic) processes to remain equal for as long as possible, when started in the same state. We refer to this “un-coupling” or “maximal agreement” construction as MEXIT, standing for “maximal exit”. After highlighting the importance of un-coupling arguments in a few key statistical and probabilistic settings, we develop an explicit MEXIT construction for stochastic processes in discrete time with countable state-space. This construction is generalized to random processes on general state-space running in continuous time, and then exemplified by discussion of MEXIT for Brownian motions with two different constant drifts.
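For concreteness, here is a minimal Python sketch of the agreement phase of a maximal-agreement coupling for two finite-state Markov chains in discrete time, in which the chains stay together with probability given by the ratio of successive minima of their path probabilities. It is an illustrative reading of the discrete-time construction, not the paper's full treatment, and what the chains do after they separate is omitted.

```python
import numpy as np

rng = np.random.default_rng(2)

def mexit_agreement(P, Q, x0, n_steps):
    """Simulate the agreement phase of a maximal-agreement coupling of two finite-state
    Markov chains with transition matrices P and Q, both started at x0.
    Returns the common path and the un-coupling time T (np.inf if still together).

    While together along path x_{0:t}, the chains move together to state y with probability
    min(mu_{t+1}, nu_{t+1})(x_{0:t}, y) / min(mu_t, nu_t)(x_{0:t}), where mu_t, nu_t are the
    t-step path probabilities under P and Q. The residual laws after splitting are omitted.
    """
    path = [x0]
    a = b = 1.0                     # path probabilities under P and under Q
    m = 1.0                         # running minimum of the two
    for t in range(n_steps):
        x = path[-1]
        a_next = a * P[x]           # candidate path probabilities for each next state y
        b_next = b * Q[x]
        m_next = np.minimum(a_next, b_next)
        stay = m_next.sum() / m     # probability of remaining coupled one more step
        if rng.random() > stay:
            return path, t + 1      # the chains separate at step t + 1
        y = rng.choice(len(P), p=m_next / m_next.sum())
        path.append(y)
        a, b, m = a_next[y], b_next[y], m_next[y]
    return path, np.inf

# Tiny example: two 2-state chains with different transition matrices.
P = np.array([[0.9, 0.1], [0.2, 0.8]])
Q = np.array([[0.7, 0.3], [0.2, 0.8]])
times = np.array([mexit_agreement(P, Q, 0, 50)[1] for _ in range(2000)])
print("P(T > 1) estimate:", np.mean(times > 1))  # should be near 0.8 = sum_y min(P[0, y], Q[0, y])
```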
5.
We combine two important recent advancements of MCMC algorithms: first, methods utilizing the intrinsic manifold structure of the parameter space; and second, algorithms that are effective for targets in infinite dimensions and have the critical property that their mixing time is robust to mesh refinement.
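For context, here is a minimal sketch of one standard mesh-robust sampler of the kind alluded to, the preconditioned Crank-Nicolson (pCN) algorithm for targets of the form exp(log_like(x)) N(x; 0, C). It illustrates only the dimension-robustness ingredient, not the combined geometric method proposed in the paper, and the toy target is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)

def pcn(log_like, x0, C_sqrt, beta=0.2, n_iter=10_000):
    """Preconditioned Crank-Nicolson sampler for a target pi(x) ∝ exp(log_like(x)) N(x; 0, C).
    The acceptance rule involves only the likelihood, which is what keeps the mixing
    behaviour stable when the discretization of x is refined."""
    x = x0.copy()
    ll = log_like(x)
    samples = np.empty((n_iter, len(x0)))
    for it in range(n_iter):
        xi = C_sqrt @ rng.standard_normal(len(x0))          # draw from the N(0, C) prior
        x_prop = np.sqrt(1.0 - beta ** 2) * x + beta * xi   # pCN proposal
        ll_prop = log_like(x_prop)
        if np.log(rng.random()) < ll_prop - ll:             # prior terms cancel exactly
            x, ll = x_prop, ll_prop
        samples[it] = x
    return samples

# Toy use: N(0, I) prior on R^50 and a likelihood that observes the mean of x.
d = 50
log_like = lambda x: -0.5 * (x.mean() - 1.0) ** 2 / 0.01
samples = pcn(log_like, np.zeros(d), np.eye(d))
print("posterior mean of x.mean():", samples[2000:].mean(axis=0).mean())
```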
6.
Economic time series typically exhibit serial correlation and long memory, so analyzing them with the ARFIMA model, which captures both short-memory and long-memory behavior, helps improve fitting and forecasting accuracy. In recent decades, research on estimating ARFIMA model parameters and the order d of the fractional differencing operator has grown steadily, and the model has been applied ever more widely. Exploiting the advantages of Bayesian methods in parameter estimation, this paper proposes reasonable prior distributions informed by the posterior characteristics reported in the many studies applying this approach; given the computational difficulty, an MCMC method is used to estimate the model parameters. Finally, an empirical analysis of China's GDP data over the past few decades yields the posterior distribution plots, means, variances, and 95% credible intervals of the ARFIMA model parameters.
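As a hedged illustration of this kind of MCMC estimation (not the paper's prior or sampler), the sketch below estimates the fractional-differencing order d of an ARFIMA(0, d, 0) model by random-walk Metropolis under a conditional Gaussian likelihood with the innovation variance fixed; all names and tuning values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

def frac_diff(x, d):
    """Apply the fractional differencing operator (1 - B)^d via its binomial expansion."""
    n = len(x)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    # convolution: y_t = sum_{k=0}^{t} w_k * x_{t-k}, with pre-sample values treated as zero
    return np.array([w[: t + 1] @ x[t::-1] for t in range(n)])

def log_post_d(d, x, sigma2=1.0):
    """Log posterior of d for ARFIMA(0, d, 0): conditional Gaussian likelihood,
    flat prior on (-0.5, 0.5), innovation variance fixed at sigma2."""
    if not (-0.5 < d < 0.5):
        return -np.inf
    eps = frac_diff(x, d)
    return -0.5 * np.sum(eps ** 2) / sigma2

def metropolis_d(x, n_iter=4000, step=0.02):
    """Random-walk Metropolis for the fractional-differencing order d."""
    d, lp = 0.0, log_post_d(0.0, x)
    draws = np.empty(n_iter)
    for it in range(n_iter):
        d_prop = d + step * rng.standard_normal()
        lp_prop = log_post_d(d_prop, x)
        if np.log(rng.random()) < lp_prop - lp:
            d, lp = d_prop, lp_prop
        draws[it] = d
    return draws

# Simulate an approximate ARFIMA(0, 0.3, 0) series by truncated fractional integration of white noise.
x = frac_diff(rng.standard_normal(500), -0.3)
draws = metropolis_d(x)
print("posterior mean of d:", draws[1000:].mean())
```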
7.
To accurately quantify the time-varying dependence structure between assets and forecast portfolio risk, this paper accounts for differences in investors' risk preferences, assumes that the innovations of the asset return series follow a standardized t distribution, and proposes a time-varying Copula-GARCH-M-t model. A two-step MCMC estimation method for the model parameters is derived, together with a one-step-ahead forecasting method for portfolio risk (VaR and CVaR). Finally, using the Shanghai Composite Index and the S&P 500 Index, the feasibility and advantages of the proposed model and methods are verified, and the model quite accurately quantifies the time-varying dependence structure of the two indices after the subprime crisis.
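The model in the abstract is time-varying and estimated by a two-step MCMC method; purely to illustrate the final risk-forecasting step, the simplified sketch below computes portfolio VaR and CVaR by Monte Carlo from a static t-copula with t margins. The parameter values stand in for one-step-ahead GARCH forecasts and are hypothetical placeholders, not estimates from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def simulate_t_copula(n_sims, rho, nu):
    """Draw uniforms from a bivariate t-copula with correlation rho and nu degrees of freedom."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal(np.zeros(2), cov, size=n_sims)
    w = rng.chisquare(nu, size=n_sims) / nu
    return stats.t.cdf(z / np.sqrt(w)[:, None], df=nu)

def portfolio_var_cvar(weights, margins, rho, nu, alpha=0.05, n_sims=100_000):
    """Monte Carlo VaR and CVaR of a portfolio with given scipy marginal distributions
    and a t-copula dependence structure."""
    u = simulate_t_copula(n_sims, rho, nu)
    returns = np.column_stack([m.ppf(u[:, j]) for j, m in enumerate(margins)])
    port = returns @ weights
    var = -np.quantile(port, alpha)        # loss threshold exceeded with probability alpha
    cvar = -port[port <= -var].mean()      # expected loss beyond the VaR threshold
    return var, cvar

# Hypothetical one-step-ahead margins, e.g. rescaled standardized-t GARCH forecasts.
margins = [stats.t(df=6, loc=0.0004, scale=0.011), stats.t(df=8, loc=0.0003, scale=0.009)]
var, cvar = portfolio_var_cvar(np.array([0.5, 0.5]), margins, rho=0.6, nu=7)
print(f"95% VaR = {var:.4f}, 95% CVaR = {cvar:.4f}")
```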
8.
We describe NIMBLE, a system for programming statistical algorithms for general model structures within R. NIMBLE is designed to meet three challenges: flexible model specification, a language for programming algorithms that can use different models, and a balance between high-level programmability and execution efficiency. For model specification, NIMBLE extends the BUGS language and creates model objects, which can manipulate variables, calculate log probability values, generate simulations, and query the relationships among variables. For algorithm programming, NIMBLE provides functions that operate with model objects using two stages of evaluation. The first stage allows specialization of a function to a particular model and/or nodes, such as creating a Metropolis-Hastings sampler for a particular block of nodes. The second stage allows repeated execution of computations using the results of the first stage. To achieve efficient second-stage computation, NIMBLE compiles models and functions via C++, using the Eigen library for linear algebra, and provides the user with an interface to compiled objects. The NIMBLE language represents a compilable domain-specific language (DSL) embedded within R. This article provides an overview of the design and rationale for NIMBLE along with illustrative examples including importance sampling, Markov chain Monte Carlo (MCMC) and Monte Carlo expectation maximization (MCEM). Supplementary materials for this article are available online.
9.
A new Bayesian approach is presented for extracting 2D object boundaries with measures of uncertainty. The boundaries are described by minimal closed sequences of segments and arcs, called mixed polygons. The sequence is minimal in the sense that it is able to describe all the geometrical properties of the boundary without being redundant. Based on geometrical measures evaluated on the object boundary model, a prior distribution is introduced in order to favor a mixed polygon with good geometrical properties, avoiding short sides, collinearity between segments, and so on. The estimation process is based on a two-stage procedure that combines reversible-jump MCMC (RJMCMC) and classic MCMC methods. The RJMCMC method is viewed as a model selection technique, and it is used to estimate the correct number of sides of the mixed polygon. The MCMC algorithm provides a sample of mixed polygons through which to evaluate the mixed polygon that best approximates the object boundary and its geometrical uncertainty. A convergence criterion for the RJMCMC method is provided.
10.
Bayesian analysis provides a convenient setting for the estimation of complex generalized additive regression models (GAMs). Since computational power has tremendously increased in the past decade, it is now possible to tackle complicated inferential problems, for example, with Markov chain Monte Carlo simulation, on virtually any modern computer. This is one of the reasons why Bayesian methods have become increasingly popular, leading to a number of highly specialized and optimized estimation engines and with attention shifting from conditional mean models to probabilistic distributional models capturing location, scale, shape (and other aspects) of the response distribution. To embed many different approaches suggested in literature and software, a unified modeling architecture for distributional GAMs is established that exploits distributions, estimation techniques (posterior mode or posterior mean), and model terms (fixed, random, smooth, spatial,…). It is shown that within this framework implementing algorithms for complex regression problems, as well as the integration of already existing software, is relatively straightforward. The usefulness is emphasized with two complex and computationally demanding application case studies: a large daily precipitation climatology, as well as a Cox model for continuous time with space-time interactions. Supplementary material for this article is available online.  相似文献   