Similar Documents
20 similar records found
1.
Statistical Inference for Stochastic Processes - We propose a randomized approach to the consistent statistical analysis of random processes and fields on $\mathbb{R}^m$ and $\mathbb{Z}^m$, ...

2.
Summary A random measure is said to be selected by a weighted gamma prior probability if the values it assigns to disjoint sets are independent gamma random variables with positive multipliers. If the intensity measure of a nonhomogeneous Poisson point process is selected by a weighted gamma prior probability and if a sample is drawn from the Poisson point process having this intensity measure, then the posterior random intensity measure given the observations is also selected by a weighted gamma prior probability. If the measure space is Euclidean and if the true intensity measure is continuous and finite, the centered posterior process, rescaled by the square root of the sample size, will converge weakly in the Skorohod topology to a Wiener process subject to a change of time scale. This research was supported in part by National Science Foundation Grants MCS 77-10376 and MCS 75-14194.
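The conjugacy described above (independent gamma priors on disjoint sets, Poisson observations, gamma posteriors) can be sketched in a discretised form, with the intensity over each of a set of disjoint bins given an independent gamma prior. This is a minimal sketch under assumed bin counts and hyperparameters, not the paper's notation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretised sketch: prior on the intensity over each bin A_i is
#   Lambda_i ~ Gamma(shape=a_i, rate=b), independently across bins.
# After observing n independent realisations with bin counts N_i, the
# conjugate posterior is  Lambda_i | data ~ Gamma(a_i + N_i, b + n).
bins = 10
a = np.full(bins, 0.5)      # prior shapes (assumed)
b = 1.0                     # prior rate (assumed)
true_intensity = np.linspace(1.0, 5.0, bins)

n = 200                     # number of observed process realisations
counts = rng.poisson(true_intensity, size=(n, bins)).sum(axis=0)

post_shape = a + counts
post_rate = b + n
post_mean = post_shape / post_rate   # posterior mean intensity per bin
```

With many observed realisations the posterior mean per bin approaches the empirical mean count, as the conjugate update predicts.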

3.
In this paper a comparative evaluation study on popular non-homogeneous Poisson models for count data is performed. For the study the standard homogeneous Poisson model (HOM) and three non-homogeneous variants, namely a Poisson changepoint model (CPS), a Poisson free mixture model (MIX), and a Poisson hidden Markov model (HMM) are implemented in both conceptual frameworks: a frequentist and a Bayesian framework. This yields eight models in total, and the goal of the presented study is to shed some light onto their relative merits and shortcomings. The first major objective is to cross-compare the performances of the four models (HOM, CPS, MIX and HMM) independently for both modelling frameworks (Bayesian and frequentist). Subsequently, a pairwise comparison between the four Bayesian and the four frequentist models is performed to elucidate to what extent the results of the two paradigms (‘Bayesian vs. frequentist’) differ. The evaluation study is performed on various synthetic Poisson data sets as well as on real-world taxi pick-up counts, extracted from the recently published New York City Taxi database.
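Two of the four models compared (HOM and CPS) can be sketched in the frequentist framework: the homogeneous model fits a single Poisson rate, while the changepoint model profiles the likelihood over the changepoint location. The synthetic data, segment rates, and minimum-segment-length guard below are assumptions for illustration, not the study's setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic counts with a single changepoint at index 50 (illustrative data)
x = np.concatenate([rng.poisson(2.0, 50), rng.poisson(6.0, 50)])
n = len(x)

def pois_loglik(x, lam):
    # Poisson log-likelihood up to the additive constant -sum(log x!)
    return np.sum(x * np.log(lam) - lam)

# HOM: one MLE rate for the whole series
ll_hom = pois_loglik(x, x.mean())

# CPS: one changepoint tau with separate MLE rates per segment,
# tau chosen by maximising the profile likelihood
ll_cps, tau = max(
    (pois_loglik(x[:t], x[:t].mean()) + pois_loglik(x[t:], x[t:].mean()), t)
    for t in range(5, n - 5)    # guard: at least 5 points per segment
)
```

Since CPS nests HOM, its profiled log-likelihood is never smaller; model-selection criteria would penalise the extra parameters.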

4.
5.
Bayesian Inference for Process Capability   Cited by: 1 (self-citations: 0, citations by others: 1)
王正东 (Wang Zhengdong), 《应用数学》 (Applied Mathematics), 1995, 8(2): 151-157
This paper studies process capability from a Bayesian viewpoint. For both noninformative and conjugate priors, it gives the posterior distribution of Cp, its conditional-expectation estimate and maximum a posteriori estimate, a Bayesian lower confidence limit, and a critical value for judging whether a process is capable; the approach is suited to statistical inference about similar processes.
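The kind of quantity studied here can be illustrated with a standard toy computation: under the noninformative prior $p(\mu,\sigma^2) \propto 1/\sigma^2$, the posterior of $\sigma^2$ is scaled inverse chi-square, which yields posterior draws of $C_p = (USL - LSL)/(6\sigma)$ and a Monte Carlo lower limit. The specification limits, sample, and credibility level are assumptions; this is not the paper's derivation:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy Bayesian lower limit for Cp = (USL - LSL) / (6 sigma) under the
# noninformative prior p(mu, sigma^2) ∝ 1/sigma^2, for which
#   sigma^2 | data ~ (n - 1) s^2 / chi^2_{n-1}.
usl, lsl = 10.0, 4.0                      # assumed specification limits
x = rng.normal(7.0, 0.8, size=40)          # assumed process sample
n, s2 = len(x), x.var(ddof=1)

draws = (n - 1) * s2 / rng.chisquare(n - 1, size=100_000)  # posterior sigma^2
cp = (usl - lsl) / (6.0 * np.sqrt(draws))
cp_lower = np.quantile(cp, 0.05)           # 95% Bayesian lower limit for Cp
```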

6.
A common approach to modelling extreme values is to consider the excesses above a high threshold as realisations of a non-homogeneous Poisson process. While this method offers the advantage of modelling using threshold-invariant extreme value parameters, the dependence between these parameters makes estimation more difficult. We present a novel approach for Bayesian estimation of the Poisson process model parameters by reparameterising in terms of a tuning parameter m. This paper presents a method for choosing the optimal value of m that near-orthogonalises the parameters, which is achieved by minimising the correlation between the parameters under their asymptotic posterior distribution. This choice of m ensures more rapid convergence and efficient sampling from the joint posterior distribution using Markov Chain Monte Carlo methods. Samples from the parameterisation of interest are then obtained by a simple transform. Results are presented in the cases of identically and non-identically distributed models for extreme rainfall in Cumbria, UK.

7.
We introduce a new aspect of a risk process, which is a macro approximation of the flow of a risk reserve. We assume that the underlying process consists of a Brownian motion plus negative jumps, and that the process is observed at discrete time points. In our context, each jump size of the process does not necessarily correspond to an individual claim size. Therefore our risk process is different from the traditional risk process. We cannot directly observe each jump size because of discrete observations. Our goal is to estimate the adjustment coefficient of our risk process from discrete observations.

8.
In Euclidean $k$-space, the cone of vectors $x = (x_1, x_2, \ldots, x_k)$ satisfying $x_1 \le x_2 \le \cdots \le x_k$ and $\sum_{j=1}^{k} x_j = 0$ is generated by the vectors $v_j = (j-k, \ldots, j-k, j, \ldots, j)$ having $j-k$'s in the first $j$ coordinates and $j$'s in the remaining $k-j$ coordinates, for $1 \le j < k$. In this equal weights case, the average angle between $v_i$ and $v_j$ over all pairs $(i, j)$ with $1 \le i < j < k$ is known to be 60°. This paper generalizes the problem by considering arbitrary weights with permutations.
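The 60° average in the equal-weights case is easy to confirm numerically from the generators described above (the helper name is assumed):

```python
import numpy as np
from itertools import combinations

def avg_angle_deg(k):
    # Generators v_j: first j coordinates equal j - k, remaining k - j equal j
    V = [np.array([j - k] * j + [j] * (k - j), dtype=float) for j in range(1, k)]
    angles = [
        np.degrees(np.arccos(np.dot(u, w) / (np.linalg.norm(u) * np.linalg.norm(w))))
        for u, w in combinations(V, 2)
    ]
    return np.mean(angles)   # averages to 60 degrees for every k >= 3
```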

9.
Micro-data of European Union (EU) countries show that capital incomes account for a large part of disparity in populations and follow heavy-tailed distributions in many EU countries. Measuring and comparing the disparity requires incorporating the relative nature of ‘small’ and ‘large,’ and for this reason we employ the newly developed Zenga index of economic inequality. Its non-parametric estimator does not fall into any well known class of statistics. This makes the development of statistical inference a challenge even for light-tailed populations, let alone heavy-tailed ones, as is the case with capital incomes. In this paper we construct a heavy-tailed Zenga estimator, establish its asymptotic distribution, and derive confidence intervals. We explore the performance of the confidence intervals in a simulation study and draw conclusions about capital incomes in EU countries, based on the 2001 wave of the European Community Household Panel (ECHP) survey.

10.
《Optimization》2012,61(5):681-694
As global or combinatorial optimization problems are not effectively tractable by means of deterministic techniques, Monte Carlo methods are used in practice for obtaining “good” approximations to the optimum. In order to test the accuracy achieved after a sample of finite size, the Bayesian nonparametric approach is proposed as a suitable context, and the theoretical as well as computational implications of prior distributions in the class of neutral-to-the-right distributions are examined. The feasibility of the approach relative to particular Monte Carlo procedures is finally illustrated both for the global optimization problem and the {0-1} programming problem.

11.
We address the question as to whether a prior distribution on the space of distribution functions exists which generates the posterior produced by Efron's and Rubin's bootstrap techniques, emphasizing the connection with the Dirichlet process. We also introduce a new resampling plan which has two advantages: prior opinions are taken into account and the predictive distribution of the future observations is not forced to be concentrated on observed values.

12.
This paper adapts Bayesian Markov chain Monte Carlo methods for application to some auto-regressive conditional duration models. Subsequently, the properties of these estimators are examined and assessed across a range of possible conditional error distributions and dynamic specifications, including under error mis-specification. A novel model error distribution, employing a truncated skewed Student-t distribution is proposed and the Bayesian estimator assessed for it. The results of an extensive simulation study reveal that favourable estimation properties are achieved under a range of possible error distributions, but that the generalised gamma distribution assumption is most robust and best preserves these properties, including when it is incorrectly specified. The results indicate that the powerful numerical methods underlying the Bayesian estimator allow more efficiency than the (quasi-) maximum likelihood estimator for the cases considered.

13.
This paper introduces a new family of local density separations for assessing robustness of finite-dimensional Bayesian posterior inferences with respect to their priors. Unlike for their global equivalents, under these novel separations posterior robustness is recovered even when the functioning posterior converges to a defective distribution, irrespective of whether the prior densities are grossly misspecified and of the form and validity of the assumed data sampling distribution. For exponential family models, the local density separations are shown to form the basis of a weak topology closely linked to the Euclidean metric on the natural parameters. In general, the local separations are shown to measure relative roughness of the prior distribution with respect to its corresponding posterior and provide explicit bounds for the total variation distance between an approximating posterior density and the genuine posterior. We illustrate the application of these bounds for assessing robustness of the posterior inferences for a dynamic time series model of blood glucose concentration in diabetes mellitus patients with respect to alternative prior specifications.

14.
Superstatistics and Tsallis statistics in statistical mechanics are given an interpretation in terms of Bayesian statistical analysis. Superstatistics is then extended, first by replacing each component of the conditional and marginal densities by Mathai's pathway model, and then by replacing both components by Mathai's pathway models. This produces a wide class of mathematically and statistically interesting functions for prospective applications in statistical physics. It is pointed out that the final integral is a particular case of a general class of integrals introduced by the authors earlier. Those integrals are also connected to Krätzel integrals in applied analysis, inverse Gaussian densities in stochastic processes, reaction rate integrals in the theory of nuclear astrophysics, and Tsallis statistics in nonextensive statistical mechanics. The final results are obtained in terms of Fox's H-function. A matrix-variate analogue of one significant special case is also pointed out.

15.
The sensitivity of posterior inferences to model specification can be considered an indicator of the presence of outliers, i.e. values that are highly unlikely under the assumed model. The occurrence of anomalous values can seriously alter the shape of the likelihood function and lead to posterior distributions far from those one would obtain without these data inadequacies. To deal with these hindrances, a robust approach is discussed which allows us to obtain outlier-resistant posterior distributions with properties similar to those of a proper posterior distribution. The methodology is based on replacing the genuine likelihood with a weighted likelihood function in Bayes' formula.
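The weighted-likelihood replacement can be sketched in a conjugate normal-mean toy example, where each observation's likelihood contribution is raised to a weight $w_i$ so that downweighted outliers barely move the posterior. The weighting rule, prior, and data below are illustrative assumptions, not the paper's scheme:

```python
import numpy as np

rng = np.random.default_rng(2)

# Data: 50 clean N(0, 1) observations plus two gross outliers (assumed)
sigma = 1.0
x = np.concatenate([rng.normal(0.0, sigma, 50), [25.0, 30.0]])

# Crude weights: zero out points far from the median (illustrative choice)
z = np.abs(x - np.median(x)) / sigma
w = np.where(z < 3.0, 1.0, 0.0)

# Conjugate update for the mean under a N(0, tau2) prior, with each
# likelihood term raised to the power w_i in Bayes' formula
tau2 = 100.0
post_prec = 1.0 / tau2 + w.sum() / sigma**2
post_mean = (w @ x / sigma**2) / post_prec

# Unweighted posterior mean for comparison: dragged toward the outliers
unweighted_prec = 1.0 / tau2 + len(x) / sigma**2
unweighted_mean = (x.sum() / sigma**2) / unweighted_prec
```

The weighted posterior mean stays near the true value 0, while the ordinary posterior mean is pulled well away by the two outliers.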

16.
17.
The early work of Zellner on the multivariate Student-t linear model has been extended to Bayesian inference for linear models with dependent non-normal error terms, particularly through various papers by Osiewalski, Steel and coworkers. This article provides a full Bayesian analysis for a spherical linear model. The density generator of the spherical distribution is here allowed to depend both on the precision parameter φ and on the regression coefficients β. Another distinctive aspect of this paper is that proper priors for the precision parameter are discussed. The normal-chi-squared family of prior distributions is extended to a new class, which allows the posterior analysis to be carried out analytically. On the other hand, a direct joint modelling of the data vector and of the parameters leads to conjugate distributions for the regression and the precision parameters, both individually and jointly. It is shown that some model specifications lead to Bayes estimators that do not depend on the choice of the density generator, in agreement with previous results obtained in the literature under different assumptions. Finally, the distribution theory developed to tackle the main problem is useful in its own right.

18.
Equally weighted mixture models are recommended for situations where it is required to draw precise finite sample inferences regarding population parameters, but where the population distribution is not constrained to belong to a simple parametric family. They lead to an alternative procedure to the Laird-DerSimonian maximum likelihood algorithm for unequally weighted mixture models. Their primary purpose lies in the facilitation of exact Bayesian computations via importance sampling. Under very general sampling and prior specifications, exact Bayesian computations can be based upon an application of importance sampling, referred to as Permutable Bayesian Marginalization (PBM). An importance function based upon a truncated multivariate t-distribution is proposed, which refers to a generalization of the maximum likelihood procedure. The estimation of discrete distributions, by binomial mixtures, and inference for survivor distributions, via mixtures of exponential or Weibull distributions, are considered. Equally weighted mixture models are also shown to lead to an alternative Gibbs sampling methodology to the Lavine-West approach.

19.
20.
Some optimal inference results for a class of diffusion processes, including the continuous state branching process and the approximate Wright-Fisher model with selection, are derived. It is then shown how the theory of convergence of experiments, due to Le Cam, can be applied to derive corresponding results for processes approximating these diffusions.

