Found 20 similar documents; search took 15 ms
1.
Arnaud Doucet Vladislav B. Tadić 《Annals of the Institute of Statistical Mathematics》2003,55(2):409-422
Particle filtering techniques are a set of powerful and versatile simulation-based methods to perform optimal state estimation in nonlinear non-Gaussian state-space models. If the model includes fixed parameters, a standard technique to perform parameter estimation consists of extending the state with the parameter to transform the problem into an optimal filtering problem. However, this approach requires the use of special particle filtering techniques which suffer from several drawbacks. We consider here an alternative approach combining particle filtering and gradient algorithms to perform batch and recursive maximum likelihood parameter estimation. An original particle method is presented to implement these approaches and their efficiency is assessed through simulation.
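As background for the state-estimation setting this abstract describes, here is a minimal bootstrap particle filter sketch (not the paper's original particle method) for a toy linear-Gaussian model; the model `x_t = a*x_{t-1} + noise`, the parameters, and all numbers are illustrative assumptions.

```python
import math
import random

def bootstrap_pf(ys, n_particles=500, a=1.0, q=0.1, r=1.0, seed=1):
    """Minimal bootstrap particle filter for the toy state-space model
    x_t = a*x_{t-1} + N(0, q^2),  y_t = x_t + N(0, r^2).
    Returns the filtered posterior mean of x_t at each time step."""
    rng = random.Random(seed)
    # diffuse initial particle cloud
    xs = [rng.gauss(0.0, 5.0) for _ in range(n_particles)]
    means = []
    for y in ys:
        # propagate particles through the state transition
        xs = [a * x + rng.gauss(0.0, q) for x in xs]
        # weight by the Gaussian observation likelihood
        ws = [math.exp(-0.5 * ((y - x) / r) ** 2) for x in xs]
        total = sum(ws)
        means.append(sum(w * x for w, x in zip(ws, xs)) / total)
        # multinomial resampling
        xs = rng.choices(xs, weights=ws, k=n_particles)
    return means

# Observations scattered around a fixed level of 5.0 (illustrative data)
rng = random.Random(0)
obs = [5.0 + rng.gauss(0.0, 1.0) for _ in range(25)]
filtered = bootstrap_pf(obs)
```

With a near-static state, the filtered mean settles close to the true level of 5.0.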
2.
Steven J. Lewis Alpan Raval John E. Angus 《Mathematical and Computer Modelling》2008,47(11-12):1198-1216
Hidden Markov models are used as tools for pattern recognition in a number of areas, ranging from speech processing to biological sequence analysis. Profile hidden Markov models represent a class of so-called “left–right” models that have an architecture that is specifically relevant to classification of proteins into structural families based on their amino acid sequences. Standard learning methods for such models employ a variety of heuristics applied to the expectation-maximization implementation of the maximum likelihood estimation procedure in order to find the global maximum of the likelihood function. Here, we compare maximum likelihood estimation to fully Bayesian estimation of parameters for profile hidden Markov models with a small number of parameters. We find that, relative to maximum likelihood methods, Bayesian methods assign higher scores to data sequences that are distantly related to the pattern consensus, show better performance in classifying these sequences correctly, and continue to perform robustly with regard to misspecification of the number of model parameters. Though our study is limited in scope, we expect our results to remain relevant for models with a large number of parameters and other types of left–right hidden Markov models.
3.
Comparison of certain value-at-risk estimation methods for the two-parameter Weibull loss distribution (Cited by: 2; self-citations: 0; others: 2)
Omer L. Gebizlioglu, Birdal Şenoğlu, Yeliz Mert Kantar 《Journal of Computational and Applied Mathematics》2011,235(11):3304-3314
The Weibull distribution is one of the most important distributions that is utilized as a probability model for loss amounts in connection with actuarial and financial risk management problems. This paper considers the Weibull distribution and its quantiles in the context of estimation of a risk measure called Value-at-Risk (VaR). VaR is simply the maximum loss in a specified period with a pre-assigned probability level. We attempt to present certain estimation methods for VaR as a quantile of a distribution and compare these methods with respect to their deficiency (Def) values. Along this line, the results of some Monte Carlo simulations, that we have conducted for detailed investigations on the efficiency of the estimators as compared to MLE, are provided.
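Treating VaR as a quantile of the two-parameter Weibull distribution, as this abstract does, admits a closed form: the p-quantile is `scale * (-ln(1-p))**(1/shape)`. A small sketch (parameter names and values are illustrative, not from the paper):

```python
import math

def weibull_var(shape: float, scale: float, p: float) -> float:
    """Value-at-Risk at confidence level p as the p-quantile of a
    two-parameter Weibull loss distribution:
    F^{-1}(p) = scale * (-ln(1 - p))**(1/shape)."""
    return scale * (-math.log(1.0 - p)) ** (1.0 / shape)

# Example: shape k = 2, scale lambda = 1; the 95% VaR
var_95 = weibull_var(2.0, 1.0, 0.95)
```

For shape 1 the Weibull reduces to the exponential, so the quantile at p = 1 - 1/e is exactly the scale parameter.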
4.
This paper employs a multivariate extreme value theory (EVT) approach to study the limit distribution of the loss of a general credit portfolio with low default probabilities. A latent variable model is employed to quantify the credit portfolio loss, where both heavy tails and tail dependence of the latent variables are realized via a multivariate regular variation (MRV) structure. An approximation formula to implement our main result numerically is obtained. Intensive simulation experiments are conducted, showing that this approximation formula is accurate for relatively small default probabilities, and that our approach is superior to a copula-based approach in reducing model risk.
5.
Choosing a suitable risk measure to optimize an option portfolio’s performance represents a significant challenge. This paper is concerned with illustrating the advantages of higher-order coherent risk measures for evaluating the evolution of option risk. It discusses the detailed implementation of the resulting dynamic risk optimization problem using stochastic programming. We propose an algorithmic procedure to optimize an option portfolio based on minimization of conditional higher-order coherent risk measures. Illustrative examples demonstrate some advantages in the portfolio’s performance when higher-order coherent risk measures are used in the risk optimization criterion.
6.
Using a Bayesian network, collected operational-risk events are classified to build a data network. Under assumed distributional forms, the parameters governing the frequency and severity of each class of loss event are estimated separately; correlated nodes are handled with copula functions, and the VaR and ES of the aggregate distribution are then estimated. This provides a concrete alternative method for estimating operational-risk losses under the Basel Accord.
7.
Jong-Wuu Wu 《Annals of the Institute of Statistical Mathematics》1996,48(2):283-294
I propose a simple method to estimate the regression parameters in a quasi-likelihood model. My main approach utilizes a dimension reduction technique to first reduce the regressor X to one dimension before solving the quasi-likelihood equations. In addition, the real advantage of using the dimension reduction technique is that it provides a good initial estimate for a one-step estimator of the regression parameters. Under certain design conditions, the estimators are asymptotically multivariate normal and consistent. Moreover, a Monte Carlo simulation is used to study the practical performance of the procedures, and I also assess the CPU time required to compute the estimates. This research was partially supported by the National Science Council, R.O.C. (Plan No. NSC 82-0208-M-032-023-T).
8.
One of the most important parameters determining the performance of communication networks is network reliability. Network reliability strongly depends not only on the topological layout of the communication network but also on the reliability and availability of the communication facilities. The selection of an optimal network topology is an NP-hard problem, so the computation time of enumeration-based methods grows exponentially with network size. This paper presents a new solution approach based on the cross-entropy method, called NCE, for the design of communication networks. The design problem is to find a network topology with minimum cost such that all-terminal reliability is not less than a given level of reliability. To investigate the effectiveness of the proposed NCE, comparisons with other heuristic approaches from the literature are carried out in a three-stage experimental study. Computational results show that NCE is an effective heuristic approach to the design of reliable networks.
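The cross-entropy method at the heart of NCE samples candidate solutions from a parameterized distribution and refits the distribution to the best samples. A toy sketch on a stand-in objective (maximizing the number of ones in a binary string, not the paper's reliability-constrained cost problem); all names and settings are illustrative:

```python
import random

def cross_entropy_onemax(n_bits=12, n_samples=60, elite_frac=0.2,
                         n_iters=30, smooth=0.7, seed=2):
    """Toy cross-entropy method: maximize the number of ones in a binary
    string.  Each bit has an independent Bernoulli sampling probability;
    after each batch, the probabilities are refit to the elite samples
    (with smoothing), concentrating the sampler on good solutions."""
    rng = random.Random(seed)
    p = [0.5] * n_bits
    n_elite = max(1, int(elite_frac * n_samples))
    best = 0
    for _ in range(n_iters):
        batch = [[1 if rng.random() < pi else 0 for pi in p]
                 for _ in range(n_samples)]
        batch.sort(key=sum, reverse=True)
        elite = batch[:n_elite]
        best = max(best, sum(batch[0]))
        # refit each bit probability to the elite sample, with smoothing
        for i in range(n_bits):
            freq = sum(s[i] for s in elite) / n_elite
            p[i] = smooth * freq + (1 - smooth) * p[i]
    return best, p

best, p = cross_entropy_onemax()
```

In a network-design version, a bit would indicate whether a link is included, and the score would combine cost with a reliability check.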
9.
Ping Zhang 《Annals of the Institute of Statistical Mathematics》1993,45(1):105-111
Estimating the prediction error is a common practice in the statistical literature. Under a linear regression model, let e be the conditional prediction error and ê be its estimate. We use ρ(ê, e), the correlation coefficient between e and ê, to measure the performance of a particular estimation method. Reasons are given why correlation is chosen over the more popular mean squared error loss. The main results of this paper conclude that it is generally not possible to obtain good estimates of the prediction error. In particular, we show that ρ(ê, e) = O(n^(−1/2)) as n → ∞. When the sample size is small, we argue that high values of ρ(ê, e) can be achieved only when the residual error distribution has very heavy tails and when no outliers are present in the data. Finally, we show that in order for ρ(ê, e) to be bounded away from zero asymptotically, ê has to be biased.
10.
A new method for parameter estimation in hierarchical linear models is given using the Monte Carlo EM (MCEM) algorithm, resolving the intractable integrals that arise when the EM algorithm is applied to such models. Numerical simulations compare the method's estimates with those of the EM algorithm, verifying its effectiveness and feasibility.
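The key MCEM idea the abstract relies on is replacing the E-step expectation with a Monte Carlo average over draws of the latent variables. A minimal sketch on a one-way random-effects toy model (not the hierarchical linear model of the paper; the model and all settings are illustrative assumptions):

```python
import random

def mcem_variance(ys, n_iter=40, n_draws=200, seed=3):
    """Monte Carlo EM sketch for the toy random-effects model
    y_i = b_i + e_i,  b_i ~ N(0, tau2),  e_i ~ N(0, 1).
    The E-step expectation E[b_i^2 | y_i] is replaced by an average over
    draws from the Gaussian conditional b_i | y_i; the M-step sets tau2
    to the mean of these averages."""
    rng = random.Random(seed)
    tau2 = 1.0
    for _ in range(n_iter):
        shrink = tau2 / (tau2 + 1.0)   # posterior mean factor, also the
        sd = shrink ** 0.5             # posterior variance of b_i | y_i
        total = 0.0
        for y in ys:
            m = shrink * y
            draws = [rng.gauss(m, sd) for _ in range(n_draws)]
            total += sum(b * b for b in draws) / n_draws
        tau2 = total / len(ys)
    return tau2

# Illustrative data with true tau2 = 4, so y_i ~ N(0, 5)
rng = random.Random(0)
ys = [rng.gauss(0.0, 5.0 ** 0.5) for _ in range(800)]
tau2_hat = mcem_variance(ys)
```

For this toy model the marginal MLE is available in closed form (mean of y² minus 1), which makes it easy to check that the MCEM iterates converge to the right fixed point.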
11.
12.
Sylvia Frühwirth-Schnatter Leopold Sögner 《Annals of the Institute of Statistical Mathematics》2009,61(1):159-179
This paper discusses practical Bayesian estimation of stochastic volatility models based on OU processes with marginal Gamma laws. Estimation is based on a parameterization which is derived from the Rosiński representation and has the advantage of being non-centered. The parameterization is based on a marked point process, living on the positive real line, with uniformly distributed marks. We define a Markov chain Monte Carlo (MCMC) scheme which enables multiple updates of the latent point process and generalizes the single-update algorithm used earlier. At each MCMC draw, more than one point is added to or deleted from the latent point process. This is particularly useful for high-intensity processes. Furthermore, the article deals with superposition models, discussing how the identifiability problem inherent in the superposition model may be avoided by the use of a Markov prior. Finally, applications to simulated data as well as exchange rate data are discussed.
13.
Linear mixed models are popularly used to fit continuous longitudinal data, and the random effects are commonly assumed to have a normal distribution. However, this assumption needs to be tested so that further analysis can proceed reliably. In this paper, we consider the Baringhaus-Henze-Epps-Pulley (BHEP) tests, which are based on an empirical characteristic function. Differing from the standard setting, we consider normality checking for the random effects, which are unobservable, so the test must be based on their predictors. The test is consistent against global alternatives and is sensitive to local alternatives converging to the null at a rate arbitrarily close to 1/√n, where n is the sample size. Furthermore, to overcome the problem that the limiting null distribution of the test is not tractable, we suggest a new method: use a conditional Monte Carlo test (CMCT) to approximate the null distribution and then simulate p-values. The test is compared with existing methods, its power is examined, and several examples are given to illustrate the usefulness of our test in the analysis of longitudinal data.
14.
Michael Doumpos Constantin Zopounidis Emilios Galariotis 《European Journal of Operational Research》2014
Recent research on robust decision aiding has focused on identifying a range of recommendations from preferential information and the selection of representative models compatible with preferential constraints. This study presents an experimental analysis on the relationship between the results of a single decision model (additive value function) and the ones from the full set of compatible models in classification problems. Different optimization formulations for selecting a representative model are tested on artificially generated data sets with varying characteristics.
15.
The sampling distribution of parameter estimators can be summarized by moments, fractiles or quantiles. For nonlinear models, these quantities are often approximated by power series, approximated by transformed systems, or estimated by Monte Carlo sampling. A control variate approach based on a linear approximation of the nonlinear model is introduced here to reduce the Monte Carlo sampling necessary to achieve a given accuracy. The particular linear approximation chosen has several advantages: its moments and other properties are known, it is easy to implement, and there is a correspondence to asymptotic results that permits assessment of control variate effectiveness prior to sampling via measures of nonlinearity. Empirical results for several nonlinear problems are presented. This research was supported in part by the Office of Naval Research under Contract N00014-79-C-0832.
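The control-variate mechanism this abstract builds on can be shown with a textbook stand-in (not the paper's model-specific linear approximation): estimating E[exp(U)] for U ~ Uniform(0,1), whose true value is e − 1, using U itself, with known mean 1/2, as the control. All names and settings are illustrative:

```python
import math
import random

def control_variate_demo(n=20000, seed=4):
    """Estimate E[exp(U)], U ~ Uniform(0,1) (true value e - 1), both
    plainly and with U as a control variate of known mean 1/2.  The
    coefficient b = Cov(g, U)/Var(U) is estimated from the same sample
    (a common simplification; a pilot sample avoids the slight bias)."""
    rng = random.Random(seed)
    u = [rng.random() for _ in range(n)]
    g = [math.exp(x) for x in u]
    mu, mg = sum(u) / n, sum(g) / n
    cov = sum((x - mu) * (y - mg) for x, y in zip(u, g)) / n
    var = sum((x - mu) ** 2 for x in u) / n
    b = cov / var
    plain = mg
    cv = mg - b * (mu - 0.5)   # adjust using the known mean of U
    return plain, cv

plain, cv = control_variate_demo()
```

Because U and exp(U) are highly correlated, the control-variate estimate has a far smaller variance than the plain Monte Carlo average at the same sample size.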
16.
Nicolas Chopin 《Annals of the Institute of Statistical Mathematics》2007,59(2):349-366
We consider the problem of detecting change points (structural changes) in long sequences of data, whether in a sequential fashion or not, and without assuming prior knowledge of the number of these change points. We reformulate this problem as the Bayesian filtering and smoothing of a non-standard state space model. Towards this goal, we build a hybrid algorithm that relies on particle filtering and Markov chain Monte Carlo ideas. The approach is illustrated by a GARCH change point model.
17.
For multivariate copula-based models for which maximum likelihood is computationally difficult, a two-stage estimation procedure has been proposed previously; the first stage involves maximum likelihood from univariate margins, and the second stage involves maximum likelihood of the dependence parameters with the univariate parameters held fixed from the first stage. Using the theory of inference functions, a partitioned matrix in a form amenable to analysis is obtained for the asymptotic covariance matrix of the two-stage estimator. The asymptotic relative efficiency of the two-stage estimation procedure compared with maximum likelihood estimation is studied. Analysis of the limiting cases of the independence copula and Fréchet upper bound help to determine common patterns in the efficiency as the dependence in the model increases. For the Fréchet upper bound, the two-stage estimation procedure can sometimes be equivalent to maximum likelihood estimation for the univariate parameters. Numerical results are shown for some models, including multivariate ordinal probit and bivariate extreme value distributions, to indicate the typical level of asymptotic efficiency for discrete and continuous data.
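The two-stage (inference-functions-for-margins) procedure described here can be sketched for a bivariate Gaussian copula with normal margins. In this sketch, the stage-2 estimate is taken as the sample correlation of the normal scores, a standard practical choice rather than the exact stage-2 MLE; the function name, margins, and data are illustrative assumptions:

```python
import random
from statistics import NormalDist, mean, stdev

def two_stage_gaussian_copula(xs, ys):
    """Two-stage sketch for a bivariate Gaussian copula with normal
    margins.  Stage 1: fit each margin separately (sample mean and
    standard deviation).  Stage 2: with the margins held fixed, map the
    data through the fitted marginal CDFs to normal scores, and estimate
    the copula correlation as their sample correlation."""
    std_normal = NormalDist()
    # Stage 1: univariate fits for each margin
    mx, sx = mean(xs), stdev(xs)
    my, sy = mean(ys), stdev(ys)
    # Probability-integral transform through fitted margins, then scores
    zx = [std_normal.inv_cdf(NormalDist(mx, sx).cdf(x)) for x in xs]
    zy = [std_normal.inv_cdf(NormalDist(my, sy).cdf(y)) for y in ys]
    # Stage 2: dependence parameter from the normal scores
    num = sum(a * b for a, b in zip(zx, zy))
    den = (sum(a * a for a in zx) * sum(b * b for b in zy)) ** 0.5
    return num / den

# Illustrative data with copula correlation 0.6 and non-standard margins
rng = random.Random(5)
xs, ys = [], []
for _ in range(2000):
    z1, w = rng.gauss(0, 1), rng.gauss(0, 1)
    xs.append(1.0 + 2.0 * z1)                     # N(1, 2^2) margin
    ys.append(-0.5 + 1.5 * (0.6 * z1 + 0.8 * w))  # correlated margin
rho_hat = two_stage_gaussian_copula(xs, ys)
```

Holding the margins fixed in stage 2 is exactly what trades a little asymptotic efficiency for a much easier optimization, which is the trade-off the abstract quantifies.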
18.
This paper is concerned with the approximate computation of choice probabilities in mixed logit models. The relevant approximations are based on the Taylor expansion of the classical logit function and on the high-order moments of the random coefficients. The approximate choice probabilities and their derivatives are used in conjunction with log-likelihood maximization for parameter estimation. The resulting method avoids the assumption of an a priori distribution for the random tastes. Moreover, experiments with simulated data show that it compares well with simulation-based methods in terms of computational cost.
19.
Taking market-demand fluctuation risk as an example, supply chain risk estimation is studied via Monte Carlo simulation. First, market-demand fluctuation risk and its loss measurement are analyzed theoretically; the inventory-cost loss of the supply chain system under demand fluctuation is used to measure the loss from this risk. Second, end-of-chain demand is chosen as the random variable to be simulated by the Monte Carlo method; a probability measure model and a loss measure model for demand-fluctuation risk are built on demand, with the risk probability and risk loss determined as functions of demand. The models are then verified by simulating a worked example. Finally, issues to note when using this model for supply chain risk estimation and directions for further research are given. The study shows that the Monte Carlo method is robust for supply chain risk estimation.
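The simulation pipeline the abstract describes (simulate demand, map it to inventory-cost loss, then summarize the loss distribution) can be sketched with a simple newsvendor-style cost function; this is a generic stand-in, not the paper's models, and every parameter value is an illustrative assumption:

```python
import random

def simulate_inventory_loss(order_qty, n_sims=20000, demand_mean=100.0,
                            demand_sd=20.0, hold_cost=1.0, short_cost=4.0,
                            seed=6):
    """Monte Carlo sketch of inventory-cost loss under demand fluctuation
    (all parameters illustrative).  For each simulated demand D, the loss
    is holding cost on leftover stock plus shortage cost on unmet demand.
    Returns the expected loss and the 95% VaR of the loss."""
    rng = random.Random(seed)
    losses = []
    for _ in range(n_sims):
        d = max(0.0, rng.gauss(demand_mean, demand_sd))
        leftover = max(0.0, order_qty - d)
        shortage = max(0.0, d - order_qty)
        losses.append(hold_cost * leftover + short_cost * shortage)
    losses.sort()
    expected = sum(losses) / n_sims
    var_95 = losses[int(0.95 * n_sims)]
    return expected, var_95

exp_loss, var95 = simulate_inventory_loss(order_qty=110.0)
```

The sorted loss sample gives both the expected loss and tail quantiles from the same simulation run, which is what makes the Monte Carlo approach convenient for risk estimation.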
20.
A random-coefficient approach is applied to the Poisson-Gamma regression model for longitudinal data. Using a Laplace expansion, an approximate likelihood function for the response variable is obtained, along with a score statistic for testing the randomness of the model coefficients. The asymptotic power of the test is analyzed by Monte Carlo simulation. Finally, the test statistic is applied to a concrete numerical example to demonstrate its effectiveness.