Similar Literature
 20 similar records retrieved.
1.
This paper develops a novel importance sampling algorithm for estimating the probability of large portfolio losses in the conditional independence framework. We apply exponential tilts to (i) the distribution of the natural sufficient statistics of the systematic risk factors and (ii) conditional default probabilities, given the simulated values of the systematic risk factors, and select parameter values by minimizing the Kullback–Leibler divergence of the resulting parametric family from the ideal (zero-variance) importance density. Optimal parameter values are shown to satisfy intuitive moment-matching conditions, and the asymptotic behaviour of large portfolios is used to approximate the requisite moments. In a sense we generalize the algorithm of Glasserman and Li (2005) so that it can be applied in a wider variety of models. We show how to implement our algorithm in the t copula model and compare its performance there to the algorithm developed by Chan and Kroese (2010). We find that our algorithm requires substantially less computational time (especially for large portfolios) but is slightly less accurate. Our algorithm can also be used to estimate more general risk measures, such as conditional tail expectations, whereas Chan and Kroese (2010) is specifically designed to estimate loss probabilities.
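For readers who want a concrete picture of the conditional tilting step, the following minimal Python sketch applies an exponential tilt to the conditional default probabilities of a conditionally independent portfolio and estimates P(L > x) with the corresponding likelihood ratio. The tilting parameter `theta` is supplied by the user here; the article's algorithm additionally tilts the systematic risk factors and selects parameters by minimizing the Kullback–Leibler divergence, which is not reproduced.

```python
import numpy as np

def tilted_loss_probability(p_cond, losses, threshold, theta, n_sims=20_000, rng=None):
    """Importance-sampling estimate of P(L > threshold) for a portfolio with
    conditionally independent defaults, using exponentially tilted conditional
    default probabilities (theta = 0 recovers plain Monte Carlo).

    p_cond : conditional default probabilities, one per obligor
    losses : loss-given-default amounts, one per obligor
    theta  : exponential tilting parameter (assumed given, not optimized here)
    """
    rng = np.random.default_rng(rng)
    w = np.exp(theta * losses)
    q = p_cond * w / (1.0 + p_cond * (w - 1.0))      # tilted default probabilities
    log_mgf = np.sum(np.log1p(p_cond * (w - 1.0)))   # log E[exp(theta * L)]
    estimates = np.empty(n_sims)
    for k in range(n_sims):
        defaults = rng.random(p_cond.size) < q
        loss = losses[defaults].sum()
        lr = np.exp(-theta * loss + log_mgf)         # likelihood ratio dP/dQ
        estimates[k] = lr * (loss > threshold)
    return estimates.mean(), estimates.std(ddof=1) / np.sqrt(n_sims)

# Example: 1000 obligors, 1% conditional default probability, unit exposures.
# theta is chosen so the tilted mean loss is roughly at the threshold,
# echoing the moment-matching intuition in the abstract.
p, c = np.full(1000, 0.01), np.ones(1000)
print(tilted_loss_probability(p, c, threshold=30, theta=1.1, rng=0))
```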

2.
Parameter estimation in general state-space models using particle methods
Particle filtering techniques are a set of powerful and versatile simulation-based methods to perform optimal state estimation in nonlinear non-Gaussian state-space models. If the model includes fixed parameters, a standard technique to perform parameter estimation consists of extending the state with the parameter to transform the problem into an optimal filtering problem. However, this approach requires the use of special particle filtering techniques which suffer from several drawbacks. We consider here an alternative approach combining particle filtering and gradient algorithms to perform batch and recursive maximum likelihood parameter estimation. An original particle method is presented to implement these approaches and their efficiency is assessed through simulation.
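As a point of reference for the filtering side, here is a minimal bootstrap particle filter in Python that returns the log-likelihood estimate a batch maximum-likelihood or gradient routine would work with. The transition sampler, observation density, and initial sampler are user-supplied placeholders, and the linear-Gaussian demo values are illustrative; this is a standard textbook filter, not the original particle method proposed in the paper.

```python
import numpy as np
from scipy import stats

def bootstrap_particle_filter(y, f_sample, g_logpdf, x0_sample, n_particles=1000, rng=None):
    """Minimal bootstrap particle filter returning the log-likelihood estimate.

    y                : observations y_1..y_T
    f_sample(x, rng) : samples x_t given the particle array x_{t-1}
    g_logpdf(y, x)   : log observation density log g(y_t | x_t), vectorized in x
    x0_sample(n, rng): samples n particles from the initial distribution
    """
    rng = np.random.default_rng(rng)
    x = x0_sample(n_particles, rng)
    loglik = 0.0
    for yt in y:
        x = f_sample(x, rng)                        # propagate particles
        logw = g_logpdf(yt, x)                      # weight by the observation density
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())              # incremental log-likelihood
        idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())
        x = x[idx]                                  # multinomial resampling
    return loglik

# Toy demo: x_t = 0.9 x_{t-1} + N(0, 0.5^2), y_t = x_t + N(0, 1)
rng0 = np.random.default_rng(0)
x_true = np.zeros(100)
for t in range(1, 100):
    x_true[t] = 0.9 * x_true[t - 1] + rng0.normal(0, 0.5)
y_obs = x_true + rng0.normal(0, 1.0, 100)
print(bootstrap_particle_filter(
    y_obs,
    f_sample=lambda x, rng: 0.9 * x + rng.normal(0, 0.5, x.size),
    g_logpdf=lambda yt, x: stats.norm.logpdf(yt, loc=x, scale=1.0),
    x0_sample=lambda n, rng: rng.normal(0, 1.0, n)))
```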

3.
Hidden Markov models are used as tools for pattern recognition in a number of areas, ranging from speech processing to biological sequence analysis. Profile hidden Markov models represent a class of so-called “left–right” models that have an architecture that is specifically relevant to classification of proteins into structural families based on their amino acid sequences. Standard learning methods for such models employ a variety of heuristics applied to the expectation-maximization implementation of the maximum likelihood estimation procedure in order to find the global maximum of the likelihood function. Here, we compare maximum likelihood estimation to fully Bayesian estimation of parameters for profile hidden Markov models with a small number of parameters. We find that, relative to maximum likelihood methods, Bayesian methods assign higher scores to data sequences that are distantly related to the pattern consensus, show better performance in classifying these sequences correctly, and continue to perform robustly with regard to misspecification of the number of model parameters. Though our study is limited in scope, we expect our results to remain relevant for models with a large number of parameters and other types of left–right hidden Markov models.
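The common computational core of both the maximum likelihood and the Bayesian comparisons is scoring a sequence against a hidden Markov model. A minimal log-space forward algorithm in Python is sketched below; the profile ("left-right") architecture itself is not encoded, and the two-state toy numbers are illustrative.

```python
import numpy as np
from scipy.special import logsumexp

def hmm_log_score(obs, log_init, log_trans, log_emit):
    """Log-likelihood of an observation sequence under an HMM (forward algorithm).

    obs       : sequence of observation symbol indices
    log_init  : log initial state probabilities, shape (S,)
    log_trans : log transition matrix, shape (S, S)
    log_emit  : log emission matrix, shape (S, V)
    """
    alpha = log_init + log_emit[:, obs[0]]
    for o in obs[1:]:
        # alpha_j(t) = logsum_i [alpha_i(t-1) + log a_ij] + log b_j(o_t)
        alpha = logsumexp(alpha[:, None] + log_trans, axis=0) + log_emit[:, o]
    return logsumexp(alpha)

# Two-state toy example (numbers illustrative)
li = np.log([0.6, 0.4])
lt = np.log([[0.7, 0.3], [0.2, 0.8]])
le = np.log([[0.5, 0.5], [0.1, 0.9]])
print(hmm_log_score([0, 1, 1, 0], li, lt, le))
```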

4.
The Weibull distribution is one of the most important distributions used as a probability model for loss amounts in actuarial and financial risk management problems. This paper considers the Weibull distribution and its quantiles in the context of estimating the risk measure Value-at-Risk (VaR). VaR is simply the maximum loss in a specified period at a pre-assigned probability level. We present several estimation methods for VaR as a quantile of a distribution and compare these methods with respect to their deficiency (Def) values. Monte Carlo simulation results investigating the efficiency of these estimators relative to the MLE are also provided.
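Since VaR here is simply a quantile of the Weibull loss distribution, a worked example helps: for a Weibull with shape k and scale λ, VaR_p = λ(−ln(1−p))^(1/k). The Python sketch below computes this quantile from the true parameters, from a maximum likelihood fit, and from the empirical quantile of simulated losses; the parameter values are illustrative and the comparison is not one of the estimators studied in the paper.

```python
import numpy as np
from scipy import stats

def weibull_var(shape, scale, p=0.99):
    """VaR at level p for a Weibull(shape k, scale lambda) loss:
    the p-quantile VaR_p = lambda * (-ln(1 - p))**(1/k)."""
    return scale * (-np.log1p(-p)) ** (1.0 / shape)

# Illustration with simulated losses (shape and scale chosen arbitrarily)
rng = np.random.default_rng(1)
k_true, lam_true = 1.4, 2.0
losses = lam_true * rng.weibull(k_true, size=5000)

# Maximum-likelihood fit (location fixed at 0), then plug-in VaR estimate
k_hat, _, lam_hat = stats.weibull_min.fit(losses, floc=0)
print("true VaR_0.99 :", weibull_var(k_true, lam_true))
print("MLE  VaR_0.99 :", weibull_var(k_hat, lam_hat))
print("empirical     :", np.quantile(losses, 0.99))
```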

5.
This paper employs a multivariate extreme value theory (EVT) approach to study the limit distribution of the loss of a general credit portfolio with low default probabilities. A latent variable model is employed to quantify the credit portfolio loss, where both heavy tails and tail dependence of the latent variables are realized via a multivariate regular variation (MRV) structure. An approximation formula to implement our main result numerically is obtained. Intensive simulation experiments are conducted, showing that this approximation formula is accurate for relatively small default probabilities, and that our approach is superior to a copula-based approach in reducing model risk.

6.
Choosing a suitable risk measure to optimize an option portfolio’s performance represents a significant challenge. This paper illustrates the advantages of higher-order coherent risk measures for evaluating the evolution of option risk. It discusses the detailed implementation of the resulting dynamic risk optimization problem using stochastic programming. We propose an algorithmic procedure to optimize an option portfolio based on the minimization of conditional higher-order coherent risk measures. Illustrative examples demonstrate performance advantages of the resulting portfolios when higher-order coherent risk measures are used in the risk optimization criterion.
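As a tractable stand-in for the risk-minimization step, the sketch below minimizes sample CVaR (the lowest-order member of the coherent family discussed here) over long-only portfolio weights using the Rockafellar–Uryasev linear program. The higher-order measures and the dynamic stochastic programming formulation of the paper are not reproduced, and the loss scenarios are simulated placeholders.

```python
import numpy as np
from scipy.optimize import linprog

def min_cvar_weights(loss_scenarios, alpha=0.95):
    """Minimize sample CVaR_alpha of portfolio losses over long-only weights
    summing to 1, via the Rockafellar-Uryasev linear program.

    loss_scenarios : array (n_scenarios, n_assets) of per-asset losses
    """
    n, m = loss_scenarios.shape
    # Variables: [w (m), t (1), z (n)]; objective t + sum(z) / ((1 - alpha) n)
    c = np.concatenate([np.zeros(m), [1.0], np.full(n, 1.0 / ((1 - alpha) * n))])
    # Constraints z_s >= w' L_s - t  <=>  w' L_s - t - z_s <= 0
    A_ub = np.hstack([loss_scenarios, -np.ones((n, 1)), -np.eye(n)])
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.ones(m), [0.0], np.zeros(n)])[None, :]
    bounds = [(0, None)] * m + [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], bounds=bounds)
    return res.x[:m], res.fun

# Placeholder loss scenarios for three assets with different volatilities
rng = np.random.default_rng(0)
w, cvar = min_cvar_weights(rng.normal(0.0, [1.0, 2.0, 0.5], size=(2000, 3)))
print(w, cvar)
```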

7.
This paper proposes an efficient estimation method for some elliptical copula regression models by expressing both the copula density and the marginal density functions as scale mixtures of normals (SMN). Implementing these models via the SMN representation is novel and allows efficient estimation using Bayesian methods. An innovative algorithm for the case of complex semicontinuous margins is also presented. We use the facts that copulas are invariant to the location and scale of the margins, that all elliptical distributions have the same correlation structure, and that some densities can be represented as SMN. Two simulation studies, one on continuous margins and the other on semicontinuous margins, highlight the favorable performance of the proposed methods. Two empirical studies, one on US excess returns and one on Thai wage earnings, further illustrate the applicability of the proposed methods.
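The key device is the scale-mixture-of-normals representation: conditional on a mixing variable, an elliptical draw is Gaussian, which is what makes Gibbs-type Bayesian estimation convenient. The sketch below illustrates the representation for a multivariate Student-t (X = Z/√(W/ν) with Z Gaussian and W chi-squared); it is a generic illustration with arbitrary parameters, not the paper's estimation algorithm.

```python
import numpy as np

def sample_t_as_smn(df, corr, n, rng=None):
    """Draw from a multivariate Student-t via its scale-mixture-of-normals
    representation: X = Z / sqrt(W / df), Z ~ N(0, corr), W ~ chi^2_df.
    Conditional on the mixing weight W, X is Gaussian."""
    rng = np.random.default_rng(rng)
    z = rng.multivariate_normal(np.zeros(corr.shape[0]), corr, size=n)
    w = rng.chisquare(df, size=n) / df
    return z / np.sqrt(w)[:, None]

corr = np.array([[1.0, 0.6], [0.6, 1.0]])
x = sample_t_as_smn(df=5, corr=corr, n=10_000, rng=0)
print(np.corrcoef(x.T))   # heavy-tailed draws with the prescribed correlation
```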

8.
In this article, a procedure is developed for estimating the coefficient functions in functional-coefficient regression models in which different coefficient functions depend on different smoothing variables. In the first step, initial estimates of the coefficient functions are obtained by the local linear technique combined with an averaging method. In the second step, efficient estimates of the coefficient functions are constructed from the initial estimates via a one-step back-fitting procedure. The efficient estimators share the same asymptotic normality as the local linear estimators for functional-coefficient models with a single smoothing variable. Two simulated examples show that the procedure is effective.
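For orientation, the sketch below implements the basic building block: a local linear estimator of the coefficient functions in a functional-coefficient model with a single smoothing variable. The Gaussian kernel, bandwidth, and toy coefficient functions are illustrative; the paper's treatment of different smoothing variables and its one-step back-fitting are not reproduced.

```python
import numpy as np

def local_linear_coef(u0, U, X, Y, h):
    """Local linear estimate of the coefficient functions a_j(u0) in the
    single-smoothing-variable model Y = sum_j a_j(U) X_j + eps,
    using a Gaussian kernel with bandwidth h."""
    w = np.exp(-0.5 * ((U - u0) / h) ** 2)            # kernel weights
    # Design for the expansion a_j(U) ~ a_j(u0) + a_j'(u0)(U - u0)
    Z = np.hstack([X, X * (U - u0)[:, None]])
    WZ = Z * w[:, None]
    beta = np.linalg.solve(Z.T @ WZ, WZ.T @ Y)        # weighted least squares
    return beta[: X.shape[1]]                         # a_j(u0); derivatives discarded

# Toy example: a1(u) = sin(u), a2(u) = u**2
rng = np.random.default_rng(0)
U = rng.uniform(-2, 2, 800)
X = rng.normal(size=(800, 2))
Y = np.sin(U) * X[:, 0] + U ** 2 * X[:, 1] + 0.2 * rng.normal(size=800)
print(local_linear_coef(0.5, U, X, Y, h=0.3), "vs", [np.sin(0.5), 0.25])
```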

9.
周峤  张曙光 《运筹与管理》2012,21(3):170-175
Using a Bayesian network, the collected operational risk loss events are classified and organized into a data network. Under assumed distributional forms, the frequency and severity distributions of each class of loss events are estimated separately; a copula function is used to model dependent nodes, and the VaR and ES of the aggregate loss distribution are then estimated. This provides a concrete, practical option for estimating operational risk losses under the Basel Accord.
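A minimal sketch of the single-cell frequency/severity step is given below: Poisson frequency, lognormal severities, and Monte Carlo VaR and ES of the aggregate loss. The distributional choices and parameter values are illustrative placeholders, and the Bayesian-network classification and copula coupling of cells described in the abstract are separate steps not shown here.

```python
import numpy as np

def aggregate_loss_var_es(lam, sev_sampler, alpha=0.999, n_sims=100_000, rng=None):
    """Monte Carlo VaR and ES of an aggregate loss S = X_1 + ... + X_N with
    N ~ Poisson(lam) and i.i.d. severities drawn by sev_sampler(size, rng)."""
    rng = np.random.default_rng(rng)
    n_events = rng.poisson(lam, size=n_sims)
    total = np.array([sev_sampler(k, rng).sum() for k in n_events])
    var = np.quantile(total, alpha)
    es = total[total >= var].mean()          # expected shortfall beyond VaR
    return var, es

# Example: Poisson(20) frequency, lognormal severities (parameters illustrative)
var, es = aggregate_loss_var_es(
    lam=20,
    sev_sampler=lambda k, rng: rng.lognormal(mean=8.0, sigma=1.2, size=k),
    rng=0)
print(var, es)
```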

10.
The internal-rating-based Basel II approach increases the need for the development of more realistic default probability models. In this paper, we follow the approach taken by McNeil and Wendin (Journal of Empirical Finance, 2007) by constructing generalized linear mixed models for estimating default probabilities from annual data on companies with different credit ratings. In contrast to McNeil and Wendin (2007), the models considered allow parsimonious parametric specifications to capture simultaneously the dependence of default probabilities on time and on credit ratings. Macro-economic variables can also be included. Estimation of all model parameters is facilitated by a Bayesian approach using Markov chain Monte Carlo methods. Special emphasis is given to the predictive capabilities of the models considered; in particular, model specifications that permit prediction are used. The empirical study using default data from Standard and Poor's gives evidence that the correlation between credit ratings decreases as the ratings are further apart, and that it is higher than the correlation induced by the autoregressive time dynamics.

11.
I propose a simple method to estimate the regression parameters in a quasi-likelihood model. My main approach utilizes a dimension reduction technique to first reduce the dimension of the regressor X to one before solving the quasi-likelihood equations. A real advantage of the dimension reduction technique is that it provides a good initial estimate for a one-step estimator of the regression parameters. Under certain design conditions, the estimators are asymptotically multivariate normal and consistent. Moreover, a Monte Carlo simulation is used to study the practical performance of the procedures, and the CPU time needed to compute the estimates is also assessed. This research was partially supported by the National Science Council, R.O.C. (Plan No. NSC 82-0208-M-032-023-T).

12.
One of the most important parameters determining the performance of communication networks is network reliability. Network reliability strongly depends not only on the topological layout of the network but also on the reliability and availability of the communication facilities. The selection of an optimal network topology is an NP-hard problem, so the computation time of enumeration-based methods grows exponentially with network size. This paper presents a new solution approach based on the cross-entropy method, called NCE, for the design of communication networks. The design problem is to find a network topology with minimum cost such that the all-terminal reliability is not less than a given reliability level. To investigate the effectiveness of the proposed NCE, comparisons with other heuristic approaches from the literature are carried out in a three-stage experimental study. Computational results show that NCE is an effective heuristic approach for designing reliable networks.
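The sketch below shows a textbook cross-entropy loop over binary link-inclusion vectors, which conveys the mechanics such an approach builds on: sample topologies from a product-Bernoulli distribution, keep the elite (lowest-cost) samples, and update the sampling probabilities. The cost function, including any penalty for violating the all-terminal reliability constraint, is a user-supplied placeholder; this is not the paper's NCE algorithm.

```python
import numpy as np

def cross_entropy_binary(cost, n_vars, pop=200, elite_frac=0.1, n_iter=50,
                         smoothing=0.7, rng=None):
    """Generic cross-entropy optimizer over binary inclusion vectors.
    cost(x) should return the (penalized) design cost of topology x."""
    rng = np.random.default_rng(rng)
    p = np.full(n_vars, 0.5)                        # initial sampling probabilities
    n_elite = max(1, int(elite_frac * pop))
    best_x, best_c = None, np.inf
    for _ in range(n_iter):
        x = (rng.random((pop, n_vars)) < p).astype(int)
        c = np.array([cost(xi) for xi in x])
        elite = x[np.argsort(c)[:n_elite]]          # lowest-cost samples
        p = smoothing * elite.mean(axis=0) + (1 - smoothing) * p
        if c.min() < best_c:
            best_c, best_x = c.min(), x[np.argmin(c)]
    return best_x, best_c

# Placeholder cost: prefer few links but penalize dropping below 4 of 10 links
best_x, best_c = cross_entropy_binary(
    cost=lambda x: x.sum() + 100 * max(0, 4 - x.sum()), n_vars=10, rng=0)
print(best_x, best_c)
```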

13.
In the present paper, the approximate computation of a multistage stochastic programming problem (MSSPP) is studied. First, the MSSPP and its discretization are defined. Second, the expected loss caused by the usage of the “approximate” solution instead of the “exact” one is studied. Third, new results concerning approximate computation of expectations are presented. Finally, the main results of the paper—an upper bound of the expected loss and an estimate of the convergence rate of the expected loss—are stated.

14.
Over the past two decades, Lévy models and Monte Carlo simulation techniques have received increasing attention in finance. In continuous-time financial modelling, Lévy models with jumps capture market jumps better than continuous-path Brownian motion models, fit the statistical features of financial data more closely, and price derivatives more accurately. However, compared with the classical Black-Scholes model, pricing derivatives and computing hedging strategies under Lévy models is substantially more computationally demanding, and Monte Carlo simulation has become one of the most important computational tools for Lévy models. This article first reviews the background of Lévy models and the important role of simulation methods in this setting, and then briefly presents basic methods for simulating Lévy processes and estimating their gradients.
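As a small illustration of Monte Carlo simulation for a jump model, the sketch below simulates a Merton jump-diffusion (a simple finite-activity Lévy model) and prices a European call by plain Monte Carlo. The model choice and parameter values are illustrative and are not taken from the article.

```python
import numpy as np

def simulate_merton_paths(s0, r, sigma, lam, mu_j, sigma_j, T, n_steps, n_paths, rng=None):
    """Simulate terminal prices under a Merton jump-diffusion:
    dS/S = (r - lam*kappa) dt + sigma dW + (e^J - 1) dN,  J ~ N(mu_j, sigma_j^2)."""
    rng = np.random.default_rng(rng)
    dt = T / n_steps
    kappa = np.exp(mu_j + 0.5 * sigma_j ** 2) - 1.0    # mean relative jump size
    log_s = np.full(n_paths, np.log(s0))
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        n_jumps = rng.poisson(lam * dt, n_paths)
        jumps = rng.normal(mu_j * n_jumps, sigma_j * np.sqrt(n_jumps))
        log_s += (r - lam * kappa - 0.5 * sigma ** 2) * dt + sigma * dw + jumps
    return np.exp(log_s)

# Monte Carlo price of an at-the-money European call (values illustrative)
st = simulate_merton_paths(100, 0.03, 0.2, 0.5, -0.1, 0.15, 1.0, 252, 50_000, rng=0)
print(np.exp(-0.03) * np.maximum(st - 100, 0).mean())
```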

15.
A new method for parameter estimation in multilevel linear models is given using the Monte Carlo EM (MCEM) algorithm, which avoids the intractable integrals that arise when the EM algorithm is applied to these models. Numerical simulations comparing the estimates with those of the EM algorithm verify the effectiveness and feasibility of the method.
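To make the MCEM idea concrete, the sketch below runs Monte Carlo EM on a one-way random-intercept model, replacing the E-step expectation over the random effects with draws from their conditional normal distribution. The model and parameter values are illustrative assumptions and much simpler than a general multilevel linear model.

```python
import numpy as np

def mcem_random_intercept(y, groups, n_iter=30, n_mc=500, rng=None):
    """Monte Carlo EM for y_ij = mu + b_i + e_ij, b_i ~ N(0, tau2), e_ij ~ N(0, sigma2).
    E-step expectations over b_i are replaced by Monte Carlo draws from their
    conditional normal distribution; M-step uses the Monte Carlo averages."""
    rng = np.random.default_rng(rng)
    y, groups = np.asarray(y, float), np.asarray(groups)
    idx = [np.where(groups == g)[0] for g in np.unique(groups)]
    N, G = len(y), len(idx)
    mu, tau2, sigma2 = y.mean(), 1.0, 1.0
    for _ in range(n_iter):
        # E-step: sample b_i | y, theta from its conjugate normal conditional
        b = np.empty((G, n_mc))
        for i, ix in enumerate(idx):
            v = 1.0 / (len(ix) / sigma2 + 1.0 / tau2)
            b[i] = rng.normal(v * (y[ix] - mu).sum() / sigma2, np.sqrt(v), n_mc)
        # M-step: update parameters from Monte Carlo complete-data averages
        mu = (y.sum() - sum(len(ix) * b[i].mean() for i, ix in enumerate(idx))) / N
        tau2 = np.mean(b ** 2)
        sigma2 = sum(((y[ix][:, None] - mu - b[i]) ** 2).sum(axis=0).mean()
                     for i, ix in enumerate(idx)) / N
    return mu, tau2, sigma2

# Simulated example: 20 groups of 10 observations (true values 2.0, 0.64, 0.25)
rng = np.random.default_rng(0)
g = np.repeat(np.arange(20), 10)
y = 2.0 + rng.normal(0, 0.8, 20)[g] + rng.normal(0, 0.5, 200)
print(mcem_random_intercept(y, g, rng=1))
```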

16.
Credit risk has emerged as a new research direction in financial mathematics in recent years. This paper studies a common tool in portfolio credit risk: the copula approach to default dependence. We establish the connection between the copula approach and the structural and reduced-form approaches to modelling default dependence. In addition, for the survival probability of a single firm, we give a solution and proof that differ from Lando (1998) and that do not require knowing at the present time the future...
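A standard textbook instance of the copula approach to default dependence is sketched below: correlated default times are generated by drawing latent Gaussian variables, mapping them to uniforms, and inverting constant-hazard survival functions. The hazard rates and correlation are illustrative, and this is not the specific construction or proof discussed in the paper.

```python
import numpy as np
from scipy import stats

def correlated_default_times(hazard, corr, n_sims, rng=None):
    """Sample correlated default times via a Gaussian copula: latent normals with
    correlation `corr` are mapped to uniforms, then the exponential
    (constant-hazard) survival function is inverted: tau = -ln(1 - U) / h."""
    rng = np.random.default_rng(rng)
    z = rng.multivariate_normal(np.zeros(len(hazard)), corr, size=n_sims)
    u = stats.norm.cdf(z)
    return -np.log1p(-u) / np.asarray(hazard)

corr = np.array([[1.0, 0.3], [0.3, 1.0]])
tau = correlated_default_times(hazard=[0.02, 0.05], corr=corr, n_sims=100_000, rng=1)
print("P(both default within 5y):", np.mean((tau < 5).all(axis=1)))
```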

17.
Estimating the prediction error is a common practice in the statistical literature. Under a linear regression model, let e be the conditional prediction error and ê be its estimate. We use ρ(ê, e), the correlation coefficient between e and ê, to measure the performance of a particular estimation method. Reasons are given why correlation is chosen over the more popular mean squared error loss. The main results of this paper conclude that it is generally not possible to obtain good estimates of the prediction error. In particular, we show that ρ(ê, e) = O(n^{-1/2}) as n → ∞. When the sample size is small, we argue that high values of ρ(ê, e) can be achieved only when the residual error distribution has very heavy tails and when no outlier is present in the data. Finally, we show that in order for ρ(ê, e) to be bounded away from zero asymptotically, ê has to be biased.
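The flavor of the quantity ρ(ê, e) can be seen in a small simulation: across replications of a linear model, correlate the true conditional prediction error e with an estimate ê. The particular estimator (a Mallows-type correction of the training error) and the settings below are illustrative assumptions, not those of the paper.

```python
import numpy as np

def corr_pred_error_estimate(n=50, p=5, sigma=1.0, n_rep=2000, rng=None):
    """Correlation, across replications, between the true conditional prediction
    error e and an estimate e_hat under a linear model with standard normal design."""
    rng = np.random.default_rng(rng)
    beta = np.ones(p)
    e_true, e_hat = [], []
    for _ in range(n_rep):
        X = rng.normal(size=(n, p))
        y = X @ beta + sigma * rng.normal(size=n)
        bhat = np.linalg.lstsq(X, y, rcond=None)[0]
        resid = y - X @ bhat
        # Conditional prediction error at fresh covariates, given bhat
        Xnew = rng.normal(size=(n, p))
        e_true.append(sigma ** 2 + np.mean((Xnew @ (bhat - beta)) ** 2))
        # Illustrative estimate: corrected training error
        e_hat.append(resid @ resid / (n - p) * (1 + p / n))
    return np.corrcoef(e_true, e_hat)[0, 1]

print(corr_pred_error_estimate(rng=0))   # correlation between e_hat and e across replications
```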

18.
The normal inverse Gaussian (NIG) distribution is a promising alternative for modelling financial data since it is a continuous distribution that allows for skewness and fat tails. There is an increasing number of applications of the NIG distribution to financial problems. Due to the complicated nature of its density, estimation procedures are not simple. In this paper we propose Bayesian estimation for the parameters of the NIG distribution via an MCMC scheme based on the Gibbs sampler. Our approach makes use of the data augmentation provided by the mixture representation of the distribution. We also extend the model to allow for modelling heteroscedastic regression situations. Examples with financial and simulated data are provided.
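The data augmentation mentioned in the abstract rests on the mixture representation of the NIG law: conditional on an inverse Gaussian mixing variable Z, the observation is normal with mean μ + βZ and variance Z. The sketch below uses this representation only to simulate NIG draws; the Gibbs sampler and the heteroscedastic regression extension are not reproduced, and the parameter values are illustrative.

```python
import numpy as np

def sample_nig(mu, beta, delta, gamma, size, rng=None):
    """Draw from the NIG law via its normal variance-mean mixture representation:
    X | Z ~ N(mu + beta*Z, Z), with Z inverse Gaussian with mean delta/gamma
    and shape delta**2 (gamma = sqrt(alpha**2 - beta**2))."""
    rng = np.random.default_rng(rng)
    # numpy's `wald` is the inverse Gaussian in the (mean, shape) parameterization
    z = rng.wald(mean=delta / gamma, scale=delta ** 2, size=size)
    return mu + beta * z + np.sqrt(z) * rng.normal(size=size)

x = sample_nig(mu=0.0, beta=-0.5, delta=1.0, gamma=2.0, size=100_000, rng=0)
print(x.mean(), x.std())   # skewed, fat-tailed draws
```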

19.
20.
This paper discusses practical Bayesian estimation of stochastic volatility models based on OU processes with marginal Gamma laws. Estimation is based on a parameterization derived from the Rosiński representation, which has the advantage of being a non-centered parameterization. The parameterization is based on a marked point process, living on the positive real line, with uniformly distributed marks. We define a Markov chain Monte Carlo (MCMC) scheme which enables multiple updates of the latent point process and generalizes the single-update algorithm used earlier. At each MCMC draw more than one point is added to or deleted from the latent point process, which is particularly useful for high-intensity processes. Furthermore, the article deals with superposition models and discusses how the identifiability problem inherent in the superposition model may be avoided by the use of a Markov prior. Finally, applications to simulated data as well as exchange rate data are discussed.
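For intuition about the latent point process, the sketch below simulates a Gamma-OU volatility process via its compound-Poisson (shot-noise) construction: exponentially decaying jumps arriving at rate aλ with Exp(b) sizes give a stationary Gamma(a, b) marginal. This is a forward simulation under that standard construction with illustrative parameters, not the Rosiński-representation parameterization or the MCMC scheme of the paper.

```python
import numpy as np

def simulate_gamma_ou(a, b, lam, T, n_grid, rng=None):
    """Simulate dv = -lam*v dt + dz(lam*t) with stationary Gamma(shape=a, rate=b)
    marginal; the background driving Levy process is compound Poisson with
    Exp(b) jumps, so jumps arrive at rate a*lam in calendar time."""
    rng = np.random.default_rng(rng)
    v0 = rng.gamma(a, 1.0 / b)                      # start in the stationary law
    n_jumps = rng.poisson(a * lam * T)
    times = np.sort(rng.uniform(0.0, T, n_jumps))   # jump times
    sizes = rng.exponential(1.0 / b, n_jumps)       # Exp(b) jump sizes
    grid = np.linspace(0.0, T, n_grid + 1)
    v = v0 * np.exp(-lam * grid)
    for t_j, x_j in zip(times, sizes):
        v = v + np.where(grid >= t_j, x_j * np.exp(-lam * (grid - t_j)), 0.0)
    return grid, v

grid, v = simulate_gamma_ou(a=2.0, b=10.0, lam=0.5, T=10.0, n_grid=1000, rng=0)
print(v.mean(), "vs stationary mean a/b =", 2.0 / 10.0)
```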

