Similar documents
Found 20 similar documents (search time: 9 ms)
1.
This paper estimates the price of restructuring risk in the US corporate bond market during 1999–2005. Comparing quotes from credit default swap (CDS) contracts with and without a restructuring event, we find that the average premium for restructuring risk represents 6%–8% of the swap rate without restructuring. We show that the restructuring premium depends on firm-specific balance-sheet and macroeconomic variables, and that when default swap rates without a restructuring event increase, the increase in restructuring premia is higher for low-credit-quality firms than for high-credit-quality firms. We propose a reduced-form arbitrage-free model for pricing default swaps that explicitly incorporates the distinction between restructuring and default events. A case study illustrating the model's implementation is provided.
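As a back-of-the-envelope illustration of the quantity being estimated, the restructuring premium can be read off a pair of CDS quotes, one with and one without the restructuring clause. The quotes below are hypothetical, not taken from the paper:

```python
# Hypothetical 5-year CDS quotes (in basis points) for a single reference entity.
cds_no_restructuring = 100.0    # contract that excludes restructuring as a credit event
cds_with_restructuring = 107.0  # contract that includes restructuring

# The restructuring premium is the quote difference; the paper reports that,
# on average, it is about 6%-8% of the no-restructuring swap rate.
restructuring_premium = cds_with_restructuring - cds_no_restructuring
premium_share = restructuring_premium / cds_no_restructuring
```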

2.
In the case of a non-zero recovery rate, this paper studies the pricing of a credit default swap (CDS) when the reference asset and the protection seller exhibit contagious default correlation. The contagion structure is described by mutually dependent default intensities: the default of one party increases the default intensity of the other. Using the joint distribution of the default stopping times of the reference asset and the protection seller, an exact expression for the CDS price is obtained, and the effects of the settlement period and the recovery rate on the settlement-risk price and the replacement cost are analysed. The numerical results show that, in pricing a CDS, neither the effect of the reference asset's default on the protection seller nor the effects of the settlement period and the recovery rate on the CDS price can be ignored. If the recovery rate is ignored in pricing, i.e. assumed to be zero, the CDS price is severely overestimated.
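A minimal Monte Carlo sketch of the contagion mechanism described in this abstract. All parameters are made up for illustration, and contagion is shown in one direction only (the reference entity's default raises the protection seller's intensity); the seller's default time is drawn by inverting its piecewise-linear cumulative hazard:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not from the paper): base intensities and a contagion jump.
lam_ref, lam_sell = 0.03, 0.02   # base default intensities of reference and seller
jump_sell = 0.05                 # increase in seller's intensity after reference defaults
T, n_paths = 5.0, 100_000        # horizon (years) and number of simulated paths

# Reference default time: exponential with rate lam_ref.
tau_ref = rng.exponential(1.0 / lam_ref, n_paths)

# Seller default time: intensity lam_sell before tau_ref, lam_sell + jump_sell after.
# Draw a unit-rate exponential clock u and invert the cumulative hazard
# Lambda(t) = lam_sell * min(t, tau_ref) + (lam_sell + jump_sell) * max(t - tau_ref, 0).
u = rng.exponential(1.0, n_paths)
tau_sell = np.where(
    u <= lam_sell * tau_ref,
    u / lam_sell,
    tau_ref + (u - lam_sell * tau_ref) / (lam_sell + jump_sell),
)

# Before the first default both names carry their base intensities, so the joint
# survival probability to T has the closed form exp(-(lam_ref + lam_sell) * T).
joint_survival = np.mean((tau_ref > T) & (tau_sell > T))
```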

3.
Empirical study of 25 years of US Treasury bill data shows that even when the spot interest rate remains fixed, its volatility varies significantly over time. Constant-coefficient models cannot capture these changes, as they give rise to time-homogeneous distributions. Maximum-likelihood fitting of a one-factor time-dependent extended CIR model of the term structure, whose closed-form solution was previously obtained by the author, shows that it can capture these changes, as well as achieve a significantly higher likelihood value. It is shown that exploitation of the closed-form solutions substantially improves the accuracy and efficiency of Monte Carlo simulations over high-order discretization algorithms. It is also shown that the feasibility of exact one-to-one calibration of the model to any continuous yield curve allows valuation of bond options significantly more accurately and efficiently.
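The exact CIR transition law (a scaled noncentral chi-square) that such closed-form results exploit can be sampled directly, avoiding discretization bias altogether. The parameters below are illustrative, not the paper's estimates:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative CIR parameters: dr = kappa*(theta - r) dt + sigma*sqrt(r) dW
kappa, theta, sigma = 0.5, 0.04, 0.1
r0, dt, n_steps, n_paths = 0.03, 1 / 12, 120, 50_000   # monthly steps over 10 years

# Exact transition: r_{t+dt} | r_t = c * noncentral chi-square(df, nonc), with
c = sigma**2 * (1 - np.exp(-kappa * dt)) / (4 * kappa)
df = 4 * kappa * theta / sigma**2

r = np.full(n_paths, r0)
for _ in range(n_steps):
    nonc = r * np.exp(-kappa * dt) / c
    r = c * rng.noncentral_chisquare(df, nonc)

# After 10 years the cross-sectional mean should sit near the long-run mean theta.
mean_r = r.mean()
```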

4.
A PDE approach to the pricing of basket credit default swaps
Based on a structural analysis of basket credit default swaps, a new PDE method is proposed, within the reduced-form framework, for computing the joint survival probability of multiple firms with correlated defaults. From this, the probability distribution of the number of defaults in the basket before the swap's maturity is obtained. Applying this distribution under the assumption of conditional independence, pricing models for first-to-default and second-to-default credit default swaps are established, with explicit pricing expressions derived by the PDE method, and the approach is further extended to the pricing of mth-to-default credit default swaps.
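Once per-name default probabilities are in hand, the distribution of the number of defaults in the basket under conditional independence is a Poisson-binomial distribution, computable by a standard recursion. This sketch uses made-up probabilities and elementary recursion in place of the PDE machinery described in the abstract:

```python
import numpy as np

# Hypothetical 1-year default probabilities for a basket of five names
# (illustrative numbers, treated as conditionally independent).
p = np.array([0.02, 0.03, 0.015, 0.05, 0.04])

def default_count_distribution(p):
    """P(exactly k of the names default), via the Poisson-binomial recursion."""
    dist = np.zeros(len(p) + 1)
    dist[0] = 1.0
    for pi in p:
        # Add one name: k defaults = (k defaults, survives) or (k-1 defaults, defaults).
        dist[1:] = dist[1:] * (1 - pi) + dist[:-1] * pi
        dist[0] *= 1 - pi
    return dist

dist = default_count_distribution(p)
p_first_to_default = 1 - dist[0]             # at least one default by maturity
p_second_to_default = 1 - dist[0] - dist[1]  # at least two defaults by maturity
```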

5.
This paper studies the optimal trade credit term decision in an extended economic order quantity (EOQ) framework that incorporates a default risk component. A principal-agent bilevel programming model with cost-minimization objectives is set up to derive the incentive-compatible credit term. The supplier, as the leader in the first-level program, determines the credit term by balancing her/his financing capacity against the retailer's default risk, order behavior and cost shifting. At the second level, the retailer decides on ordering and payment time by reacting to the term offered by the supplier. A first-order-condition solution procedure is derived for the bilevel program when the credit term is confined to the practically feasible interval. Two key results are obtained: the condition for deriving the incentive-compatible credit term, and an equation system for deriving the threshold default risk criterion that filters retailers suitable for credit granting. Numerical experiments show that the supplier's capital cost is the most important factor determining the credit term, while default risk acts as a filtering criterion for selecting retailers suitable for credit granting. Empirical evidence supporting our theoretical considerations is obtained by estimating three panel econometric models on a dataset of China's listed companies.

6.
Let Y = m(X) + ε be a regression model with a dichotomous output Y and a one-step regression function m. In the literature, estimators for the three parameters of m, that is, the breakpoint θ and the levels a and b, are proposed for independent and identically distributed (i.i.d.) observations. We show that these standard estimators also work in a non-i.i.d. framework, that is, that they are strongly consistent under mild conditions. For that purpose, we use a linear one-factor model for the input X and a Bernoulli mixture model for the output Y. The estimators for the split point and the risk levels are applied to a problem arising in credit rating systems. In particular, we divide the range of individuals' creditworthiness into two groups. The first group has a higher probability of default and the second group has a lower one. We also stress connections between the standard estimator for the cutoff θ and concepts prevalent in credit risk modeling, for example, the receiver operating characteristic. Copyright © 2014 John Wiley & Sons, Ltd.
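A least-squares version of the estimators for (θ, a, b) can be written in a few lines: sort by X, scan all split points, and take the sample means on each side. The simulated data and parameter values below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated setup mirroring the abstract: Y is dichotomous with
# P(Y=1 | X=x) = a for x < theta, and b for x >= theta.
theta_true, a_true, b_true = 0.0, 0.8, 0.2   # high PD below the cutoff, low above
x = rng.normal(size=2000)
y = (rng.random(2000) < np.where(x < theta_true, a_true, b_true)).astype(float)

def fit_one_step(x, y):
    """Least-squares estimates of (theta, a, b) for a one-step regression function."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    n = len(xs)
    cum, cumsq = np.cumsum(ys), np.cumsum(ys**2)
    best_sse, best = np.inf, None
    for i in range(1, n):                        # split between xs[i-1] and xs[i]
        a = cum[i - 1] / i                       # mean of y on the left of the split
        b = (cum[-1] - cum[i - 1]) / (n - i)     # mean of y on the right
        sse = (cumsq[i - 1] - i * a**2) + (cumsq[-1] - cumsq[i - 1] - (n - i) * b**2)
        if sse < best_sse:
            best_sse, best = sse, ((xs[i - 1] + xs[i]) / 2, a, b)
    return best

theta_hat, a_hat, b_hat = fit_one_step(x, y)
```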

7.
The use of a simple Stein-rule method is discussed for estimating the repayment probability of a new credit applicant. It is argued that this type of procedure could provide a superior algorithm when a multiperiod view of repayments is taken and when there is a desire to balance the categorised estimate from the common discriminant scoring procedure with a fairer reflection of the applicant's actual performance.

8.
We investigate the performance of various survival analysis techniques applied to ten actual credit data sets from Belgian and UK financial institutions. In the comparison we consider classical survival analysis techniques, namely the accelerated failure time models and Cox proportional hazards regression models, as well as Cox proportional hazards regression models with splines in the hazard function. Mixture cure models for single and multiple events were more recently introduced in the credit risk context. The performance of these models is evaluated using both a statistical evaluation and an economic approach through the use of annuity theory. It is found that spline-based methods and the single event mixture cure model perform well in the credit risk context.

9.
Given p or c, a semi-greedy heuristic chooses each iteration's decision randomly from among those decisions resulting in objective value improvements either within p% of the best improvement or among the c best improvements. In the context of vehicle routing, we empirically compare the single use of a greedy heuristic with repeated use of a semi-greedy heuristic.
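Both restricted-candidate-list variants named in the abstract can be sketched in a few lines (assuming positive improvement values for the p-based rule):

```python
import random

def semi_greedy_p(candidates, value, p, rng=random):
    """Choose uniformly among candidates whose improvement is within p% of the best.
    Assumes improvement values are positive."""
    best = max(value(c) for c in candidates)
    rcl = [c for c in candidates if value(c) >= best * (1 - p / 100.0)]
    return rng.choice(rcl)

def semi_greedy_c(candidates, value, c, rng=random):
    """Choose uniformly among the c best candidates."""
    rcl = sorted(candidates, key=value, reverse=True)[:c]
    return rng.choice(rcl)
```

With p = 0 or c = 1 the rules reduce to the plain greedy heuristic; repeated restarts with larger p or c generate the diversified runs the study compares.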

10.
Mixture cure models were originally proposed in medical statistics to model long-term survival of cancer patients in terms of two distinct subpopulations - those that are cured of the event of interest and will never relapse, along with those that are uncured and are susceptible to the event. In the present paper, we introduce mixture cure models to the area of credit scoring, where, similarly to the medical setting, a large proportion of the dataset may not experience the event of interest during the loan term, i.e. default. We estimate a mixture cure model predicting (time to) default on a UK personal loan portfolio, and compare its performance to the Cox proportional hazards method and standard logistic regression. Results for credit scoring at an account level and prediction of the number of defaults at a portfolio level are presented; model performance is evaluated through cross validation on discrimination and calibration measures. Discrimination performance for all three approaches was found to be high and competitive. Calibration performance for the survival approaches was found to be superior to logistic regression for intermediate time intervals and useful for fixed 12 month time horizon estimates, reinforcing the flexibility of survival analysis as both a risk ranking tool and for providing robust estimates of probability of default over time. Furthermore, the mixture cure model’s ability to distinguish between two subpopulations can offer additional insights by estimating the parameters that determine susceptibility to default in addition to parameters that influence time to default of a borrower.
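A much-simplified mixture cure fit illustrates the idea: a cured fraction that never defaults plus an exponential time-to-default for the susceptibles, estimated by maximum likelihood on right-censored data. The data, horizon and parameter values below are simulated assumptions with no covariates, unlike the full model in the paper:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)

# Simulated loan data (illustrative): 70% "cured" accounts that never default,
# the rest default at exponential times; observation censored at 36 months.
n, cure_true, rate_true, horizon = 5000, 0.7, 0.10, 36.0
cured = rng.random(n) < cure_true
t_event = rng.exponential(1 / rate_true, n)
time = np.where(cured, horizon, np.minimum(t_event, horizon))
event = (~cured) & (t_event <= horizon)          # True where a default is observed

def neg_loglik(params):
    pi, lam = params   # cure fraction and default hazard of the susceptibles
    f = (1 - pi) * lam * np.exp(-lam * time)      # density for observed defaults
    s = pi + (1 - pi) * np.exp(-lam * time)       # survival for censored accounts
    return -np.sum(np.where(event, np.log(f), np.log(s)))

res = minimize(neg_loglik, x0=[0.5, 0.05], bounds=[(0.01, 0.99), (1e-4, 2.0)])
pi_hat, lam_hat = res.x
```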

11.
In this paper, we scrutinize the empirical performance of a wavelet-based option pricing model which leverages the powerful computational capability of wavelets in approximating risk-neutral moment-generating functions. We focus on the forecasting and hedging performance of the model in comparison with that of popular alternative models, including the stochastic volatility model with jumps, the practitioner Black–Scholes model and the neural network based model. Using daily index options written on the German DAX 30 index from January 2009 to December 2012, our results suggest that the wavelet-based model compares favorably with all other models except the neural network based one, especially for long-term options. Hence our novel wavelet-based option pricing model provides an excellent nonparametric alternative for valuing option prices.

12.
This study proposes and analyses a novel alternative to credit transition matrices (CTMs) developed by credit rating agencies - bank-sourced CTMs. It provides a unique insight into estimation of bank-sourced CTMs by assessing the extent to which the CTMs depend on the characteristics of the underlying credit risk datasets and the aggregation method and outlines that the choice of aggregation approach has a substantial effect on credit risk model results. Further, we show that bank-sourced CTMs are more dynamic than those of credit rating agencies, with higher off-diagonal transition rates and higher propensity to upgrade. Finally, we create a set of industry-specific CTMs, otherwise unobtainable due to the data sparsity faced by credit rating agencies, and highlight the implications of their differences, signalling the existence of industry-specific business cycles. The study uses a unique and large dataset of internal credit risk estimates from 24 global banks covering monthly observations on more than 26,000 large corporates and employs large-scale Monte Carlo simulations. This approach can be replicated by regulators (e.g., data collected by the European Central Bank in the AnaCredit project) and used by organisations aiming to improve their credit risk models.
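The basic building block, estimating a CTM from observed rating paths by the cohort method, can be sketched as follows (toy monthly ratings, three grades, purely illustrative data):

```python
import numpy as np

# Toy monthly rating paths for four obligors (1 = best grade, 3 = worst).
paths = [
    [1, 1, 2, 2, 1],
    [2, 2, 2, 3, 3],
    [1, 2, 2, 2, 2],
    [3, 3, 2, 2, 2],
]

def cohort_ctm(paths, n_ratings=3):
    """Cohort-method credit transition matrix: row-normalised transition counts."""
    counts = np.zeros((n_ratings, n_ratings))
    for path in paths:
        for frm, to in zip(path[:-1], path[1:]):
            counts[frm - 1, to - 1] += 1
    rows = counts.sum(axis=1, keepdims=True)
    # Rows with no observations are left as zeros rather than divided by zero.
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

ctm = cohort_ctm(paths)
```

The aggregation question the study raises is what to do when each of many banks supplies such a matrix (or the underlying paths) for overlapping obligors; averaging counts versus averaging matrices generally gives different results.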

13.
We focus on affine term structure models as tools for active bond portfolio management. Our financial exercise comprises the following steps: (1) forecast the future values of the state variables implied by several multi-factor models; (2) approximate the conditional moments of the state vector to construct discrete scenarios for the future state variables; (3) compute bond returns for various maturities at future dates from the theoretical asset pricing relations; (4) solve the portfolio problem faced by an investor with a six-month horizon who takes into account the possibility of rebalancing after one quarter. The sequence of optimal portfolios is evaluated in terms of its financial properties. We show that a finance-based evaluation of term structure models may yield results conflicting with those obtained from a statistical evaluation.

14.
This paper develops a multivariate statistical model for the analysis of credit default swap spreads. Given the large excess kurtosis of the univariate marginal distributions, it is proposed to model them by means of a mixture of distributions. However, the multivariate extension of this methodology is numerically difficult, so that copulas are used to capture the structure of dependence of the data. It is shown how to estimate the parameters of the marginal distributions via the EM algorithm; then the parameters of the copula are estimated and standard errors computed through the nonparametric bootstrap. An application to credit default swap spreads of some European reference entities and extensive simulation results confirm the effectiveness of the method.
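The E- and M-steps for a mixture of distributions can be illustrated on a zero-mean two-component normal scale mixture, which already reproduces the excess kurtosis mentioned in the abstract; the paper's mixtures and its copula step are more general than this sketch:

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic "spread changes" with fat tails: a two-component normal scale mixture.
x = np.concatenate([rng.normal(0, 1, 3000), rng.normal(0, 4, 1000)])

def em_two_normals(x, n_iter=200):
    """EM for a two-component zero-mean normal scale mixture (weight w, scales s1, s2)."""
    w, s1, s2 = 0.5, np.std(x) * 0.5, np.std(x) * 2.0
    for _ in range(n_iter):
        # E-step: responsibility of component 1 for each observation
        # (the 1/sqrt(2*pi) normal constant cancels in the ratio).
        d1 = w * np.exp(-x**2 / (2 * s1**2)) / s1
        d2 = (1 - w) * np.exp(-x**2 / (2 * s2**2)) / s2
        r = d1 / (d1 + d2)
        # M-step: update weight and scales from the weighted second moments.
        w = r.mean()
        s1 = np.sqrt(np.sum(r * x**2) / np.sum(r))
        s2 = np.sqrt(np.sum((1 - r) * x**2) / np.sum(1 - r))
    return w, s1, s2

w_hat, s1_hat, s2_hat = em_two_normals(x)
```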

15.
In single-objective optimization it is possible to find a global optimum, while in the multi-objective case no optimal solution is clearly defined, but several that simultaneously optimize all the objectives. However, the majority of this kind of problems cannot be solved exactly as they have very large and highly complex search spaces. Recently, meta-heuristic approaches have become important tools for solving multi-objective problems encountered in industry as well as in the theoretical field. Most of these meta-heuristics use a population of solutions, and hence the runtime increases when the population size grows. An interesting way to overcome this problem is to apply parallel processing. This paper analyzes the performance of several parallel paradigms in the context of population-based multi-objective meta-heuristics. In particular, we evaluate four alternative parallelizations of the Pareto simulated annealing algorithm, in terms of quality of the solutions, and speedup.

16.
When applying the 2-opt heuristic to the travelling salesman problem, selecting the best improvement at each iteration gives worse results on average than selecting the first improvement, if the initial solution is chosen at random. However, starting from 'greedy' or 'nearest neighbor' constructive heuristics, the best improvement is better and faster on average. Reasons for this behavior are investigated. It appears to be better to use exchanges that introduce into the solution a very small edge and a fairly large one, which can easily be removed later, than two small ones which are much harder to remove.
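A compact implementation of both pivoting rules makes the comparison easy to reproduce; the instance below is a random one, not from the paper:

```python
import math
import random

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i - 1]], pts[tour[i]]) for i in range(len(tour)))

def two_opt(tour, pts, first_improvement=True):
    """2-opt local search: reverse a segment while that shortens the tour, taking
    either the first improving move found or the best move of each full pass."""
    tour = tour[:]
    n = len(tour)
    improved = True
    while improved:
        improved = False
        best_delta, best_move = -1e-12, None
        for i in range(n - 1):
            # For i == 0, stop at n-2 so the two removed edges never share a node.
            for j in range(i + 2, n - 1 if i == 0 else n):
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                delta = (math.dist(pts[a], pts[c]) + math.dist(pts[b], pts[d])
                         - math.dist(pts[a], pts[b]) - math.dist(pts[c], pts[d]))
                if delta < best_delta:
                    best_move = (i, j)
                    if first_improvement:
                        break
                    best_delta = delta
            if first_improvement and best_move is not None:
                break
        if best_move is not None:
            i, j = best_move
            tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
            improved = True
    return tour

rng = random.Random(3)
pts = [(rng.random(), rng.random()) for _ in range(30)]
start = list(range(30))                      # a random-order starting tour
first = two_opt(start, pts, first_improvement=True)
best = two_opt(start, pts, first_improvement=False)
```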

17.
Recent results in applied statistics have shown that the presence of periodicity in a time series may have an influence on the estimation of the long memory (long-range dependence) parameter H. In particular, some estimators falsely detect the presence of long-range dependence when periodicity is present. In this paper, we apply various estimation procedures to synthetic periodic time series in order to verify the performance of each estimation method and to determine which estimators should be used when periodicity may be present.
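The effect is easy to probe with synthetic data: apply an estimator of H to white noise with and without an added sinusoid. Below, the aggregated-variance estimator stands in as one representative method (true H = 0.5 for the noise); the series and settings are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

def hurst_aggvar(x, block_sizes):
    """Aggregated-variance estimate of H: Var(block means) scales like m^(2H-2)."""
    log_m, log_v = [], []
    for m in block_sizes:
        n_blocks = len(x) // m
        means = x[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        log_m.append(np.log(m))
        log_v.append(np.log(means.var()))
    slope = np.polyfit(log_m, log_v, 1)[0]
    return 1 + slope / 2

n = 20_000
t = np.arange(n)
noise = rng.normal(size=n)                             # short memory: true H = 0.5
periodic = noise + 0.5 * np.sin(2 * np.pi * t / 50)    # same series plus a period-50 cycle

blocks = [10, 20, 40, 80, 160]
h_noise = hurst_aggvar(noise, blocks)        # should be close to 0.5
h_periodic = hurst_aggvar(periodic, blocks)  # distorted by the cycle
```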

18.
Partial LAD regression uses the L1 norm associated with least absolute deviations (LAD) regression while retaining the same algorithmic structure as univariate partial least squares (PLS) regression. We use the bootstrap to assess the partial LAD regression model performance and to make comparisons with PLS regression. We use a variety of examples coming from NIR experiments as well as two sets of experimental data.

19.
We derive Bayesian confidence intervals for the probability of default (PD), asset correlation (Rho), and serial dependence (Theta) for low default portfolios (LDPs). The goal is to reduce the probability of underestimating credit risk in LDPs. We adopt a generalized method of moments with continuous updating to estimate prior distributions for PD and Rho from historical default data. The method is based on a Bayesian approach without expert opinions. A Markov chain Monte Carlo technique, namely, the Gibbs sampler, is also applied. The performance of the estimation results for LDPs is validated by Monte Carlo simulations. Empirical studies on Standard & Poor’s historical default data are also conducted.
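In the conjugate special case with no asset correlation or serial dependence, the Bayesian interval for PD is available in closed form, which conveys the idea without the GMM prior estimation and Gibbs sampling used in the paper. The prior and the default history below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical LDP history: yearly (defaults, obligors) pairs with very few defaults.
history = [(0, 200), (1, 210), (0, 190), (0, 205), (1, 198)]

# Beta(a0, b0) prior on PD; with a binomial likelihood the posterior is again Beta,
# so exact credible intervals are available in this simplified setting.
a0, b0 = 1.0, 99.0                 # weakly informative prior centred near 1%
k = sum(d for d, _ in history)     # total defaults
n = sum(m for _, m in history)     # total obligor-years
post_a, post_b = a0 + k, b0 + n - k

draws = rng.beta(post_a, post_b, 100_000)
pd_mean = draws.mean()
pd_interval = np.quantile(draws, [0.025, 0.975])   # 95% credible interval for PD
```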

20.
This paper evaluates the small- and large-sample properties of Markov chain time-dependence and time-homogeneity tests. First, we present the Markov chain methodology for investigating various statistical properties of time series. Considering an autoregressive time series and its associated Markov chain representation, we derive analytical measures of the statistical power of the Markov chain time-dependence and time-homogeneity tests. We then use Monte Carlo simulations to examine the small-sample properties of these tests. It is found that although the Markov chain time-dependence test has desirable size and power properties, the time-homogeneity test does not perform well in statistical size and power calculations.
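A sketch of the time-homogeneity test for a two-state chain: estimate transition matrices on the two halves of the sample and compare them with the pooled matrix via a likelihood-ratio statistic. The chain parameters and two-segment split are for this illustration, not the paper's setup:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(6)

def simulate_chain(P, n, rng, s0=0):
    states = [s0]
    for _ in range(n - 1):
        states.append(rng.choice(len(P), p=P[states[-1]]))
    return states

def transition_counts(states, k):
    counts = np.zeros((k, k))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    return counts

def lr_homogeneity_test(states, k):
    """LR test of time-homogeneity across the two halves of the sample;
    under the null the statistic is asymptotically chi-square with k*(k-1) dof."""
    half = len(states) // 2
    segs = [transition_counts(states[:half], k), transition_counts(states[half:], k)]
    pooled = segs[0] + segs[1]
    p_pool = pooled / np.maximum(pooled.sum(axis=1, keepdims=True), 1)
    lr = 0.0
    for c in segs:
        p_seg = c / np.maximum(c.sum(axis=1, keepdims=True), 1)
        mask = c > 0                       # 0 * log(0) terms contribute nothing
        lr += 2.0 * np.sum(c[mask] * np.log(p_seg[mask] / p_pool[mask]))
    dof = k * (k - 1)                      # (segments - 1) * k * (k - 1)
    return lr, chi2.sf(lr, dof)

P1 = [[0.9, 0.1], [0.2, 0.8]]              # transition matrix, first regime
P2 = [[0.5, 0.5], [0.5, 0.5]]              # clearly different second regime

homog = simulate_chain(P1, 4000, rng)                                  # homogeneous
nonhomog = simulate_chain(P1, 2000, rng) + simulate_chain(P2, 2000, rng)  # break mid-sample

lr_h, p_h = lr_homogeneity_test(homog, 2)
lr_n, p_n = lr_homogeneity_test(nonhomog, 2)
```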


Copyright©北京勤云科技发展有限公司  京ICP备09084417号