Similar Documents
19 similar documents found (search time: 15 ms)
1.
Summary: In this paper we investigate a Bayesian procedure for the estimation of a flexible generalised distribution, notably the MacGillivray adaptation of the g-and-k distribution. This distribution, described through its inverse cdf or quantile function, generalises the standard normal through extra parameters which together describe skewness and kurtosis. The standard quantile-based methods for estimating the parameters of generalised distributions are often arbitrary and do not rely on computation of the likelihood. MCMC, however, provides a simulation-based alternative for obtaining the maximum likelihood estimates of parameters of these distributions, or for deriving posterior estimates of the parameters through a Bayesian framework. In this paper we adopt the latter approach. The proposed methodology is illustrated through an application in which the parameter of interest is slightly skewed.
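As a minimal illustration (not the authors' estimation code), the g-and-k family is easiest to grasp through its quantile function; the sketch below uses the common parameterisation with location a, scale b, skewness g, kurtosis k and the conventional constant c = 0.8, and samples by the inverse-cdf method:

```python
import numpy as np
from scipy.stats import norm

def gk_quantile(u, a, b, g, k, c=0.8):
    """Quantile function of the g-and-k distribution.

    a: location, b: scale (> 0), g: skewness, k: kurtosis (> -0.5);
    c = 0.8 is the conventional overall-asymmetry constant.
    Note (1 - exp(-g*z)) / (1 + exp(-g*z)) = tanh(g*z / 2).
    """
    z = norm.ppf(u)  # standard normal quantile
    return a + b * (1 + c * np.tanh(g * z / 2)) * (1 + z**2) ** k * z

# Inverse-cdf sampling: uniform draws pushed through the quantile function.
rng = np.random.default_rng(0)
u = rng.uniform(size=10_000)
x = gk_quantile(u, a=0.0, b=1.0, g=0.5, k=0.1)
```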

2.
In binary regression, symmetric links such as the logit and probit are usually taken as standard. However, when ones and zeros are unbalanced, these links can be inappropriate: they are inflexible to fit skewness in the response curve and likely to lead to misspecification. This occurs, for instance, in modelling the take-up of some types of insurance, where the probability of the binary response can approach zero at a different rate than it approaches one. Furthermore, the usual links carry no skewness parameter that is easily interpreted independently of the linear predictor. To overcome these problems, this paper develops a set of new skew links and discusses some of their properties; in this context, power links and their reversal versions are presented. A Bayesian inference approach using MCMC is developed for the presented models. The methodology is illustrated on a sample of motor insurance policyholders selected randomly by gender. Results suggest that the proposed link functions are more appropriate than alternative link functions commonly used in the literature. Copyright © 2016 John Wiley & Sons, Ltd.
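As an illustration of the construction (the exact parameterisation in the paper may differ), a power link raises a baseline symmetric cdf to a power λ, and its reversal version applies the complementary transform, so the response curve approaches 0 and 1 at different rates:

```python
import numpy as np
from scipy.stats import norm

def power_probit(eta, lam):
    """Power link based on the probit: F(eta)**lam, lam > 0.

    lam = 1 recovers the symmetric probit; lam != 1 skews the
    response curve, so P approaches 0 and 1 at different rates.
    """
    return norm.cdf(eta) ** lam

def reversal_power_probit(eta, lam):
    """Reversal version: 1 - F(-eta)**lam, skewing the opposite tail."""
    return 1.0 - norm.cdf(-eta) ** lam

eta = np.linspace(-4, 4, 9)
print(power_probit(eta, lam=2.0))
print(reversal_power_probit(eta, lam=2.0))
```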

3.
The asymmetry of a univariate continuous distribution is commonly measured by the classical skewness coefficient. Because this estimator is based on the first three moments of the dataset, it is strongly affected by the presence of one or more outliers. This article investigates the medcouple, a robust alternative to the classical skewness coefficient. We show that it has a 25% breakdown value and a bounded influence function. We present a fast algorithm for its computation, and investigate its finite-sample behavior through simulated and real datasets.
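A naive O(n²) sketch of the medcouple, following its standard kernel definition for continuous data (observations tied with the median, which need a special kernel, are ignored here); the fast algorithm referred to in the abstract achieves O(n log n):

```python
import numpy as np

def medcouple_naive(x):
    """Naive O(n^2) medcouple: a robust skewness measure in [-1, 1].

    For all pairs x_i <= m <= x_j (m the sample median), take the
    median of the kernel h = ((x_j - m) - (m - x_i)) / (x_j - x_i).
    Assumes continuous data, so ties with the median are negligible.
    """
    x = np.sort(np.asarray(x, dtype=float))
    m = np.median(x)
    xi = x[x <= m][:, None]   # values at or below the median
    xj = x[x >= m][None, :]   # values at or above the median
    with np.errstate(invalid="ignore", divide="ignore"):
        h = ((xj - m) - (m - xi)) / (xj - xi)
    return np.median(h[np.isfinite(h)])

rng = np.random.default_rng(1)
print(medcouple_naive(rng.lognormal(size=500)))  # right-skewed data: MC > 0
```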

4.
Using data from the Taiwan stock and options markets, this paper studies the information content and determinants of higher-moment risk premia (volatility, skewness and kurtosis), and uses marginal contribution analysis to gauge how well common candidate factors explain each premium. The results show that correlations are higher among volatility measures than among skewness or kurtosis measures, and that volatility is also easier to forecast; option-implied moments explain future realised moments well, and forecasts combining information from several markets outperform those based on a single market. Market sentiment and heterogeneous beliefs are the main determinants of the higher-moment risk premia, followed by liquidity, while market factors have comparatively weak explanatory power. The analysis of the determinants should be extended to incorporate extreme market risk, risk-aversion information and macroeconomic variables.

5.
In this paper we propose several goodness-of-fit tests based on robust measures of skewness and tail weight. They can be seen as generalisations of the Jarque–Bera test (Bera and Jarque in Econ Lett 7:313–318, 1981) based on the classical skewness and kurtosis, and as an alternative to the approach of Moors et al. (Stat Neerl 50:417–430, 1996) using quantiles. The power and robustness properties of the different tests are investigated by means of simulations and applications to real data. We conclude that MC-LR, one of our proposed tests, shows the best overall power and that it is only moderately influenced by outlying values.
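For reference, a minimal sketch of the classical Jarque–Bera statistic that these tests generalise, using moment-based skewness and kurtosis and the asymptotic χ²(2) null distribution:

```python
import numpy as np
from scipy.stats import chi2

def jarque_bera(x):
    """Classical Jarque-Bera normality test: JB = n/6 * (S^2 + (K - 3)^2 / 4)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    z = x - x.mean()
    s = np.mean(z**3) / np.mean(z**2) ** 1.5   # moment-based skewness
    k = np.mean(z**4) / np.mean(z**2) ** 2     # moment-based kurtosis
    jb = n / 6.0 * (s**2 + (k - 3.0) ** 2 / 4.0)
    return jb, chi2.sf(jb, df=2)               # statistic, asymptotic p-value

rng = np.random.default_rng(2)
print(jarque_bera(rng.standard_normal(1_000)))
```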

6.
Every conformity control method based on measurements is subject to uncertainty, which distorts the decision. In traditional conformity control approaches this uncertainty is treated as an inherent part of the deviation of the observed characteristic; however, the distribution of the real product characteristic may differ from the distribution of the measurement uncertainty, which obscures the real conformity or nonconformity. The specification and consideration of this uncertainty are particularly necessary when it is high and/or the consequences of decision errors are severe. This paper studies the effects of the cost structure associated with the decision outcomes and of the skewness and kurtosis of the measurement uncertainty distribution. The proposed method can specify when and how the measurement uncertainty should be taken into account to increase the expected profit associated with the decision.
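A schematic Monte Carlo sketch of the trade-off being optimised (the paper's treatment is analytical, and every distribution and cost figure below is a made-up placeholder): the observed value is the true characteristic plus skewed measurement error, and the acceptance limit is tuned to maximise the expected profit over the four decision outcomes:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
true = rng.normal(10.0, 1.0, n)         # true product characteristic (placeholder)
error = rng.gamma(4.0, 0.1, n) - 0.4    # skewed, zero-mean measurement error (placeholder)
observed = true + error

spec = 11.5                              # upper specification limit (placeholder)
# Placeholder profits/costs of the four decision outcomes:
profit = {"true_accept": 10.0, "false_accept": -50.0,
          "true_reject": -2.0, "false_reject": -12.0}

def expected_profit(accept_limit):
    accept = observed <= accept_limit
    conform = true <= spec
    return (profit["true_accept"] * np.mean(accept & conform)
            + profit["false_accept"] * np.mean(accept & ~conform)
            + profit["true_reject"] * np.mean(~accept & ~conform)
            + profit["false_reject"] * np.mean(~accept & conform))

limits = np.linspace(10.5, 12.5, 41)
best = max(limits, key=expected_profit)
print(best, expected_profit(best))
```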

7.
ABSTRACT. In classical theoretical ecology there are numerous standard models which are simple, generally applicable, and have well-known properties. These standard models are widely used as building blocks for all kinds of theoretical and applied models. In contrast, there is a total lack of standard individual-based models (IBMs), even though they are badly needed if the advantages of the individual-based approach are to be exploited more efficiently. We discuss the recently developed 'field-of-neighborhood' (FON) approach as a possible standard for modeling plant populations. In this approach, a plant is characterized by a circular zone of influence that grows with the plant, and a field of neighborhood that, for each point within the zone of influence, describes the strength of competition, i.e., growth reduction, exerted on neighboring plants. Local competition is thus described phenomenologically. We show that a model of mangrove forest dynamics, KiWi, which is based on the FON approach, is capable of reproducing self-thinning trajectories in an almost textbook-like manner. In addition, we show that the entire biomass-density trajectory (bdt) can be divided into four sections which are related to the skewness of the stem diameter distributions of the cohort. The skewness shows two zero crossings during the complete development of the population; these indicate the beginning and the end of the self-thinning process. A characteristic decay of the positive skewness accompanies the occurrence of a linear bdt section, the well-known self-thinning line. Although the slope of this line is not fixed, it is confined in two directions, with morphological constraints determining the lower limit and the strength of neighborhood competition exerted by the individuals marking the upper limit.

8.
The insurance industry is known to have high operating expenses within the financial services sector. Insurers, investors and regulators are interested in models to understand the behavior of expenses. However, current practice ignores their skewness, occasional negative values, and temporal dependence. Addressing these three features, this paper develops a longitudinal model of insurance company expenses that can be used for prediction, to identify unusual behavior, and to measure firm efficiency. Specifically, we use a three-parameter asymmetric Laplace density for the marginal distribution of insurers' expenses in each year. Copula functions are employed to accommodate their temporal dependence. As a function of explanatory variables, the location parameter allows us to analyze an insurer's expenses in light of the firm's characteristics. Our model can be interpreted as a longitudinal quantile regression. The analysis is performed using property–casualty insurance company data from the National Association of Insurance Commissioners for the years 2001–2006. Due to the long-tailed nature of insurers' expenses, two alternative approaches are proposed to improve the performance of the longitudinal quantile regression model: rescaling and transformation. Predictive densities are derived that allow one to compare the predictions for individual insurers in a hold-out sample. Both predictive models are shown to be reasonable, with the rescaling method outperforming the transformation method. Compared with standard longitudinal models, our model is shown to be superior in identifying insurers' unusual behavior.
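A minimal sketch of the three-parameter asymmetric Laplace density in its quantile-regression parameterisation (location μ, scale σ, skewness parameter τ), one common way of writing it; maximising this likelihood in μ is equivalent to minimising the quantile-regression check loss, which is the sense in which the model is a longitudinal quantile regression:

```python
import numpy as np

def check_loss(u, tau):
    """Quantile-regression check function rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (u < 0))

def asym_laplace_pdf(y, mu=0.0, sigma=1.0, tau=0.5):
    """Three-parameter asymmetric Laplace density.

    f(y) = tau*(1 - tau)/sigma * exp(-rho_tau((y - mu)/sigma));
    tau in (0, 1) controls skewness, tau = 0.5 gives the symmetric Laplace.
    """
    return tau * (1 - tau) / sigma * np.exp(-check_loss((y - mu) / sigma, tau))

y = np.linspace(-5, 5, 11)
print(asym_laplace_pdf(y, mu=0.0, sigma=1.0, tau=0.3))  # skewed density values
```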

9.
An R (range) control chart for skewed populations is constructed using the weighted standard deviation method: the control chart constants corresponding to the normal distribution are computed for the skewed population, and the upper and lower control limits are obtained from the sample skewness. When the population is symmetrically distributed, the chart reduces to the standard Shewhart control chart. Finally, improved control chart constants are derived by the Monte Carlo method.
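A hedged sketch of the Monte Carlo step only (the weighted-standard-deviation adjustment itself is the paper's method and is not reproduced): subgroup ranges are simulated from a skewed population, and their mean and tail percentiles serve as empirical R-chart centre line and limits:

```python
import numpy as np

rng = np.random.default_rng(4)

def mc_r_chart_limits(sampler, n=5, reps=100_000, alpha=0.0027):
    """Monte Carlo R-chart limits for subgroups of size n drawn from `sampler`.

    Returns (LCL, centre, UCL) as empirical percentiles of the subgroup
    range, replacing the normal-theory constants D3*Rbar, Rbar, D4*Rbar.
    """
    samples = sampler((reps, n))
    ranges = samples.max(axis=1) - samples.min(axis=1)
    lcl, ucl = np.quantile(ranges, [alpha / 2, 1 - alpha / 2])
    return lcl, ranges.mean(), ucl

# Skewed population (placeholder): a lognormal process characteristic.
print(mc_r_chart_limits(lambda size: rng.lognormal(0.0, 0.5, size)))
```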

10.
In this paper, we conduct skewness term-structure tests to check whether the temporal structure of risk-neutral skewness is consistent with rational expectations. Because risk-neutral skewness is substantially mean reverting, skewness shocks should decay quickly, and the risk-neutral skewness of more distant options should display the rationally expected smoothing behaviour. Using an equilibrium asset- and option-pricing model in a production economy under jump diffusion with stochastic jump intensity, we derive this smoothing elasticity analytically. In an empirical application of the model using more than 20 years of data on S&P500 index options, we find that this elasticity differs from the value suggested under rational expectations – smaller on the short end (underreaction) and larger on the long end (overreaction) of the 'skewness curve'.

11.
In finance theory the standard deviation of asset returns is almost universally used as a measure of risk. This universality persists despite the known limitations of the standard deviation and an extensive, growing literature on alternative risk measures. One possible reason is that the sample properties of the alternative measures are not well understood. This paper compares the sample distribution of the semi-variance with that of the variance. In particular, we investigate the belief that, while there are convincing theoretical reasons to use the semi-variance, the volatility of its sample estimate is so high as to make the measure impractical in applied work. In addition, arguments based on stochastic dominance are used to compare the distributions of the two statistics, and conditions are developed to identify situations in which the semi-variance may be preferred to the variance. An empirical example using equity data from emerging markets demonstrates this approach.
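A minimal sketch of the two sample statistics being compared, with the semi-variance taken about the sample mean (a below-target version would substitute a fixed target):

```python
import numpy as np

def sample_variance(x):
    """Ordinary sample variance (biased, divisor n)."""
    return np.mean((x - x.mean()) ** 2)

def semi_variance(x, target=None):
    """Lower semi-variance: mean squared shortfall below the target
    (defaults to the sample mean), penalising only downside deviations."""
    t = x.mean() if target is None else target
    return np.mean(np.minimum(x - t, 0.0) ** 2)

rng = np.random.default_rng(5)
returns = rng.standard_t(df=4, size=2_000) * 0.01  # heavy-tailed placeholder returns
print(sample_variance(returns), semi_variance(returns))
```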

12.
It is argued that the accuracy of risk aggregation in Solvency II can be improved by updating skewness recursively. A simple scheme based on the log-normal distribution is developed and shown to be superior to the standard formula and to adjustments of the Cornish–Fisher type. The method handles tail dependence if a simple Monte Carlo step is included. A hierarchical Clayton copula is constructed and used to confirm the accuracy of the log-normal approximation and to demonstrate the importance of including tail dependence. Arguably a log-normal scheme makes the logic in Solvency II consistent, but many other distributions might be used as the vehicle, a topic that may deserve further study.
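The recursive updating scheme is the paper's contribution and is not reproduced here; as a minimal sketch of why skewness can be updated recursively at all, the first three cumulants of independent risks simply add:

```python
import numpy as np

def aggregate_skewness(means, variances, skews):
    """Skewness of a sum of independent risks via additivity of cumulants.

    kappa3_i = skew_i * var_i**1.5; cumulants of independent risks add, so
    skew(sum) = sum(kappa3_i) / sum(var_i)**1.5. Dependence (e.g. via a
    Clayton copula, as in the paper) would require a correction.
    """
    variances = np.asarray(variances, dtype=float)
    k2 = variances.sum()
    k3 = np.sum(np.asarray(skews) * variances ** 1.5)
    return np.sum(means), k2, k3 / k2 ** 1.5   # mean, variance, skewness of sum

print(aggregate_skewness(means=[1.0, 2.0], variances=[4.0, 9.0], skews=[0.8, 0.3]))
```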

13.
何军 (He Jun), 《应用数学和力学》 (Applied Mathematics and Mechanics), 2007, 28(11): 1325–1332
An analytical method based on structural response moments is proposed for computing the first-passage failure time of structures with non-Gaussian responses. In this method, the non-Gaussian structural response is first transformed into a standard Gaussian process by a power series whose coefficients are computed from the response moments (skewness, kurtosis, etc.). The transformed standard Gaussian process is then used to compute the mean crossing rate, mean clump size and initial passage probability of the original response process with respect to a given critical threshold. Finally, under the assumption that the corrected crossings are independent, a formula for the first-passage time is established. The analysis of a nonlinear single-degree-of-freedom oscillator under Gaussian excitation not only illustrates the application of the method, but also demonstrates its accuracy and efficiency through comparison with Monte Carlo simulation and the traditional Gaussian-model approach.
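A minimal sketch of the crossing-rate and first-passage steps for the transformed standard Gaussian process, using Rice's formula and the independent-crossings (Poisson) assumption; the power-series moment transformation and the clump-size correction of the paper are not reproduced:

```python
import numpy as np

def mean_upcrossing_rate(b, sigma_x, sigma_xdot):
    """Rice's formula: mean rate of up-crossings of level b by a stationary
    zero-mean Gaussian process with std sigma_x and derivative std sigma_xdot."""
    return (sigma_xdot / (2.0 * np.pi * sigma_x)) * np.exp(-b**2 / (2.0 * sigma_x**2))

def first_passage_prob(T, b, sigma_x, sigma_xdot):
    """First-passage probability over [0, T] under the Poisson
    (independent crossings) assumption: P = 1 - exp(-nu * T)."""
    nu = mean_upcrossing_rate(b, sigma_x, sigma_xdot)
    return 1.0 - np.exp(-nu * T)

print(first_passage_prob(T=10.0, b=3.0, sigma_x=1.0, sigma_xdot=2.0))
```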

14.
This paper introduces a method for simulating multivariate samples that have exact means, covariances, skewness and kurtosis. We introduce a new class of rectangular orthogonal matrices which is fundamental to the methodology, and we call these matrices L matrices. They may be deterministic, parametric or data specific in nature. The target moments determine the L matrix; infinitely many random samples with the same exact moments may then be generated by multiplying the L matrix by arbitrary random orthogonal matrices. This methodology is thus termed "ROM simulation". Considering certain elementary types of random orthogonal matrices, we demonstrate that they generate samples with different characteristics. ROM simulation has applications to many problems that are resolved using standard Monte Carlo methods, but no parametric assumptions are required (unless parametric L matrices are used), so there is no sampling error caused by the discrete approximation of a continuous distribution, which is a major source of error in standard Monte Carlo simulations. For illustration, we apply ROM simulation to determine the value-at-risk of a stock portfolio.
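The L-matrix construction that matches skewness and kurtosis exactly is specific to the paper; as a minimal sketch of the underlying idea restricted to the first two moments, a random sample can be standardised exactly and recoloured, after which its sample mean and covariance equal the targets exactly:

```python
import numpy as np

rng = np.random.default_rng(6)

def exact_two_moment_sample(n, mu, cov):
    """Sample of size n whose *sample* mean and covariance equal mu, cov exactly."""
    d = len(mu)
    z = rng.standard_normal((n, d))
    z -= z.mean(axis=0)                              # exact zero sample mean
    s = np.cov(z, rowvar=False, bias=True)           # sample covariance of z
    w = z @ np.linalg.inv(np.linalg.cholesky(s)).T   # whiten: sample cov = I
    return mu + w @ np.linalg.cholesky(cov).T        # recolour: sample cov = target

mu = np.array([0.0, 1.0])
cov = np.array([[2.0, 0.5], [0.5, 1.0]])
x = exact_two_moment_sample(500, mu, cov)
print(x.mean(axis=0))
print(np.cov(x, rowvar=False, bias=True))
```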

15.
This paper considers ranking decision alternatives under multiple attributes with imprecise information on both attribute weights and alternative ratings. It is demonstrated that regret results from the decision maker's inadequate knowledge about which scenario will actually occur. Potential optimality analysis is a traditional method for evaluating alternatives with imprecise information; its essence is to identify any alternative that outperforms the others in its best-case scenario. Our analysis shows that potential optimality analysis is optimistic in nature and may lead to a significant loss if an unfavorable scenario occurs. We suggest a robust optimization analysis approach that ranks alternatives by their worst-case absolute or relative regret. A robust optimal alternative performs reasonably well in all scenarios and is shown to be desirable for a risk-concerned decision maker. Linear programming models are developed to check robust optimality.
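A minimal sketch of the worst-case (minimax) absolute-regret ranking over an enumerated scenario set; the paper treats continuous imprecision in weights and ratings via linear programming, but with discrete scenarios the logic reduces to a few array operations (all payoff numbers below are placeholders):

```python
import numpy as np

# payoff[i, s]: value of alternative i under scenario s (placeholder data)
payoff = np.array([[9.0, 2.0, 7.0],
                   [6.0, 6.0, 6.0],
                   [8.0, 1.0, 9.0]])

regret = payoff.max(axis=0) - payoff        # regret of alternative i in scenario s
worst_case = regret.max(axis=1)             # worst-case absolute regret per alternative
robust_choice = int(np.argmin(worst_case))  # minimax-regret (robust optimal) alternative
print(worst_case, robust_choice)
```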

16.
In this paper we propose a robust classification rule for skewed unimodal distributions. For low-dimensional data, the classifier is based on minimizing the adjusted outlyingness to each group. For high-dimensional data, the robustified SIMCA method is adjusted for skewness. The robustness of the methods is investigated through different simulations and by applying them to some datasets.
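A deliberately simplified one-dimensional sketch of the classify-to-least-outlying-group idea, using a symmetric median/MAD outlyingness; the adjusted outlyingness of the paper additionally corrects the two tails separately for skewness:

```python
import numpy as np

def outlyingness(x, ref):
    """Symmetric robust outlyingness of points x w.r.t. reference sample ref."""
    med = np.median(ref)
    mad = np.median(np.abs(ref - med))
    return np.abs(x - med) / mad

def classify(x, groups):
    """Assign each point in x to the group with the smallest outlyingness."""
    scores = np.column_stack([outlyingness(x, g) for g in groups])
    return scores.argmin(axis=1)

rng = np.random.default_rng(7)
g0 = rng.normal(0.0, 1.0, 300)
g1 = rng.lognormal(1.0, 0.4, 300)   # skewed group
print(classify(np.array([0.2, 3.0, 6.0]), [g0, g1]))
```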

17.
Complex data, where each statistical unit under study is described not by a single observation (or vector variable) but by a unit-specific sample of several or even many observations, are becoming increasingly common. Reducing such sample data to summary statistics, like the average or the median, discards most of the inherent information (about variability, skewness or multi-modality). Full information is preserved only if each unit is described by a whole distribution. This new kind of data, a.k.a. "distribution-valued data", requires the development of adequate statistical methods. This paper presents a method to group a set of probability density functions (pdfs) into homogeneous clusters, provided that the pdfs have to be estimated nonparametrically from the unit-specific data. Since elements belonging to the same cluster are naturally thought of as samples from the same probability model, the idea is to tackle the clustering problem by defining and estimating a proper mixture model on the space of pdfs. Model building is challenging here because of the infinite dimensionality and the non-Euclidean geometry of the domain space. By adopting a wavelet-based representation for the elements of the space, the task is accomplished using mixture models for hyper-spherical data. The proposed solution is illustrated through a simulation experiment and on two real data sets.

18.
An Empirical Study of the Distributional Characteristics of Chinese Stock Market Returns
Besides leptokurtosis and heavy tails, financial data also exhibit asymmetry in tail probabilities, i.e., skewed tails. This paper fits an asymmetric Laplace distribution to sample return data from the Shanghai and Shenzhen stock markets. The analysis shows that medium- and long-horizon returns in the Chinese stock market have clearly skewed tails and that, contrary to Campbell's behavioural-finance explanation, it is the right tail that is heavier than the left rather than the reverse. It also shows that the sensitivity of the skewed-tail feature to the time horizon is higher for the Shenzhen market than for the Shanghai market.

19.
In this work we discuss the solution of an initial value problem of parabolic type. The main objective is to propose an alternative method of solution, one not based on finite difference, finite element or spectral methods: the Adomian decomposition method, applied here to the Fokker–Planck equation and some similar equations. The method can be applied to a large class of problems and requires considerably less computation than the traditional methods. The decomposition procedure yields exact solutions easily, without linearizing the problem. In this approach the solution is found in the form of a convergent series with easily computed components. We are concerned with the application of the decomposition method to both the linear and the nonlinear Fokker–Planck equation. To give an overview of the methodology, several examples in one and two dimensions are presented.
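A minimal symbolic sketch of the decomposition for the toy diffusion equation u_t = u_xx (a Fokker–Planck equation with zero drift and constant diffusion): each Adomian component is the time integral of the second spatial derivative of the previous one, and for u(x, 0) = x² the series terminates at the exact solution x² + 2t:

```python
import sympy as sp

x, t = sp.symbols("x t")

def adomian_heat(u0, n_terms=4):
    """Adomian decomposition for u_t = u_xx with u(x, 0) = u0:
    u_{k+1} = Integral_0^t d^2 u_k / dx^2 dt; the solution is the
    sum of the components u_0 + u_1 + ... (truncated after n_terms)."""
    u = u0
    total = u0
    for _ in range(n_terms):
        u = sp.integrate(sp.diff(u, x, 2), (t, 0, t))
        total += u
    return sp.simplify(total)

print(adomian_heat(x**2))           # x**2 + 2*t: the series terminates at the exact solution
print(adomian_heat(sp.sin(x), 6))   # truncated series converging to exp(-t)*sin(x)
```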
