Similar Documents
20 similar documents found (search time: 172 ms)
1.
We analyze output from six regional climate models (RCMs) via a spatial Bayesian hierarchical model. The primary advantage of this approach is that the statistical model naturally borrows strength across locations via a spatial model on the parameters of the generalized extreme value distribution. This is especially important in this application because the RCM output we analyze has extensive spatial coverage but a relatively short temporal record for characterizing extreme behavior. The hierarchical model we employ is also designed to be computationally efficient, as we analyze RCM output for nearly 12,000 locations. The aim of this analysis is to compare the extreme precipitation generated by these RCMs. Our results show that, although the RCMs produce similar spatial patterns for the 100-year return level, their characterizations of extreme precipitation are quite different. Additionally, we examine the spatial behavior of the extreme value index and find differing spatial patterns in the point estimates across the RCMs. However, these differences may not be significant given the uncertainty associated with estimating this parameter.
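A minimal sketch of the return-level computation mentioned above, using a pointwise GEV fit rather than the authors' hierarchical model (the borrowing of strength across locations is omitted); parameter names mu, sigma, xi and the simulated record are illustrative:

```python
import numpy as np
from scipy.stats import genextreme

def return_level(mu, sigma, xi, T=100):
    """T-year return level of a GEV(mu, sigma, xi) distribution.

    Note: scipy parameterizes the GEV shape as c = -xi, hence the negation.
    """
    return genextreme.ppf(1.0 - 1.0 / T, c=-xi, loc=mu, scale=sigma)

# Fit annual-maximum precipitation at one grid cell and report z_100.
rng = np.random.default_rng(0)
annual_maxima = genextreme.rvs(c=-0.1, loc=40.0, scale=10.0,
                               size=30, random_state=rng)  # short record, as in RCM output
c_hat, loc_hat, scale_hat = genextreme.fit(annual_maxima)
print(return_level(loc_hat, scale_hat, -c_hat, T=100))
```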

2.
The modeling and analysis of lifetime data is an important aspect of statistical work in a wide variety of scientific and technological fields. Good (1953) introduced a probability distribution that is commonly used in the analysis of lifetime data. Based on this distribution, we propose for the first time the so-called exponentiated generalized inverse Gaussian distribution, which extends the exponentiated standard gamma distribution (Nadarajah and Kotz, 2006). Various structural properties of the new distribution are derived, including expansions for its moments, its moment generating function, moments of the order statistics, and so forth. We discuss maximum likelihood estimation of the model parameters. The usefulness of the new model is illustrated by means of a real data set.
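A minimal sketch of the "exponentiated" construction F(x) = G(x)**beta, here taking scipy's generalized inverse Gaussian as the baseline G; the authors' exact parameterization may differ, and beta, p, b are illustrative:

```python
import numpy as np
from scipy.stats import geninvgauss

def egig_cdf(x, beta, p, b):
    """CDF of the exponentiated GIG: the baseline CDF raised to beta."""
    return geninvgauss.cdf(x, p, b) ** beta

def egig_pdf(x, beta, p, b):
    """Density: beta * g(x) * G(x)**(beta - 1)."""
    g = geninvgauss.pdf(x, p, b)
    G = geninvgauss.cdf(x, p, b)
    return beta * g * np.maximum(G, 1e-300) ** (beta - 1.0)

print(egig_cdf(50.0, beta=2.0, p=1.0, b=1.5))  # should be close to 1
```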

3.
Recent advances in the transformation model have made it possible to use this model for analyzing a variety of censored survival data. For inference on the regression parameters, there are semiparametric procedures based on the normal approximation. However, the accuracy of such procedures can be quite low when censoring is heavy. In this paper, we apply an empirical likelihood ratio method and derive its limiting distribution via U-statistics. We obtain confidence regions for the regression parameters and compare the proposed method with the normal-approximation-based method in terms of coverage probability. The simulation results demonstrate that the proposed empirical likelihood method substantially overcomes the under-coverage problem and outperforms the normal-approximation-based method. The proposed method is illustrated with a real data example. Finally, our method can be applied to general U-statistic-type estimating equations.
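A toy sketch of an empirical-likelihood ratio test for a mean (Owen's construction) with its chi-square calibration; this is not the paper's U-statistic version for transformation models, only an illustration of the general mechanism:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def neg2_log_elr(x, mu):
    """-2 log empirical likelihood ratio for E[X] = mu."""
    d = x - mu
    # Solve sum(d / (1 + lam * d)) = 0 for the Lagrange multiplier lam,
    # bracketing so that all weights 1 + lam * d_i stay positive.
    lo = (-1.0 + 1e-10) / d.max()
    hi = (-1.0 + 1e-10) / d.min()
    lam = brentq(lambda l: np.sum(d / (1.0 + l * d)), lo, hi)
    return 2.0 * np.sum(np.log1p(lam * d))

x = np.random.default_rng(1).exponential(2.0, size=80)
stat = neg2_log_elr(x, mu=2.0)
print(stat, stat < chi2.ppf(0.95, df=1))  # inside the 95% EL region?
```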

4.
This paper proposes a new approach to analyzing stock return asymmetry and quantiles. We also present a new scale mixture of uniform (SMU) representation for the asymmetric Laplace distribution (ALD). The SMU representation of a probability distribution is a data augmentation technique that simplifies the Gibbs sampler in Bayesian Markov chain Monte Carlo algorithms. We consider a stochastic volatility (SV) model with an ALD error distribution. With the SMU representation, the full conditional distributions of some parameters are shown to have closed form. It is also known that the ALD can be used to obtain the coefficients of quantile regression models. This paper therefore also considers a quantile SV model, obtained by fixing the skew parameter of the ALD at a specific quantile level. A simulation study shows that the proposed methodology works well in both the SV and quantile SV models under the Bayesian approach. In the empirical study, we analyze index returns of the stock markets in Australia, Japan, Hong Kong, Thailand, and the UK and study the effect of the S&P 500 on these returns. The results show significant return asymmetry in some markets and an influence of the S&P 500 on all markets at all quantile levels.
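A minimal sketch of why fixing the ALD skew parameter at quantile level p yields quantile estimation: maximizing the ALD likelihood in the location parameter is the same as minimizing the check (pinball) loss. The SV/Gibbs machinery is omitted and the toy "returns" are simulated:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def check_loss(u, p):
    """rho_p(u) = u * (p - 1{u < 0})."""
    return u * (p - (u < 0))

p = 0.9
r = np.random.default_rng(2).standard_t(df=5, size=500)  # toy "returns"
fit = minimize_scalar(lambda m: np.sum(check_loss(r - m, p)))
print(fit.x, np.quantile(r, p))  # the two should agree closely
```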

5.
金浩, 田铮. 《数学研究及应用》, 2009, 29(6): 1011-1021
This paper analyzes the problem of testing for parameter changes in ARCH error models with a deterministic trend, based on a residual cusum test. It is shown that the limiting distribution of the residual cusum test statistic is still the supremum of a standard Brownian bridge under the null hypothesis. To verify this, we carry out a Monte Carlo simulation and examine IBM return data. The results of both the simulation and the real data analysis support our claim. We can also explain this phenomenon theoretically: the variance in an ARCH model is mainly determined by its parameters.
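A minimal sketch of a residual cusum-of-squares statistic whose null limit is sup|B(t)| for a Brownian bridge B; 1.358 is the usual 5% critical value for that supremum. The residuals here are simulated i.i.d. noise, not residuals from a fitted ARCH model, and the long-run variance estimate assumes the i.i.d. case:

```python
import numpy as np

def cusum_sq(eps):
    """sup_k |S_k - (k/n) S_n| / (sqrt(n) * tau_hat) for squared residuals."""
    e2 = eps ** 2
    n = e2.size
    s = np.cumsum(e2)
    tau = np.sqrt(np.mean((e2 - e2.mean()) ** 2))  # sd of eps^2 (iid case)
    k = np.arange(1, n + 1)
    return np.max(np.abs(s - k / n * s[-1])) / (np.sqrt(n) * tau)

eps = np.random.default_rng(3).standard_normal(1000)
print(cusum_sq(eps), "reject at 5% if >", 1.358)
```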

6.
A fundamental problem of interest to contemporary natural resource scientists is that of assessing whether a critical population parameter, such as a population proportion p, has been maintained above (or below) a specified critical threshold level pc. This problem has traditionally been analyzed using frequentist estimation of parameters with confidence intervals or frequentist hypothesis testing. Bayesian statistical analysis provides an alternative approach that has many advantages. It has a more intuitive interpretation, providing probability assessments of parameters. It follows the Bayesian logic of “if (data), then probability (parameters)” rather than the frequentist logic of “if (parameters), then probability (data).” It provides a sequential, cumulative, scientific approach to analysis, using prior information and reassessing the probability distribution of parameters for adaptive management decision making. It has been integrated with decision theory and provides estimates of risk. Natural resource scientists now have the opportunity to use Bayesian statistical analysis to their advantage, as this alternative approach to statistical inference has become practical and accessible.
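A minimal sketch of the Bayesian logic described above: with a Beta prior on a population proportion p and binomial data, the probability that p exceeds the critical threshold is read directly off the posterior. The uniform prior Beta(1, 1) and the counts are illustrative:

```python
from scipy.stats import beta

y, n = 34, 50          # successes out of n trials (hypothetical survey)
a0, b0 = 1.0, 1.0      # uniform prior on p
p_c = 0.5              # critical threshold

posterior = beta(a0 + y, b0 + n - y)   # Beta prior is conjugate to binomial
print("Pr(p > p_c | data) =", posterior.sf(p_c))
```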

7.
Using 1-minute high-frequency data, we study the dependence between the iVIX index and the returns of the SSE 50 ETF. Parametric estimation and kernel density estimation are used to describe the marginal distributions, and a Copula model is constructed, with goodness of fit checked by the Kolmogorov-Smirnov test. The results show that the Copula model fits well; compared with Kendall and Spearman analyses, the Copula function captures not only the rank correlation between the iVIX index and the ETF return series but also their tail dependence. The iVIX index and SSE 50 ETF returns are negatively rank-correlated, with the strength of the rank correlation roughly following a "W"-shaped pattern across holding periods; the tail behavior of the Copula probability density function reveals an asymmetric dependence structure between the iVIX index and ETF returns.
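A toy sketch of the moment (Kendall's tau) route from data to a copula parameter and a tail-dependence coefficient. For a negatively associated pair like the one described above, the second margin is rotated (v -> 1 - v) so that a Clayton copula applies; the simulated pair is illustrative, not the iVIX/ETF series:

```python
import numpy as np
from scipy.stats import kendalltau, rankdata

rng = np.random.default_rng(4)
x = rng.standard_normal(1000)
y = -x + 0.8 * rng.standard_normal(1000)       # negatively dependent pair

u = rankdata(x) / (len(x) + 1)                 # pseudo-observations
v = 1.0 - rankdata(y) / (len(y) + 1)           # rotate to positive dependence

tau, _ = kendalltau(u, v)
theta = 2.0 * tau / (1.0 - tau)                # Clayton: tau = theta/(theta+2)
lam_lower = 2.0 ** (-1.0 / theta)              # Clayton lower-tail dependence
print(tau, theta, lam_lower)
```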

8.
The present work addresses Bayesian finite element (FE) model updating with modal measurements, based on maximizing the posterior probability rather than on any sampling-based approach. Such a Bayesian updating framework usually employs the normal distribution for the parameters, although the normal distribution raises well-known statistical issues for non-negative parameters. We address these issues by adopting a lognormal distribution for non-negative parameters. Detailed formulations are derived for model updating, uncertainty estimation, and probabilistic detection of changes/damages in structural parameters using a combined normal-lognormal probability distribution in this Bayesian framework. The normal and lognormal distributions are adopted for the eigen-system equation and the structural (mass and stiffness) parameters, respectively, and the two distributions are considered jointly in the likelihood function. Important advantages in FE model updating (e.g., utilization of incomplete measured modal data, no requirement of mode matching) are retained in the proposed approach. To demonstrate its efficiency, a two-dimensional truss structure is considered with multiple damage cases. Satisfactory performance is observed in model updating and the subsequent probabilistic estimates, although, as expected, performance weakens as the damage level increases. The proposed approach is also compared with the typical normal-distribution-based updating approach for these damage cases, demonstrating a quite similar level of performance. The proposed approach further demonstrates better computational efficiency (higher accuracy in less computation time) than two prominent Markov chain Monte Carlo (MCMC) techniques (the Metropolis-Hastings algorithm and Gibbs sampling).
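A minimal sketch of the lognormal device for non-negative parameters: place a normal prior on theta = log(k) so that k = exp(theta) can never go negative during a MAP search. The one-degree-of-freedom "model" and all numbers are illustrative stand-ins for the paper's FE eigen-system:

```python
import numpy as np
from scipy.optimize import minimize

omega2_meas = 4.1          # measured squared natural frequency (toy value)
m = 1.0                    # known mass
mu_log, sd_log = 0.0, 0.5  # lognormal prior on stiffness k: log k ~ N(0, 0.5^2)
sd_eig = 0.2               # noise sd of the eigenvalue equation

def neg_log_post(theta):
    k = np.exp(theta[0])                       # guaranteed positive
    resid = (omega2_meas - k / m) / sd_eig     # normal eigen-equation error
    prior = (theta[0] - mu_log) / sd_log       # normal prior on log k
    return 0.5 * (resid ** 2 + prior ** 2)

res = minimize(neg_log_post, x0=[0.0])
print("MAP stiffness:", np.exp(res.x[0]))
```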

9.
This paper considers a continuous-time, continuous-state stochastic process to determine a theoretical model and empirical parameters for the probability distribution of remigration. A Brownian motion model is used for simplicity, with empirical findings drawn from a study of Israeli return migrants. A negative relationship between remigration (sojourn) time and the probability of return is used to forecast remigration, which can help governments that actively seek the return of their migrants reach better decisions regarding the timing of their efforts.
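A minimal sketch of one standard consequence of a Brownian-motion model of this kind: with drift nu toward a "return" threshold a, the first-passage (sojourn) time is inverse Gaussian. The parameters a, nu, sigma are illustrative, not the paper's estimates from the Israeli data:

```python
from scipy.stats import invgauss

a, nu, sigma = 1.0, 0.25, 1.0      # threshold, drift, diffusion sd
mean_time = a / nu                 # inverse Gaussian mean
shape = a ** 2 / sigma ** 2        # inverse Gaussian shape parameter lambda

# scipy's invgauss(mu, scale) has mean mu*scale and shape parameter scale,
# so IG(mean m, shape lambda) maps to invgauss(m/lambda, scale=lambda).
T = invgauss(mu=mean_time / shape, scale=shape)
print("P(return within 5 years) =", T.cdf(5.0))
print("median sojourn time      =", T.ppf(0.5))
```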

10.
Reliability analysis requires modeling the joint probability distribution of uncertain parameters, which can be a challenge since the random variables representing the parameter uncertainties may be correlated. For convenience, Gaussian dependence is commonly assumed for correlated random variables. This paper first investigates the effect of multidimensional non-Gaussian dependences underlying the multivariate probability distribution on reliability results. Using different bivariate copulas in a vine structure, various dependences can be modeled. The associated copula parameters are identified from available statistical information by moment-matching techniques. After developing the vine copula model for the multivariate probability distribution, the reliability involving correlated random variables is evaluated based on the Rosenblatt transformation. The impact of dependence is significant: a large deviation in failure probability is observed, which emphasizes the need for accurate dependence characterization. A practical method for dependence modeling based on limited data is thus provided. The results demonstrate that non-Gaussian dependences can arise in practice, and that the reliability estimate can be biased if Gaussian dependence is assumed inappropriately. Moreover, the effect of the conditioning order on reliability should not be overlooked unless the vine structure contains only one type of copula.
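A minimal sketch of the Rosenblatt transformation for a bivariate Gaussian copula: a pair (u1, u2) with correlation rho is mapped to independent uniforms via the conditional CDF. In the paper this step is applied to a full vine copula; rho and the sample are illustrative:

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

rho = 0.7
rng = np.random.default_rng(5)
z = multivariate_normal.rvs(mean=[0, 0], cov=[[1, rho], [rho, 1]],
                            size=2000, random_state=rng)
u1, u2 = norm.cdf(z[:, 0]), norm.cdf(z[:, 1])   # correlated uniforms

# Rosenblatt: e1 = u1, e2 = C(u2 | u1)
z1, z2 = norm.ppf(u1), norm.ppf(u2)
e1 = u1
e2 = norm.cdf((z2 - rho * z1) / np.sqrt(1.0 - rho ** 2))
print(np.corrcoef(norm.ppf(e1), norm.ppf(e2))[0, 1])  # approx 0: independent
```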

11.
In conventional multiobjective decision making problems, estimating the parameters of the model is often problematic. Normally they are either given by the decision maker (DM), who has imprecise information and/or expresses his considerations subjectively, or obtained by statistical inference from past data, in which case their stability is doubtful. It is therefore reasonable to construct a model reflecting imprecise data or ambiguity in terms of fuzzy sets, and many fuzzy approaches to multiobjective programming have been developed for this purpose. In this paper we propose a method to solve a multiobjective linear programming problem involving fuzzy parameters (FP-MOLP), whose possibility distributions are given by fuzzy numbers estimated from the information provided by the DM. As the parameters in the model are fuzzy, the solutions will also be fuzzy. We propose a new Pareto optimal solution concept for fuzzy multiobjective programming problems, based on the extension principle and the joint possibility distribution of the fuzzy parameters of the problem. The method relies on α-cuts of the fuzzy solution to generate its possibility distribution, as sketched below. These ideas are illustrated with a numerical example.
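A toy sketch of the α-cut idea: with a triangular fuzzy cost coefficient, solving the LP at the two endpoints of each α-cut traces the possibility distribution of the optimal value. The tiny two-variable LP and the fuzzy number (2, 3, 4) are illustrative, not the paper's FP-MOLP:

```python
import numpy as np
from scipy.optimize import linprog

def alpha_cut(l, m, r, alpha):
    """Alpha-cut [l + alpha*(m-l), r - alpha*(r-m)] of a triangular number."""
    return l + alpha * (m - l), r - alpha * (r - m)

A_ub = [[1.0, 2.0], [3.0, 1.0]]
b_ub = [14.0, 18.0]

for alpha in (0.0, 0.5, 1.0):
    lo, hi = alpha_cut(2.0, 3.0, 4.0, alpha)       # fuzzy coefficient of x1
    vals = []
    for c1 in (lo, hi):
        res = linprog([-c1, -2.0], A_ub=A_ub, b_ub=b_ub)  # max c1*x1 + 2*x2
        vals.append(-res.fun)
    print(alpha, min(vals), max(vals))             # alpha-cut of fuzzy optimum
```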

12.
In this paper a new probability density function with bounded domain is presented. The new distribution arises from the generalized Lindley distribution proposed by Zakerzadeh and Dolati (2010). This new distribution, which depends on two parameters, can be considered as an alternative to the classical beta distribution. It has the advantage of not including any special function in its formulation. After studying its most important properties, some useful results regarding insurance and inventory management applications are obtained. In particular, in insurance, we suggest a special class of distorted premium principles based on this distribution and compare it with the well-known power dual premium principle. Since the mean of the new distribution can be normalized to give a simple parameter, the new model is suitable as a regression model when the response is bounded, and is therefore an alternative to the beta regression model recently proposed in the statistical literature.

13.
Spatially isotropic max-stable processes have been used to model extreme spatial or space-time observations. One prominent model is the Brown-Resnick process, which has been successfully fitted to time series, spatial data, and space-time data. This paper extends the process to possibly anisotropic spatial structures. For observations on a regular grid, we prove strong consistency and asymptotic normality of pairwise maximum likelihood estimates for fixed and increasing spatial domains, when the number of observations in time tends to infinity. We also present a statistical test of isotropy versus anisotropy. We apply our test to precipitation data in Florida and present some diagnostic tools for model assessment. Finally, we present a method to predict conditional probability fields and apply it to the data.

14.
15.
This paper proposes a nonlinear statistical model describing the relationship between stock market returns and the rate of change of trading volume. Using this model, we prove that, depending on the parameter values, the return sequence {rn} converges in distribution either to an exponential Lévy stable distribution or to a Lévy stable distribution.

16.
In this paper, we use simulations to investigate the relationship between data envelopment analysis (DEA) efficiency and major production functions: Cobb-Douglas, the constant elasticity of substitution, and the transcendental logarithmic. Two DEA models were used: a constant returns-to-scale model (CCR) and a variable returns-to-scale model (BCC). Each model was investigated in two versions: with bounded and with unbounded weights. Two cases were simulated: with and without errors in the estimation of the production functions. Various degrees of homogeneity of the production function were tested, reflecting constant, increasing, and decreasing returns to scale. For the case with errors, three distribution functions were utilized: uniform, normal, and double exponential. For each distribution, 16 levels of the coefficient of variation (CV) were used. In all the tested cases, two measures were analysed: the percentage of efficient units (out of the total number of units) and the average efficiency score. We applied regression analysis to test the relationship between these two efficiency measures and the above parameters. Overall, we found that the degree of homogeneity has the largest effect on efficiency. Efficiency declines as the errors grow (as reflected by a larger CV and by the spread of the probability distribution away from the centre). Bounds on the weights tend to smooth the effect and bring the various DEA versions closer to one another. Both efficiency measures show similar regression tendencies. Finally, the relationship between the efficiency measures and the explanatory variables is quadratic.
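A minimal sketch of the input-oriented CCR (constant returns-to-scale) envelopment linear program for one decision-making unit; the BCC variant adds the convexity constraint sum(lam) == 1. The input/output data are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0, 5.0, 4.0],    # inputs  (rows) x DMUs (cols)
              [1.0, 2.0, 2.0, 3.0]])
Y = np.array([[2.0, 3.0, 4.0, 3.0]])   # outputs (rows) x DMUs (cols)

def ccr_efficiency(j0):
    m, n = X.shape
    s = Y.shape[0]
    # Decision vector: [theta, lam_1, ..., lam_n]; minimize theta.
    c = np.r_[1.0, np.zeros(n)]
    # Inputs:  sum_j lam_j * X[i, j] - theta * X[i, j0] <= 0
    A_in = np.c_[-X[:, [j0]], X]
    # Outputs: -sum_j lam_j * Y[r, j] <= -Y[r, j0]
    A_out = np.c_[np.zeros((s, 1)), -Y]
    res = linprog(c, A_ub=np.r_[A_in, A_out],
                  b_ub=np.r_[np.zeros(m), -Y[:, j0]],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun  # efficiency score theta* in (0, 1]

print([round(ccr_efficiency(j), 3) for j in range(X.shape[1])])
```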

17.
This article describes a simple computational method for obtaining the maximum likelihood estimates (MLEs) in nonlinear mixed-effects models when the random effects are assumed to have a nonnormal distribution. Many computer programs for fitting nonlinear mixed-effects models, such as PROC NLMIXED in SAS, require that the random effects have a normal distribution. However, there is often interest in either fitting models with nonnormal random effects or assessing the sensitivity of inferences to departures from the normality assumption for the random effects. When the random effects are assumed to have a nonnormal distribution, we show how the probability integral transform can be used, in conjunction with standard statistical software for fitting nonlinear mixed-effects models (e.g., PROC NLMIXED in SAS), to obtain the MLEs. Specifically, the probability integral transform is used to transform a normal random effect into a nonnormal random effect. The method is illustrated using a gamma frailty model for clustered survival data and a beta-binomial model for clustered binary data. Finally, the results of a simulation study examining the impact of misspecification of the distribution of the random effects are presented.
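A minimal sketch of the probability integral transform described above: a normal random effect z is converted into a gamma-distributed frailty via b = F_gamma^{-1}(Phi(z)). The gamma shape and scale are illustrative:

```python
import numpy as np
from scipy.stats import norm, gamma

rng = np.random.default_rng(6)
z = rng.standard_normal(10000)            # what NLMIXED-style software supplies
u = norm.cdf(z)                           # uniform(0, 1) by the PIT
frailty = gamma.ppf(u, a=2.0, scale=0.5)  # gamma(shape=2, scale=0.5) frailty

print(frailty.mean(), frailty.var())      # approx 1.0 and 0.5, as for gamma(2, 0.5)
```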

18.
In this paper we study the asymptotic tail behavior of a non-standard renewal risk model with a dependence structure and stochastic return. An insurance company is allowed to invest in financial assets such as risk-free bonds and risky stocks, and the price process of its portfolio is described by a geometric Lévy process. By restricting the claim-size distribution to the class of extended regular variation (ERV) and imposing a constraint on the Lévy process in terms of its Laplace exponent, we obtain a precise asymptotic formula for the tail probability of the stochastic present value of aggregate claims, which holds uniformly over all time horizons. We further prove that the corresponding ruin probability satisfies the same asymptotic formula.

19.
The returns on most financial assets exhibit kurtosis, and many also have skewed probability distributions. In this paper a general multivariate model for the probability distribution of asset returns, incorporating both kurtosis and skewness, is described. It is based on the multivariate extended skew-Student-t distribution. Salient features of the distribution are described and applied to the task of asset pricing. The paper shows that the market model is non-linear in general and that the sensitivity of asset returns to the return on the market portfolio is not the same as the conventional beta, although this measure does arise in special cases. It is shown that the variance of asset returns is time-varying and depends on the squared deviation of the market portfolio return from its location parameter. The first-order conditions for portfolio selection are described. Expected utility maximisers will select portfolios from an efficient surface, an analogue of the familiar mean-variance frontier, which may be implemented using quadratic programming.
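A toy sketch of tracing an efficient frontier by quadratic programming: minimize portfolio variance for a target mean. The paper's efficient *surface* adds a skewness dimension from the extended skew-Student-t; that term is omitted here, and all inputs are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.05, 0.08, 0.12])
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])

def frontier_point(target):
    cons = [{"type": "eq", "fun": lambda w: w @ mu - target},  # target mean
            {"type": "eq", "fun": lambda w: w.sum() - 1.0}]    # full investment
    res = minimize(lambda w: w @ Sigma @ w, x0=np.ones(3) / 3,
                   constraints=cons)
    return np.sqrt(res.fun)  # portfolio standard deviation

for t in (0.06, 0.08, 0.10):
    print(t, round(frontier_point(t), 4))
```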

20.
Nekoukhou et al. (Commun. Statist. Th. Meth., 2012) introduced a two-parameter discrete probability distribution, the discrete analog of the generalized exponential distribution (DGED for short). We derive conditions under which a solution of the system of likelihood equations exists and coincides with the maximum likelihood (ML) estimators of the DGED. These ML estimators coincide with certain moment estimators. An approximate computation based on Fisher's accumulation method is presented for the ML estimation of the unknown parameters. A simulation study is also presented. Two special cases of the DGED are then considered, and some of their statistical properties are provided. We also propose a linear regression-type model for estimating a parameter. Finally, we fit the DGED to a real data set and compare it with two other discrete distributions.
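A minimal sketch of direct ML fitting for a discrete generalized exponential law, assuming the commonly quoted pmf P(X = x) = (1 - p**(x+1))**a - (1 - p**x)**a for x = 0, 1, 2, ...; the paper's Fisher-type iterative scheme is replaced here by a generic numeric optimizer, and the data are toy counts:

```python
import numpy as np
from scipy.optimize import minimize

def dged_pmf(x, a, p):
    return (1.0 - p ** (x + 1)) ** a - (1.0 - p ** x) ** a

def nll(params, x):
    a, p = params
    if a <= 0 or not 0 < p < 1:
        return np.inf                     # stay inside the parameter space
    return -np.sum(np.log(dged_pmf(x, a, p) + 1e-300))

x = np.array([0, 1, 1, 2, 3, 0, 1, 4, 2, 1, 0, 2, 5, 1, 3])  # toy counts
res = minimize(nll, x0=np.array([1.0, 0.5]), args=(x,), method="Nelder-Mead")
print("a_hat, p_hat =", res.x)
```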
