Similar documents
20 similar documents found (search time: 78 ms)
1.
The study of extreme values is of crucial interest in many contexts. The concentration of pollutants, the sea level and the closing prices of stock indexes are only a few examples in which the occurrence of extreme values may lead to important consequences. In the present paper we are interested in detecting trends in sample extremes. A common statistical approach used to identify trends in extremes is based on the generalized extreme value distribution, which constitutes a building block for parametric models. However, semiparametric procedures offer several advantages when exploring data and checking the model. This paper outlines a semiparametric approach for smoothing sample extremes, based on nonlinear dynamic modelling of the generalized extreme value distribution. The relative merits of this approach are illustrated through two real examples. AMS 2000 Subject Classification: Primary 62G32, 62G05, 62M10

2.
Estimation of flood and drought frequencies is important for reservoir design and management, river pollution, ecology and drinking water supply. Through an example based on daily streamflow observations, we introduce a stepwise procedure for estimating quantiles of the hydrological extremes of floods and droughts. We fit the generalised extreme value (GEV) distribution by the method of block maxima and the generalised Pareto (GP) distribution by applying the peaks-over-threshold method. Maximum likelihood, penalized maximum likelihood and probability weighted moments are used for parameter estimation. We incorporate trends and seasonal variation in the models instead of splitting the data, and investigate how the observed number of extreme events, the chosen statistical model, and the parameter estimation method affect parameter estimates and quantiles. We find that seasonal variation should be included in the GEV distribution fitting for floods when using block sizes of less than one year. When modelling droughts, block sizes of one year or less are not recommended, as significant model bias becomes visible. We conclude that the different characteristics of floods and droughts influence the choices made in extreme value modelling within a common inferential strategy. This revised version was published online in March 2005 with corrections to the cover date.
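The two fitting routes named above (block maxima with a GEV, peaks over threshold with a GP) can be sketched with scipy; the gamma-distributed "streamflow" data, the 30-year block layout and the 99% threshold are invented for illustration, not the paper's data or choices:

```python
import numpy as np
from scipy.stats import genextreme, genpareto

rng = np.random.default_rng(42)
# Hypothetical daily streamflow: 30 "years" of 365 noisy observations.
daily = rng.gamma(shape=2.0, scale=50.0, size=(30, 365))

# Block maxima: one annual maximum per year, fitted with the GEV.
annual_max = daily.max(axis=1)
c_gev, loc, scale = genextreme.fit(annual_max)
q100_bm = genextreme.ppf(1 - 1 / 100, c_gev, loc, scale)  # 100-year level

# Peaks over threshold: exceedances above a high quantile, fitted with the GP.
u = np.quantile(daily, 0.99)
exceed = daily[daily > u] - u
c_gp, _, sigma = genpareto.fit(exceed, floc=0)
zeta = exceed.size / daily.size      # observed exceedance rate per day
n_per_year = 365
# 100-year level via the POT representation: solve zeta * GP-survival = 1/(100*365).
q100_pot = u + genpareto.ppf(1 - 1 / (100 * n_per_year * zeta), c_gp, 0, sigma)

print(q100_bm, q100_pot)
```

The two estimates target the same quantile but use different subsets of the data, which is exactly the sensitivity the abstract investigates.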

3.
Under a von Mises-type condition, the joint distribution of suitably normalized lower extreme generalized order statistics converges with respect to the variational distance to the asymptotic joint distribution of lower extreme order statistics. Rates of uniform convergence are established. It turns out that the rates of uniform convergence known for ordinary extremes carry over to lower generalized extremes. Finally, models of Weibull type are considered, where uniform rates are used in connection with model approximations in order to simplify statistical inference. AMS 2000 Subject Classification: Primary 60G70

4.
Extreme floods cause enormous losses, and extreme flood insurance is an effective means of spreading extreme flood risk. An extreme flood insurance model based on cooperation among the government, the market and the public is well suited to China's situation. Under this model, a stochastic optimization model of the risk portfolio of insurance companies and insured regions, with effective government participation, is established to guarantee the effective supply of and demand for extreme flood insurance and to provide a theoretical basis for setting reasonable premium rates. The stochastic optimization model fully accounts for the insurer's ruin probability, the stability of its operations, and the post-disaster recovery capacity of the insured regions. Finally, a convergence theorem for the model is given.
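The ruin-probability constraint at the heart of such a model can be sketched with a small Monte Carlo: claims arrive as a Poisson process with exponential severities, and a government premium subsidy shifts the insurer's ruin probability. Every figure below is hypothetical, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def ruin_probability(premium_rate, subsidy, initial_capital,
                     claim_rate=0.2, claim_mean=50.0,
                     horizon=20.0, n_sims=20_000):
    """Monte Carlo estimate of the insurer's finite-horizon ruin probability:
    extreme-flood claims arrive as a Poisson process and severities are
    exponential (all figures are hypothetical)."""
    ruined = 0
    for _ in range(n_sims):
        capital = initial_capital
        times = np.sort(rng.uniform(0.0, horizon,
                                    rng.poisson(claim_rate * horizon)))
        prev_t = 0.0
        for t in times:
            capital += (premium_rate + subsidy) * (t - prev_t)  # premium income
            capital -= rng.exponential(claim_mean)              # claim payout
            prev_t = t
            if capital < 0:
                ruined += 1
                break
    return ruined / n_sims

# Government participation (here a premium subsidy) lowers the ruin probability.
p_no_subsidy = ruin_probability(10.0, subsidy=0.0, initial_capital=100.0)
p_subsidy = ruin_probability(10.0, subsidy=5.0, initial_capital=100.0)
print(p_no_subsidy, p_subsidy)
```

The paper's actual model optimizes over a portfolio of regions under this kind of constraint; the sketch only shows the constraint itself.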

5.
Drought duration and drought intensity are the main factors determining drought disaster risk. Based on the extreme-process characteristics of drought disasters, extreme value theory is used to model the marginal distributions of these two risk variables, and an Archimedean copula is used to capture the extremal dependence structure between them. The Copula-EVT drought risk assessment model constructed in this paper reflects both the extreme process of drought formation and its influencing factors. An empirical analysis of the Bengbu station in the Huai River basin confirms that a Clayton-copula-EVT model fits the historical empirical distribution of drought risk at Bengbu well. The computed probability of an extreme drought with duration exceeding 5 months and intensity exceeding 7.45 is 3%, with a joint return period T∩(t,d) of 32.4 years. A study of the conditional return periods of drought duration and intensity shows that the value of drought intensity has the larger influence on the return period of drought risk.
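The joint "AND" return period used here can be sketched with a Clayton copula; the dependence parameter, marginal quantile levels and drought arrival rate below are invented for illustration, not the Bengbu estimates:

```python
import numpy as np

def clayton_cdf(u, v, theta):
    """Clayton copula: C(u, v) = (u**-theta + v**-theta - 1)**(-1/theta)."""
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

def and_return_period(u, v, theta, mu_years):
    """'AND' joint return period: mean interarrival time of drought events
    divided by P(duration AND intensity both exceed their marginal quantiles),
    where that probability is 1 - u - v + C(u, v)."""
    p_joint = 1.0 - u - v + clayton_cdf(u, v, theta)
    return mu_years / p_joint

# Illustrative values -- theta and mu_years are assumptions.
T = and_return_period(u=0.9, v=0.9, theta=2.0, mu_years=1.0)
print(T)   # roughly 40 years, versus 100 years under independence
```

Positive Clayton dependence makes joint exceedances more likely than under independence, which shortens the joint return period, the effect the abstract quantifies.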

6.
We consider Stochastic Volatility processes with heavy tails and possible long memory in volatility. We study the limiting conditional distribution of future events given that some present or past event was extreme (i.e. above a level which tends to infinity). Even though extremes of stochastic volatility processes are asymptotically independent (in the sense of extreme value theory), these limiting conditional distributions differ from the i.i.d. case. We introduce estimators of these limiting conditional distributions and study their asymptotic properties. If volatility has long memory, then the rate of convergence and the limiting distribution of the centered estimators can depend on the long memory parameter (Hurst index).

7.
The method of so-called constrained stochastic simulation is introduced. This method specifies how to efficiently generate time series around some specific event in a normal process. All events that can be expressed by means of a linear condition (constraint) can be dealt with. Two examples are given in the paper: the generation of stochastic time series around local maxima, and the generation of stochastic time series around a combination of a local minimum and maximum with a specified time separation. The constrained time series turn out to be a combination of the original process and several correction terms, which include the autocorrelation function and its time derivatives. For the application concerning local maxima it is shown that the presented method is in line with properties of a normal process near a local maximum as found in the literature. The method can, for example, be applied to generate wind gusts in order to assess the extreme loading of wind turbines. AMS 2000 Subject Classification: Primary 60G15, 60G70, 62G32; Secondary 62P30

8.
An analysis is presented of the hydrological risk associated with decisions based on stochastic flood models. The maxima of a stream flow are described by a marked Poisson process with a cyclic trend and exponentially distributed marks. Typical design criteria, such as the expected largest exceedance of a fixed level in a given period, are derived from the extreme value process. The approach adopted is based on the whole record of flood data, which consists of the number, the occurrence times and the exceedances of the maxima in the observation period. Thus, compared to the series of largest annual exceedances, more information is extracted. This yields an improvement in the evaluation of risk.
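In the homogeneous special case (no cyclic trend), the largest exponential mark over a period has a closed-form Gumbel-type distribution; the rate, period and mark scale below are hypothetical, and the formula is checked by simulation:

```python
import numpy as np

rng = np.random.default_rng(7)

lam, T, beta = 3.0, 10.0, 2.0   # hypothetical rate, period length, mean mark

def p_max_below(x):
    """P(largest exceedance <= x) when a Poisson(lam*T) number of marks are
    i.i.d. Exp(scale beta): exp(-lam*T*exp(-x/beta)), a Gumbel law in x."""
    return np.exp(-lam * T * np.exp(-x / beta))

# Check the closed form against simulation (max of an empty year is 0).
maxima = np.array([
    rng.exponential(beta, rng.poisson(lam * T)).max(initial=0.0)
    for _ in range(50_000)
])
x = 8.0
print(p_max_below(x), (maxima <= x).mean())
```

The closed form follows by summing over the Poisson number of marks: P(M ≤ x) = exp(-λT(1 - F(x))) with F the exponential CDF.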

9.
The last few years have seen a significant increase in publicly available software specifically targeted to the analysis of extreme values. This reflects the increase in the use of extreme value methodology by the general statistical community. The software that is available for the analysis of extremes has evolved in essentially independent units, with most forming extensions of larger software environments. An inevitable consequence is that these units are spread about the statistical landscape. Scientists seeking to apply extreme value methods must spend considerable time and effort in determining whether the currently available software can be usefully applied to a given problem. We attempt to simplify this process by reviewing the current state, and suggest future approaches for software development. These suggestions aim to provide a basis for an initiative leading to the successful creation and distribution of a flexible and extensible set of tools for extreme value practitioners and researchers alike. In particular, we propose a collaborative framework for which cooperation between developers is of fundamental importance. AMS 2000 Subject Classification: Primary 62P99

10.
Estimating the probability of extreme temperature events is difficult because of limited records across time and the need to extrapolate the distributions of these events, as opposed to just the mean, to locations where observations are not available. Another related issue is the need to characterize the uncertainty in the estimated probability of extreme events at different locations. Although the tools for statistical modeling of univariate extremes are well developed, extending these tools to model spatial extreme data is an active area of research. In this paper, in order to make inference about spatial extreme events, we introduce a new nonparametric model for extremes. We present a Dirichlet-based copula model that is a flexible alternative to parametric copula models such as the normal and t-copula. The proposed modelling approach is fitted using a Bayesian framework that allows us to take into account different sources of uncertainty in the data and models. We apply our methods to annual maximum temperature values in the east-south-central United States.

11.
Heatwaves are defined as a set of hot days and nights that cause a marked short-term increase in mortality. Obtaining accurate estimates of the probability of an event lasting many days is important. Previous studies of temporal dependence of extremes have assumed either a first-order Markov model or a particularly strong form of extremal dependence, known as asymptotic dependence. Neither of these assumptions is appropriate for the heatwaves that we observe in our data. A first-order Markov assumption does not capture whether the previous temperature values have been increasing or decreasing, and asymptotic dependence does not allow for asymptotic independence, a broad class of extremal dependence exhibited by many processes, including all non-trivial Gaussian processes. This paper provides a kth-order Markov model framework that can encompass both asymptotic dependence and asymptotic independence structures. It uses a conditional approach developed for multivariate extremes coupled with copula methods for time series. We provide novel methods for the selection of the order of the Markov process that are based upon only the structure of the extreme events. Under this new framework, the observed daily maximum temperatures at Orleans, in central France, are found to be well modelled by an asymptotically independent third-order extremal Markov model. We estimate extremal quantities, such as the probability of a heatwave event lasting as long as the devastating 2003 European heatwave. Critically, our method enables the first reliable assessment of the sensitivity of such estimates to the choice of the order of the Markov process.
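Why temporal dependence matters for heatwave-length probabilities can be seen even in the crude first-order case the paper improves upon: a persistent Markov chain makes long runs of hot days orders of magnitude more likely than independence would. The values of p_hot and p_persist below are invented:

```python
# Marginal probability that a day exceeds the heat threshold, and an assumed
# first-order Markov persistence P(hot tomorrow | hot today).
p_hot = 0.05
p_persist = 0.6

def p_run_at_least(k, p_start, p_next):
    """P(a run of at least k hot days starts on a given day): one hot day
    followed by k - 1 further hot days."""
    return p_start * p_next ** (k - 1)

p_indep = p_run_at_least(5, p_hot, p_hot)      # independent days
p_markov = p_run_at_least(5, p_hot, p_persist)  # persistent days
print(p_indep, p_markov)
```

The ratio here is (0.6/0.05)^4, over four orders of magnitude, which is why the choice of dependence model (and of the Markov order) dominates heatwave risk estimates.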

12.
We develop a vector generalised linear model to describe the influence of the atmospheric circulation on extreme daily precipitation across the UK. The atmospheric circulation is represented by three covariates, namely synoptic scale airflow strength, direction and vorticity; the extremes are represented by the monthly maxima of daily precipitation, modelled by the generalised extreme value distribution (GEV). The model parameters for data from 689 rain gauges across the UK are estimated using a maximum likelihood estimator. Within the framework of vector generalised linear models, various plausible models exist to describe the influence of the individual covariates, possible nonlinearities in the covariates and seasonality. We selected the final model based on the Akaike information criterion (AIC), and evaluated the predictive power of individual covariates by means of quantile verification scores and leave-one-out cross validation. The final model conditions the location and scale parameter of the GEV on all three covariates; the shape parameter is modelled as a constant. The relationships between strength and vorticity on the one hand, and the GEV location and scale parameters on the other hand are modelled as natural cubic splines with two degrees of freedom. The influence of direction is parameterised as a sine with amplitude and phase. The final model has a common parameterisation for the whole year. Seasonality is partly captured by the covariates themselves, but mostly by an additional annual cycle that is parameterised as a phase-shifted sine and accounts for physical influences that we have not attempted to explicitly model, such as humidity.
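The sine-with-amplitude-and-phase parameterisation of a directional covariate in a GEV location can be sketched as a small maximum-likelihood fit; the synthetic data and all parameter values below are invented and the setup is far simpler than the paper's full vector GLM:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genextreme

rng = np.random.default_rng(5)

# Synthetic monthly maxima whose GEV location follows a sine in airflow
# direction, mimicking the amplitude-and-phase parameterisation.
direction = rng.uniform(0, 2 * np.pi, 600)
y = genextreme.rvs(c=-0.1, loc=10.0 + 2.0 * np.sin(direction + 0.5),
                   scale=1.5, random_state=rng)

def nll(params):
    """Negative log-likelihood of a GEV with location b0 + amp*sin(dir + phase)."""
    b0, amp, phase, log_scale, shape = params
    loc = b0 + amp * np.sin(direction + phase)
    return -genextreme.logpdf(y, c=shape, loc=loc, scale=np.exp(log_scale)).sum()

fit = minimize(nll, x0=[float(np.mean(y)), 1.0, 0.0, 0.0, -0.1],
               method="Nelder-Mead", options={"maxiter": 5000})
b0, amp, phase, log_scale, shape = fit.x
print(b0, amp, phase)
```

With the data simulated from the same model, the fit recovers the intercept and sine amplitude used to generate it (up to the usual sign/phase ambiguity of the sine).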

13.
14.
The stochastic block model (SBM) and its variants are popular models used in community detection for network data. In this article, we propose a feature-adjusted stochastic block model (FASBM) to capture the impact of node features on the network links, as well as to detect the residual community structure beyond that explained by the node features. The proposed model can accommodate multiple node features and estimates the form of the feature impacts from the data. Moreover, unlike many existing algorithms that are limited to binary-valued interactions, the proposed FASBM model and inference approaches are easily applied to relational data generated from any exponential-family distribution. We illustrate the methods on simulated networks and on two real-world networks: a brain network and a US air-transportation network.

15.
We compared flood mapping techniques using the one-dimensional (1D) hydraulic model HEC-RAS and the two-dimensional (2D) LISFLOOD-FP for a 10-km reach of the Gorgan River in Iran. Both models were run using the same hydrologic input data. The input to the models was a steady discharge of 90 cm, corresponding to a flood peak that occurred on March 25, 2012. Flood maps generated using these two models were compared with an observed flood inundation map using the F-statistic. The roughness coefficients of the models were calibrated by maximizing the value of the F-statistic. Based on the F-statistic, LISFLOOD-FP gives a slightly better result (F = 0.69) than HEC-RAS (F = 0.67). Visual comparison of the flood extents generated by the two models showed reasonably good agreement. Validation was done using a flood event that occurred on May 31, 2014. The LISFLOOD-FP model gave the better result for validation as well. The 2D model showed more consistency than the 1D model.
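The F-statistic used for map comparison is commonly defined as the intersection of the observed and modelled flooded areas divided by their union (an assumption here, since the abstract does not spell out the formula); a toy sketch:

```python
import numpy as np

def f_statistic(observed, modelled):
    """Measure of fit F = |intersection| / |union| of the observed and
    modelled flooded areas, given as boolean wet/dry grids."""
    observed = np.asarray(observed, bool)
    modelled = np.asarray(modelled, bool)
    inter = np.logical_and(observed, modelled).sum()
    union = np.logical_or(observed, modelled).sum()
    return inter / union

# Toy 1D "maps": 6 observed wet cells; the model misses one and adds one.
obs = np.array([1, 1, 1, 1, 1, 1, 0, 0, 0, 0], bool)
mod = np.array([1, 1, 1, 1, 1, 0, 1, 0, 0, 0], bool)
print(f_statistic(obs, mod))  # 5 shared wet cells / 7 cells wet in either map
```

F equals 1 only for a perfect match, and both missed and spuriously flooded cells reduce it, which is why it is a natural calibration target for the roughness coefficients.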

16.
An application of a bivariate threshold method to stock indexes (cited by 4: 0 self-citations, 4 by others)
One extension of the threshold method to bivariate extremes considers the joint distribution of two variables. The method is built on the point-process representation of bivariate extremes. This paper analyses the log returns of the Shanghai and Shenzhen daily closing indexes over 1992-1999 using both a parametric (logistic) model and a nonparametric model, and presents the results.
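The logistic model named in the abstract has a simple copula form; a small sketch (the 0.95 marginal levels and the alpha values are arbitrary illustrations):

```python
import numpy as np

def logistic_ev_copula(u, v, alpha):
    """Bivariate logistic extreme value copula,
    C(u, v) = exp(-((-ln u)**(1/alpha) + (-ln v)**(1/alpha))**alpha),
    with 0 < alpha <= 1: alpha = 1 gives independence, alpha -> 0
    gives perfect dependence."""
    return np.exp(-(((-np.log(u)) ** (1 / alpha)
                     + (-np.log(v)) ** (1 / alpha)) ** alpha))

# At the joint 95% level, dependence raises the copula value above u * v,
# making simultaneous extreme losses on the two indexes more likely.
c_indep = logistic_ev_copula(0.95, 0.95, alpha=1.0)  # equals 0.95 * 0.95
c_dep = logistic_ev_copula(0.95, 0.95, alpha=0.5)
print(c_indep, c_dep)
```

Fitting alpha to joint threshold exceedances of the two return series is the parametric half of the analysis the abstract describes.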

17.
18.
Based on multiple-type recurrent event data, this paper discusses a new additive-multiplicative rates regression model. The model has two parts: the first is an additive Aalen model, in which covariate effects are additive and time-dependent; the second is a Cox regression model, in which covariates have multiplicative effects. Using the method of estimating equations, an estimation procedure for the unknown parameters and nonparametric functions of the model is given, and the consistency and asymptotic normality of the resulting estimators are proved using modern empirical process theory.

19.
Recently there has been a lot of effort to model extremes of spatially dependent data. These efforts fall into two distinct groups: the study of max-stable processes, together with the development of statistical models within this framework; and the use of more pragmatic, flexible models based on Bayesian hierarchical models (BHMs) and simulation-based inference techniques. Each modelling strategy has its strong and weak points. While max-stable models capture the local behaviour of spatial extremes correctly, hierarchical models based on the conditional independence assumption lack the asymptotic arguments that max-stable models enjoy. On the other hand, they are very flexible in allowing the introduction of physical plausibility into the model. When the objective of the data analysis is to estimate return levels or to krige extreme values in space, capturing the correct dependence structure between the extremes is crucial, and max-stable processes are better suited for these purposes. However, when the primary interest is to explain the sources of variation in extreme events, Bayesian hierarchical modelling is a very flexible tool due to the ease with which random effects are incorporated in the model. In this paper we model a data set on Portuguese wildfires to show the flexibility of BHMs in incorporating spatial dependencies acting at different resolutions.

20.
Parsimonious extreme value copula models with O(d) parameters for d observed variables of extrema are presented. These models utilize the dependence characteristics, including factor and tree structures, assumed on the underlying variables that give rise to the data of extremes. For factor structures, a class of parametric models is obtained by taking the extreme value limit of factor copulas with non-zero tail dependence. An alternative model, suitable for both factor and tree structures, imposes constraints on the parametric Hüsler-Reiss copula to obtain representations in terms of O(d) other parameters. Dependence properties are discussed. As the full density is often intractable, the method of composite (pairwise) likelihood is used for model inference. Procedures to improve the stability of bivariate density evaluation are also developed. The proposed models are applied to two data examples: one for annual extreme river flows and one for bimonthly extremes of daily stock returns.
