Similar Documents
20 similar documents found (search time: 46 ms)
1.
A realized generalized autoregressive conditional heteroskedastic (GARCH) model is developed within a Bayesian framework for the purpose of forecasting value at risk and conditional value at risk. Student-t and skewed-t return distributions are combined with Gaussian and Student-t distributions in the measurement equation to forecast tail risk in eight international equity index markets over a 4-year period. Three realized measures are considered within this framework. A Bayesian estimator is developed that compares favourably, in simulations, with maximum likelihood, both in estimation and forecasting. The realized GARCH models show a marked improvement compared with ordinary GARCH for both value-at-risk and conditional value-at-risk forecasting. This improvement is consistent across a variety of data and choices of distribution. Realized GARCH models incorporating a skewed Student-t distribution for returns are favoured overall, with the choice of measurement equation error distribution and realized measure being of lesser importance. Copyright © 2017 John Wiley & Sons, Ltd.
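As a rough sketch of how a realized measure enters the variance dynamics, the toy below computes a one-step-ahead VaR from the log-linear realized GARCH volatility recursion. It omits the paper's return and measurement equations and the Bayesian estimation entirely, and every parameter value (omega, beta, gamma) is invented for illustration:

```python
import math, random

def realized_garch_var(realized, omega=-0.2, beta=0.6, gamma=0.35, z=1.645):
    """One-step-ahead 95% VaR from the log-linear realized GARCH recursion
    log h_t = omega + beta*log h_{t-1} + gamma*log x_{t-1},
    where x_t is a realized volatility measure. Parameter values are
    illustrative, not estimates from any data."""
    log_h = math.log(realized[0])          # initialize at the first measure
    for x in realized[1:]:
        log_h = omega + beta * log_h + gamma * math.log(x)
    return z * math.sqrt(math.exp(log_h))  # VaR reported as a positive loss

random.seed(1)
rv = [0.1 + abs(random.gauss(0, 1)) for _ in range(250)]  # fake realized measures
var95 = realized_garch_var(rv)
print(round(var95, 3))
```

With beta below one and positive realized measures, the log-variance recursion stays stable, which is why the sketch needs no safeguards.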

2.
In this paper, we elaborate how Poisson regression models of different complexity can be used in order to model absolute transaction price changes of an exchange-traded security. When combined with an adequate autoregressive conditional duration model, our modelling approach can be used to construct a complete modelling framework for a security's absolute returns at transaction level, and thus for a model-based quantification of intraday volatility and risk. We apply our approach to absolute price changes of an option on the XETRA DAX index based on quote-by-quote data from the EUREX exchange and find that within our Bayesian framework a Poisson generalized linear model (GLM) with a latent AR(1) process in the mean is the best model for our data according to the deviance information criterion (DIC). While, according to our modelling results, the price development of the underlying, the intrinsic value of the option at the time of the trade, the number of new quotations between two price changes, the time between two price changes and the bid-ask spread have significant effects on the size of the price changes, this is not the case for the remaining time to maturity of the option. Copyright © 2006 John Wiley & Sons, Ltd.
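A minimal frequentist sketch of the core ingredient, a Poisson GLM with log link fitted by Newton-Raphson, is shown below. It is not the paper's Bayesian latent-AR(1) model; the single covariate and the data are invented:

```python
import math

def fit_poisson_glm(x, y, iters=25):
    """Fit E[y] = exp(b0 + b1*x) by Newton-Raphson (equivalently IRLS) on
    the Poisson log-likelihood, for a single covariate plus intercept."""
    b0, b1 = math.log(sum(y) / len(y)), 0.0   # start at the mean-only model
    for _ in range(iters):
        mu = [math.exp(b0 + b1 * xi) for xi in x]
        # score vector X'(y - mu) and 2x2 Fisher information X' diag(mu) X
        s0 = sum(yi - mi for yi, mi in zip(y, mu))
        s1 = sum(xi * (yi - mi) for xi, yi, mi in zip(x, y, mu))
        i00 = sum(mu)
        i01 = sum(xi * mi for xi, mi in zip(x, mu))
        i11 = sum(xi * xi * mi for xi, mi in zip(x, mu))
        det = i00 * i11 - i01 * i01
        b0 += (i11 * s0 - i01 * s1) / det     # Newton step, 2x2 solve
        b1 += (i00 * s1 - i01 * s0) / det
    return b0, b1

# synthetic counts whose mean follows exp(0.5 + 0.8*x)
xs = [i / 10 for i in range(20)]
ys = [round(math.exp(0.5 + 0.8 * xi)) for xi in xs]
b0_hat, b1_hat = fit_poisson_glm(xs, ys)
print(round(b0_hat, 2), round(b1_hat, 2))
```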

3.
A useful application for copula functions is modeling the dynamics in the conditional moments of a time series. Using copulas, one can go beyond the traditional linear ARMA (p,q) modeling, which is solely based on the behavior of the autocorrelation function, and capture the entire dependence structure linking consecutive observations. This type of serial dependence is best represented by a canonical vine decomposition, and we illustrate this idea in the context of emerging stock markets, modeling linear and nonlinear temporal dependences of Brazilian series of realized volatilities. However, the analysis of intraday data collected from e-markets poses some specific challenges. The large amount of real-time information calls for heavy data manipulation, which may result in gross errors. Atypical points in high-frequency intraday transaction prices may contaminate the series of daily realized volatilities, thus affecting classical statistical inference and leading to poor predictions. Therefore, in this paper, we propose to robustly estimate pair-copula models using the weighted minimum distance and the weighted maximum likelihood estimates (WMLE). The excellent performance of these robust estimates for pair-copula models is assessed through a comprehensive set of simulations, from which the WMLE emerges as the best option for members of the elliptical copula family. We evaluate and compare alternative volatility forecasts and show that the robustly estimated canonical vine-based forecasts outperform the competitors. Copyright © 2013 John Wiley & Sons, Ltd.
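The pair-copula building block can be illustrated by sampling from a bivariate Gaussian copula, the simplest member of the elliptical family mentioned above; the vine decomposition and the robust WMLE fitting are well beyond a snippet, and all sizes and parameters here are invented:

```python
import math, random

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def gaussian_copula_pairs(n, rho, seed=0):
    """Draw (u, v) pairs whose dependence is a Gaussian copula with
    correlation rho; both margins are uniform on (0, 1)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        z1 = rng.gauss(0, 1)
        z2 = rho * z1 + math.sqrt(1 - rho * rho) * rng.gauss(0, 1)
        out.append((phi(z1), phi(z2)))   # probability-integral transform
    return out

pairs = gaussian_copula_pairs(2000, 0.8)
# crude dependence check: fraction of pairs on the same side of 0.5
conc = sum((u > 0.5) == (v > 0.5) for u, v in pairs) / len(pairs)
print(round(conc, 3))
```

For rho = 0.8 the same-quadrant probability is 0.5 + arcsin(0.8)/pi, roughly 0.80, so the empirical fraction should land well above one half.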

4.
Bayesian l0-regularized least squares is a variable selection technique for high-dimensional predictors. The challenge is optimizing a nonconvex objective function via search over a model space consisting of all possible predictor combinations. Spike-and-slab (aka Bernoulli-Gaussian) priors are the gold standard for Bayesian variable selection, with the caveat of computational speed and scalability. Single best replacement (SBR) provides a fast, scalable alternative. We provide a link between Bayesian regularization and proximal updating, which yields an equivalence between finding a posterior mode and a posterior mean with a different regularization prior. This allows us to use SBR to find the spike-and-slab estimator. To illustrate our methodology, we provide simulation evidence and a real data example on the statistical properties and computational efficiency of SBR versus direct posterior sampling using spike-and-slab priors. Finally, we conclude with directions for future research.
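The proximal view of l0 regularization can be sketched with iterative hard thresholding, whose proximal map simply zeroes small coordinates. This is a simplified stand-in for single best replacement, not the authors' algorithm; the design, penalty, and step size below are invented:

```python
import random

def iht(X, y, lam, step, iters=200):
    """Iterative hard thresholding for min ||y - Xb||^2 + lam*||b||_0.
    The proximal operator of the l0 penalty zeroes any coordinate whose
    magnitude falls below sqrt(2*lam*step)."""
    p = len(X[0])
    b = [0.0] * p
    thresh = (2.0 * lam * step) ** 0.5
    for _ in range(iters):
        r = [yi - sum(Xi[j] * b[j] for j in range(p)) for Xi, yi in zip(X, y)]
        g = [sum(X[i][j] * r[i] for i in range(len(y))) for j in range(p)]
        b = [bj + step * gj for bj, gj in zip(b, g)]          # gradient step
        b = [bj if abs(bj) >= thresh else 0.0 for bj in b]    # hard threshold
    return b

rng = random.Random(3)
n, p = 60, 5
X = [[rng.gauss(0, 1) for _ in range(p)] for _ in range(n)]
beta = [3.0, 0.0, -2.0, 0.0, 0.0]                             # sparse truth
y = [sum(xi * bi for xi, bi in zip(row, beta)) + rng.gauss(0, 0.1) for row in X]
b_hat = iht(X, y, lam=0.5, step=1.0 / (2 * n))
print([round(bj, 2) for bj in b_hat])
```

The step size is chosen conservatively below the inverse Lipschitz constant of the quadratic term so the gradient iteration contracts.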

5.
A multiple-regime threshold nonlinear financial time series model, with a fat-tailed error distribution, is discussed and Bayesian estimation and inference are considered. Furthermore, approximate Bayesian posterior model comparison among competing models with different numbers of regimes is considered, which is effectively a test for the number of required regimes. An adaptive Markov chain Monte Carlo (MCMC) sampling scheme is designed, while importance sampling is employed to estimate Bayesian residuals for model diagnostic testing. Our modeling framework provides a parsimonious representation of well-known stylized features of financial time series and facilitates statistical inference in the presence of high or explosive persistence and dynamic conditional volatility. We focus on the three-regime case, where the main feature of the model is to capture mean and volatility asymmetries in financial markets while allowing an explosive volatility regime. A simulation study highlights the properties of our MCMC estimators, as well as the accuracy and favourable performance of the posterior model probability approximation method as a model selection tool, compared with a deviance criterion. An empirical study of eight international oil and gas markets provides strong support for the three-regime model over its competitors, in most markets, in terms of model posterior probability and in showing three distinct regime behaviours: falling/explosive, dormant and rising markets. Copyright © 2009 John Wiley & Sons, Ltd.
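A stripped-down illustration of threshold nonlinearity: simulate a two-regime threshold AR(1) with a known threshold and recover the regime slopes by per-regime least squares. The paper's three-regime structure, fat tails, and MCMC machinery are all omitted, and the parameter values are invented:

```python
import random

def simulate_tar(n, phi_low, phi_high, thresh=0.0, seed=7):
    """Simulate a two-regime threshold AR(1): the AR coefficient depends on
    whether the previous observation lies below or above the threshold."""
    rng = random.Random(seed)
    y = [0.0]
    for _ in range(n):
        phi = phi_low if y[-1] <= thresh else phi_high
        y.append(phi * y[-1] + rng.gauss(0, 1))
    return y

def fit_tar(y, thresh=0.0):
    """Per-regime least-squares AR(1) slopes (no intercept)."""
    lo_num = lo_den = hi_num = hi_den = 0.0
    for prev, cur in zip(y, y[1:]):
        if prev <= thresh:
            lo_num += prev * cur; lo_den += prev * prev
        else:
            hi_num += prev * cur; hi_den += prev * prev
    return lo_num / lo_den, hi_num / hi_den

y = simulate_tar(5000, phi_low=-0.5, phi_high=0.7)
phi_lo_hat, phi_hi_hat = fit_tar(y)
print(round(phi_lo_hat, 2), round(phi_hi_hat, 2))
```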

6.
Nonlinear flexural vibration of a symmetric rectangular honeycomb sandwich thin panel simply supported along all four edges is studied in this paper. The nonlinear governing equations of the symmetric rectangular honeycomb sandwich panel subjected to transverse excitations are simplified to a set of two ordinary differential equations by the Galerkin method. Based on the homotopy analysis method, the averaged equations of the primary resonance and harmonic resonances are obtained. The influence of structural parameters, the transverse exciting force amplitude, and transverse damping on the symmetric rectangular honeycomb sandwich panel is discussed using the analytic approximation method. Compared with the results obtained by the single-mode modeling technique, the results obtained by the double-mode modeling technique change the softening and hardening nonlinear characteristics when Ω ≈ ω1, ω1/3, and ω2/3.

7.
This work deals with log-symmetric regression models, which are particularly useful when the response variable is continuous, strictly positive, and follows an asymmetric distribution, with the possibility of modeling atypical observations by means of robust estimation. In these regression models, the distribution of the random errors is a member of the log-symmetric family, which is composed of the log-contaminated-normal, log-hyperbolic, log-normal, log-power-exponential, log-slash and log-Student-t distributions, among others. One way to select the best family member in log-symmetric regression models is to use information criteria. In this paper, we formulate log-symmetric regression models and conduct a Monte Carlo simulation study to investigate the accuracy of popular information criteria, such as the Akaike, Bayesian, and Hannan-Quinn criteria and their respective corrected versions, in choosing adequate log-symmetric regression models. As a business application, a movie data set assembled by the authors is analyzed to compare and obtain the best possible log-symmetric regression model for box office receipts. The results provide relevant information for model selection criteria in log-symmetric regressions and for the movie industry. Economic implications of our study are discussed after the numerical illustrations.

8.
To understand and predict chronological dependence in the second-order moments of asset returns, this paper considers a multivariate hysteretic autoregressive (HAR) model with generalized autoregressive conditional heteroskedasticity (GARCH) specification and time-varying correlations, providing a new method to describe a nonlinear dynamic structure of the target time series. The hysteresis variable governs the nonlinear dynamics of the proposed model, in which the regime switch can be delayed if the hysteresis variable lies in a hysteresis zone. The proposed setup combines three useful components for modeling economic and financial data: (1) the multivariate HAR model, (2) multivariate hysteretic volatility models, and (3) a dynamic conditional correlation structure. This research further incorporates an adapted multivariate Student-t innovation, based on a scale mixture of normals representation, in the HAR model to allow for dependence and differently shaped innovation components. This study estimates bivariate volatilities, value at risk, and marginal expected shortfall based on a Bayesian sampling scheme through adaptive Markov chain Monte Carlo (MCMC) methods, thus allowing all unknown model parameters and forecasts to be estimated simultaneously. Lastly, the proposed methods are applied to both simulated and real examples, helping to jointly measure industry downside tail risk.

9.
In count data regression there can be several problems that prevent the use of the standard Poisson log-linear model: overdispersion, caused by unobserved heterogeneity or correlation, excess of zeros, non-linear effects of continuous covariates or of time scales, and spatial effects. We develop Bayesian count data models that can deal with these issues simultaneously and within a unified inferential approach. Models for overdispersed or zero-inflated data are combined with semiparametrically structured additive predictors, resulting in a rich class of count data regression models. Inference is fully Bayesian and is carried out by computationally efficient MCMC techniques. Simulation studies investigate performance, in particular how well different model components can be identified. Applications to patent data and to data from a car insurance company illustrate the potential and, to some extent, the limitations of our approach. Copyright © 2006 John Wiley & Sons, Ltd.
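Zero inflation, one of the issues listed above, can be handled in a few lines by an EM fit of a zero-inflated Poisson mixture. This is a toy maximum-likelihood sketch, not the paper's Bayesian semiparametric machinery; the true parameters below are invented:

```python
import math, random

def rpois(lam, rng):
    """Knuth's multiplicative Poisson sampler (fine for small lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def fit_zip(y, iters=100):
    """EM for the zero-inflated Poisson: y ~ pi*delta_0 + (1-pi)*Poisson(lam)."""
    n, n0, total = len(y), sum(1 for v in y if v == 0), sum(y)
    pi_hat, lam = 0.5, max(total / n, 0.1)
    for _ in range(iters):
        # E-step: posterior probability that an observed zero is structural
        w = pi_hat / (pi_hat + (1 - pi_hat) * math.exp(-lam))
        # M-step: update mixture weight and Poisson mean
        pi_hat = n0 * w / n
        lam = total / (n - n0 * w)
    return pi_hat, lam

rng = random.Random(11)
true_pi, true_lam = 0.3, 2.5
data = [0 if rng.random() < true_pi else rpois(true_lam, rng) for _ in range(4000)]
pi_hat, lam_hat = fit_zip(data)
print(round(pi_hat, 2), round(lam_hat, 2))
```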

10.
Michael Wenzel, PAMM 2004, 4(1): 382-383
A hierarchical model for dimensional adaptivity, using mixed beam-shell structures, is presented. Thin-walled beam structures are often calculated on the basis of beam theories. Parts of the global structure, such as framework corners, are usually analyzed with shell elements in a separate model. To minimize the modeling and calculation expense, a transition element to couple beam and shell structures is used. A dimensionally adaptive algorithm is introduced to automate the procedure of modeling and calculation. (© 2004 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

11.
We analyze the reliability of NASA composite pressure vessels by using a new Bayesian semiparametric model. The data set consists of lifetimes of pressure vessels, wrapped with a Kevlar fiber, grouped by spool, subject to different stress levels; 10% of the data are right censored. The model that we consider is a regression on the log-scale for the lifetimes, with fixed (stress) and random (spool) effects. The prior of the spool parameters is nonparametric, namely they are a sample from a normalized generalized gamma process, which encompasses the well-known Dirichlet process. The nonparametric prior is assumed to robustify inferences to misspecification of the parametric prior. Here, this choice of likelihood and prior yields a new Bayesian model in reliability analysis. Via a Bayesian hierarchical approach, it is easy to analyze the reliability of the Kevlar fiber by predicting quantiles of the failure time when a new spool is selected at random from the population of spools. Moreover, for comparative purposes, we review the most interesting frequentist and Bayesian models analyzing this data set. Our credibility intervals of the quantiles of interest for a new random spool are narrower than those derived by previous Bayesian parametric literature, although the predictive goodness-of-fit performances are similar. Finally, as an original feature of our model, by means of the discreteness of the random-effects distribution, we are able to cluster the spools into three different groups. Copyright © 2012 John Wiley & Sons, Ltd.

12.
Generalizing model companions from model theory we define companions of pieces of canonical partitions of Polish G-spaces. This unifies several constructions from logic. The central problem of the paper is the existence of companions which form a G-orbit which is a Gδ-set. We describe companions of some typical G-spaces. (© 2005 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

13.
Various alignment problems arising in cryo-electron microscopy, community detection, time synchronization, computer vision, and other fields fall into a common framework of synchronization problems over compact groups such as ℤ/L, U(1), or SO(3). The goal in such problems is to estimate an unknown vector of group elements given noisy relative observations. We present an efficient iterative algorithm to solve a large class of these problems, allowing for any compact group, with measurements on multiple "frequency channels" (Fourier modes, or more generally, irreducible representations of the group). Our algorithm is a highly efficient iterative method following the blueprint of approximate message passing (AMP), which has recently arisen as a central technique for inference problems such as structured low-rank estimation and compressed sensing. We augment the standard ideas of AMP with ideas from representation theory so that the algorithm can work with distributions over general compact groups. Using standard but nonrigorous methods from statistical physics, we analyze the behavior of our algorithm on a Gaussian noise model, identifying phases where we believe the problem is easy, (computationally) hard, and (statistically) impossible. In particular, such evidence predicts that our algorithm is information-theoretically optimal in many cases, and that the remaining cases exhibit statistical-to-computational gaps. © 2018 Wiley Periodicals, Inc.
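A much simpler baseline than AMP, spectral relaxation by the power method, already illustrates synchronization over U(1): estimate unit-modulus phases from a noisy matrix of relative observations. The Gaussian noise model and problem sizes below are invented for illustration:

```python
import cmath, math, random

def synchronize_u1(C, iters=100):
    """Estimate phases from a relative-measurement matrix
    C[i][j] ~ exp(1j*(theta_i - theta_j)) + noise by power iteration on C,
    then project each entry of the leading eigenvector to the unit circle."""
    n = len(C)
    v = [cmath.exp(1j * 2 * math.pi * k / n) for k in range(n)]  # any start
    for _ in range(iters):
        w = [sum(C[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(abs(x) ** 2 for x in w))
        v = [x / norm for x in w]
    return [x / abs(x) for x in v]

rng = random.Random(5)
n, sigma = 40, 0.5
theta = [rng.uniform(0, 2 * math.pi) for _ in range(n)]
C = [[cmath.exp(1j * (theta[i] - theta[j]))
      + sigma * complex(rng.gauss(0, 1), rng.gauss(0, 1)) / math.sqrt(n)
      for j in range(n)] for i in range(n)]
v_hat = synchronize_u1(C)
# alignment up to a global rotation: |<v_hat, truth>| / n close to 1
truth = [cmath.exp(1j * t) for t in theta]
align = abs(sum(vh * t.conjugate() for vh, t in zip(v_hat, truth))) / n
print(round(align, 3))
```

The global phase is unidentifiable, which is why the quality measure takes an absolute value of the inner product.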

14.
A step-stress accelerated life testing model is considered for progressive type-I censored experiments when the tested items are not monitored continuously but inspected at prespecified time points, thus producing grouped data. The underlying lifetime distributions belong to a general scale family of distributions. The points of stress-level change are simultaneously inspection points as well, while there is the option of assigning additional inspection points in between the stress-level change points. In a Bayesian framework, the posterior distributions of the model parameters are derived for characteristic choices of prior distributions, such as conjugate-like and normal priors, vague or noninformative. The developed approach is illustrated on a simulated example and on a real data set, both known from the literature. The results are compared to previous analyses, frequentist or Bayesian.

15.
We propose a special panel quantile regression model with multiple stochastic change-points to analyze latent structural breaks in the short-term post-offering price-volume relationships in China's growth enterprise market, where the piecewise quantile equations are defined by change-point indicator functions. We also develop a new Bayesian inference and Markov chain Monte Carlo simulation approach to estimate the parameters, including the locations of change points, and put forth simulation-based posterior Bayes factor tests to find the best number of change points. Our empirical evidence suggests that a single change-point effect is significant in quantile-based price-volume relationships in China's growth enterprise market. The lagged initial public offering (IPO) return and the IPO volume rate of change have positive impacts on the current IPO return before and after the change point. Along with investors' gradually declining hot sentiment toward a new IPO, the market index volume rate of change induces the abnormal short-term post-offering IPO return to move back to equilibrium. Copyright © 2015 John Wiley & Sons, Ltd.
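The change-point idea can be sketched with a grid search that places a single break so as to minimize total check (pinball) loss, fitting a constant quantile on each segment. This toy ignores the panel structure, covariates, and Bayesian inference of the paper, and the data are simulated:

```python
import random

def pinball(resids, tau):
    """Check (pinball) loss of a list of residuals at quantile level tau."""
    return sum(max(tau * r, (tau - 1) * r) for r in resids)

def quantile_of(xs, tau):
    s = sorted(xs)
    return s[min(int(tau * len(s)), len(s) - 1)]

def single_change_point(y, tau=0.5, min_seg=10):
    """Grid search for one change point minimizing total check loss with a
    constant tau-quantile per segment."""
    best_k, best_loss = None, float("inf")
    for k in range(min_seg, len(y) - min_seg):
        left, right = y[:k], y[k:]
        ql, qr = quantile_of(left, tau), quantile_of(right, tau)
        loss = pinball([v - ql for v in left], tau) + \
               pinball([v - qr for v in right], tau)
        if loss < best_loss:
            best_k, best_loss = k, loss
    return best_k

rng = random.Random(2)
y = [rng.gauss(0, 1) for _ in range(100)] + [rng.gauss(3, 1) for _ in range(100)]
k_hat = single_change_point(y)
print(k_hat)
```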

16.
We propose a Bayesian framework to model bid placement time in retail secondary market online business-to-business auctions. In doing so, we propose a Bayesian beta regression model to predict the first bidder and time to first bid, and a dynamic probit model to analyze participation. In our development, we consider both auction-specific and bidder-specific explanatory variables. While we primarily focus on the predictive performance of the models, we also discuss how auction features and bidders' heterogeneity could affect bid timings, as well as auction participation. We illustrate the implementation of our models by applying them to actual auction data and discuss additional insights provided by the Bayesian approach, which can benefit auctioneers.

17.
In this paper, we introduce a robust extension of the three-factor model of Diebold and Li (J. Econometrics, 130: 337-364, 2006) using the class of symmetric scale mixtures of normal distributions. Specific distributions examined include the multivariate normal, Student-t, slash, and variance gamma distributions. In the presence of non-normality in the data, these distributions provide an appealing robust alternative to the routine use of the normal distribution. Using a Bayesian paradigm, we develop an efficient MCMC algorithm for parameter estimation. Moreover, the mixing parameters obtained as a by-product of the scale mixture representation can be used to identify outliers. Our results reveal that the Diebold-Li models based on the Student-t and slash distributions provide significant improvements in in-sample fit and out-of-sample forecasts for the US yield data relative to the usual normal-based model. Copyright © 2011 John Wiley & Sons, Ltd.
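The deterministic core of the Diebold-Li model, the three Nelson-Siegel factor loadings with the decay parameter fixed at 0.0609 for maturities in months (as in Diebold and Li, 2006), can be sketched with a plain cross-sectional OLS fit. The scale-mixture error distributions and MCMC are omitted, and the yield curve below is synthetic:

```python
import math

def ns_loadings(tau, lam=0.0609):
    """Diebold-Li loadings at maturity tau (months): level, slope, curvature."""
    x = lam * tau
    l2 = (1 - math.exp(-x)) / x
    return [1.0, l2, l2 - math.exp(-x)]

def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(3):
        piv = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, 3):
            f = M[r][c] / M[c][c]
            for j in range(c, 4):
                M[r][j] -= f * M[c][j]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][j] * x[j] for j in range(r + 1, 3))) / M[r][r]
    return x

maturities = [3, 6, 9, 12, 24, 36, 60, 84, 120]
true_beta = [5.0, -2.0, 1.5]          # invented level, slope, curvature factors
yields = [sum(b * l for b, l in zip(true_beta, ns_loadings(m)))
          for m in maturities]
# OLS normal equations: (X'X) beta = X'y
X = [ns_loadings(m) for m in maturities]
XtX = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
Xty = [sum(r[i] * yi for r, yi in zip(X, yields)) for i in range(3)]
beta_hat = solve3(XtX, Xty)
print([round(b, 3) for b in beta_hat])
```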

18.
The memory-resistor or memristor is a new electrical element characterized by a nonlinear charge-flux relation. This device poses many challenging problems, in particular from the circuit modeling point of view. In this paper, we address the index analysis of certain differential-algebraic models of memristive circuits; specifically, our attention is focused on so-called branch-oriented models, which include in particular tree-based formulations of the circuit equations. Our approach combines results coming from differential-algebraic equation theory, matrix analysis and theory of digraphs. This framework should be useful in future studies of dynamical aspects of memristive circuits. Copyright © 2012 John Wiley & Sons, Ltd.
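The defining nonlinear charge-flux relation can be illustrated by simulating a flux-controlled memristor with a cubic q(phi). Driven sinusoidally, the current passes through zero whenever the voltage does, the "pinched hysteresis" signature. The constitutive relation and parameter values are textbook toys, not the branch-oriented DAE models of the paper:

```python
import math

def memristor_response(amp=1.0, omega=1.0, alpha=0.1, beta=0.3, steps=2000):
    """Flux-controlled memristor with q(phi) = alpha*phi + beta*phi**3,
    driven by v(t) = amp*sin(omega*t). The memductance is
    W(phi) = dq/dphi = alpha + 3*beta*phi**2, so i = W(phi)*v; the flux
    phi integrates the voltage (forward Euler over two drive periods)."""
    dt = 4 * math.pi / (omega * steps)
    phi, out = 0.0, []
    for n in range(steps):
        v = amp * math.sin(omega * n * dt)
        i = (alpha + 3 * beta * phi * phi) * v
        out.append((v, i))
        phi += v * dt                  # d(phi)/dt = v
    return out

vi = memristor_response()
# the loop is "pinched": whenever v is (near) zero, i is (near) zero too
pinch = max(abs(i) for v, i in vi if abs(v) < 1e-3)
print(round(pinch, 6))
```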

19.
ABSTRACT. Many anadromous salmonid stocks in the Pacific Northwest are at their lowest recorded levels, which has raised questions regarding their long-term persistence under current conditions. There are a number of factors, such as freshwater spawning and rearing habitat, that could potentially influence their numbers. Therefore, we used the latest advances in information-theoretic methods in a two-stage modeling process to investigate relationships between landscape-level habitat attributes and maximum recruitment of 25 index stocks of chinook salmon (Oncorhynchus tshawytscha) in the Columbia River basin. Our first-stage model selection results indicated that the Ricker-type stock-recruitment model with a constant Ricker a (i.e., recruits per spawner at low numbers of fish) across stocks was the only plausible one given these data, which contrasted with previous unpublished findings. Our second-stage results revealed that maximum recruitment of chinook salmon had a strongly negative relationship with the percentage of surrounding subwatersheds categorized as predominantly containing U.S. Forest Service and private moderate-high impact managed forest. That is, our model predicted that average maximum recruitment of chinook salmon would decrease by at least 247 fish for every increase of 33% in surrounding subwatersheds categorized as predominantly containing U.S. Forest Service and privately managed forest. Conversely, mean annual air temperature had a positive relationship with salmon maximum recruitment, with an average increase of at least 179 fish for every 2°C increase in mean annual air temperature.
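The Ricker stock-recruitment relation R = a*S*exp(-b*S) used in the first stage can be fitted by OLS after the standard linearization log(R/S) = log(a) - b*S. The data below are simulated for illustration, not the chinook stocks of the study, and the parameter values are invented:

```python
import math, random

def fit_ricker(spawners, recruits):
    """Fit R = a*S*exp(-b*S) by OLS on log(R/S) = log(a) - b*S."""
    xs = spawners
    ys = [math.log(r / s) for s, r in zip(spawners, recruits)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = -sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)          # slope of the fit is -b
    log_a = my + b * mx                          # intercept recovers log(a)
    return math.exp(log_a), b

rng = random.Random(4)
a_true, b_true = 4.0, 0.002
S = [rng.uniform(100, 1500) for _ in range(80)]
R = [a_true * s * math.exp(-b_true * s) * math.exp(rng.gauss(0, 0.1)) for s in S]
a_hat, b_hat = fit_ricker(S, R)
print(round(a_hat, 2), round(b_hat, 5))
```

Here a is the recruits-per-spawner rate at low abundance, matching the "constant Ricker a" the abstract refers to.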

20.