Similar Documents (20 results found)
1.
This paper investigates the behaviour of interest rates in Turkey using a two-regime TAR model with an autoregressive unit root. This method, recently developed by Caner and Hansen [Threshold autoregression with a unit root, Econometrica 69 (6) (2001) 1555–1596], allows non-stationarity and non-linearity to be considered simultaneously. Our findings indicate that the interest rate is a non-linear series characterized by a unit root process over the period 1990:1–2006:5.
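The two-regime TAR estimation underlying the Caner–Hansen approach can be illustrated by a grid search over candidate thresholds, fitting each regime by least squares. This is a minimal sketch under simplifying assumptions (TAR of order one, a lagged-difference threshold variable, a 15% trimming fraction); the `fit_tar` helper and its defaults are illustrative, not the authors' implementation.

```python
import numpy as np

def fit_tar(y, delay=1, trim=0.15):
    """Grid-search a two-regime TAR(1) in differences:
    dy_t = (a1 + r1*y_{t-1})*1{z_t < th} + (a2 + r2*y_{t-1})*1{z_t >= th} + e_t,
    with threshold variable z_t = y_{t-1} - y_{t-1-delay} (a lagged difference).
    Returns the best (ssr, threshold, coefficients)."""
    y = np.asarray(y, float)
    dy = np.diff(y)
    ylag = y[:-1]
    z = ylag - np.roll(ylag, delay)
    dy, ylag, z = dy[delay:], ylag[delay:], z[delay:]  # drop invalid initial obs
    # candidate thresholds: interior order statistics of z (trimmed grid)
    cands = np.sort(z)[int(trim * len(z)): int((1 - trim) * len(z))]
    best = (np.inf, np.nan, None)
    for th in cands:
        lo = z < th
        X = np.column_stack([lo, lo * ylag, ~lo, (~lo) * ylag]).astype(float)
        beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
        ssr = float(np.sum((dy - X @ beta) ** 2))
        if ssr < best[0]:
            best = (ssr, th, beta)
    return best
```

In the actual test, the t-statistics on the regime-specific autoregressive coefficients would then be compared with the bootstrap or asymptotic critical values tabulated by Caner and Hansen.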

2.
This paper investigates the asymptotic properties of estimators obtained from the so-called CVA (canonical variate analysis) subspace algorithm proposed by Larimore (1983) in the case where the data are generated by a minimal state space system containing unit roots at the seasonal frequencies, such that the yearly difference is a stationary vector autoregressive moving average (VARMA) process. The empirically most important special cases of such data generating processes are the I(1) case as well as the case of seasonally integrated quarterly or monthly data. However, datasets with higher sampling rates, such as hourly, daily or weekly observations, are increasingly available, for example for electricity consumption. In these cases the vector error correction (VECM) representation of the vector autoregressive (VAR) model is not very helpful, as it demands the parameterization of one matrix per seasonal unit root. Even for weekly series this amounts to 52 matrices under yearly periodicity; for hourly data it is prohibitive. For such processes quasi-maximum likelihood estimation is extremely hard, since the Gaussian likelihood typically has many local maxima while the parameter space is often high-dimensional. Additionally, estimating a large number of models to test hypotheses on the cointegrating rank at the various unit roots becomes practically impossible for weekly data, for example. This paper shows that in this setting CVA provides consistent estimators of the transfer function generating the data, making it a valuable initial estimator for subsequent quasi-likelihood maximization. Furthermore, the paper proposes new tests for the cointegrating rank at the seasonal frequencies, which are easy to compute and numerically robust, making the method suitable for automatic modeling.
A simulation study demonstrates that for processes of moderate to large dimension the new tests may outperform traditional tests based on long VAR approximations in sample sizes typically found in quarterly macroeconomic data. Further simulations show that the unit root tests are robust with respect to different distributions for the innovations as well as to GARCH-type conditional heteroskedasticity. Moreover, an application to Kaggle data on hourly electricity consumption by different American providers demonstrates the usefulness of the method in practice. Therefore the CVA algorithm provides a very useful initial guess for subsequent quasi-maximum likelihood estimation and also delivers relevant information on the cointegrating ranks at the different unit root frequencies. It is thus a useful tool in, for example (but not limited to), automatic modeling applications where a large number of time series involving a substantial number of variables need to be modelled in parallel.

3.
Benjamin M. Tabak 《Physica A》2007,385(1):261-269
In this paper, a simple test for detecting bilinearity in a stochastic unit root process is used to test for the presence of nonlinear unit roots in Brazilian equity shares. The empirical evidence for a set of 53 individual stocks, after adjusting for GARCH effects, suggests that for more than 66% of the stocks the hypothesis of unit root bilinearity is accepted. Therefore, the dynamics of Brazilian share prices are in conformity with this type of nonlinearity. These nonlinearities in spot prices may emerge due to the sophistication of the derivatives market.

4.
The major goal of this paper is to examine the hypothesis that stock returns and return volatility are asymmetric, threshold nonlinear, functions of changes in trading volume. A minor goal is to examine whether return spillover effects also display such asymmetry. Employing a double-threshold GARCH model with trading volume as a threshold variable, we find strong evidence supporting this hypothesis in five international market return series. Asymmetric causality tests lend further support to our trading volume threshold model and conclusions. Specifically, an increase in volume is positively associated, while decreasing volume is negatively associated, with the major price index in four of the five markets. The volatility of each series also displays an asymmetric reaction; four of the markets display higher volatility following increases in trading volume. Using posterior odds ratios, the proposed threshold model is strongly favored in three of the five markets, compared to a US news double-threshold GARCH model and a symmetric GARCH model. We also find significant nonlinear asymmetric return spillover effects from the US market.

5.
Recent studies show that a negative shock in stock prices will generate more volatility than a positive shock of similar magnitude. The aim of this paper is to appraise the hypothesis under which the conditional mean and the conditional variance of stock returns are asymmetric functions of past information. We compare the results for the Portuguese Stock Market Index PSI 20 with six other stock market indices, namely the S&P 500, FTSE 100, DAX 30, CAC 40, ASE 20, and IBEX 35. In order to assess asymmetric volatility we use autoregressive conditional heteroskedasticity specifications known as TARCH and EGARCH. We also test for asymmetry after controlling for the effect of macroeconomic factors on stock market returns using TAR and M-TAR specifications within a VAR framework. Our results show that the conditional variance is an asymmetric function of past innovations, rising proportionately more during market declines, a phenomenon known as the leverage effect. However, when we control for the effect of changes in macroeconomic variables, we find no significant evidence of asymmetric behaviour of stock market returns. There are some signs that the Portuguese stock market tends to show somewhat less market efficiency than the other markets, since the effects of shocks appear to take longer to dissipate.
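The leverage effect described above can be illustrated with a small simulation of a GJR-type asymmetric GARCH process, in which a negative shock contributes an extra `gamma * e**2` to next-period variance. This is a hedged sketch with illustrative parameter values; the function name and defaults are assumptions, not taken from the paper.

```python
import math
import random

def simulate_gjr_garch(n, omega=0.05, alpha=0.05, gamma=0.10, beta=0.85, seed=42):
    """Simulate n returns from a GJR-GARCH(1,1):
    h_t = omega + (alpha + gamma*1{e_{t-1} < 0}) * e_{t-1}^2 + beta * h_{t-1}.
    gamma > 0 means negative shocks raise next-period variance more than
    positive shocks of the same magnitude (the leverage effect)."""
    random.seed(seed)
    # unconditional variance under symmetry of the innovation distribution
    h = omega / (1 - alpha - gamma / 2 - beta)
    e = 0.0
    rets, variances = [], []
    for _ in range(n):
        h = omega + (alpha + gamma * (e < 0)) * e * e + beta * h
        e = math.sqrt(h) * random.gauss(0, 1)
        rets.append(e)
        variances.append(h)
    return rets, variances
```

With these defaults the persistence alpha + gamma/2 + beta = 0.95 < 1, so the process is covariance stationary.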

6.
Here, the panel seemingly unrelated regressions augmented Dickey-Fuller (SURADF) test, first introduced and advanced by Breuer et al. [Misleading inferences from panel unit-root tests with an illustration from purchasing power parity, Rev. Int. Econ. 9(3) (2001) 482-493], is used to investigate the mean-reverting behavior of the current accounts of 48 African countries during the 1980-2004 period. The empirical results from numerous panel-based unit root tests, conducted earlier, indicated that the current account of each of these countries is stationary; however, when Breuer et al.'s (2001) panel SURADF test is conducted, a unit root is found in the current accounts of 11 of the countries studied. These results have one extremely important policy implication for the 48 African countries studied: the current account deficits of most are sustainable, signifying that those nations should have no incentive to default on their international debt.
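The panel SURADF test builds on the single-series augmented Dickey–Fuller regression. As background, the univariate ADF t-statistic can be sketched as follows; this is a simplified illustration (constant, fixed lag order, no trend), not Breuer et al.'s panel implementation, and the `adf_stat` name is an assumption.

```python
import numpy as np

def adf_stat(y, lags=1):
    """t-statistic on rho in the ADF regression
    dy_t = a + rho*y_{t-1} + sum_i b_i * dy_{t-i} + e_t (constant, no trend).
    A large negative value, compared against Dickey-Fuller critical values,
    rejects the unit-root null."""
    y = np.asarray(y, float)
    dy = np.diff(y)
    cols = [np.ones(len(dy) - lags), y[lags:-1]]          # intercept, lagged level
    for i in range(1, lags + 1):                          # lagged differences
        cols.append(dy[lags - i: len(dy) - i])
    X = np.column_stack(cols)
    yv = dy[lags:]
    beta, *_ = np.linalg.lstsq(X, yv, rcond=None)
    resid = yv - X @ beta
    s2 = resid @ resid / (len(yv) - X.shape[1])           # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])
```

The SUR step of SURADF then estimates such regressions jointly across panel members, exploiting cross-sectional error correlation, and tests a separate unit-root null per member.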

7.
This paper addresses the topic of classifying financial time series in a fuzzy framework, proposing two fuzzy clustering models, both based on GARCH models. In general, clustering of financial time series, owing to their peculiar features, requires the definition of suitable distance measures. To this aim, the first fuzzy clustering model exploits the autoregressive representation of GARCH models and employs, in the framework of a partitioning-around-medoids algorithm, the classical autoregressive metric. The second fuzzy clustering model, also based on the partitioning-around-medoids algorithm, uses the Caiado distance, a Mahalanobis-like distance based on estimated GARCH parameters and covariances that takes into account information about the volatility structure of the time series. In order to illustrate the merits of the proposed fuzzy approaches, an application to the problem of classifying 29 time series of Euro exchange rates against international currencies is presented and discussed, also comparing the fuzzy models with their crisp versions.
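The fuzzy assignment step in a partitioning-around-medoids scheme can be sketched with the standard fuzzy-c-medoids membership update, applied to a series' distances (e.g. the autoregressive or Caiado distance) to the current medoids. The function name and the fuzzifier default are illustrative assumptions, not the paper's notation.

```python
def fuzzy_memberships(dist_row, fuzzifier=2.0):
    """Membership degrees of one series in each cluster, given its distances
    to the current medoids: u_c = 1 / sum_k (d_c / d_k)^(2/(m-1)).
    Memberships are positive and sum to one. (If a distance is exactly zero,
    the series coincides with a medoid and gets membership 1 by convention;
    that edge case is not handled in this sketch.)"""
    exponent = 2.0 / (fuzzifier - 1.0)
    return [1.0 / sum((d / d_other) ** exponent for d_other in dist_row)
            for d in dist_row]
```

A larger fuzzifier yields softer (more uniform) memberships; as it approaches 1 the assignment becomes crisp, recovering the classical PAM allocation.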

8.
This research models and forecasts daily AQI (air quality index) levels in 16 cities/counties of Taiwan, examines their AQI-level forecast performance via a rolling-window approach over a one-year validation period, including multi-level forecast classification, and measures the forecast accuracy rates. We employ statistical modeling and machine learning with three weather covariates, daily accumulated precipitation, temperature, and wind direction, and also include seasonal dummy variables. The study utilizes four models to forecast air quality levels: (1) an autoregressive model with exogenous variables and GARCH (generalized autoregressive conditional heteroskedasticity) errors; (2) an autoregressive multinomial logistic regression; (3) multi-class classification by support vector machine (SVM); (4) neural network autoregression with exogenous variables (NNARX). These models use lag-1 AQI values and the previous day's weather covariates (precipitation and temperature), while wind direction serves as an hour-lag effect based on the idea of nowcasting. The results demonstrate that autoregressive multinomial logistic regression and the SVM method are the best choices for AQI-level predictions, given their high average accuracy rates and low variation.
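The multi-level forecast classification and accuracy-rate measurement can be sketched as mapping AQI values to discrete bands and scoring exact band matches. The breakpoints below follow the common six-level AQI banding and are illustrative assumptions; the paper's exact level definitions may differ.

```python
def aqi_level(aqi, breakpoints=(50, 100, 150, 200, 300)):
    """Map an AQI value to a level index, 0 ('Good') through 5 ('Hazardous').
    Counts how many breakpoints the value exceeds."""
    return sum(aqi > b for b in breakpoints)

def accuracy_rate(actual, forecast):
    """Fraction of days on which the forecast AQI falls in the same
    level band as the realized AQI."""
    hits = sum(aqi_level(a) == aqi_level(f) for a, f in zip(actual, forecast))
    return hits / len(actual)
```

In a rolling-window evaluation, the model is refit (or updated) each day on the most recent window and `accuracy_rate` is computed over the one-step-ahead forecasts of the validation year.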

9.
Molecular dynamics of two- and three-dimensional liquids undergoing a homogeneous adiabatic expansion provides a direct numerical simulation of the atomization process. The Lennard-Jones potential is used with different force cutoff distances; the cluster distributions do not depend strongly on the cutoff parameter. Expansion rates, scaled by the natural molecular time unit (about a picosecond), are investigated from unity down to 0.01; over this range the mean droplet size follows the scaling behavior of an energy balance model which minimizes the sum of kinetic plus surface energy. A second model, which equates the elastic stored energy to the surface energy, gives better agreement with the simulation results. The simulation results indicate that both the mean and the maximum droplet size have a power-law dependence upon the expansion rate; the exponents are -2d/3 (mean) and -d/2 (maximum), where d is the dimensionality. The mean does not show a dependence upon the system size, whereas the maximum does increase with system size, and furthermore its exponent increases with an increase in the force cutoff distance. A mean droplet size of 2.8/η², where η is the expansion rate, describes our high-density three-dimensional simulation results, and this relation is also close to experimental results from the free-jet expansion of liquid helium. Thus, one relation spans a cluster size range from one atom to over 40 million atoms. The structure and temperature of the atomic clusters are described.

10.
Are structural break models true switching models or long memory processes? The answer to this question remains ambiguous. In recent years, many papers have dealt with this problem. Some studies have shown that, under specific conditions, switching models and long memory processes can easily be confused. In this paper, using several generating models (the mean-plus-noise model, the stochastic permanent break model, the Markov switching model, the threshold autoregressive (TAR) model, the sign model, and the structural change model) and several estimation techniques (the Geweke–Porter–Hudak (GPH) technique, detrended fluctuation analysis (DFA), the exact local Whittle (ELW) method, and wavelet methods), we show that, even if the answer is quite simple in some cases, it can be mitigated in others. Using French and American inflation rates, we find that the most appropriate process accounting for the important features of these series is a model that simultaneously combines changes in regimes and long memory behavior. The main result of this study indicates that estimating a long memory parameter without taking into account the presence of breaks in the data may lead to misspecification and hence to overestimation of the true parameter.
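The GPH technique mentioned above estimates the long-memory parameter d by regressing the log periodogram on log(4 sin²(ω_j/2)) over the first m ≈ n^0.5 Fourier frequencies. A minimal sketch; the bandwidth choice and the `gph_estimate` name are assumptions.

```python
import numpy as np

def gph_estimate(x, power=0.5):
    """Geweke-Porter-Hudak log-periodogram estimator of the long-memory
    parameter d. Since the spectral density behaves like
    (4 sin^2(w/2))^(-d) near frequency zero, the regression slope of
    log I(w_j) on log(4 sin^2(w_j/2)) estimates -d."""
    x = np.asarray(x, float)
    n = len(x)
    m = int(n ** power)                      # bandwidth: number of frequencies
    periodogram = np.abs(np.fft.fft(x - x.mean())) ** 2 / (2 * np.pi * n)
    w = 2 * np.pi * np.arange(1, m + 1) / n  # first m Fourier frequencies
    regressor = np.log(4 * np.sin(w / 2) ** 2)
    log_i = np.log(periodogram[1:m + 1])
    slope = np.polyfit(regressor, log_i, 1)[0]
    return -slope
```

For white noise d = 0, while a series with unaccounted-for mean breaks tends to produce a spuriously positive d-hat, which is exactly the overestimation the abstract warns about.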

11.
In this paper we test for the presence of bubbles in the Nasdaq stock market index over the period 1994–2003, applying fractional integration techniques and allowing for structural breaks and non-linear adjustments of prices to dividends. The results show a significant structural break in 1998 for all model specifications and data periodicities. Furthermore, we do not find evidence of asymmetric adjustment of prices to dividends when using M-TAR and TAR models. The evidence of bubbles varies depending on the data periodicity and model specification used in the analysis. Finally, the results show persistent deviations of stock prices from dividends in all cases considered, though we only find evidence of bubbles in the Nasdaq index when using weekly data for the period after June 1998.

12.
Recent studies in the econophysics literature reveal that price variability has fractal and multifractal characteristics not only in developed financial markets, but also in emerging markets. Taking high-frequency intraday quotes of the Shanghai Stock Exchange Component (SSEC) Index as an example, this paper proposes a new method to measure daily Value-at-Risk (VaR) by combining the newly introduced multifractal volatility (MFV) model and the extreme value theory (EVT) method. Two VaR backtesting techniques are then employed to compare the performance of the model with that of a group of linear and nonlinear generalized autoregressive conditional heteroskedasticity (GARCH) models. The empirical results show the multifractal nature of price volatility in the Chinese stock market. VaR measures based on the multifractal volatility model and EVT method outperform many GARCH-type models at high risk levels.
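A standard way to backtest VaR measures of the kind compared above is Kupiec's unconditional-coverage (proportion-of-failures) likelihood-ratio test, which checks whether the observed violation rate matches the nominal one. A sketch (the function name is an assumption; the abstract does not say which backtests were used):

```python
import math

def kupiec_pof(n, x, p):
    """Kupiec proportion-of-failures LR statistic for VaR backtesting:
    n observations, x violations (losses exceeding VaR), nominal rate p.
    Under correct coverage the statistic is asymptotically chi-square(1)."""
    if x == 0:
        return -2 * n * math.log(1 - p)
    pi = x / n  # observed violation rate
    return -2 * (math.log((1 - p) ** (n - x) * p ** x)
                 - math.log((1 - pi) ** (n - x) * pi ** x))
```

A statistic above the chi-square(1) critical value (3.84 at the 5% level) rejects the model's coverage; values near zero indicate violations occurring at about the nominal rate.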

13.
Traffic volatility modeling has been highly valued in recent years because of its advantages in describing the uncertainty of traffic flow during the short-term forecasting process. A few generalized autoregressive conditional heteroscedastic (GARCH) models have been developed to capture, and hence forecast, the volatility of traffic flow. Although these models have been confirmed to produce more reliable forecasts than traditional point forecasting models, the restrictions imposed on parameter estimation may mean that the asymmetric property of traffic volatility is not, or is insufficiently, considered. Furthermore, the performance of the models has not been fully evaluated and compared in the traffic forecasting context, making the choice of model difficult for traffic volatility modeling. In this study, an omnibus traffic volatility forecasting framework is proposed, in which various traffic volatility models with symmetric and asymmetric properties can be developed in a unifying way by fixing or flexibly estimating three key parameters, namely the Box-Cox transformation coefficient λ, the shift factor b, and the rotation factor c. Extensive traffic speed datasets collected from urban roads of Kunshan city, China, and from freeway segments of the San Diego Region, USA, were used to evaluate the proposed framework and develop traffic volatility forecasting models in a number of case studies. The models include the standard GARCH, the threshold GARCH (TGARCH), the nonlinear ARCH (NGARCH), the nonlinear-asymmetric GARCH (NAGARCH), the Glosten–Jagannathan–Runkle GARCH (GJR-GARCH), and the family GARCH (FGARCH).
The mean forecasting performance of the models was measured with mean absolute error (MAE) and mean absolute percentage error (MAPE), while the volatility forecasting performance was measured with volatility mean absolute error (VMAE), directional accuracy (DA), kickoff percentage (KP), and average confidence length (ACL). Experimental results demonstrate the effectiveness and flexibility of the proposed framework and provide insights into how to develop and select proper traffic volatility forecasting models in different situations.
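The mean-forecast error measures used in the evaluation can be sketched directly; VMAE applies the same absolute-error formula to realized versus forecast volatility. These are minimal illustrative helpers, and the directional-accuracy definition shown (sign agreement of predicted and realized changes) is one common convention, assumed here rather than taken from the paper.

```python
def mae(actual, forecast):
    """Mean absolute error."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def mape(actual, forecast):
    """Mean absolute percentage error (actual values must be nonzero)."""
    return 100.0 * sum(abs((a - f) / a)
                       for a, f in zip(actual, forecast)) / len(actual)

def directional_accuracy(actual, forecast):
    """Share of steps where the forecast change from the last observed value
    has the same sign as the actual change."""
    hits = sum((a1 - a0) * (f1 - a0) > 0
               for a0, a1, f1 in zip(actual, actual[1:], forecast[1:]))
    return hits / (len(actual) - 1)
```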

14.
This study investigates whether the market share leader in the notebook industry in Taiwan is likely to maintain its dominant position. Market share data are used to investigate the intensity of competitiveness in the industry, and data on the gaps in market shares are employed to elucidate the dominance of the leading firm in Taiwan's notebook industry during the 1998-2004 period. The newly developed panel SURADF tests advanced by Breuer et al. [Misleading inferences from panel unit root tests with an illustration from purchasing power parity, Rev. Int. Econ. 9 (3) (2001) 482-493] are employed to determine whether the market share gap is stationary. Unlike other panel-based unit root tests, which are joint tests of a unit root for all members of a panel and are incapable of determining the mix of I(0) and I(1) series in a panel setting, the panel SURADF tests have the advantage of investigating a separate unit root null hypothesis for each individual panel member and are therefore able to identify how many, and which, series in a panel are stationary processes. The empirical results from several panel-based unit root tests substantiate that the market shares of the firms studied here are non-stationary, indicating that Taiwan's notebook industry is highly competitive; however, Breuer et al.'s panel SURADF tests unequivocally show that the market share gap is stationary only for Compal. In terms of sales volume, Compal is the second largest firm in the notebook industry in Taiwan, and the results indicate that it alone has the opportunity to become the market share leader.

15.
A new approach is developed to examine potential causality between merger activity and industrial production. The proposed method combines an information criterion-based approach to lag optimisation with joint maximum likelihood estimation of an autoregressive distributed lag model and a GARCH(1,1) specification. Application to UK data provides significant evidence in support of causality between merger activity and industrial production, a result which has been predicted theoretically in the literature but has not received empirical support in earlier research.

16.
An asymmetric Jerusalem unit and a frequency selective surface (FSS) structure composed of such units are designed. The transmittance of the designed FSS structure is calculated by the mode-matching method and compared with test results. The comparison shows that the center frequency of the FSS with the asymmetric unit drifts little with variation in the incident angle of the electromagnetic waves and remains relatively stable. This research offers a new choice for applications of FSS under large scanning angles of electromagnetic waves.

17.
Two different air filter test methodologies are discussed and compared for challenges in the nano-sized particle range of 10–400 nm. Included in the discussion are test procedure development, factors affecting variability, and comparisons between results from the tests. One test system, which gives a discrete penetration for a given particle size, is the TSI 8160 Automated Filter Tester (updated and commercially available now as the TSI 3160) manufactured by TSI, Inc., Shoreview, MN. Another filter test system was developed utilizing a Scanning Mobility Particle Sizer (SMPS) to sample the particle size distributions downstream and upstream of an air filter to obtain a continuous percent filter penetration versus particle size curve. Filtration test results are shown for fiberglass filter paper of intermediate filtration efficiency. Test variables affecting the results of the TSI 8160 for NaCl and dioctyl phthalate (DOP) particles are discussed, including condensation particle counter stability and the sizing of the selected particle challenges. Filter testing using a TSI 3936 SMPS sampling upstream and downstream of a filter is also shown, with a discussion of test variables and the need for proper SMPS volume purging and a filter penetration correction procedure. For both tests, the penetration versus particle size curves for the filter media studied follow the theoretical Brownian capture model of decreasing penetration with decreasing particle diameter down to 10 nm, with no deviation. From these findings, the authors can say with reasonable confidence that there is no evidence of particle thermal rebound in this size range.

18.
Adnan Kasman  Saadet Kasman 《Physica A》2008,387(12):2837-2845
This paper examines the impact of the introduction of stock index futures on the volatility of the Istanbul Stock Exchange (ISE), using an asymmetric GARCH model, for the period July 2002-October 2007. The results from the EGARCH model indicate that the introduction of futures trading reduced the conditional volatility of the ISE-30 index. Results further indicate that there is a long-run relationship between spot and futures prices. The results also suggest that the direction of both long- and short-run causality is from spot prices to futures prices. These findings are consistent with theories stating that futures markets enhance the efficiency of the corresponding spot markets.

19.
The behavior of a test particle in a rarefied gas of classical particles is investigated, considering different interaction mechanisms (specular and diffuse reflection). For a large mass ratio between the test and gas particles, analytical expressions for the linear friction coefficient are derived. Moreover, the existence of directed motion of asymmetric test particles with distinct initial conditions (but in the absence of any gradients) is shown. The analytical results are supported by a numerical simulation technique applicable to systems with any mass ratio, which is described here in detail.

20.
In this work, addressing some apparent contradictions, an attempt is made to interpret gaps between surface stress theories and the size effects observed in experiments on nanowires through different examples. Owing to the mandatory self-equilibrium state of nanostructures, a balancing factor is defined for the surface residual stress in a generalized model, and clamped nanowires are accordingly classified into suspended and etched types. The claims are confirmed by observing similar results for bending and tensile tests of Ag nanowires, which points to alternative sources of size effects besides surface stresses. In addition, the size effects and surface material properties are found to be smaller at larger deformations, and, given the tremendous gap between atomistic simulations and continuum core–shell models, it is verified that surface elasticity may not be the entire source of size effects. As an extension, owing to the anisotropy of single crystals, two orientation-dependent parameters are defined for nanoplates modeled using Kirchhoff plate theory, von Kármán strains, and surface stress models. It is shown that the orientation of (100)-nanoplates changes the size effects by more than 70%. Meanwhile, some test setups are recommended for characterizing the size effects of nanowires and nanoplates.
