Similar Documents
20 similar documents found (search time: 15 ms)
1.
Thinning operators play an important role in the analysis of integer-valued autoregressive models, the most widely used being binomial thinning. Inspired by the theory of extended Pascal triangles, a new thinning operator named extended binomial thinning is introduced, which generalizes binomial thinning. Compared with the binomial thinning operator, the extended binomial thinning operator has two parameters and is more flexible in modeling. Based on the proposed operator, a new integer-valued autoregressive model is introduced that can accurately and flexibly capture the dispersion features of count time series. Two-step conditional least squares (CLS) estimation is investigated for the innovation-free case, and conditional maximum likelihood estimation is also discussed. The asymptotic properties of the two-step CLS estimator are obtained. Finally, three overdispersed or underdispersed real data sets are considered to illustrate the superior performance of the proposed model.
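The binomial thinning operator that the extended operator generalizes, α∘X = sum of X iid Bernoulli(α) draws, is easy to sketch. The following minimal Python illustration shows standard binomial thinning and a Poisson INAR(1) path; it is the baseline one-parameter operator, not the paper's extended two-parameter version:

```python
import math
import random

def binomial_thinning(alpha, x, rng=random):
    """Binomial thinning: alpha ∘ x is the sum of x iid Bernoulli(alpha) draws."""
    return sum(1 for _ in range(x) if rng.random() < alpha)

def simulate_inar1(alpha, lam, n, seed=0):
    """Simulate a Poisson INAR(1) path: X_t = alpha ∘ X_{t-1} + eps_t."""
    rng = random.Random(seed)

    def poisson(l):
        # Knuth's multiplication method; adequate for small lam
        limit, k, p = math.exp(-l), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                return k
            k += 1

    x = [poisson(lam)]
    for _ in range(n - 1):
        x.append(binomial_thinning(alpha, x[-1], rng) + poisson(lam))
    return x
```

The extended operator would replace the Bernoulli survival indicators with a two-parameter counting distribution, leaving the recursion unchanged.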

2.
In integer-valued generalized autoregressive conditional heteroscedastic (INGARCH) models, parameter estimation is conventionally based on the conditional maximum likelihood estimator (CMLE). However, because the CMLE is sensitive to outliers, we consider a robust estimation method for bivariate Poisson INGARCH models using the minimum density power divergence estimator. We demonstrate that the proposed estimator is consistent and asymptotically normal under certain regularity conditions. Monte Carlo simulations are conducted to evaluate the performance of the estimator in the presence of outliers. Finally, a real data analysis using monthly count series of crimes in New South Wales and an artificial data example are provided as illustrations.
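The conditional-mean recursion underlying this model class can be sketched for the univariate Poisson INGARCH(1,1) case. This is a simplified simulation of the model, not the paper's bivariate specification or its robust MDPDE fitting:

```python
import math
import random

def simulate_poisson_ingarch(omega, a, b, n, seed=1):
    """X_t | past ~ Poisson(lam_t) with lam_t = omega + a * lam_{t-1} + b * X_{t-1}.
    Requires omega > 0, a, b >= 0, a + b < 1 for stationarity."""
    rng = random.Random(seed)

    def poisson(l):
        # Knuth's multiplication method; adequate for moderate lam
        limit, k, p = math.exp(-l), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                return k
            k += 1

    lam = omega / (1.0 - a - b)  # start the recursion at the stationary mean
    xs, lams = [], []
    for _ in range(n):
        x = poisson(lam)
        xs.append(x)
        lams.append(lam)
        lam = omega + a * lam + b * x
    return xs, lams
```

A robust estimator would replace the Poisson log-likelihood of each `(x, lam)` pair with a density power divergence term; the recursion itself is unchanged.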

3.
This study considers support vector regression (SVR) and twin SVR (TSVR) for time series of counts, wherein the hyperparameters are tuned using the particle swarm optimization (PSO) method. For prediction, we employ the framework of integer-valued generalized autoregressive conditional heteroskedasticity (INGARCH) models. As an application, we consider change point problems, using the cumulative sum (CUSUM) test based on the residuals obtained from the PSO-SVR and PSO-TSVR methods. We conduct Monte Carlo simulation experiments to illustrate the methods' validity with various linear and nonlinear INGARCH models. Subsequently, a real data analysis, with return times of extreme events constructed from the daily log-returns of Goldman Sachs stock prices, is conducted to exhibit the scope of application.
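A generic residual CUSUM statistic of the kind used for change point detection can be sketched independently of the SVR/TSVR fitting step: take the maximum absolute normalized cumulative sum of centered residuals. The ~1.36 threshold mentioned in the comment is the standard 5% Kolmogorov critical value, an assumption for illustration rather than the paper's exact calibration:

```python
def cusum_statistic(residuals):
    """Max absolute normalized cumulative sum of centered residuals.
    Values well above ~1.36 (the 5% Kolmogorov critical value) suggest
    a change point somewhere in the series."""
    n = len(residuals)
    mean = sum(residuals) / n
    var = sum((r - mean) ** 2 for r in residuals) / n
    sd = var ** 0.5
    s, peak = 0.0, 0.0
    for r in residuals:
        s += r - mean
        peak = max(peak, abs(s))
    return peak / (sd * n ** 0.5)
```

A series with a level shift halfway through yields a large statistic, while a stable series stays near zero.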

4.
Count data appear in many research fields and exhibit features that make modeling difficult. The most popular approaches to modeling count data can be classified into observation-driven and parameter-driven models. In this paper, we review two models from these classes: the log-linear multivariate conditional intensity model (also referred to as an integer-valued generalized autoregressive conditional heteroskedastic model) and the non-linear state-space model for count data. We compare these models in terms of forecasting performance on simulated data and two real datasets. In the simulations, we consider the case of model misspecification. We find that each model has advantages in different situations, and we discuss the pros and cons of inference for both models in detail.

5.
In this work we use 3D direct numerical simulations (DNS) to investigate the average velocity conditioned on a conserved scalar in a double scalar mixing layer (DSML). The DSML is a canonical multistream flow designed as a model problem for the extensively studied piloted diffusion flames. The conditional mean velocity appears as an unclosed term in advanced Eulerian models of turbulent non-premixed combustion, such as the conditional moment closure and transported probability density function (PDF) methods. Here it accounts for inhomogeneous effects that have been found significant in flames with relatively low Damköhler numbers. Only a few simple models are currently available for the conditional mean velocity, and these are discussed with reference to the DNS results. We find that both the linear model of Kuznetsov and the model of Li and Bilger are unsuitable for multistream flows, whereas the gradient diffusion model of Pope shows very close agreement with DNS over the whole range of the DSML. The gradient diffusion model relies on a model for the conserved scalar PDF, and here we have used a presumed mapping function PDF that is known to give an excellent representation of the DNS. A new model for the conditional mean velocity is suggested by arguing that the Gaussian reference field represents the velocity field, a proposition supported by near-perfect agreement with DNS. The model still suffers from an inconsistency with the unconditional flux of conserved scalar variance, however, and a strategy for developing fully consistent models is suggested.

6.
This study uses fourteen stock indices as the sample and utilizes eight parametric and eight composed volatility forecasting models to explore whether the neural network approach and the settings of leverage effect and non-normal return distribution can improve volatility forecasting performance, and which of the sixteen models performs best. The eight parametric volatility forecasting models combine the generalized autoregressive conditional heteroskedasticity (GARCH) or GJR-GARCH volatility specification with the normal, Student's t, skewed Student's t, and generalized skewed Student's t distributions. Empirical results show that the composed volatility forecasting approach significantly outperforms the parametric approach. Furthermore, the GJR-GARCH volatility specification performs better than the GARCH one, whereas the non-normal distributions do not forecast better than the normal distribution. The GJR-GARCH model combined with the normal distribution and a neural network approach has the best volatility forecasting performance among the sixteen models. Thus, a neural network approach significantly improves volatility forecasting performance; the setting of leverage effect likewise improves performance, whereas the setting of non-normal distribution does not.
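The GJR-GARCH specification favored here differs from plain GARCH(1,1) only by a leverage term that amplifies the impact of negative returns. A minimal sketch of the variance recursion (parameter values in the test are illustrative, not the paper's estimates):

```python
def gjr_garch_variance(returns, omega, alpha, gamma, beta):
    """Conditional variance recursion of GJR-GARCH(1,1,1):
    h_t = omega + (alpha + gamma * I[r_{t-1} < 0]) * r_{t-1}^2 + beta * h_{t-1}.
    Setting gamma = 0 recovers plain GARCH(1,1)."""
    # Start at the unconditional variance (E[I] = 1/2 for symmetric returns)
    h = [omega / (1.0 - alpha - 0.5 * gamma - beta)]
    for r in returns[:-1]:
        leverage = gamma if r < 0 else 0.0
        h.append(omega + (alpha + leverage) * r * r + beta * h[-1])
    return h
```

With identical magnitudes, a negative return produces a higher next-period variance than a positive one, which is the leverage effect the abstract refers to.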

7.
This paper considers periodic self-exciting threshold integer-valued autoregressive processes under a weaker condition in which the second moment is finite, rather than the innovation distribution being fully specified. The basic statistical properties of the model are discussed, quasi-likelihood inference for the parameters is investigated, and the asymptotic behavior of the estimators is obtained. Threshold estimates based on quasi-likelihood and least squares methods are given. Simulation studies show that the quasi-likelihood methods perform well at realistic sample sizes and may be superior to least squares and maximum likelihood methods. The practical application of the processes is illustrated by a time series dataset on monthly counts of claimants collecting short-term disability benefits from the Workers' Compensation Board (WCB). In addition, the forecasting problem for this dataset is addressed.

8.
In this study, we consider an online monitoring procedure to detect a parameter change in integer-valued generalized autoregressive conditional heteroscedastic (INGARCH) models whose conditional density of present observations given past information follows a one-parameter exponential family distribution. For this purpose, we use the cumulative sum (CUSUM) of score functions deduced from the objective functions constructed for the minimum density power divergence estimator (MDPDE), which includes the maximum likelihood estimator (MLE), to diminish the influence of outliers. It is well known that, compared to the MLE, the MDPDE is robust against outliers with little loss of efficiency. This robustness property is properly inherited by the proposed monitoring procedure. A simulation study and real data analysis are conducted to affirm the validity of our method.

9.
The second-order conditional moment closure (CMC) model with a detailed chemical mechanism is used to model a turbulent CH4/H2/N2 jet diffusion flame. Second-order corrections are made to the three rate-limiting steps of methane–air combustion, while first-order closure is employed for all other steps. Elementary reaction steps have a wide range of timescales, with only a few slow enough to interact with turbulent mixing. Steps with relatively large timescales require higher-order correction to represent the effect of fluctuating scalar dissipation rates. Results show improved prediction of the conditional mean temperature and of the mass fractions of OH and NO. Major species are not much influenced by second-order corrections except near the nozzle exit. A parametric study is performed to evaluate the effects of the variance parameter in the log-normal scalar dissipation PDF and of the constants for the dissipation term in the conditional variance and covariance equations.

10.
Sub-Saharan Africa has been the epicenter of the HIV/AIDS epidemic since acquired immunodeficiency syndrome (AIDS) became prevalent. This article proposes several regression models to investigate the relationships between the HIV/AIDS epidemic and socioeconomic factors (gross domestic product per capita and population density) in ten countries of Sub-Saharan Africa for 2011–2016. The maximum likelihood method was used to estimate the unknown parameters of these models, along with the Newton–Raphson procedure and the Fisher scoring algorithm. Comparison of these regression models reveals significant spatiotemporal non-stationarity and autocorrelation between the HIV/AIDS epidemic and the two socioeconomic factors. Based on the empirical results, we suggest that the geographically and temporally weighted Poisson autoregressive (GTWPAR) model is more suitable than the other models and yields a better fit.

11.
This research models and forecasts daily AQI (air quality index) levels in 16 cities/counties of Taiwan, examines their AQI-level forecast performance via a rolling-window approach over a one-year validation period, including multi-level forecast classification, and measures forecast accuracy rates. We employ statistical modeling and machine learning with three weather covariates (daily accumulated precipitation, temperature, and wind direction) and also include seasonal dummy variables. The study utilizes four models to forecast air quality levels: (1) an autoregressive model with exogenous variables and GARCH (generalized autoregressive conditional heteroskedasticity) errors; (2) an autoregressive multinomial logistic regression; (3) multi-class classification by support vector machine (SVM); and (4) neural network autoregression with exogenous variables (NNARX). These models use lag-1 AQI values and the previous day's weather covariates (precipitation and temperature), while wind direction enters as an hour-lag effect based on the idea of nowcasting. The results demonstrate that autoregressive multinomial logistic regression and the SVM method are the best choices for AQI-level prediction, given their high average accuracy rates and low variation.

12.
A new integer-valued moving average model is introduced. The assumption of independent counting series in the model is relaxed to allow dependence between them, leading to overdispersion in the model. Statistical properties are established for this new integer-valued moving average model with dependent counting series. The Yule–Walker method is applied to estimate the model parameters, the estimators' performance is evaluated using simulations, and the overdispersion test for the INMA(1) process is applied to examine the dependence between counting series.
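For the standard Poisson INMA(1) with independent counting series, X_t = β∘ε_{t-1} + ε_t, the lag-1 autocorrelation is β/(1+β) and E[X] = λ(1+β), which yields simple moment (Yule–Walker type) estimators. The sketch below covers this baseline case, not the paper's dependent-counting extension:

```python
def yule_walker_inma1(x):
    """Moment (Yule–Walker type) estimates for a standard Poisson INMA(1),
    X_t = beta ∘ eps_{t-1} + eps_t, where rho(1) = beta / (1 + beta)
    and E[X] = lam * (1 + beta)."""
    n = len(x)
    mean = sum(x) / n
    c0 = sum((v - mean) ** 2 for v in x) / n          # sample variance
    c1 = sum((x[t] - mean) * (x[t + 1] - mean)
             for t in range(n - 1)) / n               # lag-1 autocovariance
    rho1 = c1 / c0
    beta = rho1 / (1.0 - rho1)                        # invert rho = beta/(1+beta)
    lam = mean / (1.0 + beta)
    return beta, lam
```

Dependence between the counting series would inflate c0 relative to this baseline, which is what the overdispersion test mentioned in the abstract detects.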

13.
While the mean and unconditional variance should be predicted well by any reasonable turbulent combustion model, these are generally not sufficient for accurate modelling of complex phenomena such as extinction/reignition. An additional criterion has recently been introduced: accurate modelling of the dissipation timescales associated with fluctuations of scalars about their conditional mean (conditional dissipation timescales). Analysis of Direct Numerical Simulation (DNS) results for a passive scalar shows that the conditional dissipation timescale is of the order of the integral timescale and smaller than the unconditional dissipation timescale. A model is proposed in which the conditional dissipation timescale is proportional to the integral timescale. This model is used in Multiple Mapping Conditioning (MMC) modelling for a passive scalar case and a reactive scalar case, with comparison to DNS results for both. The results show that this model improves the accuracy of MMC predictions, matching the DNS results more closely at a relatively coarse spatial resolution compared with other turbulent combustion models.

14.
Economic networks share with other social networks the fundamental property of sparsity. It is well known that the maximum entropy techniques usually employed to estimate or simulate weighted networks produce unrealistically dense topologies. At the same time, strengths should not be neglected, since they are related to core economic variables such as supply and demand. To overcome this limitation, the exponential Bosonic model has previously been extended to obtain ensembles where the average degree and strength sequences are simultaneously fixed (the conditional geometric model). In this paper, a new exponential model, the network equivalent of Boltzmann ideal systems, is introduced and then extended to the case of joint degree-strength constraints (the conditional Poisson model). Finally, the fit of these alternative models is tested against a number of networks. While the conditional geometric model generally provides a better goodness-of-fit in terms of log-likelihoods, the conditional Poisson model may nevertheless be preferred whenever it provides higher similarity with the original data. If we are interested only in topological properties, the simple Bernoulli model appears preferable to the correlated topologies of the two more complex models.

15.
Unemployment has risen as the economy has shrunk. The coronavirus crisis has affected many sectors in Romania, with some companies diminishing or even ceasing their activity. Forecasting the unemployment rate has a fundamental impact on future social policy strategies. The aim of this paper is to comparatively analyze the forecast performance of different univariate time series methods with the purpose of providing future predictions of the unemployment rate. To that end, several forecasting models (seasonal autoregressive integrated moving average (SARIMA), self-exciting threshold autoregressive (SETAR), Holt–Winters, ETS (error, trend, seasonal), and NNAR (neural network autoregression)) were applied, and their forecast performance was evaluated on both the in-sample data covering January 2000–December 2017, used for model identification and estimation, and the out-of-sample data covering the last three years, 2018–2020. The unemployment rate forecast covers the next two years, 2021–2022. Based on the in-sample forecast assessment, the root mean squared error (RMSE), mean absolute error (MAE), and mean absolute percent error (MAPE) measures suggested that the multiplicative Holt–Winters model outperforms the other models. For the out-of-sample forecasting performance, RMSE and MAE values revealed that the NNAR model forecasts better, while according to MAPE the SARIMA model registers higher forecast accuracy. The Diebold–Mariano test at a one-step forecast horizon for the out-of-sample methods revealed differences in forecasting performance between SARIMA and NNAR, with the NNAR model considered the best for modeling and forecasting the unemployment rate.
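The Diebold–Mariano comparison used for the out-of-sample evaluation can be sketched for horizon 1 and squared-error loss. This is a minimal version without the long-run variance or small-sample corrections often applied in practice:

```python
import math

def diebold_mariano(e1, e2):
    """DM statistic at forecast horizon 1 with squared-error loss.
    e1, e2 are forecast-error series from two competing models; under the
    null of equal accuracy the statistic is approximately N(0, 1).
    Positive values indicate model 2 is more accurate."""
    d = [a * a - b * b for a, b in zip(e1, e2)]  # loss differentials
    n = len(d)
    dbar = sum(d) / n
    var = sum((v - dbar) ** 2 for v in d) / n
    return dbar / math.sqrt(var / n)
```

At horizon h > 1 the denominator would use a HAC (autocorrelation-consistent) variance of the loss differentials instead of the plain sample variance.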

16.
The flamelet/progress variable approach (FPVA) has been proposed by Pierce and Moin as a model for turbulent non-premixed combustion in large-eddy simulation. The filtered chemical source term in this model appears in unclosed form and is modeled by a presumed probability density function (PDF) for the joint PDF of the mixture fraction Z and a flamelet parameter λ. While the marginal PDF of Z can be reasonably approximated by a beta distribution, a model for the conditional PDF of the flamelet parameter needs to be developed. Further, the ability of FPVA to predict extinction and re-ignition has not yet been assessed. In this paper, we address these aspects of the model using the DNS database of Sripakagorn et al. It is first shown that the steady flamelet assumption in the context of FPVA leads to good predictions even for high levels of local extinction. Three different models for the conditional PDF of the flamelet parameter are tested in an a priori sense. Modeling the conditional PDF of λ with a delta function leads to an overprediction of the mean temperature, even at only moderate extinction levels. It is shown that if the conditional PDF of λ is instead modeled by a beta distribution conditioned on Z, then FPVA can predict extinction and re-ignition effects, and good agreement between the model and DNS data for the mean temperature is observed.

17.
The direct Lyapunov method is used to investigate the stability of charged solitons of pulson type described by a relativistic complex scalar field in a model of general form. It is shown that the stability can only be conditional. Some necessary and sufficient conditions for the stability of stationary solitons at fixed charge are formulated. Examples of models with power-law and logarithmic nonlinearities are considered. Translated from Izvestiya Vysshikh Uchebnykh Zavedenii, Fizika, No. 1, pp. 56–60, January 1981.

18.
Importance sampling is a Monte Carlo method in which samples are obtained from an alternative proposal distribution. This can be used to focus the sampling process on the relevant parts of the space, thus reducing variance. Selecting the proposal that leads to the minimum variance can be formulated as an optimization problem and solved, for instance, by a variational approach. Variational inference selects, from a given family, the distribution that minimizes the divergence to the distribution of interest. The Rényi projection of order 2 leads to the minimum-variance importance sampling estimator, but its computation is very costly. In this study, for discrete distributions that factorize over probabilistic graphical models, we propose and evaluate an approximate projection method onto fully factored distributions. Our evaluation makes apparent that a proposal distribution mixing the information projection with the approximate Rényi projection of order 2 can be attractive from a practical perspective.
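The basic self-normalized importance sampling estimator that such projection methods aim to optimize can be written in a few lines; unnormalized log-densities suffice because normalizing constants cancel in the ratio. This is a generic continuous illustration, not the paper's discrete graphical-model setting:

```python
import math
import random

def importance_sampling_mean(h, target_logpdf, proposal_logpdf,
                             proposal_sampler, n, seed=0):
    """Self-normalized importance sampling estimate of E_p[h(X)] using
    draws from a proposal q. Both logpdfs may be unnormalized, since the
    normalizing constants cancel between numerator and denominator."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n):
        x = proposal_sampler(rng)
        w = math.exp(target_logpdf(x) - proposal_logpdf(x))
        num += w * h(x)
        den += w
    return num / den
```

A well-chosen proposal keeps the weights `w` nearly constant; the Rényi projection of order 2 minimizes exactly the variance these weights induce.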

19.
A class of scale-free networks with power-law exponent between 1 and 3
Guo Jin-Li, Wang Li-Na. Acta Physica Sinica (物理学报), 2007, 56(10): 5635–5639.
Drawing on the concept of batch customer arrivals in queueing systems, a Poisson network model with batch node arrivals is proposed, in which nodes arrive in batches according to a Poisson process with rate λ. In Model 1, the batch size grows nonlinearly as a power law of the arrival batch number, with exponent θ (0 ≤ θ < +∞); the BA model is the special case θ = 0. Analysis using Poisson process theory and the continuum method shows that the steady-state average degree distribution of this network follows a power law with exponent between 1 and 3. In Model 2, the batch size grows nonlinearly as the logarithm of the node arrival batch number, and when the batch size grows slowly the steady-state degree distribution has power-law exponent 3. Thus, the Poisson network model with batch node arrivals not only generalizes the BA model but also provides a theoretical basis for many real networks whose power-law exponents lie between 1 and 2.
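The degree evolution of the Model-1 style growth can be sketched with a batch preferential attachment simulation: batch n brings roughly n^θ new nodes, each attaching to one existing node chosen with probability proportional to degree. This is a hypothetical minimal sketch (the Poisson arrival times are omitted, and `m = 1` edge per node is assumed), not the paper's exact construction:

```python
import random

def batch_arrival_network(num_batches, theta, seed=0):
    """Batch preferential attachment: batch n adds max(1, int(n**theta))
    nodes, each linked to one old node chosen proportionally to degree.
    theta = 0 gives one node per batch, i.e. BA-like growth with m = 1."""
    rng = random.Random(seed)
    degree = [1, 1]      # seed network: two connected nodes
    targets = [0, 1]     # degree-weighted list for preferential choice
    for n in range(1, num_batches + 1):
        batch = max(1, int(n ** theta))
        for _ in range(batch):
            old = rng.choice(targets)   # P(old) proportional to its degree
            new = len(degree)
            degree.append(1)
            degree[old] += 1
            targets.extend([old, new])
    return degree
```

Growing batch sizes (θ > 0) shift attachment opportunities toward late-arriving cohorts, which is what flattens the degree exponent below the BA value of 3 in the paper's analysis.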

20.
The Granger causality model (GCM), derived from multivariate vector autoregressive models of data, has been employed to identify effective connectivity in the human brain with functional magnetic resonance imaging (fMRI) and to reveal the complex temporal and spatial dynamics underlying a variety of cognitive processes. In most recent fMRI effective connectivity measures, pairwise GCM has commonly been applied based on single-voxel values or average values from specific brain regions at the group level. Although a few novel conditional GCM methods have been proposed to quantify the connections between brain areas, our study is the first to propose a viable standardized approach for group analysis of fMRI data with GCM. To compare the effectiveness of our approach with traditional pairwise GCM models, we applied a well-established conditional GCM to preselected time series of brain regions resulting from a general linear model (GLM) and group spatial kernel independent component analysis of an fMRI data set in the temporal domain. Data sets consisting of one task-related and one resting-state fMRI acquisition were used to investigate connections among brain areas with the conditional GCM method. With the GLM-detected brain activation regions in the emotion-related cortex during the block design paradigm, the conditional GCM method was used to study the causality of habituation between the left amygdala and pregenual cingulate cortex during emotion processing. For the resting-state data set, it is possible to calculate not only the effective connectivity between networks but also the heterogeneity within a single network. Our results further show a particular interaction pattern of the default mode network that can be characterized as both afferent and efferent influences on the medial prefrontal cortex and posterior cingulate cortex. These results suggest that the conditional GCM approach based on a linear multivariate vector autoregressive model can achieve greater accuracy in detecting network connectivity than the widely used pairwise GCM, and that this group analysis methodology can be quite useful for extending the information obtainable from fMRI.
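The pairwise GCM baseline that the conditional approach is compared against amounts to two nested autoregressions: regress y on its own lag, then add the lag of x, and compare residual variances. A minimal lag-1 illustration (pure-Python least squares; this is the generic pairwise test, not the fMRI pipeline itself):

```python
import math

def _lstsq(X, y):
    """Ordinary least squares via normal equations + Gaussian elimination
    (no pivoting; fine for the tiny, well-conditioned designs used here)."""
    k = len(X[0])
    A = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
    b = [sum(row[i] * yt for row, yt in zip(X, y)) for i in range(k)]
    for i in range(k):
        for j in range(i + 1, k):
            f = A[j][i] / A[i][i]
            for c in range(i, k):
                A[j][c] -= f * A[i][c]
            b[j] -= f * b[i]
    beta = [0.0] * k
    for i in range(k - 1, -1, -1):
        beta[i] = (b[i] - sum(A[i][c] * beta[c] for c in range(i + 1, k))) / A[i][i]
    return beta

def granger_f(x, y):
    """Pairwise lag-1 Granger measure: log ratio of restricted to full
    residual sums of squares; values well above 0 suggest x Granger-causes y."""
    rows_r, rows_f, target = [], [], []
    for t in range(1, len(y)):
        rows_r.append([1.0, y[t - 1]])
        rows_f.append([1.0, y[t - 1], x[t - 1]])
        target.append(y[t])

    def rss(X):
        beta = _lstsq(X, target)
        return sum((yt - sum(b * v for b, v in zip(beta, row))) ** 2
                   for row, yt in zip(X, target))

    return math.log(rss(rows_r) / rss(rows_f))
```

The conditional GCM extends the full regression with lags of all other regions, so that indirect influences routed through a third region are partialled out.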
