Similar Literature
20 similar records found (search time: 109 ms)
1.
To study how total port cargo throughput varies over time, a probability-distribution model of total throughput is proposed. Because the variation in total throughput is closely related to the number of cargo ships arriving at the port and to the working efficiency of the loading/unloading equipment, a compound variable combining the number of arriving ships with the handling capacity of the equipment is constructed; total throughput is then the sum of the handled quantities represented by these compound variables. Applying Wald's equation yields the probability distribution of total throughput, and which distribution it follows depends on the distribution of the number of arriving ships. This overcomes the drawback of traditional forecasting models, which can hardly make quantitative judgments about the likelihood of future throughput changes. Based on the model, the factors driving changes in total throughput are analyzed, and a case study of the throughput of a port in Shandong is carried out; the empirical results agree with the theoretical analysis.
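The compound-sum structure behind this abstract can be sketched directly. Below, a hypothetical illustration (not the paper's model): ship arrivals are taken as Poisson and per-ship handled quantity as Gaussian, and a Monte Carlo run is checked against Wald's identity E[S] = E[N]·E[X] and the compound-Poisson variance λ(σ² + μ²):

```python
import math
import random

def compound_moments(lam, mu, sigma2):
    """Moments of S = X_1 + ... + X_N, N ~ Poisson(lam), X_i i.i.d. with
    mean mu and variance sigma2. By Wald's equation E[S] = lam * mu, and
    for the compound-Poisson case Var(S) = lam * (sigma2 + mu**2)."""
    return lam * mu, lam * (sigma2 + mu * mu)

def _poisson(rng, lam):
    """Knuth's inversion-by-multiplication Poisson sampler (fine for small lam)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def simulate_total_throughput(lam, mu, sigma2, n_days=20000, seed=7):
    """Monte Carlo check: each day's total is the sum, over arriving ships,
    of a (Gaussian, purely illustrative) handled quantity per ship."""
    rng = random.Random(seed)
    sd = math.sqrt(sigma2)
    total = 0.0
    for _ in range(n_days):
        total += sum(rng.gauss(mu, sd) for _ in range(_poisson(rng, lam)))
    return total / n_days
```

With λ = 10 ships/day and per-ship mean 2 (variance 1), the analytic daily mean is 20 and the simulated average converges to it.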

2.
To improve the accuracy of port cargo throughput forecasting, a combined ARIMAX-SVR model is built. Taking Tianjin Port as an example, throughput data for 1999–2018 are analyzed: a BP neural network first imputes missing data; Pearson correlation analysis then screens the main factors influencing throughput; an ARIMAX model is built on the basis of an ARIMA model; and, to raise accuracy further, an SVR model is used to correct the ARIMAX forecasts. Empirical results show that the combined model fits more closely and forecasts better, making it well suited to port throughput prediction.
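The residual-correction pattern behind an ARIMAX-plus-SVR combination — fit a base model, train a second model on its residuals, and add the two forecasts — can be sketched with simple stand-ins (a closed-form OLS trend for the base stage and a seasonal-naive rule for the residual stage; neither ARIMAX nor SVR is reproduced here):

```python
def linear_fit(y):
    """Closed-form OLS of y on t = 0..n-1; a stand-in for the base (ARIMAX) stage."""
    n = len(y)
    t_mean = (n - 1) / 2
    y_mean = sum(y) / n
    sxy = sum((t - t_mean) * (yt - y_mean) for t, yt in enumerate(y))
    sxx = sum((t - t_mean) ** 2 for t in range(n))
    slope = sxy / sxx
    intercept = y_mean - slope * t_mean
    return lambda t: intercept + slope * t

def hybrid_forecast(y, horizon_t, season=2):
    """Return (base forecast, corrected forecast) at time index horizon_t."""
    base = linear_fit(y)
    residuals = [yt - base(t) for t, yt in enumerate(y)]
    # Residual stage (stand-in for SVR): carry forward the residual one season back.
    idx = horizon_t - season
    correction = residuals[idx] if idx < len(y) else residuals[-season]
    return base(horizon_t), base(horizon_t) + correction
```

On a toy series with a linear trend plus an alternating pattern, the corrected forecast lands closer to the true next value than the trend alone.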

3.
Cargo throughput is a hard indicator of a port's level of development, and throughput forecasts can better guide the planning of port infrastructure and its direction of development. Based on an analysis of the throughput of the main cargo types at Caofeidian Port during the 12th Five-Year Plan period, grey prediction models are built and tested for each main cargo type; the forecast data are then analyzed to give concrete recommendations for the port's future development strategy.
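A minimal GM(1,1) grey forecasting model of the kind this abstract applies can be written with the standard accumulated-series and closed-form least-squares steps (the data below are illustrative, not Caofeidian figures):

```python
import math

def gm11(x0, steps=1):
    """Grey GM(1,1): fit x0(k) + a*z1(k) = b by least squares and
    forecast `steps` values beyond the end of the series x0."""
    n = len(x0)
    x1 = [sum(x0[:i + 1]) for i in range(n)]                # 1-AGO series
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]    # background values
    y = x0[1:]
    m = n - 1
    szz = sum(zi * zi for zi in z)
    sz = sum(z)
    sy = sum(y)
    szy = sum(zi * yi for zi, yi in zip(z, y))
    det = szz * m - sz * sz
    a = (sz * sy - m * szy) / det            # development coefficient
    b = (szz * sy - sz * szy) / det          # grey input
    c = b / a
    x1_hat = lambda k: (x0[0] - c) * math.exp(-a * k) + c   # k = 0, 1, ...
    return [x1_hat(k + 1) - x1_hat(k) for k in range(n - 1, n - 1 + steps)]
```

For a roughly geometric series (10% growth), the one-step-ahead forecast tracks the true continuation closely, which is the setting GM(1,1) is designed for.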

4.
Research on personal credit scoring requires many dummy variables as explanatory variables. Group Lasso can remove or retain related dummy variables as whole groups. Using real personal-credit data, a logistic model is built with Group Lasso variable selection and compared with logistic models built from the full variable set and from forward and backward selection; the Group Lasso model is best in both variable interpretability and prediction accuracy.
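The core operation that lets Group Lasso drop or keep a block of dummy-variable coefficients as a unit is the group soft-thresholding (proximal) operator: the whole block is scaled toward zero and removed entirely once its joint magnitude falls below the penalty. A sketch of just this operator (not a full solver):

```python
import math

def group_soft_threshold(v, t):
    """Solve argmin_b 0.5*||b - v||^2 + t*||b||_2 for one coefficient group:
    scale v by max(0, 1 - t/||v||_2); the group is zeroed out when ||v|| <= t."""
    norm = math.sqrt(sum(x * x for x in v))
    if norm <= t:
        return [0.0] * len(v)      # the whole dummy-variable group is removed
    scale = 1.0 - t / norm
    return [scale * x for x in v]
```

For the group (3, 4) with norm 5, a penalty of 2.5 shrinks it to (1.5, 2.0), while a penalty of 6 eliminates the group.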

5.
A dynamic ship-cargo forecasting model of the Asian regional chemical-tanker shipping market is built to analyze vessel operations within the region. A regression relating market variables to shipping-market sentiment is estimated, the system-dynamics simulation results are fed into the regression to explain actual time-charter rates, and sensitivity analysis is used to refine the model, yielding the coefficients linking Asian chemical-tanker time-charter rates to the market variables. A chemical-tanker time-charter rate index is proposed for the first time, providing a theoretical basis for trading a forward time-charter index on Asian routes and for price hedging against related products.

6.
Unlike existing network-structure design methods, this paper interprets an RBF network as a nonlinear function between explanatory and response variables and, based on the network's learning dynamics, proposes two pruning models, WRBF and TRBF. These models add and delete nodes according to parameter significance, providing a theoretical basis for network structure design. Forecasts of China's credit series show that the models can identify redundant kernels that drift, shrink, or decay, and that the resulting pruned networks achieve the best forecasting accuracy, which is valuable for making monetary policy more forward-looking.

7.
《数理统计与管理》2013,(6):1079-1089
Using a non-stationary discrete choice (NSD) model, this paper quantifies and forecasts, at two time scales, the central bank's dynamic adjustments of the required reserve ratio and fixed-term interest rates, and compares the results with support vector machine (SVM) forecasts. The results show that core economic variables and changes in their relative levels over time have significant and superior explanatory power for the central bank's policy decisions. Based on out-of-sample forecasts, analyzing the central bank's policy behavior at a monthly rather than a quarterly frequency is more appropriate. Although the SVM model's overall out-of-sample accuracy exceeds the NSD model's, the NSD model with core differenced variables predicts upward and downward policy adjustments well. The conclusions help explain the central bank's policy decisions and help market participants gauge the state of the economy and anticipate policy moves in a timely way.

8.
This paper gives a general method for nonlinear forecasting: a dynamic BP neural network learns from the historical data of a practical problem, and the learned nonlinear mechanism is then used to forecast. The method is applied to forecasting port cargo throughput and export volume; simulations show it is effective.

9.
Extreme weather is a topical social issue. Gaussian process functional regression is used to jointly model recent summer daily maximum temperatures in ten cities including Beijing and Shanghai, with city location as the mean-function covariate and time and rainfall as covariates of the Gaussian process covariance structure. Exploiting the model's ability to capture the mean and covariance structures simultaneously solves the problem of jointly modeling and simultaneously forecasting daily maxima across regions. The model performs well in random prediction, extrapolation, k-step prediction, and prediction for cities outside the training set, outperforming standard functional data models.

10.
To forecast airport passenger throughput more accurately, a new "decompose-reconstruct-ensemble" combined method based on web search data is proposed. First, mean impact value and time-lag correlation analysis screen search keywords related to airport passenger throughput, which are combined into a composite search index. Next, improved complete ensemble empirical mode decomposition with adaptive noise (ICEEMDAN) decomposes both the throughput series and the composite index into sub-modal series, which are reconstructed into high-, mid-, and low-frequency components by sample entropy. Using the corresponding frequency components of the search index as auxiliary inputs, the high- and mid-frequency throughput components are forecast with a BP neural network optimized by the sparrow search algorithm (SSA-BP), and the low-frequency component with an autoregressive distributed lag model; the component forecasts are then combined by an SSA-BP model into the final forecast. Empirical results show that the method markedly improves forecast accuracy and is robust.

11.
This article presents a Markov chain Monte Carlo algorithm for both variable and covariance selection in the context of logistic mixed effects models. This algorithm allows us to sample solely from standard densities with no additional tuning. We apply a stochastic search variable approach to select explanatory variables as well as to determine the structure of the random effects covariance matrix.

Prior determination of explanatory variables and random effects is not a prerequisite because the definite structure is chosen in a data-driven manner in the course of the modeling procedure. To illustrate the method, we give two bank data examples.

12.
Panel data models are widely used in economics, biology, and statistics. Classical panel data models assume that coefficients on explanatory variables do not change over time, yet in practice those coefficients may have multiple unknown structural breaks driven by many factors. This paper assumes multiple unknown structural breaks in a panel data model with interactive fixed effects and finds that a pairwise-penalized estimator, which adds penalties on differences between coefficients at adjacent time points to the objective function, performs parameter estimation and break detection simultaneously. Monte Carlo simulations show that, with or without the homoskedasticity assumption, the estimated coefficients have small bias and the break-detection error rate is low.

13.
Regression games     
The solution of a TU cooperative game can be a distribution of the value of the grand coalition, i.e. it can be a distribution of the payoff (utility) all the players together achieve. In a regression model, the evaluation of the explanatory variables can be a distribution of the overall fit, i.e. the fit of the model in which every regressor variable is involved. Furthermore, we can take regression models as TU cooperative games where the explanatory (regressor) variables are the players. In this paper we introduce the class of regression games, characterize it, and apply the Shapley value to evaluate the explanatory variables in regression models. In order to support our approach we consider Young's (Int. J. Game Theory 14:65–72, 1985) axiomatization of the Shapley value, and conclude that the Shapley value is a reasonable tool to evaluate the explanatory variables of regression models.
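The Shapley decomposition of overall fit described above can be computed directly from a characteristic function mapping each coalition of regressors to that model's fit. A sketch via the permutation form of the Shapley value (the R² figures in the usage example are made up for illustration):

```python
from itertools import permutations

def shapley(players, value):
    """Shapley value: average each player's marginal contribution over all
    orderings; `value` maps a frozenset of players to that model's fit (e.g. R^2)."""
    phi = {p: 0.0 for p in players}
    count = 0
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            phi[p] += value[with_p] - value[coalition]
            coalition = with_p
        count += 1
    return {p: v / count for p, v in phi.items()}
```

With two regressors where R² is 0.5 for x1 alone, 0.3 for x2 alone, and 0.6 jointly, the fit splits as 0.4 to x1 and 0.2 to x2, summing to the full-model R² as the efficiency axiom requires.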

14.
A study of glial tumours involving 192 cases is presented. Different issues are addressed: (i) the interrelationships between the histological variables, (ii) the problem of the prediction of the survival time, (iii) the causal role of the variables in the progress of the disease. We propose a three-level grade which can be defined alternatively with perivascular lymphocytes or with the signs necrosis and neovascularization. We constructed a predictive model based on the Cox model in which the variables were chosen according to Akaike's criterion. In the explanatory analysis we dropped the variables which could be considered as consequences rather than causes of the disease and we first tested groups of variables (factors): we found that age, the topology and the histology of the tumour were explanatory.

15.
ASSESSMENT OF LOCAL INFLUENCE IN MULTIVARIATE REGRESSION MODEL
1. Introduction
Regression diagnostics and influence analysis, as a technique to investigate the fitting result of a model and influential patterns in the data sets, had been paid great attention in recent years. Some works were summarized in several books (see [1]–[3]), much of which emphasized the identification of influential points and involved deletion of the data cases. [4] employed local influence analysis as an alternative technique to study the identification of influential points, in which the normal curvature of the influence graph based on likelihood displacement at …

16.
This paper considers generalized linear models in a data‐rich environment in which a large number of potentially useful explanatory variables are available. In particular, it deals with the case that the sample size and the number of explanatory variables are of similar sizes. We adopt the idea that the relevant information of explanatory variables concerning the dependent variable can be represented by a small number of common factors and investigate the issue of selecting the number of common factors while taking into account the effect of estimated regressors. We develop an information criterion under model mis‐specification for both the distributional and structural assumptions and show that the proposed criterion is a natural extension of the Akaike information criterion (AIC). Simulations and empirical data analysis demonstrate that the proposed new criterion outperforms the AIC and Bayesian information criterion. Copyright © 2009 John Wiley & Sons, Ltd.

17.
Multiblock component methods are applied to data sets for which several blocks of variables are measured on the same set of observations with the goal of analyzing the relationships between these blocks of variables. In this article, we focus on multiblock component methods that integrate the information found in several blocks of explanatory variables in order to describe and explain one set of dependent variables. In the following, multiblock PLS and multiblock redundancy analysis are chosen, as particular cases of multiblock component methods, for the setting in which one set of variables is explained by a set of predictor variables that is organized into blocks. Because these multiblock techniques assume that the observations come from a homogeneous population, they will provide suboptimal results when the observations actually come from different populations. A strategy to palliate this problem—presented in this article—is to use a technique such as clusterwise regression in order to identify homogeneous clusters of observations. This approach creates two new methods that provide clusters that have their own sets of regression coefficients. This combination of clustering and regression improves the overall quality of the prediction and facilitates the interpretation. In addition, the minimization of a well-defined criterion—by means of a sequential algorithm—ensures that the algorithm converges monotonously. Finally, the proposed method is distribution-free and can be used when the explanatory variables outnumber the observations within clusters. The proposed clusterwise multiblock methods are illustrated with a simulation study and a (simulated) example from marketing.

18.
The multinomial logit model is the most widely used model for nominal multi-category responses. One problem with the model is that many parameters are involved, and another that interpretation of parameters is much harder than for linear models because the model is nonlinear. Both problems can profit from graphical representations. We propose to visualize the effect strengths by star plots, where one star collects all the parameters connected to one term in the linear predictor. In simple models, one star refers to one explanatory variable. In contrast to conventional star plots, which are used to represent data, the plots represent parameters and are considered as parameter glyphs. The set of stars for a fitted model makes the main features of the effects of explanatory variables on the response variable easily accessible. The method is extended to ordinal models and illustrated by several datasets. Supplementary materials are available online.

19.
Variable selection is recognized as one of the most critical steps in statistical modeling. The problems encountered in engineering and social sciences are commonly characterized by over-abundance of explanatory variables, nonlinearities, and unknown interdependencies between the regressors. An added difficulty is that the analysts may have little or no prior knowledge on the relative importance of the variables. To provide a robust method for model selection, this article introduces the multiobjective genetic algorithm for variable selection (MOGA-VS) that provides the user with an optimal set of regression models for a given dataset. The algorithm considers the regression problem as a two-objective task, and explores the Pareto-optimal (best subset) models by preferring those models over the others which have fewer regression coefficients and better goodness of fit. The model exploration can be performed based on in-sample or generalization error minimization. The model selection is proposed to be performed in two steps. First, we generate the frontier of Pareto-optimal regression models by eliminating the dominated models without any user intervention. Second, a decision-making process is executed which allows the user to choose the most preferred model using visualizations and simple metrics. The method has been evaluated on a recently published real dataset on Communities and Crime Within the United States.
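The first step above — keeping only the models not dominated in the two objectives (number of coefficients, error) — can be sketched as a plain non-dominated filter (the candidate models below are illustrative, not output of the genetic algorithm):

```python
def dominates(b, a):
    """b dominates a when b is no worse in both objectives and strictly
    better in at least one; both objectives are minimized."""
    return b[0] <= a[0] and b[1] <= a[1] and (b[0] < a[0] or b[1] < a[1])

def pareto_front(models):
    """models: list of (n_coeffs, error) pairs; return the non-dominated set,
    preserving input order."""
    return [a for a in models if not any(dominates(b, a) for b in models)]
```

A model with 3 coefficients and error 0.6 is removed because (2, 0.5) beats it on both counts, while the trade-off curve (1, 0.9), (2, 0.5), (3, 0.4) survives for the decision-making step.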

20.
《Comptes Rendus Mathematique》2008,346(5-6):343-346
In this Note we introduce a general approach to construct structural testing procedures in regression on functional variables. In the case of multivariate explanatory variables a well-known method consists in a comparison between a nonparametric estimator and a particular one. We adapt this approach to the case of functional explanatory variables. We give the asymptotic law of the proposed test statistic. The general approach used allows us to cover a large scope of possible applications as tests for no-effect, tests for linearity, …. To cite this article: L. Delsol, C. R. Acad. Sci. Paris, Ser. I 346 (2008).


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号