Similar Documents
20 similar documents retrieved (search time: 531 ms)
1.
Growth curves such as the logistic and Gompertz are widely used for forecasting market development. The approach proposed here is designed specifically for forecasting, rather than for fitting the available data, which is the usual aim of non-linear least squares regression. Two innovations form the foundation of this approach. First, the growth curves are reformulated from a time basis to an observation basis. This ensures that the available observations and the forecasts form a monotonic series, which is not necessarily true of least squares extrapolations of growth curves. Second, an extension of the Kalman filter, an approach already used with linear forecasting models, is applied to the estimation of the growth curve coefficients. This gives the coefficients the flexibility to change over time if the market environment changes. The extended Kalman filter also provides the information needed to generate confidence intervals about the forecasts. Alternative forecasting approaches, least squares and the adaptive Bass model suggested by Bretschneider and Mahajan, are used to produce comparative forecasts for a number of different data sets. The approach using the extended Kalman filter is shown to be more robust and almost always more accurate than the alternatives.
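The monotonicity point can be illustrated with a small sketch (an illustration only, not the paper's exact reformulation): writing the logistic curve as a recurrence on the last observation makes each forecast a function of the previous value, so the forecast path is monotone by construction. The parameter names b and M are illustrative.

```python
# Hedged sketch: a logistic growth curve written as a recurrence on the last
# observation rather than on calendar time. With 0 < s < M and b > 0 each
# forecast exceeds the previous one, so the series is monotone by construction.
# (Parameter names b and M are illustrative, not the paper's notation.)

def logistic_forecasts(last_obs, b, M, horizon):
    path, s = [], last_obs
    for _ in range(horizon):
        s = s + b * s * (1 - s / M)   # discrete logistic step
        path.append(s)
    return path
```

A least squares fit on a time basis carries no such guarantee: its extrapolated curve can sit below recent observations, producing forecasts that initially fall.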

2.
The Combination of Forecasts
Two separate sets of forecasts of airline passenger data have been combined to form a composite set of forecasts. The main conclusion is that the composite set of forecasts can yield lower mean-square error than either of the original forecasts. Past errors of each of the original forecasts are used to determine the weights to attach to these two original forecasts in forming the combined forecasts, and different methods of deriving these weights are examined.
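A minimal sketch of one such weighting scheme, inverse mean-square-error weights (the function name and the choice of scheme are illustrative; the paper examines several methods of deriving the weights):

```python
# Hedged sketch: combine two forecast series using weights proportional to the
# inverse of each series' past mean-square error. One of several possible
# weighting schemes; names are illustrative.

def combine_forecasts(f1, f2, actual):
    mse1 = sum((f - a) ** 2 for f, a in zip(f1, actual)) / len(actual)
    mse2 = sum((f - a) ** 2 for f, a in zip(f2, actual)) / len(actual)
    w1 = (1 / mse1) / (1 / mse1 + 1 / mse2)   # inverse-MSE weight on forecast 1
    return [w1 * a + (1 - w1) * b for a, b in zip(f1, f2)]
```

When one forecast persistently overshoots and the other undershoots, the weighted average can beat both, which is the paper's main conclusion.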

3.
This paper introduces the “piggyback bootstrap.” Like the weighted bootstrap, this bootstrap procedure can be used to generate random draws that approximate the joint sampling distribution of the parametric and nonparametric maximum likelihood estimators in various semiparametric models, but the dimension of the maximization problem for each bootstrapped likelihood is smaller. This reduction results in significant computational savings in comparison to the weighted bootstrap. The procedure can be stated quite simply. First, obtain a valid random draw for the parametric component of the model. Then take the draw for the nonparametric component to be the maximizer of the weighted bootstrap likelihood with the parametric component fixed at the parametric draw. We prove the procedure is valid for a class of semiparametric models that includes frailty regression models arising in survival analysis and biased sampling models that have application to vaccine efficacy trials. Bootstrap confidence sets from the piggyback and weighted bootstraps are compared for biased sampling data from simulated vaccine efficacy trials.

4.
Interval width and coverage probability are two criteria for evaluating confidence intervals. It is worthwhile to investigate fixed-width confidence intervals with a prescribed nominal level, which, generally speaking, cannot be achieved with a fixed sample size. A common way to deal with this problem is to apply sequential methods, such as two-stage or even multi-stage sampling. For the zero-inflated Poisson distribution with zero-inflation probability $p$ and Poisson mean $\lambda$, this paper constructs fixed-width confidence intervals for $(\lambda, p)$ using both sequential and two-stage procedures. Each procedure is shown to satisfy asymptotic consistency and efficiency. The variation of the optimal fixed sample size with the two parameters is examined under different settings, and performance is assessed by Monte Carlo simulation. A real data analysis illustrates the application.
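For reference, the zero-inflated Poisson pmf underlying the interval problem mixes a point mass at zero with a Poisson component (a reference sketch; here p denotes the zero-inflation probability, matching the abstract's notation):

```python
# Zero-inflated Poisson pmf: a point mass p at zero mixed with a
# Poisson(lam) component. A hedged reference implementation.
import math

def zip_pmf(k, p, lam):
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    return p * (k == 0) + (1 - p) * poisson
```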

5.
This paper deals with simulation-based estimation of the probability distribution of completion time in stochastic activity networks. These distribution functions may be valuable in many applications. A simulation method, using importance-sampling techniques, is presented for estimating the probability distribution function. The state space is separated into two sets, one which must be sampled and another which need not be. The sampling plan of the simulation can then be decided after the probabilities of the two sets are adjusted; a formula for this adjustment is presented. It is demonstrated that the estimator is unbiased and that the upper bound of its variance is minimized. Adaptive sampling, building on the importance-sampling techniques, is discussed for problems where there is no prior information, or where there is more than one way to separate the state space. Examples are used to illustrate the sampling plan.
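The unbiasedness of an importance-sampling estimator rests on reweighting draws from a proposal distribution by the likelihood ratio. A toy stand-in for the activity-network setting (the target, the proposal and the threshold are illustrative): estimate the small probability P(X > 4) for X ~ Exp(1) by sampling from Exp(0.25), which visits the rare region often.

```python
# Hedged toy illustration of importance sampling: draws from a proposal
# Exp(0.25) are reweighted by the likelihood ratio f/g so the estimator of
# P(X > 4) under X ~ Exp(1) remains unbiased.
import math, random

def is_estimate(n, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(0.25)            # draw from the proposal (rate 0.25)
        if x > 4:
            total += math.exp(-x) / (0.25 * math.exp(-0.25 * x))  # ratio f/g
    return total / n
```

With a naive Exp(1) sampler the event x > 4 occurs in under 2% of draws; the proposal makes it common and the reweighting corrects the bias.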

6.
Confident Search     
Abstract

The task of searching for the best element or a good element in a large set P is central to many problems in artificial intelligence and related fields. Often, heuristic information is used to reduce the scope of the search; however, in many instances, this information carries no guarantee of good performance. This article begins with an arbitrary heuristic search procedure and supplies it with a confidence statement of the following form: With specified high probability β, the output of the confidence procedure will be among the best 100α% of the elements of P. The confidence procedure will report either the outcome of the heuristic search or a better alternative with the required properties; that is, it will either certify that the heuristic answer has the desired confidence property or it will produce a better answer having the property. The approach involves combining heuristic search with a form of heuristic sampling that tends to sample the better elements of P. The sample is designed in such a way that the best element in the sample has the desired confidence property; if the answer produced by the heuristic search is better still, it inherits the confidence property. Various devices permit the sampling procedure to retain its confidence property while (1) moving the sample in the direction suggested by the heuristic, (2) adjusting the heuristic preference in response to what is learned during sampling, and (3) reorganizing the sampling whenever promising discoveries are made by chance.
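For the special case of uniform random sampling (a simplification of the article's heuristic-guided sampling, shown only to fix intuition), the confidence statement has a closed form: the best of n uniform draws lies in the top 100α% with probability 1 − (1 − α)^n, so confidence β needs n ≥ log(1 − β)/log(1 − α) draws.

```python
# Hedged back-of-envelope version of the confidence statement, assuming
# uniform random sampling from P (the article's sampler is heuristic-guided).
import math

def sample_size(alpha, beta):
    # smallest n with 1 - (1 - alpha)**n >= beta
    return math.ceil(math.log(1 - beta) / math.log(1 - alpha))
```

For example, to be 95% confident of landing in the top 5%, 59 uniform draws suffice regardless of the size of P.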

7.
8.
Diffusion processes abound in various areas of corporate activity, such as the time-dependent behaviour of cumulative demand for a new product or the adoption rate of a technological innovation. In most cases, the proportion of the population that has adopted the new product by time t behaves like an S-shaped curve, which resembles the sigmoid curve typical of many known statistical distribution functions. This analogy has motivated the common use of the latter for forecasting purposes. Recently, a new methodology for empirical modelling has been developed, termed response modelling methodology (RMM). The error distribution of the RMM model has been shown to model variously shaped distribution functions well, and may therefore be adequate for forecasting sigmoid-curve processes; in particular, RMM may be applied to forecast S-shaped diffusion processes. In this paper, forty-seven data sets, assembled from published sources by Meade and Islam, are used to compare the accuracy and stability of RMM-generated forecasts relative to commonly applied models. Results show that in most comparisons RMM forecasts outperform those based on any individually selected distributional model.

9.
10.
An ensemble of forecasts generated by different model simulations provides rich information for meteorologists about impending weather such as precipitating clouds. One major form of forecast presents cloud images created by multiple ensemble members. Common features identified from these images are often used as the consensus prediction of the entire ensemble, while the variation among the images indicates forecast uncertainty. However, the large number of images and the possibly tremendous extent of dissimilarity between them pose cognitive challenges for decision making. In this article, we develop novel methods for summarizing an ensemble of forecasts represented by cloud images and call them collectively the Geometry-Sensitive Ensemble Mean (GEM) toolkit. Conventional pixel-wise or feature-based averaging either loses interesting geometry information or focuses narrowly on some pre-chosen characteristics of the clouds to be forecasted. In GEM, we represent a cloud simulation by a Gaussian mixture model, which captures cloud shapes effectively without making special assumptions. Furthermore, using a state-of-the-art optimization algorithm, we compute the Wasserstein barycenter for a set of distributional entities, which can be considered as the consensus mean or centroid under the Wasserstein metric. Experimental results on two sets of ensemble simulated images are provided. Supplemental materials for the article are available online.
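In one dimension the Wasserstein barycenter has a simple closed form: average the quantile functions, which for equal-size samples means taking the element-wise mean of the sorted samples. The sketch below shows only this 1-D special case; the paper computes barycenters of 2-D Gaussian mixtures with an optimization algorithm.

```python
# Hedged 1-D analogue of the Wasserstein barycenter: for one-dimensional
# equal-size samples, the W2 barycenter is the element-wise mean of the
# sorted samples (an illustration of the barycenter idea only).

def barycenter_1d(samples):
    sorted_samples = [sorted(s) for s in samples]
    n = len(sorted_samples[0])
    return [sum(s[i] for s in sorted_samples) / len(samples)
            for i in range(n)]
```

Unlike a pixel-wise average, this centroid preserves the shape of the inputs: the barycenter of two shifted copies of a distribution is the same distribution shifted halfway, not a blurred bimodal mixture.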

11.
Incorporating statistical multiple-comparisons techniques into credit risk measurement, a new methodology is proposed to construct exact confidence sets and exact confidence bands for a beta distribution. This involves simultaneous inference on the two parameters of the beta distribution, based upon the inversion of Kolmogorov tests. Some monotonicity properties of the distribution function of the beta distribution are established, which enable the derivation of an efficient algorithm for implementing the procedure. The methodology has important applications to financial risk management. Specifically, loss given default (LGD) data are often modeled with a beta distribution. The new approach properly addresses the model risk caused by inadequate sample sizes of LGD data, and can be used in conjunction with the standard recommendations provided by regulators to give enhanced and more informative analyses.

12.
When attributes are rare and few or none are observed in a sample selected from a finite universe, sampling statisticians are increasingly challenged to use whatever methods are available to declare, with high probability or confidence, that the universe is nearly or completely attribute-free. This is especially true when the attribute is undesirable. Approximations such as those based on normal theory are frequently inadequate for rare attributes. For simple random sampling without replacement, the appropriate probability distribution for statistical inference is the hypergeometric distribution. But even with the hypergeometric distribution, nonrandomized techniques prevent the investigator from claiming with high confidence that the universe is attribute-free unless the sample size is quite large. For students of statistical theory, this short article seeks to revive the question of the relevance of randomized methods. When comparing methods for the construction of confidence bounds in discrete settings, randomization is useful for fixing equal confidence levels and hence facilitating the comparisons. Under simple random sampling, this article defines and presents a simple algorithm for the construction of exact “randomized” upper confidence bounds, which may be tighter than the exact bounds obtained using “nonrandomized” methods. A general theory of exact randomized confidence bounds is presented in Lehmann (1959, p. 81), but Lehmann's treatment requires more mathematical machinery than this application does. Not only is the development of these “randomized” bounds elementary, but their desirable properties and their link with the usual nonrandomized bounds are easy to see with the presented approach, which leads to the same results as the method of Lehmann.
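A sketch of the nonrandomized exact upper bound that the randomized bounds tighten (a plain inversion of the hypergeometric tail; function and variable names are illustrative, not the article's): with x attribute units seen in a simple random sample of n from a universe of N, the 1 − α upper bound on the number D of attribute units is the largest D whose probability of showing x or fewer in the sample still exceeds α.

```python
# Hedged sketch of the nonrandomized exact upper confidence bound on the
# number D of attribute units in a universe of size N, after observing x
# attribute units in a simple random sample of size n.
from math import comb

def upper_bound(N, n, x, alpha=0.05):
    def p_le_x(D):
        # P(X <= x | N, D, n): hypergeometric lower tail
        return sum(comb(D, k) * comb(N - D, n - k)
                   for k in range(x + 1)) / comb(N, n)
    U = x
    for D in range(x, N - n + x + 1):   # P(X <= x | D) is 0 beyond this range
        if p_le_x(D) > alpha:
            U = D                        # tail decreases in D, so keep the last D
    return U
```

Because the tail probability is a step function of D, the realized confidence level exceeds 1 − α; randomization is what lets the bound attain the level exactly and hence come out tighter.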

13.
程从华, 陈进源. 《应用数学》, 2012, 25(2): 274-281
This paper considers exact inference and acceptance sampling plans for the Weibull model based on hybrid Type-II censored data. The exact distribution of the maximum likelihood estimator of the unknown parameter of the Weibull distribution is derived, together with a confidence interval based on this exact distribution. Because the exact distribution function is rather complicated, several alternative confidence intervals for the unknown parameter, based on approximate methods, are also given. To evaluate the proposed methods, numerical simulation results are presented. Acceptance sampling plans in reliability are also discussed: using the exact distribution of the maximum likelihood estimator, an implementation procedure for an acceptance sampling plan is given, along with numerical simulation results.
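For complete (uncensored) samples, the Weibull maximum likelihood estimator can be sketched via the profile-likelihood equation for the shape parameter (a simpler stand-in for the hybrid Type-II censored estimators in the paper; the bisection bounds are illustrative):

```python
# Hedged sketch: Weibull(shape k, scale lam) MLE from a complete sample.
# The profile score for k is increasing, so its root is found by bisection;
# the scale lam then follows in closed form.
import math

def weibull_mle(xs, lo=0.01, hi=50.0, tol=1e-10):
    logs = [math.log(x) for x in xs]
    mean_log = sum(logs) / len(xs)
    def g(k):   # profile-likelihood score for the shape parameter
        sk = sum(x ** k for x in xs)
        return sum(x ** k * math.log(x) for x in xs) / sk - 1 / k - mean_log
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(mid) > 0:
            hi = mid
        else:
            lo = mid
    k = (lo + hi) / 2
    lam = (sum(x ** k for x in xs) / len(xs)) ** (1 / k)
    return k, lam
```

Under censoring the likelihood gains survival-function terms and the MLE loses this near-closed form, which is why the paper must derive its exact distribution separately.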

14.
Electric utilities commonly use econometric modelling for energy and power forecasting. To accommodate the uncertainties in the input variables, such forecasts are frequently made in three parts: a base forecast, assumed to be the most likely, and a high and a low forecast, often arbitrarily spaced on either side of the base forecast, giving a band of possible values. Usually, a single point forecast is then used rather than a distribution of possible forecast values. This paper describes how commercially available spreadsheet software was used to convert an econometric energy forecast into probabilistic demand and energy forecasts that incorporate weather variation, as well as other uncertain inputs.
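The spreadsheet Monte Carlo step can be sketched as follows (the demand equation, its coefficients and the input distribution are illustrative, not the utility's model): draw the uncertain input many times, push each draw through the forecast equation, and read percentiles off the simulated distribution instead of quoting arbitrary high/low bands.

```python
# Hedged sketch of the spreadsheet-style Monte Carlo: propagate an uncertain
# weather index through a toy linear demand equation and report the 5th,
# 50th and 95th percentiles of the resulting forecast distribution.
import random, statistics

def demand_forecast(n=10000, seed=1):
    rng = random.Random(seed)
    # illustrative model: demand = 100 + 5 * weather, weather ~ N(0, 1)
    draws = sorted(100 + 5 * rng.gauss(0, 1) for _ in range(n))
    return draws[int(0.05 * n)], statistics.median(draws), draws[int(0.95 * n)]
```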

15.
This paper develops a framework for forecasting future mortality rates. We discuss the suitability of six stochastic mortality models for forecasting future mortality and estimating the density of mortality rates at different ages. In particular, the models are assessed individually with reference to the following qualitative criteria that focus on the plausibility of their forecasts: biological reasonableness; the plausibility of predicted levels of uncertainty in forecasts at different ages; and the robustness of the forecasts relative to the sample period used to fit the model. An important, though unsurprising, conclusion is that a good fit to historical data does not guarantee sensible forecasts. We also discuss the issue of model risk, common to many modelling situations in demography and elsewhere. We find that even for those models satisfying our qualitative criteria, there are significant differences among central forecasts of mortality rates at different ages and among the distributions surrounding those central forecasts.

16.
When a simple Markov or semi-Markov model is fitted to a large set of data over several points in time, confidence intervals based on the asymptotic distribution of the maximum likelihood estimates are generally too narrow. Accordingly, the estimated precision of forecasts is too optimistic. These problems arise not only because of inhomogeneities in the sample, but also because of time inhomogeneities in the parameter values themselves. In this paper, some approaches to finding more realistic interval estimates will be surveyed.

17.
Stochastic multicriteria acceptability analysis (SMAA) is a family of methods for aiding multicriteria group decision making. These methods are based on exploring the weight space in order to describe the preferences that make each alternative the most preferred one. The main results of the analysis are rank acceptability indices, central weight vectors and confidence factors for different alternatives. The rank acceptability indices describe the variety of different preferences resulting in a certain rank for an alternative; the central weight vectors represent the typical preferences favouring each alternative; and the confidence factors measure whether the criteria data are sufficiently accurate for making an informed decision. In some cases, when the problem involves a large number of efficient alternatives, the analysis may fail to discriminate between them. This situation is revealed by low confidence factors. In this paper we develop cross confidence factors, which are based on computing confidence factors for alternatives using each other's central weight vectors. The cross confidence factors can be used for classifying efficient alternatives into sets of similar and competing alternatives. These sets are related to the concept of reference sets in Data Envelopment Analysis (DEA), but generalized for stochastic models. Forming these sets is useful when trying to identify one or more most preferred alternatives, or suitable compromise alternatives. The reference sets can also be used for evaluating whether criteria need to be measured more accurately, and at which alternatives the measurements should be focused. This may cause considerable savings in measurement costs. We demonstrate the use of the cross confidence factors and reference sets using a real-life example.

18.
Exponential smoothing methods are widely used as forecasting techniques in inventory systems and business planning, where reliable prediction intervals are also required for a large number of series. This paper describes a Bayesian forecasting approach based on the Holt–Winters model, which allows obtaining accurate prediction intervals. We show how to build them incorporating the uncertainty due to the smoothing unknowns using a linear heteroscedastic model. That linear formulation simplifies obtaining the posterior distribution on the unknowns; a random sample from such posterior, which is not analytical, is provided using an acceptance sampling procedure and a Monte Carlo approach gives the predictive distributions. On the basis of this scheme, point-wise forecasts and prediction intervals are obtained. The accuracy of the proposed Bayesian forecasting approach for building prediction intervals is tested using the 3003 time series from the M3-competition.
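A non-Bayesian stand-in showing the kind of point forecast and one-step interval involved (simple exponential smoothing with a Gaussian interval built from past one-step errors; the paper itself obtains intervals from a posterior via acceptance sampling, and uses the fuller Holt–Winters model):

```python
# Hedged stand-in: simple exponential smoothing for the point forecast, with
# a one-step prediction interval from the Gaussian approximation on past
# one-step-ahead errors. Not the paper's Bayesian procedure.
import statistics

def ses_forecast(ys, alpha=0.3, z=1.96):
    level, errors = ys[0], []
    for y in ys[1:]:
        errors.append(y - level)             # one-step-ahead error
        level = level + alpha * (y - level)  # smoothing update
    sd = statistics.pstdev(errors)
    return level, (level - z * sd, level + z * sd)
```

Intervals built this way ignore the uncertainty in the smoothing unknowns themselves, which is precisely the gap the Bayesian approach closes.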

19.
于文华, 杨坤, 魏宇. 《运筹与管理》, 2021, 30(6): 132-138
Compared with low-frequency volatility models, high-frequency volatility models achieve better results in forecasting the volatility and risk of single assets, so introducing high-frequency volatility models into portfolio risk analysis is of both theoretical and practical importance. Using high-frequency data for six industry sectors of the CSI 300 index as an example, this paper applies a rolling-window technique to build nine heterogeneous autoregressive realized-volatility (HAR-RV-type) models describing industry-index volatility, uses an R-vine copula model to describe the dependence structure among industry assets, combines a mean-CVaR model to optimize the portfolio weights of industry assets, constructs an expected-shortfall model of portfolio risk, and compares the accuracy of the different risk models through backtesting. The results show that introducing HAR-family high-frequency volatility models into the portfolio risk analysis framework effectively forecasts the risk of industry portfolios; the accuracy of the high-frequency volatility forecasts in turn affects risk-measurement performance, and jumps, signed jump variation, and signed positive and negative jump variation all help improve the forecast accuracy of industry portfolio risk.
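The regressor construction behind a basic HAR-RV model can be sketched as follows (daily, weekly and monthly averages over 5 and 22 trading days; the paper's nine variants also add jump and signed-jump terms, omitted here, and the fitting step is left out):

```python
# Hedged sketch of the HAR-RV design matrix: tomorrow's realized volatility
# is regressed on today's RV, the 5-day mean (weekly) and the 22-day mean
# (monthly) of past RV. Only the regressor construction is shown.

def har_design(rv):
    rows = []
    for t in range(21, len(rv) - 1):
        daily = rv[t]
        weekly = sum(rv[t - 4:t + 1]) / 5
        monthly = sum(rv[t - 21:t + 1]) / 22
        rows.append(((daily, weekly, monthly), rv[t + 1]))  # (features, target)
    return rows
```

The three horizons let one linear regression mimic the long-memory behaviour of realized volatility, which is what makes the HAR family attractive for the rolling-window forecasts used in the paper.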

20.
Exact confidence intervals and regions are proposed for the location and scale parameters of the Rayleigh distribution. These sets are valid for complete data, and also for standard and progressive failure censoring. Constrained optimization problems are solved to find the minimum-size confidence sets for the Rayleigh parameters with the required confidence level. The smallest-area confidence region is derived by simultaneously solving four nonlinear equations. Three numerical examples regarding remission times of leukemia patients, strength data and the titanium content in an aircraft-grade alloy, as well as a simulation study, are included for illustrative purposes. Further applications in hypothesis testing and the construction of pointwise and simultaneous confidence bands are also pointed out.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号