Similar documents: 20 results found.
1.
Analysis of uncertainty is often neglected in the evaluation of complex systems models, such as computational models used in hydrology or ecology. Prediction uncertainty arises from a variety of sources, such as input error, calibration accuracy, parameter sensitivity and parameter uncertainty. In this study, various computational approaches were investigated for analysing the impact of parameter uncertainty on predictions of streamflow for a water-balance hydrological model used in eastern Australia. The parameters and associated equations which had the greatest impact on model output were determined by combining differential error analysis and Monte Carlo simulation with stochastic and deterministic sensitivity analysis. This integrated approach aids in the identification of insignificant or redundant parameters and provides support for further simplifications of the mathematical structure underlying the model. Parameter uncertainty was represented by a probability distribution, and simulation experiments revealed that the shape (skewness) of the distribution had a significant effect on model output uncertainty. More specifically, increasing negative skewness of the parameter distribution correlated with decreasing width of the model output confidence interval (i.e. less uncertainty). For skewed distributions, characterisation of uncertainty is more accurate using the confidence interval from the cumulative distribution than using the variance. The analytic approach also identified the key parameters and the non-linear flux equation most influential on model output uncertainty.
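Below is a minimal sketch, not the authors' water-balance model, of the general Monte Carlo idea described above: a negatively skewed parameter distribution is propagated through a hypothetical non-linear flux equation, and output uncertainty is summarised by a percentile interval taken from the empirical cumulative distribution rather than by the variance. The model function, parameter values and distribution are all illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def streamflow(rainfall, k):
    """Hypothetical non-linear flux equation: runoff as a saturating
    function of rainfall with a single uncertain parameter k."""
    return rainfall ** 2 / (rainfall + k)

rainfall = 50.0          # mm, illustrative forcing
n = 20_000               # Monte Carlo sample size

# Uncertain parameter k represented by a negatively skewed distribution;
# skewnorm with a negative shape parameter gives left skew.
k_samples = stats.skewnorm.rvs(a=-5, loc=20.0, scale=4.0, size=n, random_state=rng)
k_samples = np.clip(k_samples, 1e-6, None)      # keep the parameter physical

q = streamflow(rainfall, k_samples)

# Percentile-based 95% interval from the empirical cumulative distribution --
# for skewed outputs this is more informative than a variance-based interval.
lo, hi = np.percentile(q, [2.5, 97.5])
print(f"mean={q.mean():.2f}, var={q.var():.2f}, 95% CI=({lo:.2f}, {hi:.2f})")
```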

2.
Dokka, Trivikram; Goerigk, Marc; Roy, Rahul. Optimization Letters (2020) 14(6): 1323–1337.

In robust optimization, the uncertainty set is used to model all possible outcomes of uncertain parameters. In the classic setting, one assumes that this set is provided by the decision maker based on the data available to her. Only recently has it been recognized that the process of building useful uncertainty sets is in itself a challenging task that requires mathematical support. In this paper, we propose an approach that goes beyond the classic setting by assuming that multiple uncertainty sets are prepared, each with a weight showing the degree of belief that the set is a “true” model of uncertainty. We consider theoretical aspects of this approach and show that it is as easy to model as the classic setting. In an extensive computational study using a shortest path problem based on real-world data, we auto-tune uncertainty sets to the available data, and show that with regard to out-of-sample performance, the combination of multiple sets can give better results than each set on its own.
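As a rough illustration of the multiple-uncertainty-set idea (not the paper's shortest-path formulation), the sketch below evaluates the weighted worst-case cost of a fixed binary decision over several box-shaped uncertainty sets, where each set carries a weight expressing the degree of belief that it is the “true” model of uncertainty. The cost vectors, boxes and weights are made up for the example.

```python
import numpy as np

# Decision x: which arcs/items are selected (binary vector), fixed here for illustration.
x = np.array([1, 0, 1, 1])

# Nominal cost and two candidate uncertainty sets (boxes around the nominal),
# each with a weight expressing the degree of belief that it is the "true" set.
c_nom = np.array([3.0, 5.0, 2.0, 4.0])
uncertainty_sets = [
    {"weight": 0.7, "lower": c_nom * 0.9, "upper": c_nom * 1.1},   # tight box
    {"weight": 0.3, "lower": c_nom * 0.7, "upper": c_nom * 1.5},   # loose box
]

def worst_case_cost(x, lower, upper):
    """For a box uncertainty set, the worst case of c @ x is attained at the
    upper bound of every cost coefficient with x_i = 1."""
    return float(upper @ x)

# Weighted combination of worst-case costs over all prepared uncertainty sets.
weighted_cost = sum(s["weight"] * worst_case_cost(x, s["lower"], s["upper"])
                    for s in uncertainty_sets)
print(f"weighted worst-case cost of x: {weighted_cost:.2f}")
```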


3.
This paper provides significant numerical evidence for the out-of-sample forecasting ability of linear Gaussian interest rate models with unobservable underlying factors. We calibrate one-, two- and three-factor linear Gaussian models using the Kalman filter on two different bond yield data sets and compare their out-of-sample forecasting performance. One-step-ahead as well as four-step-ahead out-of-sample forecasts are analysed based on weekly data. When evaluating the one-step-ahead forecasts, it is shown that a one-factor model may be adequate when only the short-dated or only the long-dated yields are considered, but two- and three-factor models perform significantly better when the entire yield spectrum is considered. Furthermore, the results demonstrate that the predictive ability of multi-factor models remains intact far ahead out-of-sample, with accurate predictions available up to one year after the last calibration for one data set and up to three months after the last calibration for the second, more volatile data set. The experimental data cover two periods with different yield volatilities, and the stability of model parameters after calibration in both cases is deemed to be both significant and practically useful. When it comes to four-step-ahead predictions, the quality of the forecasts deteriorates for all models, as can be expected, but the advantage of using a multi-factor model over a one-factor model remains significant.
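For readers unfamiliar with the filtering step, the following is a generic linear-Gaussian Kalman filter that produces one-step-ahead observation forecasts; the transition and loading matrices are illustrative placeholders, not a calibrated one-, two- or three-factor term-structure model.

```python
import numpy as np

def kalman_forecast(y, A, C, Q, R, x0, P0):
    """Standard Kalman filter for x_t = A x_{t-1} + w,  y_t = C x_t + v.
    Returns the one-step-ahead forecasts of the observations."""
    x, P = x0, P0
    forecasts = []
    for yt in y:
        # Predict
        x_pred = A @ x
        P_pred = A @ P @ A.T + Q
        forecasts.append(C @ x_pred)           # one-step-ahead observation forecast
        # Update with the new observation
        S = C @ P_pred @ C.T + R
        K = P_pred @ C.T @ np.linalg.inv(S)
        x = x_pred + K @ (yt - C @ x_pred)
        P = (np.eye(len(x)) - K @ C) @ P_pred
    return np.array(forecasts)

# Illustrative two-factor setup with three observed yields (weekly data).
rng = np.random.default_rng(1)
A = np.diag([0.98, 0.90])                             # latent factor persistence
C = np.array([[1.0, 0.2], [1.0, 0.5], [1.0, 0.9]])    # factor loadings per maturity
Q = 0.01 * np.eye(2)
R = 0.005 * np.eye(3)
y = rng.normal(size=(100, 3)) * 0.1 + 0.03            # placeholder yield observations

yhat = kalman_forecast(y, A, C, Q, R, x0=np.zeros(2), P0=np.eye(2))
print(yhat[-1])                                       # forecast for the final week
```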

4.
This article suggests a method for variable and transformation selection based on posterior probabilities. Our approach allows for consideration of all possible combinations of untransformed and transformed predictors along with transformed and untransformed versions of the response. To transform the predictors in the model, we use a change-point model, or “change-point transformation,” which can yield more interpretable models and transformations than the standard Box–Tidwell approach. We also address the problem of model uncertainty in the selection of models. By averaging over models, we account for the uncertainty inherent in inference based on a single model chosen from the set of models under consideration. We use a Markov chain Monte Carlo model composition (MC3) method, which allows us to average over linear regression models when the space of models under consideration is very large; this treats the selection of variables and transformations at the same time. In an example, we show that model averaging improves predictive performance as compared with any single model that might reasonably be selected, both in terms of overall predictive score and of the coverage of prediction intervals. Software to apply the proposed methodology is available via StatLib.
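A toy MC3-style sampler over variable-inclusion indicators for a linear regression is sketched below, using the common BIC approximation to the marginal likelihood; it is not the article's implementation (which also averages over transformations of the predictors and the response), and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative data: 5 candidate predictors, only the first two matter.
n, p = 200, 5
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=1.0, size=n)

def bic_of(model):
    """BIC of an OLS fit with intercept; lower is better. 'model' is a boolean
    inclusion vector over the candidate predictors."""
    Xm = np.column_stack([np.ones(n), X[:, model]])
    beta, *_ = np.linalg.lstsq(Xm, y, rcond=None)
    rss = np.sum((y - Xm @ beta) ** 2)
    k = Xm.shape[1]
    return n * np.log(rss / n) + k * np.log(n)

# MC3: propose flipping one inclusion indicator, accept with a Metropolis step
# on the BIC-approximated posterior model probability exp(-BIC/2).
model = np.zeros(p, dtype=bool)
current_bic = bic_of(model)
visits = {}
for _ in range(5000):
    proposal = model.copy()
    j = rng.integers(p)
    proposal[j] = ~proposal[j]
    prop_bic = bic_of(proposal)
    if rng.random() < np.exp((current_bic - prop_bic) / 2):
        model, current_bic = proposal, prop_bic
    key = tuple(model)
    visits[key] = visits.get(key, 0) + 1

# Approximate posterior inclusion probabilities from the visit frequencies.
total = sum(visits.values())
incl = np.zeros(p)
for key, count in visits.items():
    incl += np.array(key) * count / total
print("posterior inclusion probabilities:", np.round(incl, 2))
```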

5.
This article deals with non-linear model parameter estimation from experimental data. Because a rigorous identifiability analysis is difficult to perform for non-linear models, parameter estimation is performed in such a way that uncertainty in the estimated parameter values is represented by the range of model-use results obtained when the model is used for a certain purpose. Using this approach, the article presents a simulation study whose objective is to discover whether the estimation of model parameters can be improved, so that a sufficiently small range of model-use results is obtained. The results of the study indicate that, from the plant measurements available for the estimation of model parameters, it is possible to extract the data that are important for estimating the parameters relative to a certain model use. If these data are improved by a proper measurement campaign (e.g. proper choice of measured variables, better accuracy, higher measurement frequency), it is to be expected that a model valid for a certain use will be obtained. The simulation study is performed for an activated sludge model from wastewater treatment, and the estimation of model parameters is done by Monte Carlo simulation.

6.
Bayesian model averaging (BMA) is the state-of-the-art approach for overcoming model uncertainty. Yet, especially on small data sets, the results yielded by BMA might be sensitive to the prior over the models. Credal model averaging (CMA) addresses this problem by substituting the single prior over the models with a set of priors (a credal set). This approach solves the problem of how to choose the prior over the models and automates sensitivity analysis. We discuss various CMA algorithms for building an ensemble of logistic regressors characterized by different sets of covariates. We show how CMA can be appropriately tuned to the case in which one is prior-ignorant and to the case in which domain knowledge is available. CMA detects prior-dependent instances, namely instances in which a different class is more probable depending on the prior over the models. On such instances CMA suspends judgment, returning multiple classes. We thoroughly compare different BMA and CMA variants on a real case study, predicting the presence of Alpine marmot burrows in an Alpine valley. We find that BMA is almost a random guesser on the instances recognized as prior-dependent by CMA.
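The credal idea can be illustrated with a toy calculation: instead of fixing one prior over the candidate models, sweep a set of priors and check whether the model-averaged prediction crosses the decision threshold; if it does, the instance is prior-dependent and judgment is suspended. All numbers below (per-model class probabilities, marginal likelihoods, the prior interval) are hypothetical.

```python
import numpy as np

# Two candidate logistic-regression models; for one test instance each model
# assigns a probability to the positive class (values are illustrative).
p_pos_given_model = np.array([0.65, 0.35])

# Marginal likelihoods of the models on the training data (illustrative).
marginal_lik = np.array([1.0e-4, 1.2e-4])

def predict(prior):
    """Bayesian model averaging of the positive-class probability
    under a given prior over the models."""
    posterior = prior * marginal_lik
    posterior /= posterior.sum()
    return float(posterior @ p_pos_given_model)

# Credal set of priors: every prior with weight on model 1 between 0.2 and 0.8.
predictions = [predict(np.array([w, 1.0 - w])) for w in np.linspace(0.2, 0.8, 61)]
lo, hi = min(predictions), max(predictions)

if lo > 0.5 or hi < 0.5:
    print(f"determinate prediction, P(positive) in [{lo:.2f}, {hi:.2f}]")
else:
    print(f"prior-dependent instance, P(positive) in [{lo:.2f}, {hi:.2f}]: suspend judgment")
```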

7.
Recently the SABR model has been developed to manage the option smile observed in derivatives markets. Typically, calibration of such models is straightforward, as adequate data are available for robust extraction of the parameters required as inputs to the model. This paper considers calibration of the model in situations where the input data are very sparse. Although this requires some creative decision making, the algorithms developed here are remarkably robust and can be used confidently for marking to market and hedging of option portfolios.

8.
Obtaining reliable estimates of the parameters of a probabilistic classification model is often a challenging problem because the amount of available training data is limited. In this paper, we present a classification approach based on belief functions that makes the uncertainty resulting from limited amounts of training data explicit and thereby improves classification performance. In addition, we model classification as an active information acquisition problem where features are sequentially selected by maximizing the expected information gain with respect to the current belief distribution, thus reducing uncertainty as quickly as possible. For this, we consider different measures of uncertainty for belief functions and provide efficient algorithms for computing them. As a result, only a small subset of features needs to be extracted without negatively impacting the recognition rate. We evaluate our approach on an object recognition task where we compare different evidential and Bayesian methods for obtaining likelihoods from training data, and we investigate the influence of different uncertainty measures on the feature selection process.

9.
Expert knowledge in the form of mathematical models can be considered sufficient statistics of all prior experimentation in the domain, embodying generic or abstract knowledge of it. When used in a probabilistic framework, such models provide a sound foundation for data mining, inference, and decision making under uncertainty. We describe a methodology for encapsulating knowledge in the form of ordinary differential equations (ODEs) in dynamic Bayesian networks (DBNs). The resulting DBN framework can handle both data and model uncertainty in a principled manner, can be used for temporal data mining with noisy and missing data, and can be used to re-estimate model parameters automatically using data streams. A standard assumption when performing inference in DBNs is that time steps are fixed. Generally, the time step chosen is small enough to capture the dynamics of the most rapidly changing variable. This can result in DBNs having a natural time step that is very short, leading to inefficient inference; this is particularly an issue for DBNs derived from ODEs and for systems where the dynamics are not uniform over time. We propose an alternative to the fixed time step inference used in standard DBNs. In our algorithm, the DBN automatically adapts the time step lengths to suit the dynamics in each step. The resulting system allows us to efficiently infer probable values of hidden variables using multiple time series of evidence, some of which may be sparse, noisy or incomplete. We evaluate our approach with a DBN based on a variant of the van der Pol oscillator, and demonstrate an example where it gives more accurate results than the standard approach, but using only one tenth the number of time steps. We also apply our approach to a real-world example in critical care medicine. By incorporating knowledge in the form of an existing ODE model, we have built a DBN framework for efficiently predicting individualised patient responses using the available bedside and lab data.
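The step-length issue can be illustrated with the ODE component alone: the sketch below integrates a stiff van der Pol oscillator with an adaptive-step solver, whose internal step lengths vary widely as the dynamics change, and compares this with a fixed evaluation grid. It shows only the motivation for adaptive time steps, not the authors' DBN inference algorithm; the stiffness parameter and grids are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

MU = 5.0   # stiffness parameter: larger mu -> dynamics far from uniform over time

def van_der_pol(t, state):
    x, v = state
    return [v, MU * (1.0 - x ** 2) * v - x]

t_span, y0 = (0.0, 30.0), [2.0, 0.0]

# Adaptive-step integration: the solver concentrates steps where the
# dynamics change rapidly and takes long steps on the slow segments.
adaptive = solve_ivp(van_der_pol, t_span, y0, rtol=1e-6, atol=1e-8)

# Fixed coarse grid for comparison (analogous to a fixed DBN time step).
fixed_t = np.linspace(*t_span, 300)
fixed = solve_ivp(van_der_pol, t_span, y0, t_eval=fixed_t, rtol=1e-6, atol=1e-8)

print(f"adaptive solver used {adaptive.t.size} internal steps;")
print(f"step lengths range from {np.diff(adaptive.t).min():.4f} to {np.diff(adaptive.t).max():.4f}")
print(f"fixed grid forced {fixed_t.size} evaluation points regardless of the dynamics")
```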

10.
Guaranteed nonlinear parameter estimation in knowledge-based models
Knowledge-based models are ubiquitous in pure and applied sciences. They often involve unknown parameters to be estimated from experimental data. This is usually much more difficult than for black-box models, which are only intended to mimic a given input–output behavior. The output of knowledge-based models is almost always nonlinear in their parameters, so that linear least squares cannot be used, and analytical solutions for the model equations are seldom available. Moreover, since the parameters have some physical meaning, it is not enough to find numerical values of these quantities for which the model fits the data reasonably well. One would like, for instance, to make sure that the parameters to be estimated are identifiable; if this is not the case, all equivalent solutions should be provided. The uncertainty in the parameters resulting from the measurement noise and the approximate nature of the model should also be characterized. This paper describes how guaranteed methods based on interval analysis may contribute to these tasks. Examples in linear and nonlinear compartmental modeling, widely used in biology, are provided.
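A simplified set-inversion sketch in the spirit of interval methods is given below: the parameter interval is bisected, and only the boxes whose guaranteed output enclosure is consistent with the measured data intervals are kept. The one-compartment decay model, whose monotonicity makes the interval arithmetic trivial, and all the numbers are hypothetical; the paper's guaranteed machinery handles far more general nonlinear models.

```python
# Hypothetical one-compartment model y(t) = y0 * exp(-k * t) with unknown
# elimination rate k.  Because the model is monotone in k, the output interval
# for a parameter box [k_lo, k_hi] is obtained from its endpoints.
import math

Y0 = 10.0
TIMES = [1.0, 2.0, 4.0]
# Measurement intervals (value +/- bounded noise) at each sampling time.
MEASURED = [(5.8, 6.4), (3.4, 4.0), (1.1, 1.7)]

def output_interval(k_lo, k_hi, t):
    """Guaranteed enclosure of y(t) over the parameter box [k_lo, k_hi]."""
    return Y0 * math.exp(-k_hi * t), Y0 * math.exp(-k_lo * t)

def consistent(k_lo, k_hi):
    """Box is kept only if every output enclosure intersects its data interval."""
    for t, (m_lo, m_hi) in zip(TIMES, MEASURED):
        y_lo, y_hi = output_interval(k_lo, k_hi, t)
        if y_hi < m_lo or y_lo > m_hi:
            return False
    return True

# Bisection over an initial parameter box, keeping small feasible boxes.
boxes, accepted = [(0.0, 2.0)], []
while boxes:
    k_lo, k_hi = boxes.pop()
    if not consistent(k_lo, k_hi):
        continue                       # guaranteed inconsistent: discard
    if k_hi - k_lo < 1e-3:
        accepted.append((k_lo, k_hi))  # small consistent box: keep
    else:
        mid = 0.5 * (k_lo + k_hi)
        boxes += [(k_lo, mid), (mid, k_hi)]

print(f"feasible k between {min(b[0] for b in accepted):.3f} "
      f"and {max(b[1] for b in accepted):.3f}")
```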

11.
This article introduces graphical sensitivity analysis for multidimensional scaling. This new technique is designed to combat two problems associated with multidimensional scaling analyses: the possibility of local minima and the uncertainty regarding sensitivity of the solution to changes in the parameters. Graphical sensitivity analysis is currently available in ViSta-MDS, a test bed for graphical model examination. By graphically manipulating points in the solution space, analysts may examine the sensitivity of the solution to changes in the model parameters. Furthermore, the analyst may search for alternative solutions that represent local minima. An example of graphical sensitivity analysis using ViSta-MDS is described.

12.
A hierarchical model is developed for the joint mortality analysis of pension scheme datasets. The proposed model allows for a rigorous statistical treatment of missing data. While our approach works for any missing-data pattern, we are particularly interested in a scenario where some covariates are observed for members of one pension scheme but not the other. Therefore, our approach allows for the joint modelling of datasets which contain different information about individual lives. The proposed model generalizes the specification of parametric models when accounting for covariates. We consider parameter uncertainty using Bayesian techniques. Model parametrization is analysed in order to obtain an efficient MCMC sampler, and model selection is addressed. The inferential framework described here accommodates any missing-data pattern and turns out to be useful for analysing statistical relationships among covariates. Finally, we assess the financial impact of using the covariates, and of optimally using the whole available sample when combining data from different mortality experiences.

13.
Input and output data, under uncertainty, must be taken into account as an essential part of data envelopment analysis (DEA) models in practice. Many researchers have dealt with this kind of problem using fuzzy approaches, DEA models with interval data or probabilistic models. This paper presents an approach to scenario-based robust optimization for conventional DEA models. To account for uncertainty in DEA models, different scenarios are formulated with specified probabilities for input and output data instead of using point estimates. The proposed robust DEA model is aimed at ranking decision-making units (DMUs) based on their sensitivity analysis within the given set of scenarios, considering both feasibility and optimality factors in the objective function. The model is based on the technique proposed by Mulvey et al. (1995) for solving stochastic optimization problems. The effect of DMUs on the production possibility set is calculated using the Monte Carlo method in order to extract weights for the feasibility and optimality factors in the goal programming model. The proposed approach is illustrated and verified by a case study of an engineering company.

14.
Finite mixture distributions arise in sampling a heterogeneous population. Data drawn from such a population will exhibit extra variability relative to any single subpopulation. Statistical models based on finite mixtures can assist in the analysis of categorical and count outcomes when standard generalized linear models (GLMs) cannot adequately express the variability observed in the data. We propose an extension of GLMs where the response follows a finite mixture distribution and the regression of interest is linked to the mixture’s mean. This approach may be preferred over a finite mixture of regressions when the population mean is of interest; here, only one regression must be specified and interpreted in the analysis. A technical challenge is that the mixture’s mean is a composite parameter that does not appear explicitly in the density. The proposed model maintains its link to the regression through a certain random effects structure and is completely likelihood-based. We consider typical GLM cases where means are either real-valued, constrained to be positive, or constrained to be on the unit interval. The resulting model is applied to two example datasets through Bayesian analysis. Accounting for the extra variation is seen to improve residual plots and to produce widened prediction intervals that reflect the uncertainty. Supplementary materials for this article are available online.

15.
The common difficulty in solving a Binary Linear Programming (BLP) problem is uncertainty in the parameters and in the model structure. Previous studies of BLP problems normally focus on parameter uncertainty or on model structure uncertainty, but not on both. This paper develops an interval-coefficient Fuzzy Binary Linear Programming (IFBLP) model and its solution for BLP problems under uncertainty in both the parameters and the model structure. In the IFBLP, parameter uncertainty is represented by interval coefficients, and model structure uncertainty is reflected by fuzzy constraints and a fuzzy goal. A novel and efficient methodology is used to reduce the IFBLP to two extreme crisp-coefficient BLPs, called the ‘best optimum model’ and the ‘worst optimum model’. The results of these two crisp-coefficient extreme models bound all outcomes of the IFBLP. One of the contributions of this paper is a mathematically sound approach (based on some mathematical developments) to finding the boundaries of the optimal alpha values, so that the linearity of the model can be maintained during the conversions. The proposed approach is applied to a traffic noise control plan to demonstrate its capability of dealing with uncertainties.

16.
The calibration of stochastic differential equations used to model spot prices in electricity markets is investigated. As an alternative to relying on standard likelihood maximization, the adoption of a fully Bayesian paradigm is explored, which relies on Markov chain Monte Carlo (MCMC) stochastic simulation and provides the posterior distributions of the model parameters. The proposed method is applied to one- and two-factor stochastic models, using both simulated and real data. The results demonstrate good agreement between the maximum likelihood and MCMC point estimates. The latter approach, however, provides a more complete characterization of the model uncertainty, information that can be exploited to obtain a more realistic assessment of the forecasting error. In order to further validate the MCMC approach, the posterior distribution of the Italian electricity price volatility is explored for different maturities and compared with the corresponding maximum likelihood estimates.
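As a hedged sketch of the MCMC side of such a calibration, the code below runs a random-walk Metropolis sampler for the mean-reversion speed, long-run level and volatility of a one-factor Ornstein–Uhlenbeck-type model, using the exact Gaussian transition density on a simulated path. The data, priors and proposal scales are illustrative; this is not the paper's model of Italian electricity prices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# --- Simulate a daily path from a one-factor OU model (ground truth for the demo) ---
kappa_true, mu_true, sigma_true, dt = 2.0, 4.0, 0.8, 1.0 / 252
n = 1000
x = np.empty(n)
x[0] = mu_true
for t in range(1, n):
    mean = mu_true + (x[t - 1] - mu_true) * np.exp(-kappa_true * dt)
    var = sigma_true ** 2 * (1 - np.exp(-2 * kappa_true * dt)) / (2 * kappa_true)
    x[t] = rng.normal(mean, np.sqrt(var))

def log_posterior(theta):
    """Exact-discretisation Gaussian log-likelihood plus weakly informative priors."""
    kappa, mu, sigma = theta
    if kappa <= 0 or sigma <= 0:
        return -np.inf
    mean = mu + (x[:-1] - mu) * np.exp(-kappa * dt)
    var = sigma ** 2 * (1 - np.exp(-2 * kappa * dt)) / (2 * kappa)
    loglik = np.sum(stats.norm.logpdf(x[1:], mean, np.sqrt(var)))
    logprior = (stats.norm.logpdf(mu, 0, 10) + stats.expon.logpdf(kappa, scale=10)
                + stats.expon.logpdf(sigma, scale=10))
    return loglik + logprior

# --- Random-walk Metropolis ---
theta = np.array([1.0, float(x.mean()), 1.0])
logp = log_posterior(theta)
step = np.array([0.3, 0.05, 0.05])
samples = []
for _ in range(20_000):
    prop = theta + step * rng.normal(size=3)
    logp_prop = log_posterior(prop)
    if np.log(rng.random()) < logp_prop - logp:
        theta, logp = prop, logp_prop
    samples.append(theta.copy())

post = np.array(samples[5000:])           # discard burn-in
print("posterior means (kappa, mu, sigma):", post.mean(axis=0).round(3))
```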

17.
In almost all realistic circumstances, such as health risk assessment and uncertainty analysis of atmospheric dispersion, it is essential to include all available information in the modelling. The parameters associated with a particular model may involve different kinds of variability, imprecision and uncertainty. Available information is most often interpreted in a probabilistic sense, and probability theory is a well-established framework for measuring such variability. However, not all available information, data or model parameters affected by variability, imprecision and uncertainty can be handled by traditional probability theory. Uncertainty or imprecision may arise from incomplete information or data, measurement errors, or data obtained from expert judgement or from subjective interpretation of available data or information. Model parameters and data may therefore be affected by subjective uncertainty, which traditional probability theory is inappropriate to represent. Possibility theory and fuzzy set theory provide another branch of mathematics used to describe parameters about which knowledge is insufficient or vague. In this paper, an attempt is made to combine probabilistic and possibilistic knowledge and to characterise the resulting uncertainty. The paper describes an algorithm for combining a probability distribution and an interval-valued fuzzy number, applied to environmental risk modelling with a case study. The primary aim of this paper is to present the proposed method. Computer code for the proposed method is implemented in MATLAB.
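A much-simplified hybrid propagation sketch is shown below: one input is sampled probabilistically by Monte Carlo, the other is described by a triangular fuzzy number processed through alpha-cuts, and the output risk percentile is reported as an interval at each alpha level. The risk function, distributions and fuzzy number are hypothetical, and the sketch uses an ordinary (not interval-valued) fuzzy number, unlike the paper's MATLAB implementation.

```python
import numpy as np

rng = np.random.default_rng(4)

def risk(concentration, exposure_factor):
    """Hypothetical risk model: risk grows with concentration and exposure."""
    return concentration * exposure_factor

# Probabilistic input: contaminant concentration, lognormally distributed.
conc = rng.lognormal(mean=0.0, sigma=0.5, size=5000)

# Possibilistic input: exposure factor as a triangular fuzzy number (a, m, b).
a, m, b = 0.5, 1.0, 2.0

def alpha_cut(alpha):
    """Interval of the triangular fuzzy number at membership level alpha."""
    return a + alpha * (m - a), b - alpha * (b - m)

# Hybrid propagation: for each alpha level, push the Monte Carlo sample through
# the model at both ends of the alpha-cut and report an interval for the
# 95th-percentile risk.
for alpha in (0.0, 0.5, 1.0):
    lo_ef, hi_ef = alpha_cut(alpha)
    p95_lo = np.percentile(risk(conc, lo_ef), 95)
    p95_hi = np.percentile(risk(conc, hi_ef), 95)
    print(f"alpha={alpha:.1f}: 95th-percentile risk in [{p95_lo:.2f}, {p95_hi:.2f}]")
```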

18.
Estimating the probability of extreme temperature events is difficult because of limited records across time and the need to extrapolate the distributions of these events, as opposed to just the mean, to locations where observations are not available. Another related issue is the need to characterize the uncertainty in the estimated probability of extreme events at different locations. Although the tools for statistical modeling of univariate extremes are well-developed, extending these tools to model spatial extreme data is an active area of research. In this paper, in order to make inference about spatial extreme events, we introduce a new nonparametric model for extremes. We present a Dirichlet-based copula model that is a flexible alternative to parametric copula models such as the normal and t-copula. The proposed modelling approach is fitted using a Bayesian framework that allows us to take into account different sources of uncertainty in the data and models. We apply our methods to annual maximum temperature values in the east-south-central United States.

19.
The non-probabilistic convex model utilizes a convex set to quantify the uncertainty domain of uncertain-but-bounded parameters, which is very effective for structural uncertainty analysis with limited or poor-quality experimental data. To overcome the complexity and diversity of the formulations of current convex models, this paper proposes a unified framework for the construction of non-probabilistic convex models. By introducing a correlation analysis technique, the mathematical expression of a convex model can be conveniently formulated once the correlation matrix of the uncertain parameters is created. More importantly, at the theoretical level, an evaluation criterion for convex modelling methods is proposed, which can be regarded as a test standard for validity verification of newly proposed convex modelling methods. At the practical level, two model assessment indexes are proposed, by which the adaptability of different convex models to a specific uncertain problem with given experimental samples can be estimated. Four numerical examples are investigated to demonstrate the effectiveness of the present study.
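One common special case of correlation-based construction, an ellipsoidal convex model, can be sketched as follows: estimate the sample mean and covariance of the uncertain parameters and take the smallest ellipsoid of that shape enclosing all experimental samples as the uncertainty domain. The data are synthetic, and the sketch does not reproduce the paper's unified framework or its assessment indexes.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic experimental samples of two correlated uncertain parameters.
samples = rng.multivariate_normal(mean=[10.0, 0.5],
                                  cov=[[1.0, 0.3], [0.3, 0.04]], size=40)

# Correlation/covariance analysis of the samples.
center = samples.mean(axis=0)
cov = np.cov(samples, rowvar=False)
cov_inv = np.linalg.inv(cov)

# Ellipsoidal convex model: { x : (x - c)^T C^{-1} (x - c) <= r^2 },
# with r chosen so that every experimental sample lies inside the ellipsoid.
d2 = np.einsum("ij,jk,ik->i", samples - center, cov_inv, samples - center)
r2 = d2.max()

def in_convex_model(x):
    """Membership test for a candidate parameter vector."""
    diff = np.asarray(x) - center
    return float(diff @ cov_inv @ diff) <= r2

print("radius^2 =", round(r2, 3))
print("nominal point inside:", in_convex_model(center))
print("far point inside:", in_convex_model(center + [5.0, 1.0]))
```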

20.
We consider a problem where a company must decide the order in which to launch new products within a given time horizon and budget constraints, and where the parameters of the adoption rate of these new products are subject to uncertainty. This uncertainty can bring significant change to the optimal launch sequence. We present a robust optimization approach that incorporates such uncertainty on the Bass diffusion model for new products as well as on the price response function of the partners that collaborate with the company to bring its products to market. The decision-maker optimizes his worst-case profit over an uncertainty set in which nature chooses the time periods where (integer) units of the budgets of uncertainty are used for worst impact; this leads to uncertainty sets with binary variables. We show that a conservative approximation of the robust problem can nonetheless be reformulated as a mixed integer linear programming problem, which therefore has the same structure as the deterministic problem and can be solved in a tractable manner. Our model also incorporates contracts with potential commercialization partners. Finally, we illustrate our approach on numerical experiments. The key output of our work is a sequence of product launch times that protects the decision-maker against parameter uncertainty in the adoption rates of the new products and in the response of potential partners to partnership offers.
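For reference, a small sketch of the Bass diffusion model that drives the adoption dynamics in such launch problems is given below; the market potential and the innovation and imitation coefficients are illustrative, and the robust launch-sequencing formulation itself is not reproduced.

```python
import numpy as np

def bass_cumulative(t, p, q):
    """Cumulative adoption fraction F(t) of the Bass diffusion model,
    with innovation coefficient p and imitation coefficient q."""
    e = np.exp(-(p + q) * t)
    return (1.0 - e) / (1.0 + (q / p) * e)

# Illustrative parameters; under uncertainty, p and q would range over a set.
m, p, q = 100_000, 0.03, 0.38          # market potential and Bass coefficients
periods = np.arange(0, 11)             # planning horizon in periods

F = bass_cumulative(periods, p, q)
new_adopters = m * np.diff(F, prepend=0.0)
print(np.round(new_adopters).astype(int))   # adoptions per period after launch
```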

