Similar Documents
 A total of 20 similar documents were retrieved.
1.
The quality of decisions in inventory management models depends on the accuracy of the parameter estimates used for decision making. In many situations, error in decision making is unavoidable. In such cases, sensitivity analysis is necessary for better implementation of the model. Though the newsvendor model is one of the most researched inventory models, little is known about its robustness. In this paper, we perform a sensitivity analysis of the classical newsvendor model. Conditions for symmetry/skewness of the cost deviation (i.e., the deviation of the expected demand–supply mismatch cost from its minimum) are identified. These conditions are closely linked with symmetry/skewness of the demand density function. A lower bound on the cost deviation is established for symmetric unimodal demand distributions. Based on demonstrations of this lower bound, we find the newsvendor model to be sensitive to sub-optimal ordering decisions, more sensitive than the economic order quantity model. Order quantity deviation (i.e., the deviation of the order quantity from its optimum) is explored briefly. We find the magnitude of the order quantity deviation to be comparable with that of the parameter estimation error. Mean demand is identified as the most influential parameter in determining order quantity deviation.
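
A minimal numerical illustration of this kind of sensitivity check, assuming normally distributed demand with illustrative cost and demand parameters (not taken from the paper): it computes the critical-fractile optimum and the relative cost deviation for a few sub-optimal order quantities.

```python
import numpy as np
from scipy.stats import norm

def expected_mismatch_cost(q, mu, sigma, cu, co):
    """Expected overage + underage cost for normally distributed demand."""
    z = (q - mu) / sigma
    shortage = sigma * (norm.pdf(z) - z * (1.0 - norm.cdf(z)))  # E[(D - q)^+]
    overage = shortage + (q - mu)                               # E[(q - D)^+]
    return co * overage + cu * shortage

mu, sigma = 100.0, 20.0       # illustrative demand parameters
cu, co = 4.0, 1.0             # underage and overage unit costs
q_star = norm.ppf(cu / (cu + co), loc=mu, scale=sigma)  # critical-fractile optimum
c_star = expected_mismatch_cost(q_star, mu, sigma, cu, co)

# Relative cost deviation for sub-optimal order quantities around q*
for dq in (-10, -5, 0, 5, 10):
    c = expected_mismatch_cost(q_star + dq, mu, sigma, cu, co)
    print(f"order deviation {dq:+4d}: cost deviation {(c - c_star) / c_star:6.1%}")
```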

2.
Accelerated life testing of materials is used to collect failure data quickly when the lifetime of a specimen under use conditions is too long. This article considers estimation of the generalized exponential distribution parameters under step-stress partially accelerated life testing with Type-II censoring. The maximum likelihood approach is applied to derive point estimates and asymptotic confidence intervals for the model parameters. The performance of the estimators is evaluated numerically, via their mean square error, for different parameter values and different sample sizes. The average confidence interval lengths and the associated coverage probabilities are also obtained. A simulation study is conducted for illustration.
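
As a hedged sketch of the estimation machinery, the snippet below fits the generalized exponential distribution by maximum likelihood on a complete (uncensored) simulated sample and forms asymptotic confidence intervals from an inverse-Hessian approximation; the step-stress acceleration and Type-II censoring treated in the article are omitted for brevity, and all parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(theta, x):
    """Negative log-likelihood of the generalized exponential GE(alpha, lambda)."""
    log_alpha, log_lam = theta            # optimise on the log scale to stay positive
    alpha, lam = np.exp(log_alpha), np.exp(log_lam)
    return -(len(x) * (np.log(alpha) + np.log(lam))
             - lam * x.sum()
             + (alpha - 1.0) * np.log1p(-np.exp(-lam * x)).sum())

rng = np.random.default_rng(0)
alpha_true, lam_true = 2.0, 0.5           # illustrative true values
u = rng.uniform(size=200)
x = -np.log1p(-u ** (1.0 / alpha_true)) / lam_true   # inverse-CDF sampling of GE

res = minimize(neg_loglik, x0=np.zeros(2), args=(x,), method="BFGS")
alpha_hat, lam_hat = np.exp(res.x)

# Approximate 95% CIs via the delta method on the log scale
se_log = np.sqrt(np.diag(res.hess_inv))
for name, est, se in zip(("alpha", "lambda"), (alpha_hat, lam_hat), se_log):
    lo, hi = est * np.exp(-1.96 * se), est * np.exp(1.96 * se)
    print(f"{name}: {est:.3f}  95% CI ({lo:.3f}, {hi:.3f})")
```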

3.
The hybrid bootstrap uses resampling ideas to extend the duality approach to interval estimation for a parameter of interest when there are nuisance parameters. The confidence region constructed by the hybrid bootstrap may perform much better than the parametric bootstrap region in situations where the data provide substantial information about the nuisance parameter but limited information about the parameter of interest. We apply this method to estimate the location of quantitative trait loci (QTL) in the interval mapping model. The conditional distribution of quantitative traits, given flanking genetic marker genotypes, is often assumed to be a mixture of two phenotype distributions. The mixing proportions in the model represent the recombination rate between a genetic marker and the QTL and thus provide information about the unknown location of the QTL. Since recombination events are unlikely, we have less information about the location of the QTL than about the other parameters. This observation makes a hybrid approach to interval estimation for the QTL appealing, especially since the necessary distribution theory, which is often a challenge for mixture models, can be handled by bootstrap simulation.

4.
Input and output data, under uncertainty, must be taken into account as an essential part of data envelopment analysis (DEA) models in practice. Many researchers have dealt with this kind of problem using fuzzy approaches, DEA models with interval data, or probabilistic models. This paper presents an approach to scenario-based robust optimization for conventional DEA models. To account for the uncertainty in DEA models, different scenarios are formulated with specified probabilities for the input and output data instead of using point estimates. The proposed robust DEA model is aimed at ranking decision-making units (DMUs) based on their sensitivity analysis within the given set of scenarios, considering both feasibility and optimality factors in the objective function. The model is based on the technique proposed by Mulvey et al. (1995) for solving stochastic optimization problems. The effect of DMUs on the production possibility set is calculated using the Monte Carlo method in order to extract weights for the feasibility and optimality factors in the goal programming model. The proposed approach is illustrated and verified by a case study of an engineering company.

5.
In this paper, the transformation method is introduced as a powerful approach for both the simulation and the analysis of systems with uncertain model parameters. Based on the concept of α-cuts, the method represents a special implementation of fuzzy arithmetic that avoids the well-known effect of overestimation which usually arises when fuzzy arithmetic is reduced to interval computation. Systems with uncertain model parameters can thus be simulated without any artificial widening of the simulation results. As a by-product of the implementation scheme, the transformation method also provides a measure of influence to quantitatively analyze the uncertain system with respect to the effect of each uncertain model parameter on the overall uncertainty of the model output. By this, a special kind of sensitivity analysis can be defined on the basis of fuzzy arithmetic. Finally, to show the efficiency of the transformation method, the method is applied to the simulation and analysis of a model for the friction interface between the sliding surfaces of a bolted joint connection.
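
A compact sketch of the core idea behind the (reduced) transformation method, assuming independent triangular fuzzy parameters and a monotonic model: each α-cut of the output is obtained by evaluating the model at every corner combination of the input α-cut intervals, which avoids the overestimation of naive interval arithmetic. The example model and membership functions are assumptions for illustration.

```python
import itertools
import numpy as np

def alpha_cut(triangular, alpha):
    """Interval of a triangular fuzzy number (left, peak, right) at level alpha."""
    left, peak, right = triangular
    return (left + alpha * (peak - left), right - alpha * (right - peak))

def transform_method(model, fuzzy_params, levels=5):
    """Output alpha-cuts via evaluation at all corners of the input alpha-cuts
    (reduced transformation method; exact for monotonic models)."""
    cuts = []
    for a in np.linspace(0.0, 1.0, levels):
        intervals = [alpha_cut(p, a) for p in fuzzy_params]
        values = [model(*corner) for corner in itertools.product(*intervals)]
        cuts.append((a, min(values), max(values)))
    return cuts

# Illustrative model with two uncertain parameters
model = lambda k, c: k / (1.0 + c)
fuzzy_k = (0.8, 1.0, 1.3)   # triangular fuzzy parameter (left, peak, right)
fuzzy_c = (0.1, 0.2, 0.4)

for a, lo, hi in transform_method(model, (fuzzy_k, fuzzy_c)):
    print(f"alpha = {a:.2f}: output in [{lo:.3f}, {hi:.3f}]")
```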

6.
Simulation models support managers in the solution of complex problems. International agencies recommend uncertainty and global sensitivity methods as best practice in the audit, validation and application of scientific codes. However, numerical complexity, especially in the presence of a high number of factors, induces analysts to employ less informative but numerically cheaper methods. This work introduces a design for estimating global sensitivity indices from given data (including simulation input–output data) at minimum computational cost. We address the problem starting with a statistic based on the L1-norm. A formal definition of the estimators is provided and the corresponding consistency theorems are proved. The determination of confidence intervals through a bias-reducing bootstrap estimator is investigated. The strategy is applied in the identification of the key drivers of uncertainty for the complex computer code developed at the National Aeronautics and Space Administration (NASA) for assessing the risk of lunar space missions. We also introduce a symmetry result that enables the estimation of global sensitivity measures for datasets produced outside a conventional input–output functional framework.
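
The following is one plausible given-data estimator in the spirit of an L1-norm-based sensitivity statistic (not the authors' exact estimator): the input is partitioned into bins and the L1 distance between each conditional output histogram and the marginal one is averaged over bins. The test function and sample sizes are illustrative.

```python
import numpy as np

def l1_given_data_index(x, y, n_bins=20, n_y_bins=30):
    """Given-data sensitivity measure for one input: bin-weighted average L1 distance
    between the conditional output density (within an x-bin) and the marginal density."""
    edges_y = np.histogram_bin_edges(y, bins=n_y_bins)
    p_marg, _ = np.histogram(y, bins=edges_y, density=True)
    bins_x = np.quantile(x, np.linspace(0, 1, n_bins + 1))
    index, width = 0.0, np.diff(edges_y)
    for lo, hi in zip(bins_x[:-1], bins_x[1:]):
        mask = (x >= lo) & (x <= hi)
        if mask.sum() < 2:
            continue
        p_cond, _ = np.histogram(y[mask], bins=edges_y, density=True)
        index += mask.mean() * 0.5 * np.sum(np.abs(p_cond - p_marg) * width)
    return index

# Illustrative data: y depends strongly on x1, weakly on x2
rng = np.random.default_rng(1)
x1, x2 = rng.uniform(-1, 1, 10_000), rng.uniform(-1, 1, 10_000)
y = np.sin(3 * x1) + 0.1 * x2 + 0.1 * rng.normal(size=10_000)
print("x1:", round(l1_given_data_index(x1, y), 3))
print("x2:", round(l1_given_data_index(x2, y), 3))
```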

7.
Frequentist model averaging estimation has received considerable attention in recent years, but no work has yet addressed observational data with measurement error. This paper mainly considers model averaging estimation for linear measurement error models and derives the asymptotic distribution of the model averaging estimator. Following the idea of Hjort and Claeskens (2003), a confidence interval whose coverage probability of the true parameter tends to the nominal level is constructed, and this interval is shown to be asymptotically equivalent to the interval constructed from the normal approximation under the full model. Simulation results show that, when covariates are subject to measurement error, model averaging can markedly improve the efficiency of point estimation.
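
For orientation, here is a hedged sketch of frequentist model averaging for a focus regression coefficient using smoothed-AIC weights over candidate linear models; it ignores the measurement error structure and the Hjort–Claeskens confidence interval construction that are the paper's actual contributions, and all data are synthetic.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
n, beta = 200, np.array([1.0, 0.5, 0.0, 0.3])
X = rng.normal(size=(n, 4))
y = X @ beta + rng.normal(size=n)

def fit_ols(Xs, y):
    """Least-squares fit returning coefficients and residual sum of squares."""
    coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    rss = np.sum((y - Xs @ coef) ** 2)
    return coef, rss

# Candidate models: every subset of the auxiliary covariates, always keeping
# the first covariate, whose coefficient is the focus parameter.
estimates, aics = [], []
for subset in itertools.chain.from_iterable(
        itertools.combinations(range(1, 4), k) for k in range(4)):
    cols = [0, *subset]
    coef, rss = fit_ols(X[:, cols], y)
    aics.append(n * np.log(rss / n) + 2 * len(cols))
    estimates.append(coef[0])

# Smoothed-AIC weights and the model-averaged point estimate
w = np.exp(-0.5 * (np.array(aics) - min(aics)))
w /= w.sum()
print("model-averaged estimate of beta_1:", np.average(estimates, weights=w))
```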

8.
We present a simple open-source semi-intrusive computational method to propagate uncertainties through hyperelastic models of soft tissues. The proposed method is up to two orders of magnitude faster than the standard Monte Carlo method. The material model of interest can be altered by adjusting a few lines of (FEniCS) code. The method is able to (1) provide the user with statistical confidence intervals on quantities of practical interest, such as the displacement of a tumour or target site in an organ; and (2) quantify the sensitivity of the response of the organ to the associated parameters of the material model. We exercise the approach on the determination of a confidence interval on the motion of a target in the brain. We also show that, for the boundary conditions under consideration, five parameters of the Ogden–Holzapfel-like model have negligible influence on the displacement of the target zone compared to the three most influential parameters. The benchmark problems and all associated data are made available as supplementary material.
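
A minimal sketch of plain Monte Carlo propagation with a percentile uncertainty interval on a displacement-like quantity of interest; the expensive FEniCS hyperelastic solve is replaced by a placeholder surrogate (`target_displacement` is purely hypothetical), and the paper's semi-intrusive speed-up is not reproduced.

```python
import numpy as np
from scipy.stats import spearmanr

def target_displacement(mu, kappa):
    """Placeholder for an expensive hyperelastic solve returning the
    displacement of a target site (purely illustrative surrogate)."""
    return 1.0e-3 * kappa / mu

rng = np.random.default_rng(3)
n = 2000
mu = rng.lognormal(mean=np.log(1.0e3), sigma=0.2, size=n)     # shear-modulus-like parameter
kappa = rng.lognormal(mean=np.log(5.0e3), sigma=0.1, size=n)  # bulk-modulus-like parameter

u = np.array([target_displacement(m, k) for m, k in zip(mu, kappa)])
lo, hi = np.percentile(u, [2.5, 97.5])
print(f"mean displacement {u.mean():.4e}, 95% uncertainty interval [{lo:.4e}, {hi:.4e}]")

# Crude sensitivity check: rank correlation between each parameter and the output
print("rho(mu, u)    =", round(spearmanr(mu, u)[0], 2))
print("rho(kappa, u) =", round(spearmanr(kappa, u)[0], 2))
```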

9.
The Shadow Prior     
In this article we consider posterior simulation in models with constrained parameter or sampling spaces. Constraints on the support of sampling and prior distributions give rise to a normalization constant in the complete conditional posterior distribution for the (hyper-) parameters of the respective distribution, complicating posterior simulation.

To mitigate the problem of evaluating normalization constants, we propose a computational approach based on model augmentation. We include an additional level in the probability model to separate the (hyper-) parameter from the constrained probability model, and we refer to this additional level in the probability model as a shadow prior. This approach can significantly reduce the overall computational burden if the original (hyper-) prior includes a complicated structure, but a simple form is chosen for the shadow prior, for example, if the original prior includes a mixture model or multivariate distribution, and the shadow prior defines a set of shadow parameters that are iid given the (hyper-) parameters. Although introducing the shadow prior changes the posterior inference on the original parameters, we argue that by appropriate choices of the shadow prior, the change is minimal and posterior simulation in the augmented probability model provides a meaningful approximation to the desired inference. Data used in this article are available online.

10.
Stochastic simulations applied to black-box computer experiments are becoming more widely used to evaluate the reliability of systems. Yet, reliability evaluation of computer experiments involving many replications of simulations can take significant computational resources as simulators become more realistic. To speed this up, importance sampling coupled with a near-optimal sampling allocation for these experiments has recently been proposed to efficiently estimate the probability associated with the stochastic system output. In this study, we establish the central limit theorem for the probability estimator from such a procedure and construct an asymptotically valid confidence interval to quantify estimation uncertainty. We apply the proposed approach to a numerical example and present a case study for evaluating the structural reliability of a wind turbine.
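
A minimal sketch of the underlying estimator, assuming a one-dimensional standard normal input and a normal importance-sampling proposal shifted toward the failure region; the probability estimate and its CLT-based confidence interval are formed from the weighted failure indicators. The simulator, threshold and proposal are illustrative assumptions, and the paper's near-optimal replication allocation is not shown.

```python
import numpy as np
from scipy.stats import norm

def system_output(x, rng):
    """Stochastic simulator: noisy response of the system at input x (illustrative)."""
    return x + 0.3 * rng.normal()

threshold, n = 4.0, 20_000
rng = np.random.default_rng(4)

# Importance sampling: shift the input density N(0, 1) toward the failure region
x = rng.normal(loc=3.0, scale=1.0, size=n)
weights = norm.pdf(x) / norm.pdf(x, loc=3.0, scale=1.0)   # likelihood ratios
indicators = np.array([system_output(xi, rng) > threshold for xi in x])

z = weights * indicators
p_hat = z.mean()
se = z.std(ddof=1) / np.sqrt(n)                           # CLT-based standard error
print(f"P(output > {threshold}) ~= {p_hat:.2e}  "
      f"95% CI ({p_hat - 1.96 * se:.2e}, {p_hat + 1.96 * se:.2e})")
```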

11.
We analyze the approximation quality of the discrete-time decomposition approach, compared to simulation, with respect to the expected value and the 95th percentile of waiting time. For both performance measures, we use OLS regression models to compute point estimates, and quantile regression models to compute interval estimates, of the decomposition error. The ANOVA reveals the major factors influencing the decomposition error, while the regression models are demonstrated to provide accurate forecasts and precise confidence intervals for it.
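
As a hedged illustration of the estimation strategy, the sketch below fits an OLS model for the point estimate of decomposition error and two quantile regressions for interval bounds using statsmodels; the predictors and data are synthetic placeholders rather than the factors of the study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the experimental design: decomposition error vs. two factors
rng = np.random.default_rng(5)
df = pd.DataFrame({
    "utilization": rng.uniform(0.5, 0.95, 500),
    "cv_service": rng.uniform(0.2, 1.5, 500),
})
df["error"] = (0.05 * df.utilization + 0.1 * df.cv_service
               + 0.05 * df.cv_service * rng.standard_normal(500))

ols = smf.ols("error ~ utilization + cv_service", data=df).fit()
q05 = smf.quantreg("error ~ utilization + cv_service", data=df).fit(q=0.05)
q95 = smf.quantreg("error ~ utilization + cv_service", data=df).fit(q=0.95)

new = pd.DataFrame({"utilization": [0.9], "cv_service": [1.0]})
print("point estimate:", float(ols.predict(new).iloc[0]))
print("90% interval:  ", (float(q05.predict(new).iloc[0]), float(q95.predict(new).iloc[0])))
```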

12.
Complex computational engineering uncertainty analyses have become more prevalent. When the input parameters of such engineering models are uncertain, the output metric's uncertainty distribution is of unknown parametric form. Since Wilks' method, named after the seminal 1941 paper by S.S. Wilks entitled "Determination of sample sizes for setting tolerance limits", is a nonparametric statistical procedure, it has received renewed interest, in particular in nuclear and chemical safety engineering. Unfortunately, the prevailing application of Wilks' method relies on an arbitrary specification of order-statistic ranks, with undue influence on the sample size recommendations that follow. Herein, a novel modification of Wilks' method involving two quantiles is proposed that resolves this arbitrary rank selection. Together with a confidence level to be exceeded, these quantiles uniquely determine the parameters of an order-statistic beta distribution which drives the selection of symmetric tolerance limits. The modified procedure is demonstrated in two illustrative engineering uncertainty analysis examples drawn from the nuclear and chemical engineering domains.
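
A short sketch of the order-statistics reasoning behind such sample-size rules: for symmetric two-sided limits (X_(r), X_(n−r+1)), the covered probability mass follows a Beta(n − 2r + 1, 2r) distribution, so the required n is the smallest sample size whose probability of covering the target content meets the confidence level. The content/confidence values below are illustrative, and this is the classical symmetric construction rather than the paper's specific two-quantile modification.

```python
from scipy.stats import beta

def wilks_sample_size(content=0.95, confidence=0.95, r=1, n_max=10_000):
    """Smallest n such that the interval (X_(r), X_(n-r+1)) contains at least
    `content` of the distribution with probability >= `confidence`.
    The coverage of that interval is Beta(n - 2r + 1, 2r) distributed."""
    for n in range(2 * r + 1, n_max):
        if beta.sf(content, n - 2 * r + 1, 2 * r) >= confidence:
            return n
    raise ValueError("n_max too small")

print(wilks_sample_size(0.95, 0.95, r=1))   # classical first-order two-sided result: 93
print(wilks_sample_size(0.95, 0.95, r=2))   # trimming more extremes requires a larger sample
```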

13.
For models with correlated parameters, the amount of uncertainty (generally measured by variance) in a model output contributed by a specific parameter encompasses two components: (1) the uncertainty contributed by the variations (used to represent uncertainty in the parameter) correlated with other parameters; and (2) the uncertainty contributed by the variations unique to the parameter of interest (i.e., uncorrelated variations, or variations that cannot be explained by any other parameters in the model). A regression-based method was proposed previously by Xu and Gertner (2008) [1] to decouple the correlated and uncorrelated contributions to uncertainty in model outputs by each parameter for linear models. Based on a modified version of the popular Fourier Amplitude Sensitivity Test (FAST), this paper develops a general approach for quantifying the correlated and uncorrelated parametric uncertainty contributions in linear, nonlinear and non-monotonic models with linear or nonlinear dependence among parameters. The decoupling of correlated and uncorrelated contributions can help us determine whether the uncertainty contributed by a specific parameter results from the uncertainty in itself or from its correlations with other parameters. This decoupling can thus be very useful in improving our understanding of the modeled systems.

14.
Avoiding concentration or saturation of activities is fundamental in many environmental and urban planning contexts. Examples include dispersing retail and restaurant outlets, sensitivity to impacts in forest utilization, spatial equity of waste disposal, ensuring public safety associated with noxious facilities, and strategic placement of military resources, among others. Dispersion models have been widely applied to ensure spatial separation between activities or facilities. However, existing methods rely on deterministic approaches that ignore issues of spatial data uncertainty, which can lead to poor decision making. To address data uncertainty issues in dispersion modelling, a multi-objective approach that explicitly accounts for spatial uncertainty is proposed, enabling the impacts of uncertainty to be evaluated with statistical confidence. Owing to the integration of spatial uncertainty, this dispersion model is more complex and computationally challenging to solve. In this paper we develop a multi-objective evolutionary algorithm to address the computational challenges posed. The proposed heuristic incorporates problem-specific spatial knowledge to significantly enhance the capability of the evolutionary algorithm for solving this problem. Empirical results demonstrate the performance superiority of the developed approach in supporting facility and service planning.

15.
The popularity of state-space models comes from their flexibility and the large variety of applications to which they have been applied. For multivariate cases, the assumption of normality is very prevalent in research on Kalman filters. To increase the applicability of the Kalman filter to a wider range of distributions, we propose a new way to introduce skewness into state-space models without losing the computational advantages of the Kalman filter operations. The skewness comes from extending the multivariate normal distribution to the closed skew-normal distribution. To illustrate the applicability of such an extension, we present two specific state-space models for which the Kalman filtering operations are carefully described.

16.
We consider a nonlinear semiparametric model in which the nonparametric covariate is measured with error, and construct an empirical log-likelihood ratio statistic for the unknown parameters of the model. When the measurement error follows an ordinary smooth distribution, the proposed statistic is shown to have an asymptotic χ2 distribution, a result that can be used to construct confidence regions for the unknown parameters. A least-squares estimator of the unknown parameters is also constructed and its asymptotic properties are established. A simulation study compares the empirical likelihood method with the least-squares method in terms of confidence regions and their coverage probabilities.

17.
Despite several years of research, the type reduction (TR) operation in interval type-2 fuzzy logic systems (IT2FLSs) cannot perform as fast as a type-1 defuzzifier. In particular, the widely used Karnik–Mendel (KM) TR algorithm is computationally much more demanding than alternative TR approaches. In this work, a data-driven framework is proposed to quickly, yet accurately, estimate the output of the KM TR algorithm using simple regression models. Comprehensive simulations performed in this study show that the centroid end-points of the KM algorithm can be approximated with a mean absolute percentage error as low as 0.4%. Also, switch-point prediction accuracy can be as high as 100%. In conjunction with the fact that a simple regression model can be trained with data generated using an exhaustive defuzzification method, this work shows the potential of the proposed method to provide a highly accurate, yet extremely fast, TR approximation. The proposed method should theoretically outperform all available TR methods in speed while keeping the uncertainty information intact in the process.
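
For reference, a compact sketch of the Karnik–Mendel iteration whose outputs (centroid end-points and switch points) the regression models are trained to approximate; the discretization and membership functions are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def km_endpoint(x, w_lower, w_upper, left=True):
    """Karnik-Mendel iteration for one centroid end-point of an interval
    type-2 fuzzy set discretized at points x with membership bounds."""
    x = np.asarray(x, dtype=float)
    w_lo, w_up = np.asarray(w_lower, dtype=float), np.asarray(w_upper, dtype=float)
    c = np.dot(x, (w_lo + w_up) / 2) / np.sum((w_lo + w_up) / 2)
    while True:
        k = np.clip(np.searchsorted(x, c) - 1, 0, len(x) - 2)  # switch point
        if left:   # left end-point: upper memberships below the switch, lower above
            w = np.where(np.arange(len(x)) <= k, w_up, w_lo)
        else:      # right end-point: mirrored weighting
            w = np.where(np.arange(len(x)) <= k, w_lo, w_up)
        c_new = np.dot(x, w) / np.sum(w)
        if np.isclose(c_new, c):
            return c_new, k
        c = c_new

# Illustrative discretized interval type-2 output set
x = np.linspace(0, 10, 101)
upper = np.exp(-0.5 * ((x - 5) / 2.0) ** 2)
lower = 0.6 * np.exp(-0.5 * ((x - 5) / 1.5) ** 2)
(cl, kl), (cr, kr) = km_endpoint(x, lower, upper, True), km_endpoint(x, lower, upper, False)
print(f"centroid = [{cl:.3f}, {cr:.3f}], switch points = ({kl}, {kr})")
```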

18.
The main focus of call center research has been on queueing-theory models that assume all input distributions are known, from which staffing decisions and estimates of operating characteristics are derived. Studies investigating uncertainty in the input distributions and its implications for call center management are scarce. This study attempts to fill this gap by analyzing call center service-time distributions using Bayesian parametric and semi-parametric mixture models that are capable of exhibiting non-standard behavior such as multi-modality, skewness and excess kurtosis, as motivated by real call center data. The study is motivated by the observation that different customer profiles might require different agent skill sets, which can create additional sources of uncertainty in the behavior of service distributions. In estimating the model parameters, Markov chain Monte Carlo methods such as the Gibbs sampler and reversible jump algorithms are presented, and the implications of using such models for system performance and staffing are discussed.

19.
Moment-independent importance measures are increasingly used by practitioners to understand how output uncertainty may be shared among a set of stochastic inputs. Computing Borgonovo's sensitivity indices for a large group of inputs is still a challenging problem due to the curse of dimensionality, and it is addressed in this article. An estimation scheme making the most of recent developments in copula theory is developed. Furthermore, the concept of the Shapley value is used to derive new sensitivity indices, which makes the interpretation of Borgonovo's indices much easier. The resulting importance measure offers a double advantage compared with other existing methods, since it allows one to quantify the impact exerted by one input variable on the whole output distribution after taking into account all possible dependencies and interactions with other variables. The validity of the proposed methodology is established on several analytical examples, and the benefits in terms of computational efficiency are illustrated with real-life test cases such as the study of water flow through a borehole. In addition, a detailed case study dealing with the atmospheric re-entry of a launcher first stage is completed.

20.
To evaluate the impact of model inaccuracies on the network's output after evidence propagation in a Gaussian Bayesian network, a sensitivity measure is introduced. This sensitivity measure is the Kullback-Leibler divergence, and it yields different expressions depending on the type of parameter to be perturbed, i.e., on the inaccurate parameter. In this work, the behavior of this sensitivity measure is studied when model inaccuracies are extreme, i.e., when extreme perturbations of the parameters can exist. Moreover, the sensitivity measure is evaluated for extreme situations of dependence between the main variables of the network, and its behavior under extreme inaccuracies is examined. This analysis is performed to find the effect of extreme uncertainty about the initial parameters of the model in a Gaussian Bayesian network and about extreme values of evidence. These ideas and procedures are illustrated with an example.
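
When the pre- and post-perturbation distributions are multivariate Gaussian, the Kullback-Leibler divergence used as the sensitivity measure has a closed form; the sketch below evaluates it on an illustrative two-variable network with assumed (hypothetical) parameter values.

```python
import numpy as np

def kl_gaussian(mu0, cov0, mu1, cov1):
    """Closed-form KL divergence D(N(mu0, cov0) || N(mu1, cov1))."""
    k = len(mu0)
    cov1_inv = np.linalg.inv(cov1)
    diff = np.asarray(mu1) - np.asarray(mu0)
    return 0.5 * (np.trace(cov1_inv @ cov0)
                  + diff @ cov1_inv @ diff
                  - k
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

# Baseline joint distribution of two network variables (illustrative numbers)
mu = np.array([0.0, 1.0])
cov = np.array([[1.0, 0.8],
                [0.8, 2.0]])

# Perturb one conditional parameter so that the covariance term grows
cov_perturbed = np.array([[1.0, 1.3],
                          [1.3, 2.0]])
print("sensitivity (KL):", kl_gaussian(mu, cov, mu, cov_perturbed))
```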
