Similar literature
20 similar records found.
1.
Kriging metamodeling in simulation: A review
This article reviews Kriging (also called spatial correlation modeling). It presents the basic Kriging assumptions and formulas—contrasting Kriging and classic linear regression metamodels. Furthermore, it extends Kriging to random simulation, and discusses bootstrapping to estimate the variance of the Kriging predictor. Besides classic one-shot statistical designs such as Latin Hypercube Sampling, it reviews sequentialized and customized designs for sensitivity analysis and optimization. It ends with topics for future research.
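For readers unfamiliar with the Kriging predictor that the review contrasts with regression metamodels, the sketch below shows a bare-bones ordinary-Kriging interpolator with a Gaussian correlation function. It is purely illustrative: the correlation parameter theta, the helper names (gauss_corr, krige_predict), and the toy data are assumptions, not taken from the article.

```python
# Bare-bones ordinary Kriging in one dimension (illustrative sketch only).
import numpy as np

def gauss_corr(x1, x2, theta=5.0):
    """Gaussian spatial correlation between two sets of scalar input points."""
    d = x1[:, None] - x2[None, :]
    return np.exp(-theta * d ** 2)

def krige_predict(X, y, x_new, theta=5.0):
    """Ordinary-Kriging predictor at x_new, given simulation I/O data (X, y)."""
    n = len(X)
    R = gauss_corr(X, X, theta) + 1e-10 * np.eye(n)       # correlations among old points
    r = gauss_corr(X, np.atleast_1d(x_new), theta)        # correlations old points vs new point
    ones = np.ones(n)
    R_inv = np.linalg.inv(R)
    mu = (ones @ R_inv @ y) / (ones @ R_inv @ ones)       # generalized least squares mean
    return float(mu + r[:, 0] @ R_inv @ (y - mu * ones))  # BLUP: mean plus weighted residuals

# Toy deterministic simulation output interpolated at an untried input.
X = np.array([0.0, 0.3, 0.6, 1.0])
y = np.sin(2 * np.pi * X)
print(krige_predict(X, y, 0.45))
```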

2.
This paper proposes a novel method to select an experimental design for interpolation in simulation. Although the paper focuses on Kriging in deterministic simulation, the method also applies to other types of metamodels (besides Kriging), and to stochastic simulation. The paper focuses on simulations that require much computer time, so it is important to select a design with a small number of observations. The proposed method is therefore sequential. The novelty of the method is that it accounts for the specific input/output function of the particular simulation model at hand; that is, the method is application-driven or customized. This customization is achieved through cross-validation and jackknifing. The new method is tested through two academic applications, which demonstrate that the method indeed gives better results than either sequential designs based on an approximate Kriging prediction variance formula or designs with prefixed sample sizes.
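A rough sketch of the cross-validation/jackknifing idea behind such application-driven sequential designs is given below: refit the metamodel with each observation left out, and place the next expensive simulation where the leave-one-out predictions disagree most. A simple polynomial fit stands in for Kriging, and all names and settings are illustrative assumptions rather than the paper's actual procedure.

```python
# Cross-validation-driven choice of the next design point (illustrative sketch).
import numpy as np

def fit_metamodel(X, y, degree=2):
    """Cheap stand-in metamodel: a low-order polynomial fit."""
    coeffs = np.polyfit(X, y, degree)
    return lambda x: np.polyval(coeffs, x)

def next_design_point(X, y, candidates):
    """Pick the candidate where leave-one-out refits disagree most (jackknife spread)."""
    loo_predictions = []
    for i in range(len(X)):
        keep = np.arange(len(X)) != i
        model = fit_metamodel(X[keep], y[keep])
        loo_predictions.append(model(candidates))
    spread = np.var(np.array(loo_predictions), axis=0)   # disagreement across the refits
    return candidates[np.argmax(spread)]

X = np.array([0.0, 0.25, 0.5, 0.75, 1.0])          # design points simulated so far
y = np.exp(X) * np.sin(5 * X)                      # stand-in for expensive simulator output
candidates = np.linspace(0.0, 1.0, 101)
print("next point to simulate:", next_design_point(X, y, candidates))
```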

3.
Screening experiments are performed to eliminate unimportant factors efficiently so that the remaining important factors can be studied more thoroughly in later experiments. This paper proposes controlled sequential factorial design (CSFD) for discrete-event simulation experiments. It combines a sequential hypothesis testing procedure with a traditional (fractional) factorial design to control the Type I error and power for each factor under heterogeneous variance conditions. We compare CSFD with other sequential screening methods with similar error control properties. CSFD requires few assumptions and demonstrates robust performance with different system conditions. The method is appropriate for systems with a moderate number of factors and large variances.

4.
The goal of factor screening is to find the really important inputs (factors) among the many inputs that may be changed in a realistic simulation experiment. A specific method is sequential bifurcation (SB), which is a sequential method that changes groups of inputs simultaneously. SB is most efficient and effective if the following assumptions are satisfied: (i) second-order polynomials are adequate approximations of the input/output functions implied by the simulation model; (ii) the signs of all first-order effects are known; and (iii) if two inputs have no important first-order effects, then they have no important second-order effects either (heredity property). This paper examines SB for random simulation with multiple responses (outputs), called multi-response SB (MSB). This MSB selects groups of inputs such that—within a group—all inputs have the same sign for a specific type of output, so no cancellation of first-order effects occurs. To obtain enough replicates (replications) for correctly classifying a group effect or an individual effect as being important or unimportant, MSB applies Wald’s sequential probability ratio test (SPRT). The initial number of replicates in this SPRT is also selected efficiently by MSB. Moreover, MSB includes a procedure to validate the three assumptions of MSB. The paper evaluates the performance of MSB through extensive Monte Carlo experiments that satisfy all MSB assumptions, and through a case study representing a logistic system in China; the results are very promising.
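The sequential probability ratio test used by MSB for classifying effects can be illustrated with a minimal Wald SPRT for a normal mean, as sketched below. The hypothesized effect sizes, variance, and error rates are invented for the example and are not the paper's settings.

```python
# Minimal Wald SPRT for classifying a factor effect as important or unimportant.
import math
import random

def sprt(replicates, delta0=0.0, delta1=1.0, sigma=1.0, alpha=0.05, beta=0.05):
    """Return 'important', 'unimportant', or 'continue' after scanning the replicates."""
    upper = math.log((1 - beta) / alpha)     # accept "important" above this bound
    lower = math.log(beta / (1 - alpha))     # accept "unimportant" below this bound
    llr = 0.0
    for x in replicates:
        # log-likelihood ratio of N(delta1, sigma^2) versus N(delta0, sigma^2)
        llr += ((x - delta0) ** 2 - (x - delta1) ** 2) / (2.0 * sigma ** 2)
        if llr >= upper:
            return "important"
        if llr <= lower:
            return "unimportant"
    return "continue"   # undecided: request more replicates

random.seed(1)
observed = [random.gauss(0.9, 1.0) for _ in range(50)]   # toy replicated effect estimates
print(sprt(observed))
```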

5.
Simulated computer experiments have become a viable, cost-effective alternative to controlled real-life experiments. However, the simulation of complex systems with multiple input and output parameters can be a very time-consuming process. Many of these high-fidelity simulators need minutes, hours or even days to perform one simulation. The goal of global surrogate modeling is to create an approximation model that mimics the original simulator, based on a limited number of expensive simulations, but can be evaluated much faster. The set of simulations performed to create this model is called the experimental design. Traditionally, one-shot designs such as the Latin hypercube and factorial design are used, and all simulations are performed before the first model is built. In order to reduce the number of simulations needed to achieve the desired accuracy, sequential design methods can be employed. These methods generate the samples for the experimental design one by one, without knowing the total number of samples in advance. In this paper, the authors perform an extensive study of new and state-of-the-art space-filling sequential design methods. It is shown that the new sequential methods proposed in this paper produce results comparable to the best one-shot experimental designs currently available.
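As a point of reference for the one-shot designs mentioned above, the following sketch generates a basic Latin hypercube design on the unit hypercube; the function name and parameters are illustrative only.

```python
# One-shot Latin hypercube design on the unit hypercube (illustrative sketch).
import numpy as np

def latin_hypercube(n_samples, n_dims, seed=None):
    """One point per equal-width stratum in every dimension, paired at random."""
    rng = np.random.default_rng(seed)
    u = (np.arange(n_samples)[:, None] + rng.random((n_samples, n_dims))) / n_samples
    for d in range(n_dims):                       # decouple the dimensions
        u[:, d] = rng.permutation(u[:, d])
    return u

design = latin_hypercube(10, 3, seed=42)
print(design.round(3))
```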

6.
In the design of a system, the comparison of possible solutions using simulation is generally performed under fixed environmental conditions. In practice, however, unexpected changes can occur, for example in the part mix of a manufacturing facility or in customer demand. Such changes, which are considered modifications of environmental factors, can affect the system response. As a consequence, a solution A that is better than B in a given environment can yield poorer performance than B in another environment. Therefore, we are interested in robust simulation studies, which aim to take several possible environments into account. In methods based on Taguchi’s principles, no distinction is made between these environments in the robustness computation. In the suggested heuristic approach, we focus on problems where a particular environment is expected once the system is in operation (the others being unexpected environments). This particular environment is treated in the study as a “base environmental scenario”. The robustness of a solution to the design problem is computed as an approximate measure of what will be saved or lost if one of the unexpected environments occurs instead. Reference curves are suggested to allow these solutions to be compared empirically in accordance with the decision-maker’s requirements. A simplified example is provided. The results differ from those obtained using a signal-to-noise ratio, which is typically used in Taguchian approaches.

7.
This paper proposes a neural network (NN) metamodeling method to generate cycle time (CT)–throughput (TH) profiles for single- and multi-product manufacturing environments. Such CT–TH profiles illustrate the trade-off between CT and TH, the two critical performance measures, and hence provide a comprehensive performance evaluation of a manufacturing system. The proposed method differs from existing NN metamodeling work in three major respects: first, instead of treating an NN as a black box, the geometry of the NN is examined and utilized; second, a progressive model-fitting strategy is developed to obtain the simplest-structured NN that is adequate to capture the CT–TH relationship; third, an experiment design method, particularly suited to NN modeling, is developed to sequentially collect simulation data for the efficient estimation of the NN models.
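A hedged sketch of the progressive model-fitting idea, growing the hidden layer only while a held-out error clearly improves, is shown below. It uses scikit-learn's MLPRegressor as a generic stand-in and invented queueing-like toy data; the paper's own NN construction and sequential design method are not reproduced.

```python
# Progressive NN fitting: enlarge the hidden layer only while held-out error improves.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
th = rng.uniform(0.1, 0.9, size=(60, 1))                  # toy throughput levels
ct = 1.0 / (1.0 - th[:, 0]) + rng.normal(0.0, 0.05, 60)   # cycle time rises toward capacity

train, val = slice(0, 40), slice(40, 60)
best_err, best_units = np.inf, None
for units in (1, 2, 4, 8, 16):                            # simplest structure first
    nn = MLPRegressor(hidden_layer_sizes=(units,), max_iter=5000, random_state=0)
    nn.fit(th[train], ct[train])
    err = np.mean((nn.predict(th[val]) - ct[val]) ** 2)
    if err < 0.95 * best_err:                             # grow only if it clearly helps
        best_err, best_units = err, units
    else:
        break
print("selected hidden units:", best_units, "validation MSE:", round(best_err, 4))
```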

8.
Analysts faced with conducting experiments involving quantitative factors have a variety of potential designs in their portfolio. However, in many experimental settings involving discrete-valued factors (particularly if the factors do not all have the same number of levels), none of these designs are suitable. In this paper, we present a mixed integer programming (MIP) method that is suitable for constructing orthogonal designs, or improving existing orthogonal arrays, for experiments involving quantitative factors with limited numbers of levels of interest. Our formulation makes use of a novel linearization of the correlation calculation. The orthogonal designs we construct do not satisfy the definition of an orthogonal array, so we do not advocate their use for qualitative factors. However, they do allow analysts to study, without sacrificing balance or orthogonality, a greater number of quantitative factors than is possible with orthogonal arrays having the same number of runs.

9.
Simulation is a widely used methodology for queueing systems. Its superficial simplicity hides a number of pitfalls, not all of which are as well known as they should be. In particular, simulation experiments need careful design and analysis, as well as good presentation of the results. Even basic elements of simulation, such as the generation of arrival and service times, have a chequered history, with major problems lying undiscovered for 20 years. On the other hand, good simulation practice can offer much more than is commonly realized.

10.
In this paper, we develop an approach that determines the overall best parameter setting in design of experiments. The approach starts with successive orthogonal array experiments and ends with a full factorial experiment. The setup for the next orthogonal-array experiment is obtained from the previous one by either fixing a factor at a given level or by reducing the number of levels considered for all currently non-fixed factors. We illustrate this method using an industrial problem with seven parameters, each with three levels. In previous work, the full factorial of 3^7 = 2,187 points was evaluated and the best point was found. With the new method, we found the same point using 3% of these evaluations. As a further comparison, we obtained the optimum using a traditional Taguchi approach, and found it corresponded to the 366th of the 2,187 possibilities when sorted by the objective function. We conclude the proposed approach would provide an accurate, fast, and economic tool for optimization using design of experiments.

11.
Uses and abuses of statistical simulation
More and more problems are being tackled by simulation as the hourly cost of large-scale computing approaches that of a mathematician's time. Abuses of simulation arise from ignorance or careless use of little-understood procedures, and some of the fundamental tools of the subject are much less well understood than is commonly supposed. This is illustrated here by the saga of pseudorandom number generators, normal variate generators and the analysis of queueing system simulations. On the positive side, genuinely new uses of simulation are appearing, particularly in statistical inference. These are exemplified by recursive algorithms for simulating complex systems and simulation-based likelihood inference for point processes.

12.
We present a heuristic optimization method for stochastic production-inventory systems that defy analytical modelling and optimization. The proposed heuristic takes advantage of simulation while minimizing the impact of the curse of dimensionality through regression analysis. The heuristic was developed and tested for an oil and gas company, which decided to adopt it as the optimization method for a supply-chain design project. To explore the performance of the heuristic in general settings, we conducted a simulation experiment on 900 test problems. We found that the average cost error of the proposed heuristic was reasonably low for practical applications.

13.
A dual porosity model of multidimensional, multicomponent, multiphase flow in naturally fractured reservoirs is derived by the mathematical theory of homogenization. A fully compositional model is considered in which there are N chemical components, each of which may exist in any or all of the three phases: gas, oil, and water. Special attention is paid to developing a general approach to incorporating gravitational forces, pressure gradient effects, and effects of mass transfer between phases. In particular, general equations for the interactions between matrix and fracture systems are obtained under homogenization by a careful scaling of these effects. Using this dual porosity compositional model, numerical experiments are reported for the benchmark problems of the sixth comparative solution project organized by the Society of Petroleum Engineers.

14.
15.
In this paper, the super-linearly and quadratically convergent strong sub-feasible method [J.L. Li, J.B. Jian, A superlinearly and quadratically convergent strongly subfeasible method for nonlinear inequality constrained optimization, OR Transactions, 7 (2) (2003) 21-34] for nonlinear inequality constrained optimization is improved so that the iterates enter the feasible region after a finite number of iterations. As a result, a strict restriction of the original method can be removed. Two further contributions of this paper are a new bidirectional Armijo line search and an extensive set of numerical comparison results.
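For context, the sketch below shows a classical backtracking Armijo line search; the paper's bidirectional variant, which can also enlarge the step, is not reproduced, and the test function is an invented toy.

```python
# Classical backtracking Armijo line search (sketch; not the paper's bidirectional variant).
import numpy as np

def armijo_step(f, grad_f, x, d, alpha0=1.0, rho=0.5, c=1e-4, max_halvings=50):
    """Shrink the step until f decreases by at least c * alpha * grad(f)^T d."""
    fx, gTd = f(x), grad_f(x) @ d
    alpha = alpha0
    for _ in range(max_halvings):
        if f(x + alpha * d) <= fx + c * alpha * gTd:
            return alpha
        alpha *= rho
    return alpha

# Toy use on f(x) = x1^2 + 4*x2^2 along the steepest-descent direction.
f = lambda x: x[0] ** 2 + 4.0 * x[1] ** 2
grad_f = lambda x: np.array([2.0 * x[0], 8.0 * x[1]])
x0 = np.array([1.0, 1.0])
d = -grad_f(x0)
print("accepted step length:", armijo_step(f, grad_f, x0, d))
```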

16.
We present a Bayesian approach to model calibration when evaluation of the model is computationally expensive. Here, calibration is a nonlinear regression problem: given a data vector Y corresponding to the regression model f(β), find plausible values of β. As an intermediate step, Y and f are embedded into a statistical model allowing transformation and dependence. Typically, this problem is solved by sampling from the posterior distribution of β given Y using MCMC. To reduce computational cost, we limit evaluation of f to a small number of points chosen on a high posterior density region found by optimization. Then, we approximate the logarithm of the posterior density using radial basis functions and use the resulting cheap-to-evaluate surface in MCMC. We illustrate our approach on simulated data for a pollutant diffusion problem and study the frequentist coverage properties of credible intervals. Our experiments indicate that our method can produce results similar to those obtained when the true “expensive” posterior density is sampled by MCMC, while reducing computational costs by well over an order of magnitude.
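The two-stage idea, interpolating the log posterior with radial basis functions at a few design points and then running cheap MCMC on the surrogate, can be sketched as follows for a toy one-dimensional problem. The Gaussian RBF, the Metropolis settings, and the toy target are all assumptions for illustration, not the authors' implementation.

```python
# RBF surrogate of the log posterior plus Metropolis sampling on the cheap surface (toy sketch).
import numpy as np

def rbf_fit(centers, values, eps=1.0):
    """Interpolate values at the centers with Gaussian radial basis functions."""
    Phi = np.exp(-eps * (centers[:, None] - centers[None, :]) ** 2)
    w = np.linalg.solve(Phi, values)
    return lambda b: np.exp(-eps * (b - centers) ** 2) @ w

true_logpost = lambda b: -0.5 * (b - 2.0) ** 2 / 0.3 ** 2   # pretend this is expensive
centers = np.linspace(1.0, 3.0, 9)                          # design points near the mode
surrogate = rbf_fit(centers, true_logpost(centers))

rng = np.random.default_rng(0)
beta, chain = 2.0, []
for _ in range(5000):                                       # Metropolis on the surrogate
    prop = beta + rng.normal(0.0, 0.2)
    inside = centers[0] <= prop <= centers[-1]              # trust the surrogate only here
    if inside and np.log(rng.random()) < surrogate(prop) - surrogate(beta):
        beta = prop
    chain.append(beta)
print("approximate posterior mean of beta:", round(float(np.mean(chain[1000:])), 3))
```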

17.
The contribution of this paper is to introduce change-of-measure-based techniques for the rare-event analysis of heavy-tailed random walks. Our changes of measure are parameterized by a family of distributions admitting a mixture form. We exploit our methodology to achieve two types of results. First, we construct Monte Carlo estimators that are strongly efficient (i.e. have bounded relative mean squared error as the event of interest becomes rare). These estimators are used to estimate both rare-event probabilities of interest and associated conditional expectations. We emphasize that our techniques allow us to control the expected termination time of the Monte Carlo algorithm even if the conditional expected stopping time (under the original distribution) given the event of interest is infinite, a situation that sometimes occurs in heavy-tailed settings. Second, the mixture family serves as a good Markovian approximation (in total variation) of the conditional distribution of the whole process given the rare event of interest. The convenient form of the mixture family allows us to obtain functional conditional central limit theorems that extend classical results in the literature.
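A generic (and much simpler) illustration of the change-of-measure idea is sketched below: a mixture proposal places extra mass on heavy increments, and path likelihood ratios reweight the samples. This is an ordinary importance-sampling estimator with invented parameters, not the strongly efficient estimator constructed in the paper.

```python
# Mixture-proposal importance sampling for P(X1 + ... + Xn > b) with Pareto increments
# (generic illustration of the change-of-measure idea, not the paper's estimator).
import numpy as np

rng = np.random.default_rng(0)
alpha, alpha_heavy, p_mix = 2.5, 1.2, 0.3     # nominal tail, heavier proposal tail, mixing weight
n, b, R = 5, 50.0, 200_000                    # walk length, threshold, number of IS paths

pareto_pdf = lambda x, a: a * x ** (-a - 1.0)             # density on x >= 1

def sample_mixture(shape):
    """Draw increments from g = (1 - p_mix)*Pareto(alpha) + p_mix*Pareto(alpha_heavy)."""
    heavy = rng.random(shape) < p_mix
    a = np.where(heavy, alpha_heavy, alpha)
    return (1.0 - rng.random(shape)) ** (-1.0 / a)        # inverse-CDF sampling

x = sample_mixture((R, n))
g = (1.0 - p_mix) * pareto_pdf(x, alpha) + p_mix * pareto_pdf(x, alpha_heavy)
weights = np.prod(pareto_pdf(x, alpha) / g, axis=1)       # path likelihood ratios
hits = x.sum(axis=1) > b
estimate = np.mean(weights * hits)
std_err = np.std(weights * hits) / np.sqrt(R)
print(f"IS estimate of P(S_n > b): {estimate:.3e} (std. error {std_err:.1e})")
```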

18.
The aim of this work is to analyze the efficiency of a new sustainable urban gravity settler intended to prevent solid particle transport, improve waste water quality, and prevent pollution problems due to rainwater harvesting in areas with no drainage pavement. To achieve this objective, it is necessary to solve particle transport equations along with the turbulent fluid flow equations, since there are two phases: a solid phase (sand particles) and a fluid phase (water). First, the turbulent flow is modelled by solving the Reynolds-averaged Navier-Stokes (RANS) equations for incompressible viscous flows with the finite volume method (FVM); then, once the flow velocity field has been determined, representative particles are tracked using the Lagrangian approach. Among particle transport models, a Lagrangian particle tracking model is used, in which particulates are tracked through the flow in a Lagrangian way. The full particulate phase is modelled by just a sample of about 2,000 individual particles. The tracking is carried out by forming a set of ordinary differential equations in time for each particle, consisting of equations for position and velocity. These equations are then integrated using a simple integration method to calculate the behaviour of the particles as they traverse the flow domain. The entire FVM model is built, and the design of experiments (DOE) method is used to limit the number of simulations required, significantly saving the computational time needed to arrive at the optimum configuration of the settler. Finally, the conclusions of this work are presented.
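The Lagrangian tracking step described above amounts to integrating position and velocity ODEs for each sampled particle; a minimal sketch with forward Euler, a linear drag law, and a prescribed toy flow field is given below. The RANS/FVM flow solution, the settler geometry, and all physical constants here are assumptions for illustration.

```python
# Forward-Euler Lagrangian particle tracking with linear drag (illustrative sketch).
import numpy as np

def fluid_velocity(pos):
    """Toy settler flow field: slow horizontal flow that weakens with height."""
    y = pos[:, 1]
    return np.column_stack([0.1 * (1.0 - y), np.zeros_like(y)])

def track(n_particles=2000, dt=0.01, steps=500, tau=0.05, g=9.81, rho_ratio=2.65):
    rng = np.random.default_rng(0)
    # Particles released at the inlet (x = 0) at random heights.
    pos = np.column_stack([np.zeros(n_particles), rng.uniform(0.2, 1.0, n_particles)])
    vel = fluid_velocity(pos)
    settling = g * (1.0 - 1.0 / rho_ratio)            # net gravity on a denser-than-water grain
    for _ in range(steps):
        drag = (fluid_velocity(pos) - vel) / tau      # relaxation toward the local fluid velocity
        acc = drag + np.array([0.0, -settling])
        vel = vel + dt * acc
        pos = pos + dt * vel
        pos[:, 1] = np.clip(pos[:, 1], 0.0, None)     # particles come to rest on the floor
    return pos

final = track()
print("fraction of particles settled on the floor:", float(np.mean(final[:, 1] == 0.0)))
```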

19.
Engineers and scientists often identify robust parameter design as one of the most important process and quality improvement methods. Focused on statistical modeling and numerical optimization strategies, most researchers typically assume a process with reasonably small variability. Realistically, however, industrial processes often exhibit larger variability, particularly in mass production lines. In such cases, many of the modeling assumptions behind the robust parameter design models available in the literature do not hold. Accordingly, the results and recommendations provided to decision makers could lead to suboptimal modifications of processes and products. As manufacturers seek improved methods for ensuring quality in resource-constrained environments, experimenters should examine trade-offs to achieve the levels of precision that best support their decision making. In contrast to previous research, this paper proposes a trade-off analysis between the cost of replication and the desired precision of the generated solutions. We consider several techniques in the early stages of experimental design, using Monte Carlo simulation as a tool for revealing potential options to the decision maker. This is perhaps the first study to point the way toward more effective robust parameter design models that combine cost constraints with the desired precision of solutions.
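The replication-versus-precision trade-off can be made concrete with a small Monte Carlo sketch: for each candidate number of replicates, estimate the typical confidence-interval halfwidth and the associated experimental cost. The noise level, cost per replicate, and normal-theory interval below are illustrative assumptions.

```python
# Monte Carlo look at the replication/precision trade-off (toy numbers).
import numpy as np

rng = np.random.default_rng(0)
true_mean, sigma, cost_per_rep = 10.0, 4.0, 25.0   # large process variability, cost per run
z = 1.96                                           # ~95% normal-theory interval

for n in (5, 10, 20, 40, 80):
    reps = rng.normal(true_mean, sigma, size=(2000, n))       # 2000 simulated experiments
    halfwidth = z * reps.std(axis=1, ddof=1) / np.sqrt(n)
    print(f"n={n:3d}  cost={n * cost_per_rep:6.0f}  mean CI halfwidth={halfwidth.mean():.2f}")
```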

20.
This paper presents a composite model in which two simulation approaches, discrete-event simulation (DES) and system dynamics (SD), are used together to address a major healthcare problem, the sexually transmitted infection Chlamydia. The paper continues an on-going discussion in the literature about the potential benefits of linking DES and SD. Previous researchers have argued that DES and SD are complementary approaches and that many real-world problems would benefit from combining both methods. In this paper, a DES model of the hospital outpatient clinic which treats Chlamydia patients is combined with an SD model of the infection process in the community. These two models were developed in commercial software and linked in an automated fashion via an Excel interface. To our knowledge this is the first time such a composite model has been used in a healthcare setting. The model shows how the prevalence of Chlamydia at the community level affects (and is affected by) operational-level decisions made in the hospital outpatient department. We discuss the additional benefits provided by the composite model over and above the benefits gained from the two individual models.
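A heavily simplified sketch of the SD/DES coupling idea is shown below: a stock-and-flow style prevalence update (the SD side) sets weekly clinic demand, and a crude capacity-limited clinic step (standing in for the DES side) feeds treated cases back as recoveries. All rates are invented, and the commercial models and Excel link described in the paper are not reproduced.

```python
# Toy coupling of a stock-and-flow infection model with a capacity-limited clinic step.
import random

random.seed(1)
population, infected = 10_000.0, 200.0
beta, clinic_capacity, weeks = 0.00005, 40.0, 52   # invented rates, weekly time step

for week in range(weeks):
    # "SD" side: community-level infection dynamics (one-week Euler update).
    new_infections = beta * infected * (population - infected)
    # "DES" side, collapsed to a noisy weekly clinic capacity for this sketch.
    arrivals = 0.1 * infected                      # share of prevalent cases seeking care
    treated = min(arrivals, clinic_capacity * random.uniform(0.8, 1.0))
    infected = max(infected + new_infections - treated, 0.0)

print(f"prevalence after one year: {infected:.0f} infected")
```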
