Similar Documents
20 similar documents found.
1.
This paper proposes a neural network (NN) metamodeling method to generate the cycle time (CT)–throughput (TH) profiles for single/multi-product manufacturing environments. Such CT–TH profiles illustrate the trade-off relationship between CT and TH, the two critical performance measures, and hence provide a comprehensive performance evaluation of a manufacturing system. The proposed method is distinct from existing NN metamodeling work in three major aspects: First, instead of treating an NN as a black box, the geometry of the NN is examined and utilized; second, a progressive model-fitting strategy is developed to obtain the simplest-structured NN that is adequate to capture the CT–TH relationship; third, an experiment design method, particularly suitable to NN modeling, is developed to sequentially collect simulation data for the efficient estimation of the NN models.
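As a rough illustration of the metamodeling idea only (not the paper's progressive fitting strategy or geometry-based analysis), the sketch below fits a small neural network to noisy cycle-time observations at several throughput levels; the M/M/1-style data generator, the network size, and the scikit-learn API are assumptions standing in for a real simulation experiment.

```python
# Illustrative sketch only: fit a small NN metamodel to noisy cycle-time (CT)
# observations at several throughput (TH) levels. The M/M/1-style generator
# below stands in for an actual manufacturing simulation.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
th = rng.uniform(0.1, 0.9, size=200)                  # throughput (utilization) levels
ct = 1.0 / (1.0 - th) + rng.normal(0.0, 0.05, 200)    # noisy cycle-time observations

model = MLPRegressor(hidden_layer_sizes=(4,), activation="tanh",
                     max_iter=5000, random_state=0)
model.fit(th.reshape(-1, 1), ct)

grid = np.linspace(0.1, 0.9, 9).reshape(-1, 1)
print(np.c_[grid, model.predict(grid)])               # estimated CT-TH profile
```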

2.
Machine failure can have a significant impact on the throughput of manufacturing systems, so accurate modelling of breakdowns in manufacturing simulation models is essential. Finite mixture distributions have been successfully used by Ford Motor Company to model machine breakdown durations in simulation models of engine assembly lines. These models can be very complex, with a large number of machines. To simplify the modelling we propose a method of grouping machines with similar distributions of breakdown durations, which we call the Arrows Classification Method, where the Two-Sample Cramér-von-Mises statistic is used to measure the similarity of two sets of data. We evaluate the classification procedure by comparing the throughput of a simulation model when run with mixture models fitted to individual machine breakdown durations; mixture models fitted to group breakdown durations; and raw data. Details of the methods and the classification results are presented and demonstrated using an example.
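A minimal sketch of the grouping idea, assuming SciPy's two-sample Cramér-von-Mises test and a greedy p-value rule as a stand-in for the Arrows Classification Method; the breakdown-duration samples are synthetic.

```python
# Sketch: group machines whose breakdown-duration samples are "similar" under the
# two-sample Cramér-von-Mises test. The greedy p-value rule (p > 0.05) is an
# illustrative stand-in for the Arrows Classification Method's grouping rule.
import numpy as np
from scipy.stats import cramervonmises_2samp

rng = np.random.default_rng(1)
machines = {
    "M1": rng.exponential(10.0, 150),      # synthetic breakdown durations
    "M2": rng.exponential(10.5, 150),
    "M3": rng.lognormal(3.0, 0.8, 150),
}

groups = []                                 # each group keeps a reference sample
for name, sample in machines.items():
    for group in groups:
        if cramervonmises_2samp(sample, group["ref"]).pvalue > 0.05:
            group["members"].append(name)
            break
    else:
        groups.append({"ref": sample, "members": [name]})

print([g["members"] for g in groups])       # e.g. [['M1', 'M2'], ['M3']]
```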

3.
This paper proposes a novel method to select an experimental design for interpolation in random simulation, especially discrete event simulation. (Though the paper focuses on Kriging, this design approach may also apply to other types of metamodels such as non-linear regression models and splines.) Assuming that simulation requires much computer time, it is important to select a design with a small number of observations (or simulation runs). The proposed method is therefore sequential. Its novelty is that it accounts for the specific input/output behavior (or response function) of the particular simulation at hand; i.e., the method is customized or application-driven. A tool for this customization is bootstrapping, which enables the estimation of the variances of predictions for inputs not yet simulated. The method is tested through two classic simulation models, namely the expected steady-state waiting time of the M/M/1 queuing model, and the mean costs of a terminating (s, S) inventory simulation. For these two simulation models the novel design indeed gives better results than a popular alternative design, namely Latin Hypercube Sampling (LHS) with a prefixed sample.
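For intuition, the sketch below fits a Kriging (Gaussian-process) metamodel to the M/M/1 mean queueing time over a few traffic intensities, using a plain fixed grid rather than the paper's sequential, bootstrap-customized design; the scikit-learn Gaussian process, kernel, and noise level are illustrative assumptions.

```python
# Sketch: Kriging (Gaussian-process) metamodel of the M/M/1 mean queueing time
# over traffic intensity rho, fitted on a fixed grid rather than the paper's
# sequential bootstrap-based design. Service rate is normalized to mu = 1.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(2)
rho = np.array([0.1, 0.3, 0.5, 0.7, 0.9])             # traffic intensities (design points)
w_true = rho / (1.0 - rho)                             # analytic mean waiting time in queue
w_obs = w_true + rng.normal(0.0, 0.02, rho.shape)      # "simulated" (noisy) observations

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-3)
gp.fit(rho.reshape(-1, 1), w_obs)

x_new = np.array([[0.6], [0.8]])                       # inputs not yet simulated
mean, std = gp.predict(x_new, return_std=True)
print(mean, std)                                       # predictions and their uncertainty
```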

4.
5.
A Bayesian model selection procedure for comparing models subject to inequality and/or equality constraints is proposed. An encompassing prior approach is used, and a general form of the Bayes factor of a constrained model against the encompassing model is derived. A simple estimation method is proposed that can estimate the Bayes factors for all candidate models simultaneously using one set of samples from the encompassing model. A simulation study and a real data analysis demonstrate the performance of the method.
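A toy Monte Carlo sketch of the encompassing-prior idea: the Bayes factor of an inequality-constrained model against the encompassing model is approximated by the ratio of the posterior to the prior probability mass satisfying the constraint. The two-parameter model, the constraint theta1 > theta2, and the stand-in posterior are all assumptions for illustration.

```python
# Toy sketch of the encompassing-prior estimator: BF(constrained vs encompassing)
# ~ (posterior mass satisfying the constraint) / (prior mass satisfying it).
# Model, data, and the constraint theta1 > theta2 are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

prior = rng.normal(0.0, 10.0, size=(n, 2))               # encompassing prior samples
post = np.column_stack([rng.normal(1.0, 0.5, n),          # stand-in encompassing
                        rng.normal(0.0, 0.5, n)])         # posterior samples

def satisfies(samples):
    """Inequality constraint of the candidate model: theta1 > theta2."""
    return samples[:, 0] > samples[:, 1]

bf = satisfies(post).mean() / satisfies(prior).mean()
print(f"Estimated Bayes factor (constrained vs encompassing): {bf:.2f}")
```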

6.
Kroese, D.P., Rubinstein, R.Y. Queueing Systems, 2004, 46(3-4): 317-351
We present a novel method, called the transform likelihood ratio (TLR) method, for estimating rare-event probabilities with heavy-tailed distributions. Via a simple transformation (change of variables) technique, the TLR method reduces the original rare-event probability estimation with heavy-tailed distributions to an equivalent one with light-tailed distributions. Once this transformation has been established, we estimate the rare-event probability via importance sampling, using the classical exponential change of measure or the standard likelihood ratio change of measure. In the latter case the importance sampling distribution is chosen from the same parametric family as the transformed distribution. We estimate the optimal parameter vector of the importance sampling distribution using the cross-entropy method. We prove the polynomial complexity of the TLR method for certain heavy-tailed models and demonstrate numerically its high efficiency for various heavy-tailed models previously thought to be intractable. We also show that the TLR method can be viewed as a universal tool in the sense that it not only provides a unified view of heavy-tailed simulation but can also be used efficiently in simulation with light-tailed distributions. We present extensive simulation results which support the efficiency of the TLR method.
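As a hedged illustration of the transform step only (not the full TLR plus cross-entropy procedure), the sketch below rewrites a Pareto tail probability as a light-tailed exponential exceedance and estimates it by importance sampling with a heuristically tilted exponential; the tilt choice and all parameter values are assumptions.

```python
# Sketch of the transform step: for X Pareto(alpha) on [1, inf), write X = exp(Y/alpha)
# with Y ~ Exp(1), so P(X > gamma) = P(Y > alpha*ln(gamma)), a light-tailed problem.
# The exceedance is then estimated by importance sampling with a tilted exponential;
# the tilt rate 1/t is a heuristic, not the cross-entropy-optimized parameter.
import numpy as np

rng = np.random.default_rng(4)
alpha, gamma, n = 2.0, 1_000.0, 100_000
t = alpha * np.log(gamma)                        # transformed (light-tailed) threshold

lam = 1.0 / t                                    # importance-sampling rate (heuristic)
y = rng.exponential(1.0 / lam, n)                # samples from Exp(rate=lam)
weights = np.exp(-(1.0 - lam) * y) / lam         # likelihood ratio Exp(1) / Exp(lam)
estimate = np.mean((y > t) * weights)

print(estimate, gamma ** (-alpha))               # IS estimate vs exact value 1e-6
```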

7.
8.
Congestion and memory occupancy in a computer system can be reduced further if new jobs are admitted only when the number of jobs queued at the CPU is below the CPU run queue cutoff (RQ). In this paper, we prove that the response time of a job is invariant with respect to RQ if jobs do not communicate with each other. We also demonstrate this invariance property numerically using matrix-geometric methods and present an approximate method for the delay due to context switching under time slicing. The approximation suggests that time slicing with constant overhead yields a throughput similar to that of an FCFS system without overhead.

9.
Using the holding time model (HTM) method, an approximate analytic formula is derived for calculating the average throughput of a K-station production line with exponential service times, manufacturing blocking, and no intermediate buffers between adjacent stations. The usefulness of the proposed analytical formula lies in the fact that it can handle the (general) case of workstations with different mean processing times (this being the contribution of this work compared with that of Alkaff and Muth), provided a good estimate of some of the coefficients involved is available. For the case of balanced lines, a simple formula is proposed that gives very good numerical results.
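Since the analytic HTM formula itself is not reproduced here, the following is a simulation stand-in: it estimates the throughput of a saturated zero-buffer line with manufacturing (blocking-after-service) blocking via a standard departure-time recursion; the per-station mean processing times are arbitrary.

```python
# Simulation stand-in (not the analytic HTM formula): throughput of a saturated
# K-station line, exponential services, no intermediate buffers, blocking-after-service.
# Departure-time recursion: a job starts at station i when it has left i-1 and i is
# free; it leaves i when service is done and the previous job has left i+1.
import numpy as np

rng = np.random.default_rng(5)
mean_service = [1.0, 1.2, 0.8]                   # per-station mean processing times (arbitrary)
K, N = len(mean_service), 100_000                # stations, jobs simulated

D_prev = np.zeros(K + 1)                         # departure times of the previous job
for _ in range(N):
    D = np.zeros(K + 1)                          # D[0] = 0: raw material always available
    for i in range(1, K + 1):
        start = max(D[i - 1], D_prev[i])
        complete = start + rng.exponential(mean_service[i - 1])
        D[i] = complete if i == K else max(complete, D_prev[i + 1])
    D_prev = D

print(N / D_prev[K])                             # estimated throughput (jobs per unit time)
```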

10.
This paper is about state estimation for continuous-time nonlinear models, in a context where all uncertain variables can be bounded. More precisely, cooperative models are considered, i.e., models that satisfy some constraints on the signs of the entries of the Jacobian of their dynamic equation. In this context, interval observers and a guaranteed recursive state estimation algorithm are combined to enclose the state at any given instant of time in a subpaving. The approach is illustrated on the state estimation of a waste-water treatment process.
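A minimal sketch of the interval-enclosure principle for a cooperative (Metzler) linear system with bounded disturbances; it is not the paper's combined observer and guaranteed recursive estimation algorithm, and the system matrix, bounds, and Euler step are assumptions.

```python
# Minimal interval-enclosure sketch for a cooperative linear system: with a Metzler
# matrix A and a disturbance bounded in [w_lo, w_hi], integrating the two bounding
# systems keeps the true state enclosed componentwise. Matrix, bounds, and the Euler
# step are illustrative assumptions.
import numpy as np

A = np.array([[-2.0, 1.0],
              [0.5, -1.5]])                      # Metzler: off-diagonal entries >= 0
w_lo, w_hi = np.array([-0.1, -0.1]), np.array([0.1, 0.1])

dt, steps = 0.01, 500
x = np.zeros(2)                                  # true (unknown) state
x_lo, x_hi = np.zeros(2), np.zeros(2)            # interval observer bounds
rng = np.random.default_rng(6)

for _ in range(steps):
    w = rng.uniform(w_lo, w_hi)                  # unknown-but-bounded disturbance
    x = x + dt * (A @ x + w)
    x_lo = x_lo + dt * (A @ x_lo + w_lo)
    x_hi = x_hi + dt * (A @ x_hi + w_hi)

print(x_lo, x, x_hi)                             # x_lo <= x <= x_hi componentwise
```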

11.
One method for improving wireless network throughput involves using directional antennas to increase signal gain and/or decrease interference. The physical layer models used in current networking simulators only minimally address the interaction of directional antennas and radio propagation. This paper compares the models found in popular simulation tools with measurements taken across a variety of links in multiple environments. We find that the effects of antenna direction are significantly different from those predicted by the models used in the common wireless network simulators. We propose a parametric model that better captures the effects of different propagation environments on directional antenna systems; we also show that the derived models are sensitive to both the direction of signal gain and the environment in which the antenna is used.
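For concreteness, a simple parametric received-power model combining log-distance path loss with a Gaussian main-lobe gain pattern is sketched below; the functional forms and every parameter value are illustrative assumptions, not the model fitted in the paper.

```python
# Sketch of a simple parametric directional link model: log-distance path loss plus
# a Gaussian main-lobe approximation of the antenna gain. All functional forms and
# parameter values here are assumptions for illustration only.
import numpy as np

def received_power_dbm(tx_dbm, distance_m, off_axis_rad,
                       path_loss_exp=3.0, ref_dist_m=1.0, ref_loss_db=40.0,
                       max_gain_db=12.0, beamwidth_rad=np.radians(30.0)):
    """Received power (dBm) at a given distance and off-axis angle."""
    path_loss_db = ref_loss_db + 10.0 * path_loss_exp * np.log10(distance_m / ref_dist_m)
    gain_db = max_gain_db * np.exp(-0.5 * (off_axis_rad / beamwidth_rad) ** 2)
    return tx_dbm - path_loss_db + gain_db

print(received_power_dbm(20.0, 100.0, np.radians(10.0)))   # near-boresight link
print(received_power_dbm(20.0, 100.0, np.radians(60.0)))   # far off boresight
```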

12.
This paper models and analyzes the throughput of a two-stage manufacturing system with multiple independent unreliable machines at each stage and one finite-sized buffer between the stages. The machines follow exponential operation, failure, and repair processes. Most of the literature uses binary random variables to model unreliable machines in transfer lines and other production lines. This paper first illustrates the importance of using more than two states to model parallel unreliable machines because of their independent and asynchronous operations in the parallel system. The system balance equations are then formulated based on a set of new notations of vector manipulations, and are transformed into a matrix form fitting the properties of the Quasi-Birth–Death (QBD) process. The Matrix-Analytic (MA) method for solving generic QBD processes is used to calculate the system state probabilities and throughput. Numerical cases demonstrate that the solution method is fast and accurate in analyzing parallel manufacturing systems, and thus prove the applicability of the new model and the effectiveness of the MA-based method. Such multi-state models and their solution techniques can be used as a building block for analyzing larger, more complex manufacturing systems.
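A small numerical sketch of the matrix-analytic machinery: for a QBD generator with blocks A0 (level up), A1 (local), and A2 (level down), the rate matrix R solves A0 + R A1 + R^2 A2 = 0 and can be found by fixed-point iteration. The MMPP/M/1-style blocks below are an illustrative example, not the paper's two-stage manufacturing model.

```python
# Numerical sketch of the matrix-analytic step for a QBD process: the rate matrix R
# solves A0 + R*A1 + R^2*A2 = 0 (minimal nonnegative solution) and is computed by
# fixed-point iteration. The MMPP/M/1-style blocks are an illustrative example only.
import numpy as np

Q = np.array([[-1.0, 1.0],
              [1.0, -1.0]])                      # environment (phase) generator
lam = np.array([0.5, 1.0])                       # phase-dependent arrival rates
mu = 2.0                                         # service rate

A0 = np.diag(lam)                                # level up (arrivals)
A2 = mu * np.eye(2)                              # level down (service completions)
A1 = Q - A0 - A2                                 # local transitions

R = np.zeros((2, 2))
for _ in range(200):
    R = -(A0 + R @ R @ A2) @ np.linalg.inv(A1)

print(R)
print(np.max(np.abs(A0 + R @ A1 + R @ R @ A2)))  # residual of the quadratic equation
```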

13.
This paper addresses the estimation of the variance of the sample mean from steady-state simulations without requiring knowledge of the simulation run length a priori. Dynamic batch means is a new and useful approach to implementing the traditional batch means method in limited memory without knowledge of the simulation run length. However, existing dynamic batch means estimators do not allow one to control the value of the batch size, which is the performance parameter of batch means estimators. In this work, an algorithm is proposed based on two dynamic batch means estimators to dynamically estimate the optimal batch size as the simulation runs. The simulation results show that the proposed algorithm requires reasonable computation time and possesses good statistical properties such as small mean squared error (MSE).
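For reference, the classical fixed-batch-size batch means estimator that the dynamic variants build on is sketched below; the AR(1) data generator and the batch size are illustrative assumptions, and the paper's dynamic batch-size selection algorithm is not reproduced.

```python
# Sketch of the classical fixed-batch-size batch means estimator: the variance of
# the sample mean of a correlated output series is estimated from the variance of
# batch means. The AR(1) generator and batch size are illustrative assumptions.
import numpy as np

def batch_means_variance(x, batch_size):
    """Estimate Var(sample mean) of a stationary, correlated series."""
    n_batches = len(x) // batch_size
    batches = x[: n_batches * batch_size].reshape(n_batches, batch_size)
    return batches.mean(axis=1).var(ddof=1) / n_batches

rng = np.random.default_rng(7)
x = np.zeros(100_000)
for i in range(1, len(x)):                       # positively correlated AR(1) output
    x[i] = 0.9 * x[i - 1] + rng.normal()

print(batch_means_variance(x, batch_size=1_000))  # roughly 1e-3 for this process
```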

14.
Fusing multiple Bayesian knowledge sources
We address the problem of information fusion in uncertain environments. Imagine there are multiple experts building probabilistic models of the same situation and we wish to aggregate the information they provide. There are several problems we may run into by naively merging the information from each. For example, the experts may disagree on the probability of a certain event or they may disagree on the direction of causality between two events (e.g., one thinks A causes B while another thinks B causes A). They may even disagree on the entire structure of dependencies among a set of variables in a probabilistic network. In our proposed solution to this problem, we represent the probabilistic models as Bayesian Knowledge Bases (BKBs) and propose an algorithm called Bayesian knowledge fusion that allows the fusion of multiple BKBs into a single BKB that retains the information from all input sources. This allows for easy aggregation and de-aggregation of information from multiple expert sources and facilitates multi-expert decision making by providing a framework in which all opinions can be preserved and reasoned over.

15.
Fork/join stations are commonly used to model the synchronization constraints in queuing models of computer networks, fabrication/assembly systems and material control strategies for manufacturing systems. This paper presents an exact analysis of a fork/join station in a closed queuing network with inputs from servers with two-phase Coxian service distributions, which models a wide range of variability in the input processes. The underlying queue length and departure processes are analyzed to determine performance measures such as throughput, distributions of the queue length and inter-departure times from the fork/join station. The results show that, for certain parameter settings, variability in the arrival processes has a significant impact on system performance. The model is also used to study the sensitivity of performance measures such as throughput, mean queue lengths, and variability of inter-departure times for a wide range of input parameters and network populations.

16.
The machine mix for a particular FMS, consisting of the number of machines performing each of three operations and the number of machines performing any of the three operations (flexible machines), is the input to an FMS simulation. An intuitively selected combination of these four inputs is compared to a 2^(4-1) fractional factorial design. The throughput predicted by the simulation is analyzed through two different regression models. These models are validated. A regression model in two inputs, including their interaction, gives valid predictions and stable explanations.
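An illustrative construction of a 2^(4-1) fractional factorial design with the defining relation D = ABC, followed by a least-squares fit of a main-effects regression to hypothetical throughput values; the response model and its coefficients are assumptions.

```python
# Illustrative 2^(4-1) fractional factorial design with defining relation D = ABC,
# and a least-squares main-effects fit to hypothetical simulated throughput values.
import itertools
import numpy as np

base = np.array(list(itertools.product([-1, 1], repeat=3)))             # columns A, B, C
design = np.column_stack([base, base[:, 0] * base[:, 1] * base[:, 2]])  # D = ABC (8 runs)

rng = np.random.default_rng(8)
throughput = 50.0 + 3.0 * design[:, 0] + 5.0 * design[:, 3] + rng.normal(0.0, 1.0, 8)

X = np.column_stack([np.ones(8), design])        # intercept + A, B, C, D
coef, *_ = np.linalg.lstsq(X, throughput, rcond=None)
print(coef)                                      # estimated intercept and main effects
```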

17.
18.
The optimal flow control of a G/G/c finite-capacity queue is investigated by approximating the general (G-type) distributions with a maximum entropy model with known first two moments. The flow-control mechanism maximizing the throughput, under a bounded time-delay criterion, is shown to be of window type (bang-bang control). The optimal input rate and the maximum number of packets in the system (i.e. the sliding window size) are derived in terms of the maximum input rate and the second moment of the inter-input time, the maximum allowed average time delay, the first two moments of the service times, and the number of servers. Moreover, the relationship between the maximum throughput and the maximum time delay is determined. Numerical examples provide useful information on how critically the optimal throughput is affected by the distributional form of the input and service patterns and the finite capacity of the queue.

19.
This paper proposes a mathematical programming method to construct the membership function of the fuzzy objective value of a cost-based queueing decision problem in which the cost coefficients and the arrival rate are fuzzy numbers. On the basis of Zadeh's extension principle, three pairs of mixed integer nonlinear programs (MINLP) parameterized by the possibility level α are formulated to calculate the lower and upper bounds of the minimal expected total cost per unit time at each α, from which the membership function of the minimal expected total cost per unit time is constructed. To provide a suitable optimal service rate for designing queueing systems, Yager's ranking index method is adopted. Two numerical examples are solved successfully to demonstrate the validity of the proposed method. Since the objective value is expressed by a membership function rather than by a crisp value, it preserves the fuzziness of the input information, and thus more information is provided for designing queueing systems. The successful extension of queueing decision models to fuzzy environments permits queueing decision models to have wider applications in practice.
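A heavily simplified sketch of the alpha-cut idea: with the arrival rate a triangular fuzzy number and an M/M/1-style crisp cost model, the lower and upper bounds of the minimal expected cost at level alpha are obtained by optimizing at the ends of the alpha-cut interval. The cost coefficients, cost function, and grid search are assumptions; the paper instead solves parametric MINLPs under Zadeh's extension principle.

```python
# Simplified alpha-cut sketch: arrival rate is a triangular fuzzy number, the crisp
# model is an M/M/1 cost function, and the bounds of the minimal cost at level alpha
# come from optimizing at the ends of the alpha-cut interval. Cost coefficients and
# the grid search over the service rate are assumptions.
import numpy as np

c_service, c_wait = 4.0, 10.0                    # assumed cost coefficients
lam_tri = (3.0, 4.0, 5.0)                        # triangular fuzzy arrival rate

def min_expected_cost(lam):
    mu = np.linspace(lam + 0.01, lam + 20.0, 4000)       # candidate service rates
    cost = c_service * mu + c_wait * lam / (mu - lam)    # service cost + waiting cost
    return cost.min()

alpha = 0.5
lam_lower = lam_tri[0] + alpha * (lam_tri[1] - lam_tri[0])   # alpha-cut endpoints
lam_upper = lam_tri[2] - alpha * (lam_tri[2] - lam_tri[1])
print(min_expected_cost(lam_lower), min_expected_cost(lam_upper))
```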

20.
A new approach to assess product lifetime performance for small data sets
Because of cost and time limitations, the number of samples is usually small in the early stages of a manufacturing system, and the scarcity of actual data causes problems in decision-making. In order to solve this problem, this paper constructs a counter-intuitive hypothesis-testing method that chooses the maximal p-value based on a two-parameter Weibull distribution to enhance the estimation of the nonlinear and asymmetrical shape of the product lifetime distribution. Further, we systematically generate virtual data to extend the small data set and improve the learning robustness of product lifetime performance. This study provides simulated data sets and two practical examples to demonstrate that the proposed method is a more appropriate technique for increasing the estimation accuracy of product lifetime for normal or non-normal data with small sample sizes.
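A hedged sketch of the choose-parameters-by-maximal-p-value idea for a small sample: grid-search the two-parameter Weibull shape and scale that maximize a Kolmogorov-Smirnov p-value. The toy data, the grid ranges, and the use of the KS test are stand-ins for the paper's exact hypothesis-testing construction.

```python
# Sketch: pick the two-parameter Weibull (shape, scale) that maximizes the KS-test
# p-value for a small lifetime sample. Toy data, grid ranges, and the KS test are
# stand-ins for the paper's maximal-p-value hypothesis-testing construction.
import numpy as np
from scipy.stats import kstest, weibull_min

lifetimes = np.array([12.1, 15.3, 9.8, 20.4, 17.7, 11.2, 14.9, 18.3])   # small sample

best_params, best_p = None, -1.0
for shape in np.linspace(0.5, 5.0, 46):
    for scale in np.linspace(5.0, 30.0, 51):
        p = kstest(lifetimes, weibull_min(shape, scale=scale).cdf).pvalue
        if p > best_p:
            best_params, best_p = (shape, scale), p

print(best_params, best_p)                       # chosen (shape, scale) and its p-value
```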

