Similar Literature
20 similar documents found (search time: 187 ms).
1.
The feasibility of solving LP problems on an Apple II is explored. The maximal size of problem that can be solved is found to be around 50 constraints and 100 variables. Run times may exceed 12 hours in interpreted Applesoft BASIC and one hour in compiled Applesoft BASIC. Numerical stability is somewhat problematic. Nonetheless, the conclusion is that nicely scaled problems can be solved on the Apple II.
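For comparison, a problem of exactly the size limit the paper reports can be set up and solved in well under a second with a modern solver. The sketch below uses random but deliberately well-scaled illustrative data; the feasibility construction and all coefficients are assumptions, not the paper's test problems.

```python
import numpy as np
from scipy.optimize import linprog

# A 50-constraint x 100-variable LP, the size limit reported for the Apple II.
rng = np.random.default_rng(0)
m, n = 50, 100
A = rng.uniform(0.5, 2.0, size=(m, n))   # well-scaled constraint matrix
b = A.sum(axis=1)                        # makes x = 1 feasible by construction
c = rng.uniform(1.0, 3.0, size=n)
res = linprog(-c, A_ub=A, b_ub=b, bounds=(0, None), method="highs")
print("optimal value:", -res.fun)        # maximisation via negated costs
```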

2.
Decision making in modern supply chains can be extremely daunting due to their complex nature. Discrete-event simulation is a technique that can support decision making by providing what-if analysis and evaluation of quantitative data. However, modelling supply chain systems can result in massively large and complicated models that take a very long time to run, even with today's powerful desktop computers. Distributed simulation has been suggested as a possible solution to this problem, by enabling multiple computers to run a model. To investigate this claim, this paper presents experiences in implementing a simulation model with a 'conventional' approach and with a distributed approach. The study takes place in a healthcare setting: the supply chain of blood from donor to recipient. It compares the conventional and distributed execution times of a supply chain model built in the simulation package Simul8. The results show that the execution time of the conventional approach increases almost linearly with both the size of the system and the simulation run period, whereas the distributed approach scales more favourably in both respects and appears to offer a practical alternative. On this basis, the paper concludes that distributed simulation can be successfully applied in certain situations.
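A minimal sketch of the kind of 'conventional' single-process model being compared, written in SimPy rather than Simul8; the donor-to-stock flow, capacities and timings below are illustrative assumptions, not the study's data.

```python
import simpy

# One stage of a blood supply chain: donated units queue for testing,
# then join the hospital stock. Growing the number of stages/resources
# is what drives the conventional model's execution time up.
def donation(env, testing, stock):
    with testing.request() as req:        # donated unit queues for testing
        yield req
        yield env.timeout(2.0)            # testing time (assumed, hours)
    yield stock.put(1)                    # tested unit joins hospital stock

def arrivals(env, testing, stock):
    while True:
        yield env.timeout(1.5)            # inter-donation time (assumed)
        env.process(donation(env, testing, stock))

env = simpy.Environment()
testing = simpy.Resource(env, capacity=2)
stock = simpy.Container(env, init=0)
env.process(arrivals(env, testing, stock))
env.run(until=200)
print("tested units in stock:", stock.level)
```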

3.
There is an unmet demand for the treatment of irreversible kidney failure, particularly in the older age groups. A model of the treatment system was built to explore the implications of meeting the demand, giving different priorities to the available treatments and changing the balance between home and hospital. This discrete-event simulation, developed in the Wessex Region, describes the system realistically, including resource use and constraints, the arrival of kidneys for transplantation and the matching of donors with recipients. It is written in Pascal on an Apple II computer and uses shadow entities to describe the survival of patients on each type of treatment. The model was validated with techniques which included the use of a tabular display while the simulation was running. The model has proved to be easy to use and robust both to different data requirements and to extreme policy changes. The techniques developed have more general application in the Health Service context.

4.
Microcomputers are now widely used for discrete-event simulation work in Operational Research. An inexpensive microcomputer system for simulation modelling is presented. Based on an Apple II, it allows the programmer to develop three-phase activity-based interactive models in UCSD Pascal. Use is made of disc emulation to provide simultaneously available pictorial displays and to extend fast-access on-line memory for the development of large simulations. Use of a microcomputer speeds development time and gives the user a transportable computer dedicated to the simulation; Pascal facilitates the development of readable, portable simulations. The simulation of a conveyor belt system demonstrates the simplicity and flexibility of the pictorial display. A practical study in the Health Service (modelling the treatment of chronic renal failure) illustrates that the package may be used to simulate a real and complex system.
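The three-phase executive at the heart of such a package is compact. Below is a minimal sketch in Python rather than UCSD Pascal; the class and method names are invented for illustration, not taken from the package.

```python
import heapq

# Minimal three-phase (A/B/C) simulation executive:
# A-phase advances the clock, B-phase runs time-bound events,
# C-phase retries conditional activities until none can start.
class ThreePhase:
    def __init__(self):
        self.clock, self.calendar, self.c_activities = 0.0, [], []

    def schedule(self, delay, b_event):
        # B-events are bound to a definite time on the event calendar
        heapq.heappush(self.calendar, (self.clock + delay, id(b_event), b_event))

    def run(self, until):
        while self.calendar and self.calendar[0][0] <= until:
            self.clock = self.calendar[0][0]               # A-phase
            while self.calendar and self.calendar[0][0] == self.clock:
                heapq.heappop(self.calendar)[2]()          # B-phase
            fired = True
            while fired:                                   # C-phase scan:
                fired = False                              # attempt every
                for c in self.c_activities:                # conditional
                    fired = c() or fired                   # activity again

sim = ThreePhase()
sim.schedule(5.0, lambda: print("machine finishes at t =", sim.clock))
sim.run(until=10.0)
```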

5.
Thermoelastic damping (TED) significantly affects the quality factors of vacuum-operated micro/nanobeam resonators. In this work, by adopting the non-Fourier dual-phase-lag (DPL) heat conduction model, an analytical formula for TED in micro/nanobeam resonators with circular cross-section is first developed. Moreover, for micro/nanobeam resonators with rectangular cross-section, a series-form DPL-TED model is also proposed and compared with the modified existing model. The characteristics of TED spectra with single-peak, dual-peak, and multiple-peak phenomena are explored. The simulation results reveal that the ratio of the dual phase-lag times and the characteristic dimension of the beam, such as the radius or thickness, have significant influences on TED behaviour. In addition, temperature distributions in micro/nanobeams exhibit an apparent distinction under the DPL non-Fourier effect.
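For orientation (this is standard background, not the paper's new circular-cross-section formula): the DPL model replaces Fourier conduction with a lagged constitutive law, and in the limit of vanishing lags the TED of a rectangular beam of thickness h reduces to Zener's classical single-peak result:

```latex
\mathbf{q} + \tau_q \frac{\partial \mathbf{q}}{\partial t}
  = -k\left(\nabla T + \tau_T \frac{\partial (\nabla T)}{\partial t}\right),
\qquad
Q^{-1}_{\text{Zener}} = \frac{E\alpha^2 T_0}{\rho C_p}\,
  \frac{\omega\tau}{1+(\omega\tau)^2},
\quad \tau = \frac{h^2}{\pi^2 \chi},\ \chi = \frac{k}{\rho C_p}.
```

Here τ_q and τ_T are the phase lags of the heat flux and the temperature gradient; their ratio is the quantity the paper finds to control the single-, dual- and multiple-peak behaviour.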

6.
Both technology and market demands within the high-tech electronics manufacturing industry change rapidly. Accurate and efficient estimation of the cycle-time (CT) distribution remains a critical driver of on-time delivery and associated customer-satisfaction metrics in these complex manufacturing systems. Simulation models are often used to emulate these systems in order to estimate parameters of the CT distribution. However, the execution time of such simulation models can be excessively long, limiting the number of runs that can be executed to quantify the impact of potential operational changes. One solution is simulation metamodeling: building a closed-form mathematical expression that approximates the input–output relationship implied by the simulation model, based on simulation experiments run in advance at selected design points. Metamodels can be evaluated in a spreadsheet environment "on demand" to answer what-if questions without needing to run lengthy simulations. The majority of previous simulation metamodeling approaches have focused on estimating mean CT as a function of a single input variable (i.e., throughput). In this paper, we demonstrate the feasibility of a quantile-regression-based metamodeling approach. This method allows estimation of CT quantiles as a function of multiple input variables (e.g., throughput, product mix, and various distributional parameters of time-between-failures, repair time, setup time, and loading and unloading times). Empirical results are provided to demonstrate the efficacy of the approach in a realistic simulation model representative of a semiconductor manufacturing system.
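A minimal sketch of the idea, assuming a linear quantile model; the synthetic data below stand in for simulation runs at design points, and the input names and ranges are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import QuantileRegressor

# Quantile-regression metamodel: the 95th CT percentile as a function of
# several inputs, fitted once and then queried "on demand".
rng = np.random.default_rng(0)
n = 400
X = np.column_stack([
    rng.uniform(0.5, 0.95, n),    # utilisation / throughput (assumed range)
    rng.uniform(0.1, 0.9, n),     # product-mix fraction (assumed)
    rng.uniform(1.0, 5.0, n),     # mean repair time (assumed)
])
# Toy cycle-time response: grows sharply with utilisation, plus noise
y = 10 / (1 - X[:, 0]) + 3 * X[:, 1] + 2 * X[:, 2] + rng.exponential(2, n)

q95 = QuantileRegressor(quantile=0.95, alpha=0.0).fit(X, y)
print("95th-percentile CT at a what-if point:",
      q95.predict([[0.9, 0.5, 3.0]])[0])
```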

7.
The purpose of this paper is to illustrate how a very simple queueing model can be used to gain insight into a computer memory-management strategy that is important for a large class of discrete-event simulation models. To this end, an elementary queueing model is used to demonstrate that it can be advantageous to run transaction-based simulations with relatively few tagged transactions that collect data, while the remaining transactions merely congest the system. Conceptually, the tagged transactions flow through the simulation much like radioactive trace elements inserted into a biological system. The queueing model analysed in this paper provides insight into some trade-offs in simulation data collection. We show that, while resulting in a longer computer run, an optimal tagging interval greater than one will minimize the probability of prematurely aborting the run. Finally, we propose a heuristic procedure to estimate the optimal tagging interval and illustrate it with an actual simulation study of a steel production facility. This research was partially supported by a grant to Cornell University by the Bethlehem Steel Corporation.
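The tagging idea is easy to demonstrate. The sketch below collects waiting-time statistics only for one in every k customers of a simple FIFO queue; k and the arrival/service rates are illustrative choices, not the paper's.

```python
import random

# M/M/1 queue in which only every k-th transaction is "tagged" and keeps
# statistics; untagged transactions just occupy the server, so the memory
# devoted to data collection is cut by roughly a factor of k.
def mm1_with_tagging(lam=0.9, mu=1.0, n_customers=100_000, k=10):
    random.seed(1)
    t = depart = 0.0
    tagged_waits = []                      # data kept only for tagged customers
    for i in range(n_customers):
        t += random.expovariate(lam)       # arrival instant
        start = max(t, depart)             # FIFO single server
        depart = start + random.expovariate(mu)
        if i % k == 0:                     # tag one transaction in every k
            tagged_waits.append(start - t)
    return sum(tagged_waits) / len(tagged_waits)

print("mean wait from tagged sample:", round(mm1_with_tagging(), 3))
```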

8.
We have developed an approximate analytic model and a detailed simulation model to study the performance of an ISDN switch with a distributed architecture. The analytic model treats the switch as a network of single-server and infinite-server queues with nonpreemptive priority service, general service times and batch arrivals. The simulation program is written in a distributed and modular way so as to simplify model development and debugging, and extensive statistical techniques are employed to validate the simulation output. It is observed that the analytic and simulation models are in close agreement for the mean end-to-end delay and in moderately close agreement for the 95th-percentile points of the end-to-end delay distribution. The comparisons lead us to conjecture that the analytic model would be even more accurate for bigger systems with several hundred processors (where simulation models are too expensive to run). Even though the model assumes a Poisson external call arrival process, it is shown that it may be applied with reasonable accuracy even when external call arrivals are non-Poisson. This is because the composite message arrival process at a processor or transmission element tends to be close to Poisson even when the external call arrivals are not.
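Priority-queue models of this kind typically build on the classical M/G/1 nonpreemptive-priority result (Cobham's formula); the paper's batch-arrival version generalizes it, so the expression below is background rather than the paper's own model:

```latex
W_k = \frac{\sum_{i} \lambda_i\, \mathrm{E}[S_i^2]/2}
           {\left(1-\sigma_{k-1}\right)\left(1-\sigma_k\right)},
\qquad
\sigma_k = \sum_{i=1}^{k} \lambda_i\, \mathrm{E}[S_i],\quad \sigma_0 = 0,
```

where W_k is the mean waiting time of class k, class 1 having the highest priority, and λ_i and S_i are the arrival rate and service time of class i.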

9.
This paper presents an approach to simulating, and implementing by stepwise refinement, a whole manufacturing system (MS) by means of distributed simulation. The approach is based on the use of different classes of Petri nets to model different levels of a manufacturing system; these classes may match the abstraction levels of a high-level Petri net used to model the MS. Each level can be simulated on a processor or a cluster of processors which communicate over a network. The main contribution is the opportunity to combine simulation, performance evaluation and emulation, where emulation means that part of the system runs in real time while the other part is simulated. Moreover, based on the abstraction levels of high-level Petri nets, subsystems can be integrated step by step from the design stage to implementation, allowing interchangeability between simulated components and real-time physical systems. This is achieved by defining a simulation engine which involves a local simulator, an emulator and an interface to the physical process. Criteria are defined for using an emulator, or local control software for a physical process, as a logical process in the conservative distributed simulation.
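At the lowest abstraction level, simulating a place/transition net is just the token game. A minimal sketch follows; the net (a single machine fed by a buffer) is an invented example, and high-level-net features such as coloured tokens are omitted.

```python
# transition -> (input places, output places), with arc weights
net = {
    "t_load":   ({"buffer": 1, "machine_free": 1}, {"machine_busy": 1}),
    "t_unload": ({"machine_busy": 1}, {"done": 1, "machine_free": 1}),
}
marking = {"buffer": 3, "machine_free": 1, "machine_busy": 0, "done": 0}

def enabled(t):
    return all(marking[p] >= w for p, w in net[t][0].items())

def fire(t):
    for p, w in net[t][0].items():
        marking[p] -= w                       # consume input tokens
    for p, w in net[t][1].items():
        marking[p] += w                       # produce output tokens

while any(enabled(t) for t in net):           # play the token game to deadlock
    fire(next(t for t in net if enabled(t)))
print(marking)                                # all three parts end in "done"
```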

10.
In a previous paper, the author discussed what induction period is necessary when simulating time-series models, in order to avoid distortion caused by the initial transients which arise from starting up a simulation run. That approach, however, can be misleading, and this paper gives an alternative and more precise treatment which provides greater insight. One novel conclusion is that the necessary warm-up period depends not only on the process to be simulated but also on the length of the required simulation run. For simplicity, the detailed argument is restricted to the first-order autoregressive model.
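The first-order case makes the dependence easy to see. For a stationary AR(1) process (a standard illustration consistent with the abstract, not the paper's full argument):

```latex
X_t - \mu = \phi\,(X_{t-1} - \mu) + \varepsilon_t
\quad\Longrightarrow\quad
\mathrm{E}[X_t \mid X_0] - \mu = \phi^{\,t}\,(X_0 - \mu),
```

so after discarding w warm-up observations, the bias remaining in a sample mean of length n is of order φ^w (X_0 − μ) / (n(1 − φ)): the warm-up needed to hold this below a tolerance depends on n as well as on φ, which is exactly the run-length dependence the paper highlights.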

11.
MGG is a software package for the application of mathematical programming (MP). It complements the Sciconic MP code by providing a facility for developing MP models quickly and efficiently, and it enables changes to be made easily to established models. It is available on a wide range of minis and mainframes, and a version of it is available as part of the Micro LP system on IBM PCs and compatibles. MGG is based on an approach to modelling MP problems which stresses the primacy of the mathematical formulation. The process of MP modelling is divided into two stages: model preparation and running of the model. The user writes a mathematical formulation, which MGG converts to matrix generator and report writer programs. This is done once to produce the programs, which can then be run many times on different data. This paper describes MGG and draws comparisons with other matrix generator and mathematical programming languages. It starts by considering how an MP problem can be described, and then sets out a methodology for formulation. The MGG language is based upon this approach. A simple example is presented which is shown both as an algebraic formulation and in the MGG language. The process of building and running models with MGG is then described. Finally, some comments are offered on experience of using the software.
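The two-stage idea, writing the algebraic formulation once and instantiating it from data many times, can be sketched in a few lines of Python. All names and data below are invented; MGG itself generates input for the Sciconic solver rather than Python structures.

```python
# Matrix generator in MGG's spirit: the form
#   min sum_j c_j x_j   s.t.   sum_j a_ij x_j <= b_i
# is coded once; each call instantiates it from a different data set.
def generate_matrix(products, resources, cost, use, avail):
    cols = list(products)
    rows = [[use[r][p] for p in cols] for r in resources]   # constraint matrix
    return cols, rows, [avail[r] for r in resources], [cost[p] for p in cols]

data = dict(
    products=["P1", "P2"], resources=["labour", "steel"],
    cost={"P1": 3.0, "P2": 5.0},
    use={"labour": {"P1": 2, "P2": 1}, "steel": {"P1": 1, "P2": 3}},
    avail={"labour": 100, "steel": 90},
)
print(generate_matrix(**data))
```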

12.
Multidimensional scaling has a wide range of applications when observations are not continuous but a distance (or dissimilarity) between them can be defined. However, standard implementations are limited when analysing very large datasets because they rely on an eigendecomposition of the full distance matrix, requiring very long computing times and large amounts of memory. Here, a new approach is developed based on projecting the observations into a space defined by a subset of the full dataset. The method is easily implemented. A simulation study showed that its performance is satisfactory in different situations and that it runs in a short time in cases where the standard method takes a very long time or cannot be run at all because of memory requirements.
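A sketch of this style of subset-projection MDS, in the spirit of landmark/Nyström methods; the paper's exact construction may differ.

```python
import numpy as np

def landmark_mds(D_land, D_rest, ndim=2):
    """Classical MDS on k landmark points, then cheap projection of the
    remaining m points, avoiding eigendecomposition of the full matrix.
    D_land: (k, k) landmark distances; D_rest: (m, k) distances to landmarks.
    """
    k = D_land.shape[0]
    Delta = D_land ** 2
    H = np.eye(k) - np.ones((k, k)) / k        # centering matrix
    B = -0.5 * H @ Delta @ H                   # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:ndim]        # keep the largest (positive)
    vals, vecs = vals[idx], vecs[:, idx]       # eigenvalues
    X_land = vecs * np.sqrt(vals)              # landmark coordinates
    pinv = vecs / np.sqrt(vals)                # projection operator
    X_rest = -0.5 * (D_rest ** 2 - Delta.mean(axis=0)) @ pinv
    return X_land, X_rest

rng = np.random.default_rng(0)
pts = rng.normal(size=(1000, 5))
land, rest = pts[:50], pts[50:]
D_ll = np.linalg.norm(land[:, None] - land[None], axis=-1)
D_rl = np.linalg.norm(rest[:, None] - land[None], axis=-1)
X_land, X_rest = landmark_mds(D_ll, D_rl, ndim=2)
print(X_land.shape, X_rest.shape)              # (50, 2) (950, 2)
```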

13.
Computational Geometry, 2000, 15(1–3): 69–89
We present a fast and accurate collision detection algorithm for haptic interaction with polygonal models. Given a model, we precompute a hybrid hierarchical representation consisting of uniform grids (represented using a hash table) and trees of tight-fitting oriented bounding boxes (OBBTrees). At run time, we use these hybrid hierarchical representations and exploit frame-to-frame coherence for fast proximity queries. We describe a new overlap test, specialized for intersecting a line segment with an oriented bounding box in haptic simulation, which takes 42–72 operations including transformation costs. The algorithms have been implemented as part of H-COLLIDE, interfaced with a PHANToM arm and its haptic toolkit, GHOST, and applied to a number of models. Compared to the commercial implementation, we achieve up to a 20-times speedup in our experiments and sustain update rates over 1000 Hz on a 400 MHz Pentium II. In practice, our prototype implementation can accurately and efficiently detect contacts between a virtual probe guided by a force-feedback arm and large complex geometries composed of tens of thousands of polygons, at sustained kHz rates.
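The specialized query is segment-vs-OBB. A straightforward version uses the slab method after transforming the segment into the box's local frame; this sketch shows the logic, while the paper's 42–72-operation test is organized for a lower operation count.

```python
import numpy as np

def segment_obb_overlap(p0, p1, center, axes, half_ext):
    """Line segment (p0, p1) vs oriented bounding box, via the slab method.
    axes: (3, 3) orthonormal rows = the box's local axes in world frame;
    half_ext: (3,) half-extents along those axes."""
    d = axes @ (p1 - p0)              # segment direction, box-local coords
    o = axes @ (p0 - center)          # segment origin, box-local coords
    t_min, t_max = 0.0, 1.0           # clip the segment against each slab
    for i in range(3):
        if abs(d[i]) < 1e-12:         # segment parallel to this slab
            if abs(o[i]) > half_ext[i]:
                return False
        else:
            t1 = (-half_ext[i] - o[i]) / d[i]
            t2 = ( half_ext[i] - o[i]) / d[i]
            if t1 > t2:
                t1, t2 = t2, t1
            t_min, t_max = max(t_min, t1), min(t_max, t2)
            if t_min > t_max:
                return False
    return True

print(segment_obb_overlap(np.array([-2.0, 0, 0]), np.array([2.0, 0, 0]),
                          np.zeros(3), np.eye(3), np.ones(3)))   # True
```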

14.
Virtual prototyping plays an ever-increasing role in the engineering disciplines. Nowadays, engineers can rely on powerful tools such as object-oriented modeling languages, e.g., Modelica. Models written in this language can be simulated by open-source software as well as commercial tools. The advantage of this approach is that engineers can concentrate on modeling, while the numerical intricacies of the simulation are handled by the software. On the other hand, the simulations are usually slower than implementations which are parallelized and optimized manually. This can lead to computation times which are infeasible in practice, e.g., when real-time execution is necessary for a hardware-in-the-loop simulation. In this contribution we are concerned with speeding up such automated simulations by parallelization (on desktop hardware as well as HPC systems). We examine parallelism-across-the-system approaches. Additionally, the influence of the problem formulation on the simulation time is discussed. The implemented methods are demonstrated on engineering examples. (© 2015 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)

15.
Logistics systems have to cope with uncertainties in demand, in lead times, in transport times, in the availability of resources and in quality, and management decisions have to take these uncertainties into consideration. Decisions may be evaluated by means of simulation; however, not all stochastic phenomena are of equal importance. By designing simulation experiments and making use of response surfaces, the most important phenomena are detected and their influence on performance is estimated. Once the influence of the phenomena is known, this knowledge may be used to determine the optimal values of some decision parameters. An illustration is given of how to use response surfaces in a real-world case: a model is built in logistics modelling software, the decision parameters are optimised for a specific objective function, and experiments are run to estimate the response surface. The validity of a response surface estimated from few observations is also tested.
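A compact sketch of the response-surface step, assuming a second-order polynomial surface fitted to simulated performance at designed points; all data below are synthetic placeholders for simulation output.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Fit a quadratic response surface to noisy "simulation" observations of
# two decision parameters, then read off a promising setting from it.
rng = np.random.default_rng(2)
X = rng.uniform([1, 1], [10, 10], size=(30, 2))      # 30 design points
y = -(X[:, 0] - 6) ** 2 - 2 * (X[:, 1] - 4) ** 2 + rng.normal(0, 1, 30)

surface = make_pipeline(PolynomialFeatures(2), LinearRegression()).fit(X, y)
grid = np.array([[a, b] for a in np.linspace(1, 10, 50)
                         for b in np.linspace(1, 10, 50)])
best = grid[surface.predict(grid).argmax()]
print("estimated optimum near:", best)               # close to (6, 4)
```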

16.
17.
An artificial neural network (ANN) model for the economic analysis of risky projects is presented in this paper. Outputs of conventional simulation models are used as training inputs for the neural network, which is then used to predict the potential returns from an investment project with stochastic parameters. The nondeterministic aspects of the project include the initial investment, the magnitude of the rate of return, and the investment period. The backpropagation method is used in the neural network modeling, with sigmoid and hyperbolic tangent functions used in the learning aspect of the system. Analysis of the outputs of the neural network model indicates that more predictive capability can be achieved by coupling conventional simulation with neural network approaches. The trained network was able to predict simulation output from input values with very good accuracy for conditions not in its training set. This allowed an analysis of the future performance of the investment project without having to run additional expensive and time-consuming simulation experiments.
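The coupling is straightforward to reproduce with a modern library. Below, an MLP with tanh units stands in for the paper's backpropagation network, and a toy compound-interest formula stands in for the simulation outputs; all names, ranges and the response are invented for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
X = np.column_stack([
    rng.uniform(50, 150, 500),     # initial investment (assumed units)
    rng.uniform(0.05, 0.25, 500),  # rate of return
    rng.uniform(3, 15, 500),       # investment period (years)
])
y = X[:, 0] * ((1 + X[:, 1]) ** X[:, 2] - 1)   # toy "simulated" return

# Backpropagation network with hyperbolic-tangent units, trained on the
# simulation data, then queried instead of running more simulations.
net = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 8), activation="tanh",
                 max_iter=5000, random_state=0),
).fit(X, y)
print(net.predict([[100, 0.12, 10]]))          # predict an unseen condition
```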

18.
In the framework of the numerical simulation of damaged materials, softening behaviour is an important topic. The decrease in stiffness is mainly caused by the evolution of microvoids. In contrast to established phenomenological damage approaches, explicitly considering effects on the micro scale can lead to improved approximation quality. In this work, we discuss an approach to describing microstructural evolution. Based on a two-phase micro model representing the macroscopic material behaviour, the structural evolution on the micro scale is modelled using configurational forces. Besides some theoretical basics on configurational forces in two-phase systems and the definition of suitable evolution laws, we present an application of this approach to the void-growth process in rubberlike material. (© 2006 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
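As background (standard ingredients of configurational-force approaches, not the paper's specific derivation): the configurational, or Eshelby, stress and a simple linear kinetic law for the phase interface take the form

```latex
\boldsymbol{\Sigma} = \psi\,\mathbf{I} - \mathbf{F}^{\mathrm{T}}\mathbf{P},
\qquad
v_n = M\,\bigl(\mathbf{n}\cdot[\![\boldsymbol{\Sigma}]\!]\,\mathbf{n}\bigr),
```

where ψ is the free energy per unit reference volume, P the first Piola–Kirchhoff stress, [[·]] the jump across the interface, and M ≥ 0 a mobility; the interface (here, the void surface) then advances in the direction that reduces the configurational energy.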

19.
The development of a consensus production, distribution and finance decision support system for HP Bulmer PLC is described. The DSS is implemented on a microcomputer, uses high-resolution colour graphics for its display output, and supports both a database and a software model bank. The model bank allows different production and distribution planning models to be run, in context, during the same interactive planning session. The paper also comments on the elementary learning capabilities of the system.

20.
Scientific research is one of the four major functions of universities and an important reflection of a university's overall strength, and a university's incentive policies for, and management of, faculty research innovation are crucial to the campus-wide research climate. Assuming bounded rationality of the participants, this paper first analyses the factors that influence the decisions of researchers and of the research administration under the current institutional environment of universities, establishes a game payoff matrix for the behaviour of university researchers and the administration from an evolutionary-game perspective, and constructs the replicator dynamic equations of the relevant behaviours. It then uses evolutionary game theory to study the evolutionary paths of researcher and administrator behaviour and the factors that influence that evolution, deriving the effect that individual researchers' strategy choices have on group behaviour. To study research innovation behaviour systematically and quantitatively, the paper carries out an evolutionary simulation of research innovation and its management on the Matlab GUI platform, and systematically analyses the influence of different initial conditions and decision parameters on the evolutionary outcome. The analytical approach can provide decision support for universities and research administration departments, helping them adopt appropriately calibrated reward policies in good time, guide research towards higher levels, and improve universities' capacity for research innovation.
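A minimal Python counterpart of such a simulation, integrating the two-population replicator equations with Euler steps; the payoff parameters and initial shares below are illustrative placeholders, not the paper's calibrated values.

```python
# Replicator dynamics for the researcher/administrator game.
# x: share of researchers choosing "innovate"
# y: share of administrators choosing "actively reward/support"
a, b = 4.0, 1.0   # researcher's payoff edge from innovating, with/without support
c, d = 3.0, 1.5   # administrator's payoff edge from supporting, vs. innovators/non

def step(x, y, dt=0.01):
    dx = x * (1 - x) * (a * y - b * (1 - y))   # fitness gap of "innovate"
    dy = y * (1 - y) * (c * x - d * (1 - x))   # fitness gap of "support"
    return x + dt * dx, y + dt * dy

x, y = 0.2, 0.3                                # initial strategy shares
for _ in range(5000):                          # Euler integration of the
    x, y = step(x, y)                          # replicator equations
print(f"long-run shares: innovate={x:.3f}, support={y:.3f}")
```

Sweeping the initial shares (x, y) over a grid reproduces the kind of initial-condition analysis the abstract describes.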
