Similar Literature
20 similar documents found (search time: 78 ms)
1.
Julia Mergheim 《PAMM》2008,8(1):10555-10556
In the present contribution a multi-scale – or rather two-scale – framework for the modelling of propagating discontinuities is introduced. The method is based on the Variational Multiscale Method. The displacement field is additively decomposed into a coarse- and a fine-scale part. This kinematic assumption implies a separation of the weak form into two equations, corresponding to the coarse-scale and the fine-scale problems. Both scales are discretized by means of finite elements. On the fine scale, thanks to a much finer discretization, a heterogeneous mesostructure and propagating mesocracks can be considered. The propagation of the mesocracks is simulated independently of the underlying finite element mesh by discontinuous elements. The performance of the multi-scale approach is demonstrated by a numerical example. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
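The additive split and the resulting pair of coupled weak problems can be sketched in standard variational-multiscale notation (the symbols below are assumed, not quoted from the paper): write the displacement as $u = u^c + u^f$ and test the weak form $a(w,u) = f(w)$ separately with coarse- and fine-scale test functions,

$$a(w^c,\, u^c + u^f) = f(w^c) \qquad \text{and} \qquad a(w^f,\, u^c + u^f) = f(w^f),$$

the first equation being discretized on the coarse mesh and the second on the much finer mesh that resolves the mesostructure and the mesocracks.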

2.
Abstract Ecosystem processes function at many scales, and capturing these processes is a challenge for ecosystem models. Nevertheless, it is a necessary step for considering many management issues pertaining to shelf and coastal systems. In this paper, we explore one method of modeling large areas with a focus at a range of scales. We develop an ecosystem model that can be used for strategic management decision support by modeling the waters off southeastern Australia using a polygon telescoping approach, which incorporates fine-scale detail in the coastal zone and coarsens progressively toward the offshore areas. This telescoping technique is a useful tool for incorporating a wide range of habitats at different scales into a single model.

3.
This paper presents a general preconditioning method based on a multilevel partial elimination approach. The basic step in constructing the preconditioner is to separate the initial points into two parts. The first part consists of ‘block’ independent sets, or ‘aggregates’. Unknowns of two different aggregates have no coupling between them, but those in the same aggregate may be coupled. The nodes not in the first part constitute what might be called the ‘coarse’ set, and it is natural to call the nodes in the first part ‘fine’ nodes. The idea of the method is to form the Schur complement related to the coarse set. This leads to a natural block LU factorization which can be used as a preconditioner for the system. This system is then solved recursively, using as preconditioner the factorization obtained from the next level; iterations between levels are allowed. One interesting aspect of the method is that it provides a common framework for many other techniques. Numerical experiments are reported which indicate that the method can be fairly robust. Copyright © 2002 John Wiley & Sons, Ltd.
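The block LU structure behind this kind of preconditioner can be sketched in a few lines. The dense, one-level version below uses exact inverses (the paper works with sparse matrices, approximate solves, and multilevel recursion, so this is only an illustration of how the fine/coarse splitting and the Schur complement combine):

```python
import numpy as np

def schur_preconditioner(A, fine, coarse):
    """One-level block-LU preconditioner from the fine/coarse splitting
    A = [[A_ff, A_fc], [A_cf, A_cc]].  Dense illustrative sketch."""
    A_ff = A[np.ix_(fine, fine)]
    A_fc = A[np.ix_(fine, coarse)]
    A_cf = A[np.ix_(coarse, fine)]
    A_cc = A[np.ix_(coarse, coarse)]
    S = A_cc - A_cf @ np.linalg.solve(A_ff, A_fc)   # Schur complement

    def apply(r):
        r_f, r_c = r[fine], r[coarse]
        y_f = np.linalg.solve(A_ff, r_f)            # forward (L) sweep
        y_c = np.linalg.solve(S, r_c - A_cf @ y_f)  # coarse correction
        out = np.empty_like(r)
        out[coarse] = y_c
        out[fine] = y_f - np.linalg.solve(A_ff, A_fc @ y_c)  # backward (U) sweep
        return out
    return apply

# With exact inverses the block LU is an exact solve at one level.
rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n)) + n * np.eye(n)     # well-conditioned test matrix
fine, coarse = np.arange(0, 3), np.arange(3, 6)
M = schur_preconditioner(A, fine, coarse)
x = rng.standard_normal(n)
print(np.allclose(M(A @ x), x))
```

In the actual method, replacing the exact solves with cheap approximations (and recursing on the coarse set) turns this exact factorization into a practical multilevel preconditioner for a Krylov method.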

4.
This paper concerns the convex optimal control problem governed by multiscale elliptic equations with arbitrarily rough $L^\infty$ coefficients, which features not only complex coupling between nonseparable scales and nonlinearity, but also important applications in composite materials and geophysics. We use one of the recently developed numerical homogenization techniques, the so-called Rough Polyharmonic Splines (RPS) and its generalization (GRPS), for the efficient resolution of the elliptic operator on the coarse scale. These methods have an optimal convergence rate that relies neither on the regularity of the coefficients nor on assumptions of scale separation or periodicity. As the iterative solution of the nonlinearly coupled OCP-OPT formulation of the optimal control problem requires solving the corresponding (state and co-state) multiscale elliptic equations many times with different right-hand sides, the numerical homogenization approach requires only a one-time pre-computation on the fine scale, after which each iteration can be carried out at a computational cost proportional to the number of coarse degrees of freedom. Numerical experiments are presented to validate the theoretical analysis.
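The offline/online split described here can be sketched with a plain Galerkin projection. The piecewise-constant aggregation basis `Phi` below is only a stand-in for the actual RPS/GRPS basis, and the 1-D rough-coefficient operator is a toy example, but the cost structure is the same: one fine-scale pre-computation, then many cheap coarse solves with changing right-hand sides:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 200, 10                        # fine / coarse degrees of freedom

# 1-D fine-scale operator -(a u')' with an arbitrarily rough coefficient
a = np.exp(1.5 * rng.standard_normal(n + 1))
A = (np.diag(a[:-1] + a[1:])
     - np.diag(a[1:-1], 1) - np.diag(a[1:-1], -1)) * (n + 1) ** 2

# coarse basis: piecewise-constant aggregation over m blocks
Phi = np.kron(np.eye(m), np.ones((n // m, 1)))

A_c = Phi.T @ A @ Phi                 # one-time fine-scale pre-computation
L_c = np.linalg.cholesky(A_c)         # factor once, reuse every iteration

for k in range(3):                    # e.g. successive OCP-OPT iterations
    f = rng.standard_normal(n)        # new right-hand side each time
    y = np.linalg.solve(L_c, Phi.T @ f)
    u_c = np.linalg.solve(L_c.T, y)   # coarse solve, cheap per RHS
    u = Phi @ u_c                     # lift back to the fine grid
print(A_c.shape, u.shape)
```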

5.
A concurrent multiscale method couples two models of different spatial and temporal scales to describe the mixed micro/meso- and macroscopic behaviors observed in strain localization, failure, and phase transformation processes. Most existing coupling schemes use displacement compatibility conditions to glue the different scale models together, which leads to displacement continuity but stress discontinuity in the resulting multiscale model. To overcome the stress discontinuity, this paper presents a multiscale method based on the generalized bridging domain method for coupling discrete element (DE) and finite element (FE) models. The coupling scheme adopts mixed displacement and stress compatibility conditions: displacements interpolated from FE nodes are prescribed on the artificial boundary of the DE model, while stresses at numerical integration points, extracted from DE contact forces, are applied on the material transition zone of the FE model (the coupling domain and the artificial boundary of the FE model). In addition, the paper proposes an explicit multiple-time-step integration algorithm and adopts Cundall non-viscous damping for quasi-static problems. The DE and FE parameters were calibrated by DE simulations of a biaxial compression test and a deposition process. Numerical examples of a 2D cone penetration test (CPT) show that the proposed multiscale method captures both mesoscopic and macroscopic behaviors during the penetration process, such as sand particle rearrangement, stress concentration near the cone tip, shear dilation, penetration resistance oscillation, and particle rotation. The proposed multiscale method is versatile for maintaining stress continuity when coupling models of different scales.

6.
The electroencephalogram (EEG) measures potential differences, generated by electrical activity in brain tissue, between scalp electrodes. The EEG potentials can be calculated from the quasi-static Poisson equation in a given head model. It is well known that the electrical dipole (source) that best fits the measured EEG potentials is obtained by solving an inverse problem: the dipole parameters are found by locating the global minimum of the relative residual energy (RRE). For the first time, the space mapping (SM) technique is used for minimizing the RRE. The SM technique aims at aligning two different simulation models: a fine model, accurate but CPU-time expensive, and a coarse model, computationally fast but less accurate than the fine one. Here the coarse model is a semi-analytical model, the so-called three-shell concentric sphere model, while the fine model numerically solves the Poisson equation in a realistic head model. If the aggressive space mapping (ASM) algorithm is used, the errors in the dipole location are too large. The hybrid aggressive space mapping (HASM), on the other hand, has better convergence properties, yielding a reduction in dipole location errors. The computational effort of HASM is greater than that of ASM but smaller than that of direct optimization techniques.
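The core ASM loop (parameter extraction against the coarse model, then a quasi-Newton step toward the coarse optimum) can be shown on a deliberately tiny 1-D stand-in. The two scalar "models" below are invented for illustration and have nothing to do with the EEG head models of the paper; the coarse model is monotone so that extraction is unique:

```python
# Toy 1-D stand-in models: both map a parameter x > 0 to a scalar response.
fine   = lambda x: x * x + 1.0      # accurate but (pretend) expensive model
coarse = lambda x: x * x            # cheap surrogate, misaligned with the fine one
target = 5.0                        # measured response to be matched

def invert_coarse(y, lo=0.0, hi=10.0, tol=1e-10):
    """Parameter extraction: coarse input reproducing response y
    (bisection works because the coarse model is monotone here)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if coarse(mid) < y else (lo, mid)
    return 0.5 * (lo + hi)

z_star = invert_coarse(target)      # coarse-model optimum

# Aggressive space mapping: secant/Broyden iteration on p(x) = z_star,
# where p(x) = invert_coarse(fine(x)) is the space-map function.
x, B = z_star, 1.0                  # start from the coarse optimum
for _ in range(30):
    r = invert_coarse(fine(x)) - z_star
    if abs(r) < 1e-8:
        break
    step = -r / B
    r_new = invert_coarse(fine(x + step)) - z_star
    B += (r_new - r - B * step) * step / step**2   # 1-D Broyden update
    x += step

print(abs(fine(x) - target) < 1e-6)   # the fine model now meets the target
```

Each iteration costs one fine-model evaluation plus cheap coarse-model work, which is exactly why SM pays off when the fine model is a full 3-D Poisson solve.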

7.
8.
Addressing the control of particle motion in turbulent flows, this paper presents an efficient space-mapping approach based on a hierarchy of models. The approach reduces the highly complex optimization of the k-ε turbulence model for high-Reynolds-number flows (fine model) to the cheaper optimization of the Navier–Stokes equations for smaller-Reynolds-number (laminar) flows in direct numerical simulations on coarser grids (coarse model), with the help of a space-map function that maps the respective coarse model control onto the desired fine model control. The numerical results are very convincing in terms of accuracy and computational effort. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

9.
A nuclear reactor core, a few meters in height and diameter, is composed of hundreds of fuel assemblies, which are in turn composed of tens of fuel rods with a diameter of about 10 mm. The relevant length scales for a Computational Fluid Dynamics (CFD) simulation therefore range from the sub-millimetre range, relevant for the sub-channels, up to several meters. Describing such a multi-scale situation with CFD is extremely challenging, and the traditional approach is to use integral methods: sub-channel and sub-assembly analysis codes requiring closure by empirical and experimental correlations. A CFD simulation of a complete nuclear reactor setup resolving all relevant scales would require exceedingly large computational resources. However, in many cases there exist repetitive geometrical assemblies and flow patterns. Based on this observation, the general approach of creating a parametrized model for a single segment and composing many of these reduced models to obtain the entire reactor simulation becomes feasible. With the Coarse-Grid-CFD (CGCFD) ( [1], [2]), we propose to replace the experimental or empirical input with proper CFD data. Application of the methodology starts with a detailed, well-resolved, and verified CFD simulation of a single representative segment. From this simulation we extract, in tabular form, so-called volumetric forces, which upon parametrization are assigned to all coarse cells. Repeating the fine simulation for multiple flow conditions, parametrized data can be obtained or interpolated for all occurring conditions to the desired degree of accuracy. Note that the parametrized data are used to close an otherwise strongly under-resolved, coarsely meshed model of a complete reactor setup. Implementing volumetric forces is the method of choice to account for these effects as long as the dominant transport is still distinguishable on the coarse mesh.
In cases where smaller-scale effects become relevant, the Anisotropic Porosity Formulation (APF) allows capturing transport phenomena occurring on the same or a slightly smaller scale than the coarse mesh resolution. Within this work we present results for several fuel assemblies that were investigated with our methodology. In particular, we show Coarse-Grid-CFD simulations including a 127-pin LBE-cooled wire-wrapped fuel assembly. (© 2015 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)
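The tabulate-then-interpolate closure idea can be sketched in a few lines. The numbers below are invented (two coarse cells, a handful of flow conditions indexed by Reynolds number); the point is only the mechanism of reusing fine-simulation data to close the coarse model at a new condition:

```python
import numpy as np

# Volumetric forces extracted from a few well-resolved fine simulations,
# tabulated against the flow condition (all values are made up).
Re_table    = np.array([1e3, 5e3, 1e4, 5e4])      # simulated conditions
force_table = np.array([[0.8, 1.9],               # per-coarse-cell force
                        [0.5, 1.2],               # (one row per condition,
                        [0.4, 1.0],               #  two cells shown)
                        [0.3, 0.7]])

def volumetric_force(Re):
    """Interpolate the tabulated closure term for a new flow condition."""
    return np.array([np.interp(Re, Re_table, col) for col in force_table.T])

print(volumetric_force(2e3))   # closure data for an unseen Reynolds number
```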

10.
To simulate the interaction of seismic waves with microheterogeneities (like cavernous/fractured reservoirs), a finite difference technique based on grids locally refined in time and space is used. These grids are used because the scales of heterogeneities in the reference medium and in the reservoir are different. Parallel computations based on domain decomposition of the target area into elementary subdomains in both the reference medium (a coarse grid) and the reservoir (a fine grid) are performed. Each subdomain is assigned to a specific processor unit, and the units form two groups: one for the reference medium, and the other for the reservoir. The data exchange between processor units within a group is performed by non-blocking iSend/iReceive MPI commands. The data exchange between the two groups is performed simultaneously with the coupling of the coarse and fine grids, and is controlled by a specially chosen processor unit. The results of a numerical simulation for a realistic model of fracture corridors are presented and discussed.

11.
Recent results on localization, both exponential and dynamical, for various models of one-dimensional, continuum, random Schrödinger operators are reviewed. This includes Anderson models with indefinite single-site potentials, the Bernoulli–Anderson model, the Poisson model, and the random displacement model. Among the tools used to analyse these models are generalized spectral averaging techniques and results from inverse spectral and scattering theory. A discussion of open problems is included.

12.
Numerous articles have appeared in the literature expressing different degrees of concern with the methodology of OR in general and with the validation of OR models in particular. Suggestions have been formulated to remove some of the shortcomings of the methodology as currently practised and to introduce modifications in the approach because of the changing nature of the problems tackled. Advances in modeling capabilities and solution techniques have also had considerable impact on the way validation is perceived. Large-scale computer-based mathematical models, and especially simulation models, have brought new dimensions to the notion of validation. Terms like ‘confidence’, ‘credibility and reliability’, ‘model assessment and evaluation’, and ‘usefulness and usability of the model’ have become rather common. This paper is an attempt, through an interpretation of the literature, to put model validation and related issues in a framework that may be of use both to model-builders and to decision-makers.

13.
Dominik Jürgens 《PAMM》2008,8(1):10973-10974
The complexity of simulation software increases because the simulated phenomena become more complex and massively parallel machines have to be used. Other reasons are the consideration of new technologies like the computational Grid and the integration of simulation programs into more complex applications. The development of future scientific software therefore implicitly requires a step toward separation of concerns beyond the possibilities of classical programming techniques. An important integration problem in the context of scientific computing is the coupling of independently simulated phenomena, where different programs each simulate only a part of a more complex coupled system. Numerical methods for the coupling of simulators, such as Lagrangian-multiplier-based or staggered methods, are in common use. Many problems which appear in the practice of simulator coupling are not mathematical, but of a more technical nature. As a result, code for the coupling of simulations is currently not implemented in a reusable way. New approaches to separation of concerns, like aspect-oriented and generative techniques, may help to overcome these issues. As our vision we discuss the idea of a domain-specific language (DSL) for coupled simulations, and we present a review of methods, paradigms, theories and tools which aim at the development of coupled simulations with a generative approach. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

14.
Consumer markets have been studied in great depth, and many techniques have been used to represent them. These have included regression-based models, logit models, and theoretical market-level models, such as the NBD-Dirichlet approach. Although many important contributions and insights have resulted from studies that relied on these models, there is still a need for a model that can more holistically represent the interdependencies of the decisions made by consumers, retailers, and manufacturers. This is particularly critical when a model must be used repeatedly over time to support decisions in an industrial setting. Although some existing methods can, in principle, represent such complex interdependencies, their capabilities might be outstripped if they had to be used for industrial applications, because of the detail this type of modeling requires. However, a complementary method, agent-based modeling, shows promise for addressing these issues. Agent-based models use business-driven rules for individuals (e.g., individual consumer rules for buying items, individual retailer rules for stocking items, or individual firm rules for advertising items) to determine holistic, system-level outcomes (e.g., to determine whether brand X's market share is increasing). We applied agent-based modeling to develop a multi-scale consumer market model, and then conducted calibration, verification, and validation tests of this model. The model was successfully applied by Procter & Gamble to several challenging business problems, where it directly influenced managerial decision making and produced substantial cost savings. © 2010 Wiley Periodicals, Inc. Complexity, 2010
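The bottom-up principle (individual purchase rules producing a system-level market share) can be illustrated with a deliberately minimal toy. The rules, brands, and numbers below are invented and bear no relation to the P&G model; they only show how a macro quantity emerges from micro rules:

```python
import random

random.seed(3)
prices = {"X": 2.0, "Y": 2.4}       # two hypothetical brands
loyalty = 0.3                       # chance a consumer buys out of habit

def choose(preferred):
    """Individual consumer rule: habitual purchase with prob. `loyalty`,
    otherwise buy the cheapest in-stock brand."""
    if random.random() < loyalty:
        return preferred
    return min(prices, key=prices.get)

# a population with uniformly random brand preferences
consumers = [random.choice(list(prices)) for _ in range(1000)]
purchases = [choose(p) for p in consumers]

# system-level outcome: brand X's market share emerges from the rules
share_X = purchases.count("X") / len(purchases)
print(round(share_X, 2))            # expected around 0.85 for these rules
```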

15.
Long-term planning for electric power systems, or capacity expansion, has traditionally been modeled using simplified models or heuristics to approximate the short-term dynamics. However, current trends, such as the increasing penetration of intermittent renewable generation and increased demand response, require a coupling of both the long- and short-term dynamics. We present an efficient method for coupling multiple temporal scales using the framework of singular perturbation theory for the control of Markov processes in continuous time. We show that the uncertainties present in many energy planning problems, in particular load demand uncertainty and uncertainties in generation availability, can be captured with a multiscale model. We then use a dimensionality reduction technique, which is valid if the scale separation present in the model is large enough, to derive a computationally tractable model, and we show that both wind data and electricity demand data exhibit sufficient scale separation. A numerical example using real data and a finite difference approximation of the Hamilton–Jacobi–Bellman equation illustrates the proposed method. We compare the results of our approximate model with those of the exact model and show that the proposed approximation outperforms a commonly used heuristic from capacity expansion models.
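The dimensionality reduction step rests on a standard averaging idea: when the fast chain (e.g. the wind state) mixes much faster than the slow capacity dynamics, it can be replaced by its stationary average. The toy generator and power levels below are invented, not the paper's data; the sketch only shows the averaging computation itself:

```python
import numpy as np

# Generator of a fast 3-state continuous-time wind chain (rates in 1/h,
# illustrative numbers) and the power available in each state.
Q_fast = np.array([[-6.0,  4.0,  2.0],
                   [ 3.0, -5.0,  2.0],
                   [ 1.0,  5.0, -6.0]])
wind_power = np.array([0.0, 40.0, 90.0])  # MW per wind state

def stationary(Q):
    """Stationary distribution: solve pi @ Q = 0 with pi summing to one."""
    A = np.vstack([Q.T, np.ones(len(Q))])
    b = np.zeros(len(Q) + 1)
    b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

pi = stationary(Q_fast)
effective_wind = pi @ wind_power   # averaged term entering the slow problem
print(pi.round(3), float(effective_wind))
```

In the reduced model this averaged quantity replaces the fast state in the slow (capacity-expansion) Hamilton–Jacobi–Bellman equation, which is what makes the problem computationally tractable.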

16.
In this paper, we review some mathematical models in medical image processing. Owing to their strengths in modeling and computation, variational methods have proven to be powerful techniques, and they have been extremely popular and dramatically improved over the past two decades. On one hand, models have been proposed for nearly all kinds of applications; on the other hand, many models can be globally optimized and numerous computational tools have been introduced. Under the variational framework, we focus on two basic problems in medical imaging: image restoration and segmentation, which are core components of many specific tasks. For image restoration, we discuss models for both additive and multiplicative noise. For image segmentation, we review models for both whole-image segmentation and specific target delineation, the latter being a key step in computer-aided surgery. Additionally, we present some models for liver delineation and give their applications to living donor liver transplantation.

17.
Parallel‐in‐time algorithms have been successfully employed for reducing time‐to‐solution of a variety of partial differential equations, especially for diffusive (parabolic‐type) equations. A major failing of parallel‐in‐time approaches to date, however, is that most methods show instabilities or poor convergence for hyperbolic problems. This paper focuses on the analysis of the convergence behavior of multigrid methods for the parallel‐in‐time solution of hyperbolic problems. Three analysis tools are considered that differ, in particular, in the treatment of the time dimension: (a) space–time local Fourier analysis, using a Fourier ansatz in space and time; (b) semi‐algebraic mode analysis, coupling standard local Fourier analysis approaches in space with algebraic computation in time; and (c) a two‐level reduction analysis, considering error propagation only on the coarse time grid. In this paper, we show how insights from reduction analysis can be used to improve feasibility of the semi‐algebraic mode analysis, resulting in a tool that offers the best features of both analysis techniques. Following validating numerical results, we investigate what insights the combined analysis framework can offer for two model hyperbolic problems, the linear advection equation in one space dimension and linear elasticity in two space dimensions.

18.
A non-linear FEM model of high-speed train brake discs with thermal-mechanical coupling has been established in ANSYS. Simulation and analysis of the 3-D transient temperature field and stress field of the brake discs have been carried out for the braking process. According to typical imperfections of brake discs, several imperfection models have been developed, and the thermal stresses of the different models are obtained through simulation and analysis. By comparing the results of the different models, the influence of the imperfection parameters, including position, depth and size, on the thermal resistance of the brake discs is determined. (© 2009 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)

19.
We investigate mortar multiscale numerical methods for coupled Stokes and Darcy flows with the Beavers–Joseph–Saffman interface condition. The domain is decomposed into a series of subdomains (coarse grid) of either Stokes or Darcy type. The subdomains are discretized by appropriate Stokes or Darcy finite elements. The solution is resolved locally (in each coarse element) on a fine scale, allowing for non-matching grids across subdomain interfaces. Coarse-scale mortar finite elements are introduced on the interfaces to approximate the normal stress and to weakly impose continuity of the normal velocity. Stability and a priori error estimates in terms of the fine subdomain scale $h$ and the coarse mortar scale $H$ are established for fairly general grid configurations, assuming that the mortar space satisfies a certain inf-sup condition. Several examples of such spaces in two and three dimensions are given. Numerical experiments are presented in confirmation of the theory.

20.
In recent years we have witnessed remarkable progress in providing efficient algorithmic solutions to the problem of computing best journeys (or routes) in schedule-based public transportation systems. We now have models to represent timetables that allow us to answer queries for optimal journeys in a few milliseconds, even at very large scale. Such models can be classified into two types: those representing the timetable as an array, and those representing it as a graph. Array-based models have been shown to be very effective in terms of query time, while graph-based ones usually answer queries by computing shortest paths, and hence are suitable to be combined with the speed-up techniques developed for road networks.
In this paper, we study the behavior of graph-based models in the prominent case of dynamic scenarios, i.e., when delays might occur to the original timetable. In particular, we make the following contributions. First, we consider the graph-based reduced time-expanded model and give a simplified and optimized routine for handling delays, and a re-engineered and fine-tuned query algorithm. Second, we propose a new graph-based model, namely the dynamic timetable model, natively tailored to efficiently incorporate dynamic updates, along with a query algorithm and a routine for handling delays. Third, we show how to adapt the ALT algorithm to such graph-based models. We have chosen this speed-up technique because it supports dynamic changes, and a careful implementation of it can significantly boost its performance. Finally, we provide an experimental study to assess the effectiveness of all proposed models and algorithms, and to compare them with the array-based state-of-the-art solution for the dynamic case.
We evaluate both new and existing approaches by implementing and testing them on real-world timetables subject to synthetic delays. Our experimental results show that: (i) the dynamic timetable model is the best model for handling delays; (ii) graph-based models are competitive with array-based models with respect to query time in the dynamic case; (iii) the dynamic timetable model compares favorably with both the original and the reduced time-expanded model regarding space; (iv) combining the graph-based models with speed-up techniques designed for road networks, such as ALT, is a very promising approach.
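The ALT idea itself, landmark distances turned into an admissible A* lower bound via the triangle inequality, can be sketched on a toy static graph. The graph and landmark choice below are invented for illustration; the paper's timetable graphs, delay handling, and tuned implementation are far richer:

```python
import heapq

def dijkstra(graph, src):
    """Plain Dijkstra; also used offline to precompute landmark distances."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def alt_query(graph, lm_dist, src, dst):
    """A* with the landmark lower bound h(v) = max_L (d(L, dst) - d(L, v)),
    admissible since d(L, dst) <= d(L, v) + d(v, dst)."""
    h = lambda v: max(d[dst] - d[v] for d in lm_dist)
    g = {src: 0.0}
    pq = [(h(src), src)]
    while pq:
        f, u = heapq.heappop(pq)
        if u == dst:
            return g[u]
        if f > g[u] + h(u):                  # stale queue entry
            continue
        for v, w in graph.get(u, []):
            nd = g[u] + w
            if nd < g.get(v, float("inf")):
                g[v] = nd
                heapq.heappush(pq, (nd + h(v), v))
    return float("inf")

# Toy strongly connected directed graph and two landmarks.
graph = {"A": [("B", 2), ("C", 5)], "B": [("C", 1), ("D", 4)],
         "C": [("D", 1)], "D": [("A", 7)]}
lm_dist = [dijkstra(graph, L) for L in ("A", "D")]  # offline phase
print(alt_query(graph, lm_dist, "A", "D"))          # shortest A -> D
```

The landmark tables are computed once offline; a query then explores fewer nodes than plain Dijkstra because the bound steers the search toward the target, which is the property the paper exploits and keeps valid under delays.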
