Similar literature
20 similar records found (search time: 46 ms)
1.
In this contribution we reconstruct the three-dimensional microstructure of a dual-phase steel based on tomographic experimental data provided by the 3D electron backscatter diffraction (3D EBSD) method. Cross sections of the resulting microstructure are compared to 2D reconstructions obtained directly from the 3D EBSD data. We also perform FE simulations based on these geometries and observe comparable results in 2D and 3D. (© 2011 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)

2.
Optimizing process chains in sheet metal forming requires an accurate description of each partial process in the chain, e.g. rolling, press hardening and deep drawing. The prediction of the thickness distribution and the residual stresses in the blank has to be highly reliable, since the behavior of the semi-finished product in the subsequent subprocesses strongly depends on the process history. Therefore, high-quality simulations that incorporate real microstructural data [1,2,3] have to be carried out. In this contribution, the ferritic steel DC04 is analyzed. A finite-strain crystal plasticity model is used; to identify the material parameters of DC04 for this model, micropillar compression tests were carried out experimentally and numerically. For the validation of the model, a two-dimensional EBSD data set was discretized by finite elements and subjected to homogeneous displacement boundary conditions describing a large-strain uniaxial tensile test. The results were compared to experimental measurements of the specimen after the tensile test. Furthermore, a deep drawing process is simulated based on a two-scale Taylor-type model at the integration points of the finite elements. At each integration point, the initial texture data given by the aforementioned EBSD measurements is assigned to the model. With this method, we predict the earing profiles of differently textured sheet metals. (© 2010 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)

3.
The objective of this study is to model the primary breakup of a plane liquid sheet emerging from an air-blast nozzle. In the present work, the interface compression scheme proposed by OpenCFD Ltd. [1] is used to capture the interface between the liquid and the gas. A one-equation subgrid-scale (SGS) turbulent energy transport model attributed to Yoshizawa [2] is used to model the effects of turbulence. The setup selected for this study is based on the experiments carried out by Mitra [3]. The 2D simulations performed in this study predict the breakup length of the plane liquid sheet in good agreement with the experimental data. Future work will involve performing 3D simulations of the plane liquid sheet generated by the air-blast nozzle and comparing the resulting droplet characteristics with the experimental data. (© 2012 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)

4.
In this paper, we study a periodic predator–prey system with impulsive unilateral diffusion of the prey between two patches. First, based on the results in [41], sufficient conditions for the existence, uniqueness and global attractiveness of periodic solutions of the predator-free and prey-free systems are presented. Second, by using the comparison theorem for impulsive differential equations and other analytical methods, necessary and sufficient conditions for the permanence and extinction of the prey species x are established for the case where the predator has other food sources. Finally, the theoretical results for both the non-autonomous system and the corresponding autonomous system are confirmed by numerical simulations, which reveal some interesting phenomena.
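The impulsive-diffusion mechanism can be sketched numerically. The following is a minimal illustration, not the paper's model: logistic prey growth in two patches, with a fraction of the patch-1 population moved unilaterally to patch 2 at each period T. All function and parameter names and values here are hypothetical.

```python
def simulate_two_patch_prey(r1=1.0, r2=0.8, K1=10.0, K2=8.0,
                            d=0.3, T=1.0, dt=0.001, t_end=50.0):
    """Euler integration of logistic prey growth in two patches with
    impulsive unilateral diffusion: every period T, a fraction d of the
    patch-1 population jumps to patch 2. Illustrative sketch only."""
    x1, x2 = 1.0, 1.0
    t, next_impulse = 0.0, T
    while t < t_end:
        # continuous logistic growth between impulses
        x1 += dt * r1 * x1 * (1.0 - x1 / K1)
        x2 += dt * r2 * x2 * (1.0 - x2 / K2)
        t += dt
        if t >= next_impulse:
            # impulsive unilateral diffusion: patch 1 -> patch 2
            moved = d * x1
            x1 -= moved
            x2 += moved
            next_impulse += T
    return x1, x2
```

Running the sketch shows the hallmark of such systems: both populations settle into a periodic regime driven by the impulses rather than a fixed equilibrium.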

5.
A crystal plasticity model and a homogenization method are used to analyze the local and global mechanical behavior of a ferritic stainless steel. In the first step, the material constants are determined from tensile tests; in the second step, they are used to simulate the local deformation behavior on the grain scale. For this purpose, 2D EBSD data are discretized by finite elements. The computed local grain reorientations of three different BCC slip systems are compared to experimental data at 20% elongation. (© 2011 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)

6.
Energy-conserving algorithms are necessary for solving nonlinear elastodynamic problems in order to recover long-term accuracy and stability of the time integration. Furthermore, some physical phenomena (such as friction) generate dissipation. In this work, we therefore present and analyse two energy-consistent algorithms for hyperelastodynamic frictional contact problems, characterised by conserving behaviour for frictionless impacts but also by an admissible frictional dissipation. The first approach enforces the Kuhn–Tucker and persistency conditions, respectively, during each time step by combining an adapted continuation of Newton's method with a Lagrangian formulation. The second method, which is based on the work in [P. Hauret, P. Le Tallec, Energy-controlling time integration methods for nonlinear elastodynamics and low-velocity impact, Comput. Methods Appl. Mech. Eng. 195 (2006) 4890–4916], represents a specific penalisation of the unilateral contact conditions. Numerical simulations are presented to underscore the conservative or dissipative behaviour of the proposed methods.
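As a toy counterpart to such energy-consistent schemes (without contact or friction, which are the hard part of the paper), the implicit midpoint rule applied to a harmonic oscillator conserves the quadratic energy exactly. The closed-form update below is a sketch under that drastic simplification; it is not the authors' algorithm.

```python
def implicit_midpoint_oscillator(q0=1.0, p0=0.0, k=1.0, dt=0.1, steps=1000):
    """Implicit midpoint rule for the harmonic oscillator
    H = p^2/2 + k q^2/2. Because the force is linear, the implicit
    midpoint equations can be solved in closed form, and the scheme
    preserves the quadratic energy H exactly (up to round-off)."""
    q, p = q0, p0
    c = k * dt * dt / 4.0  # recurring factor in the closed-form solve
    for _ in range(steps):
        q_new = ((1.0 - c) * q + dt * p) / (1.0 + c)
        p_new = p - dt * k * (q + q_new) / 2.0
        q, p = q_new, p_new
    return q, p
```

Even after a thousand steps the energy matches the initial value to round-off, which is exactly the long-term stability property that motivates energy-conserving integrators for elastodynamics.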

7.
Recently, Gijbels and Rousson [6] suggested a new approach, called the nonparametric least-squares test, to check polynomial regression relationships. Although this test procedure is simple and powerful in most cases, several parameters must be chosen in addition to the kernel and bandwidth. As shown in their paper, the choice of these parameters is crucial but sometimes intractable. We propose in this paper a new statistic based on the sample variance of the locally estimated pth derivative of the regression function at each design point. The resulting test is still simple but involves no extra parameters beyond the kernel and bandwidth that are necessary for nonparametric smoothing techniques. Comparisons by simulation demonstrate that our test performs as well as, or even better than, Gijbels and Rousson's approach. Furthermore, a real-life data set is analyzed by our method with satisfactory results.
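The idea behind the statistic can be sketched for the simplest case p = 1: fit a local line at each design point and look at the sample variance of the local slopes. Under a linear model the slopes barely vary, so the variance is near zero; under curvature it is inflated. This is a simplified illustration (Gaussian kernel, local linear fit), not the authors' exact statistic.

```python
import math

def local_slope_variance(x, y, h):
    """Sample variance of locally estimated slopes (p = 1 case).
    At each design point, a weighted least-squares line is fitted using
    a Gaussian kernel of bandwidth h; the function returns the sample
    variance of the resulting local slope estimates."""
    slopes = []
    for x0 in x:
        w = [math.exp(-0.5 * ((xi - x0) / h) ** 2) for xi in x]
        sw = sum(w)
        mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
        my = sum(wi * yi for wi, yi in zip(w, y)) / sw
        sxx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
        sxy = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
        slopes.append(sxy / sxx)  # weighted least-squares slope at x0
    m = sum(slopes) / len(slopes)
    return sum((s - m) ** 2 for s in slopes) / (len(slopes) - 1)
```

For exactly linear data the local slope is the same at every design point, so the statistic vanishes; a quadratic trend makes the slopes drift across the design and the variance grows, which is what the test detects.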

8.
Many applications in computational science and engineering require the solution of sequences of slowly changing linear systems. We focus on problems arising in lattice QCD simulations. In order to generate an ensemble of configurations from which the values of physical observables can be obtained, we have to solve a linear system with a Dirac operator in each time step of the hybrid Monte Carlo simulation. This operator changes only slightly from time step to time step. While recycling subspace information from the previous system, as described in [1], reduces the number of necessary matrix-vector multiplications, the systems are still expensive to solve. To overcome this limitation, we include preconditioning in our implementation. (© 2013 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)
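The recycling of [1] reuses Krylov subspace information; as a much simpler stand-in for the "slowly changing systems" setting, the sketch below merely warm-starts conjugate gradients with the previous system's solution when the operator shifts slightly. The tridiagonal matrix, right-hand side, and parameter values are all illustrative, and a real Dirac operator is not symmetric positive definite, so CG would not apply to it directly.

```python
def cg(matvec, b, x0, tol=1e-8, maxit=1000):
    """Plain conjugate gradient iteration; returns (solution, iterations)."""
    x = list(x0)
    r = [bi - ai for bi, ai in zip(b, matvec(x))]
    p = list(r)
    rs = sum(ri * ri for ri in r)
    for it in range(maxit):
        if rs ** 0.5 < tol:
            return x, it
        Ap = matvec(p)
        alpha = rs / sum(pi * ai for pi, ai in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x, maxit

def make_matvec(diag):
    """SPD tridiagonal stand-in for a slowly changing operator."""
    def matvec(v):
        n = len(v)
        return [diag * v[i]
                - (v[i - 1] if i > 0 else 0.0)
                - (v[i + 1] if i < n - 1 else 0.0)
                for i in range(n)]
    return matvec

n = 30
b = [1.0] * n
x_prev, _ = cg(make_matvec(4.000), b, [0.0] * n)      # first system
# the operator shifts slightly; compare cold start vs. warm start
_, it_cold = cg(make_matvec(4.001), b, [0.0] * n)
x_warm, it_warm = cg(make_matvec(4.001), b, x_prev)
```

Because the previous solution is already close to the new one, the warm-started solve begins with a tiny residual and needs fewer iterations; subspace recycling and preconditioning push the same idea much further.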

9.
In this work, an elastoplastic material model coupled to nonlocal damage is discussed, based on an implicit gradient-enhanced approach. Combined nonlinear isotropic and kinematic hardening as well as continuum damage of Lemaitre type are considered. The model is a direct nonlocal extension of a corresponding local model presented earlier (see, e.g., [1], [2], [3]). Conclusions drawn from a numerical benchmark test performed in this study demonstrate that the nonlocal damage model provides mesh-independent solutions in finite element simulations. (© 2015 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)

10.
In this paper, a numerical method to compute principal component geodesics for Kendall's planar shape spaces (which are essentially complex projective spaces) is presented. Underlying it is the notion of principal component analysis based on geodesics for non-Euclidean manifolds, as proposed in an earlier paper by Huckemann and Ziezold [S. Huckemann, H. Ziezold, Principal component analysis for Riemannian manifolds with an application to triangular shape spaces, Adv. Appl. Prob. (SGSA) 38 (2) (2006) 299–319]. Currently, principal component analysis for shape spaces is done on the basis of a Euclidean approximation. In this paper, using well-studied datasets and numerical simulations, the resulting approximation errors are discussed. Overall, the error distribution is rather dispersed. The numerical findings support the notion that the Euclidean approximation is good for highly concentrated data. For low concentration, however, the error can be substantial, particularly for a small number of landmarks. For highly concentrated data, stronger anisotropy and a larger number of landmarks may also increase the error.
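The complex-projective geometry mentioned above is concrete enough to sketch: representing planar landmarks as complex numbers, centering removes translation, and taking the modulus of the Hermitian inner product removes rotation and scale, leaving the geodesic shape distance. This is a minimal illustration of the ambient geometry, not the principal-geodesic algorithm of the paper.

```python
import math

def kendall_distance(z, w):
    """Geodesic (Procrustes-type) distance between two planar landmark
    configurations given as lists of complex numbers. Centering quotients
    out translation; the modulus of the Hermitian inner product quotients
    out rotation and scaling, as in complex projective space."""
    def center(v):
        m = sum(v) / len(v)
        return [vi - m for vi in v]
    zc, wc = center(z), center(w)
    nz = math.sqrt(sum(abs(v) ** 2 for v in zc))
    nw = math.sqrt(sum(abs(v) ** 2 for v in wc))
    inner = abs(sum(a.conjugate() * b for a, b in zip(zc, wc)))
    # clamp guards against round-off pushing the cosine above 1
    return math.acos(min(1.0, inner / (nz * nw)))
```

A translated, rotated, and rescaled copy of a configuration has distance zero, while genuinely different shapes have positive distance; the Euclidean approximation discussed in the abstract replaces this arccos-based metric with a chordal one.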

11.
Asset price dynamics are studied using a system of ordinary differential equations derived from a new excess demand function introduced by Caginalp [4] for a market in which information on demand and supply for a stock extends beyond their values at a particular price. The derivation is based on the finiteness of assets (rather than assuming unbounded arbitrage), in addition to investment strategies based not only on price momentum (trend) but also on valuation considerations. For this new model and the older models derived from the classical excess demand function by Caginalp and Balenovich [2] and [3], the time evolutions of the asset price are compared through numerical simulations.

12.
Outcome-dependent sampling designs are commonly used in economics, market research and epidemiological studies. The case-control design is a classic example of outcome-dependent sampling, where exposure information is collected on subjects conditional on their disease status. In many situations, the outcome under consideration may have multiple categories instead of a simple dichotomy. For example, in a case-control study there may be disease sub-classification among the "cases" based on progression of the disease, or in terms of other histological and morphological characteristics of the disease. In this note, we investigate the issue of fitting prospective multivariate generalized linear models to such multiple-category outcome data, ignoring the retrospective nature of the sampling design. We first provide a set of necessary and sufficient conditions on the link functions that allow for equivalence of prospective and retrospective inference for the parameters of interest. We show that for categorical outcomes, prospective-retrospective equivalence does not hold beyond the generalized multinomial logit link. We then derive an approximate expression for the bias incurred when link functions outside this class are used. Most popular models for ordinal response fall outside the multiplicative intercept class, and one should be cautious when performing a naive prospective analysis of such data, as the bias could be substantial. We illustrate the extent of the bias through a real data example based on the ongoing Prostate, Lung, Colorectal and Ovarian (PLCO) cancer screening trial of the National Cancer Institute. Simulations based on the real study illustrate that the bias approximations work well in practice.

13.
Computational fluid dynamics (CFD) simulations of complete nuclear reactor core geometries require exceedingly large computational resources. However, in most cases there are repetitive geometry and flow patterns, allowing the general approach of creating a parameterized model for one segment and composing many of these reduced models to obtain a simulation of the entire reactor. Traditionally, this approach led to so-called subchannel analysis codes, which rely heavily on transport models based on experimental and empirical correlations. With our method, the Coarse-Grid CFD (CGCFD), we intend to replace the experimental or empirical input with CFD data. Our method is based on detailed, well-resolved CFD simulations of representative segments. From these simulations we extract and tabulate volumetric source terms. The parameterized data are used to close an otherwise strongly under-resolved, coarsely meshed model of a complete reactor setup. In the previous formulation, only forces created internally in the fluid are accounted for. The Anisotropic Porosity (AP) formulation, which is the subject of the present investigation, addresses other influences, such as obstruction and flow guidance through spacers, and in particular geometric details which are under-resolved or ignored by the coarse mesh. (© 2013 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)

14.
Semiparametric linear transformation models have received much attention due to their high flexibility in modeling survival data. A useful estimating equation procedure was recently proposed by Chen et al. (2002) [21] for linear transformation models to jointly estimate parametric and nonparametric terms. They showed that this procedure can yield a consistent and robust estimator. However, the problem of variable selection for linear transformation models has been less studied, partially because a convenient loss function is not readily available in this context. In this paper, we propose a simple yet powerful approach to achieve both sparse and consistent estimation for linear transformation models. The main idea is to derive a profiled score from the estimating equation of Chen et al. [21], construct a loss function based on the profiled score and its variance, and then minimize the loss subject to a shrinkage penalty. Under regularity conditions, we show that the resulting estimator is consistent for both model estimation and variable selection. Furthermore, the estimated parametric terms are asymptotically normal and can achieve higher efficiency than that yielded by the estimating equations. For computation, we suggest a one-step approximation algorithm which can take advantage of LARS and build the entire solution path efficiently. Performance of the new procedure is illustrated through numerous simulations and real examples, including a microarray dataset.
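The final step, "minimize a loss subject to a shrinkage penalty," can be illustrated generically. The sketch below uses coordinate-descent soft thresholding for an ordinary least-squares loss; the paper's loss is instead built from the profiled score and its variance, and the design matrix and penalty value here are purely illustrative.

```python
def soft_threshold(z, g):
    """Soft-thresholding operator, the proximal map of the L1 penalty."""
    if z > g:
        return z - g
    if z < -g:
        return z + g
    return 0.0

def lasso_cd(X, y, lam, n_iter=100):
    """Coordinate descent for least squares plus an L1 (lasso) penalty.
    Each coordinate update fits feature j to the partial residuals and
    soft-thresholds the result, which sets weak coefficients exactly
    to zero - the 'sparse estimation' effect of shrinkage penalties."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # partial residuals with feature j removed from the fit
            r = [y[i] - sum(X[i][k] * beta[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            denom = sum(X[i][j] ** 2 for i in range(n))
            beta[j] = soft_threshold(rho, lam) / denom
    return beta
```

On an orthogonal design with one active feature, the active coefficient is recovered (slightly shrunk toward zero by the penalty) while the inactive coefficient is thresholded to exactly zero, which is what makes penalized estimates usable for variable selection.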

15.
16.
In this paper we obtain determinantal conditions necessary for the existence of (r,λ)-designs. The work is based on a paper of Connor [2]. In [3], Deza establishes an inequality which must be satisfied by the column vectors of an equidistant code or, equivalently, the block sizes in an (r,λ)-design. We obtain a generalization of this inequality.

17.
Kai-Uwe Widany, Rolf Mahnken, PAMM 2014, 14(1): 273–274
In numerical simulations with the finite element method, a dependency on the mesh arises – and, for time-dependent problems, on the time discretization. Adaptive refinements in space (and time) based on goal-oriented error estimation [1] are becoming more and more popular in finite element analyses to balance computational effort and accuracy of the solution. The introduction of a goal quantity of interest defines a dual problem which has to be solved to estimate the error with respect to it. Often such procedures are based on a space-time Galerkin framework for instationary problems [2]. Discretization results in systems of equations in which the unknowns are nodal values. In contrast, in current finite element implementations for path-dependent problems, some quantities storing information about the path dependence are located at the integration points of the finite elements [3], e.g. plastic strains. In this contribution we propose an approach – similar to [4] for sensitivity analysis – for the approximation of the dual problem which largely maintains the structure of current finite element implementations for path-dependent problems. Here, the dual problem is introduced after discretization. A numerical example illustrates the approach. (© 2014 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)

18.
Multi-fracture reservoirs are developed and frequently used in the oil and gas industry. Recently, they have also been used for deep geothermal reservoirs, especially for Hot Dry Rock (HDR). Reservoir analysis is generally concerned with long-term physical behavior (10–100 years), e.g. fluid flow, heat transport, etc. Typical CFD simulations are of limited use in this context. Here we develop a fluid flow and heat transport model for a multi-fracture reservoir based on the so-called Mixed Dimensional Model (MDM), which describes the different characteristic flows and the heat transport in different dimensions. From a mathematical point of view, these models are discretized with the Cellular Automaton (CA) method combined with other necessary numerical techniques. Different cases of fluid flow and heat transport in multi-fracture reservoirs have been simulated and show physically reasonable results at low computational cost. (© 2013 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)

19.
To gain a basic understanding of foam flow, as found e.g. in the transport of aerated food, simulation tools can provide better insight. Shearing of the bubbles occurs in different flow geometries and, for a bubble assembly, is not captured analytically. Such flow fields are also hard to observe experimentally, so simulations are the method of choice. Our method to simulate foams uses a volume-of-fluid approach based on the free-surface algorithm by Körner et al. [1]. In contrast to classical multiphase methods, only the liquid phase is simulated, and special boundary conditions at the liquid-gas interface account for the gas phase. With this approach, high density ratios, e.g. in water-air systems, are easier to realize than in other methods. High density ratios are in fact necessary to physically justify the model, in which the dynamics of the lighter phase are partially neglected. This method is integrated in the Lattice Boltzmann software framework waLBerla [3] (widely applicable Lattice Boltzmann solver from Erlangen), which can be used on massively parallel computers and thus allows even large bubble assemblies to be simulated. As a first validation, single bubbles are sheared at different capillary numbers; the simulation results are compared to the literature [2] and show good agreement. The next step is shearing a bubble assembly arranged like a dense sphere packing. In order to investigate the geometrical configuration of the assembly and its impact on the behavior during shear deformation, the bubble assembly is rotated at different angles with respect to the shear direction. (© 2014 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)

20.
Approximate Bayesian computation (ABC) is typically used when the likelihood is either unavailable or intractable but data can be simulated under different parameter settings using a forward model. Despite the recent interest in ABC, high-dimensional data and costly simulations remain a bottleneck in some applications. There is also no consensus on how best to assess the performance of such methods without knowing the true posterior. We show how a nonparametric conditional density estimation (CDE) framework, which we refer to as ABC–CDE, helps address three nontrivial challenges in ABC: (i) how to efficiently estimate the posterior distribution with limited simulations and different types of data, (ii) how to tune and compare the performance of ABC and related methods in estimating the posterior itself, rather than just certain properties of the density, and (iii) how to efficiently choose among a large set of summary statistics based on a CDE surrogate loss. We provide theoretical and empirical evidence that justify ABC–CDE procedures that directly estimate and assess the posterior based on an initial ABC sample, and we describe settings where standard ABC and regression-based approaches are inadequate. Supplemental materials for this article are available online.
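For context on where CDE enters, plain rejection ABC on a toy Bernoulli model looks as follows; the prior, forward model, tolerance, and sample sizes are all illustrative. ABC–CDE would instead fit a conditional density estimator to such (theta, summary) draws rather than relying on hard accept/reject.

```python
import random

def rejection_abc(observed_mean, n_obs=100, n_sims=20000, eps=0.02, seed=0):
    """Minimal rejection-ABC sketch (not the ABC-CDE method itself):
    draw theta from a Uniform(0, 1) prior, simulate a Bernoulli(theta)
    sample of size n_obs under the forward model, and keep theta whenever
    the simulated summary (the sample mean) is within eps of the
    observed one. Returns the accepted draws, an approximate posterior
    sample."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_sims):
        theta = rng.random()                    # prior draw
        sim = sum(rng.random() < theta for _ in range(n_obs))
        if abs(sim / n_obs - observed_mean) <= eps:
            accepted.append(theta)
    return accepted
```

The accepted draws concentrate around the observed mean, but each accepted theta costs many rejected simulations; this inefficiency under limited simulation budgets is precisely challenge (i) that the CDE framework targets.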
