Similar Documents
20 similar documents found (search time: 468 ms)
1.
A priori and a posteriori studies for large eddy simulation (LES) of the compressible turbulent infinitely fast reacting shear layer are presented. The filtered heat release appearing in the energy equation is unclosed, and the accuracy of different models for the filtered scalar dissipation rate and the conditional filtered scalar dissipation rate of the mixture fraction in closing this term is analyzed. The effect of different closures of the subgrid transport of momentum, energy and scalars on the modeling of the filtered heat release via the resolved fields is also considered. Three explicit models of these subgrid fluxes are explored, each with an increasing level of reconstruction and all regularized by a Smagorinsky-type term. A major part of the error in the prediction of the conditional filtered scalar dissipation is observed to come from the unsatisfactory modeling of the filtered dissipation itself; the error can be substantial in the turbulent fluctuation (rms) of the dissipation fields. Encouragingly, all models give good predictions of the mean and rms density in a posteriori LES of this flow with realistic heat release corresponding to large density change. Although the a posteriori results show only a small sensitivity to subgrid modeling errors in the current problem, extinction–reignition phenomena involving finite-rate chemistry would demand more accurate modeling of the dissipation rates. The a posteriori results also show that the resolved fields obtained with approximate reconstruction using moments (ARM) agree better with the filtered direct numerical simulation as the level of reconstruction in the modeled subfilter fluxes is increased.
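As a concrete illustration of the Smagorinsky-type regularization term mentioned in this abstract, the following sketch evaluates a Smagorinsky eddy viscosity from a resolved 2-D velocity field. The function name, the constant `c_s = 0.17`, and the uniform grid are illustrative assumptions, not details taken from the cited study.

```python
import numpy as np

def smagorinsky_viscosity(u, v, dx, c_s=0.17):
    """Smagorinsky eddy viscosity nu_t = (c_s * dx)^2 * |S| on a 2-D grid.

    |S| = sqrt(2 S_ij S_ij), with S_ij the resolved strain-rate tensor
    built from np.gradient differences.  c_s and the filter width dx are
    model inputs chosen here for illustration.
    """
    dudy, dudx = np.gradient(u, dx)   # axis 0 is y, axis 1 is x
    dvdy, dvdx = np.gradient(v, dx)
    s11, s22 = dudx, dvdy
    s12 = 0.5 * (dudy + dvdx)
    s_mag = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))
    return (c_s * dx) ** 2 * s_mag
```

For a uniform flow the strain rate, and hence the eddy viscosity, vanishes; for a plane shear u = y it reduces to the constant (c_s Δ)².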

2.
We analyze a multiscale operator decomposition finite element method for a conjugate heat transfer problem consisting of a fluid and a solid coupled through a common boundary. We derive accurate a posteriori error estimates that account for all sources of error, and in particular the transfer of error between fluid and solid domains. We use these estimates to guide adaptive mesh refinement. In addition, we provide compelling numerical evidence that the order of convergence of the operator decomposition method is limited by the accuracy of the transferred gradient information, and adapt a so-called boundary flux recovery method developed for elliptic problems in order to regain the optimal order of accuracy in an efficient manner. In an appendix, we provide an argument that explains the numerical results provided sufficient smoothness is assumed.

3.
This article considers a posteriori error estimation and anisotropic mesh refinement for three-dimensional laminar aerodynamic flow simulations. The optimal order symmetric interior penalty discontinuous Galerkin discretization previously developed for the compressible Navier–Stokes equations in two dimensions is extended to three dimensions. Symmetry boundary conditions are given which allow one to discretize and compute symmetric flows on the half model, resulting in exactly the same flow solutions as if computed on the full model. Using duality arguments, an error estimate is derived for the discretization error with respect to the aerodynamic force coefficients. Furthermore, residual-based indicators as well as adjoint-based indicators for goal-oriented refinement are derived. These refinement indicators are combined with anisotropy indicators which are particularly suited to the discontinuous Galerkin (DG) discretization. Two different approaches based on either a heuristic criterion or an anisotropic extension of the adjoint-based error estimation are presented. The performance of the proposed discretization, error estimation and adaptive mesh refinement algorithms is demonstrated for 3D aerodynamic flows.

4.
In this paper, we investigate and present an adaptive Discontinuous Galerkin algorithm driven by an adjoint-based error estimation technique for the inviscid compressible Euler equations. This approach requires the numerical approximations for the flow (i.e. primal) problem and the adjoint (i.e. dual) problem which corresponds to a particular simulation objective output of interest. The convergence of these two problems is accelerated by an hp-multigrid solver which makes use of an element Gauss–Seidel smoother on each level of the multigrid sequence. The error estimation of the output functional results in a spatial error distribution, which is used to drive an adaptive refinement strategy that may include local mesh subdivision (h-refinement), local modification of discretization orders (p-enrichment) and the combination of both approaches known as hp-refinement. The selection between h- and p-refinement in the hp-adaptation approach is made based on a smoothness indicator applied to the most recently available flow solution values. Numerical results for the inviscid compressible flow over an idealized four-element airfoil geometry demonstrate that both pure h-refinement and pure p-enrichment algorithms achieve equivalent error reductions at each adaptation cycle compared to a uniform refinement approach, while requiring fewer degrees of freedom. The proposed hp-adaptive refinement strategy is capable of obtaining exponential error convergence in terms of degrees of freedom, and results in significant savings in computational cost. A high-speed flow test case is used to demonstrate the ability of the hp-refinement approach for capturing strong shocks or discontinuities while improving functional accuracy.
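The h- versus p-selection step described in this abstract can be sketched with a modal smoothness sensor. The paper's exact indicator is not reproduced here; this stand-in fits Legendre coefficients on an element and compares the energy in the highest modes to the total, a Persson–Peraire-style criterion. The function name and the threshold `tol` are assumptions.

```python
import numpy as np

def choose_refinement(u_nodes, x_nodes, p, tol=0.01):
    """Pick 'p' (order enrichment) or 'h' (subdivision) for one element.

    Fit Legendre coefficients a_0..a_p to the element solution on [-1, 1]
    and measure the fraction of modal energy in the two highest modes.
    A small fraction means the local solution is smooth, so raising the
    order pays off; a large fraction points to an unresolved feature
    (e.g. a discontinuity), so the element should be subdivided.
    """
    a = np.polynomial.legendre.legfit(x_nodes, u_nodes, p)
    frac = np.linalg.norm(a[-2:]) / (np.linalg.norm(a) + 1e-300)
    return 'p' if frac < tol else 'h'
```

A smooth profile such as exp(x) has rapidly decaying coefficients and triggers p-enrichment, while a sign-function profile keeps large high-order coefficients and triggers h-refinement.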

5.
The wavelet-based multiresolution analysis (MRA) technique is used to develop a modelling approach to large-eddy simulation (LES) and its associated subgrid closure problem. The LES equations are derived by projecting the Navier–Stokes (N–S) equations onto a hierarchy of wavelet spaces. A numerical framework is then developed for the solution of the large and the small-scale equations. This is done in one dimension, for the Burgers equation, and in three dimensions, for the N–S problem. The proposed methodology is assessed in a priori tests on an atmospheric turbulent time series and on data from direct numerical simulation. A posteriori (dynamic) tests are also carried out for decaying and force-driven Burgers turbulence.

6.
An integral equation, analogous to the Abel integral equation for the derivation of volume emission coefficient from side-on observations, is developed which takes account of finite optical aperture. This integral equation is partially inverted and the solution is used to examine the effect of finite optical aperture for two simple distributions of volume emission coefficient. It appears that the error arising from the effect of finite optical aperture is negligible in most practical cases. One important and interesting result that emerges from this treatment is that the volume emission coefficient derived on the axis (r = 0) is independent of optical aperture and is therefore correct whatever the aperture.
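The classical zero-aperture Abel inversion that the integral equation above generalizes can be sketched numerically by onion peeling: assume piecewise-constant emission in annular shells and invert the resulting triangular path-length system. This illustrates the baseline problem only, not the paper's finite-aperture correction; the discretization is an assumption.

```python
import numpy as np

def abel_invert(intensity, r):
    """Onion-peeling inversion of side-on intensity I(y) to emission eps(r).

    Assumes piecewise-constant emission in annular shells with radii r[j],
    observed along chords y_i = r[i] (zero aperture).  Chord i's path
    length through shell j is
        L_ij = 2*(sqrt(r_{j+1}^2 - y_i^2) - sqrt(r_j^2 - y_i^2)),
    giving an upper-triangular system I = L @ eps to solve.
    """
    n = len(r) - 1
    A = np.zeros((n, n))
    y = r[:-1]                      # chords through shell inner radii
    for i in range(n):
        for j in range(i, n):
            A[i, j] = 2.0 * (np.sqrt(r[j + 1]**2 - y[i]**2)
                             - np.sqrt(max(r[j]**2 - y[i]**2, 0.0)))
    return np.linalg.solve(A, intensity)
```

For a uniformly emitting cylinder of radius R the side-on profile is I(y) = 2·sqrt(R² − y²), and the inversion recovers the constant emission exactly.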

7.
In this article we present the extension of the a posteriori error estimation and goal-oriented mesh refinement approach from laminar to turbulent flows, which are governed by the Reynolds-averaged Navier–Stokes equations with the k–ω turbulence model (RANS-kω equations). In particular, we consider a discontinuous Galerkin discretization of the RANS-kω equations and use it within an adjoint-based error estimation and adaptive mesh refinement algorithm that targets the reduction of the discretization error in single as well as in multiple aerodynamic force coefficients. The accuracy of the error estimation and the performance of the goal-oriented mesh refinement algorithm are demonstrated for various test cases, including a two-dimensional turbulent flow around a three-element high-lift configuration and a three-dimensional turbulent flow around a wing–body configuration.

8.
Compact (ferro- and antiferromagnetic) sigma models and noncompact (hyperbolic) sigma models are compared in a lattice formulation in dimensions d ≥ 2. While the ferro- and antiferromagnetic models are essentially equivalent, the qualitative difference from the noncompact models is highlighted. The perturbative and the large-N expansions are studied in both types of models and are argued to be asymptotic expansions on a finite lattice. An exact correspondence between the expansion coefficients of the compact and the noncompact models is established, for both expansions, valid to all orders on a finite lattice. The perturbative correspondence involves flipping the sign of the coupling and remains valid in the termwise infinite-volume limit. The large-N correspondence concerns the functional dependence on the free propagator and holds directly only in finite volume.

9.
We present least-squares-based finite element formulations for the numerical solution of the radiative transfer equation in its first-order primitive variable form. The use of least-squares principles leads to a variational unconstrained minimization problem in a setting of residual minimization. In addition, the resulting linear algebraic problem will always have a symmetric positive definite coefficient matrix, allowing the use of robust and fast iterative methods for its solution. We consider space-angle coupled and decoupled formulations. In the coupled formulation, the space-angle dependency is represented by two-dimensional finite element expansions and the least-squares functional is minimized in the continuous space-angle domain. In the decoupled formulation, the angular domain is represented by discrete ordinates, the spatial dependence is represented by one-dimensional finite element expansions, and the least-squares functional is minimized continuously in the space domain and at discrete locations in the angle domain. Numerical examples are presented to demonstrate the merits of the formulations in slab geometry, for absorbing, emitting, anisotropically scattering media, allowing for spatially varying absorption and scattering coefficients. For smooth solutions in the space-angle domain, exponentially fast decay of error measures is demonstrated as the p-level of the finite element expansions is increased. The formulations represent attractive alternatives to weak form Galerkin finite element formulations, typically applied to the more complicated second-order even- and odd-parity forms of the radiative transfer equation.

10.
11.
Characterization of a computational mesh's quality prior to performing a numerical simulation is an important step in ensuring that the result is valid. A highly distorted mesh can result in significant errors. It is therefore desirable to predict solution accuracy on a given mesh. The HiFi/SEL high-order finite element code is used to study the effects of various mesh distortions on the solution quality of known analytic problems for spatial discretizations with different orders of finite elements. The measured global error norms are compared to several mesh quality metrics by independently varying both the degree of the distortions and the order of the finite elements. It is found that the spatial spectral convergence rates are preserved for all considered distortion types, while the total error increases with the degree of distortion. For each distortion type, correlations between the measured solution error and the different mesh metrics are quantified, identifying the most appropriate overall mesh metric. The results show promise for future a priori computational mesh quality determination and improvement.
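A minimal example of the kind of mesh quality metric compared in this study is the minimum scaled Jacobian of an element. The specific metric set used by the authors is not reproduced here; this sketch evaluates one common choice for a quadrilateral.

```python
import numpy as np

def scaled_jacobian(quad):
    """Minimum scaled Jacobian of a quadrilateral given by its 4 corners
    in counter-clockwise order: 1.0 for rectangles, approaching 0 as the
    element degenerates, and negative if it is inverted."""
    q = np.asarray(quad, dtype=float)
    worst = np.inf
    for k in range(4):
        e1 = q[(k + 1) % 4] - q[k]           # edge leaving corner k
        e2 = q[(k - 1) % 4] - q[k]           # edge arriving at corner k
        jac = e1[0] * e2[1] - e1[1] * e2[0]  # corner Jacobian (2-D cross)
        worst = min(worst, jac / (np.linalg.norm(e1) * np.linalg.norm(e2)))
    return worst
```

A unit square scores 1.0, while a sheared parallelogram scores strictly below 1, quantifying its distortion.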

12.
Sun Yuping. Chinese Journal of Computational Physics (《计算物理》), 1987, 4(4): 446–458
Starting from a detailed analysis of the physical characteristics of the convection–diffusion process, this paper treats the convection term with the method of characteristics, as the physics of convection–diffusion requires, and treats the diffusion term with a finite analytic method capable of fully describing the diffusion effect. This yields a characteristic finite analytic method for the numerical simulation of convection–diffusion phenomena that is consistent with the physics of the process and unconditionally L-stable. For the nonlinear case, convergence of the characteristic finite analytic method is proved and an error estimate for the solution is given. Concluding numerical experiments show that the method simulates the convection–diffusion process well, with small numerical viscosity, high accuracy, good stability, and no spurious oscillations.
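The characteristic treatment of the convection term described above can be sketched with a semi-Lagrangian step: trace each grid point back along its characteristic and interpolate. This is a simplified stand-in only; the diffusion term here uses a plain explicit second difference, not the paper's finite analytic scheme, and the periodic 1-D setup is an assumption.

```python
import numpy as np

def char_step(u, x, a, nu, dt):
    """One step for u_t + a u_x = nu u_xx on a uniform periodic grid.

    Transport: backtrace x - a*dt along the characteristic and linearly
    interpolate (np.interp with periodic wrap).  Diffusion: explicit
    centered second difference applied to the advected field.
    """
    dx = x[1] - x[0]
    L = x[-1] - x[0] + dx                    # period of the grid
    x_back = (x - a * dt - x[0]) % L + x[0]  # foot of each characteristic
    u_adv = np.interp(x_back, x, u, period=L)
    lap = (np.roll(u_adv, -1) - 2.0 * u_adv + np.roll(u_adv, 1)) / dx**2
    return u_adv + nu * dt * lap
```

With nu = 0 and a grid-aligned displacement the step is an exact shift; with a = 0 and nu > 0 a sine profile decays, as diffusion demands.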

13.
The aim of this paper is to accurately estimate the local truncation error of partial differential equations that are numerically solved using a finite difference or finite volume approach on structured and unstructured meshes. In this work, we approximate the local truncation error using the τ-estimation procedure, which compares the residuals on a sequence of grids with different spacing. First, we focus the analysis on one-dimensional scalar linear and non-linear test cases to examine the accuracy of the estimation of the truncation error for both finite difference and finite volume approaches on different grid topologies. Then, we extend the analysis to two-dimensional problems: first on linear and non-linear scalar equations and finally on the Euler equations. We demonstrate that this approach yields a highly accurate estimation of the truncation error if certain conditions are fulfilled. These conditions are related to the accuracy of the restriction operators, the choice of the boundary conditions, the distortion of the grids and the magnitude of the iteration error.
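The core of the τ-estimation procedure can be sketched for a model problem L u = f with L = −d²/dx² and a two-grid hierarchy: restrict the fine solution by injection and measure the coarse operator's residual. The periodic 1-D setting and injection restriction are illustrative assumptions.

```python
import numpy as np

def tau_estimate(u_fine, f_fine, dx):
    """tau-estimation sketch for -u'' = f with central differences on a
    uniform periodic grid: tau_2h = L_2h(R u_h) - R f_h, where R is
    injection onto the grid of spacing 2*dx.  Assumes u_fine is a
    converged fine-grid solution (or the exact solution sampled on the
    fine grid), so the result approximates the coarse-grid truncation
    error."""
    u_c = u_fine[::2]                      # injection restriction
    f_c = f_fine[::2]
    h = 2.0 * dx
    lap = (np.roll(u_c, -1) - 2.0 * u_c + np.roll(u_c, 1)) / h**2
    return -lap - f_c
```

For u = sin(x) (so f = sin(x)) the estimate reproduces the leading truncation term −(2h)² u''''/12 of the coarse central scheme.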

14.
Physica A, 1991, 173(3): 583–594
From classical electrodynamics the energy dissipation per unit volume in a dispersive nonmagnetic medium is known to be equal to ωε″(ω)⟨E²⟩, where ε″ denotes the imaginary part of the permittivity. The present work calculates the energy dissipation per unit surface area when two semi-infinite homogeneous slabs are separated by a gap a. Only the gap-induced part of the dissipation is taken into account, so that the effect may be called a Casimir dissipative effect. Subtracting off the formal T = 0 expression, the net dissipation is found to be negative. This reflects the fact that the dissipation in the presence of a gap is less than it would be in the case of a single homogeneous medium (i.e., a = ∞).

15.
The use of Gibbs distribution-based Bayesian segmentation of electron microscopy images for visualizing nanostructures is investigated. Bayesian segmentation divides an image into nonoverlapping areas that correspond as closely as possible to the observed image; a quantitative characteristic of this correspondence is the a posteriori probability of one variant of the division or another, and the most likely variant is the division with the greatest a posteriori probability. The Metropolis algorithm for stochastic relaxation is used to obtain Bayesian estimates of the a posteriori probability of a division. The approach is studied by visualizing nanostructures in an electron microscopy image of a film made of a NiW nanocrystalline alloy.
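The Metropolis stochastic relaxation described above can be sketched for binary segmentation with a Gaussian likelihood and an Ising-type Gibbs smoothness prior. All parameter values (beta, sweep count, cooling factor) are illustrative assumptions, not those of the study.

```python
import numpy as np

def metropolis_segment(img, mu, beta=1.5, sweeps=30, temp=1.0, seed=0):
    """Binary segmentation by Metropolis relaxation with slow cooling.

    Energy: sum_ij 0.5*(img - mu[label])^2  (Gaussian likelihood, unit
    variance assumed) + beta * (number of disagreeing neighbor pairs)
    (Ising/Gibbs prior).  Proposed single-pixel flips are accepted with
    the Metropolis rule at a geometrically decreasing temperature.
    """
    rng = np.random.default_rng(seed)
    lab = (np.abs(img - mu[1]) < np.abs(img - mu[0])).astype(int)

    def energy_delta(i, j, new):
        old = lab[i, j]
        d = 0.5 * ((img[i, j] - mu[new])**2 - (img[i, j] - mu[old])**2)
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < lab.shape[0] and 0 <= nj < lab.shape[1]:
                d += beta * (int(lab[ni, nj] != new) - int(lab[ni, nj] != old))
        return d

    for _ in range(sweeps):
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                new = 1 - lab[i, j]
                dE = energy_delta(i, j, new)
                if dE < 0 or rng.random() < np.exp(-dE / temp):
                    lab[i, j] = new
        temp *= 0.92                     # cooling schedule (assumed)
    return lab
```

On a noisy two-region test image the relaxation recovers the underlying division with high accuracy.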

16.
The problem is reconsidered and it is stressed that viscosity must be taken into account. This makes the introduction of an "initial hadronic volume" V₀ ∼ mπ⁻¹, which in the e⁺e⁻ case can hardly be physically motivated, superfluous. The viscosity becomes the single source of dissipation. A rough estimate of the multiplicity is presented. Further, black-body electromagnetic radiation from the hot hadronic clump turns out to be essential. It leads to an increase of the energy spent on the neutral component.

17.
The performance (accuracy and robustness) of several clustering algorithms is studied for linearly dependent random variables in the presence of noise. It turns out that the error percentage quickly increases when the number of observations is less than the number of variables. This is a common situation in experiments with DNA microarrays. Moreover, an a posteriori criterion to choose between two discordant clustering algorithms is presented.

18.
We discuss recent developments in the “one-body” dissipation theory described in Błocki et al. [Ann. Phys. (N.Y.) 113 (1978) 330]. The principal new result is the derivation of the functional form of the dissipation expression (the Rayleigh Dissipation Function) for a finite idealized nucleus with a diffuse surface, in the form of an expansion in powers of the dimensionless ratio of the surface diffuseness to the size, R, of the system. The leading term in such an expansion is a surface contribution, of relative order R², in the form of the “Wall Formula” of Błocki et al. The next is a curvature correction of order R. At the next level (R⁰) there are two higher-order curvature corrections and a correction for the presence of gradients in the normal velocity field specifying the motion of the surface. For simple models of the nuclear surface profile we work out analytically the coefficients in the curvature and velocity-gradient correction terms. We compare the one-body dissipation theory formulated in this way with recent linear-response and time-dependent Hartree-Fock treatments of the nuclear problem. The principal theme that emerges from this study is the close analogy between the problem of the nuclear macroscopic dissipation function and the problem of the nuclear macroscopic potential energy.

19.
We investigate two-quark correlations in hot and dense quark matter. To this end we use the light front field theory extended to finite temperature T and chemical potential μ. Therefore it is necessary to develop quantum statistics formulated on the light front plane. As a test case for light front quantization at finite T and μ we consider the NJL model. The solution of the in-medium gap equation leads to a constituent quark mass which depends on T and μ. Two-quark systems are considered in the pionic and diquark channel. We compute the masses of the two-body system using a T-matrix approach.

20.
Non-equilibrium rarefied flows are encountered frequently in supersonic flight at high altitudes, vacuum technology and microscale devices. Prediction of the onset of non-equilibrium is important for accurate numerical simulation of such flows. We formulate and apply the discrete version of Boltzmann's H-theorem for analysis of non-equilibrium onset and of the accuracy of numerical modeling of rarefied gas flows. The numerical modeling approach is based on the deterministic solution of kinetic model equations. The numerical solution approach comprises the discrete velocity method in the velocity space and the finite volume method in the physical space with different numerical flux schemes: a first-order scheme, a second-order minmod flux-limiter scheme and a third-order WENO scheme. The use of entropy considerations in rarefied flow simulations is illustrated for the normal shock, the Riemann and the two-dimensional shock tube problems. The entropy generation rate based on kinetic theory is shown to be a powerful indicator of the onset of non-equilibrium, of the accuracy of the numerical solution, and of the compatibility of boundary conditions for both steady and unsteady problems.
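The discrete H-theorem invoked above can be sketched with a discrete-velocity H-function and a space-homogeneous BGK relaxation: H = Σ f ln f Δv must not increase as a non-equilibrium distribution relaxes toward the Maxwellian with matching moments. The 1-D velocity grid and the BGK model (in place of the full kinetic model equations of the paper) are illustrative assumptions.

```python
import numpy as np

def h_function(f, dv):
    """Discrete Boltzmann H-function H = sum_i f_i ln(f_i) dv (f_i > 0)."""
    return float(np.sum(f * np.log(f)) * dv)

def bgk_relax_h(v, dv, f0, times, tau=1.0):
    """H(t) along the space-homogeneous BGK relaxation
    f(t) = M + (f0 - M)*exp(-t/tau), where M is the Maxwellian sharing
    rho, u, T with f0 (moments taken from the discrete velocity grid).
    The H-theorem says the returned sequence is non-increasing."""
    rho = np.sum(f0) * dv
    u = np.sum(f0 * v) * dv / rho
    T = np.sum(f0 * (v - u) ** 2) * dv / rho
    M = rho / np.sqrt(2.0 * np.pi * T) * np.exp(-(v - u) ** 2 / (2.0 * T))
    return [h_function(M + (f0 - M) * np.exp(-t / tau), dv) for t in times]
```

Starting from a bimodal (two-stream) distribution, H decreases monotonically toward its Maxwellian minimum, which is exactly the behavior used in the paper as a non-equilibrium indicator.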


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号