Similar Literature
20 similar documents found (search time: 31 ms)
1.
Multiscale reliability places priority on the shifting of the space-time scale, while dual-scale reliability concentrates on time limits. Both can be ranked by applying the principle of least variance, although the prevailing criteria for assessment may differ. The elements measuring reliability can, as a rule, be idealized as non-interactive or interactive. Different formulations of the latter can be adopted to yield weak, strong, and mixed reliability, depending on the application. Variance can also be referred to an average based on the linear sum, the root mean square, or otherwise; the preference will again depend on the physical system under consideration. Different space-time scale ranges can be chosen for the appropriate time span to failure. Up to now, only partial validation has been possible, because the lower-scale data must be generated theoretically. A set of R-integrals is defined to account for the evolution effects by way of the root functions from Ideomechanics. The approach calls for a “pulsating mass” model that can connect the physical laws for small and large bodies, including energy dissipation at all scale levels. Non-linearity is no longer an issue when the characterization of matter is made by the multiscaling of space-time. Ordinary functions can also be treated with minor modifications. The key objective is not to derive new theories, but to explain the underlying physics of existing test data and the reliability of diversified propositions for predicting the time span to failure. Present and past investigations have remained at the micro-macro (mi-ma) scale range for several decades owing to the inability to quantify lower-scale data. To this end, the available mi-ma fatigue crack growth data are used to generate those at the na-mi and pi-na scale ranges. Reliability variances are computed for the three different scale ranges, covering effects from the atomic to the macroscopic scale.
They include the initial crack or defect length and velocities. Specimens with large initial defects are found to be more reliable; this trend also holds for each of the na-mi and pi-na scale ranges. Large-specimen data also had smaller reliability variances than those of the smaller specimens, making them more reliable. Variances for the nano- and pico-scale ranges showed much more scatter and were diversified. Uncertainties and unreliabilities at the atomic and sub-atomic scales are no doubt related, although their connections remain to be found. Reliability with high-order precision is also defined for multi-component systems that can involve trillions of elements at the different scale ranges. Such large-scale computations are now within reach with the advent of super-speed computers, especially when reliability, risk, and other factors may have to be considered simultaneously.

2.
An R-integral is defined to account for the evolution of the root functions from Ideomechanics. These can be identified with, though not limited to, the fatigue crack length or velocity. The choice was dictated by the available validated data relating accelerated testing to real-time life expectancy. The key issue is to show that there exists a time range of high reliability for the crack length and velocity that corresponds to the least variance of the time-dependent R-integrals. Excluded from the high-reliability time range are the initial time span, where the lower-scale defects are predominant, and the time when the macrocrack approaches instability at relatively high velocity. What remains is the time span for micro-macro cracking. The linear sum (ls) and root mean square (rms) averages are used to delineate two different types of variance; the former yields a higher reliability in comparison with the latter. The results support the scale range established empirically by in-service health monitoring for the crack length and velocity. The principle of least variance can be extended to multiscale reliability analysis and assessment for multi-component and multi-function systems.
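The comparison between the linear-sum and root-mean-square averages can be illustrated numerically. The sketch below is not the paper's R-integral: it uses a synthetic history r(t) and generic NumPy calls, only to show that the variance taken about the rms average can never be smaller than the variance about the linear-sum (arithmetic) average, consistent with the abstract's statement that the ls-based variance indicates higher reliability.

```python
import numpy as np

def variance_about(r, mean):
    """Mean squared deviation of the samples r about a chosen average."""
    return float(np.mean((r - mean) ** 2))

# Stand-in for a time-dependent R-integral history (illustrative, not the paper's data)
t = np.linspace(0.0, 1.0, 201)
r = 1.0 + 0.1 * np.sin(8 * np.pi * t)

mean_ls = float(np.mean(r))                 # linear-sum (arithmetic) average
mean_rms = float(np.sqrt(np.mean(r ** 2)))  # root-mean-square average

var_ls = variance_about(r, mean_ls)
var_rms = variance_about(r, mean_rms)

# variance_about(r, m) = Var(r) + (mean_ls - m)^2, so it is minimized at the
# arithmetic mean; since mean_rms >= mean_ls, the rms-based variance is larger.
print(var_ls <= var_rms)  # True
```

The inequality holds for any data set, because the mean squared deviation about a point m equals the intrinsic variance plus the squared offset of m from the arithmetic mean.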

3.
Validation of Ideomechanics (IDM) is manifested by removing the inconsistency of applying open-system test data to closed-system theories. Instead, the available open-system test data can rightly be used to determine the physical parameters of the transitional functions defined by the mean values of length (free path), velocity, mass, and energy. Multiscaling and size/time effects are considered. Ambiguities are mitigated when energy takes precedence over the concept of force. Determined directly from IDM is the energy density function, obtained from the velocity, which can represent the magnitude of the energy sink and source. The formulation involves grouping pairs of variables of opposing poles that can be constructed as ideograms, much like the yin-yang of the I-Ching. The flow of Chi implicates the arrow of time and irreversibility. Mass activation/inactivation (AIA) is assumed to be related to the expansion/contraction (EXCO) of matter. Physical systems are thus identified with the inhaling and exhaling of energy, corresponding, respectively, to direct-absorption and self-dissipation (DASD). These are postulated to be the basic processes for determining the integrity of the system. In contrast to Newtonian/Einsteinian mechanics (NEM), which uses field equations for determining the behavior of the whole everywhere for all time, IDM considers the mean behavior at any given size/time scale, however large or small. Uncertainties are addressed by the scale transitional functions. The new paradigm can be applied to scale shifting, to the construction of equivalence relations for open systems, and to the use of existing test data free of ambiguities. Classical conservation laws for closed systems are reducible from the equivalence principles of open systems. The same holds for the classical kinetic molecular theory of matter, which can be modified to include dissipation.

4.
In this study, two multi-scale analysis codes are newly developed by combining a homogenization algorithm and an elastic/crystalline viscoplastic finite element (FE) method (Nakamachi, E., 1988. A finite element simulation of the sheet metal forming process. Int. J. Numer. Meth. Eng. 25, 283–292; Nakamachi, E., Dong, X., 1996. Elastic/crystalline viscoplastic finite element analysis of dynamic deformation of sheet metal. Int. J. Computer-Aided Eng. Software 13, 308–326; Nakamachi, E., Dong, X., 1997. Study of texture effect on sheet failure in a limit dome height test by using elastic/crystalline viscoplastic finite element analysis. J. Appl. Mech. Trans. ASME(E) 64, 519–524; Nakamachi, E., 1998. Elastic/crystalline viscoplastic finite element modeling based on hardening–softening evaluation equation. In: Proc. of the 6th NUMIFORM, pp. 315–321; Nakamachi, E., Hiraiwa, K., Morimoto, H., Harimoto, M., 2000a. Elastic/crystalline viscoplastic finite element analyses of single- and poly-crystal sheet deformations and their experimental verification. Int. J. Plasticity 16, 1419–1441; Nakamachi, E., Xie, C.L., Harimoto, M., 2000b. Drawability assessment of BCC steel sheet by using elastic/crystalline viscoplastic finite element analyses. Int. J. Mech. Sci. 43, 631–652): (1) a “semi-implicit” finite element (FE) code and (2) a “dynamic explicit” FE code. These were applied to predict the plastic-strain-induced yield loci and the formability of sheet metal at the macro scale, and simultaneously the crystal texture and hardening evolutions at the micro scale. The isotropic and kinematical hardening laws are employed in the crystalline plasticity constitutive equation. For the multi-scale structure, two scales are considered: a microscopic polycrystal structure and a macroscopic elastic-plastic continuum.
We measure crystal morphologies by using an SEM-EBSD apparatus with a voxel size of about 3.8 μm, and define a three-dimensional (3D) representative volume element (RVE) for the micro polycrystal structure which satisfies the periodicity condition of the crystal orientation distribution. A “micro” finite element modeling technique is newly established to minimize the total number of finite elements at the micro scale. Next, the “semi-implicit” crystallographic homogenization FE code, which employs the SEM-EBSD-measured RVE, is applied to the 99.9% pure-iron uni-axial tensile problem to predict the texture evolution and the subsequent yield loci in various strain paths. These “semi-implicit” results reveal that the plastic-strain-induced anisotropy at the micro and macro levels can be predicted by our FE analyses, and that the kinematical hardening law leads to a distinct plastic-strain-induced anisotropy. Our “dynamic-explicit” FE code is applied to simulate the limit dome height (LDH) test problem for the mild steel DQSK, the high-strength steel HSLA, and the aluminum alloy AL6022 sheet metals, which were adopted as the NUMISHEET2005 benchmark sheet metals (Smith, L.M., Pourboghrat, F., Yoon, J.-W., Stoughton, T.B., 2005. NUMISHEET2005. In: Proc. of 6th Int. Conf. Numerical Simulation of 3D Sheet Metal Forming Processes, PART A and B (Benchmark), pp. 409–451), to estimate formability. The “dynamic-explicit” results reveal that the initial crystal orientation distribution has a large effect on the plastic-strain-induced texture, the anisotropic hardening evolution, and the sheet formability.

5.
A novel method based on a genetic algorithm (GA) is proposed, to the best of our knowledge for the first time, for finding the neutral instability curve of the Orr-Sommerfeld equation in (nearly) parallel flows. New concepts such as “proximity of parents” and “gender discrimination” are added to the conventional GA so that the algorithm can find the neutral instability curve, and certain GA operators such as “crossover” and “mutation” are modified to the same end. To check the applicability of the modified genetic algorithm (MGA) developed in this work, the case of plane Poiseuille flow is used as a benchmark. It is shown that the MGA is well capable of determining the neutral instability curve for this particular flow geometry.
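For readers unfamiliar with the conventional GA being modified, the minimal sketch below shows its skeleton (selection, crossover, mutation). The paper's “proximity of parents” and “gender discrimination” operators are not reproduced; as a stand-in for locating a neutral point, the sketch minimises |g(x)| for an illustrative “growth rate” g whose zero plays the role of a point on the neutral curve. All names and parameters are illustrative.

```python
import random

def g(x):
    # Illustrative "growth rate"; its zero (sqrt(2)) stands in for a neutral point.
    return x * x - 2.0

def evolve(pop_size=40, generations=200, lo=0.0, hi=2.0, seed=1):
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        # Fitness: closeness to neutrality. Best half survive as parents (elitism).
        pop.sort(key=lambda x: abs(g(x)))
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = 0.5 * (a + b)              # arithmetic crossover
            if rng.random() < 0.2:             # occasional Gaussian mutation
                child += rng.gauss(0.0, 0.05)
            children.append(min(max(child, lo), hi))
        pop = parents + children
    return min(pop, key=lambda x: abs(g(x)))

root = evolve()
print(root)  # close to sqrt(2) ≈ 1.414
```

Because the best half of each generation is carried over unchanged, the best candidate never degrades, and crossover plus small mutations refine it toward the zero of g.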

6.
This paper suggests a procedure for estimating excursion probabilities for linear and non-linear systems subjected to Gaussian excitation processes. The focus is on non-linear systems, which might also have stochastic properties. The approach is based on the so-called “averaged excursion probability flow”, which allows for a simple solution for the interaction in excursion problems. By considering the probability flow at fixed time instances, the dynamic reliability problem can be reduced to a simpler “static” problem. The proposed approach is very general and can be applied to both linear and non-linear systems whose response can be determined by deterministic methods. Hence, the procedure applies to arbitrary structures and any suitable mathematical model, including large FE models solved by deterministic FE codes.
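A classical ingredient behind such probability-flow arguments is Rice's mean upcrossing rate for a stationary zero-mean Gaussian response, combined with a Poisson estimate of the excursion probability over a duration T. The sketch below shows only this textbook building block, not the paper's averaged-flow treatment of interaction; all parameter values are illustrative.

```python
import math

def upcrossing_rate(b, sigma, sigma_dot):
    """Rice's mean rate of upcrossings of level b for a stationary zero-mean
    Gaussian process with std sigma and derivative std sigma_dot."""
    return (sigma_dot / (2.0 * math.pi * sigma)) * math.exp(-b * b / (2.0 * sigma * sigma))

def excursion_probability(b, sigma, sigma_dot, T):
    """Poisson (rare-crossing) estimate of P(max response exceeds b within T)."""
    return 1.0 - math.exp(-upcrossing_rate(b, sigma, sigma_dot) * T)

# Illustrative numbers: a 3-sigma barrier over 10 time units.
p = excursion_probability(b=3.0, sigma=1.0, sigma_dot=2.0, T=10.0)
print(p)  # a few percent
```

The Poisson estimate treats crossings as independent, which is exactly the kind of interaction effect that more refined probability-flow formulations are designed to correct.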

7.
8.
Specific flow rules and the corresponding constitutive elasto-viscoplastic model, combined with a new experimental strategy, are introduced in order to represent the behaviour of a spheroidal graphite cast iron over a wide range of strain, strain rate, and temperature. A “full model” is first proposed to reproduce the alloy behaviour correctly, even at very small strain levels. A “light model”, with slightly poorer experimental agreement but a simpler formulation, is also proposed. These macroscopic models, whose equations are based on physical phenomena observed at the dislocation scale, are able to cope with the various load conditions tested – progressive straining and cyclic hardening tests – and to describe anisothermal evolution correctly. The accuracy of these two models, and of the experimental databases to which they are linked, is estimated on different types of experimental tests and compared with the accuracy of more standard Chaboche-type constitutive models. Each test demonstrates the superiority of the “full model”, particularly in slow strain-rate regimes. After developing a material user subroutine, FEM simulations are performed in Abaqus for a car engine exhaust manifold and confirm the good results obtained on the experimental basis; the results are more accurate than those given by more traditional laws. A very good correlation is observed between the simulations and the engine bench tests.

9.
The generation of slugs was studied for air–water flow in horizontal 0.0763 m and 0.095 m pipes. The emphasis was on high liquid rates (uLS ≥ 0.5 m/s), for which slugs are formed close to the entry and the time intervals between slugs are stochastic. A “fully developed” slug flow is defined as consisting of slugs of different sizes interspersed in a stratified flow with a height slightly larger than the height, h0, needed for a slug to be stable. Properties of this “fully developed” pattern are discussed. A correlation for the frequency of slugging is suggested, which describes our data, as well as data from other laboratories, over a wide range of conditions. The possibility is explored that slug length increases further beyond the “fully developed” condition because slugs slowly overtake one another.

10.
In this essay I will attempt to identify the main events in the history of thought about irrotational flow of viscous fluids. I am of the opinion that when considering irrotational solutions of the Navier–Stokes equations it is never necessary, and typically not useful, to set the viscosity to zero. This observation runs counter to the frequently expressed idea that potential flow is a topic useful only for inviscid fluids; many people think that the notion of a viscous potential flow is an oxymoron. Incorrect statements like “… irrotational flow implies inviscid flow but not the other way around” can be found in popular textbooks.

11.
12.
13.
A new model coupling two basic models, one based on an interface tracking method and the other the two-fluid model, for simulating gas–liquid two-phase flow is presented. The new model can be used to simulate complex multiphase flows in which a large-length-scale interface and small-length-scale gas–liquid interfaces coexist. According to the physical state and the length scale of the interface, three phases are distinguished: the liquid phase, the large-length-scale-interface phase (LSI phase), and the small-length-scale-interface phase (SSI phase). A unified solution framework shared by the two basic models is built, which makes it convenient to perform the solution process. Based on this unified framework, the modified MCBA–SIMPLE algorithm is employed to solve the Navier–Stokes equations for the proposed model. A special treatment called “volume fraction redistribution” is adopted for grids containing all three phases, and another treatment is proposed for the advection of the large-length-scale interface when some portion of the SSI phase coalesces into the LSI phase. The movement of the large-length-scale interface is evaluated using the VOF/PLIC method. The proposed model is equivalent to the two-fluid model in zones where only the liquid phase and the SSI phase are present, and to the interface-tracking model in zones where only the liquid phase and the LSI phase are present. The characteristics of the proposed model are demonstrated on four problems.

14.
The micromechanics of elasto-viscoplastic composites made up of a random and homogeneous dispersion of spherical inclusions in a continuous matrix was studied with two methods. The first is an affine homogenization approach, which transforms the local constitutive laws into fictitious linear thermo-elastic relations in the Laplace–Carson domain, so that corresponding homogenization schemes can apply; the temporal response is computed after numerical inversion of the Laplace transform. The second is direct numerical simulation by finite elements of a three-dimensional representative volume element of the composite microstructure. The numerical simulations carried out over different realizations of the composite microstructure showed very little scatter and thus provided – for the first time – “exact” results in the elasto-viscoplastic regime that can be used as benchmarks to check the accuracy of other models. Overall, the predictions of the affine homogenization model were excellent, regardless of the volume fraction of spheres or of the loading paths (shear, uniaxial tension and biaxial tension, as well as monotonic and cyclic deformation), particularly at low strain rates. It was found, however, that the accuracy decreased systematically as the strain rate increased. The detailed information on the stress and strain microfields given by the finite element simulations was used to analyze the source of this difference, so that better homogenization methods can be developed.

15.
Simple dimensional arguments are used to establish three different regimes of particle time scale, in which explicit expressions for the particle Reynolds number and Stokes number are obtained as functions of the nondimensional particle size (d/η) and the density ratio. From a comparative analysis of the different computational approaches available for turbulent multiphase flows, it is argued that the point-particle approach is uniquely suited to turbulent multiphase flows where the Stokes number, defined as the ratio of the particle time scale to the Kolmogorov time scale (τp/τk), is greater than 1. The Stokes number estimate has been used to establish the parameter range where the point-particle approach is ideally suited. The point-particle approach can be extended to handle “finite-sized” particles whose diameter approaches that of the smallest resolved eddies. However, new challenges arise in the implementation of the Lagrangian–Eulerian coupling between the particles and the carrier phase. An approach in which the inter-phase momentum and energy coupling is separated into a deterministic and a stochastic contribution has been suggested.
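The dimensional estimate can be made explicit under the standard Stokes-drag assumption τp = ρp d²/(18 ρf ν) together with the Kolmogorov time τk = η²/ν, which gives St = (ρp/ρf)(d/η)²/18. This is the textbook scaling, not necessarily the paper's exact expression:

```python
def stokes_number(density_ratio, d_over_eta):
    """St = tau_p / tau_k under Stokes drag:
    tau_p = rho_p d^2 / (18 rho_f nu),  tau_k = eta^2 / nu,
    so St = (rho_p/rho_f) * (d/eta)^2 / 18."""
    return density_ratio * d_over_eta ** 2 / 18.0

# A heavy particle (density ratio ~1000, e.g. a droplet in gas) well below the
# Kolmogorov scale can still have St > 1:
print(stokes_number(1000.0, 0.2))  # ~2.22
```

This illustrates the abstract's point: whether the point-particle regime (St > 1) applies depends jointly on d/η and the density ratio, not on particle size alone.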

16.
Uptake of water by plant roots can be considered at two different Darcian scales, referred to as the mesoscopic and macroscopic scales. At the mesoscopic scale, uptake of water is represented by a flux at the soil–root interface, while at the macroscopic scale it is represented by a sink term in the volumetric mass balance. At the mesoscopic scale, uptake of water by individual plant roots can be described by a diffusion equation, describing the flow of water from soil to plant root, and appropriate initial and boundary conditions. The model involves at least two characteristic lengths describing the root–soil geometry and two characteristic times, one describing the capillary flow of water from soil to plant roots and another the ratio of supply of water in the soil and uptake by plant roots. Generally, at a certain critical time, uptake will switch from demand-driven to supply-dependent. In this paper, the solutions of some of the resulting mesoscopic linear and nonlinear problems are reviewed. The resulting expressions for the evolution of the average water content can be used as a basis for upscaling from the mesoscopic to the macroscopic scale. It will be seen that demand-driven and supply-dependent uptake also emerge at the macroscopic scale. Information about root systems needed to operationalize macroscopic models will be reviewed briefly.

17.
Thermoplastic elastomers (TPEs) are block copolymers made up of “hard” (glassy or crystalline) and “soft” (rubbery) blocks that self-organize into “domain” structures at a length scale of a few tens of nanometers. Under typical processing conditions, TPEs also develop a “polydomain” structure at the micron level that is similar to that of metal polycrystals. Therefore, from a continuum point of view, TPEs may be regarded as materials with heterogeneities at two different length scales. In this work, we propose a constitutive model for highly oriented, near-single-crystal TPEs with lamellar domain morphology. Based on small-angle X-ray scattering (SAXS) and transmission electron microscopy (TEM) observations, we consider such materials to have a granular microstructure where the grains are made up of the same, perfect, lamellar structure (single crystal) with slightly different lamination directions (crystal orientations). Having identified the underlying morphology, the overall finite-deformation response of these materials is determined by means of a two-scale homogenization procedure. Interestingly, the model predictions indicate that the evolution of microstructure—especially the rotation of the layers—has a very significant, but subtle effect on the overall properties of near-single-crystal TPEs. In particular, for certain loading conditions—namely, for those with sufficiently large compressive deformations applied in the direction of the lamellae within the individual grains—the model becomes macroscopically unstable (i.e., it loses strong ellipticity). By keeping track of the evolution of the underlying microstructure, we find that such instabilities can be related to the development of “chevron” patterns.

18.
The continuous production “on demand” of large polymerized objects is presented, using a versatile, easy-to-implement, and low-cost “millifluidic” reactor. Compared with microfluidic devices, the present set-up offers two considerable advantages: (i) much larger particles are produced with very good control of sizes and shapes, and (ii) no lithography is required for its design. Considering the high modularity of this synthetic pathway, “tubular millifluidics” appears as a new concept for synthesizing particles with strong control over the final object sizes, monodispersity, and aspect ratio. The possibility of reaching high-scale production makes it a promising production tool for industry.

19.
A deformation-theory version of strain-gradient plasticity is employed to assess the influence of microstructural scale on the yield strength of composites and polycrystals. The framework is that recently employed by Fleck and Willis (J. Mech. Phys. Solids 52 (2004) 1855-1888), but it is enhanced by the introduction of an interfacial “energy” that penalises the build-up of plastic strain at interfaces. The most notable features of the new interfacial potential are: (a) internal surfaces are treated as surfaces of discontinuity and (b) the scale-dependent enhancement of the overall yield strength is no longer limited by the “Taylor” or “Voigt” upper bound. The variational structure associated with the theory is developed in generality and its implications are demonstrated through consideration of simple one-dimensional examples. Results are presented for a single-phase medium containing interfaces distributed either periodically or randomly.

20.
Test results for critical local fracture stresses are analysed statistically for both “as-received” and “degraded” pressure-vessel weld metal. The values were determined from the fracture loads of blunt-notch four-point-bend specimens fractured over a range of low test temperatures, making use of results from a finite-element stress analysis of the stress-strain distributions ahead of the notch root. The “degraded” material tested in this work has been austenitized at a high temperature, followed by both prestraining and temper embrittlement. This has led to a situation in which the fracture stress for the “degraded” material is reduced significantly below that for the “as-received” material. The fracture mechanisms are different in that the “degraded” material shows evidence of intergranular fracture as well as cleavage fracture (in coarse grain size) whereas the “as-received” material shows only cleavage fracture (in fine grain size). The critical stress (σF) distributions plotted on normal probability paper show that the experimental cumulative distribution function (CDF) is linear for each condition with different mean values: for “as-received” material and for “degraded” material. The values of standard deviation are small and almost identical (33-). The decrease of the local fracture stress after degradation is related to the local fracture micro-mechanisms. Statistical analysis of the results for the two conditions supports the hypothesis that the values of σF are essentially single valued, within random experimental errors. A similar analysis of the data treating both conditions as a single population reveals some interesting points relating to statistical modelling and lower-bound estimation for mechanical properties. Comparisons are made with Weibull analysis of the data. 
A further conclusion is that it is extremely important to base any statistical model on inferences drawn from micro-mechanical modelling of the processes involved, and that examination of “normal” CDFs can often provide good indications of when it is necessary to subject data to further statistical and physical analysis.
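The normal-probability-paper construction described above can be sketched as follows, with synthetic data standing in for the measured fracture stresses and `scipy` routines assumed available. The abscissa of the probability paper is the standard-normal quantile of the plotting position; linearity of the sorted stresses against it supports the single-valued-σF hypothesis, with the slope and intercept estimating the standard deviation and mean.

```python
import numpy as np
from scipy import stats

# Synthetic "fracture stress" sample (illustrative values, not the paper's data)
rng = np.random.default_rng(0)
sigma_f = rng.normal(1800.0, 33.0, size=30)

# Sorted data, Hazen plotting positions, and standard-normal quantiles
# (the abscissa of normal probability paper)
x = np.sort(sigma_f)
p = (np.arange(1, x.size + 1) - 0.5) / x.size
z = stats.norm.ppf(p)

# Near-linearity of x vs z indicates a single normal population;
# slope ~ standard deviation, intercept ~ mean.
slope, intercept, r, pval, se = stats.linregress(z, x)
print(round(r, 3), round(slope, 1), round(intercept, 1))
```

Curvature or kinks in such a plot, rather than a straight line, would be the signal that the sample mixes populations (e.g. two fracture micro-mechanisms) and needs the further statistical and physical analysis the authors advocate.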

