Similar Articles
20 similar articles retrieved.
1.
A computational methodology is developed to efficiently perform uncertainty quantification for fluid transport in porous media in the presence of both stochastic permeability and multiple scales. In order to capture the small scale heterogeneity, a new mixed multiscale finite element method is developed within the framework of the heterogeneous multiscale method (HMM) in the spatial domain. This new method ensures both local and global mass conservation. Starting from a specified covariance function, the stochastic log-permeability is discretized in the stochastic space using a truncated Karhunen–Loève expansion with several random variables. Due to the small correlation length of the covariance function, this often results in a high stochastic dimensionality. Therefore, a newly developed adaptive high dimensional stochastic model representation technique (HDMR) is used in the stochastic space. This results in a set of low stochastic dimensional subproblems which are efficiently solved using the adaptive sparse grid collocation method (ASGC). Numerical examples are presented for both deterministic and stochastic permeability to show the accuracy and efficiency of the developed stochastic multiscale method.
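For illustration, a minimal sketch (not the authors' code) of the truncated Karhunen–Loève step described above, assuming a 1-D grid, an exponential covariance, and made-up variance and correlation-length values:

```python
import numpy as np

def kl_sample_log_permeability(n=200, length=1.0, corr_len=0.1, var=1.0, n_terms=20, rng=None):
    """Draw one log-permeability realization from a truncated Karhunen-Loeve expansion.

    Assumed covariance: C(x, y) = var * exp(-|x - y| / corr_len).
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.linspace(0.0, length, n)
    # Discretize the covariance kernel on the grid and take its leading eigenpairs.
    C = var * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    eigval, eigvec = np.linalg.eigh(C)
    idx = np.argsort(eigval)[::-1][:n_terms]
    eigval, eigvec = eigval[idx], eigvec[:, idx]
    # KL expansion: log k(x) = sum_i sqrt(lambda_i) * phi_i(x) * xi_i, with xi_i ~ N(0, 1).
    xi = rng.standard_normal(n_terms)
    return x, eigvec @ (np.sqrt(np.maximum(eigval, 0.0)) * xi)

x, log_k = kl_sample_log_permeability()
print(log_k[:5])
```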

2.
Experimental evidence suggests that the dynamics of many physical phenomena are significantly affected by the underlying uncertainties associated with variations in properties and fluctuations in operating conditions. Recent developments in stochastic analysis have opened the possibility of realistic modeling of such systems in the presence of multiple sources of uncertainty. These advances raise the possibility of solving the corresponding stochastic inverse problem: the problem of designing/estimating the evolution of a system in the presence of multiple sources of uncertainty given limited information. A scalable, parallel methodology for stochastic inverse/design problems is developed in this article. The representation of the underlying uncertainties and the resultant stochastic dependent variables is performed using a sparse grid collocation methodology. A novel stochastic sensitivity method is introduced based on multiple solutions to deterministic sensitivity problems. The stochastic inverse/design problem is transformed into a deterministic optimization problem in a larger-dimensional space that is subsequently solved using deterministic optimization algorithms. The design framework relies entirely on deterministic direct and sensitivity analyses of the continuum system, which significantly broadens its applicability to design under uncertainty for systems usually analyzed with legacy codes. Various illustrative examples with multiple sources of uncertainty, including inverse heat conduction problems in random heterogeneous media, are provided to showcase the developed framework.
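As a hedged illustration of the collocation-plus-deterministic-optimization idea (a toy problem, not the paper's framework; the forward model, target value, and quadrature level below are invented):

```python
import numpy as np
from scipy.optimize import minimize

# Toy forward model: a scalar response depending on a design variable d and an
# uncertain parameter xi (both hypothetical, not from the paper).
def forward(d, xi):
    return (d - 1.5) ** 2 + 0.5 * xi * d

# Collocation nodes/weights for xi ~ N(0, 1) (probabilists' Gauss-Hermite rule).
nodes, weights = np.polynomial.hermite_e.hermegauss(7)
weights = weights / weights.sum()

def expected_mismatch(d, target=0.3):
    # Deterministic objective: expected squared mismatch over the uncertainty in xi,
    # evaluated with a handful of deterministic forward solves (one per node).
    vals = np.array([forward(d[0], xi) for xi in nodes])
    return np.sum(weights * (vals - target) ** 2)

res = minimize(expected_mismatch, x0=[0.0], method="BFGS")
print("design that best matches the target on average:", res.x)
```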

3.
Stochastic analysis of random heterogeneous media provides useful information only if realistic input models of the material property variations are used. These input models are often constructed from a set of experimental samples of the underlying random field. To this end, the Karhunen–Loève (K–L) expansion, also known as principal component analysis (PCA), is the most popular model reduction method due to its uniform mean-square convergence. However, it only projects the samples onto an optimal linear subspace, which results in an unreasonable representation of the original data if they are non-linearly related to each other. In other words, it only preserves the first-order (mean) and second-order statistics (covariance) of a random field, which is insufficient for reproducing complex structures. This paper applies kernel principal component analysis (KPCA) to construct a reduced-order stochastic input model for the material property variation in heterogeneous media. KPCA can be considered as a nonlinear version of PCA. Through use of kernel functions, KPCA further enables the preservation of higher-order statistics of the random field, instead of just two-point statistics as in the standard Karhunen–Loève (K–L) expansion. Thus, this method can model non-Gaussian, non-stationary random fields. In this work, we also propose a new approach to solve the pre-image problem involved in KPCA. In addition, polynomial chaos (PC) expansion is used to represent the random coefficients in KPCA which provides a parametric stochastic input model. Thus, realizations, which are statistically consistent with the experimental data, can be generated in an efficient way. We showcase the methodology by constructing a low-dimensional stochastic input model to represent channelized permeability in porous media.
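A rough sketch of the KPCA reduction and pre-image step using scikit-learn's generic KernelPCA (the paper proposes its own pre-image solution and a polynomial chaos parameterization; the data, kernel width, and perturbation below are placeholders):

```python
import numpy as np
from sklearn.decomposition import KernelPCA

# Hypothetical "experimental" realizations of a property field, one flattened grid per row.
rng = np.random.default_rng(9)
samples = rng.standard_normal((100, 400))

# Nonlinear reduction; fit_inverse_transform=True makes sklearn learn an approximate
# pre-image map back to physical space (the paper proposes its own pre-image method).
kpca = KernelPCA(n_components=10, kernel="rbf", gamma=1e-3, fit_inverse_transform=True)
coeffs = kpca.fit_transform(samples)

# Crude new realization: perturb the low-dimensional coefficients of one sample (the paper
# instead parameterizes the coefficients with a polynomial chaos expansion) and map back.
new_coeffs = coeffs[0] + 0.1 * rng.standard_normal(10)
new_field = kpca.inverse_transform(new_coeffs.reshape(1, -1))
print(new_field.shape)   # (1, 400)
```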

4.
5.
Li Jie, Peng Yongbo. Chinese Journal of Computational Physics, 2012, 29(1): 95-100
Based on the principle of energy conservation, the kinetic energy of microscopic particle motion is equated with the strain energy of macroscopic dynamic yielding, and a stochastic multiscale model of the shear stress of magnetorheological (MR) fluids that intrinsically accounts for fluctuations in the motion of the suspended particles is established. The analysis shows that the random initial conditions and Brownian motion of the suspended particles, together with the uneven order and number of repeated chain-cluster breakages and re-formations during shear-strain loading, lead to nonlinear and random fluctuations in the macroscopic yield behavior of the system; at the same time, the microscopic motion fluctuations are strongly attenuated by volume averaging, so the macroscopic random fluctuations are relatively weak. Fitting a Bingham shear-rate constitutive model further shows that the applied field strength affects the variability of the macroscopic yield behavior to some extent, and the randomness of physical parameters should therefore be considered in the design of MR fluid devices.
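A small hedged sketch of the Bingham fit mentioned above, τ = τ_y + η·γ̇, on synthetic shear-rate/stress data (all values invented):

```python
import numpy as np

# Hypothetical shear-rate / shear-stress data for one MR-fluid realization.
rng = np.random.default_rng(10)
gamma_dot = np.linspace(1.0, 200.0, 50)                     # shear rate [1/s]
tau = 20.0 + 0.15 * gamma_dot + rng.normal(0.0, 1.0, 50)    # Bingham response + fluctuation

# Least-squares Bingham fit: slope = plastic viscosity, intercept = dynamic yield stress.
eta, tau_y = np.polyfit(gamma_dot, tau, 1)
print(f"yield stress ~ {tau_y:.2f} Pa, plastic viscosity ~ {eta:.3f} Pa*s")
```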

6.
A method based on the wavelet transform is developed to characterize variations at multiple scales in non-stationary time series. We consider two different financial time series, the S&P CNX Nifty closing index of the National Stock Exchange (India) and the Dow Jones industrial average closing values. These time series are chosen since they are known to comprise stochastic fluctuations as well as cyclic variations at different scales. The wavelet transform isolates cyclic variations at higher scales when random fluctuations are averaged out; this corroborates the correlated behaviour observed earlier in financial time series through random matrix studies. Analysis is carried out with Haar, Daubechies-4 and continuous Morlet wavelets to study the character of fluctuations at different scales, and it shows that cyclic variations emerge at intermediate time scales. It is found that the Daubechies family of wavelets can be used effectively to capture cyclic variations since these are local in nature. To gain insight into the occurrence of cyclic variations, we then proceed to model these wavelet coefficients using a genetic programming (GP) approach and the standard embedding technique in the reconstructed phase space. It is found that the standard methods (GP as well as artificial neural networks) fail to model these variations because of poor convergence. A novel interpolation approach is developed that overcomes this difficulty. The dynamical model equations have, primarily, linear terms with additive Padé-type terms. It is seen that the emergence of cyclic variations is due to an interplay of a few important terms in the model. Very interestingly, the GP model captures smooth variations as well as bursty behaviour quite nicely.
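A minimal sketch of the Daubechies-4 multiscale decomposition idea using PyWavelets, on a synthetic random-walk stand-in for the index series (the wavelet level and data are assumptions, not the paper's settings):

```python
import numpy as np
import pywt  # PyWavelets

# Hypothetical daily closing values (a random walk standing in for the Nifty / Dow series).
rng = np.random.default_rng(11)
prices = 100.0 + np.cumsum(rng.normal(0.0, 1.0, 2048))

# Multilevel discrete wavelet transform with the Daubechies-4 wavelet.
coeffs = pywt.wavedec(prices, "db4", level=6)   # [cA6, cD6, cD5, ..., cD1]

# Keep only the coarse approximation: small-scale stochastic detail is zeroed out,
# so the reconstruction highlights the larger-scale (cyclic) variation.
smooth = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
trend = pywt.waverec(smooth, "db4")
print(trend.shape)
```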

7.
Wang Min, Shen Yuqing, Chen Zhenyu, Xu Peng. Chinese Journal of Computational Physics, 2021, 38(5): 623-630
Based on the fractal scaling characteristics of porous-media microstructure, the Monte Carlo method is used to reconstruct the microscopic grain and pore structures of random porous media, and the gas seepage characteristics of multiscale porous media are studied with a fractal capillary-bundle model, establishing a quantitative relationship between the microstructure and the macroscopic seepage properties. The results show that the fine structures reconstructed by the fractal Monte Carlo method are close to those of real media, and the computed gas seepage characteristics agree well with lattice Boltzmann simulation data. The gas permeability of porous media increases with the Knudsen number; the pore fractal dimension has a significant influence on the micro-scale effect of gas seepage, whereas the influence of the tortuosity fractal dimension on the ratio of apparent to intrinsic permeability is negligible. The proposed fractal Monte Carlo method converges quickly and its computational error is independent of dimensionality, which helps in understanding the seepage mechanism of multiscale porous media.
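A hedged sketch of the fractal Monte Carlo sampling idea for a capillary-bundle model: capillary radii are drawn from a truncated fractal (Pareto-like) size distribution by inverse-CDF sampling and combined with Hagen–Poiseuille flow; the distribution form, parameter values, and cross-section are assumptions for illustration only:

```python
import numpy as np

def sample_pore_radii(n, r_min, r_max, d_f, rng):
    """Monte Carlo sampling of capillary radii from a truncated fractal (Pareto-like)
    size distribution with fractal dimension d_f (assumed model form)."""
    u = rng.uniform(size=n)
    return r_min / (1.0 - u * (1.0 - (r_min / r_max) ** d_f)) ** (1.0 / d_f)

rng = np.random.default_rng(12)
r = sample_pore_radii(100_000, r_min=1e-7, r_max=1e-5, d_f=1.6, rng=rng)

# Hagen-Poiseuille flow through a bundle of straight capillaries across a cross-section
# of area A gives a rough intrinsic-permeability estimate (tortuosity and slip ignored).
A = 1e-8   # hypothetical sample cross-section [m^2]
k = np.pi * np.sum(r ** 4) / (8.0 * A)
print(f"intrinsic permeability ~ {k:.3e} m^2")
```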

8.
A multiscale deep-learning flow model for fractured media is established in combination with artificial neural networks. Based on a coarse grid and a fine grid, the multiscale neural network is trained on coarse-grid data and can achieve an accurate network with relatively few degrees of freedom. Multiscale basis functions are obtained on the coarse grid by solving local flow problems, and the fine-grid solution is then recovered with the neural network. The flow equations based on discrete fractures can be viewed as a multilayer network whose number of layers depends on the number of time steps. The construction of the multiscale machine-learning numerical scheme for fractured media is described, including how the multiscale algorithm builds the multiscale basis functions of the discrete-fracture model, and an oversampling technique is adopted to further improve accuracy. Numerical results show that combining the multiscale finite element method with machine learning is an effective algorithm for simulating fluid flow.
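A toy sketch of the coarse-grid training idea (not the paper's scheme): a cheap 1-D Darcy solver generates fine-scale solutions, which are sampled on a coarse grid and used to train a generic neural-network surrogate; the solver, field statistics, and network sizes are all invented:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def solve_darcy_1d(k):
    """Cell-centered finite-volume solve of -(k u')' = 0 on [0,1], u(0)=1, u(1)=0."""
    n = k.size
    # Face transmissibilities: harmonic averages inside, half-cell values at the ends.
    kh = np.concatenate(([2 * k[0]], 2 * k[:-1] * k[1:] / (k[:-1] + k[1:]), [2 * k[-1]]))
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        A[i, i] = kh[i] + kh[i + 1]
        if i > 0:
            A[i, i - 1] = -kh[i]
        if i < n - 1:
            A[i, i + 1] = -kh[i + 1]
    b[0] = kh[0] * 1.0   # left boundary value u(0) = 1; u(1) = 0 adds nothing to b
    return np.linalg.solve(A, b)

rng = np.random.default_rng(0)
fine_k = np.exp(rng.normal(0.0, 1.0, (300, 64)))     # random log-normal permeability fields
coarse_k = fine_k.reshape(300, 8, 8).mean(axis=2)    # 8 coarse cells per field
coarse_u = np.array([solve_darcy_1d(k)[::8] for k in fine_k])  # fine solution sampled coarsely

# Generic network surrogate trained on coarse-grid data only.
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(coarse_k, coarse_u)
print(model.predict(coarse_k[:1]).shape)  # (1, 8)
```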

9.
Predicted by stochastic models and observed experimentally in a number of isomerization reactions, viscosity-induced solvent effects manifest themselves in a significant departure of the reaction rates from the values expected on the basis of transition state theory. These effects are well understood within the framework of stochastic models; however, the predictive power of such models is limited by the fact that their parameters are not readily available. Experiment and molecular dynamics (MD) simulations can provide such information and can serve as the testing grounds for various stochastic models. In real solvents, a change in viscosity is inevitably associated with variation of at least one of the three factors – temperature, pressure, or solvent identity, resulting in different solvent–solvent and solvent–solute interactions. A model is proposed in which solvent viscosity is manipulated through mass scaling, which allows one to maintain other factors constant for a series of viscosities. This approach was tested on MD simulations of the kinetics of two model isomerization reactions in Lennard–Jones solvents, whose viscosity was varied over three orders of magnitude. The results reproduce the Kramers turnover and a strong negative viscosity dependence of the reaction rates in the high viscosity limit, somewhat weaker than η⁻¹.
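As a hedged aside on why mass scaling isolates viscosity: in classical MD, multiplying the solvent masses by a factor λ rescales time by √λ while leaving all static properties unchanged, so the shear viscosity scales as √λ; in the high-friction (Smoluchowski) limit the Kramers rate then falls off as the inverse viscosity (standard textbook relations, not results from the paper):

```latex
% High-friction (Smoluchowski) limit of the Kramers rate, and the mass-scaling relation
% used to reason about the viscosity dependence (standard results, not from the paper):
k \;\approx\; \frac{\omega_0\,\omega_b}{2\pi\,\gamma}\; e^{-E_b/k_B T} \;\propto\; \eta^{-1},
\qquad
\eta(\lambda m) \;=\; \sqrt{\lambda}\;\eta(m) \quad \text{at fixed } T,\ \rho .
```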

10.
Multiscale entropy and dynamic characteristics analysis of two-phase flow patterns (cited by 10: 0 self-citations, 10 by others)
Zheng Guibo, Jin Ningde. Acta Physica Sinica, 2009, 58(7): 4485-4492
The multiscale entropy characteristics of several typical nonlinear time series are studied, and on this basis the conductance fluctuation signals of vertical upward gas-liquid two-phase flow, collected by an intrusive array conductance sensor under 144 flow conditions, are analyzed. The results show that the rate of change of the sample entropy at small scales can distinguish three typical flow patterns (bubble flow, slug flow, and churn flow), while the fluctuation of the sample entropy at large scales reflects the dynamic characteristics of each flow pattern. The random variability of bubble flow manifests as high, oscillating sample entropy at large scales; the intermittent motion of gas and liquid slugs in slug flow manifests as low, stable sample entropy at large scales; and the highly unstable oscillatory motion of churn flow yields entropy values between those of bubble flow and slug flow, approaching bubble flow at even larger scales. Keywords: sample entropy, multiscale entropy, gas-liquid two-phase flow, dynamic characteristics
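A compact sketch of the multiscale (sample) entropy computation described above, in the spirit of Costa-style MSE: coarse-grain the series at each scale, then compute the sample entropy; the tolerance and embedding choices below are common defaults, not necessarily those used by the authors:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy SampEn(m, r): Chebyshev distance, self-matches excluded."""
    x = np.asarray(x, dtype=float)

    def match_pairs(length):
        templ = np.array([x[i:i + length] for i in range(len(x) - length)])
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
        return (np.sum(d <= r) - len(templ)) / 2.0   # count each pair once

    b, a = match_pairs(m), match_pairs(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, scales=range(1, 11), m=2):
    """Costa-style MSE: non-overlapping coarse-graining, then SampEn at each scale."""
    x = np.asarray(x, dtype=float)
    r = 0.2 * x.std()                      # tolerance fixed from the original series
    out = []
    for s in scales:
        n = len(x) // s
        coarse = x[: n * s].reshape(n, s).mean(axis=1)
        out.append(sample_entropy(coarse, m=m, r=r))
    return np.array(out)

rng = np.random.default_rng(1)
signal = rng.standard_normal(1000)         # stand-in for a conductance fluctuation record
print(multiscale_entropy(signal))
```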

11.
We develop an efficient, Bayesian Uncertainty Quantification framework using a novel treed Gaussian process model. The tree is adaptively constructed using information conveyed by the observed data about the length scales of the underlying process. On each leaf of the tree, we utilize Bayesian Experimental Design techniques in order to learn a multi-output Gaussian process. The constructed surrogate can provide analytical point estimates, as well as error bars, for the statistics of interest. We numerically demonstrate the effectiveness of the suggested framework in identifying discontinuities, local features and unimportant dimensions in the solution of stochastic differential equations.
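A hedged sketch of the per-leaf surrogate idea using a plain Gaussian process (the tree construction and Bayesian experimental design of the paper are omitted; the test function and kernel are invented):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical stochastic-input -> quantity-of-interest map with a sharp local feature.
def qoi(xi):
    return np.tanh(20.0 * (xi - 0.5)) + 0.1 * np.sin(8.0 * np.pi * xi)

rng = np.random.default_rng(2)
X = rng.uniform(0.0, 1.0, (30, 1))
y = qoi(X[:, 0])

# One leaf of the treed model would carry a GP like this; the tree itself would first
# split the input domain where the apparent length scale changes abruptly.
gp = GaussianProcessRegressor(kernel=RBF(0.1) + WhiteKernel(1e-4), normalize_y=True)
gp.fit(X, y)

Xs = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
mean, std = gp.predict(Xs, return_std=True)   # point estimates plus error bars
print(float(std.max()))
```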

12.
A new method for solving numerically stochastic partial differential equations (SPDEs) with multiple scales is presented. The method combines a spectral method with the heterogeneous multiscale method (HMM) presented in [W. E, D. Liu, E. Vanden-Eijnden, Analysis of multiscale methods for stochastic differential equations, Commun. Pure Appl. Math., 58(11) (2005) 1544–1585]. The class of problems that we consider are SPDEs with quadratic nonlinearities that were studied in [D. Blömker, M. Hairer, G.A. Pavliotis, Multiscale analysis for stochastic partial differential equations with quadratic nonlinearities, Nonlinearity, 20(7) (2007) 1721–1744]. For such SPDEs an amplitude equation which describes the effective dynamics at long time scales can be rigorously derived for both advective and diffusive time scales. Our method, based on micro and macro solvers, makes it possible to capture the amplitude equation numerically and accurately at a cost independent of the small scales in the problem. Numerical experiments illustrate the behavior of the proposed method.
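For intuition, a minimal HMM sketch in the spirit of the micro/macro-solver idea, applied to a toy slow/fast SDE rather than an SPDE (all model choices below are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy slow/fast system (not the SPDE of the paper):
#   slow:  dX = f(X, Y) dt,  with f(x, y) = y - x**3
#   fast:  dY = -(Y - X)/eps dt + sqrt(2/eps) dW   (Ornstein-Uhlenbeck around X)
# The exact averaged drift is F(x) = E[f(x, Y)] = x - x**3.

def effective_drift(x, eps=1e-3, n_steps=2000):
    """Micro solver: short bursts of the fast variable at frozen x, time-averaged."""
    dt = eps / 20.0
    y, acc, count = x, 0.0, 0
    for k in range(n_steps):
        y += -(y - x) / eps * dt + np.sqrt(2.0 * dt / eps) * rng.standard_normal()
        if k > n_steps // 2:          # discard burn-in
            acc += y - x ** 3         # accumulate f(x, y)
            count += 1
    return acc / count

# Macro solver: forward Euler on the slow variable with the estimated drift.
X, dt_macro, T = 2.0, 0.05, 3.0
for _ in range(int(T / dt_macro)):
    X += dt_macro * effective_drift(X)
print("slow variable near a stable root of x - x**3:", X)
```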

13.
Aiming to resolve the problem of redundant information concerning rolling bearing degradation characteristics and to tackle the difficulty faced by convolutional deep learning models in learning feature information in complex time series, a prediction model for remaining useful life based on multiscale fusion permutation entropy (MFPE) and a multiscale convolutional attention neural network (MACNN) is proposed. The original signal of the rolling bearing was extracted and decomposed by resonance sparse decomposition to obtain the high-resonance and low-resonance components. The multiscale permutation entropy of the low-resonance component was calculated. Moreover, the locally linear-embedding algorithm was used for dimensionality reduction to remove redundant information. The multiscale convolution module was constructed to learn the feature information at different time scales. The attention module was used to fuse the feature information and input it into the remaining useful life prediction module for evaluation. The appropriate network structure and parameter configuration were determined, and a multiscale convolutional attention neural network was designed to determine the remaining useful life prediction model. The results show that, compared with other models, the method is effective and superior in representing degradation feature information and in improving the accuracy of remaining useful life prediction.
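A short sketch of plain multiscale permutation entropy (not the paper's fusion variant, and without the resonance decomposition or network stages), with assumed embedding order and scales:

```python
import numpy as np
from math import factorial

def permutation_entropy(x, order=3, delay=1, normalize=True):
    """Bandt-Pompe permutation entropy of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (order - 1) * delay
    # Ordinal pattern (ranking) of every embedded vector.
    patterns = np.array([np.argsort(x[i:i + order * delay:delay]) for i in range(n)])
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    h = -np.sum(p * np.log(p))
    return h / np.log(factorial(order)) if normalize else h

def multiscale_permutation_entropy(x, scales=range(1, 6), order=3):
    """Coarse-grain the signal at each scale, then compute permutation entropy."""
    x = np.asarray(x, dtype=float)
    vals = []
    for s in scales:
        n = len(x) // s
        coarse = x[: n * s].reshape(n, s).mean(axis=1)
        vals.append(permutation_entropy(coarse, order=order))
    return np.array(vals)

rng = np.random.default_rng(4)
vibration = rng.standard_normal(5000)      # stand-in for a bearing vibration record
print(multiscale_permutation_entropy(vibration))
```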

14.
Wavelets are widely used now for the analysis of local scales (or frequencies) important in physical events, biological objects, natural phenomena, etc. They provide unique information about scales at different locations. In particular, they are used for analysis of patterns in the phase space of very high multiplicity events.

15.
Variability in the power-law distributions of rupture events (cited by 1: 0 self-citations, 1 by others)
Rupture events, such as the propagation of cracks or sliding along faults, associated with the deformation of brittle materials are observed to obey power-law distributions. This is verified at scales ranging from laboratory samples to the Earth's crust, for various materials and under various loading modes. Besides the claim that this is a universal characteristic of the deformation of heterogeneous media, spatial and temporal variations are observed in the exponent and tail shape. These have considerable implications for the ability to forecast large events from smaller ones and for the reliability of such forecasts. There is growing interest in identifying the factors responsible for these variations. In this work, we first present observations at various scales (laboratory tests, field experiments, landslides, mining-induced seismicity, crustal earthquakes) showing that substantial variations exist in both the slope and the tail shape of the rupture event size distribution. This review allows us to identify potential explanations for these variations (incorrect statistical methods, heterogeneity, stress, brittle/ductile transition, finite-size effects, proximity to failure). A possible link with critical-point theory is also drawn, showing that it can explain part of the observed variations in terms of the distance to the critical point. Using numerical simulations of progressive failure, we investigate the role of mechanical properties on the power-law distributions. The results of the simulations agree with critical-point theory for macroscopic behaviors ranging from ductile to brittle, providing a unified framework for understanding the power-law variability observed in rupture phenomena.
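A hedged sketch of how a power-law exponent is typically estimated, and of how much spread sampling noise alone produces, using the continuous maximum-likelihood estimator of Clauset et al. on synthetic event sizes (values invented):

```python
import numpy as np

def powerlaw_mle_exponent(s, s_min):
    """Continuous power-law MLE (Clauset et al.): alpha = 1 + n / sum(log(s / s_min))."""
    s = np.asarray(s, dtype=float)
    s = s[s >= s_min]
    return 1.0 + s.size / np.sum(np.log(s / s_min))

rng = np.random.default_rng(5)
# Synthetic "event sizes" drawn from a power law with exponent 2.5 (inverse-CDF sampling).
sizes = (1.0 - rng.uniform(size=5000)) ** (-1.0 / (2.5 - 1.0))

alpha = powerlaw_mle_exponent(sizes, s_min=1.0)
# Bootstrap spread: the variability that sampling noise alone produces, before invoking
# heterogeneity, stress, finite-size effects, or proximity to failure.
boot = [powerlaw_mle_exponent(rng.choice(sizes, sizes.size), 1.0) for _ in range(200)]
print(f"alpha ~ {alpha:.2f} +/- {np.std(boot):.2f}")
```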

16.
We survey research on radiation propagation or ballistic particle motion through media with randomly variable material density, and we investigate the topic with an emphasis on very high spatial frequencies. Our new results are based on a specific variability model consisting of a zero-mean Gaussian scaling noise riding on a constant value that is large enough with respect to the amplitude of the noise to yield overwhelmingly non-negative density. We first generalize known results about sub-exponential transmission from regular functions, which are almost everywhere continuous, to merely “measurable” ones, which are almost everywhere discontinuous (akin to statistically stationary noises), with positively correlated fluctuations. We then use the generalized measure-theoretic formulation to address negatively correlated stochastic media without leaving the framework of conventional (continuum-limit) transport theory. We thus resolve a controversy about recent claims that only discrete-point process approaches can accommodate negative correlations, i.e., anti-clustering of the material particles. We obtain in this case the predicted super-exponential behavior, but it is rather weak. Physically, and much like the alternative discrete-point process approach, the new model applies most naturally to scales commensurate with the inter-particle distance in the material, i.e., when the notion of particle density breaks down due to Poissonian (or maybe not-so-Poissonian) number-count fluctuations in the sample volume. At the same time, the noisy structure must prevail up to scales commensurate with the mean free path to be of practical significance. Possible applications are discussed.
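A hedged numerical sketch of the sub-exponential effect for positively correlated fluctuations: transmission is averaged over realizations of a Gaussian noise riding on a constant density (the AR(1) correlation model and all parameter values are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(6)

# Zero-mean correlated Gaussian noise riding on a constant density, as in the model
# described above; all parameter values are made up for illustration.
n_real, n_cells, mean_sigma, noise_std, corr, ds = 5000, 400, 1.0, 0.4, 0.95, 0.01
eps = rng.standard_normal((n_real, n_cells))
fluct = np.empty_like(eps)
fluct[:, 0] = eps[:, 0]
for j in range(1, n_cells):   # AR(1) chain: positively correlated along each path
    fluct[:, j] = corr * fluct[:, j - 1] + np.sqrt(1.0 - corr ** 2) * eps[:, j]
sigma = np.clip(mean_sigma + noise_std * fluct, 0.0, None)   # keep density non-negative

tau = sigma.sum(axis=1) * ds              # optical depth of each realization
T_avg = np.exp(-tau).mean()               # ensemble-averaged transmission
T_exp = np.exp(-tau.mean())               # pure exponential with the mean optical depth
print(f"<exp(-tau)> = {T_avg:.4f} vs exp(-<tau>) = {T_exp:.4f} (larger => sub-exponential)")
```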

17.
When a material is illuminated with a laser beam, a phenomenon known as dynamic speckle or biospeckle can be observed. It exhibits an interference image that contains a great deal of information about the process being analyzed, and one of its most important applications is quantifying the activity of the material under study. The numerical analysis of dynamic speckle images can be carried out by means of a co-occurrence matrix (COM) that assembles the intensity distributions of a speckle pattern over time. An operational method that is widely used on biospeckle COMs is the inertia moment (IM). Some studies demonstrate that IM is more sensitive when analyzing processes that involve high activities or high frequencies, considering the spectral analysis of the phenomena; however, when the variation is less intense, this method is less efficient. For low variations in activity, or low frequencies, qualitative methods such as wavelet-based entropy and cross-spectrum analysis have given better results; processes in the intermediate range of activity, however, are not well covered by any of these techniques. The contribution of this research is to present an alternative approach based on the absolute value of the differences (AVD) when handling the biospeckle COM. Using AVD on the seed-drying process, it was found to be efficient in tracking the behavior of intermediate frequencies. An accumulated-sum test (Coates and Diggle) showed that AVD and IM are generated from the same stochastic process. Thus, AVD is useful as an alternative method in some cases, or even as a complementary tool for analyzing dynamic speckle, mainly when the activity information is not present at high frequencies.
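A minimal sketch of the COM-based descriptors discussed above, computing both the inertia moment and the AVD from a row-normalized co-occurrence matrix of a synthetic pixel intensity history (definitions follow common biospeckle conventions; the data are invented):

```python
import numpy as np

def co_occurrence_matrix(history, levels=256):
    """Row-normalized co-occurrence matrix of successive intensities of one pixel."""
    x = np.clip(np.asarray(history, dtype=int), 0, levels - 1)
    com = np.zeros((levels, levels))
    for a, b in zip(x[:-1], x[1:]):
        com[a, b] += 1
    rows = com.sum(axis=1, keepdims=True)
    return np.divide(com, rows, out=np.zeros_like(com), where=rows > 0)

def inertia_moment(com):
    i, j = np.indices(com.shape)
    return np.sum(com * (i - j) ** 2)      # IM: quadratic penalty on intensity jumps

def avd(com):
    i, j = np.indices(com.shape)
    return np.sum(com * np.abs(i - j))     # AVD: linear penalty, gentler on large jumps

rng = np.random.default_rng(7)
history = np.cumsum(rng.integers(-3, 4, 4000)) % 256   # hypothetical pixel intensity history
m = co_occurrence_matrix(history)
print(f"IM = {inertia_moment(m):.2f}, AVD = {avd(m):.2f}")
```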

18.
Time-dependent conformal maps are used to model a class of growth phenomena limited by coupled non-Laplacian transport processes, such as nonlinear diffusion, advection, and electromigration. Both continuous and stochastic dynamics are described by generalizing conformal-mapping techniques for viscous fingering and diffusion-limited aggregation, respectively. The theory is applied to simulations of advection-diffusion-limited aggregation in a background potential flow. A universal crossover in morphology is observed from diffusion-limited to advection-limited fractal patterns with an associated crossover in the growth rate, controlled by a time-dependent effective Péclet number. Remarkably, the fractal dimension is not affected by advection, in spite of dramatic increases in anisotropy and growth rate, due to the persistence of diffusion limitation at small scales.

19.
We investigate spontaneously generated waves around the interfaces between two different media in a system where the domain scales are limited. These two media are carefully selected so that there exists a theoretical interface wave whose frequency and wave number can be predicted from the control parameters. We present the rules for how the frequency and wave number vary as the scales of the media domains are reduced. We find that the frequency decreases with reducing the scale of the antiwave (AW) medium, but increases with reducing the scale of the normal wave (NW) medium, in both one-dimensional and two-dimensional systems. The wave number always decreases with reducing the scale of either the NW or the AW medium. The smallest scale at which the theoretical wave can be generated is the predicted wavelength. These special phenomena around the interfaces can be applied to detect the limited scale of a system.

20.
A general problem when analysing NMR spectra that reflect variations in the environment of target molecules is that different resonances are affected to various extents. Often a few resonances that display the largest frequency changes are selected as probes to reflect the examined variation, especially when the NMR spectra contain numerous resonances. Such a selection depends on more or less intuitive judgements and relies on the observed spectral variation being primarily caused by changes in the NMR sample. Moreover, recording the changes observed for a few (albeit significant) resonances inevitably means that not all of the available information is used in the analysis. Likewise, the commonly used chemical shift mapping (CSM) [Biochemistry 39 (2000) 26, Biochemistry 39 (2000) 12595] constitutes a loss of information, since the total variation in the data is not retained in the projection onto this single variable. Here, we describe a method for subjecting 2D NMR time-domain data to multivariate analysis and illustrate it with an analysis of multiple NMR experiments recorded at various folding conditions for the protein MerP. The calculated principal components provide an unbiased model of the variations in the NMR spectra and can consequently be processed as NMR data; all the changes reflected in the principal components are thereby made available for visual inspection in a single NMR spectrum. This approach is much less laborious than consideration of large numbers of individual spectra, and it greatly increases the interpretative power of the analysis.
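A rough sketch of the multivariate step on synthetic 1-D spectra (the paper works with 2-D time-domain data; the peak shapes, drift, and component count here are invented):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(8)

# Hypothetical stack of 1-D spectra (one row per folding condition); this only
# illustrates the multivariate step, not the paper's 2-D time-domain treatment.
ppm = np.linspace(10.0, 0.0, 1024)
shift = np.linspace(0.0, 0.05, 20)[:, None]            # condition-dependent peak drift
spectra = (np.exp(-((ppm - 7.2 - shift) ** 2) / 0.01)  # drifting resonance
           + np.exp(-((ppm - 1.3) ** 2) / 0.02)        # unaffected resonance
           + 0.01 * rng.standard_normal((20, ppm.size)))

pca = PCA(n_components=2)
scores = pca.fit_transform(spectra)     # one point per experiment/condition
loadings = pca.components_              # can be inspected like spectra themselves
print(np.round(scores[:, 0], 3))        # first component tracks the drift across conditions
```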

