Similar documents
20 similar documents retrieved.
1.
The polynomial chaos expansion (PCE) model has developed into a powerful tool for global sensitivity analysis, yet it is rarely used as a surrogate model in reliability analysis. Because the model lacks an error term, it is difficult to construct an active learning function for step-by-step updating. To address this, a simulation method based on the PCE model and bootstrap resampling is proposed within the framework of structural reliability analysis to compute the failure probability. First, a bootstrap resampling step is applied to the experimental design to characterize the prediction error of the PCE model. Second, an active learning function is constructed from this local error, and the model is updated adaptively by continually enriching the experimental design until it accurately approximates the true performance function. Finally, once the PCE model fits and predicts with sufficient accuracy, Monte Carlo simulation is used to compute the failure probability. The proposed parallel point-adding strategy not only finds the "best" points for improving the model's fitting ability during the updating process, but also accounts for the computational cost of model fitting. Moreover, when the failure probability is of small order of magnitude, combining the PCE-bootstrap step with subset simulation further accelerates the convergence of the failure probability estimator. This method extends the application of PCE models in probabilistic reliability from sensitivity analysis to reliability analysis, and the numerical examples demonstrate its accuracy and efficiency.
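As a rough illustration of the loop described in this abstract, the following Python sketch uses a one-dimensional toy problem: an ordinary polynomial fit stands in for the PCE surrogate, the spread across bootstrap refits of the experimental design supplies the missing error term, and a U-type criterion picks enrichment points. The limit-state function `g` and all sample sizes are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def g(x):
    """Hypothetical limit-state function; failure is the event g(x) < 0."""
    return 2.5 - x**2

# initial experimental design
X = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
Y = g(X)

def bootstrap_predict(X, Y, x_cand, B=50, deg=2):
    """Refit the surrogate on B bootstrap resamples of the design; the spread
    across refits stands in for the error term a PCE model lacks."""
    preds = np.empty((B, len(x_cand)))
    for b in range(B):
        idx = rng.integers(0, len(X), len(X))
        while len(np.unique(X[idx])) <= deg:      # redraw degenerate resamples
            idx = rng.integers(0, len(X), len(X))
        preds[b] = np.polyval(np.polyfit(X[idx], Y[idx], deg), x_cand)
    return preds.mean(axis=0), preds.std(axis=0)

# active learning: enrich the design where the U-type criterion |mean|/std is
# smallest, i.e. near the limit state and where the surrogate is most uncertain
x_cand = np.linspace(-3.0, 3.0, 61)
for _ in range(5):
    mu, sd = bootstrap_predict(X, Y, x_cand)
    x_new = x_cand[np.argmin(np.abs(mu) / (sd + 1e-12))]
    X, Y = np.append(X, x_new), np.append(Y, g(x_new))

# once the surrogate is accurate, estimate the failure probability by Monte Carlo
coef = np.polyfit(X, Y, 2)
pf = np.mean(np.polyval(coef, rng.normal(0.0, 1.0, 100_000)) < 0.0)
```

With a standard-normal input, the true failure probability here is P(|x| > √2.5) ≈ 0.114, which the Monte Carlo step on the converged surrogate recovers.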

2.
Kinetic Monte Carlo methods provide a powerful computational tool for the simulation of microscopic processes such as the diffusion of interacting particles on a surface, at a detailed atomistic level. However, such algorithms are typically computationally expensive and are restricted to fairly small spatiotemporal scales. One approach towards overcoming this problem was the development of coarse-grained Monte Carlo algorithms. In recent literature, these methods were shown to be capable of efficiently describing much larger length scales while still incorporating information on microscopic interactions and fluctuations. In this paper, a coarse-grained Langevin system of stochastic differential equations approximating the diffusion of interacting particles is derived, based on these earlier coarse-grained models. The authors demonstrate the asymptotic equivalence of the transient and long-time behavior of the Langevin approximation and the underlying microscopic process, using asymptotic methods such as large deviations for interacting particle systems, and present corresponding numerical simulations comparing statistical quantities such as mean paths, autocorrelations, and power spectra of the microscopic and the approximating Langevin processes. Finally, it is shown that the Langevin approximations presented here are much more computationally efficient than conventional kinetic Monte Carlo methods, since in addition to the reduction in the number of spatial degrees of freedom achieved by coarse-grained Monte Carlo methods, the Langevin system of stochastic differential equations allows for multiple particle moves in a single timestep.
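To make concrete why a Langevin system advances all degrees of freedom in one timestep, here is a minimal Euler–Maruyama sketch for an illustrative Ornstein–Uhlenbeck SDE. The coefficients and the SDE itself are placeholders, not the paper's coarse-grained equations; the point is only the vectorized single-timestep update and the comparison of long-time statistics with theory.

```python
import numpy as np

rng = np.random.default_rng(1)

# Euler–Maruyama integration of dX = -theta*(X - mu)*dt + sigma*dW,
# an illustrative Ornstein–Uhlenbeck SDE (not the paper's coarse-grained system)
theta, mu, sigma = 1.0, 0.5, 0.3
dt, n_steps, n_paths = 0.01, 2000, 500

X = np.zeros(n_paths)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    X += -theta * (X - mu) * dt + sigma * dW   # every path advances in one step

# long-time statistics: mean -> mu, std -> sigma / sqrt(2*theta)
long_time_mean = X.mean()
long_time_std = X.std()
```

A kinetic Monte Carlo analogue would instead execute one particle event per iteration, which is where the efficiency gap comes from.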

3.
A differentially weighted operator splitting Monte Carlo (DWOSMC) method is further developed to study multi-component aerosol dynamics. The proposed method combines stochastic and deterministic approaches. Component-related particle volume density distributions are examined, and the computational accuracy and efficiency of the two-component DWOSMC method is verified against a sectional method. For the one-component aerosol system, the sectional method is more computationally efficient than the DWOSMC method, while for two-component aerosol systems, the DWOSMC method proves to be much more computationally efficient than the sectional method. Using this newly developed multi-component DWOSMC method, compositional distributions of particles can be obtained to determine simultaneous coagulation and condensation processes that occur in different regimes of two-component aerosol systems.

4.
Based on the idea of Monte Carlo simulation, a stochastic simulation approach is used to extend the study of density operators from the perspective of mixed-format data. First, a method is given for converting mixed data into interval numbers, which are then mapped onto a common interval range via translation and scaling. Next, a random number generator produces random numbers following a given distribution on the interval; these are clustered according to their distribution, and a method for determining the density weights is derived. On this basis, the stochastic simulation approach is applied to the density-operator information aggregation model, yielding evaluation conclusions that carry probability information. Finally, a numerical example verifies the effectiveness of the method.

5.
This paper reformulates the valuation of interest rate swaps, swap leg payments and swap risk measures, all under stochastic interest rates, as a problem of solving a system of linear equations with random perturbations. A sequence of uniform approximations which solves this system is developed and allows for fast and accurate computation. The proposed method provides a computationally efficient alternative to Monte Carlo based valuations and risk measurement of swaps. This is demonstrated through numerical experiments; the method thus offers a potentially important real-time tool for analysis and calculation in markets.

6.
In this paper, the problem of passivity analysis is investigated for stochastic interval neural networks with interval time-varying delays and Markovian jumping parameters. By constructing a proper Lyapunov-Krasovskii functional, utilizing the free-weighting matrix method and some stochastic analysis techniques, we deduce new delay-dependent sufficient conditions that ensure the passivity of the proposed model. These sufficient conditions are computationally efficient and can be solved numerically by the linear matrix inequality (LMI) Toolbox in MATLAB. Finally, numerical examples are given to verify the effectiveness and the applicability of the proposed results.

7.
Measured and analytical data are unlikely to be equal due to measurement noise, model inadequacies, structural damage, etc. It is necessary to update the physical parameters of analytical models for proper simulation and design studies. Starting from simulated measured modal data such as natural frequencies and their corresponding mode shapes, a new computationally efficient and symmetry-preserving method and associated theories are presented in this paper to update the physical parameters of damping and stiffness matrices simultaneously for analytical modeling. A conjecture proposed in [Y.X. Yuan, H. Dai, A generalized inverse eigenvalue problem in structural dynamic model updating, J. Comput. Appl. Math. 226 (2009) 42-49] is solved. Two numerical examples are presented to show the efficiency and reliability of the proposed method. More importantly, a numerical stability analysis of the model updating problem is presented, combined with numerical experiments.

8.
We consider stochastic discrete optimization problems where the decision variables are nonnegative integers. We propose and analyze an online control scheme which transforms the problem into a surrogate continuous optimization problem and proceeds to solve the latter using standard gradient-based approaches, while simultaneously updating both the actual and surrogate system states. It is shown that the solution of the original problem is recovered as an element of the discrete state neighborhood of the optimal surrogate state. For the special case of separable cost functions, we show that this methodology becomes particularly efficient. Finally, convergence of the proposed algorithm is established under standard technical conditions; numerical results are included in the paper to illustrate the fast convergence of this approach.
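The transform-then-recover idea in this abstract can be sketched in a few lines. This toy uses a hypothetical separable quadratic cost (not the paper's system dynamics): projected gradient descent solves the continuous surrogate, and the integer solution is then recovered by enumerating the floor/ceil neighborhood of the continuous optimum.

```python
import itertools
import numpy as np

TARGETS = np.array([2.3, 5.7, 1.1])   # hypothetical data defining a separable cost

def cost(x):
    """Separable surrogate cost; the true problem restricts x to nonnegative integers."""
    return float(np.sum((x - TARGETS) ** 2))

# solve the continuous surrogate by projected gradient descent
x = np.zeros(3)
for _ in range(200):
    x = np.maximum(x - 0.1 * 2.0 * (x - TARGETS), 0.0)   # project onto x >= 0

# recover the integer solution from the discrete neighborhood of the surrogate optimum
corners = itertools.product(*((int(np.floor(v)), int(np.ceil(v))) for v in x))
best = min(corners, key=lambda c: cost(np.array(c)))
```

For a separable cost, each coordinate can be rounded independently, which is why the abstract notes that this special case is particularly efficient.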

9.
Many optimal experimental designs depend on one or more unknown model parameters. In such cases, it is common to use Bayesian optimal design procedures to seek designs that perform well over an entire prior distribution of the unknown model parameter(s). Generally, Bayesian optimal design procedures are viewed as computationally intensive. This is because they require numerical integration techniques to approximate the Bayesian optimality criterion at hand. The most common numerical integration technique involves pseudo Monte Carlo draws from the prior distribution(s). For a good approximation of the Bayesian optimality criterion, a large number of pseudo Monte Carlo draws is required. This results in long computation times. As an alternative to the pseudo Monte Carlo approach, we propose using computationally efficient Gaussian quadrature techniques. Since, for normal prior distributions, suitable quadrature techniques have already been used in the context of optimal experimental design, we focus on quadrature techniques for nonnormal prior distributions. Such prior distributions are appropriate for variance components, correlation coefficients, and any other parameters that are strictly positive or have upper and lower bounds. In this article, we demonstrate the added value of the quadrature techniques we advocate by means of the Bayesian D-optimality criterion in the context of split-plot experiments, but we want to stress that the techniques can be applied to other optimality criteria and other types of experimental designs as well. Supplementary materials for this article are available online.
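The quadrature-versus-pseudo-Monte-Carlo trade-off described here can be illustrated with a strictly positive parameter and an exponential prior. The "criterion" `f` below is a hypothetical stand-in for a Bayesian optimality criterion; the point is that a 10-node Gauss–Laguerre rule matches what pseudo Monte Carlo needs tens of thousands of draws to achieve.

```python
import numpy as np

# a hypothetical design criterion as a function of a strictly positive parameter
f = lambda lam: np.log1p(lam)

# Gauss–Laguerre quadrature computes E[f(lam)] for lam ~ Exp(1) with 10 nodes,
# since E[f] = ∫ exp(-x) f(x) dx is exactly the Laguerre weight integral
nodes, weights = np.polynomial.laguerre.laggauss(10)
quad_estimate = float(np.sum(weights * f(nodes)))

# the pseudo Monte Carlo alternative needs many draws for comparable accuracy
rng = np.random.default_rng(2)
mc_estimate = float(np.mean(f(rng.exponential(1.0, 100_000))))
```

Both estimates approach the exact value e·E₁(1) ≈ 0.5963, but the quadrature rule evaluates the criterion only 10 times, which matters when each evaluation is itself an expensive design computation.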

10.
This paper proposes a novel multi-scale approach for the reliability analysis of composite structures that accounts for both microscopic and macroscopic uncertainties, such as constituent material properties and ply angle. The stochastic structural responses, which establish the relationship between structural responses and random variables, are achieved using a stochastic multi-scale finite element method, which integrates computational homogenisation with the stochastic finite element method. This is further combined with the first- and second-order reliability methods to create a unique reliability analysis framework. To assess this approach, the deterministic computational homogenisation method is combined with the Monte Carlo method as an alternative reliability method. Numerical examples are used to demonstrate the capability of the proposed method in measuring the safety of composite structures. The paper shows that it provides estimates very close to those from the Monte Carlo method, but is significantly more efficient in terms of computational time. It is advocated that this new method can be a fundamental element in the development of stochastic multi-scale design methods for composite structures.

11.
This article discusses a new methodology which combines two efficient methods, the Monte Carlo (MC) and stochastic-algebraic (SA) methods, for stochastic analyses and probabilistic assessments in electric power systems. The main idea is to use the advantages of each method to cover the blind spots of the other. The new method is more efficient and more accurate than the SA method, faster than the MC method, and less dependent on the sampling process. In this article, the proposed method and the two original ones are used to obtain the probability density function of different variables in a power system. Different examples are studied to show the effectiveness of the hybrid method. The results of the proposed method are compared to those obtained using the MC and SA methods. © 2014 Wiley Periodicals, Inc. Complexity 21: 100–110, 2015

12.
In this paper, we propose and investigate a new general model of fuzzy stochastic discrete-time complex networks (SDCNs) described by a Takagi–Sugeno (T–S) fuzzy model with discrete and distributed time-varying delays. The proposed model takes some well-studied models as special cases. By employing a new Lyapunov functional candidate, we utilize some stochastic analysis techniques and the Kronecker product to deduce delay-dependent synchronization criteria that ensure the mean-square synchronization of the proposed T–S fuzzy SDCNs with mixed time-varying delays. These sufficient conditions are computationally efficient as they can be solved numerically by the LMI toolbox in MATLAB. A numerical simulation example is provided to verify the effectiveness and the applicability of the proposed approach.

13.
We propose an efficient global sensitivity analysis method for multivariate outputs that applies polynomial chaos-based surrogate models to vector projection-based sensitivity indices. These projection-based sensitivity indices, which are powerful measures of the comprehensive effects of model inputs on multiple outputs, are conventionally estimated by the Monte Carlo simulations that incur prohibitive computational costs for many practical problems. Here, the projection-based sensitivity indices are efficiently estimated via two polynomial chaos-based surrogates: polynomial chaos expansion and a proper orthogonal decomposition-based polynomial chaos expansion. Several numerical examples with various types of outputs are tested to validate the proposed method; the results demonstrate that the polynomial chaos-based surrogates are more efficient than Monte Carlo simulations at estimating the sensitivity indices, even for models with a large number of outputs. Furthermore, for models with only a few outputs, polynomial chaos expansion alone is preferable, whereas for models with a large number of outputs, implementation with proper orthogonal decomposition is the best approach.

14.
It has been recognized through theory and practice that uniformly distributed deterministic sequences provide more accurate results than purely random sequences. A quasi-Monte Carlo (QMC) variant of a multi-level single linkage (MLSL) algorithm for global optimization is compared with the original stochastic MLSL algorithm on a number of test problems of various complexities, with an emphasis on high-dimensional problems. Two different low-discrepancy sequences (LDS) are used and their efficiency is analysed. It is shown that applying LDS can significantly increase the efficiency of MLSL. The dependence of the sample size required for locating global minima on the number of variables is examined. It is found that higher confidence in the obtained solution, and possibly a reduction in computational time, can be achieved by increasing the total sample size N; N should also be increased as the dimensionality of the problem grows. For high-dimensional problems, clustering methods become inefficient; for such problems a multistart method can be more computationally expedient.
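The advantage of low-discrepancy sequences over pseudorandom points can be seen on a tiny integration example. This sketch builds a 2-D Halton sequence from its van der Corput building blocks (the specific LDS and integrand here are illustrative choices, not the ones used in the paper) and compares the integration error against a pseudorandom sample of the same size.

```python
import numpy as np

def van_der_corput(n, base):
    """Radical-inverse sequence: the one-dimensional building block of a Halton LDS."""
    seq = np.zeros(n)
    for i in range(n):
        f, x, k = 1.0, 0.0, i + 1
        while k > 0:
            f /= base
            x += f * (k % base)
            k //= base
        seq[i] = x
    return seq

n = 4096
halton = np.column_stack([van_der_corput(n, 2), van_der_corput(n, 3)])  # 2-D LDS
pseudo = np.random.default_rng(3).random((n, 2))

integrand = lambda p: p[:, 0] * p[:, 1]     # exact integral over [0,1]^2 is 1/4
qmc_err = abs(integrand(halton).mean() - 0.25)
mc_err = abs(integrand(pseudo).mean() - 0.25)
```

The QMC error decays roughly like (log N)^d / N versus N^(-1/2) for pseudorandom sampling, which is the basis for the efficiency gains reported for the QMC variant of MLSL.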

15.
To improve the efficiency of stochastic model updating and reduce its computational cost, a stochastic model updating method based on a Kriging model and the lifting wavelet transform is proposed. First, the lifting wavelet transform is applied to the acceleration frequency response function, and the fifth-level approximation coefficients are extracted to replace the original FRF. Second, Latin hypercube sampling is used to draw samples of the parameters to be updated; these serve as the inputs of the Kriging model, with the corresponding approximation coefficients as outputs, to construct the Kriging model. A butterfly optimization algorithm incorporating Lévy flight (LBOA) is proposed and applied to tune the correlation parameters of the Kriging model, improving its accuracy. Finally, with minimization of the Wasserstein distance as the objective, the means of the parameters to be updated are solved by the whale optimization algorithm. Results on test functions show that the LBOA achieves substantial improvements in search ability, convergence accuracy, and stability. The updating errors in the numerical examples are all below 0.4%, verifying that the proposed method attains high updating accuracy and efficiency.
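The Latin hypercube sampling step mentioned in this abstract can be sketched in plain NumPy. The Kriging construction and the optimization algorithms are omitted, and the sample size and parameter dimension below are hypothetical; the sketch only shows the stratification property that makes LHS attractive for building surrogate training sets.

```python
import numpy as np

rng = np.random.default_rng(5)

def latin_hypercube(n, d):
    """Stratified design on [0,1]^d: each of the n equal-probability bins
    of every dimension is hit exactly once."""
    pts = (np.arange(n)[:, None] + rng.random((n, d))) / n  # jitter within bins
    for j in range(d):
        pts[:, j] = pts[rng.permutation(n), j]              # shuffle each column
    return pts

# e.g. 50 training inputs for a 3-parameter model updating problem
samples = latin_hypercube(50, 3)
```

Each column is a permutation of one point per stratum, so even small designs cover every marginal range, unlike plain random sampling.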

16.
Stochastic processes that are sampled in Monte Carlo analyses can be so complex that sampling efficiency is difficult to attain. To handle these difficulties we introduce a model of the elements of a stochastic process which are relevant to the problem of sampling efficiency. From this model we derive a multistage estimating procedure which automatically adjusts the parameters of an efficient sampling design.

17.
This paper considers the problem of finding limits for a statistical process control (SPC) chart for the process mean, when the process distribution is unknown. The bootstrap method estimates these limits relying on Monte Carlo methods, which are subject to simulation errors. Therefore this paper develops a computationally efficient enumeration method for exact calculations of the control limits.
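For context, the bootstrap approach that this paper improves upon can be sketched as follows: resample subgroup means from pooled Phase-I data and take percentile control limits, with no normality assumption. The data-generating process and all sizes here are hypothetical; the paper's contribution is replacing exactly this simulation step with exact enumeration.

```python
import numpy as np

rng = np.random.default_rng(4)

# hypothetical Phase-I data: 25 subgroups of size 5 from a skewed, non-normal process
phase1 = rng.gamma(shape=4.0, scale=2.0, size=(25, 5))

# bootstrap the subgroup mean instead of assuming a normal process distribution
pooled = phase1.ravel()
boot_means = rng.choice(pooled, size=(10_000, 5), replace=True).mean(axis=1)

# percentile control limits targeting the usual 0.27% false-alarm rate
lcl, ucl = np.percentile(boot_means, [0.135, 99.865])
```

Because the limits sit at extreme percentiles, the Monte Carlo step needs many resamples and still carries simulation error, which is the motivation for the exact enumeration method.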

18.
We recast the valuation of annuities and life insurance contracts under mortality and interest rates, both of which are stochastic, as a problem of solving a system of linear equations with random perturbations. A sequence of uniform approximations is developed which allows for fast and accurate computation of expected values. Our reformulation of the valuation problem provides a general framework which can be employed to find insurance premiums and annuity values covering a wide class of stochastic models for mortality and interest rate processes. The proposed approach provides a computationally efficient alternative to Monte Carlo based valuation in pricing mortality-linked contingent claims.

19.
A novel numerical method for updating the stiffness matrix by a displacement output feedback technique is presented. The required displacement output feedback gain matrix is determined by QR decompositions and singular value decompositions of matrices; thus the optimal approximation stiffness matrix is found, and the large amount of unmeasured and unknown eigeninformation of the original model is preserved. The proposed method is computationally efficient since it requires no iteration, and the updated stiffness matrix is symmetric and has a compact expression.

20.
Although generalized linear mixed effects models have received much attention in the statistical literature, there is still no computationally efficient algorithm for computing maximum likelihood estimates for such models when there are a moderate number of random effects. Existing algorithms are either computationally intensive or they compute estimates from an approximate likelihood. Here we propose an algorithm—the spherical–radial algorithm—that is computationally efficient and computes maximum likelihood estimates. Although we concentrate on two-level, generalized linear mixed effects models, the same algorithm can be applied to many other models as well, including nonlinear mixed effects models and frailty models. The computational difficulty for estimation in these models is in integrating the joint distribution of the data and the random effects to obtain the marginal distribution of the data. Our algorithm uses a multidimensional quadrature rule developed in earlier literature to integrate the joint density. This article discusses how this rule may be combined with an optimization algorithm to efficiently compute maximum likelihood estimates. Because of stratification and other aspects of the quadrature rule, the resulting integral estimator has significantly less variance than can be obtained through simple Monte Carlo integration. Computational efficiency is achieved, in part, because relatively few evaluations of the joint density may be required in the numerical integration.
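The core computational task named in this abstract — integrating the random effects out of the joint density — can be shown in one dimension with an ordinary Gauss–Hermite rule (a simpler relative of the spherical–radial rule, used here purely for illustration). The random-intercept logistic model and the data below are hypothetical.

```python
import numpy as np

# 20-point Gauss–Hermite rule: integrates f against exp(-x^2) exactly for
# polynomials up to degree 39
nodes, weights = np.polynomial.hermite.hermgauss(20)

def cluster_loglik(y, beta, sigma):
    """log ∫ Π_j Bernoulli(y_j | invlogit(beta + b)) N(b; 0, sigma^2) db,
    the marginal contribution of one cluster in a random-intercept logit model."""
    b = np.sqrt(2.0) * sigma * nodes                     # change of variables
    p = 1.0 / (1.0 + np.exp(-(beta + b)))                # success prob at each node
    lik = np.prod(np.where(y[:, None] == 1, p, 1.0 - p), axis=0)
    return float(np.log(np.sum(weights * lik) / np.sqrt(np.pi)))

# hypothetical data: two clusters of binary responses
ll = (cluster_loglik(np.array([1, 1, 0]), beta=0.2, sigma=1.0)
      + cluster_loglik(np.array([0, 0, 1]), beta=0.2, sigma=1.0))
```

Only 20 joint-density evaluations per cluster are needed, versus thousands of draws for a simple Monte Carlo estimate of the same integral; maximizing `ll` over `beta` and `sigma` with any optimizer then yields maximum likelihood estimates.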
