Similar Literature
A total of 20 similar documents were found (search time: 112 ms).
1.
Datasets in the fields of climate and environment are often very large and irregularly spaced. To model such datasets, the widely used Gaussian process models in spatial statistics face tremendous challenges due to the prohibitive computational burden. Various approximation methods have been introduced to reduce the computational cost, but most rely on unrealistic assumptions about the underlying process, and retaining statistical efficiency remains an issue. We develop a new approximation scheme for maximum likelihood estimation. We show how the composite likelihood method can be adapted to provide different types of hierarchical low rank approximations that are both computationally and statistically efficient. The improvement of the proposed method is explored theoretically; the performance is investigated by numerical and simulation studies; and the practicality is illustrated by applying our methods to two million measurements of soil moisture in the area of the Mississippi River basin, which facilitates a better understanding of climate variability. Supplementary material for this article is available online.
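
To make the composite-likelihood idea concrete, here is a minimal sketch of a block composite Gaussian log-likelihood over pairs of adjacent blocks of irregularly spaced 1-D data. The exponential covariance, the block count, and the adjacent-pair design are assumptions of this illustration, not the paper's hierarchical low-rank construction.

```python
# A minimal sketch of a block composite Gaussian log-likelihood, assuming an
# exponential covariance and adjacent-block pairs; the paper's hierarchical
# low-rank construction is more elaborate than this illustration.
import numpy as np

def exp_cov(x1, x2, sigma2=1.0, rho=0.1):
    """Exponential covariance k(s, t) = sigma2 * exp(-|s - t| / rho)."""
    return sigma2 * np.exp(-np.abs(x1[:, None] - x2[None, :]) / rho)

def block_composite_loglik(x, y, n_blocks=10, **cov_kw):
    """Sum exact Gaussian log-likelihoods over pairs of adjacent blocks."""
    blocks = np.array_split(np.argsort(x), n_blocks)   # spatially ordered blocks
    total = 0.0
    for a, b in zip(blocks[:-1], blocks[1:]):          # adjacent-block pairs
        j = np.concatenate([a, b])
        K = exp_cov(x[j], x[j], **cov_kw) + 1e-8 * np.eye(j.size)
        sign, logdet = np.linalg.slogdet(K)
        total += -0.5 * (logdet + y[j] @ np.linalg.solve(K, y[j]))
    return total

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 500)                             # irregularly spaced sites
y = rng.multivariate_normal(np.zeros(500), exp_cov(x, x) + 1e-8 * np.eye(500))
print(block_composite_loglik(x, y, n_blocks=10, sigma2=1.0, rho=0.1))
```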

2.
The safety analysis of systems with nonlinear performance functions and small probabilities of failure is a challenge in the field of reliability analysis. In this study, an efficient approach is presented for approximating small failure probabilities. To this end, by introducing Probability Density Function (PDF) control variates, the original failure probability integral is reformulated based on the Control Variates Technique (CVT). Accordingly, using the adaptive cooperation of subset simulation (SubSim) and the CVT, a new formulation is offered for approximating small failure probabilities. The proposed formulation involves a probability term (resulting from a fast-moving SubSim) and an adaptive weighting term that refines the obtained probability. Several numerical and engineering problems, involving nonlinear performance functions and system-level reliability problems, are solved by the proposed approach and by common reliability methods. Results show that the proposed simulation approach is not only more efficient but also more robust than common reliability methods. It also shows good potential for application to engineering reliability problems.
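
The probability term above comes from subset simulation; the following is a minimal SubSim sketch for a small failure probability P[g(X) ≤ 0] with standard normal inputs. The level probability p0, the Metropolis proposal, and the linear test function are illustrative assumptions, and the paper's CVT weighting refinement is not reproduced.

```python
# A minimal subset simulation (SubSim) sketch for a small failure probability
# P[g(X) <= 0] with standard normal inputs; p0, the proposal, and the test
# function are illustrative, and the CVT refinement is not reproduced.
import numpy as np

def subset_simulation(g, dim, n=1000, p0=0.1, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n, dim))
    prob = 1.0
    for _ in range(20):                       # cap on intermediate levels
        gx = g(x)
        thresh = np.quantile(gx, p0)          # adaptive intermediate threshold
        if thresh <= 0.0:                     # final level reached
            return prob * np.mean(gx <= 0.0)
        prob *= p0
        seeds = x[gx <= thresh]
        per = int(np.ceil(n / len(seeds)))    # chain length per seed
        chains = []
        for s in seeds:                       # Metropolis moves within the level
            cur = s.copy()
            for _ in range(per):
                cand = cur + rng.standard_normal(dim)
                accept = rng.random() < np.exp(0.5 * (cur @ cur - cand @ cand))
                if accept and g(cand[None, :])[0] <= thresh:
                    cur = cand
                chains.append(cur.copy())
        x = np.asarray(chains)[:n]
    return prob

g = lambda x: 4.5 - x.sum(axis=1) / np.sqrt(x.shape[1])  # linear, beta = 4.5
print(subset_simulation(g, dim=2))            # exact value: Phi(-4.5) ~ 3.4e-6
```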

3.
This paper proposes a method combining a projection-outline-based active learning strategy with the Kriging metamodel for reliability analysis of structures with mixed random and convex variables. In this method, the approximation accuracy of the projection outlines on the limit-state surface, rather than that of the whole limit-state surface, is identified as crucial for estimating the failure probability. To efficiently improve the approximation accuracy of the projection outlines, a new projection-outline-based active learning strategy is developed to sequentially obtain update points located around the projection outlines. Taking into account the influence of metamodel uncertainty on the estimation of the failure probability, a quantification function of metamodel uncertainty is developed and introduced in the stopping condition of the Kriging metamodel update. Finally, Monte Carlo simulation is employed to calculate the failure probability based on the refined Kriging metamodel. Four examples, including the Burro Creek Bridge and a piezoelectric energy harvester, are tested to validate the performance of the proposed method. Results indicate that the proposed method is accurate and efficient for reliability analysis of structures with mixed random and convex variables.
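
For context, a minimal active-learning Kriging loop is sketched below with scikit-learn's Gaussian process; it uses the common U learning function and stopping rule from the AK-MCS literature rather than the paper's projection-outline criterion and metamodel-uncertainty function, and the limit state is a toy assumption.

```python
# A minimal active-learning Kriging sketch; the U criterion and U >= 2
# stopping rule stand in for the paper's projection-outline strategy, and
# the limit state g is a toy assumption (failure when g <= 0).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def g(x):
    return x[:, 0] ** 3 + x[:, 1] + 6.0

rng = np.random.default_rng(1)
pool = rng.standard_normal((10_000, 2))        # Monte Carlo population
train, y = pool[:12].copy(), g(pool[:12])      # small initial design

for _ in range(50):
    gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(train, y)
    mu, sd = gp.predict(pool, return_std=True)
    u = np.abs(mu) / np.maximum(sd, 1e-12)     # low where the sign is uncertain
    if u.min() >= 2.0:                         # common stopping rule
        break
    k = int(np.argmin(u))                      # next update point
    train = np.vstack([train, pool[k]])
    y = np.append(y, g(pool[k:k + 1]))

pf = np.mean(gp.predict(pool) <= 0.0)          # MCS on the refined metamodel
print("estimated failure probability:", pf)
```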

4.
The analysis of a posteriori error estimates used in reduced basis methods leads to a model reduction scheme for linear time-invariant systems involving the iterative approximation of the associated error systems. The scheme can be used to improve reduced-order models (ROMs) with initially poor approximation quality at a computational cost proportional to that of computing the original ROM. We also show that the iterative approximation scheme is applicable to parametric systems and demonstrate its performance using illustrative examples.

5.
For parameter sensitivity estimation with implicit limit state functions in time-invariant reliability analysis, the common Monte Carlo simulation based approach requires multiple trials for each parameter being varied, which increases the associated computational cost; the cost can become prohibitively high when many random variables are involved. Another effective approach for this problem is to construct an equivalent limit state function (usually called a response surface) and perform the estimation with FORM/SORM. However, because the equivalent limit state function is polynomial in the traditional response surface method, it is a poor approximation for highly non-linear limit state functions. To address these two problems, a new method, the support vector regression based response surface method, is presented in this paper. The support vector regression algorithm is employed to construct the equivalent limit state function, FORM/SORM is used for the parameter sensitivity estimation, and two illustrative examples are given. It is shown that the computational cost of the sensitivity estimation can be greatly reduced while accuracy is retained, and the sensitivity estimates obtained by the proposed method agree well with those computed by conventional Monte Carlo methods.
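
A minimal sketch of the SVR response surface idea followed by a FORM step is given below; the cubic limit state, the experimental design, and the SVR settings are assumptions for illustration, and the sensitivity computation itself is omitted.

```python
# A minimal sketch of an SVR response surface followed by a FORM step; the
# limit state, design, and SVR settings are illustrative assumptions.
import numpy as np
from scipy.optimize import NonlinearConstraint, minimize
from scipy.stats import norm
from sklearn.svm import SVR

def g(x):                                        # stand-in implicit limit state
    return 0.1 * x[:, 0] ** 3 - x[:, 1] + 5.0

rng = np.random.default_rng(0)
X = 2.0 * rng.standard_normal((200, 2))          # design of experiments
svr = SVR(kernel="rbf", C=100.0, epsilon=0.01).fit(X, g(X))

ghat = lambda u: svr.predict(u[None, :])[0]      # equivalent limit state function
# FORM: the design point is the closest point to the origin on ghat(u) = 0
res = minimize(lambda u: u @ u, x0=np.ones(2),
               constraints=[NonlinearConstraint(ghat, 0.0, 0.0)])
beta = np.sqrt(res.fun)                          # reliability index
print("beta =", beta, " Pf ~", norm.cdf(-beta))
```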

6.
Gaussian process models have been widely used in spatial statistics but face tremendous modeling and computational challenges for very large nonstationary spatial datasets. To address these challenges, we develop a Bayesian modeling approach using a nonstationary covariance function constructed from adaptively selected partitions. The partitioned nonstationary class allows one to knit together local covariance parameters into a valid global nonstationary covariance for prediction, where the local covariance parameters are estimated within each partition to reduce computational cost. To further facilitate the computations in local covariance estimation and global prediction, we use the full-scale approximation (FSA) approach for the Bayesian inference of our model. One of our contributions is to model the partitions stochastically by embedding a modified treed partitioning process into the hierarchical model, which leads to automated partitioning and substantial computational benefits. We illustrate the utility of our method with simulation studies and the global Total Ozone Mapping Spectrometer (TOMS) data. Supplementary materials for this article are available online.

7.
Generalized linear mixed models with semiparametric random effects are useful in a wide variety of Bayesian applications. When the random effects arise from a mixture of Dirichlet process (MDP) model with normal base measure, Gibbs sampling algorithms based on the Pólya urn scheme are often used to simulate posterior draws in conjugate models (essentially, linear regression models and models for binary outcomes). In the nonconjugate case, some common problems associated with existing simulation algorithms include convergence and mixing difficulties.

This article proposes an algorithm for MDP models with exponential family likelihoods and normal base measures. The algorithm proceeds by making a Laplace approximation to the likelihood function, thereby matching the proposal with that of the Gibbs sampler. The proposal is accepted or rejected via a Metropolis-Hastings step. For conjugate MDP models, the algorithm is identical to the Gibbs sampler. The performance of the technique is investigated using a Poisson regression model with semiparametric random effects. The algorithm performs efficiently and reliably, even in problems where large-sample results do not guarantee the success of the Laplace approximation. This is demonstrated by a simulation study where most of the count data consist of small numbers. The technique is associated with substantial benefits relative to existing methods, both in terms of convergence properties and computational cost.
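
A minimal scalar sketch of the Laplace-approximation proposal inside a Metropolis-Hastings step is shown below for a Poisson likelihood with a normal prior on a random effect; the data, prior variance, and independence-sampler form are illustrative assumptions, not the article's full MDP algorithm.

```python
# A minimal scalar sketch: Laplace-approximation proposal inside MH for a
# Poisson likelihood with mean exp(b) and a normal prior on b; the data and
# prior variance are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
y = rng.poisson(2.0, size=5)                     # small counts, as in the paper
tau2 = 1.0                                       # normal base-measure variance

def neg_log_post(b):                             # -log [Poisson(e^b) x N(0, tau2)]
    return -(np.sum(y * b - np.exp(b)) - b ** 2 / (2 * tau2))

bhat = minimize_scalar(neg_log_post).x           # posterior mode
var = 1.0 / (len(y) * np.exp(bhat) + 1.0 / tau2) # inverse curvature at the mode

def mh_step(b_cur):
    """One independence MH step with the Laplace proposal N(bhat, var)."""
    b_new = bhat + np.sqrt(var) * rng.standard_normal()
    logq = lambda b: -(b - bhat) ** 2 / (2 * var)
    log_acc = (neg_log_post(b_cur) - neg_log_post(b_new)
               + logq(b_cur) - logq(b_new))
    return b_new if np.log(rng.random()) < log_acc else b_cur

b, draws = 0.0, []
for _ in range(2000):
    b = mh_step(b)
    draws.append(b)
print("posterior mean of b ~", np.mean(draws[500:]))
```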

8.
This paper proposes an accelerated solution method for two-stage stochastic programming problems with binary variables in the first stage and continuous variables in the second stage. To develop the solution method, an accelerated sample average approximation approach is combined with an accelerated Benders' decomposition algorithm. The accelerated sample average approximation approach improves the main structure of the original technique by reducing the number of mixed integer programming problems that need to be solved. Furthermore, a recently proposed accelerated Benders' decomposition approach is utilized to expedite the solution of the mixed integer programming problems. To examine the performance of the proposed solution method, computational experiments are performed on stochastic supply chain network design problems. The computational results show that the accelerated solution method solves these problems efficiently. The synergy of the two accelerated approaches improves the computational procedure by an average factor of over 42% compared with the original methods, and over 12% compared with the recently modified methods. Moreover, the computational savings grow substantially with problem size.

9.
Discrete Markov random field models provide a natural framework for representing images or spatial datasets. They model the spatial association present while providing a convenient Markovian dependency structure and strong edge-preservation properties. However, parameter estimation for discrete Markov random field models is difficult due to the complex form of the associated normalizing constant for the likelihood function. For large lattices, the reduced dependence approximation to the normalizing constant is based on the concept of performing computationally efficient and feasible forward recursions on smaller sublattices, which are then suitably combined to estimate the constant for the entire lattice. We present an efficient computational extension of the forward recursion approach for the autologistic model to lattices that have an irregularly shaped boundary and that may contain regions with no data; these lattices are typical in applications. Consequently, we also extend the reduced dependence approximation to these scenarios, enabling us to implement a practical and efficient nonsimulation-based approach for spatial data analysis within the variational Bayesian framework. The methodology is illustrated through application to simulated data and example images. The online supplementary materials include our C++ source code for computing the approximate normalizing constant and simulation studies.
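
The forward recursion can be illustrated on a small regular lattice: the transfer-matrix sweep below computes the exact log normalizing constant of an autologistic model column by column. The lattice size and parameter values are illustrative, and the paper's extension to irregular boundaries and missing regions is not reproduced.

```python
# A minimal transfer-matrix sketch of the forward recursion for the exact
# log normalizing constant of an autologistic model on an h x w lattice;
# sizes and parameters are illustrative.
import itertools
import numpy as np

def log_norm_const(h, w, alpha=0.0, beta=0.3):
    """Exact log normalizing constant for an h-by-w binary autologistic lattice."""
    cols = np.array(list(itertools.product([0, 1], repeat=h)))  # 2^h column states
    within = alpha * cols.sum(1) + beta * (cols[:, :-1] * cols[:, 1:]).sum(1)
    between = beta * (cols @ cols.T)            # horizontal neighbour interactions
    log_f = within.copy()                       # recursion start: first column
    for _ in range(w - 1):                      # sweep the remaining columns
        m = log_f[:, None] + between + within[None, :]
        log_f = np.logaddexp.reduce(m, axis=0)  # marginalize the previous column
    return float(np.logaddexp.reduce(log_f))

print(log_norm_const(h=4, w=6))
```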

10.
A simple yet powerful method is presented to estimate nonlinearly and nonparametrically the components of additive models using wavelets. The estimator enjoys the good statistical and computational properties of the Waveshrink scatterplot smoother and can be efficiently computed using the block coordinate relaxation optimization technique. A rule for the automatic selection of the smoothing parameters, suitable for data mining of large datasets, is derived. The wavelet-based method is then extended to estimate generalized additive models. A primal-dual log-barrier interior point algorithm is proposed to solve the corresponding convex programming problem. Based on an asymptotic analysis, a rule for selecting the smoothing parameters is derived, enabling the estimator to be fully automated in practice. We illustrate the finite-sample properties with Gaussian and Poisson simulations.
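
A minimal Waveshrink-style smoother for one component is sketched below, assuming the PyWavelets package and the universal threshold; the paper's block coordinate relaxation loop for full additive models and its automatic smoothing-parameter rule are not reproduced.

```python
# A minimal Waveshrink-style smoother, assuming PyWavelets and the universal
# threshold; the additive-model backfitting loop is not reproduced here.
import numpy as np
import pywt

def waveshrink(y, wavelet="sym8"):
    """Soft-threshold wavelet smoother with the universal threshold."""
    coeffs = pywt.wavedec(y, wavelet)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # robust noise scale
    t = sigma * np.sqrt(2.0 * np.log(len(y)))        # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, t, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(y)]

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 512)
signal = np.sin(8 * np.pi * x ** 2)
smooth = waveshrink(signal + 0.3 * rng.standard_normal(512))
print("MSE:", float(np.mean((smooth - signal) ** 2)))
```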

11.
Slope failure mechanisms (e.g., why and where slope failure occurs) are usually unknown prior to slope stability analysis. Several possible failure scenarios (e.g., slope sliding along different slip surfaces) can be assumed, leading to a number of scenario failure events of slope stability. How to account rationally for various scenario failure events in slope stability reliability analysis, and how to identify key failure events that contribute significantly to slope failure, are critical questions in slope engineering. In this study, these questions are addressed by developing an efficient computer-based simulation method for slope system reliability analysis. The proposed approach decomposes a slope system failure event into a series of scenario failure events representing possible failure scenarios and calculates their occurrence probabilities with a single run of an advanced Monte Carlo simulation (MCS) method called generalized subset simulation (GSS). Using GSS results, representative failure events (RFEs) that are considered relatively independent are identified from the scenario failure events using a probabilistic network evaluation technique. Their relative contributions are assessed quantitatively, based on which key failure events are determined. The proposed approach is illustrated using a soil slope example and a rock slope example. It is shown that the proposed approach provides proper estimates of the occurrence probabilities of the slope system failure event and the scenario failure events with a single GSS run, which avoids repeatedly performing simulations for each failure event. Compared with direct MCS, the proposed approach significantly improves computational efficiency, particularly for failure events with small failure probabilities. Key failure events of slope stability are determined among the scenario failure events in a cost-effective manner. Such information is valuable for making slope design decisions and planning remedial measures.

12.
Many existing latent failure time models for competing risks do not provide closed form expressions of the sub-distribution functions. This paper suggests generalized FGM copula models with Burr III failure time distributions such that the sub-distribution functions have closed form expressions. Under the suggested model, we develop a likelihood-based inference method along with its computational tools and asymptotic theory. Based on the expressions of the sub-distribution functions, we propose goodness-of-fit tests. Simulations are conducted to examine the performance of the proposed methods. Real data from the reliability analysis of radio transmitter-receivers are analyzed to illustrate the proposed methods. The computational programs are made available in the R package GFGM.copula.
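
A minimal simulation sketch of the latent failure time model is given below: latent times with Burr III margins are linked by an FGM copula, and one sub-distribution function is estimated empirically. All parameter values are illustrative, and the closed-form sub-distribution expressions derived in the paper are not reproduced.

```python
# A minimal simulation sketch of latent competing-risks failure times linked
# by an FGM copula with Burr III margins; parameter values are illustrative.
import numpy as np

def burr3_inv(u, c, k):
    """Burr III quantile function for F(x) = (1 + x**(-c))**(-k)."""
    return (u ** (-1.0 / k) - 1.0) ** (-1.0 / c)

def sample_fgm(n, theta, rng):
    """Sample (U, V) from the FGM copula C(u,v) = uv[1 + theta(1-u)(1-v)]."""
    u, w = rng.uniform(size=(2, n))
    a = theta * (1.0 - 2.0 * u)                 # conditional CDF is quadratic in v
    disc = np.sqrt((1.0 + a) ** 2 - 4.0 * a * w)
    v = np.where(np.abs(a) < 1e-10, w, (1.0 + a - disc) / (2.0 * a + 1e-300))
    return u, v

rng = np.random.default_rng(0)
u, v = sample_fgm(100_000, theta=0.5, rng=rng)
t1 = burr3_inv(u, c=2.0, k=1.5)                 # latent time for cause 1
t2 = burr3_inv(v, c=2.5, k=1.0)                 # latent time for cause 2
t, cause1 = np.minimum(t1, t2), t1 < t2
# empirical sub-distribution F_1(t) = P[T <= t, cause 1]
print("F_1(1.0) ~", np.mean((t <= 1.0) & cause1))
```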

13.
The frozen Gaussian approximation provides a highly efficient computational method for high-frequency wave propagation. The derivation of the method is based on asymptotic analysis. In this paper, for general linear strictly hyperbolic systems, we establish a rigorous convergence result for the frozen Gaussian approximation. As a byproduct, a higher-order frozen Gaussian approximation is developed. © 2011 Wiley Periodicals, Inc.

14.
An adaptive model reduction algorithm is proposed for systems of ODEs from chemical kinetics. Its goal is to provide an accurate approximation to the solution of these systems faster than could be obtained through straightforward numerical integration. The algorithm approximates a system with a sequence of reduced models, each one appropriate to the dynamics of the system during a period of the trajectory. Reduced models are identical to the original system except for the deletion of some chemical reactions. This saves the cost of computing unimportant reaction coefficients. Both the reduced models and the durations for which they are used are selected adaptively in order to efficiently yield an accurate approximate solution. The performance of the algorithm is assessed through numerical experiments.
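
A minimal sketch of the reaction-deletion idea is shown below for a toy network A → B → C with a slow side reaction A → C; reactions whose instantaneous rates fall below a relative tolerance are switched off for the next integration window. The network, tolerance, and fixed window length are assumptions; the paper selects both the reduced models and their durations adaptively.

```python
# A minimal reaction-deletion sketch for a toy kinetic network; the network,
# tolerance, and fixed window length are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

K = np.array([1.0, 0.5, 1e-6])            # rate constants: A->B, B->C, A->C
S = np.array([[-1,  0, -1],               # stoichiometry (species x reactions)
              [ 1, -1,  0],
              [ 0,  1,  1]])

def rates(y):
    return K * np.array([y[0], y[1], y[0]])

def rhs(t, y, active):
    return S @ (rates(y) * active)        # deleted reactions contribute nothing

def reduced_active(y, rtol=1e-4):
    """Keep only reactions whose current rate is non-negligible."""
    r = rates(y)
    return (r >= rtol * r.max()).astype(float)

y, t = np.array([1.0, 0.0, 0.0]), 0.0
for _ in range(10):                       # re-select the reduced model per window
    active = reduced_active(y)
    sol = solve_ivp(rhs, (t, t + 1.0), y, args=(active,), rtol=1e-8)
    y, t = sol.y[:, -1], sol.t[-1]
print("t =", t, " y =", y, " active reactions:", int(active.sum()))
```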

15.
The present study deals with a support vector regression based metamodeling approach for efficient seismic reliability analysis of structures. Various metamodeling approaches, e.g., the response surface method, Kriging interpolation, and artificial neural networks, are usually adopted to overcome the computational challenge of simulation-based seismic reliability analysis. However, the approximation capability of such empirical-risk-minimization-based metamodeling approaches is largely affected by the number of training samples. Support vector regression, based on the principle of structural risk minimization, has shown improved response approximation ability with small-sample learning. The approach is explored here for improved estimation of the seismic reliability of structures in the framework of Monte Carlo simulation. The parameters necessary to construct the metamodel are obtained by a simple, effective search algorithm that solves an optimization sub-problem minimizing the mean square error obtained by cross-validation. The simulation technique is readily applied by random selection of the metamodel to implicitly account for record-to-record variation of earthquakes. Without additional computational burden, the approach avoids an a priori assumption about the distribution of the approximated structural response, unlike the commonly used dual response surface method. The effectiveness of the proposed approach compared with the usual polynomial response surface and neural network based metamodels is numerically demonstrated.
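
The cross-validation-based parameter search can be sketched as follows, with scikit-learn's grid search standing in for the paper's search algorithm; the response function and grid values are assumptions for illustration.

```python
# A minimal sketch of selecting SVR metamodel parameters by minimizing
# cross-validated mean squared error; the grid and response are illustrative.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

def response(x):                              # stand-in structural response
    return np.sin(x[:, 0]) + 0.5 * x[:, 1] ** 2

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(100, 2))         # small training sample
y = response(X)

grid = {"C": [1.0, 10.0, 100.0],
        "epsilon": [0.001, 0.01, 0.1],
        "gamma": [0.1, 0.5, 1.0]}
search = GridSearchCV(SVR(kernel="rbf"), grid,
                      scoring="neg_mean_squared_error", cv=5).fit(X, y)
print("selected parameters:", search.best_params_)
```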

16.
A function on R^n with multiple local minima is approximated from below, via linear programming, by a linear combination of convex kernel functions using sample points from the given function. The resulting convex kernel underestimator is then minimized, using either a linear equation solver for a linear-quadratic kernel or a Newton method for a Gaussian kernel, to obtain an approximation to a global minimum of the original function. Successive shrinking of the original search region to which this procedure is applied leads to fairly accurate estimates, within 0.0001% for a Gaussian kernel function, relative to global minima of synthetic nonconvex piecewise-quadratic functions for which the global minima are known exactly. Gaussian kernel underestimation reduces by a factor of ten the relative error obtained using a piecewise-linear underestimator (O.L. Mangasarian, J.B. Rosen, and M.E. Thompson, Journal of Global Optimization, Volume 32, Number 1, Pages 1–9, 2005), while cutting computational time by an average factor of over 28.
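
A minimal one-dimensional sketch of the underestimation step is given below, using a convex quadratic fitted by linear programming as a simple stand-in for the paper's convex kernel combinations; the test function is illustrative, and the successive region shrinking is not reproduced.

```python
# A minimal LP underestimation sketch; a convex quadratic stands in for the
# paper's convex kernel combinations, and the test function is illustrative.
import numpy as np
from scipy.optimize import linprog

f = lambda x: np.sin(3 * x) + 0.5 * x ** 2     # toy function with several minima
xs = np.linspace(-3, 3, 60)                    # sample points
fs = f(xs)

# underestimator q(x) = c0 + c1*x + c2*x**2 with c2 >= 0 (convexity);
# maximize sum_i q(x_i) subject to q(x_i) <= f(x_i) at every sample
A = np.column_stack([np.ones_like(xs), xs, xs ** 2])
res = linprog(c=-A.sum(axis=0), A_ub=A, b_ub=fs,
              bounds=[(None, None), (None, None), (0, None)])
c0, c1, c2 = res.x
x_star = -c1 / (2 * max(c2, 1e-12))            # minimizer of the underestimator
print("underestimator minimum at x =", x_star, ", f there =", f(x_star))
```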

17.
Moment-based methods use only the statistical moments of random variables for reliability analysis. The cumulative distribution function (CDF) or probability density function (PDF) of a performance function can be constructed from its first few statistical moments, and the failure probability can be evaluated accordingly. However, existing moment-based methods may lead to large errors or instability. The present paper therefore combines the high order moment method with the common saddlepoint approximation technique for more accurate reliability estimation, and an improved high order moment-based saddlepoint approximation (SPA) method for reliability analysis is presented. The approximated cumulant generating function (CGF) and the CDF of the performance function are constructed in terms of its first four statistical moments. The developed method can be used for reliability evaluation of uncertain structures whose random variables follow any type of distribution. Several numerical examples are given to demonstrate the efficacy and accuracy of the proposed method. Comparisons of the new method with several existing high order moment methods are also made in the reliability assessment.
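
For illustration, a saddlepoint tail approximation can be assembled from the first four cumulants by truncating the CGF and applying the Lugannani-Rice formula, as sketched below; the quartic CGF truncation is an assumption of this sketch rather than the paper's improved CGF construction, and it is checked against an exact Gaussian tail.

```python
# A minimal saddlepoint tail approximation from the first four cumulants via
# a truncated CGF and the Lugannani-Rice formula; the truncation is an
# assumption of this sketch.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def spa_tail(x, k1, k2, k3, k4):
    """P[Y > x] with truncated CGF K(t) = k1 t + k2 t^2/2 + k3 t^3/6 + k4 t^4/24."""
    K = lambda t: k1 * t + k2 * t ** 2 / 2 + k3 * t ** 3 / 6 + k4 * t ** 4 / 24
    K1 = lambda t: k1 + k2 * t + k3 * t ** 2 / 2 + k4 * t ** 3 / 6
    K2 = lambda t: k2 + k3 * t + k4 * t ** 2 / 2
    ts = brentq(lambda t: K1(t) - x, -50.0, 50.0)   # saddlepoint: K'(t) = x
    w = np.sign(ts) * np.sqrt(2.0 * (ts * x - K(ts)))
    u = ts * np.sqrt(K2(ts))
    return norm.sf(w) + norm.pdf(w) * (1.0 / u - 1.0 / w)

# sanity check: with k3 = k4 = 0 the result matches the exact Gaussian tail
print(spa_tail(3.0, 0.0, 1.0, 0.0, 0.0), "vs", norm.sf(3.0))
```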

18.
An optimized MPI+OpenACC implementation model that performs efficiently in CPU/GPU systems using large-eddy simulation is presented. The code was validated for the simulation of wave boundary-layer flows against numerical and experimental data in the literature. A direct Fast-Fourier-Transform-based solver was developed for the solution of the Poisson equation for pressure, taking advantage of the periodic boundary conditions. This solver was optimized for parallel execution on CPUs and is 10 times faster in computational time than a typical iterative preconditioned conjugate gradient solver on GPUs. In terms of parallel performance, an overlapping strategy was developed to reduce the overhead of performing MPI communications with GPUs. As a result, the weak scaling of the algorithm was improved by up to 30%. Finally, a large-scale simulation (Re = 2 × 10⁵) using a grid of 4 × 10⁸ cells was executed, and the performance of the code was analyzed. The simulation was launched using up to 512 nodes (512 GPUs + 6144 CPU cores) on one of the current top 10 supercomputers of the world (Piz Daint). A comparison of the overall computational time showed that the GPU version was 4.2 times faster than the CPU one. The parallel efficiency of this strategy (47%) is competitive with state-of-the-art CPU implementations, and it has the potential to take advantage of modern supercomputing capabilities.
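
The direct FFT-based Poisson solve for periodic boundaries can be sketched in a few lines; the 2D NumPy version below is an illustration only, not the paper's 3D MPI+OpenACC implementation.

```python
# A minimal NumPy sketch of a direct FFT-based Poisson solve with periodic
# boundary conditions (2D here for brevity).
import numpy as np

def poisson_fft(rhs, L=2 * np.pi):
    """Solve lap(p) = rhs on a periodic square of side L (zero-mean solution)."""
    n = rhs.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)       # wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx ** 2 + ky ** 2
    k2[0, 0] = 1.0                                   # avoid 0/0 at the mean mode
    p_hat = -np.fft.fft2(rhs) / k2
    p_hat[0, 0] = 0.0                                # fix the arbitrary constant
    return np.real(np.fft.ifft2(p_hat))

n = 128
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
p_exact = np.sin(X) * np.cos(2 * Y)                  # lap(p_exact) = -5 p_exact
print("max error:", np.abs(poisson_fft(-5.0 * p_exact) - p_exact).max())
```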

19.
Practically, the performance of many engineering problems is defined by complex implicit limit state functions, and approximating small failure probabilities with Monte Carlo simulation (MCS) is time-consuming and inefficient for such complex performance functions. The M5 model tree (M5Tree) is a robust approach for simulation and prediction that can deal with complex implicit problems by dividing them into smaller ones. To improve the efficiency of reliability analysis while retaining an accurate approximated failure probability, an efficient reliability method combining MCS and the M5Tree is proposed, in which the M5Tree calibrates the performance function and MCS estimates the failure probability. The advantages of the M5Tree metamodel, including its simplicity and accuracy in evaluating the actual performance function, are investigated through five nonlinear, complex mathematical and structural reliability problems. The proposed MCS- and M5Tree-based reliability method reduced the computational effort of evaluating the performance function in reliability analysis, and the M5Tree significantly increased the efficiency of the analysis while yielding accurate failure probabilities.
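
A minimal sketch of the surrogate-plus-MCS idea follows; scikit-learn's DecisionTreeRegressor (constant leaves) stands in for a true M5 model tree, which fits linear models in its leaves, and the performance function is a toy assumption.

```python
# A minimal tree-surrogate-plus-MCS sketch; DecisionTreeRegressor stands in
# for a true M5 model tree, and g is a toy performance function.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def g(x):                                       # nonlinear performance function
    return 3.0 - 0.5 * x[:, 0] ** 2 + np.sin(x[:, 1]) - 0.2 * x[:, 0] * x[:, 1]

rng = np.random.default_rng(0)
X_train = 2.0 * rng.standard_normal((500, 2))   # design of experiments
tree = DecisionTreeRegressor(min_samples_leaf=5).fit(X_train, g(X_train))

X_mc = rng.standard_normal((1_000_000, 2))      # large MCS population
pf_surrogate = np.mean(tree.predict(X_mc) <= 0.0)
pf_direct = np.mean(g(X_mc) <= 0.0)             # direct MCS reference
print(f"surrogate Pf = {pf_surrogate:.4e}, direct Pf = {pf_direct:.4e}")
```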

20.
Decisions during the reliability growth development process of engineering equipment involve trade-offs between cost and risk. However slight, there exists a chance that an item of equipment will not function as planned during its specified life. Consequently, the producer can incur a financial penalty. To date, reliability growth research has focused on the development of models to estimate the rate of failure from test data. Such models are used to support decisions about the effectiveness of options to improve reliability. The extension of reliability growth models to incorporate the financial costs associated with 'unreliability' is much neglected. In this paper, we extend a Bayesian reliability growth model to include cost analysis. The rationale of the stochastic process underpinning the growth model and the cost structures are described. The ways in which this model can be used to support cost–benefit analysis during product development are discussed and illustrated through a simple case.
