Similar Documents
20 similar documents found.
1.
A Support Vector Machine Method for Structural Reliability Analysis (cited by 10; 0 self-citations, 10 by others)
To address the problem that the performance function in structural reliability analysis often cannot be expressed explicitly, the support vector machine (SVM) is introduced into structural reliability analysis. The SVM is a classification technique that implements the structural risk minimization principle and offers excellent small-sample learning performance and good generalization; on this basis, two SVM-based structural reliability analysis methods are proposed. Compared with the traditional response surface method and neural network method, the distinguishing feature of the SVM-based reliability methods is high-accuracy function approximation from small samples while avoiding the curse of dimensionality. Numerical examples show that the SVM method approximates the true performance function well within the sampling range and reduces the number of implicit performance-function evaluations (usually finite element analyses), giving it practical engineering value.

2.
This paper deals with beams under static loads, in the presence of multiple cracks with uncertain parameters. The crack is modelled as a linearly-elastic rotational spring and, following a non-probabilistic approach, both the stiffness and the position of the spring are taken as uncertain-but-bounded parameters. A novel approach is proposed to compute the bounds of the response. The key idea is a preliminary monotonicity test, which evaluates sensitivity functions of the beam response with respect to the separate variation of every uncertain parameter within the pertinent interval. Next, two alternative procedures calculate lower and upper bounds of the response. If the response is monotonic with respect to all the uncertain parameters, the bounds are calculated by a straightforward sensitivity-based method making use of the sensitivity functions built in the monotonicity test. In contrast, if the response is not monotonic with respect to even a single parameter, the bounds are evaluated via a global optimization technique. The presented approach applies to every response function, and the implementation takes advantage of closed analytical forms for all response variables and related sensitivity functions. Numerical results prove the efficiency and robustness of the approach, which provides very accurate bounds even for large uncertainties, avoiding the computational effort required by the vertex method and Monte Carlo simulation.
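The monotonicity-test-then-endpoint-bounds idea can be illustrated on a toy problem. The response function below is a hypothetical monotone deflection formula invented for this sketch (not the paper's beam solution), and the sensitivities are taken by finite differences rather than the paper's closed analytical forms.

```python
import numpy as np

# Hypothetical tip-deflection response of a cracked beam: it grows as
# the rotational-spring stiffness k drops and shrinks as the crack
# position a moves toward the tip (illustrative closed form only).
def response(k, a, P=1.0, L=1.0, EI=1.0):
    return P * L**3 / (3 * EI) + P * (L - a) ** 2 / k

k_lo, k_hi = 5.0, 10.0   # uncertain-but-bounded stiffness interval
a_lo, a_hi = 0.2, 0.6    # uncertain-but-bounded position interval

# Monotonicity test: sign of the sensitivities at the interval midpoint.
km, am = 0.5 * (k_lo + k_hi), 0.5 * (a_lo + a_hi)
h = 1e-6
dk = (response(km + h, am) - response(km - h, am)) / (2 * h)
da = (response(km, am + h) - response(km, am - h)) / (2 * h)

# If the response is monotone in every parameter, the bounds are
# attained at interval endpoints chosen by the sensitivity signs.
lo = response(k_hi if dk < 0 else k_lo, a_hi if da < 0 else a_lo)
hi = response(k_lo if dk < 0 else k_hi, a_lo if da < 0 else a_hi)
```

With both sensitivities negative here, the lower bound sits at (k_hi, a_hi) and the upper bound at (k_lo, a_lo); only two response evaluations are needed once the test passes, versus a full vertex enumeration or Monte Carlo sweep.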

3.
The paper describes the theoretical apparatus and algorithmic application of Green matrix-valued functions for time-domain analysis of systems of linear stochastic integro-differential equations. These systems are assumed to be subjected to Gaussian nonstationary stochastic noises in the presence of model parameter uncertainties described in the framework of probability theory. If an uncertain model parameter is fixed to a given value, the time history of the system is fully represented by a second-order Gaussian vector stochastic process whose properties are completely defined by its conditional vector-valued mean function and matrix-valued covariance function. The proposed scheme combines two subschemes: the first explicitly defines closed-form relations for symbolic and numeric computation of the conditional mean and covariance functions, and the second calculates unconditional characteristics by the Monte Carlo method. The full scheme, implemented using Wolfram Mathematica and Intel Fortran, is demonstrated by an example estimating the nonstationary stochastic response of a mechanical system with a thermoviscoelastic component. Results obtained with the proposed scheme are compared with a reference solution constructed by direct Monte Carlo simulation.

4.
Maximum likelihood estimation in random effects models for non-Gaussian data is a computationally challenging task that currently receives much attention. This article shows that the estimation process can be facilitated by the use of automatic differentiation, which is a technique for exact numerical differentiation of functions represented as computer programs. Automatic differentiation is applied to an approximation of the likelihood function, obtained by using either Laplace's method of integration or importance sampling. The approach is applied to generalized linear mixed models. The computational speed is high compared to the Monte Carlo EM algorithm and the Monte Carlo Newton–Raphson method.
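The core mechanism behind automatic differentiation, exact derivatives propagated through a computer program by the chain rule, can be shown with a toy forward-mode implementation using dual numbers. This is a pedagogical sketch, not the article's software, and the differentiated expression is an invented toy, not an actual Laplace-approximated likelihood.

```python
import math

# Minimal forward-mode automatic differentiation via dual numbers:
# each value carries (v, dv), and every arithmetic operation
# propagates the exact derivative by the chain rule.
class Dual:
    def __init__(self, v, d=0.0):
        self.v, self.d = v, d
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.v + o.v, self.d + o.d)
    __radd__ = __add__
    def __sub__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.v - o.v, self.d - o.d)
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.v * o.v, self.v * o.d + self.d * o.v)
    __rmul__ = __mul__

def exp(x):
    e = math.exp(x.v)
    return Dual(e, e * x.d)   # d/dx exp(x) = exp(x) * x'

# Toy "likelihood-like" expression f(theta) = theta^2 - 3*exp(theta),
# differentiated exactly at theta = 0.5 by seeding d = 1.
theta = Dual(0.5, 1.0)
f = theta * theta - exp(theta) * 3.0
# f.d now holds the exact derivative 2*theta - 3*exp(theta) at 0.5.
```

Unlike finite differences, there is no step-size error: `f.d` is exact to machine precision, which is what makes AD attractive for the gradients and Hessians a Laplace approximation needs.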

5.
A general framework is proposed for what we call the sensitivity derivative Monte Carlo (SDMC) solution of optimal control problems with a stochastic parameter. This method employs the residual in the first-order Taylor series expansion of the cost functional in terms of the stochastic parameter rather than the cost functional itself. A rigorous estimate is derived for the variance of the residual, and it is verified by numerical experiments involving the generalized steady-state Burgers equation with a stochastic coefficient of viscosity. Specifically, the numerical results show that for a given number of samples, the present method yields an order of magnitude higher accuracy than a conventional Monte Carlo method. In other words, the proposed variance reduction method based on sensitivity derivatives is shown to accelerate convergence of the Monte Carlo method. As the sensitivity derivatives are computed only at the mean values of the relevant parameters, the related extra cost of the proposed method is a fraction of the total time of the Monte Carlo method.
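The variance-reduction mechanism is easy to demonstrate on a scalar toy: average the residual of the first-order Taylor expansion instead of the cost itself, then add back the Taylor terms (the linear term has known mean zero). The cost functional `J` below is an invented toy, not the Burgers-equation cost of the paper.

```python
import numpy as np

# Sketch of the SDMC idea: r(xi) = J(xi) - [J(mu) + J'(mu)(xi - mu)]
# has far smaller variance than J(xi) when sigma is small.
rng = np.random.default_rng(1)

def J(xi):
    return np.sin(xi) + 0.5 * xi**2   # hypothetical cost functional

mu, sigma = 0.3, 0.05                 # stochastic parameter xi ~ N(mu, sigma^2)
xi = rng.normal(mu, sigma, 20_000)

# Sensitivity derivative evaluated once, at the mean only.
h = 1e-6
dJ = (J(mu + h) - J(mu - h)) / (2 * h)

plain = J(xi)                                # conventional MC samples
resid = J(xi) - (J(mu) + dJ * (xi - mu))     # SDMC residual samples
sdmc_mean = resid.mean() + J(mu)             # E[linear term] = 0

# The residual estimator targets the same mean with much lower variance.
var_ratio = resid.var() / plain.var()
```

Both estimators are unbiased for E[J(xi)], but the residual samples vary only at second order in (xi - mu), which is the source of the order-of-magnitude accuracy gain reported in the paper.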

6.
Zhang Weiwei, 《经济数学》, 2020, 37(4): 159-163
This paper studies parameter estimation in semiparametric varying-coefficient partially linear models under stochastic linear restrictions. When the variables in the linear part of the regression model suffer from multicollinearity, a weighted stochastic-restricted s-K estimator of the parameter vector is constructed based on the profile least squares method, the s-K estimator, and the weighted mixed estimator. Necessary and sufficient conditions under which the new estimator dominates the s-K estimator and the weighted mixed estimator under the mean squared error matrix criterion are then given. Finally, Monte Carlo simulations verify the finite-sample properties of the proposed estimator.

7.
The cluster-weighted model (CWM) is a mixture model with random covariates that allows for flexible clustering/classification and distribution estimation of a random vector composed of a response variable and a set of covariates. Within this class of models, the generalized linear exponential CWM is introduced here, especially for modeling bivariate data of mixed type. Its natural counterpart in the family of latent class models is also defined. Maximum likelihood parameter estimates are derived using the expectation-maximization algorithm, and some computational issues are detailed. Through Monte Carlo experiments, the classification performance of the proposed model is compared with other mixture-based approaches, consistency of the estimators of the regression coefficients is evaluated, and several likelihood-based information criteria are compared for selecting the number of mixture components. Finally, an application to real data is considered.

8.
The present study deals with a support vector regression-based metamodeling approach for efficient seismic reliability analysis of structures. Various metamodeling approaches, e.g. the response surface method, Kriging interpolation, and artificial neural networks, are usually adopted to overcome the computational challenge of simulation-based seismic reliability analysis. However, the approximation capability of such empirical risk minimization principle-based metamodels is strongly affected by the number of training samples. Support vector regression, based on the principle of structural risk minimization, has shown improved response approximation ability with small-sample learning. The approach is explored here for improved estimation of the seismic reliability of structures in the framework of Monte Carlo simulation. The parameters needed to construct the metamodel are obtained by a simple, effective search algorithm that solves an optimization sub-problem minimizing the mean square error obtained by cross-validation. The simulation technique is readily applied by random selection of the metamodel to implicitly consider record-to-record variation of earthquakes. Without additional computational burden, the approach avoids a prior distributional assumption about the approximated structural response, unlike the commonly used dual response surface method. The effectiveness of the proposed approach compared to the usual polynomial response surface and neural network metamodels is numerically demonstrated.

9.
Implementations of the Monte Carlo EM Algorithm (cited by 1; 0 self-citations, 1 by others)
The Monte Carlo EM (MCEM) algorithm is a modification of the EM algorithm in which the expectation in the E-step is computed numerically through Monte Carlo simulation. The most flexible and generally applicable approach to obtaining a Monte Carlo sample in each iteration of an MCEM algorithm is through Markov chain Monte Carlo (MCMC) routines such as the Gibbs and Metropolis–Hastings samplers. Although MCMC estimation presents a tractable solution to problems where the E-step is not available in closed form, two issues arise when implementing this MCEM routine: (1) how do we minimize the computational cost of obtaining an MCMC sample? and (2) how do we choose the Monte Carlo sample size? We address the first question through an application of importance sampling, whereby samples drawn during previous EM iterations are recycled rather than running an MCMC sampler at each MCEM iteration. The second question is addressed through an application of regenerative simulation: we obtain approximately independent and identically distributed samples by subsampling the generated MCMC sample during different renewal periods. Standard central limit theorems may thus be used to gauge Monte Carlo error. In particular, we apply an automated rule for increasing the Monte Carlo sample size when the Monte Carlo error overwhelms the EM estimate at any given iteration. We illustrate our MCEM algorithm through analyses of two datasets fitted by generalized linear mixed models. As part of these applications, we demonstrate the improvement in computational cost and efficiency of our routine over alternative MCEM strategies.

10.
Markov chain Monte Carlo (MCMC) algorithms play an important role in statistical inference problems dealing with intractable probability distributions. Recently, many MCMC algorithms such as Hamiltonian Monte Carlo (HMC) and Riemannian Manifold HMC have been proposed to provide distant proposals with high acceptance rates. These algorithms, however, tend to be computationally intensive, which can limit their usefulness, especially for big-data problems, due to repetitive evaluations of functions and statistical quantities that depend on the data. This issue occurs in many statistical computing problems. In this paper, we propose a novel strategy that exploits smoothness (regularity) in parameter space to improve the computational efficiency of MCMC algorithms: when a function or statistical quantity must be evaluated at a point in parameter space, it is interpolated from precomputed or previously computed values. More specifically, we focus on HMC algorithms that use geometric information for faster exploration of probability distributions. Our method precomputes the required geometric information on a set of grids before running the sampling algorithm and, at each HMC iteration, approximates the geometric information at the sampler's current location using the precomputed values at nearby grid points. A sparse-grid interpolation method is used for high-dimensional problems. Computational examples illustrate the advantages of our method.
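The precompute-then-interpolate idea can be sketched in one dimension: evaluate the (nominally expensive) gradient of the log-density on a grid once, then have the sampler's leapfrog integrator look it up by interpolation. The target density is an invented toy, and plain linear interpolation stands in for the paper's sparse-grid machinery.

```python
import numpy as np

# Toy target: log pi(q) = -q^4/4 - q^2/2, whose gradient is the
# quantity HMC needs repeatedly and which we pretend is expensive.
def grad_log_density(q):
    return -q**3 - q

# Precompute before sampling: the "expensive" evaluations happen once.
grid = np.linspace(-3.0, 3.0, 201)
grid_vals = grad_log_density(grid)

def grad_approx(q):
    # Cheap lookup at sampling time via linear interpolation.
    return np.interp(q, grid, grid_vals)

# One leapfrog step of HMC driven by the interpolated gradient.
def leapfrog(q, p, eps=0.05):
    p = p + 0.5 * eps * grad_approx(q)
    q = q + eps * p
    p = p + 0.5 * eps * grad_approx(q)
    return q, p

q0, p0 = 0.5, 1.0
q1, p1 = leapfrog(q0, p0)
```

On this dense grid the interpolation error is far below the leapfrog discretization error, so the sampler's behavior is essentially unchanged while every gradient call becomes a table lookup.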

11.
The method of linear associative memory (LAM), a notion from the field of artificial neural nets, has recently been applied to nonlinear parameter estimation. In the LAM method, a model response, nonlinear with respect to the parameters, is approximated linearly by a matrix, which maps inversely from a response vector to a parameter vector. This matrix is determined from a set of initial training parameter vectors and their response vectors, and can be updated recursively and adaptively with each newly generated parameter-response vector pair. The advantage of LAM is that it can yield a good estimate of the true parameters from a given observed response, even if the initial training parameter vectors are far from the true values. In this paper, we present a weighted linear associative memory (WLAM) for nonlinear parameter estimation. WLAM improves LAM by introducing a weighting oriented toward the observed response vector. The basic idea is to weight each parameter-response vector pair in the cost function such that, if a response vector is closer to the observed one, the pair plays a more important role in the cost function. This weighting significantly improves the accuracy of parameter estimation compared with an unweighted LAM. In addition, we are able to construct the associative memory matrix recursively while taking the weighting procedure into account, and simultaneously update the ridge parameter of the cost function, further improving the efficiency of the WLAM estimation. These features make WLAM a powerful tool for nonlinear parameter estimation. This work was supported by National Science Foundation Grants BCS-93-15886 and INT-94-17206. We thank Mr. L. Yobas for fruitful discussions.

12.
In structural reliability analysis, the main purpose is the computation of the reliability index or the probability of failure. The Hasofer–Lind and Rackwitz–Fiessler (HL-RF) method is widely used within the category of first-order reliability methods (FORM). However, this method cannot be trusted for highly nonlinear limit state functions. The two methods proposed in this paper replace the original real-valued constraint of FORM with a non-negative constraint, in all steps and throughout the whole procedure. First, the non-negative constraint is used directly to construct a non-negative Lagrange function and a search direction vector. Then, the first- and second-order Taylor approximations of the non-negative constraint are employed to compute the step sizes of the first and second proposed methods, respectively. The contribution of the non-negative constraint and the effective approach to determining step sizes lead to efficient computation of the reliability index in nonlinear problems. The robustness and efficiency of the two proposed methods are shown in various mathematical and structural examples from the literature.
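For context, the baseline HL-RF iteration that the paper's methods improve upon is short enough to sketch: each step linearizes the limit state at the current point in standard normal space and jumps to the closest point of the linearized surface to the origin; the reliability index is the norm of the converged point. The limit state below is an illustrative linear example, not one from the paper.

```python
import numpy as np

# Baseline HL-RF iteration for a limit state g(u) in standard normal
# space; beta = ||u*|| at the converged most probable point u*.
def hlrf(g, grad_g, u0, tol=1e-8, max_iter=100):
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        gv, gr = g(u), grad_g(u)
        # Closest point of the linearized surface g(u_k) + gr.(u - u_k) = 0
        # to the origin:
        u_new = (gr @ u - gv) / (gr @ gr) * gr
        if np.linalg.norm(u_new - u) < tol:
            return u_new, np.linalg.norm(u_new)
        u = u_new
    return u, np.linalg.norm(u)

# Linear limit state g(u) = 3 - u1 - u2: exact beta = 3 / sqrt(2).
g = lambda u: 3.0 - u[0] - u[1]
grad_g = lambda u: np.array([-1.0, -1.0])
u_star, beta = hlrf(g, grad_g, np.zeros(2))
```

For a linear limit state the iteration converges immediately; it is on highly nonlinear g, as the abstract notes, that this plain recursion can oscillate or diverge, motivating the paper's constrained step-size rules.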

13.
In this paper, we consider variable selection in partial linear single-index models under the assumption that the vector of regression coefficients is sparse. We apply a penalized spline to estimate the nonparametric function and the SCAD penalty to obtain sparse estimates of the regression parameters in both the linear and single-index parts of the model. Under some mild conditions, it is shown that the penalized estimators have the oracle property, in the sense that they are asymptotically normal with the same mean and covariance they would have if the zero coefficients were known in advance. Our model has a least squares representation, so standard least squares programming algorithms can be used without extra programming effort. Moreover, parametric estimation, variable selection, and nonparametric estimation can be realized in one step, which greatly improves computational stability. The finite-sample performance of the penalized estimators is evaluated through Monte Carlo studies and illustrated with a real data set.

14.
In this paper, a Bayesian hierarchical model for variable selection and estimation in the context of binary quantile regression is proposed. Existing approaches to variable selection in a binary classification context are sensitive to outliers, heteroskedasticity, or other anomalies of the latent response. The method proposed in this study overcomes these problems in an attractive and straightforward way. A Laplace likelihood and Laplace priors for the regression parameters are proposed and estimated with Bayesian Markov chain Monte Carlo; the resulting model is equivalent to the frequentist lasso procedure. A conceptual result is that, by doing so, the binary regression model is moved from a Gaussian to a fully Laplacian framework without sacrificing much computational efficiency. In addition, an efficient Gibbs sampler for estimating the model parameters is proposed that is superior to the Metropolis algorithm used in previous studies on Bayesian binary quantile regression. Both the simulation studies and the real data analysis indicate that the proposed method performs well in comparison with other methods. Moreover, because the base model is binary quantile regression, the approach provides much more detailed insight into the effects of the covariates. An implementation of the lasso procedure for binary quantile regression models is available in the R package bayesQR.

15.
In this article we study penalized regression splines (P-splines), which are low-order basis splines with a penalty to avoid undersmoothing. Such P-splines are typically not spatially adaptive, and hence can have trouble when functions are varying rapidly. Our approach is to model the penalty parameter inherent in the P-spline method as a heteroscedastic regression function. We develop a full Bayesian hierarchical structure to do this and use Markov chain Monte Carlo techniques for drawing random samples from the posterior for inference. The advantage of using a Bayesian approach to P-splines is that it allows for simultaneous estimation of the smooth functions and the underlying penalty curve in addition to providing uncertainty intervals of the estimated curve. The Bayesian credible intervals obtained for the estimated curve are shown to have pointwise coverage probabilities close to nominal. The method is extended to additive models with simultaneous spline-based penalty functions for the unknown functions. In simulations, the approach achieves very competitive performance with the current best frequentist P-spline method in terms of frequentist mean squared error and coverage probabilities of the credible intervals, and performs better than some of the other Bayesian methods.
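The non-adaptive baseline that the article improves upon, a basis fit shrunk by a single global roughness penalty, can be sketched with an identity basis and a second-order difference penalty (a Whittaker-style smoother in the P-spline spirit; the article's contribution is to let the penalty itself vary with x). The data below are simulated for illustration.

```python
import numpy as np

# Minimal penalized smoother: solve (I + lam * D2' D2) fhat = y, where
# D2 is the second-difference operator and lam a single global penalty.
rng = np.random.default_rng(2)
n = 200
x = np.linspace(0.0, 1.0, n)
truth = np.sin(4 * np.pi * x)
y = truth + rng.normal(0.0, 0.3, n)

D2 = np.diff(np.eye(n), n=2, axis=0)       # (n-2) x n second differences
lam = 50.0
fhat = np.linalg.solve(np.eye(n) + lam * D2.T @ D2, y)

# Roughness measure: sum of squared second differences.
rough = lambda v: np.sum(np.diff(v, 2) ** 2)
```

A single `lam` trades smoothness against fidelity uniformly across x; when the true curve varies rapidly in one region and slowly in another, no single value works well everywhere, which is exactly the spatial-adaptivity problem the heteroscedastic penalty function addresses.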

16.
A definition of the E-Bayes estimate of a parameter is given. For the Pareto distribution with known scale parameter, the E-Bayes and hierarchical Bayes estimates of the shape parameter are derived under squared-error loss, and a simulated example is given using the Monte Carlo method. Finally, the methods are applied to real data on golf players' incomes; the results show that the proposed method is feasible and easy to apply.
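The E-Bayes construction can be sketched for this setting. For a Pareto sample with known scale sigma, a Gamma(a, b) prior on the shape theta under squared-error loss gives the Bayes estimate (n + a) / (b + T) with T = sum(log(x_i / sigma)); the E-Bayes estimate averages it over a hyperprior on a hyperparameter. The Uniform(0, c) hyperprior on b below is an illustrative choice, not necessarily the one used in the paper.

```python
import numpy as np

# Simulate Pareto(theta, sigma) data by inverse-CDF sampling.
rng = np.random.default_rng(3)
sigma, theta_true, n = 1.0, 2.0, 200
x = sigma * (1.0 - rng.random(n)) ** (-1.0 / theta_true)

T = np.log(x / sigma).sum()
a, c = 1.0, 2.0

# Bayes estimate of the shape under a Gamma(a, b) prior and squared loss:
bayes = lambda b: (n + a) / (b + T)

# E-Bayes: average the Bayes estimate over b ~ Uniform(0, c), which has
# the closed form (1/c) * int_0^c (n+a)/(b+T) db:
e_bayes = (n + a) / c * np.log((c + T) / T)
```

Because the Bayes estimate is decreasing in b, the E-Bayes estimate always lies between bayes(c) and bayes(0), i.e. the averaging moderates the influence of any single hyperparameter choice.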

17.
When the error term of a linear regression model does not follow a normal distribution, or multiple outliers are present, certain functions of the residual ranks can be introduced into the estimation model as weights to reduce the adverse influence of outliers. This paper reviews a class of robust regression methods based on residual ranks in terms of parameter estimation, robustness properties, and regression diagnostics. Simulation studies and a real-data example show that the R and GR estimates are robust regression methods with relatively high estimation efficiency; the GR estimate guards against outliers in both the X and Y spaces, while the high-breakdown HBR estimate can trade off robustness against estimation efficiency by tuning a parameter.

18.
Shrinkage estimators of a partially linear regression parameter vector are constructed by shrinking estimators in the direction of the estimate which is appropriate when the regression parameters are restricted to a linear subspace. We investigate the asymptotic properties of positive Stein-type and improved pretest semiparametric estimators under quadratic loss. Under an asymptotic distributional quadratic risk criterion, their relative dominance picture is explored analytically. It is shown that positive Stein-type semiparametric estimators perform better than the usual Stein-type and least squares semiparametric estimators, and that an improved pretest semiparametric estimator is superior to the usual pretest semiparametric estimator. We also consider an absolute penalty type estimator for partially linear models and present Monte Carlo simulation comparisons of the positive shrinkage, improved pretest, and absolute penalty type estimators. The comparison shows that the shrinkage method performs better than the absolute penalty type estimation method when the dimension of the parameter space is much larger than that of the linear subspace.

19.
This paper proposes a transformed random effects model for analyzing non-normal panel data where both the response and (some of) the covariates are subject to transformations for inducing flexible functional form, normality, homoscedasticity, and simple model structure. We develop a maximum likelihood procedure for model estimation and inference, along with a computational device which makes the estimation procedure feasible for large panels. We provide model specification tests that take into account the fact that parameter values for error components cannot be negative. We illustrate the model and methods with two applications: state production and wage distribution. The empirical results strongly favor the new model over the standard ones, where either a linear or log-linear functional form is employed. Monte Carlo simulation shows that maximum likelihood inference is quite robust against mild departures from normality. Copyright © 2009 John Wiley & Sons, Ltd.

20.
Variational approximations provide fast, deterministic alternatives to Markov chain Monte Carlo for Bayesian inference on the parameters of complex, hierarchical models. Variational approximations are often limited in practicality in the absence of conjugate posterior distributions. Recent work has focused on the application of variational methods to models with only partial conjugacy, such as in semiparametric regression with heteroscedastic errors. Here, both the mean and log variance functions are modeled as smooth functions of covariates. For this problem, we derive a mean field variational approximation with an embedded Laplace approximation to account for the nonconjugate structure. Empirical results with simulated and real data show that our approximate method has significant computational advantages over traditional Markov chain Monte Carlo; in this case, a delayed rejection adaptive Metropolis algorithm. The variational approximation is much faster and eliminates the need for tuning parameter selection, achieves good fits for both the mean and log variance functions, and reasonably reflects the posterior uncertainty. We apply the methods to log-intensity data from a small angle X-ray scattering experiment, in which properly accounting for the smooth heteroscedasticity leads to significant improvements in posterior inference for key physical characteristics of an organic molecule.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)