Similar Documents
20 similar documents retrieved.
1.
The computation of Gaussian orthant probabilities has been extensively studied for low-dimensional vectors. Here, we focus on the high-dimensional case and present a two-step procedure relying on both deterministic and stochastic techniques. The proposed estimator relies on splitting the probability into a low-dimensional term and a remainder. While the low-dimensional probability can be estimated by fast and accurate quadrature, the remainder requires Monte Carlo sampling. We further refine the estimation by using a novel asymmetric nested Monte Carlo (anMC) algorithm for the remainder and highlight cases where this approximation brings substantial efficiency gains. The proposed methods are compared against state-of-the-art techniques in a numerical study, which also calls attention to the advantages and drawbacks of the procedure. Finally, the proposed method is applied to derive conservative estimates of excursion sets of expensive-to-evaluate deterministic functions under a Gaussian random field prior, without requiring a Markov assumption. Supplementary material for this article is available online.
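
A minimal sketch of the split (plain conditional Monte Carlo rather than the paper's anMC refinement; the split index `q`, the sample size, and the function name are our own illustrative choices):

```python
import numpy as np
from scipy.stats import multivariate_normal

def orthant_split(mean, cov, b, q, n_mc=100_000, seed=0):
    """Estimate P(X < b) for X ~ N(mean, cov): accurate quadrature on
    the first q coordinates times a crude conditional MC remainder."""
    rng = np.random.default_rng(seed)
    # Low-dimensional term, evaluated by scipy's CDF quadrature.
    p_low = multivariate_normal(mean[:q], cov[:q, :q]).cdf(b[:q])
    # Remainder P(X[q:] < b[q:] | X[:q] < b[:q]), by conditioning plain
    # MC draws (assumes enough draws land in the low-dim orthant).
    x = rng.multivariate_normal(mean, cov, size=n_mc)
    in_low = np.all(x[:, :q] < b[:q], axis=1)
    p_rest = np.all(x[in_low, q:] < b[q:], axis=1).mean()
    return p_low * p_rest
```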

2.
The classical manifold-learning algorithm Isomap degrades, or even fails, when the nonlinear data are sparse. To address this, an improved cut-neighbors isometric feature mapping algorithm (CN-Isomap) is proposed. When the data are sparse, the algorithm first removes "short-circuit" edges from the neighborhood graph by identifying the true "manifold neighbors" of each sample point, and then fits geodesic distances with a shortest-path algorithm, so that the fitted geodesics do not stray from the manifold region. The low-dimensional embedding therefore correctly reflects the intrinsic topology of the sample points in the high-dimensional input space, recovers the low-dimensional manifold hidden in the high-dimensional space, and effectively reduces the dimensionality of sparse nonlinear data. Experiments on benchmark datasets demonstrate the effectiveness of the algorithm. CN-Isomap generalizes Isomap: it handles sparse nonlinear data effectively and remains applicable when the data are not sparse.
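
The Isomap backbone that CN-Isomap modifies can be sketched as follows (the pruning rule for "short-circuit" edges is not given in the abstract, so only its place in the pipeline is marked; metric MDS stands in for the classical eigendecomposition step, and the graph is assumed connected):

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import shortest_path
from sklearn.manifold import MDS

def isomap_embed(X, k=10, n_components=2):
    # k-NN graph; CN-Isomap would prune "short-circuit" edges here,
    # keeping only identified manifold neighbors.
    G = kneighbors_graph(X, k, mode="distance")
    # Geodesic distances fitted by shortest paths over the graph.
    D = shortest_path(G, method="D", directed=False)
    return MDS(n_components=n_components,
               dissimilarity="precomputed").fit_transform(D)
```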

3.
When the classical limit equilibrium method is used for slope reliability analysis, one inevitably faces the problem that the slope performance function is highly nonlinear and implicit. For an implicit performance function, the traditional solution iterates on the function many times to obtain the factor of safety; because the function is complex, this iteration is tedious and inefficient. To address the time-consuming computation of factors of safety in traditional slope reliability analysis, an adaptively sampled Kriging surrogate model based on particle swarm optimization (PSO) is proposed, which can take the place of the performance function when solving for the factor of safety. First, Latin hypercube sampling (LHS) selects a small set of soil-parameter vectors, and the corresponding factors of safety are computed by the limit equilibrium method; these pairs form the initial sample for building the Kriging model. Next, the PSO algorithm adds to the sample set the point most expected to improve the model's fitting accuracy, iteratively raising the accuracy of the Kriging model. Finally, classical Monte Carlo simulation (MCS) yields the failure probability of the slope. Analysis of a two-layer soil slope example demonstrates that the method computes factors of safety accurately and efficiently; it saves substantial computation time, especially when a very large number of factor-of-safety evaluations is required, and is thus an effective method for slope stability reliability analysis.
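
A hedged sketch of the adaptive-Kriging loop (`fs_solver` is a hypothetical limit-equilibrium solver mapping a soil-parameter vector to a factor of safety; a random search over the predictive standard deviation stands in for the paper's PSO-driven infill criterion):

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def adaptive_kriging(fs_solver, bounds, n_init=10, n_add=30, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    # Initial design: a small LHS set, labelled by the expensive solver.
    X = qmc.scale(qmc.LatinHypercube(d=len(lo), seed=seed).random(n_init),
                  lo, hi)
    y = np.array([fs_solver(x) for x in X])
    for _ in range(n_add):
        gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X, y)
        # Infill: most uncertain candidate from a random cloud (the paper
        # instead runs PSO on an expected-improvement-type criterion).
        cand = rng.uniform(lo, hi, size=(2000, len(lo)))
        _, sd = gp.predict(cand, return_std=True)
        x_new = cand[np.argmax(sd)]
        X = np.vstack([X, x_new])
        y = np.append(y, fs_solver(x_new))
    return GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X, y)

# MCS on the final surrogate: pf = np.mean(gp.predict(theta_samples) < 1.0)
```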

4.
The paper presents a fractional moment method for probabilistic lifetime modelling of uncertain engineering systems. A novel feature of the method is the use of fractional moments, as opposed to the integer moments commonly used in the structural reliability literature. The fractional moments are calculated from a small simulated sample of the remaining useful life of the system, and the fractional exponents used to model the system lifetime distribution are determined through entropy maximization rather than assigned by the analyst a priori. Combined with copula theory, the efficiency and accuracy of the proposed method are illustrated by the probabilistic lifetime modelling of several dynamical and discontinuous stochastic systems.
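
A rough sketch of the maximum-entropy step for user-supplied fractional exponents (the paper optimizes the exponents themselves via entropy maximization, which is omitted here; the implementation below is numerically fragile and assumes positive data and positive exponents):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

def maxent_fractional(sample, alphas, x_max=None):
    """Fit p(x) ~ exp(-sum_j lam_j * x**a_j) on (0, x_max] so that the
    fractional moments E[X**a_j] match those of the sample."""
    a = np.asarray(alphas, float)
    x_max = x_max or 3.0 * sample.max()
    m = np.array([np.mean(sample ** aj) for aj in a])

    def log_z(lam):
        z, _ = quad(lambda x: np.exp(-np.dot(lam, x ** a)), 0.0, x_max)
        return np.log(z)

    # Convex dual of entropy maximization: minimize log Z(lam) + lam . m
    lam = minimize(lambda t: log_z(t) + t @ m,
                   np.zeros(len(a)), method="Nelder-Mead").x
    c = np.exp(log_z(lam))
    return lambda x: np.exp(-np.dot(lam, x ** a)) / c
```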

5.
For the problem of classifying multiple-observation samples, a model based on Kernel Discriminant Canonical Correlation (KDCC) is proposed. The algorithm first projects the original samples nonlinearly into a high-dimensional feature space and obtains kernel subspaces via KPCA. In the feature space it then defines a KDCC matrix that maximizes the correlation between kernel subspaces of the same class while minimizing the correlation between kernel subspaces of different classes; the optimal KDCC matrix is trained by an iterative method. Each kernel subspace is projected onto the KDCC matrix to obtain a transformed kernel subspace; canonical correlation serves as the similarity measure between transformed subspaces, and the nearest-neighbor rule as the classification decision, thereby classifying multiple-observation samples. A series of experiments on three databases shows that the proposed method is feasible and effective for multiple-observation sample classification.

6.
An efficient approach, called augmented line sampling, is proposed to locally evaluate the failure probability function (FPF) in structural reliability-based design using only one reliability analysis run of line sampling. The novelty of the approach is that it re-uses the information from a single line sampling analysis to construct the FPF estimate, so repeated evaluations of failure probabilities are avoided. It is shown that, when the design parameters are distribution parameters of the basic random variables, the desired information about the FPF can be extracted from a single implementation of line sampling. Line sampling is a highly efficient and widely used reliability analysis method; the proposed method extends it from failure probability estimation to evaluation of the FPF, which is a challenging task. The required computational effort is neither particularly sensitive to the number of uncertain parameters nor grows with the number of design parameters. Numerical examples are given to show the advantages of the approach.
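
Basic line sampling, the building block being extended here, can be sketched as follows (standard normal space, failure when g(u) <= 0; the root is assumed to be bracketed in [0, 10] along the important direction):

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def line_sampling(g, alpha, dim, n_lines=100, seed=0):
    """g(u) <= 0 is failure in standard normal space; alpha is the
    important direction (need not be normalized on input)."""
    rng = np.random.default_rng(seed)
    alpha = alpha / np.linalg.norm(alpha)
    pf = 0.0
    for _ in range(n_lines):
        u = rng.standard_normal(dim)
        u_perp = u - (u @ alpha) * alpha     # component orthogonal to alpha
        # Distance to the limit state along alpha; assumes g changes
        # sign between 0 and 10 on this line.
        c = brentq(lambda b: g(u_perp + b * alpha), 0.0, 10.0)
        pf += norm.cdf(-c) / n_lines         # line-wise failure probability
    return pf
```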

7.
A new and very fast bootstrap method for sampling without replacement from a finite population is proposed. The method can be used to estimate the variance in sampling with unequal inclusion probabilities and requires neither artificial populations nor bootstrap weights: the bootstrap samples are selected directly from the original sample. The bootstrap procedure has two steps. In the first step, units are selected once with Poisson sampling, using the same inclusion probabilities as the original design. In the second step, half of the non-selected units are randomly selected twice. This procedure enables us to estimate the variance efficiently. A set of simulations shows the advantages of this new resampling method.
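
One bootstrap replicate under a literal reading of the two-step scheme might look like this (a sketch only; the variance estimator built on the replicates is omitted, and `pik` denotes the original inclusion probabilities):

```python
import numpy as np

def bootstrap_replicate(sample, pik, seed=None):
    """Step 1: keep each unit once by Poisson sampling with its original
    inclusion probability. Step 2: select half of the remaining units twice."""
    rng = np.random.default_rng(seed)
    once = rng.random(len(sample)) < pik
    rest = np.flatnonzero(~once)
    twice = rng.choice(rest, size=len(rest) // 2, replace=False)
    idx = np.concatenate([np.flatnonzero(once), twice, twice])
    return sample[idx]
```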

8.
Markov chain Monte Carlo (MCMC) methods may be employed to search for a probability distribution over a bounded space of function arguments, in order to estimate which argument(s) optimize(s) an objective function. This search-based optimization requires sampling the suitability, or fitness, of arguments in the search space. When the objective function or the fitness of arguments varies with time, significant exploration of the search space is required. Search efficiency then becomes a more relevant measure of the usefulness of an MCMC method than traditional measures such as convergence speed to the stationary distribution and asymptotic variance of stationary distribution estimates. Search efficiency refers to how quickly prior information about the search space is traded off for search effort savings; it is optimal when the entropy of the probability distribution over the space during search is maximized. Whereas the Metropolis case of the Hastings MCMC algorithm with fixed candidate generation is optimal with respect to asymptotic variance of stationary distribution estimates, this paper proves that Barker's case is optimal with respect to search efficiency if the fitness of the arguments in the search space is characterized by an exponential function. The latter instance of optimality is beneficial for time-varying optimization that is also model-independent.
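
The two acceptance rules being compared differ in a single line; a minimal sketch for a symmetric proposal:

```python
import numpy as np

def mcmc_step(x, log_f, propose, rng, rule="barker"):
    """One step with a symmetric proposal; only the acceptance rule
    differs between the two cases compared in the paper."""
    y = propose(x, rng)
    d = log_f(y) - log_f(x)
    if rule == "metropolis":
        accept = min(1.0, np.exp(d))        # min(1, f(y)/f(x))
    else:
        accept = 1.0 / (1.0 + np.exp(-d))   # Barker: f(y)/(f(x)+f(y))
    return y if rng.random() < accept else x
```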

9.
The bootstrap method is based on resampling of an original random sample drawn from a population with an unknown distribution. The article shows that, thanks to progress in computer technology, resampling is actually unnecessary if the sample size is not too large: it is possible to generate all possible resamples automatically and calculate every realization of the required statistic. The resulting distribution can be used for point or interval estimation of population parameters or for testing hypotheses. We stress that the exact bootstrap method uses the entire space of resamples, so there is no additional bias resulting from resampling. The method was used to estimate the mean and variance, and comparison of the obtained distributions with the limit distributions confirmed the accuracy of the exact bootstrap method. To compare the exact bootstrap with the basic method (with random sampling), the probability that 1,000 resamples would estimate a parameter to a given accuracy was calculated; there is little chance of obtaining the desired accuracy, which is an argument supporting the exact method. Random sampling may be interpreted as discretization of a continuous variable.
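
For small n, enumerating all n**n equally likely resamples is straightforward (a sketch; memory and time grow very fast with n):

```python
import numpy as np
from itertools import product

def exact_bootstrap(sample, stat):
    """All n**n equally likely resamples and the corresponding
    realizations of `stat` (feasible only for small n)."""
    n = len(sample)
    return np.array([stat(np.array(r)) for r in product(sample, repeat=n)])

# Exact bootstrap distribution of the mean of a sample of size 5:
# dist = exact_bootstrap(np.array([2.0, 3.0, 5.0, 7.0, 11.0]), np.mean)
```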

10.
A new algorithm based on a nonlinear transformation is proposed to improve the classical maximum entropy method and solve practical problems in reliability analysis. The algorithm has three steps. First, the performance function of the reliability analysis is normalized by dividing it by its value when each input equals the mean of the corresponding random variable. Then the normalized performance function is transformed with a monotonic nonlinear function that has an adjustable parameter. Finally, the probability density function and/or the failure probability are predicted by treating the transformed function as a new performance function in the classical maximum entropy procedure, with the statistical moments obtained through the univariate dimension-reduction method. In the proposed method, the uncontrollable error of integration over an infinite interval is removed by transforming it into integration over a bounded one. Three typical nonlinear transformation functions are studied and compared in the numerical examples. Compared with results from Monte Carlo simulation, a proper choice of the adjustable parameter is found to yield better predictions of the failure probability, and the examples confirm that the arctangent transformation outperforms the other transformation functions. The prediction error of the failure probability is controllable if the adjustable parameter is chosen within a given interval, but the suggested value of the parameter can only be given empirically.

11.
Customer Churn Prediction with an LDA Boosting Algorithm
This paper proposes an LDA boost (Linear Discriminant Analysis boost) classification method. The algorithm makes effective use of all sample features and extracts and combines, from the high-dimensional feature space, the most discriminative low-dimensional features, maximizing the ratio of between-class scatter to within-class scatter; it therefore does not overfit and greatly improves efficiency. The effectiveness of the algorithm is validated on a real customer-churn dataset from a commercial bank. Compared with other algorithms such as artificial neural networks, decision trees, and support vector machines, the method significantly improves accuracy, and LDA boosting also shows clear advantages over other boosting algorithms.  相似文献
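
A hedged sketch of the idea, using standard AdaBoost reweighting around LDA base learners (weighted resampling replaces sample weights, since sklearn's LDA fit accepts none; the paper's exact boosting scheme may differ, and labels are assumed in {-1, +1}):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def lda_boost(X, y, n_rounds=10, seed=0):
    rng = np.random.default_rng(seed)
    w = np.full(len(y), 1.0 / len(y))
    ensemble = []
    for _ in range(n_rounds):
        idx = rng.choice(len(y), size=len(y), p=w)   # weighted resample
        clf = LinearDiscriminantAnalysis().fit(X[idx], y[idx])
        miss = clf.predict(X) != y
        err = np.clip(w[miss].sum(), 1e-10, 1 - 1e-10)
        a = 0.5 * np.log((1 - err) / err)            # learner weight
        w *= np.exp(np.where(miss, a, -a))           # boost the misses
        w /= w.sum()
        ensemble.append((a, clf))
    return lambda Xt: np.sign(sum(a * c.predict(Xt) for a, c in ensemble))
```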

12.
An entropy is conceived as a functional on the space of probability distributions. It is used as a measure of diversity (variability) of a population. Cross entropy leads to a measure of dissimilarity between populations. In this paper, we provide a new approach to the construction of a measure of dissimilarity between two populations, not depending on the choice of an entropy function measuring diversity. The approach is based on the principle of majorization, which provides an intrinsic method of comparing the diversities of two populations. We obtain a general class of measures of dissimilarity and show some interesting properties of the proposed index. In particular, it is shown that the measure provides a metric on a probability space. The proposed measure of dissimilarity is essentially a measure of relative difference in diversity between two populations. It satisfies an invariance property which is not shared by other measures of dissimilarity used in ecological studies. A statistical application of the new method is given.
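
The underlying majorization comparison is simple to state in code (a sketch for two distributions on supports of equal size):

```python
import numpy as np

def majorizes(p, q, tol=1e-12):
    """True if p majorizes q, i.e. p is at least as concentrated (less
    diverse) as q; both are probability vectors of the same length."""
    p, q = np.sort(p)[::-1], np.sort(q)[::-1]
    return bool(np.all(np.cumsum(p) >= np.cumsum(q) - tol))
```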

13.
For the time-variant hybrid reliability problem under random and interval uncertainties, the upper bound of the time-variant failure probability, as a conservative index quantifying the safety level of the structure, is of great concern. To estimate it efficiently, two algorithms are proposed that combine adaptive Kriging with design-point-based and meta-model-based importance sampling, respectively. The first algorithm searches for the design point of the hybrid problem and generates candidate random samples by shifting the sampling center from the mean value to the design point; the Kriging model is then trained iteratively and the hybrid problem is solved with the well-trained model. The second algorithm first uses Kriging-based importance sampling to approximate the quasi-optimal importance sampling density and estimate the augmented upper bound of the time-variant failure probability. The Kriging model is then further updated on these importance samples to estimate a correction factor, and the hybrid failure probability is obtained as the product of the augmented upper bound and the correction factor. An improved learning function is also presented to train an accurate Kriging model efficiently. The proposed methods integrate the merits of adaptive Kriging and importance sampling, performing the hybrid reliability analysis at as little computational cost as possible. The presented examples show the feasibility of the proposed methods.

14.
In structural reliability analysis, the main purpose is to compute the reliability index or the probability of failure. The Hasofer–Lind and Rackwitz–Fiessler (HL-RF) method is widely used among first-order reliability methods (FORM); however, it cannot be trusted for highly nonlinear limit state functions. The two methods proposed in this paper replace the original real-valued constraint of FORM with a non-negative constraint throughout the whole procedure. First, the non-negative constraint is used directly to construct a non-negative Lagrange function and a search direction vector. Then, first- and second-order Taylor approximations of the non-negative constraint are employed to compute the step sizes of the first and second proposed methods, respectively. The contribution of the non-negative constraint and the effective approach to determining step sizes lead to efficient computation of the reliability index in nonlinear problems. The robustness and efficiency of the two proposed methods are shown on various mathematical and structural examples from the literature.
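
For reference, a sketch of the classical HL-RF iteration that the proposed methods improve upon (standard normal space; `g` and `grad_g` are the limit state function and its gradient, not the paper's modified constraints):

```python
import numpy as np

def hl_rf(g, grad_g, u0, tol=1e-6, max_iter=100):
    """Classical HL-RF iteration; returns the design point and the
    reliability index beta = ||u*||."""
    u = np.asarray(u0, float)
    for _ in range(max_iter):
        gv, dg = g(u), grad_g(u)
        u_next = (dg @ u - gv) * dg / (dg @ dg)   # project onto g ~ 0
        if np.linalg.norm(u_next - u) < tol:
            break
        u = u_next
    return u_next, np.linalg.norm(u_next)
```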

15.
Clustering is one of the most widely used procedures in the analysis of microarray data, for example with the goal of discovering cancer subtypes based on observed heterogeneity of genetic marks between different tissues. It is well known that in such high-dimensional settings, the existence of many noise variables can overwhelm the few signals embedded in the high-dimensional space. We propose a novel Bayesian approach based on a Dirichlet process with a sparsity prior that simultaneously performs variable selection and clustering, and also discovers variables that distinguish only a subset of the cluster components. Unlike previous Bayesian formulations, we use the Dirichlet process (DP) both for clustering samples and for regularizing the high-dimensional mean/variance structure. To meet the computational challenge posed by this double use of the DP, we embed a sequential sampling scheme within the Markov chain Monte Carlo (MCMC) updates to improve on the naive implementation of existing algorithms for DP mixture models. Our method is demonstrated in a simulation study and illustrated with the leukemia gene expression dataset.
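
The DP ingredient can be illustrated by truncated stick-breaking, which generates the mixture weights used in such models (a sketch of the prior only, not the paper's sequential MCMC scheme):

```python
import numpy as np

def stick_breaking(alpha, n_atoms, rng):
    """Truncated stick-breaking weights of a DP(alpha) prior."""
    v = rng.beta(1.0, alpha, size=n_atoms)
    w = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    return w / w.sum()   # renormalize the truncation remainder
```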

16.
System reliability analysis involving correlated random variables is challenging because the failure probability cannot be uniquely determined from the given probability information. This paper proposes a system reliability evaluation method based on non-parametric copulas. The approximate joint probability distribution satisfying the specified correlation constraints has maximal relative entropy with respect to the joint distribution of independent random variables, so the reliability evaluation is unbiased from the perspective of information theory. Estimators of the non-parametric copula parameters from the Pearson linear correlation, Spearman rank correlation, and Kendall rank correlation are provided. The approximate maximum entropy distribution is then integrated with first- and second-order system reliability methods. Four examples illustrate the accuracy and efficiency of the proposed method. It is found that the traditional system reliability method encodes excessive dependence information for correlated random variables, and the estimated failure probability can be significantly biased.

17.
A new method for estimating a high-dimensional covariance matrix, based on network structure with heteroscedastic response variables, is proposed in this paper. The method greatly reduces computational complexity by transforming the high-dimensional covariance matrix estimation problem into a low-dimensional linear regression problem. The method remains effective even when the sample size is finite, and the estimation error decreases as the matrix dimension increases. In addition, the paper presents a method for identifying influential nodes in a network via the covariance matrix; it is well suited to academic collaboration networks, since it accounts for both the contribution of a node itself and the node's impact on other nodes.

18.

Sampling in shift-invariant spaces is a realistic model for signals with smooth spectrum. In this paper, we consider phaseless sampling and reconstruction of real-valued signals in a high-dimensional shift-invariant space from their magnitude measurements on the whole Euclidean space and from phaseless samples taken on a discrete set with finite sampling density. The determination of a signal in a shift-invariant space, up to a sign, by its magnitude measurements on the whole Euclidean space has been shown in the literature to be equivalent to its nonseparability. We introduce an undirected graph associated with the signal and use connectivity of the graph to characterize the signal's nonseparability. Under a local complement property assumption on the shift-invariant space, we find a discrete set with finite sampling density such that nonseparable signals in the space can be reconstructed in a stable way from their phaseless samples taken on that set. We also propose a reconstruction algorithm that approximates the original signal when only noisy phaseless samples are available. Finally, numerical simulations demonstrate the robustness of the proposed algorithm in reconstructing box spline signals from noisy phaseless samples.


19.
In this work we present two different numerical methods to determine the probability of ultimate ruin as a function of the initial surplus. Both methods use moments obtained from the Pollaczek–Khinchine identity for the Laplace transform of the probability of ultimate ruin. One method combines fractional moments with the maximum entropy method; the other is a probabilistic approach that uses integer moments directly to approximate the density.

20.
Solving partial differential equations in high dimensions with deep neural networks has attracted significant attention in recent years. In many scenarios, the loss function is defined as an integral over a high-dimensional domain, and the Monte Carlo method, together with a deep neural network, is used to overcome the curse of dimensionality where classical methods fail; often, a neural network outperforms classical numerical methods in both accuracy and efficiency. In this paper, we propose quasi-Monte Carlo sampling, instead of the Monte Carlo method, to approximate the loss function. To demonstrate the idea, we conduct numerical experiments in the framework of the deep Ritz method. For the same accuracy requirement, quasi-Monte Carlo sampling is observed to reduce the size of the training data set by more than two orders of magnitude compared to the Monte Carlo method. Under some assumptions, we prove that quasi-Monte Carlo sampling together with the deep neural network generates a convergent series with rate proportional to the approximation accuracy of quasi-Monte Carlo integration. Numerically the fitted convergence rate is slightly smaller, but the proposed approach always outperforms the Monte Carlo method.
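
Swapping Monte Carlo collocation points for quasi-Monte Carlo ones is essentially a one-line change with scipy (a sketch; scrambled Sobol' points prefer n a power of two):

```python
import numpy as np
from scipy.stats import qmc

def collocation_points(dim, n, lo, hi, use_qmc=True, seed=0):
    """Points at which the integral loss is evaluated: scrambled Sobol'
    (quasi-Monte Carlo) versus plain Monte Carlo draws."""
    if use_qmc:
        pts = qmc.Sobol(d=dim, scramble=True, seed=seed).random(n)
    else:
        pts = np.random.default_rng(seed).uniform(size=(n, dim))
    return qmc.scale(pts, lo, hi)   # map [0, 1]^d onto the domain
```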
