Similar Documents
 20 similar documents found (search time: 631 ms)
1.
Analyzing the sensitivity of model outputs to inputs is important for assessing risk and making decisions in engineering applications. However, for models with multiple outputs, sensitivity indices are difficult to interpret because existing methods often ignore the effects of dimension and of the correlation between outputs. In this paper, a new sensitivity analysis method for multiple outputs is proposed using vector projection and dimension normalization. Through dimension normalization, the multiple outputs are mapped into a common dimensionless space, eliminating the effect of the differing dimensions of the outputs. After an affine coordinate system is constructed to account for the correlation among the normalized outputs, a total variance vector for the multiple outputs is composed from the individual variance of each output. Then, by projecting the variance contribution vector, composed of the input's variance contribution to each output, onto the total variance vector, new sensitivity indices are obtained that measure the comprehensive effect of the input on the total variance vector of the multiple outputs; each index is defined as the ratio of the projection of the variance contribution vector to the norm of the total variance vector. We show that the Sobol' indices for a scalar output and the covariance-decomposition-based indices for multiple outputs are special cases of the proposed vector-projection-based indices. The mathematical properties and geometric interpretation of the proposed method are then discussed. Three numerical examples and a rotating shaft model of an aircraft wing are used to validate the proposed method and show its potential benefits.
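A minimal sketch of the projection-based index on a toy two-output model, with variance contributions taken analytically rather than estimated; the outputs are chosen to be uncorrelated, so the affine coordinate system of the paper reduces to an orthogonal one (a simplifying assumption of this sketch), and all names below are illustrative.

```python
import numpy as np

# Toy model: y1 = x1 + x2, y2 = x1 - x2, with x1, x2 ~ N(0, 1) independent.
# The two outputs are uncorrelated (Cov = V(x1) - V(x2) = 0), so an
# orthogonal coordinate system suffices here.
# Analytic variances: V(y1) = V(y2) = 2; each input contributes 1 to each.
V_out = np.array([2.0, 2.0])        # variances of (y1, y2)
C = np.array([[1.0, 1.0],           # row i: variance contribution of
              [1.0, 1.0]])          # input x_i to (y1, y2)

# Dimension normalization: divide contributions by the output variances
# so every normalized output has unit variance.
C_norm = C / V_out
V_total = np.ones(2)                # total variance vector of normalized outputs

# Projection-based index: projection of the contribution vector onto the
# total variance vector, taken relative to the norm of that vector (scaled
# here so that indices of independent inputs sum to one).
S = C_norm @ V_total / (V_total @ V_total)
print(S)   # → [0.5 0.5]: both inputs equally important across the two outputs
```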

2.
To quantitatively analyze the variance contributions of correlated input variables to the model output, variance-based global sensitivity analysis (GSA) is derived analytically for models with correlated variables. The derivation is based on the input-output relationship of tensor product basis functions and the orthogonal decorrelation of the correlated variables. Since simulators built on tensor product basis functions are widely used to approximate the input-output relationships of complicated structures, the analytical solution of variance-based global sensitivity is especially applicable to engineering practice. The polynomial regression model is employed as an example to derive the analytical GSA in detail. The accuracy and efficiency of the analytical solution are validated by three numerical examples, and its engineering application is demonstrated by carrying out the GSA of a riveting problem and a two-dimensional fracture problem.
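To make the polynomial-regression example concrete, here is the analytical variance decomposition of a small polynomial model in the *uncorrelated* special case (the paper's contribution is the extension to correlated inputs via orthogonal decorrelation, which this sketch does not reproduce), cross-checked against Monte Carlo.

```python
import numpy as np

# Model: y = a + b*x1 + c*x2 + d*x1*x2, with independent U(0,1) inputs,
# E[xi] = 1/2 and Var(xi) = 1/12.
a, b, c, d = 1.0, 2.0, 3.0, 4.0

# ANOVA decomposition about the input means gives the partial variances:
V1  = (b + d / 2) ** 2 / 12          # main effect of x1
V2  = (c + d / 2) ** 2 / 12          # main effect of x2
V12 = d ** 2 / 144                   # x1-x2 interaction
V   = V1 + V2 + V12
S1, S2, S12 = V1 / V, V2 / V, V12 / V

# Monte Carlo check of the total variance.
rng = np.random.default_rng(0)
x = rng.random((200_000, 2))
y = a + b * x[:, 0] + c * x[:, 1] + d * x[:, 0] * x[:, 1]
print(S1 + S2 + S12, abs(y.var() - V))   # indices sum to 1; variances agree
```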

3.
For models with dependent input variables, sensitivity analysis is often troublesome and only a few methods are available. Mara and Tarantola, in their paper "Variance-based sensitivity indices for models with dependent inputs", defined a set of variance-based sensitivity indices for such models. In this paper, we propose a method based on moving least squares approximation to calculate these sensitivity indices. The proposed method is applicable to both linear and nonlinear models, since the moving least squares approximation can capture severe changes in scattered data. Both linear and nonlinear numerical examples are employed to demonstrate the ability of the method. The new sensitivity analysis method is then applied to a cantilever beam structure; from the results, the most efficient way to decrease the variance of the model output can be determined, and this efficiency is demonstrated by exploring the dependence of the output variance on the variation coefficients of the input variables. Finally, we apply the new method to a headless rivet model, calculate the sensitivity indices of all inputs, and draw some significant conclusions from the results.
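The moving least squares approximation that underpins the method can be sketched in one dimension: a local polynomial fit whose least-squares weights are centred on the query point. The function name and Gaussian weight choice below are illustrative, not the paper's exact formulation.

```python
import numpy as np

# Minimal 1-D moving least squares (MLS) with a linear basis and a
# Gaussian weight centred at the query point x0.
def mls_predict(x_train, y_train, x0, h=0.3):
    w = np.exp(-((x_train - x0) / h) ** 2)      # locality weights
    A = np.column_stack([np.ones_like(x_train), x_train])
    sw = np.sqrt(w)
    # Weighted least squares = ordinary least squares on sqrt(w)-scaled rows.
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y_train * sw, rcond=None)
    return coef[0] + coef[1] * x0               # local fit evaluated at x0

# A linear-basis MLS reproduces linear data exactly, whatever the weights.
x = np.linspace(0.0, 1.0, 30)
y = 2.0 * x + 1.0
print(mls_predict(x, y, 0.37))   # → 1.74
```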

4.
When a radial basis function network (RBFN) is used to identify a nonlinear multi-input multi-output (MIMO) system, the number of hidden-layer nodes, the initial parameters of the kernel, and the initial weights of the network must be determined first. For this purpose, a systematic approach that integrates support vector regression (SVR) and least squares regression (LSR) is proposed to construct the initial structure of the RBFN. The first step is to determine the number of hidden-layer nodes and the initial kernel parameters by the SVR method. The weights of the RBFN are then determined by solving a simple minimization problem based on LSR. After initialization, an annealing robust learning algorithm (ARLA) is applied to train the RBFN. With the proposed initialization approach, the designed RBFN has few hidden-layer nodes while maintaining good performance. Several illustrative examples are included to show the feasibility and superiority of the annealing robust radial basis function networks (ARRBFNs) for identification of MIMO systems.
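The LSR step can be sketched on a bare-bones RBF network: Gaussian hidden units whose output weights are fitted by least squares. The SVR-based selection of centres and widths is replaced here by simply using all training points as centres, which is an assumption of this sketch, not the paper's procedure.

```python
import numpy as np

# Design matrix of Gaussian hidden units (one column per centre).
def rbf_design(x, centers, width):
    return np.exp(-((x[:, None] - centers[None, :]) / width) ** 2)

x = np.linspace(0.0, 1.0, 15)
y = np.sin(2 * np.pi * x)

centers, width = x, 0.2                         # illustrative choice
Phi = rbf_design(x, centers, width)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)     # LSR step: output weights
y_hat = Phi @ w
print(np.max(np.abs(y_hat - y)))                # near-interpolation error
```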

5.
The influence of emission levels on the concentrations of four important air pollutants (ammonia, ozone, ammonium sulphate, and ammonium nitrate) over three European cities with different geographical locations (Milan, Manchester, and Edinburgh) is considered. A sensitivity analysis of the output of the Unified Danish Eulerian Model with respect to emission levels is provided. The Sobol' variance-based approach for global sensitivity analysis is applied to compute the corresponding sensitivity measures. To measure the influence of variations in emission levels on the pollutant concentrations, the Sobol' global sensitivity indices are estimated using techniques designed for small sensitivity indices, so as to avoid loss of accuracy. Theoretical studies as well as practical computations are performed to analyze the efficiency of various variance reduction techniques for computing small indices, and the importance of accurately estimating small sensitivity indices is analyzed. It is shown that the correlated sampling technique for small sensitivity indices gives reliable results for the full set of indices; its superior efficiency is studied in detail.
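As a baseline for comparison, here is the plain correlated-sampling (pick-freeze) estimator of a first-order Sobol' index, in the Saltelli-style form; the paper's refined variance-reduction techniques for *small* indices are not reproduced in this sketch.

```python
import numpy as np

# First-order Sobol' index S_i by pick-freeze sampling: two independent
# sample matrices A and B, plus a matrix AB_i that takes column i from B.
def first_order_sobol(f, d, i, n, rng):
    A, B = rng.random((n, d)), rng.random((n, d))
    ABi = A.copy()
    ABi[:, i] = B[:, i]                    # "freeze" all inputs except x_i
    fA, fB, fABi = f(A), f(B), f(ABi)
    V = np.var(np.concatenate([fA, fB]))   # total output variance
    return np.mean(fB * (fABi - fA)) / V

f = lambda x: x[:, 0] + x[:, 1]            # S1 = S2 = 0.5 analytically
rng = np.random.default_rng(2)
print(first_order_sobol(f, 2, 0, 100_000, rng))   # ≈ 0.5
```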

6.
Quantile regression provides a more complete statistical analysis of the stochastic relationships among random variables. However, quantile regression functions estimated at different orders can cross each other. We propose a new non-crossing quantile regression method using a doubly penalized kernel machine (DPKM), which takes a heteroscedastic location-scale model as its basic model and estimates the location and scale functions simultaneously by kernel machines. The DPKM provides a satisfactory solution for estimating non-crossing quantile regression functions when multiple quantiles are needed for high-dimensional data. We also present a model selection method that employs cross-validation to choose the parameters affecting the performance of the DPKM. One real example and two synthetic examples are provided to show the usefulness of the DPKM.
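The building block of any quantile regression is the pinball (check) loss, whose minimiser over a constant is the sample tau-quantile; the DPKM minimises a kernelised, doubly penalised version of this loss for location and scale jointly. A minimal numeric check of the pinball property:

```python
import numpy as np

# Pinball (check) loss for residuals at quantile level tau.
def pinball(residual, tau):
    return np.mean(np.maximum(tau * residual, (tau - 1) * residual))

rng = np.random.default_rng(3)
y = rng.normal(size=100)
tau = 0.25
q = np.quantile(y, tau)

# The sample tau-quantile attains the minimum pinball loss over constants.
grid = np.linspace(y.min(), y.max(), 400)
losses = [pinball(y - c, tau) for c in grid]
print(pinball(y - q, tau) <= min(losses) + 1e-9)   # → True
```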

7.
To compare the performance of support vector regression (SVR) and kernel ridge regression (KRR) in predicting blood glucose levels, this paper carries out an empirical analysis on data from an AI-assisted study of genetic risk for diabetes. First, the data are preprocessed and imported into Python. Second, to make the comparison between SVR and KRR objective, three representative kernel methods are used (the linear, radial basis function, and sigmoid kernels). Then, optimal SVR and KRR models are built on the training set with automatic hyperparameter tuning by grid search, and the blood glucose values are predicted. Finally, the mean squared error (MSE), fitting time, and other metrics of SVR and KRR are compared on the test set. The results show that both MSEs are below 0.006, with the MSE of KRR 0.0002 lower than that of SVR, so KRR predicts more accurately; SVR, however, predicts 0.803 seconds faster than KRR, so SVR is the more efficient of the two.
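KRR has the closed form alpha = (K + lambda I)^{-1} y, which makes it easy to sketch with an RBF kernel; the hyperparameters below are illustrative, not the grid-searched values from the study, and the data are synthetic.

```python
import numpy as np

# RBF (Gaussian) kernel between two 1-D sample sets.
def rbf_kernel(X, Z, gamma=10.0):
    return np.exp(-gamma * (X[:, None] - Z[None, :]) ** 2)

x = np.linspace(0.0, 1.0, 40)
y = np.sin(2 * np.pi * x)            # synthetic stand-in for the glucose data

lam = 1e-4                           # ridge regularisation strength
K = rbf_kernel(x, x)
alpha = np.linalg.solve(K + lam * np.eye(len(x)), y)   # KRR closed form
y_hat = rbf_kernel(x, x) @ alpha
print(np.mean((y_hat - y) ** 2))     # training MSE, close to zero
```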

8.
This paper gives a new dimension-reduced method of sensitivity analysis for the perturbed stochastic user equilibrium assignment (SUEA) model, based on the relation between its Lagrange function and a logarithmic barrier function combined with a Courant quadratic penalty term. The advantage of this method is that it has smaller dimension than general sensitivity analysis and thus reduced complexity. The paper first presents the dimension-reduced sensitivity results for the general nonlinear programming perturbation problem, together with improved results for the case where the objective or constraint functions are not twice continuously differentiable. It then proves the corresponding conclusions for SUEA with smooth or non-smooth cost functions by converting the constraint conditions and decision variables. Finally, two corresponding examples (smooth and non-smooth) are given to illustrate the feasibility of the method.
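The Courant quadratic penalty idea can be shown in its simplest form on a toy scalar problem (not the SUEA model): an inequality constraint is replaced by a quadratic penalty whose weight mu grows, driving the unconstrained minimiser toward the constrained one.

```python
import numpy as np

# Minimise x^2 subject to x >= 1 by adding the Courant quadratic
# penalty mu * max(0, 1 - x)^2 and letting mu grow.
def penalized(x, mu):
    return x ** 2 + mu * np.maximum(0.0, 1.0 - x) ** 2

grid = np.linspace(0.0, 2.0, 200_001)      # fine grid for a brute-force argmin
for mu in [1.0, 100.0, 10_000.0]:
    x_star = grid[np.argmin(penalized(grid, mu))]
    print(mu, x_star)                       # x*(mu) = mu/(1+mu) -> 1 as mu grows
```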

9.
The purpose of conventional Data Envelopment Analysis (DEA) is to evaluate the performance of a set of firms or decision-making units using deterministic input and output data. In real-life performance evaluation problems, however, the input and output data are often stochastic and can be represented by random variables. Several methods have been proposed to deal with random input and output data in DEA. In this paper, we propose a new chance-constrained DEA model with birandom input and output data. A super-efficiency model with birandom constraints is formulated, and a non-linear deterministic equivalent is obtained to solve it; the non-linear model is then converted into a model with quadratic constraints. Furthermore, a sensitivity analysis is performed to assess the robustness of the proposed super-efficiency model. Finally, two numerical examples demonstrate the applicability of the proposed chance-constrained DEA model and the sensitivity analysis.

10.
The quantile varying-coefficient model is a robust nonparametric modelling method. When analysing data with varying-coefficient models, a natural question is how to simultaneously select the important variables and identify, among them, those with constant effects. This paper studies a robust and efficient estimation and variable selection procedure based on the quantile method. Using local smoothing and adaptive group variable selection, and imposing a double penalty on the quantile loss function, we obtain penalized estimators. With the tuning parameters suitably chosen by a BIC criterion, the proposed variable selection method enjoys the oracle property. Simulation studies and an analysis of body-fat data illustrate the usefulness of the new method. The numerical results show that, without requiring any prior information about the variables or the error distribution, the proposed method can screen out unimportant variables and, at the same time, distinguish the constant-effect ones.

11.
Estimation of the statistical moments of structural response is one of the main topics in the analysis of random systems, where the balance between accuracy and efficiency remains a challenge. After investigating the existing point estimate methods (PEMs), a new point estimate method based on the dimension-reduction method (DRM) is presented. By introducing transformations, a system with general variables is transformed into one with independent variables, and the existing DRM-based PEMs are examined. Based on a qualitative analysis of the difference between approximating the response function and approximating the moment function, a new PEM is proposed in which the response function is decomposed directly and the moments are calculated directly by high-dimensional integration. Compared with the existing PEM based on the univariate DRM, the proposed method is easier to implement without loss of accuracy or efficiency; compared with the PEM based on the generalized DRM, it achieves better precision at nearly the same efficiency and computational complexity, and it additionally guarantees that the even-order moments are nonnegative. Finally, several examples are investigated to verify the performance of the new method.
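The univariate building block that dimension-reduction PEMs combine across inputs is a small quadrature-like point set. For a standard normal input, the classic three-point estimate uses nodes 0 and ±sqrt(3) with weights 2/3 and 1/6, which integrate polynomials up to degree 5 exactly; a minimal sketch:

```python
import numpy as np

# Three-point estimate for moments of g(X), X ~ N(0, 1).
nodes   = np.array([-np.sqrt(3.0), 0.0, np.sqrt(3.0)])
weights = np.array([1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0])

g = lambda x: x ** 4                 # E[g(X)] = 3 for a standard normal
mean_g = np.sum(weights * g(nodes))  # weighted sum over the point set
print(mean_g)                        # → 3.0 (exact, since deg(g) <= 5)
```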

12.
Kernel-based regression (KBR) methods, such as support vector regression (SVR), are a well-established methodology for estimating the nonlinear functional relationship between a response variable and predictor variables. KBR methods can be very sensitive to influential observations, which in turn have a noticeable impact on the model coefficients. The robustness of KBR methods has recently been the subject of wide-scale investigation, with the aim of obtaining regression estimators insensitive to outlying observations. However, existing robust KBR (RKBR) methods consider only Y-space outliers and are consequently sensitive to X-space outliers; even a single anomalous observation in X-space may greatly affect the estimator. To resolve this issue, we propose a new RKBR method that gives reliable results even if the training data set is contaminated with both Y-space and X-space outliers. The proposed method utilizes a weighting scheme based on the hat matrix, resembling the generalized M-estimator (GM-estimator) of conventional robust linear analysis. The diagonal elements of the hat matrix in the kernel-induced feature space are used as leverage measures to downweight the effects of potential X-space outliers; we show that these kernelized hat diagonal elements can be obtained via the eigen-decomposition of the kernel matrix. A regularized version of the kernelized hat diagonal elements is also proposed to handle the case of a full-rank kernel matrix, for which the unregularized elements are not suitable as leverage measures. We show that both kernelized leverage measures, the kernel hat diagonal element and its regularized version, are related to statistical distance measures in the feature space. We also develop an efficient kernelized training algorithm for parameter estimation based on the iteratively reweighted least squares (IRLS) method. Experimental results from simulated examples and real data sets demonstrate the robustness of the proposed method compared with conventional approaches.
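A sketch of the regularized kernelized hat diagonals as leverage measures, computed both directly and through the eigen-decomposition of the kernel matrix; this is a minimal illustration under the assumption of the smoothing-type form diag(K (K + lam I)^{-1}), with an obvious X-space outlier receiving the largest leverage.

```python
import numpy as np

# Gaussian kernel matrix of a 1-D sample.
def rbf_kernel(X):
    return np.exp(-(X[:, None] - X[None, :]) ** 2)

X = np.array([-0.1, -0.05, 0.0, 0.05, 0.1, 10.0])   # last point: X-space outlier
lam = 0.1
K = rbf_kernel(X)

# Direct computation of the regularized hat diagonals.
h_direct = np.diag(K @ np.linalg.inv(K + lam * np.eye(len(X))))

# Same quantity via the eigen-decomposition K = Q diag(evals) Q^T:
# h_ii = sum_j q_ij^2 * evals_j / (evals_j + lam).
evals, evecs = np.linalg.eigh(K)
h_eig = (evecs ** 2) @ (evals / (evals + lam))

print(np.argmax(h_eig))   # → 5: the X-space outlier gets the largest leverage
```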

13.
To address the slow convergence, low accuracy, and poor modelling efficiency caused by the large number of hyperparameters when the traditional Kriging model is used for global optimization over many (high-dimensional) input variables, an efficient global optimization method based on the partial least squares (PLS) transformation and the Kriging model is proposed. First, a PLS-based Gaussian kernel function is constructed. Second, a differential evolution algorithm is used to search for the new sample point that maximises the expected improvement criterion. Then, four efficient global optimization algorithms are built by combining the different kernel functions with the expected improvement criterion, and compared. Finally, numerical examples show that, on high-dimensional global optimization problems, the PLS-transformed Kriging method outperforms standard global optimization algorithms in both convergence accuracy and convergence speed.
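The expected improvement criterion that the inner differential-evolution search maximises has a closed form given a Kriging prediction (mu, sigma) and the best observed value; this sketch shows only the EI formula for a minimisation problem, not the PLS-Kriging model itself.

```python
import math

# Expected improvement for minimisation: EI = (f_best - mu) * Phi(z) + sigma * phi(z),
# with z = (f_best - mu) / sigma.
def expected_improvement(mu, sigma, f_best):
    if sigma <= 0.0:
        return 0.0                                            # no predictive uncertainty
    z = (f_best - mu) / sigma
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # standard normal pdf
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # standard normal cdf
    return (f_best - mu) * Phi + sigma * phi

print(round(expected_improvement(0.0, 1.0, 0.0), 4))   # → 0.3989
```

Note that EI grows with the predictive standard deviation, which is what drives the exploration behaviour of the optimization loop.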

14.
Applied Mathematical Modelling, 2014, 38(15-16): 3917-3928
This paper develops an economic order quantity (EOQ) model with uncertain data. To model the uncertainty in real-world data, the exponents and coefficients in the demand and cost functions are treated as interval data, and the corresponding model is designed. The proposed model maximises profit and determines the price, marketing cost, and lot size under the interval data. Since the model parameters are imprecise, the objective value is imprecise as well; upper and lower bounds are therefore formulated for the problem, and the model is transformed into a geometric program. The resulting geometric program is solved using the duality approach, and lower and upper bounds are obtained for the objective function and the variables. Two numerical examples and a sensitivity analysis illustrate the performance of the proposed model.
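The bounding idea can be shown on the classic EOQ lot size Q* = sqrt(2DK/h) as a toy analogue of the paper's interval geometric program (the paper's model additionally optimises price and marketing cost). With D (demand), K (ordering cost), and h (holding cost) known only as intervals, Q* is increasing in D and K and decreasing in h, so the bounds follow by monotonicity; all interval values below are illustrative.

```python
import math

# Illustrative interval parameters.
D_lo, D_hi = 900.0, 1100.0     # annual demand
K_lo, K_hi = 45.0, 55.0        # ordering cost per order
h_lo, h_hi = 4.0, 6.0          # holding cost per unit per year

# Monotonicity of Q* = sqrt(2 D K / h) gives the interval bounds directly.
Q_lo = math.sqrt(2 * D_lo * K_lo / h_hi)
Q_hi = math.sqrt(2 * D_hi * K_hi / h_lo)
print(round(Q_lo, 2), round(Q_hi, 2))   # → 116.19 173.93
```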

15.
In the future, model quality will have increasing priority in the industrial context, and suitable techniques and methods to quantify it have to be developed. In this paper we focus on the challenge of selecting the most informative measurement data for a time-dependent simulation model in order to infer parameter distributions. Sobol' indices are used to give an estimate of the global sensitivity of those parameters, and they are illustrated for the given benchmark problem. In this context, the knowledge gained about the parameters is used to define a likelihood function for Bayes' rule. (© 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)

16.
A new adaptive kernel principal component analysis (KPCA) approach for non-linear discrete system control is proposed. The proposed approach can be viewed as a new data pre-processing technique: the input vector of the neural network controller is pre-processed by the KPCA method, and the resulting reduced neural network controller is applied in indirect adaptive control. The influence of the input data pre-processing on the accuracy of the neural network controller is discussed using numerical examples with time-varying parameters, covering both a single-input single-output and a multi-input multi-output non-linear discrete system. It is concluded that the KPCA method yields a significant reduction in both the control error and the identification error. The lowest mean squared error and mean absolute error show that the KPCA neural network with the sigmoid kernel function performs best.
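The KPCA pre-processing step itself can be sketched in a few lines: centre the kernel matrix in feature space, eigen-decompose it, and project the training data onto the leading components. This is plain KPCA on synthetic data, not the adaptive control scheme of the paper.

```python
import numpy as np

# Gaussian kernel matrix of a multivariate sample.
def rbf_kernel(X):
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2)

rng = np.random.default_rng(4)
X = rng.normal(size=(50, 3))
n = len(X)

K = rbf_kernel(X)
J = np.eye(n) - np.ones((n, n)) / n
Kc = J @ K @ J                                  # double centring in feature space

evals, evecs = np.linalg.eigh(Kc)
idx = np.argsort(evals)[::-1][:2]               # two leading components
alphas = evecs[:, idx] / np.sqrt(evals[idx])    # normalise the expansion coefficients
Z = Kc @ alphas                                 # reduced (projected) inputs

# Centring guarantees that each projected component has zero sum.
print(Z.shape, np.abs(Z.sum(axis=0)).max() < 1e-8)   # (50, 2) True
```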

17.
A simple method for solving Fredholm singular integro-differential equations with Cauchy kernel is proposed, based on a new reproducing kernel space. Using a transformation and modifying the traditional reproducing kernel method, the singular term is removed and an analytical representation of the exact solution is obtained as a series in the new reproducing kernel space. The advantage of the approach is twofold: on the one hand, by improving the definition of the traditional inner product, the representation of the new reproducing kernel function becomes simple, and the requirement on the image space of the operator is weakened compared with the traditional reproducing kernel method; on the other hand, the approximate solution and its derivatives converge uniformly to the exact solution and its derivatives. Some examples are presented to demonstrate the validity and applicability of the proposed method.

18.
Moment-based methods use only the statistical moments of random variables for reliability analysis. The cumulative distribution function (CDF) or probability density function (PDF) of a performance function can be constructed from its first few statistical moments, and the failure probability evaluated accordingly. However, existing moment-based methods may suffer from large errors or instability. The present paper therefore focuses on high-order moment methods for more accurate reliability estimation, combining them with the common saddlepoint approximation (SPA) technique, and presents an improved high-order moment-based saddlepoint approximation method for reliability analysis. The approximate cumulant generating function (CGF) and the CDF of the performance function are constructed in terms of its first four statistical moments. The developed method can be used for the reliability evaluation of uncertain structures whose variables follow any type of distribution. Several numerical examples demonstrate the efficacy and accuracy of the proposed method, and comparisons with several existing high-order moment methods are made on the reliability assessment.
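The saddlepoint machinery can be checked on the Gaussian CGF K(s) = mu*s + sigma^2*s^2/2, for which the Lugannani-Rice tail approximation is exact (the correction term vanishes because w = u); the paper instead builds an approximate CGF from the first four moments of the performance function, which this sketch does not reproduce.

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Lugannani-Rice tail P(X > x) for the Gaussian CGF.
def spa_tail(x, mu, sigma):
    s = (x - mu) / sigma ** 2                  # saddlepoint: solves K'(s) = x
    Ks = mu * s + 0.5 * sigma ** 2 * s ** 2    # K(s)
    w = math.copysign(math.sqrt(2.0 * (s * x - Ks)), s)
    u = s * math.sqrt(sigma ** 2)              # u = s * sqrt(K''(s))
    if abs(w - u) < 1e-12:                     # Gaussian case: correction vanishes
        return 1.0 - norm_cdf(w)
    pdf_w = math.exp(-0.5 * w * w) / math.sqrt(2.0 * math.pi)
    return 1.0 - norm_cdf(w) + pdf_w * (1.0 / u - 1.0 / w)

print(round(spa_tail(1.5, 0.0, 1.0), 4))       # → 0.0668, the exact normal tail
```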

19.
To predict or control the response of a complicated numerical model that involves a large number of input variables but is mainly affected by only some of them, it is necessary to screen for those active variables. This paper proposes a new space-filling sampling strategy for screening parameters based on Morris' elementary effect method. The starting points of the sampling trajectories are selected using the maximin principle of Latin hypercube sampling, and the remaining points of each trajectory are determined by a one-factor-at-a-time design. Unlike other sampling strategies, which choose the sequence of factors randomly in the one-factor-at-a-time design, the proposed method determines the sequence by a deterministic algorithm that sequentially maximizes the Euclidean distance among the sampling trajectories. A new efficient algorithm transforms this distance maximization problem into a coordinate sorting problem, which greatly reduces computational cost. After the elementary effects are computed from the sampling points, a detailed criterion is presented for selecting the active factors. Two mathematical examples and an engineering problem validate the proposed sampling method and demonstrate its advantages in computational efficiency, space-filling performance, and screening efficiency.
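One Morris-style one-factor-at-a-time trajectory and its elementary effects can be sketched as follows; the paper's contribution is how many such trajectories are spread over the input space and in what factor order, while this sketch uses a single trajectory with a fixed order.

```python
import numpy as np

# Walk one trajectory: perturb one factor at a time and record the
# elementary effect EE_i = (f(x + delta * e_i) - f(x)) / delta.
def elementary_effects(f, x0, delta):
    d = len(x0)
    x = x0.copy()
    ee = np.empty(d)
    for i in range(d):
        x_new = x.copy()
        x_new[i] += delta
        ee[i] = (f(x_new) - f(x)) / delta
        x = x_new                      # continue the trajectory from the new point
    return ee

f = lambda x: 2.0 * x[0] + 3.0 * x[1] + x[0] * x[1]
x0 = np.array([0.2, 0.4])
print(elementary_effects(f, x0, 0.5))   # → [2.4 3.7]
```

Since the test function is linear in each factor separately, the elementary effects equal the partial derivatives at the evaluation points: 2 + x2 = 2.4 and 3 + x1 = 3.7 after the first step.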

20.
We propose an efficient global sensitivity analysis method for multivariate outputs that applies polynomial chaos-based surrogate models to vector projection-based sensitivity indices. These projection-based sensitivity indices, which are powerful measures of the comprehensive effects of model inputs on multiple outputs, are conventionally estimated by Monte Carlo simulations that incur prohibitive computational costs for many practical problems. Here, the projection-based sensitivity indices are efficiently estimated via two polynomial chaos-based surrogates: a polynomial chaos expansion and a proper orthogonal decomposition-based polynomial chaos expansion. Several numerical examples with various types of outputs are tested to validate the proposed method; the results demonstrate that the polynomial chaos-based surrogates are more efficient than Monte Carlo simulation at estimating the sensitivity indices, even for models with a large number of outputs. Furthermore, for models with only a few outputs, polynomial chaos expansion alone is preferable, whereas for models with a large number of outputs, implementation with proper orthogonal decomposition is the best approach.
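The mechanism that lets a PCE surrogate replace Monte Carlo is that variances (and hence variance-based indices) are read directly off the expansion coefficients. A one-dimensional sketch with Legendre polynomials for X ~ U(-1, 1), projecting f(x) = x^2 by Gauss-Legendre quadrature:

```python
import numpy as np

f = lambda x: x ** 2
nodes, weights = np.polynomial.legendre.leggauss(6)   # exact for deg <= 11

# Projection coefficients c_j = E[f(X) P_j(X)] / E[P_j(X)^2], where
# E[P_j^2] = 1/(2j+1) under the uniform density 1/2 on [-1, 1].
coeffs = []
for j in range(3):
    Pj = np.polynomial.legendre.Legendre.basis(j)(nodes)
    num = np.sum(weights * f(nodes) * Pj) / 2.0       # E[f(X) P_j(X)]
    coeffs.append(num * (2 * j + 1))                  # divide by E[P_j^2]

# Output variance from the PCE: sum over j >= 1 of c_j^2 * E[P_j^2].
var_pce = sum(c ** 2 / (2 * j + 1) for j, c in enumerate(coeffs) if j > 0)
print(round(var_pce, 6))   # → 0.088889, i.e. 4/45, the exact Var(X^2)
```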


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号