Similar Literature
 20 similar documents found
1.
We propose an efficient global sensitivity analysis method for multivariate outputs that applies polynomial chaos-based surrogate models to vector projection-based sensitivity indices. These projection-based sensitivity indices, which are powerful measures of the comprehensive effects of model inputs on multiple outputs, are conventionally estimated by Monte Carlo simulations, which incur prohibitive computational costs for many practical problems. Here, the projection-based sensitivity indices are efficiently estimated via two polynomial chaos-based surrogates: polynomial chaos expansion and a proper orthogonal decomposition-based polynomial chaos expansion. Several numerical examples with various types of outputs are tested to validate the proposed method; the results demonstrate that the polynomial chaos-based surrogates are more efficient than Monte Carlo simulations at estimating the sensitivity indices, even for models with a large number of outputs. Furthermore, for models with only a few outputs, polynomial chaos expansion alone is preferable, whereas for models with a large number of outputs, implementation with proper orthogonal decomposition is the best approach.
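The Monte Carlo baseline that the surrogates are compared against can be sketched as a pick-freeze estimator of an aggregated, covariance-based first-order index over all outputs. The toy two-output model, the standard-normal input distribution, and all names below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def generalized_sobol(model, d, i, n=100_000, rng=None):
    """Pick-freeze Monte Carlo estimate of an aggregated (covariance-based)
    first-order Sobol index of input i for a vector-valued model.
    Inputs are taken i.i.d. standard normal (an illustrative choice)."""
    rng = np.random.default_rng(rng)
    a = rng.standard_normal((n, d))
    b = rng.standard_normal((n, d))
    b[:, i] = a[:, i]              # "freeze" input i, resample the rest
    ya, yb = model(a), model(b)    # shape (n, n_outputs)
    num = sum(np.cov(ya[:, k], yb[:, k])[0, 1] for k in range(ya.shape[1]))
    den = ya.var(axis=0, ddof=1).sum()
    return num / den

# toy 2-output model: y1 = x1 + 0.5*x2, y2 = x1
model = lambda x: np.column_stack([x[:, 0] + 0.5 * x[:, 1], x[:, 0]])
s1 = generalized_sobol(model, d=2, i=0, rng=0)   # analytic value 2/2.25
```

Each index still costs a fresh batch of model runs per input, which is exactly the expense the polynomial chaos surrogates avoid.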

2.
For models with dependent input variables, sensitivity analysis is often troublesome and only a few methods are available. Mara and Tarantola, in their paper "Variance-based sensitivity indices for models with dependent inputs", defined a set of variance-based sensitivity indices for models with dependent inputs. In this paper we propose a method based on moving least squares approximation to calculate these sensitivity indices. The proposed method is applicable to both linear and nonlinear models, since the moving least squares approximation can capture sharp changes in scattered data. Both linear and nonlinear numerical examples are employed to demonstrate the ability of the proposed method. The new sensitivity analysis method is then applied to a cantilever beam structure; from the results, the most effective way to decrease the variance of the model output can be determined, and this is demonstrated by exploring the dependence of the output variance on the coefficients of variation of the input variables. Finally, we apply the new method to a headless rivet model, calculate the sensitivity indices of all inputs, and draw some significant conclusions from the results.
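A minimal sketch of the moving least squares approximation itself (one input, linear basis, Gaussian weights): at each query point a locally weighted least-squares fit is solved, which is what lets the approximant track sharp local changes. The bandwidth `h` and all names are illustrative choices, not the paper's implementation.

```python
import numpy as np

def mls_eval(x0, xs, ys, h=0.5):
    """Moving least squares with a linear basis and Gaussian weights:
    solve a locally weighted least-squares fit centered at x0."""
    w = np.exp(-((xs - x0) / h) ** 2)                     # Gaussian weights
    basis = np.column_stack([np.ones_like(xs), xs - x0])  # shifted linear basis
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(sw[:, None] * basis, sw * ys, rcond=None)
    return coef[0]                                        # local fit value at x0

xs = np.linspace(0, 1, 50)
ys = 3 * xs + 1                       # exact linear data
approx = mls_eval(0.37, xs, ys)       # linear basis reproduces 3*0.37 + 1
```

With a linear basis the scheme reproduces linear data exactly; richer local behavior is captured by shrinking `h` or enlarging the basis.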

3.
Global sensitivity analysis is a widely used tool for uncertainty apportionment and is very useful for decision making, risk assessment, model simplification, optimal design of experiments, etc. Density-based sensitivity analysis and regional sensitivity analysis are two widely used approaches. Both of them can work with a given sample set of model input-output pairs. One significant difference between them is that density-based sensitivity analysis analyzes output distributions conditional on input values (forward), while regional sensitivity analysis analyzes input distributions conditional on output values (reverse). In this paper, we study the relationship between these two approaches and show that regional sensitivity analysis (reverse), when focusing on probability density functions of input, converges towards density-based sensitivity analysis (forward) as the number of classes for conditioning model outputs in the reverse method increases. Similar to the existing general form of forward sensitivity indices, we derive a general form of the reverse sensitivity indices and provide the corresponding reverse given-data method. Due to the shown equivalence, the reverse given-data method provides an efficient way to approximate density-based sensitivity indices. Two test examples are used to verify this connection and compare the results. Finally, we use the reverse given-data method to perform sensitivity analysis in a carbon dioxide storage benchmark problem with multiple outputs, where forward analysis of density-based indices would be impossible due to the high-dimensionality of its model outputs.
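The reverse given-data idea can be sketched with plain histograms: partition the output into equal-probability classes and average, over classes, a distance between each input's marginal and class-conditional distributions. The L1 distance, the class and bin counts, and the toy model below are illustrative assumptions, not the paper's exact indices.

```python
import numpy as np

def reverse_given_data(x, y, n_classes=20, n_bins=30):
    """Given-data sketch of a density-based sensitivity measure per input:
    condition on output classes (reverse direction) and average the L1
    distance between marginal and class-conditional input histograms."""
    edges = np.quantile(y, np.linspace(0, 1, n_classes + 1))
    labels = np.clip(np.searchsorted(edges, y, side="right") - 1,
                     0, n_classes - 1)
    deltas = []
    for i in range(x.shape[1]):
        bins = np.histogram_bin_edges(x[:, i], bins=n_bins)
        width = np.diff(bins)
        marg, _ = np.histogram(x[:, i], bins=bins, density=True)
        d = 0.0
        for m in range(n_classes):
            sel = labels == m
            cond, _ = np.histogram(x[sel, i], bins=bins, density=True)
            d += sel.mean() * 0.5 * np.sum(np.abs(marg - cond) * width)
        deltas.append(d)
    return np.array(deltas)

rng = np.random.default_rng(0)
x = rng.standard_normal((50_000, 2))
y = x[:, 0] ** 2                     # output depends on x1 only
d1, d2 = reverse_given_data(x, y)    # d1 large, d2 near zero
```

Only one sample set is needed; refining `n_classes` is the limit in which, per the paper, the reverse estimate converges to the forward density-based index.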

4.
We consider the problem of constructing a stabilizer described by a system of linear differential equations such that a given dynamical system becomes stable after being closed by the feedback produced by the stabilizer. Moreover, we require that the dimension of the stabilizer, that is, the dimension of its state vector, be minimal. We assume that the given system has either a single input and multiple outputs (a SIMO system) or, conversely, multiple inputs and a single output (a MISO system).

5.
Importance analysis aims to find the contributions of the inputs to the output uncertainty. For structural models involving correlated input variables, the variance contribution of an individual input variable is decomposed in this study into a correlated contribution and an uncorrelated contribution. Based on point estimates, this work proposes a new algorithm for variance-based importance analysis with correlated input variables. Transformation of the input variables from correlation space to independence space, together with the computation of conditional distributions in the process, ensures that the correlation information is inherited correctly. Different point estimate methods can be employed in the proposed algorithm, so the algorithm is adaptable and extensible. The proposed algorithm is also applicable to uncertain systems with multiple modes, and it avoids the sampling procedure, which usually incurs a heavy computational cost. Results of several examples show that the proposed algorithm is an effective tool for uncertainty analysis involving correlated inputs.
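The transformation from correlation space to independence space can be sketched, for the Gaussian case, with the Cholesky factor of the correlation matrix; this is an illustrative special case of such transformations, not the paper's full algorithm, and all numbers are assumptions.

```python
import numpy as np

# Map correlated Gaussian inputs to independent standard normals via the
# Cholesky factor L of the correlation matrix: x = u @ L.T correlates,
# solving L z = x decorrelates, so point-estimate steps can run in
# independence space while preserving the correlation information.
corr = np.array([[1.0, 0.6],
                 [0.6, 1.0]])
L = np.linalg.cholesky(corr)

rng = np.random.default_rng(0)
u = rng.standard_normal((100_000, 2))   # independence space
x = u @ L.T                             # correlation space: corr(x) ~ 0.6
u_back = np.linalg.solve(L, x.T).T      # inverse map recovers independence
```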

6.
In order to quantitatively analyze the variance contributions of correlated input variables to the model output, variance-based global sensitivity analysis (GSA) is analytically derived for models with correlated variables. The derivation is based on the input-output relationship of tensor-product basis functions and the orthogonal decorrelation of the correlated variables. Since tensor-product basis function simulators are widely used to approximate the input-output relationships of complicated structures, the analytical solution of the variance-based global sensitivity is especially applicable to practical engineering problems. The polynomial regression model is employed as an example to derive the analytical GSA in detail. The accuracy and efficiency of the analytical solution are validated by three numerical examples, and its engineering application is demonstrated by carrying out the GSA of a riveting problem and a two-dimensional fracture problem.
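For the special case of independent standard normal inputs, the variance decomposition of a fitted tensor-product polynomial can be read directly from the regression coefficients; the correlated case treated in the paper additionally requires the decorrelation step, which this sketch omits. All coefficients and names are illustrative.

```python
import numpy as np

# Fit y = b0 + b1*x1 + b2*x2 + b12*x1*x2 with x1, x2 i.i.d. standard normal.
# Each non-constant basis term then has unit variance and the terms are
# uncorrelated, so the Sobol indices follow analytically from the b's.
rng = np.random.default_rng(0)
x = rng.standard_normal((10_000, 2))
y = 1.0 + 2.0 * x[:, 0] - 1.0 * x[:, 1] + 0.5 * x[:, 0] * x[:, 1]

basis = np.column_stack([np.ones(len(x)), x[:, 0], x[:, 1],
                         x[:, 0] * x[:, 1]])
b = np.linalg.lstsq(basis, y, rcond=None)[0]     # recovers (1, 2, -1, 0.5)

V = b[1]**2 + b[2]**2 + b[3]**2                  # total output variance
S1, S2, S12 = b[1]**2 / V, b[2]**2 / V, b[3]**2 / V   # analytical indices
```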

7.
We consider the inversion problem for linear systems, which involves estimation of the unknown input vector. The inversion problem is considered for a system with a vector output and a vector input, assuming that the observed output is of higher dimension than the unknown input. The problem is solved by using a controlled model in which the control stabilizes the deviations of the model output from the system output. The stabilizing model control, or its averaged form, may be used as the estimate of the unknown system input. Translated from Nelineinaya Dinamika i Upravlenie, No. 4, pp. 17–22, 2004.

8.
In a Data Envelopment Analysis model, some of the weights used to compute the efficiency of a unit can have zero or negligible value despite the importance of the corresponding input or output. This paper offers an approach to preventing inputs and outputs from being ignored in the DEA assessment under the multiple-input, multiple-output VRS environment, building on an approach introduced in Allen and Thanassoulis (2004) for single-input, multiple-output CRS cases. The proposed method is based on the idea of introducing unobserved DMUs created by adjusting the input and output levels of certain observed, relatively efficient DMUs, in a manner which reflects a combination of technical information and the decision maker's value judgements. In contrast to many alternative techniques used to constrain weights and/or improve envelopment in DEA, this approach allows one to impose local information on production trade-offs which is in line with the general VRS technology. The suggested procedure is illustrated using real data.

9.
This work introduces a bi-objective generalized data envelopment analysis (Bi-GDEA) model and defines its efficiency. We show the equivalence between Bi-GDEA efficiency and the non-dominated solutions of the multi-objective programming problem defined on the production possibility set (PPS), and discuss returns to scale under the Bi-GDEA model. The main contribution is that we further define a point-to-set mapping and the mapping projection of a decision making unit (DMU) onto the frontier of the PPS under the Bi-GDEA model. We give an effective approach for constructing the point-to-set mapping projection, which distinguishes our model from other non-radial models by simultaneously considering input and output. The Bi-GDEA model represents decision makers' specific preferences on input and output, and the point-to-set mapping projection gives decision makers more flexibility in determining different input and output alternatives when considering efficiency improvement. Numerical examples illustrate the procedure of point-to-set mapping.

10.
Kriging is a popular method for estimating the global optimum of a simulated system. Kriging approximates the input/output function of the simulation model. Kriging also estimates the variances of the predictions of outputs for input combinations not yet simulated. These predictions and their variances are used by ‘efficient global optimization’ (EGO), to balance local and global search. This article focuses on two related questions: (1) How to select the next combination to be simulated when searching for the global optimum? (2) How to derive confidence intervals for outputs of input combinations not yet simulated? Classic Kriging simply plugs the estimated Kriging parameters into the formula for the predictor variance, so theoretically this variance is biased. This article concludes that practitioners may ignore this bias, because classic Kriging gives acceptable confidence intervals and estimates of the optimal input combination. This conclusion is based on bootstrapping and conditional simulation.
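EGO's balance between local and global search comes from the expected improvement criterion, which combines the Kriging predictor mean with its predictive standard deviation. Below is the standard closed-form EI, not tied to any particular Kriging implementation; the evaluation points are illustrative.

```python
from math import erf, exp, pi, sqrt

def expected_improvement(mu, sigma, f_min):
    """Expected improvement (minimization): expected amount by which a
    candidate point beats the current best output f_min, given the
    Kriging predictor mean mu and standard deviation sigma there."""
    if sigma <= 0.0:
        return max(f_min - mu, 0.0)          # no predictive uncertainty
    z = (f_min - mu) / sigma
    cdf = 0.5 * (1.0 + erf(z / sqrt(2.0)))   # standard normal CDF
    pdf = exp(-0.5 * z * z) / sqrt(2.0 * pi)
    return (f_min - mu) * cdf + sigma * pdf

ei_explore = expected_improvement(1.0, 1.0, f_min=0.9)  # high uncertainty
ei_exploit = expected_improvement(1.0, 0.1, f_min=0.9)  # low uncertainty
# same predicted mean, larger Kriging variance -> larger EI (exploration)
```

The article's bias question enters through `sigma`: classic Kriging plugs estimated parameters into this variance, and the conclusion is that the resulting EI and intervals remain acceptable in practice.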

11.
Regression and linear programming provide the basis for popular techniques for estimating technical efficiency. Regression-based approaches are typically parametric and can be either deterministic or stochastic, where the latter allows for measurement error. In contrast, linear programming models are nonparametric and allow multiple inputs and outputs. The purported disadvantage of the regression-based models is their inability to allow multiple outputs without additional data on input prices. In this paper, deterministic cross-sectional and stochastic panel-data regression models that allow multiple inputs and outputs are developed. Notably, technical efficiency can be estimated using regression models in multiple-input, multiple-output environments without input price data. We provide multiple examples, including a Monte Carlo analysis.
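The classic single-output deterministic regression approach that this line of work generalizes can be sketched as corrected OLS (COLS): fit an average log-linear production function, then shift it up by the largest residual so the frontier envelops the data. This is the textbook baseline, not the paper's multi-output model, and the data-generating numbers are illustrative.

```python
import numpy as np

# COLS sketch: log_y = frontier(log_x) - inefficiency, inefficiency >= 0.
rng = np.random.default_rng(0)
log_x = rng.uniform(0.0, 1.0, 50)
ineff = rng.exponential(0.2, 50)            # one-sided inefficiency term
log_y = 0.5 + 0.8 * log_x - ineff

X = np.column_stack([np.ones(50), log_x])
beta = np.linalg.lstsq(X, log_y, rcond=None)[0]   # average function (OLS)
resid = log_y - X @ beta
eff = np.exp(resid - resid.max())           # shift by max residual:
                                            # technical efficiency in (0, 1]
```

The best-practice firm gets efficiency 1 by construction; everyone else is measured against the shifted frontier.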

12.
Monolithic compliant mechanisms are elastic workpieces which transmit force and displacement from an input position to an output position. Continuum topology optimization is suitable for generating the optimized topology, shape and size of such compliant mechanisms. The optimization strategy for a single-input, single-output compliant mechanism under a volume constraint is known to be best implemented using an optimality criteria or similar mathematical programming method. In this standard form, the method appears unsuitable for the design of compliant mechanisms which are subject to multiple outputs and multiple constraints; therefore an optimization model that handles multiple design constraints is required. For the design problem of compliant mechanisms subject to multiple equality displacement constraints and an area constraint, we present a unified sensitivity analysis procedure based on artificial reaction forces, whose key idea is built upon the Lagrange multiplier method. Because the resultant sensitivity expression obtained by this procedure already comprises the effects of all the equality displacement constraints, a simple optimization method, such as the optimality criteria method, can then be used to implement the area constraint. Mesh adaptation and an anisotropic filtering method are used to obtain clearly defined monolithic compliant mechanisms without obvious hinges. Numerical examples in 2D and 3D based on linear small-deformation analysis illustrate the success of the method.

13.
Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on response variables. In this paper, a new kernel function derived from orthogonal polynomials is proposed for support vector regression (SVR). Based on this new kernel function, the Sobol' global sensitivity indices can be computed analytically from the coefficients of the surrogate model built by SVR. To further improve the performance of the SVR model, a kernel function iteration scheme is introduced. Owing to its excellent generalization performance and the structural risk minimization principle, SVR is well suited to solving nonlinear prediction problems with small samples; the proposed method is thus capable of computing the Sobol' indices with a relatively limited number of model evaluations. The proposed method is examined on several examples, and the sensitivity analysis results are compared with those of sparse polynomial chaos expansion (PCE), high-dimensional model representation (HDMR) and Gaussian radial basis function (RBF) SVR models. The examples show that the proposed method is an efficient approach for GSA of complex models.

14.
Data envelopment analysis (DEA) is a linear programming methodology to evaluate the relative technical efficiency for each member of a set of peer decision making units (DMUs) with multiple inputs and multiple outputs. It has been widely used to measure performance in many areas. A weakness of the traditional DEA model is that it cannot deal with negative input or output values. There have been many studies exploring this issue, and various approaches have been proposed.
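The traditional model in question is the input-oriented CCR (constant returns to scale) envelopment LP, which can be sketched with a generic LP solver; negative data would indeed break its ratio interpretation, which is what the surveyed approaches repair. Function and variable names are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR envelopment model for DMU o:
       min theta  s.t.  X @ lam <= theta * x_o,  Y @ lam >= y_o,  lam >= 0.
    Decision vector: [theta, lam_1, ..., lam_n]."""
    m, n = X.shape                  # m inputs, n DMUs
    s = Y.shape[0]                  # s outputs
    c = np.r_[1.0, np.zeros(n)]
    A_in = np.hstack([-X[:, [o]], X])           # X lam - theta x_o <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])   # -Y lam <= -y_o
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[:, o]],
                  bounds=[(None, None)] + [(0, None)] * n,
                  method="highs")
    return res.fun

# one input, one output: DMU 0 is efficient, DMU 1 uses twice the input
X = np.array([[2.0, 4.0]])
Y = np.array([[2.0, 2.0]])
theta0, theta1 = ccr_efficiency(X, Y, 0), ccr_efficiency(X, Y, 1)
```

Here `theta` is the maximal proportional input contraction: 1.0 for the efficient DMU, 0.5 for the dominated one.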

15.
Dimensional and similarity analyses are used in physics and engineering, especially in fluid mechanics, to reduce the dimension of the input variable space with no loss of information. Here, we apply these techniques to the propagation of uncertainties through computer codes by the Monte Carlo method, in order to reduce the variance of the estimators of the parameters of the output variable distribution. In the physics and engineering literature, dimensional analysis is often formulated intuitively in terms of physical quantities or dimensions such as time, length, or mass; here we use the more rigorous and more abstract generalized dimensional analysis of Moran and Marshek. The reduction of dimensionality succeeds in reducing estimator variance only when combined with variance-reduction techniques, not with ordinary random sampling. In this article we use stratified sampling, and the key to the success of the dimensionality reduction in improving the precision of the estimates is a better measurement of the distances between the outputs for given inputs. We illustrate the methodology with an application to a physical problem, a radioactive contaminant transport code. A substantial variance reduction is achieved for the estimators of the mean, variance, and distribution function of the output. Finally, we discuss the conditions necessary for the method to be successful.
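The effect described here, stratification cutting estimator variance where plain random sampling gains nothing from reduced dimension, can be illustrated on a one-dimensional toy integrand. The integrand, sample sizes, and stratum count are all illustrative assumptions.

```python
import numpy as np

def mc_mean(g, n, rng):
    """Plain Monte Carlo estimate of E[g(U)], U ~ Uniform(0, 1)."""
    return g(rng.uniform(0.0, 1.0, n)).mean()

def stratified_mean(g, n, n_strata, rng):
    """Proportional stratified sampling on (0, 1): one equal-probability
    stratum per subinterval, n // n_strata samples in each."""
    per = n // n_strata
    u = rng.uniform(0.0, 1.0, (n_strata, per))
    pts = (np.arange(n_strata)[:, None] + u) / n_strata
    return g(pts).mean()

g = lambda x: x ** 2                  # true mean = 1/3
rng = np.random.default_rng(0)
plain = [mc_mean(g, 100, rng) for _ in range(500)]
strat = [stratified_mean(g, 100, 10, rng) for _ in range(500)]
# over repeated estimates, stratification shrinks the variance sharply
```

Both estimators are unbiased; the gain comes entirely from removing between-stratum variability, which is the mechanism the article exploits after the dimensional reduction.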

16.
Data envelopment analysis (DEA) and multiple objective linear programming (MOLP) can be used as tools in management control and planning. Existing models were established by investigating the relations between the output-oriented dual DEA model and minimax reference point formulations, namely the super-ideal point model, the ideal point model and the shortest distance model. In these models, the decision makers' preferences are incorporated through interactive trade-off analysis procedures in multiple objective linear programming. However, they consider only the output-oriented dual DEA model, a radial model focused on output increase. In this paper, we improve those models to obtain models that address both inputs and outputs. Our main aim is to decrease total input consumption and increase total output production, which results in solving one mathematical programming model instead of n models. A numerical illustration shows some advantages of our method over the previous methods.

17.
Wu  Jie  Xia  Panpan  Zhu  Qingyuan  Chu  Junfei 《Annals of Operations Research》2019,275(2):731-749

China’s rapid economic development has intensified many problems, one of the most important being environmental pollution. In this paper, a new DEA approach is proposed to measure the environmental efficiency of thermoelectric power plants, considering undesirable outputs. First, we assume that the total amount of undesirable outputs of any particular type is limited and fixed at current levels; in contrast to previous studies, this study requires fixed-sum undesirable outputs. In addition, the common equilibrium efficient frontier is constructed by using different input/output multipliers (or weights) for each decision making unit (DMU), whereas previous approaches that considered fixed-sum outputs assumed a common input/output multiplier for all DMUs. The proposed method is applied to measure the environmental efficiencies of 30 thermoelectric power plants in mainland China. Our empirical study shows that half of the plants perform well in terms of environmental efficiency.


18.
This work introduces a new information-theoretic methodology for choosing variables and their time lags in a prediction setting, particularly when neural networks are used in nonlinear modeling. The first contribution is the Cross Entropy Function (XEF), proposed to select input variables and their lags in order to compose the input vector of black-box prediction models. The proposed XEF method is more appropriate than the usually applied Cross Correlation Function (XCF) when the relationship among the input and output signals comes from a nonlinear dynamic system. The second contribution is a method that minimizes the Joint Conditional Entropy (JCE) between the input and output variables by means of a Genetic Algorithm (GA); the aim is to take into account the dependence among the input variables when selecting the most appropriate set of inputs for a prediction problem. In short, these methods can be used to assist the selection of input training data that carry the information necessary to predict the target data. The proposed methods are applied to a petroleum engineering problem: predicting oil production. Experimental results obtained with a real-world dataset demonstrate the feasibility and effectiveness of the method.
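A histogram-based mutual information score, a plug-in relative of the entropy criteria used here, illustrates why a nonlinear dependence measure can outrank XCF: the toy input below drives the output through sin(·) with near-zero linear correlation. The bin counts and the toy signals are assumptions, not the paper's estimator.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram (plug-in) estimate of I(X;Y) in nats: a simple nonlinear
    dependence score for ranking candidate inputs/lags."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0                      # 0 * log 0 taken as 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])))

rng = np.random.default_rng(0)
x1 = rng.uniform(-3.0, 3.0, 20_000)       # relevant candidate input
x2 = rng.uniform(-3.0, 3.0, 20_000)       # irrelevant candidate input
y = np.sin(x1) + 0.1 * rng.standard_normal(20_000)
# sin over a symmetric range: corr(x1, y) is near zero, so XCF would rank
# x1 poorly, while the information-based score separates the two clearly
mi1, mi2 = mutual_information(x1, y), mutual_information(x2, y)
```

The same ranking idea extends to lagged copies of each signal when composing the input vector of a black-box predictor.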

19.
A systematic procedure for sensitivity analysis of a case study in the area of air pollution modeling has been performed. Contemporary mathematical models must include a large set of chemical and photochemical reactions to be established as reliable simulation tools. The Unified Danish Eulerian Model, one of the most advanced large-scale mathematical models that adequately describes all relevant physical and chemical processes, is the focus of our investigation. Variance-based methods are among the most often used approaches to sensitivity analysis. To measure the influence of variation in the chemical rate constants on pollutant concentrations, the Sobol' global sensitivity indices are estimated using efficient techniques for small sensitivity indices, to avoid a loss of accuracy. Studying the relationships between input parameters and the model output, as well as the internal mechanisms, is very useful for verifying and improving the model, for developing monitoring and control strategies for harmful emissions, and for reliably predicting scenarios in which pollutant concentration levels are exceeded. The proposed procedure can also be applied to other large-scale mathematical models.

20.
One of the uses of data envelopment analysis (DEA) is supplier selection. Weight restrictions allow for the integration of managerial preferences in terms of the relative importance of various inputs and outputs. Moreover, in some situations there is a strong argument for permitting certain factors to play the roles of both inputs and outputs simultaneously. The objective of this paper is to propose a method for selecting the best suppliers in the presence of weight restrictions and dual-role factors. The paper depicts the supplier selection process through a DEA model that allows for the incorporation of the decision maker's preferences and considers multiple factors which simultaneously play both input and output roles. The proposed model does not demand exact weights from the decision maker, and provides a robust way to solve the multiple-criteria problem. A numerical example demonstrates the application of the proposed method.
