Similar Literature
20 similar documents found (search time: 31 ms)
1.
A Monte Carlo study is conducted to compare the stochastic frontier method and the data envelopment analysis (DEA) method in measuring efficiency when firms are subject to the effects of factors beyond managerial control. In making efficiency measurements and comparisons, one must separate the effects of the environment (the exogenous factors) from the effects of productive efficiency. There are two basic approaches to accounting for the effects of exogenous variables: (1) a one-step procedure, which includes the exogenous variables directly in estimating the efficiency measures, and (2) a two-step procedure, which first estimates the relative ‘gross’ efficiencies using inputs and outputs and then analyzes the effects of the exogenous variables on those ‘gross’ efficiencies. The results show that the magnitude of the exogenous variables has no significant effect on the performance of the one-step stochastic frontier method as long as the exogenous variables are correctly identified and accounted for. However, the effects of exogenous variables are significant for the two-step approach, especially for the DEA methods.
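To make the two-step procedure concrete, below is a minimal Python sketch, assuming an input-oriented CCR DEA model in envelopment form and simulated data; the variable names and the simulated technology are illustrative, not the study's Monte Carlo design.

```python
# Two-step procedure on simulated data: (1) 'gross' efficiencies from an
# input-oriented CCR DEA model, (2) OLS of efficiency on the exogenous
# variable. All names and the simulated technology are illustrative.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n = 30
z = rng.uniform(0, 1, n)                      # exogenous (environmental) factor
x = rng.uniform(1, 2, (n, 1))                 # one input per firm
y = (x[:, 0] * np.exp(-0.5 * z) * rng.uniform(0.7, 1.0, n)).reshape(n, 1)

def ccr_efficiency(x, y, o):
    """Input-oriented CCR score of DMU o; decision vector is [theta, lambda]."""
    m = x.shape[0]
    c = np.r_[1.0, np.zeros(m)]               # minimise theta
    A_in = np.c_[-x[o].reshape(-1, 1), x.T]   # sum_j lam_j x_j <= theta x_o
    A_out = np.c_[np.zeros((y.shape[1], 1)), -y.T]  # sum_j lam_j y_j >= y_o
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(x.shape[1]), -y[o]],
                  bounds=[(None, None)] + [(0, None)] * m)
    assert res.success
    return res.x[0]

theta = np.array([ccr_efficiency(x, y, o) for o in range(n)])

# Step 2: regress the 'gross' efficiencies on the exogenous variable.
X2 = np.c_[np.ones(n), z]
beta, *_ = np.linalg.lstsq(X2, theta, rcond=None)
print("second-stage coefficients (intercept, z):", beta)
```

The plain OLS second stage is only one of several conventions; truncated regression or bootstrapped variants are common substitutes for it.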

2.
For models with dependent input variables, sensitivity analysis is often troublesome, and only a few methods are available. Mara and Tarantola, in their paper “Variance-based sensitivity indices for models with dependent inputs”, defined a set of variance-based sensitivity indices for models with dependent inputs. In this paper we propose a method based on moving least squares approximation to calculate these sensitivity indices. The proposed method is suited to both linear and nonlinear models, since the moving least squares approximation can capture sharp changes in scattered data. Both linear and nonlinear numerical examples are employed to demonstrate the ability of the proposed method. The new sensitivity analysis method is then applied to a cantilever beam structure; from the results, the most efficient way to decrease the variance of the model output can be determined, and this efficiency is demonstrated by exploring the dependence of the output variance on the coefficients of variation of the input variables. Finally, we apply the new method to a headless rivet model, calculate the sensitivity indices of all inputs, and draw some significant conclusions from the results.
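As an illustration of the core approximation, here is a minimal moving least squares sketch, assuming a Gaussian weight function and a linear polynomial basis; the bandwidth and the test function are illustrative choices, not the paper's settings.

```python
# A minimal moving least squares (MLS) surrogate: weighted least squares
# around each query point with a Gaussian weight and a linear basis.
import numpy as np

def mls_predict(X, y, x0, h=0.3):
    """MLS prediction at x0: weighted least squares with basis [1, x]."""
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / h ** 2)   # Gaussian weights
    B = np.c_[np.ones(len(X)), X]                         # linear basis
    W = np.diag(w)
    coef = np.linalg.solve(B.T @ W @ B, B.T @ W @ y)      # normal equations
    return np.r_[1.0, x0] @ coef

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (200, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2                    # nonlinear test model
x0 = np.array([0.2, -0.4])
print("MLS estimate:", mls_predict(X, y, x0),
      "true value:", np.sin(3 * 0.2) + (-0.4) ** 2)
```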

3.
In many industrial processes, hundreds of noisy and correlated process variables are collected for monitoring and control purposes. The goal is often to correctly classify production batches into classes, such as good or failed, based on the process variables. We propose a method for selecting the best process variables for the classification of process batches using multiple criteria, including classification performance measures (i.e., sensitivity and specificity) and measurement cost. The method applies Partial Least Squares (PLS) regression on the training set to derive an importance index for each variable. An iterative classification/elimination procedure using k-Nearest Neighbor is then carried out. Finally, Pareto analysis is used to select the best set of variables and avoid excessive retention of variables. The method consistently selects process variables important for classification, regardless of the batches included in the training data. We demonstrate its advantages using six industrial datasets.
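A compact sketch of the selection loop, assuming scikit-learn and simulated data; the importance index here is the absolute PLS regression coefficient and the Pareto step is replaced by a fixed performance threshold, so this is a simplified stand-in for the paper's procedure.

```python
# Simplified variable selection: PLS-based importance, then backward
# elimination with a kNN classifier. Data, threshold, and index are
# illustrative stand-ins for the paper's procedure.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(120, 15))                  # 15 process variables
labels = (X[:, 0] + 0.8 * X[:, 3] + 0.1 * rng.normal(size=120) > 0).astype(int)

pls = PLSRegression(n_components=2).fit(X, labels)
importance = np.abs(pls.coef_).ravel()          # importance index per variable

keep = list(range(X.shape[1]))
for j in np.argsort(importance)[:-1]:           # least important first
    trial = [k for k in keep if k != j]
    score = cross_val_score(KNeighborsClassifier(5), X[:, trial], labels).mean()
    if score >= 0.90:                           # retain classification quality
        keep = trial
print("selected variables:", keep)
```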

4.
A simulation study often requires computation of a point estimate and confidence region for the steady-state mean of a stochastic output process. The literature offers a variety of statistical techniques, including replication/deletion, the batch-means method, and spectrum analysis. We present a new multivariate output-analysis technique, based on a general autoregressive time-series model with exogenous variables, that sets up a joint confidence region for the steady-state mean. We demonstrate the technique in an extensive computational experiment and show that it performs at least as well as other output-analysis techniques, without some of their drawbacks.
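A univariate sketch of the underlying idea, fitting an AR model to the output series by ordinary least squares and using it to estimate the variance of the sample mean; the paper's technique is multivariate and includes exogenous variables, which this stand-in omits.

```python
# Fit an AR(p) model to a simulated output series by OLS and use the fitted
# coefficients to estimate Var(sample mean); p = 2 and the AR(1) test
# process are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, phi = 5000, 0.7
y = np.zeros(n)
for t in range(1, n):                           # AR(1) test process, mean 0
    y[t] = phi * y[t - 1] + rng.normal()

p = 2
cols = [np.ones(n - p)] + [y[p - k:n - k] for k in range(1, p + 1)]
Z = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(Z, y[p:], rcond=None)
resid_var = np.mean((y[p:] - Z @ coef) ** 2)

# asymptotic variance of the sample mean of an AR(p) process
var_mean = resid_var / (n * (1 - coef[1:].sum()) ** 2)
half = stats.norm.ppf(0.975) * np.sqrt(var_mean)
print(f"95% CI for the steady-state mean: {y.mean():.3f} +/- {half:.3f}")
```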

5.
Sensitivity analyses, which determine how predictor variables affect response variables, are rarely performed for individual-based models (IBMs) but are important to the interpretation of model output. We present a sensitivity analysis of a spatially explicit IBM (HexSim) of a threatened species, the Northern Spotted Owl (NSO; Strix occidentalis caurina), in Washington, USA. We explored sensitivity to HexSim variables representing habitat quality, movement, dispersal, and model architecture; previous NSO studies have well established the sensitivity of model output to vital rate variation. We developed “normative” (expected) model settings from field studies and then varied the values of one or more input parameters at a time by ±10% and ±50% of their normative values to determine their influence on the response variables of population size and trend. We determined the time to population equilibration and the dynamics of populations above and below carrying capacity. Recovery time from small population size to carrying capacity greatly exceeded decay time from an overpopulated condition, suggesting the lag time required to repopulate newly available habitat. Response variables were most sensitive to the input parameters of habitat quality, which are well studied for this species and controllable by management. HexSim thus seems useful for evaluating potential NSO population responses to landscape patterns for which good empirical information is available.
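The perturbation design generalizes readily; below is a generic one-at-a-time sketch, where `population_model` is a hypothetical stand-in for a HexSim run, with made-up parameter names.

```python
# Generic one-at-a-time (OAT) perturbation of normative settings by
# +/-10% and +/-50%; the model and parameters are hypothetical stand-ins.
import numpy as np

normative = {"habitat_quality": 0.8, "dispersal_dist": 25.0, "move_rate": 0.4}

def population_model(p):
    # hypothetical response surface standing in for the simulator
    return 1000 * p["habitat_quality"] * np.tanh(p["dispersal_dist"] / 30) \
           * (1 - 0.2 * p["move_rate"])

base = population_model(normative)
for name in normative:
    for frac in (-0.5, -0.1, 0.1, 0.5):                 # +/-10% and +/-50%
        p = dict(normative)
        p[name] = normative[name] * (1 + frac)
        change = (population_model(p) - base) / base
        print(f"{name:15s} {frac:+.0%} -> population change {change:+.1%}")
```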

6.
This paper studies a nonlinear least squares estimation method for the logarithmic autoregressive conditional duration (Log-ACD) model. We establish the strong consistency and asymptotic normality of our estimator under weak moment conditions suitable for applications involving heavy-tailed distributions. We also discuss inference for the Log-ACD model and for Log-ACD models with exogenous variables. Our results translate easily to Log-GARCH models. Both a simulation study and a real data analysis are conducted to show the usefulness of our results.
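A sketch of the estimation idea for a Log-ACD(1,1) specification, x_t = exp(psi_t) * eps_t with psi_t = omega + alpha * log(x_{t-1}) + beta * psi_{t-1} and E[eps_t] = 1; this parameterisation follows one common variant of the model and may differ from the paper's.

```python
# Nonlinear least squares for a Log-ACD(1,1) model on simulated durations;
# with E[eps] = 1, E[x_t | past] = exp(psi_t), so we minimise the sum of
# squared deviations of x_t from exp(psi_t).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n, omega, alpha, beta = 3000, 0.1, 0.1, 0.7
x, psi = np.empty(n), 0.0
for t in range(n):
    x[t] = np.exp(psi) * rng.exponential(1.0)     # unit-mean errors
    psi = omega + alpha * np.log(x[t]) + beta * psi

def sse(theta):
    om, al, be = theta
    psi, s = 0.0, 0.0
    for t in range(1, n):
        psi = om + al * np.log(x[t - 1]) + be * psi
        s += (x[t] - np.exp(psi)) ** 2            # NLS criterion
    return s

fit = minimize(sse, x0=[0.0, 0.05, 0.5], method="Nelder-Mead")
print("NLS estimates (omega, alpha, beta):", np.round(fit.x, 3))
```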

7.
The conventional sequential four-step procedure of travel demand forecasting has been widely adopted by practitioners. However, it suffers from inconsistent treatment of travel times and congestion effects across the steps of the procedure. A combined travel demand model overcomes these problems by integrating travel, destination, mode, and route choice. In this paper, standard sensitivity analysis for nonlinear programming is employed to conduct the sensitivity analysis of the combined travel demand model. Explicit expressions are developed for the derivatives of the model variables with respect to perturbations of the input variables and parameters. These derivatives can be used to assess changes in the solution variables and in various system performance measures when the network characteristics change slightly. To illustrate the usefulness of the sensitivity expressions, five applications (identification of critical parameters, paradox analysis, access control, destination choice, and error and uncertainty analysis) are presented with numerical results.

8.
Based on an analysis of electric vehicle charging behavior, the charging service system of a charging station is modeled as a finite-capacity M/M/C/N queueing system in which customers may be impatient (renege) or balk. By solving the balance equations of the model, the steady-state queue-length distribution and several other performance measures are obtained. Building on these performance measures, an objective function for the optimal design of the charging service system is proposed from the perspectives of economic and social benefit, and the optimization model is illustrated with a numerical example. The influence of the model parameters on the optimal number of charging piles is analyzed; the results show that the system capacity, the customers' impatience rate, and the balking probability all have effects on the optimal design that cannot be ignored.
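A sketch of the steady-state computation, treating the system as a birth-death chain; the balking rule (joining probability 1 - k/N once all piles are busy) and all rates are illustrative, not the paper's calibration.

```python
# Steady-state of an M/M/C/N queue with reneging (rate delta per waiting
# customer) and balking (join probability 1 - k/N when all piles are busy),
# solved as a birth-death chain. All rates are illustrative.
import numpy as np

lam, mu, delta = 8.0, 1.5, 0.3       # arrival, service, reneging rates
C, N = 5, 12                         # charging piles, system capacity

def birth(k):                        # effective arrival rate in state k
    return lam if k < C else lam * (1 - k / N)

def death(k):                        # services plus reneging of waiters
    return min(k, C) * mu + max(k - C, 0) * delta

# p_k proportional to prod_{i<k} birth(i) / death(i+1)
p = np.ones(N + 1)
for k in range(1, N + 1):
    p[k] = p[k - 1] * birth(k - 1) / death(k)
p /= p.sum()

k = np.arange(N + 1)
L, Lq = k @ p, np.maximum(k - C, 0) @ p
print(f"mean in system {L:.2f}, mean waiting {Lq:.2f}, P(full) {p[N]:.4f}")
```

Sweeping C over candidate values and evaluating a cost function of these measures is then a direct way to search for the optimal number of piles.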

9.
We propose an efficient global sensitivity analysis method for multivariate outputs that applies polynomial chaos-based surrogate models to vector projection-based sensitivity indices. These projection-based sensitivity indices, which are powerful measures of the comprehensive effects of model inputs on multiple outputs, are conventionally estimated by Monte Carlo simulations whose computational cost is prohibitive for many practical problems. Here, the projection-based sensitivity indices are efficiently estimated via two polynomial chaos-based surrogates: a polynomial chaos expansion and a proper orthogonal decomposition-based polynomial chaos expansion. Several numerical examples with various types of outputs are tested to validate the proposed method; the results demonstrate that the polynomial chaos-based surrogates are more efficient than Monte Carlo simulation at estimating the sensitivity indices, even for models with a large number of outputs. Furthermore, for models with only a few outputs, polynomial chaos expansion alone is preferable, whereas for models with a large number of outputs, implementation with proper orthogonal decomposition is the best approach.
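For the scalar-output special case, the coefficient-based computation looks roughly as follows, assuming a Legendre basis on [-1, 1]^d fitted by least squares; degree, sample size, and test function are illustrative, and the paper's projection-based indices for multiple outputs are not reproduced here.

```python
# Sobol' indices from a least-squares polynomial chaos expansion (PCE):
# each basis term's coefficient contributes a known share of the variance.
import itertools
import numpy as np
from numpy.polynomial.legendre import legval

rng = np.random.default_rng(5)
d, deg, n = 2, 3, 400
X = rng.uniform(-1, 1, (n, d))
y = X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.25 * X[:, 0] * X[:, 1]   # test model

# total-degree multi-indices and the regression matrix of Legendre products
alphas = [a for a in itertools.product(range(deg + 1), repeat=d) if sum(a) <= deg]
def phi(a, X):
    cols = [legval(X[:, i], [0] * ai + [1]) for i, ai in enumerate(a)]
    return np.prod(cols, axis=0)
A = np.column_stack([phi(a, X) for a in alphas])
c, *_ = np.linalg.lstsq(A, y, rcond=None)

# variance contribution of term a is c_a^2 * prod_i 1/(2*a_i + 1)
gamma = np.array([np.prod([1.0 / (2 * ai + 1) for ai in a]) for a in alphas])
var_terms = c ** 2 * gamma
total = var_terms[1:].sum()                     # exclude the constant term
for i in range(d):
    first = sum(v for a, v in zip(alphas, var_terms)
                if a[i] > 0 and all(aj == 0 for j, aj in enumerate(a) if j != i))
    print(f"S_{i+1} = {first / total:.3f}")
```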

10.
周四军, 杨超. 《经济数学》, 2013, 30(1): 45-49
We introduce partial least squares (PLS), a nonparametric method based on high-dimensional projection, into the study of the factors influencing commercial bank efficiency. The modeling process remedies a shortcoming of previous studies, in which the failure to control for time trends and macroeconomic variables produced insignificant models and counterintuitive variable signs. The results show that, in our setting of few observations and severe collinearity among the explanatory variables, the three components extracted by PLS achieve an explanatory power of 0.984173 and good predictive performance. Endogenous variables such as financial indicators affect the efficiency of banks with different ownership structures in clearly different ways, whereas the effects of exogenous variables such as ownership structure and macroeconomic conditions show no such differences.

11.
Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on response variables. In this paper, a new kernel function derived from orthogonal polynomials is proposed for support vector regression (SVR). With this kernel function, the Sobol' global sensitivity indices can be computed analytically from the coefficients of the surrogate model built by SVR. To improve the performance of the SVR model, a kernel function iteration scheme is further introduced. Owing to its excellent generalization performance and its structural risk minimization principle, SVR is well suited to nonlinear prediction problems with small samples. The proposed method is therefore capable of computing the Sobol' indices with a relatively limited number of model evaluations. The method is examined on several examples, and the sensitivity analysis results are compared with those of sparse polynomial chaos expansion (PCE), high dimensional model representation (HDMR), and a Gaussian radial basis function (RBF) SVR model. The examples show that the proposed method is an efficient approach for the GSA of complex models.
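A sketch of the kernel construction, assuming scikit-learn's callable-kernel interface for SVR; the kernel form (a product of univariate Legendre expansions) is one plausible reading of "derived from orthogonal polynomials", and the analytic Sobol' computation from the surrogate coefficients is not reproduced here.

```python
# An SVR surrogate with a kernel built from Legendre polynomials, passed to
# sklearn as a callable that returns the Gram matrix K(X, Y).
import numpy as np
from numpy.polynomial.legendre import legval
from sklearn.svm import SVR

def legendre_kernel(X, Y, degree=3):
    """k(x, y) = prod_i (1 + sum_p P_p(x_i) P_p(y_i)), inputs scaled to [-1, 1]."""
    K = np.ones((X.shape[0], Y.shape[0]))
    for i in range(X.shape[1]):
        s = np.ones((X.shape[0], Y.shape[0]))
        for p in range(1, degree + 1):
            cp = [0] * p + [1]                           # coefficients of P_p
            s += np.outer(legval(X[:, i], cp), legval(Y[:, i], cp))
        K *= s
    return K

rng = np.random.default_rng(6)
X = rng.uniform(-1, 1, (100, 2))
y = np.sin(np.pi * X[:, 0]) + 0.5 * X[:, 1]
model = SVR(kernel=legendre_kernel).fit(X, y)
print("training R^2:", model.score(X, y))
```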

12.
This paper deals with estimating a production frontier and measuring efficiency from a panel data set. First, it proposes an alternative method for estimating a production frontier on a short panel data set. The method is based on so-called mean-and-covariance structure analysis, which is closely related to the generalized method of moments. One advantage of the method is that it allows us to investigate the presence of correlations between individual effects and exogenous variables without requiring instruments uncorrelated with the individual effects, as instrumental variable estimation does. Another advantage is that the method is well suited to panel data sets with a small number of periods. Second, the paper considers how to recover individual efficiency levels from the estimates obtained by the mean-and-covariance structure analysis. Since the individual effects are viewed here as latent variables, they can be estimated as factor scores, i.e., weighted sums of the observed variables. We illustrate the proposed methods by estimating a stochastic production frontier on a short panel of French fruit growers.

13.
Topology optimisation models usually contain a large number of design variables and correspondingly lead to large matrices (the pseudo-load matrix and the sensitivity matrix) in sensitivity analysis. We apply singular value decomposition (SVD) to these matrices to analyse their inner structure. Based on the information obtained, we perform model reduction by transforming the design variables into a lower-dimensional space. Numerical examples illustrate the advocated theoretical concept. Reasonable results are obtained based on only a fraction of all design variables. (© 2010 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)
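A minimal sketch of the reduction step: decompose the matrix, keep the leading right singular vectors, and represent designs in the reduced space; the matrix here is random, standing in for the pseudo-load or sensitivity matrix.

```python
# SVD-based reduction: keep the leading right singular vectors of the
# (here random) sensitivity matrix and represent designs in that subspace.
import numpy as np

rng = np.random.default_rng(7)
decay = np.exp(-0.05 * np.arange(1000))
S = rng.normal(size=(200, 1000)) * decay      # stand-in sensitivity matrix
U, sig, Vt = np.linalg.svd(S, full_matrices=False)

r = int(np.sum(sig > 1e-3 * sig[0]))          # rank from singular value decay
V_r = Vt[:r].T                                # basis of the reduced space
print(f"kept {r} of {S.shape[1]} design directions")

q = rng.normal(size=r)                        # reduced design variables
x_full = V_r @ q                              # reconstruct a full design vector
print("full design vector shape:", x_full.shape)
```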

14.
In this paper, we propose a novel forecasting method using grey system theory for traffic-related emissions at a national level. In our tests, grey relational analysis was used to identify time lags between input and output variables. We introduce a multivariate nonlinear grey model based on the kernel method to improve the accuracy of traffic-related emission prediction. By solving a convex optimization problem instead of using an ordinary least squares estimation, the proposed model overcomes the limitations of the classic grey forecasting models. A model confidence set test on the results of forecasting traffic-related emissions in European Union member countries showed that the proposed model is markedly superior to robust linear regression and support vector regression. Based on the non-methane volatile organic compounds from road transport and the relevant factors of the emissions from 2004 to 2016, a more stringent European Union emission reduction commitment for road transport is suggested for each year from 2020 to 2029. We also investigate the advantages of the proposed model via analyses of convergence, robustness, and sensitivity.
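For background, the classic GM(1,1) grey model that the paper generalizes can be sketched in a few lines; the data are hypothetical, and the paper's kernel-based multivariate model replaces the least-squares step below with a convex program.

```python
# Classic GM(1,1): accumulate the series, fit the grey differential
# equation by least squares, and forecast via its exponential solution.
import numpy as np

x0 = np.array([102.0, 98.0, 95.5, 94.0, 91.2, 89.8])   # hypothetical emissions
x1 = np.cumsum(x0)                                     # accumulated series
z1 = 0.5 * (x1[1:] + x1[:-1])                          # background values

# grey differential equation x0_k + a * z1_k = b, fitted by least squares
B = np.c_[-z1, np.ones(len(z1))]
(a, b), *_ = np.linalg.lstsq(B, x0[1:], rcond=None)

k = np.arange(len(x0) + 2)                             # two steps ahead
x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
x0_hat = np.r_[x1_hat[0], np.diff(x1_hat)]
print("fitted + forecast:", np.round(x0_hat, 2))
```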

15.
This paper reconciles two sets of literature on the interacting ecological and economic impacts of invasive grass species and cattle stocking. We model cattle as optimal foragers, satiation foragers, and proportional foragers in order to understand the impact that each assumption imposes on predicted economic and ecological outcomes. Through this model-sensitivity (as opposed to parameter-sensitivity) analysis, we identify three main drivers of plant invasions: exogenous forces such as climate change or nitrogen deposition, poor land management decisions, and a misalignment of incentives between cattle and ranchers even when ranchers behave optimally.

16.
We propose a procedure, based on a latent variable model, for comparing two partitions of different units described by the same set of variables. The null hypothesis is that the two partitions come from the same underlying mixture model. We define a method of “projecting” partitions using a supervised classification method: one partition is taken as a reference, and the individuals of the second data set are allocated to the clusters of the reference partition. This yields two partitions of the same units of the second data set, the original and the projected one, and we evaluate their difference with the usual measures of association. The empirical distributions of the association measures are derived by simulation.
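A sketch of the projection step, assuming k-nearest-neighbour classification and the adjusted Rand index as the association measure; both are illustrative choices within the procedure described above.

```python
# Project a reference partition onto a second data set with a supervised
# classifier, then score agreement between the original and projected
# partitions of the second data set.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import adjusted_rand_score

X1, _ = make_blobs(n_samples=150, centers=3, random_state=0)   # reference set
X2, _ = make_blobs(n_samples=150, centers=3, random_state=1)   # second set

part1 = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X1)
part2 = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X2)

# allocate the second data set to the clusters of the reference partition
projected = KNeighborsClassifier(5).fit(X1, part1).predict(X2)
print("agreement (ARI):", adjusted_rand_score(part2, projected))
```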

17.
This paper uses a fully nonparametric approach to estimate efficiency measures for primary care units, incorporating the effect of (exogenous) environmental factors. The methodology allows us to account for different types of variables (continuous and discrete) describing the main characteristics of the patients served by those providers. In addition, we use an extension of this nonparametric approach to deal with undesirable outputs in the data, represented by the rates of hospitalization for ambulatory care sensitive conditions (ACSC). The empirical results show that all the exogenous variables considered have a significant and negative effect on the efficiency estimates.

18.
On the measurement of technical efficiency in the public sector
Existing measures of technical inefficiency obtained through linear programming models in the public sector do not properly control for environmental variables that affect production. It is shown that the consequence of not controlling for these fixed factors is biased estimates of technical efficiency. This paper extends the mathematical programming approach to frontier estimation known as Data Envelopment Analysis to allow for environmental variables. The modified model is then contrasted, on simulated data, with the existing model that purportedly controls for exogenous factors in measuring public sector efficiency. The results provide evidence that the existing Data Envelopment Analysis model overestimates the level of technical inefficiency and that the modified model developed in this paper does a better job of controlling for exogenous factors. The modified model is also applied to analyze the technical efficiency of school districts.
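One common way to build an environmental variable into the DEA program itself is to treat it as a non-discretionary input that is not scaled by the efficiency score, in the style of Banker and Morey; whether this matches the paper's modified model exactly is an assumption, and the data below are simulated.

```python
# Input-oriented DEA with an environmental factor as a non-discretionary
# input: theta scales only the discretionary input.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(8)
n = 25
xd = rng.uniform(1, 2, n)            # discretionary input
xe = rng.uniform(0, 1, n)            # environmental (fixed) factor
y = xd * (1 + xe) * rng.uniform(0.6, 1.0, n)

def score(o):
    c = np.r_[1.0, np.zeros(n)]                       # minimise theta
    A = np.vstack([
        np.r_[-xd[o], xd],                            # lam.xd <= theta*xd_o
        np.r_[0.0, xe],                               # lam.xe <= xe_o (no theta)
        np.r_[0.0, -y],                               # lam.y  >= y_o
    ])
    b = np.array([0.0, xe[o], -y[o]])
    res = linprog(c, A_ub=A, b_ub=b,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

theta = np.array([score(o) for o in range(n)])
print("mean efficiency:", theta.mean().round(3))
```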

19.
In this paper, we study the fuzzification of Weingartner's pure capital rationing model and its analysis. We develop a primal-dual pair, based on a t-norm/t-conorm relation for the constraints and the objective function, for a pure capital rationing problem that is fully fuzzified except for the project selection variables. We define the α-interval under which weak duality is proved. We perform sensitivity analysis for a change in a budget level or in a cash flow level of a non-basic as well as a basic variable. We analyze the problem based on duality and complementary slackness results. We illustrate the proposed model through computational analysis and interpret the results.

20.
We propose numerical and graphical methods for outlier detection in hierarchical Bayes modeling and analysis of repeated measures regression data from multiple subjects; the data from a single subject are generically called a “curve”. The first stage of our model has curve-specific regression coefficients with possibly autoregressive errors of a prespecified order. The first-stage regression vectors for different curves are linked in a second-stage modeling step, possibly involving additional regression variables. Detection of the stage at which a curve appears to be an outlier, and of the magnitude and specific component of the violation at that stage, is accomplished by embedding the null model in a larger parametric model that can accommodate such unusual observations. We give two examples to illustrate the diagnostics, develop a BUGS program to compute them using MCMC techniques, and examine the sensitivity of the conclusions to the prior modeling assumptions.
