Similar Literature
20 similar documents retrieved.
1.
While estimating production technology in a primal framework, production functions, input and output distance functions, and input requirement functions are widely used in the empirical literature. This paper shows that these popular primal-based models are algebraically equivalent in the sense that they can be derived from the same underlying transformation (production possibility) function. Assuming that producers maximize profit, we show that in all cases except one, ordinary least squares (OLS) gives inconsistent estimates, irrespective of whether the production, input distance, or input requirement function is used. Based on several specifications of the production and input distance function models, we conclude that the input elasticities and returns to scale can be estimated consistently using instruments on only one regressor. No instruments are needed if either it is assumed that producers know the technology entirely (including the so-called error term) or a system approach is used. We use Norwegian timber harvesting data to illustrate the workings of the various model specifications.
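A minimal sketch, not the paper's estimator, of the core econometric point: when the (log) input is chosen with knowledge of the productivity shock, OLS on the production function is inconsistent, while a single instrument restores consistency. The coefficients and the instrument below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 100_000, 0.7                          # true input elasticity (assumed)

z = rng.normal(size=n)                          # instrument, e.g. an input-price shifter
u = rng.normal(size=n)                          # productivity shock, known to the firm
x = 0.8 * z + 0.5 * u + rng.normal(size=n)      # log input responds to the shock
y = beta * x + u                                # log output

cxy = np.cov(x, y)
b_ols = cxy[0, 1] / cxy[0, 0]                   # inconsistent: cov(x, u) != 0
b_iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]  # consistent with one instrument

print(f"true={beta:.3f}  OLS={b_ols:.3f}  IV={b_iv:.3f}")
```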

2.
This paper addresses the specification and estimation of a multiple-output, multiple-input production technology in the presence of technical inefficiency. The primary focus is on primal formulations. Several competing specifications, such as the production function, the input (output) distance function, and the input requirement function, are considered. We show that all these specifications come from the same transformation function and are algebraically identical. We also show that: (i) unless the transformation function is separable (i.e., outputs are separable from inputs), the input (output) ratios in the input (output) distance function cannot be treated as exogenous (uncorrelated with technical inefficiency), resulting in inconsistent estimates of the input (output) distance function parameters; and (ii) even if the input (output) ratios are exogenous, estimation of the input (output) distance function will yield inconsistent parameter estimates if outputs (inputs) are endogenous. We address endogeneity and instrumental variable issues in detail in the context of flexible (translog) functional forms. Estimation of several specifications using both single-equation and system approaches is discussed using Norwegian dairy farming data.
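In hedged outline (the paper's own treatment is more general), all of these specifications start from a transformation function $T(\mathbf{x}, \mathbf{y}) = 0$. The input distance function, for example, is

$$
D_I(\mathbf{x},\mathbf{y}) \;=\; \max\{\lambda > 0 : T(\mathbf{x}/\lambda,\ \mathbf{y}) = 0\} \;\ge\; 1 ,
$$

and linear homogeneity in inputs gives $\ln D_I(\mathbf{x},\mathbf{y}) = \ln x_1 + \ln D_I(\mathbf{x}/x_1, \mathbf{y})$, so that, writing $u \equiv \ln D_I \ge 0$ for inefficiency,

$$
-\ln x_1 \;=\; \ln D_I\!\left(x_2/x_1,\dots,x_J/x_1,\ \mathbf{y}\right) \;-\; u ,
$$

where the right-hand side is typically given a translog form. Point (i) above says precisely that the ratios $x_j/x_1$ on the right-hand side need not be uncorrelated with $u$ unless outputs are separable from inputs.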

3.
In a Data Envelopment Analysis model, some of the weights used to compute the efficiency of a unit can take zero or negligible values despite the importance of the corresponding input or output. This paper offers an approach to preventing inputs and outputs from being ignored in the DEA assessment under a multiple-input, multiple-output VRS environment, building on an approach introduced in Allen and Thanassoulis (2004) for single-input, multiple-output CRS cases. The proposed method is based on the idea of introducing unobserved DMUs, created by adjusting the input and output levels of certain observed relatively efficient DMUs in a manner that reflects a combination of technical information and the decision maker's value judgements. In contrast to many alternative techniques used to constrain weights and/or improve envelopment in DEA, this approach allows one to impose local information on production trade-offs that is in line with the general VRS technology. The suggested procedure is illustrated using real data.
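For orientation, a minimal sketch of the baseline input-oriented VRS (BCC) envelopment LP that such assessments start from; the unobserved-DMU construction itself is the paper's contribution and is not implemented here. Data are toy values.

```python
import numpy as np
from scipy.optimize import linprog

def bcc_input_efficiency(X, Y, o):
    """Input-oriented BCC score for DMU o. X: (m, n) inputs, Y: (s, n) outputs."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                   # minimize theta
    A_in = np.hstack([-X[:, [o]], X])             # X @ lam <= theta * x_o
    A_out = np.hstack([np.zeros((s, 1)), -Y])     # Y @ lam >= y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1)  # VRS: sum(lam) = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun                                # theta* in (0, 1]

X = np.array([[2.0, 4.0, 8.0, 4.0]])              # one input, four DMUs
Y = np.array([[1.0, 3.0, 5.0, 2.0]])              # one output
print([round(bcc_input_efficiency(X, Y, j), 3) for j in range(4)])
```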

4.
Electric utilities commonly use econometric modelling for energy and power forecasting. In order to accommodate the uncertainties contained in the input variables, such forecasts are frequently made in three parts: a base forecast, assumed to be the most likely, and high and low forecasts, often arbitrarily spaced on either side of the base forecast, giving a band of possible values. Usually a single point forecast is then used rather than a distribution of possible forecast values. This paper describes how commercially available spreadsheet software was used to convert an econometric energy forecast into probabilistic demand and energy forecasts that incorporate weather variation as well as other uncertain inputs.
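A minimal sketch of the idea, assuming hypothetical distributions for the uncertain inputs (the paper does this in spreadsheet software; Python is used here for concreteness):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
hdd = rng.normal(3000, 250, n)         # heating degree-days (assumed spread)
growth = rng.normal(0.02, 0.01, n)     # economic growth rate (assumed spread)

# illustrative econometric equation: base load plus weather and economy terms
energy_gwh = 1200 + 0.15 * hdd + 8000 * growth

lo, base, hi = np.percentile(energy_gwh, [10, 50, 90])
print(f"low={lo:.0f}  base={base:.0f}  high={hi:.0f} GWh")
```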

5.
A recurring problem in regression analysis is estimating the comparative importance of the predictors in the model. This work considers the 'net effects', or shares of the predictors in the coefficient of multiple determination, which is a widely used characteristic of the quality of a regression model. Estimating the net effects can be difficult because multicollinearity among the regressors can produce negative inputs to multiple determination. This paper suggests estimating the incremental net effects as successive marginal inputs to the coefficient of multiple determination, and it is shown that the results coincide with estimation by cooperative game theory. This approach guarantees positive and interpretable net effects, which offers a better interpretation of the regression results.
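A minimal brute-force sketch of the cooperative-game view: Shapley-value shares of R² across correlated regressors, which are non-negative because R² can only rise as regressors are added. Data and coefficients are synthetic, and the enumeration is only sensible for a handful of predictors.

```python
import numpy as np
from itertools import combinations
from math import factorial

def r2(X, y, idx):
    if not idx:
        return 0.0
    Z = np.column_stack([np.ones(len(y)), X[:, list(idx)]])
    resid = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return 1.0 - resid.var() / y.var()

def shapley_net_effects(X, y):
    p = X.shape[1]
    phi = np.zeros(p)
    for j in range(p):
        others = [k for k in range(p) if k != j]
        for size in range(p):
            w = factorial(size) * factorial(p - size - 1) / factorial(p)
            for S in combinations(others, size):
                phi[j] += w * (r2(X, y, S + (j,)) - r2(X, y, S))
    return phi

rng = np.random.default_rng(2)
z = rng.normal(size=500)
X = np.column_stack([z + 0.3 * rng.normal(size=500),   # two collinear predictors
                     z + 0.3 * rng.normal(size=500),
                     rng.normal(size=500)])
y = X @ np.array([1.0, 1.0, 0.5]) + rng.normal(size=500)
phi = shapley_net_effects(X, y)
print(phi.round(3), "sum =", phi.sum().round(3))       # shares add up to full R^2
```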

6.
Regression and linear programming provide the basis for popular techniques for estimating technical efficiency. Regression-based approaches are typically parametric and can be either deterministic or stochastic, where the latter allows for measurement error. In contrast, linear programming models are nonparametric and allow multiple inputs and outputs. The purported disadvantage of the regression-based models is their inability to allow multiple outputs without additional data on input prices. In this paper, deterministic cross-sectional and stochastic panel-data regression models that allow multiple inputs and outputs are developed. Notably, technical efficiency can be estimated using regression models in multiple-input, multiple-output environments without input price data. We provide multiple examples, including a Monte Carlo analysis.
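For context, a minimal sketch of the classic single-output corrected OLS (COLS) deterministic frontier that regression-based efficiency estimation builds on; the paper's multiple-output extension is not reproduced here. Data are simulated.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300
x = rng.uniform(1.0, 10.0, size=(n, 2))          # two inputs
u = rng.exponential(0.2, n)                      # one-sided inefficiency
log_y = 0.5 + 0.4 * np.log(x[:, 0]) + 0.5 * np.log(x[:, 1]) - u

Z = np.column_stack([np.ones(n), np.log(x)])
b = np.linalg.lstsq(Z, log_y, rcond=None)[0]     # OLS fits the average function
resid = log_y - Z @ b
efficiency = np.exp(resid - resid.max())         # shift so the frontier envelops
print("mean technical efficiency:", efficiency.mean().round(3))
```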

7.
In conventional data envelopment analysis it is assumed that the input-versus-output status of each chosen performance measure is known. In some situations, however, it is difficult to determine whether a variable should be treated as an input or an output; such variables act as both and are called flexible measures. This paper proposes a new model, based on a translog output distance function, for classifying inputs and outputs and evaluating the performance of decision-making units in the presence of flexible measures. Monte Carlo simulation is applied to compare the proposed model with a recent model from the literature. The results show that the efficiencies from our model are statistically closer to the true efficiencies and have a higher rank correlation with them. Results obtained from the simulated data also show a high correlation between our model and the recent model.
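For reference, a standard translog output distance function of the kind the paper builds on (the classification machinery for flexible measures is the paper's own contribution) is

$$
\ln D_O = \alpha_0 + \sum_i \alpha_i \ln x_i + \sum_r \beta_r \ln y_r
+ \tfrac{1}{2}\sum_i\sum_j \alpha_{ij}\ln x_i \ln x_j
+ \tfrac{1}{2}\sum_r\sum_s \beta_{rs}\ln y_r \ln y_s
+ \sum_i\sum_r \gamma_{ir}\ln x_i \ln y_r ,
$$

with homogeneity of degree $+1$ in outputs imposed through $\sum_r \beta_r = 1$, $\sum_s \beta_{rs} = 0$, and $\sum_r \gamma_{ir} = 0$.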

8.
In this paper, Data Envelopment Analysis is used to determine the target output levels (and corresponding required inputs) for the multiple facilities under the responsibility of a Top Decision Maker. The proposed approach aims at minimizing the input costs incurred to attain specified total, company-wide output levels. Specific features of the proposed approach are the consideration of multiple technologies, lower and upper bounds on the production levels of each plant, and the possibility of shutting down a plant, with the corresponding costs, if necessary. The consideration of multiple technologies, in particular, takes into account the existence of heterogeneity among the company's facilities. In that scenario, grouping all plants under a single technology gives misleading results.

9.
In this paper we consider radial DEA models without inputs (or without outputs), and radial DEA models with a single constant input (or with a single constant output). We demonstrate that (i) a CCR model without inputs (or without outputs) is meaningless; (ii) a CCR model with a single constant input (or with a single constant output) coincides with the corresponding BCC model; (iii) a BCC model with a single constant input (or a single constant output) collapses to a BCC model without inputs (or without outputs); and (iv) all BCC models, including those without inputs (or without outputs), can be condensed to models having one less variable (the radial efficiency score) and one less constraint (the convexity constraint).

10.
In data envelopment analysis (DEA), operating units are compared on their outputs relative to their inputs. The identification of an appropriate input–output set is of decisive significance if assessment of the relative performance of the units is not to be biased. This paper reports on a novel approach used for identifying a suitable input–output set for assessing central administrative services at universities. A computer-supported group support system was used with an advisory board to enable the analysts to extract information pertaining to the boundaries of the unit of assessment and the corresponding input–output variables. The approach provides for a more comprehensive and less inhibited discussion of input–output variables to inform the DEA model.

11.
As a measure of overall technical inefficiency, the Directional Distance Function (DDF) introduced by Chambers, Chung, and Färe ties potential output expansion and input contraction together through a single parameter. By duality, the DDF is related to a measure of profit inefficiency, calculated as the normalized deviation between optimal and actual profit at market prices. As we show, in the most usual case the associated normalization equals the sum of the actual revenue and the actual cost of the assessed firm; consequently, the profit inefficiency measure associated with the DDF has no obvious economic interpretation. In contrast, in this paper we allow outputs to expand and inputs to contract by different proportions. This results in a modified DDF that retains most of the properties of the original DDF. The corresponding dual problem has a much simpler interpretation as the lost profit on (average) outlay, which can be decomposed into a technical and an allocative inefficiency component. In addition, an overall measure of technical inefficiency at the industry level is introduced, resorting to the direction corresponding to the average input–output bundle.
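In hedged summary of the standard definitions (the modified DDF is the paper's contribution and is not restated here), the DDF and its Nerlovian dual read

$$
\vec{D}(\mathbf{x},\mathbf{y};\mathbf{g}_x,\mathbf{g}_y)
= \max\{\beta : (\mathbf{x}-\beta \mathbf{g}_x,\ \mathbf{y}+\beta \mathbf{g}_y) \in T\},
$$

$$
\frac{\Pi(\mathbf{p},\mathbf{w}) - (\mathbf{p}\cdot\mathbf{y} - \mathbf{w}\cdot\mathbf{x})}{\mathbf{p}\cdot\mathbf{g}_y + \mathbf{w}\cdot\mathbf{g}_x}
\;\ge\; \vec{D}(\mathbf{x},\mathbf{y};\mathbf{g}_x,\mathbf{g}_y).
$$

With the common choice $(\mathbf{g}_x,\mathbf{g}_y) = (\mathbf{x},\mathbf{y})$, the normalizing denominator becomes $\mathbf{p}\cdot\mathbf{y} + \mathbf{w}\cdot\mathbf{x}$, i.e., actual revenue plus actual cost, which is exactly the term the paper argues lacks an economic interpretation.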

12.
This work introduces a new information-theoretic methodology for choosing variables and their time lags in a prediction setting, particularly when neural networks are used in nonlinear modelling. The first contribution is the Cross Entropy Function (XEF), proposed to select input variables and their lags when composing the input vector of black-box prediction models. The proposed XEF method is more appropriate than the commonly applied Cross Correlation Function (XCF) when the relationship between the input and output signals comes from a nonlinear dynamic system. The second contribution is a method that minimizes the Joint Conditional Entropy (JCE) between the input and output variables by means of a Genetic Algorithm (GA). The aim is to take into account the dependence among the input variables when selecting the most appropriate set of inputs for a prediction problem. In short, these methods can be used to assist the selection of input training data that carry the information needed to predict the target data. The proposed methods are applied to a petroleum engineering problem: predicting oil production. Experimental results obtained with a real-world dataset demonstrate the feasibility and effectiveness of the method.
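A minimal sketch of the spirit of the approach, using a histogram estimate of mutual information to score candidate lags where a linear XCF would fail on a nonlinear dependence; the paper's actual XEF and GA-based JCE minimization are not reproduced.

```python
import numpy as np

def mutual_information(a, b, bins=16):
    """Histogram (plug-in) estimate of mutual information in nats."""
    pxy, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

rng = np.random.default_rng(4)
x = rng.normal(size=2000)
y = np.sin(np.roll(x, 3)) + 0.1 * rng.normal(size=2000)   # nonlinear, lag 3

scores = {lag: mutual_information(np.roll(x, lag), y) for lag in range(6)}
print("selected lag:", max(scores, key=scores.get))        # expected: 3
```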

13.
Importance analysis aims at finding the contributions of the inputs to the output uncertainty. For structural models involving correlated input variables, the variance contribution of an individual input variable is decomposed in this study into a correlated contribution and an uncorrelated contribution. Based on point estimates, this work proposes a new algorithm for variance-based importance analysis with correlated input variables. Transformation of the input variables from correlation space to independence space, together with the computation of conditional distributions in the process, ensures that the correlation information is inherited correctly. Different point-estimate methods can be employed in the proposed algorithm, so the algorithm is adaptable and evolvable. The proposed algorithm is also applicable to uncertainty systems with multiple modes, and it avoids the sampling procedure, which usually incurs a heavy computational cost. The results of several examples show that the proposed algorithm is an effective tool for uncertainty analysis involving correlated inputs.
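A minimal sketch of the decomposition in the linear-model special case, computed by regression on Monte Carlo samples rather than the paper's sampling-free point-estimate scheme: each input's total variance contribution splits into an uncorrelated part (carried by the input's component orthogonal to the other input) and a correlated remainder.

```python
import numpy as np

def sq_corr(a, b):
    c = np.cov(a, b)
    return c[0, 1] ** 2 / (c[0, 0] * c[1, 1])

rng = np.random.default_rng(5)
n = 200_000
common = rng.normal(size=n)
x1 = common + 0.5 * rng.normal(size=n)       # x1 and x2 share a common factor
x2 = common + 0.5 * rng.normal(size=n)
y = 2.0 * x1 + 1.0 * x2                      # linear model output

X = np.column_stack([x1, x2])
for i in range(2):
    Z = np.column_stack([np.ones(n), X[:, 1 - i]])
    resid = X[:, i] - Z @ np.linalg.lstsq(Z, X[:, i], rcond=None)[0]
    total = sq_corr(y, X[:, i])              # total first-order contribution
    uncorr = sq_corr(y, resid)               # part orthogonal to the other input
    print(f"x{i+1}: total={total:.3f}  uncorrelated={uncorr:.3f}  "
          f"correlated={total - uncorr:.3f}")
```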

14.
Efficiency in data envelopment analysis (DEA) is defined as the weighted sum of outputs divided by the weighted sum of inputs. To calculate the maximum efficiency score, each decision-making unit (DMU) assigns its own weights to inputs and outputs, so classical DEA allows full weight flexibility. As a result, the inputs or outputs of some DMUs can receive zero weights even when they are important, and are thus neglected in the evaluation; some DMUs may also be classified as efficient when they are in fact inefficient. This leads to unrealistic results, and weight restrictions are commonly imposed in DEA to eliminate the problem of weight flexibility. In this study we propose a new model, the correlation-restricted data envelopment analysis (ARIII(COR)) model, whose aim is to take the relations between variables into account through correlation coefficients. These relations are added as constraints to the CCR and BCC models; the correlation coefficients are used in restrictions on the inputs and outputs individually and in combination, since inputs and outputs are related through their degree of correlation in production. Previous studies did not take the relationship between input and output variables into account, so weight restrictions were based only on expert opinion or some other objective method. In our study, the weights for the input and output variables are determined according to the correlations between them. The proposed method differs from others in the literature because the efficiency scores are calculated at the level of the correlations between the input and/or output variables.
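A minimal sketch of the generic device involved: a multiplier-form CCR model with an assurance-region restriction on the ratio of input weights. The paper derives such bounds from input–output correlations; the bounds and data below are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 4.0, 8.0, 4.0],       # input 1 across 4 DMUs
              [3.0, 1.0, 2.0, 5.0]])      # input 2
Y = np.array([[1.0, 3.0, 5.0, 2.0]])      # one output
m, n = X.shape
s = Y.shape[0]

def ccr_restricted(o, lo=0.2, hi=5.0):    # lo/hi: assumed bounds on v1/v2
    c = np.r_[np.zeros(m), -Y[:, o]]                  # maximize u'y_o
    A_ub = np.vstack([
        np.hstack([-X.T, Y.T]),                       # u'y_j - v'x_j <= 0
        np.r_[1.0, -hi, np.zeros(s)][None, :],        # v1 <= hi * v2
        np.r_[-1.0, lo, np.zeros(s)][None, :],        # v1 >= lo * v2
    ])
    b_ub = np.zeros(n + 2)
    A_eq = np.r_[X[:, o], np.zeros(s)][None, :]       # normalization v'x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(1e-6, None)] * (m + s))
    return -res.fun

print([round(ccr_restricted(o), 3) for o in range(4)])
```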

15.
Multi-attribute auctions incorporate price and multiple quality attributes on top of traditional price-only auctions and have been widely applied in many fields. For the case where the inputs have almost no prior structure and the number of bidders is sufficiently large, this paper builds on earlier multi-attribute auction research by extending the single input to multiple inputs, partitioning the attributes of the item (price, completion time, labour, quality, and other factors) into inputs and outputs, and designing an efficient multi-attribute second-score auction mechanism based on data envelopment analysis. Compared with other methods, data envelopment analysis handles multiple-input, multiple-output problems effectively and, combined with multi-objective programming, helps the procurer maximize its own payoff while identifying suppliers with better overall performance. The mechanism satisfies individual rationality and incentive compatibility and, compared with the traditional second-score auction mechanism, maximizes the auctioneer's payoff while still attracting bidders.
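A minimal sketch of the classic second-score rule underlying the mechanism, with a hypothetical linear scoring function standing in for the paper's DEA-based score:

```python
import numpy as np

def second_score_auction(bids, value_weights):
    """bids: (n, k+1) array, columns = quality attributes..., price."""
    quality, price = bids[:, :-1], bids[:, -1]
    scores = quality @ value_weights - price      # buyer's score per bid
    order = np.argsort(scores)[::-1]
    winner, runner_up = order[0], order[1]
    # the winner must deliver any (quality, price) combination matching the
    # second-highest score, which is what makes truthful bidding optimal
    return winner, scores[runner_up]

bids = np.array([[8.0, 6.0, 50.0],                # quality1, quality2, price
                 [7.0, 9.0, 55.0],
                 [5.0, 5.0, 30.0]])
w = np.array([4.0, 3.0])                          # assumed value weights
print(second_score_auction(bids, w))
```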

16.
Researchers rely on the distance function to model multiple-product production using multiple inputs. A stochastic directional distance function (SDDF) allows for noise in potentially all input and output variables. Yet, when estimated, the direction selected will affect the functional estimates because deviations from the estimated function are minimized in the specified direction. Specifically, the parameters of the parametric SDDF are point identified when the direction is specified; we show that they are only set identified when multiple directions are considered. Further, the set of identified parameters can be narrowed via data-driven approaches that restrict the directions considered. We demonstrate a similar narrowing of the identified parameter set for a shape-constrained nonparametric method, where the shape constraints impose standard features of a cost function such as monotonicity and convexity. Our Monte Carlo simulation studies reveal significant improvements, as measured by out-of-sample radial mean squared error, in functional estimates when we use a directional distance function with an appropriately selected direction and the errors are uncorrelated across variables. We show that these benefits increase as the correlation in error terms across variables increases. This correlation is a type of endogeneity that is common in production settings. From our Monte Carlo simulations we conclude that selecting a direction that is approximately orthogonal to the estimated function in the central region of the data gives significantly better estimates relative to the directions commonly used in the literature. For practitioners, our results imply that selecting a direction vector with non-zero components for all variables that may have measurement error provides a significant improvement in the estimator's performance. We illustrate these results using cost and production data from samples of approximately 500 US hospitals per year operating in 2007, 2008, and 2009, respectively, and find that the shape-constrained nonparametric methods provide a significant increase in flexibility over second-order local approximation parametric methods.
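A minimal sketch of the direction-dependence at issue: fitting the same linear function by least squares on deviations measured in different directions g yields different parameter estimates when both variables carry noise. The data-generating process is synthetic.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
n = 500
x_true = rng.uniform(0, 10, n)
ex, ey = rng.normal(0, 0.5, n), rng.normal(0, 0.5, n)   # noise in both axes
x, y = x_true + ex, 2.0 + 0.8 * x_true + ey

def directional_sse(params, gx, gy):
    a, b = params
    beta = (a + b * x - y) / (gy + b * gx)   # signed deviation along (-gx, gy)
    return np.sum(beta ** 2)

for gx, gy in [(0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]:
    a, b = minimize(directional_sse, x0=[1.0, 1.0], args=(gx, gy),
                    method="Nelder-Mead").x
    print(f"g=({gx}, {gy}) -> a={a:.3f}, b={b:.3f}")
```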

17.
This paper is concerned with using the digital computer as an aid to the solution of a class of multivariable design problems and reports some experience. The input to the computer is a set of design variables and a set of interactions between these variables, while the output consists of a hierarchy of groups of highly interacting variables. Both the preparation of the input and the interpretation of the output depend on subjective decisions by the designer using the computer. A specific application was therefore investigated in which different sets of inputs were compiled by three different designers, using 18, 38, 58 and 185 variables. The four outputs were analysed to compare their usefulness as a design aid in proportion to the effort required for their preparation. An IBM 7044 computer was used and a guide is given for the estimation of program running times on this machine.

18.
Both technology and market demands within the high-tech electronics manufacturing industry change rapidly. Accurate and efficient estimation of the cycle-time (CT) distribution remains a critical driver of on-time delivery and associated customer-satisfaction metrics in these complex manufacturing systems. Simulation models are often used to emulate these systems in order to estimate parameters of the CT distribution. However, the execution time of such simulation models can be excessively long, limiting the number of runs that can be executed to quantify the impact of potential operational changes. One solution is simulation metamodeling: building a closed-form mathematical expression that approximates the input–output relationship implied by the simulation model, based on simulation experiments run in advance at selected design points. Metamodels can then be evaluated 'on demand' in a spreadsheet environment to answer what-if questions without running lengthy simulations. Most previous simulation metamodeling approaches have focused on estimating the mean CT as a function of a single input variable (i.e., throughput). In this paper, we demonstrate the feasibility of a quantile-regression-based metamodeling approach, which allows estimation of CT quantiles as a function of multiple input variables (e.g., throughput, product mix, and various distributional parameters of time-between-failures, repair time, setup time, and loading and unloading times). Empirical results demonstrate the efficacy of the approach in a realistic simulation model representative of a semiconductor manufacturing system.
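A minimal sketch of a quantile-regression metamodel, assuming synthetic stand-ins for the simulation runs and a hypothetical congestion-style basis function; the paper's design of experiments and simulator are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
n = 400
throughput = rng.uniform(0.5, 0.95, n)        # design points: utilization
mix = rng.uniform(0.0, 1.0, n)                # design points: product mix
# synthetic cycle times: congestion grows sharply with utilization
ct = 10 / (1 - throughput) + 5 * mix + rng.gamma(2.0, 2.0, n)

X = np.column_stack([np.ones(n), 1 / (1 - throughput), mix])
q = 0.90                                      # target quantile

def pinball(beta):
    r = ct - X @ beta                         # pinball (check) loss for quantile q
    return np.where(r >= 0, q * r, (q - 1) * r).mean()

beta = minimize(pinball, x0=np.zeros(3), method="Nelder-Mead",
                options={"maxiter": 20_000}).x
what_if = np.array([1.0, 1 / (1 - 0.9), 0.5])  # what-if: 90% load, mix 0.5
print(f"estimated 90th-percentile cycle time: {what_if @ beta:.1f}")
```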

19.
Voting algorithms are used to arbitrate between the results of redundant modules in fault-tolerant systems. Inexact majority and weighted average voters have been used in many applications, although both have problems associated with them. Inexact majority voters require an application-specific 'voter threshold' value to be specified, whereas weighted average voters are unable to produce a benign output when no agreement exists between the voter inputs. Neither voter type is able to cope with uncertainties associated with the voter inputs. This paper introduces a novel voting scheme based on fuzzy set theory. It softens the harsh behaviour of the inexact majority voter in the neighbourhood of the 'voter threshold', and handles uncertainty and some multiple-error cases in the region defined by the fuzzy input variables. The voter assigns a fuzzy difference value to each pair of voter inputs based on their numerical distance. A set of fuzzy rules then determines a single fuzzy agreeability value for each input, describing how well it matches the other inputs. The agreeability of each voter input is then defuzzified to give a weighting value that determines its contribution to the voter output; the weights are used in a weighted average algorithm to calculate the voter's final output. The voter is experimentally evaluated from the point of view of safety and availability, and compared with the inexact majority voter in a Triple Modular Redundant structured framework. The impact of changing some fuzzy variables on the performance of the voter is also investigated. We show that the fuzzy voter gives more correct outputs (higher availability) than the inexact majority voter with both small and large errors, fewer incorrect outputs (higher safety) in the presence of small errors, and fewer benign outputs. The percentage of the majority voter's benign outputs that the fuzzy voter successfully resolves (producing correct outputs) exceeds the percentage it resolves unsuccessfully (producing incorrect outputs). Our results suggest that the fuzzy voter is a viable alternative to a traditional inexact voter in cases where the benefits of a large increase in availability and a considerable decrease in the number of benign outputs outweigh the cost of a small degradation in the safety performance of the system. The fuzzy voter is also a useful voting algorithm when arbitrating between the responses of dynamic channels of control systems incorporating uncertainties. This is the first reported use of a complete fuzzy voter in the context of fault tolerance.
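A minimal sketch of the voting idea; the membership function and aggregation below are simplified assumptions, not the paper's rule base.

```python
import numpy as np

def agreeability(d, small=1.0, large=4.0):
    """Fuzzy membership in 'agrees': 1 below `small`, 0 above `large`."""
    return np.clip((large - d) / (large - small), 0.0, 1.0)

def fuzzy_vote(inputs):
    x = np.asarray(inputs, dtype=float)
    w = np.empty(len(x))
    for i in range(len(x)):
        d = np.abs(x[i] - np.delete(x, i))   # distances to the other inputs
        w[i] = agreeability(d).mean()        # how well input i matches the rest
    if w.sum() == 0.0:                       # no agreement at all: benign output
        return None
    return float(np.dot(w, x) / w.sum())     # weighted-average voter output

print(fuzzy_vote([10.0, 10.2, 10.1]))   # close agreement -> ~10.1
print(fuzzy_vote([10.0, 10.2, 25.0]))   # outlier gets zero weight -> 10.1
print(fuzzy_vote([0.0, 50.0, 100.0]))   # no agreement -> None (benign)
```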

20.
Data envelopment analysis has become an important technique for modelling the relationship between inputs and outputs in the production process, particularly in the public sector. However, whenever measures of the output of public sector activity receive public attention, there is a strong possibility that there will be a feedback from the achieved output to the resources devoted to the activity. In other words, the level of resources is endogenous. The implications of such endogeneity for standard econometric estimation techniques are well known, and methods exist to deal with the problem. Most commentators have assumed that endogeneity poses no analogous problems for DEA because the technique merely places an envelope around feasible production possibilities. Using Monte Carlo simulation techniques, however, this paper shows that the efficiency estimates generated by DEA in the presence of endogeneity can be subject to bias, in the sense that inefficient units using low levels of the endogenous resource may be set tougher efficiency targets than equally inefficient units using more of the resource, particularly when sample sizes are small. The paper concludes that, in such circumstances, great caution should be exercised when comparing efficiency measures for units using different levels of the endogenous input.
