Similar literature
A total of 20 similar documents were retrieved (search time: 46 ms).
1.
《Optimization》2012,61(11):2441-2454
Inverse data envelopment analysis (InDEA) is a well-known approach for short-term forecasting of a given decision-making unit (DMU). Conventional InDEA models use a production possibility set (PPS) composed of the evaluated DMU with its current inputs and outputs. In this paper, we replace the perturbed DMU in the PPS with a modified DMU whose inputs and outputs have been updated, since the DMU with its current data cannot be used to establish the new PPS. Moreover, classical DEA models such as InDEA assume perfect knowledge of the input and output values, but in many situations this assumption is not realistic; the observed data can then be given as interval numbers instead of crisp numbers. Here, we extend the InDEA model to interval data for evaluating the relative efficiency of DMUs. The proposed models determine the lower and upper bounds of the inputs of a given DMU separately when its interval outputs are changed in the performance analysis process. The aim is to keep the current interval efficiency of the DMU under consideration, and the interval efficiencies of the remaining DMUs, fixed or even improved relative to their current values.
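As background for the production possibility set and efficiency notions used above, the following is a minimal sketch of the standard input-oriented CCR envelopment model solved with scipy.optimize.linprog. The data are purely illustrative, and the inverse and interval extensions described in the abstract are not implemented here.

```python
# Minimal input-oriented CCR DEA sketch (constant returns to scale).
# Illustrative data only; this is the building block that inverse DEA
# (InDEA) and its interval extensions start from.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 4.0], [5.0, 2.0]])  # inputs, one row per DMU
Y = np.array([[1.0], [1.0], [1.5], [2.0]])                      # outputs, one row per DMU

def ccr_efficiency(o, X, Y):
    """Envelopment form: min theta s.t. X'lambda <= theta*x_o, Y'lambda >= y_o, lambda >= 0."""
    n, m = X.shape
    s = Y.shape[1]
    # decision variables: [theta, lambda_1, ..., lambda_n]
    c = np.r_[1.0, np.zeros(n)]
    # input constraints: sum_j lambda_j x_ij - theta * x_io <= 0
    A_in = np.c_[-X[o].reshape(m, 1), X.T]
    b_in = np.zeros(m)
    # output constraints: -sum_j lambda_j y_rj <= -y_ro
    A_out = np.c_[np.zeros((s, 1)), -Y.T]
    b_out = -Y[o]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.fun

for o in range(X.shape[0]):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o, X, Y):.3f}")
```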

2.
This paper discusses and reviews the use of super-efficiency approach in data envelopment analysis (DEA) sensitivity analyses. It is shown that super-efficiency score can be decomposed into two data perturbation components of a particular test frontier decision making unit (DMU) and the remaining DMUs. As a result, DEA sensitivity analysis can be done in (1) a general situation where data for a test DMU and data for the remaining DMUs are allowed to vary simultaneously and unequally and (2) the worst-case scenario where the efficiency of the test DMU is deteriorating while the efficiencies of the other DMUs are improving. The sensitivity analysis approach developed in this paper can be applied to DMUs on the entire frontier and to all basic DEA models. Necessary and sufficient conditions for preserving a DMU’s efficiency classification are developed when various data changes are applied to all DMUs. Possible infeasibility of super-efficiency DEA models is only associated with extreme-efficient DMUs and indicates efficiency stability to data perturbations in all DMUs.  相似文献   
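The defining feature of the super-efficiency model is that the DMU under evaluation is excluded from its own reference set, so efficient DMUs can obtain scores above one and the model may become infeasible for extreme-efficient DMUs. A hedged sketch of that exclusion, with illustrative data only; the decomposition-based sensitivity analysis of the paper is not reproduced here.

```python
# Super-efficiency sketch: same CCR envelopment model, but the test DMU is
# dropped from the reference set, so efficient DMUs can score above 1 and
# the LP may be infeasible for extreme-efficient DMUs.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 4.0], [5.0, 2.0]])
Y = np.array([[1.0], [1.0], [1.5], [2.0]])

def super_efficiency(o, X, Y):
    keep = [j for j in range(X.shape[0]) if j != o]   # exclude the test DMU
    Xr, Yr = X[keep], Y[keep]
    m, s, n = X.shape[1], Y.shape[1], len(keep)
    c = np.r_[1.0, np.zeros(n)]
    A_ub = np.vstack([np.c_[-X[o].reshape(m, 1), Xr.T],
                      np.c_[np.zeros((s, 1)), -Yr.T]])
    b_ub = np.r_[np.zeros(m), -Y[o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.fun if res.success else float("nan")   # infeasible => nan

for o in range(X.shape[0]):
    print(f"DMU {o}: super-efficiency = {super_efficiency(o, X, Y):.3f}")
```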

3.
《Applied Mathematical Modelling》2014,38(7-8):2028-2036
Conventional DEA models assume deterministic, precise and non-negative input and output observations. However, real applications may be characterized by observations that are given as intervals and include negative numbers. For instance, the consumption of electricity in decentralized energy resources may be either negative or positive, depending on the heat consumption. Likewise, the heat losses in distribution networks may lie within a certain range, depending on, e.g., external temperature and real-time outtake. Complementing earlier work that addresses interval data and negative data separately, we propose a comprehensive evaluation process for measuring the relative efficiencies of a set of DMUs in DEA. In our general formulation, the interval bounds may carry different signs. The proposed method determines upper and lower bounds for the technical efficiency through the limits of the intervals after decomposition. Based on the interval scores, DMUs are then classified into three classes: strictly efficient, weakly efficient and inefficient. An intuitive ranking approach is presented for the respective classes. The approach is demonstrated through an application to the evaluation of bank branches.

4.
The objective of the present paper is to propose a novel pair of data envelopment analysis (DEA) models for measuring the relative efficiencies of decision-making units (DMUs) in the presence of non-discretionary factors and imprecise data. In contrast to traditional DEA, the proposed interval DEA approach measures the efficiency of each DMU relative to the inefficiency frontier (also called the input frontier), yielding the worst relative efficiency or pessimistic efficiency. In traditional DEA, by contrast, the efficiency of each DMU is measured relative to the efficiency frontier, yielding the best relative efficiency or optimistic efficiency. The pair of proposed interval DEA models accounts for crisp, ordinal, and interval data, as well as non-discretionary factors, simultaneously when measuring the relative efficiencies of DMUs. Two numerical examples are provided to illustrate the applicability of the interval DEA models.

5.
This paper proposes a statistical approach to detecting influential observations in deterministic nonparametric Data Envelopment Analysis (DEA) models. We use the bootstrap to estimate the underlying distribution of the efficiency scores, avoiding unrealistic assumptions about the true distribution. To measure whether a specific DMU is truly influential, we employ relative entropy to detect the change in the distribution after the DMU in question is removed, and a statistical test is applied to assess significance. Two examples from the literature are discussed and comparisons to previous methods are provided.
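A rough sketch of the influence-detection idea: bootstrap the efficiency-score distribution with and without a candidate DMU and compare the two distributions by relative entropy (KL divergence). For transparency, the efficiency measure below is the single-input, single-output CCR score (ratio divided by the maximum ratio), and the data, bin choices, and lack of a significance test are all simplifications of the paper's procedure.

```python
# Bootstrap + relative-entropy sketch for spotting influential DMUs.
import numpy as np
from scipy.stats import entropy

rng = np.random.default_rng(0)
x = rng.uniform(1, 5, size=30)            # illustrative single input per DMU
y = np.clip(0.8 * x + rng.normal(0, 0.3, 30), 0.1, None)   # illustrative single output

def efficiencies(x, y):
    ratio = y / x
    return ratio / ratio.max()             # CCR efficiency in the 1-input/1-output case

def bootstrap_scores(x, y, B=500):
    scores = []
    for _ in range(B):
        idx = rng.integers(0, len(x), len(x))
        scores.append(efficiencies(x[idx], y[idx]))
    return np.concatenate(scores)

bins = np.linspace(0, 1, 21)
full = np.histogram(bootstrap_scores(x, y), bins=bins, density=True)[0] + 1e-9

for j in [0, 1, 2]:                        # test the first few DMUs
    mask = np.arange(len(x)) != j
    reduced = np.histogram(bootstrap_scores(x[mask], y[mask]), bins=bins,
                           density=True)[0] + 1e-9
    print(f"DMU {j}: KL(full || without DMU {j}) = {entropy(full, reduced):.4f}")
```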

6.
Missing data and time-dependent covariates often arise simultaneously in longitudinal studies, and directly applying classical approaches may result in a loss of efficiency and biased estimates. To deal with this problem, we propose weighted corrected estimating equations under the missing-at-random mechanism and then develop a shrinkage empirical likelihood estimation approach for the parameters of interest when time-dependent covariates are present. This procedure improves efficiency over the generalized estimating equations approach with a working-independence assumption by combining the independent estimating equations with the additional information extracted from the estimating equations that the independence assumption excludes. The contribution from the remaining estimating equations is weighted according to the likelihood that each equation is a consistent estimating equation and the information it carries. We show that the estimators are asymptotically normally distributed and that the empirical likelihood ratio statistic and its profile counterpart follow central chi-square distributions asymptotically when evaluated at the true parameter. The practical performance of our approach is demonstrated through numerical simulations and data analysis.

7.
In this paper, stochastic models in data envelopment analysis (DEA) are developed by taking into account the possibility of random variations in input-output data, and dominance structures on the DEA envelopment side are used to incorporate the model builder's preferences and to discriminate efficiencies among decision making units (DMUs). The efficiency measure for a DMU is defined via joint dominantly probabilistic comparisons of inputs and outputs with other DMUs and can be characterized by solving a chance-constrained programming problem. Deterministic equivalents are obtained for multivariate symmetric random errors and for a single random factor in the production relationships. The goal programming technique is utilized in deriving linear deterministic equivalents and their dual forms. The relationship between the general stochastic DEA models and the conventional DEA models is also discussed.

8.
Stochastic differential equations with mixed effects provide a means to model intra-individual and inter-individual variability in repeated experiments leading to longitudinal data. We consider N i.i.d. stochastic processes defined by a stochastic differential equation with linear mixed effects that are discretely observed. We study a parametric framework with distributions leading to explicit approximate likelihood functions and investigate the asymptotic behavior of the estimators under the asymptotic framework in which the number N of individuals (trajectories) and the number n of observations per individual tend to infinity within a fixed time interval. The estimation method is assessed on simulated data for various models.
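A minimal sketch of the data-generating setup this abstract refers to: N i.i.d. trajectories of an SDE whose drift contains a random effect, observed on a discrete grid. The specific model (an Ornstein-Uhlenbeck process with a random rate) and all parameter values are assumptions for illustration; the paper's approximate-likelihood estimators are not implemented.

```python
# Simulate N discretely observed trajectories of an SDE with a random drift effect.
import numpy as np

rng = np.random.default_rng(1)
N, n, T = 50, 200, 1.0            # individuals, observations per individual, horizon
dt = T / n
mu_phi, omega, sigma = 2.0, 0.5, 0.3

def simulate_trajectory(phi):
    """Euler-Maruyama for dX_t = -phi * X_t dt + sigma dW_t, with X_0 = 1."""
    x = np.empty(n + 1)
    x[0] = 1.0
    for k in range(n):
        x[k + 1] = x[k] - phi * x[k] * dt + sigma * np.sqrt(dt) * rng.normal()
    return x

phi_i = rng.normal(mu_phi, omega, size=N)                    # one random effect per individual
data = np.array([simulate_trajectory(p) for p in phi_i])     # shape (N, n+1)
print(data.shape, data[:, -1].mean())
```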

9.
Efficiency is a relative measure because it can be measured within different ranges. Traditional data envelopment analysis (DEA) measures the efficiencies of decision-making units (DMUs) within the range of less than or equal to one. The corresponding efficiencies are referred to as the best relative efficiencies; they measure the best performances of DMUs and determine an efficiency frontier. If the efficiencies are instead measured within the range of greater than or equal to one, the worst relative efficiencies can be used to measure the worst performances of DMUs and determine an inefficiency frontier. In this paper, the efficiencies of DMUs are measured within an interval whose upper bound is set to one and whose lower bound is determined by introducing a virtual anti-ideal DMU whose performance is definitely inferior to that of any DMU. The efficiencies then all turn out to be intervals and are referred to as interval efficiencies; they combine the best and the worst relative efficiencies in a reasonable manner to give an overall measurement and assessment of DMU performance. The new DEA model with upper and lower bounds on the efficiencies is referred to as the bounded DEA model, and it can incorporate the decision maker's (DM's) or assessor's preference information on input and output weights. A Hurwicz criterion approach is introduced to compare and rank the interval efficiencies of DMUs, and a numerical example is examined using the proposed bounded DEA model to show its potential application and validity.
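The Hurwicz-criterion ranking step mentioned above is easy to illustrate: given an interval efficiency [lower, upper] for each DMU, rank by alpha * upper + (1 - alpha) * lower, where alpha reflects the decision maker's optimism. The interval values below are invented for illustration, not the output of the bounded DEA model itself.

```python
# Hurwicz-criterion ranking of interval efficiencies (illustrative values).
interval_eff = {          # DMU: (lower bound, upper bound)
    "A": (0.62, 0.95),
    "B": (0.55, 1.00),
    "C": (0.70, 0.85),
}
alpha = 0.5               # optimism level chosen by the decision maker

hurwicz = {dmu: alpha * ub + (1 - alpha) * lb for dmu, (lb, ub) in interval_eff.items()}
for dmu, score in sorted(hurwicz.items(), key=lambda kv: kv[1], reverse=True):
    print(f"DMU {dmu}: Hurwicz score = {score:.3f}")
```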

10.
To take sample biases and skewness in the observations into account, practitioners frequently weight their observations according to some marginal distribution. The present paper demonstrates that such weighting can indeed improve the estimation. Studying contingency tables, estimators for marginal distributions are proposed under the assumption that another marginal is known. It is shown that the weighted estimators have a strictly smaller asymptotic variance whenever the two marginals are correlated. The finite sample performance is illustrated in a simulation study. As an application to traffic accident data, the method allows for correcting a well-known bias in the observed injury severity distribution.
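A minimal sketch of the weighting idea, under simple invented data: observations of a two-way table are reweighted so that one margin matches a known population marginal, and the other margin is then estimated from the weighted sample. The known marginal and the sampling bias below are assumptions for illustration; the paper's asymptotic-variance results are not reproduced.

```python
# Reweight observations so the margin of A matches a known marginal,
# then estimate the margin of B from the weighted sample.
import numpy as np

rng = np.random.default_rng(2)
A = rng.choice([0, 1], size=1000, p=[0.7, 0.3])                 # biased sampling of A
B = np.where(A == 1, rng.choice([0, 1], size=len(A), p=[0.4, 0.6]),
                     rng.choice([0, 1], size=len(A), p=[0.8, 0.2]))

known_marginal_A = np.array([0.5, 0.5])                         # assumed known population marginal
observed_marginal_A = np.bincount(A, minlength=2) / len(A)
weights = known_marginal_A[A] / observed_marginal_A[A]          # per-observation weights

naive = np.bincount(B, minlength=2) / len(B)
weighted = np.array([weights[B == k].sum() for k in (0, 1)]) / weights.sum()
print("naive estimate of P(B):   ", naive)
print("weighted estimate of P(B):", weighted)
```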

11.
We introduce a novel strategy to address the issue of demand estimation in single-item, single-period stochastic inventory optimisation problems. Our strategy analytically combines confidence interval analysis and inventory optimisation. We assume that the decision maker is given a set of past demand samples, and we employ confidence interval analysis to identify a range of candidate order quantities that, with prescribed confidence probability, includes the true optimal order quantity for the underlying stochastic demand process with unknown stationary parameter(s). In addition, for each candidate order quantity identified, our approach produces an upper and a lower bound for the associated cost. We apply this approach to three demand distributions in the exponential family: binomial, Poisson, and exponential. For two of these distributions we also discuss the extension to the case of unobserved lost sales. Numerical examples are presented in which we show how our approach complements existing frequentist strategies (e.g. those based on maximum likelihood estimators) as well as Bayesian ones.
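A sketch of the general idea for the Poisson case: build a confidence interval for the demand parameter from past samples, then map each endpoint to a newsvendor order quantity via the critical fractile. The exact (chi-square based) interval for a Poisson mean is used below, and the cost and sample values are made up; this illustrates the approach rather than the authors' exact procedure or cost bounds.

```python
# Confidence interval for Poisson demand rate -> range of candidate order quantities.
import numpy as np
from scipy.stats import chi2, poisson

rng = np.random.default_rng(3)
demand_samples = rng.poisson(20, size=25)        # illustrative past demand observations
cu, co = 5.0, 2.0                                # underage and overage costs (assumed)
beta = cu / (cu + co)                            # newsvendor critical fractile
alpha = 0.05                                     # 1 - confidence level

n, S = len(demand_samples), demand_samples.sum()
lam_lo = chi2.ppf(alpha / 2, 2 * S) / (2 * n)          # exact lower bound for the rate
lam_hi = chi2.ppf(1 - alpha / 2, 2 * S + 2) / (2 * n)  # exact upper bound for the rate

q_lo = int(poisson.ppf(beta, lam_lo))            # order quantities at the interval endpoints
q_hi = int(poisson.ppf(beta, lam_hi))
print(f"lambda in [{lam_lo:.2f}, {lam_hi:.2f}] -> order quantity in [{q_lo}, {q_hi}]")
```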

12.
The inverse DEA model addresses the estimation of outputs for a given input level while keeping a decision-making unit's efficiency index (i.e., its optimal value) unchanged. Building on the inverse DEA model, this paper studies output estimation when the efficiency index is to be improved, discusses the case involving stochastic factors, transforms the problem into a chance-constrained linear programming problem, and illustrates it with a numerical example.

13.
The Gaussian hidden Markov model (HMM) is widely used for the analysis of heterogeneous continuous multivariate longitudinal data. To robustify this approach with respect to possible elliptical heavy-tailed departures from normality, due to the presence of outliers, spurious points, or noise (collectively referred to as bad points herein), the contaminated Gaussian HMM is introduced here. The contaminated Gaussian distribution is an elliptical generalization of the Gaussian distribution and allows for automatic detection of bad points in the same natural way in which observations are typically assigned to the latent states in the HMM context. Once the model is fitted, each observation has a posterior probability of belonging to a particular state and, within each state, of being a bad point or not. In addition to the parameters of the classical Gaussian HMM, each state has two further parameters, both with a specific and useful interpretation: one controls the proportion of bad points and one specifies their degree of atypicality. A sufficient condition for the identifiability of the model is given, an expectation-conditional maximization algorithm is outlined for parameter estimation, and various operational issues are discussed. Using a large-scale simulation study, as well as an illustrative artificial dataset, we demonstrate the effectiveness of the proposed model in comparison with HMMs based on other elliptical distributions, and we also evaluate the performance of some well-known information criteria in selecting the true number of latent states. The model is finally used to fit data on criminal activities in Italian provinces. Supplementary materials for this article are available online.
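A sketch of the per-state building block described above: a contaminated Gaussian density is a two-component mixture of Gaussians sharing the same mean, one with an inflated covariance, and the posterior probability of being a bad point follows from Bayes' rule. The parameterization and values below are assumptions for illustration; the HMM layer and the ECM estimation are not reproduced.

```python
# Contaminated Gaussian density for one state and the posterior bad-point probability.
import numpy as np
from scipy.stats import multivariate_normal

mu = np.zeros(2)
Sigma = np.array([[1.0, 0.3], [0.3, 1.0]])
delta, eta = 0.9, 10.0                      # proportion of good points, covariance inflation

good = multivariate_normal(mu, Sigma)       # N(mu, Sigma) for typical points
bad = multivariate_normal(mu, eta * Sigma)  # N(mu, eta*Sigma) for atypical points

def prob_bad(x):
    """Posterior probability that observation x is a bad point within this state."""
    num = (1 - delta) * bad.pdf(x)
    return num / (delta * good.pdf(x) + num)

print(prob_bad(np.array([0.2, -0.1])))      # typical point: probability close to 0
print(prob_bad(np.array([5.0, -4.0])))      # atypical point: probability close to 1
```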

14.
The purpose of this paper is to develop a new DEA model with an interval efficiency. The original DEA model evaluates each DMU optimistically, while another model, called "inverted DEA", evaluates each DMU pessimistically; however, there is essentially no relation between DEA and inverted DEA. We therefore formulate a DEA model with an interval efficiency whose two end points are the efficiencies obtained from the optimistic and the pessimistic viewpoints. With the same idea, we deal with the interval inefficiency model, which is the inverse of the interval efficiency model. Finally, we extend the proposed DEA model to interval data and fuzzy data.

15.
We discuss efficient Bayesian estimation of dynamic covariance matrices in multivariate time series through a factor stochastic volatility model. In particular, we propose two interweaving strategies to substantially accelerate convergence and mixing of standard MCMC approaches. Similar to marginal data augmentation techniques, the proposed acceleration procedures exploit nonidentifiability issues which frequently arise in factor models. Our new interweaving strategies are easy to implement and come at almost no extra computational cost; nevertheless, they can boost estimation efficiency by several orders of magnitude as is shown in extensive simulation studies. To conclude, the application of our algorithm to a 26-dimensional exchange rate dataset illustrates the superior performance of the new approach for real-world data. Supplementary materials for this article are available online.

16.
Consider the two problems of simulating observations and of estimating expectations and normalizing constants for multiple distributions. First, we present a self-adjusted mixture sampling method, which accommodates both adaptive serial tempering and a generalized Wang–Landau algorithm. The set of distributions is combined into a labeled mixture, with the mixture weights depending on the initial estimates of the log normalizing constants (or free energies). Observations are then generated by Markov transitions, and the free energy estimates are adjusted online by stochastic approximation. We propose two stochastic approximation schemes obtained by Rao–Blackwellization of the commonly used scheme, and we derive the optimal choice of a gain matrix, which yields the minimum asymptotic variance for free energy estimation, in a simple and feasible form. Second, we develop an offline method, locally weighted histogram analysis, for estimating free energies and expectations using all of the data simulated from multiple distributions by either self-adjusted mixture sampling or other sampling algorithms. This method can be computationally much faster than a currently used global method, with little sacrifice of statistical efficiency, especially when a large number of distributions are involved. We provide both theoretical results and numerical studies to demonstrate the advantages of the proposed methods.

17.
Data envelopment analysis (DEA) is a data-oriented approach for evaluating the performance of a set of peer entities called decision-making units (DMUs), whose performance is determined on the basis of multiple measures. Traditional DEA, which is based on the concept of the efficiency frontier (output frontier), determines the best efficiency score that can be assigned to each DMU. Based on these scores, DMUs are classified into DEA-efficient (optimistic efficient) or DEA-non-efficient (optimistic non-efficient) units, and the DEA-efficient DMUs determine the efficiency frontier. There is a comparable approach that uses the concept of the inefficiency frontier (input frontier) to determine the worst relative efficiency score that can be assigned to each DMU. DMUs on the inefficiency frontier are designated DEA-inefficient or pessimistic inefficient, and those that do not lie on the inefficiency frontier are declared DEA-non-inefficient or pessimistic non-inefficient. In this paper, we argue that both relative efficiencies should be considered simultaneously, and that any approach considering only one of them will be biased. For measuring the overall performance of the DMUs, we propose to integrate both efficiencies in the form of an interval, and we call the proposed DEA models for efficiency measurement the bounded DEA models. In this way, the efficiency interval provides the decision maker with all the possible values of efficiency, reflecting various perspectives. A numerical example is presented to illustrate the application of the proposed DEA models.

18.
The log-normal distribution is a common choice for modeling positively skewed data arising in many practical applications. This article introduces a new method of constructing a confidence interval for a common mean shared by several log-normal populations through confidence distributions, which combine all information from independent sources. We develop a non-trivial weighting approach that takes account of the sample variances of related quantities to enhance efficiency. Combined confidence distributions are used to construct confidence intervals for the common mean, and a simplified version of one existing method is also proposed. We conduct simulation studies to evaluate the performance of the proposed methods in comparison with existing methods. Our simulation results show that the weighting approach yields shorter interval lengths than the non-weighting approach, and the newly proposed confidence intervals perform very well in terms of empirical coverage probability and average interval length. Finally, the application of the proposed methodology is illustrated through three real data examples.
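As a crude large-sample stand-in for the idea, not the confidence-distribution procedure of the paper: each log-normal population yields an estimate of the common mean exp(mu + sigma^2/2) with a delta-method variance, and the per-population estimates are combined with inverse-variance weights so that more precise populations carry more weight. Data, weights, and the normal-approximation interval are all simplifying assumptions.

```python
# Inverse-variance weighting of per-population estimates of a common log-normal mean.
import numpy as np

rng = np.random.default_rng(4)
# three populations sharing the same log-normal parameters (hence the same mean)
samples = [rng.lognormal(mean=1.0, sigma=0.5, size=n) for n in (40, 25, 60)]

estimates, variances = [], []
for x in samples:
    logx = np.log(x)
    n, mu_hat, s2 = len(x), logx.mean(), logx.var(ddof=1)
    theta_hat = np.exp(mu_hat + s2 / 2)                          # estimated log-normal mean
    var_hat = theta_hat**2 * (s2 / n + s2**2 / (2 * (n - 1)))    # delta-method variance
    estimates.append(theta_hat)
    variances.append(var_hat)

w = 1 / np.array(variances)
theta_combined = np.sum(w * np.array(estimates)) / w.sum()
se = np.sqrt(1 / w.sum())
print(f"combined mean = {theta_combined:.3f}, approx. 95% CI = "
      f"[{theta_combined - 1.96 * se:.3f}, {theta_combined + 1.96 * se:.3f}]")
```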

19.
In this paper, we consider the application of order statistics to establishing optimality in stochastic and heuristic optimization algorithms. A method for estimating the minimum value, with an associated confidence interval, is developed using the formalism of the theory of order statistics for i.i.d. variables, and we examine it by computer simulation. We build a method for estimating confidence intervals for the minimum value using order statistics, implemented for optimality testing and stopping in Markov-type random search algorithms. The efficiency of this approach is discussed using the results of its application to stochastic approximation and simulated annealing.
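The elementary order-statistics fact behind such stopping rules for pure random search: for n i.i.d. evaluations, the probability that the best one lies in the best p-fraction of the objective's value distribution is 1 - (1 - p)^n. The sketch below only computes this confidence and the run length needed for a target confidence; it is not the paper's minimum-value estimator or its confidence interval.

```python
# Confidence that the best of n random evaluations lies in the best p-quantile,
# and the run length needed to reach a target confidence gamma.
import math

def confidence(n, p):
    """P(best of n i.i.d. draws falls in the best p-quantile of the objective)."""
    return 1 - (1 - p) ** n

def required_samples(p, gamma):
    """Smallest n with confidence at least gamma."""
    return math.ceil(math.log(1 - gamma) / math.log(1 - p))

print(confidence(n=100, p=0.05))              # about 0.994
print(required_samples(p=0.01, gamma=0.95))   # 299 evaluations
```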

20.
A procedure is presented for finding maximum likelihood estimates of the parameters of a mixture of two Weibull distributions. Estimation of a nonlinear discriminant function on the basis of a small sample size is considered. Through simulation experiments, the total probabilities of misclassification and percentage biases are evaluated and discussed. The problem of updating a nonlinear discriminant function based on two Weibull distributions is studied in situations where the additional observations are mixed or classified. The performance of all results is investigated using a series of simulation experiments by means of relative efficiencies.

