Similar Documents
20 similar documents found (search time: 906 ms)
1.
In DEA, there are two measures of technical efficiency with different characteristics: radial and non-radial. In this paper we combine them into a composite model called the “epsilon-based measure (EBM).” For this purpose we introduce two parameters that connect the radial and non-radial models. These two parameters are obtained from a newly defined affinity index between inputs or outputs, together with a principal component analysis of the affinity matrix. EBM thus takes into account the diversity of the input/output data and their relative importance in measuring technical efficiency.
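
For reference, a common statement of the input-oriented EBM model under constant returns to scale (as usually cited from Tone and Tsutsui; the weight vector $w^-$ and the parameter $\varepsilon_x$ are the two connecting parameters the abstract mentions):

$$\gamma^{*}=\min_{\theta,\lambda,s^{-}}\ \theta-\varepsilon_{x}\sum_{i=1}^{m}\frac{w_{i}^{-}s_{i}^{-}}{x_{io}}\quad\text{s.t.}\quad\sum_{j=1}^{n}\lambda_{j}x_{ij}+s_{i}^{-}=\theta x_{io},\quad\sum_{j=1}^{n}\lambda_{j}y_{rj}\ge y_{ro},\quad\lambda,s^{-}\ge 0,$$

with $\sum_i w_i^-=1$ and $0\le\varepsilon_x\le 1$: $\varepsilon_x=0$ recovers the radial (CCR) model, while $\varepsilon_x=1$ yields a non-radial, SBM-type measure.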

2.
In data analysis problems where the data are represented by vectors of real numbers, it is often the case that some of the data points have “missing values”, meaning that one or more entries of the vector describing the data point are not observed. In this paper, we propose a new approach to the imputation of missing binary values. The technique we introduce employs a “similarity measure” introduced by Anthony and Hammer (2006) [1]. We compare experimentally the performance of our technique with approaches based on the usual Hamming distance measure and on multiple imputation.
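
As a point of comparison, here is a minimal sketch of the Hamming-distance baseline mentioned above: a missing binary entry is imputed by majority vote over the nearest complete records. All names and data are illustrative; this is not the authors' similarity-measure technique.

```python
import numpy as np

def impute_hamming(data, row, col, k=3):
    """Impute the missing entry data[row, col] by majority vote over the
    k rows closest in Hamming distance on the columns observed in `row`."""
    obs = ~np.isnan(data[row])          # columns observed in the target row
    obs[col] = False
    candidates = [r for r in range(len(data))
                  if r != row and not np.isnan(data[r, col])]
    # Hamming distance restricted to the commonly observed entries
    dists = [(np.sum(data[r, obs] != data[row, obs]), r) for r in candidates]
    nearest = [r for _, r in sorted(dists)[:k]]
    return round(np.mean([data[r, col] for r in nearest]))

X = np.array([[1, 0, 1, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 0],
              [1, 0, np.nan, 1]])
print(impute_hamming(X, row=3, col=2))   # -> 1
```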

3.
This paper discusses a new meta-DEA approach to the problem of choosing direction vectors when estimating the directional distance function. The proposed model emphasizes finding the “direction” for productivity improvement rather than estimating the “score” of efficiency, focusing on “planning” over “evaluation”. In fact, the direction towards marginal profit maximization implies a step-by-step improvement and a “wait-and-see” decision process, which is more consistent with practical decision making. An empirical study of U.S. coal-fired power plants operating in 2011 validates the proposed model. The results show that the efficiency measure using the proposed direction is consistent with all other indices except the direction towards the profit-maximized benchmark. We conclude that marginal profit maximization is a useful guide for choosing the direction vector in the directional distance function.
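
For context, the directional distance function whose direction vector $g=(g_x,g_y)$ is being chosen here is standardly defined, for a technology set $T$, as

$$\vec{D}_T(x,y;g_x,g_y)=\sup\{\beta\ge 0:\ (x-\beta g_x,\,y+\beta g_y)\in T\},$$

so the meta-DEA approach selects the direction pointing toward marginal profit maximization instead of fixing $g$ a priori.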

4.
This article develops principles for evaluating the efficiency of a savings bank. It starts from the observation that such a bank is less profit oriented than a commercial bank: the customer is a vital stakeholder in a savings bank, implying a greater emphasis on customer service provision. We use data envelopment analysis (DEA) to capture the service orientation of savings banks, and thereby demonstrate how an evaluation of the performance of savings banks according to “service efficiency” differs from an evaluation based on the traditional “profit” or shareholder concept. We determine the number of Swedish savings banks that are “service efficient” as well as the average degree of service efficiency in this industry.

5.
Since the concept of “structural efficiency” first appeared in Farrell [Farrell, M.J., 1957. The measurement of productive efficiency. Journal of the Royal Statistical Society, Series A, Part III 120, 253–281], attempts have been made to derive measures for the performance of a group of production units (often referred to as an industry with many firms). Many empirical studies have used the technical efficiency of an average unit to measure the structural efficiency of a group, but researchers have been puzzled by the discrepancies between the average of individual efficiency scores and the performance of the group as a whole. In this paper, we point out that the “shadow price model” provides a useful framework for understanding the economic meaning of structural efficiency as well as its components. By recognizing these components, the puzzles related to the inconsistencies between individual and group performance can be readily resolved.

6.
Conventional data envelopment analysis (DEA) models assume real-valued inputs and outputs. In many applications, however, some inputs and/or outputs can only take integer values, and rounding the DEA solution to the nearest whole number can lead to misleading efficiency assessments and performance targets. This paper develops the axiomatic foundation for DEA with integer-valued data, introducing new axioms of “natural disposability” and “natural divisibility”. We derive a DEA production possibility set that satisfies the minimum extrapolation principle under our refined set of axioms, and present a mixed-integer linear programming formulation for computing efficiency scores. An empirical application to Iranian university departments illustrates the approach.
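
The following toy sketch shows the kind of mixed-integer program involved, for an input-oriented model under constant returns to scale with integer-valued input targets. The simplified formulation and all data are illustrative, not the paper's exact axiomatic model.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, PULP_CBC_CMD

X = [[4, 2], [6, 3], [9, 5]]   # integer inputs, one row per DMU
Y = [[2], [3], [4]]            # outputs
o = 2                          # DMU under evaluation
n, m, s = len(X), len(X[0]), len(Y[0])

prob = LpProblem("integer_dea", LpMinimize)
theta = LpVariable("theta", lowBound=0)
lam = [LpVariable(f"lam_{j}", lowBound=0) for j in range(n)]
# integer-valued input targets onto which the evaluated DMU is projected
xt = [LpVariable(f"xt_{i}", lowBound=0, cat="Integer") for i in range(m)]

prob += theta                  # minimize the radial contraction factor
for i in range(m):
    prob += lpSum(lam[j] * X[j][i] for j in range(n)) <= xt[i]
    prob += xt[i] <= theta * X[o][i]
for r in range(s):
    prob += lpSum(lam[j] * Y[j][r] for j in range(n)) >= Y[o][r]

prob.solve(PULP_CBC_CMD(msg=False))
print("efficiency:", theta.value())   # ~0.889 for this toy data
```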

7.
This paper presents a general approach to solving multi-objective programming problems with multiple decision makers. The proposal is based on optimizing a bi-objective measure of “collective satisfaction”. Group satisfaction is understood as a reasonable balance between the strengths of an agreeing and an opposing coalition, taking into account also the number of decision makers belonging to neither coalition. Accepting the vagueness of “collective satisfaction”, and even of “individual satisfaction”, fuzzy outranking relations and other fuzzy logic models are used.

8.
The Global Information Technology Report released by the World Economic Forum (WEF) employs the networked readiness index (NRI) to measure the global competitiveness of a country’s information and communication technology (ICT) diffusion. The final NRI overall scores are computed as an arithmetic mean of the composite pillar scores, which implicitly assumes that all pillars have constant weights; the Report does not explore the critical pillars or the causal relations among them for better decision making. To add value to the Report, this paper proposes an approach that uses data mining techniques and partial least squares path modeling to identify the critical pillars within the NRI and to explore the causal relations among them. An empirical analysis based on the latest Report (2009-2010) is carried out. The results show that “business usage,” “business readiness,” and “market environment” are the three root drivers, i.e., the critical pillars for influencing the NRI overall scores, whereas “government readiness,” itself mostly affected by “government usage,” is the foremost enabler of the NRI overall scores. Based on these results, policy makers are advised to allocate limited resources with priority to the three root drivers and the foremost enabler in order to leapfrog the global competitiveness of national ICT diffusion.

9.
In algorithmic randomness, when one wants to define a randomness notion with respect to some non-computable measure λ, a choice needs to be made. One approach is to allow randomness tests to access the measure λ as an oracle (which we call the “classical approach”). The other approach is the opposite one, where the randomness tests are completely effective and do not have access to the information contained in λ (we call this approach “Hippocratic”). While the Hippocratic approach is in general much more restrictive, there are cases where the two coincide. The first author showed in 2010 that in the particular case where the notion of randomness considered is Martin-Löf randomness and the measure λ is a Bernoulli measure, classical randomness and Hippocratic randomness coincide. In this paper, we prove that this result no longer holds for other notions of randomness, namely computable randomness and stochasticity.
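
To fix ideas, the standard definitions being contrasted are as follows: a classical Martin-Löf test relative to λ is a sequence $(U_n)$ of open sets, uniformly effectively open with oracle access to (a representation of) λ, such that

$$\lambda(U_n)\le 2^{-n}\ \text{for all } n,\qquad X\ \text{is random}\iff X\notin\bigcap_n U_n\ \text{for every test};$$

the Hippocratic variant keeps the same measure bound but requires the $(U_n)$ to be uniformly effectively open with no oracle.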

10.
Based on the minimal reduction strategy, Yang et al. (2011) developed a fixed-sum output data envelopment analysis (FSODEA) approach to evaluate the performance of decision-making units (DMUs) with fixed-sum outputs. Under such a strategy, however, all DMUs compete over the fixed-sum outputs with “no memory”, which results in evaluations against differing efficient frontiers. To address this problem, we propose an equilibrium efficiency frontier data envelopment analysis (EEFDEA) approach, by which all DMUs with fixed-sum outputs can be evaluated on a common platform (or equilibrium efficient frontier). The proposed approach has two stages. Stage 1 constructs a common evaluation platform via two strategies: an extended minimal adjustment strategy and an equilibrium competition strategy. The former ensures that originally efficient DMUs remain efficient, guaranteeing the existence of a common evaluation platform; the latter makes all DMUs reach a common equilibrium efficient frontier. Based on this frontier, Stage 2 then evaluates all DMUs with their original inputs and outputs. Finally, we illustrate the proposed approach with two numerical examples.

11.
We look at the problem of optimizing complex operations with incomplete information, where the missing information is revealed indirectly and imperfectly through historical decisions. Incomplete information is characterized by missing data elements governing operational behavior and unknown cost parameters. We assume some of this information may be indirectly captured in historical databases through flows characterizing resource movements. We can use these flows, or other quantities derived from them, as “numerical patterns” in our optimization model to reflect some of the incomplete information. We develop a methodology for representing such information in resource allocation models using the concept of pattern regression, with the Cramér–von Mises goodness-of-fit metric as its foundation. We then solve a hybrid cost model augmented with a “pattern metric” term that minimizes the deviations of model decisions from quantities observed in a historical database, and present a novel iterative method for solving this problem. Results with real-world data from a large freight railroad are presented.
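
For reference, the classical Cramér–von Mises goodness-of-fit statistic on which the pattern metric builds compares an empirical distribution $F_n$ with a reference distribution $F$:

$$\omega^2=n\int_{-\infty}^{\infty}\bigl(F_n(x)-F(x)\bigr)^2\,dF(x).$$

In the pattern-metric setting, roughly speaking, the historical flows play the role of the reference pattern and the model decisions that of the fitted quantities; the precise adaptation is the paper's contribution.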

12.
A finite-sample performance measure of multivariate location estimators is introduced based on “tail behavior”. The tail performance of multivariate “monotone” location estimators and of the halfspace depth based “non-monotone” location estimators, including the Tukey halfspace median and multivariate L-estimators, is investigated. The connections among the finite-sample performance measure, the finite-sample breakdown point, and the halfspace depth are revealed. It turns out that estimators with a high breakdown point or halfspace depth have “appealing” tail performance. The tail performance of the halfspace median is very appealing and robust against the underlying population distribution, while that of the sample mean is very sensitive to it. These findings provide new insights into the notions of halfspace depth and breakdown point and identify the important role of tail behavior as a quantitative measure of robustness in the multivariate location setting.
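
Recall the halfspace (Tukey) depth underlying these estimators: for a probability measure $P$ on $\mathbb{R}^d$,

$$\mathrm{HD}(x;P)=\inf\{P(H):\ H\ \text{a closed halfspace with}\ x\in H\},$$

and the halfspace median is any point maximizing $\mathrm{HD}(\cdot\,;P)$.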

13.
Since it is well known (De Marchi and Schaback (2001) [4]) that standard bases of kernel translates are badly conditioned while the interpolation itself is not unstable in function space, this paper surveys alternative bases. All data-dependent bases turn out to be defined via a factorization of the kernel matrix defined by the data, and a discussion of various matrix factorizations (e.g. Cholesky, QR, SVD) provides a variety of bases with different properties. Special attention is given to duality, stability, orthogonality, adaptivity, and computational efficiency. The “Newton” basis arising from a pivoted Cholesky factorization turns out to be stable and computationally cheap while being orthonormal in the “native” Hilbert space of the kernel. Efficient adaptive algorithms for calculating the Newton basis along the lines of orthogonal matching pursuit conclude the paper.
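
A minimal sketch of the pivoted-Cholesky construction of the Newton basis values on the data points, i.e., a greedy factorization $K\approx VV^{\top}$ of the kernel matrix; the Gaussian kernel and point set are illustrative.

```python
import numpy as np

def newton_basis(K, m):
    """Greedy pivoted Cholesky of the kernel matrix K.
    Column k of V holds the k-th Newton basis function evaluated at all
    data points; after m steps, K is approximated by V @ V.T."""
    n = len(K)
    V = np.zeros((n, m))
    d = np.diag(K).copy()          # residual (power function) values
    for k in range(m):
        i = int(np.argmax(d))      # pivot: point with largest residual
        V[:, k] = (K[:, i] - V[:, :k] @ V[i, :k]) / np.sqrt(d[i])
        d -= V[:, k] ** 2
    return V

pts = np.linspace(0.0, 1.0, 30)
K = np.exp(-(pts[:, None] - pts[None, :]) ** 2 / 0.1)   # Gaussian kernel
V = newton_basis(K, m=10)
print(np.max(np.abs(K - V @ V.T)))   # approximation error decays with m
```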

14.
In this paper we prove existence and uniqueness of the so-called Shapley mapping, a solution concept for a class of n-person games with fuzzy coalitions whose elements are defined by the specific structure of their characteristic functions. The Shapley mapping, when it exists, associates with each fuzzy coalition in the game an allocation of the coalitional worth satisfying the efficiency, symmetry, and null-player conditions. It determines a “cumulative value” that is the “sum” of all coalitional allocations, and we provide an explicit formula for its computation.
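
For orientation, the crisp Shapley value that the efficiency, symmetry, and null-player conditions characterize in the classical (non-fuzzy) setting is

$$\varphi_i(v)=\sum_{S\subseteq N\setminus\{i\}}\frac{|S|!\,(n-|S|-1)!}{n!}\bigl(v(S\cup\{i\})-v(S)\bigr);$$

the Shapley mapping extends this allocation rule to fuzzy coalitions (the paper's explicit formula for the cumulative value is its own result and is not reproduced here).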

15.
It is well acknowledged that collaboration between the members of a supply chain offers significant potential to increase overall supply chain performance. Sharing private information has been identified as a prerequisite for collaboration and, at the same time, as one of its major obstacles. One potential avenue for overcoming this obstacle is Secure Multi-Party Computation (SMC). SMC is a cryptographic technique that enables the computation of any (well-defined) mathematical function by a number of parties without any party having to disclose its input to another party. In this paper, we show how SMC can be successfully employed to enable joint decision making and benefit sharing in a simple supply chain setting. We develop secure protocols for implementing the well-known “Joint Economic Lot Size (JELS) Model” with benefit sharing in such a way that none of the parties involved has to disclose any private (cost and capacity) data. We then show that although the model’s outputs can be computed securely, the approach still faces practical limitations. These limitations stem from the potential for “inverse optimization”, i.e., a party may infer another party’s private data from the output of a collaborative planning scheme even if the computation is performed in a secure fashion. We provide a detailed analysis of “inverse optimization” potentials and introduce the notion of “stochastic security”, a novel approach to assessing the additional information a party may learn from joint computation and benefit sharing. Based on our definition of “stochastic security” we propose a stochastic benefit sharing rule, develop a secure protocol for it, and assess under which conditions stochastic benefit sharing can guarantee secure collaboration.
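
To illustrate the flavor of SMC (this is not the paper's JELS protocol), here is a minimal additive secret-sharing sketch in which three parties learn the sum of their private inputs without revealing any individual input; the modulus and all values are illustrative.

```python
import secrets

Q = 2**61 - 1  # large prime modulus (illustrative)

def share(x, n):
    """Split integer x into n additive shares modulo Q."""
    shares = [secrets.randbelow(Q) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % Q)
    return shares

inputs = [120, 340, 95]                       # each party's private value
all_shares = [share(x, 3) for x in inputs]    # party p keeps share p of each
# each party publishes only the sum of the shares it holds
partials = [sum(s[p] for s in all_shares) % Q for p in range(3)]
print(sum(partials) % Q)                      # 555 == sum(inputs); nothing else leaks
```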

16.
We study the rate of convergence of recursive procedures based on “exact” or “approximate” Euler schemes that converge to the invariant measure of an ergodic SDE driven by a Lévy process. The main interest of this work is to compare the rates induced by the “exact” and “approximate” Euler schemes. In our main result, we show that replacing the small jumps by a Brownian component in the approximate case preserves the rate induced by the exact Euler scheme for a large class of Lévy processes.
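
A toy sketch of the “approximate” scheme in the spirit described above, for the symmetric Lévy measure $\nu(dy)=|y|^{-1-\alpha}\,dy$ on $0<|y|\le 1$: jumps smaller than a threshold $\varepsilon$ are replaced by a Brownian term matching their variance, and the larger jumps are simulated as a compound Poisson process. All parameter choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, eps, T, nsteps = 1.2, 0.05, 1.0, 1000
dt = T / nsteps

lam = (2.0 / alpha) * (eps**(-alpha) - 1.0)   # intensity of jumps with eps < |y| <= 1
sig2 = 2.0 * eps**(2 - alpha) / (2 - alpha)   # variance of the discarded small jumps

def big_jump():
    # inverse-CDF sample of |y| from the density ~ y^(-1-alpha) on (eps, 1]
    u = rng.random()
    y = (eps**(-alpha) - u * (eps**(-alpha) - 1.0)) ** (-1.0 / alpha)
    return y if rng.random() < 0.5 else -y

x = 0.0                                        # Euler scheme for dX = -X dt + dZ
for _ in range(nsteps):
    x += -x * dt + np.sqrt(sig2 * dt) * rng.standard_normal()
    for _ in range(rng.poisson(lam * dt)):
        x += big_jump()
print(x)
```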

17.
In this paper, we study quantity discount pricing policies in a channel consisting of one manufacturer and one retailer. The channel faces a stochastic price-sensitive demand, but the retailer can privately observe the realization of an uncertain demand parameter. The problem is analyzed as a Stackelberg game in which the manufacturer declares quantity discount pricing schemes to the retailer, and the retailer responds by selecting the retail price and the associated quantity. Four quantity-discount pricing policies are proposed: “regular quantity discount”, “fixed percentage discount”, “incremental volume discount”, and “fixed marginal-profit-rate discount”. Optimal solutions are derived, and numerical examples are presented to illustrate the efficiency of each discount policy.
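
For concreteness, the first and third policies are typically of the following textbook forms (shown with a single break point $Q_1$ and unit prices $w_1>w_2$; identifying the “regular” discount with the all-units form is our reading, and the paper parameterizes all four schemes precisely):

$$C_{\text{all-units}}(q)=\begin{cases}w_1 q, & q<Q_1\\ w_2 q, & q\ge Q_1\end{cases}\qquad C_{\text{incremental}}(q)=\begin{cases}w_1 q, & q<Q_1\\ w_1 Q_1+w_2(q-Q_1), & q\ge Q_1.\end{cases}$$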

18.
The goal of harmonic analysis on a (noncommutative) group is to decompose the most “natural” unitary representations of this group (like the regular representation) into irreducible ones. The infinite-dimensional unitary group U(∞) is one of the basic examples of “big” groups whose irreducible representations depend on infinitely many parameters. Our aim is to explain what harmonic analysis on U(∞) consists of. We deal with unitary representations of a reasonable class, which are in one-to-one correspondence with characters (central, positive definite, normalized functions on U(∞)). The decomposition of any representation of this class is described by a probability measure (called the spectral measure) on the space of indecomposable characters. The indecomposable characters were found by Dan Voiculescu in 1976. The main result of the present paper consists in explicitly constructing a 4-parameter family of “natural” representations and computing their characters. We view these representations as a substitute for the nonexistent regular representation of U(∞), and we state the problem of harmonic analysis on U(∞) as the problem of computing the spectral measures of these “natural” representations. A solution to this problem is given in the next paper (Harmonic analysis on the infinite-dimensional unitary group and determinantal point processes, math/0109194, to appear in Ann. Math.), joint with Alexei Borodin. We also prove a few auxiliary general results. In particular, it is proved that the spectral measure of any character of U(∞) can be approximated by a sequence of (discrete) spectral measures for the restrictions of the character to the compact unitary groups U(N). This fact is a starting point for computing spectral measures.
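
For reference, a character of U(∞) in the sense used here is a continuous function $\chi:U(\infty)\to\mathbb{C}$ that is central ($\chi(ab)=\chi(ba)$), normalized ($\chi(e)=1$), and positive definite:

$$\sum_{i,j=1}^{k}c_i\overline{c_j}\,\chi\bigl(g_ig_j^{-1}\bigr)\ \ge\ 0\qquad\text{for all }g_1,\dots,g_k\in U(\infty),\ c\in\mathbb{C}^k.$$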

19.
Environmental assessment has recently become a major policy issue worldwide. This study discusses how to apply Data Envelopment Analysis (DEA) to environmental assessment. An important feature of DEA environmental assessment is that it needs to classify outputs into desirable (good) and undesirable (bad) outputs, because private and public entities often produce not only desirable but also undesirable outputs as a result of their production activities. This study proposes three types of unification for DEA environmental assessment using non-radial DEA models. The first unification considers both an increase and a decrease in the input vector along with a decrease in the vector of undesirable outputs; it measures “unified efficiency”. The second considers a decrease in the input vector along with a decrease in the vector of undesirable outputs; it is referred to as “natural disposability” and measures “unified efficiency under natural disposability”. The third considers an increase in the input vector but a decrease in the vector of undesirable outputs; it is referred to as “managerial disposability” and measures “unified efficiency under managerial disposability”. All three unifications increase the vector of desirable outputs. To document their practical implications, this study applies the proposed approach to compare the performance of national oil firms with that of international oil firms, and identifies two important findings on the petroleum industry. First, national oil companies under public ownership outperform international oil companies under private ownership in terms of unified (operational and environmental) efficiency and unified efficiency under natural disposability, although the performance of international oil companies exhibits an increasing trend in unified efficiency. Second, national oil companies need to satisfy only the environmental standards of their own countries, while international oil companies must satisfy international standards that are more restrictive than national ones; as a consequence, international oil companies outperform national oil companies in terms of unified efficiency under managerial disposability.

20.
The paper investigates overlapping generations (OLG) economies in a vector lattice framework. Agents' preferences are assumed uniformly proper, though they may be nontransitive and incomplete. Existence is established for the “equilibrium with nonstandard prices,” a notion that may be viewed as a particular case (or, in another respect, a generalization) of the known “compensated equilibria” of OLG economies. The difference is that compensated values are described via an explicit formula given in terms of nonstandard analysis. This approach enables a clearer economic interpretation and reveals some new properties of compensated values, such as their linearity in agents' endowments. It also makes it easy to prove the existence of equilibria under classical additional assumptions on agents' endowments.
