Similar Literature
20 similar documents found
1.
In conventional data envelopment analysis (DEA), measures are classified as either inputs or outputs. In some real cases, however, there are variables that act as both input and output; these are known as flexible measures. Most of the previously suggested models for determining the status of flexible measures are oriented. One important issue with these models is that, unlike standard DEA, the input- and output-oriented models may produce different efficiency scores even under constant returns to scale. It can also happen that a flexible measure is selected as an input variable in one model but as an output variable in the other. In addition, none of the previous studies addressed variable returns to scale (VRS), although the VRS assumption prevails in many real applications. To deal with these issues, this study proposes a new non-oriented model that not only selects the status of each flexible measure as an input or output but also determines the returns-to-scale status. The aggregate model and an extension to negative data related to the proposed approach are then presented.
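As a rough illustration of how a flexible measure can be classified inside a DEA model, the following CRS ratio form attaches a binary variable $d$ to a single flexible measure $w$ so that it is counted among the inputs when $d=1$ and among the outputs when $d=0$. This is only a minimal sketch of the general idea, not the authors' non-oriented VRS formulation:

\[
\max_{u,v,\gamma,d}\ \frac{\sum_r u_r y_{ro} + (1-d)\,\gamma w_o}{\sum_i v_i x_{io} + d\,\gamma w_o}
\quad \text{s.t.}\quad
\frac{\sum_r u_r y_{rj} + (1-d)\,\gamma w_j}{\sum_i v_i x_{ij} + d\,\gamma w_j} \le 1 \quad \forall j,
\qquad u_r,\ v_i,\ \gamma \ge 0,\ d\in\{0,1\}.
\]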

2.
Efficiency is a relative measure because it can be measured within different ranges. Traditional data envelopment analysis (DEA) measures the efficiencies of decision-making units (DMUs) within the range of less than or equal to one. The corresponding efficiencies are referred to as the best relative efficiencies, which measure the best performances of DMUs and determine an efficiency frontier. If the efficiencies are measured within the range of greater than or equal to one, then the worst relative efficiencies can be used to measure the worst performances of DMUs and determine an inefficiency frontier. In this paper, the efficiencies of DMUs are measured within an interval whose upper bound is set to one and whose lower bound is determined by introducing a virtual anti-ideal DMU, whose performance is definitely inferior to that of any DMU. The efficiencies thus turn out to be intervals and are referred to as interval efficiencies, which combine the best and the worst relative efficiencies in a reasonable manner to give an overall measurement and assessment of the performances of DMUs. The new DEA model with upper and lower bounds on efficiencies is referred to as the bounded DEA model, which can incorporate the decision maker's (DM's) or assessor's preference information on input and output weights. A Hurwicz criterion approach is introduced and utilized to compare and rank the interval efficiencies of DMUs, and a numerical example is examined using the proposed bounded DEA model to show its potential application and validity.
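The Hurwicz criterion for ranking interval efficiencies can be illustrated with a short sketch: each DMU's interval [E_L, E_U] is collapsed to a single score lam*E_U + (1-lam)*E_L for an optimism level lam chosen by the assessor. The interval values below are hypothetical and only show the ranking mechanics, not the paper's bounded DEA model.

# Rank hypothetical interval efficiencies [E_L, E_U] with the Hurwicz criterion.
def hurwicz_score(interval, lam=0.5):
    e_lower, e_upper = interval
    # lam = 1 is fully optimistic (use E_U); lam = 0 is fully pessimistic (use E_L).
    return lam * e_upper + (1.0 - lam) * e_lower

intervals = {"DMU1": (0.62, 0.95), "DMU2": (0.70, 0.88), "DMU3": (0.55, 1.00)}
ranking = sorted(intervals, key=lambda d: hurwicz_score(intervals[d], lam=0.6), reverse=True)
print(ranking)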

3.
Similarity measures of type-2 fuzzy sets are used to indicate the similarity degree between type-2 fuzzy sets. Inclusion measures for type-2 fuzzy sets are the degrees to which a type-2 fuzzy set is a subset of another type-2 fuzzy set. The entropy of type-2 fuzzy sets is a measure of the fuzziness of type-2 fuzzy sets. Although several similarity, inclusion and entropy measures for type-2 fuzzy sets have been proposed in the literature, none has considered using the Sugeno integral to define such measures for type-2 fuzzy sets. In this paper, new similarity, inclusion and entropy measure formulas for type-2 fuzzy sets based on the Sugeno integral are proposed. Several examples are used to present the calculation and to compare these proposed measures with several existing methods for type-2 fuzzy sets. Numerical results show that the proposed measures are more reasonable than existing measures. On the other hand, measuring the similarity between type-2 fuzzy sets is important in clustering type-2 fuzzy data. We finally use the proposed similarity measure with a robust clustering method for clustering patterns of type-2 fuzzy sets.
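For reference, a discrete Sugeno integral of a function h with respect to a fuzzy measure g can be computed as max_i min(h(x_(i)), g(A_(i))), with the values sorted in decreasing order and A_(i) the set of the i largest elements. The sketch below uses a simple cardinality-based fuzzy measure g(A) = |A|/n purely for illustration; it does not reproduce the paper's type-2 similarity, inclusion, or entropy formulas.

# Discrete Sugeno integral of h : X -> [0,1] with respect to a fuzzy measure g on subsets of X.
def sugeno_integral(h_values, g):
    # Sort elements by h in decreasing order and take max_i min(h_(i), g(A_(i))).
    order = sorted(range(len(h_values)), key=lambda i: h_values[i], reverse=True)
    best = 0.0
    for i in range(len(order)):
        level_set = frozenset(order[: i + 1])       # A_(i): indices of the i+1 largest h-values
        best = max(best, min(h_values[order[i]], g(level_set)))
    return best

h = [0.9, 0.4, 0.7, 0.2]                            # hypothetical membership-like values
g = lambda subset: len(subset) / len(h)             # cardinality-based fuzzy measure (illustrative)
print(sugeno_integral(h, g))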

4.
The purpose of cost-benefit analysis is to provide information for better decision-making about a future that is full of uncertainty. The results of cost-benefit analysis are not based on empirical evidence but on prediction with a theoretical basis. This paper shows what kind of traffic safety measure is most effective for the prevention of traffic accidents on the basis of behavior analysis. The role played by the human being is significant in driving. In this study, the behavioral units of driving are defined from the viewpoint that the driver's behavior materializes where human, vehicle and environmental factors interact. According to the psychological idea that behavior shifts as the environment shifts, the changes in driver behavior produced by the establishment of traffic safety measures are discussed. On the other hand, the correspondence between accident examples and behavioral units is examined through accident analysis, and in conclusion the decrease in accidents is predicted from the shifts of behavior due to traffic safety measures. A method for deciding the priority order in which to establish traffic safety measures is explained, taking into account the expense of implementing them.

5.
We consider quasi-self-similar measures with respect to all real numbers on a Cantor dust. We define a local index function on the real numbers for each quasi-self-similar measure at each point in a Cantor dust. The value of the local index function at the real number zero for all the quasi-self-similar measures at each point is the weak local dimension of the point. We also define transformed measures of a quasi-self-similar measure which are closely related to the local index function. We compute the local dimensions of transformed measures of a quasi-self-similar measure to find the multifractal spectrum of the quasi-self-similar measure. Furthermore we give an essential example for the theorem of local dimension of transformed measure. In fact, our result is an ultimate generalization of that of a self-similar measure on a self-similar Cantor set. Furthermore the results also explain the recent results about weak local dimensions on a Cantor dust.

6.
We compute the correlation dimension of a measure defined on a general Sierpinski carpet. We relate this function to a free energy function associated to a partition composed of ‘nearly squares’ and well fitted to the planar Cantor set. Actually, we prove that these functions are real analytic on , are strictly increasing and are strictly concave (respectively linear in the degenerate case). This is an example of a two-dimensional dynamical system contracting in the two directions with different ratios. We first study measures of Gibbsian type before generalizing to Markovian measures.

7.
In this paper, we present a pilot study in which we use probabilistic risk analysis (PRA) to assess patient risk in anesthesia and its human factor component. We then identify and evaluate the benefits of several risk reduction policies. We focus on healthy patients, in modern hospitals, and on cases where the anesthetist is a trained medical doctor. When an accident occurs for such patients, it is often because an error was made by the anesthesiologist, either by triggering the event that initiated the accident sequence or by failing to take timely corrective measures. We first present a dynamic PRA model of anesthesia accidents. Our data include published results of the Australian Incident Monitoring Study as well as expert opinions. We link the probabilities of the different types of accidents to the state of the anesthesiologist, characterized both in terms of alertness and competence. We consider different management factors that affect the state of the anesthesiologist, we identify several risk reduction policies, and we compute the corresponding risk-reduction benefits based on the PRA model. We conclude that periodic recertification of all anesthesiologists, the use of anesthesia simulators in training, and closer supervision of residents could substantially reduce patient risk.
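The basic accounting in such a PRA model can be sketched as a weighted sum over anesthesiologist states: the overall accident probability is the sum over states of P(state) * P(initiating error | state) * P(failed recovery | state), and a risk-reduction policy is evaluated by how it shifts the state distribution. All numbers below are hypothetical placeholders, not values from the study or from the Australian Incident Monitoring Study.

# Toy PRA-style calculation: accident probability as a function of the anesthesiologist's state.
# States combine alertness and competence; all probabilities are hypothetical.
states = {
    #  state name           (P(state), P(initiating error | state), P(failed recovery | state))
    "alert_competent":      (0.80, 1e-4, 0.05),
    "fatigued_competent":   (0.15, 5e-4, 0.15),
    "alert_inexperienced":  (0.05, 8e-4, 0.25),
}

def accident_probability(table):
    # Sum over states: P(state) * P(initiating error | state) * P(failed recovery | state).
    return sum(p_s * p_err * p_norec for p_s, p_err, p_norec in table.values())

baseline = accident_probability(states)

# A policy such as closer supervision of residents is modeled here as shifting probability mass
# out of the "alert_inexperienced" state; the size of the shift is purely illustrative.
with_policy = dict(states)
with_policy["alert_inexperienced"] = (0.02, 8e-4, 0.25)
with_policy["alert_competent"] = (0.83, 1e-4, 0.05)

print(baseline, accident_probability(with_policy))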

8.
In this article, we investigate and compare a number of real inversion formulas for the Laplace transform. The focus is on the accuracy and applicability of the formulas for numerical inversion. We first study the performance of the formulas for measures concentrated on a positive half-line and then continue with measures on an arbitrary half-line. As our trial measures concentrated on a positive half-line, we take the broad Gamma probability distribution family.
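A quick way to experiment with numerical Laplace inversion on the Gamma family is a sketch like the following, which assumes mpmath's invertlaplace routine is available; the shape and rate values are arbitrary test choices, and the Laplace transform of a Gamma(k, rate lam) density is (lam/(s+lam))^k.

# Numerically invert the Laplace transform of a Gamma density and compare with the exact density.
import mpmath as mp

mp.mp.dps = 30                         # working precision
k, lam = mp.mpf(3), mp.mpf(2)          # hypothetical shape and rate parameters

F = lambda s: (lam / (s + lam)) ** k   # Laplace transform of the Gamma(k, lam) density
f_exact = lambda t: lam**k * t**(k - 1) * mp.e**(-lam * t) / mp.gamma(k)

for t in (0.5, 1.0, 2.0):
    approx = mp.invertlaplace(F, t, method="talbot")
    print(t, approx, f_exact(mp.mpf(t)))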

9.
Measures with values in a countably order-complete vector lattice are considered. The underlying σ-algebra is assumed to be σ-isomorphic to the Borel sets of the real line. Given one such measure, densities, which are not necessarily scalar-valued, are sought for smaller measures. The results can be used to prove the existence of a least upper bound for two such measures.

10.
This paper evaluates a variety of technical efficiency measures based on a given nonparametric reference technology, the free disposal hull (FDH). Specifically, we consider the radial measure of Debreu (1951)/Farrell (1957) and the nonradial measures of Färe (1975), Färe and Lovell (1978) and Zieschang (1984). Furthermore, input-based, output-based, and graph efficiency versions of these four measures are computed. Theoretical considerations as to the best choice among these alternative measures are inconclusive; therefore, we examine the problem from an empirical viewpoint. Calculating thirteen different measures of technical efficiency for a sample of US banks, we compare the measures' efficiency distributions and rankings, paying particular attention to how well the radial measure approximates its nonradial alternatives.
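Under FDH, the input-oriented Debreu-Farrell radial efficiency of a unit can be computed by simple enumeration: among all observed units that produce at least as much of every output, take the smallest proportional input scaling that still dominates. The sketch below follows that enumeration logic on hypothetical data; the nonradial variants discussed in the paper are not reproduced.

# Input-oriented radial (Debreu-Farrell) efficiency relative to a free disposal hull (FDH).
import numpy as np

def fdh_input_efficiency(X, Y, k):
    """X: (n, m) inputs, Y: (n, r) outputs; returns the radial efficiency of unit k in (0, 1]."""
    dominators = [j for j in range(len(X)) if np.all(Y[j] >= Y[k])]   # units producing at least Y[k]
    # For each dominating unit j, the smallest feasible radial contraction of X[k] is max_i X[j,i]/X[k,i].
    return min(np.max(X[j] / X[k]) for j in dominators)

X = np.array([[2.0, 4.0], [3.0, 3.0], [4.0, 2.0], [5.0, 5.0]])   # hypothetical input data
Y = np.array([[1.0], [1.0], [1.0], [1.0]])                        # hypothetical output data
print([round(fdh_input_efficiency(X, Y, k), 3) for k in range(len(X))])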

11.
Existing measures of input allocative efficiency may be biased when estimated via data envelopment analysis (DEA) because of the possibility of slack in the constraints defining the reference technology. In this paper we derive a new measure of input allocative efficiency and compare it to existing measures. We measure efficiency by comparing the actual outputs of a decision-making unit with Koopmans' efficient subset of the direct and indirect output possibility sets. We estimate the existing measures and our new measure of input allocative efficiency for a sample of public school districts operating in Texas.

12.
A new measure called diversity difference is proposed for the inequality of a pair of distributions. The diversity difference measure satisfies eight properties of a measure of inequality. This measure is simple to calculate and provides easily interpreted results. Existing inequality measures examine the distribution of a single variable whose data are arranged in a monotonic order. The new measure can employ multiple variables and does not require each to be monotonic but can be used if the data happen to be monotonic. The pair of distributions is useful for organizational diversity data because one of the distributions represents the actual proportions of employees in any class or set of classes and the other distribution is the benchmark or anchoring distribution. Data from the measure can be displayed in diversity difference trees for quick interpretation. The diversity difference measure can be arranged to define a Lorenz curve. An example with three classes (gender, race, and age) is employed to provide examples of the measure, the resulting Lorenz curve, and the disparity ratio.
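As a rough illustration of working with such a pair of distributions, the sketch below builds a Lorenz-type curve by sorting classes by their actual-to-benchmark ratio and accumulating both shares; the class names and proportions are hypothetical, and the paper's diversity difference measure itself is not reproduced here.

# Lorenz-type curve from a pair of distributions: actual workforce shares vs. a benchmark distribution.
actual    = {"class_A": 0.10, "class_B": 0.30, "class_C": 0.60}   # hypothetical actual proportions
benchmark = {"class_A": 0.25, "class_B": 0.35, "class_C": 0.40}   # hypothetical benchmark proportions

# Sort classes by how under-represented they are (actual / benchmark, ascending).
order = sorted(actual, key=lambda c: actual[c] / benchmark[c])

cum_benchmark, cum_actual, curve = 0.0, 0.0, [(0.0, 0.0)]
for c in order:
    cum_benchmark += benchmark[c]
    cum_actual += actual[c]
    curve.append((cum_benchmark, cum_actual))   # points of the Lorenz-type curve

print(curve)   # lies on the diagonal only if the actual shares match the benchmark exactly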

13.
Quantum ensembles, as generalizations of quantum states, are a universal instrument for describing the physical or informational status in measurement theory and communication theory because of the ubiquitous presence of incomplete information and the necessity of encoding classical messages in quantum states. The interrelations between the constituent states of a quantum ensemble can display more or less quantum characteristics when the involved quantum states do not commute because no single classical basis diagonalizes all these states. This contrasts sharply with the situation of a single quantum state, which is always diagonalizable. To quantify these quantum characteristics and, in particular, to more clearly understand the possibilities of secure data transmission in quantum cryptography, based on certain prototypical quantum ensembles, we introduce some figures of merit quantifying the quantumness of a quantum ensemble, review some existing quantities that are interpretable as measures of quantumness, and investigate their fundamental properties such as subadditivity and concavity. Comparing these measures, we find that different measures can yield different quantumness orderings for quantum ensembles. This reveals the elusive and complex nature of quantum ensembles and shows that no unique measure can describe all the fundamental and subtle properties of quantumness.

14.
Operational measurement methods may be developed to measure the value of information in the data reported by the U.S. Government. Illustrative measures for the cost of cotton production statistics indicate that the benefits from these data in certain important uses may far exceed their costs. If similar measures could be provided for major Government data programmes, it would facilitate the development of a national data policy oriented toward decision making and improvements in economic growth, national well-being, and quality of life. Making these estimates would begin to provide the dollar values of important uses of information in the nation's data bases. These values are needed for allocating resources to maintain, refine, and develop fundamental data series. Cutting data expenditure in the absence of these value measures may be false economy indeed, because reducing data series with very high benefit/cost ratios might well limit, if not reduce, living standards for many generations to come.
Robert R. Nathan Associates, Inc.

15.
We prove that the free additive convolution of two Borel probability measures supported on the real line can have a component that is singular continuous with respect to the Lebesgue measure on ${\mathbb{R}}$ only if one of the two measures is a point mass. The density of the absolutely continuous part with respect to the Lebesgue measure is shown to be analytic wherever positive and finite. The atoms of the free additive convolution of Borel probability measures on the real line have been described by Bercovici and Voiculescu in a previous paper.

16.

This paper reviews real estate price estimation in France, a market that has received little attention. We compare seven popular machine learning techniques by proposing a different approach that quantifies the relevance of location features in real estate price estimation at high and low levels of granularity. We take advantage of a newly available open dataset provided by the French government that contains 5 years of historical data on real estate transactions. At a high level of granularity, we find important differences in the models' prediction power between cities with medium and high standards of living (precision differences beyond 70% in some cases). At a low level of granularity, we use geocoding to add precise geographical location features to the machine learning algorithms' inputs. We obtain important improvements in the models' forecasting power relative to models trained without these features (improvements beyond 50% for some forecasting error measures). Our results also reveal that neural networks and random forest techniques particularly outperform other methods when geocoding features are not accounted for, while random forest, adaboost and gradient boosting perform well when geocoding features are considered. Our results can be of particular interest for identifying opportunities in the real estate market through price prediction, and they can also serve as a basis for price assessment in revenue management for durable and non-replenishable products such as real estate.
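A minimal sketch of the kind of comparison described here, using scikit-learn's RandomForestRegressor trained with and without latitude/longitude features; the feature names and the synthetic data are placeholders, not the French open dataset.

# Compare a price model trained with and without geocoded location features (synthetic placeholder data).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
surface = rng.uniform(20, 200, n)                               # m^2 (hypothetical feature)
rooms = rng.integers(1, 7, n)
lat, lon = rng.uniform(48.8, 48.9, n), rng.uniform(2.2, 2.4, n) # geocoded coordinates
price = 3000 * surface + 50000 * rooms + 2e6 * (lat - 48.8) + rng.normal(0, 20000, n)

X_base = np.column_stack([surface, rooms])
X_geo = np.column_stack([surface, rooms, lat, lon])

for name, X in [("without geocoding", X_base), ("with geocoding", X_geo)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, price, test_size=0.25, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print(name, round(mean_absolute_error(y_te, model.predict(X_te))))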


17.
In finance theory the standard deviation of asset returns is almost universally recognized as a measure of risk. This universality persists even in the presence of known limitations of the standard deviation and an extensive and growing literature on alternative risk measures. One possible reason for this persistence is that the sample properties of alternative risk measures are not well understood. This paper compares the sample distribution of the semi-variance with that of the variance. In particular, we investigate the belief that, while there are convincing theoretical reasons to use the semi-variance, the volatility of its sample estimate is so high as to make the measure impractical in applied work. In addition, arguments based on stochastic dominance are used to compare the distributions of the two statistics. Conditions are developed to identify situations in which the semi-variance may be preferred to the variance. An empirical example using equity data from emerging markets demonstrates this approach.
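For concreteness, the two sample statistics being compared can be computed as below: the sample variance uses all deviations from the mean, while the below-mean semi-variance averages only the squared shortfalls below the mean. The return series is hypothetical, and other conventions (a fixed target instead of the mean, or a different normalization) are also common.

# Sample variance vs. below-mean semi-variance of a (hypothetical) return series.
import numpy as np

returns = np.array([0.02, -0.05, 0.01, 0.03, -0.08, 0.04, 0.00, -0.01])
mu = returns.mean()

variance = np.mean((returns - mu) ** 2)                      # sample variance (1/n normalization)
semi_variance = np.mean(np.minimum(returns - mu, 0.0) ** 2)  # only downside deviations contribute

print(variance, semi_variance)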

18.
As far as the medical diagnosis problem is concerned, predicting the actual disease in complex situations has been a matter of concern for doctors and experts. The divergence measure for intuitionistic fuzzy sets is an effective and potent tool for addressing medical decision-making problems. We define a new divergence measure for intuitionistic fuzzy sets (IFS) and study its interesting properties. The existing divergence measures under the intuitionistic fuzzy environment are reviewed and their counter-intuitive cases are explored. The parameter $\alpha$ is incorporated into the proposed divergence measure, which is then defined as a parametric intuitionistic fuzzy divergence measure (PIFDM). Different choices of the parameter $\alpha$ provide different decisions about the disease. As we increase the value of $\alpha$, the information about the disease increases and the decision moves towards the optimal solution with reduced uncertainty. Finally, we compare our results with already existing results, which illustrates the role of the parameter $\alpha$ in obtaining the optimal solution in the medical decision-making application. The results demonstrate that the parametric intuitionistic fuzzy divergence measure (PIFDM) is more comprehensive and effective than the proposed intuitionistic fuzzy divergence measure and the existing intuitionistic fuzzy divergence measures for decision making in medical investigations.
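To show the mechanics of such a diagnosis scheme without reproducing the paper's PIFDM formula, the sketch below represents each disease and the patient as intuitionistic fuzzy sets (membership and non-membership per symptom) and selects the disease closest to the patient; the numbers and disease/symptom labels are hypothetical, and the normalized Hamming-type distance used here is only a stand-in for a divergence measure.

# Diagnosis by minimum distance between intuitionistic fuzzy sets (membership mu, non-membership nu).
def ifs_distance(a, b):
    # Normalized Hamming-type distance over symptoms, including hesitation pi = 1 - mu - nu.
    total = 0.0
    for (mu1, nu1), (mu2, nu2) in zip(a, b):
        pi1, pi2 = 1 - mu1 - nu1, 1 - mu2 - nu2
        total += abs(mu1 - mu2) + abs(nu1 - nu2) + abs(pi1 - pi2)
    return total / (2 * len(a))

# Hypothetical (mu, nu) profiles over three symptoms for two candidate diseases and one patient.
diseases = {
    "disease_X": [(0.8, 0.1), (0.2, 0.7), (0.6, 0.3)],
    "disease_Y": [(0.3, 0.6), (0.7, 0.2), (0.4, 0.5)],
}
patient = [(0.7, 0.2), (0.3, 0.6), (0.5, 0.4)]

diagnosis = min(diseases, key=lambda d: ifs_distance(diseases[d], patient))
print(diagnosis)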

19.
In this paper, four methods are proposed for feature selection in an unsupervised manner using genetic algorithms. The proposed methods do not use class label information but select a set of features using a task-independent criterion that can preserve the geometric structure (topology) of the original data in the reduced feature space. One of the components of the fitness function is Sammon's stress function, which tries to preserve the topology of the high-dimensional data when reduced to the lower-dimensional space. In this context, in addition to using a fitness criterion, we also explore the utility of an unfitness criterion to select chromosomes for genetic operations. This ensures higher diversity in the population and helps unfit chromosomes become more fit. We use four different ways to evaluate the quality of the selected features: Sammon error, correlation between the inter-point distances in the two spaces, a measure of preservation of the cluster structure found in the original and reduced spaces, and classifier performance. The proposed methods are tested on six real data sets with dimensionality varying between 9 and 60. The selected features are found to be excellent in terms of preservation of topology (inter-point geometry), cluster structure and classifier performance. We do not compare our methods with other methods because, unlike other methods, we check the quality of the selected features in four different ways by finding how well they preserve the “structure” of the original data.
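The Sammon stress component of such a fitness function measures how well inter-point distances are preserved after dropping features: E = (1/sum_{i<j} d*_{ij}) * sum_{i<j} (d*_{ij} - d_{ij})^2 / d*_{ij}, where d* are distances in the original space and d are distances computed from the selected features only. A small sketch, with a hypothetical data matrix and feature subset:

# Sammon stress between the original feature space and a selected-feature subspace.
import numpy as np
from scipy.spatial.distance import pdist

def sammon_stress(X, selected_features):
    d_full = pdist(X)                        # pairwise distances in the original space
    d_red = pdist(X[:, selected_features])   # pairwise distances using only the selected features
    return np.sum((d_full - d_red) ** 2 / d_full) / np.sum(d_full)

X = np.random.default_rng(1).normal(size=(50, 9))   # hypothetical data: 50 points, 9 features
print(sammon_stress(X, selected_features=[0, 2, 5]))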

20.
In sport tournaments in which teams are matched two at a time, it is useful for a variety of reasons to be able to quantify how important a particular game is. The need for such quantitative information has been addressed in the literature by several more or less simple measures of game importance. In this paper, we point out some of the drawbacks of those measures and we propose a different approach, which rather targets how decisive a game is with respect to the final victory. We give a definition of this idea of game decisiveness in terms of the uncertainty about the eventual winner prevailing in the tournament at the time of the game. As this uncertainty is strongly related to the notion of entropy of a probability distribution, our decisiveness measure is based on entropy-related concepts. We study the suggested decisiveness measure on two real tournaments, the 1988 NBA Championship Series and the UEFA 2012 European Football Championship (Euro 2012), and we show how well it agrees with what intuition suggests. Finally, we also use our decisiveness measure to objectively analyse the recent UEFA decision to expand the European Football Championship from 16 to 24 nations in the future, in terms of the overall attractiveness of the competition.
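One plausible way to instantiate such an entropy-based decisiveness measure: take the Shannon entropy of the distribution of championship-winning probabilities just before the game and subtract its expected value after the game, averaging over the game's possible outcomes; the larger the expected entropy drop, the more decisive the game. The probabilities below are hypothetical, and the exact definition used in the paper may differ.

# Decisiveness of a game as the expected reduction in entropy of the title-winner distribution.
import math

def entropy(p):
    return -sum(x * math.log2(x) for x in p if x > 0)

# Hypothetical title-winning probabilities for four teams just before the game.
before = [0.40, 0.30, 0.20, 0.10]

# Hypothetical outcomes of the game: (probability of outcome, updated title-winning distribution).
outcomes = [
    (0.55, [0.60, 0.20, 0.12, 0.08]),   # team 1 wins the game
    (0.45, [0.15, 0.45, 0.27, 0.13]),   # team 2 wins the game
]

expected_after = sum(p_out * entropy(dist) for p_out, dist in outcomes)
decisiveness = entropy(before) - expected_after
print(round(decisiveness, 3))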
