Similar documents
20 similar documents found (search time: 511 ms)
1.
In both public administration and economics, efficiency is brought forward as an important criterion for evaluating administrative actions. Clearly, its value as an assessment principle depends on our ability to adequately measure efficiency. This article argues that citizens' coproduction of public services requires a careful reassessment of how we approach the measurement of productive efficiency in public service delivery. Theoretically, we illustrate that using observable outcomes (e.g., library circulation, school results, health outcomes, fires extinguished, and crimes solved) as output indicators is inappropriate and leads to biased estimates of public service providers' productive efficiency. This bias arises because citizens co-determine final outputs, leaving them at least partly beyond the service providers' control. Empirically, we find supportive evidence of both the existence and importance of such 'demand-induced' bias.

2.
Using conditional directional distance functions, as introduced by Simar and Vanhems [J. Econometrics 166 (2012) 342–354], this paper modifies the model of Färe and Grosskopf [Eur. J. Operat. Res. 157 (2004) 242–245] and examines the link between regional environmental efficiency and economic growth. The proposed model incorporates the effect of regional economic growth on regions' environmental efficiency levels. The results from UK regional data reveal a negative relationship between regions' GDP per capita and environmental inefficiency up to a certain GDP per capita level, after which the relationship becomes positive. Overall, the regional environmental inefficiency–GDP per capita relationship appears to be U-shaped.

3.
The long-debated issue of the business value of information technology (IT) to the firm (or country) has received a great deal of attention in the literature, but studies have rarely examined the dynamic patterns of IT value as measured by the firm's productive efficiency over time. The objective of this paper is to apply three-factor constant elasticity of substitution (CES) time-varying stochastic production frontier models, using the same data set as several previous studies, to investigate the dynamic patterns of IT value over time in connection with input substitution and complementarity and the productivity paradox. The paper adopts two analytical perspectives, collective and individual. Collectively, we find that the dynamic patterns of IT value are closely related to the substitution and complementarity of the three inputs and to the IT productivity paradox. Individually, we identify five common dynamic patterns of IT value measured by productive efficiency and interpret their implications for the productivity paradox, summarized in a two-by-two matrix of practical applications and strategies. This matrix accounts for four scenarios of the relationship between average productive efficiency and the IT productivity paradox, illustrates practical applications of the analytical results, and provides business insights and managerial strategies for IT decision makers and PO/IS managers. This represents a new contribution to the understanding of the dynamic influence of IT investments on the value of IT over time.

4.
Based on grey system theory and methods, this paper discusses the grey-target decision-making problem in which the attribute values are grey numbers whose most probable values are known. First, the optimal effect vector is defined as the positive bull's-eye, and the positive bull's-eye distance of each scheme is defined. Subjective and objective weighting methods are then integrated to determine the index weights, and an integrated optimization model for the index weights is established. Finally, the critical effect vector is defined as the negative bull's-eye, the negative bull's-eye distance of each scheme is defined, and the relative bull's-eye distance and comprehensive bull's-eye distance for grey-target decision making are given. An example illustrates the usefulness and effectiveness of the proposed methods, which offer a new direction for research on grey-target decision making.
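The bull's-eye distances described in the abstract can be illustrated with a minimal sketch. The vectors below are hypothetical, and the closeness ratio shown is a TOPSIS-style combination assumed here for illustration; the paper's comprehensive bull's-eye distance may be defined differently.

```python
import math

# Sketch of grey-target bull's-eye distances: each scheme's effect
# vector is compared with the positive bull's-eye (optimal effect
# vector) and the negative bull's-eye (critical effect vector).
def bullseye_distances(scheme, positive, negative):
    d_pos = math.dist(scheme, positive)   # distance to positive bull's-eye
    d_neg = math.dist(scheme, negative)   # distance to negative bull's-eye
    closeness = d_neg / (d_pos + d_neg)   # relative bull's-eye distance
    return d_pos, d_neg, closeness

pos = [1.0, 1.0]   # hypothetical positive bull's-eye
neg = [0.0, 0.0]   # hypothetical negative bull's-eye
d_pos, d_neg, closeness = bullseye_distances([0.8, 0.6], pos, neg)
```

A scheme with higher closeness lies nearer the positive bull's-eye relative to the negative one and would rank higher.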

5.
Using a super-efficiency DEA model and tertiary-industry input-output data from 1998 to 2010, this paper analyzes and evaluates the efficiency of the tertiary industry in the nine provinces of the Pan-Pearl River Delta region. A cross-province comparison shows that Sichuan Province has the highest tertiary-industry production efficiency. The paper then decomposes Sichuan's tertiary-industry production efficiency and summarizes the policy measures Sichuan has adopted to promote technical progress and improve scale efficiency in the tertiary industry. Corresponding suggestions are offered for the development of the tertiary industry in the other provinces of the region.

6.
The traditional data envelopment analysis (DEA) model measures relative efficiency with no or minimal input from the decision maker (DM) and does not incorporate the DM's preference structure. Various techniques have been proposed to incorporate the DM's preference information into DEA. An interesting way to do so, without requiring prior judgments, is to use an interactive decision-making technique that encompasses both DEA and multi-objective linear programming (MOLP). In this paper, we use the Zionts-Wallenius (Z-W) method to reflect the DM's preferences in the process of assessing efficiency in the general combined-oriented CCR model. A case study is conducted to illustrate how combined-oriented efficiency analysis can be carried out using the MOLP method.
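The CCR score underlying the abstract's model comes from a linear program; as an illustration only, in the special case of a single input and a single output under constant returns to scale it collapses to a ratio against the best observed output/input rate. A minimal sketch with hypothetical data (the interactive Z-W preference step is not modeled here):

```python
# CCR efficiency under constant returns to scale for the special
# single-input / single-output case, where the envelopment linear
# program reduces to a ratio against the best output/input rate.
def ccr_single(inputs, outputs):
    best = max(y / x for x, y in zip(inputs, outputs))
    return [(y / x) / best for x, y in zip(inputs, outputs)]

x = [2.0, 4.0, 8.0]   # hypothetical input per DMU
y = [2.0, 4.0, 4.0]   # hypothetical output per DMU
scores = ccr_single(x, y)   # DMUs 1 and 2 are efficient (score 1.0)
```

With multiple inputs and outputs, each DMU's score requires solving the full envelopment LP rather than this ratio shortcut.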

7.
The paper discusses long-term trends in the relationship between energy use and the overall productive efficiency of the American economy. While total energy consumption grew strongly during the twentieth century, the intensity of energy use (i.e., the energy/GNP ratio) fell persistently much of the time. Thus, there were simultaneous long-term improvements in labor productivity, total factor productivity, and energy productivity. The historical record appears to be at odds with the conventional belief that gains in productive efficiency depend upon a rising intensity of energy use in production processes. A key role in bringing about these counter-intuitive results is assigned to what is referred to as the energy-technology-productivity nexus, in which the quality of particular energy forms such as electricity and liquid fuels (along with closely linked changes in energy-using technologies) played a critical part in leveraging the overall efficiency of production. As a result of these energy-form-dependent improvements in productive efficiency, outputs grew more rapidly than all inputs, including the inputs of energy. The more recent past stands in sharp contrast to the long-term record: while energy efficiency (as measured by energy/GNP) showed strong gains during the late 1970s and early 1980s, the growth in overall productive efficiency was severely retarded. Implications for the future of the suggested linkages between the quality of particular energy forms and technological progress are considered.

8.
There has been a worldwide trend for financial institutions to become larger in scale and more diversified in scope, with Taiwan being no exception. Fourteen financial holding companies (FHCs) have each begun to function as a management umbrella in Taiwan by investing in different types of financial services such as banking, insurance, and securities. This paper examines this local financing issue from an integrated methodological perspective, building on model innovations proposed in several earlier studies. For example, the efficiency of profitability and marketability are combined to evaluate the FHCs' performance. To conduct a valid and reliable evaluation of the Taiwanese FHC case, we integrate the slacks-based measure (SBM) and the slacks-based measure of super efficiency (super-SBM) models in order to directly handle the slacks and identify the best performers. A new scheme for dealing with negative output data in the SBM/super-SBM models is also introduced. Inter-temporal efficiency change, decomposed into 'catch-up' and 'frontier-shift' effects, is analyzed by means of the SBM-based Malmquist index. A decision-making matrix is also presented to help the FHCs' managerial authorities position themselves in the industry. These techniques show, with a high degree of consistency, that large FHCs perform better than small ones.
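The 'catch-up'/'frontier-shift' decomposition of the Malmquist index mentioned above can be sketched numerically. The distance-function scores below are hypothetical; in practice each would come from a DEA (here, SBM) model evaluating one DMU's activity in periods 1 and 2 against the period-1 and period-2 frontiers.

```python
import math

# Malmquist index decomposition: d<frontier>_t<period> denotes the
# efficiency of the DMU's period-t activity measured against the
# given frontier.  catch_up * frontier_shift equals the index itself.
def malmquist(d1_t1, d1_t2, d2_t1, d2_t2):
    catch_up = d2_t2 / d1_t1
    frontier_shift = math.sqrt((d1_t1 / d2_t1) * (d1_t2 / d2_t2))
    return catch_up, frontier_shift, catch_up * frontier_shift

# Hypothetical scores: the DMU improved and the frontier also moved.
cu, fs, mi = malmquist(0.50, 0.60, 0.45, 0.63)
```

A value of `mi` above 1 indicates productivity growth, attributable to efficiency catch-up (`cu`) and/or an outward frontier shift (`fs`).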

9.
This paper investigates the impact of customers' stock-out based substitution on the product availability and the channel efficiency of a dual-channel supply chain, which consists of a supplier distributing a single product to customers through both its wholly owned direct channel and an independent retailer. The supplier and its retailer, with the objective of optimizing their own profit, simultaneously choose their own base-stock level to satisfy the stochastic demand from the customers whose channel preferences are heterogeneous and may be affected by each channel's product availability. The customers dynamically substitute between the two channels in the event of a stock-out. The result shows that the effect of the stock-out based substitution may increase or decrease the efficiency of a decentralized supply chain. It is found that while the integrated supplier–retailer may consolidate the base-stock levels to benefit from stock-out based substitution, the independent supplier and retailer are more inattentive to customers' stock-out based substitution. Thus, the competitive base-stock levels of the decentralized dual-channel supply chain rarely agree with the system optimal levels. Various contracts are examined to shed light on channel coordination mechanisms. In addition, it is shown that the channel efficiency of the dual-channel distribution can be improved by the emergence of Stackelberg leadership from either the supplier or the retailer.
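The base-stock decision referred to above can be illustrated, in a deliberately simplified single-channel setting, by the classic newsvendor critical fractile; the paper's dual-channel model with stock-out based substitution couples two such decisions and is more involved. All parameters below are hypothetical, and normally distributed demand is an assumption made here for the sketch.

```python
from statistics import NormalDist

# Single-location base-stock level via the newsvendor critical
# fractile: stock up to the demand quantile where the marginal cost
# of understocking balances the marginal cost of overstocking.
def base_stock(mu, sigma, underage_cost, overage_cost):
    fractile = underage_cost / (underage_cost + overage_cost)
    return NormalDist(mu, sigma).inv_cdf(fractile)

# With equal underage and overage costs the fractile is 0.5, so the
# base-stock level equals mean demand.
level = base_stock(100.0, 20.0, 5.0, 5.0)
```

Raising the underage cost pushes the fractile above 0.5 and the base-stock level above the mean, which is the direction substitution pressure would push an integrated supplier-retailer.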

10.
Given a tournament T, Slater's problem consists in determining a linear order (i.e. a complete directed graph without directed cycles) at minimum distance from T, the distance between T and a linear order O being the number of directed edges with different orientations in T and in O. This paper studies the complexity of this problem and of several variants of it: computing a Slater order, computing a Slater winner, checking that a given vertex is a Slater winner and so on.
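The distance just defined can be computed by brute force for tiny tournaments: enumerate every linear order and count the arcs whose orientation disagrees with it. This sketch is only feasible for small n; the abstract's point is precisely that the general problem is computationally hard.

```python
from itertools import permutations

# Brute-force Slater order: among all linear orders of the vertices,
# pick one minimizing the number of tournament arcs it reverses.
def slater(vertices, arcs):
    def dist(order):
        pos = {v: i for i, v in enumerate(order)}
        # an arc (u, v) disagrees with the order if u comes after v
        return sum(1 for u, v in arcs if pos[u] > pos[v])
    best = min(permutations(vertices), key=dist)
    return best, dist(best)

# A 3-cycle (A beats B, B beats C, C beats A) is at distance 1 from
# every linear order, so its Slater distance is 1.
order, d = slater("ABC", [("A", "B"), ("B", "C"), ("C", "A")])
```

The first vertex of any minimizing order is a Slater winner for the tournament.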

11.
The Steklov problem considered in the paper describes free two-dimensional oscillations of an ideal, incompressible, heavy fluid in a half-plane covered by a rigid dock with two symmetric gaps. An equivalent reduction of the problem to two spectral problems for integral operators allows us to find limits for all eigenfrequencies as the distance between the gaps tends to zero or infinity. For the fundamental eigenfrequency and the corresponding eigenfunction, two terms are found in the asymptotic expansion as the distance tends to infinity. It is proved that all eigenvalues are simple for any distance. Bibliography: 15 titles. Dedicated to the centenary of V. A. Steklov's paper [1]. Translated from Zapiski Nauchnykh Seminarov POMI, Vol. 297, 2003, pp. 162–190.

12.
In this paper, we introduce a new generalized centralized resource allocation model that extends Lozano and Villa's and Asmild et al.'s models to a more general case. To uncover the sources of the total input contraction in the generalized centralized resource allocation model, we apply structural efficiency to decompose it into three components: aggregate technical efficiency, aggregate allocative efficiency, and re-transferable efficiency. The proposed models are not only flexible enough for the central decision maker to adjust inputs and outputs to achieve the total input contraction but also identify the sources of that contraction, thereby giving rise to an important interpretation and understanding of the generalized centralized resource allocation model. Finally, an empirical example illustrates the approach.

13.
In this paper we consider the parameter space of all the linear inequality systems, in the n-dimensional Euclidean space, with a fixed and arbitrary (possibly infinite) index set. This parameter space is endowed with the topology of the uniform convergence of the coefficient vectors by means of an extended distance. Some authors, in a different context in which the index set is finite and, accordingly, the coefficients are bounded, consider the boundary of the set of consistent systems as the set of ill-posed systems. The distance from the nominal system to this boundary (distance to ill-posedness), which is itself a measure of the stability of the system, plays a decisive role in the complexity analysis of certain algorithms for finding a solution of the system. In our context, the presence of infinitely many constraints leads us to consider separately two subsets of inconsistent systems, the so-called strongly inconsistent systems and the weakly inconsistent systems. Moreover, the possible unboundedness of the coefficient vectors of a system gives rise to a special subset formed by those systems whose distance to ill-posedness is infinite. Attending to these two facts, and according to the idea that a system is ill-posed when small changes in the system's data yield different types of systems, the boundary of the set of strongly inconsistent systems now arises as the generalized ill-posedness set. The paper characterizes this generalized ill-posedness of a system in terms of the so-called associated hypographical set, leading to an explicit formula for the distance to generalized ill-posedness. On the other hand, the consistency value of a system, also introduced in the paper, provides an alternative way to determine its distance to ill-posedness (in the original sense), and additionally allows us to distinguish the consistent well-posed systems from the inconsistent well-posed ones.
The finite case is shown to be a meeting point of our linear semi-infinite approach to the distance to ill-posedness with certain results derived for conic linear systems. Applications to the analysis of the Lipschitz properties of the feasible set mapping, as well as to the complexity analysis of the ellipsoid algorithm, are also provided. This research has been partially supported by grants BFM2002-04114-C02 (01-02) from MCYT (Spain) and FEDER (E.U.), and Bancaja-UMH (Spain).

14.
With the fast development of financial products and services, banks' credit departments have collected large amounts of data, which risk analysts use to build appropriate credit scoring models to evaluate an applicant's credit risk accurately. One such model is the Multi-Criteria Optimization Classifier (MCOC). By finding a trade-off between the overlapping of different classes and the total distance from input points to the decision boundary, MCOC can derive a decision function from distinct classes of training data and subsequently use this function to predict the class label of an unseen sample. In many real-world applications, however, owing to noise, outliers, class imbalance, nonlinearly separable problems, and other uncertainties in the data, classification quality degenerates rapidly when using MCOC. In this paper, we propose a novel multi-criteria optimization classifier based on kernels, fuzzification, and penalty factors (KFP-MCOC): first, a kernel function is used to map input points into a high-dimensional feature space; then an appropriate fuzzy membership function is introduced into MCOC and associated with each data point in the feature space; and unequal penalty factors are added to the input points of the imbalanced classes. Thus, the effects of the aforementioned problems are reduced. Our experimental results on credit risk evaluation, and their comparison with MCOC, support vector machines (SVM), and fuzzy SVM, show that KFP-MCOC can enhance the separation of different applicants, the efficiency of credit risk scoring, and the generalization ability in predicting the credit rank of a new applicant.

15.
A new image-matching algorithm based on a modified Hausdorff distance is proposed in this paper. After the corners of two images are extracted with the Harris corner detector, a Hausdorff distance that incorporates the number of coinciding points in the point sets is introduced in place of the traditional Hausdorff distance, which improves matching accuracy. The Hausdorff distance coefficient matrix is computed by matching corner neighborhoods, and initial matching point pairs are obtained under the rule that a small coefficient indicates a good match. Finally, wrong matching point pairs are removed using the distance-ratio invariant, leaving the correct matching point pairs. Experimental results show that the proposed method can easily and quickly process images from multiple sensors.
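The classical symmetric Hausdorff distance that the abstract modifies can be sketched directly; the coincidence-count modification and the corner-matching pipeline are not reproduced here. Point sets below are hypothetical.

```python
import math

# Classical Hausdorff distance between two finite 2-D point sets:
# the largest distance from any point in one set to its nearest
# neighbour in the other, symmetrized over both directions.
def directed_hausdorff(a, b):
    return max(min(math.dist(p, q) for q in b) for p in a)

def hausdorff(a, b):
    return max(directed_hausdorff(a, b), directed_hausdorff(b, a))

a = [(0.0, 0.0), (1.0, 0.0)]
b = [(0.0, 0.0), (3.0, 0.0)]
h = hausdorff(a, b)   # 2.0: the unmatched point (3, 0) dominates
```

Because a single outlier point determines the value, the baseline measure is sensitive to noise, which is one motivation for modified variants such as the one in this paper.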

16.
Considering that, in some cases, it is difficult to determine the exact values of attributes precisely and that their values can be treated as fuzzy data, this paper extends the TOPSIS method to deal with fuzzy data, and an algorithm for determining the best choice among all possible choices when the data are fuzzy is also presented. In this approach, to identify the fuzzy ideal solution and the fuzzy negative ideal solution, one of the Yager indices, used for ordering fuzzy quantities in [0, 1], is applied. Using Yager's index leads to a procedure for choosing the fuzzy ideal and negative ideal solutions directly from the data for the observed alternatives. The Hamming distance is then proposed for calculating the distance between two triangular fuzzy numbers. Finally, an application is given to clarify the main results developed in the paper.
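The Hamming distance between triangular fuzzy numbers can be sketched as follows. One common definition, assumed here for illustration (the paper's exact normalization may differ), averages the component-wise absolute differences of the two triples.

```python
# Hamming distance between triangular fuzzy numbers represented as
# triples (left, mode, right), averaging component-wise differences.
def hamming_tfn(a, b):
    return sum(abs(x - y) for x, y in zip(a, b)) / 3.0

# Two hypothetical triangular fuzzy ratings one unit apart in every
# component are at Hamming distance 1.
d = hamming_tfn((1.0, 2.0, 3.0), (2.0, 3.0, 4.0))
```

In a fuzzy TOPSIS ranking, each alternative's distances to the fuzzy ideal and negative ideal solutions would be accumulated from such per-attribute distances.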

17.
Pesticides' dynamic effects and production uncertainty play an important role in farmers' production decisions. Pesticides have a current production impact through reducing crop damage in the current period and a future impact through affecting farm biodiversity, which alters the future production environment. This study presents the difference in inefficiency arising from models that ignore the dynamic effects of pesticides in production decisions and the impact of production uncertainty. A dynamic data envelopment analysis (DEA) model is applied to the outputs, inputs, and undesirables of Dutch arable farms over the period 2003–2007. A bootstrap approach is used to explain farmers' performance, providing empirical representations of the impact of stochastic elements on production. These empirical representations are used to adjust firms' inefficiency scores so as to incorporate production uncertainty in the efficiency evaluation. We find that efficiency increases dramatically when a production technology representation that considers both pesticides' dynamic impacts and production uncertainty is adopted.

18.
This paper studies sales effort coordination for a supply chain with one manufacturer and two retail channels, where an online retailer offers a lower price and free-rides on a brick-and-mortar retailer's sales effort. The free-riding effect reduces the brick-and-mortar retailer's desired effort level, and thus hurts the manufacturer's profit and the overall supply chain performance. To achieve sales effort coordination, we design a contract with price match and a selective compensation rebate. We also examine other contracts, including the target rebate contract and the wholesale price discount contract, both with price match. The numerical analysis shows that the selective rebate outperforms the other contracts in coordinating the brick-and-mortar retailer's sales effort and improving supply chain efficiency.

19.
This paper proposes a two-dimensional efficiency decomposition (2DED) of profitability for a production system to account for the demand effect observed in productivity analysis. The first dimension identifies four components of efficiency: capacity design, demand generation, operations, and demand consumption, using network data envelopment analysis (Network DEA). The second dimension decomposes the efficiency measures and integrates them into a profitability efficiency framework, so that each component's profitability change can be analyzed in terms of technical efficiency change, scale efficiency change, and allocative efficiency change. An empirical study based on data from 2006 to 2008 for the US airline industry finds that the regress in productivity was caused mainly by demand fluctuation in 2007–2008 rather than by technical regression in production capabilities.

20.
This paper aims to relate the LeChatelier principle, first introduced into economics by Samuelson (1947), to the DEA approach through two propositions. These propositions bridge the principle over a DEA model with and without the presence of non-discretionary inputs and enable one to make comparisons among the various efficiency measures under different conditions. The quasi-fixity of some inputs prevents a firm from instantly and freely adjusting its input combination in order to minimize its production costs. The assumption that all inputs are discretionary tends to exaggerate managers' ability to dispense resources and yields invalid information on the adjustment of the current input mix.

