Similar Documents
20 similar documents found.
1.
Pawlak’s attribute dependency degree model is applicable to feature selection in pattern recognition. However, the dependency degrees given by the model are often computed inadequately because they rely on the indiscernibility relation. This paper discusses an improvement to Pawlak’s model and presents a new attribute dependency function. The proposed model is based on decision-relative discernibility matrices and, by referring to the matrix, measures how many times condition attributes are used to determine the decision value. The proposed dependency degree is computed by distinguishing the cases in which two decision values are equal and in which they are unequal. A feature of the proposed model is that its attribute dependency degrees have significant properties related to those of Armstrong’s axioms. An advantage of the proposed model is that data efficiency is considered in the computation of dependency degrees. Examples show that the proposed model computes dependency degrees more strictly than Pawlak’s model.
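
For reference, here is a minimal Python sketch of the classical Pawlak dependency degree that the paper improves on; the decision table and column indices are hypothetical, and the proposed discernibility-matrix model is not reproduced here.

```python
from collections import defaultdict

def pawlak_dependency(rows, cond_idx, dec_idx):
    """gamma(C, D) = |POS_C(D)| / |U|: the fraction of objects whose
    condition-attribute values determine the decision value uniquely."""
    decisions = defaultdict(set)   # decision values per indiscernibility class
    counts = defaultdict(int)      # class sizes
    for row in rows:
        key = tuple(row[i] for i in cond_idx)
        decisions[key].add(row[dec_idx])
        counts[key] += 1
    consistent = sum(counts[k] for k in counts if len(decisions[k]) == 1)
    return consistent / len(rows)

# Hypothetical decision table: two condition attributes and one decision.
U = [(0, 1, "yes"), (0, 1, "no"), (1, 0, "yes"), (1, 1, "no")]
print(pawlak_dependency(U, cond_idx=(0, 1), dec_idx=2))   # 0.5
```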

2.
This paper applies vague set theory to relational databases. It is well known that integrity constraints such as functional and multivalued dependencies play a key role in any database design. The authors introduce a new definition of vague multivalued dependency (VMVD), called α-VMVD, based on the α-equality of tuples defined in recent work on vague functional dependency [1] from 2012. The definition is shown to be consistent, and a sound and complete set of inference axioms is designed and verified for the α-VMVD.
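
As a rough illustration of the α-equality underlying the definition, the sketch below checks whether two tuples of vague values are α-equal. The similarity measure used is one common choice for vague values and may differ from the one in [1]; all data are hypothetical.

```python
# A vague value is an interval [t, 1 - f] of true/false membership bounds.
def vague_similarity(v1, v2):
    (t1, f1), (t2, f2) = v1, v2
    return 1.0 - (abs(t1 - t2) + abs(f1 - f2)) / 2.0

def alpha_equal(tup1, tup2, alpha):
    """Tuples are alpha-equal if every attribute pair is at least alpha-similar."""
    return all(vague_similarity(a, b) >= alpha for a, b in zip(tup1, tup2))

# Hypothetical tuples with one (t, f) pair per attribute.
t1 = [(0.8, 0.1), (0.6, 0.3)]
t2 = [(0.7, 0.2), (0.6, 0.3)]
print(alpha_equal(t1, t2, alpha=0.85))   # True: minimum similarity is 0.9
```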

3.
Directional distance functions and slacks-based measures of efficiency
In this paper we introduce a SBM (slacks-based measure) of efficiency based on directional distance functions. This measure is contrasted with the SBM due to Professor Tone [Tone, K., 2001. A slacks-based measure of efficiency in data envelopment analysis. European Journal of Operational Research 130, 498–509].
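
The directional distance function that the proposed SBM builds on can be computed by a small linear program. The sketch below solves the plain DDF under constant returns to scale with hypothetical data; it is not the paper's slacks-based variant.

```python
# max beta  s.t.  X*lam <= x0 - beta*gx,  Y*lam >= y0 + beta*gy,  lam >= 0
import numpy as np
from scipy.optimize import linprog

def ddf(X, Y, j0, gx, gy):
    m, n = X.shape                    # m inputs, n DMUs
    # Decision vector: [beta, lam_1 .. lam_n]; maximize beta.
    c = np.r_[-1.0, np.zeros(n)]
    A_ub = np.vstack([np.c_[gx, X],   # X lam + beta gx <= x0
                      np.c_[gy, -Y]]) # -Y lam + beta gy <= -y0
    b_ub = np.r_[X[:, j0], -Y[:, j0]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return -res.fun                   # beta*; 0 means DMU j0 is on the frontier

X = np.array([[2.0, 4.0, 3.0]])       # 1 input, 3 DMUs (hypothetical)
Y = np.array([[2.0, 3.0, 1.0]])       # 1 output
print(ddf(X, Y, j0=2, gx=X[:, 2], gy=Y[:, 2]))   # 0.5
```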

4.
The conventional data envelopment analysis (DEA) measures the relative efficiencies of a set of decision making units (DMUs) with exact values of inputs and outputs. For imprecise data, i.e., mixtures of interval data and ordinal data, some methods have been developed to calculate the upper bound of the efficiency scores. This paper constructs a pair of two-level mathematical programming models whose objective values represent the lower and upper bounds of the efficiency scores, respectively. Based on the concept of productive efficiency and the application of a variable substitution technique, the pair of two-level nonlinear programs is transformed into a pair of ordinary one-level linear programs. Solving the associated pairs of linear programs produces the efficiency intervals of all DMUs. An illustrative example verifies the idea, and a real case provides some interpretation of the interval efficiency. Interval efficiency not only describes the real situation in more detail; psychologically, it also eases the tension felt by the DMUs being evaluated and by the persons conducting the evaluation.
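
A hedged sketch of the bounding idea: the efficiency upper bound evaluates a DMU at its most favorable data (lowest inputs, highest outputs) against the other DMUs at their least favorable data, and the lower bound symmetrically. The CCR multiplier LP and the interval data below are illustrative assumptions, not the paper's exact two-level formulation.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """max u.y0  s.t.  v.x0 = 1,  u.Yj - v.Xj <= 0 for all j;  u, v >= 0."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[np.zeros(m), -Y[:, j0]]                    # maximize u.y0
    A_ub = np.c_[-X.T, Y.T]                              # u.Yj - v.Xj <= 0
    A_eq = np.r_[X[:, j0], np.zeros(s)].reshape(1, -1)   # v.x0 = 1
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                  A_eq=A_eq, b_eq=[1.0], bounds=[(0, None)] * (m + s))
    return -res.fun

# Hypothetical interval data: X_lo/X_hi and Y_lo/Y_hi bound each cell.
X_lo = np.array([[1.8, 3.5]]); X_hi = np.array([[2.2, 4.5]])
Y_lo = np.array([[1.9, 2.5]]); Y_hi = np.array([[2.1, 3.5]])
j0 = 0   # evaluate DMU 0 (placed in column 0 below)
upper = ccr_efficiency(np.c_[X_lo[:, j0], np.delete(X_hi, j0, 1)],
                       np.c_[Y_hi[:, j0], np.delete(Y_lo, j0, 1)], 0)
lower = ccr_efficiency(np.c_[X_hi[:, j0], np.delete(X_lo, j0, 1)],
                       np.c_[Y_lo[:, j0], np.delete(Y_hi, j0, 1)], 0)
print(lower, upper)   # efficiency interval for DMU 0, about [0.86, 1.0]
```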

5.
This paper extends the classical cost efficiency (CE) models to include data uncertainty. We believe that many research situations are best described by the intermediate case, where some uncertain input and output data are available. In such cases the classical cost efficiency models cannot be used, because input and output data appear in the form of ranges. When the data are imprecise in this way, the cost efficiency measure calculated from them should be uncertain as well. Accordingly, we develop a method for estimating upper and lower bounds for the cost efficiency measure under uncertain input and output data, and we extend the theory of efficiency measurement to accommodate incomplete price information by deriving upper and lower bounds for the CE measure. The practical application of these bounds is illustrated by a numerical example.
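
For concreteness, the sketch below computes the classical CE measure at two extreme price vectors drawn from a price range; the technology, prices, and bound construction are illustrative assumptions, since the paper derives its bounds more carefully from the range information.

```python
# CE = (minimum cost of producing y0) / (observed cost c.x0), CRS technology.
import numpy as np
from scipy.optimize import linprog

def min_cost(X, Y, y0, c):
    """min c.x  s.t.  X lam <= x,  Y lam >= y0,  lam >= 0."""
    m, n = X.shape
    # Decision vector: [x (m), lam (n)].
    obj = np.r_[c, np.zeros(n)]
    A_ub = np.vstack([np.c_[-np.eye(m), X],                       # X lam - x <= 0
                      np.c_[np.zeros((Y.shape[0], m)), -Y]])      # -Y lam <= -y0
    b_ub = np.r_[np.zeros(m), -y0]
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (m + n))
    return res.fun

X = np.array([[2.0, 4.0, 4.0], [3.0, 1.0, 4.0]])   # 2 inputs, 3 DMUs
Y = np.array([[1.0, 1.0, 1.0]])                    # 1 output
x0, y0 = X[:, 2], Y[:, 2]                          # evaluate DMU 2
for c in ([1.0, 1.0], [2.0, 1.0]):                 # two extreme price vectors
    print(min_cost(X, Y, y0, np.array(c)) / np.dot(c, x0))   # 0.625, ~0.583
```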

6.
It is shown how to choose the smoothing parameter when a smoothing periodic spline of degree 2m-1 is used to reconstruct a smooth periodic curve from noisy ordinate data. The noise is assumed white, and the true curve is assumed to lie in the Sobolev space W_2^(2m) of periodic functions with absolutely continuous v-th derivatives, v = 0, 1, ..., 2m-1, and square-integrable 2m-th derivative. The criterion is minimum expected squared error, averaged over the data points. The dependence of the optimum smoothing parameter on the sample size, the noise variance, and the smoothness of the true curve is found explicitly.
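
The selection criterion can be mimicked numerically: on simulated periodic data with known truth, pick the smoothing parameter minimizing the average squared error at the data points. SciPy's periodic smoothing spline stands in here for the paper's degree-(2m-1) spline; the closed-form optimum derived in the paper is not reproduced.

```python
import numpy as np
from scipy.interpolate import splrep, splev

rng = np.random.default_rng(0)
n, sigma = 100, 0.2
x = np.linspace(0.0, 2.0 * np.pi, n)          # one full period
truth = np.sin(x) + 0.3 * np.sin(3.0 * x)     # smooth periodic curve
y = truth + sigma * rng.standard_normal(n)    # noisy ordinate data

# Grid-search the smoothing parameter around the classical guess n*sigma^2,
# scoring each periodic-spline fit by average squared error vs. the truth.
best_err, best_s = min(
    (np.mean((splev(x, splrep(x, y, s=s, per=True)) - truth) ** 2), s)
    for s in n * sigma**2 * np.logspace(-1, 1, 41)
)
print(f"chosen smoothing parameter: {best_s:.2f} (avg sq. error {best_err:.4f})")
```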

7.
Determination of the membership functions is vital in practical applications of fuzzy set theory. This paper presents a guideline for constructing the membership functions of fuzzy sets whose elements have a defining feature with a known probability density function (pdf) in the universe of discourse. The method finds the smallest fuzzy set that assigns high average membership values to objects whose defining features are distributed according to the given pdf. It is shown that, for any pdf, the method is capable of generating membership functions in accordance with the possibility-probability consistency principle. Membership functions derived from some well-known pdfs are presented, together with an application to solving a noise-contaminated linear system of equations.
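
One standard pdf-to-membership construction, sketched below, normalizes the density by its mode so the most probable value gets membership 1. This meets a basic possibility-probability consistency requirement, though the paper's smallest-fuzzy-set criterion refines it; the Gaussian pdf and grid are illustrative.

```python
import numpy as np
from scipy.stats import norm

def membership_from_pdf(pdf, grid):
    """mu(x) = f(x) / max f: membership 1 at the mode of the density."""
    vals = pdf(grid)
    return vals / vals.max()

grid = np.linspace(-4, 4, 201)
mu = membership_from_pdf(norm(loc=0, scale=1).pdf, grid)
print(mu[100])   # membership 1.0 at the mode x = 0
```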

8.
A procedure is presented for calculating the trace of the influence matrix associated with a polynomial smoothing spline of degree 2m-1 fitted to n distinct, not necessarily equally spaced or uniformly weighted, data points. The procedure requires order m^2 n operations and therefore permits efficient order m^2 n calculation of statistics associated with a polynomial smoothing spline, including the generalized cross validation. The method is a significant improvement over an existing method, which requires order n^3 operations.
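
A dense O(n^3) illustration of the statistic in question: the trace of the influence matrix A(lambda) and the GCV score GCV(lambda) = n ||(I - A) y||^2 / tr(I - A)^2. The second-difference penalty below is a crude surrogate for the spline penalty, so the numbers are illustrative only; the paper's contribution is computing tr(A) in order m^2 n operations instead.

```python
import numpy as np

def gcv(y, lam):
    """GCV score for the hat matrix A = (I + lam * D'D)^{-1} of a
    second-difference roughness penalty."""
    n = len(y)
    D = np.diff(np.eye(n), 2, axis=0)               # second-difference operator
    A = np.linalg.inv(np.eye(n) + lam * D.T @ D)    # influence (hat) matrix
    resid = y - A @ y
    return n * (resid @ resid) / np.trace(np.eye(n) - A) ** 2

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 50)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(50)
lams = np.logspace(-4, 2, 25)
print("GCV-best lambda:", lams[np.argmin([gcv(y, lam) for lam in lams])])
```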

9.
10.
We propose a parsimonious extension of the classical latent class model to cluster categorical data by relaxing the conditional independence assumption. Under this new mixture model, named the conditional modes model (CMM), variables are grouped into conditionally independent blocks. Each block follows a parsimonious multinomial distribution in which the few free parameters model the probabilities of the most likely levels, while the remaining probability mass is spread uniformly over the other levels of the block. Thus, when the conditional independence assumption holds, this model defines parsimonious versions of the standard latent class model. When the assumption is violated, the proposed model brings out the main intra-class dependencies between variables, summarizing each class with relatively few characteristic levels. Model selection is carried out by a hybrid MCMC algorithm that does not require preliminary parameter estimation; maximum likelihood estimation is then performed via an EM algorithm for the best model only. The model's properties are illustrated on simulated data and on three real data sets using the associated R package CoModes. The results show that this model reduces the biases induced by the conditional independence assumption while providing meaningful parameters.
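
The parsimonious block distribution can be written down directly from the description above; the sketch below (in Python rather than the paper's R package) uses hypothetical levels and probabilities.

```python
# A block keeps free probabilities only for its few most likely
# (characteristic) levels; the remaining mass is uniform over the rest.
def block_probability(level, modes, n_levels):
    """modes: dict {level: prob} for the characteristic levels."""
    if level in modes:
        return modes[level]
    return (1.0 - sum(modes.values())) / (n_levels - len(modes))

modes = {"A": 0.6, "B": 0.25}          # hypothetical characteristic levels
for lv in ("A", "B", "C", "D"):        # a block with 4 levels in total
    print(lv, block_probability(lv, modes, n_levels=4))
# The remaining 0.15 mass is split uniformly: C and D get 0.075 each.
```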

11.
12.
Most activities, or events, in discrete event simulation models are probabilistic in nature and describable by density functions. Consequently, persons developing simulation models are frequently required to select and fit density functions to sample data and, correspondingly, to determine the process, or random-event, generator.

This paper seeks to establish that the graduation process involved in simulation modeling may be expedited: generalized Weibull functions numerically fitted by nonlinear least-squares or maximum-likelihood procedures are comparable in performance to the fits of more traditional functions. The use of a single function simplifies the ambiguous process of function selection, while offering, in addition, the important advantage of a closed-form inverse process generator.
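
A sketch of the two paired steps: fit a Weibull to sample data by maximum likelihood, then use its closed-form inverse CDF as the random-variate (process) generator. SciPy's weibull_min, with its optional location shift, stands in for the generalized Weibull family discussed in the paper; the data are simulated.

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(42)
sample = weibull_min.rvs(c=1.8, scale=3.0, size=500, random_state=rng)

c, loc, scale = weibull_min.fit(sample, floc=0)   # MLE fit, location fixed at 0

def weibull_variate(u, c, scale):
    """Closed-form inverse CDF: x = scale * (-ln(1 - u))**(1/c)."""
    return scale * (-np.log(1.0 - u)) ** (1.0 / c)

u = rng.random(5)
print(weibull_variate(u, c, scale))   # five simulated event times
```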


13.
14.
Least squares data fitting with implicit functions
This paper discusses the computational problem of fitting data by an implicitly defined function depending on several parameters. The emphasis is on the technique of algebraic fitting of f(x, y; p) = 0, which can be treated as a linear problem when the parameters appear linearly. Various constraints completing the problem are examined for their effectiveness, in particular for two applications: fitting ellipses and fitting functions defined by the Lotka-Volterra model equations. Finally, we discuss geometric fitting as an alternative and give examples comparing results.
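
Since a general conic is linear in its parameters, an ellipse can be fitted algebraically by linear least squares once a normalization constraint is fixed. The sketch below uses the constraint ||p|| = 1, solved via the SVD, which is one of several constraints of the kind the paper compares; the noisy points are simulated.

```python
# Fit a x^2 + b xy + c y^2 + d x + e y + f = 0 to noisy points.
import numpy as np

def fit_conic(x, y):
    """Least-squares conic through noisy points, constraint ||p|| = 1."""
    D = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(D)
    return Vt[-1]   # right singular vector of the smallest singular value

# Noisy points on the ellipse x^2/4 + y^2 = 1.
rng = np.random.default_rng(3)
t = np.linspace(0, 2 * np.pi, 60)
x = 2 * np.cos(t) + 0.02 * rng.standard_normal(60)
y = np.sin(t) + 0.02 * rng.standard_normal(60)
p = fit_conic(x, y)
print(p / p[0])   # approx [1, 0, 4, 0, 0, -4], i.e. x^2 + 4 y^2 = 4
```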

15.
Vector optimization problems in linear spaces with respect to general domination sets are investigated, including corollaries for Pareto-optimality and weak efficiency. The results contain equivalences between vector optimization problems; interdependencies between objective functions and domination sets; statements about the influence of perturbed objective functions on the decision maker's preferences and thus on the domination set; comparisons of efficiency with respect to polyhedral cones with Pareto-optimality; changes in the objective functions which restrict, extend, or do not alter the set of Pareto-optima; and possibilities for using domination sets directly in the decision space. Conditions for complete efficiency are given.
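
As a baseline for the general domination sets discussed above, the sketch below checks Pareto-optimality under the classical componentwise ordering (the domination cone R^k_+, objectives minimized); general domination sets replace this cone. The objective vectors are hypothetical.

```python
import numpy as np

def pareto_optimal(F):
    """Boolean mask of rows of F (objective vectors, minimized) that no other
    row dominates componentwise with at least one strict inequality."""
    mask = np.ones(len(F), dtype=bool)
    for i in range(len(F)):
        dominated = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if dominated.any():
            mask[i] = False
    return mask

F = np.array([[1.0, 4.0], [2.0, 2.0], [3.0, 3.0], [4.0, 1.0]])
print(pareto_optimal(F))   # [ True  True False  True ]
```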

16.
Efficiency evaluations in data envelopment analysis are shown to be stable under arbitrary perturbations in the convex hulls of the input and output data. The corresponding restricted Lagrange multiplier functions are also shown to be continuous. The results are proved using point-to-set mappings and a particular region of stability from input optimization. Research partly supported by National Science Foundation grants, an Office of Naval Research grant, and the Natural Sciences and Engineering Research Council of Canada.

17.
Data envelopment analysis (DEA) is a classical and prevailing tool for estimating relative efficiencies of multiple decision making units (DMUs). However,...

18.
Elliptic boundary value problems with analytic functionals as data have been studied by Lions and Magenes. In this paper we relax their assumption that the boundary of the domain ω is an analytic surface; we assume only that ω equals the interior of its closure. In this case we obtain results for the Dirichlet problem for second order equations that are analogous to theirs: there is a quasi-analytic class of functions on ∂ω, with a natural topology, such that the Dirichlet problem is well posed when the data belongs to the dual of this space. The author acknowledges with gratitude the support he received from the National Science Foundation through a graduate fellowship and NSF GP-6761. Received 31 January 1969.

19.
20.
This paper deals with the k-sample problem for functional data when the observations are density functions. We introduce test procedures based on distances between pairs of density functions (the L1 distance and the Hellinger distance, among others). A simulation study is carried out to compare the practical behaviour of the proposed tests, and theoretical derivations allow weighted samples in the test procedures. The paper ends with a real data example: for a collection of European regions we estimate the regional relative income densities and then test the significance of the country effect.
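
The pairwise distances the tests are built on are straightforward to evaluate numerically. The sketch below computes the L1 and Hellinger distances between two Gaussian kernel density estimates of hypothetical samples; the calibration of the actual test statistic (e.g. by permutation) is omitted.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(7)
f = gaussian_kde(rng.normal(0.0, 1.0, 200))   # density estimate, sample 1
g = gaussian_kde(rng.normal(0.5, 1.2, 200))   # density estimate, sample 2

# Evaluate both densities on a grid and integrate numerically.
grid = np.linspace(-6, 6, 1001)
dx = grid[1] - grid[0]
l1 = np.sum(np.abs(f(grid) - g(grid))) * dx
hellinger = np.sqrt(0.5 * np.sum((np.sqrt(f(grid)) - np.sqrt(g(grid))) ** 2) * dx)
print(f"L1 = {l1:.3f}, Hellinger = {hellinger:.3f}")
```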

