Similar Documents
20 similar documents found; search took 15 ms.
1.
This paper proposes a dynamic data envelopment analysis (DEA) model to measure the system and period efficiencies at the same time for multi-period systems, where quasi-fixed inputs or intermediate products are the source of inter-temporal dependence between consecutive periods. A mathematical relationship is derived in which the complement of the system efficiency is a linear combination of the complements of the period efficiencies. The proposed model is also more discriminative than the existing ones in identifying the systems with better performance. Taiwanese forests, where the forest stock plays the role of quasi-fixed input, are used to illustrate this approach. The results show that the method for calculating the system efficiency in the literature produces over-estimated scores when the dynamic nature is ignored, which makes it necessary to conduct a dynamic analysis whenever the data is available.
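As background for the DEA notation, the static single-period building block that dynamic models extend is the input-oriented CCR efficiency score, obtained from a small linear program. A minimal sketch with hypothetical two-DMU data (this is the textbook envelopment form, not the paper's dynamic model):

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o.
    X: (n_dmus, n_inputs), Y: (n_dmus, n_outputs).
    min theta  s.t.  sum_j lam_j x_j <= theta * x_o,  sum_j lam_j y_j >= y_o."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]            # variables: [theta, lam_1..lam_n]
    A_ub, b_ub = [], []
    for i in range(m):                     # inputs scaled down by theta
        A_ub.append(np.r_[-X[o, i], X[:, i]])
        b_ub.append(0.0)
    for r in range(s):                     # outputs must be at least y_o
        A_ub.append(np.r_[0.0, -Y[:, r]])
        b_ub.append(-Y[o, r])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.fun

X = np.array([[2.0], [4.0]])   # one input, two DMUs (hypothetical data)
Y = np.array([[1.0], [1.0]])   # identical output
print(ccr_efficiency(X, Y, 0))  # DMU 0 uses less input for the same output
print(ccr_efficiency(X, Y, 1))  # DMU 1 could shrink its input by half
```

DMU 0 lies on the frontier (score 1.0); DMU 1 gets 0.5, meaning it could produce the same output with half the input.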

2.
This paper presents an overview of methods for the analysis of data structured in blocks of variables or in groups of individuals. More specifically, regularized generalized canonical correlation analysis (RGCCA), which is a unifying approach for multiblock data analysis, is extended so that it also serves as a unifying tool for multigroup data analysis. The versatility and usefulness of our approach are illustrated on two real datasets.

3.
Attribute reduction is very important in rough set-based data analysis (RSDA) because it can be used to simplify the induced decision rules without reducing the classification accuracy. The notion of reduct plays a key role in rough set-based attribute reduction. In rough set theory, a reduct is generally defined as a minimal subset of attributes that can classify the same domain of objects as unambiguously as the original set of attributes. Nevertheless, from a relational perspective, RSDA relies on a kind of dependency principle. That is, the relationship between the class labels of a pair of objects depends on component-wise comparison of their condition attributes. The larger the number of condition attributes compared, the greater the probability that the dependency will hold. Thus, elimination of condition attributes may cause more object pairs to violate the dependency principle. Based on this observation, a reduct can be defined alternatively as a minimal subset of attributes that does not increase the number of objects violating the dependency principle. While the alternative definition coincides with the original one in ordinary RSDA, it is more easily generalized to cases of fuzzy RSDA and relational data analysis.
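The pairwise dependency-principle view lends itself to a direct, if brute-force, check: count the object pairs that agree on the chosen attributes yet disagree on the class label. A toy sketch with a hypothetical two-attribute decision table:

```python
from itertools import combinations

def violations(rows, attrs):
    """Count object pairs that agree on every attribute in `attrs`
    yet carry different class labels (dependency-principle violations)."""
    return sum(1 for (x, cx), (y, cy) in combinations(rows, 2)
               if all(x[a] == y[a] for a in attrs) and cx != cy)

def is_reduct(rows, attrs, full):
    """`attrs` is a reduct if it keeps the violation count of the full
    attribute set, and dropping any single attribute would increase it."""
    base = violations(rows, full)
    return (violations(rows, attrs) == base and
            all(violations(rows, attrs - {a}) > base for a in attrs))

# hypothetical decision table: condition attributes a, b and a class label
rows = [({"a": 0, "b": 0}, "no"),
        ({"a": 0, "b": 1}, "no"),
        ({"a": 1, "b": 0}, "yes")]
full = {"a", "b"}
print(is_reduct(rows, {"a"}, full))  # True: b can be dropped safely
print(is_reduct(rows, {"b"}, full))  # False: without a, a pair violates the dependency
```

Here {a} alone separates the classes, so it is a reduct, while {b} leaves rows 1 and 3 indiscernible despite their different labels.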

4.
Traditional studies in data envelopment analysis (DEA) view systems as a whole when measuring the efficiency, ignoring the operation of individual processes within a system. This paper builds a relational network DEA model, taking into account the interrelationship of the processes within the system, to measure the efficiency of the system and those of the processes at the same time. The system efficiency thus measured more properly represents the aggregate performance of the component processes. By introducing dummy processes, the original network system can be transformed into a series system where each stage in the series is of a parallel structure. Based on these series and parallel structures, the efficiency of the system is decomposed into the product of the efficiencies of the stages in the series and the inefficiency slack of each stage into the sum of the inefficiency slacks of its component processes connected in parallel. With efficiency decomposition, the process which causes the inefficient operation of the system can be identified for future improvement. An example of the non-life insurance industry in Taiwan illustrates the whole idea.

5.
Rough set theory provides a powerful tool for dealing with uncertainty in data. The application of a variety of rough set models to mining data stored in a single table has been widely studied. However, the analysis of data stored in a relational structure using rough sets remains an active research area. This paper proposes compound approximation spaces and their constrained versions, which are intended for handling uncertainty in relational data. The proposed spaces extend tolerance approximation spaces to the relational case. Compared with compound approximation spaces, the constrained versions make it possible to derive new knowledge from relational data. The proposed approach can improve the mining of relational data that is uncertain, incomplete, or inconsistent.

6.
A structure for representing inexact information in the form of a relational database is presented. The structure differs from ordinary relational databases in two important respects: Components of tuples need not be single values and a similarity relation is required for each domain set of the database. Two critical properties possessed by ordinary relational databases are proven to exist in the fuzzy relational structure. These properties are (1) no two tuples have identical interpretations, and (2) each relational operation has a unique result.
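The role of the per-domain similarity relation can be illustrated with a threshold test: two tuples are treated as having the same interpretation when every component pair is sufficiently similar. A minimal sketch; the COLOR domain and its similarity degrees below are hypothetical, not from the paper:

```python
def similar(a, b, sim, threshold):
    """Domain-level similarity: `sim` maps unordered value pairs to a
    degree in [0, 1]; identical values always have degree 1."""
    if a == b:
        return 1.0 >= threshold
    return sim.get(frozenset((a, b)), 0.0) >= threshold

def same_interpretation(t1, t2, sims, threshold):
    """Two tuples are redundant at a given threshold if every
    component pair is similar under its domain's relation."""
    return all(similar(a, b, s, threshold)
               for a, b, s in zip(t1, t2, sims))

# hypothetical COLOR domain with an explicit similarity relation
color_sim = {frozenset(("red", "crimson")): 0.9,
             frozenset(("red", "blue")): 0.1}
sims = [color_sim]
print(same_interpretation(("red",), ("crimson",), sims, 0.8))  # True: degree 0.9
print(same_interpretation(("red",), ("blue",), sims, 0.8))     # False: degree 0.1
```

At threshold 0.8, ("red",) and ("crimson",) would be merged to preserve property (1), while ("red",) and ("blue",) remain distinct tuples.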

7.
The asymptotic behavior, for large sample size, is given for the distribution of the canonical correlation coefficients. The result is used to examine the Bartlett-Lawley test that the residual population canonical correlation coefficients are zero. A marginal likelihood function for the population coefficients is obtained and the maximum marginal likelihood estimates are shown to provide a bias correction.
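For concreteness, the sample canonical correlation coefficients that these asymptotics describe can be computed as the singular values of S11^{-1/2} S12 S22^{-1/2}, built from the sample covariance matrix. A sketch on synthetic data sharing one latent factor (the data and seed are illustrative, not from the paper):

```python
import numpy as np

def canonical_correlations(X, Y):
    """Sample canonical correlation coefficients: singular values of
    S11^{-1/2} S12 S22^{-1/2} from the sample covariance blocks."""
    n = X.shape[0]
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    S11 = Xc.T @ Xc / (n - 1)
    S22 = Yc.T @ Yc / (n - 1)
    S12 = Xc.T @ Yc / (n - 1)
    def inv_sqrt(S):
        w, V = np.linalg.eigh(S)        # symmetric inverse square root
        return V @ np.diag(w ** -0.5) @ V.T
    M = inv_sqrt(S11) @ S12 @ inv_sqrt(S22)
    return np.linalg.svd(M, compute_uv=False)

rng = np.random.default_rng(0)
Z = rng.standard_normal((500, 1))       # one shared latent factor
X = np.c_[Z + 0.1 * rng.standard_normal((500, 1)),
          rng.standard_normal((500, 1))]
Y = np.c_[Z + 0.1 * rng.standard_normal((500, 1)),
          rng.standard_normal((500, 1))]
print(canonical_correlations(X, Y))     # first coefficient near 1, second small
```

With one shared factor and independent noise dimensions, the first sample coefficient sits near 1 while the second hovers near zero, which is the situation the Bartlett-Lawley residual test is designed to detect.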

8.
Using a wavelets-based estimator of the bivariate density, we introduce an estimation method for nonlinear canonical analysis. Consistency of the resulting estimators of the canonical coefficients and the canonical functions is established. Under some conditions, asymptotic normality results for these estimators are obtained. It is then shown how to compute these estimators in practice using matrix computations, and the finite-sample performance of the proposed method is evaluated through simulations.

9.
1. Introduction. Let X and Y be p x 1 and q x 1 random vectors, respectively, and let p ≤ q. Canonical correlation analysis focuses on the relationship between the two sets of random variables X and Y. We call Σ₁₁⁻¹Σ₁₂Σ₂₂⁻¹Σ₂₁ the canonical correlation matrix (CCM). Let ρ₁², …, ρₚ² (1 ≥ ρ₁² ≥ ⋯ ≥ ρₚ² ≥ 0) be the eigenvalues of the CCM; their positive square roots ρ₁, …, ρₚ are the population canonical correlation coefficients. Let … be the sample covariance matrix, where …

10.
This paper deals with asymptotics for multiple-set linear canonical analysis (MSLCA). A definition of this analysis, that adapts the classical one to the context of Euclidean random variables, is given and properties of the related canonical coefficients are derived. Then, estimators of the MSLCA’s elements, based on empirical covariance operators, are proposed and asymptotics for these estimators is obtained. More precisely, we prove their consistency and we obtain asymptotic normality for the estimator of the operator that gives MSLCA, and also for the estimator of the vector of canonical coefficients. These results are then used to obtain a test for mutual non-correlation between the involved Euclidean random variables.

11.
We consider the problem of optimally separating two multivariate populations. Robust linear discriminant rules can be obtained by replacing the empirical means and covariance in the classical discriminant rules by S or MM-estimates of location and scatter. We propose to use a fast and robust bootstrap method to obtain inference for such a robust discriminant analysis. This is useful since classical bootstrap methods may be unstable as well as extremely time-consuming when robust estimates such as S or MM-estimates are involved. In particular, fast and robust bootstrap can be used to investigate which variables contribute significantly to the canonical variate, and thus the discrimination of the classes. Through bootstrap, we can also examine the stability of the canonical variate. We illustrate the method on some real data examples.

12.
Canonical correlation analysis is shown to be equivalent to the problem of estimating a linear regression matrix, B0, of less than full rank. After reparameterizing B0, some estimates of the new parameters, obtained by solving an eigenvalue problem and closely related to canonical correlations and vectors, are found to be consistent and efficient when the residuals are mutually independent. When the residuals are generated by an autocorrelated, stationary time series these estimates are still consistent and obey a central limit theorem, but they are no longer efficient. Alternative, more general estimates are suggested which are efficient in the presence of serial correlation. Asymptotic theory and iterative computational procedures for these estimates are given. A likelihood ratio test for the rank of B0 is seen to be an extension of a familiar test for canonical correlations.

13.
14.
Generalized canonical correlation analysis is a versatile technique that allows the joint analysis of several sets of data matrices. The generalized canonical correlation analysis solution can be obtained through an eigenequation, and distributional assumptions are not required. When dealing with multiple-set data, the situation frequently occurs that some values are missing. In this paper, two new methods for dealing with missing values in generalized canonical correlation analysis are introduced. The first approach, which does not require iterations, is a generalization of the Test Equating method available for principal component analysis. In the second approach, missing values are imputed in such a way that the generalized canonical correlation analysis objective function does not increase in subsequent steps. Convergence is achieved when the value of the objective function remains constant. By means of a simulation study, we assess the performance of the new methods. We compare the results with those of two available methods: the missing-data passive method, introduced in Gifi’s homogeneity analysis framework, and the GENCOM algorithm developed by Green and Carroll. An application using World Bank data is used to illustrate the proposed methods.
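The monotone-objective imputation device used in the second approach (impute, refit, re-impute so the objective never increases) is most easily seen in a simpler setting. The sketch below is a generic low-rank "hard-impute" loop for matrix completion, not the authors' GCCA algorithm; the data matrix and missing pattern are hypothetical:

```python
import numpy as np

def hard_impute(X, observed, rank=1, iters=200):
    """Iterative low-rank imputation: alternate a truncated-SVD fit with
    re-imputation of the missing cells. The least-squares objective on
    the observed cells never increases (a majorize-minimize argument)."""
    Z = np.where(observed, X, 0.0)
    col_means = Z.sum(0) / observed.sum(0)   # start missing cells at column means
    Z = np.where(observed, X, col_means)
    objectives = []
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        approx = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        objectives.append(float(((X - approx)[observed] ** 2).sum()))
        Z = np.where(observed, X, approx)    # re-impute the missing cells
    return Z, objectives

# hypothetical rank-1 matrix with one entry hidden (its true value is 12)
X = np.outer([1.0, 2.0, 3.0], [1.0, 2.0, 4.0])
observed = np.ones_like(X, dtype=bool)
observed[2, 2] = False
Z, objectives = hard_impute(X, observed)
print(round(Z[2, 2], 2))   # recovers a value close to 12
```

As in the paper's scheme, the stopping rule would be the objective sequence becoming constant; here it decreases monotonically toward zero because the hidden entry belongs to an exactly rank-1 matrix.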

15.
We study the extension of canonical correlation from pairs of random vectors to the case where a data sample consists of pairs of square integrable stochastic processes. Basic questions concerning the definition and existence of functional canonical correlation are addressed and sufficient criteria for the existence of functional canonical correlation are presented. Various properties of functional canonical analysis are discussed. We consider a canonical decomposition, in which the original processes are approximated by means of their canonical components.

16.
We study the limit behavior of the canonical (i.e., degenerate) von Mises statistics based on samples from a sequence of weakly dependent stationary observations satisfying the ψ-mixing condition. The corresponding limit distributions are defined by the multiple stochastic integrals of nonrandom functions with respect to the nonorthogonal Hilbert noises generated by Gaussian processes with nonorthogonal increments.

17.
LP models are usually constructed using index sets and data tables which are closely related to the attributes and relations of relational database (RDB) systems. We extend the syntax of MPL, an existing LP modelling language, in order to connect it to a given RDB system. This approach reuses existing modelling and database software, provides a rich modelling environment and achieves model and data independence. This integrated software enables Mathematical Programming to be widely used as a decision support tool by unlocking the data residing in corporate databases.
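The correspondence between RDB attributes/relations and an LP's index sets/data tables can be illustrated without MPL itself. A minimal sketch using Python's sqlite3; the product-mix tables and column names are hypothetical:

```python
import sqlite3

# hypothetical tables playing the role of an LP's index sets and data tables
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE products (name TEXT PRIMARY KEY, profit REAL);
CREATE TABLE machines (name TEXT PRIMARY KEY, hours REAL);
CREATE TABLE usage (machine TEXT, product TEXT, hours REAL);
INSERT INTO products VALUES ('A', 3.0), ('B', 5.0);
INSERT INTO machines VALUES ('m1', 4.0), ('m2', 12.0);
INSERT INTO usage VALUES ('m1', 'A', 1.0), ('m2', 'B', 2.0);
""")

# relational attributes become the LP's index sets and parameters
products = [r[0] for r in con.execute("SELECT name FROM products ORDER BY name")]
machines = [r[0] for r in con.execute("SELECT name FROM machines ORDER BY name")]
profit = dict(con.execute("SELECT name, profit FROM products"))
capacity = dict(con.execute("SELECT name, hours FROM machines"))
use = {(m, p): h for m, p, h in
       con.execute("SELECT machine, product, hours FROM usage")}

# assemble: maximize sum_p profit[p] * x_p  s.t. machine-hour capacities
objective = [profit[p] for p in products]
constraint_rows = [[use.get((m, p), 0.0) for p in products] for m in machines]
print(objective)         # profit coefficients, one per product
print(constraint_rows)   # machine-hour usage matrix
```

Model and data independence falls out naturally: changing the rows in the tables regenerates the LP's coefficient arrays without touching the model-building code.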

18.
19.
Summary. In canonical correlation analysis a hypothesis concerning the relevance of a subset of variables from each of the two given variable sets is formulated. The likelihood ratio statistic for the hypothesis and an asymptotic expansion for its null distribution are obtained. In discriminant analysis various alternative forms of a hypothesis concerning the relevance of a specified variable subset are also discussed.

20.
The surplus process of an insurance portfolio is defined as the wealth obtained by the premium payments minus the reimbursements made at the times of claims. When this process becomes negative (if ever), we say that ruin has occurred. The general setting is the Gambler's Ruin Problem. In this paper we address the problem of estimating derivatives (sensitivities) of ruin probabilities with respect to the rate of accidents. Estimating probabilities of rare events is a challenging problem, since naïve estimation is not applicable. Solution approaches are very recent, mostly through the use of importance sampling techniques. Sensitivity estimation is an even harder problem for these situations. We study three methods for estimating ruin probabilities: one via importance sampling (IS), and two others via indirect simulation: the storage process (SP), which restates the problem in terms of a queuing system, and the convolution formula (CF). To estimate the sensitivities, we apply the Rare Perturbation Analysis (RPA) method to IS, the Infinitesimal Perturbation Analysis (IPA) method to SP, and the score function method to CF. The simulation methods are compared in terms of their efficiency, a criterion that appropriately weighs precision and CPU time. We also indicate how other criteria, such as set-up time and prior formula development, may be problem-dependent.
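A crude Monte Carlo baseline helps fix the notation. It is precisely the naive estimator that breaks down for genuinely rare ruin events; the parameters below make ruin common enough for it to work, and exponential claim sizes admit a classical closed form to check against. The parameter values are illustrative:

```python
import math
import random

def ruin_prob_mc(u, c, lam, mu, horizon, n_paths, seed=1):
    """Crude Monte Carlo estimate of the (finite-horizon) ruin probability
    for a compound-Poisson surplus: initial capital u, premium rate c,
    claim arrival rate lam, Exp(mean mu) claim sizes."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(n_paths):
        t, surplus = 0.0, u
        while True:
            dt = rng.expovariate(lam)             # time to the next claim
            t += dt
            if t > horizon:
                break                             # survived the horizon
            surplus += c * dt                     # premiums collected meanwhile
            surplus -= rng.expovariate(1.0 / mu)  # claim payment
            if surplus < 0:
                ruined += 1
                break
    return ruined / n_paths

# exponential claims give the classical closed form:
#   psi(u) = (lam * mu / c) * exp(-(1/mu - lam/c) * u)
u, c, lam, mu = 2.0, 1.5, 1.0, 1.0
exact = lam * mu / c * math.exp(-(1.0 / mu - lam / c) * u)
estimate = ruin_prob_mc(u, c, lam, mu, horizon=200.0, n_paths=10000)
print(round(exact, 3), round(estimate, 3))  # the two values agree closely
```

For large u the ruin probability decays exponentially and this naive estimator's relative error explodes, which is exactly why the paper turns to importance sampling and the indirect SP/CF formulations.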


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号