Similar Documents
20 similar documents found (search time: 15 ms)
1.
Classification is one of the most extensively studied problems in the fields of multivariate statistical analysis, operations research and artificial intelligence. Decisions involving a classification of the alternative solutions are of major interest in finance, since several financial decision problems are best studied by classifying a set of alternative solutions (firms, loan applications, investment projects, etc.) into predefined classes. This paper proposes an alternative approach to the classical statistical methodologies that have been extensively used for the study of financial classification problems. The proposed methodology combines the preference disaggregation approach (a multicriteria decision aid method) with decision support systems. More specifically, the FINancial CLASsification (FINCLAS) multicriteria decision support system is presented. The system incorporates a plethora of financial modeling tools, along with powerful preference disaggregation methods that lead to the development of additive utility models for the classification of the considered alternatives into predefined classes. An application in credit granting is used to illustrate the capabilities of the system.
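A minimal sketch of the kind of additive-utility classification rule this abstract describes (in the spirit of preference disaggregation methods such as UTADIS), not the FINCLAS implementation itself; the criteria, marginal utilities, weights and cut-off thresholds below are hypothetical.

```python
# Additive utility classification sketch: weighted marginal utilities of a few
# financial ratios, compared against class thresholds. All numbers are illustrative.
import numpy as np

def marginal_utility(value, worst, best):
    """Linear marginal utility normalized to [0, 1] (piecewise-linear in general)."""
    return np.clip((value - worst) / (best - worst), 0.0, 1.0)

def global_utility(firm, criteria):
    """Weighted sum of marginal utilities over the financial ratios."""
    return sum(w * marginal_utility(firm[name], lo, hi)
               for name, (w, lo, hi) in criteria.items())

# criterion: (weight, worst value, best value) -- hypothetical calibration
criteria = {
    "current_ratio":  (0.4, 0.5, 3.0),
    "debt_to_equity": (0.3, 4.0, 0.0),   # lower is better, so bounds reversed
    "net_margin":     (0.3, -0.1, 0.25),
}
thresholds = [0.65, 0.40]                # class cut-offs on the global utility

def classify(firm):
    u = global_utility(firm, criteria)
    if u >= thresholds[0]:
        return "accept"
    elif u >= thresholds[1]:
        return "further analysis"
    return "reject"

applicant = {"current_ratio": 1.8, "debt_to_equity": 1.2, "net_margin": 0.08}
print(classify(applicant), round(global_utility(applicant, criteria), 3))
```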

2.
Various problems in statistics have been treated by decision rules based on the concept of distance between distributions. The aim of this paper is to give an approach for testing statistical hypotheses using a general class of dissimilarity measures among k ≥ 2 distributions. The test statistics are obtained by replacing the unknown parameters in the expression of the dissimilarity measure with their maximum likelihood estimators. The asymptotic distributions of the resulting test statistics are investigated and the results are applied to multinomial and multivariate normal populations.
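One concrete member of the plug-in divergence family this abstract refers to is the likelihood-ratio (G) statistic for multinomial homogeneity: substituting the maximum likelihood estimators of the cell probabilities into a Kullback-Leibler-type divergence gives a statistic that is asymptotically chi-square under the null hypothesis. The counts below are made up for illustration.

```python
import numpy as np
from scipy.stats import chi2

counts = np.array([[30, 50, 20],      # sample from population 1
                   [25, 40, 35]])     # sample from population 2
n_g = counts.sum(axis=1, keepdims=True)
p_pooled = counts.sum(axis=0) / counts.sum()        # MLE of cell probabilities under H0
expected = n_g * p_pooled                           # expected cell counts

# G^2 = 2 * sum(observed * log(observed / expected)), a plug-in KL-type divergence
G2 = 2.0 * np.sum(counts * np.log(counts / expected))
df = (counts.shape[0] - 1) * (counts.shape[1] - 1)
print(f"G^2 = {G2:.3f}, df = {df}, p-value = {chi2.sf(G2, df):.4f}")
```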

3.
In multi-criteria decision analysis, the overall performance of decision alternatives is evaluated with respect to several, generally conflicting decision criteria. One approach to performing the multi-criteria decision analysis is to use ratio-scale pairwise comparisons concerning the performance of decision alternatives and the importance of decision criteria. In this approach, a classical problem has been the phenomenon of rank reversals. In particular, when a new decision alternative is added to a decision problem, and while the assessments concerning the original decision alternatives remain unchanged, the new alternative may cause rank reversals between the utility estimates of the original decision alternatives. This paper studies the connections between rank reversals and the potential inconsistency of the utility assessments in the case of ratio-scale pairwise comparisons data. The analysis was carried out with recently developed statistical modelling techniques so that the inconsistency of the assessments was measured according to statistical estimation theory. Several types of decision problems were analysed and the results showed that rank reversals caused by inconsistency are natural and acceptable. On the other hand, rank reversals caused by the traditional arithmetic-mean aggregation rule are not in line with the ratio-scale measurement of utilities, whereas geometric-mean aggregation does not cause undesired rank reversals.
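A small sketch of the two aggregation rules contrasted in the abstract: the weighted arithmetic mean versus the weighted geometric mean of ratio-scale local priorities. The priority vectors and criterion weights are hypothetical; the statistical modelling of inconsistency used in the paper is not shown.

```python
import numpy as np

# rows = alternatives, columns = criteria; local priorities derived from pairwise ratios
local = np.array([[0.50, 0.20, 0.40],
                  [0.30, 0.50, 0.35],
                  [0.20, 0.30, 0.25]])
w = np.array([0.5, 0.3, 0.2])           # criterion weights, summing to 1

arithmetic = local @ w                  # traditional weighted arithmetic mean
geometric = np.prod(local ** w, axis=1) # weighted geometric mean, then renormalize
geometric /= geometric.sum()

print("arithmetic:", np.round(arithmetic, 3), "ranking:", np.argsort(-arithmetic))
print("geometric :", np.round(geometric, 3), "ranking:", np.argsort(-geometric))
```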

4.
This article compares two approaches to aggregating multiple inputs and multiple outputs in the evaluation of decision making units (DMUs): data envelopment analysis (DEA) and principal component analysis (PCA). DEA, a non-statistical efficiency technique, employs linear programming to weight the inputs/outputs and rank the performance of DMUs. PCA, a multivariate statistical method, constructs new composite measures from the multiple inputs/outputs. Both methods are applied to three real-world data sets that characterize the economic performance of Chinese cities and yield consistent and mutually complementary results. Nonparametric statistical tests are employed to validate the consistency between the rankings obtained from DEA and PCA.
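A hedged sketch of the two aggregation routes being compared: a CCR-type DEA efficiency score computed by linear programming and a first principal component score of the output/input ratios, with a Spearman rank correlation as one of the nonparametric consistency checks. The data for five hypothetical DMUs are illustrative only.

```python
import numpy as np
from scipy.optimize import linprog
from scipy.stats import spearmanr

X = np.array([[20., 30.], [25., 28.], [18., 35.], [30., 20.], [22., 26.]])     # inputs
Y = np.array([[100., 40.], [120., 35.], [90., 50.], [110., 30.], [105., 45.]]) # outputs
n, m, s = X.shape[0], X.shape[1], Y.shape[1]

def ccr_efficiency(o):
    # multiplier form: maximize u'y_o s.t. v'x_o = 1, u'y_j - v'x_j <= 0, u, v >= 0
    c = np.concatenate([-Y[o], np.zeros(m)])            # linprog minimizes, so negate
    A_eq = np.concatenate([np.zeros(s), X[o]])[None, :] # v'x_o = 1
    A_ub = np.hstack([Y, -X])                           # u'y_j - v'x_j <= 0 for all j
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m), method="highs")
    return -res.fun

dea = np.array([ccr_efficiency(o) for o in range(n)])

# PCA composite of the standardized output/input ratios (first component only)
ratios = np.hstack([Y[:, [j]] / X[:, [i]] for j in range(s) for i in range(m)])
Z = (ratios - ratios.mean(0)) / ratios.std(0)
eigval, eigvec = np.linalg.eigh(np.cov(Z, rowvar=False))
pc1 = Z @ eigvec[:, -1]                                 # largest eigenvalue is last
pc1 *= np.sign(np.corrcoef(pc1, ratios.mean(axis=1))[0, 1])  # orient: higher = better

rho, pval = spearmanr(dea, pc1)
print("DEA:", np.round(dea, 3))
print("PC1:", np.round(pc1, 3))
print("Spearman rank correlation:", round(float(rho), 3))
```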

5.
Ratio analysis is a useful tool of financial analysis. Nevertheless, traditional ratio analysis suffers from several limitations: an over-empirical character, an assumption of certainty, standards of reference that are not useful in all circumstances, etc. Recent research has pointed out that formal decision models can be applied to overcome these limitations. In this article, fuzzy set theory is applied to ratio analysis with respect to one of the major management problems: liquidity. This approach enables the decision maker to include his own experience, and any other type of information, alongside that obtained from the ratio. If all the possible decisions are uniform over time, the decision maker can adopt them in each period of analysis in a programmed form through a simple combination of model inputs. The approach provided in this article can be extended to other ratios or ratio sets.
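A minimal sketch of how a fuzzy-set treatment of a liquidity ratio might look: trapezoidal membership functions translate the crisp current ratio into degrees of membership in linguistic categories, and the analyst's own judgement is blended in with a fuzzy intersection. The categories and breakpoints are hypothetical, not the article's model.

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function with support [a, d] and core [b, c]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def liquidity_memberships(current_ratio):
    return {
        "poor":     trapezoid(current_ratio, -1.0, 0.0, 0.8, 1.2),
        "adequate": trapezoid(current_ratio, 0.8, 1.2, 2.0, 2.5),
        "good":     trapezoid(current_ratio, 2.0, 2.5, 100.0, 101.0),
    }

ratio_view = liquidity_memberships(1.9)                    # evidence from the ratio itself
expert_view = {"poor": 0.1, "adequate": 0.6, "good": 0.7}  # analyst's experience (assumed)
combined = {k: min(ratio_view[k], expert_view[k]) for k in ratio_view}  # fuzzy intersection
print(ratio_view, combined, sep="\n")
```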

6.
This paper is devoted to the problems of testing statistical hypotheses about an experiment when the available information from its sampling is 'vague'. When the information supplied by the experimental sampling is exact, the problems of testing statistical hypotheses about the experiment can be regarded as a particular statistical decision problem. In addition, decision procedures may be used in problems of testing hypotheses. In a similar manner, the problem of testing statistical hypotheses about an experiment when the available sample information is vague is approached in this paper as a particular fuzzy decision problem (as defined by Tanaka, Okuda and Asai). This approach assumes that the previous information about the experiment can be expressed by means of certain conditional probabilistic information, whereas the present information about it can be expressed by means of fuzzy information. The preceding framework allows us to extend the notion of risk function and some nonfuzzy decision procedures to the fuzzy case, and to particularize them to the problem of testing. Finally, several illustrative examples are presented.
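A small numerical sketch in the spirit of the fuzzy-decision framework the abstract builds on: the sample information is a fuzzy event (with Zadeh's probability of a fuzzy event, P(A | H) = Σ_x μ_A(x) P(x | H)), and the test picks the decision with the smaller posterior expected loss. All distributions, memberships and losses below are hypothetical.

```python
import numpy as np

p_x_given = {                                  # sampling distributions over 6 outcomes
    "H0": np.array([0.30, 0.30, 0.20, 0.10, 0.07, 0.03]),
    "H1": np.array([0.03, 0.07, 0.10, 0.20, 0.30, 0.30]),
}
prior = {"H0": 0.6, "H1": 0.4}
mu_fuzzy = np.array([0.0, 0.0, 0.2, 0.6, 1.0, 1.0])   # fuzzy datum "x is rather large"
loss = {("accept H0", "H0"): 0, ("accept H0", "H1"): 5,
        ("reject H0", "H0"): 1, ("reject H0", "H1"): 0}

# probability of the fuzzy information under each hypothesis, then its posterior
p_fuzzy = {h: float(mu_fuzzy @ p_x_given[h]) for h in prior}
evidence = sum(prior[h] * p_fuzzy[h] for h in prior)
posterior = {h: prior[h] * p_fuzzy[h] / evidence for h in prior}

# posterior expected loss (risk) of each terminal decision
risks = {a: sum(posterior[h] * loss[(a, h)] for h in prior)
         for a in ("accept H0", "reject H0")}
print(posterior, risks, min(risks, key=risks.get), sep="\n")
```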

7.
A memetic Differential Evolution approach in noisy optimization
This paper proposes a memetic approach for solving complex optimization problems characterized by a noisy fitness function. The proposed approach aims at solving highly multivariate and multi-modal landscapes which are also affected by a pernicious noise. The proposed algorithm employs a Differential Evolution framework and combines within it three additional algorithmic components. A controlled randomization of the scale factor and crossover rate is employed, which should better handle the uncertainties of the problem and generally enhance the performance of the Differential Evolution. Two combined local search algorithms applied to the scale factor during offspring generation should enhance the performance of the Differential Evolution framework in the case of multi-modal and high-dimensional problems. An on-line statistical test aims at assuring that only strictly necessary samples are taken and that all pairwise selections are properly performed. The proposed algorithm has been tested on a varied set of test problems and its behavior has been studied as a function of dimensionality and noise level. A comparative analysis has been carried out with a standard Differential Evolution, a modern version of Differential Evolution employing randomization of the control parameters, and four metaheuristics tailored to optimization in a noisy environment. One of these metaheuristics is a classical algorithm for noisy optimization, while the other three are modern Differential Evolution based algorithms for noisy optimization which well represent the state of the art in the field. Numerical results show that the proposed memetic approach is an efficient and robust alternative for a variety of complex multivariate noisy problems and can be exported to real-world problems affected by noise whose distribution can be approximated by a Gaussian distribution.
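A sketch of the Differential Evolution core with randomized control parameters; the memetic components described in the abstract (the scale-factor local searches and the on-line statistical test) are omitted, noise is handled here by simple re-sampling, and the noisy sphere function is an assumed stand-in test problem.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_sphere(x, sigma=0.3):
    return float(np.sum(x ** 2) + rng.normal(0.0, sigma))

def fitness(x, samples=5):
    # crude noise handling: average several noisy evaluations
    return np.mean([noisy_sphere(x) for _ in range(samples)])

def de_rand_1_bin(func, dim=10, pop_size=30, generations=200, bounds=(-5.0, 5.0)):
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([func(ind) for ind in pop])
    for _ in range(generations):
        for i in range(pop_size):
            F = rng.uniform(0.4, 1.0)            # randomized scale factor
            CR = rng.uniform(0.1, 0.9)           # randomized crossover rate
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i],
                                     size=3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True       # guarantee at least one mutated gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = func(trial)
            if f_trial <= fit[i]:                 # one-to-one survivor selection
                pop[i], fit[i] = trial, f_trial
    best = np.argmin(fit)
    return pop[best], fit[best]

x_best, f_best = de_rand_1_bin(fitness)
print("best (noisy) fitness:", round(float(f_best), 4))
```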

8.
To address quality control and early-warning problems for complex mechanical and electrical products, a performance analysis model for control charts is constructed that combines the multivariate Bayesian statistical method with economic performance analysis. In the solution model, an FT VSI strategy is used in the multivariate Bayesian control chart: if a small-probability random failure occurs, a loose sampling scheme is selected; otherwise, a strict sampling scheme is applied. To quantify the relationship between the economic and the statistical performance of the multivariate Bayesian control chart, a quality control model based on Monte Carlo simulation is used and the ANOSE (Average Number of Observations to Signals or End of the production run) is computed under different economic parameters, which reflects how strongly they influence the statistical performance of the control chart. In addition, the relationship between the quality control cost and the false alarm rate of the multivariate Bayesian control chart is explained. Finally, a multivariate quality control process for an automobile automatic transmission is used to verify the performance evaluation and optimization of the multivariate FT VSI Bayesian control chart. The results show that the method performs well in this application.
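A hedged illustration of evaluating a variable-sampling-interval (VSI) multivariate control chart by Monte Carlo, in the same spirit as the analysis described above; here a plain Hotelling T² statistic stands in for the Bayesian chart statistic, and all limits, sampling intervals and shift sizes are hypothetical.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)
p, n_sub = 3, 5                      # quality characteristics, subgroup size
ucl = chi2.ppf(0.995, df=p)          # signal limit: n*T^2 ~ chi2(p) with known parameters
warn = chi2.ppf(0.90, df=p)          # warning limit that switches to tight sampling
h_long, h_short = 2.0, 0.5           # sampling intervals (hours)

def run_length(shift, max_samples=10_000):
    """Simulate one run; return (#subgroups, elapsed time) until a signal."""
    mean_shift = np.r_[shift, np.zeros(p - 1)]
    time, interval = 0.0, h_long
    for t in range(1, max_samples + 1):
        time += interval
        xbar = rng.normal(mean_shift, 1.0 / np.sqrt(n_sub))   # subgroup mean vector
        t2 = n_sub * xbar @ xbar                              # T^2 with mean 0, cov I
        if t2 > ucl:
            return t, time
        interval = h_short if t2 > warn else h_long           # VSI rule
    return max_samples, time

runs = [run_length(shift=0.8) for _ in range(2000)]
arl = np.mean([r[0] for r in runs])
ats = np.mean([r[1] for r in runs])
print(f"average subgroups to signal: {arl:.1f}, average time to signal: {ats:.1f} h")
```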

9.
Within data envelopment analysis (DEA) there is a sub-group of papers in which many researchers have sought to improve the differential capabilities of DEA and to fully rank both efficient and inefficient decision-making units. The ranking methods are divided in this paper into six, somewhat overlapping, areas. The first area involves the evaluation of a cross-efficiency matrix, in which the units are self and peer evaluated. The second idea, generally known as the super-efficiency method, ranks through the exclusion of the unit being scored from the dual linear program and an analysis of the change in the Pareto frontier. The third grouping is based on benchmarking, in which a unit is highly ranked if it is chosen as a useful target for many other units. The fourth group utilizes multivariate statistical techniques, which are generally applied after the DEA dichotomic classification. The fifth research area ranks inefficient units through proportional measures of inefficiency. The last approach requires the collection of additional, preferential information from relevant decision-makers and combines multiple-criteria decision methodologies with the DEA approach. However, whilst each technique is useful in a specialist area, no one methodology can be prescribed here as the complete solution to the question of ranking.

10.
In this paper, newsvendor problems for innovative products are analyzed. Because the product is new, no relevant historical data are available for statistical demand analysis. Instead of a probability distribution, a possibility distribution is used to characterize the uncertainty of demand. We consider products whose life cycles are expected to be shorter than the procurement lead times. Determining optimal order quantities for such products is a typical one-shot decision problem for a retailer. Therefore, newsvendor models for innovative products are proposed based on the one-shot decision theory (OSDT). The main contributions of this research are as follows: the general solutions of active, passive, apprehensive and daring focus points and optimal alternatives are proposed and the existence theorem is established in the one-shot decision theory; a simple and effective approach for identifying the possibility distribution is developed; newsvendor models with four types of focus points are built; managerial insights into the behaviors of different types of retailers are gained through theoretical analysis; and the proposed models are scenario-based decision models that provide a fundamental alternative for analyzing newsvendor problems for innovative products.
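A hedged numerical sketch of the possibilistic newsvendor idea, using one reading of the "active" focus point from the one-shot decision theory (pick the demand scenario maximizing the minimum of possibility and satisfaction, then compare order quantities through that scenario). The triangular possibility shape, prices and satisfaction normalization are all assumptions, not the paper's general solutions.

```python
import numpy as np

price, cost, salvage = 15.0, 8.0, 3.0
demand_grid = np.arange(0, 201)

def possibility(d, low=40, mode=100, high=180):
    """Triangular possibility distribution of demand for the innovative product."""
    d = np.asarray(d, dtype=float)
    up = np.clip((d - low) / (mode - low), 0, 1)
    down = np.clip((high - d) / (high - mode), 0, 1)
    return np.minimum(up, down)

def profit(q, d):
    sold = np.minimum(q, d)
    return price * sold + salvage * np.maximum(q - d, 0) - cost * q

p_min, p_max = profit(200, 0), profit(180, 180)          # crude bounds for scaling
satisfaction = lambda pi: (pi - p_min) / (p_max - p_min)  # normalize profit to [0, 1]

def active_focus_value(q):
    """Satisfaction at the demand scenario maximizing min(possibility, satisfaction)."""
    levels = np.minimum(possibility(demand_grid),
                        satisfaction(profit(q, demand_grid)))
    d_star = demand_grid[np.argmax(levels)]
    return satisfaction(profit(q, d_star))

orders = np.arange(40, 181, 5)
best_q = orders[np.argmax([active_focus_value(q) for q in orders])]
print("order quantity suggested by the active focus point:", int(best_q))
```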

11.
Statistics education is under review at all educational levels. Statistical concepts, as well as the use of statistical methods and techniques, can be taught in at least two contrasting ways. Specifically, (1) teaching can be theoretically and mathematically oriented, or (2) it can be less mathematically oriented and focused, instead, on application and the use of data to solve real-world problems. The second approach is growing in practice and new goals have recently emerged. At present, statistics courses stress probability concepts, data analysis, and the interpretation and communication of results. Understanding the process of statistical investigation is established as a way of improving mastery of statistical reasoning. In this context, a project-based approach allows the design and implementation of participatory learning scenarios in order to understand the statistical methodology and, as a consequence, improve research. This approach emphasizes that statistics is a rational methodology used to solve practical problems. The purpose of this paper is to present the design and results of an applied statistics course for PhD students in ecology and systematics using a project-based approach. Examples involving character coding, species classification, and the interpretation of geographical variation, which are the principal systematic analyses requiring statistical techniques, are presented using the results from student projects. In addition, an example from conservation ecology is presented. Results indicate that the students understood the concepts and applied the systematic and statistical techniques accurately using a data-oriented approach.

12.
Automated driving systems are rapidly developing. However, numerous open problems remain to be resolved before this technology can progress to widespread adoption. A large subset of these problems are, or can be framed as, statistical decision problems. Therefore, we present herein several important statistical challenges that emerge when designing and operating automated driving systems. In particular, we focus on those that relate to request-to-intervene decisions, ethical decision support, operations in heterogeneous traffic, and algorithmic robustification. For each of these problems, earlier solution approaches are reviewed and alternative solutions are provided with accompanying empirical testing. We also highlight open avenues of inquiry in which applied statistical investigation can help ensure the maturation of automated driving systems. In so doing, we showcase the relevance of statistical research and practice within the context of this revolutionary technology.

13.
In this paper we study optimization problems with multivariate stochastic dominance constraints where the underlying functions are not necessarily linear. These problems are important in multicriterion decision making, since each component of the vectors can be interpreted as the uncertain outcome of a given criterion. We propose a penalization scheme for the multivariate second-order stochastic dominance constraints. We solve the penalized problem by level function methods and a modified cutting plane method, and compare them with the cutting surface method proposed in the literature. The proposed numerical schemes are applied to a generic budget allocation problem and a real-world portfolio optimization problem.
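A sketch of the kind of penalty term used to relax second-order stochastic dominance (SSD) constraints: for each component and each threshold η, the expected shortfall E[(η − G(x))₊] must not exceed the benchmark's E[(η − Y)₊], and violations are penalized. The scenario data and penalty weight are made up, and the paper's level function and cutting plane solution methods are not shown.

```python
import numpy as np

rng = np.random.default_rng(2)
S, k = 500, 2                                  # scenarios, criteria (vector dimension)
Y = rng.normal([0.05, 0.03], [0.10, 0.06], size=(S, k))      # benchmark outcomes

def shortfall(Z, eta):
    """E[(eta - Z)_+] estimated over scenarios, one value per component."""
    return np.maximum(eta - Z, 0.0).mean(axis=0)

def ssd_penalty(G, Y, rho=100.0):
    """Penalize violations of componentwise second-order stochastic dominance of G over Y."""
    etas = np.unique(Y, axis=0)                # benchmark realizations as thresholds
    viol = 0.0
    for eta in etas:
        viol += np.sum(np.maximum(shortfall(G, eta) - shortfall(Y, eta), 0.0))
    return rho * viol

# candidate decision: outcomes G(x) under the same scenarios (hypothetical)
G = rng.normal([0.06, 0.02], [0.12, 0.05], size=(S, k))
objective = -G.mean(axis=0).sum() + ssd_penalty(G, Y)
print("penalized objective value:", round(float(objective), 4))
```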

14.
Agent interaction about reputation has to deal with semantic interoperability issues, which can be handled by different approaches using different levels of expressiveness. Previous experiments have already been conducted in order to investigate the effects of a more expressive communication language on agents’ reputation evaluation accuracy, but their analyses disregard the possible correlations among reputation models’ attributes. Here, we propose the use of a multivariate statistical approach in order to take such correlations into account and to encourage the social simulation community to analyze its experimental outputs using formal mathematical approaches. We also applied the presented approach to the experimental results previously analyzed using a univariate statistical approach. Our analysis corroborates the latter, showing that, in most cases, there is a benefit in using a more expressive communication language.
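A minimal sketch of the kind of multivariate test such an analysis could use: a one-way MANOVA that treats several (possibly correlated) reputation-accuracy measures jointly rather than running separate univariate tests. The column names and simulated data are purely illustrative, not the experiment's actual outputs.

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(3)
n = 60
language = np.repeat(["simple", "expressive"], n // 2)
shift = np.where(language == "expressive", 0.15, 0.0)      # assumed treatment effect
df = pd.DataFrame({
    "language": language,
    "accuracy_direct": rng.normal(0.70 + shift, 0.08, n),  # accuracy from direct experience
    "accuracy_witness": rng.normal(0.65 + shift, 0.08, n), # accuracy from witness reports
})

# joint test of the language effect on both correlated accuracy measures
manova = MANOVA.from_formula("accuracy_direct + accuracy_witness ~ language", data=df)
print(manova.mv_test())          # Wilks' lambda, Pillai's trace, etc.
```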

15.
Systemic decision making is a new approach for dealing with complex multiactor decision making problems in which the actors’ individual preferences on a fixed set of alternatives are incorporated in a holistic view in accordance with the “principle of tolerance”. The new approach integrates all the preferences, even if they are encapsulated in different individual theoretical models or approaches; the only requirement is that they must be expressed as some kind of probability distribution. In this paper, assuming the analytic hierarchy process (AHP) is the multicriteria technique employed to rank alternatives, the authors present a new methodology based on a Bayesian analysis for dealing with AHP systemic decision making in a local context (a single criterion). The approach integrates the individual visions of reality into a collective one by means of a tolerance distribution, which is defined as the weighted geometric mean of the individual preferences expressed as probability distributions. A mathematical justification of this distribution, a study of its statistical properties and a Monte Carlo method for drawing samples are also provided. The paper further presents a number of decisional tools for the evaluation of the acceptance of the tolerance distribution, the construction of tolerance paths that increase representativeness and the extraction of the relevant knowledge of the subjacent multiactor decisional process from a cognitive perspective. Finally, the proposed methodology is applied to the AHP-multiplicative model with lognormal errors and a case study related to a real-life experience in local participatory budgets for the Zaragoza City Council (Spain).
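A rough Monte Carlo illustration of the aggregation idea described above: each actor's preferences on the alternatives are expressed as a probability distribution, and a collective ("tolerance") distribution is formed with a weighted geometric mean. The individual distributions, actor weights and the lognormal parameterization are assumptions, not the paper's Bayesian estimation procedure.

```python
import numpy as np

rng = np.random.default_rng(4)
n_alt, n_draws = 3, 20_000
# each actor: log-priorities ~ Normal(mu, sigma) per alternative (AHP-multiplicative flavour)
actors = [
    {"mu": np.log([0.5, 0.3, 0.2]), "sigma": 0.20, "weight": 0.5},
    {"mu": np.log([0.2, 0.5, 0.3]), "sigma": 0.25, "weight": 0.3},
    {"mu": np.log([0.4, 0.4, 0.2]), "sigma": 0.30, "weight": 0.2},
]

samples = np.zeros((n_draws, n_alt))
for a in actors:
    log_w = rng.normal(a["mu"], a["sigma"], size=(n_draws, n_alt))
    samples += a["weight"] * log_w          # weighted geometric mean taken in log space

priorities = np.exp(samples)
priorities /= priorities.sum(axis=1, keepdims=True)   # renormalize each draw
print("mean collective priorities:", np.round(priorities.mean(axis=0), 3))
print("P(alternative 1 ranks first):",
      round(float(np.mean(priorities.argmax(axis=1) == 0)), 3))
```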

16.
Compositional data have very complex mathematical properties, and many traditional statistical analysis methods fail when applied to them; special treatment and dedicated techniques must therefore be adopted in research. This paper focuses on methods for computing correlation coefficients for compositional data. Because the ordinary correlation coefficient applies only to two sets of univariate data, and traditional canonical correlation analysis cannot be used directly owing to the special properties of compositional data, a method for computing correlation coefficients for compositional data is proposed by combining the logratio transformation with canonical correlation analysis, which successfully solves this problem.
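A hedged sketch of the route described in the abstract: transform each compositional data set with a logratio mapping (the additive logratio, alr, is used here as one possible choice) and then apply canonical correlation analysis to obtain a correlation measure between the two compositions. The simulated compositions are illustrative.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(5)

def alr(comp):
    """Additive logratio transform: log of each part relative to the last part."""
    return np.log(comp[:, :-1] / comp[:, -1:])

def random_composition(n, parts, shape_shift):
    raw = rng.gamma(shape=2.0 + shape_shift, scale=1.0, size=(n, parts))
    return raw / raw.sum(axis=1, keepdims=True)   # rows sum to 1, all parts positive

X = random_composition(100, 4, 0.0)               # first compositional data set
Y = 0.6 * X[:, :3] + 0.4 * random_composition(100, 3, 1.0)
Y = Y / Y.sum(axis=1, keepdims=True)              # second composition, related to X

cca = CCA(n_components=1)
Xc, Yc = cca.fit_transform(alr(X), alr(Y))
rho = np.corrcoef(Xc[:, 0], Yc[:, 0])[0, 1]
print("first canonical correlation between the two compositions:", round(float(rho), 3))
```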

17.
Classification models can be developed by statistical or mathematical programming discriminant analysis techniques. Variable selection extensions of these techniques allow the development of classification models with a limited number of variables. Although stepwise statistical variable selection methods are widely used, the performance of the resultant classification models may not be optimal because of the stepwise selection protocol and the nature of the group separation criterion. A mixed integer programming approach for selecting variables for maximum classification accuracy is developed in this paper and the performance of this approach, measured by the leave-one-out hit rate, is compared with the published results from a statistical approach in which all possible variable subsets were considered. Although this mixed integer programming approach can only be applied to problems with a relatively small number of observations, it may be of great value where classification decisions must be based on a limited number of observations.
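One classical big-M mixed integer programming formulation in the spirit of the abstract: binary variables flag misclassified observations, binary variables flag selected predictors, and a cardinality constraint limits the number of variables used. The data, big-M values and the cap of two variables are illustrative, and the paper's exact formulation may differ (for instance, a normalization constraint on the discriminant weights is often added).

```python
import numpy as np
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, PULP_CBC_CMD

rng = np.random.default_rng(6)
n_per, p = 15, 4
X1 = rng.normal([1.0, 1.0, 0.0, 0.0], 1.0, size=(n_per, p))    # group 1 observations
X2 = rng.normal([-1.0, -1.0, 0.0, 0.0], 1.0, size=(n_per, p))  # group 2 observations
M, M_w, eps, max_vars = 100.0, 10.0, 0.1, 2

prob = LpProblem("mip_variable_selection", LpMinimize)
w = [LpVariable(f"w{j}", -M_w, M_w) for j in range(p)]          # discriminant weights
c = LpVariable("cutoff", -M, M)
z = [LpVariable(f"mis{i}", cat=LpBinary) for i in range(2 * n_per)]  # misclassification flags
s = [LpVariable(f"sel{j}", cat=LpBinary) for j in range(p)]          # variable-selection flags

prob += lpSum(z)                                   # minimize misclassifications
for i, x in enumerate(X1):                         # group 1 should score above the cutoff
    prob += lpSum(float(x[j]) * w[j] for j in range(p)) >= c + eps - M * z[i]
for i, x in enumerate(X2):                         # group 2 should score below the cutoff
    prob += lpSum(float(x[j]) * w[j] for j in range(p)) <= c - eps + M * z[n_per + i]
for j in range(p):                                 # a weight may be non-zero only if selected
    prob += w[j] <= M_w * s[j]
    prob += -w[j] <= M_w * s[j]
prob += lpSum(s) <= max_vars                       # cardinality constraint on variables

prob.solve(PULP_CBC_CMD(msg=False))
print("selected variables:", [j for j in range(p) if s[j].value() > 0.5])
print("training misclassifications:", int(sum(v.value() for v in z)))
```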

18.
QUALIFLEX is a very useful ranking algorithm for handling multi-attribute decision-making problems. For multi-attribute decision-making problems in which the attribute values are simplified neutrosophic hesitant fuzzy sets (SNHFSs), an SNHFS-QUALIFLEX algorithm is proposed. In addition, to handle the case of uncertain attribute weights, LINMAP is extended to simplified neutrosophic hesitant fuzzy sets: a signed distance is defined and an optimal mathematical programming model is established to determine the attribute weights. Finally, the SNHFS-QUALIFLEX method is applied to a multi-attribute decision-making example, and its feasibility and effectiveness are verified.
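A sketch of the classical (crisp) QUALIFLEX ranking procedure that this work extends to simplified neutrosophic hesitant fuzzy information: every permutation of the alternatives is scored by a weighted concordance index and the permutation with the highest score gives the final ranking. The decision matrix and weights are hypothetical, and the signed-distance comparison of SNHFS values is not reproduced here.

```python
import itertools
import numpy as np

scores = np.array([[7.0, 5.0, 8.0],     # rows: alternatives, columns: criteria
                   [6.0, 8.0, 6.0],
                   [8.0, 6.0, 5.0]])
weights = np.array([0.4, 0.35, 0.25])
alts = range(scores.shape[0])

def concordance(perm):
    """Weighted concordance of a candidate ranking (higher is better)."""
    total = 0.0
    for pos_a, a in enumerate(perm):
        for b in perm[pos_a + 1:]:              # a is ranked ahead of b in this permutation
            total += float(weights @ np.sign(scores[a] - scores[b]))
    return total

best = max(itertools.permutations(alts), key=concordance)
print("QUALIFLEX ranking (best to worst):", [f"A{i + 1}" for i in best])
```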

19.
The concept of statistical decision theory concerning sequential observations is generalized to decision problems that are based upon a continuous stochastic process.

In this model decision functions are introduced, consisting of a stopping time and a terminal decision rule. A method of discretization shows the connections between the discrete sequential and the continuous model. Concerning Bayes problems, we find that, under certain assumptions, the decision problem can be viewed as an optimal stopping problem with a continuous time parameter.
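A small illustration of the discretization idea: approximate a continuous-time stopping problem for Brownian motion by a symmetric binomial random walk and solve the discrete problem by backward induction, V_t(x) = max(g(x), E[V_{t+1} | x]). The stopping reward, horizon and step sizes are chosen only for illustration and are not taken from the paper.

```python
import numpy as np

T, n_steps = 1.0, 200                     # horizon and number of discretization steps
dt = T / n_steps
dx = np.sqrt(dt)                          # symmetric +/- dx steps approximate Brownian motion
g = lambda x: np.maximum(1.0 - x, 0.0)    # stopping reward (hypothetical)

x0 = 1.0
grid = x0 + dx * np.arange(-n_steps, n_steps + 1)   # states reachable from x0

value = g(grid)                                     # at the horizon we must stop
for _ in range(n_steps):
    cont = 0.5 * (np.roll(value, -1) + np.roll(value, 1))   # E[V_{t+1} | x]
    cont[0], cont[-1] = value[0], value[-1]                  # crude boundary handling
    value = np.maximum(g(grid), cont)                        # stop now or continue

centre = n_steps                                    # index of x0 in the grid
print("value of the stopping problem at x0:", round(float(value[centre]), 4))
print("immediate reward at x0:", float(g(x0)))
```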

20.
Artificial Neural Network (ANN) techniques have recently been applied to many different fields and have demonstrated their capabilities in solving complex problems. In a business environment, the techniques have been applied to predict bond ratings and stock price performance. In these applications, ANN techniques outperformed widely used multivariate statistical techniques. The purpose of this paper is to compare the ANN method with the Discriminant Analysis (DA) method in order to understand the merits of ANN that are responsible for its higher level of performance. The paper provides an overview of the basic concepts of ANN techniques in order to enhance the understanding of this emerging technique. The similarities and differences between ANN and DA techniques in representing their models are described. This study also proposes a method to overcome the limitations of the ANN approach. Finally, a case study using a data set from a business environment demonstrates the superiority of ANN over DA as a method of classification of observations.
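A hedged sketch of the comparison the paper makes: a small feed-forward neural network (ANN) versus linear discriminant analysis (DA) on the same classification task, with cross-validated accuracy as the yardstick. The synthetic data set stands in for the business data set used in the case study.

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# synthetic two-class problem standing in for the business data set
X, y = make_classification(n_samples=400, n_features=8, n_informative=5,
                           n_classes=2, random_state=0)

models = {
    "DA (linear discriminant)": LinearDiscriminantAnalysis(),
    "ANN (multilayer perceptron)": make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: {acc:.3f}")
```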

