Similar Documents
20 similar documents found (search time: 93 ms)
1.
Existing methods for evaluating and analyzing inland-river shoreline resources are designed mainly for quantitative data, yet some factors affecting the value of shoreline resources are qualitative and fuzzy, and traditional methods are limited in handling such problems. To address this, this paper applies soft set theory, a recently developed theory for modeling uncertain problems, in a fuzzy environment, and combines it with a dependency-degree-based parameter-reduction method for fuzzy bijective soft sets to propose a KDD (knowledge discovery in databases) model for inland-river shoreline resource evaluation based on fuzzy bijective soft sets. A KDD analysis of 15 ports in the Chongqing port area shows that the model can extract decision rules, identify key factors and their dependency degrees, and yield decision recommendations of practical reference value, verifying the model's effectiveness.

2.
Starting from lattice-order structure theory, the set of trapezoidal fuzzy numbers forms a partial order under certain comparison rules and, under further conditions, a lattice. A mechanism is given for supplementing missing lattice elements in multi-attribute lattice-order decision making with trapezoidal fuzzy numbers. A method of ranking trapezoidal fuzzy numbers with lattice-order decision theory is studied and applied to an investor's choice among investment alternatives.
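Ranking trapezoidal fuzzy numbers requires a comparison rule. As a minimal sketch (using a standard centroid-based defuzzification key, not necessarily the lattice-order comparison rules of the paper; the alternatives `A`, `B`, `C` are hypothetical):

```python
# A trapezoidal fuzzy number (a, b, c, d): membership rises linearly on [a, b],
# equals 1 on [b, c], and falls linearly on [c, d].
def centroid(t):
    a, b, c, d = t
    # Standard centroid of a trapezoidal membership function,
    # used here as a defuzzification-based ranking key.
    num = (d**2 + c**2 + c * d) - (a**2 + b**2 + a * b)
    den = 3 * ((d + c) - (a + b))
    return num / den if den else (a + d) / 2

alternatives = {"A": (2, 3, 4, 6), "B": (1, 2, 5, 6), "C": (3, 4, 4, 5)}
ranked = sorted(alternatives, key=lambda k: centroid(alternatives[k]), reverse=True)
```

A lattice-order approach would instead compare componentwise and supplement missing join/meet elements; the centroid gives one simple total order.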

3.
Generalized trapezoidal fuzzy number decision-theoretic rough sets (cited 2 times: 0 self-citations, 2 by others)
Considering the uncertainty of loss functions in decision processes, and noting that generalized trapezoidal fuzzy numbers extend triangular fuzzy numbers, this paper starts from Bayesian theory and, building on triangular-fuzzy-number decision-theoretic rough sets, introduces generalized trapezoidal fuzzy numbers into three-way decision-theoretic rough sets, establishing generalized trapezoidal fuzzy number decision-theoretic rough sets and deriving their properties and rules. The model's application is then illustrated with an example from a collaborative knowledge-management project. Its advantages are that it extends discrete fuzzy sets to continuous ones and generalizes better than other fuzzy sets.
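The decision-theoretic rough set machinery referred to above derives acceptance/rejection thresholds from a Bayesian loss function. A minimal sketch with crisp loss values (the paper replaces these with generalized trapezoidal fuzzy numbers, which would be compared or defuzzified first; the numeric losses below are illustrative):

```python
# Losses lam[action+state] for actions P (accept), B (defer), N (reject),
# and states: the object is in X, or not in X.
lam = {"PP": 0, "BP": 2, "NP": 6,   # losses when the object is in X
       "PN": 5, "BN": 1, "NN": 0}   # losses when it is not in X

# Standard decision-theoretic rough set thresholds derived from the losses.
alpha = (lam["PN"] - lam["BN"]) / ((lam["PN"] - lam["BN"]) + (lam["BP"] - lam["PP"]))
beta = (lam["BN"] - lam["NN"]) / ((lam["BN"] - lam["NN"]) + (lam["NP"] - lam["BP"]))

def decide(p):
    """Three-way decision given p = Pr(X | description of the object)."""
    if p >= alpha:
        return "accept"
    if p <= beta:
        return "reject"
    return "defer"
```

With these losses, alpha = 2/3 and beta = 0.2, so intermediate probabilities fall into the deferment (boundary) region.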

4.
Soft set theory is an emerging mathematical theory for handling uncertain problems. As one of its important application areas, research on parameter reduction of soft sets has mostly assumed complete information systems. This paper introduces improved operations on exclusive-or (XOR) soft sets and XOR soft set decision systems, and proposes a reduction method for incomplete information systems based on XOR soft sets, comparing it with the topological method. The results show that the reduced soft set produced by the algorithm is a subset of the reduced soft set produced by the topological method, i.e., the algorithm characterizes the data more finely and comprehensively.

5.
This paper describes a PROMETHEE multi-criteria decision method based on the normal cloud model. Cloud model theory, its parameter characteristics (expectation, entropy, hyper-entropy), cloud arithmetic, and cloud comparison rules are incorporated into the PROMETHEE solution process: the parameters are converted into clouds, yielding decision data in the form of cloud drops that capture the fuzziness, volatility, and randomness of the decision. A worked example classifies coal-demand customers and identifies key customers while accounting for a port enterprise's preferences, verifying the effectiveness and advantages of the cloud-PROMETHEE method.
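The cloud drops mentioned above come from the standard forward normal cloud generator, which turns the three parameters (Ex, En, He) into random samples with memberships. A minimal sketch of that generator only; its coupling into the PROMETHEE preference functions is omitted:

```python
import math
import random

def normal_cloud_drops(Ex, En, He, n=1000, seed=0):
    """Forward normal cloud generator: n drops (x, membership)."""
    rng = random.Random(seed)
    drops = []
    for _ in range(n):
        En_ = rng.gauss(En, He)            # per-drop entropy, spread by hyper-entropy He
        x = rng.gauss(Ex, abs(En_))        # drop position around the expectation Ex
        mu = math.exp(-(x - Ex) ** 2 / (2 * En_**2)) if En_ else 1.0
        drops.append((x, mu))
    return drops

drops = normal_cloud_drops(Ex=5.0, En=1.0, He=0.1)
```

The hyper-entropy He makes each drop's own entropy random, which is what gives cloud data its characteristic fuzziness-plus-randomness.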

6.
Aviation spare-parts information is incomplete and mixes quantitative and qualitative data. Exploiting the strength of rough set theory in handling imprecise and incomplete information, this paper proposes a rough set method for determining aviation-material stock items. An attribute-dependency function is used to reduce the mixed-attribute data set, avoiding discretization, and decision rules for stock items are finally obtained. The results agree with those computed from the discernibility matrix, verifying the model's correctness and providing a computationally simple, adaptable method for determining spare-part stock items.
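The attribute-dependency function used for reduction above is a standard rough set quantity: the fraction of objects whose condition-attribute class falls entirely inside one decision class. A minimal sketch on a hypothetical toy table:

```python
def partitions(table, attrs):
    """Group objects into indiscernibility classes by their values on attrs."""
    blocks = {}
    for obj, row in table.items():
        blocks.setdefault(tuple(row[a] for a in attrs), set()).add(obj)
    return list(blocks.values())

def dependency(table, cond, dec):
    """gamma(cond -> dec): fraction of objects classified unambiguously."""
    dec_blocks = partitions(table, dec)
    pos = sum(len(b) for b in partitions(table, cond)
              if any(b <= d for d in dec_blocks))
    return pos / len(table)

# Hypothetical table: condition attributes a, b; decision attribute d.
table = {
    1: {"a": 0, "b": 0, "d": 0},
    2: {"a": 0, "b": 1, "d": 1},
    3: {"a": 1, "b": 1, "d": 1},
    4: {"a": 1, "b": 0, "d": 1},
}
```

Here {a, b} fully determines d (dependency 1.0) while a alone does not (0.5), which is the kind of comparison a dependency-based reduction makes when deciding whether an attribute can be dropped.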

7.
Motivated by the needs of computer software programming, a dynamic programming method for inventory management is discussed. Uniformly and normatively expressed sets of admissible states and admissible decisions are derived, and a computational flowchart is given, providing a basis for computer treatment of similar problems.

8.
Set-theoretic thinking is a basic mathematical idea. The symbolic language of set theory carries mathematical concision and precision to the extreme and reflects the high degree of formalization and unification of modern mathematics. Translating set-theoretic symbolic language among different mathematical forms is an important method for understanding and solving set problems.

9.
By establishing equivalent conditions for pairwise stable networks under the JW (Jackson-Wolinsky) rule, a complete algorithm is given. After side payments are introduced, it is proved that in the link-addition case the set of pairwise stable networks with side payments equals the intersection of the set of pairwise stable networks and the set of pairwise stable networks with side payments. Two specific network models are examined and their pairwise stability analyzed systematically.

10.
The analytic hierarchy process (AHP) studies decision problems through group decision making using experts' qualitative judgment matrices. When extreme expert opinions appear, the group decision result is biased. The proposed extreme-opinion exclusion model improves AHP by removing extreme expert opinions: using the distance between each expert's judgment matrix and the result obtained by AHP, it gives a rule for excluding extreme opinions. The judgment matrices of the remaining experts are then used to compute a new decision result that is more representative and scientific and better reflects the overall opinion, offering reference value and practical applicability for group decision making with AHP.
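The exclusion step described above can be sketched as follows. This is an illustrative scheme, assuming row-geometric-mean priority vectors, a Euclidean distance to the group mean, and an arbitrary cut-off of 0.4; the paper's actual distance and rule may differ:

```python
import math

def priorities(M):
    """Approximate AHP priority vector of a judgment matrix via row geometric means."""
    g = [math.prod(row) ** (1 / len(row)) for row in M]
    s = sum(g)
    return [x / s for x in g]

def group_result(matrices, cut=0.4):
    """Drop experts whose priority vector lies farther than `cut`
    (illustrative threshold) from the group mean, then re-average."""
    ws = [priorities(M) for M in matrices]
    n = len(ws[0])
    mean = [sum(w[i] for w in ws) / len(ws) for i in range(n)]
    kept = [w for w in ws if math.dist(w, mean) <= cut]
    return [sum(w[i] for w in kept) / len(kept) for i in range(n)]

experts = [
    [[1, 2], [1 / 2, 1]],
    [[1, 3], [1 / 3, 1]],
    [[1, 1 / 9], [9, 1]],   # extreme opinion, far from the other two
]
result = group_result(experts)
```

The third expert's inverted preference is excluded, and the recomputed result averages only the two consistent opinions.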

11.
The Isbell desirability relation (I), the Shapley–Shubik index (SS) and the Banzhaf–Coleman index (BC) are power theories that grasp the notion of individual influence in a yes–no voting rule. A yes–no voting rule is also often used as a tool for aggregating individual preferences over any given finite set of alternatives into a collective preference. In this second context, Diffo Lambo and Moulen (DM) introduced a power relation which ranks the voters with respect to how ably they influence the collective preference. However, DM relies on a metric d that measures closeness between preference relations. Our concern in this work is: do I, SS, BC and DM agree when the same yes–no voting rule is the basis for collective decision making? We provide a concrete and intuitive class of metrics called locally generated (LG). We give a characterization of the LG metrics d for which I, SS, BC and DM agree on ranking the voters.

12.
Utility or value functions play an important role as preference models in multiple-criteria decision making. We investigate the relationships between these models and the decision-rule preference model obtained from the Dominance-based Rough Set Approach. The relationships are established by means of special “cancellation properties” used in conjoint measurement as axioms for the representation of aggregation procedures. We consider a general utility function and three of its important special cases: the associative operator, the Sugeno integral and the ordered weighted maximum. For each of these aggregation functions we give a representation theorem establishing the equivalence between a very weak cancellation property, the specific utility function and a set of rough-set decision rules. Each result is illustrated by a simple example of multiple-criteria decision making. The results show that the decision-rule model we propose has clear advantages over a general utility function and its particular cases.

13.
Rough set theory is a data mining approach for managing vagueness, capable of discovering important facts hidden in data. The literature indicates that current rough-set-based approaches cannot guarantee that the classification of a decision table is credible, nor can they generate robust decision rules when new attributes are incrementally added. In this study, an incremental attribute-oriented rule-extraction algorithm is proposed to remedy this deficiency, which is commonly observed in the literature on decision rule induction. The proposed approach handles incremental attributes based on the alternative rule extraction algorithm (AREA), which was presented for discovering preference-based rules according to the reducts with the maximum strength index (SI), specifically for the case where the desired reducts are not necessarily unique, since several reducts can share the same value of SI. Under AREA, an alternative rule is a rule that holds a preference identical to the original decision rule and may be more attractive to a decision maker than the original one. The proposed approach operates effectively as new attributes are added to the database/information system: it does not need to re-compute the updated data set from scratch as at the initial stage. The algorithm also excludes repetitive rules during the solution search stage, whereas most rule induction approaches generate them. It can thus efficiently and effectively generate complete, robust and non-repetitive decision rules. The rules derived from the data set indicate how to study this problem effectively in further investigations.

14.
In this paper, we propose some decision logic languages for rule representation in rough set-based multicriteria analysis. The semantic models of these logics are data tables, each of which is comprised of a finite set of objects described by a finite set of criteria/attributes. The domains of the criteria may have ordinal properties expressing preference scales, while the domains of the attributes may not. The validity, support, and confidence of a rule are defined via its satisfaction in the data table.

15.
We study rule induction from two decision tables as a basis for rough set analysis of more than one decision table. We regard the rule induction process as enumerating minimal conditions satisfied by positive examples but not by negative examples and/or negative decision rules. From this point of view, we show that seven kinds of rule induction are conceivable for a single decision table. We point out that the set of all decision rules from two decision tables can be split into two levels: a first-level decision rule is positively supported by one decision table and has no conflict with the other, while a second-level decision rule is positively supported by both decision tables. For each level, we propose rule induction methods based on decision matrices. Through the discussion, we demonstrate that many kinds of rule induction are conceivable.

16.
In this paper, a variable-precision dominance-based rough set approach (VP-DRSA) is proposed together with several VP-DRSA-based approaches to attribute reduction. The properties of VP-DRSA are shown in comparison to previous dominance-based rough set approaches. An advantage of VP-DRSA over the variable-consistency dominance-based rough set approach in decision rule induction is emphasized. Some relations among the VP-DRSA-based attribute reduction approaches are investigated.

17.
A location problem with future uncertainties about the data is considered. Several possible scenarios about the future values of the parameters are postulated. However, it is not clear which of these scenarios will actually happen. We find the location that will best accommodate the possible scenarios. Four rules utilized in decision theory are examined: the expected value rule, the optimistic rule, the pessimistic rule, and the minimax regret rule. The solution for the squared Euclidean distance is explicitly found. Algorithms are suggested for general convex distance metrics. An example problem is solved in detail to illustrate the findings, and computational experiments with randomly generated problems are reported.
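The four decision rules named above can be sketched over a scenario cost table. The costs and scenario probabilities below are illustrative, not data from the paper:

```python
# cost[location][scenario]: hypothetical costs for 3 candidate locations
# under 3 scenarios; probs holds scenario probabilities for the expected-value rule.
cost = {"L1": [10, 40, 20], "L2": [24, 25, 26], "L3": [15, 50, 10]}
probs = [0.5, 0.3, 0.2]

# Expected value rule: minimize probability-weighted cost.
expected = min(cost, key=lambda l: sum(p * c for p, c in zip(probs, cost[l])))
# Optimistic rule: best best-case.
optimistic = min(cost, key=lambda l: min(cost[l]))
# Pessimistic rule: best worst-case.
pessimistic = min(cost, key=lambda l: max(cost[l]))
# Minimax regret: minimize worst shortfall from each scenario's optimum.
best = [min(cost[l][s] for l in cost) for s in range(3)]
minimax_regret = min(cost, key=lambda l: max(c - b for c, b in zip(cost[l], best)))
```

Note the rules can disagree: here the pessimistic rule favors the flat-cost location L2 while the others favor L1.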

18.
Yang Hongmei. 《运筹与管理》 (Operations Research and Management Science), 2013, 22(3): 194-200
To address the algorithmic complexity of extracting fuzzy rules for China's economic growth with rough sets and fuzzy clustering, set pair analysis is applied to extract fuzzy economic-growth rules for China's 31 provinces and municipalities. The results show that the algorithm is not only simple but also extracts a macro-level growth rule, namely that fixed-asset investment pulls GDP more strongly than human resources do, while also revealing each province's growth rules at the micro level, providing a decision reference for implementing China's 12th Five-Year economic development plan.

19.
We present an improved Bernstein global optimization algorithm to solve polynomial mixed-integer nonlinear programming (MINLP) problems. The algorithm is of branch-and-bound type and uses the Bernstein form of the polynomials for the global optimization. The new ingredients in the algorithm include a modified subdivision procedure, a vectorized Bernstein cut-off test and a new branching rule for the decision variables. The performance of the improved algorithm is tested and compared with an earlier reported Bernstein global optimization algorithm (for solving polynomial MINLPs) and with several state-of-the-art MINLP solvers on a set of 19 test problems. The results show the superiority of the improved algorithm over the earlier reported Bernstein algorithm and the state-of-the-art solvers in terms of the chosen performance metrics. The efficacy of the improved algorithm in handling a real-world MINLP problem is likewise demonstrated via a trim-loss minimization problem from the process industry.

20.
Many rule systems generated from decision trees (such as CART, ID3 or C4.5) or from direct frequency-counting methods (such as Apriori) are non-significant or even contradictory. Nevertheless, most papers on this subject demonstrate that rule sets can be reduced substantially by searching for and removing redundancies and conflicts and by simplifying the similarities between rules. The objective of this paper is to present an algorithm (RBS: Reduction Based on Significance) that allocates a significance value to each rule in the system, so that experts may select the rules that should be considered preferable and understand the exact degree of correlation between the different rule attributes. Significance is calculated from the antecedent-frequency and rule-frequency parameters of each rule: if the former is above a minimal level and the rule frequency lies in a critical interval, the algorithm computes the rule's significance ratio. The critical boundaries are calculated by an incremental method, the rule space is divided according to them, and the significance function is defined on these intervals. As with other rule-reduction methods, our approach can be applied to rule sets generated from decision trees or frequency-counting algorithms, independently and after the rule set has been created. Three simulated data sets are used in a computational experiment; standard data sets from the UCI Machine Learning Repository and two data sets with expert interpretation are also used for greater consistency. The proposed method yields a smaller and more easily understandable rule set than the originals, and highlights the most significant attribute correlations, quantifying their influence on the consequent attribute.
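The frequency-based screening described above can be sketched as follows. This is only a hedged illustration: the paper's actual significance function and incrementally computed critical boundaries are not reproduced, and the confidence-style ratio, thresholds and rules `r1`-`r3` below are assumptions:

```python
def screen(rules, min_antecedent=0.05, critical=(0.02, 0.5)):
    """Keep rules whose antecedent frequency clears a minimal level and whose
    rule frequency lies in a critical interval; score kept rules with an
    illustrative significance ratio rule_freq / antecedent_freq."""
    lo, hi = critical
    kept = {}
    for name, (ant_freq, rule_freq) in rules.items():
        if ant_freq >= min_antecedent and lo <= rule_freq <= hi:
            kept[name] = rule_freq / ant_freq
    return kept

# Hypothetical rules as (antecedent frequency, rule frequency) pairs.
rules = {"r1": (0.20, 0.15), "r2": (0.01, 0.008), "r3": (0.30, 0.01)}
sig = screen(rules)
```

Here r2 fails the minimal antecedent level and r3 falls below the critical interval, so only r1 receives a significance ratio.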


Copyright © Beijing Qinyun Science and Technology Development Co., Ltd.  京ICP备09084417号