Similar Documents
 20 similar documents retrieved.
1.
A seventh-degree rule of the non-product type has been constructed for the numerical evaluation of double integrals of an analytic function of two complex variables, by choosing a set of 17 points from the set of 25 points needed in the fifth-degree product Birkhoff-Young rule. An asymptotic error estimate for this rule has been determined, and the rule has been tested numerically.

2.
A generalized N-point Birkhoff-Young quadrature of interpolatory type, with the Chebyshev weight, for numerical integration of analytic functions is considered. The nodes of such a quadrature are characterized by an orthogonality relation. Some special cases of this quadrature formula are derived.
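The nodes of the generalized quadrature above are characterized only implicitly by the orthogonality relation, so they are not reproduced here. As a hedged point of reference, the classical N-point Gauss-Chebyshev rule for the same Chebyshev weight, which such formulas generalize, can be sketched as follows (a standard textbook rule, not the paper's construction):

```python
import numpy as np

def gauss_chebyshev(f, n):
    """Classical n-point Gauss-Chebyshev rule for the integral of
    f(x) / sqrt(1 - x**2) over [-1, 1]: nodes cos((2k - 1) * pi / (2n)),
    all weights equal to pi / n."""
    k = np.arange(1, n + 1)
    nodes = np.cos((2 * k - 1) * np.pi / (2 * n))
    return (np.pi / n) * np.sum(f(nodes))

# For an analytic integrand such as exp(x) the error decays very rapidly with n.
approx = gauss_chebyshev(np.exp, 8)
```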

3.
Interpretability is one of the key concepts in many applications of the fuzzy rule-based approach. It is well known that there are many different criteria around this concept, complexity being one of them. In this paper, we focus our efforts on reducing the complexity of fuzzy rule sets. One of the most interesting approaches for learning fuzzy rules is the iterative rule learning approach. It is mainly characterized by obtaining, in its final stages, rules that cover few examples and are in most cases useless for representing the knowledge. This behavior is due to the specificity of the extracted rules, which eventually creates a more complex set of rules. Thus, we propose a modified version of the iterative rule learning algorithm in order to extract simple rules by relaxing this natural trend. The main idea is to change the rule extraction process so as to obtain more general rules, using pruned search spaces together with a knowledge simplification scheme able to replace learned rules. The experimental results prove that this purpose is achieved: the new proposal reduces complexity at both the rule and rule-base levels while maintaining the accuracy of previous versions of the algorithm.
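The iterative rule learning scheme referred to above follows the general sequential-covering pattern sketched below; this is a generic, hedged illustration rather than the authors' modified algorithm, and `learn_best_rule`, `covers`, and `min_coverage` are hypothetical stand-ins for the induction step, the matching test, and the stopping threshold.

```python
def iterative_rule_learning(examples, learn_best_rule, covers, min_coverage=1):
    """Generic sequential-covering loop: learn one rule, remove the examples
    it covers, and repeat.  Rules learned in the final iterations tend to
    cover very few examples, which is the over-specificity that the modified
    version described above tries to relax."""
    rules = []
    remaining = list(examples)
    while remaining:
        rule = learn_best_rule(remaining)            # hypothetical induction step
        covered = [e for e in remaining if covers(rule, e)]
        if len(covered) < min_coverage:              # rule too specific: stop
            break
        rules.append(rule)
        remaining = [e for e in remaining if not covers(rule, e)]
    return rules
```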

4.
The paper presents a discussion of evaluation methods in decision analysis. The presentation begins with the expected value rule for selection among a number of available courses of action. A number of other evaluation rules that either replace or supplement the expected value are then presented. They are discussed from a choice rather than a preference point of view. To improve the expected value rule (or any other similar rule), it is suggested that it be supplemented with other, qualitative rules rather than engaging in further modifications in pursuit of the perfect rule. A characteristic of qualitative rules is that they do not rely on multiplying probabilities and values but treat them as separate numeric entities. Once a rule has been agreed upon, it can be applied to all the alternatives, provided there is a computational procedure for evaluating the alternatives under that rule. Delta dominance is introduced as a unifying concept for many of the dominance rules in current use. Dominance and threshold methods are discussed and the kinship between them is pointed out.
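A hedged worked example of the expected value rule that the discussion starts from; the alternatives, probabilities, and values below are invented purely for illustration.

```python
# Expected value rule: EV(a) = sum_i p_i * v_i(a); choose the alternative with the largest EV.
alternatives = {
    "A": [(0.6, 100), (0.4, -20)],   # (probability, value) pairs, illustrative only
    "B": [(0.9, 40), (0.1, 10)],
}

expected_values = {
    name: sum(p * v for p, v in outcomes) for name, outcomes in alternatives.items()
}
best = max(expected_values, key=expected_values.get)
# EV(A) = 0.6*100 + 0.4*(-20) = 52 and EV(B) = 0.9*40 + 0.1*10 = 37, so "A" is chosen.
# A qualitative rule, by contrast, keeps the probabilities and values as separate
# entities instead of collapsing them into these products.
```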

5.
A learning process for fuzzy control rules using genetic algorithms
The purpose of this paper is to present a genetic learning process for learning fuzzy control rules from examples. It is developed in three stages: the first is a fuzzy rule genetic generating process based on an iterative rule learning approach; the second combines two kinds of rules, expert rules (if available) and the previously generated fuzzy control rules, removing redundant fuzzy rules; and the third is a tuning process for adjusting the membership functions of the fuzzy rules. The three components of the learning process are developed by formulating suitable genetic algorithms.

6.
This work promotes a novel point of view in rough set applications: rough set rule learning for ordinal prediction based on a rough graphical representation of the rules. Our approach tackles two barriers in rule learning. Unlike typical rule learning, we construct ordinal predictions with a mathematical approach, rough sets, rather than with rule quality measures alone; this construction results in few but significant rules. Moreover, the rules are given in terms of ordinal predictions rather than unique values. This study also focuses on advancing rough set theory in favor of soft computing. Both the theoretical basis and a designed architecture are presented. The features of our proposed approach are illustrated using an experiment in survival analysis, with a case study performed on melanoma data. The results demonstrate that this system improves rule learning both in the computational performance of finding the rules and in the usefulness of the derived rules.
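For readers unfamiliar with the rough set machinery behind such rule learners, a minimal sketch of the standard lower and upper approximations is given below; this is the textbook construction, not the paper's ordinal-prediction architecture, and the toy decision table is invented.

```python
from collections import defaultdict

def approximations(objects, attributes, target_set):
    """Standard rough set approximations: objects with identical values on
    `attributes` are indiscernible; the lower approximation collects the
    indiscernibility classes fully inside target_set (certain rules), the
    upper approximation those merely intersecting it (possible rules)."""
    classes = defaultdict(set)
    for obj, values in objects.items():
        classes[tuple(values[a] for a in attributes)].add(obj)
    lower, upper = set(), set()
    for eq_class in classes.values():
        if eq_class <= target_set:
            lower |= eq_class
        if eq_class & target_set:
            upper |= eq_class
    return lower, upper

# Toy decision table (illustrative attribute and object ids).
objects = {1: {"stage": "I"}, 2: {"stage": "I"}, 3: {"stage": "II"}}
lower, upper = approximations(objects, ["stage"], target_set={1, 3})
# lower == {3}, upper == {1, 2, 3}
```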

7.
The evaluation of the performance of a design for complex discrete event systems through simulation is usually very time consuming, and optimizing the system performance becomes even more computationally infeasible. Ordinal optimization (OO) is a technique introduced to attack this difficulty in system design by looking at the “order” of performances among designs instead of their “value”, and by providing a probability guarantee of a good enough solution instead of the best for sure. The selection rule, that is, the rule that decides which subset of designs to select as the OO solution, is a key step in applying the OO method. Pairwise elimination and round robin comparison are two examples of selection rules, and many other selection rules are frequently used in the ordinal optimization literature. To compare selection rules, we first identify some general facts about them. We then use regression functions to quantify the efficiency of a group of selection rules, including some frequently used ones. A procedure to predict good selection rules is proposed and verified by simulation and by examples. Selection rules that work well most of the time are recommended.
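As a hedged illustration of what a selection rule is (not of the paper's regression-based efficiency comparison), the commonly cited "horse race" rule is sketched below: estimate every design with a few noisy replications, keep the observed best subset, and measure how many truly good designs were captured. The performance values, noise level, replication count, and subset size are all invented.

```python
import random

def horse_race_selection(true_perf, noise_sd, replications, subset_size):
    """Estimate each design with a few noisy replications and select the
    observed best subset_size designs (smaller performance value = better)."""
    observed = {}
    for design, mu in true_perf.items():
        samples = [mu + random.gauss(0.0, noise_sd) for _ in range(replications)]
        observed[design] = sum(samples) / replications
    return sorted(observed, key=observed.get)[:subset_size]

# Alignment check: how many of the truly best 10 designs did the rule capture?
true_perf = {i: float(i) for i in range(100)}     # design 0 is truly the best
selected = horse_race_selection(true_perf, noise_sd=20.0, replications=3, subset_size=10)
alignment = len(set(range(10)) & set(selected))   # OO asks that this be large with high probability
```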

8.
The rationality of the "40-to-16" selection-tournament rule is demonstrated, and the rule is generalized to a "general form" so that it can serve as a promotion rule for large-scale competitions. The positive significance of applying this promotion rule throughout the full schedule of large selection tournaments is pointed out. The "general form" of the rule provides an important reference for the design of future tournament rules.

9.
Rough set theory is a new data mining approach to managing vagueness. It is capable of discovering important facts hidden in data. The literature indicates that current rough set based approaches cannot guarantee that the classification of a decision table is credible, and that they are unable to generate robust decision rules when new attributes are incrementally added. In this study, an incremental attribute-oriented rule-extraction algorithm is proposed to address this deficiency, which is commonly observed in the literature on decision rule induction. The proposed approach handles incremental attributes based on the alternative rule extraction algorithm (AREA), which was presented for discovering preference-based rules according to the reducts with the maximum strength index (SI), specifically for the case where the desired reducts are not necessarily unique because several reducts may share the same SI value. Using AREA, an alternative rule is defined as a rule that holds a preference identical to the original decision rule and may be more attractive to a decision-maker than the original one. The proposed approach can operate effectively when new attributes are added to the database or information system; it is not necessary to recompute the updated data set from scratch as at the initial stage. The proposed algorithm also excludes repetitive rules during the solution search stage, since most rule induction approaches generate such repetitive rules. The proposed approach efficiently and effectively generates complete, robust, and non-repetitive decision rules. The rules derived from the data set indicate how to study this problem effectively in further investigations.

10.
We consider informational requirements of social choice rules satisfying anonymity, neutrality, monotonicity, and efficiency, and never choosing the Condorcet loser. Among such rules, we establish the existence of a rule operating on the minimal informational requirement. Depending on the number of agents and the number of alternatives, either the plurality rule or the plurality with a runoff is characterized. In some cases, the plurality rule is the most selective rule among the rules operating on the minimal informational requirement. In the other cases, each rule operating on the minimal informational requirement is a two-stage rule, and among them, the plurality with a runoff is the rule whose choice at the first stage is most selective. These results not only clarify properties of the plurality rule and the plurality with a runoff, but also explain why they are widely used in real societies.
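A hedged counting-only sketch of the two rules characterized above, ignoring tie-breaking subtleties; the candidate names and ballots are invented.

```python
from collections import Counter

def plurality(ballots):
    """Each ballot is a ranking (best first); plurality only needs the top choices."""
    return Counter(b[0] for b in ballots).most_common(1)[0][0]

def plurality_with_runoff(ballots):
    """Two-stage rule: keep the two candidates with the most first-place votes,
    then hold a pairwise majority runoff between them on the full rankings."""
    firsts = Counter(b[0] for b in ballots)
    (a, _), (b, _) = firsts.most_common(2)
    a_votes = sum(1 for r in ballots if r.index(a) < r.index(b))
    return a if a_votes * 2 > len(ballots) else b

ballots = 4 * [["x", "y", "z"]] + 3 * [["y", "z", "x"]] + 2 * [["z", "y", "x"]]
# plurality(ballots) == "x", while plurality_with_runoff(ballots) == "y"
# (y beats x 5-4 in the runoff), showing how the two stages can differ.
```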

11.
The amount of information required to make a social choice constitutes the cost of information processing, and it is a practically important feature of social choice rules. We introduce informational aspects into the analysis of social choice rules and prove that (i) if an anonymous, neutral, and monotonic social choice rule operates on minimal informational requirements, then it is a supercorrespondence of either the plurality rule or the antiplurality rule, and (ii) if the social choice rule is furthermore Pareto efficient, then it is a supercorrespondence of the plurality rule.

12.
Classification and rule induction are two important tasks for extracting knowledge from data. In rule induction, knowledge is represented as IF-THEN rules, which are easily understood and applied by problem-domain experts. In this paper, a new chromosome representation and solution technique based on Multi-Expression Programming (MEP), named MEPAR-miner (Multi-Expression Programming for Association Rule Mining), is proposed for rule induction. Multi-Expression Programming is a relatively new evolutionary programming technique, first introduced in 2002 by Oltean and Dumitrescu, that uses a linear chromosome structure. In MEP, multiple logical expressions of different sizes are used to represent different logical rules, and MEP expressions can be encoded and implemented in a flexible and efficient manner. MEP has generally been applied to prediction problems; in this paper a new algorithm is presented that enables MEP to discover classification rules. The performance of the developed algorithm is tested on nine publicly available binary and n-ary classification data sets. Extensive experiments demonstrate that MEPAR-miner can discover effective classification rules that are as good as (or better than) those obtained by traditional rule induction methods. It is also shown that an effective gene encoding structure directly improves the predictive accuracy of logical IF-THEN rules.
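A hedged sketch of how a linear MEP chromosome encodes several boolean expressions at once, in the spirit of Oltean and Dumitrescu's representation; the operator set, gene layout, and attribute names are invented for illustration and are not the MEPAR-miner encoding.

```python
def evaluate_chromosome(chromosome, sample):
    """chromosome: list of genes, each either ('term', attribute) or
    (operator, i, j) where i and j index previously evaluated genes.
    Every gene therefore encodes one candidate boolean expression; the
    best-performing gene over a data set would serve as a rule antecedent."""
    ops = {"AND": lambda a, b: a and b,
           "OR": lambda a, b: a or b,
           "XOR": lambda a, b: a != b}
    values = []
    for gene in chromosome:
        if gene[0] == "term":
            values.append(bool(sample[gene[1]]))
        else:
            op, i, j = gene
            values.append(ops[op](values[i], values[j]))
    return values

chromosome = [("term", "a"), ("term", "b"), ("AND", 0, 1), ("term", "c"), ("OR", 2, 3)]
outputs = evaluate_chromosome(chromosome, {"a": 1, "b": 0, "c": 1})
# outputs == [True, False, False, True, True]: one value per encoded expression.
```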

13.
An Introduction to Lattice Rules and their Generator Matrices
For the one-dimensional quadrature of a naturally periodic function over its period, the trapezoidal rule is an excellent choice, its efficiency being predicted theoretically and confirmed in practice. However, for s-dimensional quadrature over a hypercube, the s-dimensional product trapezoidal rule is not generally cost effective even for naturally periodic functions. The search for more effective rules has led first to number theoretic rules and then more recently to lattice rules. This survey outlines the motivation for and present results of this theory. It is particularly designed to introduce the reader to lattice rules.
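A hedged sketch contrasting the one-dimensional trapezoidal rule on a periodic function with an s-dimensional rank-1 lattice rule. The Fibonacci generating vector in the example is a standard textbook choice for s = 2, not one taken from this survey, and the test integrand is invented.

```python
import numpy as np

def trapezoid_periodic(f, n):
    """n-point trapezoidal (equivalently rectangle) rule for a 1-periodic
    function on [0, 1]; for smooth periodic integrands it converges very fast."""
    return np.mean(f(np.arange(n) / n))

def rank1_lattice(f, n, z):
    """Rank-1 lattice rule Q(f) = (1/n) * sum_j f({j * z / n}) on [0, 1]^s,
    where z is an integer generating vector and {.} is the fractional part."""
    j = np.arange(n)[:, None]
    points = np.mod(j * np.asarray(z) / n, 1.0)
    return np.mean(f(points))

g = lambda x: 1.0 + 0.5 * np.sin(2 * np.pi * x)          # 1-periodic test factor
est_1d = trapezoid_periodic(g, 32)                        # exact value is 1.0
est_2d = rank1_lattice(lambda p: g(p[:, 0]) * g(p[:, 1]),
                       n=987, z=(1, 610))                 # Fibonacci lattice, exact value 1.0
```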

14.
When combining classifiers in the Dempster-Shafer framework, Dempster's rule is generally used. However, this rule assumes the classifiers to be independent. This paper investigates the use of other operators for combining non-independent classifiers, including the cautious rule and, more generally, t-norm based rules whose behavior ranges between Dempster's rule and the cautious rule. Two strategies are investigated for learning an optimal combination scheme based on a parameterized family of t-norms. The first learns a single rule by minimizing an error criterion. The second is a two-step procedure in which groups of classifiers with similar outputs are first identified using a clustering algorithm, and within- and between-cluster rules are then determined by minimizing an error criterion. Experiments with various synthetic and real data sets demonstrate the effectiveness of both the single-rule and two-step strategies. Overall, optimizing a single t-norm based rule yields better results than using a fixed rule, including Dempster's rule, and the two-step strategy brings further improvements.
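For reference, Dempster's rule of combination, the baseline that the cautious and t-norm based rules generalize, can be sketched as follows; mass functions are represented here as dicts mapping frozensets of classes to masses, which is an implementation choice rather than anything prescribed by the paper.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: m(A) is proportional to the sum of m1(B) * m2(C) over
    all pairs with B ∩ C = A, normalized by 1 - K, where K is the total mass
    assigned to conflicting (disjoint) pairs."""
    combined, conflict = {}, 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mb * mc
        else:
            conflict += mb * mc
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {a: mass / (1.0 - conflict) for a, mass in combined.items()}

# Two classifier outputs over the frame {a, b} (illustrative masses):
m1 = {frozenset("a"): 0.7, frozenset("ab"): 0.3}
m2 = {frozenset("b"): 0.4, frozenset("ab"): 0.6}
fused = dempster_combine(m1, m2)   # {a}: 0.583..., {b}: 0.167..., {a, b}: 0.25
```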

15.
16.
A distance-based comparison of basic voting rules
In this paper we provide a comparison of different voting rules in a distance-based framework with the help of computer simulations. Taking into account the informational requirements to operate such voting rules and the outcomes of two well-known reference rules, we identify the Copeland rule as a good compromise between these two reference rules. It will be shown that the outcome of the Copeland rule is “close” to the outcomes of the reference rules, but it requires less informational input and has lower computational complexity.
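A hedged sketch of the Copeland rule discussed above, using the +1/0/-1 win-tie-loss scoring variant; ties between top Copeland scorers are not handled, and the ballots are invented.

```python
from itertools import combinations

def copeland_winner(ballots, candidates):
    """ballots: list of rankings (best first).  Each candidate gets +1 per
    pairwise majority win and -1 per loss; the highest total wins."""
    score = {c: 0 for c in candidates}
    for a, b in combinations(candidates, 2):
        a_wins = sum(1 for r in ballots if r.index(a) < r.index(b))
        b_wins = len(ballots) - a_wins
        if a_wins > b_wins:
            score[a] += 1
            score[b] -= 1
        elif b_wins > a_wins:
            score[b] += 1
            score[a] -= 1
    return max(score, key=score.get)

ballots = 4 * [["x", "y", "z"]] + 3 * [["y", "z", "x"]] + 2 * [["z", "y", "x"]]
winner = copeland_winner(ballots, ["x", "y", "z"])
# "y" beats x 5-4 and z 7-2, so y is the Copeland (here also the Condorcet) winner.
```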

17.
Exact closed-form relations are obtained for the Condorcet efficiencies of the four constant scoring rules on three-element rankings when all profiles of rankings are assumed to be equally likely to occur. The Condorcet efficiencies of the two-stage constant scoring rules are shown to be substantially greater than those of the single-stage constant scoring rules. The single-stage scoring rule that picks the element ranked first most often is shown to have a much greater efficiency than the single-stage scoring rule that selects the element with the fewest last-place rankings.
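A hedged Monte Carlo sketch of what Condorcet efficiency means for the single-stage plurality scoring rule under the assumption that all rankings are equally likely. The paper derives exact closed-form values, whereas this simulation only illustrates the definition; the voter count, trial count, and tie handling are arbitrary choices.

```python
import random
from collections import Counter
from itertools import permutations

def condorcet_winner(profile, candidates):
    """Return the candidate who beats every other by strict majority, or None."""
    for c in candidates:
        if all(sum(1 for r in profile if r.index(c) < r.index(d)) * 2 > len(profile)
               for d in candidates if d != c):
            return c
    return None

def plurality_winner(profile):
    return Counter(r[0] for r in profile).most_common(1)[0][0]   # ties broken arbitrarily

def plurality_condorcet_efficiency(n_voters=25, trials=20000):
    """Fraction of profiles having a Condorcet winner in which plurality elects it."""
    candidates = ("a", "b", "c")
    rankings = [list(p) for p in permutations(candidates)]
    with_cw, agree = 0, 0
    for _ in range(trials):
        profile = [random.choice(rankings) for _ in range(n_voters)]
        cw = condorcet_winner(profile, candidates)
        if cw is not None:
            with_cw += 1
            agree += plurality_winner(profile) == cw
    return agree / with_cw
```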

18.
This research examines the performance of due date, resource allocation, project release, and activity scheduling rules in a multiproject environment. The results show that workload sensitive due date rules always provide better due date estimates than workload insensitive due date rules. In contrast, the performance of due date sensitive resource allocation rules is severely affected by due date nervousness. When due date nervousness is not mitigated, the due date insensitive First In System First Served (FISFS) resource allocation rule performs better than the due date sensitive resource allocation rules. Project release rules can, however, mitigate the effect of due date nervousness. Using a simple project release rule, the results show that the due date sensitive Minimum Project Due Date resource allocation rule performs better than FISFS and two other due date sensitive resource allocation rules in many project environments.

19.
Dispatching rules are simple scheduling heuristics that are widely applied in industrial practice. Their popularity can be attributed to their ability to react flexibly to shop floor disruptions that are prevalent in many real-world manufacturing environments. However, it is a challenging and time-consuming task to design local, decentralised dispatching rules that result in a good global performance of a complex shop. An evolutionary algorithm is developed to generate job shop problem instances for which an examined dispatching rule fails to achieve a good solution due to a single suboptimal decision. These instances can be easily analysed to reveal limitations of that rule, which helps with the design of better rules. The method is applied to a job shop problem from the literature, resulting in new best dispatching rules for the mean flow time measure.
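As a hedged single-machine illustration of a dispatching rule, the shortest-processing-time (SPT) rule, which minimizes mean flow time on one machine with all jobs available at time zero, is sketched below; the job data are invented, and the job-shop setting studied above is considerably more involved.

```python
def spt_mean_flow_time(processing_times):
    """Dispatch jobs in shortest-processing-time order and return the mean
    flow time, assuming all jobs are released at time 0."""
    completion_times, clock = [], 0.0
    for p in sorted(processing_times):   # the SPT dispatching decision
        clock += p
        completion_times.append(clock)
    return sum(completion_times) / len(completion_times)

jobs = [7, 2, 5, 3]
# SPT sequence 2, 3, 5, 7 gives completion times 2, 5, 10, 17 -> mean flow time 8.5.
mean_flow = spt_mean_flow_time(jobs)
```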

20.
In this paper, we propose a genetic programming (GP) based approach to evolve fuzzy rule based classifiers. For a c-class problem, a classifier consists of c trees. Each tree, T_i, of the multi-tree classifier represents a set of rules for class i. During the evolutionary process, inaccurate or inactive rules in the initial set are removed by a cleaning scheme. This allows good rules to survive, which eventually determines the number of rules. In the beginning, our GP scheme uses a randomly selected subset of features and then evolves the features to be used in each rule. The initial rules are constructed using prototypes, which are generated randomly as well as by the fuzzy k-means (FKM) algorithm. Experiments are conducted in three different ways: using only randomly generated rules, using a mixture of randomly generated rules and FKM prototype based rules, and using exclusively FKM prototype based rules. The performance of the classifiers is comparable irrespective of the type of initial rules, which emphasizes the novelty of the proposed evolutionary scheme. In this context, we propose a new mutation operation to alter the rule parameters. The GP scheme optimizes the structure of the rules as well as the parameters involved. The method is validated on six benchmark data sets, and the performance of the proposed scheme is found to be satisfactory.
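A hedged sketch of how a fuzzy rule based classifier of this general kind assigns a class: each rule's firing strength is the minimum of Gaussian membership degrees of the input features, and the class whose rule set fires most strongly wins. The membership form, aggregation operators, and prototype-like centers below are illustrative assumptions, not the GP-evolved rules of the paper.

```python
import math

def gaussian_mf(x, center, sigma):
    """Gaussian membership degree of feature value x."""
    return math.exp(-((x - center) ** 2) / (2.0 * sigma ** 2))

def classify(sample, rules_per_class):
    """rules_per_class: {class: [rule, ...]}, where a rule is a list of
    (feature_index, center, sigma) antecedent clauses.  A rule's firing
    strength is the minimum clause membership; a class is scored by its
    best-firing rule, and the highest-scoring class is returned."""
    scores = {
        cls: max(min(gaussian_mf(sample[i], c, s) for i, c, s in rule) for rule in rules)
        for cls, rules in rules_per_class.items()
    }
    return max(scores, key=scores.get)

rules = {
    "class_1": [[(0, 5.0, 0.5), (1, 3.4, 0.4)]],   # one rule per class here; a real
    "class_2": [[(0, 5.9, 0.6), (1, 2.8, 0.4)]],   # multi-tree classifier carries several
}
label = classify([5.1, 3.3], rules)   # -> "class_1"
```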
