Similar Documents
20 similar documents retrieved.
1.
There is extensive theoretical work on measures of inconsistency for arbitrary formulae in knowledge bases. Many of these are defined in terms of the set of minimal inconsistent subsets (MISes) of the base. However, few have been implemented or experimentally evaluated to support their viability, since computing all MISes is intractable in the worst case. Fortunately, recent work on a related problem of minimal unsatisfiable sets of clauses (MUSes) offers a viable solution in many cases. In this paper, we begin by drawing connections between MISes and MUSes through algorithms based on a MUS generalization approach and a new optimized MUS transformation approach to finding MISes. We implement these algorithms, along with a selection of existing measures for flat and stratified knowledge bases, in a tool called mimus. We then carry out an extensive experimental evaluation of mimus using randomly generated arbitrary knowledge bases. We conclude that these measures are viable for many large and complex random instances. Moreover, they represent a practical and intuitive tool for inconsistency handling.
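For concreteness, here is a minimal brute-force sketch of MIS enumeration on a toy propositional base (an illustration of the concept only, not the mimus tool or its MUS-based algorithms; the base and the encoding of formulas as Python predicates are hypothetical):

from itertools import combinations, product

def satisfiable(preds, atoms):
    # Brute-force SAT check over all truth assignments of the atoms.
    for values in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in preds):
            return True
    return False

def minimal_inconsistent_subsets(kb, atoms):
    # Enumerate subsets by increasing size; an inconsistent subset that
    # contains no previously found MIS must itself be minimal.
    mises = []
    for r in range(1, len(kb) + 1):
        for s in combinations(kb, r):
            if any(set(m) <= set(s) for m in mises):
                continue
            if not satisfiable([p for _, p in s], atoms):
                mises.append(s)
    return mises

# Hypothetical base K = {a, not a, a -> b, not b}
kb = [
    ("a",      lambda v: v["a"]),
    ("not a",  lambda v: not v["a"]),
    ("a -> b", lambda v: (not v["a"]) or v["b"]),
    ("not b",  lambda v: not v["b"]),
]
for m in minimal_inconsistent_subsets(kb, ["a", "b"]):
    print([name for name, _ in m])   # ['a', 'not a'] and ['a', 'a -> b', 'not b']

On this base a measure that simply counts MISes would return 2; the exhaustive search also makes plain why the MUS-based algorithms developed in the paper are needed at scale.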

2.
Computing with words (CWW) relies on linguistic representation of knowledge that is processed by operating at the semantic level defined through fuzzy sets. Linguistic representation of knowledge is a major issue when fuzzy rule-based models are acquired from data by some form of empirical learning. Indeed, these models are often required to exhibit interpretability, which is normally evaluated in terms of structural features, such as rule complexity and properties of fuzzy sets and partitions. In this paper we propose a different approach for evaluating interpretability that is based on the notion of cointension. The interpretability of a fuzzy rule-based model is measured in terms of the degree of cointension between the explicit semantics, defined by the formal parameter settings of the model, and the implicit semantics conveyed to the reader by the linguistic representation of knowledge. Implicit semantics calls for a representation of the user's knowledge, which is difficult to externalise. Nevertheless, we identify a set of properties, which we call the “logical view”, that is expected to hold in the implicit semantics and is used in our approach to evaluate the cointension between explicit and implicit semantics. In practice, a new fuzzy rule base is obtained by minimising the original fuzzy rule base through logical properties. Semantic comparison is made by evaluating the performances of the two rule bases, which are expected to be similar when the two semantics are almost equivalent. If this is the case, we deduce that the logical view is applicable to the model, which can be tagged as interpretable from the cointension viewpoint. These ideas are then used to define a strategy for assessing the interpretability of fuzzy rule-based classifiers (FRBCs). The strategy has been evaluated on a set of pre-existing FRBCs, acquired by different learning processes from a well-known benchmark dataset. Our analysis highlights that some of them are not cointensive with the user's knowledge, hence their linguistic representation is not appropriate, even though they can be tagged as interpretable from a structural point of view.

3.
Set-based granular computing plays an important role in human reasoning and problem solving. Its three key issues are information granulation, information granularity and granular operations. To address these issues, several methods have been developed in the literature, but no unified framework has been formulated for them, which is inefficient to some extent. To facilitate further research on the topic, by consistently representing the granular structures induced by information granulation, we introduce a concept of knowledge distance to differentiate any two granular structures. Based on the knowledge distance, we propose a unified framework for set-based granular computing, which we name a lattice model. Its application leads to the desired answers to two key questions: (1) what is the essence of information granularity, and (2) how to perform granular operations. Using the knowledge distance, a new axiomatic definition of information granularity, called generalized information granularity, is developed and its corresponding lattice model is established, which reveals the essence of information granularity in set-based granular computing. Moreover, four operators are defined on granular structures, under which the algebraic structure of granular structures forms a complemented lattice. These operators can effectively accomplish composition, decomposition and transformation of granular structures. These results show that the knowledge distance and the lattice model are powerful mechanisms for studying set-based granular computing.
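The abstract does not reproduce the formal definition; as a hedged sketch, one formulation of knowledge distance used in this line of work averages the symmetric differences between the granules (equivalence classes) containing each object. A minimal Python illustration, assuming granular structures given as partitions of a finite universe:

def granule(partition, x):
    # Return the block of the partition containing object x.
    for block in partition:
        if x in block:
            return block
    raise ValueError(f"object {x} is not covered by the partition")

def knowledge_distance(k1, k2, universe):
    # D(K1, K2) = (1/|U|) * sum over x of |K1(x) symdiff K2(x)| / |U|
    n = len(universe)
    total = sum(len(granule(k1, x) ^ granule(k2, x)) for x in universe)
    return total / (n * n)

U = [1, 2, 3, 4]
K1 = [frozenset({1, 2}), frozenset({3, 4})]                # coarser structure
K2 = [frozenset({1}), frozenset({2}), frozenset({3, 4})]   # refines {1, 2}
print(knowledge_distance(K1, K2, U))                        # 0.125

Intuitively, the distance is 0 exactly when two granular structures coincide and grows as they differ in refinement, which is what makes a lattice ordered by refinement a natural home for such a measure.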

4.
In this paper we present a family of measures aimed at determining the amount of inconsistency in probabilistic knowledge bases. Our approach to measuring inconsistency is graded in the sense that we consider the minimal adjustments in the degrees of certainty (i.e., probabilities in this paper) of the statements necessary to make the knowledge base consistent. The computation of the family of measures we present here, inasmuch as it yields an adjustment in the probability of each statement that restores consistency, provides the modeler with possible repairs of the knowledge base. The case example that motivates our work and on which we test our approach is the knowledge base of CADIAG-2, a well-known medical expert system.
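As a hedged illustration of the idea (the abstract does not fix the exact norm used), consider a base containing P(a) = 0.7 and P(¬a) = 0.6, which is inconsistent because the two probabilities must sum to 1. Measuring inconsistency by the smallest total adjustment that restores consistency gives

\[
  \mathcal{I}(\mathcal{K}) \;=\; \min\bigl\{\, \|\vec{\varepsilon}\|_1 \;:\; \mathcal{K}^{\vec{\varepsilon}} \text{ is consistent} \,\bigr\}
  \;=\; |0.7 + 0.6 - 1| \;=\; 0.3,
\]

attained, for instance, by the repair P(a) = 0.55, P(¬a) = 0.45; the minimizer itself is what hands the modeler a concrete repair.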

5.
A number of approaches have been proposed for measuring inconsistency in knowledge bases. However, it has rarely been investigated how to incorporate preference information into inconsistency measures. This paper presents two approaches to measuring inconsistency for stratified knowledge bases. The first approach, termed the multi-section inconsistency measure (MSIM for short), provides a framework for characterizing the inconsistency at each stratum of a stratified knowledge base. Two instances of MSIM are defined: the naive MSIM and the stratum-centric MSIM. The second approach, termed the preference-based approach, aims to articulate the inconsistency in a stratified knowledge base from a global perspective. This approach allows us to define measures that take into account the number of formulas involved in inconsistencies as well as the preference levels of these formulas. A set of desirable properties is introduced for inconsistency measures of stratified knowledge bases and studied with respect to the measures introduced in the paper. Computational complexity results for these measures are presented. In addition, a simple but explanatory example is given to illustrate the application of the proposed approaches to requirements engineering.
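As a hedged toy example of per-stratum measurement (the naive and stratum-centric variants are defined in the paper; the counting rule below is only an illustrative stand-in): take K = S_1 ∪ S_2 with S_1 = {a} as the most preferred stratum and S_2 = {¬a, b}. The only minimal inconsistent subset is {a, ¬a}, so a multi-section style measure reporting, per stratum, the number of formulas occurring in some minimal inconsistent subset yields

\[
  \mathrm{MSIM}(K) \;=\; \bigl(\, |S_1 \cap P|,\ |S_2 \cap P| \,\bigr) \;=\; (1,\ 1),
  \qquad P = \bigcup \mathrm{MIS}(K) = \{a, \neg a\},
\]

making visible that the inconsistency touches the most certain stratum, which a single global number would hide.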

6.
Inconsistency measures have been proposed to assess the severity of inconsistencies in knowledge bases of classical logic in a quantitative way. In general, computing the value of inconsistency is a computationally hard task as it is based on the satisfiability problem, which is itself NP-complete. In this work, we address the problem of measuring inconsistency in knowledge bases that are accessed as a stream of propositional formulæ. That is, the formulæ of a knowledge base cannot be accessed directly but only once, through processing of the stream. This work is a first step towards practicable inconsistency measurement for applications such as Linked Open Data, where huge amounts of information are distributed across the web and a direct assessment of the quality or inconsistency of this information is infeasible due to its size. Here we discuss the problem of stream-based inconsistency measurement on classical logic, in order to make use of existing measures for classical logic. However, it turns out that inconsistency measures defined on the notion of minimal inconsistent subsets are usually not apt for the streaming scenario. In order to address this issue, we adapt measures defined on paraconsistent logics and also present a novel inconsistency measure based on the notion of a hitting set. We conduct an extensive empirical analysis of the behavior of these inconsistency measures in the streaming scenario, in terms of runtime, accuracy, and scalability. We conclude that for two of these measures, the stream-based variant of the new inconsistency measure and the stream-based variant of the contension inconsistency measure, large-scale inconsistency measurement in streaming scenarios is feasible.
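The hitting-set idea can be sketched as follows (our paraphrase of the notion named above): call H a hitting set for K if every formula of K is satisfied by at least one interpretation in H, and set

\[
  \mathcal{I}_{\mathrm{hs}}(K) \;=\; \min\{\, |H| \;:\; H \text{ is a hitting set for } K \,\} \;-\; 1 .
\]

For example, K = {a, ¬a, b} is covered by the two interpretations a ↦ ⊤, b ↦ ⊤ and a ↦ ⊥, b ↦ ⊤, so the measure is 1, while any consistent base needs a single interpretation and gets value 0. A measure of this shape only needs candidate interpretations to be maintained and updated as formulæ stream past, which is what makes it attractive in this setting.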

7.
This paper addresses the important and somewhat contentious matter of how knowledge accrues in a system. The matter has at its heart the establishment of a scaling function for knowledge (as distinct from the scaling used for information) which is related to the density of the knowledge structure at any point in the system. We commence with a discussion of whether it is possible at all to scale knowledge, dispensing with any concept of knowledge as a simple finite resource and making a distinction between the establishment of a metric and the act of measurement itself. First, we draw on the Shannon–Weaver (H) measure to establish how knowledge can be seen as contributing to the partitioning of message sets under the H-measure. This establishes how knowledge contributes to the quantity of information held within a system when viewed as a meta-structure for that information. Second, we build on the idea of knowledge as an endemic property of a structure of interconnections between concepts. We observe that knowledge content can be dense both in structures that are highly interconnected while deploying a modest number of concepts and in those where the interconnections are sparser but the number of concepts deployed is high. A scaling function exhibiting appropriate properties is then proposed. It can be seen that the scaling associated with knowledge as meta-information and the scaling deriving from the interconnectivity point of view are connected. This scaling function is particularly useful in three ways. First, it outlines the properties of knowledge itself, which can be used as criteria for future knowledge-based research. Second, its application in practice creates the ability to identify areas of knowledge concentration within a system. Finally, this identification of knowledge ‘hotspots’ can be used to direct the investment of resources for the management of knowledge, and it provides an indication of the appropriate approach for the management of this knowledge. We make some observations on the limitations of the approach, on its potential as a basis for managerial action (particularly in Knowledge Management) and on its relevance and applicability to OR practice (particularly in respect of systems approaches to knowledge mapping). Lastly, we offer a view on the likely line of research which may result from this work.
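For reference, the Shannon–Weaver measure invoked above is the standard entropy

\[
  H \;=\; -\sum_{i} p_i \log_2 p_i ,
\]

and the partitioning role of knowledge can be illustrated on a toy message set (our example, not the paper's): a uniform set of 8 messages carries H = 3 bits; knowledge that partitions it into two equiprobable halves and identifies the relevant half leaves H = 2 bits, so that piece of knowledge accounts for exactly 1 bit of the information held in the system.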

8.
The article describes a knowledge model oriented to the asynchronous mode of distance learning. The formalization of the knowledge model for a given domain, the operations on the knowledge, and the algorithm for creating the knowledge model are presented. All of the resulting decisions can be realized in a program environment compatible with the SCORM standard. The described methodology, based on a generalized knowledge model, makes it possible to develop a distance learning course, primarily for fundamental knowledge. In this paper we describe the methodology and illustrate its use through a project to develop a distance learning course on queuing systems. Moreover, a practical application is proposed based on the eQuality project.

9.
A method of introducing knowledge processing technology to develop advanced CAD/CAM systems for engineering design programs is discussed. To achieve this objective it is necessary to establish the concept of an object model and a methodology for building it in computers. Bringing engineering design and knowledge processing together is not an easy task. The reasons are two-fold: first, knowledge processing technology is still making rapid progress and we do not yet understand it completely; and second, in order to introduce knowledge processing technology into CAD/CAM we need to analyze the design-and-manufacturing process in detail and to find the best method of combining the two technologies. The task is further complicated because it can only be done by those who have sufficient knowledge of both technologies, and also because it may result in a reorganization of the traditional design-and-manufacturing process. This paper describes the current state of knowledge processing technology as well as its limitations in achieving intelligent functions, and analyzes the manner of combining these two technologies.

10.
The attribute-oriented induction (AOI for short) method is one of the most important data mining methods. The input of the AOI method consists of a relational table and a concept tree (concept hierarchy) for each attribute, and the output is a small relation summarizing the general characteristics of the task-relevant data. Although AOI is very useful for inducing general characteristics, it has the limitation that it can only be applied to relational data, where there is no order among the data items. If the data are ordered, the existing AOI methods are unable to find the generalized knowledge. In view of this weakness, this paper proposes a dynamic programming algorithm, based on AOI techniques, to find generalized knowledge from an ordered list of data. By using the algorithm, we can discover a sequence of K generalized tuples describing the general characteristics of different segments of data along the list, where K is a parameter specified by the user.
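The abstract does not give the algorithm's details, but the underlying pattern is the classical dynamic program for optimally segmenting an ordered list into K pieces. A hedged sketch, using the within-segment value range as a hypothetical stand-in for the paper's generalization cost:

def segment(data, K):
    # Split the ordered list `data` into K contiguous segments minimizing
    # the total within-segment cost (here: max - min of each segment).
    n = len(data)
    INF = float("inf")
    def cost(i, j):
        seg = data[i:j]
        return max(seg) - min(seg)
    # best[k][j] = minimal total cost of splitting data[:j] into k segments
    best = [[INF] * (n + 1) for _ in range(K + 1)]
    cut = [[0] * (n + 1) for _ in range(K + 1)]
    best[0][0] = 0.0
    for k in range(1, K + 1):
        for j in range(k, n + 1):
            for i in range(k - 1, j):
                if best[k - 1][i] == INF:
                    continue
                c = best[k - 1][i] + cost(i, j)
                if c < best[k][j]:
                    best[k][j], cut[k][j] = c, i
    # Recover the K segments by walking the cut table backwards.
    bounds, j = [], n
    for k in range(K, 0, -1):
        i = cut[k][j]
        bounds.append((i, j))
        j = i
    return [data[i:j] for i, j in reversed(bounds)]

print(segment([1, 2, 2, 9, 10, 11, 30, 31], 3))
# -> [[1, 2, 2], [9, 10, 11], [30, 31]]

The table best[k][j] is filled with O(K·n²) segment evaluations, and the recovered segments are the ones whose summaries would become the K generalized tuples.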

11.
We extend the use of knowledge trees as a means for questioning knowledge bases with linguistic information. Using Zadeh's theory of approximate reasoning as a tool, we provide means for questioning large knowledge bases which contain relational, implicational and data-type information. We provide a means for answering questions of truth as well as questions of value.

12.
A set of generating elements (see [1,2]) and its dual concept, an order generating set (see [3]), are important concepts in lattice theory. The concept of a set of generating elements for continuous partially ordered sets was also proposed in [2]. In this paper, the concept of a generating system is introduced first. Then the concept of a p-base for a generating system F of a partially ordered set (POS for short) is proposed. Finally, the minimum p-bases for a POS are studied. Definition 1. Let F be a set of subsets of a POS L. If for any a ∈ L there exists A ∈ F such that a = sup A, then F is called a ge…

13.
Possibilistic logic bases and possibilistic graphs are two different frameworks of interest for representing knowledge. The former ranks the pieces of knowledge (expressed by logical formulas) according to their level of certainty, while the latter exhibits relationships between variables. The two types of representation are semantically equivalent when they lead to the same possibility distribution (which rank-orders the possible interpretations). A possibility distribution can be decomposed using a chain rule which may be based on either of the two kinds of conditioning that exist in possibility theory (one based on the product in a numerical setting, one based on the minimum operation in a qualitative setting). These two types of conditioning induce two kinds of possibilistic graphs. This article deals with the links between the logical and the graphical frameworks in both the numerical and the qualitative settings. In both cases, a translation of these graphs into possibilistic bases is provided. The converse translation from a possibilistic knowledge base into a min-based graph is also described.
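The two chain rules in question are standard in possibility theory and can be written uniformly (notation ours):

\[
  \pi(x_1, \dots, x_n) \;=\; \bigotimes_{i=1}^{n} \Pi\bigl(x_i \mid x_1, \dots, x_{i-1}\bigr),
  \qquad \otimes \in \{\min,\ \times\},
\]

with the product form for the numerical setting and the minimum for the qualitative setting; each choice of ⊗ yields its own family of possibilistic graphs, and hence its own translation to and from ranked logical bases.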

14.
A central challenge for research on how we should prepare students to manage crossing boundaries between different knowledge settings in lifelong learning processes is to identify those forms of knowledge that are particularly relevant here. In this paper, we develop by philosophical means the concept of a dialectical system as a general framework to describe the development of knowledge networks that mark the starting point for learning processes, and we use semiotics to discuss (a) the epistemological thesis that any cognitive access to our world of objects is mediated by signs and (b) diagrammatic reasoning and abduction as those forms of practical knowledge that are crucial for the development of knowledge networks. The richness of this theoretical approach becomes evident by applying it to an example of learning in a biological research context. At the same time, we take a new look at the role of mathematical knowledge in this process.

15.
In this research report we examine knowledge other than content knowledge needed by a mathematician in his first use of an inquiry-oriented curriculum for teaching an undergraduate course in differential equations. Collaboratively, the mathematician and two mathematics education researchers identified the challenges faced by the mathematician as he began to adopt reform-minded teaching practices. Our analysis reveals that responding to those challenges entailed formulating and addressing particular instructional goals, previously unfamiliar to the instructor. From a cognitive analytical perspective, we argue that the instructor's knowledge (or lack of knowledge) influenced his ability to set and accomplish his instructional goals as he planned for, reflected on, and enacted instruction. By studying the teaching practices of a professional mathematician, we identify forms of knowledge apart from mathematical content knowledge that are essential to reform-oriented teaching, and we highlight how knowledge acquired through more traditional instructional practices may fail to support research-based forms of student-centered teaching.

16.
Empirical studies in several industries have verified that unit costs decline as organizations gain experience or knowledge in production, which is referred to as the learning curve effect. In the past two decades, there has also been analytical work on the relationship between a firm's learning curve effects and its pricing and output decisions. Learning rates differ significantly across firms in the same industry and recent empirical evidence has shown that knowledge depreciation may be an important reason for these differences. We propose and analyze a learning curve model with knowledge depreciation and provide several new insights. First, we show that there exists a steady state where knowledge level and unit cost remain constant over time and there exists an optimal path to this steady state. Many empirical researchers have observed this ‘plateau’ phenomenon, whereby unit costs decline but reach saturation after some time. While this has been traditionally modeled exogenously in the learning curve literature by assuming that cost reduction stops at some level of knowledge through a convex, decreasing unit cost function, we provide an alternative endogenous explanation. We are also able to show that, unlike in the model without knowledge depreciation, the production rate along the optimal path to the steady state may decrease over time. Also, the knowledge level along the optimal path may actually decline over time. Finally, we show that the optimal production rate decreases at higher interest rates and increases at higher knowledge depreciation rates. In turn, this implies that a high interest rate environment discourages firms from achieving high knowledge levels and results in higher prices. On the other hand, higher knowledge depreciation rates result in higher production rates and lower prices.
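A minimal sketch of the class of dynamics described (our notation; the paper's exact model is not reproduced in the abstract): with production rate q_t, depreciation factor λ ∈ (0, 1), and learning exponent b,

\[
  K_{t+1} \;=\; \lambda K_t + q_t, \qquad c_t \;=\; c_1 K_t^{-b} .
\]

A steady state K* then requires q* = (1 − λ)K*: production just offsets depreciation, so knowledge, and with it unit cost, plateaus endogenously rather than through an assumed floor on c_t, which is the alternative explanation highlighted above.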

17.
We consider matroidal structures on convex geometries, which we call cg-matroids. The concept of a cg-matroid is closely related to but different from that of a supermatroid introduced by Dunstan, Ingleton, and Welsh in 1972. Distributive supermatroids or poset matroids are supermatroids defined on distributive lattices or sets of order ideals of posets. The class of cg-matroids includes distributive supermatroids (or poset matroids). We also introduce the concept of a strict cg-matroid, which turns out to be exactly a cg-matroid that is also a supermatroid. We show characterizations of cg-matroids and strict cg-matroids by means of the exchange property for bases and the augmentation property for independent sets. We also examine submodularity structures of strict cg-matroids.
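For orientation, the classical base-exchange axiom that the cg-matroid characterizations adapt reads (the cg-matroid version, which must respect the closure structure of the underlying convex geometry, is given in the paper):

\[
  \forall B_1, B_2 \in \mathcal{B},\ \forall e \in B_1 \setminus B_2,\
  \exists f \in B_2 \setminus B_1 :\ (B_1 \setminus \{e\}) \cup \{f\} \in \mathcal{B} .
\]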

18.
Following the success of a study of the method of fundamental solutions (MFS) using an image concept [13], we extend the three approaches to solve three-dimensional Laplace problems containing spherical boundaries. The case of an eccentric sphere for the Laplace problem is considered. The optimal locations for the source distribution in the MFS, in particular whether to include the foci, are also examined by using the image concept in the 3D problems. Whether or not a free constant is required in the MFS is also studied. The error distribution is discussed after comparison with the analytical solution derived using bispherical coordinates. Besides, the relationship between the Trefftz bases and the singularities in the MFS for three-dimensional Laplace problems is also addressed. It is found that one source of the MFS contains several interior and exterior Trefftz sets through a degenerate kernel. Conversely, a single Trefftz basis can be superposed from lumped sources in the MFS through an indirect BIEM. Based on this finding, the relationship between the fictitious boundary densities of the indirect BIEM and the singularity strengths in the MFS can be constructed, owing to the fact that the MFS is a lumped version of an indirect BIEM.
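The MFS ansatz underlying the study is the standard superposition of free-space fundamental solutions of the 3D Laplace equation (notation ours), with sources s_j placed off the physical boundary:

\[
  u(\mathbf{x}) \;\approx\; \sum_{j=1}^{N} \alpha_j\, \frac{1}{4\pi \lVert \mathbf{x} - \mathbf{s}_j \rVert} ,
\]

where the strengths α_j are fixed by collocating the boundary conditions. The image concept guides where the s_j (for example, the foci) should be placed, and expanding 1/(4π‖x − s‖) as a degenerate kernel is what connects each source to a series of Trefftz bases.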

19.
In this article the identification of knowledge structures based on skill multimaps is studied. It is shown how some possible modifications of the skill map may render the probabilistic knowledge structure non-identifiable. More specifically, we consider particular modifications such as adding skills, or adding competencies, for an item q. We demonstrate how these changes in the skill map lead the derived knowledge structure to be backward-graded or forward-graded, respectively, and how these two particular kinds of structures are not identifiable.
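The gradedness notions referred to above are commonly defined as follows (our paraphrase): a knowledge structure 𝒦 on an item set Q is

\[
  \text{forward-graded in } q \iff \{\, K \cup \{q\} : K \in \mathcal{K} \,\} \subseteq \mathcal{K},
  \qquad
  \text{backward-graded in } q \iff \{\, K \setminus \{q\} : K \in \mathcal{K} \,\} \subseteq \mathcal{K} .
\]

Roughly speaking, under either property different combinations of the state probabilities and the error parameters for q can produce the same response distribution, which is the source of the non-identifiability.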

20.
For a knowledge-based system that fails to provide the correct answer, it is important to be able to tune the system while minimizing the overall change to the knowledge base. There are a variety of reasons why an answer may be incorrect, ranging from incorrect knowledge to information vagueness to incompleteness. Still, in all these situations, it is typically the case that most of the knowledge in the system is likely to be correct as specified by the expert(s) and/or knowledge engineer(s). In this paper, we propose a method to identify the possible changes by understanding the contribution of parameters to the outputs of concern. Our approach is based on Bayesian Knowledge Bases for modeling uncertainties. We start with single-parameter changes and then extend to multiple parameters. In order to identify the optimal solution that minimizes the change to the model as specified by the domain experts, we define and evaluate the sensitivity values of the results with respect to the parameters. We discuss the computational complexity of determining the solution and show that the problem of multiple-parameter changes can be transformed into linear programming problems, and is thus efficiently solvable. Our work can also be applied towards validating the knowledge base such that the updated model satisfies all test cases collected from the domain experts.
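As a hedged sketch of the linear programming reduction (hypothetical numbers; constructing the actual constraints from a Bayesian Knowledge Base is the substantive part of the paper), the 1-norm of the parameter adjustment can be minimized by splitting it into nonnegative parts:

import numpy as np
from scipy.optimize import linprog

theta = np.array([0.2, 0.7, 0.5])   # current parameter values (hypothetical)
A = np.array([[0.5, 0.3, 0.2]])     # hypothetical linear output map
b = np.array([0.55])                # required output level: A @ (theta + d) >= b

n = len(theta)
# Split delta = u - v with u, v >= 0; then sum(u) + sum(v) = ||delta||_1.
c = np.ones(2 * n)
A_ub = np.hstack([-A, A])           # -A(u - v) <= A @ theta - b
b_ub = A @ theta - b
# Keep the adjusted parameters inside [0, 1].
A_box = np.vstack([np.hstack([np.eye(n), -np.eye(n)]),    #  (u - v) <= 1 - theta
                   np.hstack([-np.eye(n), np.eye(n)])])   # -(u - v) <= theta
A_ub = np.vstack([A_ub, A_box])
b_ub = np.concatenate([b_ub, 1 - theta, theta])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (2 * n))
delta = res.x[:n] - res.x[n:]
print(np.round(delta, 3), np.round(theta + delta, 3))   # all change on parameter 1

Splitting delta into u − v is the standard trick for making a 1-norm objective linear; here the solver puts the entire adjustment on the parameter with the strongest influence on the output, which is exactly the kind of information the sensitivity values are meant to reveal.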
