Similar Documents
20 similar documents found.
1.
During automated problem solving, some knowledge known at the user level may be lost in the formal model. As this knowledge might be important for efficient problem solving, it is useful to re-discover it in order to improve the efficiency of the solving procedure. This paper compares three methods for discovering certain implied constraints in constraint models describing manufacturing (and other) processes with serial, parallel, and alternative operations. In particular, we focus on identifying equivalent nodes in precedence graphs with parallel and alternative branches. Equivalent nodes correspond to operations that must either all be present in the schedule or all be absent. Such information is frequently known at the user level, but it is lost in the formal model. The paper shows that identifying equivalent nodes is NP-hard in general, but tractable if the graph has a nested structure. As nested structure is typical of real-life processes and workflows, we use nested graphs to experimentally compare the proposed methods.

2.
Mining association rules is a popular and well-researched method for discovering interesting relations between variables in large databases. A practical problem is that, at medium to low support values, a large number of frequent itemsets and an even larger number of association rules are often found in a database. A widely used approach is to gradually increase minimum support and minimum confidence, or to filter the found rules using increasingly strict constraints on additional interestingness measures, until the rule set is reduced to a manageable size. In this paper we describe a different approach, based on the idea of first defining a set of “interesting” itemsets (e.g., by a mixture of mining and expert knowledge) and then, in a second step, selectively generating rules for only these itemsets. The main advantage of this approach over increasing thresholds or filtering rules is that the number of rules found is significantly reduced, while it is not necessary to raise the support and confidence thresholds, which might cause important information in the database to be missed.
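The two-step idea can be sketched in a few lines: compute support only for a hand-picked list of "interesting" itemsets, then emit rules solely for those itemsets. This is a minimal illustration of the approach described above, not the authors' implementation; all data and names are invented.

```python
from itertools import combinations

def support(itemset, transactions):
    """Fraction of transactions containing every item in `itemset`."""
    s = set(itemset)
    return sum(s <= t for t in transactions) / len(transactions)

def rules_for_itemsets(itemsets, transactions, min_conf=0.0):
    """Generate rules A -> B only for the given 'interesting' itemsets,
    instead of post-filtering rules mined from all frequent itemsets."""
    rules = []
    for items in itemsets:
        items = tuple(sorted(items))
        full = support(items, transactions)
        for r in range(1, len(items)):
            for antecedent in combinations(items, r):
                # confidence(A -> B) = support(A u B) / support(A)
                conf = full / support(antecedent, transactions)
                if conf >= min_conf:
                    consequent = tuple(i for i in items if i not in antecedent)
                    rules.append((antecedent, consequent, full, conf))
    return rules
```

Because rules are generated only for the supplied itemsets, the output stays small even at low support thresholds.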

3.
A logical language system is established on the attribute set based on formal concepts, and a basic theorem on the description of object granules via formal concepts is proved. The properties of the object granules described by conjunctions of atomic attribute logic formulas are discussed, and an algorithm is proposed for finding the attribute logic formulas that describe a given object granule.

4.
In this paper, we investigate the advantages of using case-based reasoning (CBR) to solve personnel rostering problems. Constraints for personnel rostering problems are commonly categorized as either ‘hard’ or ‘soft’. Hard constraints are those that must be satisfied and a roster that violates none of these constraints is considered to be ‘feasible’. Soft constraints are more flexible and are often used to measure roster quality in terms of staff satisfaction. We introduce a method for repairing hard constraint violations using CBR. CBR is an artificial intelligence paradigm whereby new problems are solved by considering the solutions to previous similar problems. A history of hard constraint violations and their corresponding repairs, which is captured from human rostering experts, is stored and used to solve similar violations in new rosters. The soft constraints are not defined explicitly. Their treatment is captured implicitly during the repair of hard constraint violations. The knowledge in the case-base is combined with selected tabu search concepts in a hybrid meta-heuristic algorithm. Experiments on real-world data from a UK hospital are presented. The results show that CBR can guide a meta-heuristic algorithm towards feasible solutions with high staff satisfaction, without the need to explicitly define soft constraint objectives.
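The retrieve step of such a CBR repair loop can be illustrated roughly as follows: each stored case pairs a violation (described as a set of features) with the expert's repair, and a new violation retrieves the repair of its most similar past case. All feature names and repairs below are invented for the example and are not taken from the paper.

```python
def similarity(a, b):
    """Jaccard similarity between two feature sets describing a violation."""
    union = a | b
    return len(a & b) / len(union) if union else 1.0

class RepairCaseBase:
    """Minimal case base pairing a hard-constraint violation with the
    repair a human rostering expert applied to it."""

    def __init__(self):
        self.cases = []

    def add(self, violation_features, repair):
        self.cases.append((frozenset(violation_features), repair))

    def retrieve(self, violation_features):
        """Return the repair stored with the most similar past violation."""
        v = frozenset(violation_features)
        return max(self.cases, key=lambda case: similarity(case[0], v))[1]
```

In the paper's hybrid algorithm, a retrieval of this kind is embedded in a tabu-search loop rather than applied in isolation.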

5.
Attribute disturbance of knowledge and attribute disturbance theorems
By using the dynamic characteristics of one-direction S-rough sets (one-direction singular rough sets) and the dual of one-direction S-rough sets, the concepts of attribute disturbance of knowledge, attribute disturbance degree of knowledge, and disturbance coefficient of knowledge are given. Employing these concepts, the cardinal order theorem of attribute-disturbance knowledge, the unit circle theorem of attribute-disturbance knowledge, and the discernib...

6.
We present a clustering method for collections of graphs based on the assumptions that graphs in the same cluster have a similar role structure and that the respective roles can be founded on implicit vertex types. Given a network ensemble (a collection of attributed graphs with some substantive commonality), we start by partitioning the set of all vertices based on attribute similarity. Projection of each graph onto the resulting vertex types yields feature vectors of equal dimensionality, irrespective of the original graph sizes. These feature vectors are then subjected to standard clustering methods. This approach is motivated by social network concepts, and we demonstrate its utility on an ensemble of personal networks of migrants, where we extract structurally similar groups and show their resemblance to predicted acculturation strategies.
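The projection step can be sketched simply: every graph, whatever its size, becomes a fixed-length vector of per-type vertex fractions, ready for any standard clustering routine. The vertex types and data below are made up for illustration and do not come from the paper's migrant-network ensemble.

```python
def type_profile(vertices, vertex_type, types):
    """Project a graph onto shared vertex types: the feature vector gives
    the fraction of the graph's vertices falling into each type, so graphs
    of different sizes map into the same feature space."""
    counts = [0] * len(types)
    for v in vertices:
        counts[types.index(vertex_type[v])] += 1
    return [c / len(vertices) for c in counts]

def euclidean(p, q):
    """Distance between two profiles, as consumed by standard clustering."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
```

The resulting equal-length profiles can be fed directly to k-means or hierarchical clustering.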

7.
8.
This paper first studies basic attribute-oriented induction and proposes an approach to mining user characteristic rules from website login information. The problem is tackled via multi-attribute induction: a concept tree is built first, and mining then proceeds by combining and constraining condition attributes. Experiments show that the proposed model describes customer behavior objectively and offers an effective route to mining customer characteristic rules from Web information.

9.
Dialog-controlled rule systems were introduced as a tool to describe how the Wimdas system for knowledge-based analysis of marketing data manages its dialog with the user. In this paper we discuss how dialog-controlled rule systems can be used to specify a formal language that aids a knowledge engineer in maintaining a system's knowledge base. Although this language is finite, it must be defined generically, being too extensive to be enumerated. In contrast to the well-known traditional methods for defining formal languages (finite automata, regular expressions, or grammars), our method can be applied by a user who need not be an expert in theoretical computer science. Research for this paper was supported by the Deutsche Forschungsgemeinschaft.

10.
This paper presents a possible representation of multicriteria analysis by means of artificial intelligence techniques. The decision process activities characterized by the existence of formal and technical knowledge were identified and attention was focused on the area of multicriteria outranking methods. The knowledge characteristics suggested the use of artificial intelligence techniques, based on a conceptualization in which the domain of discourse is the set of the multicriteria methodology concepts used in the analysed area of activities, and the relational set is the union of the admissible relations among the concepts and the relations elicited from experience. The suitable AI techniques were tested by implementing a knowledge-based interface between the outranking methods and a user who was not very familiar with this approach.

11.
Attribute reduction is viewed as an important issue in data mining and knowledge representation. This paper studies attribute reduction in fuzzy decision systems based on generalized fuzzy evidence theory. The definitions of several kinds of attribute reducts are introduced. The relationships among these reducts are then investigated. In a fuzzy decision system, it is proved that the concepts of fuzzy positive region reduct, lower approximation reduct and generalized fuzzy belief reduct are all equivalent, the concepts of fuzzy upper approximation reduct and generalized fuzzy plausibility reduct are equivalent, and a generalized fuzzy plausibility consistent set must be a generalized fuzzy belief consistent set. In a consistent fuzzy decision system, an attribute set is a generalized fuzzy belief reduct if and only if it is a generalized fuzzy plausibility reduct. But in an inconsistent fuzzy decision system, a generalized fuzzy belief reduct is not a generalized fuzzy plausibility reduct in general.

12.
In order to regulate different circumstances over an extensive period of time, norms in institutions are stated in a vague and often ambiguous manner, thereby abstracting from concrete aspects which become instead relevant for the actual functioning of the institutions. If agent-based electronic institutions, which adhere to a set of abstract requirements, are to be built, how can those requirements be translated into more concrete constraints, the impact of which can be described directly in the institution? We address this issue considering institutions as normative systems based on articulate ontologies of the agent domain they regulate. Ontologies, we hold, are used by institutions to relate the abstract concepts in which their norms are formulated, to their concrete application domain. In this view, different institutions can implement the same set of norms in different ways as far as they presuppose divergent ontologies of the concepts in which that set of norms is formulated. In this paper we analyse this phenomenon introducing a notion of contextual ontology. We will focus on the formal machinery necessary to characterise it as well.

13.
Under study are the automorphism groups of computable formal contexts. We give a general method to transform results on the automorphisms of computable structures into results on the automorphisms of formal contexts. Using this method, we prove that computable formal contexts and computable structures in fact have the same automorphism groups and groups of computable automorphisms. We construct examples of formal contexts and concept lattices that have nontrivial automorphisms, none of which can be hyperarithmetical in any hyperarithmetical presentation of these structures. We also show that two formal concepts can be automorphic without being hyperarithmetically automorphic in any hyperarithmetical presentation.

14.
Formal concept analysis (FCA) is a discipline that studies the hierarchical structures induced by a binary relation between a pair of sets, and is applied in data analysis, information retrieval, knowledge discovery, etc. In this paper it is shown that a formal context T is equivalent to a set-valued mapping S : G → P(M), and that formal concepts can be defined in terms of the set-valued mapping S. It is known that topologies and set-valued mappings are linked. The contribution of this paper is therefore that its conclusions allow formal concept lattices to be constructed on the basis of topology.
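For readers unfamiliar with FCA, the set-valued view S : G → P(M) makes the derivation operators and the concepts themselves concrete. The brute-force sketch below (exponential in |G|, so only for tiny contexts) enumerates all formal concepts from S; it illustrates the standard definitions only, not the topological construction the paper proposes.

```python
from itertools import combinations

def intent(objs, S, attributes):
    """Attributes common to every object in objs; S maps each object
    g in G to its attribute set S(g), a subset of M."""
    common = set(attributes)
    for g in objs:
        common &= S[g]
    return common

def extent(attrs, S):
    """Objects possessing every attribute in attrs."""
    return {g for g, m in S.items() if attrs <= m}

def formal_concepts(S, attributes):
    """Enumerate all formal concepts (extent, intent): pairs (B, A) with
    B = extent(A) and A = intent(B), found by closing every object subset."""
    objs = list(S)
    found = set()
    for r in range(len(objs) + 1):
        for subset in combinations(objs, r):
            A = intent(subset, S, attributes)
            B = extent(A, S)
            found.add((frozenset(B), frozenset(A)))
    return found
```

Ordering these concepts by extent inclusion yields the concept lattice.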

15.
《Optimization》2012,61(7):1099-1116
In this article we study support vector machine (SVM) classifiers in the face of uncertain knowledge sets and show how data uncertainty in knowledge sets can be treated in SVM classification by employing robust optimization. We present knowledge-based SVM classifiers with uncertain knowledge sets using convex quadratic optimization duality. We show that the knowledge-based SVM, where prior knowledge is in the form of uncertain linear constraints, results in an uncertain convex optimization problem with a set containment constraint. Using a new extension of Farkas' lemma, we reformulate the robust counterpart of the uncertain convex optimization problem in the case of interval uncertainty as a convex quadratic optimization problem. We then reformulate the resulting convex optimization problems as a simple quadratic optimization problem with non-negativity constraints using the Lagrange duality. We obtain the solution of the converted problem by a fixed point iterative algorithm and establish the convergence of the algorithm. We finally present some preliminary results of our computational experiments of the method.

16.
Shape constrained smoothing using smoothing splines
In some regression settings one would like to combine the flexibility of nonparametric smoothing with prior knowledge about the regression curve. Such prior knowledge may come from a physical or economic theory, leading to shape constraints such as the underlying regression curve being positive, monotone, convex, or concave. We propose a new method for calculating smoothing splines that fulfill these kinds of constraints. Our approach leads to a quadratic programming problem in which the infinite number of constraints is replaced by a finite number of adaptively chosen constraints. We show that the resulting problem can be solved using the algorithm of Goldfarb and Idnani (1982, 1983) and illustrate our method on several real data sets.

17.
We introduce and study the notions of computable formal context and computable formal concept. We give some examples of computable formal contexts in which the computable formal concepts fail to form a lattice and study the complexity aspects of formal concepts in computable contexts. In particular, we give some sufficient conditions under which the computability or noncomputability of a formal concept could be recognized from its lattice-theoretic properties. We prove the density theorem showing that in a Cantor-like topology every formal concept can be approximated by computable ones. We also show that not all formal concepts have lattice-theoretic approximations as suprema or infima of families of computable formal concepts.

18.
Logical theories for representing knowledge are often plagued by the so-called Logical Omniscience Problem. The problem stems from the clash between the desire to model rational agents, which should be capable of simple logical inferences, and the fact that any logical inference, however complex, almost inevitably consists of inference steps that are simple enough. This contradiction points to the fruitlessness of trying to solve the Logical Omniscience Problem qualitatively if the rationality of agents is to be maintained. We provide a quantitative solution to the problem compatible with the two important facets of the reasoning agent: rationality and resource boundedness. More precisely, we provide a test for the logical omniscience problem in a given formal theory of knowledge. The quantitative measures we use are inspired by the complexity theory. We illustrate our framework with a number of examples ranging from the traditional implicit representation of knowledge in modal logic to the language of justification logic, which is capable of spelling out the internal inference process. We use these examples to divide representations of knowledge into logically omniscient and not logically omniscient, thus trying to determine how much information about the reasoning process needs to be present in a theory to avoid logical omniscience.

19.
We study infinite sets of convex functional constraints, with possibly a set constraint, under general background hypotheses which require closed functions and a closed set, but otherwise do not require a Slater point. For example, when the set constraint is not present, only the consistency of the conditions is needed. We provide hypotheses, which are necessary as well as sufficient, for the overall set of constraints to have the property that there is no gap in Lagrangean duality for every convex objective function defined on ℝⁿ. The sums considered for our Lagrangean dual are those involving only finitely many nonzero multipliers. In particular, we recover the usual sufficient condition when only finitely many functional constraints are present. We show that a certain compactness condition in function space plays the role of finiteness, when there are an infinite number of functional constraints. The author's research has been partially supported by Grant ECS8001763 of the National Science Foundation.

20.
Based on the theory of integrable boundary conditions (BCs) developed by Sklyanin, we provide a direct method for computing soliton solutions of the focusing nonlinear Schrödinger equation on the half‐line. The integrable BCs at the origin are represented by constraints of the Lax pair, and our method lies on dressing the Lax pair by preserving those constraints in the Darboux‐dressing process. The method is applied to two classes of solutions: solitons vanishing at infinity and self‐modulated solitons on a constant background. Half‐line solitons in both cases are explicitly computed. In particular, the boundary‐bound solitons, which are static solitons bounded at the origin, are also constructed. We give a natural inverse scattering transform interpretation of the method as evolution of the scattering data determined by the integrable BCs in space.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号