Similar documents
 Found 20 similar documents (search time: 625 ms)
1.
Measuring the degree of inconsistency of a belief base is an important issue in many real-world applications. It has been increasingly recognized that deriving syntax-sensitive inconsistency measures for a belief base from its minimal inconsistent subsets is a natural way forward. Most of the current proposals along this line do not take the impact of the size of each minimal inconsistent subset into account. However, as illustrated by the well-known Lottery Paradox, as the size of a minimal inconsistent subset increases, the degree of its inconsistency decreases. Another gap in current studies is that the role of free formulas of a belief base in measuring the degree of inconsistency has not yet been well characterized. Adding free formulas to a belief base can enlarge the set of consistent subsets of that base. However, consistent subsets of a belief base also have an impact on syntax-sensitive normalized measures of the degree of inconsistency: each consistent subset can be considered as a distinctive plausible perspective reflected by that belief base, whilst each minimal inconsistent subset projects a distinctive view of the inconsistency. To address these two issues, we propose a normalized framework for measuring the degree of inconsistency of a belief base which unifies the impact of both consistent subsets and minimal inconsistent subsets. We also show that this normalized framework satisfies all the properties deemed necessary by common consent to characterize an intuitively satisfactory measure of the degree of inconsistency for belief bases. Finally, we use a simple but explanatory example in requirements engineering to illustrate the application of the normalized framework.
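A minimal sketch of a size-sensitive, MIS-based measure (an illustration only; the paper's normalized framework combines minimal inconsistent subsets and consistent subsets differently). Formulas are encoded as Python predicates over truth assignments, and each minimal inconsistent subset M contributes 1/|M|, so larger minimal inconsistent subsets count less, matching the Lottery-Paradox intuition; dividing by the base size is one simple normalisation choice.

```python
from itertools import product, combinations

def consistent(formulas, variables):
    """True iff some truth assignment satisfies every formula."""
    return any(all(f(dict(zip(variables, vals))) for f in formulas)
               for vals in product([True, False], repeat=len(variables)))

def minimal_inconsistent_subsets(base, variables):
    """Enumerate minimal inconsistent subsets (as index tuples), smallest first."""
    mis = []
    for r in range(1, len(base) + 1):
        for subset in combinations(range(len(base)), r):
            if any(set(m) <= set(subset) for m in mis):
                continue  # contains a smaller MIS, hence not minimal
            if not consistent([base[i] for i in subset], variables):
                mis.append(subset)
    return mis

def inconsistency_degree(base, variables):
    """Each MIS M contributes 1/|M|; normalise by the base size."""
    mis = minimal_inconsistent_subsets(base, variables)
    return sum(1.0 / len(m) for m in mis) / len(base)

# K = {a, not a, b}: one MIS {a, not a} of size 2; b is a free formula
base = [lambda v: v["a"], lambda v: not v["a"], lambda v: v["b"]]
print(inconsistency_degree(base, ["a", "b"]))  # (1/2) / 3, about 0.167
```

Note that the free formula b does not change the sum over minimal inconsistent subsets, but it does change the normalisation, which is exactly the interaction the abstract highlights.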

2.
Real physical systems with reflective and rotational symmetries such as viruses, fullerenes and quasicrystals have recently been modeled successfully in terms of three-dimensional (affine) Coxeter groups. Motivated by this progress, we explore here the benefits of performing the relevant computations in a Geometric Algebra framework, which is particularly suited to describing reflections. Starting from the Coxeter generators of the reflections, we describe how the relevant chiral (rotational), full (Coxeter) and binary polyhedral groups can be easily generated and treated in a unified way in a versor formalism. In particular, this yields a simple construction of the binary polyhedral groups as discrete spinor groups. These in turn are known to generate Lie and Coxeter groups in dimension four, notably the exceptional groups D 4, F 4 and H 4. A Clifford algebra approach thus reveals an unexpected connection between Coxeter groups of ranks 3 and 4. We discuss how to extend these considerations and computations to the Conformal Geometric Algebra setup, in particular for the non-crystallographic groups, and construct root systems and quasicrystalline point arrays. We finally show how a Clifford versor framework sheds light on the geometry of the Coxeter element and the Coxeter plane for the examples of the two-dimensional non-crystallographic Coxeter groups I 2(n) and the three-dimensional groups A 3, B 3, as well as the icosahedral group H 3. IPPP/12/49, DCPT/12/98
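A minimal numerical sketch of the reflection-and-versor idea in plain Python, without a Clifford-algebra library: composing two reflections gives a rotation by twice the angle between the mirrors. The root vectors below are an assumption chosen for the rank-2 group I2(3) (simple roots at 120 degrees), so the resulting rotation has order 3.

```python
import math

def reflect(v, n):
    """Reflect v in the hyperplane orthogonal to the unit vector n;
    in Geometric Algebra this is the sandwich product -n v n."""
    d = sum(vi * ni for vi, ni in zip(v, n))
    return tuple(vi - 2 * d * ni for vi, ni in zip(v, n))

def rotate(v, n1, n2):
    """Two successive reflections compose to a rotation by twice the
    angle between n1 and n2 (the versor/rotor R = n1 n2)."""
    return reflect(reflect(v, n1), n2)

# Simple roots of I2(3): unit vectors at the angle pi - pi/3 = 120 degrees
a1 = (1.0, 0.0)
a2 = (math.cos(2 * math.pi / 3), math.sin(2 * math.pi / 3))

v = (0.3, 0.7)
w = v
for _ in range(3):
    w = rotate(w, a1, a2)
print(w)  # back to (0.3, 0.7) up to rounding: the rotation has order 3
```

Iterating the two generating reflections in all combinations would enumerate the full dihedral group; dedicated Geometric Algebra packages carry the same construction to rank 3 and the binary polyhedral groups.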

3.
The traditional four-step model has been widely used in travel demand forecasting by considering trip generation, trip distribution, modal split and traffic assignment sequentially in a fixed order. However, this sequential approach suffers from inconsistency among the level-of-service and flow values in each step of the procedure. In the last two decades, this problem has been addressed by many researchers who have sought to develop combined (or integrated) models that can consider travelers' choices at different stages simultaneously and give consistent results. In this paper, alternative formulations, including a mathematical programming (MP) formulation and variational inequality (VI) formulations, are provided for a combined travel demand model that integrates trip generation, trip distribution, modal split, and traffic assignment using the random utility theory framework. Thus, the proposed alternative formulations not only allow a systematic and consistent treatment of travel choice over different dimensions but also have behavioral richness. Qualitative properties of the formulations are also given to ensure the existence and uniqueness of the solution. Particularly, the model is analyzed for a special but useful case where the probabilistic travel choices are assumed to follow a hierarchical logit model. Furthermore, a self-adaptive Goldstein–Levitin–Polyak (GLP) projection algorithm is adopted for solving this special case.
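A small sketch of the hierarchical (nested) logit structure mentioned in the special case, with hypothetical utilities and scale parameters (the function and parameter names are illustrative, not from the paper): mode choice sits below destination choice, and each destination's utility carries the logsum of its modes, which is what couples the stages consistently.

```python
import math

def logit(utilities, theta=1.0):
    """Multinomial logit choice probabilities."""
    exps = [math.exp(theta * u) for u in utilities]
    s = sum(exps)
    return [e / s for e in exps]

def nested_probabilities(mode_utils, dest_utils, theta_m=1.0, theta_d=0.5):
    """Two-level hierarchical logit: each destination's utility includes
    the logsum (expected maximum utility) over its available modes."""
    logsums = [math.log(sum(math.exp(theta_m * u) for u in modes)) / theta_m
               for modes in mode_utils]
    p_dest = logit([d + ls for d, ls in zip(dest_utils, logsums)], theta_d)
    p_mode = [logit(modes, theta_m) for modes in mode_utils]
    return p_dest, p_mode

# Hypothetical numbers: two destinations, each with (car, transit) utilities
p_dest, p_mode = nested_probabilities([[-1.0, -1.5], [-2.0, -1.2]], [0.5, 0.8])
print(p_dest, p_mode)
```

In a full combined model these probabilities would feed flows back into congested travel times, and the MP or VI formulation characterises the resulting fixed point.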

4.
This paper investigates the effects of intransitive judgements on the consistency of pairwise comparison matrices. Statistical evidence regarding the occurrence of intransitive judgements in pairwise matrices of acceptable consistency is gathered by using a Monte-Carlo simulation, which confirms that a relatively high percentage of comparison matrices satisfying Saaty's CR criterion are ordinally inconsistent. It is also shown that ordinal inconsistency does not necessarily decrease in the group aggregation process, in contrast with cardinal inconsistency. A heuristic algorithm is proposed to improve ordinal consistency by identifying and eliminating intransitivities in pairwise comparison matrices. The proposed algorithm generates near-optimal solutions and outperforms other tested approaches with respect to computation time.
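Detecting the ordinal inconsistencies in question amounts to finding preference 3-cycles. The sketch below only identifies intransitive triads in a reciprocal comparison matrix; the paper's heuristic additionally decides which judgements to revise in order to eliminate them.

```python
def intransitive_triads(A):
    """Ordinally inconsistent triads (i, j, k): i preferred to j, j to k,
    yet k preferred to i; each 3-cycle is reported once, smallest index first.
    A[i][j] > 1 encodes 'i preferred to j' in a reciprocal matrix."""
    n = len(A)
    pref = [[A[i][j] > 1 for j in range(n)] for i in range(n)]
    return [(i, j, k)
            for i in range(n) for j in range(n) for k in range(n)
            if i < j and i < k
            and pref[i][j] and pref[j][k] and pref[k][i]]

# Reciprocal matrix containing an ordinal 3-cycle: 0 > 1, 1 > 2, 2 > 0
A = [[1.0, 2.0, 0.5],
     [0.5, 1.0, 3.0],
     [2.0, 1 / 3, 1.0]]
print(intransitive_triads(A))  # [(0, 1, 2)]
```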

5.
Though inconsistency management in databases and AI has been studied extensively for years, it does not allow the user to specify how he wants to resolve inconsistencies. In real-world applications, users may want to manage or resolve inconsistencies based not only on the data, but also on their own knowledge of the risks involved in decision making based on faulty data. Each user should be empowered to use reasonable policies to deal with his data and his mission needs. In this paper, we start by providing an axiomatic definition of inconsistency management policies (IMPs) that puts this power in the hands of users. Any function satisfying these axioms is an IMP. We then define three broad families of IMPs, and derive several results that show (i) how these policies relate to postulates for the revision of belief bases and to recent research in the area of consistent query answering, and (ii) how they interact with standard relational algebra operators. Finally, we present several approaches to efficiently implement an IMP-based framework.
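A tiny hypothetical sketch of the user-chosen-policy idea: conflicting tuples that share a key are handed to a policy function that decides what survives. The policies shown (keep the newest fact, or cautiously discard the whole conflict) are illustrative only, not the families axiomatised in the paper.

```python
def apply_imp(tuples, key, policy):
    """Group tuples by a key attribute; singleton groups pass through,
    and the user-chosen policy decides what survives in each conflicting
    group (policies here are illustrative sketches)."""
    groups = {}
    for t in tuples:
        groups.setdefault(t[key], []).append(t)
    out = []
    for g in groups.values():
        out.extend(g if len(g) == 1 else policy(g))
    return out

keep_latest = lambda g: [max(g, key=lambda t: t["ts"])]   # trust the newest fact
discard_conflicts = lambda g: []                          # keep nothing disputed

data = [{"id": 1, "salary": 50, "ts": 1},
        {"id": 1, "salary": 60, "ts": 2},
        {"id": 2, "salary": 70, "ts": 1}]
print(apply_imp(data, "id", keep_latest))
```

Different users can thus apply different risk attitudes to the same inconsistent relation, which is precisely the flexibility the axiomatic definition is meant to capture.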

6.
A multiple-interval pseudospectral scheme is developed for solving nonlinear optimal control problems with time-varying delays, which employs collocation at the shifted flipped Jacobi–Gauss–Radau points. The new pseudospectral scheme has the following distinctive features/abilities: (i) it can directly and flexibly solve nonlinear optimal control problems with time-varying delays without the tedious quasilinearization procedure and the uniform mesh restriction on time domain decomposition, and (ii) it provides a smart approach to compute the values of state delay efficiently and stably, and a unified framework for solving standard and delay optimal control problems. Numerical results on benchmark delay optimal control problems including challenging practical engineering problems demonstrate that the proposed pseudospectral scheme is highly accurate, efficient and flexible.
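The core ingredient of any pseudospectral scheme is a differentiation matrix on the collocation nodes. The sketch below builds one via barycentric weights on three simple nodes (not the shifted flipped Jacobi–Gauss–Radau points of the paper) and checks that it differentiates a polynomial exactly.

```python
def diff_matrix(nodes):
    """Differentiation matrix for Lagrange interpolation on the given
    collocation nodes: (D f)(x_i) equals f'(x_i) exactly for polynomials
    of degree < len(nodes)."""
    n = len(nodes)
    w = [1.0] * n                       # barycentric weights
    for i in range(n):
        for j in range(n):
            if j != i:
                w[i] /= nodes[i] - nodes[j]
    D = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                D[i][j] = (w[j] / w[i]) / (nodes[i] - nodes[j])
        D[i][i] = -sum(D[i][j] for j in range(n) if j != i)
    return D

# Differentiate f(x) = x^2 on three nodes: f'(x) = 2x, reproduced exactly
D = diff_matrix([-1.0, 0.0, 1.0])
f = [1.0, 0.0, 1.0]
deriv = [sum(Dij * fj for Dij, fj in zip(row, f)) for row in D]
print(deriv)  # [-2.0, 0.0, 2.0]
```

In an optimal control setting, such a matrix converts the dynamics into algebraic constraints at the nodes; the multiple-interval scheme stitches several such blocks together across the decomposed time domain.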

7.
The ranking of MBA programmes by newspapers and magazines is common and usually controversial. This paper discusses the use of the most popular method of making these rankings via a multicriteria model which uses the weighted sum of a number of performance measures to give an overall score on which selection or ranking may be based. The weights are a quantitative model of the preferences of those making the evaluation. Many methods are available to obtain weights from preference statements so that for any set of preferences a number of different weight sets can be found depending on the method used. Cognitive limits lead to inconsistency in preference judgements so that weights may be subject both to uncertainty and to bias. It is proposed that choosing weights to minimize discrimination between alternatives (not weights) guards against unjustified discrimination between alternatives. Applying the method to data collected by the Financial Times shows the effect of varying the level of discrimination between weights and also the effect of using a reduced data set made necessary by the partial publication of information.
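The weighted-sum model and its sensitivity to the weight set can be sketched in a few lines. The programmes and scores below are hypothetical; the point is that two defensible weight sets reverse the ranking, which is why the choice of weights matters.

```python
def weighted_scores(alternatives, weights):
    """Overall score of each alternative as a weighted sum of its
    (already normalised) performance measures."""
    return {name: sum(w * x for w, x in zip(weights, attrs))
            for name, attrs in alternatives.items()}

def ranking(alternatives, weights):
    scores = weighted_scores(alternatives, weights)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical programmes scored on (salary, research), both in [0, 1]
progs = {"A": (0.9, 0.2), "B": (0.3, 0.8)}
print(ranking(progs, (0.8, 0.2)))  # ['A', 'B']
print(ranking(progs, (0.2, 0.8)))  # ['B', 'A']
```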

8.
A general framework for modeling median type locational decisions, where (i) travel costs and demands may be stochastic, (ii) multiple services or commodities need to be considered, and/or (iii) multiple median type objectives might exist, is presented, using the concept of "multidimensional networks". The classical m-median problem, the stochastic m-median problem, the multicommodity m-median problem and the multiobjective m-median problem are defined within this framework. By an appropriate transformation of variables, the multidimensional m-median problem simplifies to the classical m-median problem but with a K-fold increase in the number of nodes, where K is the number of dimensions of the network. A nested dual approach to solve the resulting classical m-median problem, that uses Erlenkotter's facility location scheme as a subroutine, is presented. Computational results indicate that the procedure may perhaps be the best available one to solve the m-median problem exactly.
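For reference, the classical m-median problem that everything reduces to can be stated in a few lines. The brute-force solver below is only viable for tiny instances (the paper's nested dual approach scales far better) and the line instance is hypothetical.

```python
from itertools import combinations

def m_median(dist, demands, m):
    """Exact m-median by exhaustive search over center sets: choose m
    centers minimising total demand-weighted distance to nearest center."""
    n = len(dist)
    best = (float("inf"), None)
    for centers in combinations(range(n), m):
        cost = sum(d * min(dist[i][c] for c in centers)
                   for i, d in enumerate(demands))
        best = min(best, (cost, centers))
    return best

# Hypothetical instance: four unit-demand nodes on a line at 0, 1, 2, 10
pos = [0, 1, 2, 10]
dist = [[abs(a - b) for b in pos] for a in pos]
print(m_median(dist, [1, 1, 1, 1], 2))  # (2, (1, 3)): medians at 1 and 10
```

The multidimensional reduction in the abstract would replace `dist` by a K-fold enlarged network before calling the same solver.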

9.
In the framework of nonparametric multivariate function estimation we are interested in structural adaptation. We assume that the function to be estimated has the “single-index” structure where neither the link function nor the index vector is known. This article suggests a novel procedure that adapts simultaneously to the unknown index and the smoothness of the link function. For the proposed procedure, we prove a “local” oracle inequality (described by the pointwise seminorm), which is then used to obtain the upper bound on the maximal risk of the adaptive estimator under assumption that the link function belongs to a scale of Hölder classes. The lower bound on the minimax risk shows that in the case of estimating at a given point the constructed estimator is optimally rate adaptive over the considered range of classes. For the same procedure we also establish a “global” oracle inequality (under the L r norm, r < ∞) and examine its performance over the Nikol’skii classes. This study shows that the proposed method can be applied to estimating functions of inhomogeneous smoothness, that is whose smoothness may vary from point to point.

10.
We investigate the construction of stable models of general propositional logic programs. We show that a forward-chaining technique, supplemented by a properly chosen safeguard, can be used to construct stable models of logic programs. Moreover, the proposed method has the advantage that if a program has no stable model, the result of the construction is a stable model of a subprogram. Further, in such a case the proposed method “isolates the inconsistency” of the program, that is, it points to the part of the program responsible for the inconsistency. The results of computations are called stable submodels. We prove that every stable model of a program is a stable submodel. We investigate the complexity issues associated with stable submodels. The number of steps required to construct a stable submodel is polynomial in the sum of the lengths of the rules of the program. In the infinite case the outputs of the forward chaining procedure have much simpler complexity than those for general stable models. We show how to incorporate other techniques for finding models (e.g. Fitting operator, Van Gelder-Ross-Schlipf operator) into our construction.
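The monotone core of forward chaining is easy to sketch for definite (negation-free) programs: fire rules until no new atom is derivable. The paper's safeguard for handling negation, which yields stable submodels, is deliberately omitted from this sketch.

```python
def forward_chain(rules, facts=()):
    """Least model of a definite logic program by forward chaining.
    Rules are (head, body) pairs where body is a tuple of atoms; a rule
    fires when its whole body is already in the model."""
    model = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in model and all(b in model for b in body):
                model.add(head)
                changed = True
    return model

# Program:  a.   b :- a.   c :- a, b.
rules = [("a", ()), ("b", ("a",)), ("c", ("a", "b"))]
print(sorted(forward_chain(rules)))  # ['a', 'b', 'c']
```

Each pass scans every rule, and at most one atom per rule can ever be added, which is why the construction runs in time polynomial in the total rule length.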

11.
Inconsistency measures have been proposed to assess the severity of inconsistencies in knowledge bases of classical logic in a quantitative way. In general, computing the value of inconsistency is a computationally hard task as it is based on the satisfiability problem which is itself NP-complete. In this work, we address the problem of measuring inconsistency in knowledge bases that are accessed in a stream of propositional formulæ. That is, the formulæ of a knowledge base cannot be accessed directly but only once through processing of the stream. This work is a first step towards practicable inconsistency measurement for applications such as Linked Open Data, where huge amounts of information are distributed across the web and a direct assessment of the quality or inconsistency of this information is infeasible due to its size. Here we discuss the problem of stream-based inconsistency measurement on classical logic, in order to make use of existing measures for classical logic. However, it turns out that inconsistency measures defined on the notion of minimal inconsistent subsets are usually not apt to be used in the streaming scenario. In order to address this issue, we adapt measures defined on paraconsistent logics and also present a novel inconsistency measure based on the notion of a hitting set. We conduct an extensive empirical analysis on the behavior of these different inconsistency measures in the streaming scenario, in terms of runtime, accuracy, and scalability. We conclude that for two of these measures, the stream-based variant of the new inconsistency measure and the stream-based variant of the contension inconsistency measure, large-scale inconsistency measurement in streaming scenarios is feasible.  
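A loose, brute-force sketch of a hitting-set flavoured measure (my adaptation for illustration, not the paper's stream-based definition): count how many interpretations are needed so that every formula is satisfied by at least one of them. A consistent base needs a single interpretation and so measures 0.

```python
from itertools import product, combinations

def hitting_measure(formulas, variables):
    """Minimum number h of interpretations such that every formula is
    satisfied by at least one of them, minus 1; returns None if some
    formula is individually unsatisfiable. Exponential, for tiny bases only."""
    worlds = [dict(zip(variables, vals))
              for vals in product([True, False], repeat=len(variables))]
    for h in range(1, len(formulas) + 1):
        for combo in combinations(worlds, h):
            if all(any(f(w) for w in combo) for f in formulas):
                return h - 1
    return None

base = [lambda v: v["a"], lambda v: not v["a"], lambda v: v["b"]]
print(hitting_measure(base, ["a", "b"]))  # 1: two interpretations needed
```

Unlike MIS-based measures, this quantity can be bounded incrementally as formulas arrive, which is the property that makes hitting-set style measures attractive in a streaming setting.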

12.
We study the effect of "ghost forces" for a quasicontinuum method in three dimensions with a planar interface. "Ghost forces" are the inconsistency of the quasicontinuum method across the interface between the atomistic region and the continuum region. Numerical results suggest that "ghost forces" may lead to a negligible error in the solution, while leading to a finite-size error in the gradient of the solution. The error has a layer-like profile, and the interfacial layer width is of O(ε). The error in certain components of the displacement gradient decays algebraically from O(1) to O(ε) away from the interface. A surrogate model is proposed and analyzed, which suggests the same scenario for the effect of "ghost forces". Our analysis is based on the explicit solution of the surrogate model.

13.
A natural one-parameter family of Kähler quantizations of the cotangent bundle T*K of a compact Lie group K, taking into account the half-form correction, was studied in [C. Florentino, P. Matias, J. Mourão, J.P. Nunes, Geometric quantization, complex structures and the coherent state transform, J. Funct. Anal. 221 (2005) 303-322]. In the present paper, it is shown that the associated Blattner-Kostant-Sternberg (BKS) pairing map is unitary and coincides with the parallel transport of the quantum connection introduced in our previous work, from the point of view of [S. Axelrod, S. Della Pietra, E. Witten, Geometric quantization of Chern-Simons gauge theory, J. Differential Geom. 33 (1991) 787-902]. The BKS pairing map is a composition of (unitary) coherent state transforms of K, introduced in [B.C. Hall, The Segal-Bargmann coherent state transform for compact Lie groups, J. Funct. Anal. 122 (1994) 103-151]. Continuity of the Hermitian structure on the quantum bundle, in the limit when one of the Kähler polarizations degenerates to the vertical real polarization, leads to the unitarity of the corresponding BKS pairing map. This is in agreement with the unitarity up to scaling (with respect to a rescaled inner product) of this pairing map, established by Hall.

14.
The clusterwise regression model is used to perform cluster analysis within a regression framework. While the traditional regression model assumes the regression coefficient (β) to be identical for all subjects in the sample, the clusterwise regression model allows β to vary with subjects of different clusters. Since the cluster membership is unknown, the estimation of the clusterwise regression is a tough combinatorial optimization problem. In this research, we propose a “Generalized Clusterwise Regression Model” which is formulated as a mathematical programming (MP) problem. A nonlinear programming procedure (with linear constraints) is proposed to solve the combinatorial problem and to estimate the cluster membership and β simultaneously. Moreover, by integrating the cluster analysis with the discriminant analysis, a clusterwise discriminant model is developed to incorporate parameter heterogeneity into the traditional discriminant analysis. The cluster membership and discriminant parameters are estimated simultaneously by another nonlinear programming model.
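A simple alternating heuristic conveys the idea (the paper instead solves a mathematical-programming formulation that estimates membership and β jointly): assign each point to the cluster whose line fits it best, then refit each cluster's slope by least squares, and repeat. The no-intercept model y = βx and the data below are illustrative assumptions.

```python
def clusterwise_regression(xs, ys, k=2, iters=50):
    """Alternating assignment/refit heuristic for clusterwise regression
    with the one-parameter model y = beta * x per cluster."""
    n = len(xs)
    assign = [i * k // n for i in range(n)]     # crude initial split
    betas = [0.0] * k
    for _ in range(iters):
        for c in range(k):
            pts = [i for i in range(n) if assign[i] == c]
            sxx = sum(xs[i] ** 2 for i in pts)
            betas[c] = (sum(xs[i] * ys[i] for i in pts) / sxx) if sxx else 0.0
        new = [min(range(k), key=lambda c: (ys[i] - betas[c] * xs[i]) ** 2)
               for i in range(n)]
        if new == assign:            # fixed point reached
            break
        assign = new
    return betas, assign

# Two groups generated with slopes 1 and 3 (hypothetical data)
betas, assign = clusterwise_regression([1, 2, 3, 1, 2, 3], [1, 2, 3, 3, 6, 9])
print(sorted(betas))  # [1.0, 3.0]
```

Like k-means, such alternating schemes only reach local optima, which is exactly why a global MP formulation of the combinatorial problem is of interest.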

15.
In practice, parallel-machine job-shop scheduling (PMJSS) is very useful in the development of standard modelling approaches and generic solution techniques for many real-world scheduling problems. In this paper, based on the analysis of structural properties in an extended disjunctive graph model, a hybrid shifting bottleneck procedure (HSBP) algorithm combined with a Tabu Search (TS) metaheuristic algorithm is developed to deal with the PMJSS problem. The original-version shifting bottleneck procedure (SBP) algorithm for job-shop scheduling (JSS) has been significantly improved to solve the PMJSS problem with four novelties: (i) a topological-sequence algorithm is proposed to decompose the PMJSS problem into a set of single-machine scheduling (SMS) and/or parallel-machine scheduling (PMS) subproblems; (ii) a modified Carlier algorithm based on the proposed lemmas and proofs is developed to solve the SMS subproblem; (iii) the Jackson rule is extended to solve the PMS subproblem; (iv) a TS metaheuristic algorithm is embedded under the framework of SBP to optimise the JSS and PMJSS cases. The computational experiments show that the proposed HSBP is very efficient in solving the JSS and PMJSS problems.

16.
In the Capacitated Clustering Problem (CCP), a given set of n weighted points is to be partitioned into p clusters such that the total weight of the points in each cluster does not exceed a given cluster capacity. The objective is to find a set of p centers that minimises the total scatter of points allocated to them. In this paper a new constructive method, a general framework to improve the performance of greedy constructive heuristics, and a problem space search procedure for the CCP are proposed. The constructive heuristic finds patterns of natural subgrouping in the input data using the concept of density of points. Elements of adaptive computation and periodic construction–deconstruction concepts are implemented within the constructive heuristic to develop a general framework for building efficient heuristics. The problem-space search procedure is based on perturbations of input data for which a controlled perturbation strategy, intensification and diversification strategies are developed. The implemented algorithms are compared with existing methods on a standard set of bench-marks and on new sets of large-sized instances. The results illustrate the strengths of our algorithms in terms of solution quality and computational efficiency.

17.
Entropy solutions have been widely accepted as the suitable solution framework for systems of conservation laws in several space dimensions. However, recent results in De Lellis and Székelyhidi Jr (Ann Math 170(3):1417–1436, 2009) and Chiodaroli et al. (2013) have demonstrated that entropy solutions may not be unique. In this paper, we present numerical evidence that state-of-the-art numerical schemes need not converge to an entropy solution of systems of conservation laws as the mesh is refined. Combining these two facts, we argue that entropy solutions may not be suitable as a solution framework for systems of conservation laws, particularly in several space dimensions. We advocate entropy measure-valued solutions, first proposed by DiPerna, as the appropriate solution paradigm for systems of conservation laws. To this end, we present a detailed numerical procedure which constructs stable approximations to entropy measure-valued solutions, and provide sufficient conditions that guarantee that these approximations converge to an entropy measure-valued solution as the mesh is refined, thus providing a viable numerical framework for systems of conservation laws in several space dimensions. A large number of numerical experiments that illustrate the proposed paradigm are presented and are utilized to examine several interesting properties of the computed entropy measure-valued solutions.

18.
In this paper we deal with an n-job, single-machine scheduling problem. All jobs are available from the start, and the objective is to minimize the variance of job flow-times. A heuristic procedure which is based on the complementary pair-exchange principle is proposed. It has been concluded that this heuristic procedure provides improved results (in terms of objective-function value) when compared with other heuristics. Our heuristic procedure has the complexity of O(n log n).
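The objective is easy to evaluate directly, and doing so shows why sequencing matters: with the hypothetical processing times below, a non-monotone (V-shaped) order achieves a much lower flow-time variance than the sorted order. This sketch only evaluates schedules; it is not the paper's pair-exchange heuristic.

```python
def flowtime_variance(proc_times):
    """Variance of job flow-times on a single machine for the given
    sequence; the flow time of the i-th job is the sum of processing
    times up to and including it."""
    flows, t = [], 0
    for p in proc_times:
        t += p
        flows.append(t)
    mean = sum(flows) / len(flows)
    return sum((f - mean) ** 2 for f in flows) / len(flows)

print(flowtime_variance([1, 2, 3, 4]))  # 11.5   (sorted order)
print(flowtime_variance([4, 2, 1, 3]))  # 4.6875 (V-shaped order)
```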

19.
Judgement aggregation is a model of social choice where the space of social alternatives is the set of consistent truth-valuations (‘judgements’) on a family of logically interconnected propositions. It is well known that propositionwise majority voting can yield logically inconsistent judgements. We show that, for a variety of spaces, propositionwise majority voting can yield any possible judgement. By considering the geometry of sub-polytopes of the Hamming cube, we also estimate the number of voters required to achieve all possible judgements. These results generalize the classic results of McGarvey (1953) [13] and Stearns (1959) [22].
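The inconsistency of propositionwise majority voting is easy to demonstrate with the classic discursive-dilemma profile on p, q, and their conjunction pq: every individual voter is consistent, yet the majority judgement set is not.

```python
def propositionwise_majority(profiles):
    """Majority verdict on each proposition taken separately."""
    n = len(profiles)
    return {k: 2 * sum(v[k] for v in profiles) > n for k in profiles[0]}

# Each voter judges 'pq' consistently as the conjunction of p and q
voters = [{"p": True,  "q": True,  "pq": True},
          {"p": True,  "q": False, "pq": False},
          {"p": False, "q": True,  "pq": False}]
maj = propositionwise_majority(voters)
print(maj)  # {'p': True, 'q': True, 'pq': False}: logically inconsistent
```

The paper's geometric results go further, showing that over suitable spaces majority voting can produce not just inconsistent judgements but any judgement at all.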

20.
The probabilistic traveling salesman problem (PTSP) is a topic of theoretical and practical importance in the study of stochastic network problems. It provides researchers with a modeling framework for exploring the stochastic effects in routing problems. This paper proposes three initial solution generators (NN1, NN2, RAN) under a genetic algorithm (GA) framework for solving the PTSP. A set of numerical experiments based on heterogeneous and homogeneous PTSP instances were conducted to test the effectiveness and efficiency of the proposed algorithms. The results from the heterogeneous PTSP show that the average E[τ] values obtained by the three generators under a GA framework are similar to those obtained by the “Previous Best,” but with an average computation time saving of 50.2%. As for the homogeneous PTSP instances, NN1 is a relatively better generator among the three examined, while RAN consistently performs worse than the other two generators in terms of average E[τ] values. Additionally, as compared to previously reported studies, no single algorithm consistently outperformed the others across all homogeneous PTSP instances in terms of the best E[τ] values. The fact that no one initial solution generator consistently performs best in terms of the E[τ] value obtained across all instances in heterogeneous cases, and that the performance of each examined algorithm is dependent on the number of nodes (n) and probability (p) for homogeneous cases, suggests the possibility of a context-dependent phenomenon. Finally, to obtain valid results, researchers are advised to include at least a certain amount of test instances with the same combination of n and p when conducting PTSP experiments.
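The objective E[τ] evaluated throughout the experiments is the expected length of an a priori tour when each node is present only with probability p. The brute-force sketch below enumerates all presence scenarios, so it is only viable for tiny instances (closed forms and sampling are used in practice); the distance matrix is a hypothetical unit triangle.

```python
from itertools import product

def expected_tour_length(tour, dist, p):
    """E[tau] of an a priori PTSP tour with homogeneous presence
    probability p: skip absent nodes and visit the rest in tour order."""
    n = len(tour)
    total = 0.0
    for present in product([0, 1], repeat=n):
        prob = 1.0
        for b in present:
            prob *= p if b else 1 - p
        nodes = [tour[i] for i in range(n) if present[i]]
        if len(nodes) > 1:
            total += prob * sum(dist[nodes[i]][nodes[(i + 1) % len(nodes)]]
                                for i in range(len(nodes)))
    return total

dist = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]   # hypothetical unit triangle
print(expected_tour_length([0, 1, 2], dist, 1.0))  # 3.0: all nodes present
```

With p = 1 the PTSP collapses to the deterministic TSP, while smaller p discounts segments that are likely to be skipped, which is what the initial solution generators are competing to exploit.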


Copyright © Beijing Qinyun Science and Technology Development Co., Ltd.  京ICP备09084417号