Similar Literature
20 similar articles found
1.
Vector logic is a mathematical model of the propositional calculus in which the logical variables are represented by vectors and the logical operations by matrices. In this framework, many tautologies of classical logic are intrinsic identities between operators and, consequently, they are valid beyond the bivalued domain. The operators can be expressed as Kronecker polynomials. These polynomials allow us to show that many important tautologies of classical logic are generated from basic operators via the operations called Type I and Type II products. Finally, a matrix version of the Fredkin gate is described that extends its properties to the many-valued domain, and it is proved that the filtered Fredkin operators are second-degree Kronecker polynomials that cannot be generated by Type I or Type II products. Mathematics Subject Classification: 03B05, 03B50.
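As a rough illustration of the framework (not the paper's own construction), the sketch below encodes the two truth values as orthonormal column vectors s and n, builds negation, conjunction and disjunction as matrices acting on Kronecker products, and checks De Morgan's law as an identity between operators rather than between truth values. The particular vectors and the NumPy formulation are assumptions of this sketch.

```python
import numpy as np

# Truth values as orthonormal column vectors (an assumption of this sketch).
s = np.array([[1.0], [0.0]])   # "true"
n = np.array([[0.0], [1.0]])   # "false"

# Negation swaps s and n; conjunction and disjunction act on Kronecker
# products of two truth vectors, so they are 2 x 4 matrices here.
N = n @ s.T + s @ n.T
C = (s @ np.kron(s, s).T + n @ np.kron(s, n).T
     + n @ np.kron(n, s).T + n @ np.kron(n, n).T)
D = (s @ np.kron(s, s).T + s @ np.kron(s, n).T
     + s @ np.kron(n, s).T + n @ np.kron(n, n).T)

# "true AND false" evaluates to the false vector.
print(np.allclose(C @ np.kron(s, n), n))

# De Morgan's law holds as an identity between the operators themselves,
# not merely between the truth values they produce: N C = D (N x N).
print(np.allclose(N @ C, D @ np.kron(N, N)))
```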

2.
A family of logical systems, which may be regarded as extending equational logic, is studied. The equations f = g of equational logic are generalized to congruence equivalence formulas f ≡ g (mod x), where f and g are terms interpreted as elements of an algebra V of some specified type, and the term x is interpreted as a member of an n-permutable lattice of congruences for V. Formal concepts of proof and derivability from systems of hypotheses are developed. These proofs, like those of equational logic, require only finite algebraic processes, without manipulation of logical quantifiers or connectives. The logical systems are shown to be correct and complete: a well-formed statement is derivable from a system of hypotheses if and only if it is valid in all models of these hypotheses.

3.
In recent years a number of approaches to information modeling have been presented. An information model is then assumed to be expressed in some formalism based on a set of basic concepts and construction rules. Some approaches also include inference rules, but few include consistency criteria for information models. Two different approaches to information modeling have been analyzed within the framework of first-order predicate logic. In particular, their consistency criteria are compared with that of predicate logic. The approaches are completely expressible in predicate logic, and the consistency criteria have a logical counterpart only when a set of implicit assumptions is stated explicitly. This work is supported by the National Swedish Board for Technical Development.

4.
The perplex number system is a generalization of the abstract logical relationships among electrical particles. The inferential logic of the new number system is homologous to the inferential logic of the progression of the atomic numbers. An electrical progression is defined categorically as a sequence of objects with teridentities. Each identity infers corresponding values of an integer, units and a correspondence relation between each unit and its integer. Thus, in this logical system, each perplex numeral contains an exact internal representational structure; it carries an internal message. This structure is a labeled bipartite graph that is homologous to the internal electrical structure of a chemical atom. The formal logical operations are conjunctions and disjunctions. Combinations of conjunctions and disjunctions compose the spatiality of objects. Conjunctions may include the middle term of pairs of propositions with a common term, thereby creating new information. The perplex numerals are used as a universal source of diagrams. The perplex number system, as an abstract generalization of concrete objects and processes, constitutes a new exact notation for chemistry without invoking alchemical symbols. Practical applications of the number system to concrete objects (chemical elements, simple ions and molecules, and the perplex isomers, ethanol and dimethyl ether) are given. In conjunction with the real number system, the relationships between the perplex number system and scientific theories of concrete systems (thermodynamics, intra-molecular dynamics, molecular biology and individual medicine) are described.

5.
We study hidden-variable models from quantum mechanics and their abstractions in purely probabilistic and relational frameworks by means of logics of dependence and independence, which are based on team semantics. We show that common desirable properties of hidden-variable models can be defined in an elegant and concise way in dependence and independence logic. The relationship between different properties and their simultaneous realisability can thus be formulated and proven on a purely logical level, as problems of entailment and satisfiability of logical formulae. Connections between probabilistic and relational entailment in dependence and independence logic allow us to simplify proofs. In many cases, we can establish results on both probabilistic and relational hidden-variable models by a single proof, because one case implies the other, depending on purely syntactic criteria. We also discuss the ‘no-go’ theorems by Bell and Kochen-Specker and provide a purely logical variant of the latter, introducing non-contextual choice as a team-semantical property.
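A minimal sketch of the team-semantical notions referred to above, assuming the standard dependence and independence atoms; the variable names and the toy team of assignments are invented for illustration.

```python
from itertools import product

def dependence(team, xs, y):
    """=(xs, y): any two assignments agreeing on all of xs also agree on y."""
    return all(s[y] == t[y]
               for s, t in product(team, repeat=2)
               if all(s[x] == t[x] for x in xs))

def independence(team, xs, ys):
    """xs independent of ys: every xs-value and ys-value occurring in the
    team also co-occur in some single assignment of the team."""
    return all(any(all(u[x] == s[x] for x in xs) and
                   all(u[y] == t[y] for y in ys) for u in team)
               for s, t in product(team, repeat=2))

# Toy hidden-variable-style team: measurement settings a, b and outcomes x, y.
team = [
    {"a": 0, "b": 0, "x": 0, "y": 0},
    {"a": 0, "b": 1, "x": 0, "y": 1},
    {"a": 1, "b": 0, "x": 1, "y": 0},
    {"a": 1, "b": 1, "x": 1, "y": 1},
]
print(dependence(team, ["a"], "x"))      # True: the outcome x is a function of the setting a
print(independence(team, ["a"], ["b"]))  # True: the settings vary freely in this team
```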

6.
Whilst supported by compelling arguments, the representation of uncertainty by means of (subjective) probability does not enjoy a unanimous consensus. A substantial part of the relevant criticisms point to its alleged inadequacy for representing ignorance as opposed to uncertainty. The purpose of this paper is to show how a strong justification for taking belief as probability, namely the Dutch Book argument, can be extended naturally so as to provide a logical characterization of coherence for imprecise probability, a framework which is widely believed to accommodate some fundamental features of reasoning under ignorance. The appropriate logic for our purposes is an algebraizable logic whose equivalent algebraic semantics is a variety of MV-algebras with an additional internal unary operation representing upper probability (these algebras will be called UMV-algebras).

7.
The logic CD is an intermediate logic (stronger than intuitionistic logic and weaker than classical logic) which exactly corresponds to the Kripke models with constant domains. It is known that the logic CD has a Gentzen-type formulation called LD (which is the same as LK except that the (→) and (¬) rules are replaced by the corresponding intuitionistic rules) and that the cut-elimination theorem does not hold for LD. In this paper we present a modification of LD and prove the cut-elimination theorem for it. Moreover we prove a “weak” version of the cut-elimination theorem for LD, saying that all “cuts” except some special forms can be eliminated from a proof in LD. From these cut-elimination theorems we obtain some corollaries on syntactical properties of CD: fragments collapsing into intuitionistic logic, Harrop disjunction and existence properties, and a fact on the number of logical symbols in the axiom of CD. Mathematics Subject Classification: 03B55, 03F05.

8.
The study of long-run equilibrium processes is a significant component of economic and finance theory. The Johansen technique for identifying the existence of such long-run stationary equilibrium conditions among financial time series allows the identification of all potential linearly independent cointegrating vectors within a given system of eligible financial time series. The practical application of the technique may be restricted, however, by the precondition that the underlying data generating process fits a finite-order vector autoregression (VAR) model with white noise. This paper studies an alternative method for determining cointegrating relationships without such a precondition. The method is simple to implement through commonly available statistical packages. This 'residual-based cointegration' (RBC) technique uses the relationship between cointegration and univariate Box-Jenkins ARIMA models to identify cointegrating vectors through the rank of the covariance matrix of the residual processes which result from the fitting of univariate ARIMA models. The RBC approach for identifying multivariate cointegrating vectors is explained and then demonstrated through simulated examples. The RBC and Johansen techniques are then both implemented using several real-life financial time series.
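A rough sketch of the RBC mechanics as described in the abstract, assuming a Python environment with statsmodels; the ARIMA orders, the simulated data and the rank tolerance are illustrative choices rather than the paper's recommendations, and the full procedure for reading cointegrating vectors off the covariance matrix is not reproduced.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
T = 500
common = np.cumsum(rng.normal(size=T))              # shared stochastic trend
y1 = common + rng.normal(scale=0.05, size=T)
y2 = 2.0 * common + rng.normal(scale=0.05, size=T)  # cointegrated with y1

residuals = []
for y in (y1, y2):
    fit = ARIMA(y, order=(1, 1, 1)).fit()           # univariate Box-Jenkins fit
    residuals.append(fit.resid[1:])                  # drop the differencing start-up value

R = np.column_stack(residuals)
cov = np.cov(R, rowvar=False)
eig = np.linalg.eigvalsh(cov)
print("residual covariance eigenvalues:", eig)
# A near-zero eigenvalue signals a cointegrating direction; the tolerance
# below is an illustrative choice, not part of the published procedure.
print("numerical rank:", np.linalg.matrix_rank(cov, tol=1e-2 * eig.max()))
```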

9.
Chords are not pure sets of tones or notes. They are mainly characterized by their matrices. A chord matrix is the pattern of all the lengths of intervals given without further context. Chords are well-structured invariants. They show their inner logical form. This opens up the possibility to develop a molecular logic of chords. Chords are our primitive, but, nevertheless, already interrelated expressions. The logical space of internal harmony is our well-known chromatic scale represented by an infinite line of integers. Internal harmony is nothing more than the pure interrelatedness of two or more chords. We consider three cases: (a) chords inferentially related to subchords, (b) pairs of chords in the space of major–minor tonality and (c) arbitrary chords as arguments of unary chord operators in relation to their outputs. One interesting result is that chord negation transforms any pure major chord into its pure minor chord and vice versa. Another one is the fact that the negation of chords with symmetric matrices does not change anything. A molecular logic of chords is mainly characterized by combining general rules for chord operators with the inner logical form of their arguments.
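One concrete and purely illustrative reading of these claims: take a chord to be a set of pitch classes, take its "matrix" to be the cyclic pattern of successive interval lengths, and take chord negation to be pitch-class inversion x -> -x (mod 12). Whether these are exactly the paper's definitions is an assumption of this sketch; under them, a major triad is sent to a minor triad and a symmetric chord (the augmented triad) is left fixed.

```python
def interval_pattern(chord):
    """Cyclic pattern of successive interval lengths of a pitch-class set."""
    ps = sorted(set(p % 12 for p in chord))
    return tuple((b - a) % 12 for a, b in zip(ps, ps[1:] + ps[:1]))

def negate(chord):
    """Pitch-class inversion x -> -x (mod 12), taken here as 'chord negation'."""
    return [(-p) % 12 for p in chord]

def same_chord_type(c1, c2):
    """Equal cyclic interval patterns, i.e. equal up to transposition."""
    p1, p2 = interval_pattern(c1), interval_pattern(c2)
    return any(p1 == p2[i:] + p2[:i] for i in range(len(p2)))

c_major   = [0, 4, 7]    # C E G,  pattern (4, 3, 5)
c_minor   = [0, 3, 7]    # C Eb G, pattern (3, 4, 5)
augmented = [0, 4, 8]    # C E G#, the "symmetric" pattern (4, 4, 4)

print(same_chord_type(negate(c_major), c_minor))                       # True: major goes to minor
print(sorted(negate(augmented)) == sorted(p % 12 for p in augmented))  # True: symmetric chord is fixed
```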

10.
We construct a countable family of extensions of the logic of finite chains (the Dummett logic) in the language containing the standard logical connectives and a new connective (irreflexive modality), each of which determines in the Dummett logic a new logical connective in the sense of Novikov. Two arbitrary logics on this list are incompatible over the Dummett logic; i.e., their union contains a formula absent from the Dummett logic.

11.
Is logic, feasibly, a product of natural selection? In this paper we treat this question as dependent upon the prior question of where logic is founded. After excluding other possibilities, we conclude that logic resides in our language, in the shape of inferential rules governing the logical vocabulary of the language. This means that knowledge of (the laws of) logic is inseparable from the possession of the logical constants they govern. In this sense, logic may be seen as a product of natural selection: the emergence of logic requires the development of creatures who can wield structured languages of a specific complexity, and who are capable of putting the languages to use within specific discursive practices.

12.
How should a scientist argue when the data are insufficient to allow him to reason by classical or statistical models? After all, in most real-world situations - in business or in war - that is the unhappy norm. In such cases the ordinary man instinctively argues by analogy, as Leibniz long ago showed; indeed if time presses, there is no alternative. The trouble, however, is that if we then include such arguments in our scientific reasoning, then, as we all know, this can lead to false conclusions. To escape from this dilemma, is there any alternative logical basis from which we can start our reasoning? What is proposed here is that instead of the well-tried three-valued logic of true, false or probable, we should adopt the three-valued logic of true, false or possible. A rational system for analogue arguments can then be developed by these means, and with it the advantages brought by the use of symbols and so on. Such a method, however, includes many necessary changes as to how to structure our problems and how to apply new criteria; and it is some of these changes that are outlined in this note. For instance, it outlines the meaning of ‘causal relationships’ in analogue arguments, as well as how to define ‘rational choice’ in terms of analogue propositions. The advantage throughout is that this allows us to argue with less rather than more data.
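The note does not fix truth tables here, so the sketch below simply mechanises one familiar candidate, the strong Kleene connectives, with "possible" as the middle value; treating this as the paper's own system would be an over-reading.

```python
from itertools import product

# Three values; 0.5 plays the role of "possible" (a modelling assumption).
T, P, F = 1.0, 0.5, 0.0
names = {T: "true", P: "possible", F: "false"}

def neg(a):     return 1.0 - a
def conj(a, b): return min(a, b)
def disj(a, b): return max(a, b)

# Truth table for conjunction under these (strong Kleene) conventions.
for a, b in product((T, P, F), repeat=2):
    print(f"{names[a]:>8} AND {names[b]:<8} -> {names[conj(a, b)]}")

# The excluded middle is no longer a tautology: "possible or not possible"
# only comes out possible, which fits the idea of arguing with less data.
print(names[disj(P, neg(P))])
```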

13.
Given a square matrix and single right and left starting vectors, the classical nonsymmetric Lanczos process generates two sequences of biorthogonal basis vectors for the right and left Krylov subspaces induced by the given matrix and vectors. In this paper, we propose a Lanczos-type algorithm that extends the classical Lanczos process for single starting vectors to multiple starting vectors. Given a square matrix and two blocks of right and left starting vectors, the algorithm generates two sequences of biorthogonal basis vectors for the right and left block Krylov subspaces induced by the given data. The algorithm can handle the most general case of right and left starting blocks of arbitrary sizes, while all previously proposed extensions of the Lanczos process are restricted to right and left starting blocks of identical sizes. Other features of our algorithm include a built-in deflation procedure to detect and delete linearly dependent vectors in the block Krylov sequences, and the option to employ look-ahead to remedy the potential breakdowns that may occur in nonsymmetric Lanczos-type methods.

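For orientation, the sketch below implements the classical single-vector two-sided Lanczos process that the paper generalizes; the block version, the deflation procedure and look-ahead are precisely what is omitted here, so a serious breakdown simply raises an error.

```python
import numpy as np

def two_sided_lanczos(A, v, w, m):
    """Return V, W, T with W.T @ V ~ I_m and W.T @ A @ V ~ T (tridiagonal)."""
    n = A.shape[0]
    V = np.zeros((n, m)); W = np.zeros((n, m))
    alpha = np.zeros(m); beta = np.zeros(m); delta = np.zeros(m)
    V[:, 0] = v / np.linalg.norm(v)
    W[:, 0] = w / (w @ V[:, 0])          # enforce w_1^T v_1 = 1
    for j in range(m):
        Av = A @ V[:, j]
        alpha[j] = W[:, j] @ Av
        vhat = Av - alpha[j] * V[:, j] - (beta[j] * V[:, j - 1] if j else 0.0)
        what = A.T @ W[:, j] - alpha[j] * W[:, j] - (delta[j] * W[:, j - 1] if j else 0.0)
        if j == m - 1:
            break
        s = what @ vhat
        if abs(s) < 1e-13:
            raise RuntimeError("serious breakdown; look-ahead would be needed")
        delta[j + 1] = np.sqrt(abs(s))
        beta[j + 1] = s / delta[j + 1]
        V[:, j + 1] = vhat / delta[j + 1]
        W[:, j + 1] = what / beta[j + 1]
    T = np.diag(alpha) + np.diag(beta[1:], 1) + np.diag(delta[1:], -1)
    return V, W, T

rng = np.random.default_rng(1)
A = rng.normal(size=(60, 60))
V, W, T = two_sided_lanczos(A, rng.normal(size=60), rng.normal(size=60), 8)
print("biorthogonality error :", np.abs(W.T @ V - np.eye(8)).max())
print("tridiagonality error  :", np.abs(W.T @ A @ V - T).max())
```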


14.
The Laguerre transform, introduced by Keilson and Nunn (1979) and Keilson, Nunn, Sumita (1981), provides an algorithmic basis for the computation of multiple convolutions in conjunction with other algebraic and summation operations. The methods enable one to evaluate numerically a variety of results in applied probability and statistics that have been available only formally. For certain more complicated models, the formulation must be extended. In this paper we establish the matrix Laguerre transform, appropriate for the study of semi-Markov processes and Markov renewal processes, as an extension of the scalar Laguerre transform. The new formalism enables one to calculate matrix convolutions and other algebraic operations in matrix form. As an application, a matrix renewal function is evaluated and its limit theorem is numerically exhibited.

15.
In this paper we discuss some practical aspects of using type theory as a programming and specification language, where the viewpoint is to use it not only as a basis for program synthesis but also as a programming language with a programming logic allowing us to do ordinary verification. The subset type has been added to type theory in order to avoid irrelevant information in programs. We give an example of a proof which illustrates the problems that may occur if the subset type is used in specifications when we have the standard interpretation of propositions as types. Harrop-formulas and Squash are then discussed as solutions to these problems. It is argued that they are not acceptable from a practical point of view. An extension of the theory to include the two new judgment forms: A is a proposition, and A is true, is then given and explained in terms of the old theory. The logical constants are no longer identified with the corresponding type theoretical constants, but propositions are interpreted as Gödel formulas, which allow us to introduce and justify logical rules similar to rules for classical logic. The interpretation is extended to include predicates defined by using reflections of the ordinary definition of Gödel formulas in a type of small propositions. The programming example is then revisited and stronger elimination rules are discussed.

16.
This paper deals with questions raised by R.A. Brualdi concerning the structure matrix of (0,1)-matrices with fixed row and column sum vectors; namely, determining its rank and—in case the matrices are square—its eigenvalues. It turns out that the trace of the structure matrix has some interesting properties. The rank of the structure matrix has the values 1, 2, or 3; this yields a classification of econometric models.
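The structure matrix in question is, in Ryser's formulation (which this sketch assumes), t_kl = k*l + (r_{k+1} + ... + r_m) - (s_1 + ... + s_l); writing it as a sum of three rank-one matrices makes the rank bound of 3 immediate. The sketch below, with arbitrary example vectors, computes the matrix and its numerical rank.

```python
import numpy as np

def structure_matrix(R, S):
    """Ryser's structure matrix t_kl = k*l + sum_{i>k} r_i - sum_{j<=l} s_j."""
    R, S = np.asarray(R), np.asarray(S)
    k = np.arange(len(R) + 1)[:, None]
    l = np.arange(len(S) + 1)[None, :]
    row_tails = np.concatenate(([R.sum()], R.sum() - np.cumsum(R)))[:, None]
    col_heads = np.concatenate(([0], np.cumsum(S)))[None, :]
    return k * l + row_tails - col_heads

T = structure_matrix([2, 1, 1], [2, 1, 1])   # an arbitrary small example
print(T)
# k*l, the row tails and the column heads are each rank one, so rank <= 3.
print("rank:", np.linalg.matrix_rank(T))
# Gale-Ryser feasibility check for nonincreasing R and S.
print("class nonempty:", bool((T >= 0).all()))
```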

17.
On a structuralist account of logic, the logical operators, as well as modal operators, are defined by the specific ways that they interact with respect to implication. As a consequence, the same logical operator (conjunction, negation, etc.) can appear to be very different with a variation in the implication relation of a structure. We illustrate this idea by showing that certain operators that are usually regarded as extra-logical concepts (Tarskian algebraic operations on theories, mereological sums, products and negates of individuals, intuitionistic operations on mathematical problems, epistemic operations on certain belief states) are simply the logical operators that are deployed in different implication structures. That makes certain logical notions more omnipresent than one would think. Mathematics Subject Classification (2000): Primary 03B22; Secondary 03B20, 03B42, 03B60

18.
“Setting” n-Opposition
Our aim is to show that, by translating the modal graphs of Moretti’s “n-opposition theory” (2004) into set theory with a suitable device (identifying logical modal formulas with appropriate subsets of a characteristic set), one can, in a constructive and exhaustive way, by means of a simple recurring combinatory, exhibit all so-called “logical bi-simplexes of dimension n” (or n-oppositional figures, that is, the logical squares, logical hexagons, logical cubes, etc.) contained in the logic produced by any given modal graph (an exhaustiveness which was not possible before). In this paper we shall handle explicitly the classical case of the so-called 3(3)-modal graph (which is, among others, the one of S5), arriving at a very elegant tetraicosahedronal geometrisation of this logic.
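A hedged set-theoretic sketch of the device: formulas are identified with non-empty proper subsets of a small characteristic set, the classical opposition relations are read off from set operations, and the squares of opposition are then enumerated mechanically. The three-element universe and the textbook definitions of the relations are assumptions of this illustration.

```python
from itertools import combinations

U = frozenset({1, 2, 3})                       # the characteristic set (3-element case)
formulas = [frozenset(c) for r in (1, 2) for c in combinations(U, r)]   # 6 formulas

def contradictory(a, b): return a | b == U and not (a & b)
def contrary(a, b):      return not (a & b) and a | b != U
def subcontrary(a, b):   return a | b == U and bool(a & b)
def subaltern(a, b):     return a < b          # a entails b

# Enumerate every (A, E, I, O) satisfying the classical square of opposition.
squares = [(a, e, i, o)
           for a in formulas for e in formulas for i in formulas for o in formulas
           if contrary(a, e) and subcontrary(i, o)
           and contradictory(a, o) and contradictory(e, i)
           and subaltern(a, i) and subaltern(e, o)]

print(len(squares))   # the squares found assemble into the logical hexagon
print(squares[0])     # one of them, as four subsets of U
```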

19.
Mathematical models based on logical relations: theory and analysis of the logic model
Studying practical problems by means of mathematical models is a standard method of modern scientific research, and the models usually adopted are equations of various kinds. Using equations as the research tool, however, also raises many difficulties; for example, they cannot be applied to situations that are non-computable or that lack any notion of quantity, so many problems cannot be discussed at all. Starting from propositions and using the concepts and methods of mathematical logic, this paper establishes a general theory of logic models of practical significance and analyzes some of their basic properties. A logic model can be regarded as a generalization of the traditional model.

20.
We describe an implementation of Conjugate Gradient-type iterative algorithms for problems with general sparsity patterns on a vector processor with a hierarchy of memories, such as the IBM 3090/VF. The implementation relies on the wavefront approach to vectorize the solution of the two sparse triangular systems that arise when using ILU type preconditioners. The data structure is the key to an effective implementation of sparse computational kernels on a vector processor. A data structure is a combination of a layout of the matrix coefficients and ordering schemes for the vectors to increase data locality. With the data structure we describe, we achieve comparable performance on both the matrix-vector product and the solution of the sparse triangular systems on a variety of real problems, such as those arising from large scale reservoir simulation or structural analysis.
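The wavefront idea for the sparse triangular solves can be sketched as level scheduling: unknowns are grouped so that each level depends only on earlier ones and can be processed as one vector operation. The SciPy sketch below, with an invented random test matrix, shows the scheduling and the solve; the data layouts that the paper is actually concerned with are not modelled here.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def level_schedule(L):
    """Group the rows of a sparse lower-triangular CSR matrix into wavefronts."""
    n = L.shape[0]
    level = np.zeros(n, dtype=int)
    for i in range(n):
        cols = L.indices[L.indptr[i]:L.indptr[i + 1]]
        deps = cols[cols < i]
        level[i] = level[deps].max() + 1 if deps.size else 0
    return [np.flatnonzero(level == k) for k in range(level.max() + 1)]

def solve_lower_by_levels(L, b):
    """Solve L x = b wavefront by wavefront; rows inside a level are independent."""
    x = np.zeros(L.shape[0])
    diag = L.diagonal()
    for rows in level_schedule(L):       # sequential over wavefronts
        for i in rows:                   # independent within a wavefront (vectorizable)
            lo, hi = L.indptr[i], L.indptr[i + 1]
            cols, vals = L.indices[lo:hi], L.data[lo:hi]
            off = cols < i
            x[i] = (b[i] - vals[off] @ x[cols[off]]) / diag[i]
    return x

# Tiny check against SciPy's own triangular solver on an invented sparse matrix.
rng = np.random.default_rng(2)
A = sp.random(50, 50, density=0.1, random_state=2).tolil()
A.setdiag(1.0)
L = sp.csr_matrix(sp.tril(A))
b = rng.normal(size=50)
print(np.allclose(solve_lower_by_levels(L, b), spla.spsolve_triangular(L, b, lower=True)))
```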
