Similar Documents
20 similar documents were retrieved.
1.
This expository paper on Aristotle’s prototype underlying logic is intended for a broad audience that includes non-specialists. It requires as background a discussion of Aristotle’s demonstrative logic. Demonstrative logic or apodictics is the study of demonstration as opposed to persuasion. It is the subject of Aristotle’s two-volume Analytics, as its first sentence says. Many of Aristotle’s examples are geometrical. A typical geometrical demonstration requires a theorem that is to be demonstrated, known premises from which the theorem is to be deduced, and a deductive logic by which the steps of the deduction proceed. Every demonstration produces (or confirms) knowledge of (the truth of) its conclusion for every person who comprehends the demonstration. Aristotle presented a general truth-and-consequence theory of demonstration meant to apply to all demonstrations: a demonstration is an extended argumentation that begins with premises known to be truths and that involves a chain of reasoning showing by deductively evident steps that its conclusion is a consequence of its premises. In short, a demonstration is a deduction whose premises are known to be true. Aristotle’s general theory of demonstration required a prior general theory of deduction presented in the Prior Analytics. His general immediate-deduction-chaining theory of deduction was meant to apply to all deductions: any deduction that is not immediately evident is an extended argumentation that involves a chaining of immediately evident steps that shows its final conclusion to follow logically from its premises. His deductions, both direct and indirect, were rule-based and not tautology-based. The idea of tautology-based deduction, which dominated modern logic in the early years of the 1900s, is nowhere to be found in Analytics. Rule-based (or “natural”) deduction was rediscovered by modern logicians. To illustrate his general theory of deduction, Aristotle presented a prototype: an ingeniously simple and mathematically precise special case traditionally known as the categorical syllogistic. With reference only to propositions of the four so-called categorical forms, he painstakingly worked out exactly what those immediately evident deductive steps are and how they are chained to complete deductions. In his specialized prototype theory, Aristotle explained how to deduce from a given categorical premise set, no matter how large, any categorical conclusion implied by the given set. He did not extend this treatment to non-categorical deductions, thus setting a program for future logicians. The prototype, categorical syllogistic, was seen by Boole as a “first approximation” to a comprehensive logic. Today, however, it appears more as the first of the dozens of logics already created and as the first exemplification of a family that continues to expand.
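A standard textbook illustration of a categorical deduction (not drawn from the paper itself) is the first-figure mood traditionally called Barbara, an immediately evident step of exactly the kind Aristotle chains together:
\[
\text{Every } B \text{ is } C,\quad \text{Every } A \text{ is } B \;\;\vdash\;\; \text{Every } A \text{ is } C .
\]
Longer categorical deductions are completed by chaining such evident steps; for example, from “Every A is B”, “Every B is C”, and “Every C is D” one reaches “Every A is D” by two applications of Barbara.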

2.
In this exploratory paper we propose a framework for the deduction apparatus of multi-valued logics based on the idea that a deduction apparatus has to be a tool for managing information about truth values, rather than the truth values of the formulas themselves. This is obtained by embedding the algebraic structure V defined by the set of truth values into a bilattice B. The intended interpretation is that the elements of B are pieces of information about the elements of V. The resulting formalisms are particularized in the framework of fuzzy logic programming. Since we see fuzzy control as a chapter of multi-valued logic programming, this suggests a new and powerful approach to fuzzy control based on positive and negative conditions.
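The abstract does not spell out B. A common concrete example of such a bilattice of "information states" over truth values in V = [0, 1] is the set of subintervals [lo, hi] of [0, 1]; the sketch below uses that assumption purely for illustration and is not the paper's construction.

```python
# Minimal sketch (not the paper's construction): intervals [lo, hi] over [0, 1],
# read as partial information about an unknown truth value in V = [0, 1].
# Knowledge order: narrower intervals carry more information.

from dataclasses import dataclass

@dataclass(frozen=True)
class Info:
    lo: float  # best known lower bound on the truth value
    hi: float  # best known upper bound on the truth value

    def meet_k(self, other):   # consensus: what both pieces of information agree on
        return Info(min(self.lo, other.lo), max(self.hi, other.hi))

    def join_k(self, other):   # combination: merge both constraints (may be inconsistent)
        return Info(max(self.lo, other.lo), min(self.hi, other.hi))

    def is_consistent(self):
        return self.lo <= self.hi

# Example: one source says the truth value is at least 0.6, another says at most 0.8.
a, b = Info(0.6, 1.0), Info(0.0, 0.8)
print(a.join_k(b))                   # Info(lo=0.6, hi=0.8): the combined information
print(a.join_k(b).is_consistent())   # True
```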

3.
In this paper, we introduce the notion of dual Post’s negation and an infinite class of Dual Post’s finitely-valued logics which differ from Post’s ones with respect to the definitions of negation and the sets of designated truth values. We present adequate natural deduction systems for all Post’s k-valued (\(k\geqslant 3\)) logics as well as for all Dual Post’s k-valued logics.
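The abstract does not define the two negations. As a point of reference only, Post's k-valued logics are standardly presented with a cyclic negation on the truth values {0, 1, …, k−1}; the "dual" negation sketched below is merely a guess (the reverse cycle) used for illustration and is not taken from the paper.

```python
# Reference sketch only: the standard cyclic (Post) negation on values {0, ..., k-1}.
# The "dual" negation below is an assumption (reverse cycle), NOT the paper's definition.

def post_negation(x: int, k: int) -> int:
    return (x + 1) % k          # cyclic shift, as in Post's k-valued logics

def dual_negation_guess(x: int, k: int) -> int:
    return (x - 1) % k          # assumed reverse cycle -- purely illustrative

k = 3
print([post_negation(x, k) for x in range(k)])        # [1, 2, 0]
print([dual_negation_guess(x, k) for x in range(k)])  # [2, 0, 1]
```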

4.
Let χ (mod q), q > 1, be a primitive Dirichlet character. We first present a detailed account of Linnik’s deduction of the functional equation of L(s, χ) from the functional equation of ζ(s). Then we show that the opposite deduction can be obtained by a suitable modification of the method, involving finer arithmetic arguments.
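For context, these are the standard statements involved (quoted in textbook form, not from the paper). The functional equation of the Riemann zeta function is
\[
\zeta(s) \;=\; 2^{s}\pi^{s-1}\sin\!\Big(\frac{\pi s}{2}\Big)\,\Gamma(1-s)\,\zeta(1-s),
\]
and, with \( \kappa = 0 \) or \( 1 \) according as \( \chi(-1) = 1 \) or \( -1 \), and \( \tau(\chi) \) the Gauss sum, the functional equation of L(s, χ) for primitive χ mod q reads
\[
\Big(\frac{q}{\pi}\Big)^{\frac{s+\kappa}{2}}\Gamma\!\Big(\frac{s+\kappa}{2}\Big)L(s,\chi)
\;=\; \frac{\tau(\chi)}{i^{\kappa}\sqrt{q}}\,
\Big(\frac{q}{\pi}\Big)^{\frac{1-s+\kappa}{2}}\Gamma\!\Big(\frac{1-s+\kappa}{2}\Big)L(1-s,\overline{\chi}).
\]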

5.
The article investigates a system of polymorphically typed combinatory logic which is equivalent to Gödel’s T. A notion of (strong) reduction is defined over terms of this system and it is proved that the class of well-formed terms is closed under both bracket abstraction and reduction. The main new result is that the number of contractions needed to reduce a term to normal form is computed by an \(\varepsilon_0\)-recursive function. The ordinal assignments used to obtain this result are also used to prove that the system under consideration is indeed equivalent to Gödel’s T. It is hoped that the methods used here can be extended so as to obtain similar results for stronger systems of polymorphically typed combinatory terms. An interesting corollary of such results is that they yield ordinally informative proofs of normalizability for sub-systems of second-order intuitionist logic, in natural deduction style.
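As background (standard definitions, not specific to the paper): Gödel's T extends typed combinatory logic or the simply typed λ-calculus with natural numbers \(0\), \(S\) and, at each type \(\sigma\), a recursor \(\mathsf{R}_\sigma : \sigma \rightarrow (N \rightarrow \sigma \rightarrow \sigma) \rightarrow N \rightarrow \sigma\) whose contraction rules are
\[
\mathsf{R}_\sigma\,u\,v\,0 \;\rhd\; u,
\qquad
\mathsf{R}_\sigma\,u\,v\,(S\,n) \;\rhd\; v\,n\,(\mathsf{R}_\sigma\,u\,v\,n),
\]
so a normalization argument must bound how many such contractions a term can generate; the paper's \(\varepsilon_0\)-recursive bound is a bound of exactly this kind.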

6.
In this Note we present a formal scaling method that allows for the deduction from three-dimensional linearized elasticity of the equations of shearable structures such as Reissner–Mindlin's equations for plates and Timoshenko's equations for rods, as well as other models of thin structures. This method is based on the requirement that a scaled energy functional possibly including second-gradient terms stay bounded in the limit of vanishing ‘thinness’. To cite this article: B. Miara, P. Podio-Guidugli, C. R. Acad. Sci. Paris, Ser. I 343 (2006).
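For orientation, the classical target model (quoted in its standard textbook form and one common sign convention, not from the Note itself): the Timoshenko equations couple the transverse deflection w and the cross-section rotation φ of a rod through
\[
\rho A\,\frac{\partial^{2} w}{\partial t^{2}}
= \frac{\partial}{\partial x}\Big[\kappa G A\Big(\frac{\partial w}{\partial x}-\varphi\Big)\Big]+q,
\qquad
\rho I\,\frac{\partial^{2} \varphi}{\partial t^{2}}
= \frac{\partial}{\partial x}\Big(E I\,\frac{\partial \varphi}{\partial x}\Big)
+\kappa G A\Big(\frac{\partial w}{\partial x}-\varphi\Big),
\]
and the point of the scaling method is to recover such shear-deformable models as thin limits of three-dimensional linearized elasticity.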

7.
We give a simple proof-theoretic argument showing that Glivenko’s theorem for propositional logic and its version for predicate logic follow as an easy consequence of the deduction theorem, which also proves some Glivenko-type theorems relating intermediate predicate logics between intuitionistic and classical logic. We consider two schemata, the double negation shift (DNS) and the one consisting of instances of the principle of excluded middle for sentences (REM). We prove that both schemata combined derive classical logic, while each one of them provides a strictly weaker intermediate logic, and neither of them is derivable from the other. We show that over every intermediate logic there exists a maximal intermediate logic for which Glivenko’s theorem holds. We also deduce a characterization of DNS as the weakest (with respect to derivability) scheme that, added to REM, derives classical logic.
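For reference, the statements involved (standard formulations, not quoted from the paper): Glivenko's theorem says that for propositional formulas φ,
\[
\vdash_{\mathbf{CPC}} \varphi \;\iff\; \vdash_{\mathbf{IPC}} \lnot\lnot\varphi,
\]
while the two schemata compared in the paper are the double negation shift
\[
\mathrm{DNS}:\quad \forall x\,\lnot\lnot A(x) \;\rightarrow\; \lnot\lnot\,\forall x\,A(x),
\]
and REM, the excluded middle \( \sigma \vee \lnot\sigma \) restricted to sentences σ (formulas without free variables).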

8.
Anne Patel & Maxine Pfannkuch, ZDM 2018, 50(7): 1197–1212
Some researchers advocate a statistical modeling approach to inference that draws on students’ intuitions about factors influencing phenomena and that requires students to build models. Such a modeling approach to inference became possible with the creation of TinkerPlots Sampler technology. However, little is known about what statistical modeling reasoning students need to acquire. Drawing and building on previous research, this study aims to uncover the statistical modeling reasoning students need to develop. A design-based research methodology employing Model Eliciting Activities was used. The focus of this paper is on two 11-year-old students as they engaged with a bag weight task using TinkerPlots. Findings indicate that these students seem to be developing the ability to build models, investigate and posit factors, consider variation and make decisions based on simulated data. From the analysis an initial statistical modeling framework is proposed. Implications of the findings are discussed.

9.
Information spreading in DTNs (Delay Tolerant Networks) adopts a store–carry–forward method, and nodes receive messages from other nodes directly. However, in this communication mode it is hard to judge whether the information is trustworthy. In this case, a node may observe other nodes’ behaviors. At present, there is no theoretical model describing how a node’s trust level varies. In addition, because connectivity in a DTN is uncertain, it is hard for a node to obtain the global state of the network. Therefore, a rational model of a node’s trust level should be a function of the node’s own observations. For example, if a node finds k nodes carrying a message, it may trust the information with probability p(k). This paper does not explore the real distribution of p(k), but instead presents a unifying theoretical framework to evaluate the performance of information spreading in the above setting. This framework is an extension of the traditional SI (susceptible-infected) model and is applicable whatever distribution p(k) follows. Simulations based on both synthetic and real motion traces show the accuracy of the framework. Finally, we explore the impact of the nodes’ behaviors for certain special distributions through numerical results.
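The abstract gives no formulas, so the sketch below is only a toy discrete-time simulation in the spirit it describes: nodes meet at random (homogeneous mixing, as in a basic SI model), each susceptible node counts how many distinct carriers it has observed, and it accepts (trusts) the message with probability p(k) once it has seen k carriers. All parameter names and values are invented for the illustration and are not the paper's framework.

```python
import random
from collections import defaultdict

def simulate(n=100, meet_rate=0.005, steps=300,
             p=lambda k: min(1.0, 0.3 * k), seed=0):
    """Toy SI-style spread where trust depends on how many carriers a node has observed.

    n          -- number of nodes (node 0 starts as the only carrier)
    meet_rate  -- probability that a given susceptible-carrier pair meets in one step
    p(k)       -- probability of accepting the message after observing k carriers
    """
    rng = random.Random(seed)
    carriers = {0}
    seen = defaultdict(set)            # node -> set of carriers it has observed so far
    history = []
    for _ in range(steps):
        for u in range(n):
            if u in carriers:
                continue
            for v in list(carriers):   # random pairwise meetings with current carriers
                if rng.random() < meet_rate:
                    seen[u].add(v)
            k = len(seen[u])
            if k > 0 and rng.random() < p(k):
                carriers.add(u)        # u now trusts and carries the message
        history.append(len(carriers))
    return history

print(simulate()[-1])  # number of carriers at the end of the run
```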

10.
David O. Tall, ZDM 2007, 39(1–2): 145–154
In this paper I formulate a basic theoretical framework for the ways in which mathematical thinking grows as the child develops and matures into an adult. There is an essential need to focus on important phenomena, to name them and reflect on them to build rich concepts that are both powerful in use and yet simple to connect to other concepts. The child begins with human perception and action, linking them together in a coherent way. Symbols are introduced to denote mathematical processes (such as addition) that can be compressed as mathematical concepts (such as sum) to give symbols that operate flexibly as process and concept (procept). Knowledge becomes more sophisticated through building on experiences met before, focussing on relationships between properties, leading eventually to the advanced mathematics of concept definition and deduction. This gives a theoretical framework in which three modes of operation develop and grow in sophistication from conceptual-embodiment using thought experiments, to proceptual-symbolism using computation and symbol manipulation, then on to axiomatic-formalism based on concept definitions and formal proof.

11.
Qin and Lawless (1994) established the statistical inference theory for the empirical likelihood of general estimating equations. However, in many practical problems, some unknown functional parts h(t) appear in the corresponding estimating equations \( \mathrm{E}_F\,G(X, h(T), \beta ) = 0 \). In this paper, the empirical likelihood inference for combining information about unknown parameters and the distribution function through semiparametric estimating equations is developed, and the corresponding Wilks' theorem is established. Simulations of several useful models are conducted to compare the finite-sample performance of the proposed method with that of the normal-approximation-based method. An illustrative real example is also presented.
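To fix ideas, the display below is the standard Qin–Lawless setup with h known, not the paper's semiparametric extension: with i.i.d. data and an estimating function G satisfying \( \mathrm{E}_F\,G(X, h(T), \beta ) = 0 \), the empirical likelihood ratio at β is
\[
R(\beta) \;=\; \max\Big\{ \prod_{i=1}^{n} n p_i \;:\; p_i \geqslant 0,\ \sum_{i=1}^{n} p_i = 1,\ \sum_{i=1}^{n} p_i\, G(X_i, h(T_i), \beta) = 0 \Big\},
\]
and a Wilks-type theorem states that \( -2\log R(\beta_0) \) converges in distribution to a chi-squared limit at the true parameter; the paper establishes the corresponding result in the semiparametric setting where h is unknown.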

12.
This article attacks ‘open systems’ arguments that, because constant conjunctions are not generally observed in the real world of open systems, we should be highly skeptical that universal laws exist. This work differs from other critiques of open-system arguments against laws of nature by not focusing on laws themselves, but rather on the inference from open systems. We argue that open-system arguments fail for two related reasons: (1) they cannot account for the ‘systems’ central to their argument (nor the implied systems labeled ‘exogenous factors’ in relation to the system of interest), and (2) they are nomocentric, fixated on laws while ignoring the initial and antecedent conditions that can account for systems and exogenous factors within a fundamentalist framework.

13.
Correspondence analysis is Kooi and Tamminga’s universal approach which generates, in one go, sound and complete natural deduction systems with independent inference rules for tabular extensions of many-valued functionally incomplete logics. Originally, this method was applied to Asenjo–Priest’s paraconsistent logic of paradox LP. As a result, one has natural deduction systems for all the logics obtainable from the basic three-valued connectives of LP (which is built in the \( \{\vee ,\wedge ,\lnot \} \)-language) by the addition of unary and binary connectives. Tamminga has also applied this technique to the paracomplete analogue of LP, strong Kleene logic \( \mathbf K_3 \). In this paper, we generalize these results to the negative fragments of LP and \( \mathbf K_3 \), respectively. Thus, the method of correspondence analysis works for the logics which have the same negations as LP or \( \mathbf K_3 \), but either have different conjunctions or disjunctions or lack them altogether. Besides, we show that correspondence analyses for the negative fragments of \( \mathbf K_3 \) and LP, respectively, are also suitable, without any changes, for the negative fragments of Heyting’s logic \( \mathbf G_3 \) and its dual \( \mathbf DG_3 \) (which have different interpretations of negation than \( \mathbf K_3 \) and LP).
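For readers who want the basic tables (these are the standard three-valued connectives of LP and K3, not anything new from the paper): both logics share the same truth functions on the values {0, ½, 1} and differ only in the designated values, {½, 1} for LP and {1} for K3. A minimal sketch:

```python
# Standard strong-Kleene/LP truth functions on the values 0, 0.5, 1
# (0.5 read as "both" in LP, "neither/undefined" in K3).
# LP and K3 share these tables; they differ only in which values are designated.

VALUES = (0.0, 0.5, 1.0)

def neg(x):      return 1.0 - x
def conj(x, y):  return min(x, y)
def disj(x, y):  return max(x, y)

DESIGNATED_LP = {0.5, 1.0}   # LP: "at least true"
DESIGNATED_K3 = {1.0}        # K3: "exactly true"

# Example: the law of excluded middle p v ~p is valid in LP but not in K3.
print(all(disj(p, neg(p)) in DESIGNATED_LP for p in VALUES))  # True
print(all(disj(p, neg(p)) in DESIGNATED_K3 for p in VALUES))  # False (fails at p = 0.5)

# Paraconsistency of LP: a "contradiction" p & ~p can still take a designated value.
print(any(conj(p, neg(p)) in DESIGNATED_LP for p in VALUES))  # True (at p = 0.5)
```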

14.
In any subject concerned with rational intervention in human affairs, theory must lead to practice; but practice is the source of theory: neither theory nor practice is prime. We can examine this ‘groundless’ relation by asking what intellectual framework F is applied in what methodology M to what area of application A? If we do this for O.R., systems analysis, systems engineering etc., we see that F and M have changed dramatically between the 1950s and the 1980s, yielding the ‘hard’ and ‘soft’ traditions of systems thinking. The ‘hard’ tradition, based on goal seeking, is examined in the work of Simon and contrasted with the ‘soft’ tradition, based on learning, as exemplified in the work of Vickers and the development of soft systems methodology. The two are complementary, but the relation between them is that the ‘hard’ is a special case of ‘soft’ systems thinking. This analysis makes sense of the recent history of management science and helps to prepare us for the 1990s.

15.
A new application-oriented notion of relatively A-maximal monotonicity (RMM) framework is introduced, and then it is applied to the approximation solvability of a general class of inclusion problems, while generalizing other existing results on linear convergence, including Rockafellar’s theorem (1976) on linear convergence using the proximal point algorithm in a real Hilbert space setting. The obtained results not only generalize most of the existing investigations, but also reduce smoothly to the case of the results on maximal monotone mappings and corresponding classical resolvent operators. Furthermore, our proof approach differs significantly from that of Rockafellar’s celebrated work, where the Lipschitz continuity of \( M^{-1} \), the inverse of \( M: X \rightarrow 2^{X} \), at zero is assumed to achieve a linear convergence of the proximal point algorithm. Note that the relatively A-maximal monotonicity framework can be used to generalize the classical Yosida approximation (which in the literature is mostly applied and studied via the classical resolvent operator), which in turn can be applied to first-order evolution equations as well as evolution inclusions.
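As a reminder of the classical result being generalized (standard formulation, not the paper's): for a maximal monotone operator \( M: X \rightarrow 2^{X} \) on a real Hilbert space, the proximal point algorithm iterates
\[
x^{k+1} \;=\; (I + c_k M)^{-1}(x^{k}), \qquad c_k > 0,
\]
and Rockafellar (1976) proved, among other things, linear convergence to the (then unique) zero of M when \( M^{-1} \) is Lipschitz continuous at 0 and the parameters \( c_k \) stay bounded away from zero; the paper relaxes maximal monotonicity to relative A-maximal monotonicity.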

16.
We introduce a new class of dynamic point process models with simple and intuitive dynamics that are based on the Voronoi tessellations generated by the processes. Under broad conditions, these processes prove to be ergodic and produce, on stabilisation, a wide range of clustering patterns. In the paper, we present results of simulation studies of three statistical measures (Thiel’s redundancy, van Lieshout and Baddeley’s J-function and the empirical distribution of the Voronoi nearest neighbours’ numbers) for inference on these models from the clustering behaviour in the stationary regime. In particular, we make comparisons with the area-interaction processes of Baddeley and van Lieshout.
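Purely as an illustration of the third statistic (the empirical distribution of Voronoi neighbour numbers) for a planar point pattern, one might compute it with scipy as below; this is not the authors' code, and a real analysis would also correct for edge effects, since boundary cells are unbounded.

```python
import numpy as np
from scipy.spatial import Voronoi
from collections import Counter

rng = np.random.default_rng(1)
points = rng.uniform(size=(500, 2))      # a uniform (binomial) point pattern on the unit square

vor = Voronoi(points)
neighbour_count = Counter()
for i, j in vor.ridge_points:            # each ridge separates the Voronoi cells of points i and j
    neighbour_count[int(i)] += 1
    neighbour_count[int(j)] += 1

counts = np.array([neighbour_count[i] for i in range(len(points))])
values, freq = np.unique(counts, return_counts=True)
# Empirical distribution of the number of Voronoi neighbours per point:
print(dict(zip(values.tolist(), (freq / len(points)).round(3).tolist())))
```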

17.
Both in Majid's double-bosonization theory and in Rosso's quantum shuffle theory, the rank-inductive and type-crossing construction for the U_q(g)'s is still an open question. In this paper, working in Majid's framework, based on the generalized double-bosonization theorem we proved before, we further describe explicitly the type-crossing construction of the U_q(g)'s for the (BCD)_n series directly from type A_{n-1} via adding a pair of dual braided groups determined by a pair of (R, R′)-matrices of type A derived from the respective suitably chosen representations. Combining with our results of the first three papers of this series, this solves Majid's conjecture, i.e., any quantum group U_q(g) associated to a simple Lie algebra g can be grown out of U_q(sl_2) recursively by a series of suitably chosen double-bosonization procedures.

18.
Guershon Harel, ZDM 2008, 40(5): 893–907
Two questions are on the mind of many mathematics educators, namely: what mathematics should we teach in school, and how should we teach it? This is the second in a series of two papers addressing these fundamental questions. The first paper (Harel, 2008a) focuses on the first question and this paper on the second. Collectively, the two papers articulate a pedagogical stance oriented within a theoretical framework called DNR-based instruction in mathematics. The relation of this paper to the topic of this Special Issue is that it defines the concept of a teacher’s knowledge base and illustrates, with authentic teaching episodes, an approach to its development with mathematics teachers. This approach is entailed by DNR’s premises, concepts, and instructional principles, which are also discussed in this paper.

19.
David Hilbert’s solvability criterion for polynomial systems in n variables from the 1890s was linked by Emmy Noether in the 1920s to the decomposition of ideals in commutative rings, which in turn led Garrett Birkhoff in the 1940s to his subdirect representation theorem for general algebras. The Hilbert–Noether–Birkhoff linkage was brought to light in the late 1990s in talks by Bill Lawvere. The aim of this article is to analyze this linkage in the most elementary terms and then, based on our work of the 1980s, to present a general categorical framework for Birkhoff’s theorem.
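For reference, the theorem at the end of that chain (standard statement, not quoted from the article): Birkhoff's subdirect representation theorem says that every algebra A is isomorphic to a subdirect product of subdirectly irreducible algebras, i.e., there is an embedding
\[
A \;\hookrightarrow\; \prod_{i \in I} A_i
\]
with each \( A_i \) subdirectly irreducible and each composite projection \( A \rightarrow A_i \) surjective; the article traces how Hilbert's solvability criterion and Noether's decomposition of ideals prefigure this statement and then recasts it categorically.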

20.
There are many factors that may contribute to the successful delivery of a simulation project. To provide a structured approach to assessing the impact various factors have on project success, we propose a top-down framework whereby 15 Key Performance Indicators (KPIs) are developed that represent the level of success of simulation projects from various perspectives. They are linked to a set of Critical Success Factors (CSFs) as reported in the simulation literature. A single measure called the Project’s Success Measure (PSM), which represents the project’s total success level, is proposed. The framework is tested against nine exemplar simulation cases in healthcare, and this provides support for its reliability. The results suggest that responsiveness to the customer’s needs and expectations, when compared with other factors, holds the strongest association with the overall success of simulation projects. The findings highlight some patterns about the significance of individual CSFs, and how the KPIs are used to identify problem areas in simulation projects.
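The abstract does not give the formula for the PSM. Purely as a sketch of the kind of top-down KPI roll-up it describes, one might combine KPI scores into a single measure as a weighted average; the KPI names, weights and 0–1 scale below are invented for illustration and are not the paper's definitions.

```python
# Hypothetical sketch of a KPI -> PSM roll-up; the paper's actual PSM definition
# is not given in the abstract, so names, weights and scales here are assumptions.

def project_success_measure(kpi_scores: dict[str, float],
                            weights: dict[str, float]) -> float:
    """Weighted average of KPI scores (each assumed to be rated on a 0-1 scale)."""
    total_weight = sum(weights[k] for k in kpi_scores)
    return sum(weights[k] * s for k, s in kpi_scores.items()) / total_weight

scores = {"met_customer_needs": 0.9, "on_time": 0.7, "results_used": 0.8}
weights = {"met_customer_needs": 0.5, "on_time": 0.2, "results_used": 0.3}
print(round(project_success_measure(scores, weights), 3))  # 0.83
```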
