Similar Literature
 20 similar documents found (search time: 31 ms)
1.
Two-dimensional semantics aims to eliminate the puzzle of necessary a posteriori and contingent a priori truths. Recently, many have argued that even assuming two-dimensional semantics we are left with the puzzle of necessary a posteriori propositions. Stephen Yablo (Pacific Philosophical Quarterly, 81, 98–122, 2000) and Penelope Mackie (Analysis, 62(3), 225–236, 2002) argue that a plausible sense of “knowing which” lets us know the object of such a proposition, and yet its necessity is “hidden” and thus a posteriori. This paper answers this objection; I argue that given two-dimensional semantics you cannot know a necessary proposition without knowing that it is true.
Hagit Benbaji

2.
It is argued, on the basis of new counterexamples, that neither knowledge nor epistemic justification (or “epistemic rationality”) can reasonably be thought to be closed under logical implication. The argument includes an attempt to reconcile the fundamental intuitions of the opposing parties in the debate.
Claudio de Almeida

3.
Some entities, such as fictional characters, propositions, properties, events and numbers are prima facie promising candidates for owing their existence to our linguistic and conceptual practices. However, it is notoriously hard to pin down just what sets such allegedly “language-created” entities apart from ordinary entities. The present paper considers some of the features that are supposed to distinguish between entities of the two kinds and argues that, on an independently plausible account of what it takes to individuate objects, the criteria let in more than friends of the strategy might be happy with.
Iris Einheuser

4.
In this paper I discuss the claim that believing at will is ‘conceptually impossible’ or, to use a formulation encountered in the debate, “that nothing could be a belief and be willed directly”. I argue that such a claim is only plausible if directed against the claim that believing itself is an action-type. However, in the debate, the claim has been univocally directed against the position that forming a belief is an action-type. I argue that the many arguments offered in favor of the ‘conceptual impossibility’ of performing such actions fail without exception. If we are to argue against doxastic voluntarism we are better off by resorting to more modest means.
Nikolaj Nottelman

5.
In their recent book Philosophical Foundations of Neuroscience, Max Bennett and Peter Hacker attack neural materialism (NM), the view, roughly, that mental states (events, processes, etc.) are identical with neural states or material properties of neural states (events, processes, etc.). Specifically, in the penultimate chapter entitled “Reductionism,” they argue that NM is unintelligible, that “there is no sense to literally identifying neural states and configurations with psychological attributes.” This is a provocative claim indeed. If Bennett and Hacker are right, then a sizeable number of philosophers, cognitive scientists, neuroscientists, etc., subscribe to a view that is not merely false, but strictly meaningless. In this article I show that Bennett and Hacker's arguments against NM, whether construed as arguments for the meaninglessness of or the falsity of the thesis, cannot withstand scrutiny: when laid bare they are found to rest upon highly dubious assumptions that either seriously mischaracterize or underestimate the resources of the thesis.
Greg Janzen

6.
A strong, strictly virtue-based, and at the same time truth-centered framework for virtue epistemology (VE) is proposed that bases VE upon a clearly motivating epistemic virtue, inquisitiveness or curiosity in a very wide sense, characterizes the purely executive capacities-virtues as a means for the truth-goal set by the former, and, finally, situates the remaining, partly motivating and partly executive virtues in relation to this central stock of virtues. Character-trait epistemic virtues are presented as hybrids, partly moral, partly purely epistemic. In order to make the approach virtue-based, it is argued that the central virtue (inquisitiveness or curiosity) is responsible for the value of truth: truth is valuable to cognizers because they are inquisitive, and most other virtues are a means for satisfying inquisitiveness. One can usefully combine this virtue-based account of the motivation for acquiring knowledge with a Sosa-style analysis of the concept “knowledge”, which brings to the forefront virtues-capacities, in order to obtain a full-blooded, “strong” VE.
Nenad Miscevic

7.
The author takes up three metaphysical conceptions of morality — realism, projectivism, constructivism — and the account of justification or reason that makes these pictures possible. It is argued that the right meta-ethical conception should be chosen according to which conception entails the most plausible account of reason-giving, rather than on the basis of any other consideration. Realism and projectivism, when understood in ways consistent with their fundamental commitments, generate unsatisfactory models of justification; constructivism alone does not. The author also argues for a particular interpretation of how “objective moral obligation” is to be understood within constructivism.
Steven Ross

8.
Previous models have applied evolving networks based on node-level “copy and rewire” rules to simple two player games (e.g. the Prisoner’s Dilemma). It was found that such models tended to evolve toward socially optimal behavior. Here we apply a similar technique to a more tricky co-ordination game (the weakest link game) requiring interactions from several players (nodes) that may play several strategies. We define a variant of the game with several equilibria—each offering increasing social benefit. We found that the evolving network functions to select and spread more optimal equilibria while resisting invasion by lower ones. Hence the network acts as a kind of “social ratchet” selecting for increasing social benefit. Such networks have applications in peer-to-peer computing and may have implications for understanding social systems.
David Hales
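A minimal sketch of the kind of dynamics described above, under assumptions of my own (one neighbour per node, five contribution levels, a weakest-link payoff equal to the lowest contribution in the group, and a small mutation rate; none of these values are taken from the paper). Low-scoring nodes copy the strategy of a better-scoring node and rewire to it, which is one simple instance of a node-level “copy and rewire” rule:

```python
import random

N_NODES = 100          # number of agents (nodes)
N_STRATEGIES = 5       # contribution level k in {1..5}; higher k = higher potential payoff
ROUNDS = 2000
MUTATION = 0.01        # small chance of a random change after copying

# Each node has one outgoing neighbour link and one strategy (a contribution level).
strategy = [random.randint(1, N_STRATEGIES) for _ in range(N_NODES)]
neighbour = [random.randrange(N_NODES) for _ in range(N_NODES)]

def payoff(i):
    """Weakest-link payoff: the pair (node + its neighbour) earns the minimum contribution."""
    return min(strategy[i], strategy[neighbour[i]])

for _ in range(ROUNDS):
    a, b = random.sample(range(N_NODES), 2)
    if payoff(a) < payoff(b):
        a, b = b, a            # make a the fitter node
    # Node-level "copy and rewire": the worse node imitates the better one and links to it.
    strategy[b] = strategy[a]
    neighbour[b] = a
    if random.random() < MUTATION:
        strategy[b] = random.randint(1, N_STRATEGIES)
    if random.random() < MUTATION:
        neighbour[b] = random.randrange(N_NODES)

# Over many rounds the population tends to cluster on higher-contribution equilibria,
# which is the "social ratchet" effect described in the abstract.
print("mean contribution:", sum(strategy) / N_NODES)
```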

9.
The small object argument is a transfinite construction which, starting from a set of maps in a category, generates a weak factorisation system on that category. As useful as it is, the small object argument has some problematic aspects: it possesses no universal property; it does not converge; and it does not seem to be related to other transfinite constructions occurring in categorical algebra. In this paper, we give an “algebraic” refinement of the small object argument, cast in terms of Grandis and Tholen’s natural weak factorisation systems, which rectifies each of these three deficiencies.
Richard Garner
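For orientation, the standard definition the abstract presupposes (general background, not specific to this paper): a weak factorisation system on a category $\mathcal{C}$ is a pair $(\mathcal{L}, \mathcal{R})$ of classes of maps such that

$$
\text{(i)}\ \ \text{every map } f \text{ factors as } f = r \circ l \ \text{with}\ l \in \mathcal{L},\ r \in \mathcal{R}; \qquad
\text{(ii)}\ \ \mathcal{L} = {}^{\pitchfork}\mathcal{R}\ \text{and}\ \mathcal{R} = \mathcal{L}^{\pitchfork},
$$

where $g \in \mathcal{L}^{\pitchfork}$ means that for every $l \in \mathcal{L}$ and every commutative square $g \circ u = v \circ l$ there is a diagonal $d$ with $d \circ l = u$ and $g \circ d = v$ (and dually for ${}^{\pitchfork}\mathcal{R}$). Under the usual smallness and cocompleteness assumptions, the small object argument takes a set $J$ of maps and produces such a system with $\mathcal{R} = J^{\pitchfork}$ and $\mathcal{L} = {}^{\pitchfork}(J^{\pitchfork})$.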

10.
Tim Black, Acta Analytica, 2008, 23(3): 187–205
According to a Moorean response to skepticism, the standards for knowledge are invariantly comparatively low, and we can know across contexts all that we ordinarily take ourselves to know. It is incumbent upon the Moorean to defend his position by explaining how, in contexts in which S seems to lack knowledge, S can nevertheless have knowledge. The explanation proposed here relies on a warranted-assertability maneuver: Because we are warranted in asserting that S doesn’t know that p, it can seem that S does in fact lack that piece of knowledge. Moreover, this warranted-assertability maneuver is unique and better than similar maneuvers because it makes use of H. P. Grice’s general conversational rule of Quantity—“Do not make your contribution more informative than is required”—in explaining why we are warranted in asserting that S doesn’t know that p.
Tim Black

11.
12.
This paper addresses the relative errors associated with simple versus realistic (or science-based) models. We take the perspective of trying to predict what the model will predict as we begin to build the model. Any model-building process can get the model “wrong” to a greater or lesser extent by making a theoretical mistake in constructing the model. In addition, every model needs data of some sort, whether obtained by experiments, surveys, or expert judgment, and the data-collection process is filled with error sources. This paper suggests a two-part hypothesis:
1.  Simple models have a larger variance in their prediction of a result than do more realistic models (something most people intuitively accept), and
2.  more realistic models still carry a significant probability of error, because mistakes in the model-building process yield a prediction distribution that is liable to be bimodal, trimodal, or otherwise multimodal.
The paper provides evidence to support these statements and draws conclusions about what types of models to generate and when.
Dennis Buede
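A minimal simulation sketch of the two-part hypothesis above; the error magnitudes, mixture weights, and model forms are illustrative assumptions of mine, not values from the paper. The “simple” model is represented as an unbiased but noisy predictor, while the “realistic” model is a low-noise predictor whose construction may, with some probability, contain a theoretical mistake that shifts its prediction, yielding a multimodal distribution of predicted values:

```python
import random
import statistics

TRUE_VALUE = 100.0
N_RUNS = 10_000

def simple_model():
    # Few parameters, crude data: unbiased but high-variance prediction (hypothesis 1).
    return random.gauss(TRUE_VALUE, 20.0)

def realistic_model():
    # Many parameters, precise data: low variance, but the model-building process
    # may introduce a structural mistake that biases the prediction (hypothesis 2).
    u = random.random()
    if u < 0.70:
        bias = 0.0       # model built correctly
    elif u < 0.90:
        bias = -30.0     # one kind of theoretical error
    else:
        bias = 40.0      # a different theoretical error
    return random.gauss(TRUE_VALUE + bias, 5.0)

simple = [simple_model() for _ in range(N_RUNS)]
realistic = [realistic_model() for _ in range(N_RUNS)]

print("simple    mean %6.1f  stdev %5.1f" % (statistics.mean(simple), statistics.stdev(simple)))
print("realistic mean %6.1f  stdev %5.1f" % (statistics.mean(realistic), statistics.stdev(realistic)))
# A histogram of `realistic` would show three modes (near 70, 100 and 140):
# low variance around each mode, but a real chance of being far from the truth.
```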

13.
We propose an approach to epistemic justification that incorporates elements of both reliabilism and evidentialism, while also transforming these elements in significant ways. After briefly describing and motivating the non-standard version of reliabilism that Henderson and Horgan call “transglobal” reliabilism, we harness some of Henderson and Horgan’s conceptual machinery to provide a non-reliabilist account of propositional justification (i.e., evidential support). We then invoke this account, together with the notion of a transglobally reliable belief-forming process, to give an account of doxastic justification.
Terry Horgan

14.
Electricity is regarded as one of the most challenging topics for students of all ages. Several researchers have suggested that naïve misconceptions about electricity stem from a deep incommensurability (Slotta and Chi 2006; Chi 2005) or incompatibility (Chi et al. 1994) between naïve and expert knowledge structures. In this paper we argue that adopting an emergent, levels-based perspective, as proposed by Wilensky and Resnick (1999), allows us to reconceive commonly noted misconceptions in electricity as behavioral evidence of “slippage between levels,” i.e., these misconceptions appear when otherwise productive knowledge elements are activated inappropriately on the basis of certain macro-level phenomenological cues alone. We then introduce NIELS (NetLogo Investigations In Electromagnetism), a curriculum of emergent multi-agent-based computational models. NIELS models represent phenomena such as electric current and resistance as emergent from simple, body-syntonic interactions between electrons and other charges in a circuit. We discuss results from a pilot implementation of NIELS in an undergraduate physics course that highlight the ability of an emergent levels-based approach to provide students with a deep, expert-like understanding of the relevant phenomena by bootstrapping, rather than discarding, their existing repertoire of intuitive knowledge.
Pratim Sengupta
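The NIELS models themselves are written in NetLogo; the following is a minimal Python sketch of the same emergent, levels-based idea rather than the authors’ code, with all numerical values assumed for illustration. Each electron follows a simple micro-level rule (large random thermal jitter plus a tiny drift imposed by the battery), and the macro-level current appears only as an aggregate of many such motions:

```python
import random

WIRE_LENGTH = 100.0    # arbitrary units; the wire is treated as a ring
N_ELECTRONS = 1000
DRIFT = 0.05           # tiny push per step from the applied voltage (assumed value)
THERMAL = 1.0          # much larger random thermal motion per step (assumed value)
STEPS = 5000

positions = [random.uniform(0.0, WIRE_LENGTH) for _ in range(N_ELECTRONS)]
net_crossings = 0      # net number of electrons passing the far end: the macro "current"

for _ in range(STEPS):
    for i in range(N_ELECTRONS):
        # Micro rule: random jitter dominates; the drift is barely noticeable per electron.
        positions[i] += random.uniform(-THERMAL, THERMAL) + DRIFT
        if positions[i] >= WIRE_LENGTH:   # electron leaves at the far end...
            positions[i] -= WIRE_LENGTH   # ...and re-enters at the near end
            net_crossings += 1
        elif positions[i] < 0.0:
            positions[i] += WIRE_LENGTH
            net_crossings -= 1

# No single electron "carries" the current, yet the population-level crossing count
# grows steadily: a macro-level flow emerges from simple micro-level rules.
print("net crossings (proportional to current):", net_crossings)
```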

15.
In this paper, we defend and extend a (simple) mathematical model of akrasia.
Joseph S. Fulda

16.
True Antecedents     
In this note I discuss what seems to be a new kind of counterexample to Lewis’s account of counterfactuals. A coin is to be tossed twice. I bet on ‘Two heads’, and I win. Now consider: (1) If at least one head had come up, I would have won. Common sense says that (1) is false. But Lewis’s theory says that it is true.
Michael McDermott
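A standard reconstruction of why Lewis’s theory delivers this verdict (background I am supplying, not spelled out in the note): on Lewis’s semantics, $A \mathrel{\Box\!\!\to} C$ is true at a world $w$ iff either there are no $A$-worlds, or some $A \wedge C$ world is closer to $w$ than any $A \wedge \neg C$ world,

$$
w \Vdash A \mathrel{\Box\!\!\to} C
\iff
\neg\exists w'\,(w' \Vdash A)
\ \text{ or }\
\exists w'\,\bigl[\, w' \Vdash A \wedge C \ \text{and}\ \forall w''\,(w'' \Vdash A \wedge \neg C \Rightarrow w' <_w w'')\,\bigr].
$$

By strong centering, the actual world is closer to itself than any other world. In the story, “at least one head came up” and “I won” are both actually true, so the actual world is an $A \wedge C$ world closer than every $A \wedge \neg C$ world, and (1) comes out true on the theory, against the commonsense verdict.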

17.
In this paper I argue against Armstrong’s recent truthmaking account of possibility. I show that the truthmaking account presupposes modality in a number of different ways, and consequently that it is incapable of underwriting a genuine reduction of modality. I also argue that Armstrong’s account faces serious difficulties irrespective of the question of reduction; in particular, I argue that his Entailment and Possibility Principles are both false.
Javier Kalhat

18.
Let Λ(n) and μ(n) denote the von Mangoldt function and the Möbius function, respectively, let x be real, and let y be “small” compared with x. This paper gives, for the first time, a non-trivial estimate of the sum under consideration [the sum and its range of validity are not reproduced in this abstract], and a corresponding result is also proved [formula not reproduced].

19.
Timothy Williamson has provided damaging counterexamples to Robert Nozick’s sensitivity principle. The examples are based on Williamson’s anti-luminosity arguments, and they show how knowledge requires a margin for error that appears to be incompatible with sensitivity. I explain how Nozick can rescue sensitivity from Williamson’s counterexamples by appeal to a specific conception of the methods by which an agent forms a belief. I also defend the proposed conception of methods against Williamson’s criticisms.
Kelly Becker
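For reference, the sensitivity condition at issue, in its method-relativized form (a standard formulation, not quoted from the paper): S knows that p via method M only if, were p false, S would not believe p via M,

$$
K(S, p, M) \;\Rightarrow\; \bigl(\neg p \mathrel{\Box\!\!\to} \neg B(S, p, M)\bigr),
$$

where $\Box\!\!\to$ is the counterfactual conditional. Williamson’s margin-for-error cases target exactly this condition, and the reply sketched in the abstract turns on how the method M is individuated.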

20.
Lynne Rudder Baker’s Constitution View of human persons has come under much recent scrutiny. Baker argues that each human person is constituted by, but not identical to, a human animal. Much of the critical discussion of Baker’s Constitution View has focused upon this aspect of her account. Less has been said about the positive diachronic account of personal identity offered by Baker. Baker argues that it is sameness of what she labels ‘first-person perspective’ that is essential to understanding personal identity over time. Baker claims that her account avoids the commitment to indeterminacy of personal identity entailed by the psychological account. Further, the psychological account, but not her account, is plagued by what Baker labels the ‘duplication problem’. In the end, I argue that neither of these considerations forces us to renounce the psychological account and adopt Baker’s favored account.
Christopher Buford
