Similar Literature
20 similar documents found (search took 31 ms)
1.
In 2015, I wrote a book with the same title as this article. The book’s subtitle is: “What we know and what we do not know.” On the book’s dedication page, I wrote: “This book is dedicated to readers of popular science books who are baffled, perplexed, puzzled, astonished, confused, and discombobulated by reading about Information, Entropy, Life and the Universe.” In the first part of this article, I present the definitions of two central concepts: the “Shannon measure of information” (SMI) in Information Theory, and “Entropy” in Thermodynamics. Following these definitions, I discuss the framework of their applicability. In the second part of the article, I examine whether living systems and the entire universe are, or are not, within the framework of applicability of the concepts of SMI and Entropy. I show that much of the confusion in the literature arises from ignorance of the framework of applicability of these concepts.
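As a concrete companion to the definitions above, the SMI of a finite probability distribution can be computed directly. A minimal sketch in Python; the distributions are illustrative, not taken from the article:

```python
import math

def shannon_measure_of_information(p):
    """Shannon measure of information (SMI) in bits: -sum_i p_i * log2(p_i)."""
    assert abs(sum(p) - 1.0) < 1e-9, "probabilities must sum to 1"
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

# A fair coin carries 1 bit; a biased coin carries strictly less.
print(shannon_measure_of_information([0.5, 0.5]))  # 1.0
print(shannon_measure_of_information([0.9, 0.1]))
```

The SMI is defined for any probability distribution; whether a given physical or biological system supplies such a distribution is exactly the "framework of applicability" question the article raises.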

2.
We present a summary of research we have conducted employing AI to better understand human morality. This summary outlines the theoretical fundamentals and considers how to regulate the development of powerful new AI technologies. The latter aim is benevolent AI, with fair distribution of the benefits associated with these and related technologies, avoiding disparities of power and wealth due to unregulated competition. Our approach avoids the statistical models employed elsewhere to solve moral dilemmas, because these are “blind” to natural constraints on moral agents and risk perpetuating mistakes. Instead, our approach employs, for instance, psychologically realistic counterfactual reasoning in group dynamics. The present paper reviews studies of factors fundamental to human moral motivation, including egoism vs. altruism, commitment vs. defaulting, guilt vs. non-guilt, apology plus forgiveness, and counterfactual collaboration. Since these are basic elements of most moral systems, our studies deliver generalizable conclusions that inform efforts to achieve greater sustainability and global benefit, regardless of cultural specificities.

3.
Endgame studies have long served as a tool for testing human creativity and intelligence. We find that they can serve as a tool for testing machine ability as well. Two of the leading chess engines, Stockfish and Leela Chess Zero (LCZero), employ significantly different methods during play. We use Plaskett’s Puzzle, a famous endgame study from the late 1970s, to compare the two engines. Our experiments show that Stockfish outperforms LCZero on the puzzle. We examine the algorithmic differences between the engines and use our observations as a basis for carefully interpreting the test results. Drawing inspiration from how humans solve chess problems, we ask whether machines can possess a form of imagination. On the theoretical side, we describe how Bellman’s equation may be applied to optimize the probability of winning. To conclude, we discuss the implications of our work on artificial intelligence (AI) and artificial general intelligence (AGI), suggesting possible avenues for future research.
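The Bellman formulation mentioned in the abstract can be illustrated on a toy game tree, with the probability of winning itself serving as the value function (reward 1 for a win, 0 otherwise, no discounting). The states and probabilities below are invented for illustration and are not from the study:

```python
# Toy 5-state game: 0 = start, 1 = sharp line, 2 = quiet line,
# 3 = win (absorbing), 4 = loss (absorbing). Hypothetical transition model.
P = {
    0: {"sharp": {1: 1.0}, "quiet": {2: 1.0}},
    1: {"continue": {3: 0.7, 4: 0.3}},   # sharp line wins 70% of the time
    2: {"continue": {3: 0.5, 4: 0.5}},   # quiet line is a coin flip
}
V = {3: 1.0, 4: 0.0}  # boundary conditions: win = 1, loss = 0

# Bellman backups: V(s) = max_a sum_{s'} P(s'|s,a) V(s')
for _ in range(10):
    for s, actions in P.items():
        V[s] = max(sum(p * V.get(s2, 0.0) for s2, p in succ.items())
                   for succ in actions.values())

print(V[0])  # 0.7: the optimal policy prefers the sharp line
```

Under this reading, "optimizing the probability of winning" is ordinary value iteration on an undiscounted Markov decision process with absorbing terminal states.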

4.
What information-processing strategies and general principles are sufficient to enable self-organized morphogenesis in embryogenesis and regeneration? We designed and analyzed a minimal model of self-scaling axial patterning consisting of a cellular network that develops activity patterns within implicitly set bounds. The properties of the cells are determined by internal ‘genetic’ networks with an architecture shared across all cells. We used machine learning to identify models that enable this virtual mini-embryo to pattern a typical axial gradient while simultaneously sensing the set boundaries within which to develop it from homogeneous conditions—a setting that captures the essence of early embryogenesis. Interestingly, the model revealed several features (such as planar polarity and regenerative re-scaling capacity) for which it was not directly selected, showing how these common biological design principles can emerge as a consequence of simple patterning modes. A novel “causal network” analysis of the best model furthermore revealed that the originally symmetric model dynamically integrates into intercellular causal networks characterized by broken symmetry, long-range influence and modularity, offering an interpretable macroscale-circuit-based explanation for phenotypic patterning. This work shows how computation could occur in biological development and how machine learning approaches can generate hypotheses and deepen our understanding of how featureless tissues might develop sophisticated patterns—an essential step towards predictive control of morphogenesis in regenerative medicine or synthetic bioengineering contexts. The tools developed here also have the potential to benefit machine learning via new forms of backpropagation and by leveraging the novel distributed self-representation mechanisms to improve robustness and generalization.

5.
Representation and abstraction are two of the fundamental concepts of computer science. Together they enable “high-level” programming: without abstraction programming would be tied to machine code; without a machine representation, it would be a pure mathematical exercise. Representation begins with an abstract structure and seeks to find a more concrete one. Abstraction does the reverse: it starts with concrete structures and abstracts away. While formal accounts of representation are easy to find, abstraction is a different matter. In this paper, we provide an analysis of data abstraction based upon some contemporary work in the philosophy of mathematics. The paper contains a mathematical account of how Frege’s approach to abstraction may be interpreted, modified, extended and imported into type theory. We argue that representation and abstraction, while mathematical siblings, are philosophically quite different. A case of special interest concerns the abstract/physical interface which houses both the physical representation of abstract structures and the abstraction of physical systems.

6.
This paper starts from Schrödinger’s famous question “What is life?” and elucidates answers that invoke, in particular, Friston’s free energy principle and its relation to Bayesian inference and to the second foundation of Synergetics, which utilizes Jaynes’ maximum entropy principle. Our presentation reflects the shift in emphasis from physical principles to principles of information theory and Synergetics. In view of the expected general audience of this issue, we have chosen a somewhat tutorial style that does not require specialized knowledge of physics but familiarizes the reader with concepts rooted in information theory and Synergetics.

7.
In recent years, the task of translating from one language to another has attracted wide attention from researchers due to its numerous practical uses, ranging from the translation of texts and speeches, including so-called “machine” translation, to the dubbing of films and other video materials. To study this problem, we propose an information-theoretic method for assessing the quality of translations. We base our approach on the classification of the sources of text variability proposed by A.N. Kolmogorov: information content, form, and the unconscious author’s style. Clearly, the unconscious “author’s” style is influenced by the translator, so special methods are needed to determine how accurately the author’s style is conveyed, because it, in a sense, determines the quality of the translation. In this paper, we propose a method that allows us to estimate the quality of translations by different translators. The method is used to study translations of classical English-language works into Russian and, conversely, of Russian classics into English. We have also successfully used this method to determine the attribution of literary texts.
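The abstract does not spell out the estimator, but a common information-theoretic proxy for "how well a corpus predicts a text" is compression: attribute a text to the candidate author whose corpus yields the smallest increase in compressed length when the text is appended. A hedged sketch using zlib as a crude stand-in for Kolmogorov complexity; the toy corpora are ours, not the paper's data:

```python
import zlib

def compression_distance(corpus: bytes, text: bytes) -> int:
    """Extra compressed bytes needed to encode `text` after `corpus`:
    a rough proxy for the cross-entropy of the text under the corpus's style."""
    return len(zlib.compress(corpus + text)) - len(zlib.compress(corpus))

def attribute(text: str, corpora: dict) -> str:
    """Return the candidate whose corpus best 'predicts' the text."""
    t = text.encode()
    return min(corpora, key=lambda a: compression_distance(corpora[a].encode(), t))

authors = {  # purely illustrative corpora
    "A": "the quick brown fox jumps over the lazy dog " * 40,
    "B": "colourless green ideas sleep furiously tonight " * 40,
}
print(attribute("a quick brown fox jumped over a lazy dog", authors))
```

Real attribution work uses far larger corpora and more careful estimators, but the principle is the same: style shows up as statistical redundancy that a compressor can exploit.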

8.
9.
Assessing where and how information is stored in biological networks (such as neuronal and genetic networks) is a central task both in neuroscience and in molecular genetics, but most available tools focus on the network’s structure as opposed to its function. Here, we introduce a new information-theoretic tool—information fragmentation analysis—that, given full phenotypic data, allows us to localize information in complex networks, determine how fragmented (across multiple nodes of the network) the information is, and assess the level of encryption of that information. Using information fragmentation matrices we can also create information flow graphs that illustrate how information propagates through these networks. We illustrate the use of this tool by analyzing how artificial brains that evolved in silico solve particular tasks, and show how information fragmentation analysis provides deeper insights into how these brains process information and “think”. The measures of information fragmentation and encryption that result from our methods also quantify complexity of information processing in these networks and how this processing complexity differs between primary exposure to sensory data (early in the lifetime) and later routine processing.

10.
With the increasing number of connected devices, complex systems such as smart homes record a multitude of events of various types, magnitudes and characteristics. Current systems struggle to identify which events can be considered more memorable than others. In contrast, humans are able to quickly categorize some events as more “memorable” than others, without relying on knowledge of the system’s inner workings or on large prior datasets. Having this ability would allow the system to: (i) identify and summarize a situation to the user by presenting only memorable events; (ii) suggest the most memorable events as possible hypotheses in an abductive inference process. Our proposal is to use Algorithmic Information Theory to define a “memorability” score by retrieving events using predicative filters. We use smart-home examples to illustrate how our theoretical approach can be implemented in practice.
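One simple stand-in for such a memorability score, far cruder than a full algorithmic-information treatment, is the surprise of an event class: its negative log empirical frequency, so that rare, simply described events rank highest. The smart-home event log below is invented for illustration:

```python
import math
from collections import Counter

# Hypothetical smart-home event log: (device, action) pairs.
log = ([("door", "open"), ("door", "close")] * 50
       + [("light", "on")] * 30
       + [("alarm", "fire")])

def memorability(event, log):
    """Surprise score in bits: -log2 of the event's empirical frequency.
    A crude proxy for an algorithmic-information 'memorability' score."""
    counts = Counter(log)
    return -math.log2(counts[event] / len(log))

print(memorability(("alarm", "fire"), log))  # rare event: high score
print(memorability(("door", "open"), log))   # routine event: low score
```

A frequency count ignores the "short description" side of Algorithmic Information Theory, which is what lets the full approach work without large prior datasets; this sketch only captures the rarity side.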

11.
We consider the negotiation problem, in which an agent negotiates on behalf of a principal. Our considerations are focused on the Inspire negotiation support system, in which the principal’s preferences are visualised by circles. In this way, the principal describes the importance of each negotiation issue and the relative utility of each considered option. The paper proposes how this preference information may be implemented by the agent for determining a scoring function used to support decisions throughout the negotiation process. The starting point of our considerations is a discussion regarding the visualisation of the principal’s preferences. We assume here that the importance of each issue and the utility of each option increase with the size of the circles representing them. The imprecise meaning of the notion of “circle size” implies that in the considered case, the utility of an option should be evaluated by a fuzzy number. The proposed utility fuzzification is justified by a simple analysis of results obtained from the empirical prenegotiation experiment. A novel method is proposed to determine trapezoidal fuzzy numbers, which evaluates an option’s utility using a series of answers given by the participants of the experiment. The utilities obtained this way are applied to determine the fuzzy scoring function for an agent. By determining such a common generalised fuzzy scoring system, our approach helps agents handle the differences in human cognitive processes associated with understanding the principal’s preferences. This work is the first approach to fuzzification of the preferences in the Inspire negotiation support system.
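A trapezoidal fuzzy number is fully specified by four abscissae (a, b, c, d). A minimal sketch of its membership function; the circle-size parameters below are hypothetical, not taken from the experiment:

```python
def trapezoid_membership(x, a, b, c, d):
    """Membership degree of x in the trapezoidal fuzzy number (a, b, c, d):
    rises linearly on [a, b], equals 1 on [b, c], falls linearly on [c, d]."""
    if x < a or x > d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Hypothetical fuzzy utility of an option whose circle was judged "medium-large"
# on a 0..10 scale: full membership between sizes 4 and 6.
print(trapezoid_membership(5, 2, 4, 6, 8))  # 1.0: clearly medium-large
print(trapezoid_membership(3, 2, 4, 6, 8))  # 0.5: only partially so
```

Eliciting (a, b, c, d) from a series of participants' answers, rather than fixing them by hand, is the contribution the abstract describes.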

12.
In 1976 we reported our first autopsied case of diffuse Lewy body disease (DLBD), a term we proposed in 1984. We also proposed the term “Lewy body disease” (LBD) in 1980. Subsequently, we classified LBD into three types according to the distribution pattern of Lewy bodies: a brain stem type, a transitional type and a diffuse type. Later, we added the cerebral type. As we have proposed since 1980, LBD has recently been used as a generic term to include Parkinson’s disease (PD), Parkinson’s disease with dementia (PDD) and dementia with Lewy bodies (DLB), which was proposed in 1996 on the basis of our reports of DLBD. DLB is now known to be the second most frequent dementia after Alzheimer’s disease (AD). In this paper we introduce our studies of DLBD and LBD.

13.
This paper is our attempt, on the basis of physical theory, to bring more clarity to the question “What is life?” formulated in Schrödinger’s well-known book of 1944. According to Schrödinger, the main distinguishing feature of a biosystem’s functioning is the ability to preserve its order structure or, in mathematical terms, to prevent the increase of entropy. However, Schrödinger’s analysis shows that classical theory is not able to adequately describe order-stability in a biosystem. Schrödinger also appealed to the ambiguous notion of negative entropy. We apply quantum theory. As is well known, the behaviour of the quantum von Neumann entropy differs crucially from that of classical entropy. We consider a complex biosystem S composed of many subsystems, say proteins, cells, or neural networks in the brain, that is, S=(Si). We study the following problem: whether the compound system S can maintain “global order” in the face of increasing local disorder, that is, whether S can preserve low entropy while the subsystems Si increase their entropies (perhaps substantially). We show that the entropy of the system as a whole can remain constant while the entropies of its parts rise. For classical systems this is impossible, because the entropy of S cannot be less than the entropy of any of its subsystems Si; if a subsystem’s entropy increases, the system’s entropy must increase by at least the same amount. Within quantum information theory, however, the answer is positive. A significant role is played by the entanglement of the subsystems’ states. In the absence of entanglement, increasing local disorder implies increasing disorder in the compound system S (as in the classical regime). In this note, we proceed within a quantum-like approach to mathematical modeling of information processing by biosystems: respecting quantum laws need not be based on genuine quantum physical processes in biosystems. Recently, such modeling has found numerous applications in molecular biology, genetics, evolution theory, cognition, psychology and decision making. The quantum-like model of order stability can be applied not only in biology, but also in social science and artificial intelligence.
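The abstract's central claim has a textbook instance: for the two-qubit Bell state, the whole system is pure (zero von Neumann entropy) while each subsystem carries a full bit of entropy, which is impossible classically. A self-contained check, using our own example rather than anything from the paper:

```python
import math

# Bell state |Phi+> = (|00> + |11>)/sqrt(2); real amplitudes indexed by (i, j).
amp = {(0, 0): 1 / math.sqrt(2), (1, 1): 1 / math.sqrt(2)}

def reduced_density(amp):
    """Partial trace over the second qubit:
    rho_A[i][i'] = sum_j amp(i, j) * amp(i', j)  (amplitudes are real here)."""
    return [[sum(amp.get((i, j), 0.0) * amp.get((ip, j), 0.0) for j in range(2))
             for ip in range(2)]
            for i in range(2)]

def entropy_of_diagonal(rho):
    """Von Neumann entropy in bits; valid here because rho_A is diagonal."""
    return -sum(rho[i][i] * math.log2(rho[i][i]) for i in range(2) if rho[i][i] > 0)

# The whole state is pure, so S(whole) = 0; yet each half is maximally mixed:
print(entropy_of_diagonal(reduced_density(amp)))  # approximately 1.0 bit
```

Without entanglement (a product state), the subsystem entropies would instead add up into the whole, recovering the classical behaviour the abstract contrasts against.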

14.
Influenza A virus (IAV) causes significant morbidity and mortality. The knowledge gained within the last decade on the pandemic IAV(H1N1)2009 has improved our understanding not only of viral pathogenicity but also of the host cellular factors involved in the pathogenicity of multiorgan failure (MOF), such as cellular trypsin-type hemagglutinin (HA0) processing proteases for viral multiplication, cytokine storm, metabolic disorders and energy crisis. The HA processing proteases in the airway and organs have been identified for all IAV known to date. Recently, a new concept of the pathogenicity of MOF, the “influenza virus–cytokine–trypsin” cycle, has been proposed, involving up-regulation of trypsin through pro-inflammatory cytokines and potentiation of viral multiplication in various organs. Furthermore, the relationship between causative factors has been summarized as the “influenza virus–cytokine–trypsin” cycle interconnected with the “metabolic disorders–cytokine” cycle. These cycles suggest new treatment concepts for the ATP crisis and MOF. This review discusses IAV pathogenicity in relation to cellular proteases, cytokines and metabolites, together with therapeutic options.

15.
The consensus regarding quantum measurements rests on two statements: (i) von Neumann’s standard quantum measurement theory leaves undetermined the basis in which observables are measured, and (ii) the environmental decoherence of the measuring device (the “meter”) unambiguously determines the measuring (“pointer”) basis. The latter statement means that the environment monitors (measures) selected observables of the meter and (indirectly) of the system. Equivalently, a measured quantum state must end up in one of the “pointer states” that persist in the presence of the environment. We find that, unless we restrict ourselves to projective measurements, decoherence does not necessarily determine the pointer basis of the meter. Namely, generalized measurements commonly allow the observer to choose from a multitude of alternative pointer bases that provide the same information on the observables, regardless of decoherence. By contrast, the measured observable does not depend on the pointer basis, whether in the presence or in the absence of decoherence. These results grant further support to our notion of Quantum Lamarckism, whereby the observer’s choices play an indispensable role in quantum mechanics.

16.
This review looks at some of the central relationships between artificial intelligence, psychology, and economics through the lens of information theory, focusing on formal models of decision theory. In doing so, we look at a particular approach that each field has adopted and how information theory has informed the development of each field’s ideas. A key theme is expected utility theory, its connection to information theory, the Bayesian approach to decision-making, and forms of (bounded) rationality. What emerges from this review is a broadly unified formal perspective derived from three very different starting points that reflect the unique principles of each field. Each of the three approaches reviewed can, in principle at least, be implemented in a computational model in such a way that, with sufficient computational power, they could be compared with human abilities in complex tasks. However, a central critique that can be applied to all three approaches was first put forward by Savage in The Foundations of Statistics and recently brought to the fore by the economist Binmore: Bayesian approaches to decision-making work in what Savage called ‘small worlds’ but cannot work in ‘large worlds’. This point, in various guises, is central to some of the current debates about the power of artificial intelligence and its relationship to human-like learning and decision-making. Recent work on artificial intelligence has gone some way to bridging this gap, but significant questions remain to be answered in all three fields in order to make progress in producing realistic models of human decision-making in the real world in which we live.
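Expected utility theory, the key theme named above, fits in a few lines. The lotteries and the choice of log utility below are illustrative, not drawn from the review:

```python
import math

def expected_utility(lottery, utility):
    """E[u] = sum_i p_i * u(x_i): the quantity a Bayesian decision-maker maximizes.
    A lottery is a list of (probability, outcome) pairs."""
    return sum(p * utility(x) for p, x in lottery)

def log_u(x):
    return math.log(x)  # log utility: a standard risk-averse choice

safe = [(1.0, 100)]              # 100 for sure
risky = [(0.5, 40), (0.5, 200)]  # expected money 120 > 100, but volatile

print(expected_utility(risky, lambda x: x))  # 120.0: a risk-neutral agent gambles
print(expected_utility(safe, log_u) > expected_utility(risky, log_u))  # True: log utility refuses
```

Savage's small-worlds critique applies at exactly this point: the calculation presumes the agent can enumerate all outcomes and assign them coherent probabilities, which is feasible in a "small world" but not in the open-ended settings AI now confronts.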

17.
In this paper, I investigate a connection between a common characterisation of freedom and how uncertainty is managed in a Bayesian hierarchical model. To do this, I consider a distributed factorization of a group’s optimization of free energy, in which each agent is attempting to align with the group and with its own model. I show how this can lead to equilibria for groups, defined by the capacity of the model being used, essentially how many different datasets it can handle. In particular, I show that there is a “sweet spot” in the capacity of a normal model in each agent’s decentralized optimization, and that this “sweet spot” corresponds to minimal free energy for the group. At the sweet spot, an agent can predict what the group will do and the group is not surprised by the agent. However, there is an asymmetry. A higher capacity model for an agent makes it harder for the individual to learn, as there are more parameters. Simultaneously, a higher capacity model for the group, implemented as a higher capacity model for each member agent, makes it easier for a group to integrate a new member. To optimize for a group of agents then requires one to make a trade-off in capacity, as each individual agent seeks to decrease capacity, but there is pressure from the group to increase capacity of all members. This pressure exists because as individual agent’s capacities are reduced, so too are their abilities to model other agents, and thereby to establish pro-social behavioural patterns. I then consider a basic two-level (dual process) Bayesian model of social reasoning and a set of three parameters of capacity that are required to implement such a model. Considering these three capacities as dependent elements in a free energy minimization for a group leads to a “sweet surface” in a three-dimensional space defining the triplet of parameters that each agent must use should they hope to minimize free energy as a group. 
Finally, I relate these three parameters to three notions of freedom and equality in human social organization, and postulate a correspondence between freedom and model capacity. That is, models with higher capacity have more freedom, as they can interact with more datasets.

18.
“Morphological computation” is an increasingly important concept in robotics, artificial intelligence, and philosophy of mind. It is used to understand how the body contributes to cognition and control of behavior. Its understanding in terms of “offloading” computation from the brain to the body has been criticized as misleading, and it has been suggested that use of the concept conflates three classes of distinct processes. In fact, these criticisms implicitly hang on accepting a semantic definition of what constitutes computation. Here, I argue that an alternative, mechanistic view on computation offers a significantly different understanding of what morphological computation is. These theoretical considerations are then used to analyze the existing research program in developmental biology, which understands morphogenesis, the process of development of shape in biological systems, as a computational process. This important line of research shows that cognition and intelligence can be found across all scales of life, as the proponents of the basal cognition research program propose. Hence, clarifying the connection between morphological computation and morphogenesis allows for strengthening the role of the former concept in this emerging research field.

19.
It is known that “quantum non-locality”, leading to the violation of Bell’s inequality and, more generally, of classical local realism, can be attributed to the conjunction of two properties, which we call here elementary locality and predictive completeness. Taking this point of view, we show again that quantum mechanics violates predictive completeness, allowing contextual inferences to be made, which can, in turn, explain why quantum non-locality does not contradict relativistic causality. An important question remains: if the usual quantum state ψ is predictively incomplete, how do we complete it? We give here a set of new arguments to show that ψ should indeed be completed, not by looking for “hidden variables”, but rather by specifying the measurement context, which is required to define actual probabilities over a set of mutually exclusive physical events.

20.
Wigner’s friend scenarios involve an Observer, or Observers, measuring a Friend, or Friends, who themselves make quantum measurements. In recent discussions, it has been suggested that quantum mechanics may not always be able to provide a consistent account of a situation involving two Observers and two Friends. We investigate this problem by invoking the basic rules of quantum mechanics as outlined by Feynman in the well-known “Feynman Lectures on Physics”. We show here that these “Feynman rules” constrain the a priori assumptions which can be made in generalised Wigner’s friend scenarios, because the existence of the probabilities of interest ultimately depends on the availability of physical evidence (material records) of the system’s past. With these constraints obeyed, a non-ambiguous and consistent account of all measurement outcomes is obtained for all agents taking part in various Wigner’s friend scenarios.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号