Similar Documents
20 similar documents found.
1.
Intelligence is a central feature of human beings’ primary and interpersonal experience. Understanding how intelligence originated and scaled during evolution is a key challenge for modern biology. Some of the most important approaches to understanding intelligence are the ongoing efforts to build new intelligences in computer science (AI) and bioengineering. However, progress has been stymied by a lack of multidisciplinary consensus on what is central about intelligence regardless of the details of its material composition or origin (evolved vs. engineered). We show that Buddhist concepts offer a unique perspective and facilitate a consilience of biology, cognitive science, and computer science toward understanding intelligence in truly diverse embodiments. In coming decades, chimeric and bioengineering technologies will produce a wide variety of novel beings that look nothing like familiar natural life forms; how shall we gauge their moral responsibility and our own moral obligations toward them, without the familiar touchstones of standard evolved forms as comparison? Such decisions cannot be based on what the agent is made of or how much design vs. natural evolution was involved in their origin. We propose that the scope of our potential relationship with, and so also our moral duty toward, any being can be considered in the light of Care—a robust, practical, and dynamic lynchpin that formalizes the concepts of goal-directedness, stress, and the scaling of intelligence; it provides a rubric that, unlike other current concepts, is likely to not only survive but thrive in the coming advances of AI and bioengineering. We review relevant concepts in basal cognition and Buddhist thought, focusing on the size of an agent’s goal space (its cognitive light cone) as an invariant that tightly links intelligence and compassion. Implications range across interpersonal psychology, regenerative medicine, and machine learning. The Bodhisattva’s vow (“for the sake of all sentient life, I shall achieve awakening”) is a practical design principle for advancing intelligence in our novel creations and in ourselves.  相似文献   

2.
Assessing where and how information is stored in biological networks (such as neuronal and genetic networks) is a central task both in neuroscience and in molecular genetics, but most available tools focus on the network’s structure as opposed to its function. Here, we introduce a new information-theoretic tool—information fragmentation analysis—that, given full phenotypic data, allows us to localize information in complex networks, determine how fragmented (across multiple nodes of the network) the information is, and assess the level of encryption of that information. Using information fragmentation matrices we can also create information flow graphs that illustrate how information propagates through these networks. We illustrate the use of this tool by analyzing how artificial brains that evolved in silico solve particular tasks, and show how information fragmentation analysis provides deeper insights into how these brains process information and “think”. The measures of information fragmentation and encryption that result from our methods also quantify complexity of information processing in these networks and how this processing complexity differs between primary exposure to sensory data (early in the lifetime) and later routine processing.  相似文献   
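The fragmentation idea can be made concrete with a small sketch. The paper's exact fragmentation-matrix and encryption measures are not reproduced here; the sketch below only scans node subsets and reports, from discrete state data, how much information (in bits) each subset's joint state carries about a target variable, using plug-in entropy estimates. All variable names are illustrative, and the XOR toy example shows information that is invisible in any single node but fully present in a pair.

```python
from collections import Counter
from itertools import combinations
from math import log2

import numpy as np

def entropy(samples):
    """Plug-in Shannon entropy (bits) of a sequence of hashable states."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def mutual_information(xs, ys):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) from paired samples."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

def fragmentation_scan(node_states, target, max_subset_size=2):
    """For every node subset up to a given size, report how much information
    (bits) the subset's joint state carries about the target variable."""
    n_nodes = node_states.shape[1]
    results = {}
    for k in range(1, max_subset_size + 1):
        for subset in combinations(range(n_nodes), k):
            joint = [tuple(row[list(subset)]) for row in node_states]
            results[subset] = mutual_information(joint, target)
    return results

# Toy example: node 2 XOR node 3 encodes the target, so the information is
# fully "fragmented" (and "encrypted") across the pair and invisible in single nodes.
rng = np.random.default_rng(0)
states = rng.integers(0, 2, size=(5000, 4))
target = list(states[:, 2] ^ states[:, 3])
for subset, mi in sorted(fragmentation_scan(states, target).items()):
    print(subset, round(mi, 3))
```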

3.
In previous research, we showed that ‘texts that tell a story’ exhibit a statistical structure that is not Maxwell–Boltzmann but Bose–Einstein. Our explanation is that this is due to the presence of ‘indistinguishability’ in human language as a result of the same words in different parts of the story being indistinguishable from one another, in much the same way that ’indistinguishability’ occurs in quantum mechanics, also there leading to the presence of Bose–Einstein rather than Maxwell–Boltzmann as a statistical structure. In the current article, we set out to provide an explanation for this Bose–Einstein statistics in human language. We show that it is the presence of ‘meaning’ in ‘texts that tell a story’ that gives rise to the lack of independence characteristic of Bose–Einstein, and provides conclusive evidence that ‘words can be considered the quanta of human language’, structurally similar to how ‘photons are the quanta of electromagnetic radiation’. Using several studies on entanglement from our Brussels research group, we also show, by introducing the von Neumann entropy for human language, that it is also the presence of ‘meaning’ in texts that makes the entropy of a total text smaller relative to the entropy of the words composing it. We explain how the new insights in this article fit in with the research domain called ‘quantum cognition’, where quantum probability models and quantum vector spaces are used in human cognition, and are also relevant to the use of quantum structures in information retrieval and natural language processing, and how they introduce ‘quantization’ and ‘Bose–Einstein statistics’ as relevant quantum effects there. Inspired by the conceptuality interpretation of quantum mechanics, and relying on the new insights, we put forward hypotheses about the nature of physical reality. In doing so, we note how this new type of decrease in entropy, and its explanation, may be important for the development of quantum thermodynamics. We likewise note how it can also give rise to an original explanatory picture of the nature of physical reality on the surface of planet Earth, in which human culture emerges as a reinforcing continuation of life.  相似文献   
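A rough way to probe the claim on any given text is to rank words by frequency, treat the rank as an 'energy level' (an assumption borrowed from the authors' earlier work on this topic), and compare a Bose–Einstein-shaped fit against a Maxwell–Boltzmann-shaped fit. The sketch below is illustrative only: 'story.txt' stands in for any story-telling text, and the parameterizations are one simple choice among several.

```python
from collections import Counter

import numpy as np
from scipy.optimize import curve_fit

def bose_einstein(n, A, B, C):
    """Bose-Einstein-like occupation of 'energy level' n (B, C > 0 keeps it finite)."""
    return A / (np.exp(B * n + C) - 1.0)

def maxwell_boltzmann(n, A, B):
    """Maxwell-Boltzmann-like occupation of 'energy level' n."""
    return A * np.exp(-B * n)

# 'story.txt' stands in for any text that tells a story.
words = open("story.txt", encoding="utf-8").read().lower().split()
freqs = np.array(sorted(Counter(words).values(), reverse=True), dtype=float)
levels = np.arange(1.0, len(freqs) + 1.0)   # rank of the word = its energy level

be_p, _ = curve_fit(bose_einstein, levels, freqs, p0=[freqs[0], 0.05, 0.1],
                    bounds=([0.0, 1e-9, 1e-9], [np.inf, np.inf, np.inf]), max_nfev=20000)
mb_p, _ = curve_fit(maxwell_boltzmann, levels, freqs, p0=[freqs[0], 0.05],
                    bounds=([0.0, 1e-9], [np.inf, np.inf]), max_nfev=20000)

def sse(model, params):
    """Sum of squared errors of a fitted model over all energy levels."""
    return float(np.sum((model(levels, *params) - freqs) ** 2))

print("Bose-Einstein fit, sum of squared errors:    ", sse(bose_einstein, be_p))
print("Maxwell-Boltzmann fit, sum of squared errors:", sse(maxwell_boltzmann, mb_p))
```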

4.
We provide a new formulation of the Local Friendliness no-go theorem of Bong et al. [Nat. Phys. 16, 1199 (2020)] from fundamental causal principles, providing another perspective on how it puts strictly stronger bounds on quantum reality than Bell’s theorem. In particular, quantum causal models have been proposed as a way to maintain a peaceful coexistence between quantum mechanics and relativistic causality while respecting Leibniz’s methodological principle. This works for Bell’s theorem but does not work for the Local Friendliness no-go theorem, which considers an extended Wigner’s Friend scenario. More radical conceptual renewal is required; we suggest that cleaving to Leibniz’s principle requires extending relativity to events themselves.  相似文献   

5.
Information theory is a well-established method for the study of many phenomena and more than 70 years after Claude Shannon first described it in A Mathematical Theory of Communication it has been extended well beyond Shannon’s initial vision. It is now an interdisciplinary tool that is used from ‘causal’ information flow to inferring complex computational processes and it is common to see it play an important role in fields as diverse as neuroscience, artificial intelligence, quantum mechanics, and astrophysics. In this article, I provide a selective review of a specific aspect of information theory that has received less attention than many of the others: as a tool for understanding, modelling, and detecting non-linear phenomena in finance and economics. Although some progress has been made in this area, it is still an under-developed area that I argue has considerable scope for further development.  相似文献   

6.
Neurofeedback training (NFT) has shown promising results in recent years as a tool to address the effects of age-related cognitive decline in the elderly. Since previous studies have linked reduced complexity of the electroencephalography (EEG) signal to the process of cognitive decline, we propose the use of non-linear methods to characterise changes in EEG complexity induced by NFT. In this study, we analyse the pre- and post-training EEG from 11 elderly subjects who performed an NFT based on motor imagery (MI–NFT). Spectral changes were studied using relative power (RP) from classical frequency bands (delta, theta, alpha, and beta), whilst multiscale entropy (MSE) was applied to assess complexity changes induced in the EEG. Furthermore, we analysed the subjects’ scores from Luria tests performed before and after MI–NFT. We found that MI–NFT induced a power shift towards rapid frequencies, as well as an increase in EEG complexity in all channels, except for C3. These improvements were most evident in frontal channels. Moreover, results from cognitive tests showed significant enhancement in intellectual and memory functions. Therefore, our findings suggest the usefulness of MI–NFT for improving cognitive functions in the elderly and encourage future studies to use MSE as a metric to characterise EEG changes induced by MI–NFT.
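Multiscale entropy is the central non-linear measure here. A compact reference implementation (coarse-graining followed by sample entropy) might look like the sketch below; it is not the authors' exact pipeline, EEG preprocessing is omitted, and the tolerance and embedding parameters are common defaults rather than values taken from the study.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn: negative log of the conditional probability that sequences
    matching for m points (Chebyshev distance <= r) also match for m+1 points."""
    x = np.asarray(x, dtype=float)
    n = len(x)

    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(n - m)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates - templates[i]), axis=1)
            count += np.sum(dist <= r) - 1      # exclude the self-match
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(signal, max_scale=20, m=2, r_factor=0.15):
    """Coarse-grain the signal at each scale and compute sample entropy."""
    signal = np.asarray(signal, dtype=float)
    r = r_factor * np.std(signal)
    mse = []
    for scale in range(1, max_scale + 1):
        n_windows = len(signal) // scale
        coarse = signal[:n_windows * scale].reshape(n_windows, scale).mean(axis=1)
        mse.append(sample_entropy(coarse, m=m, r=r))
    return np.array(mse)

# Toy check: for white noise, sample entropy decreases as the scale increases.
rng = np.random.default_rng(1)
noise = rng.standard_normal(3000)
print(multiscale_entropy(noise, max_scale=5))
```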

7.
Based on principles of physics, and combining principles of mechanical motion with electronic control technology, an intelligent human–machine Chinese chess (xiangqi) robot that plays on a physical board was designed and built. Hall-effect elements sense the magnetic fields of the pieces, electromagnets pick up and set down the pieces, a three-dimensional mechanical arm moves them, and a single-chip microcontroller program controls the motion of the arm, completing the entire process of a human playing chess against the robot. The results show that the robot is fully able to play Chinese chess against a human player on the physical board; it plays a strong game and moves smoothly.

8.
Among the existing bearing faults, ball faults are known to be the most difficult to detect and classify. In this work, we propose a diagnosis methodology for classifying these incipient faults using time series of vibration signals and their decomposition. Firstly, the vibration signals were decomposed using empirical mode decomposition (EMD), yielding time series of intrinsic mode functions (IMFs). By analysing the energy content and the components’ sensitivity to operating-point variation, only the most relevant IMFs were retained. Secondly, a statistical analysis based on statistical moments and the Kullback–Leibler divergence (KLD) was performed, allowing the extraction of the features most relevant and sensitive to the fault information. Thirdly, these features were used as inputs to statistical clustering techniques to perform the classification. In the framework of this paper, the efficiency of several families of techniques was investigated and compared, including linear, kernel-based nonlinear, systematic deterministic tree-based, and probabilistic techniques. The methodology’s performance was evaluated through the training accuracy rate (TrA), testing accuracy rate (TsA), training time (Trt), and testing time (Tst). The diagnosis methodology was applied to the Case Western Reserve University (CWRU) dataset. Using our proposed method, the initial EMD decomposition into eighteen IMFs was reduced to four, and the most relevant features, identified via the IMFs’ variance and the KLD, were extracted. Classification results showed that the linear classifiers were inefficient, and that kernel and data-mining classifiers achieved 100% classification rates through feature fusion. For comparison purposes, our proposed method demonstrated a certain superiority over multiscale permutation entropy. Finally, the results also showed that the training and testing times for all the classifiers were lower than 2 s and 0.2 s, respectively, and thus compatible with real-time applications.
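A skeleton of the pipeline (EMD, IMF selection, statistical features, classifier) might look like the following sketch, using the PyEMD package for the decomposition, the Kullback–Leibler divergence between IMF amplitude histograms as one of the features, and an RBF-kernel SVM from scikit-learn as a representative kernel classifier. It is illustrative only: the data are random stand-ins rather than CWRU signals, and the feature choices do not reproduce the paper's IMF-selection criteria.

```python
import numpy as np
from PyEMD import EMD                       # pip install EMD-signal
from scipy.stats import entropy             # entropy(p, q) gives the KL divergence
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def decompose(signal, n_imfs=4):
    """First few IMFs of a vibration segment, via empirical mode decomposition."""
    return EMD().emd(np.asarray(signal, dtype=float))[:n_imfs]

def imf_features(imfs, ref_imfs, n_imfs=4, bins=64):
    """Per-IMF features: variance, and KL divergence of the IMF's amplitude
    histogram from the corresponding IMF of a reference (healthy) segment."""
    feats = []
    for imf, ref in zip(imfs, ref_imfs):
        lo, hi = min(imf.min(), ref.min()), max(imf.max(), ref.max())
        p, _ = np.histogram(imf, bins=bins, range=(lo, hi), density=True)
        q, _ = np.histogram(ref, bins=bins, range=(lo, hi), density=True)
        feats += [float(np.var(imf)), float(entropy(p + 1e-12, q + 1e-12))]
    while len(feats) < 2 * n_imfs:           # pad if EMD returned fewer IMFs
        feats.append(0.0)
    return feats

# Stand-in data: in practice these would be CWRU vibration windows and fault labels.
rng = np.random.default_rng(0)
segments = [rng.standard_normal(2048) for _ in range(40)]
labels = np.array([i % 4 for i in range(40)])    # four fault classes
ref_imfs = decompose(segments[0])                # healthy reference (illustrative choice)

X = np.array([imf_features(decompose(s), ref_imfs) for s in segments])
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print("testing accuracy:", clf.score(X_test, y_test))
```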

9.
Estimates based on expert judgements of quantities of interest are commonly used to supplement or replace measurements when the latter are too expensive or impossible to obtain. Such estimates are commonly accompanied by information about the uncertainty of the estimate, such as a credible interval. To be considered well-calibrated, an expert’s credible intervals should cover the true (but unknown) values a certain percentage of time, equal to the percentage specified by the expert. To assess expert calibration, so-called calibration questions may be asked in an expert elicitation exercise; these are questions with known answers used to assess and compare experts’ performance. An approach that is commonly applied to assess experts’ performance by using these questions is to directly compare the stated percentage cover with the actual coverage. We show that this approach has statistical drawbacks when considered in a rigorous hypothesis testing framework. We generalize the test to an equivalence testing framework and discuss the properties of this new proposal. We show that comparisons made on even a modest number of calibration questions have poor power, which suggests that the formal testing of the calibration of experts in an experimental setting may be prohibitively expensive. We contextualise the theoretical findings with a couple of applications and discuss the implications of our findings.  相似文献   
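The contrast between the two testing approaches can be made concrete in a few lines: the usual approach tests the null hypothesis that the true coverage equals the stated level, whereas the equivalence approach (a TOST-style construction) only declares an expert well calibrated if coverage is demonstrably inside a tolerance band around that level. The 10-percentage-point margin below is an arbitrary illustration, not a recommended value.

```python
from scipy.stats import binom

def coverage_difference_test(hits, n, stated=0.8):
    """Classical two-sided test of H0: true coverage equals the stated level.
    Two-sided p-value obtained by doubling the smaller binomial tail."""
    lower = binom.cdf(hits, n, stated)       # P(X <= hits)
    upper = binom.sf(hits - 1, n, stated)    # P(X >= hits)
    return min(1.0, 2 * min(lower, upper))

def coverage_equivalence_test(hits, n, stated=0.8, margin=0.10):
    """TOST-style equivalence test: calibration is established only if coverage
    is demonstrably inside [stated - margin, stated + margin]. Returns the
    larger of the two one-sided p-values."""
    p_low = binom.sf(hits - 1, n, stated - margin)   # H0: coverage <= stated - margin
    p_high = binom.cdf(hits, n, stated + margin)     # H0: coverage >= stated + margin
    return max(p_low, p_high)

# An expert states 80% credible intervals; 8 of 10 calibration questions are covered.
hits, n = 8, 10
print("difference-test p :", round(coverage_difference_test(hits, n), 3))
print("equivalence-test p:", round(coverage_equivalence_test(hits, n), 3))
# With only 10 questions the equivalence test stays far from significance,
# illustrating the power problem discussed in the abstract.
```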

10.
Bell inequalities were created with the goal of improving the understanding of foundational questions in quantum mechanics. To this end, they are typically applied to measurement results generated from entangled systems of particles. They can, however, also be used as a statistical tool for macroscopic systems, where they can describe the connection strength between two components of a system under a causal model. We show that, in principle, data from macroscopic observations analyzed with Bell’s approach can invalidate certain causal models. To illustrate this use, we describe a macroscopic game setting, without a quantum mechanical measurement process, and analyze it using the framework of Bell experiments. In the macroscopic game, violations of the inequalities can be created by cheating with classically defined strategies. In the physical context, the meaning of violations is less clear and is still vigorously debated. We discuss two measures for optimal strategies to generate a given statistic that violates the inequalities. We show their mathematical equivalence and how they can be computed from CHSH quantities alone if non-signaling applies. As a macroscopic example from the financial world, we show how the unfair use of insider knowledge could be picked up using Bell statistics. Finally, in the discussion of realist interpretations of quantum mechanical Bell experiments, cheating strategies are often expressed through the ideas of free choice and locality. In this regard, violations of free choice and locality can be interpreted as two sides of the same coin, which underscores the view that the meaning these terms are given in Bell’s approach should not be confused with their everyday use. In general, we conclude that Bell’s approach also carries lessons for understanding macroscopic systems whose connectedness conforms to different causal structures.
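The CHSH quantity at the heart of such an analysis is easy to compute from recorded data: for each of the four setting pairs, estimate the correlator as the average product of the two outcomes (coded as +1/-1), then combine the four correlators. The sketch below uses hypothetical array names and a toy 'insider knowledge' strategy in which one party's output depends on the other party's setting, which pushes the CHSH value past the bound of 2 that constrains any non-cheating local causal model.

```python
import numpy as np

def correlator(out_a, out_b, set_a, set_b, x, y):
    """E(x, y): average product of the +/-1 outcomes on trials with settings (x, y)."""
    mask = (set_a == x) & (set_b == y)
    return np.mean(out_a[mask] * out_b[mask])

def chsh(out_a, out_b, set_a, set_b):
    """S = E(0,0) + E(0,1) + E(1,0) - E(1,1)."""
    return (correlator(out_a, out_b, set_a, set_b, 0, 0)
            + correlator(out_a, out_b, set_a, set_b, 0, 1)
            + correlator(out_a, out_b, set_a, set_b, 1, 0)
            - correlator(out_a, out_b, set_a, set_b, 1, 1))

# Toy data: a "cheating" strategy in which one side secretly knows the other
# side's setting (insider knowledge) can push S all the way to 4.
rng = np.random.default_rng(0)
n = 100_000
set_a = rng.integers(0, 2, n)
set_b = rng.integers(0, 2, n)
out_a = np.ones(n, dtype=int)                           # one side always outputs +1
out_b = np.where((set_a == 1) & (set_b == 1), -1, 1)    # the other side uses insider knowledge
print("S =", round(chsh(out_a, out_b, set_a, set_b), 3),
      "(local causal bound 2, quantum bound ~2.828)")
```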

11.
This review looks at some of the central relationships between artificial intelligence, psychology, and economics through the lens of information theory, specifically focusing on formal models of decision theory. In doing so, we look at a particular approach that each field has adopted and how information theory has informed the development of the ideas of each field. A key theme is expected utility theory, its connection to information theory, the Bayesian approach to decision-making, and forms of (bounded) rationality. What emerges from this review is a broadly unified formal perspective derived from three very different starting points that reflect the unique principles of each field. Each of the three approaches reviewed can, in principle at least, be implemented in a computational model in such a way that, with sufficient computational power, they could be compared with human abilities in complex tasks. However, a central critique that can be applied to all three approaches was first put forward by Savage in The Foundations of Statistics and recently brought to the fore by the economist Binmore: Bayesian approaches to decision-making work in what Savage called ‘small worlds’ but cannot work in ‘large worlds’. This point, in various different guises, is central to some of the current debates about the power of artificial intelligence and its relationship to human-like learning and decision-making. Recent work on artificial intelligence has gone some way to bridging this gap, but significant questions remain to be answered in all three fields in order to make progress in producing realistic models of human decision-making in the real world in which we live.

12.
Uncovering causal interdependencies from observational data is one of the great challenges of nonlinear time series analysis. In this paper, we discuss this topic with the help of an information-theoretic concept known as Rényi’s information measure. In particular, we tackle the directional information flow between bivariate time series in terms of Rényi’s transfer entropy. We show that by choosing Rényi’s parameter α, we can appropriately control information that is transferred only between selected parts of the underlying distributions. This, in turn, is a particularly potent tool for quantifying causal interdependencies in time series, where knowledge of “black swan” events, such as spikes or sudden jumps, is of key importance. In this connection, we first prove that for Gaussian variables, Granger causality and Rényi transfer entropy are entirely equivalent. Moreover, we also partially extend these results to heavy-tailed α-Gaussian variables. These results allow establishing a connection between autoregressive and Rényi entropy-based information-theoretic approaches to data-driven causal inference. To aid our intuition, we employed the Leonenko et al. entropy estimator and analyzed Rényi’s information flow between bivariate time series generated from two unidirectionally coupled Rössler systems. Notably, we find that Rényi’s transfer entropy not only allows us to detect a threshold of synchronization but also provides non-trivial insight into the structure of a transient regime that exists between the region of chaotic correlations and the synchronization threshold. In addition, from Rényi’s transfer entropy we could reliably infer the direction of coupling, and hence causality, only for coupling strengths smaller than the onset value of the transient regime, i.e., when the two Rössler systems are coupled but have not yet entered synchronization.
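The directional measure used here is Rényi's transfer entropy; its familiar Shannon limit (α → 1) already illustrates how a direction of coupling is detected from data. The sketch below uses a plug-in histogram estimator rather than the Leonenko et al. estimator, and two unidirectionally coupled noisy autoregressive processes rather than the Rössler systems of the paper, so it shows the mechanics only.

```python
from collections import Counter
from math import log2

import numpy as np

def transfer_entropy(source, target, n_bins=8):
    """Plug-in estimate of T_{source -> target} (bits), history length 1:
    sum over p(x_{t+1}, x_t, y_t) * log2[ p(x_{t+1}|x_t, y_t) / p(x_{t+1}|x_t) ]."""
    edges = np.linspace(0, 1, n_bins + 1)[1:-1]
    x = np.digitize(target, np.quantile(target, edges))
    y = np.digitize(source, np.quantile(source, edges))
    triples = list(zip(x[1:], x[:-1], y[:-1]))     # (x_{t+1}, x_t, y_t)
    n = len(triples)
    joint = Counter(triples)
    future_past = Counter((a, b) for a, b, _ in triples)   # (x_{t+1}, x_t)
    past_pair = Counter((b, c) for _, b, c in triples)     # (x_t, y_t)
    past = Counter(b for _, b, _ in triples)               # x_t
    te = 0.0
    for (a, b, c), n_abc in joint.items():
        te += (n_abc / n) * log2((n_abc * past[b]) / (past_pair[(b, c)] * future_past[(a, b)]))
    return te

# Unidirectionally coupled noisy AR(1) processes: Y drives X, not vice versa.
rng = np.random.default_rng(3)
n = 20_000
y = np.zeros(n); x = np.zeros(n)
for t in range(1, n):
    y[t] = 0.7 * y[t - 1] + rng.standard_normal()
    x[t] = 0.4 * x[t - 1] + 0.6 * y[t - 1] + rng.standard_normal()

print("TE(Y -> X):", round(transfer_entropy(y, x), 3))   # clearly positive
print("TE(X -> Y):", round(transfer_entropy(x, y), 3))   # near zero (up to estimator bias)
```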

13.
Various mathematical frameworks play an essential role in understanding the economic systems and the emergence of crises in them. Understanding the relation between the structure of connections between the system’s constituents and the emergence of a crisis is of great importance. In this paper, we propose a novel method for the inference of economic systems’ structures based on complex networks theory utilizing the time series of prices. Our network is obtained from the correlation matrix between the time series of companies’ prices by imposing a threshold on the values of the correlation coefficients. The optimal value of the threshold is determined by comparing the spectral properties of the threshold network and the correlation matrix. We analyze the community structure of the obtained networks and the relation between communities’ inter and intra-connectivity as indicators of systemic risk. Our results show how an economic system’s behavior is related to its structure and how the crisis is reflected in changes in the structure. We show how regulation and deregulation affect the structure of the system. We demonstrate that our method can identify high systemic risks and measure the impact of the actions taken to increase the system’s stability.  相似文献   
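The basic construction (correlation matrix of returns, thresholding, network, community structure) is easy to prototype. The sketch below uses synthetic sector-structured returns and a fixed threshold of 0.3; in the paper the threshold is instead selected by comparing spectral properties of the thresholded network and the correlation matrix.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(7)

# Synthetic returns for 30 companies in 3 latent sectors (stand-in data).
n_companies, n_days, n_sectors = 30, 500, 3
sector_factors = rng.standard_normal((n_sectors, n_days))
returns = np.vstack([0.7 * sector_factors[i % n_sectors] + 0.7 * rng.standard_normal(n_days)
                     for i in range(n_companies)])

corr = np.corrcoef(returns)

threshold = 0.3                      # the paper selects this from spectral criteria
adjacency = (np.abs(corr) >= threshold).astype(int)
np.fill_diagonal(adjacency, 0)

G = nx.from_numpy_array(adjacency)
communities = greedy_modularity_communities(G)
print("number of communities:", len(communities))
for i, com in enumerate(communities):
    print(f"community {i}: {sorted(com)}")

# Inter- vs intra-community connectivity as a crude indicator of system structure.
intra = sum(adjacency[u][v] for com in communities for u in com for v in com if u < v)
total = adjacency.sum() // 2
print("fraction of intra-community edges:", intra / total)
```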

14.
15.
In 2016, Steve Gull outlined a proof of Bell’s theorem using Fourier theory. Gull’s philosophy is that Bell’s theorem (or perhaps a key lemma in its proof) can be seen as a no-go theorem for a project in distributed computing with classical, not quantum, computers. We present his argument, correcting misprints and filling gaps. In his argument, there were two completely separated computers in the network. We need three in order to fill all the gaps in his proof: a third computer supplies a stream of random numbers to the two computers representing the two measurement stations in Bell’s work. One could also imagine that computer being replaced by a cloned, virtual computer, generating the same pseudo-random numbers within each of Alice and Bob’s computers. Either way, we need to assume the presence of shared randomness in the form of a synchronised sequence of realisations of i.i.d. hidden variables underlying the otherwise deterministic physics of the sequence of trials. Gull’s proof then just needs a third step: rewriting an expectation as the expectation of a conditional expectation given the hidden variables.
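Gull's framing invites a direct simulation: a third 'source' computer broadcasts the same i.i.d. hidden variable to two separated computers, which output +1 or -1 as deterministic functions of their local setting and that shared randomness. No such classical network can exceed the CHSH bound of 2, whereas the quantum singlet correlations reach 2√2 ≈ 2.83. The response functions below are arbitrary illustrative choices that happen to saturate the classical bound.

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials = 200_000

# Third computer: broadcasts the same i.i.d. hidden variable to both stations.
hidden = rng.uniform(0, 2 * np.pi, n_trials)

# The two measurement-station computers: deterministic +/-1 responses
# to (local setting, shared hidden variable). Illustrative choices only.
ANGLES_A = [0.0, np.pi / 2]
ANGLES_B = [np.pi / 4, -np.pi / 4]

def respond(angle, lam):
    """Deterministic +/-1 output given a setting angle and the hidden variable."""
    return np.where(np.cos(lam - angle) >= 0, 1, -1)

def correlator(x, y):
    """E(x, y): average product of the two stations' outputs for settings (x, y)."""
    a = respond(ANGLES_A[x], hidden)
    b = respond(ANGLES_B[y], hidden)
    return np.mean(a * b)

S = correlator(0, 0) + correlator(0, 1) + correlator(1, 0) - correlator(1, 1)
print(f"classical network CHSH S = {S:.3f}  (local bound 2, quantum bound ~2.828)")
```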

16.
Psychotherapy involves the modification of a client’s worldview to reduce distress and enhance well-being. We take a human dynamical systems approach to modeling this process, using Reflexively Autocatalytic foodset-derived (RAF) networks. RAFs have been used to model the self-organization of adaptive networks associated with the origin and early evolution of biological life, as well as with the evolution and development of the kind of cognitive structure necessary for cultural evolution. The RAF approach is applicable in these seemingly disparate cases because it provides a theoretical framework for formally describing under what conditions systems composed of elements that interact and ‘catalyze’ the formation of new elements collectively become integrated wholes. In our application, the elements are mental representations, and the whole is a conceptual network. The initial components—referred to as foodset items—are mental representations that are innate, or were acquired through social learning or individual learning (of pre-existing information). The new elements—referred to as foodset-derived items—are mental representations that result from creative thought (resulting in new information). In clinical psychology, a client’s distress may be due to, or exacerbated by, one or more beliefs that diminish self-esteem. Such beliefs may be formed and sustained through distorted thinking, and the tendency to interpret ambiguous events as confirmation of these beliefs. We view psychotherapy as a creative collaborative process between therapist and client, in which the output is not an artwork or invention but a more well-adapted worldview and approach to life on the part of the client. In this paper, we model a hypothetical albeit representative example of the formation and dissolution of such beliefs over the course of a therapist–client interaction using RAF networks. We show how the therapist is able to elicit this worldview from the client and create a conceptualization of the client’s concerns. We then formally demonstrate four distinct ways in which the therapist is able to facilitate change in the client’s worldview: (1) challenging the client’s negative interpretations of events, (2) providing direct evidence that runs contrary to and counteracts the client’s distressing beliefs, (3) using self-disclosure to provide examples of strategies one can use to defuse a negative conclusion, and (4) reinforcing the client’s attempts to assimilate such strategies into their own ways of thinking. We then discuss the implications of such an approach for expanding our knowledge of the development of mental health concerns and the trajectory of therapeutic change.
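The RAF formalism behind the model can be made concrete with the standard max-RAF procedure of Hordijk and Steel: given a food set and a set of catalysed 'reactions' (here, steps that combine mental representations into new ones), repeatedly discard reactions whose reactants are not reachable from the food set or whose catalysts are not, until a fixed point is reached. The sketch below is a generic implementation on a toy conceptual network with invented item names, not the paper's therapist–client model.

```python
def closure(food, reactions):
    """All items producible from the food set by applying reactions
    (catalysis is ignored when computing reachability of reactants)."""
    reachable = set(food)
    changed = True
    while changed:
        changed = False
        for reactants, products, _ in reactions:
            if reactants <= reachable and not products <= reachable:
                reachable |= products
                changed = True
    return reachable

def max_raf(food, reactions):
    """Largest reaction subset that is Reflexively Autocatalytic (every reaction
    has a catalyst in the closure) and Food-generated (every reactant is in it)."""
    current = list(reactions)
    while True:
        reachable = closure(food, current)
        kept = [r for r in current if r[0] <= reachable and r[2] & reachable]
        if len(kept) == len(current):
            return kept
        current = kept

# Toy "conceptual network": (reactants, products, catalysts); all names invented.
food = {"innate_schema", "learned_fact"}
reactions = [
    ({"innate_schema", "learned_fact"}, {"belief_A"}, {"learned_fact"}),
    ({"belief_A"}, {"belief_B"}, {"belief_B"}),          # mutually sustaining loop
    ({"belief_B"}, {"belief_A"}, {"belief_A"}),
    ({"missing_input"}, {"belief_C"}, {"belief_A"}),     # reactants unreachable -> dropped
]
raf = max_raf(food, reactions)
print(f"max RAF contains {len(raf)} of {len(reactions)} reactions")
```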

17.
The power output of Stirling engines can be optimized by several means. In this study, the focus is on potential performance improvements that can be achieved by optimizing the piston motion of an alpha-Stirling engine in the presence of dissipative processes, in particular mechanical friction. We use a low-effort endoreversible Stirling engine model, which allows for the incorporation of finite heat and mass transfer as well as the friction caused by the piston motion. Instead of parameterizing the piston motion and optimizing these parameters, we here use an indirect iterative gradient method that is based on Pontryagin’s maximum principle. For varying friction coefficients, the optimization results are compared both to a harmonic piston motion and to optimization results found in a previous study, in which a parameterized piston motion had been used. We thus show how much performance can be improved by using the more sophisticated and numerically more expensive iterative gradient method.

18.
We study Arrow’s Impossibility Theorem in the quantum setting. Our work is based on the work of Bao and Halpern, in which it is proved that the quantum analogue of Arrow’s Impossibility Theorem is not valid. However, we are not fully satisfied with the proof presented in Bao and Halpern’s work. Moreover, the definition of Quantum Independence of Irrelevant Alternatives (QIIA) in Bao and Halpern’s work does not seem appropriate to us. We give a better definition of QIIA, which properly captures the idea of the independence of irrelevant alternatives, and a detailed proof of the violation of Arrow’s Impossibility Theorem in the quantum setting with the modified definition.

19.
In this paper, we generalize the notion of Shannon’s entropy power to the Rényi-entropy setting. With this, we propose generalizations of the de Bruijn identity, isoperimetric inequality, or Stam inequality. This framework not only allows for finding new estimation inequalities, but it also provides a convenient technical framework for the derivation of a one-parameter family of Rényi-entropy-power-based quantum-mechanical uncertainty relations. To illustrate the usefulness of the Rényi entropy power obtained, we show how the information probability distribution associated with a quantum state can be reconstructed in a process that is akin to quantum-state tomography. We illustrate the inner workings of this with the so-called “cat states”, which are of fundamental interest and practical use in schemes such as quantum metrology. Salient issues, including the extension of the notion of entropy power to Tsallis entropy and ensuing implications in estimation theory, are also briefly discussed.  相似文献   
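For orientation, the two ingredients being combined are the Rényi differential entropy and the entropy power. In the Shannon case the entropy power of an n-dimensional random vector X is given by the standard convention below; the paper's generalization replaces h(X) by its Rényi counterpart h_α(X) (with an α-dependent normalisation that is not reproduced here).

```latex
% Rényi differential entropy of order \alpha, reducing to the Shannon entropy h(X) as \alpha -> 1:
h_{\alpha}(X) = \frac{1}{1-\alpha}\,\ln \int f_X^{\alpha}(x)\,\mathrm{d}x ,
\qquad
\lim_{\alpha \to 1} h_{\alpha}(X) = h(X).
% Shannon entropy power of an n-dimensional random vector X:
N(X) = \frac{1}{2\pi e}\,\exp\!\Big(\tfrac{2}{n}\,h(X)\Big).
```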

20.
For the science of autonomous human–machine systems, traditional causal-time interpretations of reality in known contexts are sufficient for rational decisions and actions to be taken, but not for uncertain or dynamic contexts, nor for building the best teams. First, unlike game theory where the contexts are constructed for players, or machine learning where contexts must be stable, when facing uncertainty or conflict, a rational process is insufficient for decisions or actions to be taken; second, as supported by the literature, rational explanations cannot disaggregate human–machine teams. In the first case, interdependent humans facing uncertainty spontaneously engage in debate over complementary tradeoffs in a search for the best path forward, characterized by maximum entropy production (MEP); however, in the second case, signified by a reduction in structural entropy production (SEP), interdependent team structures make it rationally impossible to discern what creates better teams. In our review of evidence for SEP–MEP complementarity for teams, we found that structural redundancy for top global oil producers, replicated for top global militaries, impedes interdependence and promotes corruption. Next, using UN data for Middle Eastern North African nations plus Israel, we found that a nation’s structure of education is significantly associated with MEP by the number of patents it produces; this conflicts with our earlier finding that a U.S. Air Force education in air combat maneuvering was not associated with the best performance in air combat, but air combat flight training was. These last two results exemplify that SEP–MEP interactions by the team’s best members are made by orthogonal contributions. We extend our theory to find that competition between teams hinges on vulnerability, a complementary excess of SEP and reduced MEP, which generalizes to autonomous human–machine systems.  相似文献   

