Subscription full text: 17    Free full text: 1
By subject: Chemistry 1, Mechanics 1, Mathematics 3, Physics 13
By year: 2022 (3), 2021 (6), 2020 (1), 2014 (4), 2011 (1), 2010 (1), 2008 (1), 1997 (1)
18 results found (search time: 15 ms)
1.
周栋焯 《计算数学》2021,43(2):133-161
Computational neuroscience is an emerging interdisciplinary field that has arisen over the past three decades. It emphasizes quantitative mathematical approaches, such as mathematical modeling, theoretical analysis, and numerical simulation, to study and solve important scientific problems in neuroscience. On the one hand, experimental phenomena in neuroscience provide the basis for developing new mathematical models, theories, and algorithms; on the other hand, mathematical quantification can in turn reveal the mathematical and physical mechanisms behind these experimental phenomena and uncover new scientific laws. Following the European Union's, the United States', Japan's, and China's brain...
2.
In this paper it will be shown that in neural systems with a recurrent architecture, the traditional concepts of knowledge representation can no longer be applied; no stable representational relationship of reference can be found. That is why a redefinition of the relationship between the states of the environment and the internal representational states is proposed. Studying the dynamics of recurrent neural systems reveals that the goal of representation is no longer to map the environment as accurately as possible onto the representation system (e.g., onto symbols). It is suggested that it is more appropriate to view neural systems as physical dynamical devices embodying the (transformation) knowledge for sensorimotor integration and for generating adequate behavior that enables the organism's survival. As an implication, the representation is determined not only by the environment, but also depends strongly on the organization, structure, and constraints of the representation system, as well as on the sensory/motor systems embedded in a particular body structure. This leads to a system-relative concept of representation. By transforming recurrent neural networks into the domain of finite automata, the dynamics as well as the epistemological implications become clearer. In recurrent neural systems a kind of balance between the autonomy of the representation and the environmental dependence/influence emerges. This not only affects the traditional concept of knowledge representation, but also has implications for the understanding of semantics, language, communication, and even science.
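The claim that a recurrent network can be read as a finite automaton can be made concrete with a toy illustration (not from the paper): a binary threshold network with finitely many units has finitely many states, so enumerating its state transitions yields an automaton over the input alphabet. The two-unit architecture, weights, and thresholds below are arbitrary choices for the sketch.

```python
import itertools
import numpy as np

# Hypothetical two-neuron binary threshold network (illustrative only):
# next state s' = step(W @ s + b_in * input - theta), input drawn from {0, 1}.
W = np.array([[0.0, 1.0],
              [1.0, 0.0]])    # mutual excitation between the two units
b_in = np.array([1.0, 0.0])   # the external input drives only the first unit
theta = np.array([0.5, 0.5])  # firing thresholds

def step(state, u):
    """One synchronous update of the binary threshold network."""
    activation = W @ np.array(state) + b_in * u - theta
    return tuple(int(a > 0) for a in activation)

# Enumerating all (state, input) pairs gives a finite transition table,
# i.e. the network viewed as a finite automaton over the alphabet {0, 1}.
for s in itertools.product([0, 1], repeat=2):
    for u in (0, 1):
        print(f"state {s}, input {u} -> {step(s, u)}")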
3.
Light is a powerful investigational tool in biomedicine, at all levels of structural organization. Its many features (intensity, wavelength, polarization, interference, coherence, timing, non-linear absorption, and even interactions with itself), each able to create contrast and thus images that detail the makeup and functioning of the living state, can and should be combined for maximum effect, especially if one seeks simultaneously high spatiotemporal resolution and discrimination ability within a living organism. The resulting high relevance should be directed towards better understanding, detection of abnormalities, and ultimately cogent, precise, and effective intervention. The new optical methods and their combinations needed to address modern surgery in the operating room of the future, and major diseases such as cancer and neurodegeneration, are reviewed here, with emphasis on our own work and highlighting selected applications focusing on quantitation, early detection, treatment assessment, and clinical relevance, and, more generally, on matching the quality of the optical detection approach to the complexity of the disease. This should provide guidance for future advanced theranostics, emphasizing a tighter coupling, spatially and temporally, between detection, diagnosis, and treatment, in the hope that technologic sophistication such as that of a Mars rover can be translationally deployed in the clinic to save and improve lives.
4.
This review looks at some of the central relationships between artificial intelligence, psychology, and economics through the lens of information theory, focusing specifically on formal models of decision theory. In doing so we look at a particular approach that each field has adopted and how information theory has informed the development of each field's ideas. A key theme is expected utility theory, its connection to information theory, the Bayesian approach to decision-making, and forms of (bounded) rationality. What emerges from this review is a broadly unified formal perspective derived from three very different starting points that reflect the unique principles of each field. Each of the three approaches reviewed can, in principle at least, be implemented in a computational model in such a way that, with sufficient computational power, they could be compared with human abilities in complex tasks. However, a central critique that can be applied to all three approaches was first put forward by Savage in The Foundations of Statistics and recently brought to the fore by the economist Binmore: Bayesian approaches to decision-making work in what Savage called ‘small worlds’ but cannot work in ‘large worlds’. This point, in various guises, is central to some of the current debates about the power of artificial intelligence and its relationship to human-like learning and decision-making. Recent work on artificial intelligence has gone some way towards bridging this gap, but significant questions remain to be answered in all three fields in order to make progress in producing realistic models of human decision-making in the real world in which we live.
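As a minimal illustration of the Savage-style 'small world' setting the review discusses, the sketch below (illustrative numbers only, not taken from the review) updates a prior by Bayes' rule and then picks the action with maximal posterior expected utility.

```python
import numpy as np

# Toy "small world" decision problem: two hidden states, a binary
# observation, and two actions.  All numbers are made up for illustration.
prior = np.array([0.5, 0.5])                # P(state)
p_obs_given_state = np.array([[0.8, 0.3],   # P(obs = 0 | state)
                              [0.2, 0.7]])  # P(obs = 1 | state)
utility = np.array([[ 1.0, -1.0],           # U(action = 0, state)
                    [-0.5,  2.0]])          # U(action = 1, state)

def bayes_posterior(obs):
    """P(state | obs) by Bayes' rule."""
    unnorm = p_obs_given_state[obs] * prior
    return unnorm / unnorm.sum()

def best_action(obs):
    """Pick the action maximising posterior expected utility."""
    post = bayes_posterior(obs)
    expected = utility @ post               # E[U(a) | obs] = sum_s U(a, s) P(s | obs)
    return int(np.argmax(expected)), expected

for obs in (0, 1):
    action, eu = best_action(obs)
    print(f"obs={obs}: expected utilities={eu}, best action={action}")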
5.
The increasing sophistication of the tools and results of cellular and molecular neuroscience would appear to suggest that explanatory force in neuroscience is defined by reduction to molecular biology. This view, however, is mistaken in that it loses sight of the goal of neuroscience proper: the characterization of the information content of biophysical variables and of the transformations of these variables that lead to behaviors. Neuroscience is thus distinguished from applied molecular and cellular biology by the notion of computation. In this commentary, I will show how the notion of computation in neuroscience differs from that of other fields that investigate complex systems and argue that computation at various levels of neural organization is the paramount goal of neuroscientific explanation, not the so-called reduction of "mind to molecules." © 2009 Wiley Periodicals, Inc. Complexity, 16, 10–19, 2010
6.
7.
8.
Commonly used rating scales and tests have been found to lack reliability and validity, for example in studies of neurodegenerative diseases, because they neither make recourse to the inherent ordinality of human responses nor acknowledge the separability of person ability and item difficulty parameters according to the well-known Rasch model. Here, we adopt an information theory approach, extending in particular the deployment of the classic Brillouin entropy expression to explain the difficulty of recalling non-verbal sequences in memory tests (i.e., the Corsi Block Test and the Digit Span Test): a more ordered task, of lower entropy, will generally be easier to perform. Construct specification equations (CSEs), as part of a methodological development in which entropy-based variables dominate, are found experimentally to explain (R² = 0.98) and predict the construct of task difficulty for short-term memory tests using data from the NeuroMET (n = 88) and Gothenburg MCI (n = 257) studies. We propose entropy-based equivalence criteria whereby different tasks (in the form of items) from different tests can be combined, enabling new memory tests to be formed by choosing a bespoke selection of items and leading to more efficient testing, improved reliability (reduced uncertainties), and validity. This provides opportunities for more practical and accurate measurement in clinical practice, research, and trials.
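The Brillouin expression referred to here is, in its classic form, H = (1/N) log₂(N! / ∏ᵢ nᵢ!), where nᵢ counts each distinct symbol in a sequence of length N. A minimal sketch under that assumption (the paper's full construct specification equations are richer than this) is:

```python
from math import factorial, log2
from collections import Counter

def brillouin_entropy(sequence):
    """Brillouin entropy H = (1/N) * log2( N! / prod_i n_i! ), where n_i
    counts each distinct symbol.  A more ordered (repetitive) sequence has
    lower H and, by the abstract's argument, is an easier recall task."""
    counts = Counter(sequence)
    N = len(sequence)
    log_multinomial = log2(factorial(N)) - sum(log2(factorial(n)) for n in counts.values())
    return log_multinomial / N

# Illustrative digit-span items (made up, not from the NeuroMET data):
print(brillouin_entropy([1, 1, 1, 1, 1, 1]))   # highly ordered -> low entropy
print(brillouin_entropy([3, 1, 4, 1, 5, 9]))   # more varied    -> higher entropy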
9.
In 1943, McCulloch and Pitts introduced a discrete recurrent neural network as a model for computation in brains. The work inspired breakthroughs such as the first computer design and the theory of finite automata. We focus on learning in Hopfield networks, a special case with symmetric weights and fixed-point attractor dynamics. Specifically, we explore minimum energy flow (MEF) as a scalable convex objective for determining network parameters. We catalog various properties of MEF, such as biological plausibility, and then compare it to classical approaches in the theory of learning. Trained Hopfield networks can perform unsupervised clustering and define novel error-correcting coding schemes. They also efficiently find hidden structures (cliques) in graph theory. We extend this known connection from graphs to hypergraphs and discover n-node networks with robust storage of 2^{Ω(n^{1−ϵ})} memories for any ϵ > 0. In the case of graphs, we also determine a critical ratio of training samples at which networks generalize completely.
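For orientation, the sketch below implements the classical Hopfield ingredients the abstract builds on: symmetric weights, the quadratic energy E(x) = -½ xᵀWx, and asynchronous threshold updates that never increase the energy. It uses plain Hebbian (outer-product) storage as a baseline; the paper's minimum energy flow objective is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def hebbian_weights(patterns):
    """Classical outer-product (Hebbian) storage -- a baseline, not the
    paper's minimum-energy-flow objective."""
    P = np.array(patterns, dtype=float)      # rows are ±1 patterns
    W = P.T @ P / P.shape[1]
    np.fill_diagonal(W, 0.0)                 # symmetric, zero diagonal
    return W

def energy(W, x):
    """Hopfield energy E(x) = -1/2 x^T W x; attractors are local minima."""
    return -0.5 * x @ W @ x

def recall(W, x, steps=100):
    """Asynchronous threshold updates; each flip never increases the energy."""
    x = x.copy()
    for _ in range(steps):
        i = rng.integers(len(x))
        x[i] = 1 if W[i] @ x >= 0 else -1
    return x

patterns = [np.array([1, -1, 1, -1, 1, -1]), np.array([1, 1, 1, -1, -1, -1])]
W = hebbian_weights(patterns)
noisy = patterns[0].copy()
noisy[0] *= -1                               # corrupt one bit of the first pattern
recovered = recall(W, noisy)
print("recovered:", recovered, " energy:", energy(W, recovered))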
10.
The varied cognitive abilities and rich adaptive behaviors enabled by the animal nervous system are often described in terms of information processing. This framing raises the issue of how biological neural circuits actually process information, and some of the most fundamental outstanding questions in neuroscience center on understanding the mechanisms of neural information processing. Classical information theory has long been understood to be a natural framework within which information processing can be understood, and recent advances in the field of multivariate information theory offer new insights into the structure of computation in complex systems. In this review, we provide an introduction to the conceptual and practical issues associated with using multivariate information theory to analyze information processing in neural circuits, as well as discussing recent empirical work in this vein. Specifically, we provide an accessible introduction to the partial information decomposition (PID) framework. PID reveals redundant, unique, and synergistic modes by which neurons integrate information from multiple sources. We focus particularly on the synergistic mode, which quantifies the “higher-order” information carried in the patterns of multiple inputs and is not reducible to input from any single source. Recent work in a variety of model systems has revealed that synergistic dynamics are ubiquitous in neural circuitry and show reliable structure–function relationships, emerging disproportionately in neuronal rich clubs, downstream of recurrent connectivity, and in the convergence of correlated activity. We draw on the existing literature on higher-order information dynamics in neuronal networks to illustrate the insights that have been gained by taking an information decomposition perspective on neural activity. Finally, we briefly discuss future promising directions for information decomposition approaches to neuroscience, such as work on behaving animals, multi-target generalizations of PID, and time-resolved local analyses.
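The notion of synergy can be illustrated with the textbook XOR example (an illustration, not taken from this review): each input alone carries zero information about the output, yet the two inputs jointly determine it, so the full bit of mutual information is synergistic.

```python
import numpy as np
from itertools import product

def mutual_information(joint):
    """I(A;B) in bits from a joint probability table p[a, b]."""
    pa = joint.sum(axis=1, keepdims=True)
    pb = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (pa @ pb)[nz])).sum())

# XOR with uniform inputs: Y = X1 ^ X2, the canonical example of pure synergy.
p = np.zeros((2, 2, 2))                      # p[x1, x2, y]
for x1, x2 in product((0, 1), repeat=2):
    p[x1, x2, x1 ^ x2] = 0.25

print("I(X1;Y)    =", mutual_information(p.sum(axis=1)))    # 0 bits
print("I(X2;Y)    =", mutual_information(p.sum(axis=0)))    # 0 bits
print("I(X1,X2;Y) =", mutual_information(p.reshape(4, 2)))  # 1 bit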