Similar Articles
20 similar articles found (search time: 296 ms)
1.
Training can induce significant changes in brain function and behavioral performance; one outcome of training is an altered pattern of brain activation. Abacus-based mental calculation (AMC) training enables abacus experts to perform arithmetic tasks with unusual speed and high accuracy. However, the neural mechanisms of digit memory processing in children who have received AMC training remain unknown. In this paper we investigate the effect of long-term AMC training on children's digit working memory. We recruited 17 AMC-trained children and 17 matched control children, and collected functional magnetic resonance imaging (fMRI) data during two working-memory tasks (digits and abacus beads) as well as resting-state data. The fMRI results showed that, relative to the control group, the AMC group significantly activated the right posterior superior parietal lobule / superior occipital gyrus and the right supplementary motor area in both tasks. We then took these regions showing significant between-group differences as regions of interest and performed a whole-brain functional connectivity analysis of the resting-state data. The between-group comparison showed that, relative to controls, the AMC group exhibited significantly enhanced functional connectivity between the right supplementary motor area and the right inferior frontal gyrus. These results suggest that long-term AMC training may increase the involvement of the visuospatial network in numerical memory tasks. As the right inferior frontal gyrus is regarded as an attention-related region, the functional connectivity results suggest that long-term AMC training may promote functional integration between the visuospatial and attention networks.
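The resting-state functional connectivity analysis described above is typically computed as the Pearson correlation between the time series of two regions of interest. A minimal sketch on synthetic data (the ROI names and the coupling strength below are illustrative, not taken from the study):

```python
import math
import random

def pearson(x, y):
    """Pearson correlation between two equal-length time series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

random.seed(0)
# Synthetic BOLD-like series for two ROIs; the "SMA" series partially
# drives the "IFG" series, mimicking enhanced functional connectivity.
sma = [random.gauss(0, 1) for _ in range(200)]
ifg = [0.7 * s + 0.5 * random.gauss(0, 1) for s in sma]
r = pearson(sma, ifg)
```

In a real analysis the two series would be the mean BOLD signal extracted from each region of interest; correlations are usually Fisher z-transformed before group comparison.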

2.
Training can induce significant changes in brain function and behavioral performance; one outcome of training is an altered pattern of brain activation. Abacus-based mental calculation (AMC) training enables abacus experts to perform arithmetic tasks with unusual speed and high accuracy. However, the neural mechanisms of digit memory processing in AMC-trained children remain unknown. In this paper we investigate the effect of long-term AMC training on children's digit working memory. We recruited 17 AMC-trained children and 17 matched control children and collected functional magnetic resonance imaging (fMRI) data during two working-memory tasks (digits and abacus beads) together with resting-state data. The fMRI results showed that, relative to controls, the AMC children significantly activated the right posterior superior parietal lobule / superior occipital gyrus and the right supplementary motor area in both tasks. Taking these regions of significant between-group difference as regions of interest, a whole-brain functional connectivity analysis of the resting-state data showed significantly enhanced connectivity between the right supplementary motor area and the right inferior frontal gyrus in the AMC group relative to controls. The results suggest that long-term AMC training may increase the involvement of the visuospatial network in numerical memory tasks; since the right inferior frontal gyrus is regarded as an attention-related region, the connectivity results further suggest that long-term AMC training may promote functional integration between the visuospatial and attention networks.

3.
金灏  陈林  张裕恒 《中国科学A辑》2000,30(9):828-841
For high-temperature superconductors in which magnetic relaxation has or has not reached the equilibrium state, both forward and backward thermally activated flux hopping is considered (backward hopping meaning hopping from lower to higher potential energy), and, within the Bean model, the E-j relation of a high-temperature superconducting slab under an applied magnetic field and an applied current is solved rigorously. The theory shows that on the side of the slab where the field generated by the current is parallel to the applied field, the lnE-lnj curve has only positive curvature, whereas on the opposite side, within a certain field range, the lnE-lnj curve can exhibit both positive and negative curvature on the same curve; a theoretical proof is given. The conditions on critical current density, applied field, and temperature required for the appearance of positive and negative curvature are also discussed.

4.
Neurodynamic analysis of cochlear hair cell activity  (Total citations: 1; self-citations: 1; other citations: 0)
To gain a deeper understanding of the neurodynamic mechanisms of cochlear hair cell activity, a hair cell model based on the Hodgkin-Huxley equations was built, and the membrane potential, power, and energy consumption of hair cells under sound stimulation at different frequencies were analyzed through numerical simulation. The results show that, for sound frequencies in the range 0.1-20 kHz, the attenuation of the outer hair cell membrane potential is lower than that of inner hair cells, while the gain in power and energy consumption of outer hair cells is far higher than that of inner hair cells. The low attenuation of the outer hair cell membrane potential and the high gain in power and energy consumption support the view that the amplification performed by outer hair cells is driven by electromotility. These results on membrane potential, power, and energy consumption help to deepen the understanding of the neurodynamic properties of hair cell activity.
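The frequency-dependent attenuation of the membrane potential described above can be illustrated with a much simpler model than the paper's Hodgkin-Huxley one: a passive (leaky RC) membrane, which acts as a low-pass filter. All parameter values below are illustrative:

```python
import math

def membrane_response(freq_hz, tau=1e-3, dt=1e-6, t_end=0.02):
    """Peak-to-peak steady-state voltage of a passive (leaky RC) membrane
    driven by a unit sinusoidal current; leak conductance normalised to 1,
    so the dynamics are dV/dt = (I(t) - V) / tau."""
    v = 0.0
    settled = []
    for i in range(int(t_end / dt)):
        t = i * dt
        v += dt * (math.sin(2 * math.pi * freq_hz * t) - v) / tau
        if t > t_end / 2:          # discard the initial transient
            settled.append(v)
    return max(settled) - min(settled)

# Low-pass behaviour: the response attenuates as stimulus frequency rises,
# consistent with the membrane-potential attenuation described above.
amp_low = membrane_response(100.0)
amp_high = membrane_response(5000.0)
```

With a membrane time constant of 1 ms the corner frequency is about 160 Hz, so a 5 kHz stimulus is strongly attenuated relative to a 100 Hz one; the full Hodgkin-Huxley model adds voltage-gated conductances on top of this passive behaviour.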

5.
Coupled neuron systems under different complex network topologies (small-world, scale-free, and random networks) are considered, and the synchronization route by which they reach phase synchronization is modeled and simulated. The systems are found to exhibit non-clustered phase synchronization, and a qualitative analysis of its cause is given. The results show that coupled neuron systems on complex networks behave the same as on regular networks: neither exhibits the clustering into groups commonly seen in coupled phase oscillators; instead, as the coupling strength increases, all neurons gradually approach synchrony. In addition, the insertion and merging of firing spikes ultimately cause the population-averaged frequency first to increase and then to decline. These results enrich the understanding of network dynamical behavior (especially phase synchronization) and are of some significance for understanding neurocognitive science.
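Phase synchronization of coupled units on a random network can be sketched with the classic Kuramoto model (a stand-in for the neuron models used in the paper; graph size, coupling, and frequency spread are illustrative). The order parameter r rises toward 1 as coupling strength increases:

```python
import math
import random

def simulate(coupling, n=40, p=0.25, steps=1500, dt=0.01, seed=1):
    """Kuramoto phase oscillators on an Erdos-Renyi random graph; returns the
    final order parameter r in [0, 1] (r -> 1 means phase synchrony)."""
    rng = random.Random(seed)
    nbrs = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                nbrs[i].append(j)
                nbrs[j].append(i)
    omega = [rng.gauss(1.0, 0.1) for _ in range(n)]      # natural frequencies
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    for _ in range(steps):
        theta = [theta[i] + dt * (omega[i] + coupling *
                 sum(math.sin(theta[j] - theta[i]) for j in nbrs[i]))
                 for i in range(n)]
    re = sum(math.cos(t) for t in theta) / n
    im = sum(math.sin(t) for t in theta) / n
    return math.hypot(re, im)

# As coupling grows, all oscillators gradually approach synchrony.
r_zero = simulate(0.0)
r_strong = simulate(0.5)
```

With zero coupling the phases drift independently and r stays small; with strong coupling the whole population locks, mirroring the gradual, non-clustered route to synchrony described in the abstract.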

6.
金灏  陈林  许小军  张裕恒 《中国科学A辑》1999,29(11):1037-1043
For high-temperature superconductors near the equilibrium state of magnetic relaxation and carrying an applied current, both forward and backward thermally activated flux hopping is considered (backward hopping meaning hopping from lower to higher potential energy), and the E-j relation of a high-temperature superconducting slab under an applied magnetic field and applied current is solved rigorously. It is shown that the lnE-lnj curve has positive curvature, and the result is compared with several other theoretical models. Whether the resistivity ρ tends to zero or to a finite value as j→0 is also discussed.

7.
A study of activity relations in network planning diagrams and their complexity  (Total citations: 1; self-citations: 0; other citations: 1)
This paper studies algorithms for converting a raw table of construction activity precedence relations into the activity relations of a standard network planning diagram. It discusses, in theory, the relations between activities and their immediate predecessors and between activities and nodes in the network diagram, studies the rules for adding dummy activities, and further presents a supplementary study of the predecessor-class-based generation algorithm for constructing network planning diagrams.
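The forward pass of the critical path method over an immediate-predecessor table gives a feel for the activity relations discussed above (the activity table is hypothetical, and the paper's dummy-activity generation algorithm is not reproduced here):

```python
# Hypothetical precedence table: activity -> (duration, immediate predecessors).
acts = {
    "A": (3, []),
    "B": (2, []),
    "C": (4, ["A"]),
    "D": (1, ["A", "B"]),
    "E": (2, ["C", "D"]),
}

def earliest_times(acts):
    """Forward pass of the critical path method: earliest finish time of each
    activity is its duration plus the latest finish of its predecessors."""
    ef = {}
    def finish(a):
        if a not in ef:
            dur, preds = acts[a]
            ef[a] = dur + max((finish(p) for p in preds), default=0)
        return ef[a]
    for a in acts:
        finish(a)
    return ef

ef = earliest_times(acts)
project_length = max(ef.values())
```

A backward pass over the same table would give latest start times and hence slack, identifying the critical path (here A-C-E).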

8.
Conjugate properties of the upper and lower exponents of convex functions on Hilbert spaces  (Total citations: 1; self-citations: 0; other citations: 1)
This paper introduces the general notion of the upper and lower exponents of nonnegative convex functions on a Hilbert space and obtains the conjugate properties of the upper and lower exponents of mutually conjugate convex functions. It also discusses the relation between the positively p-homogeneous (p > 1) functions induced by the inner-product norm and the corresponding positively q-homogeneous functions (q > 1, 1/p + 1/q = 1).
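The p-q pairing above is the familiar conjugate-exponent relation; a textbook instance (not a result of this paper) is the Fenchel conjugate of powers of the norm on a Hilbert space H:

```latex
f(x) = \tfrac{1}{p}\|x\|^{p}
\quad\Longrightarrow\quad
f^{*}(y) = \sup_{x \in H}\bigl(\langle y, x\rangle - \tfrac{1}{p}\|x\|^{p}\bigr)
         = \tfrac{1}{q}\|y\|^{q},
\qquad \tfrac{1}{p} + \tfrac{1}{q} = 1,\; p > 1 .
```

The supremum is attained at x aligned with y with \(\|x\| = \|y\|^{q-1}\), which is where the conjugate exponents enter.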

9.
Objective: To explore the brain mechanisms by which abacus-based mental calculation (AMC) training develops children's intelligence. Methods: Using cross-sectional matched and longitudinal paired designs, school-age children in an AMC-trained group and an untrained group were given the Raven's progressive matrices intelligence test and a basic cognitive ability test. ERP (event-related potential), fMRI (functional magnetic resonance imaging), and ET (encephalofluctuogram) techniques were used to measure the location, spatial extent, and temporal order of neuronal activation during mental calculation in the two groups, and changes in neurochemical substances in the brain were analyzed. Multi-factor repeated-measures ANOVA and t tests were used for statistical analysis. Results: (1) The IQ of the trained group was significantly higher than that of the untrained group (P<0.01); (2) number-object perception mainly activated the occipital lobe in the trained group but the frontal lobe in the untrained group; (3) within 350 ms-950 ms, the brain regions involved in mental calculation were largely the same in both groups, concentrated in the frontal, central, and parietal regions, but in the trained group the positive slow potential appeared earlier and its amplitude changed more markedly; (4) as computational difficulty increased, compared with simple calculation, the extent and intensity of activation in some previously activated regions increased and new specific regions were activated; (5) encephalofluctuogram recordings during mental calculation showed that multiple neurotransmitter systems participated in AMC, with activation of inhibitory transmitter systems exceeding that of excitatory ones and showing a certain spatial distribution; (6) the cerebellar hemispheres were activated in both groups. Conclusion: AMC training can promote the development of children's basic cognitive abilities and intelligence; AMC training makes children's information processing...

10.
Research on the efficiency loss of user-equilibrium behavior on grid networks  (Total citations: 1; self-citations: 0; other citations: 1)
Because each user seeks the minimum-cost path for his own trip without considering how other users choose their routes, the total system cost in general networks tends to be high. Motivated by this practical problem, this paper focuses on the relation between Nash equilibrium flow and system-optimal flow on a special network, the grid network, and studies the efficiency loss of user-equilibrium behavior. The results show that on grid networks, when the link cost functions are linear or quadratic functions with nonnegative coefficients, the efficiency loss of user-equilibrium behavior is 0 and 0.35 respectively, whereas in general networks it is 1/3 and 0.626 respectively, indicating that grid networks transmit flow comparatively efficiently. The study provides a theoretical basis for road authorities in road reconstruction and the design of new roads.
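The 1/3 efficiency-loss figure for linear cost functions on general networks can be reproduced on the classic two-link Pigou network (an illustration of the concept; the paper's grid networks are not modeled here):

```python
# Classic two-link Pigou example: one unit of flow from s to t over
# parallel links with latency functions l1(x) = x and l2(x) = 1.
def total_cost(x):
    """System cost when a fraction x uses link 1 and 1 - x uses link 2."""
    return x * x + (1 - x) * 1.0

# User equilibrium: link 1 is never worse (l1(x) <= 1), so everyone takes it.
c_ue = total_cost(1.0)
# System optimum: minimise total cost over the split x (fine grid search).
c_so = min(total_cost(i / 1000) for i in range(1001))
poa = c_ue / c_so        # price of anarchy, 4/3 for linear latencies
loss = poa - 1           # efficiency loss, the 1/3 cited for general networks
```

Here the equilibrium cost is 1 and the optimum is 3/4 (achieved by splitting the flow evenly), so the loss of 1/3 is the worst case for linear latencies; the abstract's point is that grid topologies avoid this loss entirely in the linear case.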

11.
In the past decades, various neural network models have been developed for modeling the behavior of the human brain or performing problem-solving by simulating it. Recurrent neural networks are the type of neural network used to model or simulate the associative memory behavior of human beings. A recurrent neural network (RNN) can generally be formalized as a dynamic system associated with two fundamental operators: one is the nonlinear activation operator deduced from the input-output properties of the involved neurons, and the other is the synaptic connections (a matrix) among the neurons. Through carefully examining the properties of the various activation functions used, we introduce a novel type of monotone operator, the uniformly pseudo-projection-anti-monotone (UPPAM) operator, to unify the various RNN models that have appeared in the literature. We develop a unified encoding and stability theory for the UPPAM network model when time is discrete. The established model and theory not only unify but also jointly generalize the best-known results on RNNs. The approach takes a visible step towards the establishment of a unified mathematical theory of recurrent neural networks.

12.
This paper is concerned with new results on ‐type stability criteria in division regions for competitive neural networks with different time scales. Under the decomposition of state space, both the neural activity levels (the short‐term memory) and the synaptic modifications (the long‐term memory), are taken into account in constructing division regions that allow the coexistence of equilibrium points. Meanwhile, novel delay‐dependent multistability and monostability criteria are established in division regions that depend on divisions in index set of neurons and boundedness of unsupervised synaptic variables. The attained results show the effects of self‐interactions of neurons and Hebbian learning behavior on the multistable convergence of the networks. Finally, numerical simulations will illustrate multistable neuron activity and synaptic dynamics of multitime‐scale competitive networks. Copyright © 2011 John Wiley & Sons, Ltd.

13.
The paper introduces a new approach to analyze the stability of neural network models without using any Lyapunov function. With the new approach, we investigate the stability properties of the general gradient-based neural network model for optimization problems. Our discussion includes both isolated equilibrium points and connected equilibrium sets which could be unbounded. For a general optimization problem, if the objective function is bounded below and its gradient is Lipschitz continuous, we prove that (a) any trajectory of the gradient-based neural network converges to an equilibrium point, and (b) Lyapunov stability is equivalent to asymptotic stability in gradient-based neural networks. For a convex optimization problem, under the same assumptions, we show that any trajectory of gradient-based neural networks will converge to an asymptotically stable equilibrium point of the neural networks. For a general nonlinear objective function, we propose a refined gradient-based neural network, whose trajectory with any arbitrary initial point will converge to an equilibrium point, which satisfies the second-order necessary optimality conditions for optimization problems. Promising simulation results of a refined gradient-based neural network on some problems are also reported.
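The gradient-based neural network discussed here is the dynamical system dx/dt = -∇f(x); a forward-Euler discretization of its trajectory converges to the minimizer for a convex objective (the quadratic below is an illustrative choice, not one of the paper's examples):

```python
def grad_flow(grad, x0, dt=0.01, steps=5000):
    """Forward-Euler trajectory of the gradient system dx/dt = -grad f(x)."""
    x = list(x0)
    for _ in range(steps):
        g = grad(x)
        x = [xi - dt * gi for xi, gi in zip(x, g)]
    return x

# Convex objective f(x) = (x0 - 1)^2 + 2*(x1 + 3)^2, unique minimiser (1, -3).
grad_f = lambda x: [2 * (x[0] - 1), 4 * (x[1] + 3)]
x_star = grad_flow(grad_f, [10.0, 10.0])
```

The step size must respect the gradient's Lipschitz constant (here dt · L = 0.04 < 2), which is the discrete-time analogue of the Lipschitz assumption in the abstract.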

14.
Convergence dynamics of bi-directional associative memory (BAM) neural networks with continuously distributed delays and impulses are discussed. Without assuming the differentiability and the monotonicity of the activation functions and symmetry of synaptic interconnection weights, sufficient conditions to guarantee the existence and global exponential stability of a unique equilibrium are given.
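A discrete-time BAM in the style of Kosko's original model (much simpler than the distributed-delay, impulsive networks analyzed above) stores a pattern pair via a Hebbian outer product and recalls it from a corrupted input:

```python
def sign(v):
    return 1 if v >= 0 else -1

def bam_recall(W, x, iters=5):
    """BAM recall: alternate x -> y -> x threshold updates until stable."""
    for _ in range(iters):
        y = [sign(sum(W[i][j] * x[j] for j in range(len(x))))
             for i in range(len(W))]
        x = [sign(sum(W[i][j] * y[i] for i in range(len(W))))
             for j in range(len(x))]
    return x, y

# Store one bipolar pair (xa, ya) via the Hebbian outer product W = ya xa^T.
xa = [1, -1, 1, -1, 1, -1]
ya = [1, 1, -1, -1]
W = [[ya[i] * xa[j] for j in range(len(xa))] for i in range(len(ya))]

# A noisy version of xa (first bit flipped) is mapped back to (xa, ya).
noisy = [-1, -1, 1, -1, 1, -1]
x_rec, y_rec = bam_recall(W, noisy)
```

The bidirectional iteration is what "bi-directional associative memory" refers to: each pass through W and its transpose decreases an energy function, so recall settles on the stored pair.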

15.
Many kinds of complex systems exhibit characteristic patterns of temporal correlations that emerge as the result of functional interactions within a structured network. One such complex system is the brain, composed of numerous neuronal units linked by synaptic connections. The activity of these neuronal units gives rise to dynamic states that are characterized by specific patterns of neuronal activation and co‐activation. These patterns, called functional connectivity, are possible neural correlates of perceptual and cognitive processes. Which functional connectivity patterns arise depends on the anatomical structure of the underlying network, which in turn is modified by a broad range of activity‐dependent processes. Given this intricate relationship between structure and function, the question of how patterns of anatomical connectivity constrain or determine dynamical patterns is of considerable theoretical importance. The present study develops computational tools to analyze networks in terms of their structure and dynamics. We identify different classes of network, including networks that are characterized by high complexity. These highly complex networks have distinct structural characteristics such as clustered connectivity and short wiring length similar to those of large‐scale networks of the cerebral cortex. © 2002 Wiley Periodicals, Inc.
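Two of the structural measures underlying such analyses, the mean clustering coefficient and the characteristic path length, can be computed directly. The graph below is a ring lattice, the usual starting point of small-world constructions, not one of the paper's cortical networks:

```python
from collections import deque

def clustering(adj):
    """Mean local clustering coefficient of an undirected graph."""
    cs = []
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            cs.append(0.0)
            continue
        links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
        cs.append(2.0 * links / (k * (k - 1)))
    return sum(cs) / len(cs)

def avg_path_length(adj):
    """Mean shortest-path length over all node pairs (BFS from each node)."""
    total, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        total += sum(d for v, d in dist.items() if v != s)
        pairs += len(dist) - 1
    return total / pairs

# Ring lattice: 20 nodes, each linked to its two nearest neighbours per side,
# giving high clustering but long paths.
n = 20
adj = {i: {(i + d) % n for d in (-2, -1, 1, 2)} for i in range(n)}
C = clustering(adj)
L = avg_path_length(adj)
```

Rewiring a few lattice edges at random sharply reduces L while barely lowering C, which is the clustered, short-wiring-length regime the abstract associates with cortical networks.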

16.
The construction and approximation of neural networks on the sphere are studied. Using the generalized de la Vallée Poussin means on the sphere, spherical quadrature formulas, and a modified univariate Cardaliaguet-Euvrard neural network operator, a single-hidden-layer feedforward network with logistic activation function is constructed, and a Jackson-type error estimate is given.

17.
This paper studies the approximation capability to L2(Rd) functions of incremental constructive feedforward neural networks (FNN) with random hidden units. Two kinds of three-layered feedforward neural networks are considered: radial basis function (RBF) neural networks and translation and dilation invariant (TDI) neural networks. In contrast with conventional approximation theories for neural networks, which mainly rely on existence arguments, we follow a constructive approach to prove that one may simply randomly choose the parameters of the hidden units and then adjust the weights between the hidden units and the output unit to make the neural network approximate any function in L2(Rd) to any accuracy. Our result shows that, given any non-zero activation function g: R+→R with g(‖x‖) ∈ L2(Rd) for RBF hidden units, or any non-zero activation function g(x) ∈ L2(Rd) for TDI hidden units, the incremental network function fn with randomly generated hidden units converges to any target function in L2(Rd) with probability one as the number of hidden units n→∞, provided only that the weights between the hidden units and the output unit are properly adjusted.
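The constructive scheme can be sketched as matching pursuit with random Gaussian units: the center and width of each new unit are drawn at random, and only its output weight is fitted in closed form, w = ⟨residual, g⟩/⟨g, g⟩, which can never increase the residual. Target function, sampling grid, and parameter ranges below are illustrative:

```python
import math
import random

random.seed(3)
xs = [i / 100 for i in range(-100, 101)]
target = [math.sin(3 * x) for x in xs]          # function to approximate
residual = list(target)

def norm(v):
    return math.sqrt(sum(t * t for t in v))

err0 = norm(residual)
for _ in range(200):
    c = random.uniform(-1.0, 1.0)               # random centre
    s = random.uniform(0.1, 1.0)                # random width
    g = [math.exp(-((x - c) / s) ** 2) for x in xs]
    gg = sum(t * t for t in g)
    # Optimal output weight for this unit: orthogonal projection of the
    # residual onto g, so the residual norm is non-increasing.
    w = sum(r * t for r, t in zip(residual, g)) / gg
    residual = [r - w * t for r, t in zip(residual, g)]
err200 = norm(residual)
```

Only the hidden-to-output weights are trained, exactly the degree of freedom the abstract says must be "properly adjusted"; the hidden-unit parameters stay at their random draws.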

18.
Without assuming smoothness, monotonicity, or boundedness of the activation functions, some new sufficient conditions for the existence and global exponential stability of the equilibrium point of bidirectional associative memory (BAM) neural network models with constant transmission delays are obtained by applying the Lyapunov functional method and matrix-algebra techniques. These conditions are characterized by M-matrices expressed in terms of the network parameters, the connection matrices, and the Lipschitz constants of the activation functions. The results are not only simple and practical, but also less restrictive and more easily verified than those in the existing literature.

19.
The synchronization of oscillatory activity in neural networks is usually implemented by coupling the state variables describing neuronal dynamics. Here we study another, but complementary mechanism based on a learning process with memory. A driver network, acting as a teacher, exhibits winner-less competition (WLC) dynamics, while a driven network, a learner, tunes its internal couplings according to the oscillations observed in the teacher. We show that under appropriate training the learner can “copy” the coupling structure and thus synchronize oscillations with the teacher. The replication of the WLC dynamics occurs for intermediate memory lengths only, consequently, the learner network exhibits a phenomenon of learning resonance.

20.
A model of the nonlinear dynamics of reverberating on-center off-surround networks of nerve cells, or of cell populations, is analysed. The on-center off-surround anatomy allows patterns to be processed across populations without saturating the populations' response to large inputs. The signals between populations are made sigmoid functions of population activity in order to quench network noise and yet store sufficiently intense patterns in short term memory (STM). There exists a quenching threshold: a population's activity will be quenched along with network noise if it falls below the threshold; the pattern of suprathreshold population activities is contour enhanced and stored in STM. Varying the arousal level can therefore influence which pattern features will be stored. The total suprathreshold activity of the network is carefully regulated. Applications to seizure and hallucinatory phenomena, to position codes for motor control, to pattern discrimination, to influences of novel events on storage of redundant relevant cues, and to the construction of a sensory-drive heterarchy are mentioned, along with possible anatomical substrates in neocortex, hypothalamus, and hippocampus.
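The shunting on-center off-surround dynamics can be sketched as follows. For simplicity the signal function used here is the faster-than-linear f(u) = u², rather than the sigmoid of the paper; this choice yields winner-take-all contour enhancement, while the total activity settles near the saturation bound B (all parameters illustrative):

```python
def shunting_step(x, dt=0.01, B=1.0):
    """One Euler step of a shunting on-center off-surround network with a
    faster-than-linear signal f(u) = u^2:
        dx_i/dt = (B - x_i) * f(x_i) - x_i * sum_{j != i} f(x_j)."""
    f = [v * v for v in x]
    total = sum(f)
    return [xi + dt * ((B - xi) * fi - xi * (total - fi))
            for xi, fi in zip(x, f)]

# Initial activity pattern after the inputs are switched off; the network
# reverberates, enhancing the largest activity and suppressing the rest.
x = [0.20, 0.25, 0.30, 0.35, 0.40]
for _ in range(50000):
    x = shunting_step(x)
```

The shunting terms keep every activity inside [0, B], and the total activity is self-regulated toward B, which is the "careful regulation" of total suprathreshold activity mentioned in the abstract; a sigmoid signal would instead quench the weakest activities below a threshold while storing the rest.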


Copyright©北京勤云科技发展有限公司  京ICP备09084417号