Similar Articles
 20 similar articles found (search time: 31 ms)
1.

Background

How do listeners manage to recognize words in an unfamiliar language? The physical continuity of the signal, which lacks reliable silent pauses between words, makes this a difficult task. However, multiple cues can be exploited to localize word boundaries and segment the acoustic signal. In the present study, word stress was manipulated together with statistical information and placed on different syllables within trisyllabic nonsense words to explore how these cues combine in an online word segmentation task.

Results

The behavioral results showed that words were segmented better when stress was placed on the final syllables than when it was placed on the middle or first syllable. The electrophysiological results showed an increase in the amplitude of the P2 component, which seemed to be sensitive to word-stress and its location within words.

Conclusion

The results demonstrated that listeners can integrate specific prosodic and distributional cues when segmenting speech. An ERP component related to word-stress cues was identified: stressed syllables elicited larger amplitudes in the P2 component than unstressed ones.
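In segmentation studies of this kind, the statistical cue is usually operationalized as the transitional probability (TP) between adjacent syllables: TPs are high within words and dip at word boundaries. The following minimal Python sketch illustrates that idea only; the nonsense words, the TP threshold, and the stream length are invented for illustration and are not the stimuli used in the study.

```python
# Minimal sketch: segmenting a continuous syllable stream by dips in
# transitional probability (TP). All "words" below are invented examples.
import random
from collections import defaultdict

syllabified = {"tupiro": ["tu", "pi", "ro"],
               "golabu": ["go", "la", "bu"],
               "bidaku": ["bi", "da", "ku"]}   # hypothetical trisyllabic nonsense words

random.seed(0)
stream = []
for _ in range(200):                            # concatenate words with no pauses
    stream.extend(syllabified[random.choice(list(syllabified))])

# Estimate forward TPs: P(next syllable | current syllable)
pair_counts = defaultdict(lambda: defaultdict(int))
for a, b in zip(stream, stream[1:]):
    pair_counts[a][b] += 1
tp = {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
      for a, nxt in pair_counts.items()}

# Posit a word boundary wherever the TP dips below a threshold.
boundaries = [i + 1 for i, (a, b) in enumerate(zip(stream, stream[1:])) if tp[a][b] < 0.5]
print("first inferred boundary positions:", boundaries[:5])
```

In the study itself, this distributional cue was combined with word stress placed on the first, second, or third syllable; the sketch covers only the statistical component.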

2.

Background

Emotionally salient information in spoken language can be provided by variations in speech melody (prosody) or by emotional semantics. Emotional prosody is essential to convey feelings through speech. In sensorineural hearing loss, impaired speech perception can be improved by cochlear implants (CIs). The aim of this study was to investigate the performance of normal-hearing (NH) participants on the perception of emotional prosody with vocoded stimuli. Semantically neutral sentences with emotional (happy, angry and neutral) prosody were used. Sentences were manipulated to simulate two CI speech-coding strategies: the Advanced Combination Encoder (ACE) and the newly developed Psychoacoustic Advanced Combination Encoder (PACE). Twenty NH adults were asked to recognize emotional prosody from ACE and PACE simulations. Performance was assessed using behavioral tests and event-related potentials (ERPs).

Results

Behavioral data revealed superior performance with original stimuli compared to the simulations. For the simulations, recognition was better for happy and angry prosody than for neutral prosody. Irrespective of simulated or unsimulated stimulus type, a significantly larger P200 event-related potential after sentence onset was observed for happy prosody than for the other two emotions. Further, the amplitude of the P200 was significantly more positive for the PACE strategy than for the ACE strategy.

Conclusions

The results suggest the P200 peak as an indicator of active differentiation and recognition of emotional prosody. The larger P200 peak amplitude for happy prosody indicates the importance of fundamental frequency (F0) cues in prosody processing. The advantage of PACE over ACE highlights a privileged role of the psychoacoustic masking model in improving prosody perception. Taken together, the study emphasizes the importance of vocoded simulations for better understanding the prosodic cues that CI users may be utilizing.
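ACE-style strategies are "n-of-m" schemes: in each stimulation frame only the n filterbank channels with the largest envelope amplitudes are retained, while PACE replaces this pure maxima selection with a psychoacoustic masking model. The sketch below shows only the generic maxima-selection step on a random envelope matrix; the channel counts are assumed typical values, and this is neither the commercial implementation nor the simulation code used in the study.

```python
# Minimal sketch of n-of-m channel selection, the core idea behind ACE-style
# CI coding strategies: per analysis frame, keep the n channels with the
# largest envelope amplitude and zero the rest. Envelopes here are random
# placeholders; real strategies derive them from a filterbank over the audio.
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_frames, n_selected = 22, 100, 8      # assumed, ACE-like parameters
envelopes = rng.random((n_channels, n_frames))     # placeholder channel envelopes

selected = np.zeros_like(envelopes)
for frame in range(n_frames):
    top = np.argsort(envelopes[:, frame])[-n_selected:]   # indices of the n maxima
    selected[top, frame] = envelopes[top, frame]

print("channels kept in frame 0:", np.count_nonzero(selected[:, 0]))
```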

3.

Background

How does the brain convert sounds and phonemes into comprehensible speech? In the present magnetoencephalographic study we examined the hypothesis that the coherence of electromagnetic oscillatory activity within and across brain areas indicates neurophysiological processes linked to speech comprehension.

Results

Amplitude-modulated (sinusoidal 41.5 Hz) auditory verbal and nonverbal stimuli served to drive steady-state oscillations in neural networks involved in speech comprehension. Stimuli were presented to 12 subjects in the following conditions: (a) an incomprehensible string of words, (b) the same string of words after being introduced as a comprehensible sentence by proper articulation, and (c) nonverbal stimuli that included a 600-Hz tone, a scale, and a melody. Coherence, defined as correlated activation of magnetic steady-state fields across brain areas and measured as simultaneous activation of current dipoles in source space (Minimum Norm Estimates), increased within left temporal-posterior areas when the sound string was perceived as a comprehensible sentence. Intra-hemispheric coherence was larger within the left than the right hemisphere for the sentence (condition (b) relative to all other conditions), and tended to be larger within the right than the left hemisphere for nonverbal stimuli (condition (c), tone and melody, relative to the other conditions), leading to a more pronounced hemispheric asymmetry for nonverbal than verbal material.

Conclusions

We conclude that coherent neuronal network activity may index encoding of verbal information on the sentence level and can be used as a tool to investigate auditory speech comprehension.
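Spectral coherence of this kind quantifies, per frequency, how consistently two signals co-vary. The sketch below computes ordinary coherence between two simulated signals sharing a 41.5 Hz steady-state component using scipy; the sampling rate, noise levels, and segment length are assumptions, and this is sensor-level coherence rather than the source-space (Minimum Norm Estimate) analysis reported in the study.

```python
# Minimal sketch: spectral coherence between two simulated signals that share
# a 41.5 Hz steady-state component (the modulation frequency used above).
import numpy as np
from scipy.signal import coherence

fs = 600.0                                # assumed sampling rate in Hz
t = np.arange(0, 20.0, 1.0 / fs)
rng = np.random.default_rng(1)

common = np.sin(2 * np.pi * 41.5 * t)     # shared steady-state response
sig_a = common + 0.8 * rng.standard_normal(t.size)
sig_b = 0.7 * common + 0.8 * rng.standard_normal(t.size)

f, cxy = coherence(sig_a, sig_b, fs=fs, nperseg=1024)
print(f"coherence peaks near {f[np.argmax(cxy)]:.1f} Hz (value {cxy.max():.2f})")
```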

4.

Background

How does the brain repair obliterated speech and cope with acoustically ambiguous situations? A widely discussed possibility is to use top-down information to solve the ambiguity problem. In the case of speech, this may lead to a match of bottom-up sensory input with lexical expectations, resulting in resonant states which are reflected in the induced gamma-band activity (GBA).

Methods

In the present EEG study, we compared subjects' pre-attentive GBA responses to obliterated speech segments presented after a series of correct words. The words were a minimal pair in German and differed with respect to the degree of specificity of segmental phonological information.

Results

The induced GBA was larger when the expected lexical information was phonologically fully specified compared to the underspecified condition. Thus, the degree of specificity of phonological information in the mental lexicon correlates with the intensity of the matching process of bottom-up sensory input with lexical information.

Conclusions

These results together with those of a behavioural control experiment support the notion of multi-level mechanisms involved in the repair of deficient speech. The delineated alignment of pre-existing knowledge with sensory input is in accordance with recent ideas about the role of internal forward models in speech perception.
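"Induced" gamma-band activity refers to the non-phase-locked part of the response, which is commonly isolated by subtracting the trial-averaged (evoked) response from each single trial before extracting gamma power. The sketch below illustrates that step on simulated trials; the sampling rate, gamma band, and burst parameters are assumptions, not the study's preprocessing pipeline.

```python
# Minimal sketch of how "induced" (non-phase-locked) gamma-band activity is
# commonly separated from the evoked response: subtract the trial average from
# every single trial, band-pass the residual in the gamma range, and average
# the envelope. Simulated data only.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500.0
t = np.arange(-0.2, 0.8, 1.0 / fs)
rng = np.random.default_rng(2)

# 50 simulated trials: a phase-locked 10 Hz evoked response plus 40 Hz gamma
# bursts whose phase varies from trial to trial (i.e., induced activity)
trials = []
for _ in range(50):
    evoked = 2.0 * np.exp(-((t - 0.1) ** 2) / 0.005) * np.sin(2 * np.pi * 10 * t)
    burst = np.exp(-((t - 0.3) ** 2) / 0.002) * np.sin(2 * np.pi * 40 * t + rng.uniform(0, 2 * np.pi))
    trials.append(evoked + burst + 0.5 * rng.standard_normal(t.size))
trials = np.array(trials)

induced = trials - trials.mean(axis=0)          # remove the phase-locked (evoked) part
b, a = butter(4, [30.0, 70.0], btype="bandpass", fs=fs)
gamma_env = np.abs(hilbert(filtfilt(b, a, induced, axis=1), axis=1))
gba = gamma_env.mean(axis=0)                    # induced gamma-band amplitude over time

print("induced GBA peaks at t = %.2f s" % t[np.argmax(gba)])
```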

5.

Background

The present study used event-related brain potentials to investigate semantic, phonological and syntactic processes in adult German dyslexic and normal readers in a word reading task. Pairs of German words were presented one word at a time. Subjects had to perform a semantic judgment task (house – window; are they semantically related?), a rhyme judgment task (house – mouse; do they rhyme?) and a gender judgment task (das – Haus [the – house]; is the gender correct? [in German, house has a neutral gender: das Haus]).

Results

Normal readers responded faster than dyslexic readers in all three tasks. Onset latencies of the N400 component were delayed in dyslexic readers in the rhyme judgment and the gender judgment tasks, but not in the semantic judgment task. Peak amplitudes of the N400 and the anterior negativity did not differ between the two groups. However, the N400 persisted longer in the dyslexic group in the rhyme judgment and the semantic judgment tasks.

Conclusion

These findings indicate that dyslexics are phonologically impaired (delayed N400 in the rhyme judgment task) but that they also have difficulties in other, non-phonological aspects of reading (longer response times, longer persistence of the N400). Specifically, semantic and syntactic integration seem to require more effort for dyslexic readers and take longer irrespective of the reading task that has to be performed.

6.

Background

The ultrasonic vocalizations (USV) of courting male mice are known to possess a phonetic structure with a complex combination of several syllables. The genetic mechanisms underlying the syllable sequence organization were investigated.

Results

This study compared syllable sequence organization in two inbred strains of mice, 129S4/SvJae (129) and C57BL/6J (B6), and demonstrated that they possessed two mutually exclusive phenotypes. The 129 strain frequently exhibited a "chevron-wave" USV pattern, characterized by the repetition of chevron-type syllables. The C57BL/6J strain produced a "staccato" USV pattern, characterized by the repetition of short-type syllables. An F1 strain obtained by crossing the 129S4/SvJae and C57BL/6J strains produced only the staccato phenotype. Both the chevron-wave and staccato phenotypes reappeared in the F2 generation, following Mendel's law of independent assortment.

Conclusions

These results suggest that two genetic loci control the organization of syllable sequences. These loci were occupied by the staccato and chevron-wave alleles in the B6 and 129 mouse strains, respectively. Recombination of these alleles might lead to the diversity of USV patterns produced by mice.

7.

Background

Survivin is a unique member of the inhibitor of apoptosis protein (IAP) family in that it exhibits antiapoptotic properties and also promotes the cell cycle and mediates mitosis as a chromosome passenger protein. Survivin is highly expressed in neural precursor cells (NPCs) in the brain, yet its function there has not been elucidated.

Results

To examine the role of neural precursor cell survivin, we first showed that survivin is normally expressed in periventricular neurogenic regions in the embryo, becoming restricted postnatally to proliferating and migrating NPCs in the key neurogenic sites, the subventricular zone (SVZ) and the subgranular zone (SGZ). We then used a conditional gene inactivation strategy to delete the survivin gene prenatally in those neurogenic regions. Lack of embryonic NPC survivin results in viable, fertile mice (Survivin Camcre) with reduced numbers of SVZ NPCs, absent rostral migratory stream, and olfactory bulb hypoplasia. The phenotype can be partially rescued, as intracerebroventricular gene delivery of survivin during embryonic development increases olfactory bulb neurogenesis, detected postnatally. Survivin Camcre brains have fewer cortical inhibitory interneurons, contributing to enhanced sensitivity to seizures, and profound deficits in memory and learning.

Conclusions

The findings highlight the critical role that survivin plays during neural development, deficiencies of which dramatically impact postnatal neural function.

8.
9.

Background  

Emotional stimuli are preferentially processed compared to neutral ones. Using measurements of the magnetic resonance blood-oxygen level dependent (BOLD) response or EEG event-related potentials, this has also been demonstrated for emotional versus neutral words. However, it is currently unclear whether emotion effects in word processing can also be detected with other measures such as EEG steady-state visual evoked potentials (SSVEPs) or optical brain imaging techniques. In the present study, we simultaneously performed SSVEP measurements and near-infrared diffusing-wave spectroscopy (DWS), a new optical technique for the non-invasive measurement of brain function, to measure brain responses to neutral, pleasant, and unpleasant nouns flickering at a frequency of 7.5 Hz.

10.

Background

Neuroimaging and neuropsychological literature show functional dissociations in brain activity during processing of stimuli belonging to different semantic categories (e.g., animals, tools, faces, places), but little information is available about the time course of object perceptual categorization. The aim of the study was to provide information about the timing of processing stimuli from different semantic domains, without using verbal or naming paradigms, in order to observe the emergence of non-linguistic conceptual knowledge in the ventral stream visual pathway. Event related potentials (ERPs) were recorded in 18 healthy right-handed individuals as they performed a perceptual categorization task on 672 pairs of images of animals and man-made objects (i.e., artifacts).

Results

Behavioral responses to animal stimuli were ~50 ms faster and more accurate than those to artifacts. At early processing stages (120–180 ms) the right occipital-temporal cortex was more activated in response to animals than to artifacts, as indexed by the posterior N1 response, while the frontal/central N1 (130–160 ms) showed the opposite pattern. In the next processing stage (200–260 ms) the response was stronger to artifacts and usable items at anterior temporal sites. The P300 component was smaller, and the central/parietal N400 component was larger, to artifacts than to animals.

Conclusion

The effect of animal versus artifact categorization emerged at ~150 ms over the right occipital-temporal area as a stronger response of the ventral stream to animate, homomorphic entities with faces and legs. The larger frontal/central N1 and the subsequent temporal activation for inanimate objects might reflect the prevalence of a functional rather than perceptual representation of manipulable tools compared to animals. Late ERP effects might reflect semantic integration and cognitive updating processes. Overall, the data are compatible with a modality-specific semantic memory account, in which sensory and action-related semantic features are represented in modality-specific brain areas.

11.
Segmental duration patterns have long been used to support the proposal that syllables are basic speech planning units, but production experiments almost always confound syllable and word boundaries. The current study tried to remedy this problem by comparing word-internal and word-peripheral consonantal duration patterns. Stress and sequencing were used to vary the nominal location of word-internal boundaries in American English productions of disyllabic nonsense words with medial consonant sequences. The word-internal patterns were compared to those that occurred at the edges of words, where boundary location was held constant and only stress and sequence order were varied. The English patterns were then compared to patterns from Russian and Finnish. All three languages showed similar effects of stress and sequencing on consonantal duration, but an independent effect of syllable position was observed only in English and only at a word boundary. English also showed stronger effects of stress and sequencing across a word boundary than within a word. Finnish showed the opposite pattern, whereas Russian showed little difference between word-internal and word-peripheral patterns. Overall, the results suggest that the suprasegmental units of motor planning are language-specific and that the word may be a more relevant planning unit in English.

12.

Background

To date, functional imaging studies of treatment-induced recovery from chronic aphasia have only assessed short-term treatment effects after intensive language training. In the present study, we show with functional magnetic resonance imaging (fMRI) that different brain regions may be involved in immediate versus long-term success of intensive language training in chronic post-stroke aphasia patients.

Results

Eight patients were trained daily for three hours over a period of two weeks in naming of concrete objects. Prior to, immediately after, and eight months after training, patients overtly named trained and untrained objects during event-related fMRI. On average the patients improved from zero (at baseline) to 64.4% correct naming responses immediately after training, and treatment success remained highly stable at follow-up. Regression analyses showed that the degree of short-term treatment success was predicted by increased activity (compared to the pretraining scan) bilaterally in the hippocampal formation, the right precuneus and cingulate gyrus, and bilaterally in the fusiform gyri. A different picture emerged for long-term training success, which was best predicted by activity increases in the right-sided Wernicke's homologue and to a lesser degree in perilesional temporal areas.

Conclusion

The results show for the first time that treatment-induced language recovery in the chronic stage after stroke is a dynamic process. Initially, brain regions involved in memory encoding, attention, and multimodal integration mediated treatment success. In contrast, long-term treatment success was predicted mainly by activity increases in the so-called 'classical' language regions. The results suggest that besides perilesional and homologue language-associated regions, functional integrity of domain-unspecific memory structures may be a prerequisite for successful (intensive) language interventions.

13.

Background

Recent studies have shown that the human right-hemispheric auditory cortex is particularly sensitive to reduction in sound quality, with an increase in distortion resulting in an amplification of the auditory N1m response measured with magnetoencephalography (MEG). Here, we examined whether this sensitivity is specific to the processing of acoustic properties of speech or whether it can also be observed in the processing of sounds with a simple spectral structure. We degraded speech stimuli (the vowel /a/), complex non-speech stimuli (a composite of five sinusoids), and sinusoidal tones by decreasing the amplitude resolution of the signal waveform. The amplitude resolution was impoverished by reducing the number of bits used to represent the signal samples. Auditory evoked magnetic fields (AEFs) were measured in the left and right hemispheres of sixteen healthy subjects.

Results

We found that the AEF amplitudes increased significantly with stimulus distortion for all stimulus types, which indicates that the right-hemispheric N1m sensitivity is not related exclusively to degradation of acoustic properties of speech. In addition, the P1m and P2m responses were amplified with increasing distortion similarly in both hemispheres. The AEF latencies were not systematically affected by the distortion.

Conclusions

We propose that the increased activity of AEFs reflects cortical processing of acoustic properties common to both speech and non-speech stimuli. More specifically, the enhancement is most likely caused by spectral changes brought about by the decrease of amplitude resolution, in particular the introduction of periodic, signal-dependent distortion to the original sound. Converging evidence suggests that the observed AEF amplification could reflect cortical sensitivity to periodic sounds.
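The degradation described here amounts to re-quantizing the waveform with fewer bits per sample, which introduces signal-dependent distortion. The sketch below illustrates that manipulation on a plain sine tone; the tone, sampling rate, and bit depths are assumptions and not the study's actual stimuli or processing chain.

```python
# Minimal sketch of the degradation described above: re-quantizing a waveform
# to a smaller number of bits per sample, which introduces signal-dependent
# distortion. A plain sine stands in for the vowel stimulus.
import numpy as np

def reduce_bits(signal, n_bits):
    """Quantize a signal in [-1, 1] to n_bits of amplitude resolution."""
    step = 2.0 / (2 ** n_bits)
    return np.clip(np.round(signal / step) * step, -1.0, 1.0)

fs = 16000
t = np.arange(0, 0.5, 1.0 / fs)
tone = 0.9 * np.sin(2 * np.pi * 440 * t)          # stand-in for the /a/ vowel

for bits in (16, 4, 2):
    degraded = reduce_bits(tone, bits)
    err = degraded - tone
    snr = 10 * np.log10(np.sum(tone ** 2) / np.sum(err ** 2))
    print(f"{bits:2d} bits -> quantization SNR ≈ {snr:5.1f} dB")
```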

14.

Background

Junctional adhesion molecule-A (JAM-A) is an adhesive protein expressed in various cell types. JAM-A localizes to the tight junctions between contacting endothelial and epithelial cells, where it contributes to cell-cell adhesion and to the control of paracellular permeability.

Results

So far, the expression pattern of JAM-A has not been described in detail for the different cell types of the adult brain. Here we show that a subset of proliferating cells in the adult mouse brain expresses JAM-A. We further show that these cells belong to the NG2-glia lineage. Although these mitotic NG2-glia cells express JAM-A, the protein never shows a polarized subcellular distribution. Non-mitotic NG2-glia cells also express JAM-A in a non-polarized pattern on their surface.

Conclusions

Our data show that JAM-A is a novel surface marker for NG2-glia cells of the adult brain.

15.
Shuiyuan Yu, Chunshan Xu. Physica A, 2011, 390(7): 1370-1380
The study of the properties of speech sound systems is of great significance for understanding the human cognitive mechanism and the working principles of speech sound systems. Some properties of speech sound systems, such as the listener-oriented feature and the talker-oriented feature, have been unveiled through the statistical study of phonemes in human languages and research on the interrelations between human articulatory gestures and the corresponding acoustic parameters. Treating all the phonemes of a speech sound system as a coherent whole, our research, which focuses on the dynamic properties of speech sound systems in operation, investigates several statistical parameters of Chinese phoneme networks based on real text and dictionaries. The findings are as follows: phonemic networks have high connectivity degrees and short average distances; the degrees obey a normal distribution and the weighted degrees obey a power-law distribution; vowels enjoy higher priority than consonants in the actual operation of speech sound systems; and the phonemic networks are highly robust against targeted attacks and random errors. In addition, to investigate the structural properties of a speech sound system, a statistical study of dictionaries was conducted, which shows that shorter words and syllables are more frequent and that the longer a word is, the shorter the syllables composing it tend to be. From these structural and dynamic properties one can draw the following conclusion: the static structure of a speech sound system tends to promote communication efficiency and save articulation effort, while the dynamic operation of this system gives preference to reliable transmission and easy recognition. In short, a speech sound system is an effective, efficient and reliable communication system optimized in many respects.
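Network statistics of the kind reported here (connectivity degree, weighted degree, average distance) are straightforward to compute once a phoneme network has been built. The sketch below does this for a toy co-occurrence network with networkx; the toy syllables are invented and the construction rule is only one plausible choice, not the procedure used in the paper.

```python
# Minimal sketch of phoneme-network statistics on a toy co-occurrence network.
# Toy "syllables" are invented; the paper's networks were built from real
# Chinese text and dictionaries.
import networkx as nx

toy_syllables = ["ma", "po", "ta", "mo", "pa", "to", "ma", "ta", "po"]

G = nx.Graph()
for syl in toy_syllables:
    onset, nucleus = syl[0], syl[1]
    # connect phonemes that co-occur in a syllable; repeated pairs add weight
    if G.has_edge(onset, nucleus):
        G[onset][nucleus]["weight"] += 1
    else:
        G.add_edge(onset, nucleus, weight=1)

degrees = dict(G.degree())                       # connectivity degree per phoneme
weighted = dict(G.degree(weight="weight"))       # weighted degree (usage frequency)
print("degrees:", degrees)
print("weighted degrees:", weighted)
print("average shortest path:", nx.average_shortest_path_length(G))
```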

16.
The Headturn Preference Paradigm was used to examine infants' use of prosodically conditioned acoustic-phonetic cues to find words in speech. Twelve-month-olds were familiarized to one passage containing an intended target (e.g., toga from toga#lore) and one passage containing an unintended target (e.g., dogma from dog#maligns). Infants were tested on the familiarized intended word (e.g., toga), familiarized unintended word (e.g., dogma), and two unfamiliar words. Infants listened longer to familiar intended words than to familiar unintended or unfamiliar words, demonstrating their use of word-level prosodically conditioned cues to segment words from speech. Implications for models of developmental speech perception are discussed.

17.

Background

The present study compared the neural correlates of an intramodally and a crossmodally acquired second language (L2). Deaf people who had learned their L1, German Sign Language (DGS), and their L2, German, through the visual modality were compared with hearing L2 learners of German and German native speakers. Correct and incorrect German sentences were presented word by word on a computer screen while the electroencephalogram was recorded. At the end of each sentence, the participants judged whether or not the sentence was correct. Two types of violations were realized: Either a semantically implausible noun or a violation of subject-verb number agreement was embedded at a sentence medial position.

Results

Semantic errors elicited an N400, followed by a late positivity in all groups. In native speakers of German, verb-agreement violations were followed by a left-lateralized negativity, which has been associated with an automatic parsing process. We observed a syntax-related negativity in both high-performing hearing and deaf L2 learners as well. Finally, this negativity was followed by a posteriorly distributed positivity in all three groups.

Conclusions

Although deaf learners have learned German as an L2 mainly via the visual modality, they seem to engage processing mechanisms comparable to those of hearing L2 learners. Thus, the data underscore the modality transcendence of language.

18.

Background

Progressive accumulation of α-synuclein (α-Syn) protein in different brain regions is a hallmark of synucleinopathic diseases, such as Parkinson’s disease, dementia with Lewy bodies and multiple system atrophy. α-Syn transgenic mouse models have been developed to investigate the effects of α-Syn accumulation on behavioral deficits and neuropathology. However, the onset and progression of pathology in α-Syn transgenic mice have not been fully characterized. For this purpose we investigated the time course of behavioral deficits and neuropathology in PDGF-β human wild type α-Syn transgenic mice (D-Line) between 3 and 12 months of age.

Results

These mice showed progressive impairment of motor coordination of the limbs that resulted in significant differences compared to non-transgenic littermates at 9 and 12 months of age. Biochemical and immunohistological analyses revealed constantly increasing levels of human α-Syn in different brain areas. Human α-Syn was expressed particularly in somata and neurites of a subset of neocortical and limbic system neurons. Most of these neurons showed immunoreactivity for phosphorylated human α-Syn confined to nuclei and perinuclear cytoplasm. Analyses of the phenotype of α-Syn expressing cells revealed strong expression in dopaminergic olfactory bulb neurons, subsets of GABAergic interneurons and glutamatergic principal cells throughout the telencephalon. We also found human α-Syn expression in immature neurons of both the ventricular zone and the rostral migratory stream, but not in the dentate gyrus.

Conclusion

The present study demonstrates that the PDGF-β α-Syn transgenic mouse model presents with early and progressive accumulation of human α-Syn that is accompanied by motor deficits. This information is essential for the design of therapeutic studies of synucleinopathies.

19.

Background

How oscillatory brain rhythms alone, or in combination, influence cortical information processing to support learning has yet to be fully established. Local field potential and multi-unit neuronal activity recordings were made from 64-electrode arrays in the inferotemporal cortex of conscious sheep during and after visual discrimination learning of face or object pairs. A neural network model has been developed to simulate and aid functional interpretation of learning-evoked changes.

Results

Following learning, the amplitude of theta (4-8 Hz), but not gamma (30-70 Hz), oscillations was increased, as was the ratio of theta to gamma. Over 75% of electrodes showed significant coupling between theta phase and gamma amplitude (theta-nested gamma). The strength of this coupling was also increased following learning, and this was not simply a consequence of increased theta amplitude. Actual discrimination performance was significantly correlated with changes in theta and theta-gamma coupling. Neuronal activity was phase-locked with theta, but learning had no effect on firing rates or on the magnitude or latencies of visual evoked potentials during stimuli. The neural network model showed that a combination of fast and slow inhibitory interneurons could generate theta-nested gamma. Increasing N-methyl-D-aspartate receptor sensitivity in the model produced changes similar to those observed in inferotemporal cortex after learning. The model showed that these changes could potentiate the firing of downstream neurons through a temporal desynchronization of excitatory neuron output without increasing the firing frequencies of the latter. This desynchronization effect was confirmed in inferotemporal neuronal activity following learning, and its magnitude was correlated with discrimination performance.

Conclusions

Face discrimination learning produces significant increases in both theta amplitude and the strength of theta-gamma coupling in the inferotemporal cortex, which are correlated with behavioral performance. A network model which can reproduce these changes suggests that a key function of such learning-evoked alterations in theta and theta-nested gamma activity may be increased temporal desynchronization in neuronal firing, leading to optimal timing of inputs to downstream neural networks and potentiating their responses. In this way learning can produce potentiation in neural networks simply through altering the temporal pattern of their inputs.
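Theta-nested gamma (theta phase modulating gamma amplitude) is commonly quantified with a phase-amplitude coupling measure such as the mean-vector-length modulation index. The sketch below computes that index on a simulated signal in which gamma amplitude is tied to theta phase; the frequencies, filter settings, and coupling strength are assumptions, not the recordings or analysis pipeline of the study.

```python
# Minimal sketch of theta-gamma phase-amplitude coupling quantified with the
# mean-vector-length modulation index, on a simulated local field potential.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0
t = np.arange(0, 10, 1.0 / fs)
rng = np.random.default_rng(3)

theta = np.sin(2 * np.pi * 6 * t)                        # 6 Hz theta rhythm
gamma = (0.5 + 0.5 * (theta + 1) / 2) * np.sin(2 * np.pi * 50 * t)  # gamma amplitude tied to theta
lfp = theta + 0.3 * gamma + 0.2 * rng.standard_normal(t.size)

def bandpass(x, lo, hi):
    b, a = butter(4, [lo, hi], btype="bandpass", fs=fs)
    return filtfilt(b, a, x)

theta_phase = np.angle(hilbert(bandpass(lfp, 4, 8)))     # instantaneous theta phase
gamma_amp = np.abs(hilbert(bandpass(lfp, 30, 70)))       # instantaneous gamma amplitude

# Mean-vector-length modulation index: larger values = stronger theta nesting
mi = np.abs(np.mean(gamma_amp * np.exp(1j * theta_phase)))
print(f"theta-gamma modulation index: {mi:.3f}")
```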

20.

Background

While there is general agreement that picture-plane inversion is more detrimental to face processing than to the processing of other seemingly complex visual objects, the origin of this effect is still largely debated. Here, we address the question of whether face inversion reflects a quantitative or a qualitative change in processing mode by investigating the pattern of event-related potential (ERP) response changes with picture-plane rotation of face and house pictures. Thorough topographical analyses (scalp current density maps, SCD) and dipole source modeling were also conducted.

Results

We found that, whilst stimulus orientation affected participants' response latencies to make face and house decisions in a similar fashion, only the ERPs in the N170 latency range were modulated by picture-plane rotation of faces. The pattern of N170 amplitude and latency enhancement to misrotated faces displayed a curvilinear shape, with an almost linear increase for rotations from 0° to 90° and a dip from 112.5° up to 180° rotations. A similar discontinuity function was also described for SCD occipito-temporal and temporal current foci, with no changes in topographic distribution, suggesting that upright and misrotated faces activated similar brain sources. This was confirmed by dipole source analyses showing the involvement of bilateral sources in the fusiform and middle occipital gyri, the activity of which was differentially affected by face rotation.

Conclusion

Our N170 findings provide support for both the quantitative and the qualitative accounts of face rotation effects. Although the qualitative explanation predicted the curvilinear shape of N170 modulations by face misrotation, the topographical and source modeling findings suggest that the same brain regions, and thus the same mechanisms, are probably at work when processing upright and rotated faces. Taken collectively, our results indicate that the same processing mechanisms may be involved across the whole range of face orientations, but would operate in a non-linear fashion. Finally, the response tuning of the N170 to rotated faces extends previous reports and further demonstrates that face inversion affects perceptual analyses of faces, which is reflected within the time range of the N170 component.

