Similar Documents
20 similar documents found (search time: 15 ms)
1.

Background

Neuroimaging and neuropsychological literature show functional dissociations in brain activity during processing of stimuli belonging to different semantic categories (e.g., animals, tools, faces, places), but little information is available about the time course of object perceptual categorization. The aim of the study was to provide information about the timing of processing stimuli from different semantic domains, without using verbal or naming paradigms, in order to observe the emergence of non-linguistic conceptual knowledge in the ventral stream visual pathway. Event related potentials (ERPs) were recorded in 18 healthy right-handed individuals as they performed a perceptual categorization task on 672 pairs of images of animals and man-made objects (i.e., artifacts).

Results

Behavioral responses to animal stimuli were ~50 ms faster and more accurate than those to artifacts. At early processing stages (120–180 ms) the right occipital-temporal cortex was more activated in response to animals than to artifacts, as indexed by the posterior N1 response, while the frontal/central N1 (130–160 ms) showed the opposite pattern. In the next processing stage (200–260 ms) the response was stronger to artifacts and usable items at anterior temporal sites. The P300 component was smaller, and the central/parietal N400 component was larger, to artifacts than to animals.

Conclusion

The effect of animal and artifact categorization emerged at ~150 ms over the right occipital-temporal area as a stronger response of the ventral stream to animate, homomorphic entities with faces and legs. The larger frontal/central N1 and the subsequent temporal activation for inanimate objects might reflect the prevalence of a functional rather than perceptual representation of manipulable tools compared to animals. Late ERP effects might reflect semantic integration and cognitive updating processes. Overall, the data are compatible with a modality-specific semantic memory account, in which sensory and action-related semantic features are represented in modality-specific brain areas.

2.

Background

After prolonged exposure to a paired presentation of different types of signals (e.g., color and motion), one of the signals (color) becomes a driver for the other signal (motion). This phenomenon, known as contingent motion aftereffect, indicates that the brain can establish new neural representations even in the adult brain. However, contingent motion aftereffects have been reported only within the visual or the auditory domain. Here, we demonstrate that a visual motion aftereffect can be contingent on a specific sound.

Results

Dynamic random dots moving in an alternating right or left direction were presented to the participants. Each direction of motion was accompanied by an auditory tone of a unique and specific frequency. After a 3-minute exposure, the tones began to exert a marked influence on visual motion perception, and the percentage of dots required to trigger motion perception systematically changed depending on the tones. Furthermore, this effect lasted for at least 2 days.

Conclusions

These results indicate that a new neural representation can be rapidly established between auditory and visual modalities.

3.

Background

Several studies have shown that Stroop interference is stronger in children than in adults. However, in a standard Stroop paradigm, stimulus interference and response interference are confounded. The purpose of the present study was to determine whether interference at the stimulus level and the response level are subject to distinct maturational patterns across childhood. Three groups of children (6–7 year-olds, 8–9 year-olds, and 10–12 year-olds) and a group of adults performed a manual Color-Object Stroop designed to disentangle stimulus interference and response interference. This was accomplished by comparing three trial types. In congruent (C) trials there was no interference. In stimulus incongruent (SI) trials there was only stimulus interference. In response incongruent (RI) trials there was stimulus interference and response interference. Stimulus interference and response interference were measured by a comparison of SI with C, and RI with SI trials, respectively. Event-related potentials (ERPs) were measured to study the temporal dynamics of these processes of interference.

Results

There was no behavioral evidence for stimulus interference in any of the groups, but in 6–7 year-old children ERPs in the SI condition in comparison with the C condition showed an occipital P1-reduction (80–140 ms) and a widely distributed amplitude enhancement of a negative component followed by an amplitude reduction of a positive component (400–560 ms). For response interference, all groups showed a comparable reaction time (RT) delay, but children made more errors than adults. ERPs in the RI condition in comparison with the SI condition showed an amplitude reduction of a positive component over lateral parietal (-occipital) sites in 10–12 year-olds and adults (300–540 ms), and a widely distributed amplitude enhancement of a positive component in all age groups (680–960 ms). The size of the enhancement correlated positively with the RT response interference effect.

Conclusion

Although processes of stimulus interference control as measured with the color-object Stroop task seem to reach mature levels relatively early in childhood (6–7 years), development of response interference control appears to continue into late adolescence, as 10–12 year-olds were still more susceptible to errors of response interference than adults.

4.

Background

Tone languages such as Thai and Mandarin Chinese use differences in fundamental frequency (F0, pitch) to distinguish lexical meaning. Previous behavioral studies have shown that native speakers of a non-tone language have difficulty discriminating among tone contrasts and are sensitive to different F0 dimensions than speakers of a tone language. The aim of the present ERP study was to investigate the effect of language background and training on the non-attentive processing of lexical tones. EEG was recorded from 12 adult native speakers of Mandarin Chinese, 12 native speakers of American English, and 11 Thai speakers while they were watching a movie and were presented with multiple tokens of low-falling, mid-level and high-rising Thai lexical tones. High-rising or low-falling tokens were presented as deviants among mid-level standard tokens, and vice versa. EEG data and data from a behavioral discrimination task were collected before and after a two-day perceptual categorization training task.

Results

Behavioral discrimination improved after training in both the Chinese and the English groups. Low-falling tone deviants versus standards elicited a mismatch negativity (MMN) in all language groups. Before, but not after training, the English speakers showed a larger MMN compared to the Chinese, even though the English speakers performed worst in the behavioral tasks. The MMN was followed by a late negativity, which became smaller with improved discrimination. The high-rising deviants versus standards elicited a late negativity, which was left-lateralized only in the English and Chinese groups.

Conclusion

Results showed that native speakers of English, Chinese and Thai recruited largely similar mechanisms when non-attentively processing Thai lexical tones. However, native Thai speakers differed from the Chinese and English speakers with respect to the processing of late F0 contour differences (high-rising versus mid-level tones). In addition, native speakers of a non-tone language (English) were initially more sensitive to F0 onset differences (low-falling versus mid-level contrast), which was suppressed as a result of training. This result converges with results from previous behavioral studies and supports the view that attentive as well as non-attentive processing of F0 contrasts is affected by language background, but is malleable even in adult learners.

5.
The loudness of auditory (A), tactile (T), and auditory-tactile (A+T) stimuli was measured at supra-threshold levels. Auditory stimuli were pure tones presented binaurally through headphones; tactile stimuli were sinusoids delivered through a single-channel vibrator to the left middle fingertip. All stimuli were presented together with a broadband auditory noise. The A and T stimuli were presented at levels that were matched in loudness to that of the 200-Hz auditory tone at 25 dB sensation level. The 200-Hz auditory tone was then matched in loudness to various combinations of auditory and tactile stimuli (A+T), and purely auditory stimuli (A+A). The results indicate that the matched intensity of the 200-Hz auditory tone is less when the A+T and A+A stimuli are close together in frequency than when they are separated by an octave or more. This suggests that A+T integration may operate in a manner similar to that found in auditory critical band studies, further supporting a strong frequency relationship between the auditory and somatosensory systems.

6.
Avoidance conditioning and a modified method of limits psychophysical procedure were used to study temporal integration of tone and noise signals in the budgerigar (Melopsittacus undulatus). Integration of both tone and noise signals can be described by a negative exponential function with a time constant of about 200 ms. At very short durations there were differences in the integration of tone and noise signals. These data are similar to those reported for a number of other vertebrates, including man. Thresholds for two complex natural vocalizations of the budgerigar are similar to those of pure tones of equivalent duration.
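The negative-exponential integration in this abstract can be illustrated with a short numeric sketch. The ~200-ms time constant is taken from the abstract; the specific threshold-shift formula (exponential integration of signal energy) is a standard textbook form used here as an assumption, not necessarily the authors' exact model.

```python
import math

def threshold_shift_db(duration_ms, tau_ms=200.0):
    """Extra level (dB) needed at short durations relative to a long tone.

    Assumes exponential temporal integration of signal energy with a
    time constant tau of about 200 ms, as reported in the abstract.
    """
    return -10.0 * math.log10(1.0 - math.exp(-duration_ms / tau_ms))

# Thresholds improve as duration grows and saturate beyond ~3 * tau.
for d in (25, 50, 100, 200, 400, 800):
    print(f"{d:4d} ms: +{threshold_shift_db(d):4.1f} dB")
```

At the time constant itself (200 ms), this form predicts a shift of about 2 dB relative to a very long tone.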

7.
Auditory stream segregation refers to the organization of sequential sounds into "perceptual streams" reflecting individual environmental sound sources. In the present study, sequences of alternating high and low tones, "...ABAB...," similar to those used in psychoacoustic experiments on stream segregation, were presented to awake monkeys while neural activity was recorded in primary auditory cortex (A1). Tone frequency separation (delta F), tone presentation rate (PR), and tone duration (TD) were systematically varied to examine whether neural responses correlate with effects of these variables on perceptual stream segregation. "A" tones were fixed at the best frequency of the recording site, while "B" tones were displaced in frequency from "A" tones by an amount delta F. As PR increased, "B" tone responses decreased in amplitude to a greater extent than "A" tone responses, yielding neural response patterns dominated by "A" tone responses occurring at half the alternation rate. Increasing TD facilitated the differential attenuation of "B" tone responses. These findings parallel psychoacoustic data and suggest a physiological model of stream segregation whereby increasing delta F, PR, or TD enhances spatial differentiation of "A" tone and "B" tone responses along the tonotopic map in A1.

8.

Background  

Due to auditory experience, musicians have better auditory expertise than non-musicians. An increased neocortical activity during auditory oddball stimulation was observed in different studies for musicians and for non-musicians after discrimination training. This suggests a modification of synaptic strength among simultaneously active neurons due to the training. We used amplitude-modulated tones (AM) presented in an oddball sequence and manipulated their carrier or modulation frequencies. We investigated non-musicians in order to see if behavioral discrimination training could modify the neocortical activity generated by change detection of AM tone attributes (carrier or modulation frequency). Cortical evoked responses like N1 and mismatch negativity (MMN) triggered by sound changes were recorded by a whole head magnetoencephalographic system (MEG). We investigated (i) how the auditory cortex reacts to pitch difference (in carrier frequency) and changes in temporal features (modulation frequency) of AM tones and (ii) how discrimination training modulates the neuronal activity reflecting the transient auditory responses generated in the auditory cortex.

9.

Background

Recent studies have shown that the human right-hemispheric auditory cortex is particularly sensitive to reduction in sound quality, with an increase in distortion resulting in an amplification of the auditory N1m response measured in the magnetoencephalography (MEG). Here, we examined whether this sensitivity is specific to the processing of acoustic properties of speech or whether it can be observed also in the processing of sounds with a simple spectral structure. We degraded speech stimuli (vowel /a/), complex non-speech stimuli (a composite of five sinusoidals), and sinusoidal tones by decreasing the amplitude resolution of the signal waveform. The amplitude resolution was impoverished by reducing the number of bits to represent the signal samples. Auditory evoked magnetic fields (AEFs) were measured in the left and right hemisphere of sixteen healthy subjects.

Results

We found that the AEF amplitudes increased significantly with stimulus distortion for all stimulus types, which indicates that the right-hemispheric N1m sensitivity is not related exclusively to degradation of acoustic properties of speech. In addition, the P1m and P2m responses were amplified with increasing distortion similarly in both hemispheres. The AEF latencies were not systematically affected by the distortion.

Conclusions

We propose that the increased activity of AEFs reflects cortical processing of acoustic properties common to both speech and non-speech stimuli. More specifically, the enhancement is most likely caused by spectral changes brought about by the decrease of amplitude resolution, in particular the introduction of periodic, signal-dependent distortion to the original sound. Converging evidence suggests that the observed AEF amplification could reflect cortical sensitivity to periodic sounds.

10.
This experiment examined the generation of virtual pitch for harmonically related tones that do not overlap in time. The interval between successive tones was systematically varied in order to gauge the integration period for virtual pitch. A pitch discrimination task was employed, and both harmonic and nonharmonic tone series were tested. The results confirmed that a virtual pitch can be generated by a series of brief, harmonically related tones that are separated in time. Robust virtual pitch information can be derived for intervals between successive 40-ms tones of up to about 45 ms, consistent with a minimum estimate of integration period of about 210 ms. Beyond intertone intervals of 45 ms, performance becomes more variable and approaches an upper limit where discrimination of tone sequences can be undertaken on the basis of the individual frequency components. The individual differences observed in this experiment suggest that the ability to derive a salient virtual pitch varies across listeners.
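The 210-ms integration estimate can be checked arithmetically. The abstract does not state how many tones the estimate assumes; three 40-ms tones separated by two 45-ms gaps is one reading that reproduces the figure, and that assumption is what the sketch below encodes.

```python
# Hypothetical reading of the abstract's 210-ms minimum integration
# period: three successive 40-ms tones spanning two 45-ms silent gaps.
tone_ms, gap_ms, n_tones = 40, 45, 3

integration_span_ms = n_tones * tone_ms + (n_tones - 1) * gap_ms
print(integration_span_ms)  # 3*40 + 2*45 = 210
```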

11.
This paper extends previous research on listeners' abilities to discriminate the details of brief tonal components occurring within sequential auditory patterns (Watson et al., 1975, 1976). Specifically, the ability to discriminate increments in the duration delta t of tonal components was examined. Stimuli consisted of sequences of ten sinusoidal tones: a 40-ms test tone to which delta t was added, plus nine context tones with individual durations fixed at 40 ms or varying between 20 and 140 ms. The level of stimulus uncertainty was varied from high (any of 20 test tones occurring in any of nine factorial contexts), through medium (any of 20 test tones occurring in ten contexts), to minimal levels (one test tone occurring in a single context). The ability to discriminate delta t depended strongly on the level of stimulus uncertainty, and on the listener's experience with the tonal context. Asymptotic thresholds under minimal uncertainty approached 4-6 ms, or 15% of the duration of the test tones; under high uncertainty, they approached 40 ms, or 10% of the total duration of the tonal sequence. Initial thresholds exhibited by inexperienced listeners are two-to-four times greater than the asymptotic thresholds achieved after considerable training (20,000-30,000 trials). Isochronous sequences, with context tones of uniform, 40-ms duration, yield lower thresholds than those with components of varying duration. The frequency and temporal position of the test tones had only minor effects on temporal discrimination. It is proposed that a major determinant of the ability to discriminate the duration of components of sequential patterns is the listener's knowledge about "what to listen for and where." Reduced stimulus uncertainty and extensive practice increase the precision of this knowledge, and result in high-resolution discrimination performance. 
Increased uncertainty, limited practice, or both, would allow only discrimination of gross changes in the temporal or spectral structure of the sequential patterns.
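The two asymptotic thresholds reported above can be restated as the percentages the abstract cites. The 400-ms total sequence duration is inferred from ten 40-ms tones and ignores any inter-tone gaps, which is an assumption of this sketch.

```python
# Threshold figures from the abstract, expressed via its own percentages.
test_tone_ms = 40
sequence_ms = 10 * test_tone_ms            # ten 40-ms tones, gaps ignored

low_uncertainty_ms = 0.15 * test_tone_ms   # ~6 ms: 15% of the test tone
high_uncertainty_ms = 0.10 * sequence_ms   # ~40 ms: 10% of the sequence
print(low_uncertainty_ms, high_uncertainty_ms)
```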

12.

Background

Emotionally salient information in spoken language can be provided by variations in speech melody (prosody) or by emotional semantics. Emotional prosody is essential to convey feelings through speech. In sensorineural hearing loss, impaired speech perception can be improved by cochlear implants (CIs). The aim of this study was to investigate the performance of normal-hearing (NH) participants on the perception of emotional prosody with vocoded stimuli. Semantically neutral sentences with emotional (happy, angry and neutral) prosody were used. Sentences were manipulated to simulate two CI speech-coding strategies: the Advanced Combination Encoder (ACE) and the newly developed Psychoacoustic Advanced Combination Encoder (PACE). Twenty NH adults were asked to recognize emotional prosody from ACE and PACE simulations. Performance was assessed using behavioral tests and event-related potentials (ERPs).

Results

Behavioral data revealed superior performance with original stimuli compared to the simulations. For the simulations, better recognition was observed for happy and angry prosody than for neutral prosody. Irrespective of simulated or unsimulated stimulus type, a significantly larger P200 event-related potential was observed after sentence onset for happy prosody than for the other two emotions. Further, the P200 amplitude was significantly more positive for the PACE strategy than for the ACE strategy.

Conclusions

Results suggested the P200 peak as an indicator of active differentiation and recognition of emotional prosody. The larger P200 peak amplitude for happy prosody indicated the importance of fundamental frequency (F0) cues in prosody processing. The advantage of PACE over ACE highlighted a privileged role of the psychoacoustic masking model in improving prosody perception. Taken together, the study emphasizes the importance of vocoded simulation for better understanding the prosodic cues which CI users may be utilizing.

13.

Objectives/Hypotheses

Singers learn to produce well-controlled tone onsets by accurate synchronization of glottal adduction and buildup of subglottal pressure. Spectrographic analyses have shown that the higher spectrum partials are also present at the vowel onset in classically trained singers’ performances. Such partials are produced by a sharp discontinuity in the waveform of the transglottal airflow, presumably produced by vocal fold collision.

Study Design

After hearing a prompt series of a triad pattern, six singer subjects sang the same triad pattern on the vowel /i/ (1) preceded by an aspirated /p/, (2) preceded by an unaspirated /p/, and (3) in staccato, without any preceding consonant.

Methods

Using high-speed imaging we examined the initiation of vocal fold vibration in aspirated and unaspirated productions of the consonant /p/ as well as in the staccato tones.

Results

The number of vibrations failing to produce vocal fold collision was significantly higher in the aspirated /p/ than in the unaspirated /p/ and in the staccato tones. High-frequency ripple in the audio waveform was significantly delayed in the aspirated /p/.

Conclusions

Initiation of vocal fold collision and the appearance of high-frequency ripple in the vowel /i/ are slightly delayed in aspirated productions of a preceding consonant /p/.

14.
Subjects discriminated a "standard" pair of tone bursts (T1, T2) from a "comparison" pair (T1 + delta t, T2 + delta f), containing increments in the duration delta t of the first burst and/or the frequency delta f of the second burst. The threshold (d' = 2.0) for delta t was measured as a function of delta f, and the threshold for delta f as a function of delta t. The integration of increments in duration and frequency was studied as a function of the spectral and temporal separation between T1 and T2. A trade-off between the values of delta t and delta f required for d' = 2.0 performance was observed. This integration takes place when delta t, delta f occur simultaneously in the same spectral region, and when they occur separated by up to 120 ms, or by up to a full octave. The efficiency of integration was similar for all conditions of temporal and spectral separation studied, because the discriminability of delta t and of delta f is also nearly uniform across experimental conditions. The results from all experimental conditions are adequately described by a vector summation model derived from TSD. In a subsidiary experiment, subjects categorized pure tones varying in duration and frequency as "high" or "low" in pitch and "long" or "short" in duration. It was found that combined variations in duration and frequency result in essentially independent perceptual processes, although pitch has a small effect upon the perceived duration. It is concluded that spectral-temporal integration is a general ability operating in a variety of stimulus conditions.(ABSTRACT TRUNCATED AT 250 WORDS)
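The vector summation model from the theory of signal detection (TSD) that this abstract invokes can be sketched briefly. How delta t and delta f map onto per-cue d' values is what the paper estimates empirically and is not modeled here; the sketch only shows the combination rule itself.

```python
import math

def d_prime_combined(dp_duration, dp_frequency):
    # Vector summation: two independent cues combine as orthogonal
    # components of a single decision variable, so the observer's
    # combined sensitivity is the Euclidean sum of the per-cue d' values.
    return math.hypot(dp_duration, dp_frequency)

# Trade-off at the d' = 2.0 criterion: a larger frequency increment
# lets a smaller duration increment reach threshold, and vice versa.
print(d_prime_combined(1.2, 1.6))  # sqrt(1.44 + 2.56) = 2.0
```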

15.
In a previous paper, it was shown that sequential stream segregation could be based on both spectral information and periodicity information, if listeners were encouraged to hear segregation [Vliegen and Oxenham, J. Acoust. Soc. Am. 105, 339-346 (1999)]. The present paper investigates whether segregation based on periodicity information alone also occurs when the task requires integration. This addresses the question: Is segregation based on periodicity automatic and obligatory? A temporal discrimination task was used, as there is evidence that it is difficult to compare the timing of auditory events that are perceived as being in different perceptual streams. An ABA ABA ABA... sequence was used, in which tone B could be either exactly at the temporal midpoint between two successive tones A or slightly delayed. The tones A and B were of three types: (1) both pure tones; (2) both complex tones filtered through a fixed passband so as to contain only harmonics higher than the 10th, thereby eliminating detectable spectral differences, where only the fundamental frequency (f0) was varied between tones A and B; and (3) both complex tones with the same f0, but where the center frequency of the spectral passband varied between tones. Tone A had a fixed frequency of 300 Hz (when A and B were pure tones) or a fundamental frequency (f0) of 100 Hz (when A and B were complex tones). Five different intervals, ranging from 1 to 18 semitones, were used. The results for all three conditions showed that shift thresholds increased with increasing interval between tones A and B, but the effect was largest for the conditions where A and B differed in spectrum (i.e., the pure-tone and the variable-center-frequency conditions). The results suggest that spectral information is dominant in inducing (involuntary) segregation, but periodicity information can also play a role.

16.
When a low harmonic in a harmonic complex tone is mistuned from its harmonic value by a sufficient amount it is heard as a separate tone, standing out from the complex as a whole. This experiment estimated the degree of mistuning required for this phenomenon to occur, for complex tones with 10 or 12 equal-amplitude components (60 dB SPL per component). On each trial the subject was presented with a complex tone which either had all its partials at harmonic frequencies or had one partial mistuned from its harmonic frequency. The subject had to indicate whether he heard a single complex tone with one pitch or a complex tone plus a pure tone which did not "belong" to the complex. An adaptive procedure was used to track the degree of mistuning required to achieve a d' value of 1. Threshold was determined for each of the first six harmonics of each complex tone. In one set of conditions stimulus duration was held constant at 410 ms, and the fundamental frequency was either 100, 200, or 400 Hz. For most conditions the thresholds fell between 1% and 3% of the harmonic frequency, depending on the subject. However, thresholds tended to be greater for the first two harmonics of the 100-Hz fundamental and, for some subjects, thresholds increased for the fifth and sixth harmonics. In a second set of conditions fundamental frequency was held constant at 200 Hz, and the duration was either 50, 110, 410, or 1610 ms. Thresholds increased by a factor of 3-5 as duration was decreased from 1610 ms to 50 ms. The results are discussed in terms of a hypothetical harmonic sieve and mechanisms for the formation of perceptual streams.
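The 1-3% thresholds reported above can be converted into Hz for the harmonics actually tested; the sketch below does this for the 200-Hz fundamental condition taken from the abstract.

```python
# Convert the reported 1-3% mistuning thresholds into Hz for the
# first six harmonics of the 200-Hz fundamental from the abstract.
f0 = 200.0  # Hz

def mistuning_range_hz(harmonic_number, f0=f0):
    f = harmonic_number * f0
    return 0.01 * f, 0.03 * f  # 1% and 3% of the harmonic frequency

for n in range(1, 7):
    lo, hi = mistuning_range_hz(n)
    print(f"harmonic {n} ({n * f0:.0f} Hz): {lo:.0f}-{hi:.0f} Hz")
```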

17.

Objectives

To ascertain whether cochlear implantation (CI), without specific vocal rehabilitation, is associated with changes in perceptual and acoustic vocal parameters in adults with severe to profound postlingual deafness.

Hypothesis

Merely restoring auditory feedback could allow the individual to make necessary adjustments in vocal pattern.

Study Design

Prospective and longitudinal.

Methods

The experimental group comprised 40 postlingually deaf adults (20 males and 20 females) with no previous laryngeal or voice disorders. Participants’ voices were recorded before CI and 6–9 months after CI. To check for chance modifications between the two evaluations, a control group of 12 postlingually deaf adults, six male and six female, without CI was also evaluated. All sessions comprised recordings of read sentences from the Consensus Auditory-Perceptual Evaluation of Voice and of the sustained vowel /a/. Auditory and acoustic analyses were then conducted.

Results

We found a statistically significant reduction in overall severity, strain, loudness, and instability in the auditory analysis. In the vocal acoustic analysis, we found a statistically significant reduction in fundamental frequency (F0) values (in male participants) and in F0 variability (in both genders). The control group showed no statistically significant changes in most vocal parameters assessed, apart from pitch and F0 (in female participants only). On comparing the interval of variation of results between the experimental and control groups, we found no statistically significant difference in vocal parameters between CI recipients and nonrecipients, with the exception of F0 variability in male participants.

Conclusions

The patients in our sample showed changes in overall severity, strain, loudness, and instability values, and reductions in F0 and its variability. On comparing the variation of results between the groups, we were able to show that postlingually deaf adult implant recipients (experimental group), without specific vocal rehabilitation, differed from nonrecipients (control group) in loudness and in F0 variability of the sustained vowel /a/ in male participants.

18.
Although numerous studies have investigated temporal integration of the acoustic-reflex threshold (ART), research is lacking on the effect of age on temporal integration of the ART. Therefore the effect of age on temporal integration of the ART was investigated for a broad-band noise (BBN) activator. Subjects consisted of two groups of adults with normal-hearing sensitivity: one group of 20 young adults (ten males and ten females, ages 18-29 years, with a mean age of 24 years) and one group of 20 older adults (ten males and ten females, ages 59-75 years, with a mean age of 67.5 years). Activating stimulus durations were 12, 25, 50, 100, 200, 300, 500, and 1000 ms. Significant main effects for duration and age were obtained. That is, as the duration increased, the acoustic reflex threshold for BBN decreased. The interactions of duration x age group and duration x hearing level were not significant. The result of pair-wise analysis indicated statistically significant differences between the two age groups at durations of 20 ms and longer. The observed age effect on temporal integration of the ART for the BBN activator is interpreted in relation to senescent changes in the auditory system.

19.
Frequency difference limens for pure tones (DLFs) and for complex tones (DLCs) were measured for four groups of subjects: young normal hearing, young hearing impaired, elderly with near-normal hearing, and elderly hearing impaired. The auditory filters of the subjects had been measured in earlier experiments using the notched-noise method, for center frequencies (fc) of 100, 200, 400, and 800 Hz. The DLFs for both impaired groups were higher than for the young normal group at all fc's (50-4000 Hz). The DLFs at a given fc were generally only weakly correlated with the sharpness of the auditory filter at that fc, and some subjects with broad filters had near-normal DLFs at low frequencies. Some subjects in the elderly normal group had very large DLFs at low frequencies in spite of near-normal auditory filters. These results suggest a partial dissociation of frequency selectivity and frequency discrimination of pure tones. The DLCs for the two impaired groups were higher than those for the young normal group at all fundamental frequencies (fo) tested (50, 100, 200, and 400 Hz); the DLCs for the elderly normal group were intermediate. At fo = 50 Hz, DLCs for a complex tone containing only low harmonics (1-5) were markedly higher than for complex tones containing higher harmonics, for all subject groups, suggesting that pitch was conveyed largely by the higher, unresolved harmonics. For the elderly impaired group, and some subjects in the elderly normal group, DLCs were larger for a complex tone with lower harmonics (1-12) than for tones without lower harmonics (4-12 and 6-12) for fo's up to 200 Hz. Some elderly normal subjects had markedly larger-than-normal DLCs in spite of near-normal auditory filters. The DLCs tended to be larger for complexes with components added in alternating sine/cosine phase than for complexes with components added in cosine phase. Phase effects were significant for all groups, but were small for the young normal group. 
The results are not consistent with place-based models of the pitch perception of complex tones; rather, they suggest that pitch is at least partly determined by temporal mechanisms.

20.
The experiments examined age-related changes in temporal sensitivity to increments in the interonset intervals (IOI) of components in tonal sequences. Discrimination was examined using reference sequences consisting of five 50-ms tones separated by silent intervals; tone frequencies were either fixed at 4 kHz or varied within a 2-4-kHz range to produce spectrally complex patterns. The tonal IOIs within the reference sequences were either equal (200 or 600 ms) or varied individually with an average value of 200 or 600 ms to produce temporally complex patterns. The difference limen (DL) for increments of IOI was measured. Comparison sequences featured either equal increments in all tonal IOIs or increments in a single target IOI, with the sequential location of the target changing randomly across trials. Four groups of younger and older adults with and without sensorineural hearing loss participated. Results indicated that DLs for uniform changes of sequence rate were smaller than DLs for single target intervals, with the largest DLs observed for single targets embedded within temporally complex sequences. Older listeners performed more poorly than younger listeners in all conditions, but the largest age-related differences were observed for temporally complex stimulus conditions. No systematic effects of hearing loss were observed.
