Similar Documents
20 similar documents retrieved.
1.

Background

Recent studies have shown that the human right-hemispheric auditory cortex is particularly sensitive to reductions in sound quality, with an increase in distortion resulting in an amplification of the auditory N1m response measured with magnetoencephalography (MEG). Here, we examined whether this sensitivity is specific to the processing of the acoustic properties of speech or whether it can also be observed in the processing of sounds with a simple spectral structure. We degraded speech stimuli (the vowel /a/), complex non-speech stimuli (a composite of five sinusoids), and sinusoidal tones by decreasing the amplitude resolution of the signal waveform. The amplitude resolution was impoverished by reducing the number of bits used to represent the signal samples. Auditory evoked magnetic fields (AEFs) were measured over the left and right hemispheres of sixteen healthy subjects.

Results

We found that the AEF amplitudes increased significantly with stimulus distortion for all stimulus types, which indicates that the right-hemispheric N1m sensitivity is not related exclusively to degradation of acoustic properties of speech. In addition, the P1m and P2m responses were amplified with increasing distortion similarly in both hemispheres. The AEF latencies were not systematically affected by the distortion.

Conclusions

We propose that the increased activity of AEFs reflects cortical processing of acoustic properties common to both speech and non-speech stimuli. More specifically, the enhancement is most likely caused by spectral changes brought about by the decrease of amplitude resolution, in particular the introduction of periodic, signal-dependent distortion to the original sound. Converging evidence suggests that the observed AEF amplification could reflect cortical sensitivity to periodic sounds.
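The degradation used in item 1, reducing the number of bits that represent each signal sample, is ordinary uniform requantization; the rounding error it introduces is the periodic, signal-dependent distortion referred to in the Conclusions. A minimal sketch of that manipulation, with an assumed sampling rate, assumed bit depths, and a 1 kHz tone standing in for the study's stimuli:

    import numpy as np

    def reduce_bit_depth(x, n_bits):
        # Requantize a waveform in [-1, 1] to n_bits of amplitude resolution;
        # the rounding error is the signal-dependent distortion discussed above.
        levels = 2 ** n_bits
        q = np.round((x + 1.0) / 2.0 * (levels - 1))
        return q / (levels - 1) * 2.0 - 1.0

    fs = 44100                                        # assumed sampling rate
    t = np.arange(0, 0.2, 1.0 / fs)
    stimulus = 0.9 * np.sin(2 * np.pi * 1000 * t)     # stand-in stimulus, not the study's /a/
    for bits in (16, 8, 4, 2):                        # decreasing amplitude resolution
        degraded = reduce_bit_depth(stimulus, bits)
        err = stimulus - degraded
        snr_db = 10 * np.log10(np.mean(stimulus ** 2) / np.mean(err ** 2))
        print(f"{bits:2d}-bit version: SNR about {snr_db:.1f} dB")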

2.
张树林, 刘扬波, 曾佳, 王永良, 孔祥燕, 谢晓明. 《物理学报》(Acta Physica Sinica), 2012, 61(2): 020701.
Using a magnetically shielded room and a second-order axial gradiometer to suppress environmental magnetic-field noise, we built a single-channel magnetoencephalography (MEG) system and carried out preliminary measurements of the auditory evoked N100m response at different sound frequencies. The results show that, for a 1000 Hz tone of 100 ms duration, the typical N100m peak amplitude is about 0.4 pT. At lower stimulus frequencies the N100m peak is delayed, with a latency difference of up to 25 ms between 100 Hz and 1000 Hz. Compared with stimulation at a fixed frequency of 1 kHz, stimulation with randomly varying frequencies between 1 and 4 kHz enhances the N100m peak amplitude and delays it by several milliseconds. This work lays a foundation for subsequent multi-channel MEG systems using software gradiometers and for studies of auditory mechanisms.

3.

Background  

The cortical activity underlying the perception of vowel identity has typically been addressed by manipulating the first and second formant frequency (F1 & F2) of the speech stimuli. These two values, originating from articulation, are already sufficient for the phonetic characterization of vowel category. In the present study, we investigated how the spectral cues caused by articulation are reflected in cortical speech processing when combined with phonation, the other major part of speech production manifested as the fundamental frequency (F0) and its harmonic integer multiples. To study the combined effects of articulation and phonation we presented vowels with either high (/a/) or low (/u/) formant frequencies which were driven by three different types of excitation: a natural periodic pulseform reflecting the vibration of the vocal folds, an aperiodic noise excitation, or a tonal waveform. The auditory N1m response was recorded with whole-head magnetoencephalography (MEG) from ten human subjects in order to resolve whether brain events reflecting articulation and phonation are specific to the left or right hemisphere of the human brain.
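The stimuli in item 3 separate phonation (the excitation) from articulation (the formant filtering), which is the textbook source-filter view of speech production. A rough sketch of how such vowels can be generated; the formant frequencies, bandwidths, and durations are illustrative textbook values, not the study's exact parameters:

    import numpy as np
    from scipy.signal import lfilter

    fs = 16000
    n = int(0.3 * fs)                        # 300 ms, an assumed duration
    f0 = 120.0                               # fundamental frequency (phonation)

    # Three excitation types, as in item 3: glottal-like pulses, noise, or a tone.
    pulses = np.zeros(n); pulses[::int(fs / f0)] = 1.0
    noise = np.random.default_rng(0).normal(size=n)
    tone = np.sin(2 * np.pi * f0 * np.arange(n) / fs)

    def formant_resonator(x, freq, bw):
        # Second-order resonator approximating one formant (articulation).
        r = np.exp(-np.pi * bw / fs)
        a = [1.0, -2.0 * r * np.cos(2.0 * np.pi * freq / fs), r * r]
        return lfilter([1.0 - r], a, x)

    def synth_vowel(excitation, formants):
        y = excitation
        for freq, bw in formants:            # cascade the formant resonators
            y = formant_resonator(y, freq, bw)
        return y / np.max(np.abs(y))

    formants_a = [(730, 90), (1090, 110), (2440, 150)]   # /a/-like textbook values
    formants_u = [(300, 60), (870, 100), (2240, 140)]    # /u/-like textbook values
    vowel_a_periodic = synth_vowel(pulses, formants_a)
    vowel_u_noise = synth_vowel(noise, formants_u)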

4.

Background  

Auditory evoked responses can be modulated by both the sequencing and the signal-to-noise ratio of auditory stimuli. Both constant sequencing and intense masking sounds generally lead to a reduction of the N1m response amplitude. However, the interaction between these two factors has not been investigated so far. Here, we presented subjects with tone stimuli of different frequencies, which were either concatenated in blocks of constant frequency or in blocks of randomly changing frequencies. The tones were presented either in silence or together with broad-band noises of varying levels.

5.
Auditory evoked cortical responses to changes in the interaural phase difference (IPD) were recorded using magnetoencephalography (MEG). Twelve normal-hearing young adults were tested with amplitude-modulated tones with carrier frequencies of 500, 1000, 1250, and 1500 Hz. The onset of the stimuli evoked P1m-N1m-P2m cortical responses, as did the changes in the interaural phase. Significant responses to IPD changes were identified at 500 and 1000 Hz in all subjects and at 1250 Hz in nine subjects, whereas responses were absent in all subjects at 1500 Hz, indicating a group mean threshold for detecting IPDs of about 1250 Hz. Behavioral thresholds were found at 1200 Hz using an adaptive two-alternative forced-choice procedure. Because the physiological responses require phase information conveyed through synchronous bilateral inputs at the level of the auditory brainstem, the physiological "change" detection thresholds likely reflect the upper limit of phase-synchronous activity in the brainstem. The procedure has potential applications in investigating impaired binaural processing, because a phase statistic applied to single-epoch MEG data allowed individual thresholds to be obtained.
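The IPD-change stimuli in item 5 keep the (diotic) modulation envelope fixed and step the carrier's interaural phase partway through the sound. A sketch under assumed parameters; the modulation rate, duration, position of the phase step, and the absence of onset ramps are simplifications, not the study's settings:

    import numpy as np

    def ipd_change_stimulus(fc=500.0, fm=40.0, dur=1.0, ipd=np.pi, fs=48000):
        # AM tone whose carrier interaural phase difference switches from 0 to
        # `ipd` at the midpoint while the modulation envelope stays diotic.
        t = np.arange(int(dur * fs)) / fs
        env = 0.5 * (1.0 - np.cos(2.0 * np.pi * fm * t))     # 100% sinusoidal AM
        phase_r = np.where(t < dur / 2.0, 0.0, ipd)          # phase step, right ear only
        left = env * np.sin(2.0 * np.pi * fc * t)
        right = env * np.sin(2.0 * np.pi * fc * t + phase_r)
        return np.column_stack([left, right])

    stim = ipd_change_stimulus(fc=1000.0)    # carrier at one of the tested frequencies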

6.

Background  

Owing to their auditory experience, musicians have greater auditory expertise than non-musicians. Increased neocortical activity during auditory oddball stimulation has been observed in different studies for musicians and for non-musicians after discrimination training, suggesting a training-related modification of synaptic strength among simultaneously active neurons. We used amplitude-modulated (AM) tones presented in an oddball sequence and manipulated their carrier or modulation frequencies. We investigated non-musicians in order to see whether behavioral discrimination training could modify the neocortical activity generated by change detection of AM tone attributes (carrier or modulation frequency). Cortical evoked responses such as the N1 and the mismatch negativity (MMN) triggered by sound changes were recorded with a whole-head magnetoencephalography (MEG) system. We investigated (i) how the auditory cortex reacts to pitch differences (in carrier frequency) and changes in temporal features (modulation frequency) of AM tones and (ii) how discrimination training modulates the neuronal activity reflecting the transient auditory responses generated in the auditory cortex.

7.
Three experiments investigated subjects' ability to detect and discriminate the simulated horizontal motion of auditory targets in an anechoic environment. "Moving" stimuli were produced by dynamic application of stereophonic balancing algorithms to a two-loudspeaker system with a 30 degree separation. All stimuli were 500-Hz tones. In experiment 1, subjects had to discriminate a left-to-right moving stimulus from a stationary stimulus pulsed for the same duration (300 or 600 ms). For both durations, minimum audible "movement" angles ("MAMA's") were on the order of 5 degrees for stimuli presented at 0 degrees azimuth (straight ahead), and increased to greater than 30 degrees for stimuli presented at +/- 90 degrees azimuth. Experiment 2 further investigated MAMA's at 0 degrees azimuth, employing two different procedures to track threshold: holding stimulus duration constant (at 100-600 ms) while varying velocity; or holding the velocity constant (at 22 degrees-360 degrees/s) while varying duration. Results from the two procedures agreed with each other and with the MAMA's determined by Perrott and Musicant for actually moving sound sources [J. Acoust. Soc. Am. 62, 1463-1466 (1977b)]: As stimulus duration decreased below 100-150 ms, the MAMA's increased sharply from 5 degrees to 20 degrees or more, indicating that there is some minimum integration time required for subjects to perform optimally in an auditory spatial resolution task. Experiment 3 determined differential "velocity" thresholds employing simulated reference velocities of 0 degrees-150 degrees/s and stimulus durations of 150-600 ms. As with experiments 1 and 2, the data are more easily summarized by considering angular distance than velocity: For a given "extent of movement" of a reference target, about 4 degrees-10 degrees additional extent is required for threshold discrimination between two "moving" targets, more or less independently of stimulus duration or reference velocity. These data suggest that for the range of simulated velocities employed in these experiments, subjects respond to spatial changes--not velocity per se--when presented with a "motion" detection or discrimination task.
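The "moving" targets in item 7 were produced by dynamically rebalancing the levels of two loudspeakers 30 degrees apart. One common balancing law is constant-power (sine/cosine) panning; the sketch below assumes that law, which may differ from the algorithm actually used in the study:

    import numpy as np

    def pan_moving_tone(f=500.0, dur=0.3, start=-15.0, end=15.0,
                        spread=30.0, fs=48000):
        # Sweep a tone between two loudspeakers separated by `spread` degrees
        # using constant-power stereo panning (an assumed balancing law).
        t = np.arange(int(dur * fs)) / fs
        tone = np.sin(2.0 * np.pi * f * t)
        azimuth = np.linspace(start, end, t.size)          # simulated trajectory
        theta = (azimuth / spread + 0.5) * (np.pi / 2.0)   # map span to 0..pi/2
        left_gain = np.cos(theta)
        right_gain = np.sin(theta)
        return np.column_stack([left_gain * tone, right_gain * tone])

    moving = pan_moving_tone(start=-2.5, end=2.5)  # ~5 degree excursion, near the reported MAMA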

8.
Three-layer neural-network functions were developed to transform spectral representations of pinna-filtered stimuli at the input to a space-mapped representation of sound-source direction at the output. The inputs are modeled after transfer functions of the external ear of the cat; the output is modeled on the spatial sensitivity of superior colliculus neurons. Network solutions are obtained by backpropagation and by a method that enforces uniform task distribution in the hidden layer of the model. Solutions are characterized using bandlimited inputs to study the relative strength of potential sound localization cues in various frequency regions. This analysis suggests that the frequency region containing the first spectral notch (5-18 kHz) provides the best localization cues. Response properties of model neurons were studied using input patterns modeled after auditory nerve response profiles to pure tones at various frequencies and sound levels. The response properties of hidden layer model neurons resemble cochlear nucleus types III and IV and their composites. Neurons in both hidden and output layers show the properties of spectral notch detectors. Although neural networks have limitations as models of real neural systems, the results illustrate how they can provide insight into the computation of complex transformations in the nervous system.
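Item 8 describes a three-layer feed-forward network, trained with backpropagation, that maps spectral input patterns onto a space-mapped output layer. The stand-in below only illustrates that architecture: the data, layer sizes, and training settings are invented, the "direction" of each toy input is signalled by the position of a spectral notch (echoing the finding that the first-notch region carries strong cues), and a plain sigmoid/MSE formulation is used rather than the authors' exact setup:

    import numpy as np

    rng = np.random.default_rng(0)

    # Invented toy data, not the cat transfer functions.
    n_dir, n_in, n_hid, n_out = 400, 32, 20, 16
    notch_bin = rng.integers(4, 28, size=n_dir)
    X = rng.normal(scale=0.1, size=(n_dir, n_in))
    X[np.arange(n_dir), notch_bin] -= 3.0                  # carve a spectral notch
    centers = (notch_bin - 4) * (n_out - 1) / 23.0         # notch position -> output place
    Y = np.exp(-0.5 * ((np.arange(n_out) - centers[:, None]) / 1.5) ** 2)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    W1 = rng.normal(scale=0.1, size=(n_in, n_hid)); b1 = np.zeros(n_hid)
    W2 = rng.normal(scale=0.1, size=(n_hid, n_out)); b2 = np.zeros(n_out)
    lr = 0.5

    for epoch in range(5000):                  # plain batch backpropagation
        H = sigmoid(X @ W1 + b1)               # hidden layer
        O = sigmoid(H @ W2 + b2)               # space-mapped output layer
        err = O - Y
        dO = err * O * (1.0 - O)               # sigmoid derivative at the output
        dH = (dO @ W2.T) * H * (1.0 - H)
        W2 -= lr * (H.T @ dO) / n_dir; b2 -= lr * dO.mean(axis=0)
        W1 -= lr * (X.T @ dH) / n_dir; b1 -= lr * dH.mean(axis=0)

    print("final mean squared error:", round(float(np.mean((O - Y) ** 2)), 4))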

9.
The cochlea plays a vital role in the sense and sensitivity of hearing; however, there is currently a lack of knowledge regarding the relationships between mechanical transduction of sound at different intensities and frequencies in the cochlea and the neurochemical processes that lead to neuronal responses in the central auditory system. In the current study, we introduced manganese-enhanced MRI (MEMRI), a convenient in vivo imaging method, to investigate how sound at different intensities and frequencies is propagated from the cochlea to the central auditory system. Using MEMRI with intratympanic administration of MnCl2, we demonstrated differential manganese signal enhancement according to sound intensity and frequency in the ascending auditory pathway of the rat. Compared to signal enhancement without explicit sound stimuli, auditory structures in the ascending auditory pathway showed stronger signal enhancement in rats that received sound stimuli of 10 and 40 kHz. In addition, signal enhancement with a stimulation frequency of 40 kHz was stronger than that with 10 kHz. The results therefore suggest that an effective response to high sound intensity or frequency requires either more firing of individual auditory neurons or the firing of many auditory neurons together, producing pooled neural activity.

10.
The transformations of sound by the auditory periphery of the ferret have been investigated using an impulse response technique for a large number of sound locations surrounding the animal. Individual frequencies were extracted from the detailed spectral transformation functions (STFs) obtained for each stimulus location and, using sophisticated spatial interpolation routines, were used to calculate the directional response of the periphery at that frequency. The strength of the directional response was directly related to the analysis frequency. Furthermore, as the analysis frequency was increased to 20 kHz, the orientation of the directional response increased in elevation from the horizon (E0 degrees) to about E30 degrees, while the azimuthal location remained fairly constant at 30 degrees to 40 degrees from the midline. For analysis frequencies above 20 kHz, the response became increasingly directional toward the ipsilateral interaural axis. The interaural level differences (ILDs) were also calculated for all animals studied. ILDs increased from around 5 to 25 dB over the range of frequencies from 3-24 kHz. The two-dimensional patterns of iso-ILD contours were roughly concentric and centered on the interaural axis for frequencies below 16 kHz. For higher frequencies, there was a tendency for the ILD contours to be centered on more anterior and inferior locations. The increased directionality of the auditory periphery with increasing analysis frequency, together with the presence of sharp nulls in the response at high analysis frequencies, is consistent with a diffractive effect produced by the aperture of the pinna. However, this simple model does not predict the directional responses over the low to middle frequency range.

11.
For human listeners, cues for vertical-plane localization are provided by direction-dependent pinna filtering. This study quantified listeners' weighting of the spectral cues from each ear as a function of stimulus lateral angle, interaural time difference (ITD), and interaural level difference (ILD). Subjects indicated the apparent position of headphone-presented noise bursts synthesized in virtual auditory space. The synthesis filters for the two ears either corresponded to the same location or to two different locations separated vertically by 20 deg. Weighting of each ear's spectral information was determined by a multiple regression between the elevations to which each ear's spectrum corresponded and the vertical component of listeners' responses. The apparent horizontal source location was controlled either by choosing synthesis filters corresponding to locations on or 30 deg left or right of the median plane or by attenuating or delaying the signal at one ear. For broadband stimuli, spectral weighting and apparent lateral angle were determined primarily by ITD. Only for high-pass stimuli were weighting and lateral angle determined primarily by ILD. The results suggest that the weighting of monaural spectral cues and the perceived lateral angle of a sound source depend similarly on ITD, ILD, and stimulus spectral range.
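The weighting analysis in item 11 regresses the vertical component of each response on the elevations encoded in the left- and right-ear synthesis filters; the two regression coefficients are read as the weights given to each ear's spectrum. A sketch with simulated responses, where the 0.7/0.3 weights and the noise level are made up for illustration:

    import numpy as np

    rng = np.random.default_rng(1)

    n_trials = 200
    elev_left = rng.choice([-10.0, 10.0], size=n_trials)   # elevation encoded in the left-ear filter
    elev_right = rng.choice([-10.0, 10.0], size=n_trials)  # elevation encoded in the right-ear filter

    # Simulated listener: weights 0.7 (left) and 0.3 (right) plus response noise.
    responses = 0.7 * elev_left + 0.3 * elev_right + rng.normal(scale=3.0, size=n_trials)

    # Multiple regression (least squares) recovers the per-ear spectral weights.
    design = np.column_stack([elev_left, elev_right, np.ones(n_trials)])
    weights, *_ = np.linalg.lstsq(design, responses, rcond=None)
    print("left-ear weight:", round(float(weights[0]), 2),
          "right-ear weight:", round(float(weights[1]), 2))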

12.
The influence of pinnae-based spectral cues on sound localization
The role of pinnae-based spectral cues was investigated by requiring listeners to locate sound, binaurally, in the horizontal plane with and without partial occlusion of their external ears. The main finding was that the high frequencies were necessary for optimal performance. When the stimulus contained the higher audio frequencies, e.g., broadband and 4.0-kHz high-pass noise, localization accuracy was significantly superior to that recorded for stimuli consisting only of the lower frequencies (4.0- and 1.0-kHz low-pass noise). This finding was attributed to the influence of the spectral cues furnished by the pinnae, for when the stimulus composition included high frequencies, pinnae occlusion resulted in a marked decline in localization accuracy. Numerous front-rear reversals occurred. Moreover, the ability to distinguish among sounds originating within the same quadrant also suffered. Performance proficiency for the low-pass stimuli was not further degraded under conditions of pinnae occlusion. In locating the 4.0-kHz high-pass noise when both, neither, or only one ear was occluded, the data demonstrated unequivocally that the pinna-based cues of the "near" ear contributed powerfully toward localization accuracy.

13.
An experiment was conducted to determine the effect of aging on sound localization. Seven groups of 16 subjects, aged 10-81 years, were tested. Sound localization was assessed using six different arrays of four or eight loudspeakers that surrounded the subject in the horizontal plane, at a distance of 1 m. For two 4-speaker arrays, one loudspeaker was positioned in each spatial quadrant, on either side of the midline or the interaural axis, respectively. For four 8-speaker arrays, two loudspeakers were positioned in each quadrant, one close to the midline and the second separated from the first by 15 degrees, 30 degrees, 45 degrees, or 60 degrees. Three different 300-ms stimuli were localized: two one-third-octave noise bands, centered at 0.5 and 4 kHz, and broadband noise. The stimulus level (75 dB SPL) was well above hearing threshold for all subjects tested. Over the age range studied, percent-correct sound-source identification judgments decreased by 12%-15%. Performance decrements were apparent as early as the third decade of life. Broadband noise was the easiest to localize (both binaural and spectral cues were available), and the 0.5-kHz noise band the most difficult (primarily only the interaural temporal difference cue was available). Accuracy was higher in front of the head than behind it, and errors were largely front/back mirror-image reversals. A left-sided superiority was evident until the fifth decade of life. The results support the conclusions that the processing of spectral information becomes progressively less efficient with aging, and is generally worse for sources on the right side of space.

14.
Directional properties of the sound transformation at the ear of four intact echolocating bats, Eptesicus fuscus, were investigated via measurements of the head-related transfer function (HRTF). Contributions of external ear structures to directional features of the transfer functions were examined by remeasuring the HRTF in the absence of the pinna and tragus. The investigation mainly focused on the interactions between the spatial and the spectral features in the bat HRTF. The pinna provides gain and shapes these features over a large frequency band (20-90 kHz), and the tragus contributes gain and directionality at the high frequencies (60 to 90 kHz). Analysis of the spatial and spectral characteristics of the bat HRTF reveals that both interaural level differences (ILD) and monaural spectral features are subject to changes in sound source azimuth and elevation. Consequently, localization cues for horizontal and vertical components of the sound source location interact. Availability of multiple cues about sound source azimuth and elevation should enhance information to support reliable sound localization. These findings stress the importance of the acoustic information received at the two ears for sound localization of sonar target position in both azimuth and elevation.
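The ILDs analysed in items 10 and 14 follow directly from the measured ear responses: at each direction and frequency, the ILD is the level at one ear minus the level at the other in decibels (here taken as left minus right). A small sketch of that computation on placeholder impulse responses; real HRTF measurements would replace the synthetic ones:

    import numpy as np

    fs = 192000                              # high rate, suitable for ultrasonic bat HRTFs

    def ild_spectrum(h_left, h_right, fs, n_fft=1024):
        # Interaural level difference (dB) per frequency from two impulse responses.
        H_l = np.abs(np.fft.rfft(h_left, n_fft))
        H_r = np.abs(np.fft.rfft(h_right, n_fft))
        freqs = np.fft.rfftfreq(n_fft, 1.0 / fs)
        ild_db = 20.0 * np.log10(H_l + 1e-12) - 20.0 * np.log10(H_r + 1e-12)
        return freqs, ild_db

    # Placeholder impulse responses standing in for measured HRTFs.
    rng = np.random.default_rng(2)
    h_l = rng.normal(size=256) * np.exp(-np.arange(256) / 40.0)
    h_r = 0.5 * rng.normal(size=256) * np.exp(-np.arange(256) / 40.0)
    freqs, ild = ild_spectrum(h_l, h_r, fs)
    band = (freqs > 60e3) & (freqs < 90e3)
    print("mean ILD 60-90 kHz:", round(float(np.mean(ild[band])), 1), "dB")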

15.
Perceptual integration of vibrotactile and auditory sinusoidal tone pulses was studied in detection experiments as a function of stimulation frequency. Vibrotactile stimuli were delivered through a single channel vibrator to the left middle fingertip. Auditory stimuli were presented diotically through headphones in a background of 50 dB sound pressure level broadband noise. Detection performance for combined auditory-tactile presentations was measured using stimulus levels that yielded 63% to 77% correct unimodal performance. In Experiment 1, the vibrotactile stimulus was 250 Hz and the auditory stimulus varied between 125 and 2000 Hz. In Experiment 2, the auditory stimulus was 250 Hz and the tactile stimulus varied between 50 and 400 Hz. In Experiment 3, the auditory and tactile stimuli were always equal in frequency and ranged from 50 to 400 Hz. The highest rates of detection for the combined-modality stimulus were obtained when stimulating frequencies in the two modalities were equal or closely spaced (and within the Pacinian range). Combined-modality detection for closely spaced frequencies was generally consistent with an algebraic sum model of perceptual integration; wider-frequency spacings were generally better fit by a Pythagorean sum model. Thus, perceptual integration of auditory and tactile stimuli at near-threshold levels appears to depend both on absolute frequency and relative frequency of stimulation within each modality.
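The two integration models compared in item 15 are usually stated in terms of detectability: an algebraic sum predicts d'_AT = d'_A + d'_T, whereas a Pythagorean (vector) sum predicts d'_AT = sqrt(d'_A^2 + d'_T^2). The sketch below turns unimodal percent correct into these two predictions, assuming a two-interval forced-choice relation between proportion correct and d'; that relation is an assumption, since the paper's exact procedure is not restated here:

    import numpy as np
    from scipy.stats import norm

    def dprime_from_pc(pc):
        # d' from proportion correct under an assumed 2IFC relation.
        return np.sqrt(2.0) * norm.ppf(pc)

    def pc_from_dprime(d):
        return norm.cdf(d / np.sqrt(2.0))

    pc_auditory, pc_tactile = 0.70, 0.70     # unimodal performance in the 63-77% range
    d_a, d_t = dprime_from_pc(pc_auditory), dprime_from_pc(pc_tactile)

    d_algebraic = d_a + d_t                  # algebraic sum prediction
    d_pythagorean = np.hypot(d_a, d_t)       # Pythagorean (vector) sum prediction
    print("algebraic sum prediction:   Pc =", round(float(pc_from_dprime(d_algebraic)), 3))
    print("Pythagorean sum prediction: Pc =", round(float(pc_from_dprime(d_pythagorean)), 3))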

16.
The ability of six human subjects to discriminate the velocity of moving sound sources was examined using broadband stimuli presented in virtual auditory space. Subjects were presented with two successive stimuli moving in the frontal horizontal plane level with the ears, and were required to judge which moved the faster. Discrimination thresholds were calculated for reference velocities of 15, 30, and 60 degrees/s under three stimulus conditions. In one condition, stimuli were centered on 0 degrees azimuth and their duration varied randomly to prevent subjects from using displacement as an indicator of velocity. Performance varied between subjects giving median thresholds of 5.5, 9.1, and 14.8 degrees/s for the three reference velocities, respectively. In a second condition, pairs of stimuli were presented for a constant duration and subjects would have been able to use displacement to assist their judgment as faster stimuli traveled further. It was found that thresholds decreased significantly for all velocities (3.8, 7.1, and 9.8 degrees/s), suggesting that the subjects were using the additional displacement cue. The third condition differed from the second in that the stimuli were "anchored" on the same starting location rather than centered on the midline, thus doubling the spatial offset between stimulus endpoints. Subjects showed the lowest thresholds in this condition (2.9, 4.0, and 7.0 degrees/s). The results suggested that the auditory system is sensitive to velocity per se, but velocity comparisons are greatly aided if displacement cues are present.

17.
Taking into account the contribution of the dynamic cues introduced by head rotation to auditory vertical localization, a four-loudspeaker virtual reproduction method for frontal spatial surround sound is proposed. The four loudspeakers are placed at front-left and front-right positions in the horizontal plane and at elevated front-left-up and front-right-up positions, and transaural (binaural-based) signal processing is used to convert the multichannel spatial surround signals into the reproduction signals for the four loudspeakers. Taking the virtual reproduction of 9.1-channel spatial surround sound as an example, an analysis of the binaural sound pressures and the localization cues they contain, carried out with head-related transfer functions, shows that the method produces correct interaural time differences and their variation with head rotation, and thus provides appropriate binaural cues for lateral localization and dynamic cues for vertical localization. Psychoacoustic experiments further show that the method reproduces stable horizontal and vertical virtual sources in the frontal space. Therefore, the four-loudspeaker arrangement combined with transaural processing is sufficient to reproduce the vertical localization information of frontal spatial surround sound, enabling the downmixing and simplification of multichannel spatial surround sound.

18.
The temporal representation of speechlike stimuli in the auditory-nerve output of a guinea pig cochlea model is described. The model consists of a bank of dual resonance nonlinear filters that simulate the vibratory response of the basilar membrane followed by a model of the inner hair cell/auditory nerve complex. The model is evaluated by comparing its output with published physiological auditory nerve data in response to single and double vowels. The evaluation includes analyses of individual fibers, as well as ensemble responses over a wide range of best frequencies. In all cases the model response closely follows the patterns in the physiological data, particularly the tendency for the temporal firing pattern of each fiber to represent the frequency of a nearby formant of the speech sound. In the model this behavior is largely a consequence of filter shapes; nonlinear filtering has only a small contribution at low frequencies. The guinea pig cochlear model produces a useful simulation of the measured physiological response to simple speech sounds and is therefore suitable for use in more advanced applications, including attempts to generalize these principles to the response of the human auditory system, both normal and impaired.
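The dual resonance nonlinear (DRNL) filters in item 18 split the input into a linear path and a compressive nonlinear path; the characteristic element of the nonlinear path is a "broken-stick" input-output function, roughly y = sign(x) * min(a|x|, b|x|^c). A sketch of that element alone, with placeholder parameters rather than the published guinea-pig fits; the full model also cascades gammatone and low-pass filters around this nonlinearity:

    import numpy as np

    def broken_stick(x, a=1e4, b=0.32, c=0.25):
        # DRNL-style compression: linear for small inputs, compressive (exponent c)
        # for larger ones. a and b are placeholders chosen so the knee falls near
        # 1e-6 in these arbitrary input units.
        return np.sign(x) * np.minimum(a * np.abs(x), b * np.abs(x) ** c)

    # Input-output function over a wide range of input amplitudes.
    for level in np.logspace(-9, -3, 7):
        print(f"input {level:.1e} -> output {float(broken_stick(level)):.3e}")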

19.
A human psychoacoustical experiment is described that investigates the role of the monaural and interaural spectral cues in human sound localization. In particular, it focuses on the relative contribution of the monaural versus the interaural spectral cues towards resolving directions within a cone of confusion (i.e., directions with similar interaural time and level difference cues) in the auditory localization process. Broadband stimuli were presented in virtual space from 76 roughly equidistant locations around the listener. In the experimental conditions, a "false" flat spectrum was presented at the left eardrum. The sound spectrum at the right eardrum was then adjusted so that either the true right monaural spectrum or the true interaural spectrum was preserved. In both cases, the overall interaural time difference and overall interaural level difference were maintained at their natural values. With these virtual sound stimuli, the sound localization performance of four human subjects was examined. The localization performance results indicate that neither the preserved interaural spectral difference cue nor the preserved right monaural spectral cue was sufficient to maintain accurate elevation judgments in the presence of a flat monaural spectrum at the left eardrum. An explanation for the localization results is given in terms of the relative spectral information available for resolving directions within a cone of confusion.
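The manipulation in item 19 is easiest to see on the magnitude spectra: the left ear always receives a flat ("false") spectrum, while the right ear receives either its own true spectrum or the flat spectrum plus the true interaural spectral difference. A sketch of that construction with synthetic spectra standing in for the measured HRTFs; the overall ITD and ILD would be restored separately, as the abstract notes:

    import numpy as np

    rng = np.random.default_rng(3)
    n_bins = 128
    true_left_db = rng.normal(scale=5.0, size=n_bins)   # stand-ins for measured HRTF magnitudes (dB)
    true_right_db = rng.normal(scale=5.0, size=n_bins)

    flat_left_db = np.zeros(n_bins)                      # "false" flat spectrum at the left eardrum

    # Condition A: preserve the true right monaural spectrum.
    right_monaural_db = true_right_db.copy()

    # Condition B: preserve the true interaural spectral difference (right minus left).
    interaural_db = true_right_db - true_left_db
    right_interaural_db = flat_left_db + interaural_db

    print("condition B preserves the interaural difference:",
          bool(np.allclose(right_interaural_db - flat_left_db, interaural_db)))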

20.
The broadening and splitting of auditory events in dichotic listening conditions with various degrees of interaural coherence are discussed. By using a psychoacoustical mapping method, it has been possible to observe broadening and splitting for a wide range of stimuli, including broadband pink noise as well as bandpass noises with different relative bandwidths and center frequencies. The spatial extents of the auditory events decrease with increasing center frequencies for bandpass stimuli of constant relative bandwidth. The number of partial events for bandpass stimuli decreases with increasing degrees of interaural coherence. These results are, for example, of interest with respect to auditory spaciousness in architectural acoustics.
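A standard way to obtain the partial interaural coherence used in item 20 is to mix a common noise with an independent noise in one ear, right = rho * common + sqrt(1 - rho^2) * independent, which yields a long-term coherence of about rho. The sketch below assumes that mixing method; the original stimuli may have been generated differently:

    import numpy as np

    def coherent_noise_pair(rho, n, rng):
        # Two noise channels with target interaural coherence rho (mixing method).
        common = rng.normal(size=n)
        independent = rng.normal(size=n)
        left = common
        right = rho * common + np.sqrt(1.0 - rho ** 2) * independent
        return left, right

    rng = np.random.default_rng(4)
    for rho in (0.0, 0.4, 0.8, 1.0):
        l, r = coherent_noise_pair(rho, 200000, rng)
        measured = np.corrcoef(l, r)[0, 1]
        print(f"target coherence {rho:.1f} -> measured {measured:.2f}")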
