Similar Literature
20 similar articles found.
1.
Tone complexes with positive (m+) and negative (m-) Schroeder phase show large differences in masking efficiency. This study investigated whether the different phase characteristics also affect loudness. Loudness matches between m+ and m- complexes were measured as a function of (1) the fundamental frequency (f0) for different frequency bands in normal-hearing and hearing-impaired subjects, and (2) intensity level in normal-hearing subjects. In normal-hearing subjects, the level of the m+ stimulus was up to 10 dB higher than that of the corresponding m- stimulus at the point of equal loudness. The largest differences in loudness were found for levels between 20 and 60 dB SL. In hearing-impaired listeners, the difference was reduced, indicating the relevance of active cochlear mechanisms. Loudness matches of m+ and m- stimuli to a common noise reference (experiment 3) showed differences as a function of f0 that were in line with direct comparisons from experiment 1 and indicated additionally that the effect is mainly due to the specific internal processing of m+. The findings are roughly consistent with studies pertaining to masking efficiency and probably cannot be explained by current loudness models, supporting the need for incorporating more realistic cochlea simulations in future loudness models.
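For readers unfamiliar with Schroeder-phase stimuli, the sketch below generates m+ and m- harmonic complexes using the common phase rule θ_n = ±πn(n+1)/N; the fundamental frequency, number of harmonics, duration, and level are illustrative placeholders, not the values used in the study.

```python
# Minimal sketch: build m+ and m- Schroeder-phase tone complexes.
# Uses one common form of the Schroeder (1970) phase rule, theta_n = +/- pi*n*(n+1)/N;
# f0, component count, duration, and level here are illustrative assumptions.
import numpy as np

def schroeder_complex(f0=100.0, n_components=40, sign=+1, fs=44100, dur=0.5):
    """Sum of equal-amplitude harmonics with Schroeder phases (+1 -> m+, -1 -> m-)."""
    t = np.arange(int(fs * dur)) / fs
    x = np.zeros_like(t)
    for n in range(1, n_components + 1):
        phase = sign * np.pi * n * (n + 1) / n_components
        x += np.cos(2 * np.pi * n * f0 * t + phase)
    return x / np.max(np.abs(x))          # normalize to +/-1

m_plus = schroeder_complex(sign=+1)
m_minus = schroeder_complex(sign=-1)      # same magnitude spectrum, opposite phase curvature
```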

2.
Temporal integration for a 1000-Hz signal was determined for normal-hearing and cochlear hearing-impaired listeners in quiet and in masking noise of variable bandwidth. Critical ratio and 3-dB critical band measures of frequency resolution were derived from the masking data. Temporal integration for the normal-hearing listeners was markedly reduced in narrow-band noise, when contrasted with temporal integration in quiet or in wideband noise. The effect of noise bandwidth on temporal integration was smaller for the hearing-impaired group. Hearing-impaired subjects showed both reduced temporal integration and reduced frequency resolution for the 200-ms signal. However, a direct relation between temporal integration and frequency resolution was not indicated. Frequency resolution for the normal-hearing listeners did not differ from that of the hearing-impaired listeners for the 20-ms signal. It was suggested that some of the frequency resolution and temporal integration differences between normal-hearing and hearing-impaired listeners could be accounted for by off-frequency listening.

3.
A methodology for the estimation of individual loudness growth functions using tone-burst otoacoustic emissions (TBOAEs) and tone-burst auditory brainstem responses (TBABRs) was proposed by Silva and Epstein [J. Acoust. Soc. Am. 127, 3629-3642 (2010)]. This work attempted to investigate the application of this technique to the more challenging cases of hearing-impaired listeners. The specific aims of this study were to (1) verify the accuracy of this technique with eight hearing-impaired listeners for 1- and 4-kHz tone-burst stimuli, (2) investigate the effect of residual noise levels from the TBABRs on the quality of the loudness growth estimation, and (3) provide a public dataset of physiological and psychoacoustical responses to a wide range of stimulus intensities. The results show that some of the physiological loudness growth estimates were within the mean-square-error range for standard psychoacoustical procedures, with closer agreement at 1 kHz. The median residual noise in the TBABRs was found to be related to the performance of the estimation, with some listeners showing strong improvements in the estimated loudness growth function when controlling for noise levels. This suggests that future studies using evoked potentials to estimate loudness growth should control for the estimated averaged residual noise levels of the TBABRs.

4.
Articulation index (AI) theory was used to evaluate stop-consonant recognition of normal-hearing listeners and listeners with high-frequency hearing loss. From results reported in a companion article [Dubno et al., J. Acoust. Soc. Am. 85, 347-354 (1989)], a transfer function relating the AI to stop-consonant recognition was established, and a frequency importance function was determined for the nine stop-consonant-vowel syllables used as test stimuli. The calculations included the rms and peak levels of the speech that had been measured in 1/3 octave bands; the internal noise was estimated from the thresholds for each subject. The AI model was then used to predict performance for the hearing-impaired listeners. A majority of the AI predictions for the hearing-impaired subjects fell within +/- 2 standard deviations of the normal-hearing listeners' results. However, as observed in previous data, the AI tended to overestimate performance of the hearing-impaired listeners. The accuracy of the predictions decreased with the magnitude of high-frequency hearing loss. Thus, with the exception of performance for listeners with severe high-frequency hearing loss, the results suggest that poorer speech recognition among hearing-impaired listeners results from reduced audibility within critical spectral regions of the speech stimuli.
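As a rough illustration of the AI framework referred to above, the sketch below computes a band-audibility index from per-band speech peak levels, listener thresholds, and a frequency importance function. The 30-dB speech dynamic range and the equal importance weights are assumptions for illustration, not the transfer or importance functions derived in the study.

```python
# Hedged sketch of an articulation-index style calculation: per-band audibility,
# weighted by a frequency importance function and summed. The 30-dB dynamic range
# and the uniform weights are illustrative assumptions.
import numpy as np

def articulation_index(speech_peak_db, threshold_db, importance, dyn_range=30.0):
    speech_peak_db = np.asarray(speech_peak_db, float)   # 1/3-octave band peak levels
    threshold_db = np.asarray(threshold_db, float)       # per-band threshold ("internal noise")
    audibility = np.clip((speech_peak_db - threshold_db) / dyn_range, 0.0, 1.0)
    return float(np.sum(np.asarray(importance) * audibility))

# Example: 5 bands with equal importance weights (weights sum to 1).
ai = articulation_index([60, 58, 55, 50, 40], [20, 25, 35, 45, 55], [0.2] * 5)
```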

5.
The purpose of this study was to examine the effect of spectral-cue audibility on the recognition of stop consonants in normal-hearing and hearing-impaired adults. Subjects identified six synthetic CV speech tokens in a closed-set response task. Each syllable differed only in the initial 40-ms consonant portion of the stimulus. In order to relate performance to spectral-cue audibility, the initial 40 ms of each CV were analyzed via FFT and the resulting spectral array was passed through a sliding-filter model of the human auditory system to account for logarithmic representation of frequency and the summation of stimulus energy within critical bands. This allowed the spectral data to be displayed in comparison to a subject's sensitivity thresholds. For normal-hearing subjects, an orderly function relating the percentage of audible stimulus to recognition performance was found, with perfect discrimination performance occurring when the bulk of the stimulus spectrum was presented at suprathreshold levels. For the hearing-impaired subjects, however, it was found in many instances that suprathreshold presentation of stop-consonant spectral cues did not yield recognition equivalent to that found for the normal-hearing subjects. These results demonstrate that while the audibility of individual stop consonants is an important factor influencing recognition performance in hearing-impaired subjects, it is not always sufficient to explain the effects of sensorineural hearing loss.
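The audibility analysis described above (FFT of the initial 40 ms, energy summed within auditory bands, comparison with thresholds) might be sketched as follows; the band edges, calibration, and filter shape are simplifying assumptions rather than the sliding-filter model actually used.

```python
# Rough sketch: power spectrum of the consonant portion, energy summed within
# frequency bands, then compared with the listener's thresholds. Band edges,
# calibration to dB SPL, and the filter model are simplifying assumptions.
import numpy as np

def band_levels(x, fs, band_edges_hz):
    spec = np.abs(np.fft.rfft(x)) ** 2                      # power spectrum
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    levels = []
    for lo, hi in zip(band_edges_hz[:-1], band_edges_hz[1:]):
        band_power = spec[(freqs >= lo) & (freqs < hi)].sum()
        levels.append(10 * np.log10(band_power + 1e-12))    # relative dB, uncalibrated
    return np.array(levels)

def fraction_audible(band_db, threshold_db):
    """Proportion of bands whose level exceeds the listener's threshold."""
    return np.mean(np.asarray(band_db) > np.asarray(threshold_db))
```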

6.
The ability to discriminate between sounds with different spectral shapes was evaluated for normal-hearing and hearing-impaired listeners. Listeners detected a 920-Hz tone added in phase to a single component of a standard consisting of the sum of five tones spaced equally on a logarithmic frequency scale ranging from 200 to 4200 Hz. An overall level randomization of 10 dB was either present or absent. In one subset of conditions, the no-perturbation conditions, the standard stimulus was the sum of equal-amplitude tones. In the perturbation conditions, the amplitudes of the components within a stimulus were randomly altered on every presentation. For both perturbation and no-perturbation conditions, thresholds for the detection of the 920-Hz tone were measured to compare sensitivity to changes in spectral shape between normal-hearing and hearing-impaired listeners. To assess whether hearing-impaired listeners relied on different regions of the spectrum to discriminate between sounds, spectral weights were estimated from the perturbed standards by correlating the listener's responses with the level differences per component across two intervals of a two-alternative forced-choice task. Results showed that hearing-impaired and normal-hearing listeners had similar sensitivity to changes in spectral shape. On average, across-frequency correlation functions also were similar for both groups of listeners, suggesting that as long as all components are audible and well separated in frequency, hearing-impaired listeners can use information across frequency as well as normal-hearing listeners. Analysis of the individual data revealed, however, that normal-hearing listeners may be better able to adopt optimal weighting schemes. This conclusion is only tentative, as differences in internal noise may need to be considered to interpret the results obtained from weighting studies between normal-hearing and hearing-impaired listeners.
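The spectral-weight estimation described above can be illustrated with a small decision-weight analysis: the per-component level difference between the two intervals of each trial is correlated with the listener's interval choice. The array shapes and the normalization below are assumptions for illustration, not the study's exact analysis.

```python
# Illustrative decision-weight analysis: correlate the listener's interval choice with
# the per-component level difference between the two intervals of each 2AFC trial.
# Array layout and normalization are hypothetical placeholders for real trial data.
import numpy as np

def spectral_weights(level_diff, responses):
    """level_diff: (n_trials, n_components) interval-2 minus interval-1 level, in dB.
    responses: (n_trials,) 1 if interval 2 was chosen, else 0.
    Returns one correlation-based weight per component, normalized to unit absolute sum."""
    level_diff = np.asarray(level_diff, float)
    responses = np.asarray(responses, float)
    w = np.array([np.corrcoef(level_diff[:, k], responses)[0, 1]
                  for k in range(level_diff.shape[1])])
    return w / np.sum(np.abs(w))
```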

7.
The relative importance of temporal information in broad spectral regions for consonant identification was assessed in normal-hearing listeners. For the purpose of forcing listeners to use primarily temporal-envelope cues, speech sounds were spectrally degraded using four-noise-band vocoder processing. Frequency-weighting functions were determined using two methods. The first method consisted of measuring the intelligibility of speech with a hole in the spectrum either in quiet or in noise. The second method consisted of correlating performance with the randomly and independently varied signal-to-noise ratio within each band. Results demonstrated that all bands contributed equally to consonant identification when presented in quiet. In noise, however, both methods indicated that listeners consistently placed relatively more weight upon the highest frequency band. It is proposed that the explanation for the difference in results between quiet and noise relates to the shape of the modulation spectra in adjacent frequency bands. Overall, the results suggest that normal-hearing listeners use a common listening strategy in a given condition. However, this strategy may be influenced by the competing sounds, and thus may vary according to the context. Some implications of the results for cochlear implantees and hearing-impaired listeners are discussed.

8.
Speech-intelligibility tests auralized in a virtual classroom were used to investigate the optimal reverberation times for verbal communication for normal-hearing and hearing-impaired adults. The idealized classroom had simple geometry, uniform surface absorption, and an approximately diffuse sound field. It contained a speech source, a listener at a receiver position, and a noise source located at one of two positions. The relative output levels of the speech and noise sources were varied, along with the surface absorption and the corresponding reverberation time. The binaural impulse responses of the speech and noise sources in each classroom configuration were convolved with Modified Rhyme Test (MRT) and babble-noise signals. The resulting signals were presented to normal-hearing and hearing-impaired adult subjects to identify the configurations that gave the highest speech intelligibilities for the two groups. For both subject groups, when the speech source was closer to the listener than the noise source, the optimal reverberation time was zero. When the noise source was closer to the listener than the speech source, the optimal reverberation time included both zero and nonzero values. The results generally support previous theoretical results.
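A minimal sketch of the auralization step (convolving speech and babble with binaural impulse responses and mixing them at a chosen relative level) is given below. The signals and impulse responses are placeholders; the MRT material and classroom responses are not reproduced.

```python
# Minimal auralization sketch: convolve speech and babble with binaural room
# impulse responses (BRIRs) and mix at a chosen relative noise level.
# Inputs are hypothetical; assumes speech and noise are equal-length mono signals.
import numpy as np

def auralize(speech, noise, brir_speech, brir_noise, noise_gain_db=0.0):
    """brir_*: (n_taps, 2) arrays holding the left/right impulse responses."""
    g = 10 ** (noise_gain_db / 20.0)
    left = np.convolve(speech, brir_speech[:, 0]) + g * np.convolve(noise, brir_noise[:, 0])
    right = np.convolve(speech, brir_speech[:, 1]) + g * np.convolve(noise, brir_noise[:, 1])
    return np.stack([left, right], axis=1)    # (n_samples, 2) binaural signal
```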

9.
The present study assesses the ability of four listeners with high-frequency, bilateral symmetrical sensorineural hearing loss to localize and detect a broadband click train in the frontal-horizontal plane, in quiet and in the presence of a white noise. The speaker array and stimuli are identical to those described by Lorenzi et al. (in press). The results show that: (1) localization performance is only slightly poorer in hearing-impaired listeners than in normal-hearing listeners when noise is at 0 deg azimuth, (2) localization performance begins to decrease at higher signal-to-noise ratios for hearing-impaired listeners than for normal-hearing listeners when noise is at +/- 90 deg azimuth, and (3) the performance of hearing-impaired listeners is less consistent when noise is at +/- 90 deg azimuth than at 0 deg azimuth. The effects of a high-frequency hearing loss were also studied by measuring the ability of normal-hearing listeners to localize the low-pass filtered version of the clicks. The data reproduce the effects of noise on three out of the four hearing-impaired listeners when noise is at 0 deg azimuth. They reproduce the effects of noise on only two out of the four hearing-impaired listeners when noise is at +/- 90 deg azimuth. The additional effects of a low-frequency hearing loss were investigated by attenuating the low-pass filtered clicks and the noise by 20 dB. The results show that attenuation does not strongly affect localization accuracy for normal-hearing listeners. Measurements of the clicks' detectability indicate that the hearing-impaired listeners who show the poorest localization accuracy also show the poorest ability to detect the clicks. The inaudibility of high frequencies, "distortions," and reduced detectability of the signal are assumed to have caused the poorer-than-normal localization accuracy for hearing-impaired listeners.

10.
Spectro-temporal analysis in normal-hearing and cochlear-impaired listeners
Detection thresholds for a 1.0-kHz pure tone were determined in unmodulated noise and in noise modulated by a 15-Hz square wave. Comodulation masking release (CMR) was calculated as the difference in threshold between the modulated and unmodulated conditions. The noise bandwidth varied between 100 and 1000 Hz. Frequency selectivity was also examined using an abbreviated notched-noise masking method. The subjects in the main experiment consisted of 12 normal-hearing and 12 hearing-impaired subjects with hearing loss of cochlear origin. The most discriminating conditions were repeated on 16 additional hearing-impaired subjects. The CMR of the hearing-impaired group was reduced for the 1000-Hz noise bandwidth. The reduced CMR at this bandwidth correlated significantly with reduced frequency selectivity, consistent with the hypothesis that the across-frequency difference cue used in CMR is diminished by poor frequency selectivity. The results indicated that good frequency selectivity is a prerequisite, but not a guarantee, of large CMR.

11.
"Masking release" (MR), the improvement of speech intelligibility in modulated compared with unmodulated maskers, is typically smaller than normal for hearing-impaired listeners. The extent to which this is due to reduced audibility or to suprathreshold processing deficits is unclear. Here, the effects of audibility were controlled by using stimuli restricted to the low- (≤1.5 kHz) or mid-frequency (1-3 kHz) region for normal-hearing listeners and hearing-impaired listeners with near-normal hearing in the tested region. Previous work suggests that the latter may have suprathreshold deficits. Both spectral and temporal MR were measured. Consonant identification was measured in quiet and in the presence of unmodulated, amplitude-modulated, and spectrally modulated noise at three signal-to-noise ratios (the same ratios for the two groups). For both frequency regions, consonant identification was poorer for the hearing-impaired than for the normal-hearing listeners in all conditions. The results suggest the presence of suprathreshold deficits for the hearing-impaired listeners, despite near-normal audiometric thresholds over the tested frequency regions. However, spectral MR and temporal MR were similar for the two groups. Thus, the suprathreshold deficits for the hearing-impaired group did not lead to reduced MR.  相似文献   

12.
Objective acoustical parameters for halls are often measured in 1-octave bands with mid-frequencies from 125 to 4000 Hz. In reality, the frequency range of musical instruments is much wider than that, and the fundamentals of the lower notes of bass instruments are contained in the 31.5- or 63-Hz bands. Overtones of fundamentals in these bands fall in the 125-Hz band. This report presents subjective experiments designed to determine to what extent the overtones in the 125-Hz band and higher bands influence the loudness sensation of the components in the 63-Hz band. In the experiments, the 125-Hz and higher components of the musical tone are used to act as a masker against the lower component used as a maskee. The threshold of the difference between G(125 Hz) and G(lower band) that just enables one to hear the fundamental tones in the lower band is determined. The masked loudness of a 63-Hz sinusoidal tone caused by partial masking noise with higher frequencies was determined based on a procedure similar to the masked loudness-matching function. The result indicates that the difference in loudness of the low tone will not be noticeable even if G changes by ±2.5 to ±3 dB, at least when there are other accompanying instruments.
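For reference, the strength parameter G compared across bands above is, in ISO 3382-style terms, the band-filtered energy of the impulse response at the listener relative to that of the same source measured at 10 m in a free field. The sketch below assumes that definition and a simple Butterworth octave filter; it is not the measurement chain used in the study.

```python
# Hedged sketch of the room-acoustic strength parameter G in an octave band:
# band-filtered energy of the room impulse response relative to the free-field
# response at 10 m. Filter design and calibration are simplifying assumptions.
import numpy as np
from scipy.signal import butter, sosfilt

def strength_G(ir_room, ir_free10m, fs, f_center):
    lo, hi = f_center / np.sqrt(2), f_center * np.sqrt(2)   # octave-band edges
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    e_room = np.sum(sosfilt(sos, ir_room) ** 2)
    e_free = np.sum(sosfilt(sos, ir_free10m) ** 2)
    return 10 * np.log10(e_room / e_free)

# e.g., strength_G(..., f_center=63) - strength_G(..., f_center=125) gives the
# G(63 Hz) - G(125 Hz) level difference discussed above.
```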

13.
Temporal fine structure (TFS) sensitivity, frequency selectivity, and speech reception in noise were measured for young normal-hearing (NHY), old normal-hearing (NHO), and hearing-impaired (HI) subjects. Two measures of TFS sensitivity were used: the "TFS-LF test" (interaural phase difference discrimination) and the "TFS2 test" (discrimination of harmonic and frequency-shifted tones). These measures were not significantly correlated with frequency selectivity (after partialing out the effect of audiometric threshold), suggesting that insensitivity to TFS cannot be wholly explained by a broadening of auditory filters. The results of the two tests of TFS sensitivity were significantly but modestly correlated, suggesting that performance of the tests may be partly influenced by different factors. The NHO group performed significantly more poorly than the NHY group for both measures of TFS sensitivity, but not frequency selectivity, suggesting that TFS sensitivity declines with age in the absence of elevated audiometric thresholds or broadened auditory filters. When the effect of mean audiometric threshold was partialed out, speech reception thresholds in modulated noise were correlated with TFS2 scores, but not measures of frequency selectivity or TFS-LF test scores, suggesting that a reduction in sensitivity to TFS can partly account for the speech perception difficulties experienced by hearing-impaired subjects.
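The correlations above are computed after partialing out audiometric threshold. A minimal first-order partial correlation, assuming generic variables rather than the study's data, looks like this:

```python
# First-order partial correlation r_xy.z: correlation between x and y with the
# linear effect of z (e.g., audiometric threshold) removed. Variable names are
# generic placeholders, not the study's data.
import numpy as np

def partial_corr(x, y, z):
    r_xy = np.corrcoef(x, y)[0, 1]
    r_xz = np.corrcoef(x, z)[0, 1]
    r_yz = np.corrcoef(y, z)[0, 1]
    return (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz**2) * (1 - r_yz**2))
```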

14.
Spectral-shape discrimination thresholds were measured in the presence and absence of noise to determine whether normal-hearing and hearing-impaired listeners rely primarily on spectral peaks in the excitation pattern when discriminating between stimuli with different spectral shapes. Standard stimuli were the sum of 2, 4, 6, 8, 10, 20, or 30 equal-amplitude tones with frequencies fixed between 200 and 4000 Hz. Signal stimuli were generated by increasing and decreasing the levels of every other standard component. The function relating the spectral-shape discrimination threshold to the number of components (N) showed an initial decrease in threshold with increasing N and then an increase in threshold when the number of components reached 10 and 6, for normal-hearing and hearing-impaired listeners, respectively. The presence of a 50-dB SPL/Hz noise led to a 1.7 dB increase in threshold for normal-hearing listeners and a 3.5 dB increase for hearing-impaired listeners. Multichannel modeling and the relatively small influence of noise suggest that both normal-hearing and hearing-impaired listeners rely on the peaks in the excitation pattern for spectral-shape discrimination. The greater influence of noise in the data from hearing-impaired listeners is attributed to a poorer representation of spectral peaks.

15.
Contribution of spectral cues to human sound localization
The contribution of spectral cues to human sound localization was investigated by removing cues in 1/2-, 1- or 2-octave bands in the frequency range above 4 kHz. Localization responses were given by placing an acoustic pointer at the same apparent position as a virtual target. The pointer was generated by filtering a 100-ms harmonic complex with equalized head-related transfer functions (HRTFs). Listeners controlled the pointer via a hand-held stick that rotated about a fixed point. In the baseline condition, the target, a 200-ms noise burst, was filtered with the same HRTFs as the pointer. In other conditions, the spectral information within a certain frequency band was removed by replacing the directional transfer function within this band with the average transfer of this band. Analysis of the data showed that removing cues in 1/2-octave bands did not affect localization, whereas for the 2-octave band correct localization was virtually impossible. The results obtained for the 1-octave bands indicate that up-down cues are located mainly in the 6-12-kHz band, and front-back cues in the 8-16-kHz band. The interindividual spread in response patterns suggests that different listeners use different localization cues. The response patterns in the median plane can be predicted using a model based on spectral comparison of directional transfer functions for target and response directions.
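The cue-removal manipulation described above, replacing the directional transfer function within a band by its average over that band, might be sketched as follows; the dB-magnitude representation and the example band edges are illustrative assumptions, and phase handling is omitted.

```python
# Sketch of the spectral-cue removal: within a chosen frequency band, the directional
# transfer function (here as dB magnitude on a frequency grid) is replaced by its
# average over that band, flattening the cue while leaving other bands intact.
import numpy as np

def flatten_band(dtf_db, freqs_hz, f_lo, f_hi):
    """dtf_db: directional transfer function magnitude in dB, sampled at freqs_hz."""
    out = np.array(dtf_db, float)
    band = (freqs_hz >= f_lo) & (freqs_hz <= f_hi)
    out[band] = np.mean(out[band])   # e.g., f_lo=6000, f_hi=12000 for a 1-octave band
    return out
```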

16.
The purpose of these experiments was to determine whether detecting brief decrements in noise level ("gaps") varies with the spectral content and bandwidth of noise in mice as it does in humans. The behavioral effect of gaps was quantified by their inhibiting a subsequent acoustic startle reflex. Gap durations from 1 to 29 ms were presented in five adjacent 1-octave noise bands and one 5-octave band, spanning 2 to 64 kHz. Gaps ended 60 ms before the startle stimulus (experiment 1) or at startle onset (experiment 2). Asymptotic inhibition was greater for higher-frequency 1-octave bands and highest for the 5-octave band in both experiments, but time constants were related to frequency only in experiment 1. For the lowest band (2-4 kHz) neither noise decrements (experiments 1 and 2) nor increments (experiment 3) had any behavioral consequence, but this band was effective when presented as a pulse in quiet (experiment 4). The lowest frequencies in the most effective 1-octave band were one octave above the spectral region where mice have their best absolute thresholds. These effects are similar to those obtained in humans, and reveal a special contribution of wide band, high-frequency stimulation to temporal acuity.

17.
Reports using a variety of psychophysical tasks indicate that pitch perception by hearing-impaired listeners may be abnormal, contributing to difficulties in understanding speech and enjoying music. Pitches of complex sounds may be weaker and more indistinct in the presence of cochlear damage, especially when frequency regions are affected that form the strongest basis for pitch perception in normal-hearing listeners. In this study, the strength of the complex pitch generated by iterated rippled noise was assessed in normal-hearing and hearing-impaired listeners. Pitch strength was measured for broadband noises with spectral ripples generated by iteratively delaying a copy of a given noise and adding it back into the original. Octave-band-pass versions of these noises also were evaluated to assess frequency dominance regions for rippled-noise pitch. Hearing-impaired listeners demonstrated consistently weaker pitches in response to the rippled noises relative to pitch strength in normal-hearing listeners. However, in most cases, the frequency regions of pitch dominance, i.e., strongest pitch, were similar to those observed in normal-hearing listeners. Except where there exists a substantial sensitivity loss, contributions from normal pitch dominance regions associated with the strongest pitches may not be directly related to impaired spectral processing. It is suggested that the reduced strength of rippled-noise pitch in listeners with hearing loss results from impaired frequency resolution and possibly an associated deficit in temporal processing.
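Iterated rippled noise, as described above, is produced by repeatedly delaying a copy of the noise and adding it back. The sketch below implements one common (add-same) variant; the delay, gain, and iteration count are illustrative, not the study's parameters.

```python
# Iterated rippled noise: a delayed copy of the running signal is added back to it,
# and the step is repeated (an "add-same"-style network, one common variant).
# Delay, gain, and iteration count are illustrative assumptions.
import numpy as np

def iterated_rippled_noise(noise, fs, delay_s, n_iter=4, gain=1.0):
    d = int(round(delay_s * fs))
    x = np.array(noise, float)
    for _ in range(n_iter):
        delayed = np.concatenate([np.zeros(d), x[:-d]])
        x = x + gain * delayed
    return x

# Pitch corresponds roughly to the reciprocal of the delay, here ~125 Hz.
ripple = iterated_rippled_noise(np.random.randn(44100), fs=44100, delay_s=1/125)
```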

18.
Frequency resolution was evaluated for two normal-hearing and seven hearing-impaired subjects with moderate, flat sensorineural hearing loss by measuring percent correct detection of a 2000-Hz tone as the width of a notch in band-reject noise increased. The level of the tone was fixed for each subject at a criterion performance level in broadband noise. Discrimination of synthetic speech syllables that differed in spectral content in the 2000-Hz region was evaluated as a function of the notch width in the same band-reject noise. Recognition of natural speech consonant/vowel syllables in quiet was also tested; results were analyzed for percent correct performance and relative information transmitted for voicing and place features. In the hearing-impaired subjects, frequency resolution at 2000 Hz was significantly correlated with the discrimination of synthetic speech information in the 2000-Hz region and was not related to the recognition of natural speech nonsense syllables unless (a) the speech stimuli contained the vowel /i/ rather than /a/, and (b) the score reflected information transmitted for place of articulation rather than percent correct.
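The "relative information transmitted" scores mentioned above are typically computed, in the Miller-and-Nicely sense, from a feature confusion matrix as the mutual information between stimulus and response features divided by the stimulus-feature entropy; the sketch below assumes that formulation.

```python
# Relative information transmitted for a feature (e.g., voicing or place), computed
# from a confusion matrix: mutual information divided by stimulus-feature entropy.
# Assumes a Miller-and-Nicely style analysis; the study's exact procedure may differ.
import numpy as np

def relative_info_transmitted(confusion):
    """confusion[i, j]: count of stimulus-feature i labeled as response-feature j."""
    p = np.asarray(confusion, float)
    p = p / p.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    with np.errstate(divide="ignore", invalid="ignore"):
        mi = np.nansum(p * np.log2(p / np.outer(px, py)))   # mutual information (bits)
        hx = -np.nansum(px * np.log2(px))                   # stimulus-feature entropy
    return mi / hx
```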

19.
This study investigated the effect of mild-to-moderate sensorineural hearing loss on the ability to identify speech in noise for vowel-consonant-vowel tokens that were either unprocessed, amplitude modulated synchronously across frequency, or amplitude modulated asynchronously across frequency. One goal of the study was to determine whether hearing-impaired listeners have a particular deficit in the ability to integrate asynchronous spectral information in the perception of speech. Speech tokens were presented at a high, fixed sound level and the level of a speech-shaped noise was changed adaptively to estimate the masked speech identification threshold. The performance of the hearing-impaired listeners was generally worse than that of the normal-hearing listeners, but the impaired listeners showed particularly poor performance in the synchronous modulation condition. This finding suggests that integration of asynchronous spectral information does not pose a particular difficulty for hearing-impaired listeners with mild/moderate hearing losses. Results are discussed in terms of common mechanisms that might account for poor speech identification performance of hearing-impaired listeners when either the masking noise or the speech is synchronously modulated.
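The abstract states that the noise level was varied adaptively to estimate the masked identification threshold but does not give the tracking rule, so the sketch below assumes a generic one-down, one-up staircase that converges near 50% correct, with hypothetical step sizes and a placeholder response callback.

```python
# Hypothetical one-down, one-up adaptive track on noise level (dB); converges near
# 50% correct identification. Step sizes, reversal count, and respond() are
# placeholders, not the procedure actually used in the study.
import numpy as np

def adaptive_track(respond, start_level=40.0, step_db=4.0, final_step_db=2.0, n_reversals=8):
    """respond(noise_level_db) -> True if the listener identified the token correctly."""
    level, last_dir, reversals = start_level, 0, []
    while len(reversals) < n_reversals:
        step = step_db if len(reversals) < 2 else final_step_db
        direction = +1 if respond(level) else -1      # correct -> raise noise (harder)
        if last_dir and direction != last_dir:
            reversals.append(level)                   # record level at each reversal
        last_dir = direction
        level += direction * step
    return float(np.mean(reversals[2:]))              # threshold: mean of later reversals
```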

20.
Many of the 9 million workers exposed to average noise levels of 85 dB(A) and above are required to wear hearing protection devices, and many of these workers have already developed noise-induced hearing impairments. There is some evidence in the literature that hearing-impaired users may not receive as much attenuation from hearing protectors as normal-hearing users. This study assessed real-ear attenuation at threshold for ten normal-hearing and ten hearing-impaired subjects using a set of David Clark 10A earmuffs. Testing procedures followed the specifications of ANSI S12.6-1984. The results showed that the hearing-impaired subjects received slightly more attenuation than the normal-hearing subjects at all frequencies, but these differences were not statistically significant. These results provide additional support to the finding that hearing protection devices are capable of providing as much attenuation to hearing-impaired users as they do to normal-hearing individuals.
