Similar documents
20 similar documents found (search time: 22 ms)
1.
The speech-reception threshold (SRT) for sentences presented in a fluctuating interfering background sound of 80 dBA SPL is measured for 20 normal-hearing listeners and 20 listeners with sensorineural hearing impairment. The interfering sounds range from steady-state noise, via modulated noise, to a single competing voice. Two voices are used, one male and one female, and the spectrum of the masker is shaped according to these voices. For both voices, the SRT is measured both in noise spectrally shaped according to the target voice and in noise shaped according to the other voice. The results show that, for normal-hearing listeners, the SRT for sentences in modulated noise is 4-6 dB lower than for steady-state noise; for sentences masked by a competing voice, this difference is 6-8 dB. For listeners with moderate sensorineural hearing loss, elevated thresholds are obtained without an appreciable effect of masker fluctuations. The implications of these results for estimating a hearing handicap in everyday conditions are discussed. By using the articulation index (AI), it is shown that hearing-impaired individuals perform more poorly than suggested by the loss of audibility for some parts of the speech signal. Finally, three mechanisms are discussed that contribute to the absence of unmasking by masker fluctuations in hearing-impaired listeners. The low sensation level at which the impaired listeners receive the masker seems to be a major determinant. The second and third factors are reduced temporal resolution and a reduction in comodulation masking release, respectively.

2.
Binaural speech intelligibility in noise for hearing-impaired listeners
The effect of head-induced interaural time delay (ITD) and interaural level differences (ILD) on binaural speech intelligibility in noise was studied for listeners with symmetrical and asymmetrical sensorineural hearing losses. The material, recorded with a KEMAR manikin in an anechoic room, consisted of speech, presented from the front (0 degrees), and noise, presented at azimuths of 0, 30, and 90 degrees. Derived noise signals, containing either only ITD or only ILD, were generated using a computer. For both groups of subjects, speech-reception thresholds (SRT) for sentences in noise were determined as a function of: (1) noise azimuth, (2) binaural cue, and (3) an interaural difference in overall presentation level, simulating the effect of a monaural hearing aid. Comparison of the mean results with corresponding data obtained previously from normal-hearing listeners shows that the hearing impaired have a 2.5 dB higher SRT in noise when both speech and noise are presented from the front, and 2.6-5.1 dB less binaural gain when the noise azimuth is changed from 0 to 90 degrees. The gain due to ILD varies among the hearing-impaired listeners between 0 dB and normal values of 7 dB or more. It depends on the high-frequency hearing loss at the side presented with the most favorable signal-to-noise (S/N) ratio. The gain due to ITD is nearly normal for the symmetrically impaired (4.2 dB, compared with 4.7 dB for the normal hearing), but only 2.5 dB in the case of asymmetrical impairment. When ITD is introduced in noise already containing ILD, the resulting gain is 2-2.5 dB for all groups. The only marked effect of the interaural difference in overall presentation level is a reduction of the gain due to ILD when the level at the ear with the better S/N ratio is decreased.
This implies that an optimal monaural hearing aid (with a moderate gain) will hardly interfere with unmasking through ITD, while it may increase the gain due to ILD by preventing or diminishing threshold effects.

3.
Speech-in-noise measurements are important in clinical practice and have long been a subject of research. The results of these measurements are often described in terms of the speech-reception threshold (SRT) and SNR loss. Using the basic concepts that underlie several models of speech recognition in steady-state noise, the present study shows that these measures are ill-defined, most importantly because the slope of the speech-recognition function for hearing-impaired listeners always decreases with hearing loss. This slope can be determined from the slope of the normal-hearing speech-recognition function when the SRT for the hearing-impaired listener is known. The SII function (i.e., the speech intelligibility index (SII) plotted against SNR) is important and provides insight into many potential pitfalls when interpreting SRT data. Standardized SNR loss, sSNR loss, is introduced as a universal measure of hearing loss for speech in steady-state noise. Experimental data demonstrate that, unlike the SRT or SNR loss, sSNR loss is invariant to the target point chosen, the scoring method, or the type of speech material.

4.
Binaural speech intelligibility of individual listeners under realistic conditions was predicted using a model consisting of a gammatone filter bank, an independent equalization-cancellation (EC) process in each frequency band, a gammatone resynthesis, and the speech intelligibility index (SII). Hearing loss was simulated by adding uncorrelated masking noises (according to the pure-tone audiogram) to the ear channels. Speech intelligibility measurements were carried out with 8 normal-hearing and 15 hearing-impaired listeners, collecting speech reception threshold (SRT) data for three different room acoustic conditions (anechoic, office room, cafeteria hall) and eight directions of a single noise source (speech in front). Artificial EC processing errors derived from binaural masking level difference data using pure tones were incorporated into the model. Except for an adjustment of the SII-to-intelligibility mapping function, no model parameter was fitted to the SRT data of this study. The overall correlation coefficient between predicted and observed SRTs was 0.95. The dependence of the SRT of an individual listener on the noise direction and on room acoustics was predicted with a median correlation coefficient of 0.91. The effect of individual hearing impairment was predicted with a median correlation coefficient of 0.95. However, for mild hearing losses the release from masking was overestimated.

5.
Temporal masking curves were obtained from 12 normal-hearing and 16 hearing-impaired listeners using 200-ms, 1000-Hz pure-tone maskers and 20-ms, 1000-Hz fixed-level probe tones. For the delay times used here (greater than 40 ms), temporal masking curves obtained from both groups can be well described by an exponential function with a single level-independent time constant for each listener. Normal-hearing listeners demonstrated time constants that ranged between 37 and 67 ms, with a mean of 50 ms. Most hearing-impaired listeners with significant hearing loss at the probe frequency demonstrated longer time constants (range 58-114 ms) than those obtained from normal-hearing listeners. Time constants were found to grow exponentially with hearing loss according to the function τ = 52·e^(0.011·HL) ms, when the slope of the growth-of-masking function is unity. The longest individual time constant was larger than normal by a factor of 2.3 for a hearing loss of 52 dB. The steep slopes of the growth-of-masking functions typically observed at long delay times in hearing-impaired listeners' data appear to be a direct result of longer time constants. When iterative fitting procedures included a slope parameter, the slopes of the growth of masking from normal-hearing listeners varied around unity, while those from hearing-impaired listeners tended to be less (flatter) than normal. Predictions from the results of these fixed-probe-level experiments are consistent with the results of previous fixed-masker-level experiments, and they indicate that deficiencies in the ability to detect sequential stimuli should be considerable in hearing-impaired listeners, partially because of extended time constants, but mostly because forward masking involves a recovery process that depends upon the sensory response evoked by the masking stimulus.
Large sensitivity losses reduce the sensory response to high SPL maskers so that the recovery process is slower, much like the recovery process for low-level stimuli in normal-hearing listeners.
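The exponential dependence of the forward-masking time constant on hearing loss reported in this abstract can be written explicitly as:

```latex
\tau(\mathrm{HL}) = 52\, e^{\,0.011\,\mathrm{HL}}\ \text{ms}
```

where HL is the hearing loss in dB at the probe frequency. At HL = 0 the fit returns 52 ms, close to the 50-ms mean reported for the normal-hearing group.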

6.
This study investigated the effect of mild-to-moderate sensorineural hearing loss on the ability to identify speech in noise for vowel-consonant-vowel tokens that were either unprocessed, amplitude modulated synchronously across frequency, or amplitude modulated asynchronously across frequency. One goal of the study was to determine whether hearing-impaired listeners have a particular deficit in the ability to integrate asynchronous spectral information in the perception of speech. Speech tokens were presented at a high, fixed sound level and the level of a speech-shaped noise was changed adaptively to estimate the masked speech identification threshold. The performance of the hearing-impaired listeners was generally worse than that of the normal-hearing listeners, but the impaired listeners showed particularly poor performance in the synchronous modulation condition. This finding suggests that integration of asynchronous spectral information does not pose a particular difficulty for hearing-impaired listeners with mild/moderate hearing losses. Results are discussed in terms of common mechanisms that might account for poor speech identification performance of hearing-impaired listeners when either the masking noise or the speech is synchronously modulated.

7.
This study tested the hypothesis that the reduction in spatial release from masking (SRM) resulting from sensorineural hearing loss in competing speech mixtures is influenced by the characteristics of the interfering speech. A frontal speech target was presented simultaneously with two intelligible or two time-reversed (unintelligible) speech maskers that were either colocated with the target or were symmetrically separated from the target in the horizontal plane. The difference in SRM between listeners with hearing impairment and listeners with normal hearing was substantially larger for the forward maskers (deficit of 5.8 dB) than for the reversed maskers (deficit of 1.6 dB). This was driven by the fact that all listeners, regardless of hearing abilities, performed similarly (and poorly) in the colocated condition with intelligible maskers. The same conditions were then tested in listeners with normal hearing using headphone stimuli that were degraded by noise vocoding. Reducing the number of available spectral channels systematically reduced the measured SRM, and again, more so for forward (reduction of 3.8 dB) than for reversed speech maskers (reduction of 1.8 dB). The results suggest that non-spatial factors can strongly influence both the magnitude of SRM and the apparent deficit in SRM for listeners with impaired hearing.

8.
Many hearing-impaired listeners suffer from distorted auditory processing capabilities. This study examines which aspects of auditory coding (i.e., intensity, time, or frequency) are distorted and how this affects speech perception. The distortion-sensitivity model is used: the effect of distorted auditory coding of a speech signal is simulated by an artificial distortion, and the sensitivity of speech intelligibility to this artificial distortion is compared for normal-hearing and hearing-impaired listeners. Stimuli (speech plus noise) are wavelet-coded using a complex sinusoidal carrier with a Gaussian envelope (1/4-octave bandwidth). Intensity information is distorted by multiplying the modulus of each wavelet coefficient by a random factor. Temporal and spectral information are distorted by randomly shifting the wavelet positions along the temporal or spectral axis, respectively. Two quantities were measured: (1) detection thresholds for each type of distortion, and (2) speech-reception thresholds for various degrees of distortion. For spectral distortion, hearing-impaired listeners showed increased detection thresholds and were also less sensitive to the distortion with respect to speech perception. For intensity and temporal distortion, this was not observed. Results indicate that a distorted coding of spectral information may be an important factor underlying reduced speech intelligibility for the hearing impaired.

9.
"Masking release" (MR), the improvement of speech intelligibility in modulated compared with unmodulated maskers, is typically smaller than normal for hearing-impaired listeners. The extent to which this is due to reduced audibility or to suprathreshold processing deficits is unclear. Here, the effects of audibility were controlled by using stimuli restricted to the low- (≤1.5 kHz) or mid-frequency (1-3 kHz) region for normal-hearing listeners and hearing-impaired listeners with near-normal hearing in the tested region. Previous work suggests that the latter may have suprathreshold deficits. Both spectral and temporal MR were measured. Consonant identification was measured in quiet and in the presence of unmodulated, amplitude-modulated, and spectrally modulated noise at three signal-to-noise ratios (the same ratios for the two groups). For both frequency regions, consonant identification was poorer for the hearing-impaired than for the normal-hearing listeners in all conditions. The results suggest the presence of suprathreshold deficits for the hearing-impaired listeners, despite near-normal audiometric thresholds over the tested frequency regions. However, spectral MR and temporal MR were similar for the two groups. Thus, the suprathreshold deficits for the hearing-impaired group did not lead to reduced MR.

10.
The purpose of this experiment was to determine the applicability of the Articulation Index (AI) model for characterizing the speech recognition performance of listeners with mild-to-moderate hearing loss. Performance-intensity functions were obtained from five normal-hearing listeners and 11 hearing-impaired listeners using a closed-set nonsense syllable test for two frequency responses (uniform and high-frequency emphasis). For each listener, the fitting constant Q of the nonlinear transfer function relating AI and speech recognition was estimated. Results indicated that the function mapping AI onto performance was approximately the same for normal and hearing-impaired listeners with mild-to-moderate hearing loss and high speech recognition scores. For a hearing-impaired listener with poor speech recognition ability, the AI procedure was a poor predictor of performance. The AI procedure as presently used is inadequate for predicting performance of individuals with reduced speech recognition ability and should be used conservatively in applications predicting optimal or acceptable frequency response characteristics for hearing-aid amplification systems.

11.
Thresholds of ongoing interaural time difference (ITD) were obtained from normal-hearing and hearing-impaired listeners who had high-frequency, sensorineural hearing loss. Several stimuli (a 500-Hz sinusoid, a narrow-band noise centered at 500 Hz, a sinusoidally amplitude-modulated 4000-Hz tone, and a narrow-band noise centered at 4000 Hz) and two criteria [equal sound-pressure level (Eq SPL) and equal sensation level (Eq SL)] for determining the level of stimuli presented to each listener were employed. The ITD thresholds and slopes of the psychometric functions were elevated for hearing-impaired listeners for the two high-frequency stimuli in comparison with (a) the listeners' own low-frequency thresholds and (b) data obtained from normal-hearing listeners for stimuli presented with Eq SPL interaurally. The two groups of listeners required similar ITDs to reach threshold when stimuli were presented at Eq SLs to each ear. For low-frequency stimuli, the ITD thresholds of the hearing-impaired listeners were generally slightly greater than those obtained from the normal-hearing listeners. Whether these stimuli were presented at Eq SPL or Eq SL did not differentially affect the ITD thresholds across groups.

12.
Speech-reception threshold in noise with one and two hearing aids
The binaural free-field speech-reception threshold (SRT) in 70-dBA noise was measured with conversational sentences for 24 hearing-impaired subjects without hearing aids, and with a hearing aid on the left, on the right, and on both sides, respectively. The sentences were always presented in front of the listener, and the interfering noise, with a spectrum equal to the long-term average spectrum of the sentences, was presented either frontally, from the right, or from the left side. For subjects with only moderate hearing loss, PTA (average air-conduction hearing level at 500, 1000, and 2000 Hz) less than 50 dB, the SRT in 70-dBA noise in both ears is determined by the signal-to-noise ratio even if only one hearing aid is used. For larger hearing losses, the SRT appears to be partly determined by the absolute threshold. In conditions with a high noise level relative to the absolute threshold, in which case the SRT at both ears is determined by the signal-to-noise ratio, a second hearing aid, like the first, generally does not improve the SRT. However, in the case of a high hearing level or a low noise level, where a monaural hearing aid is beneficial, the use of two hearing aids is even more beneficial. In a separate experiment, acoustic head shadow was measured at the entrance of the ear canal and at the microphone location of a hearing aid. It appeared that, for a lateral noise source and frontal speech, the microphone position of behind-the-ear hearing aids degrades the signal-to-noise ratio by 2-3 dB.

13.
An articulation index calculation procedure developed for use with individual normal-hearing listeners [C. Pavlovic and G. Studebaker, J. Acoust. Soc. Am. 75, 1606-1612 (1984)] was modified to account for the deterioration in suprathreshold speech processing produced by sensorineural hearing impairment. Data from four normal-hearing and four hearing-impaired subjects were used to relate the loss in hearing sensitivity to the deterioration in speech processing in quiet and in noise. The new procedure only requires hearing threshold measurements and consists of the following two modifications of the original AI procedure of Pavlovic and Studebaker (1984): The speech and noise spectrum densities are integrated over bandwidths which are, when expressed in decibels, larger than the critical bandwidths by 10% of the hearing loss. This is in contrast to the unmodified procedure where integration is performed over critical bandwidths. The contribution of each frequency to the AI is the product of its contribution in the unmodified AI procedure and a "speech desensitization factor." The desensitization factor is specified as a function of the hearing loss. The predictive accuracies of both the unmodified and the modified calculation procedures were assessed by comparing the expected and observed speech recognition scores of four hearing-impaired subjects under various conditions of speech filtering and noise masking. The modified procedure appears accurate for general applications. In contrast, the unmodified procedure appears accurate only for applications where results obtained under various conditions on a single listener are compared to each other.
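The band-sum structure of a desensitized AI calculation like the one described above can be sketched as follows. This is a minimal illustration, not the published procedure: the band importances, audibilities, hearing losses, and the linear form of the desensitization function are all hypothetical placeholders chosen for readability.

```python
def articulation_index(band_audibility, band_importance, hearing_loss_db,
                       desensitization):
    """AI as an importance-weighted sum of per-band audibilities, each
    scaled by a hearing-loss-dependent desensitization factor."""
    return sum(imp * aud * desensitization(hl)
               for imp, aud, hl in zip(band_importance, band_audibility,
                                       hearing_loss_db))

def desensitization(hl_db):
    """Illustrative placeholder: full contribution at 0 dB HL, shrinking
    linearly to zero at 100 dB HL (the published function differs)."""
    return max(0.0, 1.0 - hl_db / 100.0)

# Hypothetical four-band example.
importance = [0.2, 0.3, 0.3, 0.2]   # band importance weights (sum to 1)
audibility = [1.0, 0.8, 0.5, 0.2]   # audible fraction of speech cues per band
losses = [10.0, 20.0, 40.0, 60.0]   # hearing level per band, dB

ai = articulation_index(audibility, importance, losses, desensitization)
# Without desensitization the same bands would yield a higher AI,
# which is the overestimation the unmodified procedure produces.
```

With the placeholder numbers above, the desensitized AI comes out lower than the plain importance-weighted audibility sum, mirroring the abstract's point that audibility alone overstates hearing-impaired performance.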

14.
This study aimed to clarify the basic auditory and cognitive processes that affect listeners' performance on two spatial listening tasks: sound localization and speech recognition in spatially complex, multi-talker situations. Twenty-three elderly listeners with mild-to-moderate sensorineural hearing impairments were tested on the two spatial listening tasks, a measure of monaural spectral ripple discrimination, a measure of binaural temporal fine structure (TFS) sensitivity, and two (visual) cognitive measures indexing working memory and attention. All auditory test stimuli were spectrally shaped to restore (partial) audibility for each listener on each listening task. Eight younger normal-hearing listeners served as a control group. Data analyses revealed that the chosen auditory and cognitive measures could predict neither sound localization accuracy nor speech recognition when the target and maskers were separated along the front-back dimension. When the competing talkers were separated along the left-right dimension, however, speech recognition performance was significantly correlated with the attentional measure. Furthermore, supplementary analyses indicated additional effects of binaural TFS sensitivity and average low-frequency hearing thresholds. Altogether, these results are in support of the notion that both bottom-up and top-down deficits are responsible for the impaired functioning of elderly hearing-impaired listeners in cocktail party-like situations.

15.
The goal of this study was to determine the extent to which the difficulty experienced by impaired listeners in understanding noisy speech can be explained on the basis of elevated tone-detection thresholds. Twenty-one impaired ears of 15 subjects, spanning a variety of audiometric configurations with average hearing losses to 75 dB, were tested for reception of consonants in a speech-spectrum noise. Speech level, noise level, and frequency-gain characteristic were varied to generate a range of listening conditions. Results for impaired listeners were compared to those of normal-hearing listeners tested under the same conditions with extra noise added to approximate the impaired listeners' detection thresholds. Results for impaired and normal listeners were also compared on the basis of articulation indices. Consonant recognition by this sample of impaired listeners was generally comparable to that of normal-hearing listeners with similar threshold shifts listening under the same conditions. When listening conditions were equated for articulation index, there was no clear dependence of consonant recognition on average hearing loss. Assuming that the primary consequence of the threshold simulation in normals is loss of audibility (as opposed to suprathreshold discrimination or resolution deficits), it is concluded that the primary source of difficulty in listening in noise for listeners with moderate or milder hearing impairments, aside from the noise itself, is the loss of audibility.

16.
The present study assesses the ability of four listeners with high-frequency, bilateral symmetrical sensorineural hearing loss to localize and detect a broadband click train in the frontal-horizontal plane, in quiet and in the presence of a white noise. The speaker array and stimuli are identical to those described by Lorenzi et al. (in press). The results show that: (1) localization performance is only slightly poorer in hearing-impaired listeners than in normal-hearing listeners when noise is at 0 deg azimuth, (2) localization performance begins to decrease at higher signal-to-noise ratios for hearing-impaired listeners than for normal-hearing listeners when noise is at +/- 90 deg azimuth, and (3) the performance of hearing-impaired listeners is less consistent when noise is at +/- 90 deg azimuth than at 0 deg azimuth. The effects of a high-frequency hearing loss were also studied by measuring the ability of normal-hearing listeners to localize the low-pass filtered version of the clicks. The data reproduce the effects of noise on three out of the four hearing-impaired listeners when noise is at 0 deg azimuth. They reproduce the effects of noise on only two out of the four hearing-impaired listeners when noise is at +/- 90 deg azimuth. The additional effects of a low-frequency hearing loss were investigated by attenuating the low-pass filtered clicks and the noise by 20 dB. The results show that attenuation does not strongly affect localization accuracy for normal-hearing listeners. Measurements of the clicks' detectability indicate that the hearing-impaired listeners who show the poorest localization accuracy also show the poorest ability to detect the clicks. The inaudibility of high frequencies, "distortions," and reduced detectability of the signal are assumed to have caused the poorer-than-normal localization accuracy for hearing-impaired listeners.

17.
Temporal processing ability in the hearing impaired was investigated in a 2IFC gap-detection paradigm. The stimuli were digitally constructed 50-Hz-wide bands of noise centered at 250, 500, and 1000 Hz. On each trial, two 400-ms noise samples were paired, shaped at onset and offset, filtered, and presented in quiet with and without a temporal gap. A modified up-down procedure with trial-by-trial feedback was used to establish the threshold of gap detection. Approximately 4 h of practice preceded data collection; the final estimate of threshold was the average of six listening blocks. There were 10 listeners, 19-25 years old. Five had normal hearing; five had a moderate congenital sensorineural hearing loss with a relatively flat audiometric configuration. Near threshold (5 dB SL), all listeners performed similarly. At 15 and 25 dB SL, the normal-hearing group performed better than the hearing-impaired group. At 78 dB SPL, equal to the average intensity of the 5-dB SL condition for the hearing impaired, the normal-hearing group continued to improve and demonstrated a frequency effect not seen in the other conditions. Substantial individual differences were found in both groups, though intralistener variability was as small as expected for these narrow-bandwidth signals.

18.
Three experiments were conducted to determine whether listeners with a sensorineural hearing loss exhibited greater than normal amounts of masking at frequencies above the frequency of the masker. Excess masking was defined as the difference (in dB) between the masked thresholds actually obtained from a hearing-impaired listener and the expected thresholds calculated for the same individual. The expected thresholds were the power sum of the listener's thresholds in quiet and the average masked thresholds obtained from a group of normal-hearing subjects at the test frequency. Hearing-impaired listeners, with thresholds in quiet ranging from approximately 35-70 dB SPL (at test frequencies between 500-3000 Hz), displayed approximately 12-15 dB of maximum excess masking. The maximum amount of excess masking occurred in the region where the threshold in quiet of the hearing-impaired listener and the average normal masked threshold were equal. These findings indicate that listeners with a sensorineural hearing loss display one form of reduced frequency selectivity (i.e., abnormal upward spread of masking) even when their thresholds in quiet are taken into account.
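The "power sum" used to compute the expected threshold is a simple dB-domain calculation: convert each level to intensity, add, and convert back. The sketch below illustrates it with hypothetical threshold values (the specific numbers are not from the study); note that for two equal levels the power sum exceeds either level by about 3 dB, which is why maximum excess masking was observed where quiet threshold and normal masked threshold coincide.

```python
import math

def power_sum_db(level1_db, level2_db):
    """Combine two levels (in dB) as a power sum:
    10*log10(10^(L1/10) + 10^(L2/10))."""
    return 10.0 * math.log10(10 ** (level1_db / 10.0) + 10 ** (level2_db / 10.0))

# Hypothetical example: the impaired listener's quiet threshold equals the
# average normal masked threshold at this test frequency (55 dB SPL each).
quiet_threshold = 55.0   # dB SPL, hearing-impaired listener in quiet
normal_masked = 55.0     # dB SPL, average normal-hearing masked threshold

expected = power_sum_db(quiet_threshold, normal_masked)  # ~58 dB SPL
obtained = 70.0          # hypothetical measured masked threshold, dB SPL
excess_masking = obtained - expected                     # ~12 dB
```

With equal inputs the expected threshold is about 58 dB SPL (55 + 10·log10 2), so a measured threshold of 70 dB SPL corresponds to roughly 12 dB of excess masking, of the same order as the 12-15 dB maxima reported above.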

19.
Reports using a variety of psychophysical tasks indicate that pitch perception by hearing-impaired listeners may be abnormal, contributing to difficulties in understanding speech and enjoying music. Pitches of complex sounds may be weaker and more indistinct in the presence of cochlear damage, especially when frequency regions are affected that form the strongest basis for pitch perception in normal-hearing listeners. In this study, the strength of the complex pitch generated by iterated rippled noise was assessed in normal-hearing and hearing-impaired listeners. Pitch strength was measured for broadband noises with spectral ripples generated by iteratively delaying a copy of a given noise and adding it back into the original. Octave-band-pass versions of these noises also were evaluated to assess frequency dominance regions for rippled-noise pitch. Hearing-impaired listeners demonstrated consistently weaker pitches in response to the rippled noises relative to pitch strength in normal-hearing listeners. However, in most cases, the frequency regions of pitch dominance, i.e., strongest pitch, were similar to those observed in normal-hearing listeners. Except where there exists a substantial sensitivity loss, contributions from normal pitch dominance regions associated with the strongest pitches may not be directly related to impaired spectral processing. It is suggested that the reduced strength of rippled-noise pitch in listeners with hearing loss results from impaired frequency resolution and possibly an associated deficit in temporal processing.

20.
Articulation index (AI) theory was used to evaluate stop-consonant recognition of normal-hearing listeners and listeners with high-frequency hearing loss. From results reported in a companion article [Dubno et al., J. Acoust. Soc. Am. 85, 347-354 (1989)], a transfer function relating the AI to stop-consonant recognition was established, and a frequency importance function was determined for the nine stop-consonant-vowel syllables used as test stimuli. The calculations included the rms and peak levels of the speech that had been measured in 1/3 octave bands; the internal noise was estimated from the thresholds for each subject. The AI model was then used to predict performance for the hearing-impaired listeners. A majority of the AI predictions for the hearing-impaired subjects fell within +/- 2 standard deviations of the normal-hearing listeners' results. However, as observed in previous data, the AI tended to overestimate performance of the hearing-impaired listeners. The accuracy of the predictions decreased with the magnitude of high-frequency hearing loss. Thus, with the exception of performance for listeners with severe high-frequency hearing loss, the results suggest that poorer speech recognition among hearing-impaired listeners results from reduced audibility within critical spectral regions of the speech stimuli.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号