Similar Documents
20 similar documents found (search time: 468 ms)
1.
Léger et al. [J. Acoust. Soc. Am. 131, 1502-1514 (2012)] reported deficits in the identification of consonants in noise by hearing-impaired listeners using stimuli filtered into low- or mid-frequency regions in which audiometric thresholds were normal or near-normal. The deficits could not be fully explained in terms of reduced audibility or temporal-envelope processing. However, previous studies indicate that the listeners may have had reduced frequency selectivity, with auditory filters broadened by a factor of about 1.3, despite having normal or near-normal audiometric thresholds in the tested regions. The present study aimed to determine whether the speech-perception deficits could be explained by such a small reduction of frequency selectivity. Consonant identification was measured for normal-hearing listeners in quiet and in unmodulated and modulated noises using the same method as Léger et al. The signal-to-noise ratio was set to -3 dB for the masked conditions. Various amounts of reduced frequency selectivity were simulated using a spectral-smearing algorithm. Performance was reduced only for spectral-smearing factors greater than 1.7. For all conditions, identification scores for hearing-impaired listeners could not be explained by a mild reduction of frequency selectivity.

2.
Relations between perception of suprathreshold speech and auditory functions were examined in 24 hearing-impaired listeners and 12 normal-hearing listeners. The speech intelligibility index (SII) was used to account for audibility. The auditory functions included detection efficiency, temporal and spectral resolution, temporal and spectral integration, and discrimination of intensity, frequency, rhythm, and spectro-temporal shape. All auditory functions were measured at 1 kHz. Speech intelligibility was assessed with the speech-reception threshold (SRT) in quiet and in noise, and with the speech-reception bandwidth threshold (SRBT), previously developed for investigating speech perception in a limited frequency region around 1 kHz. The results showed that the elevated SRT in quiet could be explained on the basis of audibility. Audibility could only partly account for the elevated SRT values in noise and the deviant SRBT values, suggesting that suprathreshold deficits affected intelligibility in these conditions. SII predictions for the SRBT improved significantly by including the individually measured upward spread of masking in the SII model. Reduced spectral resolution, reduced temporal resolution, and reduced frequency discrimination appeared to be related to speech perception deficits. Loss of peripheral compression appeared to have the smallest effect on the intelligibility of suprathreshold speech.

3.
The Speech Reception Threshold for sentences in stationary noise and in several amplitude-modulated noises was measured for 8 normal-hearing listeners, 29 sensorineural hearing-impaired listeners, and 16 normal-hearing listeners with simulated hearing loss. This approach makes it possible to determine whether the reduced benefit from masker modulations, as often observed for hearing-impaired listeners, is due to a loss of signal audibility, or due to suprathreshold deficits, such as reduced spectral and temporal resolution, which were measured in four separate psychophysical tasks. Results show that the reduced masking release can only partly be accounted for by reduced audibility, and that, when considering suprathreshold deficits, the normal effects associated with a raised presentation level should be taken into account. In this perspective, reduced spectral resolution does not appear to qualify as an actual suprathreshold deficit, while reduced temporal resolution does. Temporal resolution and age are shown to be the main factors governing masking release for speech in modulated noise, accounting for more than half of the intersubject variance. Their influence appears to be related to the processing of mainly the higher stimulus frequencies. Results based on calculations of the Speech Intelligibility Index in modulated noise confirm these conclusions.

4.
The purpose of this study was to examine the effect of spectral-cue audibility on the recognition of stop consonants in normal-hearing and hearing-impaired adults. Subjects identified six synthetic CV speech tokens in a closed-set response task. Each syllable differed only in the initial 40-ms consonant portion of the stimulus. In order to relate performance to spectral-cue audibility, the initial 40 ms of each CV were analyzed via FFT and the resulting spectral array was passed through a sliding-filter model of the human auditory system to account for logarithmic representation of frequency and the summation of stimulus energy within critical bands. This allowed the spectral data to be displayed in comparison to a subject's sensitivity thresholds. For normal-hearing subjects, an orderly function relating the percentage of audible stimulus to recognition performance was found, with perfect discrimination performance occurring when the bulk of the stimulus spectrum was presented at suprathreshold levels. For the hearing-impaired subjects, however, it was found in many instances that suprathreshold presentation of stop-consonant spectral cues did not yield recognition equivalent to that found for the normal-hearing subjects. These results demonstrate that while the audibility of individual stop consonants is an important factor influencing recognition performance in hearing-impaired subjects, it is not always sufficient to explain the effects of sensorineural hearing loss.

5.
For normal-hearing (NH) listeners, masker energy outside the spectral region of a target signal can improve target detection and identification, a phenomenon referred to as comodulation masking release (CMR). This study examined whether, for cochlear implant (CI) listeners and for NH listeners presented with a "noise vocoded" CI simulation, speech identification in modulated noise is improved by a co-modulated flanking band. In Experiment 1, NH listeners identified noise-vocoded speech in a background of on-target noise with or without a flanking narrow band of noise outside the spectral region of the target. The on-target noise and flanker were either 16-Hz square-wave modulated with the same phase or were unmodulated; the speech was taken from a closed-set corpus. Performance was better in modulated than in unmodulated noise, and this difference was slightly greater when the comodulated flanker was present, consistent with a small CMR of about 1.7 dB for noise-vocoded speech. Experiment 2, which tested CI listeners using the same speech materials, found no advantage for modulated versus unmodulated maskers and no CMR. Thus, although NH listeners can benefit from CMR even for speech signals with reduced spectro-temporal detail, no CMR was observed for CI users.

6.
This study investigated the effect of mild-to-moderate sensorineural hearing loss on the ability to identify speech in noise for vowel-consonant-vowel tokens that were either unprocessed, amplitude modulated synchronously across frequency, or amplitude modulated asynchronously across frequency. One goal of the study was to determine whether hearing-impaired listeners have a particular deficit in the ability to integrate asynchronous spectral information in the perception of speech. Speech tokens were presented at a high, fixed sound level and the level of a speech-shaped noise was changed adaptively to estimate the masked speech identification threshold. The performance of the hearing-impaired listeners was generally worse than that of the normal-hearing listeners, but the impaired listeners showed particularly poor performance in the synchronous modulation condition. This finding suggests that integration of asynchronous spectral information does not pose a particular difficulty for hearing-impaired listeners with mild/moderate hearing losses. Results are discussed in terms of common mechanisms that might account for poor speech identification performance of hearing-impaired listeners when either the masking noise or the speech is synchronously modulated.

7.
The detection of 500- or 2000-Hz pure-tone signals in unmodulated and modulated noise was investigated in normal-hearing and sensorineural hearing-impaired listeners, as a function of noise bandwidth. Square-wave modulation rates of 15 and 40 Hz were used in the modulated noise conditions. A notched noise measure of frequency selectivity and a gap detection measure of temporal resolution were also obtained on each subject. The modulated noise results indicated a masking release that increased as a function of increasing noise bandwidth, and as a function of decreasing modulation rate for both groups of listeners. However, the improvement of threshold with increasing modulated noise bandwidth was often greatly reduced among the sensorineural hearing-impaired listeners. It was hypothesized that the masking release in modulated noise may be due to several types of processes including across-critical band analysis (CMR), within-critical band analysis, and suppression. Within-band effects appeared to be especially large at the higher frequency region and lower modulation rate. In agreement with previous research, there was a significant correlation between frequency selectivity and masking release in modulated noise. At the 500-Hz region, masking release was correlated more highly with the filter skirt and tail measures than with the filter passband measure. At the 2000-Hz region, masking release was correlated more with the filter passband and skirt measures than with the filter tail measure. The correlation between gap detection and masking release was significant at the 40-Hz modulation rate, but not at the 15-Hz modulation rate. The results of this study suggest that masking release in modulated noise is limited by frequency selectivity at low modulation rates, and by both frequency selectivity and temporal resolution at high modulation rates. 
However, even when the present measures of frequency selectivity and temporal resolution are both taken into account, significant variance in masking release still remains unaccounted for.

8.
Temporal gap resolution was measured in five normal-hearing listeners and five cochlear-impaired listeners, whose sensitivity losses were restricted to the frequency regions above 1000 Hz. The stimuli included a broadband noise and three octave band noises centered at 0.5, 1.0, and 4.0 kHz. Results for the normal-hearing subjects agree with previous findings and reveal that gap resolution improves progressively with an increase in signal frequency. Gap resolution in the impaired listeners was significantly poorer than normal for all signals including those that stimulated frequency regions with normal pure-tone sensitivity. Smallest gap thresholds for the impaired listeners were observed with the broadband signal at high levels. This result agrees with data from other experiments and confirms the importance of high-frequency signal audibility in gap detection. The octave band data reveal that resolution deficits can be quite large within restricted frequency regions, even those with minimal sensitivity loss.

9.
Reports using a variety of psychophysical tasks indicate that pitch perception by hearing-impaired listeners may be abnormal, contributing to difficulties in understanding speech and enjoying music. Pitches of complex sounds may be weaker and more indistinct in the presence of cochlear damage, especially when frequency regions are affected that form the strongest basis for pitch perception in normal-hearing listeners. In this study, the strength of the complex pitch generated by iterated rippled noise was assessed in normal-hearing and hearing-impaired listeners. Pitch strength was measured for broadband noises with spectral ripples generated by iteratively delaying a copy of a given noise and adding it back into the original. Octave-band-pass versions of these noises also were evaluated to assess frequency dominance regions for rippled-noise pitch. Hearing-impaired listeners demonstrated consistently weaker pitches in response to the rippled noises relative to pitch strength in normal-hearing listeners. However, in most cases, the frequency regions of pitch dominance, i.e., strongest pitch, were similar to those observed in normal-hearing listeners. Except where there exists a substantial sensitivity loss, contributions from normal pitch dominance regions associated with the strongest pitches may not be directly related to impaired spectral processing. It is suggested that the reduced strength of rippled-noise pitch in listeners with hearing loss results from impaired frequency resolution and possibly an associated deficit in temporal processing.

10.
Two experiments are reported which explore variables that may complicate the interpretation of phoneme boundary data from hearing-impaired listeners. Fourteen synthetic consonant-vowel syllables comprising a /ba-da-ga/ continuum were used as stimuli. The first experiment examined the influence of presentation level and ear of presentation in normal-hearing subjects. Only small differences in the phoneme boundaries and labeling functions were observed between ears and across presentation levels. Thus monaural presentation and relatively high signal level do not appear to be complicating factors in research with hearing-impaired listeners, at least for these stimuli. The second experiment described a test procedure for obtaining phoneme boundaries in some hearing-impaired listeners that controlled for between-subject sources of variation unrelated to hearing impairment and delineated the effects of spectral shaping imposed by the hearing impairment on the labeling functions. Labeling data were obtained from unilaterally hearing-impaired listeners under three test conditions: in the normal ear without any signal distortion; in the normal ear listening through a spectrum shaper that was set to match the subject's suprathreshold audiometric configuration; and in the impaired ear. The reduction in the audibility of the distinctive acoustic/phonetic cues seemed to explain all or part of the effects of the hearing impairment on the labeling functions of some subjects. For many other subjects, however, other forms of distortion in addition to reduced audibility seemed to affect their labeling behavior.

11.
Articulation index (AI) theory was used to evaluate stop-consonant recognition of normal-hearing listeners and listeners with high-frequency hearing loss. From results reported in a companion article [Dubno et al., J. Acoust. Soc. Am. 85, 347-354 (1989)], a transfer function relating the AI to stop-consonant recognition was established, and a frequency importance function was determined for the nine stop-consonant-vowel syllables used as test stimuli. The calculations included the rms and peak levels of the speech that had been measured in 1/3 octave bands; the internal noise was estimated from the thresholds for each subject. The AI model was then used to predict performance for the hearing-impaired listeners. A majority of the AI predictions for the hearing-impaired subjects fell within ±2 standard deviations of the normal-hearing listeners' results. However, as observed in previous data, the AI tended to overestimate performance of the hearing-impaired listeners. The accuracy of the predictions decreased with the magnitude of high-frequency hearing loss. Thus, with the exception of performance for listeners with severe high-frequency hearing loss, the results suggest that poorer speech recognition among hearing-impaired listeners results from reduced audibility within critical spectral regions of the speech stimuli.
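The core AI computation referenced above is a band-importance-weighted sum of per-band audibility. The sketch below illustrates that idea only; the equal band weights and the ±15 dB audibility mapping are simplified textbook values, not the importance function or internal-noise estimates fitted in the study.

```python
# Minimal articulation-index sketch: per-band audibility weighted by
# band importance and summed across frequency bands.
def articulation_index(snr_db, importance):
    """snr_db: per-band speech-to-noise ratio in dB;
    importance: per-band weights that sum to 1."""
    assert len(snr_db) == len(importance)
    ai = 0.0
    for snr, w in zip(snr_db, importance):
        # Map SNR to audibility: 0 below -15 dB, 1 above +15 dB,
        # linear in between (a common simplification).
        audibility = min(max((snr + 15.0) / 30.0, 0.0), 1.0)
        ai += w * audibility
    return ai

# Example: five bands with equal (hypothetical) importance weights.
bands = [10.0, 20.0, -20.0, 0.0, 15.0]
weights = [0.2] * 5
print(round(articulation_index(bands, weights), 3))
```

In the full model, predicted recognition is then obtained by passing the AI through an empirically fitted transfer function, as the abstract describes.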

12.
Young normal-hearing listeners, elderly normal-hearing listeners, and elderly hearing-impaired listeners were tested on a variety of phonetic identification tasks. Where identity was cued by stimulus duration, the elderly hearing-impaired listeners evidenced normal identification functions. On a task in which there were multiple cues to vowel identity, performance was also normal. On a /b d g/ identification task in which the starting frequency of the second formant was varied, performance was abnormal for both the elderly hearing-impaired listeners and the elderly normal-hearing listeners. We conclude that errors in phonetic identification among elderly hearing-impaired listeners with mild to moderate, sloping hearing impairment do not stem from abnormalities in processing stimulus duration. The results with the /b d g/ continuum suggest that one factor underlying errors may be an inability to base identification on dynamic spectral information when relatively static information, which is normally characteristic of a phonetic segment, is unavailable.

13.
The ability to discriminate between sounds with different spectral shapes was evaluated for normal-hearing and hearing-impaired listeners. Listeners detected a 920-Hz tone added in phase to a single component of a standard consisting of the sum of five tones spaced equally on a logarithmic frequency scale ranging from 200 to 4200 Hz. An overall level randomization of 10 dB was either present or absent. In one subset of conditions, the no-perturbation conditions, the standard stimulus was the sum of equal-amplitude tones. In the perturbation conditions, the amplitudes of the components within a stimulus were randomly altered on every presentation. For both perturbation and no-perturbation conditions, thresholds for the detection of the 920-Hz tone were measured to compare sensitivity to changes in spectral shape between normal-hearing and hearing-impaired listeners. To assess whether hearing-impaired listeners relied on different regions of the spectrum to discriminate between sounds, spectral weights were estimated from the perturbed standards by correlating the listener's responses with the level differences per component across two intervals of a two-alternative forced-choice task. Results showed that hearing-impaired and normal-hearing listeners had similar sensitivity to changes in spectral shape. On average, across-frequency correlation functions also were similar for both groups of listeners, suggesting that as long as all components are audible and well separated in frequency, hearing-impaired listeners can use information across frequency as well as normal-hearing listeners. Analysis of the individual data revealed, however, that normal-hearing listeners may be better able to adopt optimal weighting schemes. This conclusion is only tentative, as differences in internal noise may need to be considered to interpret the results obtained from weighting studies between normal-hearing and hearing-impaired listeners.
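The weight-estimation step described above — correlating trial-by-trial responses with per-component level differences — can be sketched as follows. The simulated observer, component count, and trial numbers here are hypothetical stand-ins, not the study's stimuli or listeners; the sketch only shows the correlational analysis itself.

```python
import math
import random

def estimate_weights(level_diffs, responses):
    """Point-biserial correlation, per component, between the
    level difference (interval 2 minus interval 1) and the
    listener choosing interval 2."""
    n_comp = len(level_diffs[0])
    weights = []
    for c in range(n_comp):
        x = [trial[c] for trial in level_diffs]
        y = [1.0 if r == 2 else 0.0 for r in responses]
        mx, my = sum(x) / len(x), sum(y) / len(y)
        cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        sx = math.sqrt(sum((xi - mx) ** 2 for xi in x))
        sy = math.sqrt(sum((yi - my) ** 2 for yi in y))
        weights.append(cov / (sx * sy) if sx and sy else 0.0)
    return weights

# Simulated observer that attends only to component 0: its weight
# should come out large, the others near zero.
random.seed(0)
trials = [[random.gauss(0, 2) for _ in range(5)] for _ in range(2000)]
resp = [2 if t[0] + random.gauss(0, 1) > 0 else 1 for t in trials]
w = estimate_weights(trials, resp)
print([round(v, 2) for v in w])
```

In practice the raw correlations are usually normalized so the weights sum to one before comparing weighting schemes across listeners.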

14.
The purpose of this study is to specify the contribution of certain frequency regions to consonant place perception for normal-hearing listeners and listeners with high-frequency hearing loss, and to characterize the differences in stop-consonant place perception among these listeners. Stop-consonant recognition and error patterns were examined at various speech-presentation levels and under conditions of low- and high-pass filtering. Subjects included 18 normal-hearing listeners and a homogeneous group of 10 young, hearing-impaired individuals with high-frequency sensorineural hearing loss. Differential filtering effects on consonant place perception were consistent with the spectral composition of acoustic cues. Differences in consonant recognition and error patterns between normal-hearing and hearing-impaired listeners were observed when the stimulus bandwidth included regions of threshold elevation for the hearing-impaired listeners. Thus place-perception differences among listeners are, for the most part, associated with stimulus bandwidths corresponding to regions of hearing loss.

15.
Temporal integration for a 1000-Hz signal was determined for normal-hearing and cochlear hearing-impaired listeners in quiet and in masking noise of variable bandwidth. Critical ratio and 3-dB critical band measures of frequency resolution were derived from the masking data. Temporal integration for the normal-hearing listeners was markedly reduced in narrow-band noise, when contrasted with temporal integration in quiet or in wideband noise. The effect of noise bandwidth on temporal integration was smaller for the hearing-impaired group. Hearing-impaired subjects showed both reduced temporal integration and reduced frequency resolution for the 200-ms signal. However, a direct relation between temporal integration and frequency resolution was not indicated. Frequency resolution for the normal-hearing listeners did not differ from that of the hearing-impaired listeners for the 20-ms signal. It was suggested that some of the frequency resolution and temporal integration differences between normal-hearing and hearing-impaired listeners could be accounted for by off-frequency listening.

16.
Speech reception thresholds (SRTs) for sentences were determined in stationary and modulated background noise for two age-matched groups of normal-hearing (N = 13) and hearing-impaired listeners (N = 21). Correlations were studied between the SRT in noise and measures of auditory and nonauditory performance, after which stepwise regression analyses were performed within both groups separately. Auditory measures included the pure-tone audiogram and tests of spectral and temporal acuity. Nonauditory factors were assessed by measuring the text reception threshold (TRT), a visual analogue of the SRT, in which partially masked sentences were adaptively presented. Results indicate that, for the normal-hearing group, the variance in speech reception is mainly associated with nonauditory factors, both in stationary and in modulated noise. For the hearing-impaired group, speech reception in stationary noise is mainly related to the audiogram, even when audibility effects are accounted for. In modulated noise, both auditory (temporal acuity) and nonauditory factors (TRT) contribute to explaining interindividual differences in speech reception. Age was not a significant factor in the results. It is concluded that, under some conditions, nonauditory factors are relevant for the perception of speech in noise. Further evaluation of nonauditory factors might enable adapting the expectations from auditory rehabilitation in clinical settings.
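Adaptive SRT measurement of the kind used in several of these studies can be sketched with a simple 1-up/1-down track, which converges on the 50%-correct SNR. This is a generic illustration under assumed parameters (2-dB step, 30 trials, a logistic simulated listener with a true SRT of -5 dB), not the specific sentence-test procedure of any study listed here.

```python
import math
import random

def adaptive_srt(respond, start_snr=0.0, step=2.0, n_trials=30):
    """1-up/1-down adaptive track: lower the SNR after a correct
    response, raise it after an incorrect one.  The SRT estimate is
    the mean of the visited SNRs after a few warm-up trials."""
    snr = start_snr
    history = []
    for _ in range(n_trials):
        history.append(snr)
        snr += -step if respond(snr) else step
    return sum(history[4:]) / len(history[4:])

# Simulated listener: logistic psychometric function with 50%
# intelligibility at -5 dB SNR (hypothetical values).
def listener(snr, srt=-5.0, slope=1.0):
    p = 1.0 / (1.0 + math.exp(-slope * (snr - srt)))
    return random.random() < p

random.seed(1)
print(round(adaptive_srt(listener), 1))  # estimate near -5 dB
```

Because each correct response lowers the SNR and each error raises it, the track oscillates around the point where correct and incorrect responses are equally likely, i.e., the 50% point of the psychometric function.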

17.
A conditional-on-a-single-stimulus (COSS) analysis procedure [B. G. Berg, J. Acoust. Soc. Am. 86, 1743-1746 (1989)] was used to estimate how well normal-hearing and hearing-impaired listeners selectively attend to individual spectral components of a broadband signal in a level discrimination task. On each trial, two multitone complexes consisting of six octave frequencies from 250 to 8000 Hz were presented to listeners. The levels of the individual tones were chosen independently and at random on each presentation. The target tone was selected, within a block of trials, as the 250-, 1000-, or 4000-Hz component. On each trial, listeners were asked to indicate which of the two complex sounds contained the higher level target. As a group, normal-hearing listeners exhibited greater selectivity than hearing-impaired listeners to the 250-Hz target, while hearing-impaired listeners showed greater selectivity than normal-hearing listeners to the 4000-Hz target, which is in the region of their hearing loss. Both groups of listeners displayed large variability in their ability to selectively weight the 1000-Hz target. Trial-by-trial analysis showed a decrease in weighting efficiency with increasing frequency for normal-hearing listeners, but a relatively constant weighting efficiency across frequency for hearing-impaired listeners. Interestingly, hearing-impaired listeners selectively weighted the 4000-Hz target, which was in the region of their hearing loss, more efficiently than did the normal-hearing listeners.

18.
Hearing-impaired (HI) listeners often show poorer performance on psychoacoustic tasks than do normal-hearing (NH) listeners. Although some such deficits may reflect changes in suprathreshold sound processing, others may be due to stimulus audibility and the elevated absolute thresholds associated with hearing loss. Masking noise can be used to raise the thresholds of NH listeners to equal the thresholds in quiet of HI listeners. However, such noise may have other effects, including changing peripheral response characteristics, such as the compressive input-output function of the basilar membrane in the normal cochlea. This study estimated compression behaviorally across a range of background noise levels in NH listeners at a 4 kHz signal frequency, using a growth of forward masking paradigm. For signals 5 dB or more above threshold in noise, no significant effect of broadband noise level was found on estimates of compression. This finding suggests that broadband noise does not significantly alter the compressive response of the basilar membrane to sounds that are presented well above their threshold in the noise. Similarities between the performance of HI listeners and NH listeners in threshold-equalizing noise are therefore unlikely to be due to a linearization of basilar-membrane responses to suprathreshold stimuli in the NH listeners.

19.
In a previous study [Noordhoek et al., J. Acoust. Soc. Am. 105, 2895-2902 (1999)], an adaptive test was developed to determine the speech-reception bandwidth threshold (SRBT), i.e., the width of a speech band around 1 kHz required for a 50% intelligibility score. In this test, the band-filtered speech is presented in complementary bandstop-filtered noise. In the present study, the performance of 34 hearing-impaired listeners was measured on this SRBT test and on more common SRT (speech-reception threshold) tests, namely the SRT in quiet, the standard SRT in noise (standard speech spectrum), and the spectrally adapted SRT in noise (fitted to the individual's dynamic range). The aim was to investigate to what extent the performance on these tests could be explained simply from audibility, as estimated with the SII (speech intelligibility index) model, or require the assumption of suprathreshold deficits. For most listeners, an elevated SRT in quiet or an elevated standard SRT in noise could be explained on the basis of audibility. For the spectrally adapted SRT in noise, and especially for the SRBT, the data of most listeners could not be explained from audibility, suggesting that the effects of suprathreshold deficits may be present. Possibly, such a deficit is an increased downward spread of masking.

20.
An analysis of psychophysical tuning curves in normal and pathological ears
Simultaneous psychophysical tuning curves were obtained from normal-hearing and hearing-impaired listeners, using probe tones that were either at similar sound pressure levels or at similar sensation levels for the two types of listeners. Tuning curves from the hearing-impaired listeners were flat, erratic, broad, and/or inverted, depending upon the frequency region of the probe tone and the frequency characteristics of the hearing loss. Tuning curves from the normal-hearing listeners at low SPLs were sharp as expected; tuning curves at high SPLs were discontinuous. An analysis of high-SPL tuning curves suggests that tuning curves from normal-hearing listeners reflect low-pass filter characteristics instead of the sharp bandpass filter characteristics seen with low-SPL probe tones. Tuning curves from hearing-impaired listeners at high-SPL probe levels appear to reflect similar low-pass filter characteristics, but with much more gradual high-frequency slopes than in the normal ear. This appeared as abnormal downward spread of masking. Relatively good temporal resolution and broader tuning mechanisms were proposed to explain inverted tuning curves in the hearing-impaired listeners.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号