Similar Literature
20 similar documents retrieved
1.
The speech understanding of persons with "flat" hearing loss (HI) was compared to a normal-hearing (NH) control group to examine how hearing loss affects the contribution of speech information in various frequency regions. Speech understanding in noise was assessed at multiple low- and high-pass filter cutoff frequencies. Noise levels were chosen to ensure that the noise, rather than quiet thresholds, determined audibility. The performance of HI subjects was compared to a NH group listening at the same signal-to-noise ratio and a comparable presentation level. Although absolute speech scores for the HI group were reduced, performance improvements as the speech and noise bandwidth increased were comparable between groups. These data suggest that the presence of hearing loss results in a uniform, rather than frequency-specific, deficit in the contribution of speech information. Measures of auditory thresholds in noise and speech intelligibility index (SII) calculations were also performed. These data suggest that differences in performance between the HI and NH groups are due primarily to audibility differences between groups. Measures of auditory thresholds in noise showed the "effective masking spectrum" of the noise was greater for the HI than the NH subjects.
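As a rough illustration of the kind of SII calculation referred to above, the sketch below sums importance-weighted band audibilities; the band levels and equal importance weights are placeholders rather than values from the study.

```python
# Simplified SII-style sketch: per-band audibility, weighted by band importance.
# Band levels and importance weights below are illustrative only.
def sii(speech_spectrum_db, noise_spectrum_db, importance):
    """speech/noise spectra are per-band levels (dB); importance weights sum to 1.0."""
    total = 0.0
    for s, n, w in zip(speech_spectrum_db, noise_spectrum_db, importance):
        snr = s - n                        # per-band signal-to-noise ratio
        audibility = (snr + 15.0) / 30.0   # map a 30-dB speech dynamic range onto 0..1
        audibility = min(1.0, max(0.0, audibility))
        total += w * audibility
    return total

# Example: three bands with equal importance
print(sii([60, 55, 50], [50, 52, 55], [1/3, 1/3, 1/3]))
```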

2.
Relations between perception of suprathreshold speech and auditory functions were examined in 24 hearing-impaired listeners and 12 normal-hearing listeners. The speech intelligibility index (SII) was used to account for audibility. The auditory functions included detection efficiency, temporal and spectral resolution, temporal and spectral integration, and discrimination of intensity, frequency, rhythm, and spectro-temporal shape. All auditory functions were measured at 1 kHz. Speech intelligibility was assessed with the speech-reception threshold (SRT) in quiet and in noise, and with the speech-reception bandwidth threshold (SRBT), previously developed for investigating speech perception in a limited frequency region around 1 kHz. The results showed that the elevated SRT in quiet could be explained on the basis of audibility. Audibility could only partly account for the elevated SRT values in noise and the deviant SRBT values, suggesting that suprathreshold deficits affected intelligibility in these conditions. SII predictions for the SRBT improved significantly by including the individually measured upward spread of masking in the SII model. Reduced spectral resolution, reduced temporal resolution, and reduced frequency discrimination appeared to be related to speech perception deficits. Loss of peripheral compression appeared to have the smallest effect on the intelligibility of suprathreshold speech.

3.
The effects of intensity on monosyllabic word recognition were studied in adults with normal hearing and mild-to-moderate sensorineural hearing loss. The stimuli were bandlimited NU#6 word lists presented in quiet and talker-spectrum-matched noise. Speech levels ranged from 64 to 99 dB SPL and S/N ratios from 28 to -4 dB. In quiet, the performance of normal-hearing subjects remained essentially constant; in noise, at a fixed S/N ratio, it decreased as a linear function of speech level. Hearing-impaired subjects performed like normal-hearing subjects tested in noise when the data were corrected for the effects of audibility loss. From these and other results, it was concluded that: (1) speech intelligibility in noise decreases when speech levels exceed 69 dB SPL and the S/N ratio remains constant; (2) the effects of speech and noise level are synergistic; (3) the deterioration in intelligibility can be modeled as a relative increase in the effective masking level; (4) normal-hearing and hearing-impaired subjects are affected similarly by increased signal level when differences in speech audibility are considered; (5) the negative effects of increasing speech and noise levels on speech recognition are similar for all adult subjects, at least up to 80 years of age; and (6) the effective dynamic range of speech may be larger than the commonly assumed value of 30 dB.

4.
The Articulation Index and Speech Intelligibility Index predict intelligibility scores from measurements of speech and hearing parameters. One component in the prediction is the frequency-importance function, a weighting function that characterizes contributions of particular spectral regions of speech to speech intelligibility. The purpose of this study was to determine whether such importance functions could similarly characterize contributions of electrode channels in cochlear implant systems. Thirty-eight subjects with normal hearing listened to vowel-consonant-vowel tokens, either as recorded or as output from vocoders that simulated aspects of cochlear implant processing. Importance functions were measured using the method of Whitmal and DeRoy [J. Acoust. Soc. Am. 130, 4032-4043 (2011)], in which signal bandwidths were varied adaptively to produce specified token recognition scores in accordance with the transformed up-down rules of Levitt [J. Acoust. Soc. Am. 49, 467-477 (1971)]. Psychometric functions constructed from recognition scores were subsequently converted into importance functions. Comparisons of the resulting importance functions indicate that vocoder processing causes peak importance regions to shift downward in frequency. This shift is attributed to changes in strategy and capability for detecting voicing in speech, and is consistent with previously measured data.
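The conversion from psychometric functions to importance functions can be pictured, very loosely, as normalizing the score gained when each successive band is added; the sketch below uses hypothetical low-pass scores and omits the AI/SII transfer-function step used in the actual method.

```python
import numpy as np

# Hypothetical fitted low-pass psychometric function: proportion correct vs. cutoff (Hz).
cutoffs = np.array([250, 500, 1000, 2000, 4000, 8000], dtype=float)
scores  = np.array([0.10, 0.25, 0.48, 0.70, 0.85, 0.90])

# Relative importance of each successive band = increase in score when that band is added,
# normalized so the weights sum to 1 (a crude stand-in for the AI/SII derivation).
gains = np.diff(scores, prepend=0.0)
importance = gains / gains.sum()
for lo, hi, w in zip(np.r_[0, cutoffs[:-1]], cutoffs, importance):
    print(f"{lo:5.0f}-{hi:5.0f} Hz: weight {w:.2f}")
```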

5.
The Articulation Index (AI) and Speech Intelligibility Index (SII) predict intelligibility scores from measurements of speech and hearing parameters. One component in the prediction is the "importance function," a weighting function that characterizes contributions of particular spectral regions of speech to speech intelligibility. Previous work with SII predictions for hearing-impaired subjects suggests that prediction accuracy might improve if importance functions for individual subjects were available. Unfortunately, previous importance function measurements have required extensive intelligibility testing with groups of subjects, using speech processed by various fixed-bandwidth low-pass and high-pass filters. A more efficient approach appropriate to individual subjects is desired. The purpose of this study was to evaluate the feasibility of measuring importance functions for individual subjects with adaptive-bandwidth filters. In two experiments, ten subjects with normal hearing listened to vowel-consonant-vowel (VCV) nonsense words processed by low-pass and high-pass filters whose bandwidths were varied adaptively to produce specified performance levels in accordance with the transformed up-down rules of Levitt [J. Acoust. Soc. Am. 49, 467-477 (1971)]. Local linear psychometric functions were fit to the resulting data and used to generate an importance function for VCV words. Results indicate that the adaptive method is reliable and efficient, and produces importance function data consistent with that of the corresponding AI/SII importance function.
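A minimal sketch of a transformed up-down track applied to filter bandwidth rather than level (a two-down/one-up rule converging near 70.7% correct); `present_trial` is a hypothetical stand-in for scoring a single VCV trial, and the starting bandwidth and step size are assumptions.

```python
def adaptive_bandwidth(present_trial, start_octaves=4.0, step=0.5, reversals_needed=8):
    """Two-down/one-up track (Levitt, 1971), converging near 70.7% correct,
    applied to filter bandwidth (in octaves) instead of signal level.
    present_trial(bandwidth) -> True/False is assumed to run and score one trial."""
    bw = start_octaves
    correct_run = 0
    direction = None
    reversals = []
    while len(reversals) < reversals_needed:
        if present_trial(bw):
            correct_run += 1
            if correct_run == 2:           # two consecutive correct -> make the task harder
                correct_run = 0
                if direction == "up":
                    reversals.append(bw)   # track reversed: record the turnaround point
                direction = "down"
                bw = max(0.25, bw - step)
        else:                              # one error -> make the task easier
            correct_run = 0
            if direction == "down":
                reversals.append(bw)
            direction = "up"
            bw += step
    return sum(reversals) / len(reversals)  # bandwidth estimate at ~70.7% correct
```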

6.
The goal of this study was to determine the extent to which the difficulty experienced by impaired listeners in understanding noisy speech can be explained on the basis of elevated tone-detection thresholds. Twenty-one impaired ears of 15 subjects, spanning a variety of audiometric configurations with average hearing losses of up to 75 dB, were tested for reception of consonants in a speech-spectrum noise. Speech level, noise level, and frequency-gain characteristic were varied to generate a range of listening conditions. Results for impaired listeners were compared to those of normal-hearing listeners tested under the same conditions with extra noise added to approximate the impaired listeners' detection thresholds. Results for impaired and normal listeners were also compared on the basis of articulation indices. Consonant recognition by this sample of impaired listeners was generally comparable to that of normal-hearing listeners with similar threshold shifts listening under the same conditions. When listening conditions were equated for articulation index, there was no clear dependence of consonant recognition on average hearing loss. Assuming that the primary consequence of the threshold simulation in normals is loss of audibility (as opposed to suprathreshold discrimination or resolution deficits), it is concluded that the primary source of difficulty in listening in noise for listeners with moderate or milder hearing impairments, aside from the noise itself, is the loss of audibility.

7.
An adaptive test has been developed to determine the minimum bandwidth of speech that a listener needs to reach 50% intelligibility. Measuring this speech-reception bandwidth threshold (SRBT), in addition to the more common speech-reception threshold (SRT) in noise, may be useful in investigating the factors underlying impaired suprathreshold speech perception. Speech was bandpass filtered (center frequency: 1 kHz) and complementary bandstop-filtered noise was added. To obtain reference values, the SRBT was measured in 12 normal-hearing listeners at four sound-pressure levels, in combination with three overall spectral tilts. Plotting SRBT as a function of sound-pressure level resulted in U-shaped curves. The narrowest SRBT (1.4 octaves) was obtained at an A-weighted sound-pressure level of 55 dB. The required bandwidth increases with increasing level, probably due to upward spread of masking. At a lower level (40 dBA), listeners also need a broader band, because parts of the speech signal will be below threshold. The SII (Speech Intelligibility Index) model reasonably predicts the data, although it seems to underestimate upward spread of masking.
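One way to picture the SRBT stimulus is band-limited speech summed with complementary band-stop noise; the sketch below uses Butterworth filters from SciPy, with the filter order, sampling rate, and bandwidth chosen only for illustration.

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 16000          # assumed sampling rate (Hz)
fc = 1000.0         # center frequency of the speech band (Hz)
half_bw_oct = 0.7   # half-bandwidth in octaves (a 1.4-octave band overall)
lo, hi = fc * 2 ** -half_bw_oct, fc * 2 ** half_bw_oct

bp = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")  # passband for the speech
bs = butter(4, [lo, hi], btype="bandstop", fs=fs, output="sos")  # complementary stopband for the noise

def srbt_stimulus(speech, noise):
    """Band-limited speech plus complementary band-stop-filtered noise."""
    return sosfilt(bp, speech) + sosfilt(bs, noise)
```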

8.
To examine spectral and threshold effects for speech and noise at high levels, recognition of nonsense syllables was assessed for low-pass-filtered speech and speech-shaped maskers and high-pass-filtered speech and speech-shaped maskers at three speech levels, with signal-to-noise ratio held constant. Subjects were younger adults with normal hearing and older adults with normal hearing but significantly higher average quiet thresholds. A broadband masker was always present to minimize audibility differences between subject groups and across presentation levels. For subjects with lower thresholds, the declines in recognition of low-frequency syllables in low-frequency maskers were attributed to nonlinear growth of masking which reduced "effective" signal-to-noise ratio at high levels, whereas the decline for subjects with higher thresholds was not fully explained by nonlinear masking growth. For all subjects, masking growth did not entirely account for declines in recognition of high-frequency syllables in high-frequency maskers at high levels. Relative to younger subjects with normal hearing and lower quiet thresholds, older subjects with normal hearing and higher quiet thresholds had poorer consonant recognition in noise, especially for high-frequency speech in high-frequency maskers. Age-related effects on thresholds and task proficiency may be determining factors in the recognition of speech in noise at high levels.  相似文献   

9.
The speech understanding of persons with sloping high-frequency (HF) hearing impairment (HI) was compared to normal hearing (NH) controls and previous research on persons with "flat" losses [Hornsby and Ricketts, J. Acoust. Soc. Am. 113, 1706-1717 (2003)] to examine how hearing loss configuration affects the contribution of speech information in various frequency regions. Speech understanding was assessed at multiple low- and high-pass filter cutoff frequencies. Crossover frequencies, defined as the cutoff frequencies at which low- and high-pass filtering yielded equivalent performance, were significantly lower for the sloping HI, compared to NH, group suggesting that HF HI limits the utility of HF speech information. Speech intelligibility index calculations suggest this limited utility was not due simply to reduced audibility but also to the negative effects of high presentation levels and a poorer-than-normal use of speech information in the frequency region with the greatest hearing loss (the HF regions). This deficit was comparable, however, to that seen in low-frequency regions of persons with similar HF thresholds and "flat" hearing losses suggesting that sensorineural HI results in a "uniform," rather than frequency-specific, deficit in speech understanding, at least for persons with HF thresholds up to 60-80 dB HL.
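A crossover frequency of this kind can be estimated as the cutoff at which low-pass and high-pass performance curves intersect; the sketch below interpolates on a log-frequency axis using made-up scores.

```python
import numpy as np

# Illustrative percent-correct scores at several filter cutoff frequencies (Hz).
cutoffs   = np.array([500, 1000, 2000, 4000], dtype=float)
low_pass  = np.array([20.0, 45.0, 70.0, 85.0])   # rises as more high frequencies are included
high_pass = np.array([80.0, 60.0, 35.0, 15.0])   # falls as low frequencies are removed

# Crossover = cutoff where the two curves are equal; interpolate on a log-frequency axis.
log_f = np.log2(cutoffs)
diff = low_pass - high_pass
i = np.where(np.diff(np.sign(diff)))[0][0]        # interval bracketing the sign change
frac = -diff[i] / (diff[i + 1] - diff[i])
crossover_hz = 2 ** (log_f[i] + frac * (log_f[i + 1] - log_f[i]))
print(f"crossover frequency is approximately {crossover_hz:.0f} Hz")
```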

10.
This experiment assessed the benefits of suppression and the impact of reduced or absent suppression on speech recognition in noise. Psychophysical suppression was measured in forward masking using tonal maskers and suppressors and band limited noise maskers and suppressors. Subjects were 10 younger and 10 older adults with normal hearing, and 10 older adults with cochlear hearing loss. For younger subjects with normal hearing, suppression measured with noise maskers increased with masker level and was larger at 2.0 kHz than at 0.8 kHz. Less suppression was observed for older than younger subjects with normal hearing. There was little evidence of suppression for older subjects with cochlear hearing loss. Suppression measured with noise maskers and suppressors was larger in magnitude and more prevalent than suppression measured with tonal maskers and suppressors. The benefit of suppression to speech recognition in noise was assessed by obtaining scores for filtered consonant-vowel syllables as a function of the bandwidth of a forward masker. Speech-recognition scores in forward maskers should be higher than those in simultaneous maskers given that forward maskers are less effective than simultaneous maskers. If suppression also mitigated the effects of the forward masker and resulted in an improved signal-to-noise ratio, scores should decrease less in forward masking as forward-masker bandwidth increased, and differences between scores in forward and simultaneous maskers should increase, as was observed for younger subjects with normal hearing. Less or no benefit of suppression to speech recognition in noise was observed for older subjects with normal hearing or hearing loss. In general, as suppression measured with tonal signals increased, the combined benefit of forward masking and suppression to speech recognition in noise also increased.

11.
The purpose of the present study was to examine the benefits of providing audible speech to listeners with sensorineural hearing loss when the speech is presented in a background noise. Previous studies have shown that when listeners have a severe hearing loss in the higher frequencies, providing audible speech (in a quiet background) to these higher frequencies usually results in no improvement in speech recognition. In the present experiments, speech was presented in a background of multitalker babble to listeners with various severities of hearing loss. The signal was low-pass filtered at numerous cutoff frequencies and speech recognition was measured as additional high-frequency speech information was provided to the hearing-impaired listeners. It was found in all cases, regardless of hearing loss or frequency range, that providing audible speech resulted in an increase in recognition score. The change in recognition as the cutoff frequency was increased, along with the amount of audible speech information in each condition (articulation index), was used to calculate the "efficiency" of providing audible speech. Efficiencies were positive for all degrees of hearing loss. However, the gains in recognition were small, and the maximum score obtained by any listener was low, due to the noise background. An analysis of error patterns showed that, due to the limited speech audibility in a noise background, even severely impaired listeners used additional speech audibility in the high frequencies to improve their perception of the "easier" features of speech, including voicing.
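The "efficiency" measure described above can be read as the gain in recognition per unit of added audible speech information (AI); a minimal worked example with hypothetical numbers follows.

```python
# Efficiency of extending the low-pass cutoff: change in recognition score
# divided by the change in Articulation Index. The values below are hypothetical.
def efficiency(score_narrow, score_wide, ai_narrow, ai_wide):
    return (score_wide - score_narrow) / (ai_wide - ai_narrow)

# e.g., extending the cutoff raises the score from 32% to 38% while the AI grows by 0.10
print(efficiency(0.32, 0.38, 0.45, 0.55))   # 0.6 -> positive but modest efficiency
```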

12.
A Speech Intelligibility Index (SII) for the sentences in the Cantonese version of the Hearing In Noise Test (CHINT) was derived using conventional procedures described previously in studies such as Studebaker and Sherbecoe [J. Speech Hear. Res. 34, 427-438 (1991)]. Two studies were conducted to determine the signal-to-noise ratios and high- and low-pass filtering conditions that should be used and to measure speech intelligibility in these conditions. Normal-hearing subjects listened to the sentences presented in speech-spectrum-shaped noise. Compared to other English speech assessment materials such as the English Hearing In Noise Test [Nilsson et al., J. Acoust. Soc. Am. 95, 1085-1099 (1994)], the frequency importance function of the CHINT suggests that low-frequency information is more important for Cantonese speech understanding. The difference in frequency importance weights in Chinese, compared to English, was attributed to the redundancy of the test material, the tonal nature of the Cantonese language, or a combination of these factors.

13.
These experiments examined how high presentation levels influence speech recognition for high- and low-frequency stimuli in noise. Normally hearing (NH) and hearing-impaired (HI) listeners were tested. In Experiment 1, high- and low-frequency bandwidths yielding 70%-correct word recognition in quiet were determined at levels associated with broadband speech at 75 dB SPL. In Experiment 2, broadband and band-limited sentences (based on passbands measured in Experiment 1) were presented at this level in speech-shaped noise filtered to the same frequency bandwidths as targets. Noise levels were adjusted to produce approximately 30%-correct word recognition. Frequency bandwidths and signal-to-noise ratios supporting criterion performance in Experiment 2 were tested at 75, 87.5, and 100 dB SPL in Experiment 3. Performance tended to decrease as levels increased. For NH listeners, this "rollover" effect was greater for high-frequency and broadband materials than for low-frequency stimuli. For HI listeners, the 75- to 87.5-dB increase improved signal audibility for high-frequency stimuli and rollover was not observed. However, the 87.5- to 100-dB increase produced qualitatively similar results for both groups: scores decreased most for high-frequency stimuli and least for low-frequency materials. Predictions of speech intelligibility by quantitative methods such as the Speech Intelligibility Index may be improved if rollover effects are modeled as frequency dependent.

14.
Articulation index (AI) theory was used to evaluate stop-consonant recognition of normal-hearing listeners and listeners with high-frequency hearing loss. From results reported in a companion article [Dubno et al., J. Acoust. Soc. Am. 85, 347-354 (1989)], a transfer function relating the AI to stop-consonant recognition was established, and a frequency importance function was determined for the nine stop-consonant-vowel syllables used as test stimuli. The calculations included the rms and peak levels of the speech that had been measured in 1/3 octave bands; the internal noise was estimated from the thresholds for each subject. The AI model was then used to predict performance for the hearing-impaired listeners. A majority of the AI predictions for the hearing-impaired subjects fell within +/- 2 standard deviations of the normal-hearing listeners' results. However, as observed in previous data, the AI tended to overestimate performance of the hearing-impaired listeners. The accuracy of the predictions decreased with the magnitude of high-frequency hearing loss. Thus, with the exception of performance for listeners with severe high-frequency hearing loss, the results suggest that poorer speech recognition among hearing-impaired listeners results from reduced audibility within critical spectral regions of the speech stimuli.
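A transfer function relating AI to recognition is often summarized by a curve of the form P = (1 - 10^(-AI/Q))^N; the sketch below applies such a function with placeholder constants, not the values fitted in this study.

```python
import numpy as np

def predicted_score(ai, q=0.25, n=1.0):
    """One common AI-to-score transfer function, P = (1 - 10**(-AI/Q))**N.
    Q and N here are placeholders, not the constants fitted in the study."""
    return (1.0 - 10.0 ** (-np.asarray(ai) / q)) ** n

for ai in (0.1, 0.3, 0.5, 0.8):
    print(f"AI = {ai:.1f} -> predicted recognition = {predicted_score(ai):.2f}")
```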

15.
Under certain conditions, speech recognition in noise decreases above conversational levels when signal-to-noise ratio is held constant. The current study was undertaken to determine if nonlinear growth of masking and the subsequent reduction in "effective" signal-to-noise ratio accounts for this decline. Nine young adults with normal hearing listened to monosyllabic words at three levels in each of three levels of a masker shaped to match the speech spectrum. An additional low-level noise equated audibility by producing equivalent masked thresholds for all subjects. If word recognition was determined entirely by signal-to-noise ratio and was independent of overall speech and masker levels, scores at a given signal-to-noise ratio should remain constant with increasing level. Masked pure-tone thresholds measured in the speech-shaped maskers increased linearly with increasing masker level at lower frequencies but nonlinearly at higher frequencies, consistent with nonlinear growth of upward spread of masking that followed the peaks in the spectrum of the speech-shaped masker. Word recognition declined significantly with increasing level when signal-to-noise ratio was held constant, which was attributed to nonlinear growth of masking and reduced "effective" signal-to-noise ratio at high speech-shaped masker levels, as indicated by audibility estimates based on the Articulation Index.

16.
Speech recognition in noisy environments improves when the speech signal is spatially separated from the interfering sound. This effect, known as spatial release from masking (SRM), was recently shown in young children. The present study compared SRM in children of ages 5-7 with adults for interferers introducing energetic, informational, and/or linguistic components. Three types of interferers were used: speech, reversed speech, and modulated white noise. Two female voices with different long-term spectra were also used. Speech reception thresholds (SRTs) were compared for: Quiet (target 0 degrees front, no interferer), Front (target and interferer both 0 degrees front), and Right (interferer 90 degrees right, target 0 degrees front). Children had higher SRTs and greater masking than adults. When spatial cues were not available, adults, but not children, were able to use differences in interferer type to separate the target from the interferer. Both children and adults showed SRM. Children, unlike adults, demonstrated large amounts of SRM for a time-reversed speech interferer. In conclusion, masking and SRM vary with the type of interfering sound, and this variation interacts with age; SRM may not depend on the spectral peculiarities of a particular type of voice when the target speech and interfering speech are different sex talkers.
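Spatial release from masking is simply the difference between the co-located and spatially separated speech reception thresholds; a one-line illustration with hypothetical SRTs follows.

```python
# Spatial release from masking (SRM): SRT with target and interferer co-located (Front)
# minus SRT with the interferer moved to the side (Right). The values below are hypothetical.
def srm(srt_colocated_db, srt_separated_db):
    return srt_colocated_db - srt_separated_db

print(srm(-2.0, -8.5))   # 6.5 dB of spatial release
```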

17.
In a previous study [Noordhoek et al., J. Acoust. Soc. Am. 105, 2895-2902 (1999)], an adaptive test was developed to determine the speech-reception bandwidth threshold (SRBT), i.e., the width of a speech band around 1 kHz required for a 50% intelligibility score. In this test, the band-filtered speech is presented in complementary bandstop-filtered noise. In the present study, the performance of 34 hearing-impaired listeners was measured on this SRBT test and on more common SRT (speech-reception threshold) tests, namely the SRT in quiet, the standard SRT in noise (standard speech spectrum), and the spectrally adapted SRT in noise (fitted to the individual's dynamic range). The aim was to investigate to what extent the performance on these tests could be explained simply from audibility, as estimated with the SII (speech intelligibility index) model, or require the assumption of suprathreshold deficits. For most listeners, an elevated SRT in quiet or an elevated standard SRT in noise could be explained on the basis of audibility. For the spectrally adapted SRT in noise, and especially for the SRBT, the data of most listeners could not be explained from audibility, suggesting that the effects of suprathreshold deficits may be present. Possibly, such a deficit is an increased downward spread of masking.

18.
The present study examined the benefits of providing amplified speech to the low- and mid-frequency regions of listeners with various degrees of sensorineural hearing loss. Nonsense syllables were low-pass filtered at various cutoff frequencies and consonant recognition was measured as the bandwidth of the signal was increased. In addition, error patterns were analyzed to determine the types of speech cues that were, or were not, transmitted to the listeners. For speech frequencies of 2800 Hz and below, a positive benefit of amplified speech was observed in every case, although the benefit provided was very often less than that observed in normal-hearing listeners who received the same increase in speech audibility. There was no dependence of this benefit upon the degree of hearing loss. Error patterns suggested that the primary difficulty that hearing-impaired individuals have in using amplified speech is due to their poor ability to perceive the place of articulation of consonants, followed by a reduced ability to perceive manner information.
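Error-pattern analyses of this kind are often summarized by how well a phonetic feature survives in the confusion matrix; the sketch below computes the proportion of responses preserving voicing for a small, made-up matrix (the consonant set and counts are illustrative only).

```python
import numpy as np

# Hypothetical consonant confusion matrix: rows = presented, columns = responded.
consonants = ["p", "b", "t", "d"]
voiced     = {"p": 0, "b": 1, "t": 0, "d": 1}
confusions = np.array([[20,  5,  8,  2],
                       [ 4, 22,  3,  6],
                       [ 7,  2, 21,  5],
                       [ 1,  8,  4, 22]])

# Proportion of responses that preserve the voicing feature, regardless of
# whether the consonant itself was identified correctly.
total = confusions.sum()
preserved = sum(confusions[i, j]
                for i, a in enumerate(consonants)
                for j, b in enumerate(consonants)
                if voiced[a] == voiced[b])
print(f"voicing transmitted on {preserved / total:.0%} of trials")
```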

19.
Speech production by children with cochlear implants (CIs) is generally less intelligible and less accurate on a phonemic level than that of normally hearing children. Research has reported that children with CIs produce less acoustic contrast between phonemes than normally hearing children, but these studies have included correct and incorrect productions. The present study compared the extent of contrast between correct productions of /s/ and /ʃ/ by children with CIs and two comparison groups: (1) normally hearing children of the same chronological age as the children with CIs and (2) normally hearing children with the same duration of auditory experience. Spectral peaks and means were calculated from the frication noise of productions of /s/ and /ʃ/. Results showed that the children with CIs produced less contrast between /s/ and /ʃ/ than normally hearing children of the same chronological age and normally hearing children with the same duration of auditory experience due to production of /s/ with spectral peaks and means at lower frequencies. The results indicate that there may be differences between the speech sounds produced by children with CIs and their normally hearing peers even for sounds that adults judge as correct.
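Spectral peaks and means of frication noise can be computed from a windowed FFT of the frication segment; in the sketch below the sampling rate, window, and input signal are assumptions.

```python
import numpy as np

def spectral_peak_and_mean(frication, fs=44100):
    """Spectral peak (Hz) and amplitude-weighted spectral mean (Hz) of a
    frication-noise segment. Sampling rate and windowing are assumptions."""
    windowed = frication * np.hanning(len(frication))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(frication), d=1.0 / fs)
    peak_hz = freqs[np.argmax(spectrum)]
    mean_hz = np.sum(freqs * spectrum) / np.sum(spectrum)
    return peak_hz, mean_hz

# e.g., /s/ typically shows higher peak and mean frequencies than /sh/
noise = np.random.randn(2048)   # stand-in for an excised frication segment
print(spectral_peak_and_mean(noise))
```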

20.
Although cochlear implant (CI) users have enjoyed good speech recognition in quiet, they still have difficulties understanding speech in noise. We conducted three experiments to determine whether a directional microphone and an adaptive multichannel noise reduction algorithm could enhance CI performance in noise and whether Speech Transmission Index (STI) can be used to predict CI performance in various acoustic and signal processing conditions. In Experiment I, CI users listened to speech in noise processed by 4 hearing aid settings: omni-directional microphone, omni-directional microphone plus noise reduction, directional microphone, and directional microphone plus noise reduction. The directional microphone significantly improved speech recognition in noise. Both directional microphone and noise reduction algorithm improved overall preference. In Experiment II, normal hearing individuals listened to the recorded speech produced by 4- or 8-channel CI simulations. The 8-channel simulation yielded similar speech recognition results as in Experiment I, whereas the 4-channel simulation produced no significant difference among the 4 settings. In Experiment III, we examined the relationship between STIs and speech recognition. The results suggested that STI could predict actual and simulated CI speech intelligibility with acoustic degradation and the directional microphone, but not the noise reduction algorithm. Implications for intelligibility enhancement are discussed.

