Similar Documents
20 similar documents found (search time: 15 ms)
1.
The purpose of the present study was to examine the benefits of providing audible speech to listeners with sensorineural hearing loss when the speech is presented in a background noise. Previous studies have shown that when listeners have a severe hearing loss in the higher frequencies, providing audible speech (in a quiet background) to these higher frequencies usually results in no improvement in speech recognition. In the present experiments, speech was presented in a background of multitalker babble to listeners with various severities of hearing loss. The signal was low-pass filtered at numerous cutoff frequencies and speech recognition was measured as additional high-frequency speech information was provided to the hearing-impaired listeners. It was found in all cases, regardless of hearing loss or frequency range, that providing audible speech resulted in an increase in recognition score. The change in recognition as the cutoff frequency was increased, along with the amount of audible speech information in each condition (articulation index), was used to calculate the "efficiency" of providing audible speech. Efficiencies were positive for all degrees of hearing loss. However, the gains in recognition were small, and the maximum score obtained by any listener was low, due to the noise background. An analysis of error patterns showed that due to the limited speech audibility in a noise background, even severely impaired listeners used additional speech audibility in the high frequencies to improve their perception of the "easier" features of speech including voicing.
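The "efficiency" measure described above can be illustrated with a short sketch: the change in recognition score between adjacent low-pass cutoff conditions, divided by the corresponding change in Articulation Index. All numbers below are invented for illustration, not data from the study.

```python
# Hypothetical illustration of the "efficiency" of added audible speech:
# delta(recognition score) / delta(Articulation Index) between adjacent
# low-pass cutoff conditions.  All values are made up for this sketch.

cutoffs_hz = [1400, 2000, 2800, 4000]   # low-pass cutoff frequencies
scores = [0.30, 0.36, 0.40, 0.43]       # proportion correct at each cutoff
ai = [0.20, 0.28, 0.34, 0.40]           # audible speech information (AI)

def efficiencies(scores, ai):
    """Efficiency of each audibility increment: the gain in
    recognition per unit of added Articulation Index."""
    return [(s2 - s1) / (a2 - a1)
            for (s1, s2), (a1, a2) in zip(zip(scores, scores[1:]),
                                          zip(ai, ai[1:]))]

effs = efficiencies(scores, ai)
# Positive efficiencies mean the added audibility improved recognition,
# as the study found for all degrees of hearing loss.
assert all(e > 0 for e in effs)
```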

2.
This study examined proportional frequency compression as a strategy for improving speech recognition in listeners with high-frequency sensorineural hearing loss. This method of frequency compression preserved the ratios between the frequencies of the components of natural speech, as well as the temporal envelope of the unprocessed speech stimuli. Nonsense syllables spoken by a female and a male talker were used as the speech materials. Both frequency-compressed speech and the control condition of unprocessed speech were presented with high-pass amplification. For the materials spoken by the female talker, significant increases in speech recognition were observed in slightly less than one-half of the listeners with hearing impairment. For the male-talker materials, one-fifth of the hearing-impaired listeners showed significant recognition improvements. The increases in speech recognition due solely to frequency compression were generally smaller than those solely due to high-pass amplification. The results indicate that while high-pass amplification is still the most effective approach for improving speech recognition of listeners with high-frequency hearing loss, proportional frequency compression can offer significant improvements in addition to those provided by amplification for some patients.
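A rough sketch of what "proportional" means here: every spectral component is multiplied by the same factor k < 1, so the ratios between components (and hence harmonic structure) are preserved. The compression factor and component frequencies below are illustrative, not the study's parameters.

```python
# Proportional frequency compression: scale every component frequency by a
# constant factor k, preserving frequency ratios.  Values are illustrative.

def compress(frequencies_hz, k):
    """Map each component frequency f to k * f (0 < k <= 1)."""
    assert 0.0 < k <= 1.0
    return [k * f for f in frequencies_hz]

harmonics = [200.0, 400.0, 600.0, 800.0]  # f0 and harmonics of a voiced sound
shifted = compress(harmonics, 0.8)        # lowered toward an audible region

# Ratios between components are unchanged (the 2nd/1st harmonic ratio is
# still 2), which distinguishes proportional compression from a fixed
# frequency shift that would break harmonic relationships.
assert abs(shifted[1] / shifted[0] - harmonics[1] / harmonics[0]) < 1e-9
```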

3.
4.
This study investigated the effects of age and hearing loss on perception of accented speech presented in quiet and noise. The relative importance of alterations in phonetic segments vs. temporal patterns in a carrier phrase with accented speech also was examined. English sentences recorded by a native English speaker and a native Spanish speaker, together with hybrid sentences that varied the native language of the speaker of the carrier phrase and the final target word of the sentence were presented to younger and older listeners with normal hearing and older listeners with hearing loss in quiet and noise. Effects of age and hearing loss were observed in both listening environments, but varied with speaker accent. All groups exhibited lower recognition performance for the final target word spoken by the accented speaker compared to that spoken by the native speaker, indicating that alterations in segmental cues due to accent play a prominent role in intelligibility. Effects of the carrier phrase were minimal. The findings indicate that recognition of accented speech, especially in noise, is a particularly challenging communication task for older people.

5.
Listeners with sensorineural hearing loss are poorer than listeners with normal hearing at understanding one talker in the presence of another. This deficit is more pronounced when competing talkers are spatially separated, implying a reduced "spatial benefit" in hearing-impaired listeners. This study tested the hypothesis that this deficit is due to increased masking specifically during the simultaneous portions of competing speech signals. Monosyllabic words were compressed to a uniform duration and concatenated to create target and masker sentences with three levels of temporal overlap: 0% (non-overlapping in time), 50% (partially overlapping), or 100% (completely overlapping). Listeners with hearing loss performed particularly poorly in the 100% overlap condition, consistent with the idea that simultaneous speech sounds are most problematic for these listeners. However, spatial release from masking was reduced in all overlap conditions, suggesting that increased masking during periods of temporal overlap is only one factor limiting spatial unmasking in hearing-impaired listeners.

6.
There is limited documentation available on how sensorineurally hearing-impaired listeners use the various sources of phonemic information that are known to be distributed across time in the speech waveform. In this investigation, a group of normally hearing listeners and a group of sensorineurally hearing-impaired listeners (with and without the benefit of amplification) identified various consonant and vowel productions that had been systematically varied in duration. The consonants (presented in a /haCa/ environment) and the vowels (presented in a /bVd/ environment) were truncated in steps to eliminate various segments from the end of the stimulus. The results indicated that normally hearing listeners could extract more phonemic information, especially cues to consonant place, from the earlier occurring portions of the stimulus waveforms than could the hearing-impaired listeners. The use of amplification partially decreased the performance differences between the normally hearing listeners and the unaided hearing-impaired listeners. The results are relevant to current models of normal speech perception that emphasize the need for the listener to make phonemic identifications as quickly as possible.

7.
The purpose of this study is to specify the contribution of certain frequency regions to consonant place perception for normal-hearing listeners and listeners with high-frequency hearing loss, and to characterize the differences in stop-consonant place perception among these listeners. Stop-consonant recognition and error patterns were examined at various speech-presentation levels and under conditions of low- and high-pass filtering. Subjects included 18 normal-hearing listeners and a homogeneous group of 10 young, hearing-impaired individuals with high-frequency sensorineural hearing loss. Differential filtering effects on consonant place perception were consistent with the spectral composition of acoustic cues. Differences in consonant recognition and error patterns between normal-hearing and hearing-impaired listeners were observed when the stimulus bandwidth included regions of threshold elevation for the hearing-impaired listeners. Thus place-perception differences among listeners are, for the most part, associated with stimulus bandwidths corresponding to regions of hearing loss.

8.
Fundamental frequency (F0) information extracted from low-pass-filtered speech and aurally presented as frequency-modulated sinusoids can greatly improve speechreading performance [Grant et al., J. Acoust. Soc. Am. 77, 671-677 (1985)]. To use this source of information, listeners must be able to detect the presence or absence of F0 (i.e., voicing), discriminate changes in frequency, and make judgments about the linguistic meaning of perceived variations in F0. In the present study, normally hearing and hearing-impaired subjects were required to locate the stressed peak of an intonation contour according to the extent of frequency transition at the primary peak. The results showed that listeners with profound hearing impairments required frequency transitions that were 1.5-6 times greater than those required by normally hearing subjects. These results were consistent with the subjects' identification performance for intonation and stress patterns in natural speech, and suggest that natural variations in F0 may be too small for some impaired listeners to perceive and follow accurately.
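The general idea of presenting an F0 contour as a frequency-modulated sinusoid can be sketched as follows: the sinusoid's phase is accumulated from the instantaneous F0, so its frequency tracks the contour and unvoiced frames (F0 = 0) map to silence. The sampling rate and contour values are invented; this is not the exact processing of the cited study.

```python
# Sketch: synthesize a sinusoid whose instantaneous frequency follows an
# F0 contour, conveying voicing and intonation.  Illustrative parameters.
import math

def fm_sinusoid(f0_contour, fs):
    """One output sample per contour value; phase advances by 2*pi*f0/fs."""
    out, phase = [], 0.0
    for f0 in f0_contour:
        if f0 <= 0.0:                      # unvoiced frame: no carrier
            out.append(0.0)
        else:
            phase += 2.0 * math.pi * f0 / fs
            out.append(math.sin(phase))
    return out

fs = 8000
# a rising intonation peak from 100 to 150 Hz over 0.5 s
contour = [100.0 + 50.0 * n / (fs // 2) for n in range(fs // 2)]
tone = fm_sinusoid(contour, fs)
```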

9.
The Speech Reception Threshold for sentences in stationary noise and in several amplitude-modulated noises was measured for 8 normal-hearing listeners, 29 sensorineural hearing-impaired listeners, and 16 normal-hearing listeners with simulated hearing loss. This approach makes it possible to determine whether the reduced benefit from masker modulations, as often observed for hearing-impaired listeners, is due to a loss of signal audibility, or due to suprathreshold deficits, such as reduced spectral and temporal resolution, which were measured in four separate psychophysical tasks. Results show that the reduced masking release can only partly be accounted for by reduced audibility, and that, when considering suprathreshold deficits, the normal effects associated with a raised presentation level should be taken into account. In this perspective, reduced spectral resolution does not appear to qualify as an actual suprathreshold deficit, while reduced temporal resolution does. Temporal resolution and age are shown to be the main factors governing masking release for speech in modulated noise, accounting for more than half of the intersubject variance. Their influence appears to be related to the processing of mainly the higher stimulus frequencies. Results based on calculations of the Speech Intelligibility Index in modulated noise confirm these conclusions.

10.
Articulation index (AI) theory was used to evaluate stop-consonant recognition of normal-hearing listeners and listeners with high-frequency hearing loss. From results reported in a companion article [Dubno et al., J. Acoust. Soc. Am. 85, 347-354 (1989)], a transfer function relating the AI to stop-consonant recognition was established, and a frequency importance function was determined for the nine stop-consonant-vowel syllables used as test stimuli. The calculations included the rms and peak levels of the speech that had been measured in 1/3 octave bands; the internal noise was estimated from the thresholds for each subject. The AI model was then used to predict performance for the hearing-impaired listeners. A majority of the AI predictions for the hearing-impaired subjects fell within +/- 2 standard deviations of the normal-hearing listeners' results. However, as observed in previous data, the AI tended to overestimate performance of the hearing-impaired listeners. The accuracy of the predictions decreased with the magnitude of high-frequency hearing loss. Thus, with the exception of performance for listeners with severe high-frequency hearing loss, the results suggest that poorer speech recognition among hearing-impaired listeners results from reduced audibility within critical spectral regions of the speech stimuli.
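The band-audibility logic behind AI calculations of this kind can be sketched in simplified form: the audible portion of an assumed 30-dB speech dynamic range above the listener's threshold in each band, weighted by a frequency importance function. The band levels, thresholds, and weights below are invented for illustration and are not the companion article's exact procedure.

```python
# Simplified Articulation Index: per-band audibility weighted by a
# frequency importance function.  All numeric values are illustrative.

def articulation_index(speech_peaks_db, thresholds_db, importance):
    """AI = sum_i importance[i] * audibility[i], where audibility is the
    audible fraction of an assumed 30-dB speech dynamic range in band i."""
    assert abs(sum(importance) - 1.0) < 1e-6   # weights must sum to 1
    ai = 0.0
    for peak, thresh, w in zip(speech_peaks_db, thresholds_db, importance):
        audible_db = min(max(peak - thresh, 0.0), 30.0)
        ai += w * (audible_db / 30.0)
    return ai

# Fully audible speech in every band gives an AI near 1.0; a raised
# threshold in the top band (high-frequency loss) lowers the AI,
# predicting poorer recognition from reduced audibility alone.
ai_normal = articulation_index([70, 70, 70], [10, 10, 10], [0.3, 0.4, 0.3])
ai_hfloss = articulation_index([70, 70, 70], [10, 10, 60], [0.3, 0.4, 0.3])
assert ai_hfloss < ai_normal
```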

11.
Eight normal listeners and eight listeners with sensorineural hearing losses were compared on a gap-detection task and on a speech perception task. The minimum detectable gap (71% correct) was determined as a function of noise level, and a time constant was computed from these data for each listener. The time constants of the hearing-impaired listeners were significantly longer than those of the normal listeners. The speech consisted of sentences that were mixed with two levels of noise and subjected to two kinds of reverberation (real or simulated). The speech thresholds (minimum signal-to-noise ratio for 50% correct) were significantly higher for the hearing-impaired listeners than for the normal listeners for both kinds of reverberation. The longer reverberation times produced significantly higher thresholds than the shorter times. The time constant was significantly correlated with all the speech threshold measures (r = -0.58 to -0.74) and a measure of hearing threshold loss also correlated significantly with all the speech thresholds (r = 0.53 to 0.95). A principal components analysis yielded two factors that accounted for the intercorrelations. The factor loadings for the time constant were similar to those on the speech thresholds for real reverberation and the loadings for hearing loss were similar to those of the thresholds for simulated reverberation.

12.
Perceptual coherence, the process by which the individual elements of complex sounds are bound together, was examined in adult listeners with longstanding childhood hearing losses, listeners with adult-onset hearing losses, and listeners with normal hearing. It was hypothesized that perceptual coherence would vary in strength between the groups due to their substantial differences in hearing history. Bisyllabic words produced by three talkers as well as comodulated three-tone complexes served as stimuli. In the first task, the second formant of each word was isolated and presented for recognition. In the second task, an isolated formant was paired with an intact word and listeners indicated whether or not the isolated second formant was a component of the intact word. In the third task, the middle component of the three-tone complex was presented in the same manner. For the speech stimuli, results indicate normal perceptual coherence in the listeners with adult-onset hearing loss but significantly weaker coherence in the listeners with childhood hearing losses. No differences were observed across groups for the nonspeech stimuli. These results suggest that perceptual coherence is relatively unaffected by hearing loss acquired during adulthood but appears to be impaired when hearing loss is present in early childhood.

13.
The purpose of this experiment was to determine the applicability of the Articulation Index (AI) model for characterizing the speech recognition performance of listeners with mild-to-moderate hearing loss. Performance-intensity functions were obtained from five normal-hearing listeners and 11 hearing-impaired listeners using a closed-set nonsense syllable test for two frequency responses (uniform and high-frequency emphasis). For each listener, the fitting constant Q of the nonlinear transfer function relating AI and speech recognition was estimated. Results indicated that the function mapping AI onto performance was approximately the same for normal and hearing-impaired listeners with mild-to-moderate hearing loss and high speech recognition scores. For a hearing-impaired listener with poor speech recognition ability, the AI procedure was a poor predictor of performance. The AI procedure as presently used is inadequate for predicting performance of individuals with reduced speech recognition ability and should be used conservatively in applications predicting optimal or acceptable frequency response characteristics for hearing-aid amplification systems.
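A nonlinear AI-to-score transfer function with a fitting constant Q is commonly written in the Fletcher form score = 1 - 10^(-AI/Q); the abstract does not state the exact form used here, so treat the sketch below, including the grid-search fit and all data values, purely as an illustration of estimating Q from performance-intensity data.

```python
# Sketch of fitting the constant Q in the transfer function
# score = 1 - 10**(-AI / Q).  A larger Q means a given AI predicts a
# higher score.  The data are synthetic (generated from Q = 0.5), so the
# fit should recover that value; nothing here is from the study itself.

def predicted_score(ai, q):
    return 1.0 - 10.0 ** (-ai / q)

def fit_q(ai_values, scores, q_grid):
    """Pick the Q from q_grid minimizing squared prediction error."""
    return min(q_grid,
               key=lambda q: sum((s - predicted_score(a, q)) ** 2
                                 for a, s in zip(ai_values, scores)))

ai_vals = [0.1, 0.3, 0.5, 0.7, 0.9]
obs = [predicted_score(a, 0.5) for a in ai_vals]   # noiseless synthetic data
q_grid = [0.1 * k for k in range(1, 21)]           # candidate Q: 0.1 ... 2.0
q_hat = fit_q(ai_vals, obs, q_grid)                # recovers the true Q
```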

14.
This study tested the hypothesis that the reduction in spatial release from masking (SRM) resulting from sensorineural hearing loss in competing speech mixtures is influenced by the characteristics of the interfering speech. A frontal speech target was presented simultaneously with two intelligible or two time-reversed (unintelligible) speech maskers that were either colocated with the target or were symmetrically separated from the target in the horizontal plane. The difference in SRM between listeners with hearing impairment and listeners with normal hearing was substantially larger for the forward maskers (deficit of 5.8 dB) than for the reversed maskers (deficit of 1.6 dB). This was driven by the fact that all listeners, regardless of hearing abilities, performed similarly (and poorly) in the colocated condition with intelligible maskers. The same conditions were then tested in listeners with normal hearing using headphone stimuli that were degraded by noise vocoding. Reducing the number of available spectral channels systematically reduced the measured SRM, and again, more so for forward (reduction of 3.8 dB) than for reversed speech maskers (reduction of 1.8 dB). The results suggest that non-spatial factors can strongly influence both the magnitude of SRM and the apparent deficit in SRM for listeners with impaired hearing.
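The SRM arithmetic in this abstract reduces to differences between speech reception thresholds (SRTs). A tiny sketch, with hypothetical SRT values chosen so the group differences reproduce the reported deficits (5.8 dB forward, 1.6 dB reversed); the SRTs themselves are invented, only the deficits come from the abstract.

```python
# Spatial release from masking (SRM) is the improvement in SRT when maskers
# move from colocated to spatially separated positions.  SRT values below
# are hypothetical; both groups share the colocated SRT for intelligible
# maskers, mirroring the finding that all listeners performed similarly
# (and poorly) in that condition.

def srm(srt_colocated_db, srt_separated_db):
    """A lower (better) separated SRT yields a positive release, in dB."""
    return srt_colocated_db - srt_separated_db

srm_nh_forward = srm(2.0, -8.0)     # normal hearing, intelligible maskers
srm_hi_forward = srm(2.0, -2.2)     # hearing impaired, intelligible maskers
srm_nh_reversed = srm(-6.0, -14.0)  # normal hearing, time-reversed maskers
srm_hi_reversed = srm(-6.0, -12.4)  # hearing impaired, time-reversed maskers

deficit_forward = srm_nh_forward - srm_hi_forward    # reported 5.8 dB
deficit_reversed = srm_nh_reversed - srm_hi_reversed # reported 1.6 dB
```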

15.
Masking period patterns (MPPs) were measured in listeners with normal and impaired hearing using amplitude-modulated tonal maskers and short tonal probes. The frequency of the masker was either the same as the frequency of the probe (on-frequency masking) or was one octave below the frequency of the probe (off-frequency masking). In experiment 1, MPPs were measured for listeners with normal hearing using different masker levels. Carrier frequencies of 3 and 6 kHz were used for the masker. The probe had a frequency of 6 kHz. For all masker levels, the off-frequency MPPs exhibited deeper and longer valleys compared with the on-frequency MPPs. Hearing-impaired listeners were tested in experiment 2. For some hearing-impaired subjects, masker frequencies of 1.5 kHz and 3 kHz were paired with a probe frequency of 3 kHz. MPPs measured for listeners with hearing loss had similar shapes for on- and off-frequency maskers. It was hypothesized that the shapes of MPPs reflect nonlinear processing at the level of the basilar membrane in normal hearing and more linear processing in impaired hearing. A model assuming different cochlear gains for normal versus impaired hearing and similar parameters of the temporal integrator for both groups of listeners successfully predicted the MPPs.

16.
A functional simulation of hearing loss was evaluated in its ability to reproduce the temporal masking functions for eight listeners with mild to severe sensorineural hearing loss. Each audiometric loss was simulated in a group of age-matched normal-hearing listeners through a combination of spectrally-shaped masking noise and multi-band expansion. Temporal-masking functions were obtained in both groups of listeners using a forward-masking paradigm in which the level of a 110-ms masker required to just mask a 10-ms fixed-level probe (5-10 dB SL) was measured as a function of the time delay between the masker offset and probe onset. At each of four probe frequencies (500, 1000, 2000, and 4000 Hz), temporal-masking functions were obtained using maskers that were 0.55, 1.0, and 1.15 times the probe frequency. The slopes and y-intercepts of the masking functions were not significantly different for listeners with real and simulated hearing loss. The y-intercepts were positively correlated with level of hearing loss while the slopes were negatively correlated. The ratio of the slopes obtained with the low-frequency maskers relative to the on-frequency maskers was similar for both groups of listeners and indicated a smaller compressive effect than that observed in normal-hearing listeners.

17.
Effects of age and mild hearing loss on speech recognition in noise   (total citations: 5; self-citations: 0; citations by others: 5)
Using an adaptive strategy, the effects of mild sensorineural hearing loss and adult listeners' chronological age on speech recognition in babble were evaluated. The signal-to-babble ratio required to achieve 50% recognition was measured for three speech materials presented at soft to loud conversational speech levels. Four groups of subjects were tested: (1) normal-hearing listeners less than 44 years of age, (2) subjects less than 44 years old with mild sensorineural hearing loss and excellent speech recognition in quiet, (3) normal-hearing listeners greater than 65 years of age, and (4) subjects greater than 65 years old with mild hearing loss and excellent performance in quiet. Groups 1 and 3, and groups 2 and 4 were matched on the basis of pure-tone thresholds, and thresholds for each of the three speech materials presented in quiet. In addition, groups 1 and 2 were similar in terms of mean age and age range, as were groups 3 and 4. Differences in performance in noise as a function of age were observed for both normal-hearing and hearing-impaired listeners despite equivalent performance in quiet. Subjects with mild hearing loss performed significantly worse than their normal-hearing counterparts. These results and their implications are discussed.
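An adaptive strategy that tracks the signal-to-babble ratio for 50% recognition can be realized with a one-down/one-up rule, which converges on exactly the 50% point. A minimal simulation follows; the psychometric function, its true SRT of -4 dB, the step size, and all other parameters are invented for illustration, not taken from the study.

```python
# 1-down/1-up adaptive track: SNR drops after a correct response and rises
# after an incorrect one, so the track converges on the 50%-correct SNR.
# The "listener" is simulated by a logistic psychometric function.
import math
import random

def p_correct(snr_db, srt_db=-4.0, slope=0.5):
    """Logistic psychometric function with 50% correct at srt_db."""
    return 1.0 / (1.0 + math.exp(-slope * (snr_db - srt_db)))

def adaptive_srt(step_db=2.0, trials=200, start_snr=10.0, seed=1):
    rng = random.Random(seed)
    snr, last_dir, reversal_snrs = start_snr, 0, []
    for _ in range(trials):
        correct = rng.random() < p_correct(snr)
        direction = -1 if correct else +1        # down if correct, up if wrong
        if last_dir and direction != last_dir:   # track reversed direction
            reversal_snrs.append(snr)
        last_dir = direction
        snr += direction * step_db
    tail = reversal_snrs[-8:]                    # average the final reversals
    return sum(tail) / len(tail)

estimate = adaptive_srt()   # should land near the simulated SRT of -4 dB
```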

18.
Weak consonants (e.g., stops) are more susceptible to noise than vowels, owing partially to their lower intensity. This raises the question whether hearing-impaired (HI) listeners are able to perceive (and utilize effectively) the high-frequency cues present in consonants. To answer this question, HI listeners were presented with clean (noise absent) weak consonants in otherwise noise-corrupted sentences. Results indicated that HI listeners received significant benefit in intelligibility (4 dB decrease in speech reception threshold) when they had access to clean consonant information. At extremely low signal-to-noise ratio (SNR) levels, however, HI listeners received only 64% of the benefit obtained by normal-hearing listeners. This lack of equitable benefit was investigated in Experiment 2 by testing the hypothesis that the high-frequency cues present in consonants were not audible to HI listeners. This was tested by selectively amplifying the noisy consonants while leaving the noisy sonorant sounds (e.g., vowels) unaltered. Listening tests indicated small (~10%), but statistically significant, improvements in intelligibility at low SNR conditions when the consonants were amplified in the high-frequency region. Selective consonant amplification provided reliable low-frequency acoustic landmarks that in turn facilitated a better lexical segmentation of the speech stream and contributed to the small improvement in intelligibility.

19.
The brain can restore missing speech segments using linguistic knowledge and context. The phonemic restoration effect is commonly quantified by the increase in intelligibility of interrupted speech when the silent gaps are filled with noise bursts. In normal hearing, the restoration effect is negatively correlated with the baseline scores with interrupted speech; listeners with poorer baseline show more benefit from restoration. A reanalysis of data from Başkent et al. [(2010). Hear. Res. 260, 54-62] showed that the correlations for listeners with mild and moderate hearing impairment differ from those for listeners with normal hearing. This analysis further shows that hearing impairment may affect top-down restoration of speech.

20.
The speech level of verbal information in public spaces should be determined to make it acceptable to as many listeners as possible, while simultaneously maintaining maximum intelligibility and considering the variation in the hearing levels of listeners. In the present study, the universally acceptable range of speech level in reverberant and quiet sound fields for both young listeners with normal hearing and aged listeners with hearing loss due to aging was investigated. Word intelligibility scores and listening difficulty ratings as a function of speech level were obtained by listening tests. The results of the listening tests clarified that (1) the universally acceptable ranges of speech level are from 60 to 70 dBA, from 56 to 61 dBA, from 52 to 67 dBA and from 58 to 63 dBA for the test sound fields with the reverberation times of 0.0, 0.5, 1.0 and 2.0 s, respectively, and (2) there is a speech level that falls within all of the universally acceptable ranges of speech level obtained in the present study; that speech level is around 60 dBA.
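The conclusion that a level around 60 dBA falls within all four ranges follows from simple interval intersection, which can be checked directly from the values in the abstract:

```python
# Intersection of the universally acceptable speech-level ranges reported
# for the four reverberation times (keys: reverberation time in seconds,
# values: (lower, upper) bounds in dBA, taken from the abstract).
ranges_dba = {0.0: (60, 70), 0.5: (56, 61), 1.0: (52, 67), 2.0: (58, 63)}

lo = max(low for low, high in ranges_dba.values())   # highest lower bound
hi = min(high for low, high in ranges_dba.values())  # lowest upper bound

# The common range is 60-61 dBA, consistent with "around 60 dBA".
assert lo <= hi
```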


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号