Similar Articles
1.
To examine spectral effects on declines in speech recognition in noise at high levels, word recognition for 18 young adults with normal hearing was assessed for low-pass-filtered speech and speech-shaped maskers or high-pass-filtered speech and speech-shaped maskers at three speech levels (70, 77, and 84 dB SPL) for each of three signal-to-noise ratios (+8, +3, and -2 dB). An additional low-level noise produced equivalent masked thresholds for all subjects. Pure-tone thresholds were measured in quiet and in all maskers. If word recognition was determined entirely by signal-to-noise ratio, and was independent of signal levels and the spectral content of speech and maskers, scores should remain constant with increasing level for both low- and high-frequency speech and maskers. Recognition of low-frequency speech in low-frequency maskers and high-frequency speech in high-frequency maskers decreased significantly with increasing speech level when signal-to-noise ratio was held constant. For low-frequency speech and speech-shaped maskers, the decline was attributed to nonlinear growth of masking which reduced the "effective" signal-to-noise ratio at high levels, similar to previous results for broadband speech and speech-shaped maskers. Masking growth and reduced "effective" signal-to-noise ratio accounted for some but not all the decline in recognition of high-frequency speech in high-frequency maskers.

2.
Under certain conditions, speech recognition in noise decreases above conversational levels when signal-to-noise ratio is held constant. The current study was undertaken to determine if nonlinear growth of masking and the subsequent reduction in "effective" signal-to-noise ratio accounts for this decline. Nine young adults with normal hearing listened to monosyllabic words at three levels in each of three levels of a masker shaped to match the speech spectrum. An additional low-level noise equated audibility by producing equivalent masked thresholds for all subjects. If word recognition was determined entirely by signal-to-noise ratio and was independent of overall speech and masker levels, scores at a given signal-to-noise ratio should remain constant with increasing level. Masked pure-tone thresholds measured in the speech-shaped maskers increased linearly with increasing masker level at lower frequencies but nonlinearly at higher frequencies, consistent with nonlinear growth of upward spread of masking that followed the peaks in the spectrum of the speech-shaped masker. Word recognition declined significantly with increasing level when signal-to-noise ratio was held constant, which was attributed to nonlinear growth of masking and reduced "effective" signal-to-noise ratio at high speech-shaped masker levels, as indicated by audibility estimates based on the Articulation Index.

3.
To examine spectral and threshold effects for speech and noise at high levels, recognition of nonsense syllables was assessed for low-pass-filtered speech and speech-shaped maskers and high-pass-filtered speech and speech-shaped maskers at three speech levels, with signal-to-noise ratio held constant. Subjects were younger adults with normal hearing and older adults with normal hearing but significantly higher average quiet thresholds. A broadband masker was always present to minimize audibility differences between subject groups and across presentation levels. For subjects with lower thresholds, the declines in recognition of low-frequency syllables in low-frequency maskers were attributed to nonlinear growth of masking which reduced "effective" signal-to-noise ratio at high levels, whereas the decline for subjects with higher thresholds was not fully explained by nonlinear masking growth. For all subjects, masking growth did not entirely account for declines in recognition of high-frequency syllables in high-frequency maskers at high levels. Relative to younger subjects with normal hearing and lower quiet thresholds, older subjects with normal hearing and higher quiet thresholds had poorer consonant recognition in noise, especially for high-frequency speech in high-frequency maskers. Age-related effects on thresholds and task proficiency may be determining factors in the recognition of speech in noise at high levels.

4.
Recognition of isolated monosyllabic words in quiet and recognition of key words in low- and high-context sentences in babble were measured in a large sample of older persons enrolled in a longitudinal study of age-related hearing loss. Repeated measures were obtained yearly or every 2 to 3 years. To control for concurrent changes in pure-tone thresholds and speech levels, speech-recognition scores were adjusted using an importance-weighted speech-audibility metric (AI). Linear-regression slope estimated the rate of change in adjusted speech-recognition scores. Recognition of words in quiet declined significantly faster with age than predicted by declines in speech audibility. As subjects aged, observed scores deviated increasingly from AI-predicted scores, but this effect did not accelerate with age. Rate of decline in word recognition was significantly faster for females than males and for females with high serum progesterone levels, whereas noise history had no effect. Rate of decline did not accelerate with age but increased with degree of hearing loss, suggesting that with more severe injury to the auditory system, impairments to auditory function other than reduced audibility resulted in faster declines in word recognition as subjects aged. Recognition of key words in low- and high-context sentences in babble did not decline significantly with age.

5.
To assess age-related differences in benefit from masker modulation, younger and older adults with normal hearing but not identical audiograms listened to nonsense syllables in each of two maskers: (1) a steady-state noise shaped to match the long-term spectrum of the speech, and (2) this same noise modulated by a 10-Hz square wave, resulting in an interrupted noise. An additional low-level broadband noise was always present which was shaped to produce equivalent masked thresholds for all subjects. This minimized differences in speech audibility due to differences in quiet thresholds among subjects. An additional goal was to determine if age-related differences in benefit from modulation could be explained by differences in thresholds measured in simultaneous and forward maskers. Accordingly, thresholds for 350-ms pure tones were measured in quiet and in each masker; thresholds for 20-ms signals in forward and simultaneous masking were also measured at selected signal frequencies. To determine if benefit from modulated maskers varied with masker spectrum and to provide a comparison with previous studies, a subgroup of younger subjects also listened in steady-state and interrupted noise that was not spectrally shaped. Articulation index (AI) values were computed and speech-recognition scores were predicted for steady-state and interrupted noise; predicted benefit from modulation was also determined. Masked thresholds of older subjects were slightly higher than those of younger subjects; larger age-related threshold differences were observed for short-duration than for long-duration signals. In steady-state noise, speech recognition for older subjects was poorer than for younger subjects, which was partially attributable to older subjects' slightly higher thresholds in these maskers. 
In interrupted noise, although predicted benefit was larger for older than younger subjects, scores improved more for younger than for older subjects, particularly at the higher noise level. This may be related to age-related increases in thresholds in steady-state noise and in forward masking, especially at higher frequencies. Benefit of interrupted maskers was larger for unshaped than for speech-shaped noise, consistent with AI predictions.

6.
Mathematical treatment of context effects in phoneme and word recognition
Percent recognition of phonemes and whole syllables, measured in both consonant-vowel-consonant (CVC) words and CVC nonsense syllables, is reported for normal young adults listening at four signal-to-noise (S/N) ratios. Similar data are reported for the recognition of words and whole sentences in three types of sentence: high predictability (HP) sentences, with both semantic and syntactic constraints; low predictability (LP) sentences, with primarily syntactic constraints; and zero predictability (ZP) sentences, with neither semantic nor syntactic constraints. The probability of recognition of speech units in context (p_c) is shown to be related to the probability of recognition without context (p_i) by the equation p_c = 1 - (1 - p_i)^k, where k is a constant. The factor k is interpreted as the amount by which the channels of statistically independent information are effectively multiplied when contextual constraints are added. Empirical values of k are approximately 1.3 and 2.7 for word and sentence context, respectively. In a second analysis, the probability of recognition of wholes (p_w) is shown to be related to the probability of recognition of the constituent parts (p_p) by the equation p_w = p_p^j, where j represents the effective number of statistically independent parts within a whole. The empirically determined mean values of j for nonsense materials are not significantly different from the number of parts in a whole, as predicted by the underlying theory. In CVC words, the value of j is constant at approximately 2.5. In the four-word HP sentences, it falls from approximately 2.5 to approximately 1.6 as the inherent recognition probability for words falls from 100% to 0%, demonstrating an increasing tendency to perceive HP sentences either as wholes, or not at all, as S/N ratio deteriorates.
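The two relationships in this abstract are simple enough to sketch directly; the function names below are ours, and the k values are the empirical means the abstract reports:

```python
def recognition_with_context(p_i, k):
    """Probability of recognizing a unit in context, from the
    no-context probability p_i: p_c = 1 - (1 - p_i)**k."""
    return 1.0 - (1.0 - p_i) ** k

def whole_from_parts(p_p, j):
    """Probability of recognizing a whole from its constituent parts:
    p_w = p_p**j, with j the effective number of independent parts."""
    return p_p ** j

# Example: word context (k ~ 1.3) vs. sentence context (k ~ 2.7)
# for a unit recognized 50% of the time without context.
print(recognition_with_context(0.5, 1.3))  # ~ 0.594
print(recognition_with_context(0.5, 2.7))  # ~ 0.846
```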

7.
Effects of age and mild hearing loss on speech recognition in noise
Using an adaptive strategy, the effects of mild sensorineural hearing loss and adult listeners' chronological age on speech recognition in babble were evaluated. The signal-to-babble ratio required to achieve 50% recognition was measured for three speech materials presented at soft to loud conversational speech levels. Four groups of subjects were tested: (1) normal-hearing listeners less than 44 years of age, (2) subjects less than 44 years old with mild sensorineural hearing loss and excellent speech recognition in quiet, (3) normal-hearing listeners greater than 65 years of age, and (4) subjects greater than 65 years old with mild hearing loss and excellent performance in quiet. Groups 1 and 3, and groups 2 and 4 were matched on the basis of pure-tone thresholds, and thresholds for each of the three speech materials presented in quiet. In addition, groups 1 and 2 were similar in terms of mean age and age range, as were groups 3 and 4. Differences in performance in noise as a function of age were observed for both normal-hearing and hearing-impaired listeners despite equivalent performance in quiet. Subjects with mild hearing loss performed significantly worse than their normal-hearing counterparts. These results and their implications are discussed.

8.
This study investigated the effects of age and hearing loss on perception of accented speech presented in quiet and noise. The relative importance of alterations in phonetic segments vs. temporal patterns in a carrier phrase with accented speech also was examined. English sentences recorded by a native English speaker and a native Spanish speaker, together with hybrid sentences that varied the native language of the speaker of the carrier phrase and the final target word of the sentence were presented to younger and older listeners with normal hearing and older listeners with hearing loss in quiet and noise. Effects of age and hearing loss were observed in both listening environments, but varied with speaker accent. All groups exhibited lower recognition performance for the final target word spoken by the accented speaker compared to that spoken by the native speaker, indicating that alterations in segmental cues due to accent play a prominent role in intelligibility. Effects of the carrier phrase were minimal. The findings indicate that recognition of accented speech, especially in noise, is a particularly challenging communication task for older people.

9.
The effects of intensity on monosyllabic word recognition were studied in adults with normal hearing and mild-to-moderate sensorineural hearing loss. The stimuli were bandlimited NU#6 word lists presented in quiet and talker-spectrum-matched noise. Speech levels ranged from 64 to 99 dB SPL and S/N ratios from 28 to -4 dB. In quiet, the performance of normal-hearing subjects remained essentially constant; in noise, at a fixed S/N ratio, it decreased as a linear function of speech level. Hearing-impaired subjects performed like normal-hearing subjects tested in noise when the data were corrected for the effects of audibility loss. From these and other results, it was concluded that: (1) speech intelligibility in noise decreases when speech levels exceed 69 dB SPL and the S/N ratio remains constant; (2) the effects of speech and noise level are synergistic; (3) the deterioration in intelligibility can be modeled as a relative increase in the effective masking level; (4) normal-hearing and hearing-impaired subjects are affected similarly by increased signal level when differences in speech audibility are considered; (5) the negative effects of increasing speech and noise levels on speech recognition are similar for all adult subjects, at least up to 80 years; and (6) the effective dynamic range of speech may be larger than the commonly assumed value of 30 dB.

10.
The accuracy of automatic speech recognition (ASR) systems is generally evaluated using corpora of grammatically sound read speech or natural spontaneous speech. This prohibits an accurate estimation of the performance of the acoustic modeling part of ASR because the language modeling performance is inherently integrated in the overall performance metric. In this work, ASR and human speech recognition (HSR) accuracies are compared for null grammar sentences in different signal-to-noise ratios and vocabulary sizes (1000, 2000, 4000, and 8000). The results shed light on differences between ASR and HSR in the relative significance of bottom-up word recognition and context awareness.

11.
It was investigated whether the model for context effects, developed earlier by Bronkhorst et al. [J. Acoust. Soc. Am. 93, 499-509 (1993)], can be applied to results of sentence tests, used for the evaluation of speech recognition. Data for two German sentence tests, that differed with respect to their semantic content, were analyzed. They had been obtained from normal-hearing listeners using adaptive paradigms in which the signal-to-noise ratio was varied. It appeared that the model can accurately reproduce the complete pattern of scores as a function of signal-to-noise ratio: both sentence recognition scores and proportions of incomplete responses. In addition, it is shown that the model can provide a better account of the relationship between average word recognition probability (p(e)) and sentence recognition probability (p(w)) than the relationship p(w) = p(e)^j, which has been used in previous studies. Analysis of the relationship between j and the model parameters shows that j is, nevertheless, a very useful parameter, especially when it is combined with the parameter j', which can be derived using the equivalent relationship p(w,0) = (1 - p(e))^(j'), where p(w,0) is the probability of recognizing none of the words in the sentence. These parameters not only provide complementary information on context effects present in the speech material, but they also can be used to estimate the model parameters. Because the model can be applied to both speech and printed text, an experiment was conducted in which part of the sentences was presented orthographically with 1-3 missing words. The results revealed a large difference between the values of the model parameters for the two presentation modes. This is probably due to the fact that, with speech, subjects can reduce the number of alternatives for a certain word using partial information that they have perceived (i.e., not only using the sentence context).
A method for mapping model parameters from one mode to the other is suggested, but the validity of this approach has to be confirmed with additional data.
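The two context parameters discussed in this abstract can be estimated from observed scores by inverting the stated relationships; this is a sketch with names of our choosing, not code from the study:

```python
import math

def estimate_j(p_word, p_sentence):
    """Invert p(w) = p(e)^j: effective number of statistically
    independent words per sentence."""
    return math.log(p_sentence) / math.log(p_word)

def estimate_j_prime(p_word, p_none):
    """Invert p(w,0) = (1 - p(e))^(j'), where p_none is the
    probability of recognizing none of the words."""
    return math.log(p_none) / math.log(1.0 - p_word)

# Example: if words score p(e) = 0.8 and whole sentences score
# 0.8**3, the sentence behaves as j = 3 independent words.
print(estimate_j(0.8, 0.8 ** 3))  # ~ 3.0
```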

12.
The goal of this study was to establish the ability of normal-hearing listeners to discriminate formant frequency in vowels in everyday speech. Vowel formant discrimination in syllables, phrases, and sentences was measured for high-fidelity (nearly natural) speech synthesized by STRAIGHT [Kawahara et al., Speech Commun. 27, 187-207 (1999)]. Thresholds were measured for changes in F1 and F2 for the vowels /I, epsilon, ae, lambda/ in /bVd/ syllables. Experimental factors manipulated included phonetic context (syllables, phrases, and sentences), sentence discrimination with the addition of an identification task, and word position. Results showed that neither longer phonetic context nor the addition of the identification task significantly affected thresholds, while thresholds for word final position showed significantly better performance than for either initial or middle position in sentences. Results suggest that an average of 0.37 barks is required for normal-hearing listeners to discriminate vowel formants in modest length sentences, elevated by 84% compared to isolated vowels. Vowel formant discrimination in several phonetic contexts was slightly elevated for STRAIGHT-synthesized speech compared to formant-synthesized speech stimuli reported in the study by Kewley-Port and Zheng [J. Acoust. Soc. Am. 106, 2945-2958 (1999)]. These elevated thresholds appeared related to greater spectral-temporal variability for high-fidelity speech produced by STRAIGHT than for formant-synthesized speech.

13.
The speech signal contains many acoustic properties that may contribute differently to spoken word recognition. Previous studies have demonstrated that the importance of properties present during consonants or vowels is dependent upon the linguistic context (i.e., words versus sentences). The current study investigated three potentially informative acoustic properties that are present during consonants and vowels for monosyllabic words and sentences. Natural variations in fundamental frequency were either flattened or removed. The speech envelope and temporal fine structure were also investigated by limiting the availability of these cues via noisy signal extraction. Thus, this study investigated the contribution of these acoustic properties, present during either consonants or vowels, to overall word and sentence intelligibility. Results demonstrated that all processing conditions displayed better performance for vowel-only sentences. Greater performance with vowel-only sentences remained, despite removing dynamic cues of the fundamental frequency. Word and sentence comparisons suggest that the speech envelope may be at least partially responsible for additional vowel contributions in sentences. Results suggest that speech information transmitted by the envelope is responsible, in part, for greater vowel contributions in sentences, but is not predictive for isolated words.

14.
The ability to recognize spoken words interrupted by silence was investigated with young normal-hearing listeners and older listeners with and without hearing impairment. Target words from the revised SPIN test by Bilger et al. [J. Speech Hear. Res. 27(1), 32-48 (1984)] were presented in isolation and in the original sentence context using a range of interruption patterns in which portions of speech were replaced with silence. The number of auditory "glimpses" of speech and the glimpse proportion (total duration glimpsed/word duration) were varied using a subset of the SPIN target words that ranged in duration from 300 to 600 ms. The words were presented in isolation, in the context of low-predictability (LP) sentences, and in high-predictability (HP) sentences. The glimpse proportion was found to have a strong influence on word recognition, with relatively little influence of the number of glimpses, glimpse duration, or glimpse rate. Although older listeners tended to recognize fewer interrupted words, there was considerable overlap in recognition scores across listener groups in all conditions, and all groups were affected by interruption parameters and context in much the same way.
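The glimpse proportion defined in this abstract is a simple ratio; a minimal sketch (the function name is ours):

```python
def glimpse_proportion(glimpse_durations_ms, word_duration_ms):
    """Glimpse proportion: total duration of audible speech 'glimpses'
    divided by the duration of the word."""
    return sum(glimpse_durations_ms) / word_duration_ms

# Example: three 50-ms glimpses of a 400-ms word give the same
# proportion as a single 150-ms glimpse of the same word.
print(glimpse_proportion([50, 50, 50], 400))  # 0.375
print(glimpse_proportion([150], 400))         # 0.375
```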

15.
These experiments examined how high presentation levels influence speech recognition for high- and low-frequency stimuli in noise. Normally hearing (NH) and hearing-impaired (HI) listeners were tested. In Experiment 1, high- and low-frequency bandwidths yielding 70%-correct word recognition in quiet were determined at levels associated with broadband speech at 75 dB SPL. In Experiment 2, broadband and band-limited sentences (based on passbands measured in Experiment 1) were presented at this level in speech-shaped noise filtered to the same frequency bandwidths as targets. Noise levels were adjusted to produce approximately 30%-correct word recognition. Frequency bandwidths and signal-to-noise ratios supporting criterion performance in Experiment 2 were tested at 75, 87.5, and 100 dB SPL in Experiment 3. Performance tended to decrease as levels increased. For NH listeners, this "rollover" effect was greater for high-frequency and broadband materials than for low-frequency stimuli. For HI listeners, the 75- to 87.5-dB increase improved signal audibility for high-frequency stimuli and rollover was not observed. However, the 87.5- to 100-dB increase produced qualitatively similar results for both groups: scores decreased most for high-frequency stimuli and least for low-frequency materials. Predictions of speech intelligibility by quantitative methods such as the Speech Intelligibility Index may be improved if rollover effects are modeled as frequency dependent.

16.
This experiment assessed the benefits of suppression and the impact of reduced or absent suppression on speech recognition in noise. Psychophysical suppression was measured in forward masking using tonal maskers and suppressors and band limited noise maskers and suppressors. Subjects were 10 younger and 10 older adults with normal hearing, and 10 older adults with cochlear hearing loss. For younger subjects with normal hearing, suppression measured with noise maskers increased with masker level and was larger at 2.0 kHz than at 0.8 kHz. Less suppression was observed for older than younger subjects with normal hearing. There was little evidence of suppression for older subjects with cochlear hearing loss. Suppression measured with noise maskers and suppressors was larger in magnitude and more prevalent than suppression measured with tonal maskers and suppressors. The benefit of suppression to speech recognition in noise was assessed by obtaining scores for filtered consonant-vowel syllables as a function of the bandwidth of a forward masker. Speech-recognition scores in forward maskers should be higher than those in simultaneous maskers given that forward maskers are less effective than simultaneous maskers. If suppression also mitigated the effects of the forward masker and resulted in an improved signal-to-noise ratio, scores should decrease less in forward masking as forward-masker bandwidth increased, and differences between scores in forward and simultaneous maskers should increase, as was observed for younger subjects with normal hearing. Less or no benefit of suppression to speech recognition in noise was observed for older subjects with normal hearing or hearing loss. In general, as suppression measured with tonal signals increased, the combined benefit of forward masking and suppression to speech recognition in noise also increased.

17.
A group of 29 elderly subjects between 60.0 and 83.7 years of age at the beginning of the study, and whose hearing loss was not greater than moderate, was tested twice, an average of 5.27 years apart. The tests measured pure-tone thresholds, word recognition in quiet, and understanding of speech with various types of distortion (low-pass filtering, time compression) or interference (single speaker, babble noise, reverberation). Performance declined consistently and significantly between the two testing phases. In addition, the variability of speech understanding measures increased significantly between testing phases, though the variability of audiometric measurements did not. A right-ear superiority was observed but this lateral asymmetry did not increase between testing phases. Comparison of the elderly subjects with a group of young subjects with normal hearing shows that the decline of speech understanding measures accelerated significantly relative to the decline in audiometric measures in the seventh to ninth decades of life. On the assumption that speech understanding depends linearly on age and audiometric variables, there is evidence that this linear relationship changes with age, suggesting that not only the accuracy but also the nature of speech understanding evolves with age.

18.
For 140 male subjects (20 per decade between the ages 20 and 89) and 72 female subjects (20 per decade between 60 and 89, and 12 for the age interval 90-96), the monaural speech-reception threshold (SRT) for sentences was investigated in quiet and at four noise levels (22.2, 37.5, 52.5, and 67.5 dBA noise with long-term average speech spectra). The median SRT as well as the quartiles are given as a function of age. The data are described in terms of a model published earlier [J. Acoust. Soc. Am. 63, 533-549 (1978)]. According to this model every hearing loss for speech (SHL) is interpreted as the sum of a loss class A (attenuation), characterized by a reduction of the levels of both speech signal and noise, and a loss class D (distortion), comparable with a decrease in signal-to-noise ratio. Both SHL(A+D) (hearing loss in quiet) and SHL(D) (hearing loss at high noise levels) increase progressively above the age of 50 (reaching typical values of 30 and 6 dB, respectively, at age 85). The spread of SHL(D) as a function of SHL(A+D) for the individual ears is so large (sigma = 2.7 dB) that subjects with the same hearing loss for speech in quiet may differ considerably in their ability to understand speech in noise. The data confirm that the hearing handicap of many elderly subjects manifests itself primarily in a noisy environment. Acceptable noise levels in rooms used by the aged must be 5 to 10 dB lower than those for normal-hearing subjects.
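The attenuation-plus-distortion model described above can be sketched as a power sum of a quiet-limited branch (elevated by the A+D loss) and a noise-limited branch (elevated by the D loss). This is a minimal illustration, not the published model itself; the normal-hearing constants (20 dBA quiet SRT, -6 dB critical S/N) are illustrative assumptions:

```python
import math

def srt_sketch(noise_level, srt_quiet=20.0, snr_crit=-6.0,
               shl_ad=0.0, shl_d=0.0):
    """Speech-reception threshold (dBA) vs. noise level: power sum of
    a quiet-limited branch (shifted by SHL(A+D)) and a noise-limited
    branch (shifted by SHL(D))."""
    quiet_branch = srt_quiet + shl_ad
    noise_branch = noise_level + snr_crit + shl_d
    return 10.0 * math.log10(10 ** (quiet_branch / 10.0)
                             + 10 ** (noise_branch / 10.0))

# In loud noise the threshold tracks the noise branch, so a 6-dB
# distortion loss (typical at age 85 per the abstract) costs ~6 dB
# of S/N even when the quiet loss is negligible.
print(round(srt_sketch(80.0), 1))               # quiet loss irrelevant
print(round(srt_sketch(80.0, shl_d=6.0), 1))    # ~6 dB worse
```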

19.
Speech can remain intelligible for listeners with normal hearing when processed by narrow bandpass filters that transmit only a small fraction of the audible spectrum. Two experiments investigated the basis for the high intelligibility of narrowband speech. Experiment 1 confirmed reports that everyday English sentences can be recognized accurately (82%-98% words correct) when filtered at center frequencies of 1500, 2100, and 3000 Hz. However, narrowband low predictability (LP) sentences were less accurately recognized than high predictability (HP) sentences (20% lower scores), and excised narrowband words were even less intelligible than LP sentences (a further 23% drop). While experiment 1 revealed similar levels of performance for narrowband and broadband sentences at conversational speech levels, experiment 2 showed that speech reception thresholds were substantially (>30 dB) poorer for narrowband sentences. One explanation for this increased disparity between narrowband and broadband speech at threshold (compared to conversational speech levels) is that spectral components in the sloping transition bands of the filters provide important cues for the recognition of narrowband speech, but these components become inaudible as the signal level is reduced. Experiment 2 also showed that performance was degraded by the introduction of a speech masker (a single competing talker). The elevation in threshold was similar for narrowband and broadband speech (11 dB, on average), but because the narrowband sentences required considerably higher sound levels to reach their thresholds in quiet compared to broadband sentences, their target-to-masker ratios were very different (+23 dB for narrowband sentences and -12 dB for broadband sentences). As in experiment 1, performance was better for HP than LP sentences. 
The LP-HP difference was larger for narrowband than broadband sentences, suggesting that context provides greater benefits when speech is distorted by narrow bandpass filtering.

20.
Psychometric functions for detection of temporal gaps in wideband noise were measured in a "yes/no" paradigm from normal-hearing young and aged subjects with closely matched audiograms. The effects of noise-burst duration, gap location, and uncertainty of gap location were tested. A typical psychometric function obtained in this study featured a steep slope, which was independent of most experimental conditions as well as age. However, gap thresholds were generally improved with increasing duration of the noise burst for both young and aged subjects. Gap location and uncertainty had no significant effects on the thresholds for the young subjects. For the aged subjects, whenever the gap was sufficiently away from the onset or offset of the noise burst, detectability was robust despite uncertainty about the gap location. Significant differences between young and aged subjects could be observed only when the gap was very close to the signal onset and offset.

