Similar Literature
20 similar documents found.
1.
Spectral peak resolution was investigated in normal hearing (NH), hearing impaired (HI), and cochlear implant (CI) listeners. The task involved discriminating between two rippled noise stimuli in which the frequency positions of the log-spaced peaks and valleys were interchanged. The ripple spacing was varied adaptively from 0.13 to 11.31 ripples/octave, and the minimum ripple spacing at which a reversal in peak and trough positions could be detected was determined as the spectral peak resolution threshold for each listener. Spectral peak resolution was best, on average, in NH listeners, poorest in CI listeners, and intermediate for HI listeners. There was a significant relationship between spectral peak resolution and both vowel and consonant recognition in quiet across the three listener groups. The results indicate that the degree of spectral peak resolution required for accurate vowel and consonant recognition in quiet backgrounds is around 4 ripples/octave, and that spectral peak resolution poorer than around 1-2 ripples/octave may result in highly degraded speech recognition. These results suggest that efforts to improve spectral peak resolution for HI and CI users may lead to improved speech recognition.
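For illustration, the rippled-noise stimulus pair described here can be sketched as two spectral envelopes whose log-spaced peaks and valleys are interchanged. The amplitude law, ripple depth, and band edges below are assumptions for the sketch, not the study's stimulus code.

```python
# Sketch of a log-spaced spectral-ripple stimulus pair (standard vs. peak/valley
# inverted), as used in spectral peak resolution tasks. The exact amplitude law,
# ripple depth, and band edges are illustrative assumptions.
import numpy as np

def rippled_spectrum(freqs_hz, ripples_per_octave, inverted=False, depth_db=30.0):
    """Return a linear magnitude spectrum with sinusoidal ripples on a log-frequency axis."""
    log_f = np.log2(freqs_hz / freqs_hz[0])          # octaves above the lowest band edge
    phase = np.pi if inverted else 0.0               # pi shift swaps peaks and valleys
    ripple_db = (depth_db / 2.0) * np.sin(2 * np.pi * ripples_per_octave * log_f + phase)
    return 10.0 ** (ripple_db / 20.0)

freqs = np.linspace(100.0, 5000.0, 2048)             # assumed analysis band, 100-5000 Hz
standard = rippled_spectrum(freqs, ripples_per_octave=2.0, inverted=False)
reversed_ = rippled_spectrum(freqs, ripples_per_octave=2.0, inverted=True)

# Imposing these envelopes on broadband noise yields the two intervals of a
# discrimination trial; the adaptive track then varies ripples_per_octave.
print(standard[:3], reversed_[:3])
```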

2.
Speech-reception thresholds (SRT) were measured for 17 normal-hearing and 17 hearing-impaired listeners in conditions simulating free-field situations with between one and six interfering talkers. The stimuli, speech and noise with identical long-term average spectra, were recorded with a KEMAR manikin in an anechoic room and presented to the subjects through headphones. The noise was modulated using the envelope fluctuations of the speech. Several conditions were simulated with the speaker always in front of the listener and the maskers either also in front or positioned in a symmetrical or asymmetrical configuration around the listener. Results show that the hearing-impaired listeners have significantly poorer performance than the normal-hearing listeners in all conditions. The mean SRT differences between the groups range from 4.2 to 10 dB. It appears that the modulations in the masker act as an important cue for the normal-hearing listeners, who experience up to 5 dB of release from masking, while being hardly beneficial for the hearing-impaired listeners. The gain occurring when maskers are moved from the frontal position to positions around the listener varies from 1.5 to 8 dB for the normal-hearing listeners, and from 1 to 6.5 dB for the hearing-impaired listeners. It depends strongly on the number of maskers and their positions, but less on hearing impairment. The difference between the SRTs for binaural and best-ear listening (the "cocktail party effect") is approximately 3 dB in all conditions for both the normal-hearing and the hearing-impaired listeners.
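A minimal sketch of how a masker can be given the envelope fluctuations of speech, as described above: the Hilbert-envelope extraction, the 32-Hz smoothing cutoff, and the stand-in "speech" signal are assumptions, not the recording procedure used in the study.

```python
# Sketch: impose the envelope fluctuations of a speech signal on a noise masker, so the
# masker has speech-like modulations while keeping a noise carrier. The Hilbert-envelope
# method and the 32-Hz smoothing cutoff are assumptions for illustration.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def speech_modulated_noise(speech, fs, smooth_hz=32.0):
    envelope = np.abs(hilbert(speech))                       # instantaneous amplitude
    b, a = butter(2, smooth_hz / (fs / 2.0), btype="low")    # smooth the envelope
    envelope = filtfilt(b, a, envelope)
    noise = np.random.randn(len(speech))                     # stationary carrier
    modulated = noise * envelope
    # match overall RMS to the speech so the long-term level is preserved
    return modulated * (np.sqrt(np.mean(speech ** 2)) / np.sqrt(np.mean(modulated ** 2)))

fs = 16000
t = np.arange(fs) / fs
speech_like = np.sin(2 * np.pi * 150 * t) * (1 + 0.8 * np.sin(2 * np.pi * 4 * t))  # stand-in signal
masker = speech_modulated_noise(speech_like, fs)
print(masker.shape)
```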

3.
A loss of cochlear compression may underlie many of the difficulties experienced by hearing-impaired listeners. Two behavioral forward-masking paradigms that have been used to estimate the magnitude of cochlear compression are growth of masking (GOM) and temporal masking (TM). The aim of this study was to determine whether these two measures produce within-subjects results that are consistent across a range of signal frequencies and, if so, to compare them in terms of reliability or efficiency. GOM and TM functions were measured in a group of five normal-hearing and five hearing-impaired listeners at signal frequencies of 1000, 2000, and 4000 Hz. Compression values were derived from the masking data and confidence intervals were constructed around these estimates. Both measures produced comparable estimates of compression, but both measures have distinct advantages and disadvantages, so that the more appropriate measure depends on factors such as the frequency region of interest and the degree of hearing loss. Because of the long testing times needed, neither measure is suitable for clinical use in its current form.
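One common way to turn GOM data into a compression estimate is to compare the slopes of on- and off-frequency growth-of-masking functions; the sketch below fits straight lines to made-up placeholder data purely to illustrate that derivation, and is not the analysis used in the study.

```python
# Sketch: estimating a cochlear compression exponent from growth-of-masking (GOM) slopes.
# A common reading is that the ratio of the off-frequency to the on-frequency GOM slope
# approximates the compression exponent. The data points are made-up placeholders.
import numpy as np

signal_levels_db = np.array([40, 50, 60, 70, 80])
on_freq_masker_db = np.array([42, 52, 62, 72, 82])    # roughly linear growth (slope ~1)
off_freq_masker_db = np.array([55, 58, 61, 64, 67])   # shallow growth (compressed response)

on_slope = np.polyfit(signal_levels_db, on_freq_masker_db, 1)[0]
off_slope = np.polyfit(signal_levels_db, off_freq_masker_db, 1)[0]

compression_exponent = off_slope / on_slope
print(f"estimated compression exponent ~ {compression_exponent:.2f} dB/dB")
```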

4.
Speech intelligibility (PB words) in traffic-like noise was investigated in a laboratory situation simulating three common listening situations, indoors at 1 and 4 m and outdoors at 1 m. The maximum noise levels still permitting 75% intelligibility of PB words in these three listening situations were also defined. A total of 269 persons were examined. Forty-six had normal hearing, 90 a presbycusis-type hearing loss, 95 a noise-induced hearing loss and 38 a conductive hearing loss. In the indoor situation the majority of the groups with impaired hearing retained good speech intelligibility in 40 dB(A) masking noise. Lowering the noise level to less than 40 dB(A) resulted in a minor, usually insignificant, improvement in speech intelligibility. Listeners with normal hearing maintained good speech intelligibility in the outdoor listening situation at noise levels up to 60 dB(A), without lip-reading (i.e., without using non-auditory information). For groups with impaired hearing due to age and/or noise, representing 8% of the population in Sweden, the noise level outdoors had to be lowered to less than 50 dB(A) in order to achieve good speech intelligibility at 1 m without lip-reading.

5.
Articulation index (AI) theory was used to evaluate stop-consonant recognition of normal-hearing listeners and listeners with high-frequency hearing loss. From results reported in a companion article [Dubno et al., J. Acoust. Soc. Am. 85, 347-354 (1989)], a transfer function relating the AI to stop-consonant recognition was established, and a frequency importance function was determined for the nine stop-consonant-vowel syllables used as test stimuli. The calculations included the rms and peak levels of the speech that had been measured in 1/3 octave bands; the internal noise was estimated from the thresholds for each subject. The AI model was then used to predict performance for the hearing-impaired listeners. A majority of the AI predictions for the hearing-impaired subjects fell within ±2 standard deviations of the normal-hearing listeners' results. However, as observed in previous data, the AI tended to overestimate performance of the hearing-impaired listeners. The accuracy of the predictions decreased with the magnitude of high-frequency hearing loss. Thus, with the exception of performance for listeners with severe high-frequency hearing loss, the results suggest that poorer speech recognition among hearing-impaired listeners results from reduced audibility within critical spectral regions of the speech stimuli.
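The band-audibility bookkeeping behind an AI calculation of this kind can be sketched as follows. The band centers, importance weights, speech levels, and the 30-dB dynamic-range rule are generic textbook-style assumptions, not the values from the companion article.

```python
# Sketch of a band-audibility articulation index (AI) calculation: in each band, audibility
# is the portion of an assumed 30-dB speech dynamic range lying above the listener's
# effective threshold (hearing threshold or noise, whichever is higher), and bands are
# combined with frequency-importance weights. All numbers are illustrative placeholders.
import numpy as np

band_centers_hz = np.array([500, 1000, 2000, 4000])
importance = np.array([0.2, 0.3, 0.3, 0.2])            # assumed weights; sum to 1
speech_peak_db = np.array([60, 58, 52, 45])            # assumed band peak levels
effective_threshold_db = np.array([30, 35, 55, 70])    # max(hearing threshold, noise) per band

band_audibility = np.clip((speech_peak_db - effective_threshold_db) / 30.0, 0.0, 1.0)
ai = float(np.sum(importance * band_audibility))
print(f"AI = {ai:.2f}")
```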

6.
Cues to the voicing distinction for final /f,s,v,z/ were assessed for 24 impaired- and 11 normal-hearing listeners. In base-line tests the listeners identified the consonants in recorded /dʌC/ syllables. To assess the importance of various cues, tests were conducted of the syllables altered by deletion and/or temporal adjustment of segments containing acoustic patterns related to the voicing distinction for the fricatives. The results showed that decreasing the duration of /ʌ/ preceding /v/ or /z/, and lengthening the /ʌ/ preceding /f/ or /s/, considerably reduced the correctness of voicing perception for the hearing-impaired group, while showing no effect for the normal-hearing group. For the normals, voicing perception deteriorated for /f/ and /s/ when the frications were deleted from the syllables, and for /v/ and /z/ when the vowel offsets were removed from the syllables with duration-adjusted vowels and deleted frications. We conclude that some hearing-impaired listeners rely to a greater extent on vowel duration as a voicing cue than do normal-hearing listeners.

7.
Consonant recognition in quiet and in noise was investigated as a function of age for essentially normal hearing listeners 21-68 years old, using the nonsense syllable test (NST) [Resnick et al., J. Acoust. Soc. Am. Suppl. 1 58, S114 (1975)]. The subjects audited the materials in quiet and at S/N ratios of +10 and +5 dB at their most comfortable listening levels (MCLs). The MCLs approximated conversational speech levels and were not significantly different between the age groups. The effects of age group, S/N condition (quiet, S/N +10, S/N +5) and NST subsets, and the S/N condition × subset interaction were all significant. Interactions involving the age factor were nonsignificant. Confusion matrices were similar across age groups, including the directions of errors between the most frequently confused phonemes. Also, the older subjects experienced performance decrements on the same features that were least accurately recognized by the younger subjects. The findings suggest that essentially normal older persons listening in quiet and in noise experience decreased consonant recognition ability, but that the nature of their phoneme confusions is similar to that of younger individuals. Even though the older subjects met the same selection criteria as did younger ones, there was an expected shift upward in auditory thresholds with age within these limits. Sensitivity at 8000 Hz was correlated with NST scores in noise when controlling for age, but the correlation between performance in noise and age was nonsignificant when controlling for the 8000-Hz threshold. These associations seem to implicate the phenomena underlying the increased 8000-Hz thresholds in the speech recognition problems of the elderly, and appear to support the concept of peripheral auditory deterioration with aging even among those with essentially normal hearing.
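The partial-correlation logic in the last sentences (threshold vs. score controlling for age, and age vs. score controlling for threshold) follows the standard first-order partial correlation formula; the sketch below writes it out with placeholder data, not the study's measurements.

```python
# Sketch of the first-order partial correlation used in analyses like the one above:
# r(xy.z) = (r_xy - r_xz * r_yz) / sqrt((1 - r_xz^2) * (1 - r_yz^2)).
# The data arrays are random placeholders, not values from the study.
import numpy as np

def partial_corr(x, y, z):
    r_xy = np.corrcoef(x, y)[0, 1]
    r_xz = np.corrcoef(x, z)[0, 1]
    r_yz = np.corrcoef(y, z)[0, 1]
    return (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz ** 2) * (1 - r_yz ** 2))

rng = np.random.default_rng(0)
age = rng.uniform(21, 68, 40)
thresh_8k = 5 + 0.5 * age + rng.normal(0, 5, 40)         # thresholds drift up with age
nst_noise = 90 - 0.4 * thresh_8k + rng.normal(0, 3, 40)  # scores track the threshold

print(partial_corr(thresh_8k, nst_noise, age))   # threshold vs. score, age held constant
print(partial_corr(age, nst_noise, thresh_8k))   # age vs. score, threshold held constant
```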

8.
Upward spreading of masking, measured in terms of absolute masked threshold, is greater in hearing-impaired listeners than in listeners with normal hearing. The purpose of this study was to make further observations on upward-masked thresholds and speech recognition in noise in elderly listeners. Two age groups were used: One group consisted of listeners who were more than 60 years old, and the second group consisted of listeners who were less than 36 years old. Both groups had listeners with normal hearing as well as listeners with mild to moderate sensorineural loss. The masking paradigm consisted of a continuous low-pass-filtered (1000-Hz) noise, which was mixed with the output of a self-tracking, sweep-frequency Bekesy audiometer. Thresholds were measured in quiet and with maskers at 70 and 90 dB SPL. The upward-masked thresholds were similar for young and elderly hearing-impaired listeners. A few elderly listeners had lower upward-masked thresholds compared with the young control group; however, their on-frequency masked thresholds were nearly identical to the control group. A significant correlation was found between upward-masked thresholds and the Speech Perception in Noise (SPIN) test in elderly listeners.

9.
An articulation index calculation procedure developed for use with individual normal-hearing listeners [C. Pavlovic and G. Studebaker, J. Acoust. Soc. Am. 75, 1606-1612 (1984)] was modified to account for the deterioration in suprathreshold speech processing produced by sensorineural hearing impairment. Data from four normal-hearing and four hearing-impaired subjects were used to relate the loss in hearing sensitivity to the deterioration in speech processing in quiet and in noise. The new procedure only requires hearing threshold measurements and consists of the following two modifications of the original AI procedure of Pavlovic and Studebaker (1984): The speech and noise spectrum densities are integrated over bandwidths which are, when expressed in decibels, larger than the critical bandwidths by 10% of the hearing loss. This is in contrast to the unmodified procedure where integration is performed over critical bandwidths. The contribution of each frequency to the AI is the product of its contribution in the unmodified AI procedure and a "speech desensitization factor." The desensitization factor is specified as a function of the hearing loss. The predictive accuracies of both the unmodified and the modified calculation procedures were assessed by comparing the expected and observed speech recognition scores of four hearing-impaired subjects under various conditions of speech filtering and noise masking. The modified procedure appears accurate for general applications. In contrast, the unmodified procedure appears accurate only for applications where results obtained under various conditions on a single listener are compared to each other.
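The two modifications described above can be sketched band by band as follows. The linear-ramp desensitization function is a placeholder, since the abstract states only that the factor depends on hearing loss, not its exact form; the critical bandwidth, importance, and audibility values are likewise illustrative.

```python
# Sketch of the two modifications described above, applied to one band of an AI sum:
# (1) the integration bandwidth, expressed in dB, is widened by 10% of the band hearing
#     loss relative to the critical bandwidth, and
# (2) the band's contribution is multiplied by a "speech desensitization factor".
# The linear-ramp desensitization below is a placeholder, not the published function.
import numpy as np

def widened_bandwidth_hz(critical_bw_hz, hearing_loss_db):
    """Bandwidth whose dB value exceeds the critical bandwidth's dB value by 10% of the loss."""
    return critical_bw_hz * 10.0 ** (0.10 * hearing_loss_db / 10.0)

def desensitization(hearing_loss_db):
    """Placeholder speech desensitization factor: 1 at no loss, 0 at 100 dB HL."""
    return float(np.clip(1.0 - hearing_loss_db / 100.0, 0.0, 1.0))

hearing_loss = 40.0
print(widened_bandwidth_hz(160.0, hearing_loss))   # ~160-Hz critical band widened to ~400 Hz
print(0.25 * 0.8 * desensitization(hearing_loss))  # importance * audibility * factor
```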

10.
Perceptual coherence, the process by which the individual elements of complex sounds are bound together, was examined in adult listeners with longstanding childhood hearing losses, listeners with adult-onset hearing losses, and listeners with normal hearing. It was hypothesized that perceptual coherence would vary in strength between the groups due to their substantial differences in hearing history. Bisyllabic words produced by three talkers as well as comodulated three-tone complexes served as stimuli. In the first task, the second formant of each word was isolated and presented for recognition. In the second task, an isolated formant was paired with an intact word and listeners indicated whether or not the isolated second formant was a component of the intact word. In the third task, the middle component of the three-tone complex was presented in the same manner. For the speech stimuli, results indicate normal perceptual coherence in the listeners with adult-onset hearing loss but significantly weaker coherence in the listeners with childhood hearing losses. No differences were observed across groups for the nonspeech stimuli. These results suggest that perceptual coherence is relatively unaffected by hearing loss acquired during adulthood but appears to be impaired when hearing loss is present in early childhood.

11.
This study investigated the relationship between audibility and predictions of speech recognition for children and adults with normal hearing. The Speech Intelligibility Index (SII) is used to quantify the audibility of speech signals and can be applied to transfer functions to predict speech recognition scores. Although the SII is used clinically with children, relatively few studies have evaluated SII predictions of children's speech recognition directly. Children have required more audibility than adults to reach maximum levels of speech understanding in previous studies. Furthermore, children may require greater bandwidth than adults for optimal speech understanding, which could influence frequency-importance functions used to calculate the SII. Speech recognition was measured for 116 children and 19 adults with normal hearing. Stimulus bandwidth and background noise level were varied systematically in order to evaluate speech recognition as predicted by the SII and derive frequency-importance functions for children and adults. Results suggested that children required greater audibility to reach the same level of speech understanding as adults. However, differences in performance between adults and children did not vary across frequency bands.

12.
The present study examined the benefits of providing amplified speech to the low- and mid-frequency regions of listeners with various degrees of sensorineural hearing loss. Nonsense syllables were low-pass filtered at various cutoff frequencies and consonant recognition was measured as the bandwidth of the signal was increased. In addition, error patterns were analyzed to determine the types of speech cues that were, or were not, transmitted to the listeners. For speech frequencies of 2800 Hz and below, a positive benefit of amplified speech was observed in every case, although the benefit provided was very often less than that observed in normal-hearing listeners who received the same increase in speech audibility. There was no dependence of this benefit upon the degree of hearing loss. Error patterns suggested that the primary difficulty that hearing-impaired individuals have in using amplified speech is due to their poor ability to perceive the place of articulation of consonants, followed by a reduced ability to perceive manner information.
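The bandwidth manipulation described here amounts to low-pass filtering a token at a series of cutoff frequencies; a minimal sketch follows, in which the Butterworth filter order and the cutoff list are assumptions rather than the study's values.

```python
# Sketch of the bandwidth manipulation described above: low-pass filter a speech token at
# several cutoff frequencies so recognition can be measured as bandwidth grows.
# The filter order and cutoff series are assumptions for illustration only.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def lowpass(signal, fs, cutoff_hz, order=6):
    sos = butter(order, cutoff_hz / (fs / 2.0), btype="low", output="sos")
    return sosfiltfilt(sos, signal)

fs = 16000
token = np.random.randn(fs)                     # stand-in for a recorded nonsense syllable
for cutoff in (700, 1400, 2100, 2800):          # illustrative cutoff series in Hz
    filtered = lowpass(token, fs, cutoff)
    print(cutoff, round(float(np.sqrt(np.mean(filtered ** 2))), 3))
```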

13.
Abnormalities in cochlear function usually cause broadening of the auditory filters, which reduces speech intelligibility. An attempt was made to apply a spectral enhancement algorithm to improve the identification of Polish vowels by subjects with cochlear-based hearing impairment. The identification scores for natural (unprocessed) vowels and spectrally enhanced (processed) vowels were measured for hearing-impaired subjects. It was found that spectral enhancement improves vowel scores by about 10% for these subjects; however, a wide variation in individual performance among subjects was observed. The overall vowel identification scores were 85% for natural vowels and 96% for spectrally enhanced vowels.
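The abstract does not specify the enhancement algorithm, so the sketch below shows only the general idea of spectral-contrast enhancement: expanding the difference between a short-term log spectrum and its smoothed envelope so formant peaks are sharpened. It is a generic illustration, not the processing used in the study.

```python
# Generic spectral-contrast enhancement sketch: expand the difference between a short-term
# log spectrum and its smoothed version, deepening valleys and sharpening formant-like
# peaks. This illustrates the idea only; it is not the algorithm used in the study.
import numpy as np
from scipy.ndimage import uniform_filter1d

def enhance_spectrum(magnitude, gain=2.0, smooth_bins=15):
    log_mag = 20.0 * np.log10(magnitude + 1e-12)
    smooth = uniform_filter1d(log_mag, size=smooth_bins)     # coarse spectral envelope
    enhanced_db = smooth + gain * (log_mag - smooth)         # expand peak-to-valley contrast
    return 10.0 ** (enhanced_db / 20.0)

freq_bins = np.linspace(0, 4000, 512)
vowel_like = (1.0 + 0.8 * np.exp(-((freq_bins - 700) / 80) ** 2)
                  + 0.6 * np.exp(-((freq_bins - 1100) / 90) ** 2))   # two formant-like peaks
print(enhance_spectrum(vowel_like).max())
```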

14.
The speech-reception threshold (SRT) for sentences presented in a fluctuating interfering background sound of 80 dBA is measured for 20 normal-hearing listeners and 20 listeners with sensorineural hearing impairment. The interfering sounds range from steady-state noise, via modulated noise, to a single competing voice. Two voices are used, one male and one female, and the spectrum of the masker is shaped according to these voices. For both voices, the SRT is measured both in noise spectrally shaped according to the target voice and in noise shaped according to the other voice. The results show that, for normal-hearing listeners, the SRT for sentences in modulated noise is 4-6 dB lower than for steady-state noise; for sentences masked by a competing voice, this difference is 6-8 dB. For listeners with moderate sensorineural hearing loss, elevated thresholds are obtained without an appreciable effect of masker fluctuations. The implications of these results for estimating a hearing handicap in everyday conditions are discussed. By using the articulation index (AI), it is shown that hearing-impaired individuals perform more poorly than suggested by the loss of audibility for some parts of the speech signal. Finally, three mechanisms are discussed that contribute to the absence of unmasking by masker fluctuations in hearing-impaired listeners. The low sensation level at which the impaired listeners receive the masker seems a major determinant. The second and third factors are reduced temporal resolution and a reduction in comodulation masking release, respectively.

15.
This study compared the ability of 5 listeners with normal hearing and 12 listeners with moderate to moderately severe sensorineural hearing loss to discriminate complementary two-component complex tones (TCCTs). The TCCTs consist of two pure-tone components (f1 and f2) which differ in frequency by Δf (Hz) and in level by ΔL (dB). In one of the complementary tones, the level of component f1 is greater than the level of component f2 by the increment ΔL; in the other tone, the level of component f2 exceeds that of component f1 by ΔL. Five stimulus conditions were included in this study: fc = 1000 Hz, ΔL = 3 dB; fc = 1000 Hz, ΔL = 1 dB; fc = 2000 Hz, ΔL = 3 dB; fc = 2000 Hz, ΔL = 1 dB; and fc = 4000 Hz, ΔL = 3 dB. In listeners with normal hearing, discrimination of complementary TCCTs (with a fixed ΔL and a variable Δf) is described by an inverted U-shaped psychometric function in which discrimination improves as Δf increases, is (nearly) perfect for a range of Δf values, and then decreases again as Δf increases further. In contrast, group psychometric functions for listeners with hearing loss are shifted to the right such that above-chance performance occurs at larger values of Δf than in listeners with normal hearing. Group psychometric functions for listeners with hearing loss do not show a decrease in performance at the largest values of Δf included in this study. Decreased TCCT discrimination is evident when listeners with hearing loss are compared to listeners with normal hearing at both equal SPLs and at equal sensation levels. In both groups of listeners, TCCT discrimination is significantly worse at high center frequencies. Results from normal-hearing listeners are generally consistent with a temporal model of TCCT discrimination. Listeners with hearing loss may have deficits in using phase locking in the TCCT discrimination task and so may rely more on place cues in TCCT discrimination.
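A complementary TCCT pair as described above can be synthesized directly from the definition; in the sketch below, placing the two components symmetrically about fc, the duration, the sampling rate, and the normalization are assumptions not given in the abstract.

```python
# Sketch: synthesize a complementary pair of two-component complex tones (TCCTs).
# The two tones share components f1 and f2 separated by delta_f; in one tone f1 is
# delta_L dB above f2, in the other f2 is delta_L dB above f1. Symmetric placement
# about fc, duration, sampling rate, and normalization are assumptions.
import numpy as np

def tcct(fc_hz, delta_f_hz, delta_l_db, higher="f1", fs=44100, dur_s=0.3):
    f1, f2 = fc_hz - delta_f_hz / 2.0, fc_hz + delta_f_hz / 2.0
    t = np.arange(int(fs * dur_s)) / fs
    step = 10.0 ** (delta_l_db / 20.0)                      # level increment as a ratio
    a1, a2 = (step, 1.0) if higher == "f1" else (1.0, step)
    x = a1 * np.sin(2 * np.pi * f1 * t) + a2 * np.sin(2 * np.pi * f2 * t)
    return x / np.max(np.abs(x))

tone_a = tcct(1000.0, delta_f_hz=40.0, delta_l_db=3.0, higher="f1")
tone_b = tcct(1000.0, delta_f_hz=40.0, delta_l_db=3.0, higher="f2")   # complementary tone
print(tone_a.shape, tone_b.shape)
```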

16.
A group of prelinguistically hearing-impaired children, between 7 and 11 years of age, were trained in the perception of vowel duration and place, the fricative /s/, and manner of articulation (/m/ vs /b/ and /s/ vs /t/) distinctions, using information provided by a multiple-channel electrotactile aid (Tickle Talker) and through aided hearing. Training was provided in the tactile-plus-aided hearing (TA) and tactile (T) conditions. Speech feature recognition tests were conducted in the TA, T, and aided hearing (A) conditions during pretraining, training, and post-training phases. Test scores in the TA and T conditions were significantly greater than scores in the A condition for all tests, suggesting that perception of these features was improved when the tactile aid was worn. Test scores in the training and post-training phases were significantly greater than in the pretraining phase, suggesting that the training provided was responsible for the improvement in feature perception. Statistical analyses demonstrated a significant interaction between the main effects of condition and phase, suggesting that training improved perception in the TA and T conditions but not in the A condition. Post-training and training test scores were similar, suggesting that the perceptual skills acquired during training were retained after the removal of training. Recognition of trained features improved for trained as well as for untrained words.

17.
Annoyance ratings in speech intelligibility tests at 45 dB(A) and 55 dB(A) traffic noise were investigated in a laboratory study. Subjects were chosen according to their hearing acuity to be representative of 70-year-old men and women, and of noise-induced hearing losses typical for a great number of industrial workers. These groups were compared with normal hearing subjects of the same sex and, when possible, the same age. The subjects rated their annoyance on an open 100 mm scale. Significant correlations were found between annoyance expressed in millimetres and speech intelligibility in percent when all subjects were taken as one sample. Speech intelligibility was also calculated from physical measurements of speech and noise by using the articulation index method. Observed and calculated speech intelligibility scores are compared and discussed. Also treated is the estimation of annoyance by traffic noise at moderate noise levels via speech intelligibility scores.

18.
Binaural performance was measured as a function of stimulus frequency for four impaired listeners, each with bilaterally symmetric audiograms. The subjects had various degrees and configurations of audiometric losses: two had high-frequency, sensorineural losses; one had a flat sensorineural loss; and one had multiple sclerosis with normal audiometric thresholds. Just noticeable differences (jnd's) in interaural time, interaural intensity, and interaural correlation as well as detection thresholds for NoSo and NoSπ conditions were obtained for narrow-band noise stimuli at octave frequencies from 250 to 4000 Hz. Performance of the impaired listeners was generally poorer than that of normal-hearing listeners, although it was comparable to normal in a few instances. The patterns of binaural performance showed no apparent relation to the audiometric patterns; even the two subjects with similar degree and configuration of hearing loss had very different binaural performance, in both the level and the frequency dependence of their performance. The frequency dependence of performance on individual tests is irregular enough that one cannot confidently interpolate between octaves. In addition, it appears that no subset of the measurements is adequate to characterize performance in the rest of the measurements, with the exception that, within limits, interaural correlation discrimination and NoSπ detection performance are related.

19.
The purpose of this experiment was to determine the applicability of the Articulation Index (AI) model for characterizing the speech recognition performance of listeners with mild-to-moderate hearing loss. Performance-intensity functions were obtained from five normal-hearing listeners and 11 hearing-impaired listeners using a closed-set nonsense syllable test for two frequency responses (uniform and high-frequency emphasis). For each listener, the fitting constant Q of the nonlinear transfer function relating AI and speech recognition was estimated. Results indicated that the function mapping AI onto performance was approximately the same for normal and hearing-impaired listeners with mild-to-moderate hearing loss and high speech recognition scores. For a hearing-impaired listener with poor speech recognition ability, the AI procedure was a poor predictor of performance. The AI procedure as presently used is inadequate for predicting performance of individuals with reduced speech recognition ability and should be used conservatively in applications predicting optimal or acceptable frequency response characteristics for hearing-aid amplification systems.
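Estimating a fitting constant Q amounts to a one-parameter nonlinear fit of an AI-to-score transfer function to a listener's performance-intensity data. The sketch below assumes the commonly cited form score = 1 - 10^(-AI/Q) purely for illustration; the study's exact transfer function and data are not reproduced, and the (AI, score) pairs are placeholders.

```python
# Sketch: fitting the constant Q of a nonlinear AI-to-score transfer function.
# The functional form score = 1 - 10**(-AI / Q) is assumed here for illustration;
# the AI values and proportion-correct scores are placeholders, not study data.
import numpy as np
from scipy.optimize import curve_fit

def transfer(ai, q):
    return 1.0 - 10.0 ** (-ai / q)

ai_values = np.array([0.1, 0.2, 0.35, 0.5, 0.7, 0.9])
scores = np.array([0.30, 0.52, 0.72, 0.84, 0.93, 0.97])   # placeholder proportions correct

(q_fit,), _ = curve_fit(transfer, ai_values, scores, p0=[0.4])
print(f"fitted Q = {q_fit:.2f}")
```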

20.