Similar Articles
20 similar articles found (search time: 15 ms)
1.
Consonant recognition in quiet using the Nonsense Syllable Test (NST) [Resnick et al., J. Acoust. Soc. Am. Suppl. 1 58, S114 (1975)] was investigated in 62 normal-hearing subjects 20 to 65 years of age at their most comfortable listening levels (MCLs) and at 8 dB above and below MCL. Although overall consonant recognition performance was high (as expected for normal listeners), the effects of age decade, relative presentation level, and NST subset were all significant, as was the age × level interaction. The age × NST subset and age × subset × level interactions were nonsignificant. These findings suggest that consonant recognition decreases with normal aging, particularly below MCL. However, the relative perceptual difficulty of the seven subtests is the same across age groups. Confusion matrices were similar across levels and age groups. Percent information transmitted for several consonant features was calculated from the confusion matrices. Older subjects showed decrements in performance primarily for the features recognized relatively less accurately by the younger subjects. The results suggest that normal-hearing older individuals listening in quiet have decreased consonant recognition ability, but that their confusions are similar to those of younger persons.
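The "percent information transmitted" measure mentioned in abstract 1 is Miller and Nicely's relative transmitted information T(x;y)/H(x), computed from a confusion matrix collapsed onto one feature. A minimal sketch in Python; the 2×2 voicing matrix is hypothetical, not taken from the study:

```python
import math

def transmitted_info(matrix):
    """Relative information transmitted T(x;y)/H(x) for a confusion
    matrix (rows = stimuli, cols = responses), Miller-Nicely style."""
    n = sum(sum(row) for row in matrix)
    px = [sum(row) / n for row in matrix]                      # stimulus probabilities
    py = [sum(matrix[i][j] for i in range(len(matrix))) / n    # response probabilities
          for j in range(len(matrix[0]))]
    t = 0.0
    for i, row in enumerate(matrix):
        for j, c in enumerate(row):
            if c:
                pij = c / n
                t += pij * math.log2(pij / (px[i] * py[j]))    # mutual information (bits)
    hx = -sum(p * math.log2(p) for p in px if p)               # stimulus entropy (bits)
    return t / hx

# hypothetical voicing confusion matrix: rows/cols = (voiceless, voiced)
m = [[90, 10],
     [10, 90]]
print(round(transmitted_info(m), 3))  # prints 0.531
```

A perfectly diagonal matrix yields 1.0 (all feature information transmitted); chance-level confusions yield values near 0.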

2.
The goal of this study was to determine the extent to which the difficulty experienced by impaired listeners in understanding noisy speech can be explained on the basis of elevated tone-detection thresholds. Twenty-one impaired ears of 15 subjects, spanning a variety of audiometric configurations with average hearing losses to 75 dB, were tested for reception of consonants in a speech-spectrum noise. Speech level, noise level, and frequency-gain characteristic were varied to generate a range of listening conditions. Results for impaired listeners were compared to those of normal-hearing listeners tested under the same conditions with extra noise added to approximate the impaired listeners' detection thresholds. Results for impaired and normal listeners were also compared on the basis of articulation indices. Consonant recognition by this sample of impaired listeners was generally comparable to that of normal-hearing listeners with similar threshold shifts listening under the same conditions. When listening conditions were equated for articulation index, there was no clear dependence of consonant recognition on average hearing loss. Assuming that the primary consequence of the threshold simulation in normals is loss of audibility (as opposed to suprathreshold discrimination or resolution deficits), it is concluded that the primary source of difficulty in listening in noise for listeners with moderate or milder hearing impairments, aside from the noise itself, is the loss of audibility.

3.
Spectral peak resolution was investigated in normal hearing (NH), hearing impaired (HI), and cochlear implant (CI) listeners. The task involved discriminating between two rippled noise stimuli in which the frequency positions of the log-spaced peaks and valleys were interchanged. The ripple spacing was varied adaptively from 0.13 to 11.31 ripples/octave, and the minimum ripple spacing at which a reversal in peak and trough positions could be detected was determined as the spectral peak resolution threshold for each listener. Spectral peak resolution was best, on average, in NH listeners, poorest in CI listeners, and intermediate for HI listeners. There was a significant relationship between spectral peak resolution and both vowel and consonant recognition in quiet across the three listener groups. The results indicate that the degree of spectral peak resolution required for accurate vowel and consonant recognition in quiet backgrounds is around 4 ripples/octave, and that spectral peak resolution poorer than around 1-2 ripples/octave may result in highly degraded speech recognition. These results suggest that efforts to improve spectral peak resolution for HI and CI users may lead to improved speech recognition.
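The rippled-noise stimuli in abstract 3 have spectral envelopes that are periodic on a log-frequency axis, and interchanging peaks and valleys amounts to a half-cycle phase shift of that envelope. A sketch under assumed parameters (the sinusoidal dB envelope and 30-dB peak-to-valley depth are illustrative, not taken from the study):

```python
import math

def ripple_envelope(f, ripples_per_octave, phase=0.0, depth_db=30.0):
    """Spectral envelope (dB re: mean) of a log-spaced rippled noise at
    frequency f (Hz). Interchanging peaks and valleys corresponds to
    shifting the phase by pi (half a ripple cycle)."""
    return (depth_db / 2) * math.sin(
        2 * math.pi * ripples_per_octave * math.log2(f) + phase)

# "standard" and "inverted" ripples at the same frequency differ only
# by a half-cycle phase shift, so their dB envelopes are mirror images
std = ripple_envelope(1000.0, 2.0)
inv = ripple_envelope(1000.0, 2.0, phase=math.pi)
```

Because the envelope is a function of log2(f), it repeats exactly once per octave times the ripple density, which is what makes "ripples/octave" the natural unit for the adaptive threshold.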

4.
Speech intelligibility (PB words) in traffic-like noise was investigated in a laboratory situation simulating three common listening situations: indoors at 1 and 4 m and outdoors at 1 m. The maximum noise levels still permitting 75% intelligibility of PB words in these three listening situations were also defined. A total of 269 persons were examined. Forty-six had normal hearing, 90 a presbycusis-type hearing loss, 95 a noise-induced hearing loss, and 38 a conductive hearing loss. In the indoor situation the majority of the groups with impaired hearing retained good speech intelligibility in 40 dB(A) masking noise. Lowering the noise level to less than 40 dB(A) resulted in a minor, usually insignificant, improvement in speech intelligibility. Listeners with normal hearing maintained good speech intelligibility in the outdoor listening situation at noise levels up to 60 dB(A), without lip-reading (i.e., without access to non-auditory information). For groups with impaired hearing due to age and/or noise, representing 8% of the population in Sweden, the noise level outdoors had to be lowered to less than 50 dB(A) in order to achieve good speech intelligibility at 1 m without lip-reading.

5.
Perceptual coherence, the process by which the individual elements of complex sounds are bound together, was examined in adult listeners with longstanding childhood hearing losses, listeners with adult-onset hearing losses, and listeners with normal hearing. It was hypothesized that perceptual coherence would vary in strength between the groups due to their substantial differences in hearing history. Bisyllabic words produced by three talkers as well as comodulated three-tone complexes served as stimuli. In the first task, the second formant of each word was isolated and presented for recognition. In the second task, an isolated formant was paired with an intact word and listeners indicated whether or not the isolated second formant was a component of the intact word. In the third task, the middle component of the three-tone complex was presented in the same manner. For the speech stimuli, results indicate normal perceptual coherence in the listeners with adult-onset hearing loss but significantly weaker coherence in the listeners with childhood hearing losses. No differences were observed across groups for the nonspeech stimuli. These results suggest that perceptual coherence is relatively unaffected by hearing loss acquired during adulthood but appears to be impaired when hearing loss is present in early childhood.

6.
This study examined the effect of noise on the identification of four synthetic speech continua (/ra/-/la/, /wa/-/ja/, /i/-/u/, and say-stay) by adults with cochlear implants (CIs) and adults with normal-hearing (NH) sensitivity in quiet and noise. Significant group-by-SNR interactions were found for endpoint identification accuracy for all continua except /i/-/u/. The CI listeners showed the least NH-like identification functions for the /ra/-/la/ and /wa/-/ja/ continua. In a second experiment, NH adults identified four- and eight-band cochlear implant simulations of the four continua, to examine whether group differences in frequency selectivity could account for the group differences in the first experiment. Number of bands and SNR interacted significantly for /ra/-/la/, /wa/-/ja/, and say-stay endpoint identification; the strongest effects were found for the /ra/-/la/ and say-stay continua. Results suggest that the speech features most vulnerable to misperception in noise by listeners with CIs are those whose acoustic cues are rapidly changing spectral patterns, like the formant transitions in the /wa/-/ja/ and /ra/-/la/ continua. However, the group differences in the first experiment cannot be wholly attributed to frequency selectivity differences, as the number of bands in the second experiment affected performance differently than suggested by the group differences in the first experiment.

7.
This study compared the ability of 5 listeners with normal hearing and 12 listeners with moderate to moderately severe sensorineural hearing loss to discriminate complementary two-component complex tones (TCCTs). The TCCTs consist of two pure-tone components (f1 and f2) that differ in frequency by Δf (Hz) and in level by ΔL (dB). In one of the complementary tones, the level of component f1 exceeds the level of component f2 by the increment ΔL; in the other tone, the level of component f2 exceeds that of component f1 by ΔL. Five stimulus conditions were included in this study: fc = 1000 Hz, ΔL = 3 dB; fc = 1000 Hz, ΔL = 1 dB; fc = 2000 Hz, ΔL = 3 dB; fc = 2000 Hz, ΔL = 1 dB; and fc = 4000 Hz, ΔL = 3 dB. In listeners with normal hearing, discrimination of complementary TCCTs (with a fixed ΔL and a variable Δf) is described by an inverted-U-shaped psychometric function in which discrimination improves as Δf increases, is (nearly) perfect over a range of Δf's, and then decreases again as Δf increases further. In contrast, group psychometric functions for listeners with hearing loss are shifted to the right, such that above-chance performance occurs at larger values of Δf than in listeners with normal hearing. Group psychometric functions for listeners with hearing loss do not show a decrease in performance at the largest values of Δf included in this study. Decreased TCCT discrimination is evident when listeners with hearing loss are compared to listeners with normal hearing at both equal SPLs and equal sensation levels. In both groups of listeners, TCCT discrimination is significantly worse at high center frequencies. Results from normal-hearing listeners are generally consistent with a temporal model of TCCT discrimination. Listeners with hearing loss may have deficits in using phase locking in the TCCT discrimination task and so may rely more on place cues.

8.
The purpose of this study was to explore the potential advantages, both theoretical and applied, of preserving low-frequency acoustic hearing in cochlear implant patients. Several hypotheses are presented that predict that residual low-frequency acoustic hearing along with electric stimulation for high frequencies will provide an advantage over traditional long-electrode cochlear implants for the recognition of speech in competing backgrounds. A simulation experiment in normal-hearing subjects demonstrated a clear advantage for preserving low-frequency residual acoustic hearing for speech recognition in a background of other talkers, but not in steady noise. Three subjects with an implanted "short-electrode" cochlear implant and preserved low-frequency acoustic hearing were also tested on speech recognition in the same competing backgrounds and compared to a larger group of traditional cochlear implant users. Each of the three short-electrode subjects performed better than any of the traditional long-electrode implant subjects for speech recognition in a background of other talkers, but not in steady noise, in general agreement with the simulation studies. When compared to a subgroup of traditional implant users matched according to speech recognition ability in quiet, the short-electrode patients showed a 9-dB advantage in the multitalker background. These experiments provide strong preliminary support for retaining residual low-frequency acoustic hearing in cochlear implant patients. The results are consistent with the idea that better perception of voice pitch, which can aid in separating voices in a background of other talkers, was responsible for this advantage.

9.
Speech recognition in noise is harder in second (L2) than in first (L1) languages. This could be because noise disrupts speech processing more in L2 than in L1, or because L1 listeners recover better even though the disruption is equivalent. Two similar prior studies produced discrepant results: equivalent noise effects for L1 and L2 (Dutch) listeners, versus larger effects for L2 (Spanish) than L1. To explain this, the latter experiment was presented to listeners from the former population. Larger noise effects on consonant identification emerged for L2 (Dutch) than L1 listeners, suggesting that task factors rather than L2 population differences underlie the discrepancy in results.

10.
This study investigated the effects of age and hearing loss on perception of accented speech presented in quiet and noise. The relative importance of alterations in phonetic segments vs. temporal patterns in a carrier phrase with accented speech also was examined. English sentences recorded by a native English speaker and a native Spanish speaker, together with hybrid sentences that varied the native language of the speaker of the carrier phrase and the final target word of the sentence, were presented to younger and older listeners with normal hearing and older listeners with hearing loss in quiet and noise. Effects of age and hearing loss were observed in both listening environments, but varied with speaker accent. All groups exhibited lower recognition performance for the final target word spoken by the accented speaker compared to that spoken by the native speaker, indicating that alterations in segmental cues due to accent play a prominent role in intelligibility. Effects of the carrier phrase were minimal. The findings indicate that recognition of accented speech, especially in noise, is a particularly challenging communication task for older people.

11.
Cochlear nonlinearity was estimated over a wide range of center frequencies and levels in listeners with normal hearing, using a forward-masking method. For a fixed low-level probe, the masker level required to mask the probe was measured as a function of the masker-probe interval, to produce a temporal masking curve (TMC). TMCs were measured for probe frequencies of 500, 1000, 2000, 4000, and 8000 Hz, and for masker frequencies 0.5, 0.7, 0.9, 1.0 (on frequency), 1.1, and 1.6 times the probe frequency. Across the range of probe frequencies, the TMCs for on-frequency maskers showed two or three segments with clearly distinct slopes. If it is assumed that the rate of decay of the internal effect of the masker is constant across level and frequency, the variations in the slopes of the TMCs can be attributed to variations in cochlear compression. Compression-ratio estimates for on-frequency maskers were between 3:1 and 5:1 across the range of probe frequencies. Compression did not decrease at low frequencies. The slopes of the TMCs for the lowest frequency probe (500 Hz) did not change with masker frequency. This suggests that compression extends over a wide range of stimulus frequencies relative to characteristic frequency in the apical region of the cochlea.
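Under the assumption stated in abstract 11 (a level- and frequency-invariant internal decay of masking), the compression ratio can be estimated from the ratio of the on-frequency TMC slope to a linear off-frequency reference slope: the on-frequency masker is compressed at the probe place, so its level must grow faster with masker-probe interval. A sketch with hypothetical TMC data:

```python
def fit_slope(intervals_ms, masker_levels_db):
    """Least-squares slope (dB/ms) of one TMC segment."""
    n = len(intervals_ms)
    mx = sum(intervals_ms) / n
    my = sum(masker_levels_db) / n
    num = sum((x - mx) * (y - my) for x, y in zip(intervals_ms, masker_levels_db))
    den = sum((x - mx) ** 2 for x in intervals_ms)
    return num / den

# hypothetical TMC segments: masker level vs masker-probe interval
t = [20.0, 40.0, 60.0, 80.0]
linear_ref = [60.0, 64.0, 68.0, 72.0]    # off-frequency (linear) reference
on_freq    = [60.0, 76.0, 92.0, 108.0]   # on-frequency: steeper, because the
                                         # masker is compressed at the probe place
compression_ratio = fit_slope(t, on_freq) / fit_slope(t, linear_ref)
print(compression_ratio)  # prints 4.0, i.e., a 4:1 compression estimate
```

The illustrative 4:1 value falls within the 3:1 to 5:1 range the abstract reports for on-frequency maskers.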

12.
Three experiments were conducted to determine whether listeners with a sensorineural hearing loss exhibited greater than normal amounts of masking at frequencies above the frequency of the masker. Excess masking was defined as the difference (in dB) between the masked thresholds actually obtained from a hearing-impaired listener and the expected thresholds calculated for the same individual. The expected thresholds were the power sum of the listener's thresholds in quiet and the average masked thresholds obtained from a group of normal-hearing subjects at the test frequency. Hearing-impaired listeners, with thresholds in quiet ranging from approximately 35-70 dB SPL (at test frequencies between 500-3000 Hz), displayed approximately 12-15 dB of maximum excess masking. The maximum amount of excess masking occurred in the region where the threshold in quiet of the hearing-impaired listener and the average normal masked threshold were equal. These findings indicate that listeners with a sensorineural hearing loss display one form of reduced frequency selectivity (i.e., abnormal upward spread of masking) even when their thresholds in quiet are taken into account.
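The expected thresholds in abstract 12 are formed as a power sum of two dB values, i.e., the levels are converted to intensities, added, and converted back to dB. A sketch with hypothetical threshold values:

```python
import math

def db_power_sum(*levels_db):
    """Power sum of levels given in dB: dB -> intensity -> sum -> dB."""
    return 10 * math.log10(sum(10 ** (l / 10) for l in levels_db))

# expected masked threshold = power sum of the listener's quiet threshold
# and the average normal-hearing masked threshold (values hypothetical)
expected = db_power_sum(55.0, 55.0)  # equal components -> +3 dB (58.01 dB)
excess = 68.0 - expected             # obtained threshold minus expected
```

This also explains why maximum excess masking appeared where the two component thresholds were equal: that is where the power sum differs most (+3 dB) from simply taking the larger component.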

13.
Articulation index (AI) theory was used to evaluate stop-consonant recognition of normal-hearing listeners and listeners with high-frequency hearing loss. From results reported in a companion article [Dubno et al., J. Acoust. Soc. Am. 85, 347-354 (1989)], a transfer function relating the AI to stop-consonant recognition was established, and a frequency importance function was determined for the nine stop-consonant-vowel syllables used as test stimuli. The calculations included the rms and peak levels of the speech that had been measured in 1/3-octave bands; the internal noise was estimated from the thresholds for each subject. The AI model was then used to predict performance for the hearing-impaired listeners. A majority of the AI predictions for the hearing-impaired subjects fell within ±2 standard deviations of the normal-hearing listeners' results. However, as observed in previous data, the AI tended to overestimate performance of the hearing-impaired listeners. The accuracy of the predictions decreased with the magnitude of high-frequency hearing loss. Thus, with the exception of performance for listeners with severe high-frequency hearing loss, the results suggest that poorer speech recognition among hearing-impaired listeners results from reduced audibility within critical spectral regions of the speech stimuli.

14.
A loss of cochlear compression may underlie many of the difficulties experienced by hearing-impaired listeners. Two behavioral forward-masking paradigms that have been used to estimate the magnitude of cochlear compression are growth of masking (GOM) and temporal masking (TM). The aim of this study was to determine whether these two measures produce within-subjects results that are consistent across a range of signal frequencies and, if so, to compare them in terms of reliability or efficiency. GOM and TM functions were measured in a group of five normal-hearing and five hearing-impaired listeners at signal frequencies of 1000, 2000, and 4000 Hz. Compression values were derived from the masking data and confidence intervals were constructed around these estimates. Both measures produced comparable estimates of compression, but both measures have distinct advantages and disadvantages, so that the more appropriate measure depends on factors such as the frequency region of interest and the degree of hearing loss. Because of the long testing times needed, neither measure is suitable for clinical use in its current form.

15.
The present study examined the application of the articulation index (AI) as a predictor of the speech-recognition performance of normal and hearing-impaired listeners with and without hearing protection. The speech-recognition scores of 12 normal and 12 hearing-impaired subjects were measured for a wide range of conditions designed to be representative of those in the workplace. Conditions included testing in quiet, in two types of background noise (white versus speech spectrum), at three signal-to-noise ratios (+5, 0, −5 dB), and in three conditions of protection (unprotected, earplugs, earmuffs). The mean results for all 21 listening conditions and both groups of subjects were accurately described by the AI. Moreover, a single transfer function relating performance to the AI could describe all the data from both groups.

16.
The purpose of this study is to specify the contribution of certain frequency regions to consonant place perception for normal-hearing listeners and listeners with high-frequency hearing loss, and to characterize the differences in stop-consonant place perception among these listeners. Stop-consonant recognition and error patterns were examined at various speech-presentation levels and under conditions of low- and high-pass filtering. Subjects included 18 normal-hearing listeners and a homogeneous group of 10 young, hearing-impaired individuals with high-frequency sensorineural hearing loss. Differential filtering effects on consonant place perception were consistent with the spectral composition of acoustic cues. Differences in consonant recognition and error patterns between normal-hearing and hearing-impaired listeners were observed when the stimulus bandwidth included regions of threshold elevation for the hearing-impaired listeners. Thus place-perception differences among listeners are, for the most part, associated with stimulus bandwidths corresponding to regions of hearing loss.

17.
English consonant recognition in undegraded and degraded listening conditions was compared for listeners whose primary language was either Japanese or American English. There were ten subjects in each of the two groups, termed the non-native (Japanese) and the native (American) subjects, respectively. The Modified Rhyme Test was degraded either by a babble of voices (S/N = -3 dB) or by room reverberation (reverberation time T = 1.2 s). The Japanese subjects performed at a lower level than the American subjects in both noise and reverberation, although the performance difference in the undegraded, quiet condition was relatively small. There was no difference between the scores obtained in noise and in reverberation for either group. A limited error analysis revealed some differences in the types of errors made by the two groups of listeners. Implications of the results are discussed in terms of the effects of degraded listening conditions on non-native listeners' speech perception.

18.
The study was designed to test the validity of the American Academy of Ophthalmology and Otolaryngology's (AAOO) 26-dB average hearing threshold level at 500, 1000, and 2000 Hz as a predictor of hearing handicap. To investigate this criterion the performance of a normal-hearing group was compared with that of two groups, categorized according to the AAOO [Trans. Am. Acad. Ophthal. Otolaryng. 63, 236-238 (1959)] guidelines as having no handicap. The latter groups, however, had significant hearing losses in the frequencies above 2000 Hz. Mean hearing threshold levels for 3000, 4000, and 6000 Hz were 54 dB for group II and 63 dB for group III. Two kinds of speech stimuli were presented at an A-weighted sound level of 60 dB in quiet and in three different levels of noise. The resulting speech recognition scores were significantly lower for the hearing-impaired groups than for the normal-hearing group on both kinds of speech materials and in all three noise conditions. Mean scores for group III were significantly lower than those of the normal-hearing group, even in the quiet condition. Speech recognition scores showed significantly better correlation with hearing levels for frequency combinations including frequencies above 2000 Hz than for the 500-, 1000-, and 2000-Hz combination. On the basis of these results the author recommends that the 26-dB fence should be somewhat lower, and that frequencies above 2000 Hz should be included in any scheme for evaluating hearing handicap.

19.
The classic [MN55] confusion matrix experiment (16 consonants, white-noise masker) was repeated using computerized procedures, similar to those of Phatak and Allen (2007) ["Consonant and vowel confusions in speech-weighted noise," J. Acoust. Soc. Am. 121, 2312-2316]. The consonant scores in white noise fall into three sets: a low-error set [/m/, /n/], an average-error set [/p/, /t/, /k/, /s/, /ʃ/, /d/, /g/, /z/, /ʒ/], and a high-error set [/f/, /θ/, /b/, /v/, /ð/]. The consonant confusions match those from MN55, except for the highly asymmetric voicing confusions of fricatives, biased in favor of voiced consonants. Masking noise can not only reduce the recognition of a consonant, but also perceptually morph it into another consonant. There is significant and systematic variability in the scores and confusion patterns of different utterances of the same consonant, which can be characterized as (a) confusion heterogeneity, where the competitors in the confusion groups of a consonant vary, and (b) threshold variability, where the confusion threshold [i.e., the signal-to-noise ratio (SNR) and score at which the confusion group is formed] varies. The average consonant error, and the errors for most of the individual consonants and consonant sets, can be approximated as exponential functions of the articulation index (AI). An AI based on the peak-to-rms ratios of speech can explain the SNR differences across experiments.

20.
The purpose of this experiment was to determine the applicability of the Articulation Index (AI) model for characterizing the speech recognition performance of listeners with mild-to-moderate hearing loss. Performance-intensity functions were obtained from five normal-hearing listeners and 11 hearing-impaired listeners using a closed-set nonsense syllable test for two frequency responses (uniform and high-frequency emphasis). For each listener, the fitting constant Q of the nonlinear transfer function relating AI and speech recognition was estimated. Results indicated that the function mapping AI onto performance was approximately the same for normal and hearing-impaired listeners with mild-to-moderate hearing loss and high speech recognition scores. For a hearing-impaired listener with poor speech recognition ability, the AI procedure was a poor predictor of performance. The AI procedure as presently used is inadequate for predicting performance of individuals with reduced speech recognition ability and should be used conservatively in applications predicting optimal or acceptable frequency response characteristics for hearing-aid amplification systems.
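Abstract 20 does not give the exact transfer function used; a Fletcher-type form s = 1 - 10^(-AI/Q), with Q acting as a listener-proficiency constant, is assumed here purely for illustration, along with synthetic listener data:

```python
def predicted_score(ai, q):
    """Illustrative Fletcher-type AI transfer function (an assumption,
    not necessarily the study's exact form): proportion correct
    s = 1 - 10**(-AI/Q), where a smaller Q means a steeper function."""
    return 1.0 - 10 ** (-ai / q)

def fit_q(ai_values, scores, q_grid=None):
    """Grid-search the fitting constant Q that minimizes squared error
    between the model and a listener's measured scores."""
    if q_grid is None:
        q_grid = [0.05 * k for k in range(1, 101)]  # Q in (0, 5]
    return min(q_grid,
               key=lambda q: sum((predicted_score(a, q) - s) ** 2
                                 for a, s in zip(ai_values, scores)))

# hypothetical performance-intensity data expressed as (AI, score) pairs,
# generated from a synthetic listener with Q = 0.4
ai = [0.1, 0.3, 0.5, 0.7]
scores = [predicted_score(a, 0.4) for a in ai]
print(fit_q(ai, scores))  # prints 0.4
```

A listener whose data cannot be fit by any Q under such a one-parameter family is exactly the "poor predictor" case the abstract describes.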


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  ICP license: 京ICP备09084417号