Similar Documents
20 similar documents retrieved.
1.
In the present study, speech-recognition performance was measured in four hearing-impaired subjects and twelve normal hearers. The normal hearers were divided into four groups of three subjects each. Speech-recognition testing for the normal hearers was carried out in a background of spectrally shaped noise, shaped so that its masked thresholds matched the quiet thresholds of one of the hearing-impaired subjects. The question addressed in this study is whether normal hearers with a hearing loss simulated through a shaped masking noise demonstrate speech-recognition difficulties similar to those of listeners with actual hearing impairment. Regarding overall percent-correct scores, the results indicated that two of the four hearing-impaired subjects performed better than their corresponding subgroup of noise-masked normal hearers, whereas the other two impaired listeners performed like the noise-masked normal listeners. A gross analysis of the types of errors made suggested that subjects with actual and simulated losses frequently made different types of errors.
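A hearing loss of this kind is often simulated by choosing, at each audiometric frequency, a masking-noise spectrum level just sufficient to raise a normal listener's masked threshold to the impaired listener's quiet threshold. The sketch below illustrates the idea using the critical-ratio approximation; the threshold and critical-ratio values are invented for illustration and are not data from the study.

```python
# Illustrative sketch: derive a masking-noise spectrum that raises a
# normal-hearing listener's masked thresholds to a target (impaired) audiogram.
# Assumes the critical-ratio approximation: a tone at masked threshold sits
# roughly CR dB above the noise spectrum level. All numbers below are made up.

# target quiet thresholds of the hearing-impaired listener, dB SPL
target_threshold = {250: 30, 500: 40, 1000: 50, 2000: 60, 4000: 70}

# assumed critical ratios (dB), roughly 10*log10(critical bandwidth in Hz)
critical_ratio = {250: 17, 500: 18, 1000: 21, 2000: 24, 4000: 27}

def masking_spectrum_level(freqs_hz):
    """Noise spectrum level (dB SPL per Hz) needed at each frequency."""
    return {f: target_threshold[f] - critical_ratio[f] for f in freqs_hz}

if __name__ == "__main__":
    for f, n0 in masking_spectrum_level(target_threshold).items():
        print(f"{f:>5} Hz: noise spectrum level ~ {n0} dB SPL/Hz")
```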

2.
To examine the association between frequency resolution and speech recognition, auditory filter parameters and stop-consonant recognition were determined for 9 normal-hearing and 24 hearing-impaired subjects. In an earlier investigation, the relationship between stop-consonant recognition and the articulation index (AI) had been established for normal-hearing listeners. Based on AI predictions, speech-presentation levels for each subject in this experiment were selected to obtain a wide range of recognition scores. This strategy provides a method of interpreting speech-recognition performance among listeners who vary in magnitude and configuration of hearing loss by assuming that conditions which yield equal audible spectra will result in equivalent performance. It was reasoned that an association between frequency resolution and consonant recognition may be more appropriately estimated if hearing-impaired listeners' performance was measured under conditions that assured equivalent audibility of the speech stimuli. Derived auditory filter parameters indicated that filter widths and dynamic ranges were strongly associated with threshold. Stop-consonant recognition scores for most hearing-impaired listeners were not significantly poorer than predicted by the AI model. Furthermore, differences between observed recognition scores and those predicted by the AI were not associated with auditory filter characteristics, suggesting that frequency resolution and speech recognition may appear to be associated primarily because both are degraded by threshold elevation.

3.
Performance on tests of pure-tone thresholds, speech-recognition thresholds, and speech-recognition scores for the two ears of each subject was evaluated in two groups of adults with bilateral hearing losses. One group was composed of individuals fitted with binaural hearing aids, and the other group included persons with monaural hearing aids. Performance prior to the use of hearing aids was compared to performance after 4-5 years of hearing aid use in order to determine whether the unaided ear would show effects of auditory deprivation. There were no differences over time in pure-tone thresholds or speech-recognition thresholds for either ear in either group. However, the results revealed that the speech-recognition difference scores of the binaurally fitted subjects remained stable over time, whereas they increased for the monaurally fitted subjects. The findings reveal an auditory deprivation effect for the unfitted ears of the subjects with monaural hearing aids.

4.
Three investigations were conducted to determine the application of the articulation index (AI) to the prediction of speech performance of hearing-impaired subjects as well as of normal-hearing listeners. Speech performance was measured in quiet and in the presence of two interfering signals for items from the Speech Perception in Noise test in which target words are either highly predictable from contextual cues in the sentence or essentially contextually neutral. As expected, transfer functions relating the AI to speech performance were different depending on the type of contextual speech material. The AI transfer function for probability-high items rises steeply, much as for sentence materials, while the function for probability-low items rises more slowly, as for monosyllabic words. Different transfer functions were also found for tests conducted in quiet or white noise rather than in a babble background. A majority of the AI predictions for ten individuals with moderate sensorineural loss fell within +/- 2 standard deviations of normal listener performance for both quiet and babble conditions.

5.
The purpose of this experiment was to determine the applicability of the Articulation Index (AI) model for characterizing the speech recognition performance of listeners with mild-to-moderate hearing loss. Performance-intensity functions were obtained from five normal-hearing listeners and 11 hearing-impaired listeners using a closed-set nonsense syllable test for two frequency responses (uniform and high-frequency emphasis). For each listener, the fitting constant Q of the nonlinear transfer function relating AI and speech recognition was estimated. Results indicated that the function mapping AI onto performance was approximately the same for normal and hearing-impaired listeners with mild-to-moderate hearing loss and high speech recognition scores. For a hearing-impaired listener with poor speech recognition ability, the AI procedure was a poor predictor of performance. The AI procedure as presently used is inadequate for predicting performance of individuals with reduced speech recognition ability and should be used conservatively in applications predicting optimal or acceptable frequency response characteristics for hearing-aid amplification systems.
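The nonlinear transfer function referred to here is often written in a form such as P = 1 - 10^(-AI/Q), where Q controls how steeply performance grows with audibility. The sketch below fits Q to (AI, proportion-correct) pairs by a simple grid search; the functional form and the example data are assumptions chosen for illustration, not the study's exact procedure or results.

```python
import numpy as np

def transfer(ai, q):
    """One common AI-to-intelligibility transfer function: P = 1 - 10**(-AI/Q)."""
    return 1.0 - 10.0 ** (-np.asarray(ai) / q)

def fit_q(ai, score, q_grid=np.linspace(0.05, 1.0, 1000)):
    """Return the Q minimizing squared error between predicted and observed scores."""
    errors = [np.sum((transfer(ai, q) - score) ** 2) for q in q_grid]
    return q_grid[int(np.argmin(errors))]

# illustrative performance-intensity data: AI values and proportion correct
ai = np.array([0.1, 0.2, 0.4, 0.6, 0.8])
score = np.array([0.35, 0.60, 0.85, 0.94, 0.98])

q_hat = fit_q(ai, score)
print(f"fitted Q = {q_hat:.3f}")
print("predicted scores:", np.round(transfer(ai, q_hat), 2))
```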

6.
Modeling sensorineural hearing loss. I. Model and retrospective evaluation
The present article describes an approach to the evaluation of psychoacoustic data from the hearing impaired. The results obtained from the hearing impaired in several studies of frequency resolution, temporal resolution, and speech recognition are compared to the results expected for noise-masked normal listeners. It is presumed in this approach that the hypothetical noise-masked normal listeners have masked thresholds that agree perfectly with the quiet thresholds of the hearing-impaired subjects. Using this approach, most of the results obtained from impaired ears on spectral-resolution and speech-recognition tasks could be accurately predicted, an exception being results from spectral-resolution paradigms using fixed-level signals. Some of the data from hearing-impaired listeners on temporal-resolution tasks, on the other hand, could not be adequately described with this approach. The latter data, however, were much more limited. Additional data are needed to better evaluate the adequacy of this approach in describing the performance of the hearing impaired on temporal-resolution tasks.

7.
Recent studies with adults have suggested that amplification at 4 kHz and above fails to improve speech recognition and may even degrade performance when high-frequency thresholds exceed 50-60 dB HL. This study examined the extent to which high frequencies can provide useful information for fricative perception for normal-hearing and hearing-impaired children and adults. Eighty subjects (20 per group) participated. Nonsense syllables containing the phonemes /s/, /f/, and /θ/, produced by a male, female, and child talker, were low-pass filtered at 2, 3, 4, 5, 6, and 9 kHz. Frequency shaping was provided for the hearing-impaired subjects only. Results revealed significant differences in recognition between the four groups of subjects. Specifically, both groups of children performed more poorly than their adult counterparts at similar bandwidths. Likewise, both hearing-impaired groups performed more poorly than their normal-hearing counterparts. In addition, significant talker effects for /s/ were observed. For the male talker, optimum performance was reached at a bandwidth of approximately 4-5 kHz, whereas optimum performance for the female and child talkers did not occur until a bandwidth of 9 kHz.

8.
Speech-reception thresholds (SRT) were measured for 17 normal-hearing and 17 hearing-impaired listeners in conditions simulating free-field situations with between one and six interfering talkers. The stimuli, speech and noise with identical long-term average spectra, were recorded with a KEMAR manikin in an anechoic room and presented to the subjects through headphones. The noise was modulated using the envelope fluctuations of the speech. Several conditions were simulated with the speaker always in front of the listener and the maskers either also in front, or positioned in a symmetrical or asymmetrical configuration around the listener. Results show that the hearing-impaired listeners have significantly poorer performance than the normal-hearing listeners in all conditions. The mean SRT differences between the groups range from 4.2 to 10 dB. It appears that the modulations in the masker act as an important cue for the normal-hearing listeners, who experience up to 5-dB release from masking, while being hardly beneficial for the hearing-impaired listeners. The gain occurring when maskers are moved from the frontal position to positions around the listener varies from 1.5 to 8 dB for the normal hearing, and from 1 to 6.5 dB for the hearing impaired. It depends strongly on the number of maskers and their positions, but less on hearing impairment. The difference between the SRTs for binaural and best-ear listening (the "cocktail party effect") is approximately 3 dB in all conditions for both the normal-hearing and the hearing-impaired listeners.

9.
The present study compared the abilities of normal-hearing and hearing-impaired subjects to discriminate differences in the spectral shapes of speechlike sounds. The minimum detectable change in amplitude of a second-formant spectral peak was determined for steady-state stimuli across a range of presentation levels. In many cases, the hearing-impaired subjects required larger spectral peaks than did the normal-hearing subjects. The performance of all subjects showed a dependence upon presentation level. For some hearing-impaired subjects, high presentation levels resulted in discrimination values similar to those of normal-hearing subjects, while for other hearing-impaired subjects, increases in presentation level did not yield normal values, even when the second-formant spectral region was presented at levels above the subject's sensitivity thresholds. These results demonstrate that under certain conditions, some sensorineural hearing-impaired subjects require more prominent spectral peaks in certain speech sounds than normal-hearing subjects do for equivalent performance. For the group of subjects who did not achieve normal discrimination results at any presentation level, application of high-frequency amplification to the stimuli was successful in returning those subjects' performance to within normal values.

10.
Articulation index (AI) theory was used to evaluate stop-consonant recognition of normal-hearing listeners and listeners with high-frequency hearing loss. From results reported in a companion article [Dubno et al., J. Acoust. Soc. Am. 85, 347-354 (1989)], a transfer function relating the AI to stop-consonant recognition was established, and a frequency importance function was determined for the nine stop-consonant-vowel syllables used as test stimuli. The calculations included the rms and peak levels of the speech that had been measured in 1/3 octave bands; the internal noise was estimated from the thresholds for each subject. The AI model was then used to predict performance for the hearing-impaired listeners. A majority of the AI predictions for the hearing-impaired subjects fell within +/- 2 standard deviations of the normal-hearing listeners' results. However, as observed in previous data, the AI tended to overestimate performance of the hearing-impaired listeners. The accuracy of the predictions decreased with the magnitude of high-frequency hearing loss. Thus, with the exception of performance for listeners with severe high-frequency hearing loss, the results suggest that poorer speech recognition among hearing-impaired listeners results from reduced audibility within critical spectral regions of the speech stimuli.
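A band-audibility AI calculation of the general kind described here sums, across frequency bands, the band importance weighted by the audible fraction of the speech dynamic range in that band, with the effective noise floor taken as the larger of the external noise and the listener's internal noise (threshold). The sketch below uses the conventional 30-dB dynamic range; the importance values and band levels are placeholders, not the ones derived in the study.

```python
import numpy as np

def band_ai(importance, speech_peak_db, noise_floor_db, dynamic_range_db=30.0):
    """
    Generic band-audibility AI: each band contributes its importance weight times
    the audible fraction of the speech dynamic range in that band, clipped to [0, 1].
    """
    importance = np.asarray(importance, dtype=float)
    audible = (np.asarray(speech_peak_db) - np.asarray(noise_floor_db)) / dynamic_range_db
    audible = np.clip(audible, 0.0, 1.0)
    return float(np.sum(importance * audible))

# illustrative 1/3-octave band values (five bands)
importance   = [0.10, 0.20, 0.30, 0.25, 0.15]   # sums to 1.0
speech_peaks = [62, 60, 58, 55, 50]              # dB SPL
threshold    = [20, 20, 35, 50, 60]              # listener's internal noise, dB SPL
masker       = [25, 25, 25, 25, 25]              # external noise, dB SPL
noise_floor  = np.maximum(threshold, masker)     # effective floor in each band

print(f"AI = {band_ai(importance, speech_peaks, noise_floor):.2f}")
```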

11.
"Masking release" (MR), the improvement of speech intelligibility in modulated compared with unmodulated maskers, is typically smaller than normal for hearing-impaired listeners. The extent to which this is due to reduced audibility or to suprathreshold processing deficits is unclear. Here, the effects of audibility were controlled by using stimuli restricted to the low- (≤1.5 kHz) or mid-frequency (1-3 kHz) region for normal-hearing listeners and hearing-impaired listeners with near-normal hearing in the tested region. Previous work suggests that the latter may have suprathreshold deficits. Both spectral and temporal MR were measured. Consonant identification was measured in quiet and in the presence of unmodulated, amplitude-modulated, and spectrally modulated noise at three signal-to-noise ratios (the same ratios for the two groups). For both frequency regions, consonant identification was poorer for the hearing-impaired than for the normal-hearing listeners in all conditions. The results suggest the presence of suprathreshold deficits for the hearing-impaired listeners, despite near-normal audiometric thresholds over the tested frequency regions. However, spectral MR and temporal MR were similar for the two groups. Thus, the suprathreshold deficits for the hearing-impaired group did not lead to reduced MR.  相似文献   

12.
An articulation index calculation procedure developed for use with individual normal-hearing listeners [C. Pavlovic and G. Studebaker, J. Acoust. Soc. Am. 75, 1606-1612 (1984)] was modified to account for the deterioration in suprathreshold speech processing produced by sensorineural hearing impairment. Data from four normal-hearing and four hearing-impaired subjects were used to relate the loss in hearing sensitivity to the deterioration in speech processing in quiet and in noise. The new procedure requires only hearing threshold measurements and consists of the following two modifications of the original AI procedure of Pavlovic and Studebaker (1984): The speech and noise spectrum densities are integrated over bandwidths which are, when expressed in decibels, larger than the critical bandwidths by 10% of the hearing loss. This is in contrast to the unmodified procedure, where integration is performed over critical bandwidths. The contribution of each frequency to the AI is the product of its contribution in the unmodified AI procedure and a "speech desensitization factor." The desensitization factor is specified as a function of the hearing loss. The predictive accuracies of both the unmodified and the modified calculation procedures were assessed by comparing the expected and observed speech recognition scores of four hearing-impaired subjects under various conditions of speech filtering and noise masking. The modified procedure appears accurate for general applications. In contrast, the unmodified procedure appears accurate only for applications where results obtained under various conditions on a single listener are compared to each other.
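As a rough illustration of the two modifications described above, the sketch below widens each integration bandwidth (expressed in dB) by 10% of the hearing loss at that frequency and scales each band's AI contribution by a hearing-loss-dependent desensitization factor. The specific desensitization curve used here is an assumed placeholder; the original paper specifies its own function of hearing loss.

```python
import numpy as np

def modified_band_contribution(importance, audibility, critical_bw_db,
                               hearing_loss_db, desensitization):
    """
    Sketch of the two modifications described above:
    1) the integration bandwidth, in dB, is widened by 10% of the hearing loss;
    2) the band's AI contribution is scaled by a desensitization factor D(hearing loss).
    """
    widened_bw_db = critical_bw_db + 0.10 * hearing_loss_db                     # modification 1
    contribution = importance * audibility * desensitization(hearing_loss_db)   # modification 2
    return widened_bw_db, contribution

def desensitization(hl_db):
    """Placeholder: full weight up to 20 dB HL, falling linearly to zero at 90 dB HL."""
    return float(np.clip(1.0 - (hl_db - 20.0) / 70.0, 0.0, 1.0))

bw_db, contrib = modified_band_contribution(
    importance=0.25, audibility=0.8, critical_bw_db=21.0,
    hearing_loss_db=50.0, desensitization=desensitization)
print(f"widened integration bandwidth: {bw_db:.1f} dB, band contribution: {contrib:.3f}")
```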

13.
Effects of age and mild hearing loss on speech recognition in noise
Using an adaptive strategy, the effects of mild sensorineural hearing loss and adult listeners' chronological age on speech recognition in babble were evaluated. The signal-to-babble ratio required to achieve 50% recognition was measured for three speech materials presented at soft to loud conversational speech levels. Four groups of subjects were tested: (1) normal-hearing listeners younger than 44 years, (2) listeners younger than 44 years with mild sensorineural hearing loss and excellent speech recognition in quiet, (3) normal-hearing listeners older than 65 years, and (4) listeners older than 65 years with mild hearing loss and excellent performance in quiet. Groups 1 and 3, and groups 2 and 4, were matched on the basis of pure-tone thresholds and thresholds for each of the three speech materials presented in quiet. In addition, groups 1 and 2 were similar in terms of mean age and age range, as were groups 3 and 4. Differences in performance in noise as a function of age were observed for both normal-hearing and hearing-impaired listeners despite equivalent performance in quiet. Subjects with mild hearing loss performed significantly worse than their normal-hearing counterparts. These results and their implications are discussed.
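An adaptive strategy of the general kind used here adjusts the signal-to-babble ratio trial by trial, making the task harder after a correct response and easier after an error, so that the track converges on the 50% point. The sketch below shows a simple 1-up/1-down version with a simulated listener; the step size and the listener's psychometric function are invented for illustration, not taken from the study.

```python
import random

def simulated_listener(snr_db, srt_db=-4.0, slope=0.4):
    """Probability correct rises with SNR; a logistic placeholder, not real data."""
    p_correct = 1.0 / (1.0 + 10 ** (-slope * (snr_db - srt_db)))
    return random.random() < p_correct

def track_50_percent(n_trials=60, start_snr=10.0, step_db=2.0):
    """Simple 1-up/1-down track: converges on the SNR giving ~50% correct."""
    snr, reversals, last_direction = start_snr, [], None
    for _ in range(n_trials):
        correct = simulated_listener(snr)
        direction = -1 if correct else +1      # harder after correct, easier after error
        if last_direction is not None and direction != last_direction:
            reversals.append(snr)              # record SNR at each reversal
        last_direction = direction
        snr += direction * step_db
    tail = reversals[-6:] if reversals else [snr]
    return sum(tail) / len(tail)               # mean of the last reversals

random.seed(1)
print(f"estimated SNR for 50% correct: {track_50_percent():.1f} dB")
```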

14.
Auditory filter shapes were measured for two groups of hearing-impaired subjects, young and elderly, matched for audiometric loss, for center frequencies (fc) of 100, 200, 400, and 800 Hz using a modified notched-noise method [B. R. Glasberg and B. C. J. Moore, Hear. Res. 47, 103-138 (1990)]. Two noise bands, each 0.4fc wide, were used; they were placed both symmetrically and asymmetrically about the signal frequency to allow the measurement of filter asymmetry. The overall noise level was either 77 or 87 dB SPL. Stimuli were delivered monaurally using Sennheiser HD424 earphones. Although auditory filters for the hearing-impaired subjects were generally broader than for normally hearing subjects [Moore et al., J. Acoust. Soc. Am. 87, 132-140 (1990)], some hearing-impaired subjects with mild losses had normal filters. The filters tended to broaden with increasing hearing loss. There were no clear differences in filter characteristics between young and elderly hearing-impaired subjects. The signal-to-noise ratios at the outputs of the auditory filters required for threshold (K) tended to be lower than normal for the young hearing-impaired subjects, but were not significantly different from normal for the elderly hearing-impaired subjects. The lower K values for the young hearing-impaired subjects may occur because broadened auditory filters reduce the deleterious effects on signal detection of fluctuations in the noise.
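The notched-noise method cited here (Glasberg and Moore, 1990) typically models the auditory filter with a rounded-exponential, roex(p), shape and infers the slope parameter p from thresholds measured at several notch widths. The short sketch below shows only the filter weighting function and the corresponding equivalent rectangular bandwidth; the threshold-fitting step is omitted, and the parameter values are illustrative assumptions.

```python
import numpy as np

def roex_weight(g, p):
    """roex(p) filter weighting: W(g) = (1 + p*g) * exp(-p*g),
    where g = |f - fc| / fc is the normalized deviation from the center frequency."""
    g = np.abs(np.asarray(g, dtype=float))
    return (1.0 + p * g) * np.exp(-p * g)

def erb_from_p(fc_hz, p_lower, p_upper):
    """Equivalent rectangular bandwidth of an asymmetric roex filter:
    ERB = 2*fc/p_lower + 2*fc/p_upper (4*fc/p when symmetric)."""
    return 2.0 * fc_hz / p_lower + 2.0 * fc_hz / p_upper

fc = 800.0                            # center frequency, Hz
p_sharp, p_broad = 25.0, 12.0         # illustrative slopes (smaller p = broader filter)
print(f"ERB, sharper filter: {erb_from_p(fc, p_sharp, p_sharp):6.1f} Hz")
print(f"ERB, broader filter: {erb_from_p(fc, p_broad, p_broad):6.1f} Hz")
print("weight at g = 0.2 (sharper filter):", np.round(roex_weight(0.2, p_sharp), 3))
```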

15.
The ability to discriminate between sounds with different spectral shapes was evaluated for normal-hearing and hearing-impaired listeners. Listeners discriminated between a standard stimulus and a signal stimulus in which half of the standard components were decreased in level and half were increased in level. In one condition, the standard stimulus was the sum of six equal-amplitude tones (equal-SPL), and in another the standard stimulus was the sum of six tones at equal sensation levels re: audiometric thresholds for individual subjects (equal-SL). Spectral weights were estimated in conditions where the amplitudes of the individual tones were perturbed slightly on every presentation. Sensitivity was similar in all conditions for normal-hearing and hearing-impaired listeners. The presence of perturbation and equal-SL components increased thresholds for both groups, but only small differences in weighting strategy were measured between the groups depending on whether the equal-SPL or equal-SL condition was tested. The average data suggest that normal-hearing listeners may rely more on the central components of the spectrum whereas hearing-impaired listeners may have been more likely to use the edges. However, individual weighting functions were quite variable, especially for the hearing-impaired listeners, perhaps reflecting difficulty in processing changes in spectral shape due to hearing loss. Differences in weighting strategy without changes in sensitivity suggest that factors other than spectral weights, such as internal noise or difficulty encoding a reference stimulus, also may dominate performance.
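Spectral weights in tasks of this kind are commonly estimated by regressing the listener's trial-by-trial decisions on the small level perturbations applied to each component. The sketch below simulates such an analysis with a least-squares regression on synthetic trials; the observer model and all numbers are assumptions used only to show the computation, not the study's analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_components = 500, 6

# per-component level perturbations (dB) added on each trial
perturbations = rng.normal(0.0, 2.0, size=(n_trials, n_components))

# hypothetical observer: decision variable = weighted sum of perturbations + internal noise
true_weights = np.array([0.05, 0.10, 0.30, 0.30, 0.15, 0.10])
decision = perturbations @ true_weights + rng.normal(0.0, 1.0, n_trials)
responses = (decision > 0).astype(float)          # binary judgments

# estimate relative weights by least-squares regression of responses on perturbations
X = np.column_stack([np.ones(n_trials), perturbations])
coef, *_ = np.linalg.lstsq(X, responses, rcond=None)
weights = coef[1:] / np.sum(np.abs(coef[1:]))     # normalize to sum to 1

print("estimated relative weights:", np.round(weights, 2))
```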

16.
Reduced binaural performance of hearing-impaired listeners may not only be caused by raised hearing thresholds (reduced audibility), but also by supra-threshold coding deficits in signal cues. This question was investigated in the present study using binaural intelligibility level difference (BILD) comparisons: the improvement of speech-reception threshold scores for N0Sπ relative to N0S0 presentation conditions. The study examined which types of supra-threshold deficits play a role in reducing BILDs in hearing-impaired subjects. BILDs were investigated for 25 mild to moderate sensorineural hearing-impaired listeners, under conditions where optimal audibility was assured. All stimuli were bandpass filtered (250-4000 Hz). A distortion-sensitivity approach was used to investigate the sensitivity of subjects' BILDs to external stimulus perturbations in the phase, frequency, time, and intensity domains. The underlying assumption of this approach was that an auditory coding deficit occurring in a signal cue in a particular domain will result in a low sensitivity to external perturbations applied in that domain. Compared to reference data for listeners with normal BILDs, distortion-sensitivity data for a subgroup of eight listeners with reduced BILDs suggest that these reductions in BILD were caused by coding deficits in the phase and time domains.

17.
Two experiments are reported which explore variables that may complicate the interpretation of phoneme boundary data from hearing-impaired listeners. Fourteen synthetic consonant-vowel syllables comprising a /ba-da-ga/ continuum were used as stimuli. The first experiment examined the influence of presentation level and ear of presentation in normal-hearing subjects. Only small differences in the phoneme boundaries and labeling functions were observed between ears and across presentation levels. Thus monaural presentation and relatively high signal level do not appear to be complicating factors in research with hearing-impaired listeners, at least for these stimuli. The second experiment described a test procedure for obtaining phoneme boundaries in some hearing-impaired listeners that controlled for between-subject sources of variation unrelated to hearing impairment and delineated the effects of spectral shaping imposed by the hearing impairment on the labeling functions. Labeling data were obtained from unilaterally hearing-impaired listeners under three test conditions: in the normal ear without any signal distortion; in the normal ear listening through a spectrum shaper that was set to match the subject's suprathreshold audiometric configuration; and in the impaired ear. The reduction in the audibility of the distinctive acoustic/phonetic cues seemed to explain all or part of the effects of the hearing impairment on the labeling functions of some subjects. For many other subjects, however, other forms of distortion in addition to reduced audibility seemed to affect their labeling behavior.

18.
Many of the 9 million workers exposed to average noise levels of 85 dB (A) and above are required to wear hearing protection devices, and many of these workers have already developed noise-induced hearing impairments. There is some evidence in the literature that hearing-impaired users may not receive as much attenuation from hearing protectors as normal-hearing users. This study assessed real-ear attenuation at threshold for ten normal-hearing and ten hearing-impaired subjects using a set of David Clark 10A earmuffs. Testing procedures followed the specifications of ANSI S12.6-1984. The results showed that the hearing-impaired subjects received slightly more attenuation than the normal-hearing subjects at all frequencies, but these differences were not statistically significant. These results provide additional support to the finding that hearing protection devices are capable of providing as much attenuation to hearing-impaired users as they do to normal-hearing individuals.

19.
The Articulation Index (AI) and Speech Intelligibility Index (SII) predict intelligibility scores from measurements of speech and hearing parameters. One component in the prediction is the "importance function," a weighting function that characterizes contributions of particular spectral regions of speech to speech intelligibility. Previous work with SII predictions for hearing-impaired subjects suggests that prediction accuracy might improve if importance functions for individual subjects were available. Unfortunately, previous importance function measurements have required extensive intelligibility testing with groups of subjects, using speech processed by various fixed-bandwidth low-pass and high-pass filters. A more efficient approach appropriate to individual subjects is desired. The purpose of this study was to evaluate the feasibility of measuring importance functions for individual subjects with adaptive-bandwidth filters. In two experiments, ten subjects with normal hearing listened to vowel-consonant-vowel (VCV) nonsense words processed by low-pass and high-pass filters whose bandwidths were varied adaptively to produce specified performance levels in accordance with the transformed up-down rules of Levitt [(1971). J. Acoust. Soc. Am. 49, 467-477]. Local linear psychometric functions were fit to resulting data and used to generate an importance function for VCV words. Results indicate that the adaptive method is reliable and efficient, and produces importance function data consistent with that of the corresponding AI/SII importance function.
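The transformed up-down rules of Levitt (1971) adapt a stimulus parameter, here a filter cutoff, using, for example, a 2-down/1-up rule that converges on the 70.7% point of the psychometric function. The sketch below applies such a rule to a low-pass cutoff frequency with a simulated listener; the step size, target level, and listener model are illustrative assumptions rather than the procedure actually used in the study.

```python
import math
import random

def simulated_listener(cutoff_hz, midpoint_hz=2500.0, slope=0.002):
    """Placeholder psychometric function: identification improves with bandwidth."""
    p_correct = 1.0 / (1.0 + math.exp(-slope * (cutoff_hz - midpoint_hz)))
    return random.random() < p_correct

def two_down_one_up(n_trials=80, start_hz=6000.0, step_hz=400.0):
    """2-down/1-up transformed up-down rule tracking ~70.7% correct on the cutoff axis."""
    cutoff, run_of_correct, reversals, last_dir = start_hz, 0, [], None
    for _ in range(n_trials):
        if simulated_listener(cutoff):
            run_of_correct += 1
            if run_of_correct == 2:                 # two correct in a row -> narrower (harder)
                direction, run_of_correct = -1, 0
            else:
                continue                            # single correct: stimulus unchanged
        else:                                       # any error -> wider (easier)
            direction, run_of_correct = +1, 0
        if last_dir is not None and direction != last_dir:
            reversals.append(cutoff)
        last_dir = direction
        cutoff = max(200.0, cutoff + direction * step_hz)
    tail = reversals[-6:] if reversals else [cutoff]
    return sum(tail) / len(tail)

random.seed(2)
print(f"cutoff tracking ~70.7% correct: {two_down_one_up():.0f} Hz")
```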

20.
Spectro-temporal analysis in normal-hearing and cochlear-impaired listeners
Detection thresholds for a 1.0-kHz pure tone were determined in unmodulated noise and in noise modulated by a 15-Hz square wave. Comodulation masking release (CMR) was calculated as the difference in threshold between the modulated and unmodulated conditions. The noise bandwidth varied between 100 and 1000 Hz. Frequency selectivity was also examined using an abbreviated notched-noise masking method. The subjects in the main experiment consisted of 12 normal-hearing and 12 hearing-impaired subjects with hearing loss of cochlear origin. The most discriminating conditions were repeated on 16 additional hearing-impaired subjects. The CMR of the hearing-impaired group was reduced for the 1000-Hz noise bandwidth. The reduced CMR at this bandwidth correlated significantly with reduced frequency selectivity, consistent with the hypothesis that the across-frequency difference cue used in CMR is diminished by poor frequency selectivity. The results indicated that good frequency selectivity is a prerequisite, but not a guarantee, of large CMR.

