Similar Articles
20 similar articles found (search time: 0 ms)
1.
Overshoot was measured in both ears of four subjects with normal hearing and in five subjects with permanent, sensorineural hearing loss (two with a unilateral loss). The masker was a 400-ms broadband noise presented at a spectrum level of 20, 30, or 40 dB SPL. The signal was a 10-ms sinusoid presented 1 or 195 ms after the onset of the masker. Signal frequency was 1.0 or 4.0 kHz, which placed the signal in a region of normal (1.0 kHz) or impaired (4.0 kHz) absolute sensitivity for the impaired ears. For the normal-hearing subjects, the effects of signal frequency and masker level were similar to those published previously. In particular, overshoot was larger at 4.0 than at 1.0 kHz, and overshoot at 4.0 kHz tended to decrease with increasing masker level. At 4.0 kHz, overshoot values were significantly larger in the normal ears: Maximum values ranged from about 7-26 dB in the normal ears, but were always less than 5 dB in the impaired ears. The smaller overshoot values resulted from the fact that thresholds in the short-delay condition were considerably better in the hearing-impaired subjects than in the normal-hearing subjects. At 1.0 kHz, overshoot values for the two groups of subjects more or less overlapped. The results suggest that permanent, sensorineural hearing loss disrupts the mechanisms responsible for a large overshoot effect.

2.
The Speech Reception Threshold for sentences in stationary noise and in several amplitude-modulated noises was measured for 8 normal-hearing listeners, 29 sensorineural hearing-impaired listeners, and 16 normal-hearing listeners with simulated hearing loss. This approach makes it possible to determine whether the reduced benefit from masker modulations, as often observed for hearing-impaired listeners, is due to a loss of signal audibility, or due to suprathreshold deficits, such as reduced spectral and temporal resolution, which were measured in four separate psychophysical tasks. Results show that the reduced masking release can only partly be accounted for by reduced audibility, and that, when considering suprathreshold deficits, the normal effects associated with a raised presentation level should be taken into account. In this perspective, reduced spectral resolution does not appear to qualify as an actual suprathreshold deficit, while reduced temporal resolution does. Temporal resolution and age are shown to be the main factors governing masking release for speech in modulated noise, accounting for more than half of the intersubject variance. Their influence appears to be related to the processing of mainly the higher stimulus frequencies. Results based on calculations of the Speech Intelligibility Index in modulated noise confirm these conclusions.

3.
"Masking release" (MR), the improvement of speech intelligibility in modulated compared with unmodulated maskers, is typically smaller than normal for hearing-impaired listeners. The extent to which this is due to reduced audibility or to suprathreshold processing deficits is unclear. Here, the effects of audibility were controlled by using stimuli restricted to the low- (≤1.5 kHz) or mid-frequency (1-3 kHz) region for normal-hearing listeners and hearing-impaired listeners with near-normal hearing in the tested region. Previous work suggests that the latter may have suprathreshold deficits. Both spectral and temporal MR were measured. Consonant identification was measured in quiet and in the presence of unmodulated, amplitude-modulated, and spectrally modulated noise at three signal-to-noise ratios (the same ratios for the two groups). For both frequency regions, consonant identification was poorer for the hearing-impaired than for the normal-hearing listeners in all conditions. The results suggest the presence of suprathreshold deficits for the hearing-impaired listeners, despite near-normal audiometric thresholds over the tested frequency regions. However, spectral MR and temporal MR were similar for the two groups. Thus, the suprathreshold deficits for the hearing-impaired group did not lead to reduced MR.

4.
Confusion matrices for seven synthetic steady-state vowels were obtained from ten normal and three hearing-impaired subjects. The vowels were identified at greater than 96% accuracy by the normals, and less accurately by the impaired subjects. Shortened versions of selected vowels then were used as maskers, and vowel masking patterns (VMPs) consisting of forward-masked threshold for sinusoidal probes at all vowel masker harmonics were obtained from the impaired subjects and from one normal subject. Vowel-masked probe thresholds were transformed using growth-of-masking functions obtained with flat-spectrum noise. VMPs of the impaired subjects, relative to those of the normal, were characterized by smaller dynamic range, poorer peak resolution, and poorer preservation of the vowel formant structure. These VMP characteristics, however, did not necessarily coincide with inaccurate vowel recognition. Vowel identification appeared to be related primarily to VMP peak frequencies rather than to the levels at the peaks or to between-peak characteristics of the patterns.

5.
Three vibrotactile vocoders were compared in a training study involving several different speech perception tasks. Vocoders were: (1) the Central Institute for the Deaf version of the Queen's University vocoder, with 1/3-oct filter spacing and logarithmic output scaling (CIDLog) [Engebretson and O'Connell, IEEE Trans. Biomed. Eng. BME-33, 712-716 (1986)]; (2) the same vocoder with linear output scaling (CIDLin); and (3) the Gallaudet University vocoder designed with greater resolution in the second formant region, relative to the CID vocoders, and linear output scaling (GULin). Four normal-hearing subjects were assigned to either of two control groups, visual-only control and vocoder control, for which they received the CIDLog vocoder. Five normal-hearing and four hearing-impaired subjects were assigned to the linear vocoders. Results showed that the three vocoders provided equivalent information in word-initial and word-final tactile-only consonant identification. However, GULin was the only vocoder significantly effective in enhancing lipreading of isolated prerecorded sentences. Individual subject analyses showed significantly enhanced lipreading by the three normal-hearing and two hearing-impaired subjects who received the GULin vocoder. Over the entire training period of the experiment, the mean difference between aided and unaided lipreading of sentences by the GULin aided hearing-impaired subjects was approximately 6% words correct. Possible explanations for failure to confirm previous success with the CIDLog vocoder [Weisenberger et al., J. Acoust. Soc. Am. 86, 1764-1775 (1989)] are discussed.

6.
Many of the 9 million workers exposed to average noise levels of 85 dB (A) and above are required to wear hearing protection devices, and many of these workers have already developed noise-induced hearing impairments. There is some evidence in the literature that hearing-impaired users may not receive as much attenuation from hearing protectors as normal-hearing users. This study assessed real-ear attenuation at threshold for ten normal-hearing and ten hearing-impaired subjects using a set of David Clark 10A earmuffs. Testing procedures followed the specifications of ANSI S12.6-1984. The results showed that the hearing-impaired subjects received slightly more attenuation than the normal-hearing subjects at all frequencies, but these differences were not statistically significant. These results provide additional support to the finding that hearing protection devices are capable of providing as much attenuation to hearing-impaired users as they do to normal-hearing individuals.

7.
The ability to discriminate between sounds with different spectral shapes was evaluated for normal-hearing and hearing-impaired listeners. Listeners discriminated between a standard stimulus and a signal stimulus in which half of the standard components were decreased in level and half were increased in level. In one condition, the standard stimulus was the sum of six equal-amplitude tones (equal-SPL), and in another the standard stimulus was the sum of six tones at equal sensation levels re: audiometric thresholds for individual subjects (equal-SL). Spectral weights were estimated in conditions where the amplitudes of the individual tones were perturbed slightly on every presentation. Sensitivity was similar in all conditions for normal-hearing and hearing-impaired listeners. The presence of perturbation and equal-SL components increased thresholds for both groups, but only small differences in weighting strategy were measured between the groups depending on whether the equal-SPL or equal-SL condition was tested. The average data suggest that normal-hearing listeners may rely more on the central components of the spectrum whereas hearing-impaired listeners may have been more likely to use the edges. However, individual weighting functions were quite variable, especially for the hearing-impaired listeners, perhaps reflecting difficulty in processing changes in spectral shape due to hearing loss. Differences in weighting strategy without changes in sensitivity suggest that factors other than spectral weights, such as internal noise or difficulty encoding a reference stimulus, also may dominate performance.

8.
This investigation examined whether listeners with mild-moderate sensorineural hearing impairment have a deficit in the ability to integrate synchronous spectral information in the perception of speech. In stage 1, the bandwidth of filtered speech centered either on 500 or 2500 Hz was varied adaptively to determine the width required for approximately 15%-25% correct recognition. In stage 2, these criterion bandwidths were presented simultaneously and percent correct performance was determined in fixed block trials. Experiment 1 tested normal-hearing listeners in quiet and in masking noise. The main findings were (1) there was no correlation between the criterion bandwidths at 500 and 2500 Hz; (2) listeners achieved a high percent correct in stage 2 (approximately 80%); and (3) performance in quiet and noise was similar. Experiment 2 tested listeners with mild-moderate sensorineural hearing impairment. The main findings were (1) the impaired listeners showed high variability in stage 1, with some listeners requiring narrower and others requiring wider bandwidths than normal, and (2) hearing-impaired listeners achieved percent correct performance in stage 2 that was comparable to normal. The results indicate that listeners with mild-moderate sensorineural hearing loss do not have an essential deficit in the ability to integrate across-frequency speech information.

9.
In a multiple observation, sample discrimination experiment normal-hearing (NH) and hearing-impaired (HI) listeners heard two multitone complexes each consisting of six simultaneous tones with nominal frequencies spaced evenly on an ERB(N) logarithmic scale between 257 and 6930 Hz. On every trial, the frequency of each tone was sampled from a normal distribution centered near its nominal frequency. In one interval of a 2IFC task, all tones were sampled from distributions lower in mean frequency and in the other interval from distributions higher in mean frequency. Listeners had to identify the latter interval. Decision weights were obtained from multiple regression analysis of the between-interval frequency differences for each tone and listeners' responses. Frequency difference limens (an index of sensorineural resolution) and decision weights for each tone were used to predict the sensitivity of different decision-theoretic models. Results indicate that low-frequency tones were given much greater perceptual weight than high-frequency tones by both groups of listeners. This tendency increased as hearing loss increased and as sensorineural resolution decreased, resulting in significantly less efficient weighting strategies for the HI listeners. Overall, results indicate that HI listeners integrated frequency information less optimally than NH listeners, even after accounting for differences in sensorineural resolution.
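The decision-weight analysis described in this abstract (regressing listeners' interval choices on the per-tone frequency differences) can be sketched as follows. The trial count, weight profile, and internal-noise level are illustrative assumptions, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_tones = 2000, 6

# Between-interval frequency differences for each tone (arbitrary units);
# in the study these come from the sampled frequencies on each 2IFC trial.
dfreq = rng.normal(0.0, 1.0, size=(n_trials, n_tones))

# Hypothetical listener: low-frequency tones weighted more, plus internal noise.
true_w = np.array([1.0, 0.8, 0.6, 0.4, 0.2, 0.1])
responses = (dfreq @ true_w + rng.normal(0.0, 0.5, n_trials) > 0).astype(float)

# Decision weights via multiple regression of responses on the differences.
X = np.column_stack([np.ones(n_trials), dfreq])
coef, *_ = np.linalg.lstsq(X, responses, rcond=None)
weights = coef[1:] / np.abs(coef[1:]).sum()  # normalize to unit absolute sum
```

The normalized regression coefficients recover the simulated listener's emphasis on low-frequency tones; relative weights, not absolute coefficient values, are what the decision-theoretic models compare.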

10.
Spectral-shape discrimination thresholds were measured in the presence and absence of noise to determine whether normal-hearing and hearing-impaired listeners rely primarily on spectral peaks in the excitation pattern when discriminating between stimuli with different spectral shapes. Standard stimuli were the sum of 2, 4, 6, 8, 10, 20, or 30 equal-amplitude tones with frequencies fixed between 200 and 4000 Hz. Signal stimuli were generated by increasing and decreasing the levels of every other standard component. The function relating the spectral-shape discrimination threshold to the number of components (N) showed an initial decrease in threshold with increasing N and then an increase in threshold when the number of components reached 10 and 6, for normal-hearing and hearing-impaired listeners, respectively. The presence of a 50-dB SPL/Hz noise led to a 1.7 dB increase in threshold for normal-hearing listeners and a 3.5 dB increase for hearing-impaired listeners. Multichannel modeling and the relatively small influence of noise suggest that both normal-hearing and hearing-impaired listeners rely on the peaks in the excitation pattern for spectral-shape discrimination. The greater influence of noise in the data from hearing-impaired listeners is attributed to a poorer representation of spectral peaks.

11.
Two signal-processing algorithms, designed to separate the voiced speech of two talkers speaking simultaneously at similar intensities in a single channel, were compared and evaluated. Both algorithms exploit the harmonic structure of voiced speech and require a difference in fundamental frequency (F0) between the voices to operate successfully. One attenuates the interfering voice by filtering the cepstrum of the combined signal. The other uses the method of harmonic selection [T. W. Parsons, J. Acoust. Soc. Am. 60, 911-918 (1976)] to resynthesize the target voice from fragmentary spectral information. Two perceptual evaluations were carried out. One involved the separation of pairs of vowels synthesized on static F0's; the other involved the recovery of consonant-vowel (CV) words masked by a synthesized vowel. Normal-hearing listeners and four listeners with moderate-to-severe, bilateral, symmetrical, sensorineural hearing impairments were tested. All listeners showed increased accuracy of identification when the target voice was enhanced by processing. The vowel-identification data show that intelligibility enhancement is possible over a range of F0 separations between the target and interfering voice. The recovery of CV words demonstrates that the processing is valid not only for spectrally static vowels but also for less intense time-varying voiced consonants. The results for the impaired listeners suggest that the algorithms may be applicable as components of a noise-reduction system in future digital signal-processing hearing aids. The vowel-separation test, and subjective listening, suggest that harmonic selection, which is the more computationally expensive method, produces the more effective voice separation.

12.
The word recognition ability of 4 normal-hearing and 13 cochlearly hearing-impaired listeners was evaluated. Filtered and unfiltered speech in quiet and in noise were presented monaurally through headphones. The noise varied over listening situations with regard to spectrum, level, and temporal envelope. Articulation index theory was applied to predict the results. Two calculation methods were used, both based on the ANSI S3.5-1969 20-band method [S3.5-1969 (American National Standards Institute, New York)]. Method I was almost identical to the ANSI method. Method II included a level- and hearing-loss-dependent calculation of masking of stationary and on-off gated noise signals and of self-masking of speech. Method II provided the best prediction capability, and it is concluded that speech intelligibility of cochlearly hearing-impaired listeners may also, to a first approximation, be predicted from articulation index theory.
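As a rough illustration of the articulation-index idea underlying both calculation methods, here is a minimal sketch of a band-importance-weighted AI. The 20 equal-importance bands and the simple clipping rule are simplifying assumptions, not the full ANSI S3.5-1969 procedure, and none of the level- or masking-dependent corrections of method II are included:

```python
import numpy as np

def articulation_index(speech_peaks_db, noise_db, importance):
    """Simplified AI: per-band speech-peak-to-noise ratio, clipped to a
    30-dB dynamic range, weighted by band importance and summed."""
    snr = np.clip(np.asarray(speech_peaks_db) - np.asarray(noise_db), 0.0, 30.0)
    return float(np.sum(np.asarray(importance) * snr / 30.0))

importance = np.full(20, 1.0 / 20.0)   # equal band importance (assumption)
speech = np.full(20, 60.0)             # band speech peaks, dB

ai_favorable = articulation_index(speech, np.full(20, 30.0), importance)  # all bands fully audible
ai_masked = articulation_index(speech, np.full(20, 60.0), importance)     # 0 dB SNR in every band
```

With every band 30 dB above the noise the index reaches 1.0; with the noise at the speech peaks it falls to 0.0, and intermediate noise spectra give intermediate values band by band.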

13.
Speakers may adapt the phonetic details of their productions when they anticipate perceptual difficulty or comprehension failure on the part of a listener. Previous research suggests that a speaking style known as clear speech is more intelligible overall than casual, conversational speech for a variety of listener populations. However, it is unknown whether clear speech improves the intelligibility of fricative consonants specifically, or how its effects on fricative perception might differ depending on listener population. The primary goal of this study was to determine whether clear speech enhances fricative intelligibility for normal-hearing listeners and listeners with simulated impairment. Two experiments measured babble signal-to-noise ratio thresholds for fricative minimal pair distinctions for 14 normal-hearing listeners and 14 listeners with simulated sloping, recruiting impairment. Results indicated that clear speech helped both groups overall. However, for impaired listeners, reliable clear speech intelligibility advantages were not found for non-sibilant pairs. Correlation analyses comparing acoustic and perceptual data indicated that a shift of energy concentration toward higher frequency regions and greater source strength contributed to the clear speech effect for normal-hearing listeners. Correlations between acoustic and perceptual data were less consistent for listeners with simulated impairment, and suggested that lower-frequency information may play a role.

14.
Two studies investigating gap-detection thresholds were conducted with cochlear-implant subjects whose onset of profound hearing loss was very early in life. The Cochlear Limited multiple-electrode prosthesis was used. The first study investigated the effects of pulse rate (200, 500, and 1000 pulses/s) and stimulus duration (500 and 1000 ms) on gap thresholds in 15 subjects. Average gap thresholds were 1.8 to 32.1 ms. There was essentially no effect of pulse rate and for almost all subjects, no effect of stimulus duration. For two subjects, performance was poorer for the 1000-ms stimulus duration. The second study investigated the relationships between gap thresholds, subject variables, and speech-perception scores. Data from the first study were combined with those from previous studies [Busby et al., Audiology 31, 95-111 (1992); Tong et al., J. Acoust. Soc. Am. 84, 951-962 (1988)], providing data from 27 subjects. A significant negative correlation was found between age at onset of deafness and gap thresholds and most variability in gap thresholds was for the congenitally deaf subjects. Significant negative correlations were found between gap thresholds and word scores for open-set Bamford-Kowal-Bench (BKB) sentences in the auditory-visual condition and lipreading enhancement scores for the same test.

15.
Several studies have demonstrated that when talkers are instructed to speak clearly, the resulting speech is significantly more intelligible than speech produced in ordinary conversation. These speech intelligibility improvements are accompanied by a wide variety of acoustic changes. The current study explored the relationship between acoustic properties of vowels and their identification in clear and conversational speech, for young normal-hearing (YNH) and elderly hearing-impaired (EHI) listeners. Monosyllabic words excised from sentences spoken either clearly or conversationally by a male talker were presented in 12-talker babble for vowel identification. While vowel intelligibility was significantly higher in clear speech than in conversational speech for the YNH listeners, no clear speech advantage was found for the EHI group. Regression analyses were used to assess the relative importance of spectral target, dynamic formant movement, and duration information for perception of individual vowels. For both listener groups, all three types of information emerged as primary cues to vowel identity. However, the relative importance of the three cues for individual vowels differed greatly for the YNH and EHI listeners. This suggests that hearing loss alters the way acoustic cues are used for identifying vowels.

16.
To determine the minimum difference in amplitude between spectral peaks and troughs sufficient for vowel identification by normal-hearing and hearing-impaired listeners, four vowel-like complex sounds were created by summing the first 30 harmonics of a 100-Hz tone. The amplitudes of all harmonics were equal, except for two consecutive harmonics located at each of three "formant" locations. The amplitudes of these harmonics were equal and ranged from 1-8 dB more than the remaining components. Normal-hearing listeners achieved greater than 75% accuracy when peak-to-trough differences were 1-2 dB. Normal-hearing listeners who were tested in a noise background sufficient to raise their thresholds to the level of a flat, moderate hearing loss needed a 4-dB difference for identification. Listeners with a moderate, flat hearing loss required a 6- to 7-dB difference for identification. The results suggest, for normal-hearing listeners, that the peak-to-trough amplitude difference required for identification of this set of vowels is very near the threshold for detection of a change in the amplitude spectrum of a complex signal. Hearing-impaired listeners may have difficulty using closely spaced formants for vowel identification due to abnormal smoothing of the internal representation of the spectrum by broadened auditory filters.
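The stimulus construction described in this abstract is straightforward to reproduce. The sketch below sums the first 30 equal-amplitude harmonics of 100 Hz and raises two consecutive harmonics at each "formant" location; the particular harmonic numbers, sampling rate, and duration are hypothetical, not the study's values:

```python
import numpy as np

def vowel_like(formant_pairs, peak_db, f0=100.0, fs=16000, dur=0.5):
    """Sum the first 30 equal-amplitude harmonics of f0, boosting two
    consecutive harmonics at each 'formant' location by peak_db dB."""
    t = np.arange(int(fs * dur)) / fs
    amps = np.ones(30)
    for h in formant_pairs:              # h = lower harmonic number of each pair
        amps[h - 1 : h + 1] *= 10 ** (peak_db / 20.0)
    x = np.zeros_like(t)
    for k, a in enumerate(amps, start=1):
        x += a * np.sin(2 * np.pi * f0 * k * t)
    return x / np.max(np.abs(x))         # normalize; set SPL at playback

x = vowel_like(formant_pairs=(5, 12, 24), peak_db=2.0)  # hypothetical formant pairs
```

Varying `peak_db` from 1 to 8 dB reproduces the peak-to-trough manipulation at the heart of the experiment.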

17.
Forward-masking growth functions for on-frequency (6-kHz) and off-frequency (3-kHz) sinusoidal maskers were measured in quiet and in a high-pass noise just above the 6-kHz probe frequency. The data show that estimates of response-growth rates obtained from those functions in quiet, which have been used to infer cochlear compression, are strongly dependent on the spread of probe excitation toward higher frequency regions. Therefore, an alternative procedure for measuring response-growth rates was proposed, one that employs a fixed low-level probe and avoids level-dependent spread of probe excitation. Fixed-probe-level temporal masking curves (TMCs) were obtained from normal-hearing listeners at a test frequency of 1 kHz, where the short 1-kHz probe was fixed in level at about 10 dB SL. The level of the preceding forward masker was adjusted to obtain masked threshold as a function of the time delay between masker and probe. The TMCs were obtained for an on-frequency masker (1 kHz) and for other maskers with frequencies both below and above the probe frequency. From these measurements, input/output response-growth curves were derived for individual ears. Response-growth slopes varied from >1.0 at low masker levels to <0.2 at mid masker levels. In three subjects, response growth increased again at high masker levels (>80 dB SPL). For the fixed-level probe, the TMC slopes changed very little in the presence of a high-pass noise masking upward spread of probe excitation. A greater effect on the TMCs was observed when a high-frequency cueing tone was used with the masking tone. In both cases, however, the net effects on the estimated rate of response growth were minimal.

18.
Distortion product otoacoustic emission (DPOAE) suppression measurements were made in 20 subjects with normal hearing and 21 subjects with mild-to-moderate hearing loss. The probe consisted of two primary tones (f2, f1), with f2 held constant at 4 kHz and f2/f1 = 1.22. Primary levels (L1, L2) were set according to the equation L1 = 0.4 L2 + 39 dB [Kummer et al., J. Acoust. Soc. Am. 103, 3431-3444 (1998)], with L2 ranging from 20 to 70 dB SPL (normal-hearing subjects) and 50-70 dB SPL (subjects with hearing loss). Responses elicited by the probe were suppressed by a third tone (f3), varying in frequency from 1 octave below to 1/2 octave above f2. Suppressor level (L3) varied from 5 to 85 dB SPL. Responses in the presence of the suppressor were subtracted from the unsuppressed condition in order to convert the data into decrements (amount of suppression). The slopes of the decrement versus L3 functions were less steep for lower frequency suppressors and more steep for higher frequency suppressors in impaired ears. Suppression tuning curves, constructed by selecting the L3 that resulted in 3 dB of suppression as a function of f3, resulted in tuning curves that were similar in appearance for normal and impaired ears. Although variable, Q10 and Q(ERB) were slightly larger in impaired ears regardless of whether the comparisons were made at equivalent SPL or equivalent sensation levels (SL). Larger tip-to-tail differences were observed in ears with normal hearing when compared at either the same SPL or the same SL, with a much larger effect at similar SL. These results are consistent with the view that subjects with normal hearing and mild-to-moderate hearing loss have similar tuning around a frequency for which the hearing loss exists, but reduced cochlear-amplifier gain.
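The primary-level rule and the decrement conversion used in this abstract are both one-line computations; this sketch simply restates them in code, with illustrative L2 values:

```python
def primary_level_l1(l2_db):
    """'Scissors' rule of Kummer et al. (1998): L1 = 0.4*L2 + 39 dB."""
    return 0.4 * l2_db + 39.0

def decrement_db(unsuppressed_db, suppressed_db):
    """Amount of suppression: unsuppressed response minus suppressed response."""
    return unsuppressed_db - suppressed_db

# L1 for the endpoints and midpoint of the L2 range used with normal-hearing ears
l1_values = [primary_level_l1(l2) for l2 in (20, 50, 70)]  # approx. 47, 59, 67 dB SPL
```

Note that the rule raises L1 above L2 by progressively smaller amounts as L2 grows, which is its purpose: keeping the two primaries near-optimally matched at the f2 place across level.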

19.
Binaural speech intelligibility of individual listeners under realistic conditions was predicted using a model consisting of a gammatone filter bank, an independent equalization-cancellation (EC) process in each frequency band, a gammatone resynthesis, and the speech intelligibility index (SII). Hearing loss was simulated by adding uncorrelated masking noises (according to the pure-tone audiogram) to the ear channels. Speech intelligibility measurements were carried out with 8 normal-hearing and 15 hearing-impaired listeners, collecting speech reception threshold (SRT) data for three different room acoustic conditions (anechoic, office room, cafeteria hall) and eight directions of a single noise source (speech in front). Artificial EC processing errors derived from binaural masking level difference data using pure tones were incorporated into the model. Except for an adjustment of the SII-to-intelligibility mapping function, no model parameter was fitted to the SRT data of this study. The overall correlation coefficient between predicted and observed SRTs was 0.95. The dependence of the SRT of an individual listener on the noise direction and on room acoustics was predicted with a median correlation coefficient of 0.91. The effect of individual hearing impairment was predicted with a median correlation coefficient of 0.95. However, for mild hearing losses the release from masking was overestimated.
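The core equalization-cancellation step of the model above can be illustrated in a single band: delay one ear's signal so the masker's interaural delay is equalized, then subtract, which cancels the aligned noise while the diotic frontal speech survives as a comb-filtered residue. This is a sketch under idealized assumptions (a pure-delay lateral noise, no EC processing errors, synthetic Gaussian signals), not the study's gammatone-filtered stimuli:

```python
import numpy as np

rng = np.random.default_rng(1)
n, delay = 16000, 8            # 1 s at 16 kHz; 8-sample interaural noise delay
noise = rng.normal(size=n + delay)           # lateral noise source
speech = 0.3 * rng.normal(size=n + delay)    # stand-in for frontal, diotic speech

left = speech[:n] + noise[delay:]            # noise arrives earlier at the left ear
right = speech[:n] + noise[:n]

# EC step for one frequency band: equalize the noise's interaural delay,
# then cancel by subtraction.
ec = left[: n - delay] - right[delay:n]

# The aligned noise cancels exactly; what remains is comb-filtered speech.
residual_speech = speech[: n - delay] - speech[delay:n]
```

In the full model this operation runs independently in each gammatone band with band-specific EC errors, and the resynthesized output feeds the SII stage.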

20.
Tone-burst-evoked otoacoustic emissions were measured as a function of tone-burst sound pressure level and frequency in normally hearing ears. Although the spectral and temporal properties varied across individual ears, there was a close correspondence between stimulus and response spectra. Both the spectral and latency characteristics of tone-burst-evoked emissions are consistent with the hypothesis that they are generated at sites along the cochlear partition corresponding to their frequency.
