Similar Articles
20 similar articles found.
1.
Forward-masking growth functions for on-frequency (6-kHz) and off-frequency (3-kHz) sinusoidal maskers were measured in quiet and in a high-pass noise just above the 6-kHz probe frequency. The data show that estimates of response-growth rates obtained from those functions in quiet, which have been used to infer cochlear compression, are strongly dependent on the spread of probe excitation toward higher frequency regions. Therefore, an alternative procedure for measuring response-growth rates was proposed, one that employs a fixed low-level probe and avoids level-dependent spread of probe excitation. Fixed-probe-level temporal masking curves (TMCs) were obtained from normal-hearing listeners at a test frequency of 1 kHz, where the short 1-kHz probe was fixed in level at about 10 dB SL. The level of the preceding forward masker was adjusted to obtain masked threshold as a function of the time delay between masker and probe. The TMCs were obtained for an on-frequency masker (1 kHz) and for other maskers with frequencies both below and above the probe frequency. From these measurements, input/output response-growth curves were derived for individual ears. Response-growth slopes varied from >1.0 at low masker levels to <0.2 at mid masker levels. In three subjects, response growth increased again at high masker levels (>80 dB SPL). For the fixed-level probe, the TMC slopes changed very little in the presence of a high-pass noise masking upward spread of probe excitation. A greater effect on the TMCs was observed when a high-frequency cueing tone was used with the masking tone. In both cases, however, the net effects on the estimated rate of response growth were minimal.

2.
Temporal masking curves were obtained from 12 normal-hearing and 16 hearing-impaired listeners using 200-ms, 1000-Hz pure-tone maskers and 20-ms, 1000-Hz fixed-level probe tones. For the delay times used here (greater than 40 ms), temporal masking curves obtained from both groups can be well described by an exponential function with a single level-independent time constant for each listener. Normal-hearing listeners demonstrated time constants that ranged between 37 and 67 ms, with a mean of 50 ms. Most hearing-impaired listeners, with significant hearing loss at the probe frequency, demonstrated longer time constants (range 58-114 ms) than those obtained from normal-hearing listeners. Time constants were found to grow exponentially with hearing loss according to the function τ = 52·e^(0.011·HL), when the slope of the growth of masking is unity. The longest individual time constant was larger than normal by a factor of 2.3 for a hearing loss of 52 dB. The steep slopes of the growth of masking functions typically observed at long delay times in hearing-impaired listeners' data appear to be a direct result of longer time constants. When iterative fitting procedures included a slope parameter, the slopes of the growth of masking from normal-hearing listeners varied around unity, while those from hearing-impaired listeners tended to be less (flatter) than normal. Predictions from the results of these fixed-probe-level experiments are consistent with the results of previous fixed-masker-level experiments, and they indicate that deficiencies in the ability to detect sequential stimuli should be considerable in hearing-impaired listeners, partially because of extended time constants, but mostly because forward masking involves a recovery process that depends upon the sensory response evoked by the masking stimulus. Large sensitivity losses reduce the sensory response to high-SPL maskers so that the recovery process is slower, much like the recovery process for low-level stimuli in normal-hearing listeners.
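The abstract's fitted recovery function can be expressed as a one-line computation. The following is a sketch of that reported fit (the function name is ours, not the study's code):

```python
import math

def forward_masking_time_constant(hearing_loss_db):
    """Time constant (ms) of the exponential forward-masking recovery,
    per the fitted function tau = 52 * e^(0.011 * HL) reported in the
    abstract.  hearing_loss_db: hearing loss at the probe frequency, dB HL."""
    return 52.0 * math.exp(0.011 * hearing_loss_db)

# 0 dB HL gives tau = 52 ms, matching the normal-hearing mean;
# 52 dB HL gives roughly 92 ms under this fit (individual listeners
# in the study deviated from it, up to 2.3x normal).
print(forward_masking_time_constant(0))
print(forward_masking_time_constant(52))
```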

3.
The ability of five profoundly hearing-impaired subjects to "track" connected speech and to make judgments about the intonation and stress in spoken sentences was evaluated under a variety of auditory-visual conditions. These included speechreading alone, speechreading plus speech (low-pass filtered at 4 kHz), and speechreading plus a tone whose frequency, intensity, and temporal characteristics were matched to the speaker's fundamental frequency (F0). In addition, several frequency transfer functions were applied to the normal F0 range resulting in new ranges that were both transposed and expanded with respect to the original F0 range. Three of the five subjects were able to use several of the tonal representations of F0 nearly as well as speech to improve their speechreading rates and to make appropriate judgments concerning sentence intonation and stress. The remaining two subjects greatly improved their identification performance for intonation and stress patterns when expanded F0 signals were presented alone (i.e., without speechreading), but had difficulty integrating visual and auditory information at the connected discourse level, despite intensive training in the connected discourse tracking procedure lasting from 27.8-33.8 h.

4.
A methodology for the estimation of individual loudness growth functions using tone-burst otoacoustic emissions (TBOAEs) and tone-burst auditory brainstem responses (TBABRs) was proposed by Silva and Epstein [J. Acoust. Soc. Am. 127, 3629-3642 (2010)]. This work attempted to investigate the application of this technique to the more challenging cases of hearing-impaired listeners. The specific aims of this study were to (1) verify the accuracy of this technique with eight hearing-impaired listeners for 1- and 4-kHz tone-burst stimuli, (2) investigate the effect of residual noise levels from the TBABRs on the quality of the loudness growth estimation, and (3) provide a public dataset of physiological and psychoacoustical responses to a wide range of stimulus intensities. The results show that some of the physiological loudness growth estimates were within the mean-square-error range for standard psychoacoustical procedures, with closer agreement at 1 kHz. The median residual noise in the TBABRs was found to be related to the performance of the estimation, with some listeners showing strong improvements in the estimated loudness growth function when controlling for noise levels. This suggests that future studies using evoked potentials to estimate loudness growth should control for the estimated averaged residual noise levels of the TBABRs.

5.
The goal of this study was to measure the ability of adult hearing-impaired listeners to discriminate formant frequency for vowels in isolation, syllables, and sentences. Vowel formant discrimination for F1 and F2 was measured for the vowels /ɪ, ɛ, æ/. Four experimental factors were manipulated, including linguistic context (isolated vowels, syllables, and sentences), signal level (70 and 95 dB SPL), formant frequency, and cognitive load. A complex identification task was added to the formant discrimination task only for sentences to assess effects of cognitive load. Results showed significant elevation in formant thresholds as formant frequency and linguistic context increased. Higher signal level also elevated formant thresholds, primarily for F2. However, no effect of the additional identification task on the formant discrimination was observed. In comparable conditions, these hearing-impaired listeners had elevated thresholds for formant discrimination compared to young normal-hearing listeners, primarily for F2. Altogether, poorer performance for formant discrimination for these adult hearing-impaired listeners was mainly caused by hearing loss rather than cognitive difficulty for tasks implemented in this study.

6.
Binaural speech intelligibility in noise for hearing-impaired listeners
The effect of head-induced interaural time delay (ITD) and interaural level differences (ILD) on binaural speech intelligibility in noise was studied for listeners with symmetrical and asymmetrical sensorineural hearing losses. The material, recorded with a KEMAR manikin in an anechoic room, consisted of speech, presented from the front (0 degrees), and noise, presented at azimuths of 0, 30, and 90 degrees. Derived noise signals, containing either only ITD or only ILD, were generated using a computer. For both groups of subjects, speech-reception thresholds (SRT) for sentences in noise were determined as a function of: (1) noise azimuth, (2) binaural cue, and (3) an interaural difference in overall presentation level, simulating the effect of a monaural hearing aid. Comparison of the mean results with corresponding data obtained previously from normal-hearing listeners shows that the hearing impaired have a 2.5 dB higher SRT in noise when both speech and noise are presented from the front, and 2.6-5.1 dB less binaural gain when the noise azimuth is changed from 0 to 90 degrees. The gain due to ILD varies among the hearing-impaired listeners between 0 dB and normal values of 7 dB or more. It depends on the high-frequency hearing loss at the side presented with the most favorable signal-to-noise (S/N) ratio. The gain due to ITD is nearly normal for the symmetrically impaired (4.2 dB, compared with 4.7 dB for the normal hearing), but only 2.5 dB in the case of asymmetrical impairment. When ITD is introduced in noise already containing ILD, the resulting gain is 2-2.5 dB for all groups. The only marked effect of the interaural difference in overall presentation level is a reduction of the gain due to ILD when the level at the ear with the better S/N ratio is decreased. This implies that an optimal monaural hearing aid (with a moderate gain) will hardly interfere with unmasking through ITD, while it may increase the gain due to ILD by preventing or diminishing threshold effects.

7.
The present study assesses the ability of four listeners with high-frequency, bilateral symmetrical sensorineural hearing loss to localize and detect a broadband click train in the frontal-horizontal plane, in quiet and in the presence of a white noise. The speaker array and stimuli are identical to those described by Lorenzi et al. (in press). The results show that: (1) localization performance is only slightly poorer in hearing-impaired listeners than in normal-hearing listeners when noise is at 0 deg azimuth, (2) localization performance begins to decrease at higher signal-to-noise ratios for hearing-impaired listeners than for normal-hearing listeners when noise is at +/- 90 deg azimuth, and (3) the performance of hearing-impaired listeners is less consistent when noise is at +/- 90 deg azimuth than at 0 deg azimuth. The effects of a high-frequency hearing loss were also studied by measuring the ability of normal-hearing listeners to localize the low-pass filtered version of the clicks. The data reproduce the effects of noise on three out of the four hearing-impaired listeners when noise is at 0 deg azimuth. They reproduce the effects of noise on only two out of the four hearing-impaired listeners when noise is at +/- 90 deg azimuth. The additional effects of a low-frequency hearing loss were investigated by attenuating the low-pass filtered clicks and the noise by 20 dB. The results show that attenuation does not strongly affect localization accuracy for normal-hearing listeners. Measurements of the clicks' detectability indicate that the hearing-impaired listeners who show the poorest localization accuracy also show the poorest ability to detect the clicks. The inaudibility of high frequencies, "distortions," and reduced detectability of the signal are assumed to have caused the poorer-than-normal localization accuracy for hearing-impaired listeners.

8.
9.
Young normal-hearing listeners, elderly normal-hearing listeners, and elderly hearing-impaired listeners were tested on a variety of phonetic identification tasks. Where identity was cued by stimulus duration, the elderly hearing-impaired listeners evidenced normal identification functions. On a task in which there were multiple cues to vowel identity, performance was also normal. On a /b d g/ identification task in which the starting frequency of the second formant was varied, performance was abnormal for both the elderly hearing-impaired listeners and the elderly normal-hearing listeners. We conclude that errors in phonetic identification among elderly hearing-impaired listeners with mild to moderate, sloping hearing impairment do not stem from abnormalities in processing stimulus duration. The results with the /b d g/ continuum suggest that one factor underlying errors may be an inability to base identification on dynamic spectral information when relatively static information, which is normally characteristic of a phonetic segment, is unavailable.

10.
The ability to discriminate between sounds with different spectral shapes was evaluated for normal-hearing and hearing-impaired listeners. Listeners discriminated between a standard stimulus and a signal stimulus in which half of the standard components were decreased in level and half were increased in level. In one condition, the standard stimulus was the sum of six equal-amplitude tones (equal-SPL), and in another the standard stimulus was the sum of six tones at equal sensation levels re: audiometric thresholds for individual subjects (equal-SL). Spectral weights were estimated in conditions where the amplitudes of the individual tones were perturbed slightly on every presentation. Sensitivity was similar in all conditions for normal-hearing and hearing-impaired listeners. The presence of perturbation and equal-SL components increased thresholds for both groups, but only small differences in weighting strategy were measured between the groups depending on whether the equal-SPL or equal-SL condition was tested. The average data suggest that normal-hearing listeners may rely more on the central components of the spectrum whereas hearing-impaired listeners may have been more likely to use the edges. However, individual weighting functions were quite variable, especially for the hearing-impaired listeners, perhaps reflecting difficulty in processing changes in spectral shape due to hearing loss. Differences in weighting strategy without changes in sensitivity suggest that factors other than spectral weights, such as internal noise or difficulty encoding a reference stimulus, also may dominate performance.

11.
In a multiple-observation sample-discrimination experiment, normal-hearing (NH) and hearing-impaired (HI) listeners heard two multitone complexes each consisting of six simultaneous tones with nominal frequencies spaced evenly on an ERB(N) logarithmic scale between 257 and 6930 Hz. On every trial, the frequency of each tone was sampled from a normal distribution centered near its nominal frequency. In one interval of a 2IFC task, all tones were sampled from distributions lower in mean frequency and in the other interval from distributions higher in mean frequency. Listeners had to identify the latter interval. Decision weights were obtained from multiple regression analysis of the between-interval frequency differences for each tone and listeners' responses. Frequency difference limens (an index of sensorineural resolution) and decision weights for each tone were used to predict the sensitivity of different decision-theoretic models. Results indicate that low-frequency tones were given much greater perceptual weight than high-frequency tones by both groups of listeners. This tendency increased as hearing loss increased and as sensorineural resolution decreased, resulting in significantly less efficient weighting strategies for the HI listeners. Overall, results indicate that HI listeners integrated frequency information less optimally than NH listeners, even after accounting for differences in sensorineural resolution.

12.
The ability to discriminate between sounds with different spectral shapes was evaluated for normal-hearing and hearing-impaired listeners. Listeners detected a 920-Hz tone added in phase to a single component of a standard consisting of the sum of five tones spaced equally on a logarithmic frequency scale ranging from 200 to 4200 Hz. An overall level randomization of 10 dB was either present or absent. In one subset of conditions, the no-perturbation conditions, the standard stimulus was the sum of equal-amplitude tones. In the perturbation conditions, the amplitudes of the components within a stimulus were randomly altered on every presentation. For both perturbation and no-perturbation conditions, thresholds for the detection of the 920-Hz tone were measured to compare sensitivity to changes in spectral shape between normal-hearing and hearing-impaired listeners. To assess whether hearing-impaired listeners relied on different regions of the spectrum to discriminate between sounds, spectral weights were estimated from the perturbed standards by correlating the listener's responses with the level differences per component across two intervals of a two-alternative forced-choice task. Results showed that hearing-impaired and normal-hearing listeners had similar sensitivity to changes in spectral shape. On average, across-frequency correlation functions also were similar for both groups of listeners, suggesting that as long as all components are audible and well separated in frequency, hearing-impaired listeners can use information across frequency as well as normal-hearing listeners. Analysis of the individual data revealed, however, that normal-hearing listeners may be better able to adopt optimal weighting schemes. This conclusion is only tentative, as differences in internal noise may need to be considered to interpret the results obtained from weighting studies between normal-hearing and hearing-impaired listeners.

13.
To examine the association between frequency resolution and speech recognition, auditory filter parameters and stop-consonant recognition were determined for 9 normal-hearing and 24 hearing-impaired subjects. In an earlier investigation, the relationship between stop-consonant recognition and the articulation index (AI) had been established on normal-hearing listeners. Based on AI predictions, speech-presentation levels for each subject in this experiment were selected to obtain a wide range of recognition scores. This strategy provides a method of interpreting speech-recognition performance among listeners who vary in magnitude and configuration of hearing loss by assuming that conditions which yield equal audible spectra will result in equivalent performance. It was reasoned that an association between frequency resolution and consonant recognition may be more appropriately estimated if hearing-impaired listeners' performance was measured under conditions that assured equivalent audibility of the speech stimuli. Derived auditory filter parameters indicated that filter widths and dynamic ranges were strongly associated with threshold. Stop-consonant recognition scores for most hearing-impaired listeners were not significantly poorer than predicted by the AI model. Furthermore, differences between observed recognition scores and those predicted by the AI were not associated with auditory filter characteristics, suggesting that frequency resolution and speech recognition may appear to be associated primarily because both are degraded by threshold elevation.

14.
Two experiments were conducted to assess whether hearing-impaired listeners have a reduced ability to process suprathreshold complex patterns of modulation applied to a 4-kHz sinusoidal carrier. Experiment 1 examined the ability to "hear out" the modulation frequency of the central component of a three-component modulator, using the method described by Sek and Moore [J. Acoust. Soc. Am. 113, 2801-2811 (2003)]. Scores were around 70-80% correct when the components in the three-component modulator were widely spaced and when the frequencies of the target and comparison differed sufficiently, but decreased when the components in the modulator were closely spaced. Experiment 2 examined the ability to hear a change in the relative phase of the components in a three-component modulator with harmonically spaced components. The frequency of the central component, fc, was either 50 or 100 Hz. Scores were about 70% correct when the component spacing was ≤ 0.5fc, but decreased markedly for greater spacings. Performance was only slightly impaired by randomizing the overall modulation depth from one stimulus to the next. For both experiments, performance was only slightly worse than for normally hearing listeners, indicating that cochlear hearing loss does not markedly affect the ability to process suprathreshold complex patterns of modulation.

15.
Speakers may adapt the phonetic details of their productions when they anticipate perceptual difficulty or comprehension failure on the part of a listener. Previous research suggests that a speaking style known as clear speech is more intelligible overall than casual, conversational speech for a variety of listener populations. However, it is unknown whether clear speech improves the intelligibility of fricative consonants specifically, or how its effects on fricative perception might differ depending on listener population. The primary goal of this study was to determine whether clear speech enhances fricative intelligibility for normal-hearing listeners and listeners with simulated impairment. Two experiments measured babble signal-to-noise ratio thresholds for fricative minimal pair distinctions for 14 normal-hearing listeners and 14 listeners with simulated sloping, recruiting impairment. Results indicated that clear speech helped both groups overall. However, for impaired listeners, reliable clear speech intelligibility advantages were not found for non-sibilant pairs. Correlation analyses comparing acoustic and perceptual data indicated that a shift of energy concentration toward higher frequency regions and greater source strength contributed to the clear speech effect for normal-hearing listeners. Correlations between acoustic and perceptual data were less consistent for listeners with simulated impairment, and suggested that lower-frequency information may play a role.

16.
This investigation examined whether listeners with mild-moderate sensorineural hearing impairment have a deficit in the ability to integrate synchronous spectral information in the perception of speech. In stage 1, the bandwidth of filtered speech centered either on 500 or 2500 Hz was varied adaptively to determine the width required for approximately 15%-25% correct recognition. In stage 2, these criterion bandwidths were presented simultaneously and percent correct performance was determined in fixed block trials. Experiment 1 tested normal-hearing listeners in quiet and in masking noise. The main findings were (1) there was no correlation between the criterion bandwidths at 500 and 2500 Hz; (2) listeners achieved a high percent correct in stage 2 (approximately 80%); and (3) performance in quiet and noise was similar. Experiment 2 tested listeners with mild-moderate sensorineural hearing impairment. The main findings were (1) the impaired listeners showed high variability in stage 1, with some listeners requiring narrower and others requiring wider bandwidths than normal, and (2) hearing-impaired listeners achieved percent correct performance in stage 2 that was comparable to normal. The results indicate that listeners with mild-moderate sensorineural hearing loss do not have an essential deficit in the ability to integrate across-frequency speech information.

17.
The word recognition ability of 4 normal-hearing and 13 cochlearly hearing-impaired listeners was evaluated. Filtered and unfiltered speech in quiet and in noise were presented monaurally through headphones. The noise varied over listening situations with regard to spectrum, level, and temporal envelope. Articulation index theory was applied to predict the results. Two calculation methods were used, both based on the ANSI S3.5-1969 20-band method [S3.5-1969 (American National Standards Institute, New York)]. Method I was almost identical to the ANSI method. Method II included a level- and hearing-loss-dependent calculation of masking of stationary and on-off gated noise signals and of self-masking of speech. Method II provided the best prediction capability, and it is concluded that speech intelligibility of cochlearly hearing-impaired listeners may also, to a first approximation, be predicted from articulation index theory.
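For readers unfamiliar with the articulation index, the textbook form of the 20-band calculation can be sketched as follows. This is a generic illustration of the ANSI S3.5-1969 approach (equal band weights, speech peaks assumed 12 dB above RMS, 30-dB speech dynamic range), not a reconstruction of the paper's Method I or II:

```python
def articulation_index(snr_db_per_band):
    """Simplified 20-band articulation index (after ANSI S3.5-1969).
    Each of 20 equally weighted bands contributes up to 0.05; band
    audibility is (SNR + 12)/30, clipped to [0, 1].  Illustrative
    sketch only; the study's Method II adds masking corrections."""
    assert len(snr_db_per_band) == 20
    total = 0.0
    for snr in snr_db_per_band:
        audibility = min(max((snr + 12.0) / 30.0, 0.0), 1.0)
        total += audibility / 20.0   # equal importance weight per band
    return total

print(articulation_index([18.0] * 20))   # ~1.0: speech fully audible
print(articulation_index([-12.0] * 20))  # 0.0: speech fully masked
```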

18.
The bandwidths for summation at threshold were measured for subjects with normal hearing and subjects with sensorineural hearing loss. Thresholds in quiet and in the presence of a masking noise were measured for complex stimuli consisting of 1 to 40 pure-tone components spaced 20 Hz apart. The single component condition consisted of a single pure tone at 1100 Hz; additional components were added below this frequency, in a replication of the Gässler [Acustica 4, 408-414 (1954)] procedure. For the normal subjects, thresholds increased approximately 3 dB per doubling of bandwidth for signal bandwidths exceeding the critical bandwidth. This slope was less for the hearing-impaired subjects. Summation bandwidths, as estimated from two-line fits, were wider for the hearing-impaired than for the normal subjects. These findings provide evidence that hearing-impaired subjects integrate sound energy over a wider-than-normal frequency range for the detection of complex signals. A second experiment used stimuli similar to those of Spiegel [J. Acoust. Soc. Am. 66, 1356-1363 (1979)], and added components both above and below the frequency of the initial component. Using these stimuli, the slope of the threshold increase beyond the critical bandwidth was approximately 1.5 dB per doubling of bandwidth, thus replicating the Spiegel (1979) experiment. It is concluded that the differences between the Gässler (1954) and Spiegel (1979) studies were due to the different frequency content of the stimuli used in each study. Based upon the present results, it would appear that the slope of threshold increase is dependent upon the direction of signal expansion, and the size of the critical bands into which the signal is expanded.

19.
A digital processing method is described for altering spectral contrast (the difference in amplitude between spectral peaks and valleys) in natural utterances. Speech processed with programs implementing the contrast alteration procedure was presented to listeners with moderate to severe sensorineural hearing loss. The task was a three alternative (/b/, /d/, or /g/) stop consonant identification task for consonants at a fixed location in short nonsense utterances. Overall, tokens with enhanced contrast showed moderate gains in percentage correct stop consonant identification when compared to unaltered tokens. Conversely, reducing spectral contrast generally reduced percent correct stop consonant identification. Contrast alteration effects were inconsistent for utterances containing /d/. The observed contrast effects also interacted with token intelligibility.

20.
Two signal-processing algorithms, designed to separate the voiced speech of two talkers speaking simultaneously at similar intensities in a single channel, were compared and evaluated. Both algorithms exploit the harmonic structure of voiced speech and require a difference in fundamental frequency (F0) between the voices to operate successfully. One attenuates the interfering voice by filtering the cepstrum of the combined signal. The other uses the method of harmonic selection [T. W. Parsons, J. Acoust. Soc. Am. 60, 911-918 (1976)] to resynthesize the target voice from fragmentary spectral information. Two perceptual evaluations were carried out. One involved the separation of pairs of vowels synthesized on static F0's; the other involved the recovery of consonant-vowel (CV) words masked by a synthesized vowel. Normal-hearing listeners and four listeners with moderate-to-severe, bilateral, symmetrical, sensorineural hearing impairments were tested. All listeners showed increased accuracy of identification when the target voice was enhanced by processing. The vowel-identification data show that intelligibility enhancement is possible over a range of F0 separations between the target and interfering voice. The recovery of CV words demonstrates that the processing is valid not only for spectrally static vowels but also for less intense time-varying voiced consonants. The results for the impaired listeners suggest that the algorithms may be applicable as components of a noise-reduction system in future digital signal-processing hearing aids. The vowel-separation test, and subjective listening, suggest that harmonic selection, which is the more computationally expensive method, produces the more effective voice separation.
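The cepstrum-filtering idea mentioned above can be illustrated in a few lines: take the real cepstrum of an analysis frame and zero the rahmonic peaks at multiples of the interferer's pitch period, so that the harmonic ripple contributed by the interfering voice is attenuated. This is a deliberately simplified toy sketch under stated assumptions (known interferer F0, single frame, liftering of the log-magnitude only); the function name and parameters are ours, not the published algorithm:

```python
import numpy as np

def suppress_interferer_cepstrum(frame, fs, interferer_f0, notch=2):
    """Toy cepstral-liftering sketch: attenuate a voice with fundamental
    frequency interferer_f0 (Hz) by zeroing the cepstral (rahmonic)
    peaks at multiples of its pitch period.  Illustration only."""
    n = len(frame)
    spectrum = np.fft.rfft(frame)
    log_mag = np.log(np.abs(spectrum) + 1e-12)
    cep = np.fft.irfft(log_mag, n)          # real cepstrum (symmetric)
    q0 = int(round(fs / interferer_f0))     # pitch period in samples
    for k in range(1, 4):                   # first few rahmonics
        q = k * q0
        if q >= n // 2:
            break
        cep[q - notch:q + notch + 1] = 0.0            # rahmonic peak
        cep[n - q - notch:n - q + notch + 1] = 0.0    # mirror image
    filtered_log_mag = np.fft.rfft(cep)[: len(spectrum)].real
    # Rebuild the frame from the liftered magnitude and original phase.
    new_spec = np.exp(filtered_log_mag) * np.exp(1j * np.angle(spectrum))
    return np.fft.irfft(new_spec, n)
```

A practical system would apply this frame-by-frame with overlap-add and would need to track each talker's F0, which is where much of the real difficulty lies.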
