Similar Documents
20 similar documents found (search time: 15 ms).
1.
A conditional-on-a-single-stimulus (COSS) analysis procedure [B. G. Berg, J. Acoust. Soc. Am. 86, 1743-1746 (1989)] was used to estimate how well normal-hearing and hearing-impaired listeners selectively attend to individual spectral components of a broadband signal in a level discrimination task. On each trial, two multitone complexes consisting of six octave frequencies from 250 to 8000 Hz were presented to listeners. The levels of the individual tones were chosen independently and at random on each presentation. The target tone was selected, within a block of trials, as the 250-, 1000-, or 4000-Hz component. On each trial, listeners were asked to indicate which of the two complex sounds contained the higher level target. As a group, normal-hearing listeners exhibited greater selectivity than hearing-impaired listeners to the 250-Hz target, while hearing-impaired listeners showed greater selectivity than normal-hearing listeners to the 4000-Hz target, which is in the region of their hearing loss. Both groups of listeners displayed large variability in their ability to selectively weight the 1000-Hz target. Trial-by-trial analysis showed a decrease in weighting efficiency with increasing frequency for normal-hearing listeners, but a relatively constant weighting efficiency across frequency for hearing-impaired listeners. Interestingly, hearing-impaired listeners selectively weighted the 4000-Hz target, which was in the region of their hearing loss, more efficiently than did the normal-hearing listeners.
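The trial-by-trial weighting analysis described above can be illustrated with a short simulation. The sketch below is not the exact COSS procedure of Berg (1989), which fits psychometric functions conditioned on each component's level; it uses the closely related correlational estimate of decision weights, and all data and variable names are invented for illustration (Python):

import numpy as np

rng = np.random.default_rng(0)
n_trials, n_components = 1000, 6          # six octave-spaced tones, 250-8000 Hz

# Per-component level difference between the two intervals on each trial, in dB.
# In the experiment these come from the random level draws; here they are simulated.
dL = rng.normal(0.0, 2.0, size=(n_trials, n_components))

# Simulated observer: weights the target component (index 2) most heavily, adds
# internal noise, and reports the interval with the larger weighted level sum.
true_w = np.array([0.1, 0.2, 1.0, 0.3, 0.2, 0.1])
resp = (dL @ true_w + rng.normal(0.0, 2.0, size=n_trials) > 0).astype(float)

# Relative weight estimate: correlation between each component's level difference
# and the binary response, normalized so the absolute weights sum to one.
w_hat = np.array([np.corrcoef(dL[:, k], resp)[0, 1] for k in range(n_components)])
w_hat /= np.abs(w_hat).sum()
print(np.round(w_hat, 2))

With an estimate of this kind, selective listening shows up as a weight profile peaked on the target component, and weighting efficiency compares the obtained profile with the ideal one that puts all weight on the target.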

2.
A functional simulation of hearing loss was evaluated in its ability to reproduce the temporal modulation transfer functions (TMTFs) for nine listeners with mild to profound sensorineural hearing loss. Each hearing loss was simulated in a group of three age-matched normal-hearing listeners through spectrally shaped masking noise or a combination of masking noise and multiband expansion. TMTFs were measured for both groups of listeners using a broadband noise carrier as a function of modulation rate in the range 2 to 1024 Hz. The TMTFs were fit with a lowpass filter function that provided estimates of overall modulation-depth sensitivity and modulation cutoff frequency. Although the simulations were capable of accurately reproducing the threshold elevations of the hearing-impaired listeners, they were not successful in reproducing the TMTFs. On average, the simulations resulted in lower sensitivity and higher cutoff frequency than were observed in the TMTFs of the hearing-impaired listeners. Discrepancies in performance between listeners with real and simulated hearing loss are possibly related to inaccuracies in the simulation of recruitment.
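The lowpass fit mentioned above can be sketched as a two-parameter curve fit. The exact parameterization used in the study is not given in the abstract; the version below assumes a common first-order lowpass form in which the modulation threshold (in 20 log10 m) rises by 3 dB at the cutoff frequency, and the data points are hypothetical:

import numpy as np
from scipy.optimize import curve_fit

def tmtf_lowpass(f, peak_sens_db, fc):
    """First-order lowpass TMTF: threshold in 20*log10(m), where more negative
    values mean better modulation-depth sensitivity."""
    return peak_sens_db + 10.0 * np.log10(1.0 + (f / fc) ** 2)

# Hypothetical thresholds (dB, 20*log10 m) at the modulation rates used (2-1024 Hz).
rates = np.array([2, 4, 8, 16, 32, 64, 128, 256, 512, 1024], dtype=float)
thresholds = np.array([-26, -26, -25, -24, -22, -18, -14, -9, -5, -2], dtype=float)

(peak, fc), _ = curve_fit(tmtf_lowpass, rates, thresholds, p0=(-25.0, 60.0))
print(f"overall sensitivity ~ {peak:.1f} dB, cutoff ~ {fc:.0f} Hz")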

3.
An analysis of psychophysical tuning curves in normal and pathological ears
Simultaneous psychophysical tuning curves were obtained from normal-hearing and hearing-impaired listeners, using probe tones that were either at similar sound pressure levels or at similar sensation levels for the two types of listeners. Tuning curves from the hearing-impaired listeners were flat, erratic, broad, and/or inverted, depending upon the frequency region of the probe tone and the frequency characteristics of the hearing loss. Tuning curves from the normal-hearing listeners at low SPLs were sharp, as expected; tuning curves at high SPLs were discontinuous. An analysis of high-SPL tuning curves suggests that tuning curves from normal-hearing listeners reflect low-pass filter characteristics instead of the sharp bandpass filter characteristics seen with low-SPL probe tones. Tuning curves from hearing-impaired listeners at high-SPL probe levels appear to reflect similar low-pass filter characteristics, but with much more gradual high-frequency slopes than in the normal ear. This appeared as abnormal downward spread of masking. Relatively good temporal resolution and broader tuning mechanisms were proposed to explain inverted tuning curves in the hearing-impaired listeners.

4.
Thresholds for 2-kHz sinusoidal signals were determined in the presence of a notched-noise masker, for six normal-hearing listeners and 12 listeners with cochlear hearing losses. Following Patterson and Nimmo-Smith [J. Acoust. Soc. Am. 67, 229-245 (1980)], conditions were used where the notch was placed both symmetrically and asymmetrically about the signal frequency. The auditory filter shape for both the low- and high-frequency side of the filter was calculated using the rounded-exponential form of the filter. In six hearing-impaired listeners, the auditory filter shape showed a shallow low-frequency skirt indicating pronounced susceptibility to the upward spread of masking. In two hearing-impaired listeners, the filter shape showed a shallow high-frequency skirt, indicating pronounced susceptibility to the downward spread of masking. Two other listeners with mild threshold losses had steeper and more symmetric filters than normal, suggesting either a small conductive loss or an attenuation factor of sensorineural origin not associated with a degradation of frequency resolution. In the remaining two listeners, the auditory filter had too little selectivity for its shape to be reliably determined.
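A minimal sketch of the rounded-exponential (roex) filter shape referred to above, with independent slope parameters for the lower and upper skirts; the parameter values are hypothetical and chosen only to contrast a normal-like filter with one having the shallow low-frequency skirt described for several of the impaired listeners:

import numpy as np

def roex_weight(f, fc, p_low, p_high):
    """Rounded-exponential, roex(p), auditory-filter weighting with separate
    lower- and upper-side slope parameters p_low and p_high."""
    g = np.abs(f - fc) / fc                      # normalized frequency deviation
    p = np.where(f < fc, p_low, p_high)
    return (1.0 + p * g) * np.exp(-p * g)

def roex_erb(fc, p_low, p_high):
    """Equivalent rectangular bandwidth (Hz) of the asymmetric roex(p) filter:
    each side integrates to 2*fc/p."""
    return 2.0 * fc * (1.0 / p_low + 1.0 / p_high)

fc = 2000.0
print("normal-like ERB:", roex_erb(fc, p_low=30.0, p_high=30.0))          # ~267 Hz
print("shallow lower skirt ERB:", roex_erb(fc, p_low=10.0, p_high=30.0))  # ~533 Hz

In the notched-noise method itself, p_low and p_high are obtained by fitting signal thresholds predicted from the noise passing through this weighting function to the thresholds measured at each notch placement.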

5.
Psychophysical estimates of cochlear function suggest that normal-hearing listeners exhibit a compressive basilar-membrane (BM) response. Listeners with moderate to severe sensorineural hearing loss may exhibit a linearized BM response along with reduced gain, suggesting the loss of an active cochlear mechanism. This study investigated how the BM response changes with increasing hearing loss by comparing psychophysical measures of BM compression and gain for normal-hearing listeners with those for listeners who have mild to moderate sensorineural hearing loss. Data were collected from 16 normal-hearing listeners and 12 ears from 9 hearing-impaired listeners. The forward masker level required to mask a fixed low-level, 4000-Hz signal was measured as a function of the masker-signal interval using a masker frequency of either 2200 or 4000 Hz. These plots are known as temporal masking curves (TMCs). BM response functions derived from the TMCs showed a systematic reduction in gain with degree of hearing loss. Contrary to current thinking, however, no clear relationship was found between maximum compression and absolute threshold.
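A sketch of how a BM input/output function can be read off a pair of temporal masking curves, under the usual assumption that the off-frequency (2200-Hz) masker is processed linearly at the 4000-Hz place while the on-frequency masker is compressed; the TMC values below are invented for illustration:

import numpy as np

# Hypothetical TMC data: masker level (dB SPL) needed to just mask the fixed
# low-level 4000-Hz signal, as a function of masker-signal interval (ms).
intervals = np.array([20, 30, 40, 50, 60, 70, 80], dtype=float)
tmc_on  = np.array([45, 57, 70, 82, 92, 100, 107], dtype=float)   # 4000-Hz masker
tmc_off = np.array([40, 45, 50, 55, 60, 65, 70], dtype=float)     # 2200-Hz masker (linear reference)

# At each interval the two maskers are equally effective internally, so pairing
# their levels traces the BM input/output function at the signal place:
#   input  = on-frequency masker level, output = off-frequency masker level.
io_input, io_output = tmc_on, tmc_off

# Local slope of the derived I/O function; values well below 1 indicate compression,
# and the overall vertical offset relative to a linear response reflects gain.
slope = np.gradient(io_output, io_input)
print("compression exponents:", np.round(slope, 2))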

6.
Two experiments are reported which explore variables that may complicate the interpretation of phoneme boundary data from hearing-impaired listeners. Fourteen synthetic consonant-vowel syllables comprising a /ba-da-ga/ continuum were used as stimuli. The first experiment examined the influence of presentation level and ear of presentation in normal-hearing subjects. Only small differences in the phoneme boundaries and labeling functions were observed between ears and across presentation levels. Thus monaural presentation and relatively high signal level do not appear to be complicating factors in research with hearing-impaired listeners, at least for these stimuli. The second experiment described a test procedure for obtaining phoneme boundaries in some hearing-impaired listeners that controlled for between-subject sources of variation unrelated to hearing impairment and delineated the effects of spectral shaping imposed by the hearing impairment on the labeling functions. Labeling data were obtained from unilaterally hearing-impaired listeners under three test conditions: in the normal ear without any signal distortion; in the normal ear listening through a spectrum shaper that was set to match the subject's suprathreshold audiometric configuration; and in the impaired ear. The reduction in the audibility of the distinctive acoustic/phonetic cues seemed to explain all or part of the effects of the hearing impairment on the labeling functions of some subjects. For many other subjects, however, other forms of distortion in addition to reduced audibility seemed to affect their labeling behavior.

7.
Temporal masking curves were obtained from 12 normal-hearing and 16 hearing-impaired listeners using 200-ms, 1000-Hz pure-tone maskers and 20-ms, 1000-Hz fixed-level probe tones. For the delay times used here (greater than 40 ms), temporal masking curves obtained from both groups can be well described by an exponential function with a single level-independent time constant for each listener. Normal-hearing listeners demonstrated time constants that ranged between 37 and 67 ms, with a mean of 50 ms. Most hearing-impaired listeners, with significant hearing loss at the probe frequency, demonstrated longer time constants (range 58-114 ms) than those obtained from normal-hearing listeners. Time constants were found to grow exponentially with hearing loss according to the function tau = 52·e^(0.011·HL), when the slope of the growth of masking is unity. The longest individual time constant was larger than normal by a factor of 2.3 for a hearing loss of 52 dB. The steep slopes of the growth of masking functions typically observed at long delay times in hearing-impaired listeners' data appear to be a direct result of longer time constants. When iterative fitting procedures included a slope parameter, the slopes of the growth of masking from normal-hearing listeners varied around unity, while those from hearing-impaired listeners tended to be less (flatter) than normal. Predictions from the results of these fixed-probe-level experiments are consistent with the results of previous fixed-masker-level experiments, and they indicate that deficiencies in the ability to detect sequential stimuli should be considerable in hearing-impaired listeners, partially because of extended time constants, but mostly because forward masking involves a recovery process that depends upon the sensory response evoked by the masking stimulus. Large sensitivity losses reduce the sensory response to high SPL maskers so that the recovery process is slower, much like the recovery process for low-level stimuli in normal-hearing listeners.
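The reported relation between time constant and hearing loss can be written out directly; the recovery function below is only an illustrative single-exponential form, not the exact fitting equation used in the study:

import numpy as np

def time_constant_ms(hearing_loss_db):
    """Time constant of forward-masking recovery versus hearing loss (dB HL),
    using the relation reported above: tau = 52 * e^(0.011 * HL) milliseconds."""
    return 52.0 * np.exp(0.011 * np.asarray(hearing_loss_db, dtype=float))

def amount_of_masking_db(delay_ms, m0_db, tau_ms):
    """Illustrative single-exponential recovery: the amount of forward masking
    (dB above quiet threshold) decays with the listener-specific time constant."""
    return m0_db * np.exp(-np.asarray(delay_ms, dtype=float) / tau_ms)

print(time_constant_ms(0))     # ~52 ms for normal hearing
print(time_constant_ms(52))    # ~92 ms predicted for a 52-dB loss
print(amount_of_masking_db([40, 80, 120], m0_db=30.0, tau_ms=92.0))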

8.
The ability to discriminate between sounds with different spectral shapes was evaluated for normal-hearing and hearing-impaired listeners. Listeners discriminated between a standard stimulus and a signal stimulus in which half of the standard components were decreased in level and half were increased in level. In one condition, the standard stimulus was the sum of six equal-amplitude tones (equal-SPL), and in another the standard stimulus was the sum of six tones at equal sensation levels re: audiometric thresholds for individual subjects (equal-SL). Spectral weights were estimated in conditions where the amplitudes of the individual tones were perturbed slightly on every presentation. Sensitivity was similar in all conditions for normal-hearing and hearing-impaired listeners. The presence of perturbation and equal-SL components increased thresholds for both groups, but only small differences in weighting strategy were measured between the groups depending on whether the equal-SPL or equal-SL condition was tested. The average data suggest that normal-hearing listeners may rely more on the central components of the spectrum whereas hearing-impaired listeners may have been more likely to use the edges. However, individual weighting functions were quite variable, especially for the hearing-impaired listeners, perhaps reflecting difficulty in processing changes in spectral shape due to hearing loss. Differences in weighting strategy without changes in sensitivity suggest that factors other than spectral weights, such as internal noise or difficulty encoding a reference stimulus, also may dominate performance.

9.
This study considered consequences of sensorineural hearing loss in ten listeners. The characterization of individual hearing loss was based on psychoacoustic data addressing audiometric pure-tone sensitivity, cochlear compression, frequency selectivity, temporal resolution, and intensity discrimination. In the experiments it was found that listeners with comparable audiograms can show very different results in the supra-threshold measures. In an attempt to account for the observed individual data, a model of auditory signal processing and perception [Jepsen et al., J. Acoust. Soc. Am. 124, 422-438 (2008)] was used as a framework. The parameters of the cochlear processing stage of the model were adjusted to account for behaviorally estimated individual basilar-membrane input-output functions and the audiogram, from which the amounts of inner hair-cell and outer hair-cell losses were estimated as a function of frequency. All other model parameters were left unchanged. The predictions showed a reasonably good agreement with the measured individual data in the frequency selectivity and forward masking conditions while the variation of intensity discrimination thresholds across listeners was underestimated by the model. The model and the associated parameters for individual hearing-impaired listeners might be useful for investigating effects of individual hearing impairment in more complex conditions, such as speech intelligibility in noise.

10.
Masking period patterns (MPPs) were measured in listeners with normal and impaired hearing using amplitude-modulated tonal maskers and short tonal probes. The frequency of the masker was either the same as the frequency of the probe (on-frequency masking) or was one octave below the frequency of the probe (off-frequency masking). In experiment 1, MPPs were measured for listeners with normal hearing using different masker levels. Carrier frequencies of 3 and 6 kHz were used for the masker. The probe had a frequency of 6 kHz. For all masker levels, the off-frequency MPPs exhibited deeper and longer valleys compared with the on-frequency MPPs. Hearing-impaired listeners were tested in experiment 2. For some hearing-impaired subjects, masker frequencies of 1.5 kHz and 3 kHz were paired with a probe frequency of 3 kHz. MPPs measured for listeners with hearing loss had similar shapes for on- and off-frequency maskers. It was hypothesized that the shapes of MPPs reflect nonlinear processing at the level of the basilar membrane in normal hearing and more linear processing in impaired hearing. A model assuming different cochlear gains for normal versus impaired hearing and similar parameters of the temporal integrator for both groups of listeners successfully predicted the MPPs.

11.
The purpose of this experiment was to determine the applicability of the Articulation Index (AI) model for characterizing the speech recognition performance of listeners with mild-to-moderate hearing loss. Performance-intensity functions were obtained from five normal-hearing listeners and 11 hearing-impaired listeners using a closed-set nonsense syllable test for two frequency responses (uniform and high-frequency emphasis). For each listener, the fitting constant Q of the nonlinear transfer function relating AI and speech recognition was estimated. Results indicated that the function mapping AI onto performance was approximately the same for normal and hearing-impaired listeners with mild-to-moderate hearing loss and high speech recognition scores. For a hearing-impaired listener with poor speech recognition ability, the AI procedure was a poor predictor of performance. The AI procedure as presently used is inadequate for predicting performance of individuals with reduced speech recognition ability and should be used conservatively in applications predicting optimal or acceptable frequency response characteristics for hearing-aid amplification systems.
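The abstract does not spell out the nonlinear transfer function; one common choice in articulation-index work maps AI to proportion correct as P = 1 - 10^(-AI/Q), with Q fitted per listener and speech material. The sketch below fits Q under that assumption to hypothetical performance-intensity data:

import numpy as np
from scipy.optimize import curve_fit

def ai_to_score(ai, Q):
    """Assumed AI transfer function: proportion correct = 1 - 10**(-AI/Q)."""
    return 1.0 - 10.0 ** (-np.asarray(ai, dtype=float) / Q)

# Hypothetical data: AI computed for each presentation level / frequency response,
# and the corresponding nonsense-syllable recognition scores for one listener.
ai_values = np.array([0.1, 0.2, 0.3, 0.45, 0.6, 0.8])
scores    = np.array([0.28, 0.49, 0.63, 0.78, 0.87, 0.95])

(Q_hat,), _ = curve_fit(ai_to_score, ai_values, scores, p0=(0.7,))
print(f"fitted Q = {Q_hat:.2f}")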

12.
The present study assesses the ability of four listeners with high-frequency, bilateral symmetrical sensorineural hearing loss to localize and detect a broadband click train in the frontal-horizontal plane, in quiet and in the presence of a white noise. The speaker array and stimuli are identical to those described by Lorenzi et al. (in press). The results show that: (1) localization performance is only slightly poorer in hearing-impaired listeners than in normal-hearing listeners when noise is at 0 deg azimuth, (2) localization performance begins to decrease at higher signal-to-noise ratios for hearing-impaired listeners than for normal-hearing listeners when noise is at +/- 90 deg azimuth, and (3) the performance of hearing-impaired listeners is less consistent when noise is at +/- 90 deg azimuth than at 0 deg azimuth. The effects of a high-frequency hearing loss were also studied by measuring the ability of normal-hearing listeners to localize the low-pass filtered version of the clicks. The data reproduce the effects of noise on three out of the four hearing-impaired listeners when noise is at 0 deg azimuth. They reproduce the effects of noise on only two out of the four hearing-impaired listeners when noise is at +/- 90 deg azimuth. The additional effects of a low-frequency hearing loss were investigated by attenuating the low-pass filtered clicks and the noise by 20 dB. The results show that attenuation does not strongly affect localization accuracy for normal-hearing listeners. Measurements of the clicks' detectability indicate that the hearing-impaired listeners who show the poorest localization accuracy also show the poorest ability to detect the clicks. The inaudibility of high frequencies, "distortions," and reduced detectability of the signal are assumed to have caused the poorer-than-normal localization accuracy for hearing-impaired listeners.

13.
This study examined vowel perception by young normal-hearing (YNH) adults, in various listening conditions designed to simulate mild-to-moderate sloping sensorineural hearing loss. YNH listeners were individually age- and gender-matched to young hearing-impaired (YHI) listeners tested in a previous study [Richie et al., J. Acoust. Soc. Am. 114, 2923-2933 (2003)]. YNH listeners were tested in three conditions designed to create equal audibility with the YHI listeners: a low signal level with and without a simulated hearing loss, and a high signal level with a simulated hearing loss. Listeners discriminated changes in synthetic vowel tokens /ɪ e ɛ ɑ æ/ when F1 or F2 varied in frequency. Comparison of YNH with YHI results failed to reveal significant differences between groups in terms of performance on vowel discrimination, in conditions of similar audibility achieved by using noise masking to elevate the hearing thresholds of the YNH listeners and by applying frequency-specific gain to the YHI listeners. Further, analysis of learning curves suggests that while the YHI listeners completed an average of 46% more test blocks than YNH listeners, the YHI achieved a level of discrimination similar to that of the YNH within the same number of blocks. Apparently, when age and gender are closely matched between young hearing-impaired and normal-hearing adults, performance on vowel tasks may be explained by audibility alone.

14.
This study investigated the effect of mild-to-moderate sensorineural hearing loss on the ability to identify speech in noise for vowel-consonant-vowel tokens that were either unprocessed, amplitude modulated synchronously across frequency, or amplitude modulated asynchronously across frequency. One goal of the study was to determine whether hearing-impaired listeners have a particular deficit in the ability to integrate asynchronous spectral information in the perception of speech. Speech tokens were presented at a high, fixed sound level and the level of a speech-shaped noise was changed adaptively to estimate the masked speech identification threshold. The performance of the hearing-impaired listeners was generally worse than that of the normal-hearing listeners, but the impaired listeners showed particularly poor performance in the synchronous modulation condition. This finding suggests that integration of asynchronous spectral information does not pose a particular difficulty for hearing-impaired listeners with mild/moderate hearing losses. Results are discussed in terms of common mechanisms that might account for poor speech identification performance of hearing-impaired listeners when either the masking noise or the speech is synchronously modulated.

15.
The purpose of this study is to specify the contribution of certain frequency regions to consonant place perception for normal-hearing listeners and listeners with high-frequency hearing loss, and to characterize the differences in stop-consonant place perception among these listeners. Stop-consonant recognition and error patterns were examined at various speech-presentation levels and under conditions of low- and high-pass filtering. Subjects included 18 normal-hearing listeners and a homogeneous group of 10 young, hearing-impaired individuals with high-frequency sensorineural hearing loss. Differential filtering effects on consonant place perception were consistent with the spectral composition of acoustic cues. Differences in consonant recognition and error patterns between normal-hearing and hearing-impaired listeners were observed when the stimulus bandwidth included regions of threshold elevation for the hearing-impaired listeners. Thus place-perception differences among listeners are, for the most part, associated with stimulus bandwidths corresponding to regions of hearing loss.

16.
The speech-reception threshold (SRT) for sentences presented in a fluctuating interfering background sound of 80 dBA SPL is measured for 20 normal-hearing listeners and 20 listeners with sensorineural hearing impairment. The interfering sounds range from steady-state noise, via modulated noise, to a single competing voice. Two voices are used, one male and one female, and the spectrum of the masker is shaped according to these voices. For both voices, the SRT is measured both in noise spectrally shaped according to the target voice and in noise shaped according to the other voice. The results show that, for normal-hearing listeners, the SRT for sentences in modulated noise is 4-6 dB lower than for steady-state noise; for sentences masked by a competing voice, this difference is 6-8 dB. For listeners with moderate sensorineural hearing loss, elevated thresholds are obtained without an appreciable effect of masker fluctuations. The implications of these results for estimating a hearing handicap in everyday conditions are discussed. By using the articulation index (AI), it is shown that hearing-impaired individuals perform more poorly than would be suggested by the loss of audibility for some parts of the speech signal. Finally, three mechanisms are discussed that contribute to the absence of unmasking by masker fluctuations in hearing-impaired listeners. The low sensation level at which the impaired listeners receive the masker seems a major determinant. The second and third factors are reduced temporal resolution and a reduction in comodulation masking release, respectively.

17.
Thresholds of ongoing interaural time difference (ITD) were obtained from normal-hearing and hearing-impaired listeners who had high-frequency, sensorineural hearing loss. Several stimuli (a 500-Hz sinusoid, a narrow-band noise centered at 500 Hz, a sinusoidally amplitude-modulated 4000-Hz tone, and a narrow-band noise centered at 4000 Hz) and two criteria [equal sound-pressure level (Eq SPL) and equal sensation level (Eq SL)] for determining the level of stimuli presented to each listener were employed. The ITD thresholds and slopes of the psychometric functions were elevated for hearing-impaired listeners for the two high-frequency stimuli in comparison both to the listeners' own low-frequency thresholds and to data obtained from normal-hearing listeners for stimuli presented with Eq SPL interaurally. The two groups of listeners required similar ITDs to reach threshold when stimuli were presented at Eq SLs to each ear. For low-frequency stimuli, the ITD thresholds of the hearing-impaired listeners were generally slightly greater than those obtained from the normal-hearing listeners. Whether these stimuli were presented at either Eq SPL or Eq SL did not differentially affect the ITD thresholds across groups.

18.
There is limited documentation available on how sensorineurally hearing-impaired listeners use the various sources of phonemic information that are known to be distributed across time in the speech waveform. In this investigation, a group of normally hearing listeners and a group of sensorineurally hearing-impaired listeners (with and without the benefit of amplification) identified various consonant and vowel productions that had been systematically varied in duration. The consonants (presented in a /haCa/ environment) and the vowels (presented in a /bVd/ environment) were truncated in steps to eliminate various segments from the end of the stimulus. The results indicated that normally hearing listeners could extract more phonemic information, especially cues to consonant place, from the earlier occurring portions of the stimulus waveforms than could the hearing-impaired listeners. The use of amplification partially decreased the performance differences between the normally hearing listeners and the unaided hearing-impaired listeners. The results are relevant to current models of normal speech perception that emphasize the need for the listener to make phonemic identifications as quickly as possible.

19.
Articulation index (AI) theory was used to evaluate stop-consonant recognition of normal-hearing listeners and listeners with high-frequency hearing loss. From results reported in a companion article [Dubno et al., J. Acoust. Soc. Am. 85, 347-354 (1989)], a transfer function relating the AI to stop-consonant recognition was established, and a frequency importance function was determined for the nine stop-consonant-vowel syllables used as test stimuli. The calculations included the rms and peak levels of the speech that had been measured in 1/3 octave bands; the internal noise was estimated from the thresholds for each subject. The AI model was then used to predict performance for the hearing-impaired listeners. A majority of the AI predictions for the hearing-impaired subjects fell within +/- 2 standard deviations of the normal-hearing listeners' results. However, as observed in previous data, the AI tended to overestimate performance of the hearing-impaired listeners. The accuracy of the predictions decreased with the magnitude of high-frequency hearing loss. Thus, with the exception of performance for listeners with severe high-frequency hearing loss, the results suggest that poorer speech recognition among hearing-impaired listeners results from reduced audibility within critical spectral regions of the speech stimuli.
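A generic sketch of the band-audibility calculation behind such AI predictions: the index is an importance-weighted sum of per-band audibilities derived from the speech levels and the listener's thresholds. The 30-dB dynamic range, band levels, and importance weights below are illustrative only and are not the values derived in the companion article:

import numpy as np

def band_audibility(speech_peak_db, noise_or_threshold_db, dyn_range_db=30.0):
    """Generic per-band audibility: fraction of the assumed speech dynamic range
    (default 30 dB below the peaks) lying above the listener's effective threshold
    or internal noise level, clipped to [0, 1]."""
    audible_db = np.clip(np.asarray(speech_peak_db) - np.asarray(noise_or_threshold_db),
                         0.0, dyn_range_db)
    return audible_db / dyn_range_db

def articulation_index(importance, speech_peak_db, threshold_db):
    """AI as the importance-weighted sum of band audibilities (weights sum to 1)."""
    return float(np.sum(np.asarray(importance) *
                        band_audibility(speech_peak_db, threshold_db)))

# Hypothetical 1/3-octave bands: importance weights and levels in dB.
importance  = np.array([0.10, 0.15, 0.20, 0.25, 0.20, 0.10])
speech_peak = np.array([62.0, 60.0, 58.0, 56.0, 54.0, 50.0])
threshold   = np.array([30.0, 32.0, 40.0, 55.0, 65.0, 70.0])   # sloping high-frequency loss
print(f"AI = {articulation_index(importance, speech_peak, threshold):.2f}")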

20.
The purpose of this study was to determine whether the perceived sensory dissonance of pairs of pure tones (PT dyads) or pairs of harmonic complex tones (HC dyads) is altered due to sensorineural hearing loss. Four normal-hearing (NH) and four hearing-impaired (HI) listeners judged the sensory dissonance of PT dyads geometrically centered at 500 and 2000 Hz, and of HC dyads with fundamental frequencies geometrically centered at 500 Hz. The frequency separation of the members of the dyads varied from 0 Hz to just over an octave. In addition, frequency selectivity was assessed at 500 and 2000 Hz for each listener. Maximum dissonance was perceived at frequency separations smaller than the auditory filter bandwidth for both groups of listeners, but maximum dissonance for HI listeners occurred at a greater proportion of their bandwidths at 500 Hz than at 2000 Hz. Further, their auditory filter bandwidths at 500 Hz were significantly wider than those of the NH listeners. For both the PT and HC dyads, curves displaying dissonance as a function of frequency separation were more compressed for the HI listeners, possibly reflecting less contrast between their perceptions of consonance and dissonance compared with the NH listeners.
