Similar Documents
20 similar documents found.
1.
Frequency resolution and three tasks of frequency discrimination were measured at 500 and 4000 Hz in 12 normal and 12 hearing-impaired listeners. A three-interval, two-alternative forced-choice procedure was used. Frequency resolution was measured with an abbreviated psychoacoustical tuning curve. Frequency discrimination was measured for (1) a fixed-frequency standard and target, (2) a fixed-frequency standard and a frequency-transition target, and (3) a frequency-transition standard and a frequency-transition target. The 50-ms frequency transitions had the same final frequency as the standards, but the initial frequency was lowered to obtain about 79% discrimination performance. There was a strong relationship between poor frequency resolution and elevated pure-tone thresholds, but only a very weak relationship between poor frequency discrimination and elevated pure-tone thresholds. Several hearing-impaired listeners had normal discrimination performance together with pure-tone thresholds of 80-90 dB HL. A slight correlation was found between word recognition and frequency discrimination, but a detailed comparison of the phonetic errors and either the frequency-discrimination or frequency-resolution tasks failed to suggest any consistent interdependencies. These results are consistent with previous work that has suggested that frequency resolution and frequency discrimination are independent processes.

2.
The ability to discriminate changes in the length of vowels and tonal complexes (filled intervals) and in the duration of closure in stop consonants and gaps in tonal complexes (unfilled intervals) was studied in three normally hearing and seven severely hearing-impaired listeners. The speech stimuli consisted of the vowels (i, I, u, U, a, A) and the consonants (p, t, k), and the tonal complexes consisted of digitally generated sinusoids at 0.5, 1, and 2 kHz. The signals were presented at conversational levels for each listener group, and a 3IFC adaptive procedure was used to estimate difference limens (DLs). The DLs for speech were similar to those for tonal complex stimuli in both the filled and unfilled conditions. Both normally hearing and hearing-impaired listeners demonstrated greater acuity for changes in the duration of filled than unfilled intervals. Mean thresholds for filled intervals obtained from normally hearing listeners were smaller than those obtained from hearing-impaired listeners. For unfilled intervals, however, the difference between listener groups was not significant. A few hearing-impaired listeners demonstrated temporal acuity comparable to that of normally hearing listeners for several listening conditions. Implications of these results are discussed with regard to speech perception in normally hearing and hearing-impaired individuals.
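A 3IFC adaptive procedure of the kind mentioned above can be illustrated with a short simulation. The abstract does not specify the staircase rule, step size, or psychometric function, so the two-down/one-up rule, the 4-ms step, and the logistic listener model below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(2)

def listener_correct(delta_ms, dl_ms=15.0):
    # Simulated 3IFC duration-discrimination listener: chance = 1/3,
    # performance rises logistically around an assumed true DL of 15 ms.
    p = 1.0 / 3.0 + (2.0 / 3.0) / (1.0 + np.exp(-(delta_ms - dl_ms) / 4.0))
    return rng.random() < p

# Two-down/one-up adaptive track (converges near 70.7% correct).
delta, step = 40.0, 4.0            # starting duration difference and step size (ms)
n_correct, last_dir, reversals = 0, None, []
while len(reversals) < 8:
    if listener_correct(delta):
        n_correct += 1
        if n_correct == 2:         # two correct in a row: make the task harder
            n_correct = 0
            if last_dir == 'up':
                reversals.append(delta)
            last_dir, delta = 'down', max(delta - step, 1.0)
    else:                          # one error: make the task easier
        n_correct = 0
        if last_dir == 'down':
            reversals.append(delta)
        last_dir, delta = 'up', delta + step

# Difference limen estimate: mean of the last six reversal points.
print(f"estimated DL ~ {np.mean(reversals[-6:]):.1f} ms")
```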

3.
Twenty normal-hearing younger and twenty older adults in the early stages of presbycusis, but with relatively normal hearing at 2 kHz, were asked to discriminate between the presence versus absence of a gap between two equal-duration tonal markers. The duration of each marker was constant within a block of trials but varied between 0.83 and 500 ms across blocks. Notched noise, centered at 2 kHz, was used to mask on- and off-transients. Gap detection thresholds of older adults were markedly higher than those of younger adults for marker durations of less than 250 ms but converged on those of younger adults at 500 ms. For both age groups, gap detection thresholds were independent of audiometric thresholds. These results indicate that older adults have more difficulty detecting a gap than younger adults when short marker durations (i.e., durations characteristic of speech sounds) are employed. It is shown that these results cannot be explained by linear models of temporal processing but are consistent with differential adaptation effects in younger and older adults.

4.
The effects of intensity on the difference limen for frequency (DLF) in normal-hearing and in hearing-impaired listeners are incorporated into the temporal model of frequency discrimination proposed by Goldstein and Srulovicz [Psychophysics and Physiology of Hearing, edited by E. F. Evans and J. P. Wilson (Academic, New York, 1977)]. A simple extension of the temporal model, which includes the dependence of phase locking on intensity, is sufficient to predict the effects of intensity on the DLF in normal-hearing listeners. To account for elevated DLFs in hearing-impaired listeners, the impairment is modeled as a reduction in the synchrony of the discharge from VIIIth-nerve fibers that innervate the region of hearing loss. Constraints on the optimal processor and the validity of the temporal model at high frequencies are discussed.

5.
6.
The ability to discriminate between sounds with different spectral shapes was evaluated for normal-hearing and hearing-impaired listeners. Listeners discriminated between a standard stimulus and a signal stimulus in which half of the standard components were decreased in level and half were increased in level. In one condition, the standard stimulus was the sum of six equal-amplitude tones (equal-SPL), and in another the standard stimulus was the sum of six tones at equal sensation levels re: audiometric thresholds for individual subjects (equal-SL). Spectral weights were estimated in conditions where the amplitudes of the individual tones were perturbed slightly on every presentation. Sensitivity was similar in all conditions for normal-hearing and hearing-impaired listeners. The presence of perturbation and equal-SL components increased thresholds for both groups, but only small differences in weighting strategy were measured between the groups depending on whether the equal-SPL or equal-SL condition was tested. The average data suggest that normal-hearing listeners may rely more on the central components of the spectrum whereas hearing-impaired listeners may have been more likely to use the edges. However, individual weighting functions were quite variable, especially for the hearing-impaired listeners, perhaps reflecting difficulty in processing changes in spectral shape due to hearing loss. Differences in weighting strategy without changes in sensitivity suggest that factors other than spectral weights, such as internal noise or difficulty encoding a reference stimulus, also may dominate performance.

7.
In a multiple-observation, sample discrimination experiment, normal-hearing (NH) and hearing-impaired (HI) listeners heard two multitone complexes, each consisting of six simultaneous tones with nominal frequencies spaced evenly on an ERB(N) logarithmic scale between 257 and 6930 Hz. On every trial, the frequency of each tone was sampled from a normal distribution centered near its nominal frequency. In one interval of a 2IFC task, all tones were sampled from distributions lower in mean frequency, and in the other interval from distributions higher in mean frequency. Listeners had to identify the latter interval. Decision weights were obtained from multiple regression analysis of the between-interval frequency differences for each tone and listeners' responses. Frequency difference limens (an index of sensorineural resolution) and decision weights for each tone were used to predict the sensitivity of different decision-theoretic models. Results indicate that low-frequency tones were given much greater perceptual weight than high-frequency tones by both groups of listeners. This tendency increased as hearing loss increased and as sensorineural resolution decreased, resulting in significantly less efficient weighting strategies for the HI listeners. Overall, results indicate that HI listeners integrated frequency information less optimally than NH listeners, even after accounting for differences in sensorineural resolution.
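The decision-weight analysis described above can be sketched on simulated data: regress trial-by-trial responses on the between-interval frequency difference of each tone. The simulated listener, the ordinary least-squares fit, and all numerical values below are assumptions for illustration, not the authors' data or exact regression model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_tones = 1000, 6

# Between-interval frequency differences (interval 2 minus interval 1) per tone, in Hz.
freq_diff = rng.normal(0.0, 10.0, size=(n_trials, n_tones))

# Simulated listener who weights low-frequency tones more heavily, plus internal noise.
true_weights = np.array([1.0, 0.8, 0.6, 0.4, 0.2, 0.1])
decision_var = freq_diff @ true_weights + rng.normal(0.0, 5.0, n_trials)
responses = (decision_var > 0).astype(float)   # 1 = "interval 2 was higher in frequency"

# Decision weights: multiple linear regression of responses on the per-tone differences.
X = np.column_stack([np.ones(n_trials), freq_diff])
beta, *_ = np.linalg.lstsq(X, responses, rcond=None)
weights = beta[1:] / np.max(np.abs(beta[1:]))  # normalize to the largest weight
print(np.round(weights, 2))
```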

8.
The ability to discriminate between sounds with different spectral shapes was evaluated for normal-hearing and hearing-impaired listeners. Listeners detected a 920-Hz tone added in phase to a single component of a standard consisting of the sum of five tones spaced equally on a logarithmic frequency scale ranging from 200 to 4200 Hz. An overall level randomization of 10 dB was either present or absent. In one subset of conditions, the no-perturbation conditions, the standard stimulus was the sum of equal-amplitude tones. In the perturbation conditions, the amplitudes of the components within a stimulus were randomly altered on every presentation. For both perturbation and no-perturbation conditions, thresholds for the detection of the 920-Hz tone were measured to compare sensitivity to changes in spectral shape between normal-hearing and hearing-impaired listeners. To assess whether hearing-impaired listeners relied on different regions of the spectrum to discriminate between sounds, spectral weights were estimated from the perturbed standards by correlating the listener's responses with the level differences per component across two intervals of a two-alternative forced-choice task. Results showed that hearing-impaired and normal-hearing listeners had similar sensitivity to changes in spectral shape. On average, across-frequency correlation functions also were similar for both groups of listeners, suggesting that as long as all components are audible and well separated in frequency, hearing-impaired listeners can use information across frequency as well as normal-hearing listeners. Analysis of the individual data revealed, however, that normal-hearing listeners may be better able to adopt optimal weighting schemes. This conclusion is only tentative, as differences in internal noise may need to be considered to interpret the results obtained from weighting studies between normal-hearing and hearing-impaired listeners.
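A minimal sketch of the correlational weight estimate described above, on simulated data: each component's weight is the correlation between its between-interval level difference and the listener's interval choice. The listener model and all numbers below are illustrative assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_comp = 800, 5

# Per-component level perturbations (dB) in each interval of a 2AFC trial.
levels_1 = rng.normal(0.0, 2.0, size=(n_trials, n_comp))
levels_2 = rng.normal(0.0, 2.0, size=(n_trials, n_comp))
level_diff = levels_2 - levels_1

# Simulated listener who weights the mid-frequency components most heavily.
true_w = np.array([0.2, 0.6, 1.0, 0.6, 0.2])
chose_2 = ((level_diff @ true_w + rng.normal(0.0, 2.0, n_trials)) > 0).astype(float)

# Spectral weight per component: correlation of its level difference with the choice.
weights = np.array([np.corrcoef(level_diff[:, k], chose_2)[0, 1] for k in range(n_comp)])
print(np.round(weights / weights.max(), 2))
```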

9.
Spectral-shape discrimination thresholds were measured in the presence and absence of noise to determine whether normal-hearing and hearing-impaired listeners rely primarily on spectral peaks in the excitation pattern when discriminating between stimuli with different spectral shapes. Standard stimuli were the sum of 2, 4, 6, 8, 10, 20, or 30 equal-amplitude tones with frequencies fixed between 200 and 4000 Hz. Signal stimuli were generated by increasing and decreasing the levels of every other standard component. The function relating the spectral-shape discrimination threshold to the number of components (N) showed an initial decrease in threshold with increasing N and then an increase in threshold when the number of components reached 10 and 6, for normal-hearing and hearing-impaired listeners, respectively. The presence of a 50-dB SPL/Hz noise led to a 1.7 dB increase in threshold for normal-hearing listeners and a 3.5 dB increase for hearing-impaired listeners. Multichannel modeling and the relatively small influence of noise suggest that both normal-hearing and hearing-impaired listeners rely on the peaks in the excitation pattern for spectral-shape discrimination. The greater influence of noise in the data from hearing-impaired listeners is attributed to a poorer representation of spectral peaks.
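The standard and signal stimuli described above can be sketched as follows. The sampling rate, duration, and the assumption of logarithmic component spacing are illustrative choices, not taken from the paper.

```python
import numpy as np

def spectral_shape_stimuli(n_comp=10, delta_db=2.0, fs=44100, dur=0.4):
    # Standard: sum of n_comp equal-amplitude tones between 200 and 4000 Hz
    # (logarithmic spacing assumed).  Signal: every other component raised or
    # lowered by delta_db, as described in the abstract.
    t = np.arange(int(fs * dur)) / fs
    freqs = np.geomspace(200.0, 4000.0, n_comp)
    amp_std = np.ones(n_comp)
    signs = np.where(np.arange(n_comp) % 2 == 0, 1.0, -1.0)
    amp_sig = amp_std * 10.0 ** (signs * delta_db / 20.0)
    standard = sum(a * np.sin(2 * np.pi * f * t) for a, f in zip(amp_std, freqs))
    signal = sum(a * np.sin(2 * np.pi * f * t) for a, f in zip(amp_sig, freqs))
    return standard, signal

standard, signal = spectral_shape_stimuli(n_comp=10, delta_db=2.0)
print(standard.shape, signal.shape)
```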

10.
Young normal-hearing listeners, elderly normal-hearing listeners, and elderly hearing-impaired listeners were tested on a variety of phonetic identification tasks. Where identity was cued by stimulus duration, the elderly hearing-impaired listeners evidenced normal identification functions. On a task in which there were multiple cues to vowel identity, performance was also normal. On a /b d g/ identification task in which the starting frequency of the second formant was varied, performance was abnormal for both the elderly hearing-impaired listeners and the elderly normal-hearing listeners. We conclude that errors in phonetic identification among elderly hearing-impaired listeners with mild to moderate, sloping hearing impairment do not stem from abnormalities in processing stimulus duration. The results with the /b d g/ continuum suggest that one factor underlying errors may be an inability to base identification on dynamic spectral information when relatively static information, which is normally characteristic of a phonetic segment, is unavailable.

11.
The bandwidths for summation at threshold were measured for subjects with normal hearing and subjects with sensorineural hearing loss. Thresholds in quiet and in the presence of a masking noise were measured for complex stimuli consisting of 1 to 40 pure-tone components spaced 20 Hz apart. The single-component condition consisted of a single pure tone at 1100 Hz; additional components were added below this frequency, in a replication of the Gässler [Acustica 4, 408-414 (1954)] procedure. For the normal subjects, thresholds increased approximately 3 dB per doubling of bandwidth for signal bandwidths exceeding the critical bandwidth. This slope was less for the hearing-impaired subjects. Summation bandwidths, as estimated from two-line fits, were wider for the hearing-impaired than for the normal subjects. These findings provide evidence that hearing-impaired subjects integrate sound energy over a wider-than-normal frequency range for the detection of complex signals. A second experiment used stimuli similar to those of Spiegel [J. Acoust. Soc. Am. 66, 1356-1363 (1979)], and added components both above and below the frequency of the initial component. Using these stimuli, the slope of the threshold increase beyond the critical bandwidth was approximately 1.5 dB per doubling of bandwidth, thus replicating the Spiegel (1979) experiment. It is concluded that the differences between the Gässler (1954) and Spiegel (1979) studies were due to the different frequency content of the stimuli used in each study. Based upon the present results, it would appear that the slope of threshold increase is dependent upon the direction of signal expansion and the size of the critical bands into which the signal is expanded.
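The two-line fit used above to estimate summation bandwidth can be sketched as follows; the threshold values are hypothetical, and the grid-search fitting method is a simplified stand-in for whatever procedure the authors actually used.

```python
import numpy as np

# Hypothetical thresholds (dB re the single-component condition) versus signal bandwidth (Hz).
bw_hz = np.array([20.0, 40.0, 80.0, 160.0, 320.0, 640.0, 800.0])
thresh = np.array([0.0, 0.1, 0.2, 0.3, 3.1, 6.0, 7.0])
log_bw = np.log2(bw_hz)

# Two-line fit: flat below the break point, rising linearly (dB per doubling) above it.
best = None
for bp in np.linspace(log_bw[1], log_bw[-2], 200):
    x = np.maximum(log_bw - bp, 0.0)
    slope = np.dot(x, thresh) / np.dot(x, x)
    err = np.sum((thresh - slope * x) ** 2)
    if best is None or err < best[0]:
        best = (err, bp, slope)

_, bp, slope = best
print(f"summation bandwidth ~ {2 ** bp:.0f} Hz, slope ~ {slope:.1f} dB per doubling")
```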

12.
A conditional-on-a-single-stimulus (COSS) analysis procedure [B. G. Berg, J. Acoust. Soc. Am. 86, 1743-1746 (1989)] was used to estimate how well normal-hearing and hearing-impaired listeners selectively attend to individual spectral components of a broadband signal in a level discrimination task. On each trial, two multitone complexes consisting of six octave frequencies from 250 to 8000 Hz were presented to listeners. The levels of the individual tones were chosen independently and at random on each presentation. The target tone was selected, within a block of trials, as the 250-, 1000-, or 4000-Hz component. On each trial, listeners were asked to indicate which of the two complex sounds contained the higher level target. As a group, normal-hearing listeners exhibited greater selectivity than hearing-impaired listeners to the 250-Hz target, while hearing-impaired listeners showed greater selectivity than normal-hearing listeners to the 4000-Hz target, which is in the region of their hearing loss. Both groups of listeners displayed large variability in their ability to selectively weight the 1000-Hz target. Trial-by-trial analysis showed a decrease in weighting efficiency with increasing frequency for normal-hearing listeners, but a relatively constant weighting efficiency across frequency for hearing-impaired listeners. Interestingly, hearing-impaired listeners selectively weighted the 4000-Hz target, which was in the region of their hearing loss, more efficiently than did the normal-hearing listeners.
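A simplified sketch of a COSS-style analysis on simulated data: for each component, the weight is taken from how strongly the response probability depends on that component's level in one interval alone. This is a stand-in for Berg's full procedure, and the listener model and numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_comp = 2000, 6

# Independent random levels (dB re nominal) for six octave-spaced components in each interval.
lev1 = rng.normal(0.0, 3.0, size=(n_trials, n_comp))
lev2 = rng.normal(0.0, 3.0, size=(n_trials, n_comp))

# Simulated listener attending mainly to the third component (the designated target), plus noise.
w = np.array([0.1, 0.1, 1.0, 0.2, 0.1, 0.1])
chose_2 = (((lev2 - lev1) @ w + rng.normal(0.0, 2.0, n_trials)) > 0).astype(float)

# COSS-style weight: slope of P("chose interval 2") against the level of each component
# in interval 2 alone, estimated here by simple linear regression.
weights = np.array([np.polyfit(lev2[:, k], chose_2, 1)[0] for k in range(n_comp)])
print(np.round(weights / weights.max(), 2))
```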

13.
Frequency resolution was evaluated for two normal-hearing and seven hearing-impaired subjects with moderate, flat sensorineural hearing loss by measuring percent correct detection of a 2000-Hz tone as the width of a notch in band-reject noise increased. The level of the tone was fixed for each subject at a criterion performance level in broadband noise. Discrimination of synthetic speech syllables that differed in spectral content in the 2000-Hz region was evaluated as a function of the notch width in the same band-reject noise. Recognition of natural speech consonant/vowel syllables in quiet was also tested; results were analyzed for percent correct performance and relative information transmitted for voicing and place features. In the hearing-impaired subjects, frequency resolution at 2000 Hz was significantly correlated with the discrimination of synthetic speech information in the 2000-Hz region and was not related to the recognition of natural speech nonsense syllables unless (a) the speech stimuli contained the vowel /i/ rather than /a/, and (b) the score reflected information transmitted for place of articulation rather than percent correct.

14.
Level discrimination of tones as a function of duration
Difference limens for level [ΔL (dB) = 20 log((p + Δp)/p), where p is the pressure] were measured as a function of duration for tones at 250, 500, and 8000 Hz. Stimulus durations ranged from 2 ms to 2 s, and the stimulus power was held constant. Rise and fall times were 1 ms. The interstimulus interval was 250 ms. At each frequency, three levels were tested: 85, 65, and approximately 40 dB SPL. An adaptive two-alternative forced-choice procedure with feedback was used. For three normal listeners, ΔL decreased as duration increased, up to at least 2 s, except at 250 Hz. At 250 Hz, ΔL stopped decreasing at durations between 0.5 and 1 s. In a double logarithmic plot of ΔL versus duration, the rate of decrease is generally well fitted by a sloping line. The average slope is -0.28; it is steeper at high levels than at low levels. Because the average slope is shallower than the -0.5 slope predicted for an optimum detector, it may be that fast adaptation of auditory-nerve activity and/or memory effects interfere with level discrimination of long-duration tones. Finally, the ΔLs at 8 kHz decreased nonmonotonically with increasing level.
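The level difference limen defined above follows directly from the pressure ratio; a minimal computation (the 12% increment below is just an example value, not a result from the paper):

```python
import numpy as np

def delta_L_dB(p, delta_p):
    # Level difference limen in dB for a pressure increment delta_p on a base pressure p:
    # delta L = 20 log10((p + delta_p) / p)
    return 20.0 * np.log10((p + delta_p) / p)

# Example: a 12% pressure increment corresponds to roughly 1 dB.
print(round(delta_L_dB(1.0, 0.12), 2))   # ~0.98 dB
```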

15.
The purpose of this experiment was to determine the applicability of the Articulation Index (AI) model for characterizing the speech recognition performance of listeners with mild-to-moderate hearing loss. Performance-intensity functions were obtained from five normal-hearing listeners and 11 hearing-impaired listeners using a closed-set nonsense syllable test for two frequency responses (uniform and high-frequency emphasis). For each listener, the fitting constant Q of the nonlinear transfer function relating AI and speech recognition was estimated. Results indicated that the function mapping AI onto performance was approximately the same for normal and hearing-impaired listeners with mild-to-moderate hearing loss and high speech recognition scores. For a hearing-impaired listener with poor speech recognition ability, the AI procedure was a poor predictor of performance. The AI procedure as presently used is inadequate for predicting performance of individuals with reduced speech recognition ability and should be used conservatively in applications predicting optimal or acceptable frequency response characteristics for hearing-aid amplification systems.
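Fitting the constant Q of a transfer function relating AI to recognition score might look like the sketch below. The abstract does not give the functional form the authors used, so the form 1 - 10^(-AI/Q) and the data points here are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

def ai_to_score(ai, q):
    # One plausible AI-to-proportion-correct transfer function (assumed form,
    # not necessarily the one used in the study).
    return 1.0 - 10.0 ** (-ai / q)

# Hypothetical performance-intensity data re-expressed as (AI, proportion correct) pairs.
ai = np.array([0.1, 0.2, 0.3, 0.4, 0.6, 0.8, 1.0])
score = np.array([0.35, 0.58, 0.72, 0.82, 0.92, 0.96, 0.98])

(q_fit,), _ = curve_fit(ai_to_score, ai, score, p0=[0.5])
print(f"fitted Q ~ {q_fit:.2f}")
```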

16.
Growth-of-masking functions were obtained from 19 normal and 5 hearing-impaired listeners using a simultaneous-masking paradigm. When masker and probe frequency are identical, the slope of masking approximates 1.0 for both normal-hearing and impaired listeners. For masker frequencies less than or greater than probe frequency, the slopes for impaired listeners are shallower than those of normals. These findings are consistent with previously reported physiological data (single-fiber rate versus level and AP masking functions) for animals with induced cochlear lesions. Results are discussed in terms of a potential masking technique to estimate the growth of response in normal and impaired ears.

17.
Simultaneous-masked psychophysical tuning curves (PTCs) were obtained from normal-hearing and sensorineural hearing-impaired listeners. The 20-ms signal was presented at the onset or at the temporal center of the 400-ms masker. For the normal-hearing listeners, as shown previously [S. P. Bacon and B. C. J. Moore, J. Acoust. Soc. Am. 80, 1638-1645 (1986)], the PTCs were sharper on the high-frequency side for a signal in the temporal center of the masker. For the hearing-impaired listeners, however, the shape of the PTC was virtually independent of the temporal position of the signal. These data suggest that the mechanisms responsible for sharpening the PTC with time in normal-hearing listeners are ineffective in listeners with moderate-to-severe sensorineural hearing loss.

18.
In the present study, speech-recognition performance was measured in four hearing-impaired subjects and twelve normal hearers. The normal hearers were divided into four groups of three subjects each. Speech-recognition testing for the normal hearers was accomplished in a background of spectrally shaped noise in which the noise was shaped to produce masked thresholds identical to the quiet thresholds of one of the hearing-impaired subjects. The question addressed in this study is whether normal hearers with a hearing loss simulated through a shaped masking noise demonstrate speech-recognition difficulties similar to those of listeners with actual hearing impairment. Regarding overall percent-correct scores, the results indicated that two of the four hearing-impaired subjects performed better than their corresponding subgroup of noise-masked normal hearers, whereas the other two impaired listeners performed like the noise-masked normal listeners. A gross analysis of the types of errors made suggested that subjects with actual and simulated losses frequently made different types of errors.

19.
A triadic comparisons task and an identification task were used to evaluate normally hearing listeners' and hearing-impaired listeners' perceptions of synthetic CV stimuli in the presence of competition. The competing signals included multitalker babble, continuous speech-spectrum noise, a CV masker, and a brief noise masker shaped to resemble the onset spectrum of the CV masker. All signals and maskers were presented monotically. Interference by competition was assessed by comparing Multidimensional Scaling solutions derived from each masking condition to that derived from the baseline (quiet) condition. Analysis of the effects of continuous maskers revealed that multitalker babble and continuous noise caused the same amount of change in performance, as compared to the baseline condition, for all listeners. CV masking changed performance significantly more than did brief noise masking, and the hearing-impaired listeners experienced more degradation in performance than normals. Finally, the velar CV maskers (/gɛ/ and /kɛ/) caused significantly greater masking effects than the bilabial CV maskers (/bɛ/ and /pɛ/), and were most resistant to masking by other competing stimuli. The results suggest that speech intelligibility difficulties in the presence of competing segments of speech are primarily attributable to phonetic interference rather than to spectral masking. Individual differences in hearing-impaired listeners' performances are also discussed.

20.
The purpose of this investigation was to study the effects of consonant environment on vowel duration for normally hearing males, hearing-impaired males with intelligible speech, and hearing-impaired males with semi-intelligible speech. The results indicated that the normally hearing and intelligible hearing-impaired speakers exhibited similar trends with respect to consonant influence on vowel duration; i.e., vowels were longer in a voiced environment than in a voiceless one, and in a fricative environment than in a plosive one. The semi-intelligible hearing-impaired speakers, however, failed to demonstrate a consonant effect on vowel duration, and produced the vowels with significantly longer durations when compared with the other two groups of speakers. These data provide information regarding temporal conditions which may contribute to the decreased intelligibility of hearing-impaired persons.
