Similar articles
20 similar articles retrieved (search time: 62 ms)
1.
Spectral-shape discrimination thresholds were measured in the presence and absence of noise to determine whether normal-hearing and hearing-impaired listeners rely primarily on spectral peaks in the excitation pattern when discriminating between stimuli with different spectral shapes. Standard stimuli were the sum of 2, 4, 6, 8, 10, 20, or 30 equal-amplitude tones with frequencies fixed between 200 and 4000 Hz. Signal stimuli were generated by increasing and decreasing the levels of every other standard component. The function relating the spectral-shape discrimination threshold to the number of components (N) showed an initial decrease in threshold with increasing N and then an increase in threshold when the number of components reached 10 and 6, for normal-hearing and hearing-impaired listeners, respectively. The presence of a 50-dB SPL/Hz noise led to a 1.7 dB increase in threshold for normal-hearing listeners and a 3.5 dB increase for hearing-impaired listeners. Multichannel modeling and the relatively small influence of noise suggest that both normal-hearing and hearing-impaired listeners rely on the peaks in the excitation pattern for spectral-shape discrimination. The greater influence of noise in the data from hearing-impaired listeners is attributed to a poorer representation of spectral peaks.
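For illustration only, a minimal Python/NumPy sketch of how standard and signal stimuli of this general type could be generated (the logarithmic spacing between 200 and 4000 Hz, the level step delta_db, and the duration are assumptions for the example, not parameters taken from the study):

import numpy as np

def make_stimuli(n_components=10, delta_db=2.0, fs=44100, dur=0.4):
    # Standard: sum of equal-amplitude tones spaced logarithmically in frequency.
    # Signal: every other component raised by delta_db, the rest lowered by delta_db.
    t = np.arange(int(fs * dur)) / fs
    freqs = np.geomspace(200.0, 4000.0, n_components)
    std_amps = np.ones(n_components)
    sign = np.where(np.arange(n_components) % 2 == 0, 1.0, -1.0)
    sig_amps = std_amps * 10.0 ** (sign * delta_db / 20.0)
    standard = sum(a * np.sin(2 * np.pi * f * t) for a, f in zip(std_amps, freqs))
    signal = sum(a * np.sin(2 * np.pi * f * t) for a, f in zip(sig_amps, freqs))
    return standard, signal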

2.
The purpose of this study was to examine the effect of spectral-cue audibility on the recognition of stop consonants in normal-hearing and hearing-impaired adults. Subjects identified six synthetic CV speech tokens in a closed-set response task. Each syllable differed only in the initial 40-ms consonant portion of the stimulus. In order to relate performance to spectral-cue audibility, the initial 40 ms of each CV were analyzed via FFT and the resulting spectral array was passed through a sliding-filter model of the human auditory system to account for logarithmic representation of frequency and the summation of stimulus energy within critical bands. This allowed the spectral data to be displayed in comparison to a subject's sensitivity thresholds. For normal-hearing subjects, an orderly function relating the percentage of audible stimulus to recognition performance was found, with perfect discrimination performance occurring when the bulk of the stimulus spectrum was presented at suprathreshold levels. For the hearing-impaired subjects, however, it was found in many instances that suprathreshold presentation of stop-consonant spectral cues did not yield recognition equivalent to that found for the normal-hearing subjects. These results demonstrate that while the audibility of individual stop consonants is an important factor influencing recognition performance in hearing-impaired subjects, it is not always sufficient to explain the effects of sensorineural hearing loss.
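A rough Python sketch of this style of audibility analysis, assuming an FFT of the 40-ms consonant onset whose power is summed within one critical band around each center frequency; the Zwicker and Terhardt critical-bandwidth approximation and the example threshold comparison are stand-ins, not the authors' sliding-filter model:

import numpy as np

def critical_bandwidth(fc_hz):
    # Zwicker & Terhardt (1980) approximation of the critical bandwidth in Hz.
    return 25.0 + 75.0 * (1.0 + 1.4 * (fc_hz / 1000.0) ** 2) ** 0.69

def band_levels(x, fs, centers_hz):
    # Sum FFT power within one critical band around each center frequency
    # and return the band levels in dB (arbitrary reference).
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    levels = []
    for fc in centers_hz:
        bw = critical_bandwidth(fc)
        in_band = (freqs >= fc - bw / 2) & (freqs <= fc + bw / 2)
        levels.append(10.0 * np.log10(spec[in_band].sum() + 1e-12))
    return np.array(levels)

# Example use (hypothetical variables): fraction of bands above a subject's thresholds.
# audible_fraction = np.mean(band_levels(cv_onset, fs, centers_hz) > thresholds_db)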

3.
Two experiments are reported which explore variables that may complicate the interpretation of phoneme boundary data from hearing-impaired listeners. Fourteen synthetic consonant-vowel syllables comprising a /ba-da-ga/ continuum were used as stimuli. The first experiment examined the influence of presentation level and ear of presentation in normal-hearing subjects. Only small differences in the phoneme boundaries and labeling functions were observed between ears and across presentation levels. Thus, monaural presentation and relatively high signal level do not appear to be complicating factors in research with hearing-impaired listeners, at least for these stimuli. The second experiment described a test procedure for obtaining phoneme boundaries in some hearing-impaired listeners that controlled for between-subject sources of variation unrelated to hearing impairment and delineated the effects of spectral shaping imposed by the hearing impairment on the labeling functions. Labeling data were obtained from unilaterally hearing-impaired listeners under three test conditions: in the normal ear without any signal distortion; in the normal ear listening through a spectrum shaper that was set to match the subject's suprathreshold audiometric configuration; and in the impaired ear. The reduction in the audibility of the distinctive acoustic/phonetic cues seemed to explain all or part of the effects of the hearing impairment on the labeling functions of some subjects. For many other subjects, however, other forms of distortion in addition to reduced audibility seemed to affect their labeling behavior.

4.
Speech-understanding difficulties observed in elderly hearing-impaired listeners are predominantly errors in the recognition of consonants, particularly within consonants that share the same manner of articulation. Spectral shape is an important acoustic cue that serves to distinguish such consonants. The present study examined whether individual differences in speech understanding among elderly hearing-impaired listeners could be explained by individual differences in spectral-shape discrimination ability. This study included a group of 20 elderly hearing-impaired listeners, as well as a group of young normal-hearing adults for comparison purposes. All subjects were tested on speech-identification tasks, with natural and computer-synthesized speech stimuli, and on a series of spectral-shape discrimination tasks. As expected, the young normal-hearing adults performed better than the elderly listeners on many of the identification tasks and on all but two discrimination tasks. Regression analyses of the data from the elderly listeners revealed moderate predictive relationships between some of the spectral-shape discrimination thresholds and speech-identification performance. The results indicated that when all stimuli were at least minimally audible, some of the individual differences in the identification of natural and synthetic speech tokens by elderly hearing-impaired listeners were associated with corresponding differences in their spectral-shape discrimination abilities for similar sounds.

5.
The ability to discriminate between sounds with different spectral shapes was evaluated for normal-hearing and hearing-impaired listeners. Listeners detected a 920-Hz tone added in phase to a single component of a standard consisting of the sum of five tones spaced equally on a logarithmic frequency scale ranging from 200 to 4200 Hz. An overall level randomization of 10 dB was either present or absent. In one subset of conditions, the no-perturbation conditions, the standard stimulus was the sum of equal-amplitude tones. In the perturbation conditions, the amplitudes of the components within a stimulus were randomly altered on every presentation. For both perturbation and no-perturbation conditions, thresholds for the detection of the 920-Hz tone were measured to compare sensitivity to changes in spectral shape between normal-hearing and hearing-impaired listeners. To assess whether hearing-impaired listeners relied on different regions of the spectrum to discriminate between sounds, spectral weights were estimated from the perturbed standards by correlating the listener's responses with the level differences per component across two intervals of a two-alternative forced-choice task. Results showed that hearing-impaired and normal-hearing listeners had similar sensitivity to changes in spectral shape. On average, across-frequency correlation functions also were similar for both groups of listeners, suggesting that as long as all components are audible and well separated in frequency, hearing-impaired listeners can use information across frequency as well as normal-hearing listeners. Analysis of the individual data revealed, however, that normal-hearing listeners may be better able to adopt optimal weighting schemes. This conclusion is only tentative, as differences in internal noise may need to be considered to interpret the results obtained from weighting studies between normal-hearing and hearing-impaired listeners.
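A simplified Python sketch of the weight-estimation idea described above for a two-interval task: correlate the trial-by-trial response with the per-component level difference between the two intervals (the simulated data and the five-component layout are hypothetical, used only to show the computation):

import numpy as np

def spectral_weights(level_diff_db, responses):
    # level_diff_db: trials x components array of level differences
    # (interval 2 minus interval 1); responses: 1 if interval 2 was chosen, else 0.
    # Returns relative weights from per-component point-biserial correlations.
    r = np.array([np.corrcoef(level_diff_db[:, k], responses)[0, 1]
                  for k in range(level_diff_db.shape[1])])
    return r / np.sum(np.abs(r))

# Simulated listener who weights the middle component most heavily.
rng = np.random.default_rng(0)
diffs = rng.normal(0.0, 2.0, size=(1000, 5))             # dB level perturbations
true_w = np.array([0.10, 0.15, 0.40, 0.20, 0.15])
resp = (diffs @ true_w + rng.normal(0.0, 1.0, 1000) > 0).astype(int)
print(spectral_weights(diffs, resp))                      # approximately recovers true_w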

6.
The ability to discriminate between sounds with different spectral shapes was evaluated for normal-hearing and hearing-impaired listeners. Listeners discriminated between a standard stimulus and a signal stimulus in which half of the standard components were decreased in level and half were increased in level. In one condition, the standard stimulus was the sum of six equal-amplitude tones (equal-SPL), and in another the standard stimulus was the sum of six tones at equal sensation levels re: audiometric thresholds for individual subjects (equal-SL). Spectral weights were estimated in conditions where the amplitudes of the individual tones were perturbed slightly on every presentation. Sensitivity was similar in all conditions for normal-hearing and hearing-impaired listeners. The presence of perturbation and equal-SL components increased thresholds for both groups, but only small differences in weighting strategy were measured between the groups depending on whether the equal-SPL or equal-SL condition was tested. The average data suggest that normal-hearing listeners may rely more on the central components of the spectrum whereas hearing-impaired listeners may have been more likely to use the edges. However, individual weighting functions were quite variable, especially for the hearing-impaired listeners, perhaps reflecting difficulty in processing changes in spectral shape due to hearing loss. Differences in weighting strategy without changes in sensitivity suggest that factors other than spectral weights, such as internal noise or difficulty encoding a reference stimulus, also may dominate performance.

7.
A conditional-on-a-single-stimulus (COSS) analysis procedure [B. G. Berg, J. Acoust. Soc. Am. 86, 1743-1746 (1989)] was used to estimate how well normal-hearing and hearing-impaired listeners selectively attend to individual spectral components of a broadband signal in a level discrimination task. On each trial, two multitone complexes consisting of six octave frequencies from 250 to 8000 Hz were presented to listeners. The levels of the individual tones were chosen independently and at random on each presentation. The target tone was selected, within a block of trials, as the 250-, 1000-, or 4000-Hz component. On each trial, listeners were asked to indicate which of the two complex sounds contained the higher level target. As a group, normal-hearing listeners exhibited greater selectivity than hearing-impaired listeners to the 250-Hz target, while hearing-impaired listeners showed greater selectivity than normal-hearing listeners to the 4000-Hz target, which is in the region of their hearing loss. Both groups of listeners displayed large variability in their ability to selectively weight the 1000-Hz target. Trial-by-trial analysis showed a decrease in weighting efficiency with increasing frequency for normal-hearing listeners, but a relatively constant weighting efficiency across frequency for hearing-impaired listeners. Interestingly, hearing-impaired listeners selectively weighted the 4000-Hz target, which was in the region of their hearing loss, more efficiently than did the normal-hearing listeners.

8.
The purpose of this experiment was to evaluate the utilization of short-term spectral cues for recognition of initial plosive consonants (/b,d,g/) by normal-hearing and by hearing-impaired listeners differing in audiometric configuration. Recognition scores were obtained for these consonants paired with three vowels (/a,i,u/) while systematically reducing the duration (300 to 10 ms) of the synthetic consonant-vowel syllables. Results from 10 normal-hearing and 15 hearing-impaired listeners suggest that audiometric configuration interacts in a complex manner with the identification of short-duration stimuli. For consonants paired with the vowels /a/ and /u/, performance deteriorated as the slope of the audiometric configuration increased. The one exception to this result was a subject who had significantly elevated pure-tone thresholds relative to the other hearing-impaired subjects. Despite the changes in the shape of the onset spectral cues imposed by hearing loss, with increasing duration, consonant recognition in the /a/ and /u/ context for most hearing-impaired subjects eventually approached that of the normal-hearing listeners. In contrast, scores for consonants paired with /i/ were poor for a majority of hearing-impaired listeners for stimuli of all durations.

9.
Vowels are mainly classified by the positions of peaks in their frequency spectra, the formants. For normal-hearing subjects, change detection and direction discrimination were measured for linear glides in the center frequency (CF) of formantlike sounds. A CF rove was used to prevent subjects from using either the start or end points of the glides as cues. In addition, change detection and starting-phase (start-direction) discrimination were measured for similar stimuli with a sinusoidal 5-Hz formant-frequency modulation. The stimuli consisted of single formants generated using a number of different stimulus parameters including fundamental frequency, spectral slope, frequency region, and position of the formant relative to the harmonic spectrum. The change detection thresholds were in good agreement with the predictions of a model which analyzed and combined the effects of place-of-excitation and temporal cues. For most stimuli, thresholds were approximately equal for change detection and start-direction discrimination. Exceptions were found for stimuli that consisted of only one or two harmonics. In a separate experiment, it was shown that change detection and start-direction discrimination of linear and sinusoidal formant-frequency modulations were impaired by off-frequency frequency-modulated interferers. This frequency modulation detection interference was larger for formants with shallow than for those with steep spectral slopes.

10.
The ability of normally hearing and hearing-impaired subjects to use temporal fine structure information in complex tones was measured. Subjects were required to discriminate a harmonic complex tone from a tone in which all components were shifted upwards by the same amount in Hz, in a three-alternative, forced-choice task. The tones either contained five equal-amplitude components (non-shaped stimuli) or contained many components, but were passed through a fixed bandpass filter to reduce excitation pattern changes (shaped stimuli). Components were centered at nominal harmonic numbers (N) 7, 11, and 18. For the shaped stimuli, hearing-impaired subjects performed much more poorly than normally hearing subjects, with most of the former scoring no better than chance when N=11 or 18, suggesting that they could not access the temporal fine structure information. Performance for the hearing-impaired subjects was significantly improved for the non-shaped stimuli, presumably because they could benefit from spectral cues. It is proposed that normal-hearing subjects can use temporal fine structure information provided the spacing between fine structure peaks is not too small relative to the envelope period, but subjects with moderate cochlear hearing loss make little use of temporal fine structure information for unresolved components.
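As an illustration of the discrimination described above, a short Python sketch of the non-shaped case: a harmonic complex versus a tone whose components are all shifted upward by the same amount in Hz, so that the envelope repetition rate is preserved while the temporal fine structure changes (the fundamental, harmonic numbers, and shift are made-up example values, not those of the study):

import numpy as np

def complex_tone(f0, harmonics, shift_hz, fs=44100, dur=0.4):
    # Sum of equal-amplitude components at n*f0 + shift_hz (shift_hz = 0 gives the harmonic tone).
    t = np.arange(int(fs * dur)) / fs
    freqs = np.array(list(harmonics), dtype=float) * f0 + shift_hz
    return np.sum([np.sin(2 * np.pi * f * t) for f in freqs], axis=0)

f0 = 100.0
harmonics = range(9, 14)                       # five components centered on harmonic 11
standard = complex_tone(f0, harmonics, 0.0)    # harmonic complex
shifted = complex_tone(f0, harmonics, 25.0)    # inharmonic, same 100-Hz envelope rate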

11.
Frequency resolution was evaluated for two normal-hearing and seven hearing-impaired subjects with moderate, flat sensorineural hearing loss by measuring percent correct detection of a 2000-Hz tone as the width of a notch in band-reject noise increased. The level of the tone was fixed for each subject at a criterion performance level in broadband noise. Discrimination of synthetic speech syllables that differed in spectral content in the 2000-Hz region was evaluated as a function of the notch width in the same band-reject noise. Recognition of natural speech consonant/vowel syllables in quiet was also tested; results were analyzed for percent correct performance and relative information transmitted for voicing and place features. In the hearing-impaired subjects, frequency resolution at 2000 Hz was significantly correlated with the discrimination of synthetic speech information in the 2000-Hz region and was not related to the recognition of natural speech nonsense syllables unless (a) the speech stimuli contained the vowel /i/ rather than /a/, and (b) the score reflected information transmitted for place of articulation rather than percent correct.

12.
Articulation index (AI) theory was used to evaluate stop-consonant recognition of normal-hearing listeners and listeners with high-frequency hearing loss. From results reported in a companion article [Dubno et al., J. Acoust. Soc. Am. 85, 347-354 (1989)], a transfer function relating the AI to stop-consonant recognition was established, and a frequency importance function was determined for the nine stop-consonant-vowel syllables used as test stimuli. The calculations included the rms and peak levels of the speech that had been measured in 1/3 octave bands; the internal noise was estimated from the thresholds for each subject. The AI model was then used to predict performance for the hearing-impaired listeners. A majority of the AI predictions for the hearing-impaired subjects fell within +/- 2 standard deviations of the normal-hearing listeners' results. However, as observed in previous data, the AI tended to overestimate performance of the hearing-impaired listeners. The accuracy of the predictions decreased with the magnitude of high-frequency hearing loss. Thus, with the exception of performance for listeners with severe high-frequency hearing loss, the results suggest that poorer speech recognition among hearing-impaired listeners results from reduced audibility within critical spectral regions of the speech stimuli.
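A schematic Python illustration of an AI-style calculation of the kind referred to above: in each frequency band, the audible proportion of a nominal 30-dB speech dynamic range is weighted by a frequency importance function (the band levels, thresholds, and importance values below are invented placeholders, not values from Dubno et al.):

import numpy as np

def articulation_index(speech_peak_db, threshold_db, importance, dyn_range_db=30.0):
    # Band-by-band AI: proportion of the speech dynamic range above the
    # listener's effective threshold, weighted by band importance (sums to 1).
    audible_db = np.clip(speech_peak_db - threshold_db, 0.0, dyn_range_db)
    return float(np.sum(importance * audible_db / dyn_range_db))

# Hypothetical three-band example.
peaks = np.array([60.0, 55.0, 50.0])      # speech peak levels, dB SPL
thresh = np.array([20.0, 35.0, 55.0])     # listener thresholds, dB SPL
imp = np.array([0.3, 0.4, 0.3])           # frequency importance function
print(articulation_index(peaks, thresh, imp))   # about 0.57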

13.
"Masking release" (MR), the improvement of speech intelligibility in modulated compared with unmodulated maskers, is typically smaller than normal for hearing-impaired listeners. The extent to which this is due to reduced audibility or to suprathreshold processing deficits is unclear. Here, the effects of audibility were controlled by using stimuli restricted to the low- (≤1.5 kHz) or mid-frequency (1-3 kHz) region for normal-hearing listeners and hearing-impaired listeners with near-normal hearing in the tested region. Previous work suggests that the latter may have suprathreshold deficits. Both spectral and temporal MR were measured. Consonant identification was measured in quiet and in the presence of unmodulated, amplitude-modulated, and spectrally modulated noise at three signal-to-noise ratios (the same ratios for the two groups). For both frequency regions, consonant identification was poorer for the hearing-impaired than for the normal-hearing listeners in all conditions. The results suggest the presence of suprathreshold deficits for the hearing-impaired listeners, despite near-normal audiometric thresholds over the tested frequency regions. However, spectral MR and temporal MR were similar for the two groups. Thus, the suprathreshold deficits for the hearing-impaired group did not lead to reduced MR.  相似文献   

14.
Thresholds of ongoing interaural time difference (ITD) were obtained from normal-hearing and hearing-impaired listeners who had high-frequency, sensorineural hearing loss. Several stimuli (a 500-Hz sinusoid, a narrow-band noise centered at 500 Hz, a sinusoidally amplitude-modulated 4000-Hz tone, and a narrow-band noise centered at 4000 Hz) and two criteria [equal sound-pressure level (Eq SPL) and equal sensation level (Eq SL)] for determining the level of stimuli presented to each listener were employed. The ITD thresholds and slopes of the psychometric functions were elevated for hearing-impaired listeners for the two high-frequency stimuli in comparison to (1) the listeners' own low-frequency thresholds and (2) data obtained from normal-hearing listeners for stimuli presented with Eq SPL interaurally. The two groups of listeners required similar ITDs to reach threshold when stimuli were presented at Eq SLs to each ear. For low-frequency stimuli, the ITD thresholds of the hearing-impaired listeners were generally slightly greater than those obtained from the normal-hearing listeners. Whether these stimuli were presented at either Eq SPL or Eq SL did not differentially affect the ITD thresholds across groups.

15.
Detection and discrimination of spectral peaks and notches at 1 and 8 kHz
The ability of subjects to detect and discriminate spectral peaks and notches in noise stimuli was determined for center frequencies fc of 1 and 8 kHz. The signals were delivered using an insert earphone designed to produce a flat frequency response at the eardrum for frequencies up to 14 kHz. In experiment I, subjects were required to distinguish a broadband reference noise with a flat spectrum from a noise with either a peak or a notch at fc. The threshold peak height or notch depth was determined as a function of bandwidth of the peak or notch (0.125, 0.25, or 0.5 times fc). Thresholds increased with decreasing bandwidth, particularly for the notches. In experiment II, subjects were required to detect an increase in the height of a spectral peak or a decrease in the depth of a notch as a function of bandwidth. Performance was worse for notches than for peaks, particularly at narrow bandwidths. For both experiments I and II, randomizing (roving) the overall level of the stimuli had little effect at 1 kHz, but tended to impair performance at 8 kHz, particularly for notches. Experiments III-VI measured thresholds for detecting changes in center frequency of sinusoids, bands of noise, and spectral peaks or notches in a broadband background. Thresholds were lowest for the sinusoids and highest for the peaks and notches. The width of the bands, peaks, or notches had only a small effect on thresholds. For the notches at 8 kHz, thresholds for detecting glides in center frequency were lower than thresholds for detecting a difference in center frequency between two steady sounds. Randomizing the overall level of the stimuli made frequency discrimination of the sinusoids worse, but had little or no effect for the noise stimuli. In all six experiments, performance was generally worse at 8 kHz than at 1 kHz. The results are discussed in terms of their implications for the detectability of spectral cues introduced by the pinnae.

16.
The forward-masking properties of inharmonic complex stimuli were measured both for normal and hearing-impaired subjects. The signal threshold for a 1000-Hz pure-tone probe was obtained for six different maskers, which varied in the number of pure-tone components. The masking stimuli consisted of 1, 3, 5, 7, 9, or 11 components, logarithmically spaced in frequency surrounding the signal and presented at a fixed level of 80 dB SPL per component. In most normal-hearing subjects, the threshold for the probe decreased as the number of masking components was increased, demonstrating that stimuli with more components tended to be less effective maskers. Results from hearing-impaired subjects showed no decrease in threshold with increasing number of masking components. Instead, the thresholds increased as more components were added to the first masker. These results appear to be consistent with suppression effects within the multicomponent maskers for the normal subjects and a lack of suppression effects for the hearing-impaired subjects. The results from the normal-hearing subjects are also consistent with "across-channel" cuing.

17.
Algorithms designed to improve speech intelligibility for those with sensorineural hearing loss (SNHL) by enhancing peaks in a spectrum have had limited success. Since testing of such algorithms cannot separate the theory of the design from the implementation itself, the contribution of each of these potentially limiting factors is not clear. Therefore, psychophysical paradigms were used to test subjects with either normal hearing or SNHL in detection tasks using well controlled stimuli to predict and assess the limits in performance gain from a spectrally enhancing algorithm. A group of normal-hearing (NH) and hearing-impaired (HI) subjects listened in two experiments: auditory filter measurements and detection of incremented harmonics in a harmonic spectrum. The results show that NH and HI subjects have an improved ability to detect incremented harmonics when there are spectral decrements surrounding the increment. Various decrement widths and depths were compared against subjects' equivalent rectangular bandwidths (ERBs). NH subjects effectively used the available energy cue in their auditory filters. Some HI subjects, while showing significant improvements, underutilized the energy reduction in their auditory filters.
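For reference on the ERB comparison mentioned above, a tiny Python sketch using the widely cited Glasberg and Moore (1990) ERB formula; the 200-Hz decrement width and 1000-Hz center frequency are hypothetical example values:

def erb_hz(fc_hz):
    # Glasberg & Moore (1990): ERB in Hz of the auditory filter centered at fc_hz.
    return 24.7 * (4.37 * fc_hz / 1000.0 + 1.0)

# Is a 200-Hz-wide spectral decrement around 1000 Hz wider than one auditory filter?
fc = 1000.0
print(erb_hz(fc))            # about 132.6 Hz
print(200.0 > erb_hz(fc))    # True: the decrement spans more than one ERB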

18.
Frequency difference limens were determined as a function of stimulus duration in five normal-hearing and seven hearing-impaired subjects. The frequency-DL-versus-duration functions obtained from normal-hearing subjects were similar to those reported by Liang and Chistovich [Sov. Phys. Acoust. 6, 75-80 (1961)]. As duration increased, the DLs improved rapidly over a range of short durations, improved more gradually over a middle range of durations, and reached an asymptote around 200 ms. The functions obtained from the hearing-impaired subjects were similar to those from normal subjects over the middle and longer durations, but did not display the rapid changes at short durations. The paper examines the ability of a variation of Zwicker's excitation-pattern model of frequency discrimination to explain these duration effects. Most, although not all, of the effects can be adequately explained by the model.

19.
The effects of intensity on monosyllabic word recognition were studied in adults with normal hearing and mild-to-moderate sensorineural hearing loss. The stimuli were bandlimited NU#6 word lists presented in quiet and talker-spectrum-matched noise. Speech levels ranged from 64 to 99 dB SPL and S/N ratios from 28 to -4 dB. In quiet, the performance of normal-hearing subjects remained essentially constant; in noise, at a fixed S/N ratio, it decreased as a linear function of speech level. Hearing-impaired subjects performed like normal-hearing subjects tested in noise when the data were corrected for the effects of audibility loss. From these and other results, it was concluded that: (1) speech intelligibility in noise decreases when speech levels exceed 69 dB SPL and the S/N ratio remains constant; (2) the effects of speech and noise level are synergistic; (3) the deterioration in intelligibility can be modeled as a relative increase in the effective masking level; (4) normal-hearing and hearing-impaired subjects are affected similarly by increased signal level when differences in speech audibility are considered; (5) the negative effects of increasing speech and noise levels on speech recognition are similar for all adult subjects, at least up to 80 years; and (6) the effective dynamic range of speech may be larger than the commonly assumed value of 30 dB.

20.
The effect of tone duration and presentation rate on the discrimination of the temporal order of the middle two tones of a four-tone sequence was investigated in young normal-hearing (YNH) and older hearing-impaired (OHI) listeners. The frequencies and presentation level of the tone sequences were selected to minimize the effect of hearing loss on the performance of the OHI listeners. Tone durations varied from 20 to 400 ms and presentation rates from 2.5 to 25 tones/s. Two experiments were conducted with anisochronous (nonuniform duration and rate across the entire sequence) and isochronous (uniform rate and duration) sequences, respectively. For the YNH listeners, performance for both isochronous and anisochronous sequences was determined primarily by presentation rate such that performance decreased at rates faster than 5 tones/s. For anisochronous tone sequences alone, the effects of rate were more pronounced at short tone durations. For the OHI listeners, both presentation rate and tone duration had an impact on performance for both isochronous and anisochronous sequences such that performance decreased as rate increased above 5 tones/s or duration decreased below 40 ms. Temporal masking was offered as an explanation for the interaction of short durations and fast rates on temporal order discrimination for the anisochronous sequences.

