Similar Documents
Found 20 similar documents (search time: 15 ms)
1.
The ability to discriminate changes in the length of vowels and tonal complexes (filled intervals) and in the duration of closure in stop consonants and gaps in tonal complexes (unfilled intervals) was studied in three normally hearing and seven severely hearing-impaired listeners. The speech stimuli consisted of the vowels (i, I, u, U, a, A) and the consonants (p, t, k), and the tonal complexes consisted of digitally generated sinusoids at 0.5, 1, and 2 kHz. The signals were presented at conversational levels for each listener group, and a 3IFC adaptive procedure was used to estimate difference limens (DLs). The DLs for speech were similar to those for tonal complex stimuli in both the filled and unfilled conditions. Both normal-hearing and hearing-impaired listeners demonstrated greater acuity for changes in the duration of filled than unfilled intervals. Mean thresholds for filled intervals obtained from normally hearing listeners were smaller than those obtained from hearing-impaired listeners. For unfilled intervals, however, the difference between listener groups was not significant. A few hearing-impaired listeners demonstrated temporal acuity comparable to that of normally hearing listeners for several listening conditions. Implications of these results are discussed with regard to speech perception in normal-hearing and hearing-impaired individuals.

2.
A conditional-on-a-single-stimulus (COSS) analysis procedure [B. G. Berg, J. Acoust. Soc. Am. 86, 1743-1746 (1989)] was used to estimate how well normal-hearing and hearing-impaired listeners selectively attend to individual spectral components of a broadband signal in a level discrimination task. On each trial, two multitone complexes consisting of six octave frequencies from 250 to 8000 Hz were presented to listeners. The levels of the individual tones were chosen independently and at random on each presentation. The target tone was selected, within a block of trials, as the 250-, 1000-, or 4000-Hz component. On each trial, listeners were asked to indicate which of the two complex sounds contained the higher level target. As a group, normal-hearing listeners exhibited greater selectivity than hearing-impaired listeners to the 250-Hz target, while hearing-impaired listeners showed greater selectivity than normal-hearing listeners to the 4000-Hz target, which is in the region of their hearing loss. Both groups of listeners displayed large variability in their ability to selectively weight the 1000-Hz target. Trial-by-trial analysis showed a decrease in weighting efficiency with increasing frequency for normal-hearing listeners, but a relatively constant weighting efficiency across frequency for hearing-impaired listeners. Interestingly, hearing-impaired listeners selectively weighted the 4000-Hz target, which was in the region of their hearing loss, more efficiently than did the normal-hearing listeners.
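The trial-by-trial weighting analysis described above can be sketched numerically: simulate per-component level differences between the two intervals, a listener who attends mostly to one component, and recover the relative decision weights by regressing the binary responses on the per-component level differences. The listener model, trial count, and all variable names are illustrative assumptions, not the study's actual analysis code.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_tones = 5000, 6

# Between-interval level differences (dB) for each of the six octave tones;
# levels are drawn independently at random on each presentation.
dlevel = rng.normal(0.0, 2.0, size=(n_trials, n_tones))

# Hypothetical listener: attends mostly to the target (3rd) component,
# with internal noise added to the decision variable.
true_w = np.array([0.10, 0.15, 1.00, 0.20, 0.10, 0.05])
decision = dlevel @ true_w + rng.normal(0.0, 1.0, n_trials)
responses = (decision > 0).astype(float)  # "interval 1 had the higher target"

# COSS-style estimate: regress responses on the per-component level
# differences; the regression coefficients give the relative weights.
X = np.column_stack([np.ones(n_trials), dlevel])
coef, *_ = np.linalg.lstsq(X, responses, rcond=None)
weights = coef[1:] / np.abs(coef[1:]).sum()
print(np.round(weights, 2))
```

With enough trials the normalized coefficients track the simulated listener's weights, with the largest weight on the attended component.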

3.
The ability to discriminate between sounds with different spectral shapes was evaluated for normal-hearing and hearing-impaired listeners. Listeners discriminated between a standard stimulus and a signal stimulus in which half of the standard components were decreased in level and half were increased in level. In one condition, the standard stimulus was the sum of six equal-amplitude tones (equal-SPL), and in another the standard stimulus was the sum of six tones at equal sensation levels re: audiometric thresholds for individual subjects (equal-SL). Spectral weights were estimated in conditions where the amplitudes of the individual tones were perturbed slightly on every presentation. Sensitivity was similar in all conditions for normal-hearing and hearing-impaired listeners. The presence of perturbation and equal-SL components increased thresholds for both groups, but only small differences in weighting strategy were measured between the groups depending on whether the equal-SPL or equal-SL condition was tested. The average data suggest that normal-hearing listeners may rely more on the central components of the spectrum whereas hearing-impaired listeners may have been more likely to use the edges. However, individual weighting functions were quite variable, especially for the hearing-impaired listeners, perhaps reflecting difficulty in processing changes in spectral shape due to hearing loss. Differences in weighting strategy without changes in sensitivity suggest that factors other than spectral weights, such as internal noise or difficulty encoding a reference stimulus, also may dominate performance.

4.
In a multiple observation, sample discrimination experiment normal-hearing (NH) and hearing-impaired (HI) listeners heard two multitone complexes each consisting of six simultaneous tones with nominal frequencies spaced evenly on an ERB(N) logarithmic scale between 257 and 6930 Hz. On every trial, the frequency of each tone was sampled from a normal distribution centered near its nominal frequency. In one interval of a 2IFC task, all tones were sampled from distributions lower in mean frequency and in the other interval from distributions higher in mean frequency. Listeners had to identify the latter interval. Decision weights were obtained from multiple regression analysis of the between-interval frequency differences for each tone and listeners' responses. Frequency difference limens (an index of sensorineural resolution) and decision weights for each tone were used to predict the sensitivity of different decision-theoretic models. Results indicate that low-frequency tones were given much greater perceptual weight than high-frequency tones by both groups of listeners. This tendency increased as hearing loss increased and as sensorineural resolution decreased, resulting in significantly less efficient weighting strategies for the HI listeners. Overall, results indicate that HI listeners integrated frequency information less optimally than NH listeners, even after accounting for differences in sensorineural resolution.
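One common way to quantify the efficiency of a weighting strategy in decision-theoretic terms is to compare the predicted d' of the observed weights with that of the optimal weights. The sketch below assumes independent, equal-variance Gaussian cues with individual sensitivities d'_i; it is a generic formulation, not necessarily the specific model used in this study.

```python
import numpy as np

def predicted_dprime(weights, dprimes):
    """Predicted sensitivity of a linear combiner with the given weights,
    assuming independent, equal-variance Gaussian cues with individual
    sensitivities d'_i."""
    w = np.asarray(weights, float)
    d = np.asarray(dprimes, float)
    return (w @ d) / np.linalg.norm(w)

def weighting_efficiency(weights, dprimes):
    """(d'_observed / d'_optimal)**2; the optimal weights satisfy w_i ∝ d'_i."""
    d_opt = np.linalg.norm(dprimes)
    return (predicted_dprime(weights, dprimes) / d_opt) ** 2

# Equal weighting of two cues with unequal resolution loses sensitivity:
print(weighting_efficiency([1, 1], [2, 1]))  # ≈ 0.9
```

Under this definition, any deviation of the weights from proportionality to the individual sensitivities lowers efficiency below 1, which is how a "less optimal" integration strategy can be expressed as a single number.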

5.
The goal of this study was to measure the ability of adult hearing-impaired listeners to discriminate formant frequency for vowels in isolation, syllables, and sentences. Vowel formant discrimination for F1 and F2 for the vowels /ɪ, ɛ, æ/ was measured. Four experimental factors were manipulated: linguistic context (isolated vowels, syllables, and sentences), signal level (70 and 95 dB SPL), formant frequency, and cognitive load. A complex identification task was added to the formant discrimination task only for sentences to assess effects of cognitive load. Results showed significant elevation in formant thresholds as formant frequency and linguistic context increased. Higher signal level also elevated formant thresholds primarily for F2. However, no effect of the additional identification task on the formant discrimination was observed. In comparable conditions, these hearing-impaired listeners had elevated thresholds for formant discrimination compared to young normal-hearing listeners primarily for F2. Altogether, poorer performance for formant discrimination for these adult hearing-impaired listeners was mainly caused by hearing loss rather than cognitive difficulty for tasks implemented in this study.

6.
7.
Spectral-shape discrimination thresholds were measured in the presence and absence of noise to determine whether normal-hearing and hearing-impaired listeners rely primarily on spectral peaks in the excitation pattern when discriminating between stimuli with different spectral shapes. Standard stimuli were the sum of 2, 4, 6, 8, 10, 20, or 30 equal-amplitude tones with frequencies fixed between 200 and 4000 Hz. Signal stimuli were generated by increasing and decreasing the levels of every other standard component. The function relating the spectral-shape discrimination threshold to the number of components (N) showed an initial decrease in threshold with increasing N and then an increase in threshold when the number of components reached 10 and 6, for normal-hearing and hearing-impaired listeners, respectively. The presence of a 50-dB SPL/Hz noise led to a 1.7 dB increase in threshold for normal-hearing listeners and a 3.5 dB increase for hearing-impaired listeners. Multichannel modeling and the relatively small influence of noise suggest that both normal-hearing and hearing-impaired listeners rely on the peaks in the excitation pattern for spectral-shape discrimination. The greater influence of noise in the data from hearing-impaired listeners is attributed to a poorer representation of spectral peaks.

8.
Frequency discrimination of spectral envelopes of complex stimuli, frequency selectivity measured with psychophysical tuning curves, and speech perception were determined in hearing-impaired subjects each having a relatively flat, sensory-neural loss. Both the frequency discrimination and speech perception measures were obtained in quiet and noise. Most of these subjects showed abnormal susceptibility to ambient noise with regard to speech perception. Frequency discrimination in quiet and frequency selectivity did not correlate significantly. At low signal-to-noise ratios, frequency discrimination correlated significantly with frequency selectivity. Speech perception in noise correlated significantly with frequency selectivity and with frequency discrimination at low signal-to-noise ratios. The frequency discrimination data are discussed in terms of an excitation-pattern model. However, they neither support nor refute the model.

9.
Detection and discrimination of frequency modulation were studied for harmonic signals with triangular spectral envelopes. The center frequency of the stimuli was near 2 kHz; the fundamental frequency was near 100 Hz. To prevent the possibility that the discrimination was based on differences of initial or final frequencies, these frequencies were equal within and across modulations in each individual experiment. Differences between modulations consisted of differences in the trajectories between the initial and final frequencies. Performance worsened as the slopes of the spectral envelopes decreased. Addition of noise also impaired modulation discrimination. The dependence on the signal-to-noise ratio was similar to what is found for stationary stimuli: Discrimination of frequency modulation deteriorated more rapidly with decreasing signal-to-noise ratio when stimuli had shallow spectral slopes than when they had steep spectral slopes. In spite of the precautions taken (i.e., initial and final frequency the same), the discrimination of these stimuli was more likely based on quasistationary frequency discrimination than on discrimination of modulation rate. This conclusion is consistent with previous findings for pure tones presented in quiet that frequency discrimination is more acute than modulation-rate discrimination.

10.
Frequency resolution was evaluated for two normal-hearing and seven hearing-impaired subjects with moderate, flat sensorineural hearing loss by measuring percent correct detection of a 2000-Hz tone as the width of a notch in band-reject noise increased. The level of the tone was fixed for each subject at a criterion performance level in broadband noise. Discrimination of synthetic speech syllables that differed in spectral content in the 2000-Hz region was evaluated as a function of the notch width in the same band-reject noise. Recognition of natural speech consonant/vowel syllables in quiet was also tested; results were analyzed for percent correct performance and relative information transmitted for voicing and place features. In the hearing-impaired subjects, frequency resolution at 2000 Hz was significantly correlated with the discrimination of synthetic speech information in the 2000-Hz region and was not related to the recognition of natural speech nonsense syllables unless (a) the speech stimuli contained the vowel /i/ rather than /a/, and (b) the score reflected information transmitted for place of articulation rather than percent correct.

11.
The ability of baboons to discriminate changes in the formant structures of a synthetic baboon grunt call and an acoustically similar human vowel (/ɛ/) was examined to determine how comparable baboons are to humans in discriminating small changes in vowel sounds, and whether or not any species-specific advantage in discriminability might exist when baboons discriminate their own vocalizations. Baboons were trained to press and hold down a lever to produce a pulsed train of a standard sound (e.g., /ɛ/ or a baboon grunt call), and to release the lever only when a variant of the sound occurred. Synthetic variants of each sound had the same first and third through fifth formants (F1 and F3-5), but varied in the location of the second formant (F2). Thresholds for F2 frequency changes were 55 and 67 Hz for the grunt and vowel stimuli, respectively, and were not statistically different from one another. Baboons discriminated changes in vowel formant structures comparable to those discriminated by humans. No distinct advantages in discrimination performances were observed when the baboons discriminated these synthetic grunt vocalizations.

12.
Detection of simple and complex changes of spectral shape
In most of the previous studies (see Green, 1987) concerning the detection of a change in spectral shape, or "profile analysis," the listener's task was to detect an increment to a single component of an otherwise equal-amplitude, multicomponent background. An important theoretical issue is whether listeners' sensitivity to more complex spectral changes can be predicted from these results. In the present investigation, the sensitivity of a single group of listeners to a wide variety of simple and complex spectral changes was determined. After collecting the data, it was noted that almost all the thresholds could be predicted by a simple calculation scheme that assumed detection of a change in spectral shape occurs when the addition of the signal to the flat, multicomponent background produces a sufficient difference in level between only two regions of the spectrum. Unfortunately, this scheme, while successful for our limited set of data, fails to account for other "profile" data, namely, those obtained when the number of components is altered.
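The simple calculation scheme described above, in which a spectral-shape change is detected once the signal produces a sufficient level difference between two regions of the spectrum, can be caricatured in a few lines. Treating each component as its own "region" and the 1-dB criterion are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

def max_region_diff(levels_db):
    """Largest level difference (dB) between any two spectral regions,
    here taken simply as the individual components."""
    levels_db = np.asarray(levels_db, float)
    return float(levels_db.max() - levels_db.min())

def detects_change(standard_db, signal_db, criterion_db=1.0):
    """Detect a spectral-shape change when adding the signal increases the
    largest between-region level difference by at least the criterion."""
    return max_region_diff(signal_db) - max_region_diff(standard_db) >= criterion_db

flat = [60.0] * 5                        # equal-amplitude background
bumped = [60.0, 60.0, 63.0, 60.0, 60.0]  # one component incremented by 3 dB
print(detects_change(flat, bumped))
```

Because only the two most extreme regions enter the decision, this rule is insensitive to how many components carry the change, which is consistent with the failure noted above when the number of components is altered.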

13.
Measurements are reported on the detectability of signals added to narrow-band sounds. The narrow-band sounds had a bandwidth of 20 Hz and were either Gaussian noise with flat amplitude spectra or sets of equal-amplitude sinusoidal components whose phases were chosen at random. Four different kinds of sinusoidal signals were used. Two signals produced symmetric changes in the audio spectrum adding a component either at the center of the spectrum or at both ends. The other two signals produced asymmetric changes adding a component at either end of the spectrum. The overall level of the sound was randomly varied on each presentation, so that the presence of a signal was largely unrelated to the absolute level of the signal component(s). A model is proposed that assumes the detection of the symmetric signals is based on changes in the shape of the power spectrum of the envelope. Such changes in the envelope power spectrum are probably heard as changes in the "roughness" or "smoothness" of the narrow-band sound. The predictions of this model were obtained from computer simulations. For the asymmetric signals, the most probable detection cues were changes in the pitch of the narrow-band sound. Results from a variety of different experiments using three listeners support these conjectures.

14.
The ability to discriminate between sounds with different spectral shapes was evaluated for normal-hearing and hearing-impaired listeners. Listeners detected a 920-Hz tone added in phase to a single component of a standard consisting of the sum of five tones spaced equally on a logarithmic frequency scale ranging from 200 to 4200 Hz. An overall level randomization of 10 dB was either present or absent. In one subset of conditions, the no-perturbation conditions, the standard stimulus was the sum of equal-amplitude tones. In the perturbation conditions, the amplitudes of the components within a stimulus were randomly altered on every presentation. For both perturbation and no-perturbation conditions, thresholds for the detection of the 920-Hz tone were measured to compare sensitivity to changes in spectral shape between normal-hearing and hearing-impaired listeners. To assess whether hearing-impaired listeners relied on different regions of the spectrum to discriminate between sounds, spectral weights were estimated from the perturbed standards by correlating the listener's responses with the level differences per component across two intervals of a two-alternative forced-choice task. Results showed that hearing-impaired and normal-hearing listeners had similar sensitivity to changes in spectral shape. On average, across-frequency correlation functions also were similar for both groups of listeners, suggesting that as long as all components are audible and well separated in frequency, hearing-impaired listeners can use information across frequency as well as normal-hearing listeners. Analysis of the individual data revealed, however, that normal-hearing listeners may be better able to adopt optimal weighting schemes. This conclusion is only tentative, as differences in internal noise may need to be considered to interpret the results obtained from weighting studies between normal-hearing and hearing-impaired listeners.

15.
A two-alternative forced-choice task was used to measure psychometric functions for the detection of temporal gaps in a 1-kHz, 400-ms sinusoidal signal. The signal always started and finished at a positive-going zero crossing, and the gap duration was varied from 0.5 to 6.0 ms in 0.5-ms steps. The signal level was 80 dB SPL, and a spectrally shaped noise was used to mask splatter associated with the abrupt onset and offset of the signal. Two subjects with normal hearing, two subjects with unilateral cochlear hearing loss, and two subjects with bilateral cochlear hearing loss were tested. The impaired ears had confirmed reductions in frequency selectivity at 1 kHz. For the normal ears, the psychometric functions were nonmonotonic, showing minima for gap durations corresponding to integer multiples of the signal period (n ms, where n is a positive integer) and maxima for durations corresponding to (n - 0.5) ms. For the impaired ears, the psychometric functions showed only small (nonsignificant) nonmonotonicities. Performance overall was slightly worse for the impaired than for the normal ears. The main features of the results could be accounted for using a model consisting of a bandpass filter (the auditory filter), a square-law device, and a sliding temporal integrator. Consistent with the data, the model demonstrates that, although a broader auditory filter has a faster transient response, this does not necessarily lead to improved performance in a gap detection task. The model also indicates that gap thresholds do not provide a direct measure of temporal resolution, since they depend at least partly on intensity resolution.
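The three-stage model described above (bandpass auditory filter, square-law device, sliding temporal integrator) can be sketched as below. A brick-wall FFT filter and a rectangular moving-average window stand in for the auditory filter and the integrator; the cutoff frequencies, window length, and sampling rate are illustrative assumptions, not the fitted model parameters.

```python
import numpy as np

def gap_model_output(signal, fs, f_lo=800.0, f_hi=1250.0, win_ms=7.0):
    """Bandpass filter -> square-law device -> sliding temporal integrator.
    Returns the model's internal representation over time."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    spec[(freqs < f_lo) | (freqs > f_hi)] = 0.0      # brick-wall "auditory filter"
    filtered = np.fft.irfft(spec, n=len(signal))
    energy = filtered ** 2                           # square-law device
    win = max(1, int(fs * win_ms / 1e3))
    return np.convolve(energy, np.ones(win) / win, mode="same")

# A 1-kHz, 400-ms tone with a 4-ms gap: the integrator output dips at the
# gap, but the dip is shallow because the window bridges short gaps.
fs = 16000
t = np.arange(int(0.4 * fs)) / fs
tone = np.sin(2 * np.pi * 1000.0 * t)
g0, g1 = int(0.200 * fs), int(0.204 * fs)
tone[g0:g1] = 0.0
out = gap_model_output(tone, fs)
```

Because the integrator averages energy over several milliseconds, a short gap produces only a partial dip in `out`, which is one way to see why gap thresholds depend partly on intensity resolution rather than on temporal resolution alone.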

16.
The present study had two main purposes. One was to examine if listeners perceive gradually increasing durations of a voiceless fricative categorically ("fluent" versus "stuttered") or continuously (gradient perception from fluent to stuttered). The second purpose was to investigate whether there are gender differences in how listeners perceive various durations of sounds as "prolongations." Forty-four listeners were instructed to rate the duration of the /ʃ/ in the word "shape" produced by a normally fluent speaker. The target word was embedded in the middle of an experimental phrase and the initial /ʃ/ sound was digitally manipulated to create a range of fluent to stuttered sounds. This was accomplished by creating 20 ms stepwise increments for sounds ranging from 120 to 500 ms in duration. Listeners were instructed to give a rating of 1 for a fluent word and a rating of 100 for a stuttered word. The results showed listeners perceived the range of sounds continuously. Also, there was a significant gender difference in that males rated fluent sounds higher than females but female listeners rated stuttered sounds higher than males. The implications of these results are discussed.

17.
To determine how listeners weight different portions of the signal when integrating level information, they were presented with 1-s noise samples the levels of which randomly changed every 100 ms by repeatedly, and independently, drawing from a normal distribution. A given stimulus could be derived from one of two such distributions, a decibel apart, and listeners had to classify each sound as belonging to the "soft" or "loud" group. Subsequently, logistic regression analyses were used to determine to what extent each of the ten temporal segments contributed to the overall judgment. In Experiment 1, a nonoptimal weighting strategy was found that emphasized the beginning, and, to a lesser extent, the ending of the sounds. When listeners received trial-by-trial feedback, however, they approached equal weighting of all stimulus components. In Experiment 2, a spectral change was introduced in the middle of the stimulus sequence, changing from low-pass to high-pass noise, and vice versa. The temporal location of the stimulus change was strongly weighted, much as a new onset. These findings are not accounted for by current models of loudness or intensity discrimination, but are consistent with the idea that temporal weighting in loudness judgments is driven by salient events.
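The segment-weighting analysis described above can be sketched as follows: simulate a listener who over-weights the early segments (a primacy pattern like the one reported for Experiment 1), then recover per-segment weights with a small logistic regression fitted by gradient ascent. The listener model, trial count, and learning parameters are illustrative assumptions, not the study's actual analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_seg = 8000, 10

# Ten 100-ms segment levels per trial; the trial's class ("soft"/"loud")
# shifts every segment by -0.5 or +0.5 dB around a 60-dB base.
labels = rng.integers(0, 2, n_trials)
levels = 60.0 + (labels[:, None] - 0.5) + rng.normal(0.0, 2.0, (n_trials, n_seg))
X = levels - 60.0

# Hypothetical listener: weights decline from the first to the last segment.
true_w = np.linspace(2.0, 0.5, n_seg)
p_loud = 1.0 / (1.0 + np.exp(-(X @ true_w) / 5.0))
responses = (rng.random(n_trials) < p_loud).astype(float)

# Logistic regression fitted by gradient ascent on the log-likelihood;
# the fitted coefficients are the estimated temporal weights.
w = np.zeros(n_seg)
for _ in range(1000):
    pred = 1.0 / (1.0 + np.exp(-X @ w))
    w += 0.01 * X.T @ (responses - pred) / n_trials
print(np.round(w, 3))
```

The recovered coefficients reproduce the simulated primacy pattern, which is the same logic by which the study's regression exposes the over-weighting of stimulus onsets.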

18.
A computational model of auditory analysis is described that is inspired by psychoacoustical and neurophysiological findings in early and central stages of the auditory system. The model provides a unified multiresolution representation of the spectral and temporal features likely critical in the perception of sound. Simplified, more specifically tailored versions of this model have already been validated by successful application in the assessment of speech intelligibility [Elhilali et al., Speech Commun. 41(2-3), 331-348 (2003); Chi et al., J. Acoust. Soc. Am. 106, 2719-2732 (1999)] and in explaining the perception of monaural phase sensitivity [R. Carlyon and S. Shamma, J. Acoust. Soc. Am. 114, 333-348 (2003)]. Here we provide a more complete mathematical formulation of the model, illustrating how complex signals are transformed through various stages of the model, and relating it to comparable existing models of auditory processing. Furthermore, we outline several reconstruction algorithms to resynthesize the sound from the model output so as to evaluate the fidelity of the representation and contribution of different features and cues to the sound percept.

19.
In a previous paper [R. Lutfi, J. Acoust. Soc. Am. 73, 262-267 (1983)], the following rule was proposed for predicting masking by pairs of simultaneous maskers: X_ab = (X_a^p + X_b^p)^(1/p), where, in units of power, X_a and X_b are the individual masking effects of the maskers, X_ab is the combined effect, and 0.20 ≤ p ≤ 0.33. In this paper, the rule is used to predict the results of studies in the literature that have measured masking by sounds with various other complex spectra. In most of these studies, the individual maskers comprising the complex have nominally nonoverlapping power spectra. A single value of p = 0.33 yields predictions in good agreement with the data of these studies. For a study in which the component maskers overlap more appreciably, a larger value of p = 0.50 produces equally accurate predictions. The rule also predicts some general features of the results of studies in which the individual effects of the maskers in the complex are not known but can be estimated. It is suggested that the general applicability of the rule reflects a conjoint analysis by the auditory system of two or more waveform statistics.
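The combination rule above is straightforward to apply directly: with p = 1 it reduces to simple power addition, and the smaller exponents reported here produce "excess" masking well beyond the linear sum. A minimal sketch, with the threshold values chosen for illustration only:

```python
def combined_masking(x_a, x_b, p=0.33):
    """Combined masking effect X_ab = (X_a**p + X_b**p)**(1/p), where
    X_a and X_b are the individual masking effects in units of power."""
    return (x_a ** p + x_b ** p) ** (1.0 / p)

# Two equal maskers, each with masking effect X, combine to X * 2**(1/p):
# roughly 8.2X for p = 0.33, versus only 2X for linear power addition (p = 1).
print(round(combined_masking(1.0, 1.0, 0.33), 2))
```

The exponent p thus acts as a single free parameter tuning the amount of excess masking, which is why one value (0.33) can summarize many nonoverlapping-masker studies while overlapping maskers call for a larger value (0.50).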

20.
Frequency resolution and three tasks of frequency discrimination were measured at 500 and 4000 Hz in 12 normal and 12 hearing-impaired listeners. A three-interval, two-alternative forced-choice procedure was used. Frequency resolution was measured with an abbreviated psychoacoustical tuning curve. Frequency discrimination was measured for (1) a fixed-frequency standard and target, (2) a fixed-frequency standard and a frequency-transition target, and (3) frequency-transition standard and a frequency-transition target. The 50-ms frequency transitions had the same final frequency as the standards, but the initial frequency was lowered to obtain about 79% discrimination performance. There was a strong relationship between poor frequency resolution and elevated pure-tone thresholds, but only a very weak relationship between poor frequency discrimination and elevated pure-tone thresholds. Several hearing-impaired listeners had normal discrimination performance together with pure-tone thresholds of 80-90 dB HL. A slight correlation was found between word recognition and frequency discrimination, but a detailed comparison of the phonetic errors and either the frequency-discrimination or frequency-resolution tasks failed to suggest any consistent interdependencies. These results are consistent with previous work that has suggested that frequency resolution and frequency discrimination are independent processes.
