20 similar documents found
1.
The purpose of this study was to examine the effect of spectral-cue audibility on the recognition of stop consonants in normal-hearing and hearing-impaired adults. Subjects identified six synthetic CV speech tokens in a closed-set response task. Each syllable differed only in the initial 40-ms consonant portion of the stimulus. In order to relate performance to spectral-cue audibility, the initial 40 ms of each CV were analyzed via FFT and the resulting spectral array was passed through a sliding-filter model of the human auditory system to account for logarithmic representation of frequency and the summation of stimulus energy within critical bands. This allowed the spectral data to be displayed in comparison to a subject's sensitivity thresholds. For normal-hearing subjects, an orderly function relating the percentage of audible stimulus to recognition performance was found, with perfect discrimination performance occurring when the bulk of the stimulus spectrum was presented at suprathreshold levels. For the hearing-impaired subjects, however, it was found in many instances that suprathreshold presentation of stop-consonant spectral cues did not yield recognition equivalent to that found for the normal-hearing subjects. These results demonstrate that while the audibility of individual stop consonants is an important factor influencing recognition performance in hearing-impaired subjects, it is not always sufficient to explain the effects of sensorineural hearing loss.
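As a rough illustration of the audibility analysis this abstract describes (FFT of the initial 40 ms, critical-band summation, comparison with thresholds), here is a minimal Python sketch. The function name and its `threshold_dB_at` callback are hypothetical, and the band levels are not calibrated to dB SPL as a real implementation would require.

```python
import numpy as np

def percent_audible(cv_waveform, fs, threshold_dB_at,
                    f_lo=100.0, f_hi=8000.0, n_bands=30):
    """Fraction of the initial 40-ms consonant spectrum that exceeds a
    listener's sensitivity thresholds, after summing energy within
    critical-band-like filters spaced logarithmically in frequency."""
    # Analyze only the initial 40-ms consonant portion, as in the study.
    seg = cv_waveform[: int(0.040 * fs)]
    power = np.abs(np.fft.rfft(seg * np.hanning(len(seg)))) ** 2
    freqs = np.fft.rfftfreq(len(seg), 1.0 / fs)

    # Log-spaced band edges stand in for the sliding-filter auditory model.
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)
    audible = evaluated = 0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_band = (freqs >= lo) & (freqs < hi)
        if not in_band.any():
            continue  # 40-ms window gives ~25-Hz bins; narrow low bands may be empty
        evaluated += 1
        band_level = 10 * np.log10(power[in_band].sum() + 1e-12)
        fc = np.sqrt(lo * hi)  # geometric band center
        if band_level > threshold_dB_at(fc):  # hypothetical threshold callback
            audible += 1
    return 100.0 * audible / max(evaluated, 1)
```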
2.
S. G. Revoile, L. Kozma-Spytek, L. Holden-Pitt, J. M. Pickett, J. Droge. The Journal of the Acoustical Society of America, 1991, 89(1): 457-460
Moderately to profoundly hearing-impaired (n = 30) and normal-hearing (n = 6) listeners identified [p, k, t, f, θ, s] in [symbol; see text], and [symbol; see text]s tokens extracted from spoken sentences. The [symbol; see text]s were also identified in the sentences. The hearing-impaired group distinguished stop/fricative manner more poorly for [symbol; see text] in sentences than when extracted. Further, the group's performance for extracted [symbol; see text] was poorer than for extracted [symbol; see text] and [symbol; see text]. For the normal-hearing group, consonant identification was similar among the syllable and sentence contexts.
3.
Minimum spectral contrast for vowel identification by normal-hearing and hearing-impaired listeners
M. R. Leek, M. F. Dorman, Q. Summerfield. The Journal of the Acoustical Society of America, 1987, 81(1): 148-154
To determine the minimum difference in amplitude between spectral peaks and troughs sufficient for vowel identification by normal-hearing and hearing-impaired listeners, four vowel-like complex sounds were created by summing the first 30 harmonics of a 100-Hz tone. The amplitudes of all harmonics were equal, except for two consecutive harmonics located at each of three "formant" locations. The amplitudes of these harmonics were equal and ranged from 1-8 dB more than the remaining components. Normal-hearing listeners achieved greater than 75% accuracy when peak-to-trough differences were 1-2 dB. Normal-hearing listeners who were tested in a noise background sufficient to raise their thresholds to the level of a flat, moderate hearing loss needed a 4-dB difference for identification. Listeners with a moderate, flat hearing loss required a 6- to 7-dB difference for identification. The results suggest, for normal-hearing listeners, that the peak-to-trough amplitude difference required for identification of this set of vowels is very near the threshold for detection of a change in the amplitude spectrum of a complex signal. Hearing-impaired listeners may have difficulty using closely spaced formants for vowel identification due to abnormal smoothing of the internal representation of the spectrum by broadened auditory filters.
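The stimulus construction is concrete enough to sketch: sum 30 equal-amplitude harmonics of 100 Hz and raise pairs of consecutive harmonics at three "formant" locations by 1-8 dB. The harmonic numbers in the usage line below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def vowel_like_complex(formant_harmonics, peak_dB, f0=100.0,
                       n_harmonics=30, fs=16000, dur=0.5):
    """Sum the first 30 harmonics of a 100-Hz tone at equal amplitude,
    raising pairs of consecutive harmonics at the 'formant' locations
    by peak_dB (1-8 dB in the study)."""
    t = np.arange(int(dur * fs)) / fs
    boosted = {h for f in formant_harmonics for h in (f, f + 1)}
    signal = np.zeros_like(t)
    for h in range(1, n_harmonics + 1):
        amp = 10 ** (peak_dB / 20.0) if h in boosted else 1.0
        signal += amp * np.sin(2 * np.pi * h * f0 * t)
    return signal / np.abs(signal).max()

# Peaks at harmonics 5, 15, and 25 (500, 1500, 2500 Hz) -- illustrative
# choices only; the paper's three formant locations are not specified here.
stim = vowel_like_complex([5, 15, 25], peak_dB=2.0)
```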
4.
A. C. Walley, D. B. Pisoni, R. N. Aslin. The Journal of the Acoustical Society of America, 1984, 75(2): 581-589
According to recent theoretical accounts of place of articulation perception, global, invariant properties of the stop CV syllable onset spectrum serve as primary, innate cues to place of articulation, whereas contextually variable formant transitions constitute secondary, learned cues. By this view, one might expect that young infants would find the discrimination of place of articulation contrasts signaled by formant transition differences more difficult than those cued by gross spectral differences. Using an operant head-turning paradigm, we found that 6-month-old infants were able to discriminate two-formant stimuli contrasting in place of articulation as well as they did five-formant + burst stimuli. Apparently, neither the global properties of the onset spectrum nor simply the additional acoustic information contained in the five-formant + burst stimuli afford the infant any advantage in the discrimination task. Rather, formant transition information provides a sufficient basis for discriminating place of articulation differences.
5.
Two signal-processing algorithms, derived from those described by Stubbs and Summerfield [R. J. Stubbs and Q. Summerfield, J. Acoust. Soc. Am. 84, 1236-1249 (1988)], were used to separate the voiced speech of two talkers speaking simultaneously, at similar intensities, in a single channel. Both algorithms use fundamental frequency (F0) as the basis for segregation. One attenuates the interfering voice by filtering the cepstrum of the signal. The other is a hybrid algorithm that combines cepstral filtering with the technique of harmonic selection [T. W. Parsons, J. Acoust. Soc. Am. 60, 911-918 (1976)]. The algorithms were evaluated and compared in perceptual experiments involving listeners with normal hearing and listeners with cochlear hearing impairments. In experiment 1 the processing was used to separate voiced sentences spoken on a monotone. Both algorithms gave significant increases in intelligibility to both groups of listeners. The improvements were equivalent to an increase of 3-4 dB in the effective signal-to-noise ratio (SNR). In experiment 2 the processing was used to separate voiced sentences spoken with time-varying intonation. For normal-hearing listeners, cepstral filtering gave a significant increase in intelligibility, while the hybrid algorithm gave an increase that was on the margins of significance (p = 0.06). The improvements were equivalent to an increase of 2-3 dB in the effective SNR. For impaired listeners, no intelligibility improvements were demonstrated with intoned sentences. The decrease in performance for intoned material is attributed to limitations of the algorithms when F0 is nonstationary.
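A loose sketch of the cepstral-filtering idea, assuming frame-based processing with overlap-add around it. It notches only the first rahmonic of the interfering voice and is not the Stubbs-Summerfield implementation, which also treats higher rahmonics and tracks F0 over time.

```python
import numpy as np

def cepstral_attenuate(frame, fs, f0_interferer, notch_ms=0.4):
    """Suppress one voice in a mixed frame by zeroing the region of the
    real cepstrum around the interferer's first rahmonic, then
    resynthesizing the liftered magnitude with the original phase."""
    spec = np.fft.fft(frame * np.hanning(len(frame)))
    log_mag = np.log(np.abs(spec) + 1e-12)
    cep = np.fft.ifft(log_mag).real

    q0 = int(round(fs / f0_interferer))          # pitch period in samples
    half = int(round(notch_ms * 1e-3 * fs / 2))  # half-width of the notch
    for k in (q0, len(frame) - q0):              # cepstrum is symmetric
        cep[max(k - half, 0): k + half + 1] = 0.0

    liftered_log_mag = np.fft.fft(cep).real
    return np.fft.ifft(np.exp(liftered_log_mag)
                       * np.exp(1j * np.angle(spec))).real
```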
6.
A. K. Nábělek, Z. Czyzewski, L. A. Krishnan. The Journal of the Acoustical Society of America, 1992, 92(3): 1228-1246
Vowel identification was tested in quiet, noise, and reverberation with 20 normal-hearing subjects and 20 hearing-impaired subjects. Stimuli were 15 English vowels spoken in a /b-t/ context by six male talkers. Each talker produced five tokens of each vowel. In quiet, all stimuli were identified by two judges as the intended targets. The stimuli were degraded by reverberation or speech-spectrum noise. Vowel identification scores depended upon talker, listening condition, and subject type. The relationship between identification errors and spectral details of the vowels is discussed.
7.
The ability to discriminate between sounds with different spectral shapes was evaluated for normal-hearing and hearing-impaired listeners. Listeners discriminated between a standard stimulus and a signal stimulus in which half of the standard components were decreased in level and half were increased in level. In one condition, the standard stimulus was the sum of six equal-amplitude tones (equal-SPL), and in another the standard stimulus was the sum of six tones at equal sensation levels re: audiometric thresholds for individual subjects (equal-SL). Spectral weights were estimated in conditions where the amplitudes of the individual tones were perturbed slightly on every presentation. Sensitivity was similar in all conditions for normal-hearing and hearing-impaired listeners. The presence of perturbation and equal-SL components increased thresholds for both groups, but only small differences in weighting strategy were measured between the groups depending on whether the equal-SPL or equal-SL condition was tested. The average data suggest that normal-hearing listeners may rely more on the central components of the spectrum whereas hearing-impaired listeners may have been more likely to use the edges. However, individual weighting functions were quite variable, especially for the hearing-impaired listeners, perhaps reflecting difficulty in processing changes in spectral shape due to hearing loss. Differences in weighting strategy without changes in sensitivity suggest that factors other than spectral weights, such as internal noise or difficulty encoding a reference stimulus, also may dominate performance.
8.
In a multiple observation, sample discrimination experiment normal-hearing (NH) and hearing-impaired (HI) listeners heard two multitone complexes each consisting of six simultaneous tones with nominal frequencies spaced evenly on an ERB(N) logarithmic scale between 257 and 6930 Hz. On every trial, the frequency of each tone was sampled from a normal distribution centered near its nominal frequency. In one interval of a 2IFC task, all tones were sampled from distributions lower in mean frequency and in the other interval from distributions higher in mean frequency. Listeners had to identify the latter interval. Decision weights were obtained from multiple regression analysis of the between-interval frequency differences for each tone and listeners' responses. Frequency difference limens (an index of sensorineural resolution) and decision weights for each tone were used to predict the sensitivity of different decision-theoretic models. Results indicate that low-frequency tones were given much greater perceptual weight than high-frequency tones by both groups of listeners. This tendency increased as hearing loss increased and as sensorineural resolution decreased, resulting in significantly less efficient weighting strategies for the HI listeners. Overall, results indicate that HI listeners integrated frequency information less optimally than NH listeners, even after accounting for differences in sensorineural resolution.
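The weight-estimation step (multiple regression of responses on between-interval frequency differences) can be sketched as follows; the simulated listener at the bottom merely checks that the regression recovers a known weighting profile.

```python
import numpy as np

def decision_weights(freq_diffs, responses):
    """Relative decision weights from multiple regression of interval
    choices on between-interval frequency differences.

    freq_diffs : (n_trials, n_tones), interval-2 minus interval-1
                 frequency for each tone
    responses  : (n_trials,) 0/1 choices, 1 = "interval 2 higher"
    """
    X = np.column_stack([freq_diffs, np.ones(len(responses))])  # intercept
    coefs, *_ = np.linalg.lstsq(X, responses.astype(float), rcond=None)
    w = coefs[:-1]
    return w / np.abs(w).sum()

# Simulated check: a listener who weights low-frequency tones heavily.
rng = np.random.default_rng(0)
true_w = np.array([0.40, 0.25, 0.15, 0.10, 0.06, 0.04])
diffs = rng.normal(0.0, 1.0, size=(2000, 6))
resp = (diffs @ true_w + rng.normal(0.0, 0.5, 2000) > 0).astype(int)
print(decision_weights(diffs, resp))  # roughly recovers true_w's profile
```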
9.
C. Richie, D. Kewley-Port, M. Coughlin. The Journal of the Acoustical Society of America, 2003, 114(5): 2923-2933
This study examined the effects of mild-to-moderate sensorineural hearing loss on vowel perception abilities of young, hearing-impaired (YHI) adults. Stimuli were presented at a low conversational level with a flat frequency response (approximately 60 dB SPL), and in two gain conditions: (a) high level gain with a flat frequency response (95 dB SPL), and (b) frequency-specific gain shaped according to each listener's hearing loss (designed to simulate the frequency response provided by a linear hearing aid to an input signal of 60 dB SPL). Listeners discriminated changes in the vowels /ɪ e ɛ ʌ æ/ when F1 or F2 varied, and later categorized the vowels. YHI listeners performed better in the two gain conditions than in the conversational level condition. Performances in the two gain conditions were similar, suggesting that upward spread of masking was not seen at these signal levels for these tasks. Results were compared with those from a group of elderly, hearing-impaired (EHI) listeners, reported in Coughlin, Kewley-Port, and Humes [J. Acoust. Soc. Am. 104, 3597-3607 (1998)]. Comparisons revealed no significant differences between the EHI and YHI groups, suggesting that hearing impairment, not age, is the primary contributor to decreased vowel perception in these listeners.
10.
Speakers may adapt the phonetic details of their productions when they anticipate perceptual difficulty or comprehension failure on the part of a listener. Previous research suggests that a speaking style known as clear speech is more intelligible overall than casual, conversational speech for a variety of listener populations. However, it is unknown whether clear speech improves the intelligibility of fricative consonants specifically, or how its effects on fricative perception might differ depending on listener population. The primary goal of this study was to determine whether clear speech enhances fricative intelligibility for normal-hearing listeners and listeners with simulated impairment. Two experiments measured babble signal-to-noise ratio thresholds for fricative minimal pair distinctions for 14 normal-hearing listeners and 14 listeners with simulated sloping, recruiting impairment. Results indicated that clear speech helped both groups overall. However, for impaired listeners, reliable clear speech intelligibility advantages were not found for non-sibilant pairs. Correlation analyses comparing acoustic and perceptual data indicated that a shift of energy concentration toward higher frequency regions and greater source strength contributed to the clear speech effect for normal-hearing listeners. Correlations between acoustic and perceptual data were less consistent for listeners with simulated impairment, and suggested that lower-frequency information may play a role.
11.
J. J. Lentz. The Journal of the Acoustical Society of America, 2006, 120(2): 945-956
Spectral-shape discrimination thresholds were measured in the presence and absence of noise to determine whether normal-hearing and hearing-impaired listeners rely primarily on spectral peaks in the excitation pattern when discriminating between stimuli with different spectral shapes. Standard stimuli were the sum of 2, 4, 6, 8, 10, 20, or 30 equal-amplitude tones with frequencies fixed between 200 and 4000 Hz. Signal stimuli were generated by increasing and decreasing the levels of every other standard component. The function relating the spectral-shape discrimination threshold to the number of components (N) showed an initial decrease in threshold with increasing N and then an increase in threshold when the number of components reached 10 and 6, for normal-hearing and hearing-impaired listeners, respectively. The presence of a 50-dB SPL/Hz noise led to a 1.7 dB increase in threshold for normal-hearing listeners and a 3.5 dB increase for hearing-impaired listeners. Multichannel modeling and the relatively small influence of noise suggest that both normal-hearing and hearing-impaired listeners rely on the peaks in the excitation pattern for spectral-shape discrimination. The greater influence of noise in the data from hearing-impaired listeners is attributed to a poorer representation of spectral peaks.
12.
This study acoustically analyzed the stop consonants produced by adults who stutter during fluent reading, measuring voice onset time (VOT) and computing the spectral moments of the stop burst, and compared stutterers before and after speech therapy with nonstutterers. Multifactor analysis of variance showed that stutterers' VOTs, although slightly longer than nonstutterers', did not reach statistical significance, and were strongly influenced by place of articulation and the following vowel. A significant difference in the spectral mean of the burst segment was also observed between pre-therapy stutterers and nonstutterers; the lower spectral mean for stutterers may reflect a more posterior constriction between the tongue and the alveolar ridge or palate in the vocal tract. In addition, the following vowel had less influence on the burst spectrum for stutterers, indicating relatively weak coarticulation. After speech therapy, stutterers' VOT and burst spectra tended to approach those of nonstutterers.
13.
A. H. Suter, B. L. Lempert, J. R. Franks. The Journal of the Acoustical Society of America, 1990, 87(5): 2114-2117
Many of the 9 million workers exposed to average noise levels of 85 dB(A) and above are required to wear hearing protection devices, and many of these workers have already developed noise-induced hearing impairments. There is some evidence in the literature that hearing-impaired users may not receive as much attenuation from hearing protectors as normal-hearing users. This study assessed real-ear attenuation at threshold for ten normal-hearing and ten hearing-impaired subjects using a set of David Clark 10A earmuffs. Testing procedures followed the specifications of ANSI S12.6-1984. The results showed that the hearing-impaired subjects received slightly more attenuation than the normal-hearing subjects at all frequencies, but these differences were not statistically significant. These results provide additional support to the finding that hearing protection devices are capable of providing as much attenuation to hearing-impaired users as they do to normal-hearing individuals.
14.
Two signal-processing algorithms, designed to separate the voiced speech of two talkers speaking simultaneously at similar intensities in a single channel, were compared and evaluated. Both algorithms exploit the harmonic structure of voiced speech and require a difference in fundamental frequency (F0) between the voices to operate successfully. One attenuates the interfering voice by filtering the cepstrum of the combined signal. The other uses the method of harmonic selection [T. W. Parsons, J. Acoust. Soc. Am. 60, 911-918 (1976)] to resynthesize the target voice from fragmentary spectral information. Two perceptual evaluations were carried out. One involved the separation of pairs of vowels synthesized on static F0's; the other involved the recovery of consonant-vowel (CV) words masked by a synthesized vowel. Normal-hearing listeners and four listeners with moderate-to-severe, bilateral, symmetrical, sensorineural hearing impairments were tested. All listeners showed increased accuracy of identification when the target voice was enhanced by processing. The vowel-identification data show that intelligibility enhancement is possible over a range of F0 separations between the target and interfering voice. The recovery of CV words demonstrates that the processing is valid not only for spectrally static vowels but also for less intense time-varying voiced consonants. The results for the impaired listeners suggest that the algorithms may be applicable as components of a noise-reduction system in future digital signal-processing hearing aids. The vowel-separation test, and subjective listening, suggest that harmonic selection, which is the more computationally expensive method, produces the more effective voice separation.
15.
For a group of 30 hearing-impaired subjects and a matched group of 15 normal-hearing subjects (age range 13-17) the following data were collected: the tone audiogram, the auditory bandwidth at 1000 Hz, and the recognition threshold of a short melody presented simultaneously with two other melodies, lower and higher in frequency, respectively. The threshold was defined as the frequency distance required to recognize the test melody. It was found that, whereas the mean recognition threshold for the normal-hearing subjects was five semitones, it was, on the average, 27 semitones for the hearing-impaired subjects. Although the interindividual spread for the latter group was large, it did not correlate with the subjects' auditory bandwidth, nor with their musical experience or education.
16.
C. Ludvigsen. The Journal of the Acoustical Society of America, 1987, 82(4): 1162-1171
The word recognition ability of 4 normal-hearing and 13 cochlearly hearing-impaired listeners was evaluated. Filtered and unfiltered speech in quiet and in noise were presented monaurally through headphones. The noise varied over listening situations with regard to spectrum, level, and temporal envelope. Articulation index theory was applied to predict the results. Two calculation methods were used, both based on the ANSI S3.5-1969 20-band method [S3.5-1969 (American National Standards Institute, New York)]. Method I was almost identical to the ANSI method. Method II included a level- and hearing-loss-dependent calculation of masking of stationary and on-off gated noise signals and of self-masking of speech. Method II provided the best prediction capability, and it is concluded that speech intelligibility of cochlearly hearing-impaired listeners may also, to a first approximation, be predicted from articulation index theory.
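A schematic of the 20-band articulation index calculation underlying both methods: clip each band's speech-to-noise ratio to a 30-dB range and form an importance-weighted average. The uniform importance weights are placeholders; ANSI S3.5-1969 tabulates unequal values, and Method II's level-dependent masking corrections are not modeled here.

```python
import numpy as np

def articulation_index(speech_band_levels, noise_band_levels,
                       band_importance=None):
    """Schematic 20-band AI: clip each band's speech-to-noise ratio to a
    30-dB range and average with importance weights. Uniform weights are
    placeholders; the standard tabulates unequal values, and hearing
    thresholds can be folded in as an effective noise floor."""
    s = np.asarray(speech_band_levels, dtype=float)
    n = np.asarray(noise_band_levels, dtype=float)
    if band_importance is None:
        band_importance = np.full(s.size, 1.0 / s.size)
    snr = np.clip(s - n, 0.0, 30.0)
    return float(np.sum(band_importance * snr / 30.0))
```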
17.
This investigation examined whether listeners with mild-moderate sensorineural hearing impairment have a deficit in the ability to integrate synchronous spectral information in the perception of speech. In stage 1, the bandwidth of filtered speech centered either on 500 or 2500 Hz was varied adaptively to determine the width required for approximately 15%-25% correct recognition. In stage 2, these criterion bandwidths were presented simultaneously and percent correct performance was determined in fixed block trials. Experiment 1 tested normal-hearing listeners in quiet and in masking noise. The main findings were (1) there was no correlation between the criterion bandwidths at 500 and 2500 Hz; (2) listeners achieved a high percent correct in stage 2 (approximately 80%); and (3) performance in quiet and noise was similar. Experiment 2 tested listeners with mild-moderate sensorineural hearing impairment. The main findings were (1) the impaired listeners showed high variability in stage 1, with some listeners requiring narrower and others requiring wider bandwidths than normal, and (2) hearing-impaired listeners achieved percent correct performance in stage 2 that was comparable to normal. The results indicate that listeners with mild-moderate sensorineural hearing loss do not have an essential deficit in the ability to integrate across-frequency speech information.
18.
A conditional-on-a-single-stimulus (COSS) analysis procedure [B. G. Berg, J. Acoust. Soc. Am. 86, 1743-1746 (1989)] was used to estimate how well normal-hearing and hearing-impaired listeners selectively attend to individual spectral components of a broadband signal in a level discrimination task. On each trial, two multitone complexes consisting of six octave frequencies from 250 to 8000 Hz were presented to listeners. The levels of the individual tones were chosen independently and at random on each presentation. The target tone was selected, within a block of trials, as the 250-, 1000-, or 4000-Hz component. On each trial, listeners were asked to indicate which of the two complex sounds contained the higher level target. As a group, normal-hearing listeners exhibited greater selectivity than hearing-impaired listeners to the 250-Hz target, while hearing-impaired listeners showed greater selectivity than normal-hearing listeners to the 4000-Hz target, which is in the region of their hearing loss. Both groups of listeners displayed large variability in their ability to selectively weight the 1000-Hz target. Trial-by-trial analysis showed a decrease in weighting efficiency with increasing frequency for normal-hearing listeners, but a relatively constant weighting efficiency across frequency for hearing-impaired listeners. Interestingly, hearing-impaired listeners selectively weighted the 4000-Hz target, which was in the region of their hearing loss, more efficiently than did the normal-hearing listeners.
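A simplified COSS-style estimator, assuming 0/1 responses and conditioning only on the component levels in one interval: bin each component's trial-by-trial level, z-transform the conditional response proportions, and take the slope against level as that component's weight.

```python
import numpy as np
from scipy.stats import norm

def coss_weights(levels, responses, n_bins=8):
    """COSS-style weights: bin each component's trial-by-trial level in
    interval 2, z-transform the conditional proportion of 'interval 2'
    responses, and fit the slope against level. Steeper slope = larger weight.

    levels    : (n_trials, n_tones) component levels in interval 2
    responses : (n_trials,) 0/1, 1 = chose interval 2
    """
    n_trials, n_tones = levels.shape
    w = np.zeros(n_tones)
    for i in range(n_tones):
        edges = np.quantile(levels[:, i], np.linspace(0, 1, n_bins + 1))
        centers, z = [], []
        for lo, hi in zip(edges[:-1], edges[1:]):
            sel = (levels[:, i] >= lo) & (levels[:, i] <= hi)
            p = np.clip(responses[sel].mean(), 0.01, 0.99)  # avoid +/- inf
            centers.append(levels[sel, i].mean())
            z.append(norm.ppf(p))
        w[i] = np.polyfit(centers, z, 1)[0]  # slope of conditional function
    return w / np.abs(w).sum()
```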
19.
Detection of a temporal gap in low-frequency narrow-band signals by normal-hearing and hearing-impaired listeners
Temporal processing ability in the hearing impaired was investigated in a 2IFC gap-detection paradigm. The stimuli were digitally constructed 50-Hz-wide bands of noise centered at 250, 500, and 1000 Hz. On each trial, two 400-ms noise samples were paired, shaped at onset and offset, filtered, and presented in quiet with and without a temporal gap. A modified up-down procedure with trial-by-trial feedback was used to establish threshold of detection of the gap. Approximately 4 h of practice preceded data collection; final estimate of threshold was the average of six listening blocks. There were 10 listeners, 19-25 years old. Five had normal hearing; five had a moderate congenital sensorineural hearing loss with relatively flat audiometric configuration. Near threshold (5 dB SL), all listeners performed similarly. At 15 and 25 dB SL, the normal-hearing group performed better than the hearing-impaired group. At 78 dB SPL, equal to the average intensity of the 5-dB SL condition for the hearing impaired, the normal-hearing group continued to improve and demonstrated a frequency effect not seen in the other conditions. Substantial individual differences were found in both groups, though intralistener variability was as small as expected for these narrow-bandwidth signals.
20.
K. W. Grant, J. B. Tufts, S. Greenberg. The Journal of the Acoustical Society of America, 2007, 121(2): 1164-1176
In face-to-face speech communication, the listener extracts and integrates information from the acoustic and optic speech signals. Integration occurs within the auditory modality (i.e., across the acoustic frequency spectrum) and across sensory modalities (i.e., across the acoustic and optic signals). The difficulties experienced by some hearing-impaired listeners in understanding speech could be attributed to losses in the extraction of speech information, the integration of speech cues, or both. The present study evaluated the ability of normal-hearing and hearing-impaired listeners to integrate speech information within and across sensory modalities in order to determine the degree to which integration efficiency may be a factor in the performance of hearing-impaired listeners. Auditory-visual nonsense syllables consisting of eighteen medial consonants surrounded by the vowel [a] were processed into four nonoverlapping acoustic filter bands between 300 and 6000 Hz. A variety of one, two, three, and four filter-band combinations were presented for identification in auditory-only and auditory-visual conditions; a visual-only condition was also included. Integration efficiency was evaluated using a model of optimal integration. Results showed that normal-hearing and hearing-impaired listeners integrated information across the auditory and visual sensory modalities with a high degree of efficiency, independent of differences in auditory capabilities. However, across-frequency integration for auditory-only input was less efficient for hearing-impaired listeners. These individuals exhibited particular difficulty extracting information from the highest frequency band (4762-6000 Hz) when speech information was presented concurrently in the next lower-frequency band (1890-2381 Hz). Results suggest that integration of speech information within the auditory modality, but not across auditory and visual modalities, affects speech understanding in hearing-impaired listeners.
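Integration efficiency can be benchmarked against optimal combination of independent channels, where the predicted combined d' is the root-sum-of-squares of the single-channel d' values; this is a simplified stand-in for the integration model used in the study, and the numbers in the usage line are invented for illustration.

```python
import numpy as np

def integration_efficiency(single_channel_dprimes, combined_dprime):
    """Ratio of measured combined sensitivity to the optimal prediction,
    taken here as the root-sum-of-squares of independent channels'
    d' values; 1.0 indicates optimal integration."""
    predicted = np.sqrt(np.sum(np.square(single_channel_dprimes)))
    return combined_dprime / predicted

# e.g., auditory-only d' = 1.2, visual-only d' = 0.9, measured AV d' = 1.4
print(integration_efficiency([1.2, 0.9], 1.4))  # ~0.93
```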