Similar Documents
 Found 20 similar documents (search time: 125 ms)
1.
Speech-understanding difficulties observed in elderly hearing-impaired listeners are predominantly errors in the recognition of consonants, particularly among consonants that share the same manner of articulation. Spectral shape is an important acoustic cue that serves to distinguish such consonants. The present study examined whether individual differences in speech understanding among elderly hearing-impaired listeners could be explained by individual differences in spectral-shape discrimination ability. This study included a group of 20 elderly hearing-impaired listeners, as well as a group of young normal-hearing adults for comparison purposes. All subjects were tested on speech-identification tasks, with natural and computer-synthesized speech stimuli, and on a series of spectral-shape discrimination tasks. As expected, the young normal-hearing adults performed better than the elderly listeners on many of the identification tasks and on all but two discrimination tasks. Regression analyses of the data from the elderly listeners revealed moderate predictive relationships between some of the spectral-shape discrimination thresholds and speech-identification performance. The results indicated that when all stimuli were at least minimally audible, some of the individual differences in the identification of natural and synthetic speech tokens by elderly hearing-impaired listeners were associated with corresponding differences in their spectral-shape discrimination abilities for similar sounds.

2.
Speech-intelligibility tests auralized in a virtual classroom were used to investigate the optimal reverberation times for verbal communication for normal-hearing and hearing-impaired adults. The idealized classroom had simple geometry, uniform surface absorption, and an approximately diffuse sound field. It contained a speech source, a listener at a receiver position, and a noise source located at one of two positions. The relative output levels of the speech and noise sources were varied, along with the surface absorption and the corresponding reverberation time. The binaural impulse responses of the speech and noise sources in each classroom configuration were convolved with Modified Rhyme Test (MRT) and babble-noise signals. The resulting signals were presented to normal-hearing and hearing-impaired adult subjects to identify the configurations that gave the highest speech intelligibilities for the two groups. For both subject groups, when the speech source was closer to the listener than the noise source, the optimal reverberation time was zero. When the noise source was closer to the listener than the speech source, the optimal reverberation time included both zero and nonzero values. The results generally support previous theoretical results.
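The auralization step described above convolves dry speech and babble signals with the classroom's binaural impulse responses before presentation. As a rough illustration of that operation (not the study's actual toolchain), here is a direct-form convolution applied to a made-up two-tap "room" impulse response:

```python
def convolve(x, h):
    """Direct-form FIR convolution: pass a dry signal x through an
    impulse response h (e.g., one ear of a binaural room response)."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

# Toy "room": direct sound plus one echo at half amplitude, two samples later.
speech = [1.0, 0.0, 0.5, -0.25]
room_ir = [1.0, 0.0, 0.5]
auralized = convolve(speech, room_ir)  # length len(x) + len(h) - 1
```

In practice each source is convolved with its own left- and right-ear responses and the results mixed per ear; real room responses run to thousands of taps, where FFT-based convolution is the usual choice.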

3.
Recent studies with adults have suggested that amplification at 4 kHz and above fails to improve speech recognition and may even degrade performance when high-frequency thresholds exceed 50-60 dB HL. This study examined the extent to which high frequencies can provide useful information for fricative perception for normal-hearing and hearing-impaired children and adults. Eighty subjects (20 per group) participated. Nonsense syllables containing the phonemes /s/, /f/, and /θ/, produced by a male, female, and child talker, were low-pass filtered at 2, 3, 4, 5, 6, and 9 kHz. Frequency shaping was provided for the hearing-impaired subjects only. Results revealed significant differences in recognition between the four groups of subjects. Specifically, both groups of children performed more poorly than their adult counterparts at similar bandwidths. Likewise, both hearing-impaired groups performed more poorly than their normal-hearing counterparts. In addition, significant talker effects for /s/ were observed. For the male talker, optimum performance was reached at a bandwidth of approximately 4-5 kHz, whereas optimum performance for the female and child talkers did not occur until a bandwidth of 9 kHz.

4.
The effects of intensity on monosyllabic word recognition were studied in adults with normal hearing and mild-to-moderate sensorineural hearing loss. The stimuli were bandlimited NU#6 word lists presented in quiet and talker-spectrum-matched noise. Speech levels ranged from 64 to 99 dB SPL and S/N ratios from 28 to -4 dB. In quiet, the performance of normal-hearing subjects remained essentially constant; in noise, at a fixed S/N ratio, it decreased as a linear function of speech level. Hearing-impaired subjects performed like normal-hearing subjects tested in noise when the data were corrected for the effects of audibility loss. From these and other results, it was concluded that: (1) speech intelligibility in noise decreases when speech levels exceed 69 dB SPL and the S/N ratio remains constant; (2) the effects of speech and noise level are synergistic; (3) the deterioration in intelligibility can be modeled as a relative increase in the effective masking level; (4) normal-hearing and hearing-impaired subjects are affected similarly by increased signal level when differences in speech audibility are considered; (5) the negative effects of increasing speech and noise levels on speech recognition are similar for all adult subjects, at least up to 80 years; and (6) the effective dynamic range of speech may be larger than the commonly assumed value of 30 dB.

5.
Listening to speech in competing sounds poses a major difficulty for children with impaired hearing. This study aimed to determine the ability of children (3-12 yr of age) to use spatial separation between target speech and competing babble to improve speech intelligibility. Fifty-eight children (31 with normal hearing and 27 with impaired hearing who use bilateral hearing aids) were assessed with word and sentence materials. Speech reception thresholds (SRTs) were measured with speech presented from 0° azimuth, and competing babble from either 0° or ±90° azimuth. Spatial release from masking (SRM) was defined as the difference between SRTs measured with co-located speech and babble and SRTs measured with spatially separated speech and babble. On average, hearing-impaired children attained near-normal performance when speech and babble originated from the frontal source, but performed more poorly than their normal-hearing peers when babble was spatially separated from target speech. On average, normal-hearing children obtained an SRM of 3 dB, whereas children with hearing loss did not demonstrate SRM. Results suggest that hearing-impaired children may need an enhanced signal-to-noise ratio to hear speech in difficult listening conditions as well as normal-hearing children do.
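The SRM measure defined above is simply the difference between two speech reception thresholds. A minimal sketch (the SRT values below are invented for illustration, not the study's data):

```python
def spatial_release_from_masking(srt_colocated_db, srt_separated_db):
    """SRM = SRT with co-located speech and babble minus SRT with spatially
    separated sources; positive values mean separation helped the listener."""
    return srt_colocated_db - srt_separated_db

# Hypothetical listeners: a normal-hearing child gains 3 dB from separation,
# while a hearing-impaired child gains nothing.
srm_nh = spatial_release_from_masking(-2.0, -5.0)
srm_hi = spatial_release_from_masking(-1.0, -1.0)
```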

6.
Speech reception thresholds (SRTs) were measured with a competing talker background for signals processed to contain variable amounts of temporal fine structure (TFS) information, using nine normal-hearing and nine hearing-impaired subjects. Signals (speech and background talker) were bandpass filtered into channels. Channel signals for channel numbers above a "cut-off channel" (CO) were vocoded to remove TFS information, while channel signals for channel numbers of CO and below were left unprocessed. Signals from all channels were combined. As a group, hearing-impaired subjects benefited less than normal-hearing subjects from the additional TFS information that was available as CO increased. The amount of benefit varied between hearing-impaired individuals, with some showing no improvement in SRT and one showing an improvement similar to that for normal-hearing subjects. The reduced ability to take advantage of TFS information in speech may partially explain why subjects with cochlear hearing loss get less benefit from listening in a fluctuating background than normal-hearing subjects. TFS information may be important in identifying the temporal "dips" in such a background.
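The vocoding step above discards a channel's temporal fine structure while retaining its temporal envelope. A minimal single-channel sketch, assuming half-wave rectification plus moving-average smoothing as the envelope extractor and a uniform-noise carrier (published vocoders typically use Hilbert envelopes and band-limited carriers instead):

```python
import math
import random

def vocode_channel(signal, fs, smooth_ms=8.0, seed=1):
    """Replace a channel's temporal fine structure with noise, keeping only
    the temporal envelope (half-wave rectify + moving-average smoothing)."""
    n_smooth = max(1, int(fs * smooth_ms / 1000.0))
    rectified = [max(s, 0.0) for s in signal]
    env = []
    acc = 0.0
    for i, r in enumerate(rectified):
        acc += r
        if i >= n_smooth:
            acc -= rectified[i - n_smooth]
        env.append(acc / n_smooth)
    rng = random.Random(seed)
    # Modulate a noise carrier with the extracted envelope.
    return [e * rng.uniform(-1.0, 1.0) for e in env]

fs = 16000
tone = [math.sin(2 * math.pi * 500 * i / fs) for i in range(1600)]  # one channel
vocoded = vocode_channel(tone, fs)
```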

7.
Temporal fine structure (TFS) sensitivity, frequency selectivity, and speech reception in noise were measured for young normal-hearing (NHY), old normal-hearing (NHO), and hearing-impaired (HI) subjects. Two measures of TFS sensitivity were used: the "TFS-LF test" (interaural phase difference discrimination) and the "TFS2 test" (discrimination of harmonic and frequency-shifted tones). These measures were not significantly correlated with frequency selectivity (after partialing out the effect of audiometric threshold), suggesting that insensitivity to TFS cannot be wholly explained by a broadening of auditory filters. The results of the two tests of TFS sensitivity were significantly but modestly correlated, suggesting that performance of the tests may be partly influenced by different factors. The NHO group performed significantly more poorly than the NHY group for both measures of TFS sensitivity, but not frequency selectivity, suggesting that TFS sensitivity declines with age in the absence of elevated audiometric thresholds or broadened auditory filters. When the effect of mean audiometric threshold was partialed out, speech reception thresholds in modulated noise were correlated with TFS2 scores, but not measures of frequency selectivity or TFS-LF test scores, suggesting that a reduction in sensitivity to TFS can partly account for the speech perception difficulties experienced by hearing-impaired subjects.  相似文献   

8.
Speech reception thresholds (SRTs) for sentences were determined in stationary and modulated background noise for two age-matched groups of normal-hearing (N = 13) and hearing-impaired listeners (N = 21). Correlations were studied between the SRT in noise and measures of auditory and nonauditory performance, after which stepwise regression analyses were performed within both groups separately. Auditory measures included the pure-tone audiogram and tests of spectral and temporal acuity. Nonauditory factors were assessed by measuring the text reception threshold (TRT), a visual analogue of the SRT, in which partially masked sentences were adaptively presented. Results indicate that, for the normal-hearing group, the variance in speech reception is mainly associated with nonauditory factors, both in stationary and in modulated noise. For the hearing-impaired group, speech reception in stationary noise is mainly related to the audiogram, even when audibility effects are accounted for. In modulated noise, both auditory (temporal acuity) and nonauditory factors (TRT) contribute to explaining interindividual differences in speech reception. Age was not a significant factor in the results. It is concluded that, under some conditions, nonauditory factors are relevant for the perception of speech in noise. Further evaluation of nonauditory factors might enable adapting the expectations from auditory rehabilitation in clinical settings.  相似文献   

9.
Several studies have demonstrated that when talkers are instructed to speak clearly, the resulting speech is significantly more intelligible than speech produced in ordinary conversation. These speech intelligibility improvements are accompanied by a wide variety of acoustic changes. The current study explored the relationship between acoustic properties of vowels and their identification in clear and conversational speech, for young normal-hearing (YNH) and elderly hearing-impaired (EHI) listeners. Monosyllabic words excised from sentences spoken either clearly or conversationally by a male talker were presented in 12-talker babble for vowel identification. While vowel intelligibility was significantly higher in clear speech than in conversational speech for the YNH listeners, no clear speech advantage was found for the EHI group. Regression analyses were used to assess the relative importance of spectral target, dynamic formant movement, and duration information for perception of individual vowels. For both listener groups, all three types of information emerged as primary cues to vowel identity. However, the relative importance of the three cues for individual vowels differed greatly for the YNH and EHI listeners. This suggests that hearing loss alters the way acoustic cues are used for identifying vowels.  相似文献   

10.
In face-to-face speech communication, the listener extracts and integrates information from the acoustic and optic speech signals. Integration occurs within the auditory modality (i.e., across the acoustic frequency spectrum) and across sensory modalities (i.e., across the acoustic and optic signals). The difficulties experienced by some hearing-impaired listeners in understanding speech could be attributed to losses in the extraction of speech information, the integration of speech cues, or both. The present study evaluated the ability of normal-hearing and hearing-impaired listeners to integrate speech information within and across sensory modalities in order to determine the degree to which integration efficiency may be a factor in the performance of hearing-impaired listeners. Auditory-visual nonsense syllables consisting of eighteen medial consonants surrounded by the vowel [a] were processed into four nonoverlapping acoustic filter bands between 300 and 6000 Hz. A variety of one, two, three, and four filter-band combinations were presented for identification in auditory-only and auditory-visual conditions: A visual-only condition was also included. Integration efficiency was evaluated using a model of optimal integration. Results showed that normal-hearing and hearing-impaired listeners integrated information across the auditory and visual sensory modalities with a high degree of efficiency, independent of differences in auditory capabilities. However, across-frequency integration for auditory-only input was less efficient for hearing-impaired listeners. These individuals exhibited particular difficulty extracting information from the highest frequency band (4762-6000 Hz) when speech information was presented concurrently in the next lower-frequency band (1890-2381 Hz). Results suggest that integration of speech information within the auditory modality, but not across auditory and visual modalities, affects speech understanding in hearing-impaired listeners.  
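The study above scored integration efficiency against a formal model of optimal integration. As a much simpler stand-in (illustrative only, not the model actually used), an independent-errors combination predicts the auditory-visual score expected if the two modalities fail independently; observed scores relative to such a prediction are one way to reason about integration:

```python
def predicted_av_score(p_auditory, p_visual):
    """Independent-errors prediction: a token is identified correctly if
    either the auditory or the visual channel alone would identify it."""
    return 1.0 - (1.0 - p_auditory) * (1.0 - p_visual)

# Hypothetical unimodal proportions correct.
p_av = predicted_av_score(0.5, 0.6)  # ≈ 0.8
```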

11.
In a recent study [S. Gordon-Salant, J. Acoust. Soc. Am. 80, 1599-1607 (1986)], young and elderly normal-hearing listeners demonstrated significant improvements in consonant-vowel (CV) recognition with acoustic modification of the speech signal incorporating increments in the consonant-vowel ratio (CVR). Acoustic modification of consonant duration failed to enhance performance. The present study investigated whether consonant recognition deficits of elderly hearing-impaired listeners would be reduced by these acoustic modifications, as well as by increases in speech level. Performance of elderly hearing-impaired listeners with gradually sloping and sharply sloping sensorineural hearing losses was compared to performance of elderly normal-threshold listeners (reported previously) for recognition of a variety of nonsense syllable stimuli. These stimuli included unmodified CVs, CVs with increases in CVR, CVs with increases in consonant duration, and CVs with increases in both CVR and consonant duration. Stimuli were presented at each of two speech levels with a background of noise. Results obtained from the hearing-impaired listeners agreed with those observed previously from normal-hearing listeners. Differences in performance among the three subject groups as a function of level were also observed.

12.
An articulation index calculation procedure developed for use with individual normal-hearing listeners [C. Pavlovic and G. Studebaker, J. Acoust. Soc. Am. 75, 1606-1612 (1984)] was modified to account for the deterioration in suprathreshold speech processing produced by sensorineural hearing impairment. Data from four normal-hearing and four hearing-impaired subjects were used to relate the loss in hearing sensitivity to the deterioration in speech processing in quiet and in noise. The new procedure only requires hearing threshold measurements and consists of the following two modifications of the original AI procedure of Pavlovic and Studebaker (1984): The speech and noise spectrum densities are integrated over bandwidths which are, when expressed in decibels, larger than the critical bandwidths by 10% of the hearing loss. This is in contrast to the unmodified procedure where integration is performed over critical bandwidths. The contribution of each frequency to the AI is the product of its contribution in the unmodified AI procedure and a "speech desensitization factor." The desensitization factor is specified as a function of the hearing loss. The predictive accuracies of both the unmodified and the modified calculation procedures were assessed by comparing the expected and observed speech recognition scores of four hearing-impaired subjects under various conditions of speech filtering and noise masking. The modified procedure appears accurate for general applications. In contrast, the unmodified procedure appears accurate only for applications where results obtained under various conditions on a single listener are compared to each other.
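The second modification above multiplies each band's AI contribution by a desensitization factor that depends on hearing loss. The sketch below shows only that structure; the linear factor and the band importances and audibilities are invented for illustration, and Pavlovic and Studebaker's published functions differ:

```python
def band_contribution(importance, audibility, hearing_loss_db):
    """One band's AI contribution: the unmodified contribution
    (importance x audibility) scaled by an assumed linear
    'speech desensitization factor' that shrinks with hearing loss."""
    desensitization = max(0.0, 1.0 - hearing_loss_db / 100.0)  # assumed form
    return importance * audibility * desensitization

def articulation_index(bands):
    """Sum band contributions; bands are (importance, audibility, loss_dB)."""
    return sum(band_contribution(*b) for b in bands)

# Invented four-band example: importances sum to 1.0.
bands = [(0.2, 1.0, 0.0), (0.3, 0.8, 20.0), (0.3, 0.5, 40.0), (0.2, 0.2, 60.0)]
ai = articulation_index(bands)
```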

13.
The purpose of this study was to examine the effect of spectral-cue audibility on the recognition of stop consonants in normal-hearing and hearing-impaired adults. Subjects identified six synthetic CV speech tokens in a closed-set response task. Each syllable differed only in the initial 40-ms consonant portion of the stimulus. In order to relate performance to spectral-cue audibility, the initial 40 ms of each CV were analyzed via FFT and the resulting spectral array was passed through a sliding-filter model of the human auditory system to account for logarithmic representation of frequency and the summation of stimulus energy within critical bands. This allowed the spectral data to be displayed in comparison to a subject's sensitivity thresholds. For normal-hearing subjects, an orderly function relating the percentage of audible stimulus to recognition performance was found, with perfect discrimination performance occurring when the bulk of the stimulus spectrum was presented at suprathreshold levels. For the hearing-impaired subjects, however, it was found in many instances that suprathreshold presentation of stop-consonant spectral cues did not yield recognition equivalent to that found for the normal-hearing subjects. These results demonstrate that while the audibility of individual stop consonants is an important factor influencing recognition performance in hearing-impaired subjects, it is not always sufficient to explain the effects of sensorineural hearing loss.
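The analysis above ultimately compares the burst's per-band levels against the listener's thresholds to get a "percentage of audible stimulus." A toy sketch of that final comparison, with invented band levels and thresholds (the study's FFT and sliding-filter model are more elaborate):

```python
def fraction_audible(band_levels_db, thresholds_db):
    """Fraction of critical-band levels (dB SPL) that exceed the
    listener's sensitivity thresholds in the same bands."""
    audible = sum(1 for lvl, thr in zip(band_levels_db, thresholds_db)
                  if lvl > thr)
    return audible / len(band_levels_db)

# Invented 40-ms burst levels per critical band, and two listeners' thresholds.
burst_levels = [55.0, 60.0, 48.0, 40.0, 35.0]
normal_thr = [20.0, 20.0, 25.0, 30.0, 35.0]
impaired_thr = [40.0, 50.0, 55.0, 60.0, 65.0]
```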

14.
Speaking rate of adventitiously deaf male cochlear implant candidates
No objective group data on speaking rate or speaking duration have been reported on the speech of adventitiously profoundly hearing-impaired adults. Results of the present study showed that speaking rate, i.e., number of syllables per second, was significantly slower and speaking duration was significantly longer for 25 adventitiously profoundly hearing-impaired adult male cochlear implant candidates than for 10 normal-hearing control subjects. The factors of length of time since onset of profound hearing loss and hearing aid use did not significantly affect speaking rate. Based on these objective data, a rationale and method are presented for aural rehabilitation of the profoundly hearing-impaired who exhibit speaking rate abnormalities.

15.
Frequency resolution was evaluated for two normal-hearing and seven hearing-impaired subjects with moderate, flat sensorineural hearing loss by measuring percent correct detection of a 2000-Hz tone as the width of a notch in band-reject noise increased. The level of the tone was fixed for each subject at a criterion performance level in broadband noise. Discrimination of synthetic speech syllables that differed in spectral content in the 2000-Hz region was evaluated as a function of the notch width in the same band-reject noise. Recognition of natural speech consonant/vowel syllables in quiet was also tested; results were analyzed for percent correct performance and relative information transmitted for voicing and place features. In the hearing-impaired subjects, frequency resolution at 2000 Hz was significantly correlated with the discrimination of synthetic speech information in the 2000-Hz region and was not related to the recognition of natural speech nonsense syllables unless (a) the speech stimuli contained the vowel /i/ rather than /a/, and (b) the score reflected information transmitted for place of articulation rather than percent correct.

16.
The purpose of this study was to quantify the effect of timing errors on the intelligibility of deaf children's speech. Deviant timing patterns were corrected in the recorded speech samples of six deaf children using digital speech processing techniques. The speech waveform was modified to correct timing errors only, leaving all other aspects of the speech unchanged. The following six-stage approximation procedure was used to correct the deviant timing patterns: (1) original, unaltered utterances, (2) correction of pauses only, (3) correction of relative timing, (4) correction of absolute syllable duration, (5) correction of relative timing and pauses, and (6) correction of absolute syllable duration and pauses. Measures of speech intelligibility were obtained for the original and the computer-modified utterances. On the average, the highest intelligibility score was obtained when relative timing errors only were corrected. The correction of this type of error improved the intelligibility of both stressed and unstressed words within a phrase. Improvements in word intelligibility, which occurred when relative timing was corrected, appeared to be closely related to the number of phonemic errors present within a word. The second highest intelligibility score was obtained for the original, unaltered sentences. On the average, the intelligibility scores obtained for the other four forms of timing modification were poorer than those obtained for the original sentences. Thus, the data show that intelligibility improved, on the average, when only one type of error, relative timing, was corrected.

17.
The purpose of this experiment was to determine the applicability of the Articulation Index (AI) model for characterizing the speech recognition performance of listeners with mild-to-moderate hearing loss. Performance-intensity functions were obtained from five normal-hearing listeners and 11 hearing-impaired listeners using a closed-set nonsense syllable test for two frequency responses (uniform and high-frequency emphasis). For each listener, the fitting constant Q of the nonlinear transfer function relating AI and speech recognition was estimated. Results indicated that the function mapping AI onto performance was approximately the same for normal and hearing-impaired listeners with mild-to-moderate hearing loss and high speech recognition scores. For a hearing-impaired listener with poor speech recognition ability, the AI procedure was a poor predictor of performance. The AI procedure as presently used is inadequate for predicting performance of individuals with reduced speech recognition ability and should be used conservatively in applications predicting optimal or acceptable frequency response characteristics for hearing-aid amplification systems.

18.
To examine the association between frequency resolution and speech recognition, auditory filter parameters and stop-consonant recognition were determined for 9 normal-hearing and 24 hearing-impaired subjects. In an earlier investigation, the relationship between stop-consonant recognition and the articulation index (AI) had been established on normal-hearing listeners. Based on AI predictions, speech-presentation levels for each subject in this experiment were selected to obtain a wide range of recognition scores. This strategy provides a method of interpreting speech-recognition performance among listeners who vary in magnitude and configuration of hearing loss by assuming that conditions which yield equal audible spectra will result in equivalent performance. It was reasoned that an association between frequency resolution and consonant recognition may be more appropriately estimated if hearing-impaired listeners' performance was measured under conditions that assured equivalent audibility of the speech stimuli. Derived auditory filter parameters indicated that filter widths and dynamic ranges were strongly associated with threshold. Stop-consonant recognition scores for most hearing-impaired listeners were not significantly poorer than predicted by the AI model. Furthermore, differences between observed recognition scores and those predicted by the AI were not associated with auditory filter characteristics, suggesting that frequency resolution and speech recognition may appear to be associated primarily because both are degraded by threshold elevation.

19.
Speech-in-noise measurements are important in clinical practice and have been the subject of research for a long time. The results of these measurements are often described in terms of the speech reception threshold (SRT) and SNR loss. Using the basic concepts that underlie several models of speech recognition in steady-state noise, the present study shows that these measures are ill-defined, most importantly because the slope of the speech recognition functions for hearing-impaired listeners always decreases with hearing loss. This slope can be determined from the slope of the normal-hearing speech recognition function when the SRT for the hearing-impaired listener is known. The SII function (i.e., the speech intelligibility index (SII) against SNR) is important and provides insights into many potential pitfalls when interpreting SRT data. Standardized SNR loss, sSNR loss, is introduced as a universal measure of hearing loss for speech in steady-state noise. Experimental data demonstrate that, unlike the SRT or SNR loss, sSNR loss is invariant to the target point chosen, the scoring method, or the type of speech material.
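The argument above turns on the shape of the speech recognition function (proportion correct versus SNR), whose slope flattens with hearing loss. A generic logistic psychometric function is a common textbook sketch of such a function, with 50% correct pinned at the SRT; this is an illustrative form, not the specific model of the paper, and the SRT and slope values are invented:

```python
import math

def recognition_probability(snr_db, srt_db, slope_at_srt):
    """Logistic psychometric function: 50% correct at the SRT, with the
    given slope (proportion correct per dB) at that midpoint.
    The factor 4 makes the derivative at the SRT equal slope_at_srt."""
    return 1.0 / (1.0 + math.exp(-4.0 * slope_at_srt * (snr_db - srt_db)))

# Hypothetical normal-hearing listener: SRT of -5 dB SNR, slope 0.15/dB.
p_at_srt = recognition_probability(-5.0, -5.0, 0.15)  # 0.5 by construction
p_above = recognition_probability(0.0, -5.0, 0.15)
```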

20.
Reports using a variety of psychophysical tasks indicate that pitch perception by hearing-impaired listeners may be abnormal, contributing to difficulties in understanding speech and enjoying music. Pitches of complex sounds may be weaker and more indistinct in the presence of cochlear damage, especially when frequency regions are affected that form the strongest basis for pitch perception in normal-hearing listeners. In this study, the strength of the complex pitch generated by iterated rippled noise was assessed in normal-hearing and hearing-impaired listeners. Pitch strength was measured for broadband noises with spectral ripples generated by iteratively delaying a copy of a given noise and adding it back into the original. Octave-band-pass versions of these noises also were evaluated to assess frequency dominance regions for rippled-noise pitch. Hearing-impaired listeners demonstrated consistently weaker pitches in response to the rippled noises relative to pitch strength in normal-hearing listeners. However, in most cases, the frequency regions of pitch dominance, i.e., strongest pitch, were similar to those observed in normal-hearing listeners. Except where there exists a substantial sensitivity loss, contributions from normal pitch dominance regions associated with the strongest pitches may not be directly related to impaired spectral processing. It is suggested that the reduced strength of rippled-noise pitch in listeners with hearing loss results from impaired frequency resolution and possibly an associated deficit in temporal processing.
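Iterated rippled noise is generated exactly as the abstract describes: delay a copy of the noise and add it back into the original, repeatedly. A minimal sketch of that delay-and-add network (the sampling rate, delay, gain, and iteration count are arbitrary example values; a 100-sample delay at 16 kHz yields a pitch near 160 Hz):

```python
import random

def iterated_rippled_noise(noise, delay_samples, gain, iterations):
    """Delay-and-add network: each pass adds a delayed, scaled copy of the
    signal back onto itself, producing spectral ripples spaced 1/delay
    apart and a pitch corresponding to the delay."""
    out = list(noise)
    for _ in range(iterations):
        delayed = [0.0] * delay_samples + out[:-delay_samples]
        out = [s + gain * d for s, d in zip(out, delayed)]
    return out

random.seed(0)
fs = 16000
noise = [random.gauss(0.0, 1.0) for _ in range(fs // 4)]
irn = iterated_rippled_noise(noise, delay_samples=100, gain=1.0, iterations=4)
```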


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号