Similar Articles
20 similar articles found (search time: 15 ms)
1.
Consonant recognition in quiet and in noise was investigated as a function of age for essentially normal hearing listeners 21-68 years old, using the nonsense syllable test (NST) [Resnick et al., J. Acoust. Soc. Am. Suppl. 1 58, S114 (1975)]. The subjects audited the materials in quiet and at S/N ratios of +10 and +5 dB at their most comfortable listening levels (MCLs). The MCLs approximated conversational speech levels and were not significantly different between the age groups. The effects of age group, S/N condition (quiet, S/N +10, S/N +5) and NST subsets, and the S/N condition X subset interaction were all significant. Interactions involving the age factor were nonsignificant. Confusion matrices were similar across age groups, including the directions of errors between the most frequently confused phonemes. Also, the older subjects experienced performance decrements on the same features that were least accurately recognized by the younger subjects. The findings suggest that essentially normal older persons listening in quiet and in noise experience decreased consonant recognition ability, but that the nature of their phoneme confusions is similar to that of younger individuals. Even though the older subjects met the same selection criteria as did younger ones, there was an expected shift upward in auditory thresholds with age within these limits. Sensitivity at 8000 Hz was correlated with NST scores in noise when controlling for age, but the correlation between performance in noise and age was nonsignificant when controlling for the 8000-Hz threshold. These associations seem to implicate the phenomena underlying the increased 8000-Hz thresholds in the speech recognition problems of the elderly, and appear to support the concept of peripheral auditory deterioration with aging even among those with essentially normal hearing.

2.
The purpose of this study was to explore the potential advantages, both theoretical and applied, of preserving low-frequency acoustic hearing in cochlear implant patients. Several hypotheses are presented that predict that residual low-frequency acoustic hearing along with electric stimulation for high frequencies will provide an advantage over traditional long-electrode cochlear implants for the recognition of speech in competing backgrounds. A simulation experiment in normal-hearing subjects demonstrated a clear advantage for preserving low-frequency residual acoustic hearing for speech recognition in a background of other talkers, but not in steady noise. Three subjects with an implanted "short-electrode" cochlear implant and preserved low-frequency acoustic hearing were also tested on speech recognition in the same competing backgrounds and compared to a larger group of traditional cochlear implant users. Each of the three short-electrode subjects performed better than any of the traditional long-electrode implant subjects for speech recognition in a background of other talkers, but not in steady noise, in general agreement with the simulation studies. When compared to a subgroup of traditional implant users matched according to speech recognition ability in quiet, the short-electrode patients showed a 9-dB advantage in the multitalker background. These experiments provide strong preliminary support for retaining residual low-frequency acoustic hearing in cochlear implant patients. The results are consistent with the idea that better perception of voice pitch, which can aid in separating voices in a background of other talkers, was responsible for this advantage.

3.
A loss of cochlear compression may underlie many of the difficulties experienced by hearing-impaired listeners. Two behavioral forward-masking paradigms that have been used to estimate the magnitude of cochlear compression are growth of masking (GOM) and temporal masking (TM). The aim of this study was to determine whether these two measures produce within-subjects results that are consistent across a range of signal frequencies and, if so, to compare them in terms of reliability or efficiency. GOM and TM functions were measured in a group of five normal-hearing and five hearing-impaired listeners at signal frequencies of 1000, 2000, and 4000 Hz. Compression values were derived from the masking data and confidence intervals were constructed around these estimates. Both measures produced comparable estimates of compression, but both measures have distinct advantages and disadvantages, so that the more appropriate measure depends on factors such as the frequency region of interest and the degree of hearing loss. Because of the long testing times needed, neither measure is suitable for clinical use in its current form.

4.
The present study examined the application of the articulation index (AI) as a predictor of the speech-recognition performance of normal and hearing-impaired listeners with and without hearing protection. The speech-recognition scores of 12 normal and 12 hearing-impaired subjects were measured for a wide range of conditions designed to be representative of those in the workplace. Conditions included testing in quiet, in two types of background noise (white versus speech spectrum), at three signal-to-noise ratios (+5, 0, -5 dB), and in three conditions of protection (unprotected, earplugs, earmuffs). The mean results for all 21 listening conditions and both groups of subjects were accurately described by the AI. Moreover, a single transfer-function relating performance to the AI could describe all the data from both groups.
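The AI computation referred to above can be sketched as an importance-weighted average of per-band audible SNRs. This is a minimal textbook-style illustration, not the study's exact formulation; the four-band layout and the band-importance weights below are hypothetical placeholders.

```python
import numpy as np

def articulation_index(band_snr_db, importance):
    """Importance-weighted average of per-band audible SNRs.

    Each band's SNR is clipped to a 0-30 dB audibility range and scaled
    so the result runs from 0 (no audible speech) to 1 (full audibility).
    """
    importance = np.asarray(importance, dtype=float)
    snr = np.clip(np.asarray(band_snr_db, dtype=float), 0.0, 30.0)
    return float(np.sum(importance * snr / 30.0) / np.sum(importance))

# Hypothetical four-band example: per-band SNRs (dB) and importance weights.
snr = [25.0, 10.0, -5.0, 15.0]
w = [0.2, 0.3, 0.3, 0.2]
print(articulation_index(snr, w))
```

In a real application the bands, their importance weights, and corrections for level and hearing loss follow a published standard rather than the simple clipping used here.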

5.
Spectral resolution has been reported to be closely related to vowel and consonant recognition in cochlear implant (CI) listeners. One measure of spectral resolution is spectral modulation threshold (SMT), which is defined as the smallest detectable spectral contrast in the spectral ripple stimulus. SMT may be determined by the activation pattern associated with electrical stimulation. In the present study, broad activation patterns were simulated using a multi-band vocoder to determine if similar impairments in speech understanding scores could be produced in normal-hearing listeners. Tokens were first decomposed into 15 logarithmically spaced bands and then re-synthesized by multiplying the envelope of each band by matched filtered noise. Various amounts of current spread were simulated by adjusting the drop-off of the noise spectrum away from the peak (40 to 5 dB/octave). The average SMT (at 0.25 and 0.5 cycles/octave) increased from 6.3 to 22.5 dB, while average vowel identification scores dropped from 86% to 19% and consonant identification scores dropped from 93% to 59%. In each condition, the impairments in speech understanding were generally similar to those found in CI listeners with similar SMTs, suggesting that variability in spread of neural activation largely accounts for the variability in speech perception of CI listeners.
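The band-envelope re-synthesis described above can be sketched as a simple FFT-based noise vocoder. This is a simplified version with assumed band edges and a moving-average envelope; the study's processing (matched filtered-noise carriers with an adjustable spectral drop-off to simulate current spread) differed in detail, and the drop-off manipulation itself is omitted here.

```python
import numpy as np

def noise_vocoder(x, fs, n_bands=15, f_lo=200.0, f_hi=7000.0, seed=0):
    """Re-synthesize x as envelope-modulated, band-limited noise."""
    rng = np.random.default_rng(seed)
    n = len(x)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)   # log-spaced band edges
    X = np.fft.rfft(x)
    out = np.zeros(n)
    win = max(1, int(0.01 * fs))                    # ~10-ms envelope smoother
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(X * mask, n)            # analysis band
        env = np.convolve(np.abs(band), np.ones(win) / win, mode="same")
        noise = np.fft.rfft(rng.standard_normal(n))
        carrier = np.fft.irfft(noise * mask, n)     # noise matched to the band
        out += env * carrier
    return out

fs = 16000
t = np.arange(fs) / fs
# a 440-Hz tone with a slow amplitude modulation stands in for a speech token
x = np.sin(2 * np.pi * 440 * t) * (1 + 0.5 * np.sin(2 * np.pi * 4 * t))
y = noise_vocoder(x, fs)
```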

6.
Many competing noises in real environments are modulated or fluctuating in level. Listeners with normal hearing are able to take advantage of temporal gaps in fluctuating maskers. Listeners with sensorineural hearing loss show less benefit from modulated maskers. Cochlear implant users may be more adversely affected by modulated maskers because of their limited spectral resolution and by their reliance on envelope-based signal-processing strategies of implant processors. The current study evaluated cochlear implant users' ability to understand sentences in the presence of modulated speech-shaped noise. Normal-hearing listeners served as a comparison group. Listeners repeated IEEE sentences in quiet, steady noise, and modulated noise maskers. Maskers were presented at varying signal-to-noise ratios (SNRs) at six modulation rates varying from 1 to 32 Hz. Results suggested that normal-hearing listeners obtain significant release from masking from modulated maskers, especially at 8-Hz masker modulation frequency. In contrast, cochlear implant users experience very little release from masking from modulated maskers. The data suggest, in fact, that they may show negative effects of modulated maskers at syllabic modulation rates (2-4 Hz). Similar patterns of results were obtained from implant listeners using three different devices with different speech-processor strategies. The lack of release from masking occurs in implant listeners independent of their device characteristics, and may be attributable to the nature of implant processing strategies and/or the lack of spectral detail in processed stimuli.
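A modulated masker of the kind described above can be sketched as noise whose level is sinusoidally modulated at the chosen rate (1 to 32 Hz in the study). The white-noise stand-in and 100% modulation depth are assumptions; the study's masker was speech-shaped.

```python
import numpy as np

def modulated_noise(fs, dur_s, mod_rate_hz, depth=1.0, seed=0):
    """Noise whose amplitude is sinusoidally modulated at mod_rate_hz."""
    rng = np.random.default_rng(seed)
    n = int(fs * dur_s)
    t = np.arange(n) / fs
    modulator = 1.0 + depth * np.sin(2 * np.pi * mod_rate_hz * t)
    return rng.standard_normal(n) * modulator

fs = 16000
# 8 Hz: the rate at which normal-hearing listeners showed the most release
masker = modulated_noise(fs, 2.0, mod_rate_hz=8.0)
```

With full modulation depth the masker passes through near-silent troughs every modulation cycle; those troughs are the "temporal gaps" that normal-hearing listeners exploit.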

7.
The goals of the present study were to measure acoustic temporal modulation transfer functions (TMTFs) in cochlear implant listeners and examine the relationship between modulation detection and speech recognition abilities. The effects of automatic gain control, presentation level and number of channels on modulation detection thresholds (MDTs) were examined using the listeners' clinical sound processor. The general form of the TMTF was low-pass, consistent with previous studies. The operation of automatic gain control had no effect on MDTs when the stimuli were presented at 65 dBA. MDTs were not dependent on the presentation levels (ranging from 50 to 75 dBA) nor on the number of channels. Significant correlations were found between MDTs and speech recognition scores. The rates of decay of the TMTFs were predictive of speech recognition abilities. Spectral-ripple discrimination was evaluated to examine the relationship between temporal and spectral envelope sensitivities. No correlations were found between the two measures, and 56% of the variance in speech recognition was predicted jointly by the two tasks. The present study suggests that temporal modulation detection measured with the sound processor can serve as a useful measure of the ability of clinical sound processing strategies to deliver clinically pertinent temporal information.

8.
9.
The differences in spectral shape resolution abilities among cochlear implant (CI) listeners, and between CI and normal-hearing (NH) listeners, when listening with the same number of channels (12), were investigated. In addition, the effect of the number of channels on spectral shape resolution was examined. The stimuli were rippled noise signals with various ripple frequency-spacings. An adaptive 4IFC (four-interval forced-choice) procedure was used to determine the threshold for resolvable ripple spacing, which was the spacing at which an interchange in peak and valley positions could be discriminated. The results showed poorer spectral shape resolution in CI compared to NH listeners (average thresholds of approximately 3000 and 400 Hz, respectively), and wide variability among CI listeners (range of approximately 800 to 8000 Hz). There was a significant relationship between spectral shape resolution and vowel recognition. The spectral shape resolution thresholds of NH listeners increased as the number of channels increased from 1 to 16, while the CI listeners showed a performance plateau at 4-6 channels, which is consistent with previous results using speech recognition measures. These results indicate that this test may provide a measure of CI performance which is time efficient and non-linguistic, and therefore, if verified, may provide a useful contribution to the prediction of speech perception in adults and children who use CIs.

10.
The purpose of this study was to develop and validate a method of estimating the relative "weight" that a multichannel cochlear implant user places on individual channels, indicating its contribution to overall speech recognition. The correlational method as applied to speech recognition was used both with normal-hearing listeners and with cochlear implant users fitted with six-channel speech processors. Speech was divided into frequency bands corresponding to the bands of the processor and a randomly chosen level of corresponding filtered noise was added to each channel on each trial. Channels in which the signal-to-noise ratio was more highly correlated with performance had higher weights, and conversely, channels in which the correlations were smaller had lower weights. Normal-hearing listeners showed approximately equal weights across frequency bands. In contrast, cochlear implant users showed unequal weighting across bands, which varied from individual to individual, with some channels apparently not contributing significantly to speech recognition. To validate these channel weights, individual channels were removed and speech recognition in quiet was tested. A strong correlation was found between the relative weight of the channel removed and the decrease in speech recognition, thus providing support for use of the correlational method for cochlear implant users.

11.
Upward spreading of masking, measured in terms of absolute masked threshold, is greater in hearing-impaired listeners than in listeners with normal hearing. The purpose of this study was to make further observations on upward-masked thresholds and speech recognition in noise in elderly listeners. Two age groups were used: One group consisted of listeners who were more than 60 years old, and the second group consisted of listeners who were less than 36 years old. Both groups had listeners with normal hearing as well as listeners with mild to moderate sensorineural loss. The masking paradigm consisted of a continuous low-pass-filtered (1000-Hz) noise, which was mixed with the output of a self-tracking, sweep-frequency Bekesy audiometer. Thresholds were measured in quiet and with maskers at 70 and 90 dB SPL. The upward-masked thresholds were similar for young and elderly hearing-impaired listeners. A few elderly listeners had lower upward-masked thresholds compared with the young control group; however, their on-frequency masked thresholds were nearly identical to the control group. A significant correlation was found between upward-masked thresholds and the Speech Perception in Noise (SPIN) test in elderly listeners.

12.
Speech-reception thresholds (SRT) were measured for 17 normal-hearing and 17 hearing-impaired listeners in conditions simulating free-field situations with between one and six interfering talkers. The stimuli, speech and noise with identical long-term average spectra, were recorded with a KEMAR manikin in an anechoic room and presented to the subjects through headphones. The noise was modulated using the envelope fluctuations of the speech. Several conditions were simulated with the speaker always in front of the listener and the maskers either also in front, or positioned in a symmetrical or asymmetrical configuration around the listener. Results show that the hearing impaired have significantly poorer performance than the normal hearing in all conditions. The mean SRT differences between the groups range from 4.2-10 dB. It appears that the modulations in the masker act as an important cue for the normal-hearing listeners, who experience up to 5-dB release from masking, while being hardly beneficial for the hearing-impaired listeners. The gain occurring when maskers are moved from the frontal position to positions around the listener varies from 1.5 to 8 dB for the normal hearing, and from 1 to 6.5 dB for the hearing impaired. It depends strongly on the number of maskers and their positions, but less on hearing impairment. The difference between the SRTs for binaural and best-ear listening (the "cocktail party effect") is approximately 3 dB in all conditions for both the normal-hearing and the hearing-impaired listeners.

13.
Perceptual coherence, the process by which the individual elements of complex sounds are bound together, was examined in adult listeners with longstanding childhood hearing losses, listeners with adult-onset hearing losses, and listeners with normal hearing. It was hypothesized that perceptual coherence would vary in strength between the groups due to their substantial differences in hearing history. Bisyllabic words produced by three talkers as well as comodulated three-tone complexes served as stimuli. In the first task, the second formant of each word was isolated and presented for recognition. In the second task, an isolated formant was paired with an intact word and listeners indicated whether or not the isolated second formant was a component of the intact word. In the third task, the middle component of the three-tone complex was presented in the same manner. For the speech stimuli, results indicate normal perceptual coherence in the listeners with adult-onset hearing loss but significantly weaker coherence in the listeners with childhood hearing losses. No differences were observed across groups for the nonspeech stimuli. These results suggest that perceptual coherence is relatively unaffected by hearing loss acquired during adulthood but appears to be impaired when hearing loss is present in early childhood.

14.
Two related studies investigated the relationship between place-pitch sensitivity and consonant recognition in cochlear implant listeners using the Nucleus MPEAK and SPEAK speech processing strategies. Average place-pitch sensitivity across the electrode array was evaluated as a function of electrode separation, using a psychophysical electrode pitch-ranking task. Consonant recognition was assessed by analyzing error matrices obtained with a standard consonant confusion procedure to obtain relative transmitted information (RTI) measures for three features: stimulus (RTI stim), envelope (RTI env[plc]), and place-of-articulation (RTI plc[env]). The first experiment evaluated consonant recognition performance with MPEAK and SPEAK in the same subjects. Subjects were experienced users of the MPEAK strategy who used the SPEAK strategy on a daily basis for one month and were tested with both processors. It was hypothesized that subjects with good place-pitch sensitivity would demonstrate better consonant place-cue perception with SPEAK than with MPEAK, by virtue of their ability to make use of SPEAK's enhanced representation of spectral speech cues. Surprisingly, all but one subject demonstrated poor consonant place-cue performance with both MPEAK and SPEAK even though most subjects demonstrated good or excellent place-pitch sensitivity. Consistent with this, no systematic relationship between place-pitch sensitivity and consonant place-cue performance was observed. Subjects' poor place-cue perception with SPEAK was subsequently attributed to the relatively short period of experience that they were given with the SPEAK strategy. The second study reexamined the relationship between place-pitch sensitivity and consonant recognition in a group of experienced SPEAK users. 
For these subjects, a positive relationship was observed between place-pitch sensitivity and consonant place-cue performance, supporting the hypothesis that good place-pitch sensitivity facilitates subjects' use of spectral cues to consonant identity. A strong, linear relationship was also observed between measures of envelope- and place-cue extraction, with place-cue performance increasing as a constant proportion (approximately 0.8) of envelope-cue performance. To the extent that the envelope-cue measure reflects subjects' abilities to resolve amplitude fluctuations in the speech envelope, this finding suggests that both envelope- and place-cue perception depend strongly on subjects' envelope-processing abilities. Related to this, the data suggest that good place-cue perception depends both on envelope-processing abilities and place-pitch sensitivity, and that either factor may limit place-cue perception in a given cochlear implant listener. Data from both experiments indicate that subjects with small electric dynamic ranges (< 8 dB for 125-Hz, 205-microsecond/phase pulse trains) are more likely to demonstrate poor electrode pitch-ranking skills and poor consonant recognition performance than subjects with larger electric dynamic ranges.

15.
People vary in the intelligibility of their speech. This study investigated whether across-talker intelligibility differences observed in normal-hearing listeners are also found in cochlear implant (CI) users. Speech perception for male, female, and child pairs of talkers differing in intelligibility was assessed with actual and simulated CI processing and in normal hearing. While overall speech recognition was, as expected, poorer for CI users, differences in intelligibility across talkers were consistent across all listener groups. This suggests that the primary determinants of intelligibility differences are preserved in the CI-processed signal, though no single critical acoustic property could be identified.

16.
This experiment examined the effects of spectral resolution and fine spectral structure on recognition of spectrally asynchronous sentences by normal-hearing and cochlear implant listeners. Sentence recognition was measured in six normal-hearing subjects listening to either full-spectrum or noise-band processors and five Nucleus-22 cochlear implant listeners fitted with 4-channel continuous interleaved sampling (CIS) processors. For the full-spectrum processor, the speech signals were divided into either 4 or 16 channels. For the noise-band processor, after band-pass filtering into 4 or 16 channels, the envelope of each channel was extracted and used to modulate noise of the same bandwidth as the analysis band, thus eliminating the fine spectral structure available in the full-spectrum processor. For the 4-channel CIS processor, the amplitude envelopes extracted from four bands were transformed to electric currents by a power function and the resulting electric currents were used to modulate pulse trains delivered to four electrode pairs. For all processors, the output of each channel was time-shifted relative to other channels, varying the channel delay across channels from 0 to 240 ms (in 40-ms steps). Within each delay condition, all channels were desynchronized such that the cross-channel delays between adjacent channels were maximized, thereby avoiding local pockets of channel synchrony. Results show no significant difference between the 4- and 16-channel full-spectrum speech processor for normal-hearing listeners. Recognition scores dropped significantly only when the maximum delay reached 200 ms for the 4-channel processor and 240 ms for the 16-channel processor. When fine spectral structures were removed in the noise-band processor, sentence recognition dropped significantly when the maximum delay was 160 ms for the 16-channel noise-band processor and 40 ms for the 4-channel noise-band processor. 
There was no significant difference between implant listeners using the 4-channel CIS processor and normal-hearing listeners using the 4-channel noise-band processor. The results imply that when fine spectral structures are not available, as in the implant listener's case, increased spectral resolution is important for overcoming cross-channel asynchrony in speech signals.

17.
In multichannel cochlear implants, low frequency information is delivered to apical cochlear locations while high frequency information is delivered to more basal locations, mimicking the normal acoustic tonotopic organization of the auditory nerves. In clinical practice, little attention has been paid to the distribution of acoustic input across the electrodes of an individual patient that might vary in terms of spacing and absolute tonotopic location. In normal-hearing listeners, Başkent and Shannon (J. Acoust. Soc. Am. 113, 2003) simulated implant signal processing conditions in which the frequency range assigned to the array was systematically made wider or narrower than the simulated stimulation range in the cochlea, resulting in frequency-place compression or expansion, respectively. In general, the best speech recognition was obtained when the input acoustic information was delivered to the matching tonotopic place in the cochlea with least frequency-place distortion. The present study measured phoneme and sentence recognition scores with similar frequency-place manipulations in six Med-El Combi 40+ implant subjects. Stimulation locations were estimated using the Greenwood mapping function based on the estimated electrode insertion depth. Results from frequency-place compression and expansion with implants were similar to simulation results, especially for postlingually deafened subjects, despite the uncertainty in the actual stimulation sites of the auditory nerves. The present study shows that frequency-place mapping is an important factor in implant performance and an individual implant patient's map could be optimized with functional tests using frequency-place manipulations.
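The Greenwood mapping function mentioned above relates position along the cochlea to characteristic frequency. A minimal sketch using Greenwood's published constants for the human cochlea (the study's exact parameter values are not reproduced here):

```python
def greenwood_frequency(x, A=165.4, a=2.1, k=0.88):
    """Characteristic frequency (Hz) at relative cochlear position x.

    x runs from 0.0 at the apex to 1.0 at the base; the constants are
    Greenwood's fit for the human cochlea.
    """
    return A * (10.0 ** (a * x) - k)

# apex maps to low frequencies, base to high frequencies
for x in (0.0, 0.5, 1.0):
    print(f"x = {x:.1f} -> {greenwood_frequency(x):8.1f} Hz")
```

In the study's context, an estimated electrode insertion depth gives x, and the function returns the tonotopic frequency presumed to be stimulated at that electrode.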

18.
Speech intelligibility (PB words) in traffic-like noise was investigated in a laboratory situation simulating three common listening situations, indoors at 1 and 4 m and outdoors at 1 m. The maximum noise levels still permitting 75% intelligibility of PB words in these three listening situations were also defined. A total of 269 persons were examined. Forty-six had normal hearing, 90 a presbycusis-type hearing loss, 95 a noise-induced hearing loss and 38 a conductive hearing loss. In the indoor situation the majority of the groups with impaired hearing retained good speech intelligibility in 40 dB(A) masking noise. Lowering the noise level to less than 40 dB(A) resulted in a minor, usually insignificant, improvement in speech intelligibility. Listeners with normal hearing maintained good speech intelligibility in the outdoor listening situation at noise levels up to 60 dB(A), without lip-reading (i.e., using non-auditory information). For groups with impaired hearing due to age and/or noise, representing 8% of the population in Sweden, the noise level outdoors had to be lowered to less than 50 dB(A), in order to achieve good speech intelligibility at 1 m without lip-reading.

19.
Abnormalities in cochlear function usually cause broadening of the auditory filters, which reduces speech intelligibility. An attempt was made to apply a spectral enhancement algorithm to improve the identification of Polish vowels by subjects with cochlear-based hearing impairment. The identification scores for natural (unprocessed) vowels and spectrally enhanced (processed) vowels were measured for hearing-impaired subjects. It was found that spectral enhancement improves vowel scores by about 10% for these subjects; however, wide variation in individual performance among subjects was observed. The overall vowel identification scores obtained were 85% for natural vowels and 96% for spectrally enhanced vowels.

20.
This study investigated the effect of pulsatile stimulation rate on medial vowel and consonant recognition in cochlear implant listeners. Experiment 1 measured phoneme recognition as a function of stimulation rate in six Nucleus-22 cochlear implant listeners using an experimental four-channel continuous interleaved sampler (CIS) speech processing strategy. Results showed that all stimulation rates from 150 to 500 pulses/s/electrode produced equally good performance, while stimulation rates lower than 150 pulses/s/electrode produced significantly poorer performance. Experiment 2 measured phoneme recognition by implant listeners and normal-hearing listeners as a function of the low-pass cutoff frequency for envelope information. Results from both acoustic and electric hearing showed no significant difference in performance for all cutoff frequencies higher than 20 Hz. Both vowel and consonant scores dropped significantly when the cutoff frequency was reduced from 20 Hz to 2 Hz. The results of these two experiments suggest that temporal envelope information can be conveyed by relatively low stimulation rates. The pattern of results for both electrical and acoustic hearing is consistent with a simple model of temporal integration with an equivalent rectangular duration (ERD) of the temporal integrator of about 7 ms.
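The low-pass envelope manipulation in experiment 2 can be sketched as follows: extract an amplitude envelope and remove its modulation components above a chosen cutoff. The rectified envelope and brick-wall FFT filter are simplifications of whatever analysis the study actually used; the 10-Hz test modulation is illustrative.

```python
import numpy as np

def lowpass_envelope(x, fs, cutoff_hz):
    """Rectify x and remove envelope components above cutoff_hz."""
    env = np.abs(x)                              # rectified envelope
    E = np.fft.rfft(env)
    freqs = np.fft.rfftfreq(len(env), 1.0 / fs)
    E[freqs > cutoff_hz] = 0.0                   # brick-wall low-pass
    return np.fft.irfft(E, len(env))

fs = 8000
t = np.arange(fs) / fs
# a 1-kHz carrier amplitude-modulated at 10 Hz
x = np.sin(2 * np.pi * 1000 * t) * (1 + 0.8 * np.sin(2 * np.pi * 10 * t))
env_20 = lowpass_envelope(x, fs, 20.0)  # keeps the 10-Hz modulation
env_2 = lowpass_envelope(x, fs, 2.0)    # removes it, leaving a near-constant
```

This mirrors the study's finding in miniature: a 20-Hz cutoff preserves syllable-rate envelope fluctuations, while a 2-Hz cutoff flattens them away.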


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号