Similar articles
20 similar articles found; search took 0 ms.
1.
People vary in the intelligibility of their speech. This study investigated whether the across-talker intelligibility differences observed in normal-hearing listeners are also found in cochlear implant (CI) users. Speech perception for male, female, and child pairs of talkers differing in intelligibility was assessed with actual and simulated CI processing and in normal hearing. While overall speech recognition was, as expected, poorer for CI users, differences in intelligibility across talkers were consistent across all listener groups. This suggests that the primary determinants of intelligibility differences are preserved in the CI-processed signal, though no single critical acoustic property could be identified.

2.
In multichannel cochlear implants, low frequency information is delivered to apical cochlear locations while high frequency information is delivered to more basal locations, mimicking the normal acoustic tonotopic organization of the auditory nerves. In clinical practice, little attention has been paid to the distribution of acoustic input across the electrodes of an individual patient that might vary in terms of spacing and absolute tonotopic location. In normal-hearing listeners, Başkent and Shannon (J. Acoust. Soc. Am. 113, 2003) simulated implant signal processing conditions in which the frequency range assigned to the array was systematically made wider or narrower than the simulated stimulation range in the cochlea, resulting in frequency-place compression or expansion, respectively. In general, the best speech recognition was obtained when the input acoustic information was delivered to the matching tonotopic place in the cochlea with least frequency-place distortion. The present study measured phoneme and sentence recognition scores with similar frequency-place manipulations in six Med-El Combi 40+ implant subjects. Stimulation locations were estimated using the Greenwood mapping function based on the estimated electrode insertion depth. Results from frequency-place compression and expansion with implants were similar to simulation results, especially for postlingually deafened subjects, despite the uncertainty in the actual stimulation sites of the auditory nerves. The present study shows that frequency-place mapping is an important factor in implant performance and an individual implant patient's map could be optimized with functional tests using frequency-place manipulations.
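The Greenwood mapping function referred to above can be sketched as follows, using the commonly cited human constants (A = 165.4, a = 2.1, k = 0.88). The function names and the example frequency are illustrative and are not taken from the study:

```python
import numpy as np

# Greenwood (1990) frequency-place function for the human cochlea:
#   F = A * (10**(a*x) - k)
# with x the relative distance from the apex (0 = apex, 1 = base).
# The constants below are the standard human values; treat them as
# illustrative rather than the exact fit used in the study.
A, a, k = 165.4, 2.1, 0.88

def greenwood_frequency(x):
    """Characteristic frequency (Hz) at relative place x in [0, 1]."""
    return A * (10.0 ** (a * np.asarray(x, dtype=float)) - k)

def greenwood_place(f):
    """Inverse map: relative place (0..1 from apex) for frequency f in Hz."""
    return np.log10(np.asarray(f, dtype=float) / A + k) / a

# Example: estimate the cochlear place of a 1-kHz component.
x_1k = greenwood_place(1000.0)
```

Inverting the map as above is how an estimated electrode insertion depth can be turned into an estimated stimulation frequency, and vice versa.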

3.
4.
Neural-population interactions resulting from excitation overlap in multi-channel cochlear implants (CI) may cause blurring of the "internal" auditory representation of complex sounds such as vowels. In experiment I, confusion matrices for eight German steady-state vowellike signals were obtained from seven CI listeners. Identification performance ranged between 42% and 74% correct. On the basis of an information transmission analysis across all vowels, pairs of most and least frequently confused vowels were selected for each subject. In experiment II, vowel masking patterns (VMPs) were obtained using the previously selected vowels as maskers. The VMPs were found to resemble the "electrical" vowel spectra to a large extent, indicating a relatively weak effect of neural-population interactions. Correlation between vowel identification data and VMP spectral similarity, measured by means of several spectral distance metrics, showed that the CI listeners identified the vowels based on differences in the between-peak spectral information as well as the location of spectral peaks. The effect of nonlinear amplitude mapping of acoustic into "electrical" vowels, as performed in the implant processors, was evaluated separately and compared to the effect of neural-population interactions. Amplitude mapping was found to cause more blurring than neural-population interactions. Subjects exhibiting strong blurring effects yielded lower overall vowel identification scores.

5.
A methodology for the estimation of individual loudness growth functions using tone-burst otoacoustic emissions (TBOAEs) and tone-burst auditory brainstem responses (TBABRs) was proposed by Silva and Epstein [J. Acoust. Soc. Am. 127, 3629-3642 (2010)]. This work investigated the application of this technique to the more challenging cases of hearing-impaired listeners. The specific aims of this study were to (1) verify the accuracy of this technique with eight hearing-impaired listeners for 1- and 4-kHz tone-burst stimuli, (2) investigate the effect of residual noise levels from the TBABRs on the quality of the loudness growth estimation, and (3) provide a public dataset of physiological and psychoacoustical responses to a wide range of stimulus intensities. The results show that some of the physiological loudness growth estimates were within the mean-square-error range for standard psychoacoustical procedures, with closer agreement at 1 kHz. The median residual noise in the TBABRs was found to be related to the performance of the estimation, with some listeners showing strong improvements in the estimated loudness growth function when controlling for noise levels. This suggests that future studies using evoked potentials to estimate loudness growth should control for the estimated averaged residual noise levels of the TBABRs.

6.
This study examined within- and across-electrode-channel processing of temporal gaps in successful users of MED-EL COMBI 40+ cochlear implants. The first experiment tested across-ear gap duration discrimination (GDD) in four listeners with bilateral implants. The results demonstrated that across-ear GDD thresholds are elevated relative to monaural, within-electrode-channel thresholds; the size of the threshold shift was approximately the same as for monaural, across-electrode-channel configurations. Experiment 1 also demonstrated a decline in GDD performance for channel-asymmetric markers. The second experiment tested the effect of envelope fluctuation on gap detection (GD) for monaural markers carried on a single electrode channel. Results from five cochlear implant listeners indicated that envelopes associated with 50-Hz wide bands of noise resulted in poorer GD thresholds than envelopes associated with 300-Hz wide bands of noise. In both cases GD thresholds improved when envelope fluctuations were compressed by an exponent of 0.2. The results of both experiments parallel those found for acoustic hearing, therefore suggesting that temporal processing of gaps is largely limited by factors central to the cochlea.

7.
Cochlear implant function, as assessed by psychophysical measures, varies from one stimulation site to another within a patient's cochlea. This suggests that patient performance might be improved by selection of the best-functioning sites for the processor map. In evaluating stimulation sites for such a strategy, electrode configuration is an important variable. Variation across stimulation sites in loudness-related measures (detection thresholds and maximum comfortable loudness levels) is much larger for stimulation with bipolar electrode configurations than with monopolar configurations. The current study found that, in contrast to the loudness-related measures, the across-site means and across-site variances of modulation detection thresholds were not dependent on electrode configuration, suggesting that the mechanisms underlying variation in these various psychophysical measures are not all the same. The data presented here suggest that bipolar and monopolar electrode configurations are equally effective in identifying good and poor stimulation sites for modulation detection but that the across-site patterns of modulation detection thresholds are not the same for the two configurations. Therefore, it is recommended to test all stimulation sites using the patient's clinically assigned electrode configuration when performing psychophysical evaluation of a patient's modulation detection acuity to select sites for the processor map.

8.
The goals of the present study were to measure acoustic temporal modulation transfer functions (TMTFs) in cochlear implant listeners and examine the relationship between modulation detection and speech recognition abilities. The effects of automatic gain control, presentation level, and number of channels on modulation detection thresholds (MDTs) were examined using the listeners' clinical sound processor. The general form of the TMTF was low-pass, consistent with previous studies. The operation of automatic gain control had no effect on MDTs when the stimuli were presented at 65 dBA. MDTs did not depend on presentation level (ranging from 50 to 75 dBA) or on the number of channels. Significant correlations were found between MDTs and speech recognition scores. The rates of decay of the TMTFs were predictive of speech recognition abilities. Spectral-ripple discrimination was evaluated to examine the relationship between temporal and spectral envelope sensitivities. No correlations were found between the two measures, and 56% of the variance in speech recognition was predicted jointly by the two tasks. The present study suggests that temporal modulation detection measured with the sound processor can serve as a useful measure of the ability of clinical sound processing strategies to deliver clinically pertinent temporal information.

9.
Most cochlear implant strategies utilize monopolar stimulation, likely inducing relatively broad activation of the auditory neurons. The spread of activity may be narrowed with a tripolar stimulation scheme, wherein compensating current of opposite polarity is simultaneously delivered to two adjacent electrodes. In this study, a model and cochlear implant subjects were used to examine loudness growth for varying amounts of tripolar compensation, parameterized by a coefficient sigma, ranging from 0 (monopolar) to 1 (full tripolar). In both the model and the subjects, current required for threshold activation could be approximated by I_thr(sigma) = I_thr(0)/(1 - sigma*K), with fitted constants I_thr(0) and K. Three of the subjects had a "positioner," intended to place their electrode arrays closer to their neural tissue. The values of K were smaller for the positioner users and for a "close" electrode-to-tissue distance in the model. Above threshold, equal-loudness contours for some subjects deviated significantly from a linear scale-up of the threshold approximations. The patterns of deviation were similar to those observed in the model for conditions in which most of the neurons near the center electrode were excited.
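The threshold approximation above can be sketched numerically. The values of I_thr(0) and K below are invented for illustration, with the smaller K standing in for the "close" electrode-to-tissue condition reported for positioner users:

```python
import numpy as np

# Threshold-current model from the abstract:
#   I_thr(sigma) = I_thr(0) / (1 - sigma * K)
# I_thr(0) is the monopolar threshold; K captures how quickly the
# threshold grows with tripolar compensation. All values are illustrative.
def tripolar_threshold(sigma, i_thr_0, K):
    sigma = np.asarray(sigma, dtype=float)
    return i_thr_0 / (1.0 - sigma * K)

sigma = np.linspace(0.0, 1.0, 5)               # 0 = monopolar ... 1 = full tripolar
far = tripolar_threshold(sigma, 100e-6, 0.9)   # larger K: electrode far from tissue
close = tripolar_threshold(sigma, 100e-6, 0.5) # smaller K, as for "positioner" users
```

The model captures the key qualitative finding: thresholds rise with increasing compensation, and rise more steeply when the electrode is farther from the neural tissue.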

10.
To investigate how hearing loss of primarily cochlear origin affects the loudness of brief tones, loudness matches between 5- and 200-ms tones were obtained as a function of level for 15 listeners with cochlear impairments and for seven age-matched controls. Three frequencies, usually 0.5, 1, and 4 kHz, were tested in each listener using a two-interval, two-alternative forced-choice (2I-2AFC) paradigm with a roving-level, up-down adaptive procedure. Results for the normal listeners generally were consistent with published data [e.g., Florentine et al., J. Acoust. Soc. Am. 99, 1633-1644 (1996)]. The amount of temporal integration, defined as the level difference between equally loud short and long tones, varied nonmonotonically with level and was largest at moderate levels. No consistent effect of frequency was apparent. The impaired listeners varied widely, but most showed a clear effect of level on the amount of temporal integration. Overall, their results appear consistent with expectations based on knowledge of the general properties of their loudness-growth functions and the equal-loudness-ratio hypothesis, which states that the loudness ratio between equal-SPL long and brief tones is the same at all SPLs. The impaired listeners' amounts of temporal integration at high SPLs often were larger than normal, although they were reduced near threshold. When evaluated at equal SLs, the amount of temporal integration well above threshold usually was in the low end of the normal range. Two listeners with abrupt high-frequency hearing losses (slopes > 50 dB/octave) showed larger-than-normal maximal amounts of temporal integration (40 to 50 dB). This finding is consistent with the shallow loudness functions predicted by our excitation-pattern model for impaired listeners [Florentine et al., in Modeling Sensorineural Hearing Loss, edited by W. Jesteadt (Erlbaum, Mahwah, NJ, 1997), pp. 187-198]. Loudness functions derived from impaired listeners' temporal-integration functions indicate that restoration of loudness in listeners with cochlear hearing loss usually will require the same gain whether the sound is short or long.

11.
Spectral-ripple discrimination has been used widely for psychoacoustical studies in normal-hearing, hearing-impaired, and cochlear implant listeners. The present study investigated the perceptual mechanism for spectral-ripple discrimination in cochlear implant listeners. The main goal of this study was to determine whether cochlear implant listeners use a local intensity cue or global spectral shape for spectral-ripple discrimination. The effect of electrode separation on spectral-ripple discrimination was also evaluated. Results showed that it is highly unlikely that cochlear implant listeners depend on a local intensity cue for spectral-ripple discrimination. A phenomenological model of spectral-ripple discrimination, as an "ideal observer," showed that a perceptual mechanism based on discrimination of a single intensity difference cannot account for performance of cochlear implant listeners. Spectral modulation depth and electrode separation were found to significantly affect spectral-ripple discrimination. The evidence supports the hypothesis that spectral-ripple discrimination involves integrating information from multiple channels.
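A rippled-noise stimulus of the kind used in such experiments can be sketched as a sum of log-spaced tones whose amplitudes follow a sinusoidal ripple on the log-frequency axis; inverting the ripple phase swaps peaks and valleys, which is what the listener must discriminate. All parameter values are illustrative, and the studies' exact stimulus construction may differ:

```python
import numpy as np

# Rippled-noise sketch: n_tones random-phase tones, log-spaced in frequency,
# with amplitudes (in dB) following a sinusoid in ripples-per-octave.
def rippled_noise(fs, dur_s, f_lo, f_hi, ripples_per_octave,
                  depth_db, phase, n_tones, rng):
    t = np.arange(int(fs * dur_s)) / fs
    freqs = np.geomspace(f_lo, f_hi, n_tones)
    octaves = np.log2(freqs / f_lo)
    amp_db = (depth_db / 2.0) * np.sin(
        2.0 * np.pi * ripples_per_octave * octaves + phase)
    amps = 10.0 ** (amp_db / 20.0)
    phases = rng.uniform(0, 2 * np.pi, n_tones)
    # Sum the tones (broadcast over time) into one waveform.
    return (amps[:, None] * np.sin(
        2 * np.pi * freqs[:, None] * t + phases[:, None])).sum(axis=0)

rng = np.random.default_rng(1)
standard = rippled_noise(16000, 0.5, 100, 5000, 1.0, 30.0, 0.0, 200, rng)
inverted = rippled_noise(16000, 0.5, 100, 5000, 1.0, 30.0, np.pi, 200, rng)
```

The ripple density (ripples per octave) and modulation depth (depth_db) are the two parameters these studies typically vary.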

12.
Many competing noises in real environments are modulated or fluctuating in level. Listeners with normal hearing are able to take advantage of temporal gaps in fluctuating maskers. Listeners with sensorineural hearing loss show less benefit from modulated maskers. Cochlear implant users may be more adversely affected by modulated maskers because of their limited spectral resolution and by their reliance on envelope-based signal-processing strategies of implant processors. The current study evaluated cochlear implant users' ability to understand sentences in the presence of modulated speech-shaped noise. Normal-hearing listeners served as a comparison group. Listeners repeated IEEE sentences in quiet, steady noise, and modulated noise maskers. Maskers were presented at varying signal-to-noise ratios (SNRs) at six modulation rates varying from 1 to 32 Hz. Results suggested that normal-hearing listeners obtain significant release from masking from modulated maskers, especially at 8-Hz masker modulation frequency. In contrast, cochlear implant users experience very little release from masking from modulated maskers. The data suggest, in fact, that they may show negative effects of modulated maskers at syllabic modulation rates (2-4 Hz). Similar patterns of results were obtained from implant listeners using three different devices with different speech-processor strategies. The lack of release from masking occurs in implant listeners independent of their device characteristics, and may be attributable to the nature of implant processing strategies and/or the lack of spectral detail in processed stimuli.
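The modulated-masker paradigm can be sketched as follows: amplitude-modulate a noise carrier at a chosen rate and scale it to a target SNR against the speech. The masker here is white rather than speech-shaped (shaping would need an extra filtering step), and all parameters are illustrative:

```python
import numpy as np

# Build a sinusoidally amplitude-modulated noise masker.
def modulated_masker(n_samples, fs, mod_rate_hz, rng):
    t = np.arange(n_samples) / fs
    carrier = rng.standard_normal(n_samples)
    envelope = 1.0 + np.sin(2.0 * np.pi * mod_rate_hz * t)  # 100% depth
    return carrier * envelope

def scale_to_snr(target, masker, snr_db):
    """Scale masker so that 10*log10(P_target / P_masker) == snr_db."""
    p_t = np.mean(target ** 2)
    p_m = np.mean(masker ** 2)
    gain = np.sqrt(p_t / (p_m * 10.0 ** (snr_db / 10.0)))
    return masker * gain

rng = np.random.default_rng(0)
fs = 16000
speech = rng.standard_normal(fs)             # stand-in for a speech token
masker = modulated_masker(fs, fs, 8.0, rng)  # 8 Hz: the rate giving most release
masker = scale_to_snr(speech, masker, snr_db=5.0)
mixture = speech + masker
```

Sweeping mod_rate_hz from 1 to 32 Hz and the SNR across conditions reproduces the structure of the experiment described above.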

13.
The differences in spectral shape resolution abilities among cochlear implant (CI) listeners, and between CI and normal-hearing (NH) listeners, when listening with the same number of channels (12), was investigated. In addition, the effect of the number of channels on spectral shape resolution was examined. The stimuli were rippled noise signals with various ripple frequency-spacings. An adaptive 41FC procedure was used to determine the threshold for resolvable ripple spacing, which was the spacing at which an interchange in peak and valley positions could be discriminated. The results showed poorer spectral shape resolution in CI compared to NH listeners (average thresholds of approximately 3000 and 400 Hz, respectively), and wide variability among CI listeners (range of approximately 800 to 8000 Hz). There was a significant relationship between spectral shape resolution and vowel recognition. The spectral shape resolution thresholds of NH listeners increased as the number of channels increased from 1 to 16, while the CI listeners showed a performance plateau at 4-6 channels, which is consistent with previous results using speech recognition measures. These results indicate that this test may provide a measure of CI performance which is time efficient and non-linguistic, and therefore, if verified, may provide a useful contribution to the prediction of speech perception in adults and children who use CIs.  相似文献   

14.
This study investigated the effect of pulsatile stimulation rate on medial vowel and consonant recognition in cochlear implant listeners. Experiment 1 measured phoneme recognition as a function of stimulation rate in six Nucleus-22 cochlear implant listeners using an experimental four-channel continuous interleaved sampler (CIS) speech processing strategy. Results showed that all stimulation rates from 150 to 500 pulses/s/electrode produced equally good performance, while stimulation rates lower than 150 pulses/s/electrode produced significantly poorer performance. Experiment 2 measured phoneme recognition by implant listeners and normal-hearing listeners as a function of the low-pass cutoff frequency for envelope information. Results from both acoustic and electric hearing showed no significant difference in performance for all cutoff frequencies higher than 20 Hz. Both vowel and consonant scores dropped significantly when the cutoff frequency was reduced from 20 Hz to 2 Hz. The results of these two experiments suggest that temporal envelope information can be conveyed by relatively low stimulation rates. The pattern of results for both electrical and acoustic hearing is consistent with a simple model of temporal integration with an equivalent rectangular duration (ERD) of the temporal integrator of about 7 ms.
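The envelope low-pass manipulation can be illustrated with a crude rectify-and-smooth extractor. A real CIS processor uses proper per-band low-pass filters; the moving-average window, signal parameters, and cutoffs below are purely illustrative:

```python
import numpy as np

# Crude envelope extractor: half-wave rectify, then smooth with a moving
# average whose length approximates the cutoff's time constant.
def envelope(x, fs, cutoff_hz):
    rectified = np.maximum(x, 0.0)
    win = max(1, int(fs / (2.0 * cutoff_hz)))
    kernel = np.ones(win) / win
    return np.convolve(rectified, kernel, mode="same")

fs = 8000
t = np.arange(fs) / fs
# 1-kHz carrier modulated at 10 Hz, i.e., a 10-Hz temporal envelope.
x = (1.0 + np.sin(2 * np.pi * 10 * t)) * np.sin(2 * np.pi * 1000 * t)
env_20 = envelope(x, fs, 20.0)  # 10-Hz modulation survives a 20-Hz cutoff
env_2 = envelope(x, fs, 2.0)    # a 2-Hz cutoff largely smooths it away
```

The difference between the two outputs mirrors the experimental finding: envelope cues important for phoneme recognition are lost when the cutoff drops from 20 Hz to 2 Hz.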

15.
16.
Spectral resolution has been reported to be closely related to vowel and consonant recognition in cochlear implant (CI) listeners. One measure of spectral resolution is spectral modulation threshold (SMT), which is defined as the smallest detectable spectral contrast in the spectral ripple stimulus. SMT may be determined by the activation pattern associated with electrical stimulation. In the present study, broad activation patterns were simulated using a multi-band vocoder to determine if similar impairments in speech understanding scores could be produced in normal-hearing listeners. Tokens were first decomposed into 15 logarithmically spaced bands and then re-synthesized by multiplying the envelope of each band by matched filtered noise. Various amounts of current spread were simulated by adjusting the drop-off of the noise spectrum away from the peak (40 to 5 dB/octave). The average SMT (0.25 and 0.5 cycles/octave) increased from 6.3 to 22.5 dB, while average vowel identification scores dropped from 86% to 19% and consonant identification scores dropped from 93% to 59%. In each condition, the impairments in speech understanding were generally similar to those found in CI listeners with similar SMTs, suggesting that variability in spread of neural activation largely accounts for the variability in speech perception of CI listeners.
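The current-spread idea can be sketched abstractly as mixing band envelopes across channels with an exponential drop-off in dB per octave away from the peak band. The study instead shaped actual noise carriers, so treat this band-level matrix purely as an illustration; the band layout and slope values are invented:

```python
import numpy as np

# Build a channel-mixing matrix whose off-diagonal weights fall off
# exponentially with distance in octaves, then row-normalize to keep
# overall level constant.
def spread_matrix(n_bands, bands_per_octave, slope_db_per_octave):
    idx = np.arange(n_bands)
    dist_octaves = np.abs(idx[:, None] - idx[None, :]) / bands_per_octave
    weights = 10.0 ** (-slope_db_per_octave * dist_octaves / 20.0)
    return weights / weights.sum(axis=1, keepdims=True)

n_bands = 15
narrow = spread_matrix(n_bands, 3.0, 40.0)  # steep slope: little smearing
broad = spread_matrix(n_bands, 3.0, 5.0)    # shallow slope: heavy smearing

envelopes = np.zeros(n_bands)
envelopes[7] = 1.0                          # single active channel
smeared_narrow = narrow @ envelopes
smeared_broad = broad @ envelopes
```

With the 5 dB/octave slope, energy intended for one channel leaks across most of the array, which is the mechanism proposed to raise SMTs and degrade vowel and consonant identification.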

17.
The purpose of this study was to explore the potential advantages, both theoretical and applied, of preserving low-frequency acoustic hearing in cochlear implant patients. Several hypotheses are presented that predict that residual low-frequency acoustic hearing along with electric stimulation for high frequencies will provide an advantage over traditional long-electrode cochlear implants for the recognition of speech in competing backgrounds. A simulation experiment in normal-hearing subjects demonstrated a clear advantage for preserving low-frequency residual acoustic hearing for speech recognition in a background of other talkers, but not in steady noise. Three subjects with an implanted "short-electrode" cochlear implant and preserved low-frequency acoustic hearing were also tested on speech recognition in the same competing backgrounds and compared to a larger group of traditional cochlear implant users. Each of the three short-electrode subjects performed better than any of the traditional long-electrode implant subjects for speech recognition in a background of other talkers, but not in steady noise, in general agreement with the simulation studies. When compared to a subgroup of traditional implant users matched according to speech recognition ability in quiet, the short-electrode patients showed a 9-dB advantage in the multitalker background. These experiments provide strong preliminary support for retaining residual low-frequency acoustic hearing in cochlear implant patients. The results are consistent with the idea that better perception of voice pitch, which can aid in separating voices in a background of other talkers, was responsible for this advantage.

18.
This experiment examined the effects of spectral resolution and fine spectral structure on recognition of spectrally asynchronous sentences by normal-hearing and cochlear implant listeners. Sentence recognition was measured in six normal-hearing subjects listening to either full-spectrum or noise-band processors and five Nucleus-22 cochlear implant listeners fitted with 4-channel continuous interleaved sampling (CIS) processors. For the full-spectrum processor, the speech signals were divided into either 4 or 16 channels. For the noise-band processor, after band-pass filtering into 4 or 16 channels, the envelope of each channel was extracted and used to modulate noise of the same bandwidth as the analysis band, thus eliminating the fine spectral structure available in the full-spectrum processor. For the 4-channel CIS processor, the amplitude envelopes extracted from four bands were transformed to electric currents by a power function and the resulting electric currents were used to modulate pulse trains delivered to four electrode pairs. For all processors, the output of each channel was time-shifted relative to other channels, varying the channel delay across channels from 0 to 240 ms (in 40-ms steps). Within each delay condition, all channels were desynchronized such that the cross-channel delays between adjacent channels were maximized, thereby avoiding local pockets of channel synchrony. Results show no significant difference between the 4- and 16-channel full-spectrum speech processor for normal-hearing listeners. Recognition scores dropped significantly only when the maximum delay reached 200 ms for the 4-channel processor and 240 ms for the 16-channel processor. When fine spectral structures were removed in the noise-band processor, sentence recognition dropped significantly when the maximum delay was 160 ms for the 16-channel noise-band processor and 40 ms for the 4-channel noise-band processor. 
There was no significant difference between implant listeners using the 4-channel CIS processor and normal-hearing listeners using the 4-channel noise-band processor. The results imply that when fine spectral structures are not available, as in the implant listener's case, increased spectral resolution is important for overcoming cross-channel asynchrony in speech signals.

19.
The purpose of this study was to determine the relative impact of reverberant self-masking and overlap-masking effects on speech intelligibility by cochlear implant listeners. Sentences were presented in one condition wherein reverberant consonant segments were replaced with clean consonants, and in another condition wherein reverberant vowel segments were replaced with clean vowels. The underlying assumption is that self-masking effects would dominate in the first condition, whereas overlap-masking effects would dominate in the second condition. Results indicated that the degradation of speech intelligibility in reverberant conditions is caused primarily by self-masking effects that give rise to flattened formant transitions.

20.
Two experiments investigated the impact of reverberation and masking on speech understanding using cochlear implant (CI) simulations. Experiment 1 tested sentence recognition in quiet. Stimuli were processed with reverberation simulation (T = 0.425, 0.266, 0.152, and 0.0 s) and then either processed with vocoding (6, 12, or 24 channels) or were subjected to no further processing. Reverberation alone had only a small impact on perception when as few as 12 channels of information were available. However, when the processing was limited to 6 channels, perception was extremely vulnerable to the effects of reverberation. In experiment 2, subjects listened to reverberated sentences, through 6- and 12-channel processors, in the presence of either speech-spectrum noise (SSN) or two-talker babble (TTB) at various target-to-masker ratios. The combined impact of reverberation and masking was profound, although there was no interaction between the two effects. This differs from results obtained in subjects listening to unprocessed speech, where interactions between reverberation and masking have been shown to exist. A speech transmission index (STI) analysis indicated a reasonably good prediction of speech recognition performance. Unlike previous investigations, the SSN and TTB maskers produced equivalent results, raising questions about the role of informational masking in CI-processed speech.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号