Similar Literature (20 documents found)
1.
Cochlear implants are largely unable to encode voice pitch information, which hampers the perception of some prosodic cues, such as intonation. This study investigated whether children with a cochlear implant in one ear were better able to detect differences in intonation when a hearing aid was added in the other ear ("bimodal fitting"). Fourteen children with normal hearing and 19 children with bimodal fitting participated in two experiments. The first experiment assessed the just noticeable difference in F0 by presenting listeners with a naturally produced bisyllabic utterance with an artificially manipulated pitch accent. The second experiment assessed the ability to distinguish between questions and affirmations in Dutch words, again by using artificial manipulation of F0. For the implanted group, performance significantly improved in each experiment when the hearing aid was added. However, even with a hearing aid, the implanted group required exaggerated F0 excursions to perceive a pitch accent and to identify a question. These exaggerated excursions are close to the maximum excursions typically used by Dutch speakers. Nevertheless, the results of this study showed that compared to the implant-only condition, bimodal fitting improved the perception of intonation.

2.
Understanding speech in background noise, talker identification, and vocal emotion recognition are challenging for cochlear implant (CI) users due to poor spectral resolution and limited pitch cues with the CI. Recent studies have shown that bimodal CI users, that is, those CI users who wear a hearing aid (HA) in their non-implanted ear, receive benefit for understanding speech both in quiet and in noise. This study compared the efficacy of talker-identification training in two groups of young normal-hearing adults, listening to either acoustic simulations of unilateral CI or bimodal (CI+HA) hearing. Training resulted in improved identification of talkers for both groups, with better overall performance for simulated bimodal hearing. Generalization of learning to sentence and emotion recognition also was assessed in both subject groups. Sentence recognition in quiet and in noise improved for both groups, whether or not the talkers had been heard during training. Generalization to improvements in emotion recognition for two unfamiliar talkers also was noted for both groups, with the simulated bimodal-hearing group showing better overall emotion-recognition performance. Improvements in sentence recognition were retained a month after training in both groups. These results have potential implications for aural rehabilitation of conventional and bimodal CI users.
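The acoustic CI simulations referred to above are typically built with noise-band vocoders. The sketch below illustrates that general technique only; the channel count, band edges, and envelope cutoff are assumptions for illustration, not the processing used in this study.

```python
# Minimal noise-band vocoder sketch (channel count, band edges, and envelope
# cutoff are illustrative assumptions, not this study's processing).
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech, fs, n_channels=8, f_lo=200.0, f_hi=7000.0, env_cutoff=160.0):
    """Replace the fine structure in each analysis band with band-limited noise."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)      # log-spaced band edges
    env_sos = butter(4, env_cutoff, btype="low", fs=fs, output="sos")
    out = np.zeros(len(speech))
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, speech)
        # Extract and smooth the temporal envelope of the band.
        env = np.clip(sosfiltfilt(env_sos, np.abs(hilbert(band))), 0.0, None)
        # Modulate band-limited noise with that envelope.
        carrier = sosfiltfilt(band_sos, np.random.randn(len(speech)))
        out += env * carrier
    return out
```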

3.
The interaural level difference (ILD) is an important cue for the localization of sound sources. Just noticeable differences (JND) in ILD were measured in 12 normal hearing subjects for uncorrelated noise bands with a bandwidth of 1/3 octave and a different center frequency in both ears. In one ear the center frequency was either 250, 500, 1000, or 4000 Hz. In the other ear, a frequency shift of 0, 1/6, 1/3, or 1 octave was introduced. JNDs in ILD for unshifted, uncorrelated noise bands of 1/3 octave width were 2.6, 2.6, 2.5, and 1.4 dB for 250, 500, 1000, and 4000 Hz, respectively. Averaged over all shifts, JNDs decreased significantly with increasing frequency. For the shifted conditions, JNDs increased significantly with increasing shift. Performance on average worsened by 0.5, 0.9, and 1.5 dB for shifts of 1/6, 1/3, and 1 octave. Although performance decreased, the just noticeable ILDs for the shifted conditions were still within a range usable for lateralization. This has implications for signal processing algorithms for bilateral bimodal hearing instruments and the fitting of bilateral cochlear implants.
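To make the stimulus construction concrete, the sketch below generates 1/3-octave noise bands and imposes an ILD by splitting the level difference across the two ears. The sample rate, duration, and example center frequencies are illustrative assumptions, not the study's exact parameters.

```python
# Sketch: 1/3-octave Gaussian noise bands with an ILD split across the ears.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def third_octave_noise(fc, fs, dur=0.4):
    """Gaussian noise band-limited to 1/3 octave around center frequency fc."""
    lo, hi = fc * 2 ** (-1 / 6), fc * 2 ** (1 / 6)
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, np.random.randn(int(dur * fs)))

def apply_ild(left, right, ild_db):
    """Impose an ILD by boosting one ear and attenuating the other by ild_db / 2."""
    g = 10 ** (ild_db / 40.0)
    return left * g, right / g

fs = 44100
left = third_octave_noise(500.0, fs)                   # 500 Hz band in one ear
right = third_octave_noise(500.0 * 2 ** (1 / 3), fs)   # band shifted by 1/3 octave
left, right = apply_ild(left, right, ild_db=2.6)       # JND-sized ILD from the study
```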

4.
The purpose of this study was to explore the potential advantages, both theoretical and applied, of preserving low-frequency acoustic hearing in cochlear implant patients. Several hypotheses are presented that predict that residual low-frequency acoustic hearing along with electric stimulation for high frequencies will provide an advantage over traditional long-electrode cochlear implants for the recognition of speech in competing backgrounds. A simulation experiment in normal-hearing subjects demonstrated a clear advantage for preserving low-frequency residual acoustic hearing for speech recognition in a background of other talkers, but not in steady noise. Three subjects with an implanted "short-electrode" cochlear implant and preserved low-frequency acoustic hearing were also tested on speech recognition in the same competing backgrounds and compared to a larger group of traditional cochlear implant users. Each of the three short-electrode subjects performed better than any of the traditional long-electrode implant subjects for speech recognition in a background of other talkers, but not in steady noise, in general agreement with the simulation studies. When compared to a subgroup of traditional implant users matched according to speech recognition ability in quiet, the short-electrode patients showed a 9-dB advantage in the multitalker background. These experiments provide strong preliminary support for retaining residual low-frequency acoustic hearing in cochlear implant patients. The results are consistent with the idea that better perception of voice pitch, which can aid in separating voices in a background of other talkers, was responsible for this advantage.

5.
Users of a cochlear implant together with a contralateral hearing aid (so-called bimodal listeners) have difficulties with localizing sound sources. This is mainly due to the distortion of interaural time and level difference cues (ITD and ILD), and limited ITD sensitivity. An algorithm is presented that enhances ILD cues. Horizontal plane sound-source localization performance of six bimodal listeners was evaluated in (1) a real sound field with their clinical devices, (2) a virtual sound field under direct computer control, and (3) a virtual sound field with ILD enhancement. The results in the real sound field did not differ significantly from the results in the virtual field, and ILD enhancement improved localization performance by 4°-10° absolute error, relative to a mean absolute error of 28° in the condition without ILD enhancement.
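The paper's ILD-enhancement algorithm is not described here, so the sketch below only illustrates the general idea of expanding the instantaneous interaural level difference frame by frame; the frame length and expansion factor are assumptions, not the algorithm that was evaluated.

```python
# Sketch of ILD enhancement by expanding the frame-by-frame level difference
# (frame length and expansion factor are assumptions, not the evaluated algorithm).
import numpy as np

def enhance_ild(left, right, fs, factor=2.0, frame_ms=20.0, eps=1e-12):
    n = int(fs * frame_ms / 1000.0)
    out_l, out_r = left.astype(float), right.astype(float)
    for start in range(0, len(left) - n + 1, n):
        sl = slice(start, start + n)
        # Broadband ILD of this frame in dB (positive = left louder).
        ild = 10.0 * np.log10((np.mean(left[sl] ** 2) + eps) /
                              (np.mean(right[sl] ** 2) + eps))
        extra_db = (factor - 1.0) * ild          # additional level difference to impose
        g = 10.0 ** (extra_db / 40.0)            # split equally over both ears
        out_l[sl] *= g
        out_r[sl] /= g
    return out_l, out_r
```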

6.
Bilateral cochlear implant patients are unable to localize as well as normal hearing listeners. Although poor sensitivity to interaural time differences clearly contributes to this deficit, it is unclear whether deficits in terms of interaural level differences are also a contributing factor. In this study, localization was tested while manipulating interaural time and level cues using head-related transfer functions. The results indicate that bilateral cochlear implant users' ability to localize based on interaural level differences is actually greater than that of untrained normal hearing listeners.

7.
Users of bilateral cochlear implants and a cochlear implant combined with a contralateral hearing aid are sensitive to interaural time differences (ITDs). The way cochlear implant speech processors work and differences between modalities may result in interaural differences in the shape of the temporal envelope presented to the binaural system. The effect of interaural differences in envelope shape on ITD sensitivity was investigated with normal-hearing listeners using a 4 kHz pure tone modulated with a periodic envelope with a trapezoid shape in each cycle. In one ear the onset segment of the trapezoid was transformed by a power function. No effect on the just noticeable difference in ITD was found with an interaural difference in envelope shape, but the ITD for a centered percept was significantly different across envelope shape conditions.
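A stimulus of the kind described, a 4 kHz carrier with a periodic trapezoidal envelope whose onset segment is transformed by a power function in one ear, can be sketched as follows. The modulation rate, ramp durations, and exponent are illustrative assumptions.

```python
# Sketch of the stimulus: a 4 kHz tone with a periodic trapezoidal envelope whose
# onset ramp is raised to a power in one ear (rate, ramps, and exponent assumed).
import numpy as np

def trapezoid_envelope(fs, mod_rate=40.0, rise=0.005, plateau=0.010, fall=0.005, power=1.0):
    """One modulation cycle: power-shaped rise, flat plateau, linear fall, then silence."""
    t = np.arange(int(fs / mod_rate)) / fs
    env = np.zeros(t.size)
    up = t < rise
    env[up] = (t[up] / rise) ** power
    env[(t >= rise) & (t < rise + plateau)] = 1.0
    down = (t >= rise + plateau) & (t < rise + plateau + fall)
    env[down] = 1.0 - (t[down] - rise - plateau) / fall
    return env

fs, dur = 44100, 0.5
carrier = np.sin(2 * np.pi * 4000.0 * np.arange(int(dur * fs)) / fs)
cycle_l = trapezoid_envelope(fs, power=1.0)          # reference ear
cycle_r = trapezoid_envelope(fs, power=2.0)          # power-transformed onset segment
reps = int(np.ceil(len(carrier) / cycle_l.size))
left = carrier * np.tile(cycle_l, reps)[: len(carrier)]
right = carrier * np.tile(cycle_r, reps)[: len(carrier)]
```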

8.
Perception of a target voice in the presence of a competing talker, of same or different gender as the target, was investigated in cochlear implant users, in implant-alone and bimodal (acoustic hearing in the non-implanted ear) conditions. Recordings of two male and two female talkers acted as targets and maskers, to investigate whether bimodal benefit increased for different compared to same gender target/maskers due to increased ability to perceive and utilize fundamental frequency and spectral-shape differences. In both listening conditions participants showed benefit of target/masker gender difference. There was an overall bimodal benefit, which was independent of target/masker gender difference.

9.
This study investigated which acoustic cues within the speech signal are responsible for bimodal speech perception benefit. Seven cochlear implant (CI) users with usable residual hearing at low frequencies in the non-implanted ear participated. Sentence tests were performed in near-quiet (some noise on the CI side to reduce scores from ceiling) and in a modulated noise background, with the implant alone and with the addition, in the hearing ear, of one of four types of acoustic signals derived from the same sentences: (1) a complex tone modulated by the fundamental frequency (F0) and amplitude envelope contours; (2) a pure tone modulated by the F0 and amplitude contours; (3) a noise-vocoded signal; (4) unprocessed speech. The modulated tones provided F0 information without spectral shape information, whilst the vocoded signal presented spectral shape information without F0 information. For the group as a whole, only the unprocessed speech condition provided significant benefit over implant-alone scores, in both near-quiet and noise. This suggests that, on average, F0 or spectral cues in isolation provided limited benefit for these subjects in the tested listening conditions, and that the significant benefit observed in the full-signal condition was derived from implantees' use of a combination of these cues.
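Signal type (2), a pure tone modulated by the F0 and amplitude contours, can be sketched as below, assuming per-sample F0 and envelope tracks are already available; how those tracks are extracted from the sentences is not shown here.

```python
# Sketch of signal type (2): a tone whose instantaneous frequency follows the F0
# contour and whose amplitude follows the speech envelope. Per-sample F0 and
# envelope tracks are assumed to be given; their extraction is not shown.
import numpy as np

def f0_modulated_tone(f0_track, amp_env, fs):
    """f0_track in Hz and amp_env as linear amplitude, one value per sample."""
    phase = 2 * np.pi * np.cumsum(f0_track) / fs      # integrate frequency -> phase
    tone = np.sin(phase) * amp_env
    tone[f0_track <= 0] = 0.0                         # silence unvoiced portions
    return tone

# Example: a synthetic 120 -> 180 Hz glide with a flat envelope.
fs = 16000
f0 = np.linspace(120.0, 180.0, int(0.5 * fs))
tone = f0_modulated_tone(f0, np.ones_like(f0), fs)
```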

10.
Our aim in the present study was to apply extrapolated DPOAE I/O-functions [J. Acoust. Soc. Am. 111, 1810-1818 (2002); 113, 3275-3284 (2003)] in neonates in order to investigate their ability to estimate hearing thresholds and to differentiate between middle-ear and cochlear disorders. DPOAEs were measured in neonates after birth (mean age = 3.2 days) and 4 weeks later (follow-up) at 11 test frequencies between f2 = 1.5 and 8 kHz, and compared to those found in normal hearing subjects and cochlear hearing loss patients. On average, hearing threshold estimation in a single ear was possible at about two-thirds of the test frequencies, suggesting adequate test performance of the approach. Thresholds were higher at the first measurement than at the follow-up measurement. Since thresholds varied with frequency, transitory middle-ear dysfunction due to amniotic fluid, rather than cochlear immaturity, is suggested as the cause of the change in thresholds. DPOAE behavior in the neonate ears differed from that found in the cochlear hearing loss ears. A simple model indicated that the difference between the estimated DPOAE threshold and the DPOAE detection threshold is able to differentiate between conductive and cochlear hearing loss.
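In the spirit of the cited extrapolation method, the DPOAE response expressed as sound pressure is regressed linearly against the primary-tone level L2 and the fit is extrapolated to zero pressure to estimate a threshold. The sketch below shows only that core idea; the acceptance criteria and noise-floor checks applied by the original method are omitted, and the example values are made up.

```python
# Hedged sketch of an extrapolated DPOAE I/O-function threshold estimate:
# convert DPOAE levels to sound pressure, fit a line against L2, and take the
# zero-pressure intercept. Acceptance criteria of the cited method are omitted.
import numpy as np

def estimated_dpoae_threshold(l2_db, ldp_db_spl):
    """Return the L2 (dB SPL) at which the linear I/O-function fit crosses 0 uPa."""
    p_dp = 20.0 * 10 ** (np.asarray(ldp_db_spl, dtype=float) / 20.0)   # dB SPL -> uPa
    slope, intercept = np.polyfit(np.asarray(l2_db, dtype=float), p_dp, 1)
    return -intercept / slope

# Example with made-up DPOAE levels at three primary levels.
print(estimated_dpoae_threshold([35, 45, 55], [-5.0, 3.0, 10.0]))
```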

11.
Sensitivity to interaural time differences (ITDs) with unmodulated low-frequency stimuli was assessed in bimodal listeners who had previously been shown to be good performers in ITD experiments. Two types of stimuli were used: (1) an acoustic sinusoid combined with an electric transposed signal and (2) an acoustic sinusoid combined with an electric click train. No or very low sensitivity to ITD was found for these stimuli, even though subjects were highly trained on the task and were intensively tested in multiple test sessions. In previous studies with users of a cochlear implant (CI) and a contralateral hearing aid (HA) (bimodal listeners), sensitivity to ITD was shown with modulated stimuli with frequency content between 600 and 3600 Hz. The outcomes of the current study imply that in speech-processing design for users of a CI in combination with a HA on the contralateral side, the emphasis should be more on providing salient envelope ITD cues than on preserving fine-timing ITD cues present in acoustic signals.

12.
The addition of low-passed (LP) speech or even a tone following the fundamental frequency (F0) of speech has been shown to benefit speech recognition for cochlear implant (CI) users with residual acoustic hearing. The mechanisms underlying this benefit are still unclear. In this study, eight bimodal subjects (CI users with acoustic hearing in the non-implanted ear) and eight simulated bimodal subjects (using vocoded and LP speech) were tested on vowel and consonant recognition to determine the relative contributions of acoustic and phonetic cues, including F0, to the bimodal benefit. Several listening conditions were tested (CI/Vocoder, LP, T(F0-env), CI/Vocoder + LP, CI/Vocoder + T(F0-env)). Compared with CI/Vocoder performance, LP significantly enhanced both consonant and vowel perception, whereas a tone following the F0 contour of target speech and modulated with an amplitude envelope of the maximum frequency of the F0 contour (T(F0-env)) enhanced only consonant perception. Information transfer analysis revealed a dual mechanism in the bimodal benefit: The tone representing F0 provided voicing and manner information, whereas LP provided additional manner, place, and vowel formant information. The data in actual bimodal subjects also showed that the degree of the bimodal benefit depended on the cutoff and slope of residual acoustic hearing.
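The LP condition, speech low-pass filtered to approximate residual low-frequency acoustic hearing, can be sketched with a simple Butterworth filter; the 500 Hz cutoff and filter order below are illustrative assumptions, not the study's exact parameters.

```python
# Sketch of the LP condition: speech low-pass filtered to approximate residual
# low-frequency acoustic hearing (cutoff and filter order are assumptions).
from scipy.signal import butter, sosfiltfilt

def lowpass_speech(speech, fs, cutoff_hz=500.0, order=6):
    sos = butter(order, cutoff_hz, btype="low", fs=fs, output="sos")
    return sosfiltfilt(sos, speech)
```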

13.
This paper studies the effect of bilateral hearing aids on directional hearing in the frontal horizontal plane. Localization tests evaluated bilateral hearing aid users with different stimuli and different noise scenarios; normal hearing subjects served as a reference. The main research questions raised in this paper are: (i) How do bilateral hearing aid users perform on a localization task, relative to normal hearing subjects? (ii) Do bilateral hearing aids preserve localization cues? (iii) Is there an influence of state-of-the-art noise reduction algorithms, in particular an adaptive directional microphone configuration, on localization performance? The hearing aid users were tested without and with their hearing aids, using both a standard omnidirectional microphone configuration and an adaptive directional microphone configuration. The following main conclusions are drawn. (i) Bilateral hearing aid users perform worse than normal hearing subjects in a localization task, although more than one-half of the subjects reach normal hearing performance when tested unaided. For both groups, localization performance drops significantly when acoustical scenarios become more complex. (ii) Bilateral, i.e., independently operating, hearing aids do not preserve localization cues. (iii) Overall, adaptive directional noise reduction can have an additional and significant negative impact on localization performance.

14.
Selected subjects with bilateral cochlear implants (CIs) showed excellent horizontal localization of wide-band sounds in previous studies. The current study investigated localization cues used by two bilateral CI subjects with outstanding localization ability. The first experiment studied localization for sounds of different spectral and temporal composition in the free field. Localization of wide-band noise was unaffected by envelope pulsation, suggesting that envelope interaural time difference (ITD) cues contributed little. Low-pass noise was not localizable for one subject, and localization depended on the cutoff frequency for the other, which suggests that ITDs played only a limited role. High-pass noise with slow envelope changes could be localized, in line with a contribution of interaural level differences (ILDs). In experiment 2, the processors of one subject were raised above the head to eliminate the head shadow. If they were spaced at ear distance, ITDs allowed discrimination of left from right for a pulsed wide-band noise. Good localization was observed with a head-sized cardboard inserted between the processors, showing the reliance on ILDs. Experiment 3 investigated localization in virtual space with manipulated ILDs and ITDs. Localization shifted predominantly for offsets in ILDs, even for pulsed high-pass noise. This confirms that envelope ITDs contributed little and that localization with bilateral CIs was dominated by ILDs.

15.
Performance on tests of pure-tone thresholds, speech-recognition thresholds, and speech-recognition scores for the two ears of each subject was evaluated in two groups of adults with bilateral hearing losses. One group was composed of individuals fitted with binaural hearing aids, and the other group included persons with monaural hearing aids. Performance prior to the use of hearing aids was compared to performance after 4-5 years of hearing aid use in order to determine whether the unaided ear would show effects of auditory deprivation. There were no differences over time in pure-tone thresholds or speech-recognition thresholds for either ear in either group. Nevertheless, the results revealed that the speech-recognition difference scores of the binaurally fitted subjects remained stable over time whereas they increased for the monaurally fitted subjects. The findings reveal an auditory deprivation effect for the unfitted ears of the subjects with monaural hearing aids.

16.
Five bilateral cochlear implant users were tested for their localization abilities and speech understanding in noise, for both monaural and binaural listening conditions. They also participated in lateralization tasks to assess the impact of variations in interaural time delays (ITDs) and interaural level differences (ILDs) for electrical pulse trains under direct computer control. The localization task used pink noise bursts presented from an eight-loudspeaker array spanning an arc of approximately 108 degrees in front of the listeners at ear level (0-degree elevation). Subjects showed large benefits from bilateral device use compared to either side alone. Typical root-mean-square (rms) averaged errors across all eight loudspeakers in the array were about 10 degrees for bilateral device use and ranged from 20 degrees to 60 degrees using either ear alone. Speech reception thresholds (SRTs) were measured for sentences presented from directly in front of the listeners (0 degrees) in spectrally matching speech-weighted noise at either 0 degrees, +90 degrees, or -90 degrees for the four out of five subjects tested who could perform the task. For noise to either side, bilateral device use showed a substantial benefit over unilateral device use when the noise was ipsilateral to the unilateral device. This was primarily because of monaural head-shadow effects, which resulted in robust SRT improvements (P<0.001) of about 4 to 5 dB when ipsilateral and contralateral noise positions were compared. The additional benefit of using both ears compared to the shadowed ear (i.e., binaural unmasking) was only 1 or 2 dB and less robust (P = 0.04). Results from the lateralization studies showed consistently good sensitivity to ILDs, better than the smallest level adjustment available in the implants (0.17 dB) for some subjects. Sensitivity to ITDs, on the other hand, was moderate, typically of the order of 100 μs. ITD sensitivity deteriorated rapidly when stimulation rates for unmodulated pulse trains increased above a few hundred Hz, but at 800 pps showed sensitivity comparable to 50-pps pulse trains when a 50-Hz modulation was applied. In our opinion, these results clearly demonstrate that important benefits are available from bilateral implantation, both for localizing sounds (in quiet) and for listening in noise when signal and noise sources are spatially separated. The data do indicate, however, that the effects of interaural timing cues are weaker than those of interaural level cues and, according to our psychophysical findings, rely on the availability of low-rate information below a few hundred Hz.
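The rms localization error reported above is simply the root-mean-square deviation of response azimuth from target azimuth; a minimal sketch, with made-up angles for an eight-loudspeaker arc, is given below.

```python
# Sketch of the rms localization-error metric: root-mean-square deviation of
# response azimuth from target azimuth (the example angles are made up).
import numpy as np

def rms_localization_error(target_deg, response_deg):
    target = np.asarray(target_deg, dtype=float)
    response = np.asarray(response_deg, dtype=float)
    return np.sqrt(np.mean((response - target) ** 2))

# Example: an 8-loudspeaker arc spanning roughly 108 degrees.
targets = np.linspace(-54.0, 54.0, 8)
responses = targets + np.random.normal(0.0, 10.0, size=8)   # ~10 degree scatter
print(rms_localization_error(targets, responses))
```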

17.
Individual and group loudness relations were obtained at a frequency in the region of impaired hearing for 100 people, 98 with bilateral cochlear impairment. Slope distributions were determined from absolute magnitude estimation (AME) and absolute magnitude production (AMP) of loudness; they were also derived from cross-modality matching (CMM) and AME of apparent length. With respect to both the means and the individual slope values, the two distributions closely agree. More than half of the measured deviations are less than 20%, with an overall average of -1.5%, meaning that transitivity is preserved for bilaterally impaired individuals. Moreover, over the stimulus range where cochlear impairment steepens the loudness function, both the group means and the individual slope values are clearly larger than in normal hearing. The results also show that, for groups of people with approximately similar losses, the standard deviation is a nearly constant proportion of the mean slope value giving a coefficient of variation of about 27% in normal and impaired hearing. This indicates, in accord with loudness matching, that the size of the slopes depends directly on the degree of hearing loss. The results disclose that loudness measurements obtained by magnitude scaling are able to reveal the operating characteristic of the ear for individuals.

18.
As advanced signal processing algorithms have been proposed to enhance hearing protective device (HPD) performance, it is important to determine how directional microphones might affect the localization ability of users and whether they might cause safety hazards. The effect of in-the-ear microphone directivity was assessed by measuring sound source identification of speech in the horizontal plane. Recordings of speech in quiet and in noise were made with a Knowles Electronic Manikin for Acoustic Research wearing bilateral in-the-ear hearing aids with microphones having adjustable directivity (omnidirectional, cardioid, hypercardioid, supercardioid). Signals were generated from 16 locations in a circular array. Sound direction identification performance of eight normal hearing listeners and eight hearing-impaired listeners revealed that directional microphones did not degrade localization performance and actually reduced the front-back and lateral localization errors made when listening through omnidirectional microphones. The summed rms speech level of the signals entering the two ears appears to serve as a cue for making front-back discriminations when using directional microphones in the experimental setting. The results of this study show that matched directional microphones, when worn bilaterally, do not have a negative effect on the ability to localize speech in the horizontal plane and may thus be useful in HPD design.

19.
Sound localization with hearing aids has traditionally been investigated in artificial laboratory settings. These settings are not representative of environments in which hearing aids are used. With individual Head-Related Transfer Functions (HRTFs) and room simulations, realistic environments can be reproduced and the performance of hearing aid algorithms can be evaluated. In this study, four different environments with background noise were implemented, in which listeners had to localize different sound sources. The HRTFs were measured inside the ear canals of the test subjects and by the microphones of Behind-The-Ear (BTE) hearing aids. In the first experiment, the system for virtual acoustics was evaluated by comparing perceptual sound localization results for the four scenes in a real room with a simulated one. In the second experiment, sound localization with three BTE algorithms, an omnidirectional microphone, a monaural cardioid-shaped beamformer, and a monaural noise canceler, was examined. The results showed that the system for generating virtual environments is a reliable tool to evaluate sound localization with hearing aids. With BTE hearing aids, localization performance decreased and the number of front-back confusions was at chance level. The beamformer, due to its directivity characteristics, allowed the listener to resolve the front-back ambiguity.

20.
Four adult bilateral cochlear implant users, with good open-set sentence recognition, were tested with three different sound coding strategies for binaural speech unmasking and for their ability to localize 100 and 500 Hz click trains in noise. Two of the strategies tested were envelope-based strategies that are clinically widely used. The third was a research strategy that additionally preserved fine-timing cues at low frequencies. Speech reception thresholds were determined in diotic noise for diotic and interaurally time-delayed speech using direct audio input to a bilateral research processor. Localization in noise was assessed in the free field. Overall results, for both speech and localization tests, were similar with all three strategies. None provided a binaural speech unmasking advantage due to the application of a 700 μs interaural time delay to the speech signal, and localization results showed similar response patterns across strategies that were well accounted for by the use of broadband interaural level cues. The data from both experiments combined indicate that, in contrast to normal hearing, timing cues available from natural head-width delays do not offer binaural advantages with present methods of electrical stimulation, even when fine-timing cues are explicitly coded.
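Interaurally time-delayed speech of the kind used here can be produced by delaying one channel of a diotic signal; the sketch below imposes a 700 μs delay, with the sample rate and stimulus as illustrative assumptions.

```python
# Sketch of imposing a head-width interaural time delay: the right channel lags
# the left by 700 microseconds (sample rate and stimulus are assumptions).
import numpy as np

def apply_itd(mono, fs, itd_s=700e-6):
    """Return a 2 x N stereo array with the right channel delayed by itd_s seconds."""
    delay = int(round(itd_s * fs))
    right = np.concatenate([np.zeros(delay), mono])[: len(mono)]
    return np.stack([mono, right])

fs = 44100
burst = np.random.randn(int(0.1 * fs))
stereo = apply_itd(burst, fs)        # 700 microseconds is about 31 samples at 44.1 kHz
```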
