Similar Articles
A total of 20 similar articles were retrieved.
1.
Regions in the cochlea with no (or very few) functioning inner hair cells and/or neurons are called "dead regions" (DRs). The recognition of high-pass filtered nonsense syllables was measured as a function of filter cutoff frequency for hearing-impaired people with and without low-frequency (apical) cochlear DRs. The diagnosis of any DR was made using the TEN(HL) test, and psychophysical tuning curves were used to define the edge frequency (f(e)) more precisely. Stimuli were amplified differently for each ear, using the "Cambridge formula." For subjects with low-frequency hearing loss but without DRs, scores were high (about 78%) for low cutoff frequencies, remained approximately constant for cutoff frequencies up to 862 Hz, and then worsened with increasing cutoff frequency. For subjects with low-frequency DRs, performance was typically poor for the lowest cutoff frequency (100 Hz), improved as the cutoff frequency was increased to about 0.57f(e), and worsened with further increases. These results indicate that people with low-frequency DRs are able to make effective use of frequency components that fall in the range 0.57f(e) to f(e), but that frequency components below 0.57f(e) have deleterious effects. The results have implications for the fitting of hearing aids to people with low-frequency DRs.

2.
A dead region is a region of the cochlea where there are no functioning inner hair cells (IHCs) and/or neurons; it can be characterized in terms of the characteristic frequencies of the IHCs bordering that region. We examined the effect of high-frequency amplification on speech perception for subjects with high-frequency hearing loss with and without dead regions. The limits of any dead regions were defined by measuring psychophysical tuning curves and were confirmed using the TEN test described in Moore et al. [Br. J. Audiol. 34, 205-224 (2000)]. The speech stimuli were vowel-consonant-vowel (VCV) nonsense syllables, using one of three vowels (/i/, /a/, and /u/) and 21 different consonants. In a baseline condition, subjects were tested using broadband stimuli with a nominal input level of 65 dB SPL. Prior to presentation via Sennheiser HD580 earphones, the stimuli were subjected to the frequency-gain characteristic prescribed by the "Cambridge" formula, which is intended to give speech at 65 dB SPL the same overall loudness as for a normal listener, and to make the average loudness of the speech the same for each critical band over the frequency range important for speech intelligibility (in a listener without a dead region). The stimuli for all other conditions were initially subjected to this same frequency-gain characteristic. Then, the speech was low-pass filtered with various cutoff frequencies. For subjects without dead regions, performance generally improved progressively with increasing cutoff frequency. This indicates that they benefited from high-frequency information. For subjects with dead regions, two patterns of performance were observed. For most subjects, performance improved with increasing cutoff frequency until the cutoff frequency was somewhat above the estimated edge frequency of the dead region, but hardly changed with further increases. For a few subjects, performance initially improved with increasing cutoff frequency and then worsened with further increases, although the worsening was significant only for one subject. The results have important implications for the fitting of hearing aids.
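The processing chain described here, prescribed frequency-gain shaping followed by low-pass filtering at a variable cutoff, can be sketched as follows. This is a minimal illustration rather than the authors' implementation: the gain table, filter orders, and sample rate are placeholder assumptions, since the actual Cambridge-formula gains depend on each listener's audiogram.

```python
# Sketch: frequency-gain shaping followed by low-pass filtering.
# Gain values below are placeholders, not the Cambridge prescription itself.
import numpy as np
from scipy.signal import firwin2, butter, sosfiltfilt, lfilter

def apply_frequency_gain(x, fs, freqs_hz, gains_db, ntaps=1025):
    """Shape the spectrum of x with a linear-phase FIR filter whose
    magnitude response approximates the prescribed gains (in dB)."""
    nyq = fs / 2.0
    f = np.concatenate(([0.0], np.asarray(freqs_hz, float), [nyq])) / nyq
    g_db = np.concatenate(([gains_db[0]], gains_db, [gains_db[-1]]))
    h = firwin2(ntaps, f, 10.0 ** (np.asarray(g_db) / 20.0))
    return lfilter(h, [1.0], x)

def lowpass(x, fs, cutoff_hz, order=8):
    """Low-pass filter used to vary the audible bandwidth of the speech."""
    sos = butter(order, cutoff_hz, btype="low", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

if __name__ == "__main__":
    fs = 22050
    speech = np.random.default_rng(0).standard_normal(fs)  # stand-in for a VCV token
    # Placeholder audiogram-derived gains (dB) at audiometric frequencies.
    freqs = [250, 500, 1000, 2000, 4000, 6000]
    gains = [5, 8, 12, 20, 28, 30]
    shaped = apply_frequency_gain(speech, fs, freqs, gains)
    for fc in (2000, 3000, 4000, 6000):                     # example cutoff frequencies
        filtered = lowpass(shaped, fs, fc)
```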

3.
Chinese sentence recognition strongly relates to the reception of tonal information. For cochlear implant (CI) users with residual acoustic hearing, tonal information may be enhanced by restoring low-frequency acoustic cues in the nonimplanted ear. The present study investigated the contribution of low-frequency acoustic information to Chinese speech recognition in Mandarin-speaking normal-hearing subjects listening to acoustic simulations of bilaterally combined electric and acoustic hearing. Subjects listened to a 6-channel CI simulation in one ear and low-pass filtered speech in the other ear. Chinese tone, phoneme, and sentence recognition were measured in steady-state, speech-shaped noise, as a function of the cutoff frequency for low-pass filtered speech. Results showed that low-frequency acoustic information below 500 Hz contributed most strongly to tone recognition, while low-frequency acoustic information above 500 Hz contributed most strongly to phoneme recognition. For Chinese sentences, speech reception thresholds (SRTs) improved with increasing amounts of low-frequency acoustic information, and significantly improved when low-frequency acoustic information above 500 Hz was preserved. SRTs were not significantly affected by the degree of spectral overlap between the CI simulation and low-pass filtered speech. These results suggest that, for CI patients with residual acoustic hearing, preserving low-frequency acoustic information can improve Chinese speech recognition in noise.
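A minimal sketch of this simulation paradigm is given below, assuming a noise-excited envelope vocoder for the "CI" ear and a Butterworth low-pass filter for the "acoustic" ear; the channel edge frequencies, envelope cutoff, and filter orders are illustrative assumptions, not the study's exact parameters.

```python
# Sketch: 6-channel noise vocoder in one ear, low-pass speech in the other.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass(x, fs, lo, hi, order=4):
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def lowpass(x, fs, fc, order=4):
    sos = butter(order, fc, btype="low", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def noise_vocoder(x, fs, edges, env_cutoff=160.0):
    """Replace the fine structure in each analysis band with band-limited
    noise modulated by that band's temporal envelope."""
    rng = np.random.default_rng(1)
    out = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = bandpass(x, fs, lo, hi)
        env = lowpass(np.abs(band), fs, env_cutoff)          # rectify and smooth the envelope
        carrier = bandpass(rng.standard_normal(len(x)), fs, lo, hi)
        out += env * carrier
    return out

if __name__ == "__main__":
    fs = 16000
    speech = np.random.default_rng(0).standard_normal(fs)    # stand-in for a sentence
    edges = np.geomspace(200, 7000, 7)                       # assumed band edges for 6 channels
    ci_ear = noise_vocoder(speech, fs, edges)
    acoustic_ear = lowpass(speech, fs, 500.0)                # e.g., the 500-Hz cutoff condition
    stereo = np.stack([ci_ear, acoustic_ear], axis=1)
```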

4.
We determined how the perceived naturalness of music and speech (male and female talkers) signals was affected by various forms of linear filtering, some of which were intended to mimic the spectral "distortions" introduced by transducers such as microphones, loudspeakers, and earphones. The filters introduced spectral tilts and ripples of various types, variations in upper and lower cutoff frequency, and combinations of these. All of the differently filtered signals (168 conditions) were intermixed in random order within one block of trials. Levels were adjusted to give approximately equal loudness in all conditions. Listeners were required to judge the perceptual quality (naturalness) of the filtered signals on a scale from 1 to 10. For spectral ripples, perceived quality decreased with increasing ripple density up to 0.2 ripple/ERB(N) and with increasing ripple depth. Spectral tilts also degraded quality, and the effects were similar for positive and negative tilts. Ripples and/or tilts degraded quality more when they extended over a wide frequency range (87-6981 Hz) than when they extended over subranges. Low- and mid-frequency ranges were roughly equally important for music, but the mid-range was most important for speech. For music, the highest quality was obtained for the broadband signal (55-16,854 Hz). Increasing the lower cutoff frequency from 55 Hz resulted in a clear degradation of quality. There was also a distinct degradation as the upper cutoff frequency was decreased from 16,854 Hz. For speech, there was a marked degradation when the lower cutoff frequency was increased from 123 to 208 Hz and when the upper cutoff frequency was decreased from 10,869 Hz. Typical telephone bandwidth (313 to 3547 Hz) gave very poor quality.
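For the ripple conditions, a spectral ripple of a given density (ripples per ERB_N) and depth can be imposed with a linear-phase FIR filter whose gain varies sinusoidally (in dB) along the ERB_N-number scale. The sketch below assumes the Glasberg and Moore (1990) ERB_N-number mapping; the tap count, sample rate, and evaluation grid are illustrative choices rather than the study's settings.

```python
# Sketch: apply a sinusoidal (in dB) spectral ripple defined on the ERB_N scale.
import numpy as np
from scipy.signal import firwin2, lfilter

def erbn_number(f_hz):
    """Convert frequency in Hz to ERB_N-number (Cams), per Glasberg and Moore (1990)."""
    return 21.4 * np.log10(4.37 * f_hz / 1000.0 + 1.0)

def ripple_filter(x, fs, density_per_erb, depth_db, ntaps=2049):
    """Filter x with a gain that ripples sinusoidally along the ERB_N axis."""
    nyq = fs / 2.0
    f = np.linspace(0.0, nyq, 512)
    gain_db = (depth_db / 2.0) * np.sin(2 * np.pi * density_per_erb * erbn_number(f))
    h = firwin2(ntaps, f / nyq, 10.0 ** (gain_db / 20.0))
    return lfilter(h, [1.0], x)

if __name__ == "__main__":
    fs = 44100
    music = np.random.default_rng(0).standard_normal(fs)   # stand-in for a music excerpt
    rippled = ripple_filter(music, fs, density_per_erb=0.2, depth_db=10.0)
```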

5.
To examine spectral effects on declines in speech recognition in noise at high levels, word recognition for 18 young adults with normal hearing was assessed for low-pass-filtered speech and speech-shaped maskers or high-pass-filtered speech and speech-shaped maskers at three speech levels (70, 77, and 84 dB SPL) for each of three signal-to-noise ratios (+8, +3, and -2 dB). An additional low-level noise produced equivalent masked thresholds for all subjects. Pure-tone thresholds were measured in quiet and in all maskers. If word recognition was determined entirely by signal-to-noise ratio, and was independent of signal levels and the spectral content of speech and maskers, scores should remain constant with increasing level for both low- and high-frequency speech and maskers. Recognition of low-frequency speech in low-frequency maskers and high-frequency speech in high-frequency maskers decreased significantly with increasing speech level when signal-to-noise ratio was held constant. For low-frequency speech and speech-shaped maskers, the decline was attributed to nonlinear growth of masking which reduced the "effective" signal-to-noise ratio at high levels, similar to previous results for broadband speech and speech-shaped maskers. Masking growth and reduced "effective" signal-to-noise ratio accounted for some but not all the decline in recognition of high-frequency speech in high-frequency maskers.
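The level and SNR bookkeeping implied by this design can be sketched as follows: the masker is scaled relative to the speech to fix the signal-to-noise ratio, and the speech is scaled to the nominal presentation level. The constant mapping digital RMS to dB SPL is an assumption, since it depends on the playback calibration.

```python
# Sketch: mix speech and a speech-shaped masker at a fixed SNR and level.
import numpy as np

CAL_DB_SPL_AT_RMS_1 = 100.0   # assumed: a digital RMS of 1.0 plays back at 100 dB SPL

def rms(x):
    return np.sqrt(np.mean(np.square(x)))

def set_level(x, target_db_spl):
    """Scale x to play back at target_db_spl under the assumed calibration."""
    gain_db = target_db_spl - (CAL_DB_SPL_AT_RMS_1 + 20 * np.log10(rms(x)))
    return x * 10.0 ** (gain_db / 20.0)

def mix_at_snr(speech, masker, speech_db_spl, snr_db):
    return set_level(speech, speech_db_spl) + set_level(masker, speech_db_spl - snr_db)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    speech = rng.standard_normal(16000)   # stand-in for a filtered word
    masker = rng.standard_normal(16000)   # stand-in for a speech-shaped noise
    for level in (70, 77, 84):            # dB SPL, as in the study
        for snr in (8, 3, -2):            # dB, as in the study
            mixture = mix_at_snr(speech, masker, level, snr)
```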

6.
The speech understanding of persons with sloping high-frequency (HF) hearing impairment (HI) was compared to normal-hearing (NH) controls and previous research on persons with "flat" losses [Hornsby and Ricketts (2003). J. Acoust. Soc. Am. 113, 1706-1717] to examine how hearing loss configuration affects the contribution of speech information in various frequency regions. Speech understanding was assessed at multiple low- and high-pass filter cutoff frequencies. Crossover frequencies, defined as the cutoff frequencies at which low- and high-pass filtering yielded equivalent performance, were significantly lower for the sloping HI group than for the NH group, suggesting that HF HI limits the utility of HF speech information. Speech intelligibility index calculations suggest this limited utility was not due simply to reduced audibility but also to the negative effects of high presentation levels and a poorer-than-normal use of speech information in the frequency region with the greatest hearing loss (the HF regions). This deficit was comparable, however, to that seen in low-frequency regions of persons with similar HF thresholds and "flat" hearing losses, suggesting that sensorineural HI results in a "uniform," rather than frequency-specific, deficit in speech understanding, at least for persons with HF thresholds up to 60-80 dB HL.

7.
These experiments examined how high presentation levels influence speech recognition for high- and low-frequency stimuli in noise. Normally hearing (NH) and hearing-impaired (HI) listeners were tested. In Experiment 1, high- and low-frequency bandwidths yielding 70%-correct word recognition in quiet were determined at levels associated with broadband speech at 75 dB SPL. In Experiment 2, broadband and band-limited sentences (based on passbands measured in Experiment 1) were presented at this level in speech-shaped noise filtered to the same frequency bandwidths as targets. Noise levels were adjusted to produce approximately 30%-correct word recognition. Frequency bandwidths and signal-to-noise ratios supporting criterion performance in Experiment 2 were tested at 75, 87.5, and 100 dB SPL in Experiment 3. Performance tended to decrease as levels increased. For NH listeners, this "rollover" effect was greater for high-frequency and broadband materials than for low-frequency stimuli. For HI listeners, the 75- to 87.5-dB increase improved signal audibility for high-frequency stimuli and rollover was not observed. However, the 87.5- to 100-dB increase produced qualitatively similar results for both groups: scores decreased most for high-frequency stimuli and least for low-frequency materials. Predictions of speech intelligibility by quantitative methods such as the Speech Intelligibility Index may be improved if rollover effects are modeled as frequency dependent.

8.
To examine spectral and threshold effects for speech and noise at high levels, recognition of nonsense syllables was assessed for low-pass-filtered speech and speech-shaped maskers and high-pass-filtered speech and speech-shaped maskers at three speech levels, with signal-to-noise ratio held constant. Subjects were younger adults with normal hearing and older adults with normal hearing but significantly higher average quiet thresholds. A broadband masker was always present to minimize audibility differences between subject groups and across presentation levels. For subjects with lower thresholds, the declines in recognition of low-frequency syllables in low-frequency maskers were attributed to nonlinear growth of masking which reduced "effective" signal-to-noise ratio at high levels, whereas the decline for subjects with higher thresholds was not fully explained by nonlinear masking growth. For all subjects, masking growth did not entirely account for declines in recognition of high-frequency syllables in high-frequency maskers at high levels. Relative to younger subjects with normal hearing and lower quiet thresholds, older subjects with normal hearing and higher quiet thresholds had poorer consonant recognition in noise, especially for high-frequency speech in high-frequency maskers. Age-related effects on thresholds and task proficiency may be determining factors in the recognition of speech in noise at high levels.

9.
The effects of stimulus frequency and bandwidth on distance perception were examined for nearby sources in simulated reverberant space. Sources to the side [containing reverberation-related cues and interaural level difference (ILD) cues] and to the front (without ILDs) were simulated. Listeners judged the distance of noise bursts presented at a randomly roving level from simulated distances ranging from 0.15 to 1.7 m. Six stimuli were tested, varying in center frequency (300-5700 Hz) and bandwidth (200-5400 Hz). Performance, measured as the correlation between simulated and response distances, was worse for frontal than for lateral sources. For both simulated directions, performance was inversely proportional to the low-frequency stimulus cutoff, independent of stimulus bandwidth. The dependence of performance on frequency was stronger for frontal sources. These correlation results were well summarized by considering how mean response, as opposed to response variance, changed with stimulus direction and spectrum: (1) little bias was observed for lateral sources, but listeners consistently overestimated distance for frontal nearby sources; (2) for both directions, increasing the low-frequency cut-off reduced the range of responses. These results are consistent with the hypothesis that listeners used a direction-independent but frequency-dependent mapping of a reverberation-related cue, not the ILD cue, to judge source distance.

10.
The purpose of this study was to explore the potential advantages, both theoretical and applied, of preserving low-frequency acoustic hearing in cochlear implant patients. Several hypotheses are presented that predict that residual low-frequency acoustic hearing along with electric stimulation for high frequencies will provide an advantage over traditional long-electrode cochlear implants for the recognition of speech in competing backgrounds. A simulation experiment in normal-hearing subjects demonstrated a clear advantage for preserving low-frequency residual acoustic hearing for speech recognition in a background of other talkers, but not in steady noise. Three subjects with an implanted "short-electrode" cochlear implant and preserved low-frequency acoustic hearing were also tested on speech recognition in the same competing backgrounds and compared to a larger group of traditional cochlear implant users. Each of the three short-electrode subjects performed better than any of the traditional long-electrode implant subjects for speech recognition in a background of other talkers, but not in steady noise, in general agreement with the simulation studies. When compared to a subgroup of traditional implant users matched according to speech recognition ability in quiet, the short-electrode patients showed a 9-dB advantage in the multitalker background. These experiments provide strong preliminary support for retaining residual low-frequency acoustic hearing in cochlear implant patients. The results are consistent with the idea that better perception of voice pitch, which can aid in separating voices in a background of other talkers, was responsible for this advantage.

11.
Across-frequency processing by common interaural time delay (ITD) in spatial unmasking was investigated by measuring speech reception thresholds (SRTs) for high- and low-frequency bands of target speech presented against concurrent speech or a noise masker. Experiment 1 indicated that presenting one of these target bands with an ITD of +500 μs and the other with zero ITD (like the masker) provided some release from masking, but full binaural advantage was only measured when both target bands were given an ITD of +500 μs. Experiment 2 showed that full binaural advantage could also be achieved when the high- and low-frequency bands were presented with ITDs of equal but opposite magnitude (±500 μs). In experiment 3, the masker was also split into high- and low-frequency bands with ITDs of equal but opposite magnitude (±500 μs). The ITD of the low-frequency target band matched that of the high-frequency masking band and vice versa. SRTs indicated that, as long as the target and masker differed in ITD within each frequency band, full binaural advantage could be achieved. These results suggest that the mechanism underlying spatial unmasking exploits differences in ITD independently within each frequency channel.
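Imposing an ITD on separate low- and high-frequency bands of a target can be sketched as below; the band-split frequency and sample rate are assumptions, and the 500-μs delay is rounded to the nearest whole sample for simplicity.

```python
# Sketch: band-split a target and delay each band independently at one ear.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def split_bands(x, fs, f_split=1500.0, order=6):
    sos_lo = butter(order, f_split, btype="low", fs=fs, output="sos")
    sos_hi = butter(order, f_split, btype="high", fs=fs, output="sos")
    return sosfiltfilt(sos_lo, x), sosfiltfilt(sos_hi, x)

def delay(x, fs, itd_s):
    """Delay x by itd_s seconds (nearest-sample approximation)."""
    n = int(round(itd_s * fs))
    return np.concatenate([np.zeros(n), x[:len(x) - n]]) if n > 0 else x

if __name__ == "__main__":
    fs = 44100
    target = np.random.default_rng(0).standard_normal(fs)
    low, high = split_bands(target, fs)
    itd = 500e-6                       # +500 microseconds
    # Example: ITDs of opposite sign in the two bands (as in experiment 2).
    left = delay(low, fs, itd) + high
    right = low + delay(high, fs, itd)
    binaural = np.stack([left, right], axis=1)
```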

12.
The speech understanding of persons with "flat" hearing loss (HI) was compared to a normal-hearing (NH) control group to examine how hearing loss affects the contribution of speech information in various frequency regions. Speech understanding in noise was assessed at multiple low- and high-pass filter cutoff frequencies. Noise levels were chosen to ensure that the noise, rather than quiet thresholds, determined audibility. The performance of HI subjects was compared to a NH group listening at the same signal-to-noise ratio and a comparable presentation level. Although absolute speech scores for the HI group were reduced, performance improvements as the speech and noise bandwidth increased were comparable between groups. These data suggest that the presence of hearing loss results in a uniform, rather than frequency-specific, deficit in the contribution of speech information. Measures of auditory thresholds in noise and speech intelligibility index (SII) calculations were also performed. These data suggest that differences in performance between the HI and NH groups are due primarily to audibility differences between groups. Measures of auditory thresholds in noise showed the "effective masking spectrum" of the noise was greater for the HI than the NH subjects.

13.
Cochlear implant (CI) users' speech understanding may be influenced by different speaking styles. In this study, speech recognition was measured in Mandarin-speaking CI and normal-hearing (NH) subjects for sentences produced according to four styles: slow, normal, fast, and whispered. CI subjects were tested using their clinical processors; NH subjects were tested while listening to a four-channel CI simulation. Performance gradually worsened with increasing speaking rate and was much poorer with whispered speech. CI performance was generally similar to NH performance with the four-channel simulation. Results suggest that some speaking styles, especially whispering, may negatively affect Mandarin-speaking CI users' speech understanding.

14.
To better represent fine structure cues in cochlear implants (CIs), recent research has proposed varying the stimulation rate based on slowly varying frequency modulation (FM) information. The present study investigated the abilities of CI users to detect FM with simultaneous amplitude modulation (AM). FM detection thresholds (FMDTs) for 10-Hz sinusoidal FM and upward frequency sweeps were measured as a function of standard frequency (75-1000 Hz). Three AM conditions were tested, including (1) No AM, (2) 20-Hz Sinusoidal AM (SAM) with modulation depths of 10%, 20%, or 30%, and (3) Noise AM (NAM), in which the amplitude was randomly and uniformly varied over a range of 1, 2, or 3 dB, relative to the reference amplitude. Results showed that FMDTs worsened with increasing standard frequencies, and were lower for sinusoidal FM than for upward frequency sweeps. Simultaneous AM significantly interfered with FM detection; FMDTs were significantly poorer with simultaneous NAM than with SAM. Moreover, sinusoidal FMDTs significantly worsened when the starting phase of simultaneous SAM was randomized. These results suggest that FM and AM in CIs partly share a common loudness-based coding mechanism, and that the feasibility of "FM+AM" strategies for CI speech processing may be limited.
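A stimulus of the kind used here, a low-frequency carrier with 10-Hz sinusoidal FM and simultaneous 20-Hz SAM, can be generated as in the sketch below; the carrier frequency, frequency deviation, modulation depth, and duration are placeholders chosen for illustration.

```python
# Sketch: carrier with sinusoidal FM and simultaneous sinusoidal AM (SAM).
import numpy as np

def fm_sam_tone(fs, dur, f_carrier, fm_rate, f_dev, am_rate, am_depth, am_phase=0.0):
    """Carrier with sinusoidal FM (peak deviation f_dev, Hz) and SAM (depth 0-1)."""
    t = np.arange(int(dur * fs)) / fs
    beta = f_dev / fm_rate                                   # FM modulation index
    phase = 2 * np.pi * f_carrier * t + beta * np.sin(2 * np.pi * fm_rate * t)
    envelope = 1.0 + am_depth * np.sin(2 * np.pi * am_rate * t + am_phase)
    return envelope * np.sin(phase)

if __name__ == "__main__":
    fs = 16000
    # e.g., a 250-Hz standard with 10-Hz FM (5-Hz deviation) and 20% SAM at 20 Hz
    x = fm_sam_tone(fs, dur=0.5, f_carrier=250.0, fm_rate=10.0, f_dev=5.0,
                    am_rate=20.0, am_depth=0.2)
```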

15.
16.
Thresholds were measured for the detection of a temporal gap in a bandlimited noise signal presented in a continuous wideband masker, using an adaptive forced-choice procedure. In experiment I the ratio of signal spectrum level to masker spectrum level (the SMR) was fixed at 10 dB and gap thresholds were measured as a function of signal bandwidth at three center frequencies: 0.4, 1.0, and 6.5 kHz. Performance improved with increasing bandwidth and increasing center frequency. For a subset of conditions, gap threshold was also measured as bandwidth was varied keeping the upper cutoff frequency of the signal constant. In this case the variation of gap threshold with bandwidth was more gradual, suggesting that subjects detect the gap using primarily the highest frequency region available in the signal. At low center frequencies, however, subjects may have a limited ability to combine information in different frequency regions. In experiment II gap thresholds were measured as a function of SMR for several signal bandwidths at each of three center frequencies: 0.5, 1.0, and 6.5 kHz. Gap thresholds improved with increasing SMR, but the improvement was minimal for SMRs greater than 12-15 dB. The results are used to evaluate the relative importance of factors influencing gap threshold.
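The abstract does not give the exact adaptive rule, so the sketch below assumes a common two-down/one-up staircase (tracking roughly 70.7% correct) on gap duration, with a simulated listener standing in for real forced-choice responses.

```python
# Sketch: adaptive 2-down/1-up track for a gap-detection threshold.
import numpy as np

def two_down_one_up(respond, start_gap_ms=64.0, step=1.26, n_reversals=12):
    """respond(gap_ms) returns True for a correct trial. Threshold estimate is
    the geometric mean of the last 8 reversal points."""
    gap, n_correct, last_direction = start_gap_ms, 0, 0
    reversal_gaps = []
    while len(reversal_gaps) < n_reversals:
        if respond(gap):
            n_correct += 1
            if n_correct < 2:
                continue                      # need two correct in a row to step down
            n_correct, direction = 0, -1
        else:
            n_correct, direction = 0, +1
        if last_direction and direction != last_direction:
            reversal_gaps.append(gap)         # track direction reversed here
        last_direction = direction
        gap = gap / step if direction < 0 else gap * step
    return float(np.exp(np.mean(np.log(reversal_gaps[-8:]))))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_threshold_ms = 8.0                   # assumed threshold for the demo listener

    def simulated_listener(gap_ms):
        # Logistic psychometric function rising from ~50% (guessing) toward 100%.
        p_correct = 0.5 + 0.5 / (1.0 + np.exp(-(gap_ms - true_threshold_ms)))
        return rng.random() < p_correct

    print("estimated gap threshold (ms):", two_down_one_up(simulated_listener))
```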

17.
Speech recognition in noise improves with combined acoustic and electric stimulation compared to electric stimulation alone [Kong et al., J. Acoust. Soc. Am. 117, 1351-1361 (2005)]. Here the contribution of fundamental frequency (F0) and low-frequency phonetic cues to speech recognition in combined hearing was investigated. Normal-hearing listeners heard vocoded speech in one ear and low-pass (LP) filtered speech in the other. Three listening conditions (vocode-alone, LP-alone, combined) were investigated. Target speech (average F0=120 Hz) was mixed with a time-reversed masker (average F0=172 Hz) at three signal-to-noise ratios (SNRs). LP speech aided performance at all SNRs. Low-frequency phonetic cues were then removed by replacing the LP speech with a LP equal-amplitude harmonic complex, frequency and amplitude modulated by the F0 and temporal envelope of voiced segments of the target. The combined hearing advantage disappeared at 10 and 15 dB SNR, but persisted at 5 dB SNR. A similar finding occurred when, additionally, F0 contour cues were removed. These results are consistent with a role for low-frequency phonetic cues, but not with a combination of F0 information between the two ears. The enhanced performance at 5 dB SNR with F0 contour cues absent suggests that voicing or glimpsing cues may be responsible for the combined hearing benefit.

18.
A nonlinear frequency-lowering method based on a sinusoidal model is proposed to improve high-frequency audibility for hearing-impaired listeners. The amplitudes, frequencies, and phases obtained from the sinusoidal decomposition of the speech are the three main processing parameters of the algorithm. To avoid spectral distortion, the speech spectrum is divided into six octave-based bands. The band closest to, and below, the listener's threshold frequency receives amplitude amplification only. According to the contribution of each frequency band to speech intelligibility, the bands above the listener's threshold frequency are compressed and shifted into the listener's audible range, and the corresponding phase information is replaced with the phase of the nearest corresponding low-frequency component. In this study, 10 subjects completed speech intelligibility tests. The results show that, after training, the subjects' average intelligibility improved by at least 45%. Future work should enlarge the subject pool and add a detailed analysis of each listener's hearing loss, so that a more reasonable and more finely tuned frequency-lowering hearing-aid algorithm can be designed.
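A highly simplified sketch of the frequency-lowering idea follows; it is not the paper's algorithm. It picks the largest spectral peaks in each frame, linearly squeezes peaks above an assumed audibility limit into the band just below it, and resynthesizes by overlap-add. The octave-band partitioning, contribution weighting, and phase substitution described above are omitted, and the frame size, peak count, and limit frequency are placeholders.

```python
# Sketch: sinusoidal-model style frequency lowering via per-frame peak remapping.
import numpy as np

def lower_frequencies(x, fs, f_limit=1500.0, frame=1024, hop=512, n_peaks=30):
    """Pick the n_peaks largest spectral peaks per frame, squeeze peaks above
    f_limit into [f_limit/2, f_limit], and resynthesize with overlap-add."""
    win = np.hanning(frame)
    t = np.arange(frame) / fs
    freqs = np.fft.rfftfreq(frame, 1.0 / fs)
    f_nyq, f_dest = fs / 2.0, f_limit / 2.0
    y = np.zeros(len(x))
    for start in range(0, len(x) - frame, hop):
        spec = np.fft.rfft(x[start:start + frame] * win)
        peaks = np.argsort(np.abs(spec))[-n_peaks:]          # crude peak picking
        synth = np.zeros(frame)
        for k in peaks:
            f = freqs[k]
            amp = 2.0 * np.abs(spec[k]) / np.sum(win)        # windowed-sinusoid amplitude
            phase = np.angle(spec[k])
            if f > f_limit:                                  # map the inaudible band downward
                f = f_dest + (f - f_limit) * (f_limit - f_dest) / (f_nyq - f_limit)
            synth += amp * np.cos(2 * np.pi * f * t + phase)
        y[start:start + frame] += synth * win                # windowed overlap-add
    return y

if __name__ == "__main__":
    fs = 16000
    x = np.random.default_rng(0).standard_normal(2 * fs)     # stand-in for a speech signal
    processed = lower_frequencies(x, fs)
```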

19.
The hypothesis was investigated that selectively increasing the discrimination of low-frequency information (below 2600 Hz) by altering the frequency-to-electrode allocation would improve speech perception by cochlear implantees. Two experimental conditions were compared, both utilizing ten electrode positions selected based on maximal discrimination. A fixed frequency range (200-10513 Hz) was allocated either relatively evenly across the ten electrodes, or so that nine of the ten positions were allocated to the frequencies up to 2600 Hz. Two additional conditions utilizing all available electrode positions (15-18 electrodes) were assessed: one with each subject's usual frequency-to-electrode allocation; and the other using the same analysis filters as the other experimental conditions. Seven users of the Nucleus CI22 implant wore processors mapped with each experimental condition for 2-week periods away from the laboratory, followed by assessment of perception of words in quiet and sentences in noise. Performance with both ten-electrode maps was significantly poorer than with both full-electrode maps on at least one measure. Performance with the map allocating nine out of ten electrodes to low frequencies was equivalent to that with the full-electrode maps for vowel perception and sentences in noise, but was worse for consonant perception. Performance with the evenly allocated ten-electrode map was equivalent to that with the full-electrode maps for consonant perception, but worse for vowel perception and sentences in noise. Comparison of the two full-electrode maps showed that subjects could fully adapt to frequency shifts up to ratio changes of 1.3, given 2 weeks' experience. Future research is needed to investigate whether speech perception may be improved by the manipulation of frequency-to-electrode allocation in maps which have a full complement of electrodes in Nucleus implants.
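The two ten-electrode allocations compared here can be sketched as band-edge tables: the fixed 200-10513 Hz analysis range is divided either across all ten electrodes or so that nine of the ten cover frequencies up to 2600 Hz. Logarithmic spacing of the band edges within each region is an assumption; the paper's actual filter tables are not reproduced.

```python
# Sketch: two alternative frequency-to-electrode allocations for ten electrodes.
import numpy as np

F_LO, F_HI, F_SPLIT, N_ELECTRODES = 200.0, 10513.0, 2600.0, 10

def even_allocation():
    """Band edges spread (log-spaced, an assumption) across the whole range."""
    return np.geomspace(F_LO, F_HI, N_ELECTRODES + 1)

def low_frequency_emphasis_allocation():
    """Nine electrodes below F_SPLIT, one electrode for everything above it."""
    low_edges = np.geomspace(F_LO, F_SPLIT, N_ELECTRODES)   # 9 bands below 2600 Hz
    return np.append(low_edges, F_HI)

if __name__ == "__main__":
    for name, edges in [("even", even_allocation()),
                        ("low-frequency emphasis", low_frequency_emphasis_allocation())]:
        print(name)
        for e, (lo, hi) in enumerate(zip(edges[:-1], edges[1:]), start=1):
            print(f"  electrode {e}: {lo:7.0f} - {hi:7.0f} Hz")
```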

20.
Evaluating the articulation index for auditory-visual input
An investigation of the auditory-visual (AV) articulation index (AI) correction procedure outlined in the ANSI standard [ANSI S3.5-1969 (R1986)] was made by evaluating auditory (A), visual (V), and auditory-visual sentence identification for both wideband speech degraded by additive noise and a variety of bandpass-filtered speech conditions presented in quiet and in noise. When the data for each of the different listening conditions were averaged across talkers and subjects, the procedure outlined in the standard was fairly well supported, although deviations from the predicted AV score were noted for individual subjects as well as individual talkers. For filtered speech signals with an auditory AI (AI_A) less than 0.25, there was a tendency for the standard to underpredict AV scores. Conversely, for signals with AI_A greater than 0.25, the standard consistently overpredicted AV scores. Additionally, synergistic effects, where the AI_A obtained from the combination of different bandpass-filtered conditions was greater than the sum of the individual AI_A values, were observed for all nonadjacent filter-band combinations (e.g., the addition of a low-pass band with a 630-Hz cutoff and a high-pass band with a 3150-Hz cutoff). These latter deviations from the standard violate the basic assumption of additivity stated by Articulation Theory, but are consistent with earlier reports by Pollack [I. Pollack, J. Acoust. Soc. Am. 20, 259-266 (1948)], Licklider [J. C. R. Licklider, Psychology: A Study of a Science, Vol. 1, edited by S. Koch (McGraw-Hill, New York, 1959), pp. 41-144], and Kryter [K. D. Kryter, J. Acoust. Soc. Am. 32, 547-556 (1960)].
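The additivity assumption being tested can be written compactly; the notation below is ours, stated only to make the reported synergy explicit.

```latex
% Additivity assumption of Articulation Theory for non-adjacent bands: the auditory AI
% of a combined low-pass plus high-pass condition should equal the sum of the band AIs.
\[
  \mathrm{AI}_A^{(L+H)} = \mathrm{AI}_A^{(L)} + \mathrm{AI}_A^{(H)},
\]
% whereas the synergistic effects reported above correspond to
\[
  \mathrm{AI}_A^{(L+H)} > \mathrm{AI}_A^{(L)} + \mathrm{AI}_A^{(H)}.
\]
```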
