Similar Documents
20 similar documents found.
1.
Fourteen prelinguistically profoundly hearing-impaired children were fitted with the multichannel electrotactile speech processor (Tickle Talker) developed by Cochlear Pty. Ltd. and the University of Melbourne. Each child participated in an ongoing training and evaluation program, which included measures of speech perception and production. Results of speech perception testing demonstrate clear benefits for children fitted with the device. Thresholds for detection of pure tones were lower for the Tickle Talker than for hearing aids across the frequency range 250-4000 Hz, with the greatest tactual advantage in the high-frequency consonant range (above 2000 Hz). Individual and mean speech detection thresholds for the Ling 5-sound test confirmed that speech sounds were detected by the electrotactile device at levels consistent with normal conversational speech. Results for three speech feature tests showed significant improvement when the Tickle Talker was used in combination with hearing aids (TA) as compared with hearing aids alone (A). Mean scores in the TA condition increased by 11% for vowel duration, 20% for vowel formant, and 25% for consonant manner as compared with hearing aids alone. Mean TA score on a closed-set word test (WIPI) was 48%, as compared with 32% for hearing aids alone. Similarly, mean WIPI score for the combination of Tickle Talker, lipreading, and hearing aids (TLA) increased by 6% as compared with combined lipreading and hearing aid (LA) scores. Mean scores on open-set sentences (BKB) showed a significant increase of 21% for the tactually aided condition (TLA) as compared with unaided (LA). These results indicate that, given sufficient training, children can utilize speech feature information provided through the Tickle Talker to improve discrimination of words and sentences. These results are consistent with improvement in speech discrimination previously reported for normally hearing and hearing-impaired adults using the device. Anecdotal evidence also indicates some improvements in speech production for children fitted with the Tickle Talker.

2.
Gross variations of the speech amplitude envelope, such as the duration of different segments and the gaps between them, carry information about prosody and some segmental features of vowels and consonants. The amplitude envelope is one parameter encoded by the Tickle Talker, an electrotactile speech processor for the hearing impaired which stimulates the digital nerve bundles with a pulsatile electric current. Psychophysical experiments measuring duration discrimination and identification, gap detection, and integration times for pulsatile electrical stimulation are described and compared with similar auditory measures for normal and impaired hearing and for electrical stimulation via a cochlear implant. The tactile duration limen of 15% for a 300-ms standard was similar to auditory measures. Tactile gap detection thresholds of 9 to 20 ms were larger than those for normal-hearing listeners but shorter than those for some hearing-impaired listeners and cochlear implant users. The electrotactile integration time of about 250 ms was shorter than previously measured tactile values but longer than auditory integration times. The results indicate that the gross amplitude envelope variations should be conveyed well by the Tickle Talker. Short bursts of low amplitude are the features most likely to be poorly perceived.

3.
Two multichannel tactile devices for the hearing impaired were compared in speech perception tasks of varying levels of complexity. Both devices implemented the "vocoder" principle in their stimulus processing: One device had a 16-element linear vibratory array worn on the forearm and displayed activity in 16 overlapping frequency channels; the other device delivered tactile stimulation to a linear array of 16 electrodes worn on the abdomen. Subjects were tested in several phoneme discrimination tasks, ranging from discrimination of pairs of words differing in only one phoneme under tactile aid alone conditions to identification of stimuli in a larger set under tactile aid alone, lipreading alone, and lipreading plus tactile aid conditions. Results showed both devices to be better transmitters of manner and voicing features of articulation than of place features, when tested in single-item tasks. No systematic differences in performance with the two devices were observed. However, in a connected discourse tracking task, the vibrotactile vocoder in conjunction with lipreading yielded much greater improvements over lipreading alone than did the electrotactile vocoder. One possible explanation for this difference in performance, the inclusion of a noise suppression circuit in the electrotactile aid, was evaluated, but did not appear to account for the differences observed. Results are discussed in terms of additional differences between the two devices that may influence performance.
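Both aids' "vocoder" front end reduces to a bandpass filter bank whose per-channel envelopes drive the 16 tactors or electrodes. Below is a minimal sketch of that stage; the channel edges, filter orders, and envelope cutoff are illustrative placeholders, since the abstract does not give the devices' actual values.

```python
# Hedged sketch of a 16-channel tactile "vocoder" front end.
# All constants are placeholders; keep band edges below fs/2.
import numpy as np
from scipy.signal import butter, sosfilt

def vocoder_envelopes(x, fs, n_channels=16, f_lo=100.0, f_hi=5000.0):
    """Return an (n_channels, len(x)) array of band envelopes for signal x."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)             # log-spaced bands
    env_sos = butter(2, 50.0, btype="low", fs=fs, output="sos")  # envelope smoother
    envelopes = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(band_sos, x)
        env = sosfilt(env_sos, np.abs(band))                     # rectify + smooth
        envelopes.append(np.maximum(env, 0.0))
    return np.array(envelopes)
```

Each envelope row would then be compressed into the tactor's or electrode's dynamic range; that mapping is device specific and is where the two aids diverge.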

4.
Speech discrimination testing, using both open- and closed-set materials, was carried out with four severely to profoundly hearing-impaired adults and seven normally hearing subjects to assess performance of a wearable eight-channel electrotactile aid (Tickle Talker). Significant increases in speech-tracking rates were noted for all subjects when using the electrotactile aid. After 70 h of training, mean tracking rate in the tactile-plus-lipreading condition was 55 words per minute (wpm), as compared with 36 wpm for lipreading alone, for the normally hearing group. For the hearing-impaired group, the mean tracking rate in the aided condition was 37 wpm, as compared with 24 wpm for lipreading alone, following 35 h of training. Performance scores on Central Institute for the Deaf (CID) everyday sentences, Consonant Nucleus Consonant (CNC) words, and closed-set vowel and consonant identification were significantly improved when using the electrotactile aid. Performance scores, using the aid without lipreading, were well above chance on consonant and vowel identification and on elements of the Minimal Auditory Capabilities Battery. Two hearing-impaired subjects have used the device satisfactorily in the home environment.

5.
In face-to-face speech communication, the listener extracts and integrates information from the acoustic and optic speech signals. Integration occurs within the auditory modality (i.e., across the acoustic frequency spectrum) and across sensory modalities (i.e., across the acoustic and optic signals). The difficulties experienced by some hearing-impaired listeners in understanding speech could be attributed to losses in the extraction of speech information, the integration of speech cues, or both. The present study evaluated the ability of normal-hearing and hearing-impaired listeners to integrate speech information within and across sensory modalities in order to determine the degree to which integration efficiency may be a factor in the performance of hearing-impaired listeners. Auditory-visual nonsense syllables consisting of eighteen medial consonants surrounded by the vowel [a] were processed into four nonoverlapping acoustic filter bands between 300 and 6000 Hz. A variety of one, two, three, and four filter-band combinations were presented for identification in auditory-only and auditory-visual conditions; a visual-only condition was also included. Integration efficiency was evaluated using a model of optimal integration. Results showed that normal-hearing and hearing-impaired listeners integrated information across the auditory and visual sensory modalities with a high degree of efficiency, independent of differences in auditory capabilities. However, across-frequency integration for auditory-only input was less efficient for hearing-impaired listeners. These individuals exhibited particular difficulty extracting information from the highest frequency band (4762-6000 Hz) when speech information was presented concurrently in the next lower-frequency band (1890-2381 Hz). Results suggest that integration of speech information within the auditory modality, but not across auditory and visual modalities, affects speech understanding in hearing-impaired listeners.
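A common benchmark for "optimal integration" in this literature is a naive-Bayes prediction: treat the auditory-only and visual-only confusion matrices as independent likelihoods, multiply them, and compare the predicted auditory-visual score with the one actually observed. The study's own model is not specified in the abstract, so the sketch below only illustrates the idea.

```python
# Hedged sketch: predict AV accuracy from A-only and V-only confusions
# under an independence assumption; efficiency = observed / predicted.
import numpy as np

def predicted_av_accuracy(conf_a, conf_v):
    """conf_a, conf_v: square row-stochastic confusion matrices (no all-zero rows)."""
    joint = conf_a * conf_v                     # independent-channels assumption
    joint /= joint.sum(axis=1, keepdims=True)   # renormalize per stimulus
    return float(np.mean(np.diag(joint)))       # predicted proportion correct
```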

6.
A group of prelinguistically hearing-impaired children, between 7 and 11 years of age, were trained in the perception of vowel duration and place, the fricative /s/, and manner of articulation (/m/ vs /b/ and /s/ vs /t/) distinctions, using information provided by a multiple-channel electrotactile aid (Tickle Talker) and through aided hearing. Training was provided in the tactile-plus-aided-hearing (TA) and tactile (T) conditions. Speech feature recognition tests were conducted in the TA, T, and aided hearing (A) conditions during pretraining, training, and post-training phases. Test scores in the TA and T conditions were significantly greater than scores in the A condition for all tests, suggesting that perception of these features was improved when the tactile aid was worn. Test scores in the training and post-training phases were significantly greater than in the pretraining phase, suggesting that the training provided was responsible for the improvement in feature perception. Statistical analyses demonstrated a significant interaction between the main effects of condition and phase, suggesting that training improved perception in the TA and T conditions, but not in the A condition. Post-training and training test scores were similar, suggesting that the perceptual skills acquired during training were retained after training ended. Recognition of trained features improved for trained as well as for untrained words.

7.
Vowel and consonant confusion matrices were collected in the hearing alone (H), lipreading alone (L), and hearing plus lipreading (HL) conditions for 28 patients participating in the clinical trial of the multiple-channel cochlear implant. All patients were profound-to-totally deaf and "hearing" refers to the presentation of auditory information via the implant. The average scores were 49% for vowels and 37% for consonants in the H condition and the HL scores were significantly higher than the L scores. Information transmission and multidimensional scaling analyses showed that different speech features were conveyed at different levels in the H and L conditions. In the HL condition, the visual and auditory signals provided independent information sources for each feature. For vowels, the auditory signal was the major source of duration information, while the visual signal was the major source of first and second formant frequency information. The implant provided information about the amplitude envelope of the speech and the estimated frequency of the main spectral peak between 800 and 4000 Hz, which was useful for consonant recognition. A speech processor that coded the estimated frequency and amplitude of an additional peak between 300 and 1000 Hz was shown to increase the vowel and consonant recognition in the H condition by improving the transmission of first formant and voicing information.
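The "information transmission" analysis referred to here is usually the Miller-and-Nicely procedure: collapse the confusion matrix onto a feature grouping (voicing, duration, formant frequency, and so on) and compute transmitted information relative to the stimulus entropy. A sketch under that assumption; the paper's actual feature groupings are not listed in the abstract.

```python
# Hedged sketch of feature information transmission from a confusion matrix.
import numpy as np

def relative_info_transmitted(conf, feature_of):
    """conf[i, j]: count of stimulus i identified as j (response set = stimulus set);
    feature_of[i]: feature class of phoneme i. Returns 0..1 relative transmission."""
    classes = sorted(set(feature_of))
    idx = {c: n for n, c in enumerate(classes)}
    m = np.zeros((len(classes), len(classes)))
    for i in range(conf.shape[0]):              # collapse onto feature classes
        for j in range(conf.shape[1]):
            m[idx[feature_of[i]], idx[feature_of[j]]] += conf[i, j]
    p = m / m.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    with np.errstate(divide="ignore", invalid="ignore"):
        t = np.nansum(p * np.log2(p / np.outer(px, py)))   # transmitted bits
        hx = -np.nansum(px * np.log2(px))                  # stimulus entropy, bits
    return t / hx
```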

8.
Frequency resolution was evaluated for two normal-hearing and seven hearing-impaired subjects with moderate, flat sensorineural hearing loss by measuring percent correct detection of a 2000-Hz tone as the width of a notch in band-reject noise increased. The level of the tone was fixed for each subject at a criterion performance level in broadband noise. Discrimination of synthetic speech syllables that differed in spectral content in the 2000-Hz region was evaluated as a function of the notch width in the same band-reject noise. Recognition of natural speech consonant/vowel syllables in quiet was also tested; results were analyzed for percent correct performance and relative information transmitted for voicing and place features. In the hearing-impaired subjects, frequency resolution at 2000 Hz was significantly correlated with the discrimination of synthetic speech information in the 2000-Hz region and was not related to the recognition of natural speech nonsense syllables unless (a) the speech stimuli contained the vowel /i/ rather than /a/, and (b) the score reflected information transmitted for place of articulation rather than percent correct.
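The logic of the notched-noise method: as the notch around the 2000-Hz probe widens, a sharply tuned auditory filter passes less masker and detection improves, while a broadened filter improves more slowly. A sketch of one trial's stimulus; sampling rate, duration, bandwidths, and levels are placeholders.

```python
# Hedged sketch of a notched-noise trial: 2000-Hz tone in band-reject noise.
import numpy as np

def notched_noise_trial(fs=44100, dur=0.4, fc=2000.0, notch_hz=400.0, noise_bw=3000.0):
    n = int(fs * dur)
    freqs = np.fft.rfftfreq(n, 1 / fs)
    spec = np.random.randn(freqs.size) + 1j * np.random.randn(freqs.size)
    keep = (np.abs(freqs - fc) <= noise_bw / 2) & (np.abs(freqs - fc) > notch_hz / 2)
    noise = np.fft.irfft(spec * keep, n)        # band-reject noise with a notch
    noise /= np.max(np.abs(noise))              # placeholder level scaling
    tone = 0.1 * np.sin(2 * np.pi * fc * np.arange(n) / fs)
    return noise + tone
```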

9.
Psychophysical tests were carried out to investigate the perception of electrocutaneous stimuli delivered to the digital nerve bundles. The tests provided data for defining the operating range of a tactile aid for patients with profound-to-total hearing loss, as well as the individual differences between subjects and the information that could be transmitted. Monopolar biphasic constant current pulses with variable pulse widths were used. Threshold pulse widths varied widely between subjects and between fingers for the same subject. Thresholds were reasonably stable, but maximum comfortable levels increased with time. Perceived intensity was weakly dependent on pulse rate. Absolute identification of stimuli differing in pulse width gave information transmissions from 1.3 to 2.1 bits, limited by the dynamic ranges of the stimuli (3-17 dB). Stimuli from electrodes placed on either side of each finger were identified easily by all subjects. Absolute identification of stimuli differing in pulse rate gave information transmissions from 0.5 to 2.0 bits. Difference limens for pulse rate varied between subjects and were generally poor above 100 pps. On the basis of the results, an electrotactile speech processor is proposed, which codes the speech amplitude as pulse width, the fundamental frequency as pulse rate, and the second formant frequency as electrode position. Variable performances on tasks relying on amplitude and fundamental frequency cues are expected to arise from the intersubject differences in dynamic range and pulse rate discrimination. The psychophysical results for electrotactile stimulation are compared with previously published results for electroauditory stimulation with a multiple-channel cochlear implant.
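The proposed processor maps three speech parameters onto three stimulus dimensions: amplitude to pulse width, fundamental frequency to pulse rate, and second formant to electrode position. A per-frame sketch; every range and boundary below is an illustrative placeholder, not the device's actual tables.

```python
# Hedged sketch of the proposed speech-to-stimulus mapping (placeholder values).
import numpy as np

def encode_frame(amplitude_db, f0_hz, f2_hz,
                 pw_us=(20.0, 400.0), dyn_range_db=12.0,
                 f2_edges=(800, 1200, 1600, 2000, 2400, 2800, 3200, 3600, 4000)):
    # Amplitude -> pulse width, compressed into a narrow electrotactile range.
    frac = float(np.clip(amplitude_db / dyn_range_db, 0.0, 1.0))
    pulse_width_us = pw_us[0] + frac * (pw_us[1] - pw_us[0])
    # F0 -> pulse rate; rate difference limens degrade above ~100 pps.
    pulse_rate_pps = float(np.clip(f0_hz, 30.0, 150.0))
    # F2 -> one of 8 electrode positions (one per finger side).
    electrode = int(np.clip(np.searchsorted(f2_edges, f2_hz) - 1, 0, 7))
    return pulse_width_us, pulse_rate_pps, electrode
```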

10.
A versatile, battery-powered device for the representation of speech information as patterns of tactile stimulation has been developed. The features of this device that are different from other electrotactile speech processors are the site of stimulation, the proposed strategy for the representation of speech information, and the small size of the device.

11.
Tactile-alone word recognition training was provided to six normally hearing users of the Tickle Talker, an electrotactile speech perception device. A mean group tactile-alone vocabulary of 31 words was learned in 12 h of training. These results were comparable to, or superior to, those reported for other tactile devices and Tadoma. With increased training, the group became faster at tactually learning new words, which were introduced in small training sets. However, as their tactile-alone vocabulary grew, subjects required more training time to reach the pass criterion when evaluated on recognition of their whole vocabulary list. A maximum possible vocabulary size was not established. The application of tactile-alone training with hearing-impaired users of the device is discussed.

12.
Many hearing-impaired listeners suffer from distorted auditory processing capabilities. This study examines which aspects of auditory coding (i.e., intensity, time, or frequency) are distorted and how this affects speech perception. The distortion-sensitivity model is used: The effect of distorted auditory coding of a speech signal is simulated by an artificial distortion, and the sensitivity of speech intelligibility to this artificial distortion is compared for normal-hearing and hearing-impaired listeners. Stimuli (speech plus noise) are wavelet coded using a complex sinusoidal carrier with a Gaussian envelope (1/4 octave bandwidth). Intensity information is distorted by multiplying the modulus of each wavelet coefficient by a random factor. Temporal and spectral information are distorted by randomly shifting the wavelet positions along the temporal or spectral axis, respectively. Measured were (1) detection thresholds for each type of distortion, and (2) speech-reception thresholds for various degrees of distortion. For spectral distortion, hearing-impaired listeners showed increased detection thresholds and were also less sensitive to the distortion with respect to speech perception. For intensity and temporal distortion, this was not observed. Results indicate that a distorted coding of spectral information may be an important factor underlying reduced speech intelligibility for the hearing impaired.
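The distortions act on complex wavelet (Gabor) coefficients: the intensity distortion multiplies each coefficient's modulus by a random factor, and the temporal and spectral distortions jitter coefficient positions along one axis. A sketch of two of them, assuming the 1/4-octave Gabor analysis and resynthesis are given; shifting whole channel rows, as below, is a simplification of shifting individual wavelet positions.

```python
# Hedged sketch of the artificial distortions (analysis/resynthesis omitted).
import numpy as np

def distort_intensity(coeffs, spread_db, seed=None):
    """Multiply each coefficient's modulus by a random gain; phase preserved."""
    rng = np.random.default_rng(seed)
    gain_db = rng.normal(0.0, spread_db, size=coeffs.shape)
    return coeffs * 10.0 ** (gain_db / 20.0)

def distort_time(coeffs, max_shift, seed=None):
    """Jitter coefficients along time (rows = frequency channels, cols = time)."""
    rng = np.random.default_rng(seed)
    out = np.empty_like(coeffs)
    for k in range(coeffs.shape[0]):
        out[k] = np.roll(coeffs[k], int(rng.integers(-max_shift, max_shift + 1)))
    return out
```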

13.
The purpose of this experiment was to determine the applicability of the Articulation Index (AI) model for characterizing the speech recognition performance of listeners with mild-to-moderate hearing loss. Performance-intensity functions were obtained from five normal-hearing listeners and 11 hearing-impaired listeners using a closed-set nonsense syllable test for two frequency responses (uniform and high-frequency emphasis). For each listener, the fitting constant Q of the nonlinear transfer function relating AI and speech recognition was estimated. Results indicated that the function mapping AI onto performance was approximately the same for normal and hearing-impaired listeners with mild-to-moderate hearing loss and high speech recognition scores. For a hearing-impaired listener with poor speech recognition ability, the AI procedure was a poor predictor of performance. The AI procedure as presently used is inadequate for predicting performance of individuals with reduced speech recognition ability and should be used conservatively in applications predicting optimal or acceptable frequency response characteristics for hearing-aid amplification systems.
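The nonlinear transfer function relating AI to recognition is commonly written with a single fitting constant Q, as below; the abstract does not give the exact form used, so treat this as a sketch of the idea.

```python
# One common AI-to-score transfer function with a single fitting constant Q.
def predicted_score(ai, q):
    """Map Articulation Index (0..1) to predicted proportion correct."""
    return 1.0 - 10.0 ** (-ai / q)

# e.g., with Q = 0.45, AI = 0.5 predicts about 0.92 proportion correct;
# Q is fitted per listener, and a poor fit flags listeners the AI mispredicts.
```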

14.
To examine spectral effects on declines in speech recognition in noise at high levels, word recognition for 18 young adults with normal hearing was assessed for low-pass-filtered speech and speech-shaped maskers or high-pass-filtered speech and speech-shaped maskers at three speech levels (70, 77, and 84 dB SPL) for each of three signal-to-noise ratios (+8, +3, and -2 dB). An additional low-level noise produced equivalent masked thresholds for all subjects. Pure-tone thresholds were measured in quiet and in all maskers. If word recognition was determined entirely by signal-to-noise ratio, and was independent of signal levels and the spectral content of speech and maskers, scores should remain constant with increasing level for both low- and high-frequency speech and maskers. Recognition of low-frequency speech in low-frequency maskers and high-frequency speech in high-frequency maskers decreased significantly with increasing speech level when signal-to-noise ratio was held constant. For low-frequency speech and speech-shaped maskers, the decline was attributed to nonlinear growth of masking which reduced the "effective" signal-to-noise ratio at high levels, similar to previous results for broadband speech and speech-shaped maskers. Masking growth and reduced "effective" signal-to-noise ratio accounted for some but not all the decline in recognition of high-frequency speech in high-frequency maskers.

15.
To examine the association between frequency resolution and speech recognition, auditory filter parameters and stop-consonant recognition were determined for 9 normal-hearing and 24 hearing-impaired subjects. In an earlier investigation, the relationship between stop-consonant recognition and the articulation index (AI) had been established on normal-hearing listeners. Based on AI predictions, speech-presentation levels for each subject in this experiment were selected to obtain a wide range of recognition scores. This strategy provides a method of interpreting speech-recognition performance among listeners who vary in magnitude and configuration of hearing loss by assuming that conditions which yield equal audible spectra will result in equivalent performance. It was reasoned that an association between frequency resolution and consonant recognition may be more appropriately estimated if hearing-impaired listeners' performance was measured under conditions that assured equivalent audibility of the speech stimuli. Derived auditory filter parameters indicated that filter widths and dynamic ranges were strongly associated with threshold. Stop-consonant recognition scores for most hearing-impaired listeners were not significantly poorer than predicted by the AI model. Furthermore, differences between observed recognition scores and those predicted by the AI were not associated with auditory filter characteristics, suggesting that frequency resolution and speech recognition may appear to be associated primarily because both are degraded by threshold elevation.
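"Auditory filter parameters" from notched-noise data usually means a rounded-exponential (roex) fit. With the one-parameter roex(p) shape, the slope parameter p also fixes the filter's equivalent rectangular bandwidth; the abstract does not say which roex variant was fitted, so this is a sketch under that assumption.

```python
# Hedged sketch of the one-parameter roex(p) auditory filter shape.
import numpy as np

def roex_weight(f, fc, p):
    """Filter weighting at frequency f for center fc and slope parameter p."""
    g = np.abs(f - fc) / fc                 # normalized frequency deviation
    return (1.0 + p * g) * np.exp(-p * g)

def erb_hz(fc, p):
    """Equivalent rectangular bandwidth of the roex(p) filter."""
    return 4.0 * fc / p                     # e.g., p = 25 at 2000 Hz -> 320 Hz
```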

16.
Temporal fine structure (TFS) sensitivity, frequency selectivity, and speech reception in noise were measured for young normal-hearing (NHY), old normal-hearing (NHO), and hearing-impaired (HI) subjects. Two measures of TFS sensitivity were used: the "TFS-LF test" (interaural phase difference discrimination) and the "TFS2 test" (discrimination of harmonic and frequency-shifted tones). These measures were not significantly correlated with frequency selectivity (after partialing out the effect of audiometric threshold), suggesting that insensitivity to TFS cannot be wholly explained by a broadening of auditory filters. The results of the two tests of TFS sensitivity were significantly but modestly correlated, suggesting that performance of the tests may be partly influenced by different factors. The NHO group performed significantly more poorly than the NHY group for both measures of TFS sensitivity, but not frequency selectivity, suggesting that TFS sensitivity declines with age in the absence of elevated audiometric thresholds or broadened auditory filters. When the effect of mean audiometric threshold was partialed out, speech reception thresholds in modulated noise were correlated with TFS2 scores, but not measures of frequency selectivity or TFS-LF test scores, suggesting that a reduction in sensitivity to TFS can partly account for the speech perception difficulties experienced by hearing-impaired subjects.
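"Partialing out" audiometric threshold is an ordinary partial correlation: regress both variables on threshold and correlate the residuals. A minimal sketch:

```python
# Hedged sketch of a first-order partial correlation.
import numpy as np

def partial_corr(x, y, z):
    """Correlation of x and y after removing the linear effect of z (1-D arrays)."""
    def residuals(a, b):
        slope, intercept = np.polyfit(b, a, 1)
        return a - (slope * b + intercept)
    return float(np.corrcoef(residuals(x, z), residuals(y, z))[0, 1])
```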

17.
Paired-comparison judgments of intelligibility of speech in noise were obtained from eight hearing-impaired subjects on a large number of hearing aids simulated by a digital master hearing aid. The hearing aids, which comprised a 5 × 5 matrix, differed systematically in the amount of low-frequency and high-frequency gain provided. A comparison of three adaptive strategies for determining optimum hearing aid frequency-gain characteristics (an iterative round robin, a double elimination tournament, and a modified simplex procedure) revealed convergence on the same or similar hearing aids for most subjects. Analysis revealed that subjects for whom all three procedures converged on the same hearing aid showed a single pronounced peak in the response surface, while a broader peak was evident for the subjects for whom the three procedures identified similar hearing aids. The modified simplex procedure was found to be most efficient and the iterative round robin least efficient.
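The modified simplex procedure walks the 5 × 5 gain matrix using paired comparisons, stepping toward preferred settings until no neighbor wins. The greedy sketch below conveys the idea only; the actual modified simplex rules (reflection and step-size changes) are richer, and `prefer(a, b)` stands in for the listener's judgment.

```python
# Hedged, greedy stand-in for a simplex-style search over a grid of aids.
def simplex_search(prefer, start=(2, 2), size=5, max_steps=25):
    """prefer(a, b) -> True if the aid at cell a is judged more intelligible than b."""
    current = start
    for _ in range(max_steps):
        r, c = current
        neighbors = [(r + dr, c + dc)
                     for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                     if 0 <= r + dr < size and 0 <= c + dc < size]
        better = [n for n in neighbors if prefer(n, current)]
        if not better:
            break                       # no neighbor preferred: converged
        current = better[0]
    return current
```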

18.
Articulation index (AI) theory was used to evaluate stop-consonant recognition of normal-hearing listeners and listeners with high-frequency hearing loss. From results reported in a companion article [Dubno et al., J. Acoust. Soc. Am. 85, 347-354 (1989)], a transfer function relating the AI to stop-consonant recognition was established, and a frequency importance function was determined for the nine stop-consonant-vowel syllables used as test stimuli. The calculations included the rms and peak levels of the speech that had been measured in 1/3 octave bands; the internal noise was estimated from the thresholds for each subject. The AI model was then used to predict performance for the hearing-impaired listeners. A majority of the AI predictions for the hearing-impaired subjects fell within +/- 2 standard deviations of the normal-hearing listeners' results. However, as observed in previous data, the AI tended to overestimate performance of the hearing-impaired listeners. The accuracy of the predictions decreased with the magnitude of high-frequency hearing loss. Thus, with the exception of performance for listeners with severe high-frequency hearing loss, the results suggest that poorer speech recognition among hearing-impaired listeners results from reduced audibility within critical spectral regions of the speech stimuli.
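Behind an AI calculation of this kind is a per-band audibility term: speech peaks above the effective internal noise, clipped to a 30-dB range and weighted by the band's importance. A sketch; the importance function derived for the nine stop-CV syllables is not reproduced in the abstract, so the weights are left as an input.

```python
# Hedged sketch of a band-audibility AI computation.
import numpy as np

def articulation_index(speech_peak_db, internal_noise_db, importance):
    """Per-band arrays (e.g., 1/3-octave bands); importance should sum to 1."""
    audibility = np.clip((speech_peak_db - internal_noise_db) / 30.0, 0.0, 1.0)
    return float(np.sum(importance * audibility))
```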

19.
To examine spectral and threshold effects for speech and noise at high levels, recognition of nonsense syllables was assessed for low-pass-filtered speech and speech-shaped maskers and high-pass-filtered speech and speech-shaped maskers at three speech levels, with signal-to-noise ratio held constant. Subjects were younger adults with normal hearing and older adults with normal hearing but significantly higher average quiet thresholds. A broadband masker was always present to minimize audibility differences between subject groups and across presentation levels. For subjects with lower thresholds, the declines in recognition of low-frequency syllables in low-frequency maskers were attributed to nonlinear growth of masking which reduced "effective" signal-to-noise ratio at high levels, whereas the decline for subjects with higher thresholds was not fully explained by nonlinear masking growth. For all subjects, masking growth did not entirely account for declines in recognition of high-frequency syllables in high-frequency maskers at high levels. Relative to younger subjects with normal hearing and lower quiet thresholds, older subjects with normal hearing and higher quiet thresholds had poorer consonant recognition in noise, especially for high-frequency speech in high-frequency maskers. Age-related effects on thresholds and task proficiency may be determining factors in the recognition of speech in noise at high levels.

20.
The study was designed to test the validity of the American Academy of Ophthalmology and Otolaryngology's (AAOO) 26-dB average hearing threshold level at 500, 1000, and 2000 Hz as a predictor of hearing handicap. To investigate this criterion the performance of a normal-hearing group was compared with that of two groups, categorized according to the AAOO [Trans. Am. Acad. Ophthal. Otolaryng. 63, 236-238 (1959)] guidelines as having no handicap. The latter groups, however, had significant hearing losses in the frequencies above 2000 Hz. Mean hearing threshold levels for 3000, 4000, and 6000 Hz were 54 dB for group II and 63 dB for group III. Two kinds of speech stimuli were presented at an A-weighted sound level of 60 dB in quiet and in three different levels of noise. The resulting speech recognition scores were significantly lower for the hearing-impaired groups than for the normal-hearing group on both kinds of speech materials and in all three noise conditions. Mean scores for group III were significantly lower than those of the normal-hearing group, even in the quiet condition. Speech recognition scores showed significantly better correlation with hearing levels for frequency combinations including frequencies above 2000 Hz than for the 500-, 1000-, and 2000-Hz combination. On the basis of these results the author recommends that the 26-dB fence should be somewhat lower, and that frequencies above 2000 Hz should be included in any scheme for evaluating hearing handicap.
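The AAOO rule under test is plain arithmetic: average the hearing threshold levels at 500, 1000, and 2000 Hz and compare the result with the 26-dB fence. The worked sketch below uses hypothetical thresholds to show how a listener with severe loss above 2000 Hz can still be classed as unimpaired, which is exactly the problem the study documents.

```python
# Hedged worked example of the AAOO 26-dB fence (hypothetical thresholds).
def aaoo_handicap(htl_500, htl_1000, htl_2000, fence_db=26.0):
    """True if the three-frequency average exceeds the fence."""
    return (htl_500 + htl_1000 + htl_2000) / 3.0 > fence_db

# 15, 20, 25 dB HL average 20 dB -> "no handicap" under the rule,
# even if thresholds at 3000-6000 Hz sit near 60 dB HL, as in groups II/III.
print(aaoo_handicap(15, 20, 25))   # False
```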
