1.
Akeroyd MA, Gatehouse S, Blaschke J. The Journal of the Acoustical Society of America, 2007, 121(2): 1077-1089
This experiment measured the capability of hearing-impaired individuals to discriminate differences in the cues to the distance of spoken sentences. The stimuli were generated synthetically, using a room-image procedure to calculate the direct sound and first 74 reflections for a source placed in a 7 x 9 m room, and then presenting each of those sounds individually through a circular array of 24 loudspeakers. Seventy-seven listeners participated, aged 22-83 years and with hearing levels from -5 to 59 dB HL. In conditions where a substantial change in overall level due to the inverse-square law was available as a cue, the elderly hearing-impaired listeners did not perform any differently from control groups. In other conditions where that cue was unavailable (so leaving the direct-to-reverberant relationship as a cue), either because the reverberant field dominated the direct sound or because the overall level had been artificially equalized, hearing-impaired listeners performed worse than controls. There were significant correlations with listeners' self-reported distance capabilities as measured by the "Speech, Spatial, and Qualities of Hearing" questionnaire [S. Gatehouse and W. Noble, Int. J. Audiol. 43, 85-99 (2004)]. The results demonstrate that hearing-impaired listeners show deficits in the ability to use some of the cues which signal auditory distance.
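As a rough numerical aid to the two cues discussed above, the sketch below (Python, illustrative values only; not the room-image code used in the study) shows how the overall-level cue follows the inverse-square law while the direct-to-reverberant ratio falls as the direct sound weakens against a roughly constant reverberant field.

```python
import numpy as np

def level_change_db(d_ref_m, d_m):
    """Inverse-square-law change in direct-sound level (dB) when moving from d_ref_m to d_m."""
    return -20.0 * np.log10(d_m / d_ref_m)

def direct_to_reverberant_db(d_m, direct_at_1m_db=0.0, reverberant_db=-10.0):
    """Direct-to-reverberant ratio: the direct sound loses ~6 dB per doubling of distance,
    while the diffuse reverberant level is treated as constant (illustrative values)."""
    return (direct_at_1m_db + level_change_db(1.0, d_m)) - reverberant_db

for d in (1.0, 2.0, 4.0, 8.0):
    print(f"{d:4.1f} m: level {level_change_db(1.0, d):+6.1f} dB, D/R {direct_to_reverberant_db(d):+6.1f} dB")
```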
2.
Training American listeners to perceive Mandarin tones
Wang Y, Spence MM, Jongman A, Sereno JA. The Journal of the Acoustical Society of America, 1999, 106(6): 3649-3658
Auditory training has been shown to be effective in the identification of non-native segmental distinctions. In this study, it was investigated whether such training is applicable to the acquisition of non-native suprasegmental contrasts, i.e., Mandarin tones. Using the high-variability paradigm, eight American learners of Mandarin were trained in eight sessions during the course of two weeks to identify the four tones in natural words produced by native Mandarin talkers. The trainees' identification accuracy revealed an average 21% increase from the pretest to the post-test, and the improvement gained in training was generalized to new stimuli (18% increase) and to new talkers and stimuli (25% increase). Moreover, the six-month retention test showed that the improvement was retained long after training, with an average 21% increase from the pretest. The results are discussed in terms of non-native suprasegmental perceptual modification, and the analogies between L2 acquisition processes at the segmental and suprasegmental levels.
3.
Frequency modulation detection limens (FMDLs) were measured for five hearing-impaired (HI) subjects for carrier frequencies f(c) = 1000, 4000, and 6000 Hz, using modulation frequencies f(m) = 2 and 10 Hz and levels of 20 dB sensation level and 90 dB SPL. FMDLs were smaller for f(m) = 10 than for f(m) = 2 Hz for the two higher f(c), but not for f(c) = 1000 Hz. FMDLs were also determined with additional random amplitude modulation (AM), to disrupt excitation-pattern cues. The disruptive effect was larger for f(m) = 10 than for f(m) = 2 Hz. The smallest disruption occurred for f(m) = 2 Hz and f(c) = 1000 Hz. AM detection thresholds for normal-hearing and HI subjects were measured for the same f(c) and f(m) values. Performance was better for the HI subjects for both f(m). AM detection was much better for f(m) = 10 than for f(m) = 2 Hz. Additional tests showed that most HI subjects could discriminate temporal fine structure (TFS) at 800 Hz. The results are consistent with the idea that, for f(m) = 2 Hz and f(c) = 1000 Hz, frequency modulation (FM) detection was partly based on the use of TFS information. For higher carrier frequencies and for all carrier frequencies with f(m) = 10 Hz, FM detection was probably based on place cues.
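A minimal stimulus sketch in Python (NumPy), assuming illustrative parameter values rather than the levels actually used in the experiment: a sinusoidally frequency-modulated carrier with superimposed random amplitude modulation of the kind used to disrupt excitation-pattern cues.

```python
import numpy as np

fs = 44100
t = np.arange(int(fs * 1.0)) / fs

fc, fm, delta_f = 1000.0, 2.0, 10.0          # carrier, modulation rate, peak frequency deviation (Hz)
beta = delta_f / fm                           # FM modulation index
fm_tone = np.sin(2 * np.pi * fc * t + beta * np.sin(2 * np.pi * fm * t))

# Superimposed random AM: a slowly varying noise modulator applied to the FM carrier
rng = np.random.default_rng(0)
slow_noise = np.convolve(rng.standard_normal(t.size), np.ones(4410) / 4410, mode="same")
am = 1.0 + 0.33 * slow_noise / np.max(np.abs(slow_noise))    # ~33% random AM depth (illustrative)
stimulus = am * fm_tone
```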
4.
English consonant recognition in noise and in reverberation by Japanese and American listeners
English consonant recognition in undegraded and degraded listening conditions was compared for listeners whose primary language was either Japanese or American English. There were ten subjects in each of the two groups, termed the non-native (Japanese) and the native (American) subjects, respectively. The Modified Rhyme Test was degraded either by a babble of voices (S/N = -3 dB) or by room reverberation (reverberation time, T = 1.2 s). The Japanese subjects performed at a lower level than the American subjects in both noise and reverberation, although the performance difference in the undegraded, quiet condition was relatively small. There was no difference between the scores obtained in noise and in reverberation for either group. A limited error analysis revealed some differences in the types of errors made by the two groups of listeners. Implications of the results are discussed in terms of the effects of degraded listening conditions on non-native listeners' speech perception.
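For readers unfamiliar with the S/N specification, the following Python sketch (a hypothetical helper, not the study's materials) shows one common way to mix speech with multitalker babble at a target signal-to-noise ratio such as -3 dB.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the long-term speech-to-noise power ratio equals `snr_db`, then add it to the speech."""
    noise = noise[:len(speech)]
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    gain = np.sqrt(p_speech / (p_noise * 10.0 ** (snr_db / 10.0)))
    return speech + gain * noise

# usage (sample arrays at the same rate):  degraded = mix_at_snr(speech, babble, snr_db=-3)
```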
5.
Envelope detection and processing are very important for cochlear implant (CI) listeners, who must rely on obtaining significant amounts of acoustic information from the time-varying envelopes of stimuli. In previous work, Chatterjee and Robert [JARO 2(2), 159-171 (2001)] reported on a stochastic-resonance-type effect in modulation detection by CI listeners: optimum levels of noise in the envelope enhanced modulation detection under certain conditions, particularly when the carrier level was low. The results of that study suggested that a low carrier level was sufficient to evoke the observed stochastic resonance effect, but did not clarify whether a low carrier level was necessary to evoke the effect. Modulation thresholds in CI listeners generally decrease with increasing carrier level. The experiments in this study were designed to investigate whether the observed noise-induced enhancement is related to the low carrier level per se, or to the poor modulation sensitivity that accompanies it. This was done by keeping the carrier amplitude fixed at a moderate level and increasing modulation frequency so that modulation sensitivity could be reduced without lowering carrier level. The results suggest that modulation sensitivity, not carrier level, is the primary factor determining the effect of the noise.
6.
Frequency resolution was evaluated for two normal-hearing and seven hearing-impaired subjects with moderate, flat sensorineural hearing loss by measuring percent correct detection of a 2000-Hz tone as the width of a notch in band-reject noise increased. The level of the tone was fixed for each subject at a criterion performance level in broadband noise. Discrimination of synthetic speech syllables that differed in spectral content in the 2000-Hz region was evaluated as a function of the notch width in the same band-reject noise. Recognition of natural speech consonant/vowel syllables in quiet was also tested; results were analyzed for percent correct performance and relative information transmitted for voicing and place features. In the hearing-impaired subjects, frequency resolution at 2000 Hz was significantly correlated with the discrimination of synthetic speech information in the 2000-Hz region and was not related to the recognition of natural speech nonsense syllables unless (a) the speech stimuli contained the vowel /i/ rather than /a/, and (b) the score reflected information transmitted for place of articulation rather than percent correct.
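A compact illustration of the masker described above: band-reject (notched) noise centred on 2000 Hz with an adjustable notch width, built by zeroing a band of FFT components. This is a generic sketch, not the study's signal-generation code.

```python
import numpy as np

def band_reject_noise(fs, dur_s, f_center, notch_width, seed=0):
    """Gaussian broadband noise with a spectral notch of `notch_width` Hz centred on `f_center`."""
    rng = np.random.default_rng(seed)
    n = int(fs * dur_s)
    spectrum = np.fft.rfft(rng.standard_normal(n))
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    spectrum[np.abs(freqs - f_center) < notch_width / 2.0] = 0.0   # remove components inside the notch
    return np.fft.irfft(spectrum, n)

masker = band_reject_noise(fs=44100, dur_s=0.5, f_center=2000.0, notch_width=400.0)
```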
7.
Strange W, Akahane-Yamada R, Kubo R, Trent SA, Nishi K. The Journal of the Acoustical Society of America, 2001, 109(4): 1691-1704
This study investigated the extent to which adult Japanese listeners' perceived phonetic similarity of American English (AE) and Japanese (J) vowels varied with consonantal context. Four AE speakers produced multiple instances of the 11 AE vowels in six syllabic contexts /b-b, b-p, d-d, d-t, g-g, g-k/ embedded in a short carrier sentence. Twenty-four native speakers of Japanese were asked to categorize each vowel utterance as most similar to one of 18 Japanese categories [five one-mora vowels, five two-mora vowels, plus /ei, ou/ and one-mora and two-mora vowels in palatalized consonant CV syllables, C(j)a(a), C(j)u(u), C(j)o(o)]. They then rated the "category goodness" of the AE vowel to the selected Japanese category on a seven-point scale. None of the 11 AE vowels was assimilated unanimously to a single J response category in all context/speaker conditions; consistency in selecting a single response category ranged from 77% for /eI/ to only 32% for /ae/. Median ratings of category goodness for modal response categories were somewhat restricted overall, ranging from 5 to 3. Results indicated that temporal assimilation patterns (judged similarity to one-mora versus two-mora Japanese categories) differed as a function of the voicing of the final consonant, especially for the AE vowels /see text/. Patterns of spectral assimilation (judged similarity to the five J vowel qualities) of /see text/ also varied systematically with consonantal context and speakers. On the basis of these results, it was predicted that relative difficulty in the identification and discrimination of AE vowels by Japanese speakers would vary significantly as a function of the contexts in which they were produced and presented.
8.
This study examined within- and across-electrode-channel processing of temporal gaps in successful users of MED-EL COMBI 40+ cochlear implants. The first experiment tested across-ear gap duration discrimination (GDD) in four listeners with bilateral implants. The results demonstrated that across-ear GDD thresholds are elevated relative to monaural, within-electrode-channel thresholds; the size of the threshold shift was approximately the same as for monaural, across-electrode-channel configurations. Experiment 1 also demonstrated a decline in GDD performance for channel-asymmetric markers. The second experiment tested the effect of envelope fluctuation on gap detection (GD) for monaural markers carried on a single electrode channel. Results from five cochlear implant listeners indicated that envelopes associated with 50-Hz wide bands of noise resulted in poorer GD thresholds than envelopes associated with 300-Hz wide bands of noise. In both cases GD thresholds improved when envelope fluctuations were compressed by an exponent of 0.2. The results of both experiments parallel those found for acoustic hearing, therefore suggesting that temporal processing of gaps is largely limited by factors central to the cochlea.
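The envelope manipulation in the second experiment can be pictured with the short Python sketch below (SciPy Hilbert envelope; the centre frequency and duration are illustrative assumptions): a narrowband-noise envelope is extracted and then compressed by raising it to an exponent of 0.2.

```python
import numpy as np
from scipy.signal import hilbert

fs, dur_s = 44100, 0.5
n = int(fs * dur_s)
rng = np.random.default_rng(0)

# 50-Hz-wide band of noise (here centred at 1 kHz for illustration), made by FFT shaping
spectrum = np.fft.rfft(rng.standard_normal(n))
freqs = np.fft.rfftfreq(n, 1.0 / fs)
spectrum[np.abs(freqs - 1000.0) > 25.0] = 0.0
narrowband = np.fft.irfft(spectrum, n)

envelope = np.abs(hilbert(narrowband))      # Hilbert envelope of the narrowband noise
compressed = envelope ** 0.2                # the compression exponent that improved gap detection
```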
9.
Pfingst BE, Burkholder-Juhasz RA, Xu L, Thompson CS. The Journal of the Acoustical Society of America, 2008, 123(2): 1054-1062
In modern cochlear implants, much of the information required for recognition of important sounds is conveyed by temporal modulation of the charge per phase in interleaved trains of electrical pulses. In this study, modulation detection thresholds (MDTs) were used to assess listeners' abilities to detect sinusoidal modulation of charge per phase at each available stimulation site in their 22-electrode implants. Fourteen subjects were tested. MDTs were found to be highly variable across stimulation sites in most listeners. The across-site patterns of MDTs differed considerably from subject to subject. The subject-specific patterns of across-site variability of MDTs suggest that peripheral site-specific characteristics, such as electrode placement and the number and condition of surviving neurons, play a primary role in determining modulation sensitivity. Across-site patterns of detection thresholds (T levels), maximum comfortable loudness levels (C levels) and dynamic ranges (DRs) were not consistently correlated with across-site patterns of MDTs within subjects, indicating that the mechanisms underlying across-site variation in these measures differed from those underlying across-site variation in MDTs. MDTs sampled from multiple sites in a listener's electrode array might be useful for diagnosing across-subject differences in speech recognition with cochlear implants and for guiding strategies to improve the individual's perception.
10.
Two-dimensional sound localization by human listeners
This study measured the ability of subjects to localize broadband sound sources that varied in both horizontal and vertical location. Brief (150 ms) sounds were presented in a free field, and subjects reported the apparent stimulus location by turning to face the sound source; head orientation was measured electromagnetically. Localization of continuous sounds also was tested to estimate errors in the motor act of orienting with the head. Localization performance was excellent for brief sounds presented in front of the subject. The smallest errors, averaged across subjects, were about 2 degrees and 3.5 degrees in the horizontal and vertical dimensions, respectively. Errors increased for more peripheral stimulus locations, to maxima of about 20 degrees. Localization performance was better in the horizontal than in the vertical dimension for stimuli located on or near the frontal midline, but the opposite was true for most stimuli located more peripherally. Front/back confusions occurred in 6% of trials; the characteristics of those responses suggest that subjects derived horizontal localization information principally from interaural difference cues. The generally high level of performance obtained with the head orientation technique argues for its utility in continuing studies of sound localization.
11.
Thresholds of ongoing interaural time difference (ITD) were obtained from normal-hearing and hearing-impaired listeners who had high-frequency, sensorineural hearing loss. Several stimuli (a 500-Hz sinusoid, a narrow-band noise centered at 500 Hz, a sinusoidally amplitude-modulated 4000-Hz tone, and a narrow-band noise centered at 4000 Hz) and two criteria [equal sound-pressure level (Eq SPL) and equal sensation level (Eq SL)] for determining the level of stimuli presented to each listener were employed. The ITD thresholds and slopes of the psychometric functions were elevated for hearing-impaired listeners for the two high-frequency stimuli in comparison to (a) the listeners' own low-frequency thresholds and (b) data obtained from normal-hearing listeners for stimuli presented at Eq SPL interaurally. The two groups of listeners required similar ITDs to reach threshold when stimuli were presented at Eq SLs to each ear. For low-frequency stimuli, the ITD thresholds of the hearing-impaired listeners were generally slightly greater than those obtained from the normal-hearing listeners. Whether the stimuli were presented at Eq SPL or Eq SL did not differentially affect the ITD thresholds across groups.
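To make the "ongoing ITD" concrete, here is a minimal Python sketch (illustrative modulation rate and ITD, not the study's exact parameters) that imposes an interaural time difference on a sinusoidally amplitude-modulated 4000-Hz tone by delaying the whole waveform to one ear.

```python
import numpy as np

fs = 48000
t = np.arange(int(fs * 0.3)) / fs

fc, fm = 4000.0, 100.0        # carrier and modulation frequency (fm is an assumed value)
itd_s = 200e-6                # 200-microsecond ongoing interaural time difference

def sam_tone(time):
    """Sinusoidally amplitude-modulated tone (100% modulation depth)."""
    return 0.5 * (1.0 + np.sin(2 * np.pi * fm * time)) * np.sin(2 * np.pi * fc * time)

left = sam_tone(t)
right = sam_tone(t - itd_s)   # waveform delayed to the right ear -> image lateralised toward the left
stereo = np.column_stack([left, right])
```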
12.
Won JH, Drennan WR, Nie K, Jameyson EM, Rubinstein JT. The Journal of the Acoustical Society of America, 2011, 130(1): 376-388
The goals of the present study were to measure acoustic temporal modulation transfer functions (TMTFs) in cochlear implant listeners and examine the relationship between modulation detection and speech recognition abilities. The effects of automatic gain control, presentation level and number of channels on modulation detection thresholds (MDTs) were examined using the listeners' clinical sound processor. The general form of the TMTF was low-pass, consistent with previous studies. The operation of automatic gain control had no effect on MDTs when the stimuli were presented at 65 dBA. MDTs did not depend on presentation level (50 to 75 dBA) or on the number of channels. Significant correlations were found between MDTs and speech recognition scores. The rates of decay of the TMTFs were predictive of speech recognition abilities. Spectral-ripple discrimination was evaluated to examine the relationship between temporal and spectral envelope sensitivities. No correlations were found between the two measures, and 56% of the variance in speech recognition was predicted jointly by the two tasks. The present study suggests that temporal modulation detection measured with the sound processor can serve as a useful measure of the ability of clinical sound processing strategies to deliver clinically pertinent temporal information.
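The "low-pass" TMTF shape and its rate of decay can be summarised with the first-order low-pass parameterisation sketched below (a common descriptive form, not necessarily the exact fit used in the paper); sensitivity is expressed as 20·log10(1/m), where m is the modulation depth at threshold.

```python
import numpy as np

def lowpass_tmtf(fm_hz, peak_sensitivity_db, cutoff_hz):
    """First-order low-pass TMTF: flat sensitivity at low modulation rates, rolling off above `cutoff_hz`."""
    return peak_sensitivity_db - 10.0 * np.log10(1.0 + (fm_hz / cutoff_hz) ** 2)

# Example: a listener with 20 dB peak sensitivity and a 100-Hz cutoff (hypothetical values)
rates = np.array([10.0, 50.0, 100.0, 200.0, 400.0])
print(lowpass_tmtf(rates, peak_sensitivity_db=20.0, cutoff_hz=100.0))
```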
13.
The resolution of complex spectral patterns by cochlear implant and normal-hearing listeners
The differences in spectral shape resolution abilities among cochlear implant (CI) listeners, and between CI and normal-hearing (NH) listeners, when listening with the same number of channels (12), were investigated. In addition, the effect of the number of channels on spectral shape resolution was examined. The stimuli were rippled noise signals with various ripple frequency-spacings. An adaptive 4IFC procedure was used to determine the threshold for resolvable ripple spacing, which was the spacing at which an interchange in peak and valley positions could be discriminated. The results showed poorer spectral shape resolution in CI compared to NH listeners (average thresholds of approximately 3000 and 400 Hz, respectively), and wide variability among CI listeners (range of approximately 800 to 8000 Hz). There was a significant relationship between spectral shape resolution and vowel recognition. The spectral shape resolution thresholds of NH listeners increased as the number of channels increased from 1 to 16, while the CI listeners showed a performance plateau at 4-6 channels, which is consistent with previous results using speech recognition measures. These results indicate that this test may provide a measure of CI performance which is time efficient and non-linguistic, and therefore, if verified, may provide a useful contribution to the prediction of speech perception in adults and children who use CIs.
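A generic way to build rippled-noise stimuli of the kind described above (a spectral-shaping sketch; the ripple depth and spacing convention are assumptions, not taken from the paper): a sinusoidal ripple is imposed on the log-magnitude spectrum of broadband noise, and setting `inverted=True` interchanges the peak and valley positions that the listener must discriminate.

```python
import numpy as np

def rippled_noise(fs, dur_s, peak_spacing_hz, depth_db=30.0, inverted=False, seed=0):
    """Broadband noise whose spectrum ripples sinusoidally with peaks `peak_spacing_hz` apart."""
    rng = np.random.default_rng(seed)
    n = int(fs * dur_s)
    spectrum = np.fft.rfft(rng.standard_normal(n))
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    phase = np.pi if inverted else 0.0                      # a pi phase shift swaps peaks and valleys
    ripple_db = 0.5 * depth_db * np.sin(2 * np.pi * freqs / peak_spacing_hz + phase)
    return np.fft.irfft(spectrum * 10.0 ** (ripple_db / 20.0), n)

standard = rippled_noise(44100, 0.5, peak_spacing_hz=3000.0)
swapped = rippled_noise(44100, 0.5, peak_spacing_hz=3000.0, inverted=True)
```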
14.
Previous work has established that naturally produced clear speech is more intelligible than conversational speech for adult hearing-impaired listeners and normal-hearing listeners under degraded listening conditions. The major goal of the present study was to investigate the extent to which naturally produced clear speech is an effective intelligibility enhancement strategy for non-native listeners. Thirty-two non-native and 32 native listeners were presented with naturally produced English sentences. Factors that varied were speaking style (conversational versus clear), signal-to-noise ratio (-4 versus -8 dB) and talker (one male versus one female). Results showed that while native listeners derived a substantial benefit from naturally produced clear speech (an improvement of about 16 rau units on a keyword-correct count), non-native listeners exhibited only a small clear speech effect (an improvement of only 5 rau units). This relatively small clear speech effect for non-native listeners is interpreted as a consequence of the fact that clear speech is essentially native-listener oriented, and therefore is only beneficial to listeners with extensive experience with the sound structure of the target language.
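The scores above are given in rationalized arcsine units (rau). The small Python helper below, following the standard transform attributed to Studebaker (1985), shows how a keyword-correct count is converted; the scoring code actually used in the study is not available here.

```python
import math

def rau(num_correct, num_items):
    """Rationalized arcsine units: a variance-stabilising transform of proportion-correct scores."""
    theta = (math.asin(math.sqrt(num_correct / (num_items + 1)))
             + math.asin(math.sqrt((num_correct + 1) / (num_items + 1))))
    return (146.0 / math.pi) * theta - 23.0

# e.g. 60 of 100 keywords correct
print(round(rau(60, 100), 1))
```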
15.
For a group of 30 hearing-impaired subjects and a matched group of 15 normal-hearing subjects (age range 13-17) the following data were collected: the tone audiogram, the auditory bandwidth at 1000 Hz, and the recognition threshold of a short melody presented simultaneously with two other melodies, lower and higher in frequency, respectively. The threshold was defined as the frequency distance required to recognize the test melody. It was found that, whereas the mean recognition threshold for the normal-hearing subjects was five semitones, it was, on the average, 27 semitones for the hearing-impaired subjects. Although the interindividual spread for the latter group was large, it did not correlate with the subjects' auditory bandwidth, nor with their musical experience or education.
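Since the recognition thresholds are reported in semitones, a one-line conversion (Python sketch) makes the group difference concrete as a frequency ratio.

```python
def semitones_to_ratio(n_semitones):
    """Frequency ratio spanned by n equal-tempered semitones."""
    return 2.0 ** (n_semitones / 12.0)

print(semitones_to_ratio(5))    # ~1.33: normal-hearing mean threshold
print(semitones_to_ratio(27))   # ~4.76: hearing-impaired mean threshold
```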
16.
Speech produced in the presence of noise (Lombard speech) is more intelligible in noise than speech produced in quiet, but the origin of this advantage is poorly understood. Some of the benefit appears to arise from auditory factors such as energetic masking release, but a role for linguistic enhancements similar to those exhibited in clear speech is possible. The current study examined the effect of Lombard speech in noise and in quiet for Spanish learners of English. Non-native listeners showed a substantial benefit of Lombard speech in noise, although not quite as large as that displayed by native listeners tested on the same task in an earlier study [Lu and Cooke (2008), J. Acoust. Soc. Am. 124, 3261-3275]. The difference between the two groups is unlikely to be due to energetic masking. However, Lombard speech was less intelligible in quiet for non-native listeners than normal speech. The relatively small difference in Lombard benefit in noise for native and non-native listeners, along with the absence of Lombard benefit in quiet, suggests that any contribution of linguistic enhancements in the Lombard benefit for natives is small.
17.
In a multiple-observation, sample-discrimination experiment, normal-hearing (NH) and hearing-impaired (HI) listeners heard two multitone complexes, each consisting of six simultaneous tones with nominal frequencies spaced evenly on an ERB(N) logarithmic scale between 257 and 6930 Hz. On every trial, the frequency of each tone was sampled from a normal distribution centered near its nominal frequency. In one interval of a 2IFC task, all tones were sampled from distributions lower in mean frequency and in the other interval from distributions higher in mean frequency. Listeners had to identify the latter interval. Decision weights were obtained from multiple regression analysis of the between-interval frequency differences for each tone and listeners' responses. Frequency difference limens (an index of sensorineural resolution) and decision weights for each tone were used to predict the sensitivity of different decision-theoretic models. Results indicate that low-frequency tones were given much greater perceptual weight than high-frequency tones by both groups of listeners. This tendency increased as hearing loss increased and as sensorineural resolution decreased, resulting in significantly less efficient weighting strategies for the HI listeners. Overall, results indicate that HI listeners integrated frequency information less optimally than NH listeners, even after accounting for differences in sensorineural resolution.
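Two pieces of the method above lend themselves to a short sketch (Python/NumPy, assuming a linear-regression approximation to the paper's weight analysis): computing six nominal frequencies evenly spaced on the ERB_N-number scale between 257 and 6930 Hz (Glasberg and Moore's formula), and estimating relative decision weights by regressing trial-by-trial responses on the six between-interval frequency differences.

```python
import numpy as np

def erb_number(f_hz):
    """ERB_N-number scale (Glasberg & Moore, 1990)."""
    return 21.4 * np.log10(4.37 * f_hz / 1000.0 + 1.0)

def inv_erb_number(e):
    """Inverse of erb_number: ERB_N-number back to frequency in Hz."""
    return (10.0 ** (e / 21.4) - 1.0) * 1000.0 / 4.37

# Six nominal tone frequencies evenly spaced on the ERB_N scale between 257 and 6930 Hz
nominal_freqs = inv_erb_number(np.linspace(erb_number(257.0), erb_number(6930.0), 6))

def decision_weights(delta_f, responses):
    """Relative decision weights: regress responses (0/1, one per trial) on the six
    between-interval frequency differences (trials x 6 array) and normalise the coefficients."""
    X = np.column_stack([delta_f, np.ones(len(delta_f))])      # add an intercept term
    coef, *_ = np.linalg.lstsq(X, np.asarray(responses, float), rcond=None)
    weights = coef[:-1]
    return weights / np.sum(np.abs(weights))
```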
18.
The goal of this study was to measure the ability of adult hearing-impaired listeners to discriminate formant frequency for vowels in isolation, syllables, and sentences. Vowel formant discrimination for F1 and F2 for the vowels /ɪ, ɛ, æ/ was measured. Four experimental factors were manipulated including linguistic context (isolated vowels, syllables, and sentences), signal level (70 and 95 dB SPL), formant frequency, and cognitive load. A complex identification task was added to the formant discrimination task only for sentences to assess effects of cognitive load. Results showed significant elevation in formant thresholds as formant frequency and linguistic context increased. Higher signal level also elevated formant thresholds primarily for F2. However, no effect of the additional identification task on the formant discrimination was observed. In comparable conditions, these hearing-impaired listeners had elevated thresholds for formant discrimination compared to young normal-hearing listeners primarily for F2. Altogether, poorer performance for formant discrimination for these adult hearing-impaired listeners was mainly caused by hearing loss rather than cognitive difficulty for tasks implemented in this study.
19.
Dorman MF, Marton K, Hannley MT, Lindholm JM. The Journal of the Acoustical Society of America, 1985, 77(2): 664-670
Young normal-hearing listeners, elderly normal-hearing listeners, and elderly hearing-impaired listeners were tested on a variety of phonetic identification tasks. Where identity was cued by stimulus duration, the elderly hearing-impaired listeners evidenced normal identification functions. On a task in which there were multiple cues to vowel identity, performance was also normal. On a /b d g/ identification task in which the starting frequency of the second formant was varied, performance was abnormal for both the elderly hearing-impaired listeners and the elderly normal-hearing listeners. We conclude that errors in phonetic identification among elderly hearing-impaired listeners with mild to moderate, sloping hearing impairment do not stem from abnormalities in processing stimulus duration. The results with the /b d g/ continuum suggest that one factor underlying errors may be an inability to base identification on dynamic spectral information when relatively static information, which is normally characteristic of a phonetic segment, is unavailable.