Similar documents
20 similar documents found (search time: 38 ms)
1.
Cochlear implant users report difficulty understanding speech in both noisy and reverberant environments. Electric-acoustic stimulation (EAS) is known to improve speech intelligibility in noise. However, little is known about the potential benefits of EAS in reverberation, or about how such benefits relate to those observed in noise. The present study used EAS simulations to examine these questions. Sentences were convolved with impulse responses from a model of a room whose estimated reverberation times were varied from 0 to 1 sec. These reverberated stimuli were then vocoded to simulate electric stimulation, or presented as a combination of vocoder plus low-pass filtered speech to simulate EAS. Monaural sentence recognition scores were measured in two conditions: reverberated speech and speech in a reverberated noise. The long-term spectrum and amplitude modulations of the noise were equated to the reverberant energy, allowing a comparison of the effects of the interferer (speech vs noise). Results indicate that, at least in simulation, (1) EAS provides significant benefit in reverberation; (2) the benefits of EAS in reverberation may be underestimated by those in a comparable noise; and (3) the EAS benefit in reverberation likely arises from partially preserved cues in this background accessible via the low-frequency acoustic component.
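The simulation pipeline described in this abstract (reverberate the speech, then vocode it for the "electric" channel and low-pass filter it for the "acoustic" channel) can be sketched as follows. This is a minimal illustration, not the authors' code; the sample rate, channel count, band edges, and the 500 Hz acoustic cutoff are assumed values, not parameters reported in the study.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

FS = 16000  # sample rate in Hz (an assumption, not from the abstract)

def bandpass(x, lo, hi, fs=FS):
    # 4th-order Butterworth band-pass, zero-phase filtered
    sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def noise_vocoder(x, n_channels=8, f_lo=300.0, f_hi=6000.0, fs=FS):
    """Simulate electric stimulation: split into log-spaced bands,
    extract each band's Hilbert envelope, and use it to modulate
    band-limited noise carriers."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    rng = np.random.default_rng(0)
    out = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        env = np.abs(hilbert(bandpass(x, lo, hi, fs)))      # band envelope
        carrier = bandpass(rng.standard_normal(len(x)), lo, hi, fs)
        out += env * carrier
    return out

def simulate_eas(speech, rir, cutoff=500.0, fs=FS):
    """Reverberate, then combine the vocoded 'electric' part with a
    low-pass 'acoustic' part (cutoff frequency is an assumed value)."""
    reverberant = np.convolve(speech, rir)[: len(speech)]
    electric = noise_vocoder(reverberant, fs=fs)
    sos = butter(4, cutoff, btype="low", fs=fs, output="sos")
    acoustic = sosfiltfilt(sos, reverberant)
    return electric + acoustic
```

Dropping the `acoustic` term from the sum gives the vocoder-only (electric-only) condition, so the same sketch covers both simulated listening modes.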

2.
Listeners were asked to identify modified recordings of the words "sir" and "stir," which were spoken by an adult male British-English speaker. Steps along a continuum between the words were obtained by a pointwise interpolation of their temporal-envelopes. These test words were embedded in a longer "context" utterance, and played with different amounts of reverberation. Increasing only the test-word's reverberation shifts the listener's category boundary so that more "sir"-identifications are made. This effect reduces when the context's reverberation is also increased, indicating perceptual compensation that is informed by the context. Experiment 1 finds that compensation is more prominent in rapid speech, that it varies between rooms, that it is more prominent when the test-word's reverberation is high, and that it increases with the context's reverberation. Further experiments show that compensation persists when the room is switched between the context and the test word, when presentation is monaural, and when the context is reversed. However, compensation reduces when the context's reverberation pattern is reversed, as well as when noise-versions of the context are used. "Tails" that reverberation introduces at the ends of sounds and at spectral transitions may inform the compensation mechanism about the amount of reflected sound in the signal.

3.
The effects of noise and reverberation on the identification of monophthongs and diphthongs were evaluated for ten subjects with moderate sensorineural hearing losses. Stimuli were 15 English vowels spoken in a /b-t/ context, in a carrier sentence. The original tape was recorded without reverberation, in a quiet condition. This test tape was degraded either by recording in a room with reverberation time of 1.2 s, or by adding a babble of 12 voices at a speech-to-noise ratio of 0 dB. Both types of degradation caused statistically significant reductions of mean identification scores as compared to the quiet condition. Although the mean identification scores for the noise and reverberant conditions were not significantly different, the patterns of errors for these two conditions were different. Errors for monophthongs in reverberation but not in noise seemed to be related to an overestimation of vowel duration, and there was a tendency to weight the formant frequencies differently in the reverberation and quiet conditions. Errors for monophthongs in noise seemed to be related to spectral proximity of formant frequencies for confused pairs. For the diphthongs in both noise and reverberation, there was a tendency to judge a diphthong as the beginning monophthong. This may have been due to temporal smearing in the reverberation condition, and to a higher masked threshold for changing compared to stationary formant frequencies in the noise condition.

4.
Speech reception thresholds were measured in virtual rooms to investigate the influence of reverberation on speech intelligibility for spatially separated targets and interferers. The measurements were realized under headphones, using target sentences and noise or two-voice interferers. The room simulation allowed variation of the absorption coefficient of the room surfaces independently for target and interferer. The direct-to-reverberant ratio and interaural coherence of sources were also varied independently by considering binaural and diotic listening. The main effect of reverberation on the interferer was binaural and mediated by the coherence, in agreement with binaural unmasking theories. It appeared at lower reverberation levels than the effect of reverberation on the target, which was mainly monaural and associated with the direct-to-reverberant ratio, and could be explained by the loss of amplitude modulation in the reverberant speech signals. This effect was slightly smaller when listening binaurally. Reverberation might also be responsible for a disruption of the mechanism by which the auditory system exploits fundamental frequency differences to segregate competing voices, and a disruption of the "listening in the gaps" associated with speech interferers. These disruptions may explain an interaction observed between the effects of reverberation on the targets and two-voice interferers.
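The direct-to-reverberant ratio used in this abstract can be computed from a measured or simulated room impulse response. The sketch below is a minimal illustration; the 2.5 ms window around the direct-path peak is a common convention, not a parameter taken from this study.

```python
import numpy as np

def direct_to_reverberant_ratio(rir, fs, direct_window_ms=2.5):
    """DRR in dB: energy inside a short window around the direct-path
    peak of the impulse response, relative to all later (reverberant)
    energy."""
    peak = int(np.argmax(np.abs(rir)))
    half = int(direct_window_ms * 1e-3 * fs)
    lo, hi = max(0, peak - half), peak + half + 1
    direct_energy = np.sum(rir[lo:hi] ** 2)
    reverberant_energy = np.sum(rir[hi:] ** 2)
    return 10.0 * np.log10(direct_energy / reverberant_energy)
```

Increasing the surface absorption in a room simulation shortens and weakens the reverberant tail, which raises the DRR; that is the monaural quantity the abstract associates with the effect of reverberation on the target.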

5.
Listening conditions in everyday life typically include a combination of reverberation and nonstationary background noise. It is well known that sentence intelligibility is adversely affected by these factors. To assess their combined effects, an approach is introduced which combines two methods of predicting speech intelligibility, the extended speech intelligibility index (ESII) and the speech transmission index. First, the effects of reverberation on nonstationary noise (i.e., reduction of masker modulations) and on speech modulations are evaluated separately. Subsequently, the ESII is applied to predict the speech reception threshold (SRT) in the masker with reduced modulations. To validate this approach, SRTs were measured for ten normal-hearing listeners, in various combinations of nonstationary noise and artificially created reverberation. After taking the characteristics of the speech corpus into account, results show that the approach accurately predicts SRTs in nonstationary noise and reverberation for normal-hearing listeners. Furthermore, it is shown that, when reverberation is present, the benefit from masker fluctuations may be substantially reduced.

6.
Two experiments investigated the impact of reverberation and masking on speech understanding using cochlear implant (CI) simulations. Experiment 1 tested sentence recognition in quiet. Stimuli were processed with reverberation simulation (T=0.425, 0.266, 0.152, and 0.0 s) and then either processed with vocoding (6, 12, or 24 channels) or were subjected to no further processing. Reverberation alone had only a small impact on perception when as few as 12 channels of information were available. However, when the processing was limited to 6 channels, perception was extremely vulnerable to the effects of reverberation. In experiment 2, subjects listened to reverberated sentences, through 6- and 12-channel processors, in the presence of either speech-spectrum noise (SSN) or two-talker babble (TTB) at various target-to-masker ratios. The combined impact of reverberation and masking was profound, although there was no interaction between the two effects. This differs from results obtained in subjects listening to unprocessed speech where interactions between reverberation and masking have been shown to exist. A speech transmission index (STI) analysis indicated a reasonably good prediction of speech recognition performance. Unlike previous investigations, the SSN and TTB maskers produced equivalent results, raising questions about the role of informational masking in CI processed speech.

7.
The spectral envelope is a major determinant of the perceptual identity of many classes of sound including speech. When sounds are transmitted from the source to the listener, the spectral envelope is invariably and diversely distorted, by factors such as room reverberation. Perceptual compensation for spectral-envelope distortion was investigated here. Carrier sounds were distorted by spectral envelope difference filters whose frequency response is the spectral envelope of one vowel minus the spectral envelope of another. The filter /I/ minus /e/ and its inverse were used. Subjects identified a test sound that followed the carrier. The test sound was drawn from an /Itch/ to /etch/ continuum. Perceptual compensation produces a phoneme boundary difference between /I/ minus /e/ and its inverse. Carriers were the phrase "the next word is" spoken by the same (male) speaker as the test sounds, signal-correlated noise derived from this phrase, the same phrase spoken by a female speaker, male and female versions played backwards, and a repeated end-point vowel. The carrier and test were presented to the same ear, to different ears, and from different apparent directions (by varying interaural time delay). The results show that compensation is unlike peripheral phenomena, such as adaptation, and unlike phonetic perceptual phenomena. The evidence favors a central, auditory mechanism.

8.
Speech-intelligibility tests auralized in a virtual classroom were used to investigate the optimal reverberation times for verbal communication for normal-hearing and hearing-impaired adults. The idealized classroom had simple geometry, uniform surface absorption, and an approximately diffuse sound field. It contained a speech source, a listener at a receiver position, and a noise source located at one of two positions. The relative output levels of the speech and noise sources were varied, along with the surface absorption and the corresponding reverberation time. The binaural impulse responses of the speech and noise sources in each classroom configuration were convolved with Modified Rhyme Test (MRT) and babble-noise signals. The resulting signals were presented to normal-hearing and hearing-impaired adult subjects to identify the configurations that gave the highest speech intelligibilities for the two groups. For both subject groups, when the speech source was closer to the listener than the noise source, the optimal reverberation time was zero. When the noise source was closer to the listener than the speech source, the optimal reverberation time included both zero and nonzero values. The results generally support previous theoretical results.

9.
A noise-correction method is used to address the bias in seabed reflection-coefficient amplitude parameters inverted from reverberation data under low reverberation-to-noise-ratio conditions. The incoherent mean reverberation intensity from full wave-field reverberation theory is superposed with the mean ocean ambient-noise intensity to obtain a mean-reverberation-intensity model that includes noise, and the influence of the mean background-noise intensity on the inversion of the seabed reflection-coefficient amplitude parameter is analyzed. Numerical simulations show that the superposed background-noise intensity flattens the decay of the mean reverberation intensity, biasing the inverted seabed reflection-coefficient amplitude parameter low; the lower the reverberation-to-noise ratio, the stronger the effect. A correction method that subtracts the mean noise intensity is proposed and applied to mean-reverberation-intensity data with reverberation-to-noise ratios in the range [0, 6] dB; inversion with the corrected reverberation data yields amplitude parameters with smaller bias. The feasibility of the proposed method is verified by comparing the seabed reflection-coefficient amplitude parameters inverted from shallow-water reverberation experiment data attenuated to different reverberation-to-noise ratios with the inversion results obtained after noise correction.

10.
The potential effects of acoustical environment on speech understanding are especially important as children enter school, where students' ability to hear and understand complex verbal information is critical to learning. However, this ability is compromised because of widely varied and unfavorable classroom acoustics. The extent to which unfavorable classroom acoustics affect children's performance on longer learning tasks is largely unknown, as most research has focused on testing children using words, syllables, or sentences as stimuli. In the current study, a simulated classroom environment was used to measure comprehension performance on two classroom learning activities: a discussion and a lecture. Comprehension performance was measured for groups of elementary-aged students in one of four environments with varied reverberation times and background noise levels. The reverberation time was either 0.6 or 1.5 s, and the signal-to-noise ratio was either +10 or +7 dB. Performance was compared to that of adult subjects, as well as to sentence recognition in the same conditions. Significant differences were seen in comprehension scores as a function of age and condition; increasing background noise and reverberation both degraded performance on the comprehension tasks, whereas differences in sentence-recognition measures were minimal.

11.
Perceptual distances among single tokens of American English vowels were established for nonreverberant and reverberant conditions. Fifteen vowels in the phonetic context (b-t), embedded in the sentence "Mark the (b-t) again" were recorded by a male talker. For the reverberant condition, the sentences were played through a room with a reverberation time of 1.2 s. The CVC syllables were removed from the sentences and presented in pairs to ten subjects with audiometrically normal hearing, who judged the similarity of the syllable pairs separately for the nonreverberant and reverberant conditions. The results were analyzed by multidimensional scaling procedures, which showed that the perceptual data were accounted for by a three-dimensional vowel space. Correlations were obtained between the coordinates of the vowels along each dimension and selected acoustic parameters. For both conditions, dimensions 1 and 2 were highly correlated with formant frequencies F2 and F1, respectively, and dimension 3 was correlated with the product of the duration of the vowels and the difference between F3 and F1 expressed on the Bark scale. These observations are discussed in terms of the influence of reverberation on speech perception.

12.
This study investigated the effect of mild-to-moderate sensorineural hearing loss on the ability to identify speech in noise for vowel-consonant-vowel tokens that were either unprocessed, amplitude modulated synchronously across frequency, or amplitude modulated asynchronously across frequency. One goal of the study was to determine whether hearing-impaired listeners have a particular deficit in the ability to integrate asynchronous spectral information in the perception of speech. Speech tokens were presented at a high, fixed sound level and the level of a speech-shaped noise was changed adaptively to estimate the masked speech identification threshold. The performance of the hearing-impaired listeners was generally worse than that of the normal-hearing listeners, but the impaired listeners showed particularly poor performance in the synchronous modulation condition. This finding suggests that integration of asynchronous spectral information does not pose a particular difficulty for hearing-impaired listeners with mild/moderate hearing losses. Results are discussed in terms of common mechanisms that might account for poor speech identification performance of hearing-impaired listeners when either the masking noise or the speech is synchronously modulated.

13.
Perceptual compensation for reverberation was measured by embedding test words in contexts that were either spoken phrases or processed versions of this speech. The processing gave steady-spectrum contexts with no changes in the shape of the short-term spectral envelope over time, but with fluctuations in the temporal envelope. Test words were from a continuum between "sir" and "stir." When the amount of reverberation in test words was increased, to a level above the amount in the context, they sounded more like "sir." However, when the amount of reverberation in the context was also increased, to the level present in the test word, there was perceptual compensation in some conditions so that test words sounded more like "stir" again. Experiments here found compensation with speech contexts and with some steady-spectrum contexts, indicating that fluctuations in the context's temporal envelope can be sufficient for compensation. Other results suggest that the effectiveness of speech contexts is partly due to the narrow-band "frequency-channels" of the auditory periphery, where temporal-envelope fluctuations can be more pronounced than they are in the sound's broadband temporal envelope. Further results indicate that for compensation to influence speech, the context needs to be in a broad range of frequency channels.

14.
Speech intelligibility metrics that take into account sound reflections in the room and the background noise have been compared, assuming diffuse sound field. Under this assumption, sound decays exponentially with a decay constant inversely proportional to reverberation time. Analytical formulas were obtained for each speech intelligibility metric providing a common basis for comparison. These formulas were applied to three sizes of rectangular classrooms. The sound source was the human voice without amplification, and background noise was taken into account by a noise-to-signal ratio. Correlations between the metrics and speech intelligibility are presented and applied to the classrooms under study. Relationships between some speech intelligibility metrics were also established. For each noise-to-signal ratio, the value of each speech intelligibility metric is maximized for a specific reverberation time. For quiet classrooms, the reverberation time that maximizes these speech intelligibility metrics is between 0.1 and 0.3 s. Speech intelligibility of 100% is possible with reverberation times up to 0.4-0.5 s and this is the recommended range. The study suggests "ideal" and "acceptable" maximum background-noise level for classrooms of 25 and 20 dB, respectively, below the voice level at 1 m in front of the talker.
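The diffuse-field assumption in this abstract implies that sound energy decays as exp(-kt) with decay constant k = 6 ln(10) / RT60, so the level falls by exactly 60 dB at t = RT60. A small sketch of that relation, as an illustration of the decay model rather than any specific metric from the study:

```python
import math

def decay_level_db(t, rt60):
    """Level drop (in dB) of a diffuse-field exponential energy decay
    at time t. Energy follows exp(-k t) with k = 6 ln(10) / RT60
    (roughly 13.8 / RT60), which yields -60 dB at t = RT60."""
    k = 6.0 * math.log(10.0) / rt60   # decay constant, inversely
                                      # proportional to reverberation time
    energy_ratio = math.exp(-k * t)
    return 10.0 * math.log10(energy_ratio)
```

Because the decay is exponential, the level drop is linear in time: halving the elapsed time halves the drop in dB, which is what makes the analytical treatment of the metrics in the abstract tractable.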

15.
This study was designed to characterize the effect of background noise on the identification of syllables using behavioral and electrophysiological measures. Twenty normal-hearing adults (18-30 years) performed an identification task in a two-alternative forced-choice paradigm. Stimuli consisted of naturally produced syllables [da] and [ga] embedded in white noise. The noise was initiated 1000 ms before the onset of the speech stimuli in order to separate the auditory event related potentials (AERP) response to noise onset from that to the speech. Syllables were presented in quiet and in five SNRs: +15, +3, 0, -3, and -6 dB. Results show that (1) performance accuracy, d', and reaction time were affected by the noise, more so for reaction time; (2) both N1 and P3 latency were prolonged as noise levels increased, more so for P3; (3) [ga] was better identified than [da], in all noise conditions; and (4) P3 latency was longer for [da] than for [ga] for SNR 0 through -6 dB, while N1 latency was longer for [ga] than for [da] in most listening conditions. In conclusion, the unique stimuli structure utilized in this study demonstrated the effects of noise on speech recognition at both the physical and the perceptual processing levels.

16.
Speech perception by subjects with sensorineural hearing impairment was studied using various types of short-term (syllabic) amplitude compression. Average speech level was approximately constant. In quiet, a single-channel wideband compression (WBC) with compression ratio equal to 10, attack time 10 ms and release time 90 ms produced significantly higher scores than a three-channel multiband compression (MBC) or no compression when a nonsense syllable test (City University of New York) was used. The scores under MBC, WBC, or no compression were not significantly different when the modified rhyme test (MRT) was used. But when overshoots caused by compression were clipped, the MRT scores improved significantly. The influence of MBC on reverberant speech and of WBC on noisy speech were tested with the MRT. Reverberation reduced the scores, and this reduction was the same with compression as without. Noise added to speech before compression also reduced the scores, but the reduction was larger with compression than without. When noise was added after compression, an improvement was observed when WBC had a compression ratio of about 5, attack time 1 ms, and release time 30 ms. Other compression modes (e.g., with high-frequency pre-emphasis) did not show an improvement. The results indicate that WBC with a compression ratio around 5, attack time shorter than 3 ms, and release time between 30 and 90 ms can be beneficial if signal-to-noise ratio is large, or, if in a noisy or reverberant environment, the effects of noise or reverberation are eliminated by using listening systems.
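A wideband syllabic compressor of the kind this abstract describes (compression ratio 10, attack 10 ms, release 90 ms) can be sketched with a one-pole envelope follower driving a static gain law. This is an illustrative implementation under stated assumptions, not the processing used in the study: the power-law gain rule and the reference level `ref` are conventional choices introduced here, not parameters from the paper.

```python
import math

def envelope_follower(x, fs, attack_ms, release_ms):
    """One-pole envelope follower with separate attack and release
    time constants (smoothing is faster when the signal rises)."""
    a_att = math.exp(-1.0 / (fs * attack_ms * 1e-3))
    a_rel = math.exp(-1.0 / (fs * release_ms * 1e-3))
    env, out = 0.0, []
    for s in x:
        m = abs(s)
        a = a_att if m > env else a_rel
        env = a * env + (1.0 - a) * m
        out.append(env)
    return out

def compress(x, fs, ratio=10.0, attack_ms=10.0, release_ms=90.0, ref=1.0):
    """Wideband syllabic compressor sketch: level changes relative to
    `ref` are reduced by `ratio` via the gain (env/ref)^(1/ratio - 1)."""
    env = envelope_follower(x, fs, attack_ms, release_ms)
    out = []
    for s, e in zip(x, env):
        gain = (e / ref) ** (1.0 / ratio - 1.0) if e > 1e-9 else 1.0
        out.append(s * gain)
    return out
```

The finite attack time is what produces the overshoots mentioned in the abstract: the gain cannot fall instantly when a syllable onset arrives, so the first few milliseconds pass through at too high a level until the envelope catches up.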

17.
Articulation index (AI) theory was used to evaluate stop-consonant recognition of normal-hearing listeners and listeners with high-frequency hearing loss. From results reported in a companion article [Dubno et al., J. Acoust. Soc. Am. 85, 347-354 (1989)], a transfer function relating the AI to stop-consonant recognition was established, and a frequency importance function was determined for the nine stop-consonant-vowel syllables used as test stimuli. The calculations included the rms and peak levels of the speech that had been measured in 1/3 octave bands; the internal noise was estimated from the thresholds for each subject. The AI model was then used to predict performance for the hearing-impaired listeners. A majority of the AI predictions for the hearing-impaired subjects fell within +/- 2 standard deviations of the normal-hearing listeners' results. However, as observed in previous data, the AI tended to overestimate performance of the hearing-impaired listeners. The accuracy of the predictions decreased with the magnitude of high-frequency hearing loss. Thus, with the exception of performance for listeners with severe high-frequency hearing loss, the results suggest that poorer speech recognition among hearing-impaired listeners results from reduced audibility within critical spectral regions of the speech stimuli.

18.
Detailed acoustical measurements were made in 41 working elementary school classrooms near Ottawa, Canada to obtain more representative and more accurate indications of the acoustical quality of conditions for speech communication during actual teaching activities. This paper describes the room acoustics characteristics and noise environment of 27 traditional rectangular classrooms from the 41 measured rooms. The purpose of the work was to better understand how to improve speech communication between teachers and students. The study found that, on average, the students experienced teacher speech levels of 60.4 dBA, noise levels of 49.1 dBA, and a mean speech-to-noise ratio of 11 dB during teaching activities. The mean reverberation time in the occupied classrooms was 0.41 s, which was 10% less than in the unoccupied rooms. The reverberation time measurements were used to determine the average absorption added by each student. Detailed analyses of early and late-arriving speech sounds showed that these sound levels could be predicted quite accurately, and suggested improved approaches to room acoustics design.
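The per-student absorption figure in this abstract is derived from the occupied versus unoccupied reverberation times, presumably via Sabine's formula RT60 = 0.161 V / A. A sketch under that assumption; the room volume, reverberation times, and occupant count in the test are hypothetical example values, not measurements from the study.

```python
def sabine_absorption(volume_m3, rt60_s):
    """Total absorption area A (m^2 Sabine) from Sabine's formula
    RT60 = 0.161 * V / A, rearranged to A = 0.161 * V / RT60."""
    return 0.161 * volume_m3 / rt60_s

def absorption_per_person(volume_m3, rt_unoccupied_s, rt_occupied_s, n_people):
    """Average absorption added per occupant, inferred from the drop
    in reverberation time between the unoccupied and occupied room."""
    a_unoccupied = sabine_absorption(volume_m3, rt_unoccupied_s)
    a_occupied = sabine_absorption(volume_m3, rt_occupied_s)
    return (a_occupied - a_unoccupied) / n_people
```

Because occupants only add absorption, the occupied RT60 is always the shorter of the two, consistent with the 10% reduction reported above.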

19.
The purpose of this study was to reexamine the factors leading to stop-consonant perception for consonant-vowel (CV) stimuli with just two formants over a range of vowels, under both an open- and closed-response condition. Five two-formant CV stimulus continua were synthesized, each covering a range of second-formant (F2) starting frequencies, for vowels corresponding roughly to [i,I,ae,u,a]. In addition, for the [I] and [a] continua, the duration of the first-formant (F1) transition was systematically varied. Three main findings emerged. First, criterion-level labial and alveolar responses were obtained for those stimuli with substantial F2 transitions. Second, for some stimuli, increases in the duration of the F1 transition increased velar responses to criterion level. Third, the response paradigm had a substantial influence on stop-consonant perception across all vowel continua. The results support a model of stop-consonant perception that includes spectral and time-varying spectral properties as integral components of analysis.

20.
Vowel identification in quiet, noise, and reverberation was tested with 40 subjects who varied in age and hearing level. Stimuli were 15 English vowels spoken in a (b-t) context in a carrier sentence, which were degraded by reverberation or noise (a babble of 12 voices). Vowel identification scores were correlated with various measures of hearing loss and with age. The mean of four hearing levels at 0.5, 1, 2, and 4 kHz, termed HTL4, produced the highest correlation coefficients in all three listening conditions. The correlation with age was smaller than with HTL4 and significant only for the degraded vowels. Further analyses were performed for subjects assigned to four groups on the basis of the amount of hearing loss. In noise, performance of all four groups was significantly different, whereas, in both quiet and reverberation, only the group with the greatest hearing loss performed differently from the other groups. The relationship among hearing loss, age, and number and type of errors is discussed in light of acoustic cues available for vowel identification.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.) · 京ICP备09084417号