Similar Articles (20 results)
1.
The article deals with the question of why multichannel amplitude compression appears to have a negative rather than a positive effect on speech intelligibility for hearing-impaired listeners. It is argued that the small time constants of amplitude compression diminish the temporal as well as the spectral contrasts in the speech signal. According to the modulation-transfer function concept, this results in reduced intelligibility scores. Experimental evidence is reviewed indicating that the following two arguments in favor of amplitude compression in the case of sensorineural hearing loss are not valid: (1) to compensate for the effects of loudness recruitment and (2) to get weak consonants above threshold. The author concludes that, in multichannel hearing aids, automatic gain control with time constants of 0.25-0.5 s should be preferred over amplitude compression.

2.
Speech perception by subjects with sensorineural hearing impairment was studied using various types of short-term (syllabic) amplitude compression. Average speech level was approximately constant. In quiet, a single-channel wideband compression (WBC) with a compression ratio of 10, attack time of 10 ms, and release time of 90 ms produced significantly higher scores than a three-channel multiband compression (MBC) or no compression when a nonsense syllable test (City University of New York) was used. The scores under MBC, WBC, or no compression were not significantly different when the modified rhyme test (MRT) was used, but when overshoots caused by compression were clipped, the MRT scores improved significantly. The influences of MBC on reverberant speech and of WBC on noisy speech were tested with the MRT. Reverberation reduced the scores, and this reduction was the same with compression as without. Noise added to speech before compression also reduced the scores, but the reduction was larger with compression than without. When noise was added after compression, an improvement was observed when WBC had a compression ratio of about 5, attack time of 1 ms, and release time of 30 ms. Other compression modes (e.g., with high-frequency pre-emphasis) did not show an improvement. The results indicate that WBC with a compression ratio around 5, attack time shorter than 3 ms, and release time between 30 and 90 ms can be beneficial if the signal-to-noise ratio is large or if, in a noisy or reverberant environment, the effects of noise or reverberation are eliminated by using listening systems.
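For readers unfamiliar with syllabic compression, the sketch below shows one way to implement a single-channel compressor whose envelope follower has separate attack and release time constants, as in the WBC conditions described above. The sampling rate, threshold, and test signal are illustrative assumptions, not the processing used in the study.

```python
import numpy as np

def wideband_compressor(x, fs, ratio=10.0, attack_ms=10.0, release_ms=90.0,
                        threshold_db=-40.0):
    """Single-channel syllabic compressor sketch: an envelope follower with
    separate attack/release time constants drives a static compression curve."""
    # Smoothing coefficients derived from the attack and release times.
    att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (fs * release_ms / 1000.0))

    env = np.zeros(len(x))
    level = 1e-8
    for n, sample in enumerate(np.abs(x)):
        coeff = att if sample > level else rel   # track rises faster than decays
        level = coeff * level + (1.0 - coeff) * sample
        env[n] = level

    env_db = 20.0 * np.log10(np.maximum(env, 1e-8))
    # Above threshold, the output level grows at 1/ratio of the input level.
    gain_db = np.where(env_db > threshold_db,
                       threshold_db + (env_db - threshold_db) / ratio - env_db,
                       0.0)
    return x * 10.0 ** (gain_db / 20.0)

# Example: compress a 1-s test tone with a level step, sampled at 16 kHz (assumed).
fs = 16000
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 440 * t) * (0.2 + 0.8 * (t > 0.5))
y = wideband_compressor(x, fs)
```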

3.
Sensorineural hearing loss is accompanied by loudness recruitment, a steeper-than-normal rise of perceived loudness with presentation level. To compensate for this abnormality, amplitude compression is often applied (e.g., in a hearing aid). Alternatively, since speech intelligibility has been modeled as the perception of fast energy fluctuations, enlarging these (by means of expansion) may improve speech intelligibility. Still, even if these signal-processing techniques prove useful in terms of speech intelligibility, practical application might be hindered by unacceptably low sound quality. Therefore, both speech intelligibility and sound quality were evaluated for syllabic compression and expansion of the temporal envelope. Speech intelligibility was evaluated with an adaptive procedure, based on short everyday sentences either in noise or with a competing speaker. Sound quality was measured by means of a rating-scale procedure, for both speech and music. In a systematic setup, both the ratio of compression or expansion and the number of independent processing bands were varied. Individual hearing thresholds were compensated for by a listener-specific filter and amplification. Both listeners with normal hearing and listeners with sensorineural hearing impairment participated as paid volunteers. The results show that, on average, both compression and expansion fail to show better speech intelligibility or sound quality than linear amplification.

4.
Relations between perception of suprathreshold speech and auditory functions were examined in 24 hearing-impaired listeners and 12 normal-hearing listeners. The speech intelligibility index (SII) was used to account for audibility. The auditory functions included detection efficiency, temporal and spectral resolution, temporal and spectral integration, and discrimination of intensity, frequency, rhythm, and spectro-temporal shape. All auditory functions were measured at 1 kHz. Speech intelligibility was assessed with the speech-reception threshold (SRT) in quiet and in noise, and with the speech-reception bandwidth threshold (SRBT), previously developed for investigating speech perception in a limited frequency region around 1 kHz. The results showed that the elevated SRT in quiet could be explained on the basis of audibility. Audibility could only partly account for the elevated SRT values in noise and the deviant SRBT values, suggesting that suprathreshold deficits affected intelligibility in these conditions. SII predictions for the SRBT improved significantly by including the individually measured upward spread of masking in the SII model. Reduced spectral resolution, reduced temporal resolution, and reduced frequency discrimination appeared to be related to speech perception deficits. Loss of peripheral compression appeared to have the smallest effect on the intelligibility of suprathreshold speech.

5.
The author proposed to adopt wide dynamic range compression and adaptive multichannel modulation-based noise reduction algorithms to enhance hearing protector performance. Three experiments were conducted to investigate the effects of compression and noise reduction configurations on the amount of noise reduction, speech intelligibility, and overall preferences using existing digital hearing aids. In Experiment 1, sentence materials were recorded in speech spectrum noise and white noise after being processed by eight digital hearing aids. When the hearing aids were set to 3:1 compression, the amount of noise reduction achieved was enhanced or maintained for hearing aids with parallel configurations, but reduced for hearing aids with serial configurations. In Experiments 2 and 3, 16 normal-hearing listeners' speech intelligibility and perceived sound quality were tested when they listened to speech recorded through hearing aids with parallel and serial configurations. Regardless of the configuration, the noise reduction algorithms reduced the noise level and maintained speech intelligibility in white noise. Additionally, when the noise reduction algorithms were activated, the listeners preferred the parallel configuration to the serial configuration under 3:1 compression, and preferred the serial configuration with 1:1 rather than 3:1 compression. Implications for hearing protector and hearing aid design are discussed.

6.
The effects of intensity on monosyllabic word recognition were studied in adults with normal hearing and mild-to-moderate sensorineural hearing loss. The stimuli were bandlimited NU#6 word lists presented in quiet and in talker-spectrum-matched noise. Speech levels ranged from 64 to 99 dB SPL and S/N ratios from 28 to -4 dB. In quiet, the performance of normal-hearing subjects remained essentially constant; in noise, at a fixed S/N ratio, it decreased as a linear function of speech level. Hearing-impaired subjects performed like normal-hearing subjects tested in noise when the data were corrected for the effects of audibility loss. From these and other results, it was concluded that: (1) speech intelligibility in noise decreases when speech levels exceed 69 dB SPL and the S/N ratio remains constant; (2) the effects of speech and noise level are synergistic; (3) the deterioration in intelligibility can be modeled as a relative increase in the effective masking level; (4) normal-hearing and hearing-impaired subjects are affected similarly by increased signal level when differences in speech audibility are considered; (5) the negative effects of increasing speech and noise levels on speech recognition are similar for all adult subjects, at least up to 80 years of age; and (6) the effective dynamic range of speech may be larger than the commonly assumed value of 30 dB.

7.
Despite the recognition that the steepness of filter slopes can play an important role in the intelligibility of bandpass speech, there has been no systematic examination of its importance. The present study used high orders of finite impulse response (FIR) filtering to produce slopes ranging from 150 to 10,000 dB/octave. The slopes flanked 1/3-octave passbands of everyday sentences having a center frequency of 1500 Hz (the region of highest intelligibility for the male speaker's voice). Presentation levels were approximately 75 and 45 dB. No significant differences were found for the two presentation levels. Average intelligibility scores ranged from 77% at 150 dB/octave down to the asymptotic intelligibility score of 12% at 4800 dB/octave. These results indicate that slopes of several thousand dB/octave may be required for accurate and unambiguous specification of the range of frequencies contributing to intelligibility of filtered speech. In addition, the extremely steep slopes are needed to ensure that none of the spectral components contributing to intelligibility has its relative importance diminished by spectral tilt.
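The very steep slopes referred to above can be approximated with a high-order linear-phase FIR band-pass filter. The sketch below builds a 1/3-octave passband centred on 1500 Hz; the sampling rate and filter length are assumptions chosen for illustration, and the length needed for a given dB/octave slope would have to be verified against the filter's measured frequency response.

```python
import numpy as np
from scipy.signal import firwin, lfilter

fs = 16000                      # assumed sampling rate
fc = 1500.0                     # passband centre frequency (Hz)
lo = fc / 2 ** (1 / 6)          # lower 1/3-octave band edge
hi = fc * 2 ** (1 / 6)          # upper 1/3-octave band edge

# A very high filter order gives very steep transition slopes; this order is
# illustrative, not the one used in the study.
numtaps = 8001
taps = firwin(numtaps, [lo, hi], pass_zero=False, fs=fs)

def third_octave_band(x):
    """Band-limit a speech signal to the 1/3-octave band around 1500 Hz."""
    return lfilter(taps, 1.0, x)
```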

8.
An evaluation of the effects of noise reduction algorithms on speech intelligibility is reported. IEEE sentences and consonants were corrupted by four types of noise, including babble, car, street, and train noise, at two signal-to-noise ratio levels (0 and 5 dB), and then processed by eight speech enhancement methods encompassing four classes of algorithms: spectral subtractive, subspace, statistical-model-based, and Wiener-type algorithms. The enhanced speech was presented to normal-hearing listeners for identification. With the exception of a single noise condition, no algorithm produced significant improvements in speech intelligibility. Information transmission analysis of the consonant confusion matrices indicated that no algorithm significantly improved the place feature score, which is critically important for speech recognition. The algorithms that were found in previous studies to perform best in terms of overall quality were not the same algorithms that performed best in terms of speech intelligibility. The subspace algorithm, for instance, was previously found to perform the worst in terms of overall quality, but performed well in the present study in terms of preserving speech intelligibility. Overall, the analysis of consonant confusion matrices suggests that in order for noise reduction algorithms to improve speech intelligibility, they need to improve the place and manner feature scores.

9.
The effects of six-channel compression and expansion amplification on the intelligibility of nonsense syllables embedded in speech spectrum noise were examined for four hearing-impaired subjects. For one condition (linear) the stimulus was given six-channel amplification with frequency shaping to suit the subject's hearing loss. The other condition (nonlinear) was the same except that low level inputs, to any given channel, received expansion amplification and high level inputs received compression. For each condition, each subject received the nonsense syllables at three different input levels, representing low, average, and high intensity speech. The results of this study, like those of most other studies of multichannel compression, are mainly negative. Nonlinear processing (mainly expansion) of low intensity speech resulted in a significant degradation of speech intelligibility for two subjects and in no improvement for the others. One subject showed a significant improvement in intelligibility for the nonlinearly processed average intensity speech and another subject showed significant improvement for the high intensity input (mainly compression). Clearly, nonlinear processing is beneficial for some subjects, under some listening conditions, but further research is needed to identify the relevant characteristics of such subjects. An acoustic analysis of selected items revealed that the failure of expansion to improve intelligibility was primarily due to the very low intensity consonants /e/ and /k/, in final position, being presented at an even lower intensity in the expansion condition than in the linear condition. Expansion may be worth further investigation with different parameters. Several other problems caused by the multichannel processing were also revealed. These included alteration of spectral shapes and band interaction effects. Ways of overcoming these problems, and of capitalizing on the likely advantages of multichannel amplification, are currently being investigated.

10.
The purpose of this study was to quantify the effect of timing errors on the intelligibility of deaf children's speech. Deviant timing patterns were corrected in the recorded speech samples of six deaf children using digital speech processing techniques. The speech waveform was modified to correct timing errors only, leaving all other aspects of the speech unchanged. The following six-stage approximation procedure was used to correct the deviant timing patterns: (1) original, unaltered utterances, (2) correction of pauses only, (3) correction of relative timing, (4) correction of absolute syllable duration, (5) correction of relative timing and pauses, and (6) correction of absolute syllable duration and pauses. Measures of speech intelligibility were obtained for the original and the computer-modified utterances. On the average, the highest intelligibility score was obtained when relative timing errors only were corrected. The correction of this type of error improved the intelligibility of both stressed and unstressed words within a phrase. Improvements in word intelligibility, which occurred when relative timing was corrected, appeared to be closely related to the number of phonemic errors present within a word. The second highest intelligibility score was obtained for the original, unaltered sentences. On the average, the intelligibility scores obtained for the other four forms of timing modification were poorer than those obtained for the original sentences. Thus, the data show that intelligibility improved, on the average, when only one type of error, relative timing, was corrected.

11.
Background noise reduces the depth of the low-frequency envelope modulations known to be important for speech intelligibility. The relative strength of the target and masker envelope modulations can be quantified using a modulation signal-to-noise ratio measure, (S/N)mod. Such a measure can be used in noise-suppression algorithms to extract target-relevant modulations from the corrupted (target + masker) envelopes for potential improvement in speech intelligibility. In the present study, envelopes are decomposed in the modulation spectral domain into a number of channels spanning the range 0-30 Hz. Target-dominant modulations are identified and retained in each channel based on the (S/N)mod selection criterion, while modulations which potentially interfere with perception of the target (i.e., those dominated by the masker) are discarded. The impact of modulation-selective processing on the speech-reception threshold for sentences in noise is assessed with normal-hearing listeners. Results indicate that the intelligibility of noise-masked speech can be improved by as much as 13 dB when preserving target-dominant modulations, present up to a modulation frequency of 18 Hz, while discarding masker-dominant modulations from the mixture envelopes.
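A minimal, single-acoustic-band sketch of the (S/N)mod selection idea follows: the mixture envelope is split into modulation channels spanning roughly 0-30 Hz, and only channels whose target-to-masker modulation energy exceeds a criterion are retained. Envelope sampling rate, channel count, filter orders, and the criterion value are all assumptions; the published algorithm operates on many acoustic channels and resynthesizes the waveform, which is not shown here.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def modulation_select(target, masker, fs, env_fs=200, n_chan=6, snr_crit_db=0.0):
    """Sketch of (S/N)mod-based selection in one acoustic band: decompose the
    mixture envelope into modulation channels and keep only the channels whose
    target-to-masker modulation energy exceeds the criterion."""
    mix = target + masker

    def envelope(x):
        env = np.abs(hilbert(x))
        step = int(fs / env_fs)
        return env[::step]                      # crude envelope downsampling

    env_t, env_m, env_mix = envelope(target), envelope(masker), envelope(mix)

    # Modulation channels spanning roughly 0-30 Hz.
    edges = np.linspace(0.5, 30.0, n_chan + 1)
    kept = np.zeros(len(env_mix))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(2, [lo, hi], btype='band', fs=env_fs, output='sos')
        t_band = sosfilt(sos, env_t)
        m_band = sosfilt(sos, env_m)
        snr_mod = 10 * np.log10(np.sum(t_band ** 2) / (np.sum(m_band ** 2) + 1e-12))
        if snr_mod > snr_crit_db:               # target-dominated: retain
            kept += sosfilt(sos, env_mix)
    return kept                                  # retained modulation content
```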

12.
Three experiments were conducted to study relative contributions of speaking rate, temporal envelope, and temporal fine structure to clear speech perception. Experiment I used uniform time scaling to match the speaking rate between clear and conversational speech. Experiment II decreased the speaking rate in conversational speech without processing artifacts by increasing silent gaps between phonetic segments. Experiment III created "auditory chimeras" by mixing the temporal envelope of clear speech with the fine structure of conversational speech, and vice versa. Speech intelligibility in normal-hearing listeners was measured over a wide range of signal-to-noise ratios to derive speech reception thresholds (SRT). The results showed that processing artifacts in uniform time scaling, particularly time compression, reduced speech intelligibility. Inserting gaps in conversational speech improved the SRT by 1.3 dB, but this improvement might be a result of increased short-term signal-to-noise ratios during level normalization. Data from auditory chimeras indicated that the temporal envelope cue contributed more to the clear speech advantage at high signal-to-noise ratios, whereas the temporal fine structure cue contributed more at low signal-to-noise ratios. Taken together, these results suggest that acoustic cues for the clear speech advantage are multiple and distributed.
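The "auditory chimera" construction can be illustrated with a single-band Hilbert decomposition, as sketched below: the envelope of one signal is combined with the fine structure (cosine of the instantaneous phase) of another. The published chimeras were built band by band across a filter bank; this sketch shows only one band.

```python
import numpy as np
from scipy.signal import hilbert

def auditory_chimera(envelope_source, fine_source):
    """Single-band auditory chimera sketch: temporal envelope of one signal
    carried on the temporal fine structure of another."""
    n = min(len(envelope_source), len(fine_source))
    a = hilbert(envelope_source[:n])
    b = hilbert(fine_source[:n])
    env = np.abs(a)                     # envelope of the first signal
    tfs = np.cos(np.angle(b))           # fine structure of the second signal
    return env * tfs
```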

13.
Recent evidence suggests that spectral change, as measured by cochlea-scaled entropy (CSE), predicts speech intelligibility better than the information carried by vowels or consonants in sentences. Motivated by this finding, the present study investigates whether intelligibility indices implemented to include segments marked with significant spectral change better predict speech intelligibility in noise than measures that include all phonetic segments, paying no attention to vowels/consonants or spectral change. The prediction of two intelligibility measures [normalized covariance measure (NCM), coherence-based speech intelligibility index (CSII)] is investigated using three sentence-segmentation methods: relative root-mean-square (RMS) levels, CSE, and traditional phonetic segmentation of obstruents and sonorants. While the CSE method makes no distinction between spectral changes occurring within vowels/consonants, the RMS-level segmentation method places more emphasis on the vowel-consonant boundaries, wherein the spectral change is often most prominent, and perhaps most robust, in the presence of noise. Higher correlation with intelligibility scores was obtained when including sentence segments containing a large number of consonant-vowel boundaries than when including segments with highest entropy or segments based on obstruent/sonorant classification. These data suggest that, in the context of intelligibility measures, the type of spectral change captured by the measure is important.
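As a rough illustration of the envelope-correlation family to which the NCM belongs, the sketch below correlates band envelopes of a clean and a processed signal, maps the correlation to an apparent SNR, and averages across bands. The band layout, SNR limits, and equal band weights are simplifying assumptions and do not reproduce the published NCM or CSII definitions.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def ncm_like(clean, processed, fs, f_lo=150.0, f_hi=4000.0, n_bands=10):
    """Rough normalized-covariance-style measure: band envelope correlations
    mapped to apparent SNRs, clipped, and averaged with equal band weights."""
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)
    scores = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype='band', fs=fs, output='sos')
        ec = np.abs(hilbert(sosfilt(sos, clean)))       # clean band envelope
        ep = np.abs(hilbert(sosfilt(sos, processed)))   # processed band envelope
        r = np.corrcoef(ec, ep)[0, 1]
        snr = 10 * np.log10(r ** 2 / max(1.0 - r ** 2, 1e-6))
        scores.append((np.clip(snr, -15.0, 15.0) + 15.0) / 30.0)  # map to [0, 1]
    return float(np.mean(scores))
```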

14.
Many hearing-impaired listeners suffer from distorted auditory processing capabilities. This study examines which aspects of auditory coding (i.e., intensity, time, or frequency) are distorted and how this affects speech perception. The distortion-sensitivity model is used: The effect of distorted auditory coding of a speech signal is simulated by an artificial distortion, and the sensitivity of speech intelligibility to this artificial distortion is compared for normal-hearing and hearing-impaired listeners. Stimuli (speech plus noise) are wavelet coded using a complex sinusoidal carrier with a Gaussian envelope (1/4 octave bandwidth). Intensity information is distorted by multiplying the modulus of each wavelet coefficient by a random factor. Temporal and spectral information are distorted by randomly shifting the wavelet positions along the temporal or spectral axis, respectively. Measured were (1) detection thresholds for each type of distortion, and (2) speech-reception thresholds for various degrees of distortion. For spectral distortion, hearing-impaired listeners showed increased detection thresholds and were also less sensitive to the distortion with respect to speech perception. For intensity and temporal distortion, this was not observed. Results indicate that a distorted coding of spectral information may be an important factor underlying reduced speech intelligibility for the hearing impaired.
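The intensity-distortion manipulation can be illustrated with the sketch below, which randomly perturbs the modulus of a time-frequency decomposition while leaving the phase untouched. A plain STFT stands in for the study's 1/4-octave Gaussian-envelope wavelet coding, and the perturbation strength is an arbitrary assumption.

```python
import numpy as np
from scipy.signal import stft, istft

def distort_intensity(x, fs, sigma_db=3.0, seed=0):
    """Randomly jitter the modulus of time-frequency coefficients (intensity
    distortion) while keeping the phase; STFT used in place of wavelet coding."""
    rng = np.random.default_rng(seed)
    f, t, Z = stft(x, fs=fs, nperseg=512)
    jitter = 10.0 ** (rng.normal(0.0, sigma_db, Z.shape) / 20.0)
    _, y = istft(np.abs(Z) * jitter * np.exp(1j * np.angle(Z)), fs=fs, nperseg=512)
    return y
```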

15.
The intelligibility of syllables whose cepstral trajectories were temporally filtered was measured. The speech signals were transformed to their LPC cepstral coefficients, and these coefficients were passed through different filters. These filtered trajectories were recombined with the residuals and the speech signal was reconstructed. The intelligibility of the reconstructed speech segments was then measured in two perceptual experiments using Japanese syllables. The effects of various low-pass, high-pass, and bandpass filters are reported, and the results are summarized using a theoretical approach based on the independence of the contributions in different modulation bands. The overall results suggest that speech intelligibility is not severely impaired as long as the filtered spectral components have a rate of change between 1 and 16 Hz.
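The modulation-filtering step applied to the cepstral trajectories might look like the sketch below, which band-pass filters each coefficient's trajectory across frames to keep modulation rates between roughly 1 and 16 Hz. The frame rate, filter order, and placeholder coefficients are assumptions; the LPC analysis and resynthesis with the residual are not shown.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def filter_cepstral_trajectories(cep, frame_rate, lo_hz=1.0, hi_hz=16.0):
    """Band-pass filter the trajectory of each cepstral coefficient across
    frames. `cep` is a (n_frames, n_coeffs) array of frame-wise coefficients."""
    sos = butter(2, [lo_hz, hi_hz], btype='band', fs=frame_rate, output='sos')
    return sosfiltfilt(sos, cep, axis=0)

# Example with a hypothetical 100-frames/s cepstral sequence.
cep = np.random.randn(300, 12)            # placeholder coefficients
smoothed = filter_cepstral_trajectories(cep, frame_rate=100.0)
```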

16.
In cochlear implants (CIs), different talkers often produce different levels of speech understanding because of the spectrally distorted speech patterns provided by the implant device. A spectral normalization approach was used to transform the spectral characteristics of one talker to those of another talker. In Experiment 1, speech recognition with two talkers was measured in CI users, with and without spectral normalization. Results showed that the spectral normalization algorithm had a small but significant effect on performance. In Experiment 2, the effects of spectral normalization were measured in CI users and normal-hearing (NH) subjects; a pitch-stretching technique was used to simulate six talkers with different fundamental frequencies and vocal tract configurations. NH baseline performance was nearly perfect with these pitch-shift transformations. For CI subjects, while there was considerable intersubject variability in performance with the different pitch-shift transformations, spectral normalization significantly improved the intelligibility of these simulated talkers. The results from Experiments 1 and 2 demonstrate that spectral normalization toward more-intelligible talkers significantly improved CI users' speech understanding with less-intelligible talkers. The results suggest that spectral normalization using optimal reference patterns for individual CI patients may compensate for some of the acoustic variability across talkers.

17.
Previous research has demonstrated reduced speech recognition when speech is presented at higher-than-normal levels (e.g., above conversational speech levels), particularly in the presence of speech-shaped background noise. Persons with hearing loss frequently listen to speech-in-noise at these levels through hearing aids, which incorporate multiple-channel, wide dynamic range compression. This study examined the interactive effects of signal-to-noise ratio (SNR), speech presentation level, and compression ratio on consonant recognition in noise. Nine subjects with normal hearing identified CV and VC nonsense syllables in a speech-shaped noise at two SNRs (0 and +6 dB), three presentation levels (65, 80, and 95 dB SPL) and four compression ratios (1:1, 2:1, 4:1, and 6:1). Stimuli were processed through a simulated three-channel, fast-acting, wide dynamic range compression hearing aid. Consonant recognition performance decreased as compression ratio increased and presentation level increased. Interaction effects were noted between SNR and compression ratio, as well as between presentation level and compression ratio. Performance decrements due to increases in compression ratio were larger at the better (+6 dB) SNR and at the lowest (65 dB SPL) presentation level. At higher levels (95 dB SPL), such as those experienced by persons with hearing loss, increasing compression ratio did not significantly affect speech intelligibility.

18.
Using a "noise-vocoder" cochlear implant simulator [Shannon et al., Science 270, 303-304 (1995)], the effect of the speed of dynamic range compression on speech intelligibility was assessed, using normal-hearing subjects. The target speech had a level 5 dB above that of the competing speech. Initially, baseline performance was measured with no compression active, using between 4 and 16 processing channels. Then, performance was measured using a fast-acting compressor and a slow-acting compressor, each operating prior to the vocoder simulation. The fast system produced significant gain variation over syllabic timescales. The slow system produced significant gain variation only over the timescale of sentences. With no compression active, about six channels were necessary to achieve 50% correct identification of words in sentences. Sixteen channels produced near-maximum performance. Slow-acting compression produced no significant degradation relative to the baseline. However, fast-acting compression consistently reduced performance relative to that for the baseline, over a wide range of performance levels. It is suggested that fast-acting compression degrades performance for two reasons: (1) because it introduces correlated fluctuations in amplitude in different frequency bands, which tends to produce perceptual fusion of the target and background sounds and (2) because it reduces amplitude modulation depth and intensity contrasts.  相似文献   

19.
Some evidence, mostly drawn from experiments using only a single moderate rate of speech, suggests that low-frequency amplitude modulations may be particularly important for intelligibility. Here, two experiments investigated the intelligibility of temporally distorted sentences across a wide range of simulated speaking rates, and two metrics were used to predict the results. Sentence intelligibility was assessed when successive segments of fixed duration were temporally reversed (exp. 1), and when sentences were processed through four third-octave-band filters, the outputs of which were desynchronized (exp. 2). For both experiments, intelligibility decreased with increasing distortion. However, in exp. 2, intelligibility recovered modestly with longer desynchronization. Across conditions, performance, measured as a function of the proportion of the utterance distorted, converged to a common function. Estimates of intelligibility derived from modulation transfer functions predict a substantial proportion of the variance in listeners' responses in exp. 1, but fail to predict performance in exp. 2. By contrast, a metric of potential information, quantified as relative dissimilarity (change) between successive cochlear-scaled spectra, is introduced. This metric reliably predicts listeners' intelligibility across the full range of speaking rates in both experiments. Results support an information-theoretic approach to speech perception and the significance of spectral change rather than physical units of time.
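A spectral-change metric of this general kind can be approximated as sketched below: short-time spectra are grouped into coarse log-spaced bands (a stand-in for cochlear scaling) and the Euclidean distance between successive band spectra is taken as the amount of change. Frame length, band layout, and the square-root compression are assumptions, not the published cochlear-scaled formulation.

```python
import numpy as np

def spectral_change(x, fs, frame_ms=16.0, n_bands=20):
    """Distance between successive coarse band spectra as a measure of change."""
    frame_len = int(fs * frame_ms / 1000.0)
    n_frames = len(x) // frame_len
    edges = np.geomspace(100.0, fs / 2.0, n_bands + 1)
    freqs = np.fft.rfftfreq(frame_len, 1.0 / fs)

    band_spectra = np.zeros((n_frames, n_bands))
    for i in range(n_frames):
        frame = x[i * frame_len:(i + 1) * frame_len] * np.hanning(frame_len)
        power = np.abs(np.fft.rfft(frame)) ** 2
        for b, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
            band_spectra[i, b] = power[(freqs >= lo) & (freqs < hi)].sum()

    # Dissimilarity between successive spectra: larger values = more change.
    return np.linalg.norm(np.diff(np.sqrt(band_spectra), axis=0), axis=1)
```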

20.
Most information in speech is carried in spectral changes over time, rather than in static spectral shape per se. A form of signal processing aimed at enhancing spectral changes over time was developed and evaluated using hearing-impaired listeners. The signal processing was based on the overlap-add method, and the degree and type of enhancement could be manipulated via four parameters. Two experiments were conducted to assess speech intelligibility and clarity preferences. Three sets of parameter values (one corresponding to a control condition), two types of masker (steady speech-spectrum noise and two-talker speech), and two signal-to-masker ratios (SMRs) for each masker type were used. Generally, the effects of the processing were small, although intelligibility was improved by about 8 percentage points relative to the control condition for one set of parameter values using the steady noise masker at -6 dB SMR. The processed signals were not preferred over those for the control condition, except for the steady noise masker at -6 dB SMR. Further work is needed to determine whether tailoring the processing to the characteristics of the individual hearing-impaired listener is beneficial.
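One way to realise "enhancing spectral changes over time" in an overlap-add framework is sketched below: in the STFT domain, each channel's deviation of log magnitude from its short-term time average is amplified before resynthesis. The enhancement rule and its two parameters here are assumptions and do not correspond to the four-parameter processing evaluated in the study.

```python
import numpy as np
from scipy.signal import stft, istft
from scipy.ndimage import uniform_filter1d

def enhance_spectral_change(x, fs, alpha=1.5, smooth_frames=9, nperseg=512):
    """Amplify the deviation of each channel's log magnitude from its
    short-term time average, then resynthesize by overlap-add (inverse STFT)."""
    f, t, Z = stft(x, fs=fs, nperseg=nperseg)
    logmag = np.log(np.abs(Z) + 1e-8)
    slow = uniform_filter1d(logmag, size=smooth_frames, axis=1)   # time-smoothed
    enhanced = slow + alpha * (logmag - slow)                     # boost the changes
    _, y = istft(np.exp(enhanced) * np.exp(1j * np.angle(Z)), fs=fs, nperseg=nperseg)
    return y
```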
