Similar Documents
 19 similar documents found (search time: 921 ms)
1.
Proposes normalizing the parameters of different speakers with different perceptual warping factors during feature extraction, thereby achieving speaker normalization for speaker-independent speech recognition. The perceptual warping factor is estimated from the subglottal resonance frequency produced by the coupling between the supraglottal and subglottal systems; compared with methods that use the third vocal-tract formant as the reference frequency, it filters out more of the influence of semantic content and better reflects the speaker's individual characteristics. Perceptual minimum-variance distortionless features, whose noise robustness exceeds that of Mel cepstral coefficients, are extracted as recognition features, and the classic hidden Markov model (HMM) is used as the speech model. Experiments show that, compared with traditional recognition features and with spectral warping based on the third formant, the proposed method reduces the word error rate by 4% and 3% respectively on clean speech, and by 9% and 5% in noisy conditions, effectively improving the performance of speaker-independent speech recognition systems.

2.
Starting from the digital model of speech production, the speech signals of the ten Chinese digits 1-10 are preprocessed and Mel-frequency cepstral coefficients are extracted; the feature sequences are then nonlinearly time-warped to a fixed number of frames, and a BP neural network is trained for recognition to study the feasibility and effectiveness of the approach. Results show that in noisy conditions the recognition rates for 1, 7 and 9 are 80%, while those for 2, 3, 4, 5, 6, 8 and 10 are all 100%; the recognition rate is mainly affected by noise and by pronunciation differences between speakers. The method is feasible, achieves high recognition rates, and can be applied in speech recognition systems.
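The MFCC front end that this and several later abstracts rely on can be sketched in a few lines of numpy. The filter count, frame length, and sampling rate below are illustrative choices, not values from the paper, and the time warping and BP-network classifier are omitted:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    """Triangular filters spaced evenly on the mel scale."""
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        lo, c, hi = bins[i - 1], bins[i], bins[i + 1]
        for k in range(lo, c):
            fb[i - 1, k] = (k - lo) / max(c - lo, 1)   # rising slope
        for k in range(c, hi):
            fb[i - 1, k] = (hi - k) / max(hi - c, 1)   # falling slope
    return fb

def mfcc_frame(frame, sr, n_filters=26, n_ceps=13):
    """MFCC of one windowed frame: power spectrum -> mel energies -> log -> DCT."""
    spec = np.abs(np.fft.rfft(frame * np.hamming(len(frame)))) ** 2
    fb = mel_filterbank(n_filters, len(frame), sr)
    log_e = np.log(fb @ spec + 1e-10)
    j = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), 2 * j + 1) / (2 * n_filters))
    return dct @ log_e  # DCT-II decorrelates the log filterbank energies

sr = 8000
t = np.arange(256) / sr
frame = np.sin(2 * np.pi * 440 * t)   # synthetic test frame
c = mfcc_frame(frame, sr)             # 13 cepstral coefficients
```

The DCT at the end is what makes the coefficients approximately decorrelated, which in turn suits the diagonal-covariance Gaussians commonly used in HMM systems.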

3.
Because conventional spectral subtraction leaves residual "musical noise", the sound quality perceived through a cochlear implant (CI) based on conventional spectral-subtraction denoising also suffers. To improve the noise robustness of CIs, this paper proposes an adaptive variable-order spectral subtraction algorithm and applies it to CI speech enhancement. Following the frequency bands assigned to the CI electrodes, the algorithm first divides the power spectrum of the captured noisy signal into Bark subbands, then adapts the subtraction order and coefficient in each subband according to changes in the subband SNR, so that noise is removed more evenly across subbands and the "musical noise" of the conventional method is essentially eliminated. ACE-strategy CI simulation and listening tests show that, compared with conventional spectral subtraction, the improved algorithm suppresses background and residual noise better, and the synthesized CI output is perceived as clearer.
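The per-subband, SNR-adaptive idea can be illustrated with a small numpy sketch. Here the paper's variable subtraction order is simplified to a variable over-subtraction factor, and the band edges are arbitrary stand-ins for the Bark/electrode bands; all parameter values are illustrative:

```python
import numpy as np

def subband_spectral_subtract(noisy_psd, noise_psd, band_edges,
                              alpha_max=4.0, alpha_min=1.0, beta=0.02):
    """Spectral subtraction with a per-subband over-subtraction factor.

    Low-SNR subbands get a large factor (aggressive subtraction), high-SNR
    subbands a small one; a spectral floor limits the musical-noise artifacts.
    """
    clean_psd = np.empty_like(noisy_psd)
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        snr_db = 10 * np.log10(np.sum(noisy_psd[lo:hi]) /
                               (np.sum(noise_psd[lo:hi]) + 1e-12) + 1e-12)
        # interpolate alpha between alpha_max (low SNR) and alpha_min (high SNR)
        alpha = np.clip(alpha_max - (alpha_max - alpha_min) * (snr_db + 5) / 25,
                        alpha_min, alpha_max)
        sub = noisy_psd[lo:hi] - alpha * noise_psd[lo:hi]
        clean_psd[lo:hi] = np.maximum(sub, beta * noisy_psd[lo:hi])  # floor
    return clean_psd

rng = np.random.default_rng(0)
noisy = rng.random(128) + 1.0          # stand-in noisy power spectrum
noise = np.full(128, 0.5)              # stand-in noise power estimate
edges = [0, 16, 32, 64, 128]           # coarse stand-in for Bark band edges
out = subband_spectral_subtract(noisy, noise, edges)
```

The spectral floor `beta * noisy_psd` is the usual guard against the isolated spectral peaks that are heard as musical noise when the subtracted spectrum is simply clipped at zero.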

4.
The performance of linear prediction analysis of speech deteriorates sharply in noisy environments. To address this, an improved noise-robust sparse linear prediction algorithm is proposed. First, the sparse linear prediction residual of speech is modeled with a Student's t-distribution, and additive noise is modeled explicitly to improve robustness, yielding a complete probabilistic model. Variational Bayesian inference is then used to derive approximate posterior distributions of the model parameters, finally giving noise-robust estimates of the sparse linear prediction coefficients. Experiments show that, compared with the traditional algorithm and with recently proposed sparse linear prediction based on l_1-norm optimization, the algorithm is superior on several metrics, is more robust to environmental noise, and exhibits lower spectral distortion, thus effectively improving speech quality in noisy environments.

5.
吕钊  吴小培  张超  李密 《声学学报》2010,35(4):465-470
A robust speech feature extraction algorithm based on independent component analysis (ICA) is proposed to address the mismatch between training and recognition features under convolutive noise. The noisy speech signal is transformed from the time domain to the frequency domain by the short-time Fourier transform, a complex-valued ICA method separates the short-time spectrum of the speech from the short-time spectrum of the noisy signal, and Mel-frequency cepstral coefficients (MFCC) and their first-order differences are then computed from the recovered speech spectrum as feature parameters. In Chinese digit recognition experiments in both simulated and real environments, the proposed algorithm improved recognition accuracy over conventional MFCC by 34.8% and 32.6% respectively. The results show that ICA-based speech features are robust under convolutive noise.

6.
Robust speech recognition based on maximum likelihood polynomial regression   Cited by: 2 (self-citations: 0, other citations: 2)
吕勇  吴镇扬 《声学学报》2010,35(1):88-96
To overcome the linearity assumption of maximum likelihood linear regression (MLLR), this paper applies polynomial regression to model adaptation, constructing a nonlinear model-adaptation algorithm based on maximum likelihood polynomial regression. In the log-spectral domain, the algorithm uses polynomial regression to approximate, in each Mel subband, the nonlinear relationship between the model means of the recognition environment and those of the training environment. The polynomial coefficients are estimated from a small amount of adaptation data in the recognition environment via the EM algorithm under the maximum likelihood criterion. Experimental results show that a second-order polynomial approximates the nonlinear environmental transformation of the model means well. In noise compensation and speaker adaptation experiments, the error rate of the maximum likelihood polynomial regression algorithm is clearly lower than that of MLLR. The algorithm largely overcomes the linearity assumption of linear model adaptation, can simultaneously reduce the effects of noise, speaker changes, and other factors on the recognition system, and is particularly suited to joint speaker and noise adaptation.
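The core of the adaptation step, fitting a low-order polynomial to the shift of model means between environments, can be sketched with ordinary least squares. Note the paper estimates the coefficients by EM under a maximum-likelihood criterion; the least-squares fit and the synthetic data below are simplifications for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical log-spectral Gaussian means of one Mel subband: training-model
# means vs. the same means re-estimated in the test environment (a nonlinear
# shift plus a little estimation jitter).
train_means = np.linspace(-2.0, 2.0, 40)
test_means = (0.3 * train_means ** 2 + 0.9 * train_means - 0.5
              + 0.01 * rng.standard_normal(40))

# Second-order polynomial regression -- the order the abstract found sufficient
coeffs = np.polyfit(train_means, test_means, deg=2)
adapted = np.polyval(coeffs, train_means)
rms = np.sqrt(np.mean((adapted - test_means) ** 2))

# For comparison: a purely linear (MLLR-like) mapping of the same means
lin = np.polyval(np.polyfit(train_means, test_means, deg=1), train_means)
linear_rms = np.sqrt(np.mean((lin - test_means) ** 2))
```

When the environment transformation really is nonlinear, the quadratic fit tracks it while the linear fit leaves a systematic residual, which is the gap the abstract reports between the two algorithms.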

7.
To counter the degraded robustness of missing-data feature methods in low-SNR speaker recognition, a missing-data feature extraction method employing perceptual auditory scene analysis is proposed. First, the missing-data feature spectrum of the speech is computed, and the perceptual speech content is derived from the perceptual properties of speech. After perceptually motivated enhancement of the noisy speech and two-dimensional enhancement of its spectrogram, the speech distribution is estimated, and a perceptual auditory factor is extracted by combining the perceptual speech content with the missing-intensity parameter. Combined with the missing-data feature spectrum, feature extraction is then decomposed into different auditory scenes that are analyzed and processed separately, strengthening the robustness of the speaker recognition system. Experimental results show that at low SNRs from -10 dB to 10 dB, for four different noise types, the proposed method is more robust than all five reference methods, with average recognition rates higher by 26.0%, 19.6%, 12.7%, 4.6% and 6.5% respectively. The proposed method searches for robust speech features in the time-frequency domain and is better suited to speaker recognition in low-SNR environments.

8.
Addressing the Lombard and Loud effects induced by strong noise in speech recognition, a joint compensation method for additive noise and the Lombard and Loud effects based on training data is proposed. For additive noise, spectral addition, the inverse of spectral subtraction, is applied to the training data in the spectral domain; for Lombard and Loud speech, the training data are compensated based on hidden Markov model (HMM) state alignment, a method that jointly considers the various cepstral-domain changes of different states of different acoustic units in Lombard and Loud speech, as well as the changes in duration and relative duration of different acoustic units under various kinds of variation. This data-based multi-mode compensation lets the models adapt automatically to multiple noise and speech-variation conditions, is highly robust in strong-noise environments, and does not degrade recognition performance in normal environments or with normal speech. Moreover, since the compensation is obtained during training, it adds no computational cost at recognition time.

9.
周彬  邹霞  张雄伟 《声学学报》2014,39(5):655-662
The performance of linear prediction analysis of speech deteriorates sharply in noisy environments. To address this, an improved noise-robust sparse linear prediction algorithm is proposed. First, the sparse linear prediction residual of speech is modeled with a Student's t-distribution, and additive noise is modeled explicitly to improve robustness, yielding a complete probabilistic model. Variational Bayesian inference is then used to derive approximate posterior distributions of the model parameters, finally giving noise-robust estimates of the sparse linear prediction coefficients. Experiments show that, compared with the traditional algorithm and with recently proposed sparse linear prediction based on l1-norm optimization, the algorithm is superior on several metrics, is more robust to environmental noise, and exhibits lower spectral distortion, thus effectively improving speech quality in noisy environments.

10.
A method is proposed that uses even-frame-segment input to hidden Markov models (HMM) to improve the robustness of Chinese continuous speech recognition systems in noisy environments, together with a modification of the conventional spectral-subtraction denoising technique. Experimental results show that the method effectively improves the performance of Chinese continuous speech recognition systems in noisy backgrounds.

11.
Perceptual linear predictive (PLP) analysis of speech   Cited by: 31 (self-citations: 0, other citations: 0)
A new technique for the analysis of speech, the perceptual linear predictive (PLP) technique, is presented and examined. This technique uses three concepts from the psychophysics of hearing to derive an estimate of the auditory spectrum: (1) the critical-band spectral resolution, (2) the equal-loudness curve, and (3) the intensity-loudness power law. The auditory spectrum is then approximated by an autoregressive all-pole model. A 5th-order all-pole model is effective in suppressing speaker-dependent details of the auditory spectrum. In comparison with conventional linear predictive (LP) analysis, PLP analysis is more consistent with human hearing. The effective second formant F2' and the 3.5-Bark spectral-peak integration theories of vowel perception are well accounted for. PLP analysis is computationally efficient and yields a low-dimensional representation of speech. These properties are found to be useful in speaker-independent automatic speech recognition.
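The last two PLP stages, the intensity-loudness power law (cube-root compression) and the 5th-order all-pole fit, can be sketched in numpy. Critical-band integration and equal-loudness weighting are omitted for brevity, and the test signal is synthetic:

```python
import numpy as np

def levinson(r, order):
    """Levinson-Durbin recursion: all-pole coefficients from autocorrelation r."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err                                   # reflection coefficient
        a[1:i + 1] += k * np.concatenate((a[i - 1:0:-1], [1.0]))
        err *= (1.0 - k * k)                             # prediction error update
    return a, err

sr = 8000
t = np.arange(512) / sr
x = np.sin(2 * np.pi * 500 * t) + 0.5 * np.sin(2 * np.pi * 1500 * t)

P = np.abs(np.fft.rfft(x * np.hamming(512))) ** 2   # short-time power spectrum
loud = P ** 0.33                                     # intensity-loudness power law
r = np.fft.irfft(loud)                               # autocorrelation of compressed spectrum
a, err = levinson(r[:6], 5)                          # 5th-order all-pole model
```

Fitting the all-pole model to the *compressed* spectrum is the key PLP move: the low-order model then spends its poles on perceptually dominant peaks rather than on raw spectral energy.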

12.
A robust feature extraction technique for phoneme recognition is proposed which is based on deriving modulation frequency components from the speech signal. The modulation frequency components are computed from syllable-length segments of sub-band temporal envelopes estimated using frequency domain linear prediction. Although the baseline features provide good performance in clean conditions, the performance degrades significantly in noisy conditions. In this paper, a technique for noise compensation is proposed where an estimate of the noise envelope is subtracted from the noisy speech envelope. The noise compensation technique suppresses the effect of additive noise in speech. The robustness of the proposed features is further enhanced by the gain normalization technique. The normalized temporal envelopes are compressed with static (logarithmic) and dynamic (adaptive loops) compression and are converted into modulation frequency features. These features are used in an automatic phoneme recognition task. Experiments are performed in mismatched train/test conditions where the test data are corrupted with various environmental distortions like telephone channel noise, additive noise, and room reverberation. Experiments are also performed on large amounts of real conversational telephone speech. In these experiments, the proposed features show substantial improvements in phoneme recognition rates compared to other speech analysis techniques. Furthermore, the contribution of various processing stages for robust speech signal representation is analyzed.

13.
In an attempt to increase the robustness of automatic speech recognition (ASR) systems, a feature extraction scheme is proposed that takes spectro-temporal modulation frequencies (MF) into account. This physiologically inspired approach uses a two-dimensional filter bank based on Gabor filters, which limits the redundant information between feature components, and also results in physically interpretable features. Robustness against extrinsic variation (different types of additive noise) and intrinsic variability (arising from changes in speaking rate, effort, and style) is quantified in a series of recognition experiments. The results are compared to reference ASR systems using Mel-frequency cepstral coefficients (MFCCs), MFCCs with cepstral mean subtraction (CMS) and RASTA-PLP features, respectively. Gabor features are shown to be more robust against extrinsic variation than the baseline systems without CMS, with relative improvements of 28% and 16% for two training conditions (using only clean training samples or a mixture of noisy and clean utterances, respectively). When used in a state-of-the-art system, improvements of 14% are observed when spectro-temporal features are concatenated with MFCCs, indicating the complementarity of those feature types. An analysis of the importance of specific MF shows that temporal MF up to 25 Hz and spectral MF up to 0.25 cycles/channel are beneficial for ASR.
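A single spectro-temporal Gabor kernel of the kind used in such a filter bank might be generated as follows. Kernel size, bandwidths, and modulation frequencies here are illustrative choices, not the paper's parameters; the DC component is removed so the filter is insensitive to constant spectrogram offsets:

```python
import numpy as np

def gabor_2d(omega_t, omega_f, size=11, sigma_t=2.5, sigma_f=2.5):
    """Complex 2D Gabor kernel tuned to one (temporal, spectral) modulation pair.

    omega_t / omega_f are the carrier frequencies in radians per time frame and
    per spectral channel; the Gaussian envelope localizes the filter.
    """
    t = np.arange(size) - size // 2
    T, F = np.meshgrid(t, t, indexing="ij")
    env = np.exp(-T**2 / (2 * sigma_t**2) - F**2 / (2 * sigma_f**2))
    g = env * np.exp(1j * (omega_t * T + omega_f * F))
    return g - env * (g.sum() / env.sum())   # remove DC so the mean response is zero

g = gabor_2d(0.6, 0.3)   # one filter from a hypothetical (omega_t, omega_f) grid
```

A filter bank is then a grid of such kernels over temporal and spectral modulation frequencies, each convolved (typically via its real part) with a log-mel spectrogram to produce one feature channel.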

14.
The aim of this study is to quantify the gap between the recognition performance of human listeners and an automatic speech recognition (ASR) system with special focus on intrinsic variations of speech, such as speaking rate and effort, altered pitch, and the presence of dialect and accent. Second, it is investigated whether the most common ASR features contain all information required to recognize speech in noisy environments by using resynthesized ASR features in listening experiments. For the phoneme recognition task, the ASR system achieved the human performance level only when the signal-to-noise ratio (SNR) was increased by 15 dB, which is an estimate for the human-machine gap in terms of the SNR. The major part of this gap is attributed to the feature extraction stage, since human listeners achieve comparable recognition scores when the SNR difference between unaltered and resynthesized utterances is 10 dB. Intrinsic variabilities result in strong increases of error rates, both in human speech recognition (HSR) and ASR (with a relative increase of up to 120%). An analysis of phoneme duration and recognition rates indicates that human listeners are better able to identify temporal cues than the machine at low SNRs, which suggests incorporating information about the temporal dynamics of speech into ASR systems.

15.
This paper presents a new method for speech enhancement based on time-frequency analysis and adaptive digital filtering. The proposed method for dual-channel speech enhancement tracks the frequencies of the corrupting signal with the discrete Gabor transform (DGT) and implements a multi-notch adaptive digital filter (MNADF) at those frequencies. Since no a priori knowledge of the noise source statistics is required, this method differs from traditional speech enhancement methods. Specifically, the proposed method was applied to the case where speech quality and intelligibility deteriorate in the presence of background noise. Speech coders and automatic speech recognition (ASR) systems are designed to act on clean speech signals, so speech signals corrupted by noise must be enhanced before processing. The method uses a primary input containing the corrupted speech signal and a reference input containing the noise only. An MNADF was designed instead of a single-notch adaptive digital filter, and the DGT was used to track the frequencies of the corrupting signal, because a fast filtering process and a fast measure of the time-dependent noise frequency are of great importance in speech enhancement. Different types of noise from the Noisex-92 database were used to degrade real speech signals. Objective measures, study of the speech spectrograms, global signal-to-noise ratio (SNR), segmental SNR (segSNR), and the Itakura-Saito distance measure, as well as a subjective listening test, demonstrated consistently superior enhancement performance of the proposed method over traditional speech enhancement methods such as spectral subtraction. Combining MNADF and DGT, excellent speech enhancement was obtained.
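The noise-cancellation principle behind this dual-channel setup can be illustrated with a generic two-channel LMS canceller, the classic form of the idea. The paper's DGT frequency tracking and multi-notch structure are not reproduced here, and all signals and parameters are synthetic stand-ins:

```python
import numpy as np

rng = np.random.default_rng(2)
n, taps, mu = 4000, 8, 0.005
speech = np.sin(2 * np.pi * 0.03 * np.arange(n))   # stand-in for clean speech
noise_ref = rng.standard_normal(n)                  # reference (noise-only) input
h = np.array([0.6, -0.3, 0.1])                      # unknown path: reference -> primary mic
primary = speech + np.convolve(noise_ref, h)[:n]    # primary input: speech + filtered noise

w = np.zeros(taps)                                  # adaptive filter weights
enhanced = np.zeros(n)
for i in range(taps - 1, n):
    x = noise_ref[i - taps + 1:i + 1][::-1]  # most recent reference samples first
    y = w @ x                                # current estimate of noise in primary
    e = primary[i] - y                       # error = enhanced speech sample
    w += 2 * mu * e * x                      # LMS weight update
    enhanced[i] = e

err_before = np.mean((primary - speech) ** 2)                 # noise power before
err_after = np.mean((enhanced[n // 2:] - speech[n // 2:]) ** 2)  # residual after convergence
```

Because the speech is uncorrelated with the reference noise, the filter converges toward the noise path `h` and the error output retains the speech while the correlated noise is cancelled.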

16.
To achieve voice separation under noisy conditions, a single-channel voice separation algorithm combining sparse non-negative matrix factorization with a deep attractor network is proposed. First, dictionary matrices for voice and noise are obtained by training and used as prior information to separate the coefficient matrices of voice and noise from the noisy mixture; then, exploiting the differing similarity of different source components of the voice coefficient matrix in the embedding space, a deep attractor network separates it into per-source coefficient matrices; finally, clean separated speech is reconstructed from the separated coefficient matrices and the voice dictionary matrix. Experimental results under different noise conditions show that the algorithm improves the overall quality of the separated speech while suppressing background noise, outperforming baseline algorithms that combine a noise-voice separation model.
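The NMF stage rests on the standard multiplicative updates. Below is a minimal sketch for the Euclidean cost on a random stand-in spectrogram; the sparsity penalty, the trained voice/noise dictionaries, and the deep attractor network of the paper are all omitted:

```python
import numpy as np

rng = np.random.default_rng(3)
V = rng.random((64, 100)) + 1e-3   # stand-in magnitude spectrogram (freq x time)
k = 8                              # number of basis vectors
W = rng.random((64, k)) + 1e-3     # dictionary (spectral bases)
H = rng.random((k, 100)) + 1e-3    # activations (coefficient matrix)

def frobenius_cost(V, W, H):
    return np.linalg.norm(V - W @ H) ** 2

cost0 = frobenius_cost(V, W, H)
for _ in range(100):
    # Lee-Seung multiplicative updates for the Euclidean cost;
    # they keep W and H non-negative and never increase the cost
    H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
cost1 = frobenius_cost(V, W, H)
```

In a supervised separation setting like the abstract's, `W` would be the concatenation of pre-trained voice and noise dictionaries and held fixed, with only `H` updated on the mixture; each source is then reconstructed from its own dictionary columns and activation rows.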

17.
18.
Mel frequency cepstral coefficients (MFCC) are the most widely used speech features in automatic speech recognition systems, primarily because the coefficients fit well with the assumptions used in hidden Markov models and because of the superior noise robustness of MFCC over alternative feature sets such as linear prediction-based coefficients. The authors have recently introduced human factor cepstral coefficients (HFCC), a modification of MFCC that uses the known relationship between center frequency and critical bandwidth from human psychoacoustics to decouple filter bandwidth from filter spacing. In this work, the authors introduce a variation of HFCC called HFCC-E in which filter bandwidth is linearly scaled in order to investigate the effects of wider filter bandwidth on noise robustness. Experimental results show an increase in signal-to-noise ratio of 7 dB over traditional MFCC algorithms when filter bandwidth increases in HFCC-E. An important attribute of both HFCC and HFCC-E is that the algorithms only differ from MFCC in the filter bank coefficients: increased noise robustness using wider filters is achieved with no additional computational cost.

19.
The performance of linear prediction analysis of speech deteriorates rapidly in noisy environments. To tackle this issue, an improved noise-robust sparse linear prediction algorithm is proposed. First, the linear prediction residual of speech is modeled as a Student-t distribution, and the additive noise is incorporated explicitly to increase robustness, thus building a probabilistic model for sparse linear prediction of speech. Variational Bayesian inference is then utilized to approximate the intractable posterior distributions of the model parameters, from which the optimal linear prediction parameters are estimated robustly. The experimental results demonstrate the advantage of the developed algorithm on several different metrics compared with the traditional algorithm and the l1-norm-minimization-based sparse linear prediction algorithm proposed in recent years. In conclusion, the proposed algorithm is more robust to noise and is able to increase speech quality in applications.
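The contrast between classical l2 linear prediction and a sparse-residual criterion can be sketched with iteratively reweighted least squares (IRLS) as an l1 stand-in. The paper itself uses a Student-t residual model with variational Bayes; the AR(2) signal and sparse pulse train below are synthetic:

```python
import numpy as np

n, order = 400, 8
excitation = np.zeros(n)
excitation[::40] = 1.0                      # sparse, pitch-pulse-like residual
x = np.zeros(n)
for i in range(n):                          # AR(2): x[i] = 1.6 x[i-1] - 0.8 x[i-2] + e[i]
    x[i] = excitation[i]
    if i >= 1:
        x[i] += 1.6 * x[i - 1]
    if i >= 2:
        x[i] -= 0.8 * x[i - 2]

# linear prediction regression: y ~ X a, with a the predictor coefficients
X = np.column_stack([x[order - j:n - j] for j in range(1, order + 1)])
y = x[order:]

a = np.linalg.lstsq(X, y, rcond=None)[0]    # classical l2 (least-squares) LP start
for _ in range(30):
    r = y - X @ a                           # prediction residual
    w = 1.0 / (np.abs(r) + 1e-6)            # reweighting approximates the l1 cost
    Xw = X * w[:, None]
    a = np.linalg.solve(X.T @ Xw, Xw.T @ y)
```

With a sparse excitation, the l1-like criterion concentrates the residual on the pulse positions and recovers the underlying AR coefficients, which is the behavior the sparse-LP literature exploits for voiced speech.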


Copyright©北京勤云科技发展有限公司  京ICP备09084417号