Similar articles
20 similar articles found (search time: 515 ms)
1.
Pitch detection is an important part of speech recognition and speech processing. In this paper, a pitch detection algorithm based on the second-generation wavelet transform was developed. The proposed algorithm reduces the computational load of algorithms based on the classical wavelet transform. It was tested on both real and synthetic speech signals, and experiments were carried out under noisy conditions to evaluate its accuracy and robustness. Results showed that the algorithm was robust to noise and provided accurate pitch-period estimates for both low-pitched and high-pitched speakers. Moreover, different wavelet filters obtained with the second-generation wavelet transform were considered to assess their effects on the algorithm; the Haar filter performed well compared to the other wavelet filters.
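The second-generation (lifting) wavelet transform mentioned above can be illustrated with its simplest instance, the Haar wavelet. Below is a minimal generic lifting sketch in Python, assuming an even-length input; it is an illustration of the lifting scheme, not the paper's implementation.

```python
def haar_lifting_forward(x):
    """One level of the Haar wavelet via the lifting scheme:
    split into even/odd samples, predict the odd samples from the even
    ones (detail), then update the even samples (approximation).
    len(x) must be even."""
    even = x[0::2]
    odd = x[1::2]
    detail = [o - e for o, e in zip(odd, even)]           # predict step
    approx = [e + d / 2.0 for e, d in zip(even, detail)]  # update step
    return approx, detail

def haar_lifting_inverse(approx, detail):
    """Invert the lifting steps in reverse order; reconstruction is exact."""
    even = [a - d / 2.0 for a, d in zip(approx, detail)]
    odd = [d + e for d, e in zip(detail, even)]
    x = [0.0] * (2 * len(even))
    x[0::2] = even
    x[1::2] = odd
    return x
```

Because each lifting step is trivially invertible, the transform needs no filter-bank convolutions, which is the source of the computational savings the abstract refers to.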

2.
吕钊  吴小培  张超  李密 《声学学报》2010,35(4):465-470
A robust speech feature extraction algorithm based on independent component analysis (ICA) is proposed to address the mismatch between training and recognition features of speech signals in convolutive noise environments. The algorithm converts the noisy speech signal from the time domain to the frequency domain via the short-time Fourier transform, separates the short-time spectrum of the speech from the noisy short-time spectrum using complex-valued ICA, and then computes Mel-frequency cepstral coefficients (MFCC) and their first-order differences from the recovered spectrum as feature parameters. In Chinese digit recognition experiments under simulated and real environments, the proposed algorithm improved recognition accuracy by 34.8% and 32.6%, respectively, over conventional MFCC. The results show that ICA-based speech features are robust in convolutive noise environments.

3.
To improve the recognition performance of perceptual linear prediction (PLP) coefficients in noisy environments, a sub-band energy bias subtraction method is applied and a sub-band-energy-normalized PLP (SPNPLP) is proposed. PLP effectively concentrates the useful information in speech, and in quiet environments an automatic speech recognition system using PLP achieves good recognition rates; in noisy environments, however, its performance degrades sharply. By normalizing the sub-band energies of PLP with energy bias subtraction to suppress the background noise excitation, SPNPLP enhances the noise robustness of automatic speech recognition. Tested on an isolated-word recognition task with a 501-word grammar and on a large-vocabulary continuous speech recognition task, SPNPLP improved Chinese character recognition accuracy by 11.26% and 9.2% absolute, respectively, over PLP. The experimental results show that SPNPLP has better noise robustness than PLP.
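The sub-band energy bias subtraction idea can be sketched as follows. This is a minimal illustration, assuming each band's bias is estimated as its minimum observed log energy over the utterance (taken to reflect the noise floor); the paper's exact SPNPLP recipe may differ.

```python
import math

def subtract_energy_bias(log_energies, floor=1e-3):
    """Per-band energy bias subtraction over a batch of frames: estimate
    each band's bias as its minimum observed log energy and subtract it
    from every frame, flooring the result so later log compression stays
    well defined. `log_energies` is a list of frames, each a list of
    per-band log energies."""
    n_bands = len(log_energies[0])
    # Bias estimate per band: the quietest frame in that band.
    bias = [min(frame[b] for frame in log_energies) for b in range(n_bands)]
    log_floor = math.log(floor)
    return [[max(e - bias[b], log_floor) for b, e in enumerate(frame)]
            for frame in log_energies]
```

Subtracting a per-band bias shifts the stationary noise component toward zero while preserving the speech-driven dynamics, which is what restores robustness in noise.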

4.
A recursive estimation algorithm for sliding-window cumulants is proposed and applied to speech endpoint detection, to address the degraded performance of traditional endpoint detection methods in noisy environments. After windowing the noisy speech signal, the higher-order cumulants are estimated with the recursive sliding-window algorithm, and endpoint detection is then performed in combination with an energy feature. Experimental results show that the recursive algorithm is markedly more efficient than the traditional higher-order cumulant computation, and that the proposed endpoint detection improves the correct-point rate (Pc-point) by 6.07% on average over the G.729b algorithm under different noise types and signal-to-noise ratios. The sliding-window higher-order-cumulant endpoint detection algorithm is thus both computationally efficient and robust.
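The recursive sliding-window idea can be sketched as follows: keeping running power sums lets each new sample update the cumulant estimate in O(1), instead of re-summing the whole window. The sketch uses the common zero-mean simplification c4 = m4 - 3*m2^2 and is not the paper's exact recursion.

```python
class SlidingCumulant4:
    """Recursive sliding-window estimate of the fourth-order cumulant of a
    zero-mean signal. Running sums of x**2 and x**4 are updated in O(1)
    per sample: add the new sample, subtract the one leaving the window."""
    def __init__(self, window):
        self.window = window
        self.buf = []
        self.s2 = 0.0  # running sum of x**2 over the window
        self.s4 = 0.0  # running sum of x**4 over the window

    def push(self, x):
        self.buf.append(x)
        self.s2 += x * x
        self.s4 += x ** 4
        if len(self.buf) > self.window:
            old = self.buf.pop(0)
            self.s2 -= old * old
            self.s4 -= old ** 4

    def cumulant(self):
        n = len(self.buf)
        m2 = self.s2 / n
        m4 = self.s4 / n
        return m4 - 3.0 * m2 * m2
```

For an N-sample window this replaces O(N) work per frame with O(1), which matches the efficiency gain the abstract reports.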

5.
In this paper, a novel single-microphone channel-based speech enhancement technique is presented. While most conventional nonnegative matrix factorization-based approaches focus on generating a basis matrix of speech and noise for enhancement, the proposed algorithm performs an additional process to reconstruct speech from noisy speech when these two elements overlap heavily in selected spectral bands. This process involves a log-spectral amplitude based estimator, which provides the spectrotemporal speech presence probability to obtain a more accurate reconstruction. Moreover, the proposed algorithm applies an unsupervised learning method to the input noise, so it is adaptable to any type of environmental noise without a pre-trained dictionary. The experimental results demonstrate that the proposed algorithm obtains improved speech enhancement performance compared with conventional single channel-based approaches.
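The nonnegative matrix factorization at the core of such enhancers can be sketched with the standard Lee-Seung multiplicative updates minimizing squared Euclidean distance. This is a generic NMF sketch of V ~ W @ H, not the paper's full system (which additionally uses a speech presence probability estimator and unsupervised noise adaptation).

```python
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def nmf(V, rank, iters=200, eps=1e-9):
    """Factor a nonnegative matrix V (list of lists) as V ~ W @ H using
    multiplicative updates; W holds spectral basis vectors, H their
    time-varying activations."""
    random.seed(0)
    n, m = len(V), len(V[0])
    W = [[random.random() + 0.1 for _ in range(rank)] for _ in range(n)]
    H = [[random.random() + 0.1 for _ in range(m)] for _ in range(rank)]
    for _ in range(iters):
        # H <- H * (W^T V) / (W^T W H)
        WtV = matmul(transpose(W), V)
        WtWH = matmul(matmul(transpose(W), W), H)
        H = [[H[i][j] * WtV[i][j] / (WtWH[i][j] + eps) for j in range(m)]
             for i in range(rank)]
        # W <- W * (V H^T) / (W H H^T)
        VHt = matmul(V, transpose(H))
        WHHt = matmul(matmul(W, H), transpose(H))
        W = [[W[i][j] * VHt[i][j] / (WHHt[i][j] + eps) for j in range(rank)]
             for i in range(n)]
    return W, H
```

In enhancement, V is a magnitude spectrogram; a speech estimate is then rebuilt from the speech-basis columns of W and their activations in H.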

6.
尹辉  谢湘  匡镜明 《声学学报》2012,37(1):97-103
The fractional Fourier transform has unique advantages for processing non-stationary signals, especially chirp signals, while the human auditory system exhibits excellent properties that automatic speech recognition systems can hardly match. This paper applies a Gammatone auditory filter bank for front-end time-domain filtering of the speech signal, and then extracts acoustic features from each sub-band output using the fractional Fourier transform. Since the order of the fractional Fourier transform strongly affects its performance, a method is proposed for determining the order of each sub-band time-domain signal by instantaneous-frequency curve fitting, and it is compared with an ambiguity-function-based method. Recognition results on clean and noisy Chinese isolated-digit corpora show that the newly proposed acoustic features significantly improve recognition accuracy over an MFCC baseline system; compared with the ambiguity-function method, the instantaneous-frequency order-search algorithm greatly reduces the computational load, and the acoustic features extracted with it achieve the highest average recognition accuracy.

7.
As the performance of traditional endpoint detection algorithms degrades when the environmental noise level increases, a recursive algorithm for computing higher-order cumulants over a sliding window is proposed and applied to speech endpoint detection, combined with an energy feature. Experimental results show that both the computational efficiency and the noise robustness of the proposed algorithm improve remarkably over the traditional algorithm. The average probability of correct point detection (Pc-point) of the proposed voice activity detection (VAD) is 6.07% higher than that of G.729b VAD across different noise types and signal-to-noise ratios (SNRs).

8.
It is well known that non-stationary wideband noise is the most difficult to remove in speech enhancement. In this paper a novel speech enhancement algorithm based on the dyadic wavelet transform and a simplified Karhunen-Loeve transform (KLT) is proposed to suppress non-stationary wideband noise. The noisy speech is decomposed into components in the wavelet space and a KLT-based vector space, and the components are processed and reconstructed separately, distinguishing between voiced and unvoiced speech. Neither noise whitening nor SNR pre-calculation is required. To evaluate the performance of the algorithm in more detail, a three-dimensional spectral distortion measure is introduced. Experiments and comparisons between different speech enhancement systems using this distortion measure show that the proposed method avoids drawbacks of the previous methods and better shapes and suppresses non-stationary wideband noise for speech enhancement.

9.
Studies have shown that supplementary articulatory information can help to improve the recognition rate of automatic speech recognition systems. Unfortunately, articulatory information is not directly observable, necessitating its estimation from the speech signal. This study describes a system that recognizes articulatory gestures from speech and uses the recognized gestures in a speech recognition system. Recognizing gestures for a given utterance involves recovering the set of underlying gestural activations and their associated dynamic parameters. This paper proposes a neural network architecture for recognizing articulatory gestures from speech and presents ways to incorporate articulatory gestures in a digit recognition task. The lack of a natural speech database containing gestural information prompted us to use three stages of evaluation. First, the proposed gestural annotation architecture was tested on a synthetic speech dataset, which showed that the use of estimated tract-variable time functions improved gesture recognition performance. In the second stage, gesture-recognition models were applied to natural speech waveforms, and word recognition experiments revealed that the recognized gestures can improve the noise robustness of a word recognition system. In the final stage, a gesture-based Dynamic Bayesian Network was trained, and the results indicate that incorporating gestural information can improve word recognition performance compared to acoustic-only systems.

10.
Considerable advances in automatic speech recognition have been made in the last decades, thanks especially to the use of hidden Markov models. In the field of speech signal analysis, different techniques have been developed. However, deterioration in the performance of speech recognizers has been observed when they are trained with clean signals and tested with noisy signals. This is still an open problem in this field. Continuous multiresolution entropy has been shown to be robust to additive noise in applications to different physiological signals. In previous works we have included Shannon and Tsallis entropies, and their corresponding divergences, in different speech analysis and recognition systems. In this paper we present an extension of the continuous multiresolution entropy to different divergences and propose them as new dimensions for the pre-processing stage of a speech recognition system. This approach takes into account information about changes in the dynamics of the speech signal at different scales. The methods proposed here are tested with speech signals corrupted with babble and white noise, and their performance is compared with classical mel cepstral parametrization. The results suggest that these continuous-multiresolution-entropy-related measures provide valuable information to the speech recognition system and could be included as an extra component in the pre-processing stage.
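The per-frame measure underlying multiresolution entropy can be sketched simply: normalize the signal's energy across scales into a probability vector and take its Shannon entropy. This is a minimal sketch of that building block; the paper extends it to Tsallis entropy and to divergences between successive frames.

```python
import math

def multiresolution_entropy(subband_energies):
    """Shannon entropy of the energy distribution across wavelet scales
    for one analysis frame: normalize per-scale energies to probabilities
    and compute -sum(p * log p). Low entropy means energy concentrated in
    few scales; changes over time track changes in signal dynamics."""
    total = sum(subband_energies)
    probs = [e / total for e in subband_energies]
    return -sum(p * math.log(p) for p in probs if p > 0.0)
```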

11.
In order to improve the performance of deception detection based on Chinese speech signals, a method of sparse decomposition of spectral features is proposed. First, the wavelet packet transform is applied to divide the speech signal into multiple sub-bands. Wavelet packet band cepstral features are obtained by applying the discrete cosine transform to the logarithmic energy of each sub-band, and the cepstral feature is generated by combining Mel-frequency cepstral coefficients with the wavelet packet band cepstral coefficients. Second, the K-singular value decomposition algorithm is employed to train an over-complete mixture dictionary on both the truthful and deceptive feature sets, and an orthogonal matching pursuit algorithm is used for sparse coding over this dictionary to obtain sparse features. Finally, recognition experiments are performed with various classification models. Experimental results show that the sparse decomposition method performs better than conventional dimensionality reduction methods. The recognition accuracy of the proposed method is 78.34%, higher than that of methods using other features, significantly improving the recognition ability of the deception detection system.
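The greedy sparse-coding step can be sketched as follows. For brevity this is plain matching pursuit, a simplified stand-in for the orthogonal matching pursuit (OMP) used in the paper (OMP additionally re-solves a least-squares problem over all selected atoms at each step); atoms are assumed unit-norm.

```python
def matching_pursuit(x, dictionary, n_atoms):
    """Greedy sparse coding of vector x over unit-norm dictionary atoms:
    each iteration picks the atom most correlated with the residual,
    records its coefficient, and subtracts its projection."""
    residual = list(x)
    coeffs = {}
    for _ in range(n_atoms):
        best, best_dot = None, 0.0
        for idx, atom in enumerate(dictionary):
            dot = sum(r * a for r, a in zip(residual, atom))
            if abs(dot) > abs(best_dot):
                best, best_dot = idx, dot
        if best is None:
            break  # residual is orthogonal to every atom
        coeffs[best] = coeffs.get(best, 0.0) + best_dot
        best_atom = dictionary[best]
        residual = [r - best_dot * a for r, a in zip(residual, best_atom)]
    return coeffs, residual
```

The sparse coefficient vector (here the `coeffs` dictionary) is what serves as the classifier input feature in the pipeline the abstract describes.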

12.
To improve the accuracy of lie detection from Chinese speech, a method of sparse decomposition of the signal's cepstral parameters is proposed. First, a wavelet packet filter bank divides the speech signal into multiple sub-bands; the logarithmic energy of each sub-band is computed and a discrete cosine transform is applied to extract wavelet packet band cepstral coefficients, which are combined with Mel-frequency cepstral coefficients to form the cepstral parameters. Second, the K-singular value decomposition method trains an over-complete mixture dictionary on the cepstral parameter sets of both deceptive and truthful speech, and sparse features are extracted by sparse coding over this dictionary with the orthogonal matching pursuit algorithm. Finally, recognition experiments are performed with several classification models. The experimental results show that the sparse decomposition method offers better optimization than traditional dimensionality reduction; the best recognition rate of the recommended sparse spectral feature reaches 78.34%, better than the other feature parameters, significantly improving lie detection accuracy.

13.
In traditional multi-stream fusion methods for speech recognition, all feature components in a data stream share the same stream weight, while their distortion levels usually differ when the recognizer works in noisy environments. To overcome this limitation of the traditional multi-stream frameworks, the current study proposes a new stream fusion method that weights not only the stream outputs but also the output probabilities of individual feature components. How the stream and feature-component weights in the new fusion method affect the decision is analyzed, and two stream fusion schemes based on the marginalisation and soft-decision models of missing-data techniques are proposed. Experimental results on a hybrid sub-band multi-stream speech recognizer show that the proposed schemes can adapt the streams' influence on the decision and outperform traditional multi-stream methods in various noisy environments.
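The weighting scheme described above can be sketched in the log domain: each stream contributes its component log-probabilities scaled by per-component weights, and the stream total is scaled by the stream weight. The names and form below are illustrative, not the authors' exact formulation.

```python
def fuse_streams(stream_logprobs, stream_weights, component_weights):
    """Log-domain fused score: sum over streams s of
    w_s * sum over components d of v_{s,d} * logp_{s,d}.
    Setting all v_{s,d} = 1 recovers traditional per-stream exponent
    weighting; lowering v_{s,d} discounts distorted feature components."""
    score = 0.0
    for s, logprobs in enumerate(stream_logprobs):
        score += stream_weights[s] * sum(
            component_weights[s][d] * lp for d, lp in enumerate(logprobs))
    return score
```

In a missing-data setting, the per-component weights would come from a reliability estimate (e.g. a speech-presence or SNR mask) rather than being fixed.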

14.
The performance of linear prediction analysis of speech deteriorates rapidly in noisy environments. To tackle this issue, an improved noise-robust sparse linear prediction algorithm is proposed. First, the linear prediction residual of speech is modeled with a Student-t distribution, and the additive noise is incorporated explicitly to increase robustness, yielding a probabilistic model for sparse linear prediction of speech. Variational Bayesian inference is then used to approximate the intractable posterior distributions of the model parameters, from which the optimal linear prediction parameters are estimated robustly. The experimental results demonstrate the advantage of the developed algorithm on several different metrics compared with the traditional algorithm and the recently proposed l1-norm-minimization-based sparse linear prediction algorithm. It is concluded that the proposed algorithm is more robust to noise and can increase speech quality in applications.

15.
A study of mixed bilingual speech recognition   (total citations: 1, self-citations: 0, citations by others: 1)
With the globalization of information in modern society, bilingual and multilingual code-mixing has become increasingly common, and bilingual or multilingual speech recognition has accordingly become a hot topic in speech recognition research. Bilingual code-mixed recognition faces two main problems: controlling system complexity while maintaining bilingual recognition accuracy, and effectively handling the non-native accent that the speaker's primary language introduces into the embedded language. To handle code-mixing and reduce the data needed for statistical modeling, a unified bilingual recognition system is built through mixed phoneme clustering. Within the clustering algorithm, a new two-pass phoneme clustering algorithm based on the confusion matrix is proposed and compared with the traditional clustering method based on an acoustic likelihood criterion; to address the low recognition performance on non-native speech, a new bilingual model modification algorithm is proposed to improve non-native speech recognition. Experimental results show that the resulting Chinese-English bilingual recognition system recognizes both languages simultaneously while effectively controlling model size, and maintains good recognition performance on both monolingual and code-mixed speech.

16.
To achieve voice separation under noisy conditions, a single-channel voice separation algorithm combining sparse non-negative matrix factorization with a deep attractor network is proposed. First, dictionary matrices for voice and noise are obtained by training and used as prior information to separate the coefficient matrices of voice and noise from the noisy mixture; then, exploiting the differing similarities of the source components in the embedding space, a deep attractor network separates the voice coefficient matrix into per-source coefficient matrices; finally, clean separated speech is reconstructed from the separated coefficient matrices and the voice dictionary matrix. Experimental results under various noise conditions show that the proposed algorithm suppresses background noise while improving the overall quality of the separated speech, outperforming baseline algorithms that combine a noise/voice separation model.

17.
A multistream phoneme recognition framework is proposed based on forming streams from different spectrotemporal modulations of speech. Phoneme posterior probabilities were estimated from each stream separately and combined at the output level. A statistical model of the final estimated posterior probabilities is used to characterize the system performance. During the operation, the best fusion architecture is chosen automatically to maximize the similarity of output statistics to clean condition. Results on phoneme recognition from noisy speech indicate the effectiveness of the proposed method.

18.
Accented speech recognition is more challenging than standard speech recognition due to the effects of phonetic and acoustic confusions. Phonetic confusion in accented speech occurs when an expected phone is pronounced as a different one, which leads to erroneous recognition. Acoustic confusion occurs when the pronounced phone is found to lie acoustically between two baseform models and can be equally recognized as either one. We propose that it is necessary to analyze and model these confusions separately in order to improve accented speech recognition without degrading standard speech recognition. Since low phonetic confusion units in accented speech do not give rise to automatic speech recognition errors, we focus on analyzing and reducing phonetic and acoustic confusability under high phonetic confusion conditions. We propose using likelihood ratio test to measure phonetic confusion, and asymmetric acoustic distance to measure acoustic confusion. Only accent-specific phonetic units with low acoustic confusion are used in an augmented pronunciation dictionary, while phonetic units with high acoustic confusion are reconstructed using decision tree merging. Experimental results show that our approach is effective and superior to methods modeling phonetic confusion or acoustic confusion alone in accented speech, with a significant 5.7% absolute WER reduction, without degrading standard speech recognition.

19.
This paper addresses the problem of speech quality improvement using adaptive filtering algorithms. Recently, in Djendi and Bendoumia (2014) [1], we proposed a new two-channel backward algorithm for noise reduction and speech intelligibility enhancement. The main drawback of the proposed two-channel subband algorithm is its poor performance when the number of subbands is high, which is clearly seen in the steady-state values. The source of this problem is the fixed step sizes of the cross-adaptive filtering algorithms, which distort the speech signal when selected high and degrade the convergence speed when selected small. In this paper, we propose four modifications of this algorithm that improve both the convergence speed and the steady-state values, even in very noisy conditions and with a high number of subbands. To confirm the good performance of the four proposed variable-step-size SBBSS algorithms, we carried out several simulations in various noisy environments, evaluating objective and subjective criteria such as the system mismatch, the cepstral distance, the output signal-to-noise ratio, and the mean opinion score (MOS) to compare the four proposed variable-step-size versions of the SBBSS algorithm with their original versions and with the two-channel fullband backward (2CFB) least mean square algorithm.
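The variable-step-size principle can be sketched with a generic error-driven VSS-LMS filter: the step size grows with the squared error (fast convergence when the filter is far from the solution) and decays as the error falls (low steady-state misadjustment). This is a textbook-style sketch in the spirit of the paper's variable-step-size subband algorithms; the parameter names are illustrative, not the authors'.

```python
def vss_lms(x, d, n_taps, mu_min=0.001, mu_max=0.05, alpha=0.97, gamma=0.5):
    """Variable-step-size LMS: adapt filter w so that w * x tracks the
    desired signal d. The step size is updated as
    mu <- alpha * mu + gamma * e**2, clamped to [mu_min, mu_max].
    Returns the final weights and the error signal."""
    w = [0.0] * n_taps
    mu = mu_max
    errors = []
    for n in range(len(x)):
        # Tap-delay line: the most recent n_taps input samples, zero-padded.
        u = [x[n - k] if n - k >= 0 else 0.0 for k in range(n_taps)]
        y = sum(wi * ui for wi, ui in zip(w, u))
        e = d[n] - y
        # Error-driven step-size update: large error -> large step.
        mu = min(mu_max, max(mu_min, alpha * mu + gamma * e * e))
        w = [wi + mu * e * ui for wi, ui in zip(w, u)]
        errors.append(e)
    return w, errors
```

This resolves exactly the trade-off the abstract names: a fixed large step distorts speech at steady state, a fixed small step converges slowly; letting mu track the error gets both regimes right.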

20.
To address the segmentation of noisy utterances, and in particular the poor segmentation quality, low accuracy, and weak adaptivity of traditional air-conduction segmentation algorithms in some low-SNR environments, a segmented dual-threshold speech segmentation method based on bone-conducted speech adaptation is proposed. Bone-conducted and air-conducted speech are captured synchronously so that the more noise-robust bone-conducted signal can be used; endpoint detection is then performed by fusing the zero-crossing rate and short-time energy with an adaptive, randomly varying dynamic threshold; finally, segmented dual thresholds and speech clustering are used to carry out the segmentation and improve the robustness of the algorithm. Experiments verify the effectiveness and feasibility of the proposed algorithm, and comparison with other speech segmentation algorithms shows that it achieves higher accuracy and better results.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号