Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
Blind separation of convolutively mixed speech signals by joint block diagonalization   Cited 1 time in total (0 self-citations, 1 by others)
张华  冯大政  庞继勇 《声学学报》2009,34(2):167-174
For the convolutive mixing model of speech signals, a joint block-diagonalization method based on second-order signal statistics is proposed to solve the overdetermined convolutive blind separation problem, exploiting the approximate independence and short-time stationarity of distinct speech signals. The method takes the sum of squared F-norms of the off-diagonal sub-matrices as the criterion of joint block-diagonalization quality, converting the original quartic cost function into a set of simpler quadratic sub-cost functions, each of which estimates one sub-matrix of the unitary mixing matrix. Minimizing these sub-functions in turn and iteratively searching for the minimum of the cost function yields an estimate of the mixing matrix. Theoretical analysis and experimental results show that the proposed method not only matches the separation quality of the classic Jacobi-like method, but also offers lower computational complexity, faster convergence, and insensitivity to the transmission channel order and to the initial values of the iteration.
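The joint block-diagonalization criterion described above — the sum of squared Frobenius norms of the off-diagonal sub-matrices — can be sketched in a few lines of numpy. This is a toy illustration of the cost function only, not the paper's optimization procedure; the function name and block partitioning are illustrative:

```python
import numpy as np

def off_block_diag_cost(matrices, b):
    """Sum of squared Frobenius norms of the off-diagonal b x b blocks,
    accumulated over a set of correlation matrices."""
    cost = 0.0
    for R in matrices:
        n = R.shape[0]
        for i in range(0, n, b):
            for j in range(0, n, b):
                if i != j:
                    cost += np.sum(np.abs(R[i:i+b, j:j+b]) ** 2)
    return cost

# A block-diagonal matrix has zero cost; off-diagonal energy raises it.
R_bd = np.kron(np.eye(2), np.ones((2, 2)))         # two 2x2 blocks on the diagonal
print(off_block_diag_cost([R_bd], 2))              # 0.0
print(off_block_diag_cost([np.ones((4, 4))], 2))   # 8.0
```

Driving this cost to zero over a set of lagged correlation matrices is what "joint block diagonalization" means; the paper minimizes it through the quadratic sub-cost functions instead of directly.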

2.
To achieve vocal separation under noisy conditions, a single-channel vocal separation algorithm combining sparse non-negative matrix factorization with a deep attractor network is proposed. First, dictionary matrices for voice and noise are obtained by training and used as prior information to separate the coefficient matrices of voice and noise from the noisy mixed speech. Then, exploiting the fact that different source components in the vocal coefficient matrix have different similarities in the embedding space, a deep attractor network separates it into per-source coefficient matrices. Finally, the separated coefficient matrices and the vocal dictionary matrix are used to reconstruct clean separated speech. Experimental results under different noise conditions show that the algorithm improves the overall quality of the separated speech while suppressing background noise, outperforming the comparison algorithms that combine a noise-and-voice separation model.
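The first stage — estimating coefficient (activation) matrices against fixed, pre-trained voice and noise dictionaries — can be sketched with standard NMF multiplicative updates. This is a minimal sketch under toy assumptions (random dictionary, Euclidean objective); the paper's sparse NMF variant and the attractor-network stage are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(0)

def nmf_activations(V, W, n_iter=300, eps=1e-12):
    """Estimate non-negative activations H so that V ~= W @ H, holding the
    dictionary W fixed, via multiplicative updates for the Euclidean objective."""
    H = rng.random((W.shape[1], V.shape[1])) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
    return H

# Toy setup: a concatenated "voice + noise" dictionary and known activations.
W = rng.random((16, 6))        # 6 atoms, e.g. 4 voice + 2 noise (illustrative)
H_true = rng.random((6, 10))
V = W @ H_true                 # "observed" magnitude spectrogram
H = nmf_activations(V, W)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))   # small relative error
```

Splitting the rows of `H` at the voice/noise dictionary boundary gives the per-source coefficient matrices that the later stages work on.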

3.
曾庆宁  王师琦 《声学学报》2021,46(5):775-784
To address the performance degradation of traditional multichannel speech separation algorithms under diffuse noise, a spatial covariance model and a parameter estimation method for speech separation and denoising are proposed. The method treats the diffuse noise as an independent source, models the spatial characteristics of the target sources with spatial covariance matrices reconstructed from steering vectors, and estimates a multichannel Wiener filter for speech separation through spatial covariance analysis. A post-filter parameter framework combined with this method is also proposed, offering more choices in the trade-off between noise reduction and distortion of the output signal. In single-target and multi-target experiments under diffuse noise, the proposed method outperforms the comparison algorithms in both speech extraction and separation, and the post-filter with joint parameters provides denoised speech that better meets listeners' requirements, verifying the effectiveness of the proposed model and parameter estimation method.
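The multichannel Wiener filter at the core of this approach can be written directly from the estimated spatial covariances. A minimal sketch, assuming a rank-1 target covariance built from a steering vector and spatially white noise (a deliberate simplification of the paper's diffuse-noise model):

```python
import numpy as np

def mwf(R_s, R_n):
    """Multichannel Wiener filter for the target: W = R_s (R_s + R_n)^-1."""
    return R_s @ np.linalg.inv(R_s + R_n)

# Rank-1 target covariance built from a steering vector d, echoing the
# steering-vector reconstruction described above (toy 3-microphone array).
d = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)
R_s = 4.0 * np.outer(d, d)            # target spatial covariance, power 4
R_n = 0.5 * np.eye(3)                 # noise spatial covariance
W = mwf(R_s, R_n)
# A pure target snapshot x = d passes through with gain 4 / 4.5:
print(np.linalg.norm(W @ d) / np.linalg.norm(d))   # 0.888...
```

The gain follows because `d` is an eigenvector of `R_s + R_n` with eigenvalue 4.5, so the filter scales the target by its signal-to-total-power ratio — exactly the Wiener trade-off the post-filter framework then tunes further.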

4.
A new methodology for voice conversion in the cepstrum eigenspace, based on a structured Gaussian mixture model, is proposed for non-parallel corpora without joint training. For each speaker, cepstrum features of speech are extracted and mapped to the eigenspace formed by the eigenvectors of its scatter matrix, and the Structured Gaussian Mixture Model in the EigenSpace (SGMM-ES) is trained. The source and target speakers' SGMM-ES are matched according to the Acoustic Universal Structure (AUS) principle to obtain the spectrum transform function. Experimental results show that the speaker identification rate of the converted speech reaches 95.25% and the average cepstrum distortion is 1.25, improvements of 0.8% and 7.3%, respectively, over the SGMM method. ABX and MOS evaluations indicate that the conversion performance is quite close to that of the traditional method under the parallel-corpora condition. These results show that the eigenspace-based structured Gaussian mixture model is effective for voice conversion with non-parallel corpora.

5.
Aiming at the underdetermined convolutive mixture model, a blind speech source separation method based on nonlinear time-frequency masking is proposed, which exploits the approximate W-disjoint orthogonality (W-DO) of independent speech signals in the time-frequency domain. In this method, the observed mixture signals from multiple microphones are first normalized in the time-frequency domain to be independent of frequency; a dynamic clustering algorithm is then adopted to obtain the active-source information in each time-frequency slot; a nonlinear function of the deflection angle from the cluster center is selected for time-frequency masking; finally, blind separation of the mixed speech signals is achieved by the inverse STFT (short-time Fourier transform). This method not only solves the frequency-permutation problem encountered by most classic frequency-domain blind separation techniques, but also suppresses the spatial direction diffusion of the separation matrix. Simulation results demonstrate that the proposed method outperforms the typical BLUES method, increasing the signal-to-noise-ratio gain (SNRG) by 1.58 dB on average.
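The W-disjoint orthogonality assumption underlying such masking methods can be illustrated with a toy binary mask; the spectrograms here are synthetic stand-ins, and the paper's nonlinear (non-binary) mask and clustering step are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic magnitude spectrograms (frequency x time) of two sources.
S1 = rng.random((8, 12))
S2 = rng.random((8, 12))
mix = S1 + S2

# W-DO assumption: in each time-frequency slot one source dominates,
# so a mask that keeps each source's dominant slots separates the mixture.
mask1 = (S1 >= S2).astype(float)
est1 = mask1 * mix                       # estimate of source 1
est2 = (1.0 - mask1) * mix               # estimate of source 2
print(np.allclose(est1 + est2, mix))     # True: the masks partition the mixture
```

The paper replaces the hard 0/1 decision with a nonlinear function of the angular deviation from each cluster center, which softens errors in slots where the dominance assumption is weak.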

6.
This work proposes a method to reconstruct an acoustic speech signal solely from a stream of mel-frequency cepstral coefficients (MFCCs) as may be encountered in a distributed speech recognition (DSR) system. Previous methods for speech reconstruction have required, in addition to the MFCC vectors, fundamental frequency and voicing components. In this work the voicing classification and fundamental frequency are predicted from the MFCC vectors themselves using two maximum a posteriori (MAP) methods. The first method enables fundamental frequency prediction by modeling the joint density of MFCCs and fundamental frequency using a single Gaussian mixture model (GMM). The second scheme uses a set of hidden Markov models (HMMs) to link together a set of state-dependent GMMs, which enables a more localized modeling of the joint density of MFCCs and fundamental frequency. Experimental results on speaker-independent male and female speech show that accurate voicing classification and fundamental frequency prediction is attained when compared to hand-corrected reference fundamental frequency measurements. The use of the predicted fundamental frequency and voicing for speech reconstruction is shown to give very similar speech quality to that obtained using the reference fundamental frequency and voicing.
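The core of predicting fundamental frequency from MFCCs through a joint density reduces, for a single Gaussian component, to the Gaussian conditional-mean formula; mixture and HMM versions weight this per component. A sketch with made-up joint statistics (the dimensions and numbers are illustrative, not from the paper):

```python
import numpy as np

def conditional_mean(x, mu, Sigma, dx):
    """E[y | x] for a joint Gaussian over z = [x; y], with x of dimension dx."""
    mu_x, mu_y = mu[:dx], mu[dx:]
    S_xx = Sigma[:dx, :dx]
    S_yx = Sigma[dx:, :dx]
    return mu_y + S_yx @ np.linalg.solve(S_xx, x - mu_x)

# Toy joint statistics: 2-D "MFCC" x, 1-D "log-F0" y, positively correlated.
mu = np.array([0.0, 0.0, 5.0])
Sigma = np.array([[1.0, 0.0, 0.5],
                  [0.0, 1.0, 0.0],
                  [0.5, 0.0, 1.0]])
print(conditional_mean(np.array([1.0, 0.0]), mu, Sigma, dx=2))  # [5.5]
```

In a full GMM regression, each component contributes such a conditional mean, weighted by its posterior responsibility for the observed MFCC vector; the HMM scheme additionally conditions the component set on the decoded state.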

7.
陈雪勤  赵鹤鸣 《声学学报》2013,38(2):195-200
To improve the conversion performance of the vocal-tract system in whispered-speech conversion, and given that fixed-value conversion performs poorly in speaker-independent whispered-speech conversion systems, a universal background model is proposed to build a speaker-independent vocal-tract conversion model. Further, to address the error in the acoustic probability-density statistical model caused by the large number of components in the universal background model, a posterior-probability and effective-Gaussian-component selection method based on minimum spectral distortion is proposed to optimize the conversion of feature vectors. The Itakura-Saito spectral distortion measure is defined as the performance index for analyzing and comparing the model. Experiments show that the average spectral distortion of feature vectors converted with the universal background model is better than with the fixed-value offset method, with markedly better stability. On top of the universal background model, the effective Gaussian-component selection method further improves the performance index by 5.11%, and subjective listening tests show that the method improves the intelligibility and accuracy of the converted speech.

8.
A frequency bin-wise nonlinear masking algorithm is proposed in the spectrogram domain for speech segregation in convolutive mixtures. The contributive weight from each speech source to a time-frequency unit of the mixture spectrogram is estimated by a nonlinear function based on location cues. For each sound source, a non-binary mask is formed from the estimated weights and is multiplied to the mixture spectrogram to extract the sound. Head-related transfer functions (HRTFs) are used to simulate convolutive sound mixtures perceived by listeners. Simulation results show our proposed method outperforms convolutive independent component analysis and degenerate unmixing and estimation technique methods in almost all test conditions.

9.
For voice conversion with non-parallel corpora and without joint training, a method based on a structured Gaussian mixture model in the cepstrum eigenspace is proposed. After the speakers' cepstral feature parameters are extracted, eigenvectors computed from their scatter matrices form the cepstrum eigenspace, in which the Structured Gaussian Mixture Model in Eigen Space (SGMM-ES) is trained. The source and target speakers' independently trained SGMM-ES are matched and aligned according to the Acoustical Universal Structure (AUS) principle, finally yielding the short-time spectrum transform function in the cepstrum eigenspace. Experimental results show that the average target-speaker identification rate of the converted speech reaches 95.25% and the average spectral distortion is 1.25, improvements of 0.8% and 7.3%, respectively, over the SGMM method based on the original cepstral feature space, while ABX and MOS evaluations show that the conversion performance is very close to that of the traditional parallel-corpora method. These results show that voice conversion with non-parallel corpora using a structured Gaussian mixture model in the cepstrum eigenspace is effective.

10.
顾晓江  赵鹤鸣  吕岗 《声学学报》2012,37(2):198-203
To improve the recognition rate for short-duration whispered speakers under channel mismatch, a hybrid compensation method operating in both the model and feature domains is proposed. Based on joint factor analysis in the model-training stage, the method estimates the speaker and channel spaces of the training speech, extracts the speaker factors, and eliminates the channel factors; then, in the test stage, it maps the channel factors of the test speech into the feature space for feature compensation, thus removing channel information in both the model and feature domains and improving the recognition rate. Experimental results show that the hybrid compensation method achieves similar recognition rates under three different channel training conditions, and the new method outperforms joint factor analysis on short-duration whispered-speech tests.

11.
In order to increase the recognition rate for short-duration whispered speakers under varying channel conditions, a hybrid compensation in the model and feature domains is proposed. The method is based on joint factor analysis in the model-training stage: it extracts the speaker factor and eliminates the channel factor by estimating the speaker and channel spaces of the training speech. In the test stage, the channel factor of the test speech is projected into the feature space for feature compensation, so channel information is removed in both the model and feature domains to improve the recognition rate. The experimental results show that the hybrid compensation obtains similar recognition rates under the three different training channel conditions, and the method is more effective than joint factor analysis in tests on short whispered speech.

12.
Codebook-based single-microphone noise suppressors, which exploit prior knowledge about speech and noise statistics, provide better performance in nonstationary noise. However, because the enhancement involves a joint optimization over the speech and noise codebooks, it incurs high computational complexity. A codebook-based method is proposed that uses a reference signal observed by a bone-conduction microphone and a mapping between air- and bone-conduction codebook entries generated during an offline training phase. A smaller subset of air-conducted speech codebook entries that accurately models the clean speech signal is selected using this reference signal. Experiments support the expected improvement in performance at low computational complexity.
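The subset-selection step can be sketched as a nearest-neighbour lookup through the paired codebooks. A minimal sketch under toy assumptions (random codebooks, Euclidean distance, identity mapping between entries); the paper's trained mapping and the downstream suppressor are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy paired codebooks learned "offline": entry i of the bone-conduction
# codebook is assumed to map to entry i of the air-conduction codebook.
bone_cb = rng.random((32, 4))
air_cb = rng.random((32, 4))

def select_subset(bone_obs, k=5):
    """Select the k air-conduction entries whose paired bone-conduction
    entries lie closest to the observed bone-conduction feature."""
    dist = np.linalg.norm(bone_cb - bone_obs, axis=1)
    return air_cb[np.argsort(dist)[:k]]

subset = select_subset(bone_cb[7])    # observation identical to entry 7
print(subset.shape)                   # (5, 4); air_cb[7] ranks first
```

The joint speech/noise optimization then runs over only these `k` candidates instead of the full codebook, which is where the complexity saving comes from.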

13.
俞一彪  曾道建  姜莹 《声学学报》2012,37(3):346-352
A voice conversion method based on fully independent speaker speech models is proposed. Each speaker first trains a Structured Gaussian Mixture Model (SGMM) on his or her own corpus; then, based on the source and target speakers' models, matching and Gaussian-distribution alignment are performed using the Acoustical Universal Structure (AUS), finally yielding the transform function for voice conversion. ABX and MOS experiments show conversion performance close to the traditional joint training method on parallel corpora, with a target-speaker identification rate of 94.5% for the converted speech. These results demonstrate that the proposed method not only converts well but also requires little training and offers good system extensibility.

14.
Since noisy speech is difficult to convert effectively, a noise-robust voice conversion algorithm with joint-dictionary optimization is proposed. In forming the joint dictionary, the speech dictionary is optimized with a Backward Elimination (BE) algorithm, and a noise dictionary is introduced so that the noisy speech matches the joint dictionary. Experimental results show that, without sacrificing conversion quality, the backward elimination algorithm reduces the number of dictionary frames and the computational load. Under low SNR and various noise environments, the algorithm converts better than the traditional NMF algorithm and the NMF conversion algorithm with spectral-subtraction denoising; introducing the noise dictionary improves the noise robustness of the voice conversion system.

15.
To improve the accuracy of deception detection in Chinese speech, a method of sparse decomposition of the signal's cepstral parameters is proposed. First, a wavelet-packet filter bank divides the speech signal into multiple sub-bands; the log energy of each sub-band is computed and a discrete cosine transform is applied to extract wavelet-packet band cepstral coefficients, which are combined with Mel-frequency cepstral coefficients to obtain the cepstral parameters. Second, using the K-singular value decomposition (K-SVD) method, an over-complete mixture dictionary is trained on the cepstral parameter sets of both deceptive and truthful speech, and sparse features are extracted by sparse-coding the parameter sets over this dictionary with the orthogonal matching pursuit algorithm. Finally, recognition experiments are performed with several classification models. Experimental results show that the sparse-decomposition method optimizes better than traditional dimension-reduction methods; the recommended sparse spectral feature reaches a best recognition rate of 78.34%, exceeding the other feature parameters and significantly improving deception-detection accuracy.

16.
石倩  陈航艇  张鹏远 《声学学报》2022,47(1):139-150
A speech enhancement algorithm that initializes a spatial mixture probability model from the direction of arrival is proposed. The direction of arrival of the source is estimated by source localization, the relative transfer function is computed from it, and a spatial covariance matrix is then constructed to initialize the spatial mixture probability model. It is shown that when the relative transfer function serves as the principal eigenvector of the speech covariance matrix among the model parameters, the probability distribution of the spatial mixture model attains its maximum, which makes the expectation-maximization algorithm converge more easily during iteration, so as to obtain the desired mask...

17.
In order to improve the performance of deception detection based on Chinese speech signals, a method of sparse decomposition of spectral features is proposed. First, the wavelet packet transform is applied to divide the speech signal into multiple sub-bands. Wavelet-packet band cepstral features are obtained by applying the discrete cosine transform to the logarithmic energy of each sub-band. The cepstral feature is generated by combining the Mel Frequency Cepstral Coefficients and the Wavelet Packet Band Cepstral Coefficients. Second, the K-singular value decomposition algorithm is employed to train an over-complete mixture dictionary on both the truthful and deceptive feature sets, and an orthogonal matching pursuit algorithm performs sparse coding over this mixture dictionary to obtain the sparse features. Finally, recognition experiments are performed with various classification models. Experimental results show that the sparse decomposition method performs better than conventional dimension-reduction methods. The recognition accuracy of the proposed method is 78.34%, higher than that of methods using other features, significantly improving the recognition ability of the deception detection system.
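The orthogonal matching pursuit step used for sparse coding can be sketched in numpy. A minimal sketch, assuming a random unit-norm dictionary rather than a K-SVD-trained one; the function and variable names are illustrative:

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal matching pursuit: sparse-code x over dictionary D
    (columns assumed unit-norm) using at most k atoms."""
    residual = x.astype(float).copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))   # most correlated atom
        if j not in support:
            support.append(j)
        # Least-squares refit on the chosen support, then update the residual.
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ sol
    coef[support] = sol
    return coef

rng = np.random.default_rng(3)
D = rng.standard_normal((20, 40))
D /= np.linalg.norm(D, axis=0)          # normalize atoms to unit norm
x = 2.0 * D[:, 5] - 1.5 * D[:, 17]      # a truly 2-sparse signal
c = omp(D, x, k=2)
print(np.flatnonzero(np.abs(c) > 1e-8)) # recovered support (ideally {5, 17})
```

In the paper's pipeline the resulting sparse coefficient vectors, computed over the K-SVD mixture dictionary, serve as the features fed to the classifiers.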

18.
Aiming at the problem that current supervised speech enhancement ignores how the magnitude-spectrum similarity among clean speech, noise, and noisy speech affects enhancement quality, a speech enhancement method combining an accurate ratio mask (ARM) with a deep neural network (DNN) is proposed. Using the normalized cross-correlation coefficients between the magnitude spectra of clean and noisy speech and of noise and noisy speech, an accurate ratio mask based on the time-frequency ideal ratio mask is designed as the target mask. A DNN trained with the clean-speech and noise magnitude spectra as targets serves as the baseline; its outputs are used to estimate the target mask, the baseline DNN and target mask are jointly optimized, and the enhanced speech is estimated from the noisy speech by the target mask. Moreover, considering the discriminative information between clean speech and noise, a discriminative training function replaces the mean squared error (MSE) as the baseline DNN's objective, making the network outputs more accurate. Experiments show that the discriminative training function improves the enhancement of both the baseline DNN and the whole jointly optimized network; under both matched and unmatched noise, the method achieves higher average Perceptual Evaluation of Speech Quality (PESQ) and Short-Time Objective Intelligibility (STOI) than other common DNN methods, and the enhanced speech retains more speech components while suppressing noise more markedly.
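The ideal ratio mask that the ARM refines can be sketched as follows; the spectra here are synthetic stand-ins, and the paper's correlation-based refinement and DNN estimation are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(4)

def ideal_ratio_mask(clean_mag, noise_mag, eps=1e-12):
    """Time-frequency ideal ratio mask from clean-speech and noise
    magnitude spectrograms; all values lie in [0, 1]."""
    num = clean_mag ** 2
    return num / (num + noise_mag ** 2 + eps)

clean = rng.random((8, 10))              # synthetic magnitude spectrograms
noise = rng.random((8, 10))
mask = ideal_ratio_mask(clean, noise)
mixed = clean + noise                    # simple additive mixture of magnitudes
enhanced = mask * mixed                  # mask applied to the noisy spectrum
print(bool(mask.min() >= 0.0 and mask.max() <= 1.0))   # True
```

In the proposed method, the normalized cross-correlation terms adjust this ratio so the target mask better reflects how similar the clean and noise spectra actually are to the observed noisy spectrum.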

19.
A noise robust voice conversion algorithm based on joint dictionary optimization is proposed to effectively convert noisy source speech into the target one. In composition of the joint dictionary, speech dictionary is optimized using backward elimination algorithm. At the same time, a noise dictionary is introduced to match the noisy speech. The experimental results show that the backward elimination algorithm can reduce the number of dictionary frames and reduce the amount of calculation while ensuring the conversion effect. In low SNR and multiple noise environments, the algorithm has better conversion effect than both the traditional NMF algorithm and the NMF conversion algorithm plus spectral subtraction de-noising. The proposed algorithm improves the robustness of voice conversion system.

20.
Monaural speech segregation has proven to be extremely challenging. While efforts in computational auditory scene analysis have led to considerable progress in voiced speech segregation, little attention has been given to unvoiced speech, which lacks harmonic structure and has weaker energy, hence more susceptible to interference. This study proposes a new approach to the problem of segregating unvoiced speech from nonspeech interference. The study first addresses the question of how much speech is unvoiced. The segregation process occurs in two stages: Segmentation and grouping. In segmentation, the proposed model decomposes an input mixture into contiguous time-frequency segments by a multiscale analysis of event onsets and offsets. Grouping of unvoiced segments is based on Bayesian classification of acoustic-phonetic features. The proposed model for unvoiced speech segregation joins an existing model for voiced speech segregation to produce an overall system that can deal with both voiced and unvoiced speech. Systematic evaluation shows that the proposed system extracts a majority of unvoiced speech without including much interference, and it performs substantially better than spectral subtraction.
