Similar Documents
17 similar documents found (search time: 168 ms)
1.
For voice conversion with non-parallel corpora and without joint training, a method based on a structured Gaussian mixture model in a cepstrum eigenspace is proposed. After cepstral feature parameters are extracted from each speaker's speech, the eigenvectors of their scatter matrix are computed to construct a cepstrum eigenspace, in which a Structured Gaussian Mixture Model in Eigen Space (SGMM-ES) is trained. The independently trained SGMM-ES of the source and target speakers are matched and aligned according to the Acoustical Universal Structure (AUS) principle, finally yielding a short-time spectrum conversion function in the cepstrum eigenspace. Experimental results show that the average target-speaker identification rate of the converted speech reaches 95.25% and the average spectral distortion is 1.25, improvements of 0.8% and 7.3% respectively over the SGMM method in the original cepstral feature space; ABX and MOS evaluations indicate that the conversion performance is very close to that of traditional parallel-corpus methods. These results demonstrate that the structured Gaussian mixture model in cepstrum eigenspace is effective for voice conversion under non-parallel corpus conditions.
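A minimal numpy sketch of the cepstrum-eigenspace construction described in this abstract; the function name, dimensions, and random data are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def eigenspace_projection(features, n_dims):
    """Project cepstral feature frames onto the leading eigenvectors of
    their scatter matrix, forming a low-dimensional cepstrum eigenspace."""
    mean = features.mean(axis=0)
    centered = features - mean
    scatter = centered.T @ centered              # scatter matrix of the frames
    eigvals, eigvecs = np.linalg.eigh(scatter)   # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:n_dims]   # indices of the largest ones
    basis = eigvecs[:, order]                    # eigenspace basis vectors
    return centered @ basis, basis

rng = np.random.default_rng(0)
frames = rng.normal(size=(200, 13))              # toy "cepstral" frames
projected, basis = eigenspace_projection(frames, 5)
print(projected.shape)                           # (200, 5)
```

Training the SGMM-ES would then proceed on the projected frames rather than on the raw cepstra.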

2.
To address the difficulty of achieving effective voice conversion from noisy speech, a noise-robust voice conversion algorithm with joint dictionary optimization is proposed. In the construction of the joint dictionary, the speech dictionary is optimized with a Backward Elimination (BE) algorithm, and a noise dictionary is introduced so that the noisy speech matches the joint dictionary. Experimental results show that, while preserving conversion quality, backward elimination reduces the number of dictionary frames and thus the computational load. Under low signal-to-noise ratios and multiple noise environments, the proposed algorithm converts better than both the traditional NMF algorithm and an NMF conversion algorithm with spectral-subtraction denoising; the introduction of the noise dictionary improves the noise robustness of the voice conversion system.
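A hedged numpy sketch of the joint-dictionary idea: activations for a noisy spectrogram are computed against concatenated speech-plus-noise atoms, and only the speech part is reconstructed. Dictionary sizes and data are made up, and the paper's backward-elimination step is not shown:

```python
import numpy as np

def nmf_activations(V, W, n_iter=200, eps=1e-9):
    """Multiplicative updates for H in V ~= W @ H with the dictionary W held
    fixed (Euclidean cost), as when matching noisy speech to a joint dictionary."""
    rng = np.random.default_rng(1)
    H = rng.random((W.shape[1], V.shape[1]))     # positive initialization
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)     # stays non-negative
    return H

rng = np.random.default_rng(0)
W_speech = rng.random((64, 10))                  # speech dictionary atoms
W_noise = rng.random((64, 4))                    # added noise dictionary atoms
W_joint = np.hstack([W_speech, W_noise])         # joint dictionary
V = rng.random((64, 30))                         # toy noisy magnitude spectrogram
H = nmf_activations(V, W_joint)
V_speech = W_speech @ H[:10]                     # reconstruct the speech part only
```

Discarding the noise-atom activations at reconstruction time is what gives the conversion its robustness to the noise component.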

3.
俞一彪, 曾道建, 姜莹. 《声学学报》 (Acta Acustica), 2012, 37(3): 346-352
A voice conversion method based on fully independent speaker models is proposed. Each speaker first trains a Structured Gaussian Mixture Model (SGMM) on his or her own corpus; the source and target models are then matched and their Gaussian components aligned via the Acoustical Universal Structure (AUS), finally yielding the conversion function. ABX and MOS experiments show conversion performance close to that of traditional joint training on parallel corpora, with a target-speaker identification rate of 94.5% for the converted speech. These results show that the proposed method not only achieves good conversion performance but also requires little training and extends well to new speakers.

5.
Most current deep-learning methods for voice conversion rely on large amounts of training data to generate high-quality speech. This paper proposes a voice conversion framework based on an average model and an error reduction network that works with a limited amount of training data. First, an average model based on the CBHG network is trained on multi-speaker speech data that excludes the source and target speakers; the average model is then adapted on the limited amount of target speech; finally, an error reduction network is proposed that further improves the quality of the converted speech. Experiments show that the proposed framework handles limited training data flexibly and outperforms conventional frameworks in both objective and subjective evaluations.

6.
A noise-robust voice conversion model combining mel-spectrogram enhancement with feature disentanglement, MENR-VC, is proposed. Three encoders extract content, fundamental-frequency, and speaker-identity features; mutual information is introduced as a correlation measure and minimized to disentangle these feature vectors, enabling conversion of speaker identity. To improve the spectral quality of noisy speech, a deep complex recurrent convolutional network enhances the noisy mel-spectrogram before it is fed to the speaker encoder, and a mel-spectrogram enhancement loss is added to the model's overall training objective. Simulation results show that, compared with the best comparable noise-robust conversion methods, the proposed model improves the mean opinion scores for naturalness and speaker similarity of the converted speech by 0.12 and 0.07 respectively. The model thus addresses the problem that training conversion models directly on noisy speech makes deep-network training hard to converge and sharply degrades converted-speech quality.

7.
张文林, 屈丹, 李弼程. 《声学学报》 (Acta Acustica), 2014, 39(4): 523-530
To address the inability of existing subspace adaptation methods to determine the optimal speaker subspace, a speaker adaptation method based on matching pursuit is proposed. Speaker adaptation is cast as sparse decomposition of a high-dimensional signal: a speaker dictionary is constructed jointly from eigenvoices and reference-speaker supervectors, exploiting the strengths of each; following the matching-pursuit principle, iterative optimization determines the optimal speaker-subspace dimension and its basis vectors a posteriori. A detection-and-removal mechanism for redundant basis vectors ensures the algorithm's stability, and a fast recursive algorithm yields the new speaker's coordinates. Supervised speaker adaptation experiments on Mandarin continuous speech recognition show a relative improvement of 1.9% in average tonal-syllable accuracy over the eigenvoice and reference-speaker-weighting methods.
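The matching-pursuit step described here can be sketched as a plain greedy pursuit over a speaker dictionary (toy orthonormal dictionary; the eigenvoice/reference-speaker dictionary construction and redundancy removal are omitted):

```python
import numpy as np

def matching_pursuit(y, D, n_atoms):
    """Greedy matching pursuit: approximate y as a sparse combination of
    unit-norm dictionary columns, picking the best-correlated atom each step."""
    residual = y.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        corr = D.T @ residual             # correlation with every atom
        k = int(np.argmax(np.abs(corr)))  # best-matching atom
        coeffs[k] += corr[k]
        residual -= corr[k] * D[:, k]     # remove its contribution
    return coeffs, residual

# sanity check on an orthonormal dictionary: MP recovers the coefficients
y = np.array([3.0, 0.0, -2.0, 0.0])
coeffs, residual = matching_pursuit(y, np.eye(4), n_atoms=2)
print(coeffs)   # [ 3.  0. -2.  0.]
```

The number of atoms actually selected plays the role of the a-posteriori subspace dimension in the paper's method.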

8.
To recover clean speech from a noisy signal, a single-channel speech enhancement algorithm using gender-dependent models is proposed. Specifically, in the training stage, gender-dependent deep neural network/non-negative matrix factorization (DNN-NMF) models are trained to estimate the NMF weight parameters; in the test stage, an algorithm based on NMF with a group-sparsity penalty determines the gender of the speaker in the test speech, after which the corresponding model estimates the weights and, combined with the pre-trained dictionaries, performs speech enhancement. Experimental results show that the proposed algorithm outperforms several NMF-based and DNN-based algorithms in both noise suppression and speech quality.

9.
俞一彪, 王朔中. 《声学学报》 (Acta Acustica), 2005, 30(6): 536-541
A full feature vector set model and a mutual-information evaluation method are proposed for text-independent speaker recognition. The model is formed by clustering a speaker's speech data in feature space and comprehensively reflects the speaker's individual characteristics. For likelihood computation and decision, a mutual-information evaluation method is proposed that jointly analyzes similarity in distance space and in information space and applies a maximum mutual information decision rule. Experiments analyzed speaker recognition performance with the full feature vector set model and mutual-information evaluation using both linear prediction cepstral coefficients (LPCC) and mel-frequency cepstral coefficients (MFCC), and compared them with Gaussian mixture models. The results show that the proposed model and evaluation method fully capture speaker characteristics, effectively measure the similarity of speakers' speech, and deliver good recognition performance.
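The clustering that forms the full feature vector set model can be sketched with plain k-means (illustrative only; the paper's mutual-information scoring is not shown, and data and parameters are made up):

```python
import numpy as np

def feature_vector_set(X, k, n_iter=50, seed=0):
    """Cluster one speaker's feature vectors with plain k-means; the k
    centroids act as that speaker's feature-vector-set model."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(n_iter):
        dist = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dist.argmin(axis=1)              # nearest-centroid assignment
        for j in range(k):
            if np.any(labels == j):               # guard against empty clusters
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (30, 2)),     # two toy "speakers' " modes
               rng.normal(8.0, 0.3, (30, 2))])
centers, labels = feature_vector_set(X, k=2)
```

Scoring a test utterance would then compare its frames against each speaker's centroid set rather than against a parametric density.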

10.
To address the sharp performance degradation of earlier speech enhancement algorithms in non-stationary noise, a new single-channel speech enhancement algorithm based on time-frequency dictionary learning is proposed. First, time-frequency dictionary learning is used to model prior information about the spectral structure of the noise, embedded in a convolutive non-negative matrix factorization framework. Then, with the noise time-frequency dictionary fixed, multiplicative update formulas are derived for the time-varying gains and the speech time-frequency dictionary. Finally, these updates estimate the time-varying gain coefficients of speech and noise together with the speech dictionary; the speech magnitude spectrum is reconstructed by convolving the speech dictionary with its time-varying gains, and a binary time-frequency masking method removes the residual noise. Experimental results show that the algorithm scores better on multiple speech-quality metrics; under non-stationary noise and low SNR it removes noise more effectively than multi-band spectral subtraction and non-negative sparse coding denoising, and the enhanced speech has better quality.
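The final masking step can be sketched as a binary time-frequency mask that keeps only the bins where the reconstructed speech magnitude dominates the noise estimate (toy 2x2 arrays; the convolutive NMF stage itself is omitted):

```python
import numpy as np

def binary_mask(speech_est, noise_est):
    """Binary time-frequency mask: 1 where the estimated speech magnitude
    dominates the noise estimate, 0 elsewhere."""
    return (speech_est > noise_est).astype(float)

speech_est = np.array([[4.0, 0.2], [1.0, 3.0]])   # toy speech reconstruction
noise_est  = np.array([[1.0, 2.0], [2.0, 0.5]])   # toy noise reconstruction
noisy_mag  = np.array([[5.0, 2.0], [3.0, 3.5]])   # observed magnitudes
cleaned = noisy_mag * binary_mask(speech_est, noise_est)  # keep dominant bins
```

Hard 0/1 masking trades some speech distortion for strong suppression of bins the model attributes to noise; soft (ratio) masks are the usual refinement.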

11.
Under the condition of a limited target-speaker corpus, this paper proposes a voice conversion algorithm using a unified tensor dictionary. First, parallel speech from N speakers is selected randomly from the speech corpus to build the basis of the tensor dictionary. Then, after multi-series dynamic time warping of the chosen speech, N two-dimensional basic dictionaries are generated, which constitute the unified tensor dictionary. In the conversion stage, the dictionaries of the source and target speakers are established as linear combinations of the N basic dictionaries using the two speakers' speech. Experimental results show that with 14 basic speakers the algorithm matches the performance of the traditional NMF-based method while requiring little target-speaker data, which greatly facilitates the practical application of voice conversion systems.

12.
A noise-robust voice conversion algorithm based on joint dictionary optimization is proposed to effectively convert noisy source speech into the target speech. In composing the joint dictionary, the speech dictionary is optimized with a backward elimination algorithm, and a noise dictionary is introduced to match the noisy speech. Experimental results show that backward elimination reduces the number of dictionary frames and the computational load while preserving the conversion effect. In low-SNR and multiple-noise environments, the algorithm converts better than both the traditional NMF algorithm and the NMF conversion algorithm with spectral-subtraction denoising. The proposed algorithm improves the robustness of the voice conversion system.

13.
The goal of cross-language voice conversion is to preserve the speech characteristics of one speaker when that speaker's speech is translated and used to synthesize speech in another language. In this paper, two preliminary studies are reported: a statistical analysis of spectrum differences between languages, and a first attempt at cross-language voice conversion. Speech uttered by a bilingual speaker is analyzed to examine the spectrum differences between English and Japanese. The experimental results show that (1) the codebook size for mixed English and Japanese speech should be almost twice the codebook size of either language alone; (2) although many code vectors occur in both English and Japanese, some tend to predominate in one language or the other; (3) code vectors that predominantly occur in English are contained in the phonemes /r/, /ae/, /f/, /s/, and code vectors that predominantly occur in Japanese are contained in /i/, /u/, /N/; and (4) judging from listening tests, listeners cannot reliably distinguish English speech decoded by a Japanese codebook from English speech decoded by an English codebook. A voice conversion algorithm based on codebook mapping was applied to cross-language voice conversion, and its performance was somewhat less effective than for voice conversion within the same language.
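A codebook-mapping conversion of the kind evaluated here can be sketched as nearest-codevector quantization followed by a lookup in a paired target codebook (toy 2-D codebooks, not the paper's trained ones):

```python
import numpy as np

def codebook_convert(frames, src_cb, tgt_cb):
    """Codebook-mapping conversion: quantize each source frame to its nearest
    source codevector and output the paired target codevector."""
    # squared distance from every frame to every source codevector
    dist = ((frames[:, None, :] - src_cb[None, :, :]) ** 2).sum(-1)
    nearest = dist.argmin(axis=1)
    return tgt_cb[nearest]

src_cb = np.array([[0.0, 0.0], [1.0, 1.0]])   # source codebook
tgt_cb = np.array([[5.0, 5.0], [9.0, 9.0]])   # paired target codebook
frames = np.array([[0.1, -0.2], [0.9, 1.2]])  # toy source feature frames
out = codebook_convert(frames, src_cb, tgt_cb)
```

The quality ceiling of this scheme is the quantization error of the codebooks, which is one reason its cross-language performance trails same-language conversion.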

14.
The voice conversion (VC) technique has recently emerged as a branch of speech synthesis dealing with speaker identity. In this work, linear prediction (LP) analysis is carried out on speech signals to obtain acoustical parameters related to speaker identity: the fundamental frequency (pitch), voicing decision, signal energy, and vocal tract parameters. Once these parameters are established for two speakers designated as source and target, statistical mapping functions are applied to modify them; the mapping functions are derived from the parameters in such a way that the source parameters resemble those of the target. Finally, the modified parameters are used to produce the new speech signal. To illustrate the feasibility of the proposed approach, simple-to-use voice conversion software has been developed. The technique has shown satisfactory results, with the synthesized speech signal virtually matching that of the target speaker.
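The LP analysis step can be sketched via the autocorrelation method (a minimal solver for the Yule-Walker equations; windowing, pre-emphasis, and the statistical mapping functions themselves are omitted):

```python
import numpy as np

def lpc(signal, order):
    """Autocorrelation-method linear prediction: solve the Yule-Walker
    equations R a = r for the prediction coefficients a."""
    n = len(signal)
    r = np.array([signal[: n - k] @ signal[k:] for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:])   # s[t] ~= sum_k a[k] * s[t - 1 - k]

# a decaying AR(1) signal s[t] = 0.9 * s[t-1] is predicted almost exactly
signal = 0.9 ** np.arange(400)
a = lpc(signal, order=1)
```

The coefficients `a` summarize the vocal-tract filter; pitch, voicing, and energy are extracted separately from the LP residual in a full analysis pipeline.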

15.
Spectro-temporal modulations of speech encode speech structures and speaker characteristics. An algorithm that distinguishes speech from non-speech based on spectro-temporal modulation energies is proposed and evaluated in robust text-independent closed-set speaker identification simulations using the TIMIT and GRID corpora. Simulation results show that the proposed method produces much higher speaker identification rates in all signal-to-noise ratio (SNR) conditions than a baseline system using mel-frequency cepstral coefficients. In addition, the proposed method also outperforms a system using auditory-based nonnegative tensor cepstral coefficients [Q. Wu and L. Zhang, "Auditory sparse representation for robust speaker recognition based on tensor structure," EURASIP J. Audio, Speech, Music Process. 2008, 578612 (2008)] in low SNR (≤ 10 dB) conditions.
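A crude stand-in for spectro-temporal modulation energy is the 2-D Fourier energy of a mean-removed log-spectrogram patch (illustrative only; the paper's auditory front end and modulation filter bank are not specified here):

```python
import numpy as np

def modulation_energy(log_spec):
    """Spectro-temporal modulation energy: squared magnitude of the 2-D
    Fourier transform of a mean-removed (frequency x time) log-spectrogram."""
    patch = log_spec - log_spec.mean()   # remove DC so energy reflects modulation
    return np.abs(np.fft.fft2(patch)) ** 2

rng = np.random.default_rng(0)
spec = rng.random((32, 40))              # toy log-spectrogram patch
E = modulation_energy(spec)              # energy per (spectral, temporal) rate
```

Speech concentrates energy at low temporal modulation rates (a few Hz), which is what a speech/non-speech detector built on these features exploits.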

16.
A new method of voice conversion in a cepstrum eigenspace based on a structured Gaussian mixture model is proposed for non-parallel corpora without joint training. For each speaker, cepstral features of speech are extracted and mapped to the eigenspace formed by the eigenvectors of their scatter matrix, and the Structured Gaussian Mixture Model in EigenSpace (SGMM-ES) is trained there. The source and target speakers' SGMM-ES are matched according to the Acoustic Universal Structure (AUS) principle to obtain the spectrum conversion function. Experimental results show that the speaker identification rate of the converted speech reaches 95.25% and the average cepstrum distortion is 1.25, improvements of 0.8% and 7.3% respectively over the SGMM method. ABX and MOS evaluations indicate that the conversion performance is quite close to that of the traditional method under parallel-corpus conditions. The results show that the eigenspace-based structured Gaussian mixture model is effective for voice conversion with non-parallel corpora.

17.
惠琳, 俞一彪. 《声学学报》 (Acta Acustica), 2017, 42(6): 762-768
A method for age-related voice conversion is proposed that combines a group of short-time spectrum universal background models with prosodic parameters. For spectral conversion, short-time spectral coefficients are extracted from the speakers in each age group and Gaussian mixture models are built; the speakers are then clustered by speech-feature similarity, a universal background model is trained for each cluster, and a universal-background-model group together with a set of short-time spectrum conversion functions is finally obtained. After spectral conversion, the formants are further fine-tuned. For prosody conversion, a single Gaussian model of the fundamental frequency and an average duration-rate model are built to derive the conversion functions. Experimental results show that the proposed method clearly outperforms the traditional bilinear method on ABX and MOS evaluations, and improves the log-likelihood change rate by 4% relative to the single universal-background-model method. These results indicate that the converted speech has good target-speaker tendency together with good speech quality, a clear improvement over traditional methods.
