Similar articles
18 similar articles found (search time: 490 ms)
1.
A generalized model is proposed that unifies dynamic time warping (DTW) and the hidden Markov model (HMM) within a single acoustic-model framework for speech. Analysis shows that the generalized model is closer to the actual behavior of speech and requires very little storage. A full-syllable Mandarin speech recognizer was also built on the generalized model, and comparative experiments against the discrete HMM and DTW show that for speaker-dependent recognition the generalized model performs on par with DTW and better than the discrete HMM, while for speaker-independent recognition it outperforms both DTW and the discrete HMM.
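For readers unfamiliar with the DTW half of this comparison, a minimal template-matching sketch follows; the Euclidean local distance and symmetric step pattern are common textbook choices, not details taken from the paper.

```python
import numpy as np

def dtw_distance(x, y):
    """Classic dynamic time warping between two feature sequences.

    x: (n, d) array of frame features (e.g. MFCCs) for the template
    y: (m, d) array for the test utterance
    Returns the accumulated alignment cost (illustrative sketch only).
    """
    n, m = len(x), len(y)
    # Local frame-to-frame distances (Euclidean).
    dist = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Symmetric step pattern: match, insertion, deletion.
            acc[i, j] = dist[i - 1, j - 1] + min(
                acc[i - 1, j - 1], acc[i - 1, j], acc[i, j - 1])
    return acc[n, m]
```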

2.
Research on continuous Mandarin speech recognition based on the stochastic trajectory model (cited: 1, self-citations: 0, by others: 1)
Starting from the unreasonable assumptions of the hidden Markov model (HMM), this paper introduces the theoretical mechanism and advantages of the stochastic trajectory model (STM). The STM represents the acoustic observations of a speech unit as clusters of trajectories in parameter space and models the trajectories as mixtures of probability density functions over random state sequences; it thereby avoids the HMM's unreasonable assumptions and rests on a sounder theoretical basis. Based on the characteristics of the STM and of Mandarin, the choice of recognition units for continuous Mandarin speech recognition is discussed, and phone-class units are proposed as the recognition units. Experimental results on STM-based continuous Mandarin speech recognition confirm the effectiveness of the STM and the consistency of the phone-class units.

3.
景春进  陈东东  周琳琦 《应用声学》2014,22(8):2571-2573
To suit the characteristics of warship command training systems, a method of using speech recognition to improve training efficiency is proposed. The linguistic characteristics of warship command instructions are analyzed first; the issues involved in continuous Mandarin speech recognition on the Sphinx platform are then studied, including acoustic model training, language model training, and the recognition engine. Finally, a speaker-independent continuous Mandarin speech recognition system with a medium-sized special-purpose vocabulary is designed and implemented. Experiments with a set of digits and domain-specific terms show that the system's recognition rate improves substantially after acoustic model training. The method offers useful guidance for raising the automation level of warship command training systems.
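As a rough illustration of the Sphinx-based decoding pipeline the abstract describes, here is a minimal sketch assuming the classic pocketsphinx Python bindings; the acoustic-model, language-model, and dictionary paths are placeholders, and the paper's actual configuration is not specified.

```python
from pocketsphinx import Decoder

# Point the decoder at the trained Mandarin models (paths are placeholders).
config = Decoder.default_config()
config.set_string('-hmm', 'zh_cn/acoustic_model')   # trained acoustic model dir
config.set_string('-lm', 'commands.lm')             # trained language model
config.set_string('-dict', 'commands.dic')          # pronunciation dictionary
decoder = Decoder(config)

with open('command.raw', 'rb') as f:                # 16 kHz, 16-bit mono PCM
    decoder.start_utt()
    decoder.process_raw(f.read(), False, True)      # feed the full utterance
    decoder.end_utt()
print(decoder.hyp().hypstr)                         # best hypothesis
```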

4.
This work addresses the Lombard and Loud effects induced by strong noise in speech recognition and proposes a joint compensation method, applied to the training data, for additive noise and the Lombard and Loud effects. For additive noise, spectral addition, the inverse view of spectral subtraction, is applied to the training data in the spectral domain. For Lombard and Loud speech, the training data are compensated based on HMM state alignment; the method jointly accounts for the varied cepstral-domain changes of the different states of different acoustic units in Lombard and Loud speech, and for the changes in duration and relative duration of different acoustic units under the various speaking modes. This data-based multi-mode compensation lets the models adapt automatically to a range of noise and speech-variation conditions, giving strong robustness in heavy noise without degrading recognition in normal environments or for normal speech. Moreover, because the compensation is obtained during training, it adds no computational cost at recognition time.
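A minimal sketch of the "spectral addition" step, understood as the inverse view of spectral subtraction applied to training data; the STFT settings, the phase handling, and the overlap-add details are assumptions, not the paper's exact recipe.

```python
import numpy as np

def spectral_addition(clean, noise, frame=512, hop=256):
    """Add a noise magnitude spectrum to the clean-speech magnitude
    spectrum frame by frame (the inverse of spectral subtraction).
    Illustrative sketch; keeps the clean-speech phase."""
    window = np.hanning(frame)
    out = np.zeros_like(clean)
    for start in range(0, len(clean) - frame, hop):
        seg = clean[start:start + frame] * window
        nseg = noise[start:start + frame] * window
        spec = np.fft.rfft(seg)
        # Sum the magnitudes, then resynthesize with the clean phase.
        mag = np.abs(spec) + np.abs(np.fft.rfft(nseg))
        out[start:start + frame] += np.fft.irfft(
            mag * np.exp(1j * np.angle(spec)))
    return out
```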

5.
Isolated-word recognition of whispered Mandarin speech (cited: 6, self-citations: 0, by others: 6)
杨莉莉  林玮  徐柏龄 《应用声学》2006,25(3):187-192
Whispered-speech recognition has broad application prospects but is a new research topic, and the inherent characteristics of whispered speech, such as its low sound level and the absence of a fundamental frequency, make it difficult. Based on a production model of whispered speech and its acoustic properties, an isolated-word recognition system for whispered Mandarin is built. Because whispered speech has a low signal-to-noise ratio, speech enhancement is required, and tone information is exploited in the recognizer to improve performance. Experimental results show that MFCCs combined with the amplitude envelope serve well as features for automatic whispered Mandarin recognition; HMM-based recognition on a small vocabulary achieves a recognition rate of 90.4%.
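A sketch of the feature extraction the experiment suggests (MFCCs plus a frame-level amplitude envelope), assuming librosa for illustration; the paper does not name a toolkit, and the frame and hop sizes here are arbitrary choices.

```python
import librosa
import numpy as np

# Load a whispered utterance and compute 13 MFCCs per frame.
y, sr = librosa.load("whisper.wav", sr=16000)
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13,
                            n_fft=512, hop_length=160)

# Amplitude envelope: per-frame peak magnitude over the same hop grid.
frames = librosa.util.frame(y, frame_length=512, hop_length=160)
envelope = np.abs(frames).max(axis=0, keepdims=True)

# Stack into one (n_frames, 14) feature matrix for the HMM recognizer.
features = np.vstack([mfcc[:, :envelope.shape[1]], envelope]).T
```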

6.
Acoustic modeling for Mandarin speech recognition based on articulatory features (cited: 3, self-citations: 0, by others: 3)
Articulatory features that characterize Mandarin pronunciation are introduced into acoustic modeling for Mandarin speech recognition. Based on the articulatory properties of Mandarin, nine articulatory features that distinguish vowels, consonants, and tone are defined; neural networks trained with these features as targets produce the posterior probabilities that a speech frame belongs to each feature class, and these posteriors are used as the input features of the acoustic model. The approach is evaluated in a speaker-independent, large-vocabulary spontaneous-dialogue Mandarin recognition system and compared with an acoustic model built on spectral features: at the same decoding speed, the articulatory-feature model reduces the character error rate by a relative 6.8%. When articulatory and spectral features are fused, the combined system reduces the character error rate by a relative 10.1% over the spectral-feature baseline. These results indicate that articulatory-feature-based acoustic models represent the properties of speech more effectively, and that exploiting the complementarity of articulatory and spectral features can further improve recognition performance.
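A minimal sketch of the posterior-feature idea: a small network maps each spectral frame to posteriors over articulatory-feature classes, which then serve as inputs to the acoustic model. Whether the nine features are one softmax or nine binary outputs is not specified in the abstract; a single softmax is assumed here, and all layer sizes are illustrative.

```python
import torch
import torch.nn as nn

N_CLASSES = 9  # the nine articulatory-feature classes from the abstract

class ArticulatoryNet(nn.Module):
    """Frame-level classifier whose posteriors become acoustic-model inputs."""
    def __init__(self, n_in=39, n_hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_in, n_hidden), nn.Sigmoid(),
            nn.Linear(n_hidden, N_CLASSES))

    def forward(self, frames):                          # (batch, n_in)
        return torch.softmax(self.net(frames), dim=-1)  # class posteriors

model = ArticulatoryNet()
posteriors = model(torch.randn(100, 39))  # input features for the acoustic model
```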

7.
Uyghur dialect recognition and related acoustic analysis (cited: 1, self-citations: 0, by others: 1)
Motivated by the practical needs of speech applications such as speech recognition and voiceprint recognition, the acoustic characteristics and recognition of the Hotan dialect are studied for the first time. Hotan-dialect speech is first annotated manually at multiple levels, and the formants, durations, and intensities of the vowels are analyzed statistically to describe the overall vowel pattern of the dialect and the pronunciation characteristics of male and female speakers. Analysis of variance and nonparametric tests applied to formant samples from the three Uyghur dialects then show significant differences in the formant distribution patterns of male vowels, female vowels, and the vowels overall. Finally, Uyghur dialect recognition models are built on the GMM-UBM (Gaussian Mixture Model-Universal Background Model), DNN-UBM (Deep Neural Network-Universal Background Model), and LSTM-UBM (Long Short-Term Memory-Universal Background Model) frameworks, and the discriminability of dialect i-vectors extracted from Mel-frequency cepstral coefficients alone versus MFCCs combined with formant frequencies is compared experimentally. The results show that adding formant coefficients to the features improves dialect discriminability, and that the LSTM-UBM model extracts more discriminative dialect i-vectors than GMM-UBM and DNN-UBM.
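A sketch of the GMM-UBM building block shared by the three models: fit a universal background model on pooled frames, then MAP-adapt its means toward each utterance (the adapted statistics are what i-vector extraction builds on). scikit-learn and the relevance factor r are illustrative assumptions; the paper's toolkit is not stated.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_ubm(frames, n_mix=64):
    """Fit the universal background model on pooled training frames."""
    ubm = GaussianMixture(n_components=n_mix, covariance_type='diag')
    ubm.fit(frames)
    return ubm

def map_adapt_means(ubm, frames, r=16.0):
    """Relevance-MAP adaptation of the UBM means to one utterance."""
    post = ubm.predict_proba(frames)           # (T, n_mix) responsibilities
    n_k = post.sum(axis=0)                     # zeroth-order (soft) counts
    f_k = post.T @ frames                      # first-order statistics
    alpha = (n_k / (n_k + r))[:, None]         # data-dependent mixing weight
    return alpha * (f_k / np.maximum(n_k[:, None], 1e-8)) \
        + (1 - alpha) * ubm.means_

ubm = train_ubm(np.random.randn(5000, 14))     # e.g. 13 MFCCs + 1 formant value
adapted_means = map_adapt_means(ubm, np.random.randn(300, 14))
```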

8.
俞一彪  曾道建  姜莹 《声学学报》2012,37(3):346-352
A voice conversion method based on fully independent speaker models is proposed. Each speaker first trains a structured Gaussian mixture model (SGMM) on his or her own corpus; the source and target speakers' models are then matched, and their Gaussian components aligned, by means of an acoustical universal structure (AUS), yielding the conversion function used for voice conversion. ABX and MOS tests show conversion performance close to that of the conventional joint training on parallel corpora, with a target-speaker identification accuracy of 94.5% for the converted speech. These results demonstrate that the proposed method not only converts well but also requires little training and scales readily.

9.
In current deep-neural-network models, the hidden layers can be stacked many deep, giving strong capacity for complex problems, but the node connections within each layer are independent across time, so the model cannot exploit contextual information in a speech sequence to improve recognition; conventional recurrent neural networks improve on this but use only the preceding context. To address this, a model combining a bidirectional recurrent neural network, which can exploit both the preceding and following context of a speech sequence, with a deep neural network is applied to speech recognition. The model has five hidden layers: the third is a bidirectional recurrent layer and the others are standard feed-forward layers. Experiments show that adding the bidirectional recurrent layer improves recognition accuracy over the other models; that noise strongly affects bidirectional-RNN Mandarin recognition, especially when the noise types added to the training and test sets differ, so a model trained on a single noisy condition cannot cope with speech in other noise types; and that when the number of hidden-layer neurons is varied, accuracy does not grow monotonically with neuron count but begins to fall once the count passes a certain point.
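A sketch of the architecture as described, with five hidden layers and the third bidirectional-recurrent; the layer widths, the choice of LSTM cells, and the output size are assumptions not fixed by the abstract.

```python
import torch
import torch.nn as nn

class HybridDNNBRNN(nn.Module):
    """Five hidden layers: 1-2 feed-forward, 3 bidirectional, 4-5 feed-forward."""
    def __init__(self, n_in=39, width=512, n_out=1000):
        super().__init__()
        self.front = nn.Sequential(                 # hidden layers 1-2
            nn.Linear(n_in, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU())
        self.brnn = nn.LSTM(width, width // 2,      # hidden layer 3
                            batch_first=True, bidirectional=True)
        self.back = nn.Sequential(                  # hidden layers 4-5 + output
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, n_out))

    def forward(self, x):        # x: (batch, frames, n_in)
        h = self.front(x)
        h, _ = self.brnn(h)      # concatenated forward/backward states
        return self.back(h)      # per-frame state scores

scores = HybridDNNBRNN()(torch.randn(4, 200, 39))
```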

10.
Conventional speech recognition methods give low recognition rates at low signal-to-noise ratios. To make recognition more adaptable to the environment and more noise-robust, this work starts from the nonhomogeneous hidden Markov model (HMM) and combines it with an adaptive functional-link neural network to train a hybrid speech model that adapts to environmental change; the hybrid model is then used for recognition. Experimental results show that the model is well suited to recognizing noisy speech and can raise the recognition rate, particularly at low SNR.
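A minimal sketch of the functional-link idea: expand each input frame through fixed nonlinear basis functions and train only a linear layer on top. The trigonometric expansion below is the common textbook variant; the paper's adaptive version and its coupling to the nonhomogeneous HMM are not reproduced here.

```python
import numpy as np

def functional_link_expand(x):
    """x: (frames, d) feature matrix -> expanded (frames, 5*d) matrix."""
    return np.concatenate(
        [x, np.sin(np.pi * x), np.cos(np.pi * x),
         np.sin(2 * np.pi * x), np.cos(2 * np.pi * x)], axis=1)

def train_linear_head(x, targets, lam=1e-3):
    """Ridge-regression weights on the expanded features (the only
    trained part of a functional-link network)."""
    z = functional_link_expand(x)
    return np.linalg.solve(z.T @ z + lam * np.eye(z.shape[1]), z.T @ targets)
```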

11.
Based on the practical needs of speech applications such as speech recognition and voiceprint recognition, the acoustic characteristics and recognition of the Hotan dialect were studied for the first time. First, Hotan-dialect speech was selected for manual multi-level annotation, and the formants, durations, and intensities of the vowels were analyzed to describe statistically the main vowel pattern of the dialect and the pronunciation characteristics of male and female speakers. Then analysis of variance and nonparametric tests were applied to formant samples from the three dialects of the Uyghur language; the results show significant differences in the formant distribution patterns of male vowels, female vowels, and the vowels overall across the three dialects. Finally, GMM-UBM (Gaussian Mixture Model-Universal Background Model), DNN-UBM (Deep Neural Network-Universal Background Model), and LSTM-UBM (Long Short-Term Memory-Universal Background Model) Uyghur dialect recognition models were constructed, and contrastive experiments on dialect i-vector discriminability were carried out with input features extracted from the Mel-frequency cepstral coefficients alone and in combination with the formant frequencies. The experimental results show that the combined features including the formant coefficients increase the discriminability of the dialects, and that the LSTM-UBM model extracts more discriminative dialect i-vectors than GMM-UBM and DNN-UBM.

12.
In this study, the problem of sparse enrollment data for in-set versus out-of-set speaker recognition is addressed. The challenge is that both the enrollment data for each speaker (5 s) and the test material (2~6 s) are of very limited duration. The limited enrollment data result in a sparse acoustic model space for the desired speaker model. The focus of this study is on filling these acoustic holes by harvesting neighbor-speaker information to improve overall system performance. Acoustically similar speakers are selected from a separate available corpus via three different methods of speaker-similarity measurement. The data selected from these acoustically similar speakers are exploited to fill the gaps in phone coverage caused by the original sparse enrollment data. The proposed speaker-modeling process mimics the naturally distributed acoustic space of conversational speech. The Gaussian mixture model (GMM) tagging process allows simulated natural conversational speech to be included for in-set speaker modeling, which maintains the original system requirement of text-independent speaker recognition. A human listener evaluation is also performed to compare machine and human speaker recognition, with the machine achieving 95% accuracy versus 72.2% for human in-set/out-of-set judgments. The results show that for extremely sparse train/reference audio streams, human speaker recognition is not nearly as reliable as machine-based recognition. The proposed acoustic hole-filling solution (MRNC) produces an average 7.42% relative improvement over a GMM-Cohort UBM baseline and a 19% relative improvement over the Eigenvoice baseline on the FISHER corpus.

13.
A statistical coarticulatory model is presented for spontaneous speech recognition, in which knowledge of the dynamic, target-directed behavior of the vocal tract resonance is incorporated into model design, training, and likelihood computation. The principal advantage of the new model over the conventional HMM is the use of a compact internal structure that parsimoniously represents long-span context dependence in the observable domain of speech acoustics without using additional context-dependent model parameters. The new model is formulated mathematically as a constrained, nonstationary, and nonlinear dynamic system, for which a version of the generalized EM algorithm is developed and implemented to learn the compact set of model parameters automatically. A series of experiments on speech recognition and model synthesis using spontaneous speech data from the Switchboard corpus is reported. The promise of the new model is demonstrated by its consistently superior performance over a state-of-the-art benchmark HMM system under controlled experimental conditions. Experiments on model synthesis and analysis shed insight into the mechanism underlying this superiority in terms of the target-directed behavior and the long-span context-dependence property, both inherent in the designed structure of the new dynamic model of speech.
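As a rough illustration of what "target-directed" means in such hidden dynamic models, a commonly used first-order form is sketched below; this parameterization is an assumption for exposition, not the paper's exact formulation.

```latex
% Target-directed hidden dynamics (illustrative form): the internal
% vocal-tract-resonance state z_t relaxes from its previous value
% toward a segment-dependent target T_s, and a nonlinear map h(.)
% links it to the observed acoustics o_t; w_t, v_t are noise terms.
\begin{aligned}
  z_t &= \Phi_s\, z_{t-1} + (I - \Phi_s)\, T_s + w_t \\
  o_t &= h(z_t) + v_t
\end{aligned}
```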

14.
Loudness predicts prominence: fundamental frequency lends little (cited: 1, self-citations: 0, by others: 1)
We explored a database covering seven dialects of British and Irish English and three different styles of speech to find acoustic correlates of prominence. We built classifiers, trained them on human prominence/nonprominence judgments, and then evaluated how well they performed. The classifiers operate on 452 ms windows centered on syllables, using different acoustic measures. By comparing the performance of classifiers based on different measures, we can learn how prominence is expressed in speech. Contrary to textbooks and common assumption, fundamental frequency (f0) played a minor role in distinguishing prominent syllables from the rest of the utterance. Instead, speakers primarily marked prominence with patterns of loudness and duration. Two other acoustic measures that we examined also played a minor role, comparable to f0. All dialects and speaking styles studied here share a common definition of prominence. The result is robust to differences in labeling practice and in the dialect of the labeler.
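A sketch of the comparison methodology (one classifier per acoustic measure, accuracies compared against human labels); the data here are random placeholders and logistic regression stands in for the paper's classifiers, so only the shape of the experiment is illustrated.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Placeholder per-syllable feature sets; in practice each would be
# extracted from the 452 ms window around the syllable.
measures = {"loudness": rng.normal(size=(1000, 10)),
            "duration": rng.normal(size=(1000, 1)),
            "f0":       rng.normal(size=(1000, 10))}
labels = rng.integers(0, 2, size=1000)   # human prominence judgments

# Compare how well each measure alone predicts prominence.
for name, feats in measures.items():
    acc = cross_val_score(LogisticRegression(max_iter=1000),
                          feats, labels, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```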

15.
A probabilistic framework for a landmark-based approach to speech recognition is presented for obtaining multiple landmark sequences in continuous speech. The landmark detection module uses as input acoustic parameters (APs) that capture the acoustic correlates of some of the manner-based phonetic features. The landmarks include stop bursts, vowel onsets, syllabic peaks and dips, fricative onsets and offsets, and sonorant consonant onsets and offsets. Binary classifiers of the manner phonetic features (syllabic, sonorant, and continuant) are used for probabilistic detection of these landmarks. The probabilistic framework exploits two properties of the acoustic cues of phonetic features: (1) sufficiency of the acoustic cues of a phonetic feature for a probabilistic decision on that feature, and (2) invariance of the acoustic cues of a phonetic feature with respect to other phonetic features. Probabilistic landmark sequences are constrained using manner-class pronunciation models for isolated word recognition with a known vocabulary. The performance of the system is compared with (1) the same probabilistic system but with mel-frequency cepstral coefficients (MFCCs), (2) a hidden Markov model (HMM) based system using APs, and (3) a HMM based system using MFCCs.
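A minimal sketch of how independent binary manner-feature posteriors can be combined into a landmark score, leaning on the sufficiency and invariance properties named in the abstract; the classifiers themselves are placeholders and the combination rule is an assumption.

```python
import numpy as np

def landmark_score(posteriors, pattern):
    """Combine per-frame binary-feature posteriors into a landmark score.

    posteriors: dict feature -> per-frame P(feature = 1 | APs)
    pattern: dict feature -> desired value (1 or 0) at the landmark
    Invariance is taken as independence, so scores multiply.
    """
    score = np.ones_like(next(iter(posteriors.values())))
    for feat, want in pattern.items():
        p = posteriors[feat]
        score *= p if want else (1.0 - p)
    return score

T = 100  # frames; placeholder posteriors stand in for the classifiers
post = {f: np.random.rand(T) for f in ("syllabic", "sonorant", "continuant")}
# e.g. a sonorant-consonant onset: sonorant = 1, syllabic = 0
onset_score = landmark_score(post, {"sonorant": 1, "syllabic": 0})
```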

16.
Inspired by recent evidence that a binary pattern may provide sufficient information for human speech recognition, this letter proposes a fundamentally different approach to robust automatic speech recognition. Specifically, recognition is performed by classifying binary masks corresponding to a word utterance. The proposed method is evaluated using a subset of the TIDigits corpus to perform isolated digit recognition. Despite the dramatic reduction of speech information encoded in a binary mask, the proposed system performs surprisingly well. The system is compared with a traditional HMM-based approach and is shown to perform well under low-SNR conditions.
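For context, a sketch of the ideal binary mask such systems classify: keep the time-frequency units where the premixed speech dominates the noise by a local-SNR criterion. The 0 dB threshold and the magnitude-STFT inputs are common choices, not necessarily the letter's.

```python
import numpy as np

def ideal_binary_mask(speech_stft, noise_stft, lc_db=0.0):
    """1 marks speech-dominant time-frequency units, 0 the rest.

    speech_stft, noise_stft: complex STFTs of the premixed signals.
    lc_db: local SNR criterion in dB (0 dB is the common default).
    """
    snr_db = 20 * np.log10(np.abs(speech_stft) /
                           np.maximum(np.abs(noise_stft), 1e-10))
    return (snr_db > lc_db).astype(np.uint8)
```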

17.
18.
Studies by Shannon et al. [Science 270, 303-304 (1995)], Van Tasell et al. [J. Acoust. Soc. Am. 82, 1152-1161 (1987)], and others show that human listeners can understand important aspects of the speech signal when spectral shape has been significantly degraded. These experiments suggest that temporal information is particularly important in human speech perception when the speech signal is heavily degraded. In this study, a system is developed that extracts linguistically relevant temporal information for use in the front end of an automatic speech recognition system. The parameters targeted include energy onsets and offsets (computed using an adaptive algorithm) and measures of periodic and aperiodic content; together these are used to find abrupt acoustic events that signify landmarks. Overall detection rates for strongly robust events, robust events, and weak events in a portion of the TIMIT test database are 98.9%, 94.7%, and 52.1%, respectively. Error rates increase by less than 5% when the speech signals are spectrally impoverished. Using the four temporal parameters as the front end of a hidden Markov model (HMM) based system for automatic recognition of the manner classes "sonorant," "fricative," "stop," and "silence" yields the same recognition accuracy as the standard 39 cepstral-based parameters, 70.1%; combining the temporal and cepstral parameters raises the accuracy to 74.8%.
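A minimal sketch of one such temporal parameter, detecting abrupt energy onsets as large frame-to-frame rises in log energy; the frame size and fixed threshold are illustrative, whereas the paper uses an adaptive algorithm.

```python
import numpy as np

def energy_onsets(x, frame=160, thresh_db=9.0):
    """Return frame indices where log energy rises abruptly.

    x: 1-D waveform (e.g. 16 kHz samples, so frame=160 is 10 ms).
    thresh_db: fixed rise threshold standing in for the adaptive rule.
    """
    n = len(x) // frame
    e = np.array([np.sum(x[i * frame:(i + 1) * frame] ** 2) for i in range(n)])
    e_db = 10 * np.log10(np.maximum(e, 1e-12))
    delta = np.diff(e_db)                      # frame-to-frame change
    return np.where(delta > thresh_db)[0] + 1  # frames with abrupt rises
```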

