Similar Documents
20 similar documents found (search time: 311 ms)
1.
The electrocardiogram (ECG) signal has become a popular biometric modality due to characteristics that make it suitable for developing reliable authentication systems. However, the long segment of signal required for recognition is still one of the limitations of existing ECG biometric recognition methods and affects its acceptability as a biometric modality. This paper investigates how a short segment of an ECG signal can be effectively used for biometric recognition, using deep-learning techniques. A small convolutional neural network (CNN) is designed to achieve better generalization capability by entropy enhancement of a short segment of a heartbeat signal. Additionally, it investigates how various blind and feature-dependent segments with different lengths affect the performance of the recognition system. Experiments were carried out on two databases for performance evaluation that included single and multisession records. In addition, a comparison was made between the performance of the proposed classifier and four well-known CNN models: GoogLeNet, ResNet, MobileNet and EfficientNet. Using a time–frequency domain representation of a short segment of an ECG signal around the R-peak, the proposed model achieved an accuracy of 99.90% for PTB, 98.20% for the ECG-ID mixed-session, and 94.18% for the ECG-ID multisession datasets. Using the pretrained ResNet, we obtained 97.28% accuracy for 0.5-second segments around the R-peaks for the ECG-ID multisession dataset, outperforming existing methods. It was found that the time–frequency domain representation of a short segment of an ECG signal can be feasible for biometric recognition, achieving better accuracy and acceptability of this modality.
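As a sketch of the time–frequency input described above, the snippet below computes a log-spectrogram of a 0.5 s window around an R-peak with SciPy; the sampling rate, STFT parameters, and the synthetic signal are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 500                      # assumed ECG sampling rate (Hz)
seg_len = int(0.5 * fs)       # 0.5 s segment, as in the abstract

def beat_spectrogram(ecg, r_peak):
    """Return a log time-frequency image of a 0.5 s window centered on an R-peak."""
    half = seg_len // 2
    seg = ecg[max(r_peak - half, 0): r_peak + half]
    f, t, Sxx = spectrogram(seg, fs=fs, nperseg=64, noverlap=48)
    return np.log(Sxx + 1e-10)    # log-magnitude image as CNN input

# toy usage: a synthetic signal and one nominal R-peak position
ecg = np.random.randn(5 * fs)
image = beat_spectrogram(ecg, r_peak=2 * fs)
print(image.shape)                # (freq_bins, time_frames)
```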

2.
We study different methods of acoustic feature representation for the phoneme recognition problem using an artificial neural network. Feature representation methods are compared using the results of phoneme recognition and clustering of the parameters retrieved from speech signals. The best results of phoneme recognition are obtained by using a filter bank for acoustic feature representation. __________ Translated from Izvestiya Vysshikh Uchebnykh Zavedenii, Radiofizika, Vol. 50, No. 4, pp. 350–356, April 2007.
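A minimal sketch of the filter-bank representation that performed best, assuming a mel-spaced triangular bank; the number of filters, FFT size, and sampling rate are placeholder choices.

```python
import numpy as np

def mel_filterbank(n_filters=24, n_fft=512, fs=16000):
    """Triangular filters spaced evenly on the mel scale (illustrative)."""
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    imel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    pts = imel(np.linspace(mel(0), mel(fs / 2), n_filters + 2))
    bins = np.floor((n_fft + 1) * pts / fs).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        fb[i - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)   # rising slope
        fb[i - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)   # falling slope
    return fb

spectrum = np.abs(np.fft.rfft(np.random.randn(512))) ** 2        # one frame
features = np.log(mel_filterbank() @ spectrum + 1e-10)           # per-frame vector
```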

3.
Speech is a short-time stationary time–frequency signal, so most researchers extract emotion features frame by frame. However, features extracted after framing are local features that cannot accurately reflect the dynamic characteristics of emotional speech, so local features alone are often insufficient to build a robust emotion recognition system. To address this problem, utterance-level global features are first extracted from the unframed speech signal by multi-scale optimal wavelet packet decomposition; 384-dimensional utterance-level local features are then extracted after framing and reduced in dimension using the Fisher criterion; finally, a weak-scale fusion strategy is proposed to fuse the two kinds of utterance-level features, and an SVM performs the emotion classification. Experimental results on the Berlin emotional speech database show that, compared with using utterance-level local features alone, the proposed method improves the final recognition rate by 4.2% to 13.8%, and the recognition rate fluctuates less, especially in the small-sample case.
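The Fisher-criterion dimensionality reduction of the 384-dimensional local features can be sketched as a per-dimension Fisher-ratio ranking; the feature matrix, labels, and the top-64 cutoff below are placeholder assumptions.

```python
import numpy as np

def fisher_scores(X, y):
    """Fisher ratio per feature: between-class over within-class variance."""
    classes = np.unique(y)
    mu = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - mu) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / (den + 1e-12)

X = np.random.randn(200, 384)            # 384-dim utterance-level local features
y = np.random.randint(0, 7, size=200)    # 7 Berlin-database emotion classes
keep = np.argsort(fisher_scores(X, y))[::-1][:64]   # keep top-64 (assumed cutoff)
X_reduced = X[:, keep]
```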

4.
Due to the drawbacks of Support Vector Machine (SVM) parameter optimization, an Improved Shuffled Frog Leaping Algorithm (Im-SFLA) was proposed, improving the learning ability in practical speech emotion recognition. Firstly, we introduced Simulated Annealing (SA), Immune Vaccination (IV), Gaussian mutation and chaotic disturbance into the basic SFLA, which balanced the search efficiency and population diversity effectively. Secondly, Im-SFLA was applied to the optimization of SVM parameters, and an Im-SFLA-SVM method was proposed. Thirdly, the acoustic features of practical speech emotions, such as fidgetiness, were analyzed. The pitch frequency, short-term energy, formant frequency and chaotic characteristics were analyzed for different emotion categories, and we constructed a 144-dimensional emotion feature vector for recognition, reduced to 4 dimensions by Linear Discriminant Analysis (LDA). Finally, the Im-SFLA-SVM method was tested on the practical speech emotion database, and the recognition results were compared with the Shuffled Frog Leaping Algorithm optimized SVM (SFLA-SVM) method, the Particle Swarm Optimization algorithm optimized SVM (PSO-SVM) method, the basic SVM, the Gaussian Mixture Model (GMM) method and the Back Propagation (BP) neural network method. The experimental results showed that the average recognition rate of the Im-SFLA-SVM method was 77.8%, an improvement of 1.7%, 2.7%, 3.4%, 4.7% and 7.8%, respectively, over the other methods. The recognition of fidgetiness was significantly improved, verifying that Im-SFLA is an effective SVM parameter selection method and that the Im-SFLA-SVM method may significantly improve practical speech emotion recognition.
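A minimal sketch of frog-leaping SVM parameter search under stated assumptions: only the basic SFLA leap rule is shown (the SA, immune-vaccination, Gaussian-mutation, and chaotic-disturbance operators of Im-SFLA are omitted), and the dataset, search ranges, and population sizes are placeholders.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

def fitness(frog):                     # frog = (log2 C, log2 gamma)
    clf = SVC(C=2.0 ** frog[0], gamma=2.0 ** frog[1])
    return cross_val_score(clf, X, y, cv=3).mean()

rng = np.random.default_rng(0)
frogs = rng.uniform(-5, 5, size=(20, 2))           # population in log2 space
for it in range(10):
    scores = np.array([fitness(f) for f in frogs])
    frogs = frogs[np.argsort(scores)[::-1]]        # sort by fitness, best first
    for m in range(4):                             # 4 memeplexes of 5 frogs
        plex = frogs[m::4]
        worst, best = plex[-1], plex[0]
        step = rng.uniform() * (best - worst)      # leap the worst frog toward the best
        cand = np.clip(worst + step, -5, 5)
        if fitness(cand) > fitness(plex[-1]):
            frogs[m::4][-1] = cand
print("best (log2 C, log2 gamma):", frogs[0])
```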

5.
To address the parameter optimization problem of the Support Vector Machine (SVM), an Improved Shuffled Frog Leaping Algorithm (Im-SFLA) is proposed, which improves the learning ability in practical speech emotion recognition. First, Simulated Annealing (SA), Immune Vaccination (IV), Gaussian mutation, and chaotic disturbance operators are introduced into the SFLA, balancing search efficiency and population diversity. Second, Im-SFLA is applied to the optimization of SVM parameters, yielding an Im-SFLA-SVM method. Third, the acoustic features of practical speech emotions such as fidgetiness are analyzed, focusing on how pitch, short-time energy, formants, and chaotic features vary with emotion category; a 144-dimensional emotion feature vector is constructed and reduced to 4 dimensions by LDA. Finally, the performance of the algorithm is tested on a practical speech emotion database, comparing the proposed method with SVM parameter optimization by the Shuffled Frog Leaping Algorithm (SFLA-SVM), SVM parameter optimization by Particle Swarm Optimization (PSO-SVM), the basic SVM, the Gaussian Mixture Model (GMM) method, and the Back Propagation (BP) neural network method. Experimental results show that the average recognition rate of the Im-SFLA-SVM method reaches 77.8%, exceeding the SFLA-SVM, PSO-SVM, SVM, GMM, and BP neural network methods by 1.7%, 2.7%, 3.4%, 4.7%, and 7.8%, respectively, with the most pronounced improvement for fidgetiness, confirming that Im-SFLA is an effective SVM parameter selection method and that Im-SFLA-SVM can significantly improve practical speech emotion recognition.
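The 144-to-4-dimension LDA step maps directly onto scikit-learn, as sketched below; the data and the assumed five emotion categories are placeholders (LDA yields at most one fewer component than the number of classes, consistent with a 4-dimensional projection).

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X = np.random.randn(300, 144)            # 144-dim emotion feature vectors
y = np.random.randint(0, 5, size=300)    # assumed 5 emotion categories

# LDA produces at most n_classes - 1 components, hence 4 dimensions here.
lda = LinearDiscriminantAnalysis(n_components=4)
X4 = lda.fit_transform(X, y)
print(X4.shape)                          # (300, 4)
```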

6.
Expert-annotated data for the mispronunciation detection and diagnosis (MDD) task are scarce. Starting from the idea of adding a pronunciation model to exploit the limited data more efficiently for modeling pronunciation patterns, and thereby assisting phoneme-recognition-based MDD, this paper proposes an acoustic pronunciation model that fuses acoustic and textual information and, in theory, models the generation process of pronunciation errors more completely. Based on the acoustic correlations among different parts of the error-generation process, the model shares the acoustic encoder parameters with the phoneme recognition model and is jointly optimized in a multi-task learning manner to achieve auxiliary modeling. In addition, an acoustic-confidence masking-prediction training scheme is proposed to further strengthen the connection between the two tasks and improve the efficiency of auxiliary modeling. Experiments show that the acoustic pronunciation model can effectively model pronunciation error patterns; with its assistance, the MDD system improves by 4.9%, 9.5%, and 14.0% on pronunciation error detection, diagnosis, and phoneme recognition, respectively; the acoustic-confidence masking-prediction training improves auxiliary modeling efficiency, and the choice of masked or jointly optimized parameters also affects the auxiliary modeling result.
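A minimal PyTorch sketch of the shared-encoder idea: one acoustic encoder feeds both a phoneme-recognition head and a pronunciation-model head, and the two losses are summed for joint multi-task optimization. All layer types, dimensions, and the equal loss weighting are assumptions, and the masking-prediction training is omitted.

```python
import torch
import torch.nn as nn

class SharedEncoderMDD(nn.Module):
    def __init__(self, feat_dim=80, hidden=256, n_phones=40):
        super().__init__()
        # acoustic encoder shared by both tasks
        self.encoder = nn.GRU(feat_dim, hidden, batch_first=True)
        self.phone_head = nn.Linear(hidden, n_phones)   # phoneme recognition
        self.pron_head = nn.Linear(hidden, n_phones)    # pronunciation model

    def forward(self, x):
        h, _ = self.encoder(x)
        return self.phone_head(h), self.pron_head(h)

model = SharedEncoderMDD()
x = torch.randn(8, 100, 80)                  # (batch, frames, features)
phone_logits, pron_logits = model(x)
targets = torch.randint(0, 40, (8, 100))     # frame-level phone labels (placeholder)
ce = nn.CrossEntropyLoss()
# joint optimization: sum of the two task losses (equal weighting is an assumption)
loss = ce(phone_logits.transpose(1, 2), targets) + ce(pron_logits.transpose(1, 2), targets)
loss.backward()
```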

7.
In this study, the problem of sparse enrollment data for in-set versus out-of-set speaker recognition is addressed. The challenge here is that both the training speaker data (5 s) and the test material (2~6 s) are of limited duration. The limited enrollment data result in a sparse acoustic model space for the desired speaker model. The focus of this study is on filling these acoustic holes by harvesting neighbor speaker information to leverage overall system performance. Acoustically similar speakers are selected from a separate available corpus via three different methods for speaker similarity measurement. The selected data from these similar acoustic speakers are exploited to fill the lack of phone coverage caused by the original sparse enrollment data. The proposed speaker modeling process mimics the naturally distributed acoustic space of conversational speech. The Gaussian mixture model (GMM) tagging process allows simulated natural conversation speech to be included for in-set speaker modeling, which maintains the original system requirement of text-independent speaker recognition. A human listener evaluation is also performed to compare machine versus human speaker recognition performance, with machine accuracy of 95% compared to 72.2% for human in-set/out-of-set performance. The results show that for extremely sparse train/reference audio streams, human speaker recognition is not nearly as reliable as machine-based speaker recognition. The proposed acoustic hole filling solution (MRNC) produces an average 7.42% relative improvement over a GMM-Cohort UBM baseline and a 19% relative improvement over the Eigenvoice baseline using the FISHER corpus.
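Neighbor-speaker selection by acoustic similarity can be sketched with GMM likelihoods: fit a model on the sparse enrollment frames, then rank candidate corpus speakers by how well their data score under it. This is one plausible similarity measure, not necessarily one of the paper's three; all data and dimensions are placeholders.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
enroll = rng.normal(size=(250, 13))            # ~5 s of 13-dim frames (assumed)

# sparse enrollment model for the in-set speaker
gmm = GaussianMixture(n_components=8, covariance_type='diag').fit(enroll)

# rank candidate neighbor speakers by average log-likelihood under the model
candidates = {f"spk{i}": rng.normal(loc=i * 0.1, size=(500, 13)) for i in range(10)}
ranked = sorted(candidates, key=lambda s: gmm.score(candidates[s]), reverse=True)
neighbors = ranked[:3]                         # harvest data from the closest speakers
print(neighbors)
```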

8.
Acoustic models that produce speech signals with information content similar to that provided to cochlear implant users provide a mechanism by which to investigate the effect of various implant-specific processing or hardware parameters independent of other complicating factors. This study compares speech recognition of normal-hearing subjects listening through normal and impaired acoustic models of cochlear implant speech processors. The channel interactions that were simulated to impair the model were based on psychophysical data measured from cochlear implant subjects and include pitch reversals, indiscriminable electrodes, and forward masking effects. In general, spectral interactions degraded speech recognition more than temporal interactions. These effects were frequency dependent, with spectral interactions that affect lower-frequency information causing the greatest decrease in speech recognition, and interactions that affect higher-frequency information having the least impact. The results of this study indicate that channel interactions, quantified psychophysically, affect speech recognition to different degrees. Investigation of the effects that channel interactions have on speech recognition may guide future research whose goal is compensating for psychophysically measured channel interactions in cochlear implant subjects.
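Acoustic models of CI processing of this kind are commonly noise-band vocoders: band-pass analysis, envelope extraction, and envelope-modulated noise carriers. The sketch below assumes 8 channels and omits the impairment (channel-interaction) manipulations.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocoder(x, fs=16000, n_ch=8, lo=100.0, hi=7000.0):
    """Minimal noise-band vocoder: each band's envelope modulates band-limited noise."""
    edges = np.geomspace(lo, hi, n_ch + 1)        # log-spaced channel edges
    out = np.zeros_like(x, dtype=float)
    noise = np.random.randn(len(x))
    for i in range(n_ch):
        sos = butter(4, [edges[i], edges[i + 1]], btype='band', fs=fs, output='sos')
        band = sosfiltfilt(sos, x)
        env = np.abs(hilbert(band))               # band envelope
        out += env * sosfiltfilt(sos, noise)      # envelope-modulated band noise
    return out / (np.max(np.abs(out)) + 1e-12)

speech = np.random.randn(16000)                   # placeholder 1 s signal
simulated = noise_vocoder(speech)
```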

9.
朱应俊  周文君  朱川  马建敏 《应用声学》2023,42(5):1090-1098
To enable machines to better understand human emotions and improve the human–computer interaction experience, speech features and classification networks can be fused to boost emotion recognition performance. From the perspective of network fusion, this paper takes a two-dimensional convolutional neural network based on Mel-frequency and inverse-Mel-frequency cepstral coefficients and a long short-term memory network based on scattering convolution network coefficients as front-end networks, extracts their intermediate layers as utterance-level feature representations, uses a squeeze-and-excitation (SE) channel attention mechanism to re-weight and fuse the intermediate layers, and finally feeds a deep-neural-network back-end classifier that outputs the emotion class. Five-fold cross-validation experiments on a Chinese emotional speech corpus show that SE-attention-based network fusion effectively exploits the strengths of the different front-end networks in the speech emotion recognition task and improves recognition accuracy.
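The SE re-weighting used for fusion can be sketched as a standard squeeze-and-excitation block applied to the concatenated front-end intermediate-layer features; dimensions and the reduction ratio are assumed values.

```python
import torch
import torch.nn as nn

class SEFusion(nn.Module):
    """Squeeze-and-excitation weighting over fused feature channels."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                 # x: (batch, channels)
        return x * self.fc(x)             # per-channel attention weights

cnn_feat = torch.randn(4, 128)            # CNN front-end intermediate layer
lstm_feat = torch.randn(4, 128)           # LSTM front-end intermediate layer
fused = SEFusion(256)(torch.cat([cnn_feat, lstm_feat], dim=1))
```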

10.
Speech emotion recognition based on statistical pitch model
A modified Parzen-window method, which keeps high resolution at low frequencies and smoothness at high frequencies, is proposed to obtain the statistical model. A gender classification method utilizing the statistical model is then proposed, which achieves 98% accuracy in gender classification when long sentences are used. After separating male and female voices, the means and standard deviations of speech training samples with different emotions are used to create the corresponding emotion models, and the Bhattacharyya distance between the test sample and the statistical pitch models is used for emotion recognition in speech. Pitch normalization for male and female voices is also considered, in order to map them into a uniform space. Finally, a speech emotion recognition experiment based on K-nearest neighbors achieves a correct rate of 81%, compared with only 73.85% when traditional parameters are used.
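The pitch modeling and matching can be sketched with a plain Gaussian-kernel Parzen estimate (the paper's resolution-adaptive modification is omitted) and a discrete Bhattacharyya distance; the pitch distributions below are synthetic placeholders.

```python
import numpy as np

grid = np.linspace(50, 500, 200)                  # pitch axis in Hz

def parzen(pitch, h=10.0):
    """Gaussian-kernel Parzen density of pitch values on a fixed grid."""
    d = np.exp(-0.5 * ((grid[:, None] - pitch[None, :]) / h) ** 2).sum(axis=1)
    return d / (d.sum() + 1e-12)

def bhattacharyya(p, q):
    return -np.log(np.sum(np.sqrt(p * q)) + 1e-12)

angry = parzen(np.random.normal(260, 40, 300))    # placeholder emotion models
sad = parzen(np.random.normal(170, 25, 300))
test = parzen(np.random.normal(255, 45, 80))
# classify by the smallest Bhattacharyya distance to an emotion model
label = min([("angry", angry), ("sad", sad)], key=lambda m: bhattacharyya(test, m[1]))[0]
print(label)
```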

11.
Research on isolated-word recognition of whispered Mandarin speech
杨莉莉  林玮  徐柏龄 《应用声学》2006,25(3):187-192
Whispered speech recognition has broad application prospects but is a relatively new topic. The characteristics of whispered speech itself, such as its low sound level and absence of a fundamental frequency, make whisper recognition research difficult. Based on a production model of whispered speech and its acoustic characteristics, this paper builds an isolated-word recognition system for whispered Mandarin. Because of the low signal-to-noise ratio of whispered speech, speech enhancement is required, and tone information is used in the recognizer to improve performance. Experimental results show that MFCCs combined with the amplitude envelope can serve as feature parameters for automatic whispered Mandarin recognition; on a small vocabulary, HMM-based recognition achieves a recognition rate of 90.4%.
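An HMM isolated-word recognizer of the kind described trains one model per vocabulary word and recognizes by picking the highest-scoring model; the sketch below uses hmmlearn with random placeholder features standing in for MFCC-plus-envelope vectors.

```python
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)
# placeholder training features: one (frames x dims) array per vocabulary word
train = {w: rng.normal(loc=i, size=(120, 14)) for i, w in enumerate(["yi", "er", "san"])}

models = {}
for word, X in train.items():
    m = hmm.GaussianHMM(n_components=5, covariance_type='diag', n_iter=20)
    models[word] = m.fit(X)                       # one HMM per word

test = rng.normal(loc=1, size=(40, 14))           # unknown utterance
best = max(models, key=lambda w: models[w].score(test))   # highest log-likelihood wins
print("recognized:", best)
```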

12.
张威  翟明浩  黄子龙  李巍  曹毅 《应用声学》2020,39(2):231-235
To address the lack of methods at home and abroad for predicting vibratory drum noise, a certain type of vibratory drum is taken as the research object. First, a frequency response analysis of the drum is carried out based on dynamic finite element theory; next, the radiated noise of the drum is simulated numerically with the acoustic boundary element method, and the accuracy of the simulation is verified experimentally. The effects of two excitation forms, vertical vibration and circular vibration, on the radiated noise are then compared, and vertical vibration is found to radiate less noise. Finally, the acoustic cavity modes of the cab are simulated; resonance occurs because a cavity mode lies close to the drum's excitation frequency, and adjusting the excitation frequency reduces the noise at the driver's ear. The results provide a practical reference for predicting and reducing the radiated noise of vibratory drums.

13.
王玮蔚  张秀再 《应用声学》2019,38(2):237-244
To address the poor performance of traditional speech emotion feature parameters in emotion classification, this paper proposes a speech emotion recognition method based on variational mode decomposition (VMD). The intrinsic mode functions of the emotional speech signal are first extracted by VMD; the selected dominant intrinsic mode functions are then re-aggregated, after which Mel-frequency cepstral coefficients and the Hilbert marginal spectrum of each intrinsic mode function are extracted. To verify the performance of the proposed features, experiments were conducted on two speech databases (EMODB and RAVDESS), with features extracted as described and an extreme learning machine used for emotion classification. The results show that, compared with features based on empirical mode decomposition and ensemble empirical mode decomposition, the proposed features achieve better recognition performance, which verifies the practicality of the method.
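The Hilbert marginal spectrum of a mode can be sketched with SciPy's analytic signal: instantaneous amplitude is accumulated over instantaneous-frequency bins. VMD itself is not in SciPy, so a narrow-band synthetic signal stands in for a decomposed mode.

```python
import numpy as np
from scipy.signal import hilbert

fs = 16000
t = np.arange(fs) / fs
mode = np.cos(2 * np.pi * 300 * t) * (1 + 0.5 * np.sin(2 * np.pi * 3 * t))  # stand-in mode

analytic = hilbert(mode)
amp = np.abs(analytic)                                  # instantaneous amplitude
phase = np.unwrap(np.angle(analytic))
inst_f = np.diff(phase) * fs / (2 * np.pi)              # instantaneous frequency (Hz)

# marginal spectrum: total amplitude contributed at each frequency
bins = np.linspace(0, fs / 2, 256)
marginal, _ = np.histogram(inst_f, bins=bins, weights=amp[:-1])
print(bins[np.argmax(marginal)])                        # peaks near 300 Hz
```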

14.
Accented speech recognition is more challenging than standard speech recognition due to the effects of phonetic and acoustic confusions. Phonetic confusion in accented speech occurs when an expected phone is pronounced as a different one, which leads to erroneous recognition. Acoustic confusion occurs when the pronounced phone is found to lie acoustically between two baseform models and can be equally recognized as either one. We propose that it is necessary to analyze and model these confusions separately in order to improve accented speech recognition without degrading standard speech recognition. Since low phonetic confusion units in accented speech do not give rise to automatic speech recognition errors, we focus on analyzing and reducing phonetic and acoustic confusability under high phonetic confusion conditions. We propose using a likelihood ratio test to measure phonetic confusion, and asymmetric acoustic distance to measure acoustic confusion. Only accent-specific phonetic units with low acoustic confusion are used in an augmented pronunciation dictionary, while phonetic units with high acoustic confusion are reconstructed using decision tree merging. Experimental results show that our approach is effective and superior to methods modeling phonetic confusion or acoustic confusion alone in accented speech, with a significant 5.7% absolute WER reduction, without degrading standard speech recognition.
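The likelihood-ratio measure of phonetic confusion can be sketched as scoring accented phone frames under the expected baseform model against a competing surface-phone model; the GMMs and data below are synthetic placeholders rather than the paper's models.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# placeholder acoustic models for a baseform phone and a competing surface phone
baseform = GaussianMixture(4).fit(rng.normal(0.0, 1.0, size=(500, 13)))
surface = GaussianMixture(4).fit(rng.normal(1.5, 1.0, size=(500, 13)))

accented = rng.normal(1.2, 1.0, size=(80, 13))     # frames of the accented phone

# log likelihood ratio: positive values mean the surface phone fits better,
# i.e. high phonetic confusion for this unit
llr = surface.score(accented) - baseform.score(accented)
print(f"LLR = {llr:.2f}")
```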

15.
褚钰  李田港  叶硕  叶光明 《应用声学》2020,39(2):223-230
To address the high prediction error rate and weak generalization of traditional convolutional neural networks in Chinese speech recognition, this paper first takes the deep convolutional neural network (DCNN) with connectionist temporal classification (CTC) as the object of study and analyzes in depth how different combinations of convolutional, pooling, and fully connected layers affect its performance. On this basis, a multi-path convolutional neural network (MCNN) with CTC is proposed and, combined with SENet, a deep SE-MCNN-CTC acoustic model is presented. The model fuses the advantages of MCNN and SENet: it strengthens the propagation of deep information through the convolutional network and avoids gradient problems, while adaptively recalibrating the extracted feature maps. Experimental results show that SE-MCNN-CTC achieves a 13.51% relative reduction in error rate compared with DCNN-CTC, with a final error rate of 22.21%, demonstrating that the improved acoustic model effectively enhances generalization.
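The CTC objective shared by DCNN-CTC and SE-MCNN-CTC can be sketched with PyTorch's built-in loss; the frame count, batch size, symbol inventory, and label lengths are illustrative.

```python
import torch
import torch.nn as nn

T, N, C = 100, 4, 30                     # frames, batch, output symbols (blank = 0)
log_probs = torch.randn(T, N, C).log_softmax(dim=2).requires_grad_()
targets = torch.randint(1, C, (N, 12))   # label sequences (no blanks)
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 12, dtype=torch.long)

ctc = nn.CTCLoss(blank=0)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()                          # trains the acoustic network end to end
```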

16.
To investigate the relationship between speech emotion and spectrogram features, this paper proposes an improved discriminative completed local binary pattern feature for speech emotion recognition. First, the statistical histograms of the completed local binary sign pattern (CLBP_S) and magnitude pattern (CLBP_M) are computed from the grayscale spectrogram image. The CLBP_S and CLBP_M histograms are then fed into a discriminative feature learning model, which is trained to obtain a set of globally salient patterns. Finally, the salient-pattern set is used to process the CLBP_S and CLBP_M histograms, and the processed features are concatenated to form the improved discriminative completed local binary pattern feature for speech emotion recognition (IDisCLBP_SER). Speech emotion recognition experiments on the Berlin database and a Chinese emotional speech corpus show that the recall of the IDisCLBP_SER feature is more than 8% higher than that of features such as texture image information (TII), and more than 4% higher on average than acoustic spectral features. Moreover, the proposed feature fuses well with existing acoustic features, with the fused features improving recall by 1% to 4% over the acoustic features alone.
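The CLBP sign and magnitude histograms can be sketched on a grayscale spectrogram patch: the sign component is the usual LBP comparison, and the magnitude component thresholds neighbor differences at their mean. A 4-neighbor variant keeps the sketch short; the standard method uses 8 circular neighbors.

```python
import numpy as np

def clbp_histograms(img):
    """Sign (CLBP_S) and magnitude (CLBP_M) histograms, 4-neighbor version."""
    c = img[1:-1, 1:-1]
    nbrs = [img[:-2, 1:-1], img[2:, 1:-1], img[1:-1, :-2], img[1:-1, 2:]]
    diffs = np.stack([n - c for n in nbrs])          # neighbor differences
    sign = (diffs >= 0).astype(int)                  # sign component
    mag = (np.abs(diffs) >= np.abs(diffs).mean()).astype(int)  # magnitude component
    weights = (2 ** np.arange(4))[:, None, None]
    s_code = (sign * weights).sum(axis=0)            # 0..15 pattern codes
    m_code = (mag * weights).sum(axis=0)
    hs, _ = np.histogram(s_code, bins=16, range=(0, 16))
    hm, _ = np.histogram(m_code, bins=16, range=(0, 16))
    return hs, hm

gray_spec = np.random.rand(64, 64)                   # grayscale spectrogram stand-in
h_s, h_m = clbp_histograms(gray_spec)
feature = np.concatenate([h_s, h_m])                 # concatenated CLBP_S/CLBP_M feature
```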

17.
18.
Speech recognition gives computers the ability to recognize the content of speech and is an important research topic in human–computer interaction. With the development of computer technology, speech recognition has matured, but dialect speech recognition still has considerable room for development. China is a vast and populous country with many dialects; the Chongqing dialect, spoken by more than thirty million people, is one of them. Text files and corresponding audio recordings of selected Chongqing dialect words were collected to build a corpus. According to the pronunciation characteristics of the Chongqing dialect, its initials and finals were chosen as the acoustic modeling units, and the hidden Markov model (HMM) was chosen as the acoustic model, to design an HMM-based Chongqing dialect speech recognition system. During training, the training set of the corpus is used to train the acoustic models and build an HMM model library; during recognition, the test set of the corpus is used for evaluation. Experimental results show that the system can recognize Chongqing dialect speech, with a recognition accuracy of 100%.

19.
Understanding speech in background noise, talker identification, and vocal emotion recognition are challenging for cochlear implant (CI) users due to poor spectral resolution and limited pitch cues with the CI. Recent studies have shown that bimodal CI users, that is, those CI users who wear a hearing aid (HA) in their non-implanted ear, receive benefit for understanding speech both in quiet and in noise. This study compared the efficacy of talker-identification training in two groups of young normal-hearing adults, listening to either acoustic simulations of unilateral CI or bimodal (CI+HA) hearing. Training resulted in improved identification of talkers for both groups with better overall performance for simulated bimodal hearing. Generalization of learning to sentence and emotion recognition also was assessed in both subject groups. Sentence recognition in quiet and in noise improved for both groups, no matter if the talkers had been heard during training or not. Generalization to improvements in emotion recognition for two unfamiliar talkers also was noted for both groups with the simulated bimodal-hearing group showing better overall emotion-recognition performance. Improvements in sentence recognition were retained a month after training in both groups. These results have potential implications for aural rehabilitation of conventional and bimodal CI users.

20.