Similar Documents
A total of 20 similar documents were retrieved.
1.
To clarify the role of formant frequency in the perception of pitch in whispering, we conducted a preliminary experiment to determine (1) whether speakers change their pitch during whispering; (2) whether listeners can perceive differences in pitch; and (3) what the acoustical features are when speakers change their pitch. A listening test with whispered Japanese speech demonstrates that listeners can classify the perceived pitch of the vowel /a/ as ordinary, high, or low. Acoustical analysis revealed that the perceived pitch corresponds to certain formant frequencies. Further data with synthesized whispered speech are needed to confirm in detail the importance of these formant frequencies for the perceived pitch of whispered vowels.

2.
Research on isolated-word recognition of Chinese whispered speech
杨莉莉  林玮  徐柏龄 《应用声学》2006,25(3):187-192
Whispered speech recognition has broad application prospects and is a relatively new research topic, but the inherent characteristics of whispered speech, such as its low sound level and the absence of a fundamental frequency, make it a difficult problem. Based on a production model of the whispered speech signal and its acoustic characteristics, this paper builds an isolated-word recognition system for Chinese whispered speech. Because whispered speech has a low signal-to-noise ratio, it must first be enhanced, and tone information is used in the recognition system to improve performance. Experimental results show that MFCCs combined with the amplitude envelope can serve as feature parameters for automatic recognition of Chinese whispered speech; with HMM-based recognition on a small vocabulary, a recognition rate of 90.4% was obtained.
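As a rough illustration of the kind of front-end and back-end this abstract describes (MFCC plus a frame-level amplitude envelope, scored by per-word HMMs), here is a minimal Python sketch. It is not the authors' system: the frame sizes, the 13-coefficient MFCCs, the HMM topology, and the omission of the enhancement and tone modules are all simplifying assumptions.

```python
# Sketch: MFCC + frame amplitude envelope as features for whispered-word
# recognition, loosely following the front-end described in the abstract.
# Parameter choices (frame size, 13 MFCCs, number of HMM states) are assumptions.
import numpy as np
import librosa
from hmmlearn import hmm  # classic HMM toolkit; one model per vocabulary word

def whisper_features(wav_path, sr=16000, n_mfcc=13, frame=400, hop=160):
    y, _ = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc,
                                n_fft=frame, hop_length=hop)            # (n_mfcc, T)
    env = librosa.feature.rms(y=y, frame_length=frame, hop_length=hop)  # (1, T)
    feats = np.vstack([mfcc, np.log(env + 1e-8)]).T                     # (T, n_mfcc+1)
    return feats

# One HMM per vocabulary word (the paper's left-to-right topology, speech
# enhancement, and tone features are not modelled here).
def train_word_model(feature_list, n_states=6):
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=20)
    X = np.concatenate(feature_list)
    lengths = [len(f) for f in feature_list]
    model.fit(X, lengths)
    return model

# Recognition: pick the word model with the highest log-likelihood.
def recognize(feats, word_models):
    return max(word_models, key=lambda w: word_models[w].score(feats))
```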

3.
Whispered speaker identification based on multi-band demodulation analysis and instantaneous frequency estimation
王敏  赵鹤鸣 《声学学报》2010,35(4):471-476
To improve the robustness of whispered speaker identification, a whispered-speech feature based on the amplitude-modulation/frequency-modulation (AM-FM) model, the instantaneous frequency estimate (IFE), is proposed. Following the formant-modulation theory of speech production, multi-band demodulation analysis (MDA) is used to obtain the instantaneous envelope and frequency of the speech; the IFE feature, which characterizes the frequency structure of the speech, is then obtained as an envelope-amplitude-weighted estimate of the instantaneous frequency. The feature was applied to whispered speaker identification and compared with conventional Mel-frequency cepstral coefficients (MFCC). Experimental results show that as the number of test speakers increases, IFE performs slightly better than MFCC, and when the test channel changes, IFE is substantially more robust than MFCC.
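A minimal sketch of the general idea behind such an envelope-weighted instantaneous-frequency feature, using Hilbert-based demodulation in a few fixed bands. The band layout, the FIR bandpass filters, and the squared-envelope weighting are assumptions standing in for the paper's multi-band demodulation analysis.

```python
# Sketch: multi-band demodulation via the analytic signal, then an
# envelope-weighted instantaneous-frequency estimate per band.
import numpy as np
from scipy.signal import hilbert, firwin, lfilter

def band_ife(x, sr, lo, hi, numtaps=257):
    """Instantaneous envelope and frequency of one band."""
    bp = firwin(numtaps, [lo, hi], pass_zero=False, fs=sr)
    xb = lfilter(bp, 1.0, x)
    analytic = hilbert(xb)
    env = np.abs(analytic)
    phase = np.unwrap(np.angle(analytic))
    inst_f = np.diff(phase) * sr / (2 * np.pi)   # Hz, length N-1
    return env[1:], inst_f

def ife_feature(x, sr, bands=((300, 1000), (1000, 2200), (2200, 3800))):
    """Envelope-amplitude-weighted IF per band -> compact frequency-structure feature."""
    feats = []
    for lo, hi in bands:
        env, inst_f = band_ife(x, sr, lo, hi)
        w = env ** 2
        feats.append(np.sum(w * inst_f) / (np.sum(w) + 1e-10))
    return np.array(feats)
```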

4.
Initial/final segmentation of whispered speech based on an auditory model
丁慧  栗学丽  徐柏龄 《应用声学》2004,23(2):20-25,44
This paper analyzes the characteristics of whispered speech and, drawing on basic theory and experimental data from physiological and psychological acoustics, proposes a method that uses an auditory model to segment whispered speech into initials and finals. The auditory perception model adopted for this task comprises several stages: the cochlea's decomposition of sound into frequency components, the nonlinear time- and frequency-domain transformations of the auditory system, and the lateral-inhibition mechanism of the central nervous system. Because the model reflects how humans perceive low-energy speech in noisy environments, it is well suited to whispered speech recognition, and it gave satisfactory results in initial/final segmentation experiments on whispered speech.
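A toy version of the kind of auditory front-end the abstract outlines: a bank of bandpass filters for cochlear frequency decomposition, a compressive nonlinearity, and a crude lateral-inhibition step. Butterworth bands, log compression, and the neighbour-difference inhibition are simplifications, not the paper's model.

```python
# Sketch of an auditory front-end: filterbank (cochlear frequency decomposition),
# compressive nonlinearity, and a simple lateral-inhibition stage.
import numpy as np
from scipy.signal import butter, sosfilt

def auditory_spectrogram(x, sr, n_bands=20, fmin=100.0, fmax=4000.0,
                         frame=400, hop=160):
    edges = np.geomspace(fmin, fmax, n_bands + 1)       # log-spaced channels
    n_frames = 1 + (len(x) - frame) // hop
    spec = np.zeros((n_bands, n_frames))
    for b in range(n_bands):
        sos = butter(2, [edges[b], edges[b + 1]], btype="band", fs=sr, output="sos")
        xb = sosfilt(sos, x)
        for t in range(n_frames):
            seg = xb[t * hop:t * hop + frame]
            spec[b, t] = np.log10(np.mean(seg ** 2) + 1e-10)   # compressive nonlinearity
    # Crude lateral inhibition: each channel minus half the sum of its neighbours.
    inhibited = spec.copy()
    inhibited[1:-1] -= 0.5 * (spec[:-2] + spec[2:])
    return inhibited
```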

5.
A method for converting whispered speech to normal speech using an extended bilinear transformation is proposed. Because the formant shifts of whispered speech differ across frequency bands, the whisper spectrum is processed in segments, and a whisper-to-normal conversion function is built on this basis. Since the shift of whispered speech relative to normal speech is nonlinear in each band, an extension factor is introduced into the bilinear transformation so that its nonlinear spectral shift and its compression of formant bandwidths better match the actual requirements of whisper-to-normal conversion, effectively reducing the spectral-distortion distance between the converted speech and normal speech. Experimental results show that both the quality and the intelligibility of the converted speech are effectively improved.
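The abstract does not give the exact form of the extended transform, so the following is only an illustrative sketch: the standard first-order all-pass (bilinear) frequency warp applied piecewise per band, with a scalar `beta` standing in for the extension factor and hand-picked per-band warp factors.

```python
# Sketch: piecewise bilinear (all-pass) warping of a whisper spectral envelope
# toward normal-speech formant positions. Band edges, warp factors `alphas`,
# and the extension factor `beta` are illustrative assumptions.
import numpy as np

def bilinear_warp(omega, a):
    """First-order all-pass warp of normalized frequency omega in [0, pi]."""
    return omega + 2.0 * np.arctan(a * np.sin(omega) / (1.0 - a * np.cos(omega)))

def warp_whisper_envelope(env, sr, band_edges_hz=(0, 1500, 3000, 8000),
                          alphas=(-0.10, -0.06, -0.03), beta=1.1):
    """Resample a magnitude envelope (linear grid up to sr/2) onto warped frequencies.
    Whisper formants sit higher than voiced ones, so negative warp factors pull
    them down; beta scales the per-band warp as a stand-in for the extension factor."""
    n = len(env)
    f = np.linspace(0, sr / 2, n)
    omega = np.pi * f / (sr / 2)
    warped = omega.copy()
    for (lo, hi), a in zip(zip(band_edges_hz[:-1], band_edges_hz[1:]), alphas):
        m = (f >= lo) & (f < hi)
        warped[m] = bilinear_warp(omega[m], beta * a)
    # Interpolate the envelope onto the new frequency axis.
    order = np.argsort(warped)
    return np.interp(omega, warped[order], env[order])
```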

6.
Motivated by the practical needs of speech-application research such as speech recognition and voiceprint recognition, the acoustic characteristics and identification of the Hotan dialect of Uyghur were studied for the first time. First, Hotan-dialect recordings were manually annotated at multiple levels, and vowel formants, durations, and intensities were analyzed to statistically characterize the overall vowel pattern of the dialect and the pronunciation characteristics of male and female speakers. Then, analysis of variance and nonparametric tests were applied to formant samples from the three Uyghur dialects; the results show significant differences among the three dialects in the formant distribution patterns of male vowels, female vowels, and all vowels combined. Finally, GMM-UBM (Gaussian Mixture Model-Universal Background Model), DNN-UBM (Deep Neural Network-Universal Background Model), and LSTM-UBM (Long Short-Term Memory Network-Universal Background Model) dialect identification models were constructed, and comparative experiments on the discriminability of dialect i-vectors were carried out using Mel-frequency cepstral coefficients alone and combined with formant frequencies as input features. The results show that the combined features including formant coefficients increase dialect discriminability, and that the LSTM-UBM model extracts more discriminative dialect i-vectors than GMM-UBM and DNN-UBM.
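A hedged sketch of the GMM-UBM baseline with MFCCs concatenated with formant frequencies; MAP adaptation, the i-vector extractor, and the DNN/LSTM variants are left out, and the feature dimensions and mixture count are assumptions.

```python
# Sketch: GMM-UBM baseline for dialect ID with MFCC + formant features.
# Formant tracking and the i-vector extractor are simplified away.
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_plus_formants(y, sr, formant_track):
    """Concatenate per-frame MFCCs with externally estimated F1-F3 (Hz)."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T          # (T, 13)
    T = min(len(mfcc), len(formant_track))
    return np.hstack([mfcc[:T], formant_track[:T] / 1000.0])      # kHz-scaled formants

# Universal background model over pooled features from all dialects.
def train_ubm(all_features, n_mix=256):
    ubm = GaussianMixture(n_components=n_mix, covariance_type="diag", max_iter=50)
    ubm.fit(np.concatenate(all_features))
    return ubm

# Simple scoring stand-in: per-dialect GMMs (e.g. retrained or adapted from the
# UBM), with utterances scored by average log-likelihood.
def score(utt_feats, dialect_gmms):
    return {d: g.score(utt_feats) for d, g in dialect_gmms.items()}
```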

7.
Recent studies have shown that time-varying changes in formant pattern contribute to the phonetic specification of vowels. This variation could be especially important in children's vowels, because children have higher fundamental frequencies (f0's) than adults, and formant-frequency estimation is generally less reliable when f0 is high. To investigate the contribution of time-varying changes in formant pattern to the identification of children's vowels, three experiments were carried out with natural and synthesized versions of 12 American English vowels spoken by children (ages 7, 5, and 3 years) as well as adult males and females. Experiment 1 showed that (i) vowels generated with a cascade formant synthesizer (with hand-tracked formants) were less accurately identified than natural versions; and (ii) vowels synthesized with steady-state formant frequencies were harder to identify than those which preserved the natural variation in formant pattern over time. The decline in intelligibility was similar across talker groups, and there was no evidence that formant movement plays a greater role in children's vowels compared to adults. Experiment 2 replicated these findings using a semi-automatic formant-tracking algorithm. Experiment 3 showed that the effects of formant movement were the same for vowels synthesized with noise excitation (as in whispered speech) and pulsed excitation (as in voiced speech), although, on average, the whispered vowels were less accurately identified than their voiced counterparts. Taken together, the results indicate that the cues provided by changes in the formant frequencies over time contribute materially to the intelligibility of vowels produced by children and adults, but these time-varying formant frequency cues do not interact with properties of the voicing source.

8.
Uyghur dialect identification and related acoustic analysis
Motivated by the practical needs of speech-application research such as speech recognition and voiceprint recognition, the acoustic characteristics and identification of the Hotan dialect are studied for the first time. First, Hotan-dialect speech was manually annotated at multiple levels, and the formants, durations, and intensities of its vowels were statistically analyzed to describe the overall vowel pattern of the dialect and the pronunciation characteristics of male and female speakers. Then, analysis of variance and nonparametric tests were applied to formant samples from the three Uyghur dialects; the results show significant differences among the dialects in the formant distribution patterns of male vowels, female vowels, and all vowels combined. Finally, Uyghur dialect identification models based on GMM-UBM (Gaussian Mixture Model-Universal Background Model), DNN-UBM (Deep Neural Networks-Universal Background Model), and LSTM-UBM (Long Short-Term Memory-Universal Background Model) were constructed, and comparative experiments on the discriminability of dialect i-vectors were conducted with Mel-frequency cepstral coefficients alone and combined with formant frequencies as input features. The experimental results show that the combined features incorporating formant coefficients improve dialect discriminability, and that the LSTM-UBM model extracts more discriminative dialect i-vectors than GMM-UBM and DNN-UBM.

9.
Auditory feedback influences human speech production, as demonstrated by studies using rapid pitch and loudness changes. Feedback has also been investigated using the gradual manipulation of formants in adaptation studies with whispered speech. In the work reported here, the first formant of steady-state isolated vowels was unexpectedly altered within trials for voiced speech. This was achieved using a real-time formant tracking and filtering system developed for this purpose. The first formant of vowel /epsilon/ was manipulated 100% toward either /ae/ or /I/, and participants responded by altering their production with average F1 compensation as large as 16.3% and 10.6% of the applied formant shift, respectively. Compensation was estimated to begin <460 ms after stimulus onset. The rapid formant compensations found here suggest that auditory feedback control is similar for both F0 and formants.
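An offline stand-in for this kind of F1 manipulation, using LPC pole estimation and resynthesis; the real-time tracking and filtering system, frame-by-frame overlap-add, and the exact shift applied in the study are not reproduced here, and the 30% shift ratio is just an example value.

```python
# Sketch: estimate formants from LPC poles, rotate the F1 pole pair, and
# inverse/forward filter to resynthesize a frame with a shifted first formant.
import numpy as np
from scipy.signal import lfilter
import librosa

def shift_f1(frame, sr, order=12, shift_ratio=1.3):
    a = librosa.lpc(frame, order=order)              # LPC coefficients, a[0] == 1
    roots = np.roots(a)
    roots = roots[np.imag(roots) > 0]                # keep upper-half-plane poles
    freqs = np.angle(roots) * sr / (2 * np.pi)
    f1_idx = np.argmin(np.where(freqs > 200, freqs, np.inf))   # lowest formant > 200 Hz
    new_roots = roots.copy()
    new_roots[f1_idx] = np.abs(roots[f1_idx]) * np.exp(
        1j * np.angle(roots[f1_idx]) * shift_ratio)  # rotate the F1 pole pair
    def poly_from(rts):
        # Rebuild a real denominator from the complex poles and their conjugates.
        return np.real(np.poly(np.concatenate([rts, np.conj(rts)])))
    residual = lfilter(a, [1.0], frame)              # inverse filter (excitation estimate)
    return lfilter([1.0], poly_from(new_roots), residual)
```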

10.
Measurement and calculation of the standard spectrum of Chinese whispered speech
孙飞  沈勇  李炬  安康 《声学学报》2010,35(4):477-480
A frequency dependence of the power spectral density level of Chinese whispered speech is presented that differs from the national standard GB7348-87 "Standard spectrum of whisper". Measurements were made in an anechoic chamber to improve the measurement signal-to-noise ratio; a real-time analyzer was used to measure the long-term sound-pressure spectrum of each individual's whispered speech, each speaker's long-term spectrum was self-normalized, and multiple samples were then mathematically "mixed" to compute the power spectral density level of Chinese whispered speech. The measured and calculated standard spectrum of Chinese whispered speech can serve as a design basis for any system or electroacoustic device that generates, transmits, receives, or processes Chinese whispered-speech signals.
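A minimal sketch of the measurement pipeline described here: a Welch long-term spectrum per speaker, self-normalization, and averaging across speakers. The PSD settings and the 1 kHz normalization frequency are assumptions.

```python
# Sketch: per-speaker long-term spectrum, self-normalization, and averaging
# ("mixing") across speakers to obtain a whisper power-spectral-density curve.
import numpy as np
from scipy.signal import welch

def long_term_psd_db(x, sr, nperseg=4096):
    f, pxx = welch(x, fs=sr, nperseg=nperseg)
    return f, 10.0 * np.log10(pxx + 1e-20)

def whisper_standard_spectrum(recordings, sr):
    """recordings: list of 1-D whisper signals, one long recording per speaker."""
    curves = []
    for x in recordings:
        f, level = long_term_psd_db(x, sr)
        ref = level[np.argmin(np.abs(f - 1000.0))]    # self-normalize at ~1 kHz
        curves.append(level - ref)
    return f, np.mean(curves, axis=0)                 # averaged ("mixed") curve
```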

11.
The acoustic effects of the adjustment in vocal effort that is required when the distance between speaker and addressee is varied over a large range (0.3-187.5 m) were investigated in phonated and, at shorter distances, also in whispered speech. Several characteristics were studied in the same sentence produced by men, women, and 7-year-old boys and girls: duration of vowels and consonants, pausing and occurrence of creaky voice, mean and range of F0, certain formant frequencies (F1 in [a] and F3), sound-pressure level (SPL) of voiced segments and [s], and spectral emphasis. In addition to levels and emphasis, vowel duration, F0, and F1 were substantially affected. "Vocal effort" was defined as the communication distance estimated by a group of listeners for each utterance. Most of the observed effects correlated better with this measure than with the actual distance, since some additional factors affected the speakers' choice. Differences between speaker groups emerged in segment durations, pausing behavior, and in the extent to which the SPL of [s] was affected. The whispered versions are compared with the phonated versions produced by the same speakers at the same distance. Several effects of whispering are found to be similar to those of increasing vocal effort.

12.
陈斌  张连海  王波  屈丹 《声学学报》2012,37(1):104-112
A method for detecting initial/final boundaries in continuous Chinese speech based on the energy distribution and formant structure of initials and finals is proposed. The speech is first passed through the Seneff auditory perception model to obtain an auditory spectrum; based on this spectrum, feature parameters such as full-band energy, low-band energy, spectral centroid, high/low-band energy ratio, and mid-to-high-band energy are selected to describe the energy distribution and formant structure of each initial/final class. Boundaries are then located at points where these feature parameters change sharply and are refined using the first-order difference of the envelope and a sample-based Kullback-Leibler distance. Experimental results show a boundary-detection accuracy of 93.7% for speech sampled at 8 kHz, above 85.3% at a signal-to-noise ratio of 10 dB, and above 86.7% after parametric coding.
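A simplified sketch of the boundary-detection idea: frame-level band energies and spectral centroid, a first-order difference of the feature trajectory, and a threshold on its magnitude. An ordinary STFT replaces the Seneff auditory model, and the thresholds and band edges are illustrative.

```python
# Sketch: band-energy / spectral-centroid features and a simple change-point
# rule for candidate initial/final boundaries.
import numpy as np
from scipy.signal import stft

def boundary_candidates(x, sr=8000, frame=256, hop=80, thresh=2.5):
    f, t, Z = stft(x, fs=sr, nperseg=frame, noverlap=frame - hop)
    P = np.abs(Z) ** 2 + 1e-12
    full = P.sum(axis=0)
    low = P[f < 1000].sum(axis=0)
    mid_high = P[(f >= 2000) & (f < 3500)].sum(axis=0)
    centroid = (f[:, None] * P).sum(axis=0) / full
    feats = np.vstack([np.log(full), np.log(low), np.log(mid_high), centroid / 1000.0])
    # First-order difference of the feature trajectory; large jumps -> boundaries.
    d = np.linalg.norm(np.diff(feats, axis=1), axis=0)
    z = (d - d.mean()) / (d.std() + 1e-8)
    return t[1:][z > thresh]          # candidate boundary times in seconds
```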

13.
To reduce the degradation in speech recognition caused by the varied characteristics of different speakers, a method of perceptual frequency warping based on subglottal resonances is proposed for speaker normalization. The warping factor is extracted from the second subglottal resonance, using the acoustic coupling between the subglottal system and the vocal tract. The second subglottal resonance is independent of the speech content and therefore reflects speaker characteristics better than the third formant. The perceptual minimum variance distortionless response (PMVDR) coefficients, which are more robust and have better anti-noise capability than MFCC, are normalized and then used in acoustic-model training and recognition. Experiments show that, compared with MFCC and with spectrum warping based on the third formant, the word error rate decreases by 4% and 3% respectively on clean speech, and by 9% and 5% respectively in a noisy environment. The results indicate that the proposed method can improve word recognition accuracy in a speaker-independent recognition system.
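A sketch of a speaker-specific, piecewise-linear frequency warp driven by the second subglottal resonance, which could be applied to the filterbank centre frequencies before computing PMVDR or cepstral features. The reference resonance value and the piecewise-linear form are assumptions, not the paper's exact warping function.

```python
# Sketch: VTLN-style piecewise-linear warp whose factor comes from a speaker's
# second subglottal resonance relative to a reference value.
import numpy as np

def warp_factor_from_sg2(sg2_hz, sg2_ref_hz=1400.0):
    """Speaker-specific warp factor alpha from the second subglottal resonance."""
    return sg2_ref_hz / sg2_hz

def warp_frequencies(freqs_hz, alpha, f_nyq, f_break_ratio=0.85):
    """Scale frequencies by alpha below a break frequency, then map linearly
    so the Nyquist frequency stays fixed."""
    f_break = f_break_ratio * f_nyq
    return np.where(freqs_hz <= f_break,
                    alpha * freqs_hz,
                    alpha * f_break + (freqs_hz - f_break) *
                    (f_nyq - alpha * f_break) / (f_nyq - f_break))

# Usage: warp the centre frequencies of the analysis filterbank before
# computing cepstral features for this speaker.
# warped = warp_frequencies(centre_freqs_hz, warp_factor_from_sg2(1350.0), 8000.0)
```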

14.
This study investigated the role of the amplitude envelope in the vicinity of consonantal release in the perception of the stop-glide contrast. Three sets of acoustic [b-w] continua, each in the vowel environments [a] and [i], were synthesized using parameters derived from natural speech. In the first set, amplitude, formant frequency, and duration characteristics were interpolated between exemplar stop and glide endpoints. In the second set, formant frequency and duration characteristics were interpolated, but all stimuli were given a stop amplitude envelope. The third set was like the second, except that all stimuli were given a glide amplitude envelope. Subjects were given both forced-choice and free-identification tasks. The results of the forced-choice task indicated that amplitude cues were able to override transition slope, duration, and formant frequency cues in the perception of the stop-glide contrast. However, results from the free-identification task showed that, although presence of a stop amplitude envelope turned all stimuli otherwise labeled as glides to stops, the presence of a glide amplitude envelope changed stimuli labeled otherwise as stops to fricatives rather than to glides. These results support the view that the amplitude envelope in the vicinity of the consonantal release is a critical acoustic property for the continuant/noncontinuant contrast. The results are discussed in relation to a theory of acoustic invariance.

15.
Recent studies have demonstrated that mothers exaggerate phonetic properties of infant-directed (ID) speech. However, these studies focused on a single acoustic dimension (frequency), whereas speech sounds are composed of multiple acoustic cues. Moreover, little is known about how mothers adjust phonetic properties of speech to children with hearing loss. This study examined mothers' production of frequency and duration cues to the American English tense/lax vowel contrast in speech to profoundly deaf (N = 14) and normal-hearing (N = 14) infants, and to an adult experimenter. First and second formant frequencies and vowel duration of tense (/i/, /u/) and lax (/ɪ/, /ʊ/) vowels were measured. Results demonstrated that for both infant groups mothers hyperarticulated the acoustic vowel space and increased vowel duration in ID speech relative to adult-directed speech. Mean F2 values were decreased for the /u/ vowel and increased for the /ɪ/ vowel, and vowel duration was longer for the /i/, /u/, and /ɪ/ vowels in ID speech. However, neither acoustic cue differed in speech to hearing-impaired versus normal-hearing infants. These results suggest that both the formant frequencies and the vowel durations that differentiate American English tense/lax vowel contrasts are modified in ID speech regardless of the hearing status of the addressee.

16.
Whispered speech enhancement based on a modified Mel-domain masking model and speech absence probability
A whispered speech enhancement method based on a modified Mel-domain auditory masking model and the speech absence probability (SAP) is proposed. The Mel frequency scale is modified according to the articulatory characteristics of whispered speech, and each frame of the whispered signal is filtered in Mel-domain frequency bands; the auditory masking threshold of each band is determined dynamically from the SAP, and the spectral-subtraction coefficients are adapted to the different masking thresholds to enhance the whispered speech. Objective and subjective tests of the enhanced speech show that, compared with other spectral-subtraction methods, this approach keeps residual and background noise below the human masking threshold, introduces less speech distortion, and yields a clear improvement in subjective listening quality.
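A bare-bones sketch of spectral subtraction whose per-band subtraction factor is adapted through a speech-absence probability; the SAP estimator, the Mel-band grouping, and the masking-threshold computation of the paper are replaced by crude stand-ins.

```python
# Sketch: spectral subtraction where the oversubtraction factor of each
# time-frequency bin is adapted via a crude speech-absence probability (SAP).
import numpy as np
from scipy.signal import stft, istft

def enhance_whisper(x, sr, noise_frames=10, nperseg=512):
    f, t, Z = stft(x, fs=sr, nperseg=nperseg)
    mag, phase = np.abs(Z), np.angle(Z)
    noise = mag[:, :noise_frames].mean(axis=1, keepdims=True)   # initial noise estimate
    snr = mag / (noise + 1e-10)
    sap = 1.0 / (1.0 + snr ** 2)                 # crude speech-absence probability
    alpha = 1.0 + 3.0 * sap                      # more subtraction where speech is unlikely
    clean = np.maximum(mag - alpha * noise, 0.1 * noise)   # floored subtraction
    _, y = istft(clean * np.exp(1j * phase), fs=sr, nperseg=nperseg)
    return y
```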

17.
Vowel and consonant confusion matrices were collected in the hearing alone (H), lipreading alone (L), and hearing plus lipreading (HL) conditions for 28 patients participating in the clinical trial of the multiple-channel cochlear implant. All patients were profound-to-totally deaf and "hearing" refers to the presentation of auditory information via the implant. The average scores were 49% for vowels and 37% for consonants in the H condition and the HL scores were significantly higher than the L scores. Information transmission and multidimensional scaling analyses showed that different speech features were conveyed at different levels in the H and L conditions. In the HL condition, the visual and auditory signals provided independent information sources for each feature. For vowels, the auditory signal was the major source of duration information, while the visual signal was the major source of first and second formant frequency information. The implant provided information about the amplitude envelope of the speech and the estimated frequency of the main spectral peak between 800 and 4000 Hz, which was useful for consonant recognition. A speech processor that coded the estimated frequency and amplitude of an additional peak between 300 and 1000 Hz was shown to increase the vowel and consonant recognition in the H condition by improving the transmission of first formant and voicing information.

18.
In order to increase the recognition rate for short-duration whispered speakers under varying channel conditions, a hybrid compensation method operating in both the model and feature domains is proposed. In the model-training stage, the method is based on joint factor analysis: the speaker and channel spaces of the training speech are estimated, so that speaker factors are extracted and channel factors are eliminated. In the test stage, the channel factor of the test speech is projected into the feature space for feature compensation, so that channel information is removed in both the model and feature domains and the recognition rate improves. The experimental results show that the hybrid compensation achieves similar recognition rates under three different training-channel conditions, and that the method is more effective than joint factor analysis on short whispered test speech.
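A sketch of the feature-domain half of such a hybrid scheme: given a channel loading matrix and a channel factor estimated for the test utterance by a JFA-style model, each frame is compensated by subtracting the expected channel offset under its mixture posteriors. The matrix shapes and the posterior computation are assumptions.

```python
# Sketch: feature-domain channel compensation using a JFA-style channel
# loading matrix U and an utterance-level channel factor x_channel.
import numpy as np

def feature_compensate(frames, posteriors, U, x_channel):
    """
    frames:      (T, D)   acoustic features of the test utterance
    posteriors:  (T, C)   UBM mixture posteriors per frame
    U:           (C*D, R) channel loading matrix from JFA training
    x_channel:   (R,)     channel factor estimated for this utterance
    """
    T, D = frames.shape
    C = posteriors.shape[1]
    offset = (U @ x_channel).reshape(C, D)   # per-mixture channel offset
    # Expected channel offset for each frame under its mixture posteriors.
    frame_offset = posteriors @ offset       # (T, D)
    return frames - frame_offset
```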

19.
顾晓江  赵鹤鸣  吕岗 《声学学报》2012,37(2):198-203
To improve the recognition rate for short-duration whispered speakers under channel mismatch, a hybrid compensation method operating in both the model and feature domains is proposed. In the model-training stage, joint factor analysis is used to estimate the speaker and channel spaces of the training speech, extracting the speaker factors and eliminating the channel factors; in the test stage, the channel factor of the test utterance is projected into the feature space for feature compensation, so that channel information is removed in both the model and feature domains and the recognition rate is improved. Experimental results show that the hybrid compensation method achieves similar recognition rates under three different training-channel conditions, and that the new method outperforms joint factor analysis on short-duration whispered test speech.

20.
A speaker-normalization scheme is proposed in which, during feature extraction, a different perceptual warping factor is used to normalize each speaker's parameters, thereby normalizing across speakers in speaker-independent speech recognition. The perceptual warping factor is estimated from the subglottal resonance frequency arising from the acoustic coupling between the supraglottal and subglottal systems; compared with methods that use the third formant of the vocal tract as the reference frequency, it filters out more of the influence of the linguistic content and better reflects the speaker's individual characteristics. Perceptual minimum variance distortionless response (PMVDR) coefficients, which are more noise-robust than Mel cepstral coefficients, are extracted as recognition features, and a classical hidden Markov model (HMM) is used as the speech model. Experiments show that, compared with conventional recognition features and with spectral warping based on the third formant, the word error rate drops by 4% and 3% respectively on clean speech and by 9% and 5% respectively in noisy environments, effectively improving the performance of the speaker-independent recognition system.

