Similar articles
 Found 20 similar articles (search time: 46 ms)
1.
This study examined the perceptual specialization for native-language speech sounds, by comparing native Hindi and English speakers in their perception of a graded set of English /w/-/v/ stimuli that varied in similarity to natural speech. The results demonstrated that language experience does not affect general auditory processes for these types of sounds; there were strong cross-language differences for speech stimuli, and none for stimuli that were nonspeech. However, the cross-language differences extended into a gray area of speech-like stimuli that were difficult to classify, suggesting that the specialization occurred in phonetic processing prior to categorization.

2.
A number of studies have been conducted on improving the sound quality of electrical artificial laryngeal speech, yet the speech produced remains difficult to hear compared to a natural voice. For this reason, it is necessary to effectively improve the frequency characteristics of the input signal. In the present study, to improve the sound quality of vocalization using an electrical artificial larynx, a comparison of the frequency characteristics of real and artificial voices was first conducted, and three filters that bring the frequency characteristics of the artificial voice closer to those of a real voice were generated. The influence of these filters on the quality of the artificial voice was then investigated via physical measurement and a subjective evaluation experiment targeting the five Japanese vowels. It was found that the intelligibility of the artificial /a/ and /o/ sounds improved, whereas little improvement was observed for /i/, /u/, and /e/. The obtained results confirmed the effect of optimizing the input signal to the vibration speaker and indicated areas for further improvement.
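The spectral-matching idea behind such filters can be sketched in a few lines. The following is an illustrative Python sketch, not the paper's actual filter design: the function name, inputs, and the square-root-of-power-ratio rule are all assumptions. Each frequency bin's gain is chosen so that the artificial voice's average power spectrum is pulled toward the real voice's.

```python
def compensation_filter(real_psd, artificial_psd, floor=1e-10):
    """Per-frequency magnitude response |H(f)| = sqrt(P_real(f) / P_artificial(f)).

    Applying H to the artificial-larynx signal moves its average spectrum
    toward that of a real voice. `floor` avoids division by zero.
    """
    return [(r / max(a, floor)) ** 0.5
            for r, a in zip(real_psd, artificial_psd)]
```

For example, a bin where the real voice carries four times the artificial voice's power gets a gain of 2, while the opposite imbalance gets a gain of 0.5.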

3.
For voice conversion with non-parallel corpora and without joint training, a method based on a structured Gaussian mixture model in a cepstrum eigenspace is proposed. After the speakers' cepstral feature parameters are extracted, the eigenvectors of their scatter matrix are computed to construct a cepstrum eigenspace, in which a Structured Gaussian Mixture Model in Eigen Space (SGMM-ES) is trained. The SGMM-ES models trained independently for the source and target speakers are matched and aligned according to the Acoustical Universal Structure (AUS) principle, finally yielding a short-time spectrum conversion function based on the cepstrum eigenspace. Experimental results show that the average target-speaker identification rate of the converted speech reaches 95.25% and the average spectral distortion is 1.25, improvements of 0.8% and 7.3% respectively over the SGMM method based on the original cepstral feature space, while ABX and MOS evaluations show conversion performance very close to that of traditional parallel-corpus methods. These results demonstrate that the structured Gaussian mixture model in a cepstrum eigenspace is effective for voice conversion under non-parallel corpus conditions.

4.
惠琳  俞一彪 《声学学报》2017,42(6):762-768
A method for age-related voice conversion combining a group of short-time spectrum universal background models with prosodic parameters is proposed. For spectral conversion, short-time spectral coefficients are extracted from the speech of speakers in the same age group and Gaussian mixture models are built; the speakers are then clustered by the similarity of their speech features, a universal background model is trained for each cluster, and a group of universal background models together with a set of short-time spectrum conversion functions is finally obtained. After spectral conversion, the formants are further fine-tuned. For prosodic conversion, a single-Gaussian model of the fundamental frequency and an average duration-rate model of the speaking rate are built to derive the conversion functions. Experimental results show that the proposed method has a clear advantage over the traditional bilinear method on ABX and MOS evaluations, and improves the log-likelihood change rate by 4% relative to the single universal background model method. These results indicate that the proposed method gives the converted speech good target-speaker orientation together with good speech quality, a clear improvement over traditional methods.

5.
This study examines cross-linguistic variation in the location of shared vowels in the vowel space across five languages (Cantonese, American English, Greek, Japanese, and Korean) and three age groups (2-year-olds, 5-year-olds, and adults). The vowels /a/, /i/, and /u/ were elicited in familiar words using a word repetition task. The productions of target words were recorded and transcribed by native speakers of each language. For correctly produced vowels, first and second formant frequencies were measured. In order to remove the effect of vocal tract size on these measurements, a normalization approach that calculates distance and angular displacement from the speaker centroid was adopted. Language-specific differences in the location of shared vowels in the formant values as well as the shape of the vowel spaces were observed for both adults and children.
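The normalization step described above — distance and angular displacement from the speaker centroid — can be sketched as follows. This is an illustrative Python sketch: the function name, input layout, and angle convention are assumptions, not taken from the paper.

```python
import math

def normalize_vowels(formants):
    """Re-express each vowel's (F1, F2) point as (distance, angle in degrees)
    relative to the speaker's centroid, removing vocal-tract-size effects.

    `formants` maps vowel labels to (F1, F2) pairs in Hz.
    """
    f1c = sum(f1 for f1, _ in formants.values()) / len(formants)
    f2c = sum(f2 for _, f2 in formants.values()) / len(formants)
    return {vowel: (math.hypot(f1 - f1c, f2 - f2c),
                    math.degrees(math.atan2(f1 - f1c, f2 - f2c)))
            for vowel, (f1, f2) in formants.items()}
```

Because distance and angle are measured from each speaker's own centroid, a child's small vowel space and an adult's large one become directly comparable.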

6.
谷东  简志华 《声学学报》2018,43(5):864-872
To handle the case where the target speaker's corpus may be insufficient, this paper proposes a unified-tensor-dictionary voice conversion algorithm for limited corpora. N speakers are selected from a speech corpus as the base speakers of the speech tensor dictionary, and their parallel speech segments are aligned by a multi-sequence dynamic time warping algorithm, building a tensor dictionary composed of N two-dimensional base dictionaries. In the conversion stage, the speech dictionaries of the source and target speakers are each constructed as linear combinations of the base dictionaries in the tensor dictionary, realizing voice conversion. Experimental results show that with 14 base speakers, only a very small amount of target-speaker speech is needed to achieve conversion performance comparable to the traditional algorithm based on non-negative matrix factorization, which greatly facilitates the application of voice conversion systems.

7.
俞一彪  曾道建  姜莹 《声学学报》2012,37(3):346-352
A voice conversion method based on completely independent speaker speech models is proposed. Each speaker first trains a Structured Gaussian Mixture Model (SGMM) on his or her own corpus; the source and target speakers' models are then matched and their Gaussian components aligned according to the Acoustical Universal Structure (AUS), finally yielding the conversion function used for voice conversion. ABX and MOS experiments show conversion performance close to that of the traditional joint training method on parallel corpora, with a target-speaker identification rate of 94.5% for the converted speech. These results fully demonstrate that the proposed method not only converts well but also requires less training and offers good system extensibility.

8.
The voice conversion (VC) technique has recently emerged as a branch of speech synthesis dealing with speaker identity. In this work, linear prediction (LP) analysis is carried out on speech signals to obtain acoustical parameters related to speaker identity: the fundamental frequency (pitch), the voicing decision, the signal energy, and the vocal tract parameters. Once these parameters are established for two speakers designated as source and target, statistical mapping functions are applied to modify them; the mapping functions are derived so that the source parameters come to resemble those of the target. Finally, the modified parameters are used to produce the new speech signal. To illustrate the feasibility of the proposed approach, simple-to-use voice conversion software was developed. The technique has shown satisfactory results, with the synthesized speech signal virtually matching that of the target speaker.
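The vocal tract parameters in an LP pipeline like this are typically LPC coefficients. The following self-contained sketch shows the standard estimation step — the autocorrelation method with the Levinson-Durbin recursion — in pure Python; it is illustrative, not the authors' implementation.

```python
def lpc(frame, order):
    """Estimate LPC coefficients [1, a1, ..., a_order] for one windowed
    frame via the autocorrelation method and the Levinson-Durbin
    recursion. Returns the coefficient list and the prediction error."""
    n = len(frame)
    r = [sum(frame[j] * frame[j + k] for j in range(n - k))
         for k in range(order + 1)]
    a, err = [1.0], r[0]
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err                      # reflection coefficient
        a = [1.0] + [a[j] + k * a[i - j] for j in range(1, i)] + [k]
        err *= 1.0 - k * k
    return a, err
```

For a first-order decaying exponential x[t] = 0.9^t, the recursion recovers a1 ≈ -0.9, i.e., the one-step predictor x̂[t] = 0.9·x[t-1].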

9.
Under the condition of a limited target-speaker corpus, this paper proposes a voice conversion algorithm using a unified tensor dictionary. First, parallel speech from N speakers is selected from the speech corpus to build the base of the tensor dictionary. Then, after multi-sequence dynamic time warping of the chosen speech, N two-dimensional base dictionaries are generated, which together constitute the unified tensor dictionary. During the conversion stage, the dictionaries of the source and target speakers are each constructed as linear combinations of the N base dictionaries using the two speakers' speech. Experimental results show that with 14 base speakers, the algorithm matches the performance of the traditional NMF-based method while requiring very little target-speaker speech, which greatly facilitates the application of voice conversion systems.
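The alignment step above relies on dynamic time warping. Below is a minimal single-feature DTW sketch in Python; real systems align multi-dimensional spectral frames, and the paper's multi-sequence variant warps several sequences jointly, so this is only the core recurrence.

```python
def dtw_distance(x, y):
    """Classic two-sequence DTW distance with unit step sizes.
    x and y are sequences of scalar features; for frame vectors,
    abs() would be replaced by a vector distance."""
    inf = float("inf")
    n, m = len(x), len(y)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]
```

Warping lets one frame of the shorter utterance align with several frames of the longer one, so identical content spoken at different speeds yields zero distance.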

10.
Both English and Japanese have two voiceless sibilant fricatives, an anterior fricative /s/ contrasting with a more posterior fricative /ʃ/. When children acquire sibilant fricatives, English children typically substitute [s] for /ʃ/, whereas Japanese children typically substitute [ʃ] for /s/. This study examined English- and Japanese-speaking adults' perception of children's productions of voiceless sibilant fricatives to investigate whether the apparent asymmetry in the acquisition of voiceless sibilant fricatives reported previously in the two languages was due in part to how adults perceive children's speech. The results of this study show that adult speakers of English and Japanese weighed acoustic parameters differently when identifying fricatives produced by children and that these differences explain, in part, the apparent cross-language asymmetry in fricative acquisition. This study shows that generalizations about universal and language-specific patterns in speech-sound development cannot be determined without considering all sources of variation including speech perception.

11.
OBJECTIVES/HYPOTHESIS: The purpose of this study was to examine the temporal-acoustic differences between trained singers and nonsingers during speech and singing tasks. METHODS: Thirty male participants were separated into two groups of 15 according to level of vocal training (ie, trained or untrained). The participants spoke and sang carrier phrases containing English voiced and voiceless bilabial stops, and voice onset time (VOT) was measured for the stop consonant productions. RESULTS: Mixed analyses of variance revealed a significant main effect between speech and singing for /p/ and /b/, with VOT durations longer during speech than singing for /p/, and the opposite true for /b/. Furthermore, a significant phonatory task by vocal training interaction was observed for /p/ productions. CONCLUSIONS: The results indicated that the type of phonatory task influences VOT and that these influences are most obvious in trained singers secondary to the articulatory and phonatory adjustments learned during vocal training.
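VOT is the interval from the stop burst to the onset of voicing. In practice such measurements are usually made by hand from waveforms and spectrograms, but a crude energy-threshold onset detector illustrates the kind of landmark detection involved. This Python sketch is an assumption-laden toy: the frame size and threshold are arbitrary, and it is not the authors' measurement procedure.

```python
def first_onset(samples, sr, frame_ms=5.0, threshold=0.02):
    """Return the time in seconds of the first frame whose RMS energy
    reaches `threshold`, or None if no frame does. Running it once on
    the burst band and once on the voicing band, VOT is the difference
    of the two onset times."""
    frame = max(1, int(sr * frame_ms / 1000.0))
    for start in range(0, len(samples) - frame + 1, frame):
        chunk = samples[start:start + frame]
        rms = (sum(s * s for s in chunk) / frame) ** 0.5
        if rms >= threshold:
            return start / sr
    return None
```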

12.
A new method of voice conversion in a cepstrum eigenspace based on a structured Gaussian mixture model is proposed for non-parallel corpora without joint training. For each speaker, the cepstral features of speech are extracted and mapped to the eigenspace formed by the eigenvectors of their scatter matrix, and a Structured Gaussian Mixture Model in the EigenSpace (SGMM-ES) is trained there. The source and target speakers' SGMM-ES models are matched according to the Acoustic Universal Structure (AUS) principle to obtain the spectrum conversion function. Experimental results show that the speaker identification rate of the converted speech reaches 95.25% and the average cepstral distortion is 1.25, improvements of 0.8% and 7.3% respectively over the SGMM method in the original feature space. ABX and MOS evaluations indicate that the conversion performance is quite close to that of the traditional method under the parallel-corpus condition. These results show that the eigenspace-based structured Gaussian mixture model is effective for voice conversion with non-parallel corpora.
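The eigenspace here is built from the eigenvectors of the cepstral features' scatter matrix. A pure-Python sketch of that construction follows; power iteration finds only the dominant eigenvector (the paper would use a full eigendecomposition), and the function names are my own.

```python
def scatter_matrix(vectors):
    """Scatter matrix S = sum_i (x_i - mean)(x_i - mean)^T of feature vectors."""
    dim, n = len(vectors[0]), len(vectors)
    mean = [sum(v[i] for v in vectors) / n for i in range(dim)]
    s = [[0.0] * dim for _ in range(dim)]
    for v in vectors:
        c = [v[i] - mean[i] for i in range(dim)]
        for i in range(dim):
            for j in range(dim):
                s[i][j] += c[i] * c[j]
    return s

def dominant_eigenvector(s, iters=200):
    """Leading eigenvector of a symmetric matrix via power iteration."""
    dim = len(s)
    v = [1.0] * dim
    for _ in range(iters):
        w = [sum(s[i][j] * v[j] for j in range(dim)) for i in range(dim)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v
```

Projecting each cepstral frame onto the leading eigenvectors gives the eigenspace coordinates in which the structured GMM is trained.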

13.
This paper discusses the application of iterative Wiener filtering incorporating characteristics of human hearing to speech separation: a codebook formed by vector quantization captures the target speaker's speech features, and the attention mechanism of the human ear in the "cocktail party effect" is simulated by measuring how well the filtering result matches these features. Experimental results show that this method works well.
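The Wiener filter at the core of such a scheme attenuates each frequency bin by the gain G = S/(S+N). A minimal per-bin Python sketch is given below; estimating the clean-speech PSD by spectral subtraction is an illustrative initialization (one pass of the iterative scheme), and the codebook-matching attention step is not modeled.

```python
def wiener_gains(noisy_psd, noise_psd, floor=1e-10):
    """Per-bin Wiener gains G(f) = S(f) / (S(f) + N(f)), where the
    clean-speech PSD S is estimated by spectral subtraction."""
    gains = []
    for y, n in zip(noisy_psd, noise_psd):
        s = max(y - n, floor)        # crude clean-speech PSD estimate
        gains.append(s / (s + n))
    return gains
```

Bins dominated by speech are passed nearly unchanged, while noise-only bins are driven toward zero; the iterative variant re-estimates S from the filtered output and repeats.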

14.
Spectral- and cepstral-based acoustic measures are preferable to time-based measures for accurately representing dysphonic voices during continuous speech. Although these measures show promising relationships to perceptual voice quality ratings, less is known regarding their ability to differentiate normal from dysphonic voice during continuous speech and the consistency of these measures across multiple utterances by the same speaker. The purpose of this study was to determine whether spectral moments of the long-term average spectrum (LTAS) (spectral mean, standard deviation, skewness, and kurtosis) and cepstral peak prominence measures were significantly different for speakers with and without voice disorders when assessed during continuous speech. The consistency of these measures within a speaker across utterances was also addressed. Continuous speech samples from 27 subjects without voice disorders and 27 subjects with mixed voice disorders were acoustically analyzed. In addition, voice samples were perceptually rated for overall severity. Acoustic analyses were performed on three continuous speech stimuli from a reading passage: two full sentences and one constituent phrase. Significant between-group differences were found for both cepstral measures and three LTAS measures (P < 0.001): spectral mean, skewness, and kurtosis. These five measures also showed moderate to strong correlations to overall voice severity. Furthermore, high degrees of within-speaker consistency (correlation coefficients ≥0.89) across utterances with varying length and phonemic content were evidenced for both subject groups.
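The four LTAS moments can be computed by treating the spectrum as a probability distribution over frequency. A Python sketch follows; the inputs (`freqs` in Hz and `power` as linear LTAS magnitudes) and the non-excess kurtosis convention are assumptions on my part.

```python
import math

def spectral_moments(freqs, power):
    """Spectral mean, standard deviation, skewness, and (non-excess)
    kurtosis of a long-term average spectrum."""
    total = sum(power)
    mean = sum(f * p for f, p in zip(freqs, power)) / total
    var = sum((f - mean) ** 2 * p for f, p in zip(freqs, power)) / total
    sd = math.sqrt(var)
    skew = (sum((f - mean) ** 3 * p for f, p in zip(freqs, power))
            / (total * sd ** 3))
    kurt = (sum((f - mean) ** 4 * p for f, p in zip(freqs, power))
            / (total * sd ** 4))
    return mean, sd, skew, kurt
```

As a sanity check, a Gaussian-shaped spectrum centered at 4 kHz with a 500 Hz spread returns approximately (4000, 500, 0, 3).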

15.
This study investigated the effects of age and hearing loss on perception of accented speech presented in quiet and noise. The relative importance of alterations in phonetic segments vs. temporal patterns in a carrier phrase with accented speech also was examined. English sentences recorded by a native English speaker and a native Spanish speaker, together with hybrid sentences that varied the native language of the speaker of the carrier phrase and the final target word of the sentence were presented to younger and older listeners with normal hearing and older listeners with hearing loss in quiet and noise. Effects of age and hearing loss were observed in both listening environments, but varied with speaker accent. All groups exhibited lower recognition performance for the final target word spoken by the accented speaker compared to that spoken by the native speaker, indicating that alterations in segmental cues due to accent play a prominent role in intelligibility. Effects of the carrier phrase were minimal. The findings indicate that recognition of accented speech, especially in noise, is a particularly challenging communication task for older people.

16.
This study investigated the relationship among the magnitude of jaw opening, intrinsic fundamental frequency (F0), and glottal parameters in natural speech. Acoustic, jaw opening, and electroglottographic (EGG) signals were simultaneously recorded. The subjects were 10 healthy men with New Zealand English as their native language. Subjects were asked to repeat a standard nonemphasized sentence in which one of the target vowels (/a/, /e/, /i/, /o/, and /u/) was embedded in various contexts. The glottal parameters F0, open quotient (OQ), and speed quotient (SQ) were measured from the EGG signal. Results of a series of one-way repeated-measures analyses of variance (ANOVA) showed a significant vowel effect on the magnitude of jaw opening [F(4, 24) = 25.512, P < .001], F0 [F(4, 28) = 45.415, P < .001] and speed quotient [F(4, 28) = 5.233, P = .003], but not on the open quotient [F(4, 28) = 0.501, P = .735]. The magnitude of jaw opening was found to be inversely related with F0 (r = -0.624, n = 25, P = .0009). These findings showed that the magnitude of jaw opening was related to F0 and that jaw opening might be a control signal for simulation of long-term F0 variation to achieve a higher degree of naturalness in artificial voice.
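The reported inverse association (r = -0.624) is a Pearson product-moment correlation. For reference, a minimal implementation of the standard formula (Python; not the authors' analysis code):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between paired samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)
```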

17.
Past studies have shown that when formants are perturbed in real time, speakers spontaneously compensate for the perturbation by changing their formant frequencies in the opposite direction to the perturbation. Further, the pattern of these results suggests that the processing of auditory feedback error operates at a purely acoustic level. This hypothesis was tested by comparing the response of three language groups to real-time formant perturbations: (1) native English speakers producing an English vowel /ε/, (2) native Japanese speakers producing a Japanese vowel (/e/), and (3) native Japanese speakers learning English, producing /ε/. All three groups showed similar production patterns when F1 was decreased; however, when F1 was increased, the Japanese groups did not compensate as much as the native English speakers. Due to this asymmetry, the hypothesis that the compensatory production for formant perturbation operates at a purely acoustic level was rejected. Rather, some level of phonological processing influences the feedback processing behavior.

18.
To assist physically handicapped persons in their movements, we developed an embedded isolated-word automatic speech recognition (ASR) system for voice control of smart wheelchairs. Although several kinds of electric wheelchairs exist on the industrial market, they must still be controlled manually via a joystick, which limits their use, especially by people with severe disabilities; a significant number of disabled people cannot use a standard electric wheelchair, or can drive one only with difficulty. The proposed solution is to control and drive the wheelchair by voice instead of a classical joystick. The intelligent chair is equipped with an obstacle detection system consisting of ultrasonic sensors, a navigation algorithm, and a speech acquisition and recognition module for voice control embedded on a DSP card. The ASR architecture consists of two main modules: speech parameterization (feature extraction) and a classifier that identifies the speech and generates the control word for the motor power unit. The training and recognition phases are based on Hidden Markov Models (HMMs) and the K-means, Baum-Welch, and Viterbi algorithms. The database consists of 39 isolated words (13 words, each pronounced 3 times under different environments and conditions). The simulations were tested under the Matlab environment, and the real-time implementation was written in C with Code Composer Studio on a TMS320C6416 DSP kit. The experiments gave promising recognition accuracy, around 99% in a clean environment; however, accuracy decreases considerably in noisy environments, especially for SNR values below 5 dB (street: 78%; factory: 52%).
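Of the algorithms named above, Viterbi performs the decoding step: finding the most likely hidden-state path for an observation sequence. The sketch below is a textbook discrete-HMM Viterbi in Python; the toy model in the test is the classic illustrative example, not the wheelchair vocabulary or the authors' DSP code.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely state path for `obs` under a discrete HMM."""
    # v[t][s] = (best probability of a path ending in s at time t, predecessor)
    v = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        v.append({})
        for s in states:
            prob, prev = max((v[t - 1][p][0] * trans_p[p][s] * emit_p[s][obs[t]], p)
                             for p in states)
            v[t][s] = (prob, prev)
    # backtrack from the best final state
    last = max(states, key=lambda s: v[-1][s][0])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(v[t][path[-1]][1])
    return list(reversed(path))
```

In an isolated-word recognizer, one HMM per vocabulary word is trained with Baum-Welch, and the word whose model scores highest on the incoming feature sequence becomes the control command.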

19.
The detection of French accent by American listeners
The five experiments presented here examine the ability of listeners to detect a foreign accent. Computer editing techniques were used to isolate progressively shorter excerpts of the English spoken by native speakers of American English and French. Native English-speaking listeners judged the speech samples in one- and two-interval forced-choice tests. They were able to detect foreign accent equally well when presented with speech edited from phrases read in isolation and produced in a spontaneous story. The listeners accurately identified the French talkers (63%-95% of the time) no matter how short the speech samples presented were: entire phrases (e.g., "two little dogs"), syllables (/tu/ or /ti/), portions of syllables corresponding to the phonetic segments /t/, /i/, /u/, and even just the first 30 ms of "two" (roughly, the release burst of /t/). Both phonetically trained listeners familiar with French-accented English and unsophisticated listeners were able to accurately detect accent. These results suggest that listeners develop very detailed phonetic category prototypes against which to evaluate speech sounds occurring in their native language.

20.
Training Japanese listeners to identify English /r/ and /l/: a first report
Native speakers of Japanese learning English generally have difficulty differentiating the phonemes /r/ and /l/, even after years of experience with English. Previous research that attempted to train Japanese listeners to distinguish this contrast using synthetic stimuli reported little success, especially when transfer to natural tokens containing /r/ and /l/ was tested. In the present study, a different training procedure that emphasized variability among stimulus tokens was used. Japanese subjects were trained in a minimal pair identification paradigm using multiple natural exemplars contrasting /r/ and /l/ from a variety of phonetic environments as stimuli. A pretest-posttest design containing natural tokens was used to assess the effects of training. Results from six subjects showed that the new procedure was more robust than earlier training techniques. Small but reliable differences in performance were obtained between pretest and posttest scores. The results demonstrate the importance of stimulus variability and task-related factors in training nonnative speakers to perceive novel phonetic contrasts that are not distinctive in their native language.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号