Similar Documents
20 similar documents found.
1.
Conversion of Whispered Speech to Normal Speech Using Low-Dimensional Feature Mapping   (Cited: 1; self: 0, others: 1)
When converting whispered speech to normal speech, in order to study how dimensionality-reduced speech features affect the conversion, the spectral envelopes of whispered and normal speech are adaptively encoded to extract low-dimensional features of each. A BP (back-propagation) network is then used to model the mapping between the low-dimensional spectral-envelope features of whispered and normal speech, as well as the relationship between the normal-speech fundamental frequency and the whispered low-dimensional spectral-envelope features. At conversion time, the low-dimensional spectral-envelope features and fundamental frequency of the corresponding normal speech are obtained from the whispered low-dimensional features, and the normal-speech spectral envelope is recovered by decoding the low-dimensional features. Experimental results show that the cepstral distance between the converted speech and natural normal speech is 10% lower than with a Gaussian mixture model method, and that both naturalness and intelligibility of the converted speech improve.
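The evaluation metric above, cepstral distance, can be sketched in a few lines. This is a generic per-frame Euclidean cepstral distance on plain Python lists; the function names and the simple averaging over frames are illustrative assumptions, not the paper's exact definition:

```python
import math

def cepstral_distance(c_ref, c_test):
    """Euclidean distance between two cepstral coefficient vectors.

    Published work often scales this to dB; the plain Euclidean
    distance is kept here for clarity.
    """
    assert len(c_ref) == len(c_test)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c_ref, c_test)))

def mean_cepstral_distance(ref_frames, test_frames):
    """Average frame-wise cepstral distance over an utterance."""
    dists = [cepstral_distance(r, t) for r, t in zip(ref_frames, test_frames)]
    return sum(dists) / len(dists)
```

A 10% relative improvement, as reported above, would correspond to this average dropping by a factor of 0.9 against the GMM baseline.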

2.
Research on Isolated-Word Recognition of Mandarin Whispered Speech   (Cited: 6; self: 0, others: 6)
杨莉莉, 林玮, 徐柏龄. 《应用声学》, 2006, 25(3): 187-192
Whispered-speech recognition has broad application prospects and is a new research topic, but the inherent characteristics of whispered speech, such as its low sound level and the absence of a fundamental frequency, make it difficult. Based on a production model of whispered speech and its acoustic characteristics, this paper builds an isolated-word recognition system for Mandarin whispered speech. Because whispered speech has a low signal-to-noise ratio, speech enhancement must be applied first; tone information is also used in the recognizer to improve performance. Experimental results show that MFCCs combined with the amplitude envelope can serve as feature parameters for automatic Mandarin whispered-speech recognition; with an HMM recognizer on a small vocabulary, the recognition rate is 90.4%.
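The MFCC features mentioned above are built on a mel-spaced filterbank. A minimal sketch of the standard mel-scale mappings and mel-spaced filter center frequencies follows; the filter count and frequency range in the example are illustrative assumptions, not values from the paper:

```python
import math

def hz_to_mel(f_hz):
    """Standard HTK-style mel-scale mapping."""
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def mel_to_hz(m):
    """Inverse of hz_to_mel."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_center_frequencies(n_filters, f_low, f_high):
    """Center frequencies (Hz) of n_filters triangular filters,
    equally spaced on the mel scale between f_low and f_high."""
    m_low, m_high = hz_to_mel(f_low), hz_to_mel(f_high)
    step = (m_high - m_low) / (n_filters + 1)
    return [mel_to_hz(m_low + step * (i + 1)) for i in range(n_filters)]
```

The log filterbank energies would then be decorrelated with a DCT to obtain the MFCCs proper.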

3.
Two whispered-speech enhancement algorithms based on asymmetric cost functions are proposed, which treat amplification distortion and compression distortion differently during enhancement. The Modified Itakura-Saito (MIS) algorithm penalizes amplification distortion more heavily, while the Kullback-Leibler (KL) algorithm penalizes compression distortion more heavily. Experimental results show that at low signal-to-noise ratios below -6 dB, the intelligibility of whispered speech enhanced by the MIS algorithm improves significantly over traditional algorithms, while the KL algorithm achieves intelligibility gains similar to the minimum mean-square-error enhancement algorithm. This confirms that amplification and compression distortion affect whispered-speech intelligibility differently: at low SNR, larger compression distortion helps intelligibility, whereas at high SNR compression distortion has little effect.
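The asymmetric treatment of amplification versus compression distortion can be illustrated with a toy cost built on the classic Itakura-Saito distortion. The weighting scheme and weight values below are hypothetical, in the spirit of the MIS/KL trade-off described above, not the paper's exact definitions:

```python
import math

def itakura_saito(p_ref, p_est):
    """Classic Itakura-Saito distortion between two power-spectrum bins.

    Non-negative, zero only when p_est == p_ref."""
    r = p_ref / p_est
    return r - math.log(r) - 1.0

def asymmetric_cost(p_ref, p_est, w_amp=2.0, w_comp=1.0):
    """Toy asymmetric cost: amplification errors (p_est > p_ref) are
    weighted by w_amp, compression errors (p_est < p_ref) by w_comp."""
    w = w_amp if p_est > p_ref else w_comp
    return w * itakura_saito(p_ref, p_est)
```

With w_amp > w_comp this behaves like the MIS variant; swapping the weights gives KL-like behavior that tolerates amplification and punishes compression.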

4.
Initial/Final Segmentation of Whispered Speech Based on an Auditory Model   (Cited: 5; self: 0, others: 5)
丁慧, 栗学丽, 徐柏龄. 《应用声学》, 2004, 23(2): 20-25, 44
This paper analyzes the characteristics of whispered speech and, drawing on basic theory and experimental data from physiological and psychological acoustics, proposes a method for initial/final (consonant/vowel) segmentation of whispered speech using an auditory model. The perception model suited to this task comprises several stages: the cochlea's decomposition of sound into frequency components, the nonlinear time- and frequency-domain transformations of the auditory system, and the lateral-inhibition mechanism of the central nervous system. The model reflects human auditory perception of low-energy speech in noisy environments and is therefore well suited to whispered-speech recognition; satisfactory results were obtained in initial/final segmentation experiments.

5.
Objective evaluation of speech quality can replace costly human scoring, but most objective metrics require a clean reference signal, which is hard to obtain in many practical acoustic systems. A non-intrusive speech-quality evaluation algorithm is therefore proposed that combines auxiliary-target learning with a convolutional recurrent network (CRN). To reduce complexity, the algorithm feeds the CRN with Bark-frequency cepstral coefficients (BFCCs), computed from filters that model human auditory characteristics. A convolutional neural network (CNN) first extracts frame-level features from the BFCCs; a bidirectional long short-term memory network then models long-term temporal dependencies in the frame-level features; finally, a self-attention mechanism adaptively selects useful information from the frame-level features, integrates it into utterance-level features, and maps these to an objective score. To improve evaluation effectiveness, a multi-task training strategy introduces voice activity detection (VAD) as an auxiliary learning target. Experiments on open-source databases show that the proposed algorithm correlates better with the mean opinion score (MOS) than other non-intrusive algorithms. Moreover, the model is compact and generalizes well to the ITU-T P.808 database of distorted speech with subjective MOS labels, approaching the accuracy of the Perceptual Evaluation of Speech Quality (PESQ) metric.
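The auxiliary VAD target above requires frame-level speech/silence labels. A minimal energy-threshold VAD sketch for producing such labels is shown below; the frame length and threshold are assumed values, and this stands in for, but is not, the paper's learned model:

```python
def frame_energies(samples, frame_len=160):
    """Mean-square energy of each non-overlapping frame."""
    frames = [samples[i:i + frame_len]
              for i in range(0, len(samples) - frame_len + 1, frame_len)]
    return [sum(s * s for s in f) / len(f) for f in frames]

def energy_vad_labels(samples, frame_len=160, threshold=1e-3):
    """Frame labels: 1 = speech-active, 0 = silence, by energy threshold."""
    return [1 if e > threshold else 0
            for e in frame_energies(samples, frame_len)]
```

In a multi-task setup these binary labels would supervise an auxiliary output head alongside the MOS regression target.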

6.
A speech enhancement algorithm is proposed that dynamically combines the human-ear masking effect with a parameterized-order STSA-MMSE (Short-Time Spectral Amplitude Minimum Mean Square Error) algorithm. By introducing a parameter, the algorithm improves the real-time performance of STSA-MMSE; combined with the masking effect, it dynamically determines the enhancement filter's transfer function to adapt to changes in the speech signal and thereby improve speech quality. Experimental results show that, compared with STSA-MMSE, the algorithm is much more responsive in real time, introduces less distortion into the denoised speech, and suppresses musical noise well.

7.
A factor-analysis method for the state of whispering speakers based on global spectral parameters is proposed. First, based on whisper listening-test results, arousal and valence factors are introduced to rate speaker state on three levels. Next, spectral parameters under a sinusoidal model of whispered speech and under a human auditory model are extracted and, together with other short-time spectral parameters, tracked over time; global statistics of each parameter then serve as features for classifying the whispering speaker's state. Experimental results show that the global spectral parameters of the sinusoidal and auditory models raise the accuracy of the state-factor classification system to 90%. The classification method and the state-factor description scheme provide an effective approach to analyzing the state of whispering speakers.

8.
徐舜, 刘郁林, 柏森. 《应用声学》, 2008, 27(3): 173-180
Blind separation algorithms can estimate the original sources from the observed signals alone, without knowledge of the mixing system, but the separated signals suffer from an inherent permutation ambiguity: across two batch-processing runs, the same signal may not line up, making it difficult to obtain continuous source signals. Addressing this problem in blind audio-source separation, and exploiting feature differences between speech and other audio signals, this paper proposes a modified autocorrelation function whose value serves as one feature primitive characterizing the temporal correlation of audio signals, with an average glottal-wave shape parameter as a second primitive characterizing the physiology of speech production. Using these two parameters as a two-dimensional pattern feature for discriminating audio signals, a fuzzy clustering algorithm extracts the speech channels from the blind-separation outputs. The method effectively overcomes the permutation ambiguity of batch blind source separation and, with a suitably chosen threshold, extracts multiple continuous speech signals. Simulations demonstrate blind extraction of two continuous speech signals from five mixed audio signals.
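The temporal-correlation feature above can be illustrated with a plain normalized autocorrelation. The paper's modified version is not specified in the abstract, so the following is only a generic stand-in on plain Python lists:

```python
def normalized_autocorr(x, lag):
    """Autocorrelation of x at the given lag, normalized by
    the zero-lag energy, so the value lies in [-1, 1]."""
    n = len(x)
    if lag >= n:
        return 0.0
    num = sum(x[i] * x[i + lag] for i in range(n - lag))
    den = sum(v * v for v in x)
    return num / den if den else 0.0
```

Speech tends to show strong short-lag correlation, which is what makes such a statistic usable as a clustering feature against other audio.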

10.
In medium- to long-range (>10 km) underwater acoustic voice communication, the narrow usable bandwidth and the complex, time-varying channel constrain the information rate, so the speech-coding rate should be made as low as possible. Exploiting the large propagation delay of the underwater acoustic channel together with the characteristics of human auditory perception, and building on a close study of the Mixed Excitation Linear Prediction (MELP) coding standard, a variable-bit-rate speech coding algorithm with adjustable rate is proposed. Its average rate is about 600 bps, with a Perceptual Evaluation of Speech Quality mean opinion score (PESQ MOS) of about 2.8. The algorithm's performance was verified by computer simulation and sea trials: at bit error rates no higher than 10^-3 it performs well and stably, and the synthesized speech is clear, intelligible, and allows the speaker to be recognized.

11.
The goal of cross-language voice conversion is to preserve the speech characteristics of one speaker when that speaker's speech is translated and used to synthesize speech in another language. In this paper, two preliminary studies, i.e., a statistical analysis of spectrum differences in different languages and the first attempt at a cross-language voice conversion, are reported. Speech uttered by a bilingual speaker is analyzed to examine spectrum difference between English and Japanese. Experimental results are (1) the codebook size for mixed speech from English and Japanese should be almost twice the codebook size of either English or Japanese; (2) although many code vectors occurred in both English and Japanese, some have a tendency to predominate in one language or the other; (3) code vectors that predominantly occurred in English are contained in the phonemes /r/, /ae/, /f/, /s/, and code vectors that predominantly occurred in Japanese are contained in /i/, /u/, /N/; and (4) judged from listening tests, listeners cannot reliably indicate the distinction between English speech decoded by a Japanese codebook and English speech decoded by an English codebook. A voice conversion algorithm based on codebook mapping was applied to cross-language voice conversion, and its performance was somewhat less effective than for voice conversion in the same language.
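The codebook-mapping conversion above can be sketched as a nearest-neighbor lookup in a source codebook followed by a one-to-one mapping into a paired target codebook. The function names and the simple index mapping are illustrative assumptions; real systems train the mapping from parallel data:

```python
def nearest_index(vec, codebook):
    """Index of the codebook vector closest to vec (squared Euclidean)."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda i: sqdist(vec, codebook[i]))

def convert_frames(frames, src_codebook, dst_codebook):
    """Map each source frame to the target-codebook vector whose index
    matches the nearest source-codebook entry (one-to-one mapping)."""
    return [dst_codebook[nearest_index(f, src_codebook)] for f in frames]
```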

12.
肖东, 莫福源, 陈庚, 马力. 《应用声学》, 2012, 31(2): 109-117
Line spectral frequencies (Line Spectral Frequency, LSF) are an efficient coding form of the linear prediction coefficients (Linear Prediction Coefficient, LPC). In the linear-prediction speech model, the LPC describe vocal-tract modulation and are among the parameters most important to auditory perception of speech. The Mixed Excitation Linear Prediction (MELP) coding standard quantizes the LSF with a four-stage multi-stage vector quantizer. First, to reduce quantization redundancy and lower the coding rate, this paper proposes an improved selection algorithm that generates a two-stage codebook to replace it. Second, to improve synthesized speech quality, and based on experimental results relating LSF quantization accuracy to synthesized speech quality, a method of quantizing and evaluating the LSF according to human auditory perception is proposed and experimentally verified.
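Multi-stage (here two-stage) vector quantization, as used for the LSF parameters above, quantizes the vector in stage 1 and the residual in stage 2. A minimal sketch on plain Python lists follows; the codebooks are assumed given and training is omitted:

```python
def nearest(vec, codebook):
    """Codebook vector closest to vec (squared Euclidean distance)."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(codebook, key=lambda c: sqdist(vec, c))

def two_stage_vq(vec, cb1, cb2):
    """Stage 1 quantizes the vector; stage 2 quantizes the residual.
    Returns the reconstruction: stage-1 codeword plus stage-2 codeword."""
    c1 = nearest(vec, cb1)
    residual = [v - c for v, c in zip(vec, c1)]
    c2 = nearest(residual, cb2)
    return [a + b for a, b in zip(c1, c2)]
```

Each extra stage refines the reconstruction at the cost of extra index bits, which is exactly the redundancy/rate trade-off the improved codebook selection targets.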

13.
Codebook-based single-microphone noise suppressors, which exploit prior knowledge about speech and noise statistics, provide better performance in nonstationary noise. However, as the enhancement involves a joint optimization over speech and noise codebooks, this results in high computational complexity. A codebook-based method is proposed that uses a reference signal observed by a bone-conduction microphone, and a mapping between air- and bone-conduction codebook entries generated during an offline training phase. A smaller subset of air-conducted speech codebook entries that accurately models the clean speech signal is selected using this reference signal. Experiments support the expected improvement in performance at low computational complexity.

14.
Although single-microphone noise reduction methods perform well in stationary noise environments, their performance in non-stationary conditions remains unsatisfactory. Use of prior knowledge about speech and noise power spectral densities in the form of trained codebooks has been previously shown to address this limitation. While it is possible to use trained speech codebooks in a practical system, the variety of noise types encountered in practice makes the use of trained noise codebooks less practical. This letter presents a method that uses a generic noise codebook for speech enhancement that can be generated on-the-fly and provides good performance.

15.
Based on two well-known auditory models, it is investigated whether the squared error between an original signal and a phase-distorted signal is a perceptually relevant measure for distortions in the Fourier phase spectrum of periodic signals obtained from speech. Both the performance of phase vector quantizers and the direct relationship between the squared error and two perceptual distortion measures are studied. The results indicate that for small values the squared error correlates well to the perceptual measures. However, for large errors, an increase in squared error does not, on average, lead to an increase in the perceptual measures. Empirical rate-perceptual distortion curves and listening tests confirm that, for low to medium codebook sizes, the average perceived distortion does not decrease with increasing codebook size when the squared error is used as encoding criterion.
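The squared error between phase spectra studied above is sensitive to phase wrapping. A minimal sketch with principal-value wrapping of the per-bin phase differences follows; the study's exact error definition may differ, so treat this as a generic illustration:

```python
import math

def wrap_phase(d):
    """Wrap a phase difference into the principal interval [-pi, pi)."""
    return d - 2.0 * math.pi * math.floor((d + math.pi) / (2.0 * math.pi))

def phase_squared_error(ph_ref, ph_dist):
    """Mean squared error between two phase spectra, with each
    per-bin difference wrapped before squaring."""
    errs = [wrap_phase(a - b) ** 2 for a, b in zip(ph_ref, ph_dist)]
    return sum(errs) / len(errs)
```

Without the wrapping step, a distortion of 2*pi in one bin, which is perceptually irrelevant, would dominate the error.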

16.
In this paper, a novel greyscale image coding technique based on vector quantization (VQ) is proposed. In VQ, the reconstructed image quality is restricted by the codebook used in the image encoding/decoding procedures. To provide a better image quality using a fixed-sized codebook, the codebook expansion technique is introduced in the proposed technique. In addition, the block prediction technique and the relative addressing technique are employed to cut down the required storage cost of the compressed codes. From the results, it is shown that the proposed technique adaptively provides better image quality at low bit rates than VQ.

17.
The purpose of the present study was to examine the benefits of providing audible speech to listeners with sensorineural hearing loss when the speech is presented in a background noise. Previous studies have shown that when listeners have a severe hearing loss in the higher frequencies, providing audible speech (in a quiet background) to these higher frequencies usually results in no improvement in speech recognition. In the present experiments, speech was presented in a background of multitalker babble to listeners with various severities of hearing loss. The signal was low-pass filtered at numerous cutoff frequencies and speech recognition was measured as additional high-frequency speech information was provided to the hearing-impaired listeners. It was found in all cases, regardless of hearing loss or frequency range, that providing audible speech resulted in an increase in recognition score. The change in recognition as the cutoff frequency was increased, along with the amount of audible speech information in each condition (articulation index), was used to calculate the "efficiency" of providing audible speech. Efficiencies were positive for all degrees of hearing loss. However, the gains in recognition were small, and the maximum score obtained by a listener was low, due to the noise background. An analysis of error patterns showed that due to the limited speech audibility in a noise background, even severely impaired listeners used additional speech audibility in the high frequencies to improve their perception of the "easier" features of speech including voicing.

18.
Image super-resolution as high-quality image enlargement is achieved by some type of restoration for high-frequency components that deteriorate through the image enlargement. The estimation methods using the given image itself are effective for the restoration, and we have proposed a method employing the codebook describing edge blurring properties that are derived from the given image. It is, however, unfavourable to apply those image-dependent methods to movies whose scene varies momentarily. In this paper, an image-independent codebook incorporating local edge patterns of images is proposed, and then the predefined codebook is applied. The effectiveness is shown through some experiments.

19.
To examine spectral effects on declines in speech recognition in noise at high levels, word recognition for 18 young adults with normal hearing was assessed for low-pass-filtered speech and speech-shaped maskers or high-pass-filtered speech and speech-shaped maskers at three speech levels (70, 77, and 84 dB SPL) for each of three signal-to-noise ratios (+8, +3, and -2 dB). An additional low-level noise produced equivalent masked thresholds for all subjects. Pure-tone thresholds were measured in quiet and in all maskers. If word recognition was determined entirely by signal-to-noise ratio, and was independent of signal levels and the spectral content of speech and maskers, scores should remain constant with increasing level for both low- and high-frequency speech and maskers. Recognition of low-frequency speech in low-frequency maskers and high-frequency speech in high-frequency maskers decreased significantly with increasing speech level when signal-to-noise ratio was held constant. For low-frequency speech and speech-shaped maskers, the decline was attributed to nonlinear growth of masking which reduced the "effective" signal-to-noise ratio at high levels, similar to previous results for broadband speech and speech-shaped maskers. Masking growth and reduced "effective" signal-to-noise ratio accounted for some but not all the decline in recognition of high-frequency speech in high-frequency maskers.
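Holding the signal-to-noise ratio constant while raising the speech level, as in the experiments above, amounts to rescaling the masker. A minimal sketch of computing SNR in dB and scaling a noise waveform to hit a target SNR (function names are illustrative):

```python
import math

def power(x):
    """Mean-square power of a waveform given as a list of samples."""
    return sum(s * s for s in x) / len(x)

def snr_db(speech, noise):
    """Signal-to-noise ratio in dB from speech and noise waveforms."""
    return 10.0 * math.log10(power(speech) / power(noise))

def scale_noise_to_snr(speech, noise, target_snr_db):
    """Return the noise scaled so the mixture has target_snr_db."""
    gain = math.sqrt(power(speech) /
                     (power(noise) * 10.0 ** (target_snr_db / 10.0)))
    return [gain * n for n in noise]
```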

20.
Three experiments measured constancy in speech perception, using natural-speech messages or noise-band vocoder versions of them. The eight vocoder-bands had equally log-spaced center-frequencies and the shapes of corresponding "auditory" filters. Consequently, the bands had the temporal envelopes that arise in these auditory filters when the speech is played. The "sir" or "stir" test-words were distinguished by degrees of amplitude modulation, and played in the context; "next you'll get _ to click on." Listeners identified test-words appropriately, even in the vocoder conditions where the speech had a "noise-like" quality. Constancy was assessed by comparing the identification of test-words with low or high levels of room reflections across conditions where the context had either a low or a high level of reflections. Constancy was obtained with both the natural and the vocoded speech, indicating that the effect arises through temporal-envelope processing. Two further experiments assessed perceptual weighting of the different bands, both in the test word and in the context. The resulting weighting functions both increase monotonically with frequency, following the spectral characteristics of the test-word's [s]. It is suggested that these two weighting functions are similar because they both come about through the perceptual grouping of the test-word's bands.
