Similar Literature
20 similar records found
1.
An important speech cue is that of voice onset time (VOT), a cue for the perception of voicing and aspiration in word-initial stops. Preaspiration, an [h]-like sound between a vowel and the following stop, can be cued by voice offset time (VOffT), a cue which in most respects mirrors VOT. In Icelandic, VOffT is much more sensitive to the duration of the preceding vowel than VOT is to the duration of the following vowel. This has been explained by noting that preaspiration can only follow a phonemically short vowel. Lengthening the vowel, either by increasing its duration or by moving its spectrum towards that appropriate for a long vowel, will thus demand a longer VOffT to cue preaspiration. An experiment is reported showing that the greater effect of vowel quantity on the perception of VOffT than on the perception of VOT cannot be explained by the effect of F1 frequency at vowel offset.

2.
F1 structure provides information for final-consonant voicing
Previous research has shown that F1 offset frequencies are generally lower for vowels preceding voiced consonants than for vowels preceding voiceless consonants. Furthermore, it has been shown that listeners use these differences in offset frequency in making judgments about final-consonant voicing. A recent production study [W. Summers, J. Acoust. Soc. Am. 82, 847-863 (1987)] reported that F1 frequency differences due to postvocalic voicing are not limited to the final transition or offset region of the preceding vowel. Vowels preceding voiced consonants showed lower F1 onset frequencies and lower F1 steady-state frequencies than vowels preceding voiceless consonants. The present study examined whether F1 frequency differences in the initial transition and steady-state regions of preceding vowels affect final-consonant voicing judgments in perception. The results suggest that F1 frequency differences in these early portions of preceding vowels do, in fact, influence listeners' judgments of postvocalic consonantal voicing.

3.
Four experiments explored the relative contributions of spectral content and phonetic labeling in effects of context on vowel perception. Two 10-step series of CVC syllables ([bVb] and [dVd]) varying acoustically in F2 midpoint frequency and varying perceptually in vowel height from [delta] to [epsilon] were synthesized. In a forced-choice identification task, listeners more often labeled vowels as [delta] in [dVd] context than in [bVb] context. To examine whether spectral content predicts this effect, nonspeech-speech hybrid series were created by appending 70-ms sine-wave glides following the trajectory of CVC F2's to 60-ms members of a steady-state vowel series varying in F2 frequency. In addition, a second hybrid series was created by appending constant-frequency sine-wave tones equivalent in frequency to CVC F2 onset/offset frequencies. Vowels flanked by frequency-modulated glides or steady-state tones modeling [dVd] were more often labeled as [delta] than were the same vowels surrounded by nonspeech modeling [bVb]. These results suggest that spectral content is important in understanding vowel context effects. A final experiment tested whether spectral content can modulate vowel perception when phonetic labeling remains intact. Voiceless consonants, with lower-amplitude more-diffuse spectra, were found to exert less of an influence on vowel perception than do their voiced counterparts. The data are discussed in terms of a general perceptual account of context effects in speech perception.

4.
Different patterns of performance across vowels and consonants in tests of categorization and discrimination indicate that vowels tend to be perceived more continuously, or less categorically, than consonants. The present experiments examined whether analogous differences in perception would arise in nonspeech sounds that share critical transient acoustic cues of consonants and steady-state spectral cues of simplified synthetic vowels. Listeners were trained to categorize novel nonspeech sounds varying along a continuum defined by a steady-state cue, a rapidly-changing cue, or both cues. Listeners' categorization of stimuli varying on the rapidly changing cue showed a sharp category boundary and posttraining discrimination was well predicted from the assumption of categorical perception. Listeners more accurately discriminated but less accurately categorized steady-state nonspeech stimuli. When listeners categorized stimuli defined by both rapidly-changing and steady-state cues, discrimination performance was accurate and the categorization function exhibited a sharp boundary. These data are similar to those found in experiments with dynamic vowels, which are defined by both steady-state and rapidly-changing acoustic cues. A general account for the speech and nonspeech patterns is proposed based on the supposition that the perceptual trace of rapidly-changing sounds decays faster than the trace of steady-state sounds.

5.
The perception of voicing in final velar stop consonants was investigated by systematically varying vowel duration, change in offset frequency of the final first formant (F1) transition, and rate of frequency change in the final F1 transition for several vowel contexts. Consonant-vowel-consonant (CVC) continua were synthesized for each of three vowels, [i,I,ae], which represent a range of relatively low to relatively high F1 steady-state values. Subjects responded to the stimuli under both an open- and closed-response condition. Results of the study show that both vowel duration and F1 offset properties influence perception of final consonant voicing, with the salience of the F1 offset property higher for vowels with high-F1 steady-state frequencies than low-F1 steady-state frequencies, and the opposite occurring for the vowel duration property. When F1 onset and offset frequencies were controlled, rate of the F1 transition change had inconsistent and minimal effects on perception of final consonant voicing. Thus the findings suggest that it is the termination value of the F1 offset transition, rather than rate and/or duration of frequency change, which cues voicing in final velar stop consonants during the transition period preceding closure.

6.
A production study was conducted to investigate the effect of vowel lengthening before voiced obstruents, and the possible influence that the openness versus closedness of syllables has on the temporal structure of vowels in some languages. The results revealed that vowels were significantly longer when followed by voiced consonants than by voiceless consonants. Vowel duration did not, however, vary with syllable structure, although vowels in open syllables tended to be longer before [+voiced] consonants than before [-voiced] consonants. These results are discussed in the context of current knowledge of other languages.

7.
Synthesis (carrier) signals in acoustic models embody assumptions about perception of auditory electric stimulation. This study compared speech intelligibility of consonants and vowels processed through a set of nine acoustic models that used Spectral Peak (SPEAK) and Advanced Combination Encoder (ACE)-like speech processing, using synthesis signals which were representative of signals used previously in acoustic models as well as two new ones. Performance of the synthesis signals was determined in terms of correspondence with cochlear implant (CI) listener results for 12 attributes of phoneme perception (consonant and vowel recognition; F1, F2, and duration information transmission for vowels; voicing, manner, place of articulation, affrication, burst, nasality, and amplitude envelope information transmission for consonants) using four measures of performance. Modulated synthesis signals produced the best correspondence with CI consonant intelligibility, while sinusoids, narrow noise bands, and varying noise bands produced the best correspondence with CI vowel intelligibility. The signals that performed best overall (in terms of correspondence with both vowel and consonant attributes) were modulated and unmodulated noise bands of varying bandwidth that corresponded to a linearly varying excitation width of 0.4 mm at the apical to 8 mm at the basal channels.

8.
The third formant and the second formant were found on average to cue the place of articulation of intervocalic stop consonants equally well when the stop consonants occurred before the vowel /i/. This result and others provide some support for the notion that the fundamental resonance of the front cavity plays an important role in the perception of the phonetic dimension of place of articulation.

9.
This study examines the perception of short and long vowels in Arabic and Japanese by three groups of listeners differing in their first languages (L1): Arabic, Japanese, and Persian. While Persian uses the same alphabet as Arabic and Iranian students learn Arabic in school, the two languages are typologically unrelated. Further, unlike Arabic or Japanese, vowel length may no longer be contrastive in modern Persian. In this study, a question of interest was whether Persian listeners' foreign language learning experience or Japanese listeners' L1 phonological experience might help them to accurately process short and long vowels in Arabic. In Experiment 1, Arabic and Japanese listeners were more accurate than Persian listeners in discriminating vowel length contrasts in their own L1 only. In Experiment 2, Arabic and Japanese listeners were more accurate than Persian listeners in identifying the length categories in the "other" unknown language as well as in their own L1. The difference in the listeners' perceptual performance between the two experiments supports the view that long-term L1 representations may be invoked to a greater extent in the identification than discrimination test. The present results highlight the importance of selecting the appropriate test for assessing cross-language speech perception.

10.
Many studies have noted great variability in speech perception ability among postlingually deafened adults with cochlear implants. This study examined phoneme misperceptions for 30 cochlear implant listeners using either the Nucleus-22 or Clarion version 1.2 device to examine whether listeners with better overall speech perception differed qualitatively from poorer listeners in their perception of vowel and consonant features. In the first analysis, simple regressions were used to predict the mean percent-correct scores for consonants and vowels for the better group of listeners from those of the poorer group. A strong relationship between the two groups was found for consonant identification, and a weak, nonsignificant relationship was found for vowel identification. In the second analysis, it was found that less information was transmitted for consonant and vowel features to the poorer listeners than to the better listeners; however, the pattern of information transmission was similar across groups. Taken together, results suggest that the performance difference between the two groups is primarily quantitative. The results underscore the importance of examining individuals' perception of individual phoneme features when attempting to relate speech perception to other predictor variables.

11.
An algorithm is presented that correctly detects the large majority of vowel onsets in fluent speech. The algorithm is based on the simple assumption that vowel onsets are characterized by the appearance of rapidly increasing resonance peaks in the amplitude spectrum. Application to carefully articulated, isolated words results in a high number of false alarms, predominantly before consonants that can function as vowels in a different context, such as another language, or as syllabic consonants. After adjusting a few parameter settings, the number of false alarms for isolated words can be reduced significantly without the risk of a large number of missed detections. The temporal accuracy of the algorithm is better than 20 ms, determined with respect to the perceptual moment of occurrence of a vowel onset as judged by a phonetician.
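The abstract's core idea, detecting vowel onsets as rapid rises in spectral resonance energy, can be illustrated with a minimal sketch. This is not the paper's algorithm; the band limits, frame size, and rise threshold (`rise_db`) here are illustrative assumptions, and a single band-energy track stands in for the paper's per-resonance peak tracking.

```python
import numpy as np

def detect_vowel_onsets(signal, sr, frame_ms=10, rise_db=9.0):
    """Toy onset detector: flag frames where energy in a formant-range
    band (300-2500 Hz, an assumed proxy for resonance peaks) rises
    rapidly relative to two frames earlier."""
    n = int(sr * frame_ms / 1000)
    frames = [signal[i:i + n] for i in range(0, len(signal) - n, n)]
    energies = []
    for f in frames:
        spec = np.abs(np.fft.rfft(f * np.hanning(len(f))))
        freqs = np.fft.rfftfreq(len(f), 1.0 / sr)
        band = spec[(freqs >= 300) & (freqs <= 2500)]
        energies.append(10 * np.log10(np.sum(band ** 2) + 1e-12))
    energies = np.asarray(energies)
    # onset: band energy jumps by more than rise_db over two frames
    onsets = [i for i in range(2, len(energies))
              if energies[i] - energies[i - 2] > rise_db]
    return [i * frame_ms / 1000.0 for i in onsets]  # onset times, seconds
```

On a signal that switches from silence to a vowel-like tone, the detector flags the transition frame; real fluent speech would need the parameter tuning the abstract describes.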

12.
Research on the perception of vowels in the last several years has given rise to new conceptions of vowels as articulatory, acoustic, and perceptual events. Starting from a "simple" target model in which vowels were characterized articulatorily as static vocal tract shapes and acoustically as points in a first and second formant (F1/F2) vowel space, this paper briefly traces the evolution of vowel theory in the 1970s and 1980s in two directions. (1) Elaborated target models represent vowels as target zones in perceptual spaces whose dimensions are specified as formant ratios. These models have been developed primarily to account for perceivers' solution of the "speaker normalization" problem. (2) Dynamic specification models emphasize the importance of formant trajectory patterns in specifying vowel identity. These models deal primarily with the problem of "target undershoot" associated with the coarticulation of vowels with consonants in natural speech and with the issue of "vowel-inherent spectral change" or diphthongization of English vowels. Perceptual studies are summarized that motivate these theoretical developments.

13.
Previous research has suggested that speech loudness is determined primarily by the vowel in consonant-vowel-consonant (CVC) monosyllabic words, and that consonant intensity has a negligible effect. The current study further examines the unique aspects of speech loudness by manipulating consonant-vowel intensity ratios (CVRs), while holding the vowel constant at a comfortable listening level (70 dB), to determine the extent to which vowels and consonants contribute differentially to the loudness of monosyllabic words with voiced and voiceless consonants. The loudness of words edited to have CVRs ranging from -6 to +6 dB was compared to that of standard words with unaltered CVR by 10 normal-hearing listeners in an adaptive procedure. Loudness and overall level as a function of CVR were compared for four CVC word types: both voiceless consonants modified; only initial voiceless consonants modified; both voiced consonants modified; and only initial voiced consonants modified. Results indicate that the loudness of CVC monosyllabic words is not based strictly on the level of the vowel; rather, the overall level of the word and the level of the vowel contribute approximately equally. In addition to furthering the basic understanding of speech perception, the current results may be of value for the coding of loudness by hearing aids and cochlear implants.
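The CVR manipulation described above, rescaling a consonant segment relative to a vowel held at a fixed level, amounts to a gain adjustment in the dB domain. A minimal sketch, under the assumption that "level" means RMS level (the abstract does not specify the level measure, and the function names here are invented):

```python
import numpy as np

def rms_db(x):
    """RMS level of a signal segment in dB (re: full scale)."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-12)

def set_cvr(consonant, vowel, target_cvr_db):
    """Rescale the consonant so its RMS level sits target_cvr_db above
    (positive) or below (negative) the vowel's level; the vowel, held
    at its fixed presentation level, is left untouched."""
    gain_db = (rms_db(vowel) + target_cvr_db) - rms_db(consonant)
    return consonant * 10 ** (gain_db / 20.0)
```

Sweeping `target_cvr_db` from -6 to +6 in 1-dB steps over the initial and/or final consonant segments would reproduce the kind of stimulus continuum the study describes.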

14.
The theory of relational acoustic invariance [Pickett, E. R., et al. (1999). Phonetica 56, 135-157] was tested with the Japanese stop quantity distinction in disyllables spoken at various rates. The questions were whether the perceptual boundary between the two phonemic categories of single and geminate stops is invariant across rates, and whether there is a close correspondence between the perception and production boundaries. The durational ratio of stop closure to word (where the "word" was defined as disyllables) was previously found to be an invariant parameter that classified the two categories in production, but the present study found that this ratio varied with different speaking rates in perception. However, regression and discriminant analyses of perception and production data showed that treating stop closure as a function of word duration with an intercept term represented the perception and production boundaries very well. This result indicated that the durational ratio of adjusted stop closure (i.e., closure with an added constant) to the word was invariant and distinguished the two phonemic categories clearly. Taken together, the results support the relational acoustic invariance theory, and help refine the theory with regard to exactly what form 'invariance' can take.
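The "closure as a linear function of word duration with an intercept" boundary can be written down directly. The slope and intercept below are placeholders, not the paper's fitted values; only the functional form comes from the abstract:

```python
def classify_stop(closure_ms, word_ms, slope=0.28, intercept_ms=20.0):
    """Sketch of the adjusted-ratio boundary: a stop is heard as
    geminate when closure duration exceeds a linear function of word
    duration. Equivalently, (closure - intercept) / word is compared
    against the constant `slope`, which is the rate-invariant form."""
    return "geminate" if closure_ms > slope * word_ms + intercept_ms else "single"
```

The intercept is what lets the ratio stay invariant across speaking rates: a plain closure/word ratio shifts with rate, while the adjusted ratio does not.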

15.
Cues to the voicing distinction for final /f,s,v,z/ were assessed for 24 impaired- and 11 normal-hearing listeners. In base-line tests the listeners identified the consonants in recorded /dʌC/ syllables. To assess the importance of various cues, tests were conducted of the syllables altered by deletion and/or temporal adjustment of segments containing acoustic patterns related to the voicing distinction for the fricatives. The results showed that decreasing the duration of /ʌ/ preceding /v/ or /z/, and lengthening the /ʌ/ preceding /f/ or /s/, considerably reduced the correctness of voicing perception for the hearing-impaired group, while showing no effect for the normal-hearing group. For the normals, voicing perception deteriorated for /f/ and /s/ when the frications were deleted from the syllables, and for /v/ and /z/ when the vowel offsets were removed from the syllables with duration-adjusted vowels and deleted frications. We conclude that some hearing-impaired listeners rely to a greater extent on vowel duration as a voicing cue than do normal-hearing listeners.

16.
Dynamic specification of coarticulated vowels
An adequate theory of vowel perception must account for perceptual constancy over variations in the acoustic structure of coarticulated vowels contributed by speakers, speaking rate, and consonantal context. We modified recorded consonant-vowel-consonant syllables electronically to investigate the perceptual efficacy of three types of acoustic information for vowel identification: (1) static spectral "targets," (2) duration of syllabic nuclei, and (3) formant transitions into and out of the vowel nucleus. Vowels in /b/-vowel-/b/ syllables spoken by one adult male (experiment 1) and by two females and two males (experiment 2) served as the corpus, and seven modified syllable conditions were generated in which different parts of the digitized waveforms of the syllables were deleted and the temporal relationships of the remaining parts were manipulated. Results of identification tests by untrained listeners indicated that dynamic spectral information, contained in initial and final transitions taken together, was sufficient for accurate identification of vowels even when vowel nuclei were attenuated to silence. Furthermore, the dynamic spectral information appeared to be efficacious even when durational parameters specifying intrinsic vowel length were eliminated.

17.
Auditory perception of vowels and consonants in speech
This paper reviews research on the auditory perception of vowels and consonants in speech. More than 80 years ago, authoritative experiments based on nonsense syllables indicated that consonants are more important for human speech perception; owing to the academic standing and authority of the experimenters, this conclusion became received wisdom, until experiments based on natural sentences roughly 20 years ago challenged it and triggered a new round of research. This paper systematically surveys the relative importance of vowels and consonants for speech perception, the influence of their steady-state information and dynamic boundary information on speech perception, and potential applications of the related research, and closes with a summary and outlook.

18.
In stuttered repetitions of a syllable, the vowel that occurs often sounds like schwa even when schwa is not intended. In this article, acoustic analyses are reported which show that the spectral properties of stuttered vowels are similar to the following fluent vowel, so it would appear that the stutterers are articulating the vowel appropriately. Though spectral properties of the stuttered vowels are normal, others are unusual: the stuttered vowels are low in amplitude and short in duration. In two experiments, the effects of amplitude and duration on perception of these vowels are examined. It is shown that, if the amplitude of stuttered vowels is made normal and their duration is lengthened, they sound more like the intended vowels. These experiments lead to the conclusion that low amplitude and short duration are the factors that cause stuttered vowels to sound like schwa. This differs from the view of certain clinicians and theorists who contend that stutterers actually articulate schwas when these are heard in stuttered speech. Implications for stuttering therapy are considered.

19.
20.
The purpose of this paper is to propose and evaluate a new model of vowel perception which assumes that vowel identity is recognized by a template-matching process involving the comparison of narrow band input spectra with a set of smoothed spectral-shape templates that are learned through ordinary exposure to speech. In the present simulation of this process, the input spectra are computed over a sufficiently long window to resolve individual harmonics of voiced speech. Prior to template creation and pattern matching, the narrow band spectra are amplitude equalized by a spectrum-level normalization process, and the information-bearing spectral peaks are enhanced by a "flooring" procedure that zeroes out spectral values below a threshold function consisting of a center-weighted running average of spectral amplitudes. Templates for each vowel category are created simply by averaging the narrow band spectra of like vowels spoken by a panel of talkers. In the present implementation, separate templates are used for men, women, and children. The pattern matching is implemented with a simple city-block distance measure given by the sum of the channel-by-channel differences between the narrow band input spectrum (level-equalized and floored) and each vowel template. Spectral movement is taken into account by computing the distance measure at several points throughout the course of the vowel. The input spectrum is assigned to the vowel template that results in the smallest difference accumulated over the sequence of spectral slices. The model was evaluated using a large database consisting of 12 vowels in /hVd/ context spoken by 45 men, 48 women, and 46 children. The narrow band model classified vowels in this database with a degree of accuracy (91.4%) approaching that of human listeners.
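The flooring and city-block matching steps described above can be sketched compactly. This is an illustrative reduction, not the paper's implementation: the smoothing window, the absence of level normalization, and the function names are all assumptions.

```python
import numpy as np

def floor_spectrum(spec, win=11):
    """Zero out spectral values below a center-weighted running average
    (a simple stand-in for the paper's flooring threshold function)."""
    kernel = np.hanning(win)
    kernel /= kernel.sum()
    thresh = np.convolve(spec, kernel, mode="same")
    return np.where(spec > thresh, spec, 0.0)

def classify_vowel(input_spectra, templates):
    """Accumulate city-block (L1) distances over a sequence of spectral
    slices and pick the template with the smallest total distance."""
    best, best_d = None, np.inf
    for label, tmpl in templates.items():
        d = sum(np.sum(np.abs(floor_spectrum(s) - tmpl))
                for s in input_spectra)
        if d < best_d:
            best, best_d = label, d
    return best
```

Templates would be built by averaging floored narrow-band spectra of like vowels across talkers; passing several slices in `input_spectra` captures the abstract's handling of spectral movement through the vowel.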
