Similar Documents
20 similar documents found (search time: 15 ms)
1.
What's in a whisper?   (Total citations: 4; self-citations: 0; others: 4)
Whispering is a common, natural way of reducing speech perceptibility, but whether and how whispering affects consonant identification and the acoustic features presumed important for it in normal speech perception are unknown. In this experiment, untrained listeners identified 18 different whispered initial consonants significantly better than chance in nonsense syllables. The phonetic features of place and manner of articulation and, to a lesser extent, voicing, were correctly identified. Confusion matrix and acoustic analyses indicated preservation of resonance characteristics for place and manner of articulation and suggested the use of burst, aspiration, or frication duration and intensity, and/or first-formant cutback for voicing decisions.

2.
3.
A method for distinguishing burst onsets of voiceless stop consonants in terms of place of articulation is described. Four speakers produced the voiceless stops in word-initial position in six vowel contexts. A metric was devised to extract the characteristic burst-friction components at burst onset. The burst-friction components, derived from the metric as sensory formants, were then transformed into log frequency ratios and plotted as points in an auditory-perceptual space (APS). In the APS, each place of articulation was seen to be associated with a distinct region, or target zone. The metric was then applied to a test set of words with voiceless stops preceding ten different vowel contexts as produced by eight new speakers. The present method of analyzing voiceless stops in English enabled us to distinguish place of articulation in these new stimuli with 70% accuracy.

4.
Seven listener groups, varying in terms of the nasal consonant inventory of their native language, orthographically labeled and rated a set of naturally produced non-native nasal consonants varying in place of articulation. The seven listener groups included speakers of Malayalam, Marathi, Punjabi, Tamil, Oriya, Bengali, and American English. The stimulus set included bilabial, dental, alveolar, and retroflex nasals from Malayalam, Marathi, and Oriya. The stimulus set and nasal consonant inventories of the seven listener groups were described by both phonemic and allophonic representations. The study was designed to determine the extent to which phonemic and allophonic representations of perceptual categories can be used to predict a listener group's identification of non-native sounds. The results of the experiment showed that allophonic representations were more successful in predicting the native category that listeners used to label a non-native sound in a majority of trials. However, both representations frequently failed to accurately predict the goodness of fit between a non-native sound and a perceptual category. The results demonstrate that the labeling and rating of non-native stimuli were conditioned by a degree of language-specific phonetic detail that corresponds to perceptually relevant cues to native language contrasts.

5.
This study investigated whether any perceptually useful coarticulatory information is carried by the release burst of the first of two successive, nonhomorganic stop consonants. The CV portions of natural VCCV utterances were replaced with matched synthetic stimuli from a continuum spanning the three places of stop articulation. There was a sizable effect of coarticulatory cues in the natural-speech portion on the perception of the second stop consonant. Moreover, when the natural VC portions including the final release burst were presented in isolation, listeners were significantly better than chance at guessing the identity of the following, "missing" syllable-initial stop. The hypothesis that the release burst of a syllable-final stop contains significant coarticulatory information about the place of articulation of a following, nonhomorganic stop was further confirmed in acoustic analyses which revealed significant effects of CV context on the spectral properties of the release bursts. The relationship between acoustic stimulus properties and listeners' perceptual responses was not straightforward, however.

6.
According to recent theoretical accounts of place of articulation perception, global, invariant properties of the stop CV syllable onset spectrum serve as primary, innate cues to place of articulation, whereas contextually variable formant transitions constitute secondary, learned cues. By this view, one might expect that young infants would find the discrimination of place of articulation contrasts signaled by formant transition differences more difficult than those cued by gross spectral differences. Using an operant head-turning paradigm, we found that 6-month-old infants were able to discriminate two-formant stimuli contrasting in place of articulation as well as they did five-formant + burst stimuli. Apparently, neither the global properties of the onset spectrum nor simply the additional acoustic information contained in the five-formant + burst stimuli afford the infant any advantage in the discrimination task. Rather, formant transition information provides a sufficient basis for discriminating place of articulation differences.

7.
In English, voiced and voiceless syllable-initial stop consonants differ in both fundamental frequency at the onset of voicing (onset F0) and voice onset time (VOT). Although both correlates, alone, can cue the voicing contrast, listeners weight VOT more heavily when both are available. Such differential weighting may arise from differences in the perceptual distance between voicing categories along the VOT versus onset F0 dimensions, or it may arise from a bias to pay more attention to VOT than to onset F0. The present experiment examines listeners' use of these two cues when classifying stimuli in which perceptual distance was artificially equated along the two dimensions. Listeners were also trained to categorize stimuli based on one cue at the expense of another. Equating perceptual distance eliminated the expected bias toward VOT before training, but successfully learning to base decisions more on VOT and less on onset F0 was easier than vice versa. Perceptual distance along both dimensions increased for both groups after training, but only VOT-trained listeners showed a decrease in Garner interference. Results lend qualified support to an attentional model of phonetic learning in which learning involves strategic redeployment of selective attention across integral acoustic cues.

8.
The phonetic identification ability of an individual (SS) who exhibits the best, or equal to the best, speech understanding of patients using the Symbion four-channel cochlear implant is described. It has been found that SS: (1) can use aspects of signal duration to form categories that are isomorphic with the phonetic categories established by listeners with normal auditory function; (2) can combine temporal and spectral cues in a normal fashion to form categories; (3) can use aspects of fricative noises to form categories that correspond to normal phonetic categories; (4) uses information from both F1 and higher formants in vowel identification; and (5) appears to identify stop consonant place of articulation on the basis of information provided by the center frequency of the burst and by the abruptness of frequency change following signal onset. SS has difficulty identifying stop consonants from the information provided by formant transitions and cannot differentially identify signals that have identical F1's and relatively low-frequency F2's. SS's performance suggests that simple speech processing strategies (filtering of the signal into four bands) and monopolar electrode design are viable options in the design of cochlear prostheses.

9.
Slope and y-intercepts of locus equations have previously been shown to successfully classify place of articulation for English voiced stop consonants when derived from measurements at vowel onset and vowel midpoint. However, listeners are capable of identifying English voiced stops when less than 30 ms of vowel is presented. The present results show that modified locus equation measurements made within the first several pitch periods of a vowel following an English voiced stop were also successful at classifying place of articulation, consistent with the amount of vocalic information necessary for perceptual identification of English voiced stops /b d g/.
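As a rough illustration of the locus-equation measurements described above, the following Python sketch fits a locus equation by linear regression of F2 at vowel onset on F2 at vowel midpoint; the frequency values are invented for illustration and are not data from the study.

```python
# Illustrative sketch (not the study's code): fitting a locus equation for one
# stop place of articulation. A locus equation regresses F2 at vowel onset on
# F2 at vowel midpoint across vowel contexts; its slope and y-intercept are the
# place-classifying parameters. All frequency values below are invented.
import numpy as np

# Hypothetical (F2 midpoint, F2 onset) pairs in Hz for one stop across vowels.
f2_mid = np.array([2300.0, 1900.0, 1700.0, 1300.0, 1100.0])
f2_onset = np.array([2100.0, 1850.0, 1750.0, 1550.0, 1450.0])

slope, intercept = np.polyfit(f2_mid, f2_onset, deg=1)
print(f"locus equation: F2_onset = {slope:.2f} * F2_mid + {intercept:.0f} Hz")
```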

10.
On the basis of theoretical considerations and the results of experiments with synthetic consonant-vowel syllables, it has been hypothesized that the short-time spectrum sampled at the onset of a stop consonant should exhibit gross properties that uniquely specify the consonantal place of articulation independent of the following vowel. The aim of this paper is to test this hypothesis by measuring the spectrum sampled at the onsets and offsets of a large number of consonant-vowel (CV) and vowel-consonant (VC) syllables containing both voiced and voiceless stops produced by several speakers. Templates were devised in an attempt to capture three classes of spectral shapes: diffuse-rising, diffuse-falling, and compact, corresponding to alveolar, labial, and velar consonants, respectively. Spectra were derived from the utterances by sampling at the consonantal release of CV syllables and at the implosion and burst release of VC syllables, and these spectra (smoothed by a linear prediction algorithm) were matched against the templates. It was found that about 85% of the spectra at initial consonant release and at final burst release were correctly classified by the templates, although there was some variability across vowel contexts. The spectra sampled at the implosion were not consistently classified. A preliminary examination of spectra sampled at the release of nasal consonants in CV syllables showed a somewhat lower accuracy of classification by the same templates. Overall, the results support an hypothesis that, in natural speech, the acoustic characteristics of stop consonants, specified in terms of the gross spectral shape sampled at the discontinuity in the acoustic signal, show invariant properties independent of the adjacent vowel or of the voicing characteristics of the consonant. The implication is that the auditory system is endowed with detectors that are sensitive to these kinds of gross spectral shapes, and that the existence of these detectors helps the infant to organize the sounds of speech into their natural classes.
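The template classes above (diffuse-rising, diffuse-falling, compact) can be approximated, very loosely, with simple spectral measures. The sketch below is a toy stand-in, not the study's templates: it uses the spectral tilt over roughly 0-4 kHz as a proxy for rising vs. falling shape, and the prominence of the largest spectral peak as a proxy for compactness; the window, frequency range, and thresholds are illustrative assumptions.

```python
# Toy illustration (not the paper's template-matching procedure): a crude way
# to label the gross spectral shape of a short window sampled at the
# consonantal release. Spectral tilt stands in for "diffuse-rising" vs.
# "diffuse-falling", and peak prominence stands in for "compact".
import numpy as np

def gross_spectral_shape(frame: np.ndarray, fs: int = 10000) -> str:
    spectrum = np.abs(np.fft.rfft(frame * np.hamming(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    db = 20 * np.log10(spectrum + 1e-12)
    band = (freqs >= 0) & (freqs <= 4000)                    # analysis range (assumed)
    tilt = np.polyfit(freqs[band], db[band], 1)[0] * 1000    # dB per kHz
    prominence = db[band].max() - np.median(db[band])        # largest peak vs. median level
    if prominence > 25:                                      # illustrative threshold
        return "compact"                                     # velar-like
    return "diffuse-rising" if tilt > 0 else "diffuse-falling"  # alveolar- vs. labial-like
```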

11.
The speech production skills of 12 dysphasic children and of 12 normal children were compared. The dysphasic children were found to have significantly greater difficulty than the normal children in producing stop consonants. In addition, it was found that seven of the dysphasic children, who had difficulty in perceiving initial stop consonants, had greater difficulty in producing stop consonants than the remaining five dysphasic children who showed no such perceptual difficulty. A detailed phonetic analysis indicated that the dysphasic children seldom omitted stops or substituted nonstop for stop consonants. Instead, their errors were predominantly of voicing or place of articulation. Acoustic analyses suggested that the voicing errors were related to lack of precise control over the timing of speech events, specifically, voice onset time for initial stops and vowel duration preceding final stops. The number of voicing errors on final stops, however, was greater than expected on the basis of lack of differentiation of vowel duration alone. They appeared also to be related to a tendency in the dysphasic children to produce final stops with exaggerated aspiration. The possible relationship of poor timing control in speech production in these children and auditory temporal processing deficits in speech perception is discussed.

12.
The goal of this study was to determine whether acoustic properties could be derived for English labial and alveolar nasal consonants that remain stable across vowel contexts, speakers, and syllable positions. In experiment I, critical band analyses were conducted of five tokens each of [m] and [n] followed by the vowels [i e a o u] spoken by three speakers. Comparison of the nature of the changes in the spectral patterns from the murmur to the release showed that, for labials, there was a greater change in energy in the region of Bark 5-7 relative to that of Bark 11-14, whereas, for alveolars, there was a greater change in energy from the murmur to the release in the region of Bark 11-14 relative to that of Bark 5-7. Quantitative analyses of each token indicated that over 89% of the utterances could be appropriately classified for place of articulation by comparing the proportion of energy change in these spectral regions. In experiment II, the spectral patterns of labial and alveolar nasals produced in the context of [s] + nasal ([m n]) + vowel ([i e a o u]) by two speakers were explored. The same analysis procedures were used as in experiment I. Eighty-four percent of the utterances were appropriately classified, although labial consonants were less consistently classified than in experiment I. The properties associated with nasal place of articulation found in this study are discussed in relation to those associated with place of articulation in stop consonants and are considered from the viewpoint of a more general theory of acoustic invariance.
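A minimal sketch of the murmur-to-release energy-change comparison described above, assuming approximate critical-band edges of 400-770 Hz for Bark 5-7 and 1080-2000 Hz for Bark 11-14; the windowing and decision rule are illustrative, not the authors' exact procedure.

```python
# Sketch (assumed implementation, not the authors' code): classify nasal place
# by asking which Bark region changes more in energy from the nasal murmur to
# the release. Band edges are standard critical-band approximations.
import numpy as np

def band_energy(frame: np.ndarray, fs: int, lo: float, hi: float) -> float:
    spectrum = np.abs(np.fft.rfft(frame * np.hamming(len(frame)))) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    return float(spectrum[(freqs >= lo) & (freqs < hi)].sum()) + 1e-12

def classify_nasal_place(murmur: np.ndarray, release: np.ndarray, fs: int) -> str:
    # Energy change from murmur to release in each Bark region, in dB.
    low_change = 10 * np.log10(band_energy(release, fs, 400, 770) /
                               band_energy(murmur, fs, 400, 770))
    high_change = 10 * np.log10(band_energy(release, fs, 1080, 2000) /
                                band_energy(murmur, fs, 1080, 2000))
    # Labials change more in Bark 5-7; alveolars change more in Bark 11-14.
    return "labial [m]" if low_change > high_change else "alveolar [n]"
```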

13.
The purpose of this study was to determine the role of static, dynamic, and integrated cues for perception in three adult age groups, and to determine whether age has an effect on both consonant and vowel perception, as predicted by the "age-related deficit hypothesis." Eight adult subjects in each of the age ranges of young (ages 20-26), middle aged (ages 52-59), and old (ages 70-76) listened to synthesized syllables composed of combinations of [b d g] and [i u a]. The synthesis parameters included manipulations of the following stimulus variables: formant transition (moving or straight), noise burst (present or absent), and voicing duration (10, 30, or 46 ms). Vowel perception was high across all conditions and there were no significant differences among age groups. Consonant identification showed a definite effect of age. Young and middle-aged adults were significantly better than older adults at identifying consonants from secondary cues only. Older adults relied on the integration of static and dynamic cues to a greater extent than younger and middle-aged listeners for identification of place of articulation of stop consonants. Duration facilitated correct stop-consonant identification in the young and middle-aged groups for the no-burst conditions, but not in the old group. These findings for the duration of stop-consonant transitions indicate reductions in processing speed with age. In general, the results did not support the age-related deficit hypothesis for adult identification of vowels and consonants from dynamic spectral cues.

14.
The voice onset time (VOT) of a stop consonant is the interval between its burst onset and voicing onset. Among a variety of research topics on VOT, one that has been studied for years is how VOTs are efficiently measured. Manual annotation is a feasible way, but it becomes a time-consuming task when the corpus size is large. This paper proposes an automatic VOT estimation method based on an onset detection algorithm. First, forced alignment is applied to identify the locations of stop consonants. Then a random-forest-based onset detector searches each stop segment for its burst and voicing onsets to estimate a VOT. The proposed onset detector locates these onsets efficiently and accurately with only a small amount of training data. The evaluation data extracted from the TIMIT corpus were 2344 words with a word-initial stop. The experimental results showed that 83.4% of the estimations deviate less than 10 ms from their manually labeled values, and 96.5% of the estimations deviate by less than 20 ms. Some factors that influence the proposed estimation method, such as place of articulation, voicing of a stop consonant, and quality of the succeeding vowel, were also investigated.
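A hypothetical sketch of the pipeline described above, with forced alignment assumed to have already isolated each stop segment. The feature set, frame size, and scikit-learn random forest below are illustrative assumptions; only the overall structure (frame classification for burst and voicing onsets, VOT as their difference) follows the abstract.

```python
# Sketch only: a frame-level random-forest onset detector plus the VOT
# computation it feeds. Features, hop size, and labels are invented stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

FRAME_MS = 1.0  # frame hop for the onset search (assumed)

def frame_features(segment: np.ndarray, fs: int) -> np.ndarray:
    """Per-frame short-time energy and zero-crossing rate (toy feature set)."""
    hop = max(1, int(fs * FRAME_MS / 1000))
    feats = []
    for start in range(0, len(segment) - hop, hop):
        frame = segment[start:start + hop]
        energy = float(np.mean(frame ** 2))
        zcr = float(np.mean(np.abs(np.diff(np.sign(frame)))) / 2)
        feats.append([energy, zcr])
    return np.array(feats)

def train_onset_detector(segments, fs, frame_labels):
    """frame_labels: per segment, 1 for frames at/after the onset of interest, else 0."""
    X = np.vstack([frame_features(s, fs) for s in segments])
    y = np.concatenate(frame_labels)
    return RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def detect_onset(model, segment: np.ndarray, fs: int) -> float:
    """Time (s) of the first frame labeled post-onset (assumes the onset occurs in the segment)."""
    pred = model.predict(frame_features(segment, fs))
    idx = int(np.argmax(pred == 1))
    return idx * FRAME_MS / 1000.0

def estimate_vot(burst_model, voicing_model, stop_segment, fs) -> float:
    burst_t = detect_onset(burst_model, stop_segment, fs)
    voicing_t = detect_onset(voicing_model, stop_segment, fs)
    return (voicing_t - burst_t) * 1000.0  # VOT in ms
```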

15.
The current study explores the role of the amplitude of the fricative noise in the perception of place of articulation in voiceless fricative consonants. The amplitude of the fricative noise in naturally produced fricative-vowel utterances was varied relative to the vowel and potential changes in perceptual responses were investigated. The amplitude of the fricative noise for [s] and [ʃ] was reduced such that the amplitude of the noise relative to the vowel was similar to [f] and [θ], and, conversely, the amplitude of the fricative noise of [f] and [θ] was increased such that the amplitude of the noise relative to the vowel was similar to [s] and [ʃ]. The fricative noise was presented to listeners in both its vowel context and in isolation. Results indicated that, when the spectral properties of the fricative noise and formant transitions are compatible, manipulating the amplitude of the noise had only a small effect on the overall identification of place of articulation, and when effects emerged, they varied across the different fricative stimuli. Moreover, although decreasing the amplitude of [s] and [ʃ] resulted in an increase in [f] and [θ] responses, increasing the amplitude of [f] and [θ] did not result in an increase in [s] and [ʃ] responses. Implications of these findings for phonetic feature theories are considered.

16.
Two recent accounts of the acoustic cues which specify place of articulation in syllable-initial stop consonants claim that they are located in the initial portions of the CV waveform and are context-free. Stevens and Blumstein [J. Acoust. Soc. Am. 64, 1358-1368 (1978)] have described the perceptually relevant spectral properties of these cues as static, while Kewley-Port [J. Acoust. Soc. Am. 73, 322-335 (1983)] describes these cues as dynamic. Three perceptual experiments were conducted to test predictions derived from these accounts. Experiment 1 confirmed that acoustic cues for place of articulation are located in the initial 20-40 ms of natural stop-vowel syllables. Next, short synthetic CV's modeled after natural syllables were generated using either a digital, parallel-resonance synthesizer in experiment 2 or linear prediction synthesis in experiment 3. One set of synthetic stimuli preserved the static spectral properties proposed by Stevens and Blumstein. Another set of synthetic stimuli preserved the dynamic properties suggested by Kewley-Port. Listeners in both experiments identified place of articulation significantly better from stimuli which preserved dynamic acoustic properties than from those based on static onset spectra. Evidently, the dynamic structure of the initial stop-vowel articulatory gesture can be preserved in context-free acoustic cues which listeners use to identify place of articulation.

17.
F1 structure provides information for final-consonant voicing   (Total citations: 1; self-citations: 0; others: 1)
Previous research has shown that F1 offset frequencies are generally lower for vowels preceding voiced consonants than for vowels preceding voiceless consonants. Furthermore, it has been shown that listeners use these differences in offset frequency in making judgments about final-consonant voicing. A recent production study [W. Summers, J. Acoust. Soc. Am. 82, 847-863 (1987)] reported that F1 frequency differences due to postvocalic voicing are not limited to the final transition or offset region of the preceding vowel. Vowels preceding voiced consonants showed lower F1 onset frequencies and lower F1 steady-state frequencies than vowels preceding voiceless consonants. The present study examined whether F1 frequency differences in the initial transition and steady-state regions of preceding vowels affect final-consonant voicing judgments in perception. The results suggest that F1 frequency differences in these early portions of preceding vowels do, in fact, influence listeners' judgments of postvocalic consonantal voicing.

18.
Acoustic measurements were conducted to determine the degree to which vowel duration, closure duration, and their ratio distinguish voicing of word-final stop consonants across variations in sentential and phonetic environments. Subjects read CVC test words containing three different vowels and ending in stops of three different places of articulation. The test words were produced either in nonphrase-final or phrase-final position and in several local phonetic environments within each of these sentence positions. Our measurements revealed that vowel duration most consistently distinguished voicing categories for the test words. Closure duration failed to consistently distinguish voicing categories across the contextual variables manipulated, as did the ratio of closure and vowel duration. Our results suggest that vowel duration is the most reliable correlate of voicing for word-final stops in connected speech.

19.
The detection of French accent by American listeners   (Total citations: 1; self-citations: 0; others: 1)
The five experiments presented here examine the ability of listeners to detect a foreign accent. Computer editing techniques were used to isolate progressively shorter excerpts of the English spoken by native speakers of American English and French. Native English-speaking listeners judged the speech samples in one- and two-interval forced-choice tests. They were able to detect foreign accent equally well when presented with speech edited from phrases read in isolation and produced in a spontaneous story. The listeners accurately identified the French talkers (63%-95% of the time) no matter how short the speech samples were: entire phrases (e.g., "two little dogs"), syllables (/tu/ or /ti/), portions of syllables corresponding to the phonetic segments /t/, /i/, /u/, and even just the first 30 ms of "two" (roughly, the release burst of /t/). Both phonetically trained listeners familiar with French-accented English and unsophisticated listeners were able to accurately detect accent. These results suggest that listeners develop very detailed phonetic category prototypes against which to evaluate speech sounds occurring in their native language.

20.
This study concentrates on one of the commonly occurring phonetic variations in English: the stop-like modification of the dental fricative /ð/. The variant exhibits a drastic change from the canonical /ð/; the manner of articulation is changed from one that is fricative to one that is stop-like. Furthermore, the place of articulation of stop-like /ð/ has been a point of uncertainty, leading to confusion between stop-like /ð/ and /d/. In this study, acoustic and spectral moment measures were taken from 100 stop-like /ð/ and 102 /d/ tokens produced by 59 male and 23 female speakers in the TIMIT corpus. Data analysis indicated that stop-like /ð/ is significantly different from /d/ in burst amplitude, burst spectrum shape, burst peak frequency, second formant at following-vowel onset, and spectral moments. Moreover, the acoustic differences from /d/ are consistent with those expected for a dental stop-like /ð/. Automatic classification experiments involving these acoustic measures suggested that they are salient in distinguishing stop-like /ð/ from /d/.

