Similar literature
20 similar documents found (search time: 468 ms)
1.
This paper investigates the mechanisms controlling the phonemic quantity contrast and speech rate in nonsense p(1)Np(2)a words read by five Slovak speakers at normal and fast speech rates. N represents a syllable nucleus, which in Slovak corresponds to long and short vowels and liquid consonants. The movements of the lips and the tongue were recorded with an electromagnetometry system. Together with the acoustic durations of p(1), N, and p(2), gestural characteristics of three core movements were extracted: the p(1) lip opening, the tongue movement for the nucleus, and the p(2) lip closing. The results show that, although consonantal and vocalic nuclei are predictably different on many kinematic measures, their common phonological behavior as syllabic nuclei may be linked to a stable temporal coordination of the consonantal gestures flanking the nucleus. The functional contrast between phonemic duration and speech rate was reflected in the bias in the control mechanisms they employed: strategies robustly used for signaling phonemic duration, such as the degree of coproduction of the two lip movements, showed a minimal effect of speech rate, while measures greatly affected by speech rate, such as p(2) acoustic duration or the degree of p(1)-N gestural coproduction, tended to be minimally influenced by phonemic quantity.

2.
A number of studies, involving English, Swedish, French, and Spanish, have shown that, for sequences of rounded vowels separated by nonlabial consonants, both EMG activity and lip protrusion diminish during the intervocalic consonant interval, producing a "trough" pattern. A two-part study was conducted to (a) compare patterns of protrusion movement (upper and lower lip) and EMG activity (orbicularis oris) for speakers of English and Turkish, a language where phonological rules constrain vowels within a word to agree in rounding and (b) determine which of two current models of coarticulation, the "look-ahead" and "coproduction" models, best explained the data. Results showed Turkish speakers producing "plateau" patterns of movement rather than troughs, and unimodal rather than bimodal patterns of EMG activity. In the second part of the study, one prediction of the coproduction model, that articulatory gestures have stable profiles across contexts, was tested by adding and subtracting movement data signals to synthesize naturally occurring patterns. Results suggest English and Turkish may have different modes of coarticulatory organization.

3.
Coarticulation studies in speech of deaf individuals have so far focused on intrasyllabic patterning of various consonant-vowel sequences. In this study, both inter- and intrasyllabic patterning were examined in disyllables /ə#CVC/ and the effects of phonetic context, speaking rate, and segment type were explored. Systematic observation of F2 and durational measurements in disyllables minimally contrasting in vocalic ([i], [u], [a]) and in consonant ([b], [d]) context, respectively, was made at selected locations in the disyllable, in order to relate inferences about articulatory adjustments with their temporal coordinates. Results indicated that intervocalic coarticulation across hearing and deaf speakers varied as a function of the phonetic composition of disyllables (b_b or d_d). The deaf speakers showed reduced intervocalic coarticulation for bilabial but not for alveolar disyllables compared to the hearing speakers. Furthermore, they showed less marked consonant influences on the schwa and stressed vowel of disyllables compared to the hearing controls. Rate effects were minimal and did not alter the coarticulatory patterns observed across hearing status. The above findings modify the conclusions drawn from previous studies and suggest that the speech of deaf and hearing speakers is guided by different gestural organization.

4.
5.
A model of the vocal-tract area function is described that consists of four tiers. The first tier is a vowel substrate defined by a system of spatial eigenmodes and a neutral area function determined from MRI-based vocal-tract data. The input parameters to the first tier are coefficient values that, when multiplied by the appropriate eigenmode and added to the neutral area function, construct a desired vowel. The second tier consists of a consonant shaping function defined along the length of the vocal tract that can be used to modify the vowel substrate such that a constriction is formed. Input parameters consist of the location, area, and range of the constriction. Location and area roughly correspond to the standard phonetic specifications of place and degree of constriction, whereas the range defines the amount of vocal-tract length over which the constriction will influence the tract shape. The third tier allows length modifications for articulatory maneuvers such as lip rounding/spreading and larynx lowering/raising. Finally, the fourth tier provides control of the level of acoustic coupling of the vocal tract to the nasal tract. All parameters can be specified either as static or time varying, which allows for multiple levels of coarticulation or coproduction.
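The first two tiers described in this abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the section count, eigenmodes, and the cosine blend used to realize the constriction's "range" parameter are all assumptions.

```python
import numpy as np

def vowel_substrate(neutral, modes, coeffs):
    """Tier 1: neutral area function plus a weighted sum of spatial
    eigenmodes; `modes` has shape (n_modes, n_sections)."""
    return neutral + modes.T @ coeffs

def apply_constriction(area, location, target_area, half_range):
    """Tier 2: superimpose a consonantal constriction on the vowel
    substrate. A cosine weighting (an assumed blend shape; the paper
    only specifies location, area, and range) pulls the area toward
    `target_area` within `half_range` sections of `location`."""
    x = np.arange(len(area))
    w = np.cos(np.pi * (x - location) / (2.0 * half_range))
    w[np.abs(x - location) > half_range] = 0.0  # no effect outside the range
    return (1.0 - w) * area + w * target_area
```

Tiers 3 and 4 (length warping and nasal coupling) would operate on the output of this chain in the same compositional fashion.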

6.
Overlap-masking degrades speech intelligibility in reverberation [R. H. Bolt and A. D. MacDonald, J. Acoust. Soc. Am. 21(6), 577-580 (1949)]. To reduce the effect of this degradation, steady-state suppression has been proposed as a preprocessing technique [Arai et al., Proc. Autumn Meet. Acoust. Soc. Jpn., 2001; Acoust. Sci. Tech. 23(8), 229-232 (2002)]. This technique automatically suppresses steady-state portions of speech that have more energy but are less crucial for speech perception. The present paper explores the effect of steady-state suppression on syllable identification preceded by /a/ under various reverberant conditions. In each of two perception experiments, stimuli were presented to 22 subjects with normal hearing. The stimuli consisted of monosyllables in a carrier phrase, with and without steady-state suppression, and were presented under different reverberant conditions using artificial impulse responses. The results indicate that steady-state suppression yields a statistically significant improvement in consonant identification for reverberation times of 0.7 to 1.2 s. Analysis of confusion matrices shows that identification of voiced consonants, stop and nasal consonants, and bilabial, alveolar, and velar consonants was especially improved by steady-state suppression. Steady-state suppression is thus demonstrated to be an effective preprocessing method for improving syllable identification by reducing the effect of overlap-masking under specific reverberant conditions.
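The core idea of steady-state suppression can be sketched as follows. This is a rough illustration only, assuming a frame-to-frame spectral-change measure with arbitrary threshold and attenuation values; the cited papers define their own stability criterion and suppression depth.

```python
import numpy as np

def steady_state_suppress(x, fs, frame_ms=20.0, atten_db=10.0, thresh=0.5):
    """Attenuate spectrally stable (steady-state) stretches of a signal.
    Frames whose log-magnitude spectrum changes little from the previous
    frame are treated as steady-state and scaled down by `atten_db`.
    All parameter values here are assumptions, not the published ones."""
    n = int(fs * frame_ms / 1000)
    nframes = len(x) // n
    y = x.astype(float).copy()
    prev = None
    for i in range(nframes):
        frame = x[i * n:(i + 1) * n] * np.hanning(n)
        spec = np.log(np.abs(np.fft.rfft(frame)) + 1e-9)
        if prev is not None:
            flux = np.mean((spec - prev) ** 2)  # frame-to-frame spectral change
            if flux < thresh:  # spectrum barely moving: steady-state frame
                y[i * n:(i + 1) * n] *= 10 ** (-atten_db / 20)
        prev = spec
    return y
```

Because suppression is driven by spectral change rather than raw energy, onsets and transitions (which carry most consonant information) pass through untouched while sustained vowel portions, the main source of overlap-masking, are attenuated.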

7.
This study focuses on the initial component of the stop consonant release burst, the release transient. In theory, the transient, because of its impulselike source, should contain much information about the vocal tract configuration at release, but it is usually weak in intensity and difficult to isolate from the accompanying frication in natural speech. For this investigation, a human talker produced isolated release transients of /b,d,g/ in nine vocalic contexts by whispering these syllables very quietly. He also produced the corresponding CV syllables with regular phonation for comparison. Spectral analyses showed the isolated transients to have a clearly defined formant structure, which was not seen in natural release bursts, whose spectra were dominated by the frication noise. The formant frequencies varied systematically with both consonant place of articulation and vocalic context. Perceptual experiments showed that listeners can identify both consonants and vowels from isolated transients, though not very accurately. Knowing one of the two segments in advance did not help, but when the transients were followed by a compatible synthetic, steady-state vowel, consonant identification improved somewhat. On the whole, isolated transients, despite their clear formant structure, provided only partial information for consonant identification, but no less so, it seems, than excerpted natural release bursts. The information conveyed by artificially isolated transients and by natural (frication-dominated) release bursts appears to be perceptually equivalent.

8.
The purpose of this letter is to explore some reasons for what appear to be conflicting reports regarding the nature and extent of anticipatory coarticulation, in general, and anticipatory lip rounding, in particular. Analyses of labial electromyographic and kinematic data using a minimal-pair paradigm allowed for the differentiation of consonantal and vocalic effects, supporting a frame versus a feature-spreading model of coarticulation. It is believed that the apparent conflicts of previous studies of anticipatory coarticulation might be resolved if experimental design made more use of contrastive minimal pairs and relied less on assumptions about feature specifications of phones.

9.
The influence of vocalic context on various temporal and spectral properties of preceding acoustic segments was investigated in utterances containing [ə#CV] sequences produced by two girls aged 4;8 and 9;5 years and by their father. The younger (but not the older) child's speech showed a systematic lowering of [s] noise and [tʰ] release burst spectra before [u] as compared to [i] and [ae]. The older child's speech, on the other hand, showed an orderly relationship of the second-formant frequency in [ə] to the transconsonantal vowel. Both children tended to produce longer [s] noises and voice onset times as well as higher second-formant peaks at constriction noise offset before [i] than before [u] and [ae]. All effects except the first were shown by the adult who, in addition, produced first-formant frequencies in [ə] that anticipated the transconsonantal vowel. These observations suggest that different forms of anticipatory coarticulation may have different causes and may follow different developmental patterns. A strategy for future research is suggested.

10.
The effect of speaking rate variations on second formant (F2) trajectories was investigated for a continuum of rates. F2 trajectories for the schwa preceding a voiced bilabial stop, and one of three target vocalic nuclei following the stop, were generated for utterances of the form "Put a bV here," where V was /i/, /ae/, or /oI/. Discrete spectral measures at the vowel-consonant and consonant-vowel interfaces, as well as vowel target values, were examined as potential parameters of rate variation; several different whole-trajectory analyses were also explored. Results suggested that a discrete measure at the vowel-consonant (schwa-consonant) interface, the F2 offset value, was in many cases a good index of rate variation, provided the rates were not unusually slow (vowel durations less than 200 ms). The relationship of the spectral measure at the consonant-vowel interface, the F2 onset, as well as that of the "target" for this vowel, was less clearly related to rate variation. Whole-trajectory analyses indicated that the rate effect cannot be captured by linear compressions and expansions of some prototype trajectory. Moreover, the effect of rate manipulation on formant trajectories interacts with speaker and vocalic nucleus type, making it difficult to specify general rules for these effects. However, there is evidence that a small number of speaker strategies may emerge from a careful qualitative and quantitative analysis of whole formant trajectories. Results are discussed in terms of models of speech production and a group of speech disorders that is usually associated with anomalies of speaking rate, and hence of formant frequency trajectories.

11.
On the basis of theoretical considerations and the results of experiments with synthetic consonant-vowel syllables, it has been hypothesized that the short-time spectrum sampled at the onset of a stop consonant should exhibit gross properties that uniquely specify the consonantal place of articulation independent of the following vowel. The aim of this paper is to test this hypothesis by measuring the spectrum sampled at the onsets and offsets of a large number of consonant-vowel (CV) and vowel-consonant (VC) syllables containing both voiced and voiceless stops produced by several speakers. Templates were devised in an attempt to capture three classes of spectral shapes: diffuse-rising, diffuse-falling, and compact, corresponding to alveolar, labial, and velar consonants, respectively. Spectra were derived from the utterances by sampling at the consonantal release of CV syllables and at the implosion and burst release of VC syllables, and these spectra (smoothed by a linear prediction algorithm) were matched against the templates. It was found that about 85% of the spectra at initial consonant release and at final burst release were correctly classified by the templates, although there was some variability across vowel contexts. The spectra sampled at the implosion were not consistently classified. A preliminary examination of spectra sampled at the release of nasal consonants in CV syllables showed a somewhat lower accuracy of classification by the same templates. Overall, the results support the hypothesis that, in natural speech, the acoustic characteristics of stop consonants, specified in terms of the gross spectral shape sampled at the discontinuity in the acoustic signal, show invariant properties independent of the adjacent vowel or of the voicing characteristics of the consonant.
The implication is that the auditory system is endowed with detectors that are sensitive to these kinds of gross spectral shapes, and that the existence of these detectors helps the infant to organize the sounds of speech into their natural classes.
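A toy classifier conveys the flavor of the diffuse-rising / diffuse-falling / compact template distinction. The tilt and compactness measures and their thresholds below are assumptions for illustration, not the paper's actual templates.

```python
import numpy as np

def classify_place(freqs_hz, spec_db):
    """Crude stand-in for the gross-shape templates: a compact spectrum
    (energy bunched around one peak) maps to velar, a diffuse-rising
    tilt to alveolar, a diffuse-falling tilt to labial."""
    slope = np.polyfit(freqs_hz / 1000.0, spec_db, 1)[0]  # spectral tilt, dB/kHz
    compactness = np.mean(spec_db >= spec_db.max() - 6.0)  # bins within 6 dB of peak
    if compactness < 0.15:
        return "velar"  # energy concentrated in one narrow region
    return "alveolar" if slope > 0 else "labial"
```

In the study itself the spectra fed to the templates were LPC-smoothed samples taken at the acoustic discontinuity (release or implosion), which is what makes the gross shape stable across vowels.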

12.
Limited consonant phonemic information can be conveyed by the temporal characteristics of speech. In the two experiments reported here, the effects of practice and of multiple talkers on identification of temporal consonant information were evaluated. Naturally produced /aCa/ disyllables were used to create "temporal-only" stimuli having instantaneous amplitudes identical to the natural speech stimuli, but flat spectra. Practice improved normal-hearing subjects' identification of temporal-only stimuli from a single talker over that reported earlier for a different group of unpracticed subjects [J. Acoust. Soc. Am. 82, 1152-1161 (1987)]. When the number of talkers was increased to six, however, performance was poorer than that observed for one talker, demonstrating that subjects had been able to learn the individual stimulus items derived from the speech of the single talker. Even after practice, subjects varied greatly in their abilities to extract temporal information related to consonant voicing and manner. Identification of consonant place was uniformly poor in the multiple-talker situation, indicating that for these stimuli consonant place is cued via spectral information. Comparison of consonant identification by users of multi-channel cochlear implants showed that the implant users' identification of temporal consonant information was largely within the range predicted from the normal data. In the instances where the implant users were performing especially well, they were identifying consonant place information at levels well beyond those predicted by the normal-subject data. Comparison of implant-user performance with the temporal-only data reported here can help determine whether the speech information available to the implant user consists of entirely temporal cues, or is augmented by spectral cues.
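One plausible way to build a "temporal-only" stimulus of the kind described, amplitude pattern preserved, spectrum flattened, is to modulate a white-noise carrier with a smoothed amplitude envelope of the speech. This is a sketch of the general technique; the paper's exact synthesis procedure may differ.

```python
import numpy as np

def temporal_only(speech, fs, win_ms=10.0, seed=0):
    """Modulate a flat-spectrum noise carrier with the speech's smoothed
    amplitude envelope (moving average of the rectified signal). The
    window length and carrier choice are assumptions for illustration."""
    n = max(1, int(fs * win_ms / 1000))
    env = np.convolve(np.abs(speech), np.ones(n) / n, mode="same")
    rng = np.random.default_rng(seed)
    carrier = rng.standard_normal(len(speech))
    carrier /= np.max(np.abs(carrier))  # normalize so env sets the amplitude
    return env * carrier
```

Voicing and manner cues survive this transformation because they live largely in the amplitude envelope, while place cues, as the experiments found, depend on the discarded spectral detail.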

13.
Research on children's speech perception and production suggests that consonant voicing and place contrasts may be acquired early in life, at least in word-onset position. However, little is known about the development of the acoustic correlates of later-acquired, word-final coda contrasts. This is of particular interest in languages like English where many grammatical morphemes are realized as codas. This study therefore examined how various non-spectral acoustic cues vary as a function of stop coda voicing (voiced vs. voiceless) and place (alveolar vs. velar) in the spontaneous speech of 6 American-English-speaking mother-child dyads. The results indicate that children as young as 1;6 exhibited many adult-like acoustic cues to voicing and place contrasts, including longer vowels and more frequent use of voice bar with voiced codas, and a greater number of bursts and longer post-release noise for velar codas. However, 1;6-year-olds overall exhibited longer durations and more frequent occurrence of these cues compared to mothers, with decreasing values by 2;6. Thus, English-speaking 1;6-year-olds already exhibit adult-like use of some of the cues to coda voicing and place, though implementation is not yet fully adult-like. Physiological and contextual correlates of these findings are discussed.

14.
Durations of the vocalic portions of speech are influenced by a large number of linguistic and nonlinguistic factors (e.g., stress and speaking rate). However, each factor affecting vowel duration may influence articulation in a unique manner. The present study examined the effects of stress and final-consonant voicing on the detailed structure of articulatory and acoustic patterns in consonant-vowel-consonant (CVC) utterances. Jaw movement trajectories and F1 trajectories were examined for a corpus of utterances differing in stress and final-consonant voicing. Jaw lowering and raising gestures were more rapid, longer in duration, and spatially more extensive for stressed versus unstressed utterances. At the acoustic level, stressed utterances showed more rapid initial F1 transitions and more extreme F1 steady-state frequencies than unstressed utterances. In contrast to the results obtained in the analysis of stress, decreases in vowel duration due to devoicing did not result in a reduction in the velocity or spatial extent of the articulatory gestures. Similarly, at the acoustic level, the reductions in formant transition slopes and steady-state frequencies demonstrated by the shorter, unstressed utterances did not occur for the shorter, voiceless utterances. The results demonstrate that stress-related and voicing-related changes in vowel duration are accomplished by separate and distinct changes in speech production with observable consequences at both the articulatory and acoustic levels.

15.
This study explores the following hypothesis: forward looping movements of the tongue that are observed in VCV sequences are due partly to the anatomical arrangement of the tongue muscles, how they are used to produce a velar closure, and how the tongue interacts with the palate during consonantal closure. The study uses an anatomically based two-dimensional biomechanical tongue model. Tissue elastic properties are accounted for in finite-element modeling, and movement is controlled by constant-rate control parameter shifts. Tongue raising and lowering movements are produced by the model mainly with the combined actions of the genioglossus, styloglossus, and hyoglossus. Simulations of V1CV2 movements were made, where C is a velar consonant and V is [a], [i], or [u]. Both vowels and consonants are specified in terms of targets, but for the consonant the target is virtual, and cannot be reached because it is beyond the surface of the palate. If V1 is the vowel [a] or [u], the resulting trajectory describes a movement that begins to loop forward before consonant closure and continues to slide along the palate during the closure. This pattern is very stable when moderate changes are made to the specification of the target consonant location and agrees with data published in the literature. If V1 is the vowel [i], looping patterns are also observed, but their orientation was quite sensitive to small changes in the location of the consonant target. These findings also agree with patterns of variability observed in measurements from human speakers, but they contradict data published by Houde [Ph.D. dissertation (1967)]. These observations support the idea that the biomechanical properties of the tongue could be the main factor responsible for the forward loops when V1 is a back vowel, regardless of whether V2 is a back vowel or a front vowel. 
In the [i] context it seems that additional factors have to be taken into consideration in order to explain the observations made on some speakers.

16.
Medial movements of the lateral pharyngeal wall at the level of the velopharyngeal port were examined by using a computerized ultrasound system. Subjects produced CVNVC sequences involving all combinations of the vowels /a/ and /u/ and the nasal consonants /n/ and /m/. The effects of both vowels on the CVN and NVC gestures (opening and closing of the velopharyngeal port, respectively) were assessed in terms of movement amplitude, duration, and movement onset time. The amplitude of both opening and closing gestures of the lateral pharyngeal wall was less in the context of the vowel /u/ than the vowel /a/. In addition, the onset of the opening gesture towards the nasal consonant was related to the identity of both the initial and the final vowels. The characteristics of the functional coupling of the velum and lateral pharyngeal wall in speech are discussed.

17.
Vertical lingual movement data for the alveolopalatal consonants /?/ and /?/ and for the dorsovelar consonant /k/ in Catalan /aCa/ sequences produced by three speakers reveal that the tongue body travels a smaller distance at a slower speed and in a longer time during the lowering period extending from the consonant into the following vowel (CV) than during the rising period extending from the preceding vowel into the consonant (VC). For two speakers, two-phase trajectories characterized by two successive velocity peaks occur more frequently during the former period than during the latter, whether associated with tongue blade and dorsum (for alveolopalatals) or with the tongue dorsum articulator alone (for velars). Greater tongue dorsum involvement for /?/ and /k/ than for /?/ accounts for a different kinematic relationship between the four articulatory phases. The lingual gesture for alveolopalatals and, less so, that for velars may exert more prominent spatial and temporal effects on V2 than on V1, which is in agreement with the salience of the C-to-V carryover component associated with these consonants according to previous coarticulation studies. These kinematic and coarticulation data may be attributed to tongue dorsum biomechanics to a large extent.

18.
The multidimensional phoneme identification model is applied to consonant confusion matrices obtained from 28 postlingually deafened cochlear implant users. This model predicts consonant matrices based on these subjects' ability to discriminate a set of postulated spectral, temporal, and amplitude speech cues as presented to them by their device. The model produced confusion matrices that matched many aspects of individual subjects' consonant matrices, including information transfer for the voicing, manner, and place features, despite individual differences in age at implantation, implant experience, device and stimulation strategy used, as well as overall consonant identification level. The model was able to match the general pattern of errors between consonants, but not the full complexity of all consonant errors made by each individual. The present study represents an important first step in developing a model that can be used to test specific hypotheses about the mechanisms cochlear implant users employ to understand speech.
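The "information transfer" figure of merit mentioned here is the standard Miller-Nicely relative information transmitted for a phonetic feature, computed by collapsing the consonant confusion matrix over the feature's classes. A minimal implementation:

```python
import numpy as np

def feature_info_transfer(conf, feature):
    """Relative information transmitted for a feature (Miller-Nicely).
    `conf[i, j]` counts stimulus i heard as response j; `feature[i]` is
    consonant i's feature class (e.g., 'voiced'/'voiceless')."""
    classes = sorted(set(feature))
    idx = {c: k for k, c in enumerate(classes)}
    m = np.zeros((len(classes), len(classes)))
    for i in range(conf.shape[0]):          # collapse consonants into
        for j in range(conf.shape[1]):      # feature classes
            m[idx[feature[i]], idx[feature[j]]] += conf[i, j]
    p = m / m.sum()
    px, py = p.sum(1), p.sum(0)
    t = sum(p[a, b] * np.log2(p[a, b] / (px[a] * py[b]))
            for a in range(len(classes)) for b in range(len(classes))
            if p[a, b] > 0)                 # mutual information, bits
    hx = -sum(q * np.log2(q) for q in px if q > 0)  # stimulus entropy
    return t / hx if hx > 0 else 0.0
```

A value of 1.0 means the feature is transmitted perfectly even if individual consonants are confused within a class; 0.0 means responses carry no information about the feature.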

19.
Individuals with congenital velopharyngeal impairment generally maintain adequate levels of intraoral pressure during consonant production by increasing respiratory effort. The purpose of the present study was to determine if normal individuals respond to a decrease in velopharyngeal resistance in a similar way. The velar mechanism was perturbed by having subjects voluntarily lower the soft palate during a series of words involving plosive consonants. The pressure-flow technique was used to measure oral pressures, calculate velopharyngeal orifice resistance, and estimate velopharyngeal orifice area. Inductive plethysmography was used to measure breathing volumes associated with the words. The data indicate that, in most instances, intraoral pressure remained at appropriate levels (> 3.0 cm H2O) after velar lowering. Speech breathing volume did not change during inspiration, but increased during speech expiration when the velopharyngeal port was open. The difference was statistically significant (p < 0.01). Duration of the utterance did not change across conditions. A mechanical model was then used to determine how intraoral pressure would be affected by simulating the same conditions in a passive system. The modeling data revealed that pressure would drop threefold. It was concluded that increased respiratory volumes tend to stabilize intraoral pressure when vocal tract resistance is experimentally reduced.
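The pressure-flow estimate of orifice area referred to here is conventionally computed with the Warren and DuBois hydrokinetic equation, A = V̇ / (k·√(2ΔP/ρ)). A direct transcription (CGS units; k = 0.65 is the usual empirical discharge coefficient):

```python
import math

def orifice_area(flow_cm3s, dp_cmH2O, k=0.65, rho=0.001):
    """Velopharyngeal orifice area (cm^2) from the Warren-DuBois
    pressure-flow equation. flow in cm^3/s, differential pressure in
    cm H2O (1 cm H2O = 980 dyn/cm^2), rho = air density in g/cm^3."""
    dp = dp_cmH2O * 980.0  # convert cm H2O to dyn/cm^2
    return flow_cm3s / (k * math.sqrt(2.0 * dp / rho))
```

For example, a nasal airflow of 100 cm^3/s across a 3 cm H2O pressure drop yields an estimated opening of roughly 0.06 cm^2.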

20.
This article describes a model in which the acoustic speech signal is processed to yield a discrete representation of the speech stream in terms of a sequence of segments, each of which is described by a set (or bundle) of binary distinctive features. These distinctive features specify the phonemic contrasts that are used in the language, such that a change in the value of a feature can potentially generate a new word. This model is a part of a more general model that derives a word sequence from this feature representation, the words being represented in a lexicon by sequences of feature bundles. The processing of the signal proceeds in three steps: (1) Detection of peaks, valleys, and discontinuities in particular frequency ranges of the signal leads to identification of acoustic landmarks. The type of landmark provides evidence for a subset of distinctive features called articulator-free features (e.g., [vowel], [consonant], [continuant]). (2) Acoustic parameters are derived from the signal near the landmarks to provide evidence for the actions of particular articulators, and acoustic cues are extracted by sampling selected attributes of these parameters in these regions. The selection of cues that are extracted depends on the type of landmark and on the environment in which it occurs. (3) The cues obtained in step (2) are combined, taking context into account, to provide estimates of "articulator-bound" features associated with each landmark (e.g., [lips], [high], [nasal]). These articulator-bound features, combined with the articulator-free features in (1), constitute the sequence of feature bundles that forms the output of the model. 
Examples of cues that are used, and justification for this selection, are given, as well as examples of the process of inferring the underlying features for a segment when there is variability in the signal due to enhancement gestures (recruited by a speaker to make a contrast more salient) or due to overlap of gestures from neighboring segments.
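Step (1) of the model, finding acoustic landmarks at abrupt spectral-energy discontinuities, can be caricatured in a few lines. This sketch covers only energy-jump detection in a single band; the band edges and the 9-dB jump criterion are assumptions loosely in the spirit of landmark detection, and steps (2) and (3) (cue extraction and feature estimation) are not modeled.

```python
import numpy as np

def energy_landmarks(x, fs, band=(800, 8000), frame_ms=10.0, jump_db=9.0):
    """Mark frames where band energy rises ('+') or falls ('-') abruptly,
    as candidate consonantal landmarks. Returns (frame_index, sign) pairs."""
    n = int(fs * frame_ms / 1000)
    nframes = len(x) // n
    e_db = np.empty(nframes)
    f = np.fft.rfftfreq(n, 1 / fs)
    sel = (f >= band[0]) & (f <= band[1])
    for i in range(nframes):
        frame = x[i * n:(i + 1) * n] * np.hanning(n)
        power = np.abs(np.fft.rfft(frame)) ** 2
        e_db[i] = 10 * np.log10(power[sel].sum() + 1e-12)  # band energy, dB
    d = np.diff(e_db)
    return [(i + 1, '+' if d[i] > 0 else '-')
            for i in range(len(d)) if abs(d[i]) >= jump_db]
```

In the full model, each such landmark would then be labeled with articulator-free features and probed for articulator-bound cues in its neighborhood.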


Copyright©北京勤云科技发展有限公司  京ICP备09084417号