Similar Documents
 Found 20 similar documents (search time: 31 ms)
1.
This study reassessed the role of the nasal murmur and formant transitions as perceptual cues for place of articulation in nasal consonants across a number of vowel environments. Five types of computer-edited stimuli were generated from natural utterances consisting of [m n] followed by [i e a o u]: (1) full murmurs; (2) transitions plus vowel segments; (3) the last six pulses of the murmur; (4) the six pulses starting from the beginning of the formant transitions; and (5) the six pulses surrounding the nasal release (three pulses before and three pulses after). Results showed that the murmur provided as much information for the perception of place of articulation as did the transitions. Moreover, the highest performance scores for place of articulation were obtained in the six-pulse condition containing both murmur and transition information. The data support the view that it is the combination of nasal murmur plus formant transitions which forms an integrated property for the perception of place of articulation.

2.
This study explored the claim that invariant acoustic properties corresponding to phonetic features generalize across languages. Experiment I examined whether the same invariant properties can characterize diffuse stop consonants in Malayalam, French, and English. Results showed that, contrary to theoretical predictions, we could not distinguish labials from dentals, nor could we classify dentals and alveolars together in terms of the same invariant properties. We developed an alternative metric based on the change in the distribution of spectral energy from the burst onset to the onset of voicing. This metric classified over 91% of the stops in Malayalam, French, and English. In experiment II, we investigated whether the invariant properties defined by the metric are used by English-speaking listeners in making phonetic decisions for place of articulation. Prototype CV syllables--[b d] in the context of [i e a o u]--were synthesized. The gross shape of the spectrum was manipulated first at the burst onset, then at the onset of voicing, such that the stimulus configuration had the spectral properties prescribed by our metric for labial and dental consonants, while the formant frequencies and transitions were appropriate to the contrasting place of articulation. Results of identification tests showed that listeners were able to perceive place of articulation as a function of the relative distribution of spectral energy specified by the metric.
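The abstract above does not give the metric's exact form. The following is a minimal sketch, under assumed band edges and a simple difference-of-ratios formulation, of how a change in the distribution of spectral energy from burst onset to voicing onset could be quantified; the function names and the 2500-Hz split are illustrative, not the published metric.

```python
import numpy as np

def band_energy(frame, sr, lo, hi):
    """Energy of `frame` between lo and hi Hz (magnitude-squared FFT)."""
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    return spec[(freqs >= lo) & (freqs < hi)].sum()

def energy_shift(burst_frame, voicing_frame, sr, split_hz=2500.0):
    """Change in the high/low energy balance from burst onset to voicing onset.

    Positive values mean energy moved toward high frequencies. The split
    frequency and the difference-of-ratios form are assumptions made for
    illustration only.
    """
    def hi_ratio(frame):
        lo = band_energy(frame, sr, 0.0, split_hz)
        hi = band_energy(frame, sr, split_hz, sr / 2.0)
        return hi / (hi + lo + 1e-12)

    return hi_ratio(voicing_frame) - hi_ratio(burst_frame)
```

In use, one analysis frame would be taken at the burst onset and one at the onset of voicing, and the sign and magnitude of the shift would feed a labial/dental classifier.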

3.
The experiment reported here explores the ability of 4- to 5-day-old neonates to discriminate consonantal place of articulation and vowel quality using shortened CV syllables similar to those used by Blumstein and Stevens [J. Acoust. Soc. Am. 67, 648-662 (1980)], without vowel steady-state information. The results show that the initial 34-44 ms of CV stimuli provide infants with sufficient information to discriminate place of articulation differences in stop consonants ([ba] vs [da], [ba] vs [ga], [bi] vs [di], and [bi] vs [gi]) and following vowel quality ([ba] vs [bi], [da] vs [di], and [ga] vs [gi]). These results suggest that infants can discriminate syllables on the basis of the onset properties of CV signals. Furthermore, this experiment indicates that neonates require little or no exposure to speech to succeed in such a discrimination task.

4.
We have examined the effects of the relative amplitude of the release burst on perception of the place of articulation of utterance-initial voiceless and voiced stop consonants. The amplitude of the burst, which occurs within the first 10-15 ms following consonant release, was systematically varied in 5-dB steps from -10 to +10 dB relative to a "normal" burst amplitude for two labial-to-alveolar synthetic speech continua--one comprising voiceless stops and the other, voiced stops. The distribution of spectral energy in the bursts for the labial and alveolar stops at the ends of the continuum was consistent with the spectrum shapes observed in natural utterances, and intermediate shapes were used for intermediate stimuli on the continuum. The results of identification tests with these stimuli showed that the relative amplitude of the burst significantly affected the perception of the place of articulation of both voiceless and voiced stops, but the effect was greater for the former than the latter. The results are consistent with a view that two basic properties contribute to the labial-alveolar distinction in English. One of these is determined by the time course of the change in amplitude in the high-frequency range (above 2500 Hz) in the few tens of ms following consonantal release, and the other is determined by the frequencies of spectral peaks associated with the second and third formants in relation to the first formant.

5.
The formant hypothesis of vowel perception, where the lowest two or three formant frequencies are essential cues for vowel quality perception, is widely accepted. There has, however, been some controversy suggesting that formant frequencies are not sufficient and that the whole spectral shape is necessary for perception. Three psychophysical experiments were performed to study this question. In the first experiment, the first or second formant peak of stimuli was suppressed as much as possible while still maintaining the original spectral shape. The responses to these stimuli were not radically different from the ones for the unsuppressed control. In the second experiment, F2-suppressed stimuli, whose amplitude ratios of high- to low-frequency components were systematically changed, were used. The results indicate that the ratio changes can affect perceived vowel quality, especially its place of articulation. In the third experiment, the full-formant stimuli, whose amplitude ratios were changed from the original and whose F2's were kept constant, were used. The results suggest that the amplitude ratio is at least as effective as F2 as a cue for place of articulation. We conclude that formant frequencies are not exclusive cues and that the whole spectral shape can be crucial for vowel perception.

6.
The effects of mild-to-moderate hearing impairment on the perceptual importance of three acoustic correlates of stop consonant place of articulation were examined. Normal-hearing and hearing-impaired adults identified a stimulus set comprising all possible combinations of the levels of three factors: formant transition type (three levels), spectral tilt type (three levels), and abruptness of frequency change (two levels). The levels of these factors correspond to those appropriate for /b/, /d/, and /g/ in the /ae/ environment. Normal-hearing subjects responded primarily in accord with the place of articulation specified by the formant transitions. Hearing-impaired subjects showed less-than-normal reliance on formant transitions and greater-than-normal reliance on spectral tilt and abruptness of frequency change. These results suggest that hearing impairment affects the perceptual importance of cues to stop consonant identity, increasing the importance of information provided by both temporal characteristics and gross spectral shape and decreasing the importance of information provided by the formant transitions.

7.
Two recent accounts of the acoustic cues which specify place of articulation in syllable-initial stop consonants claim that they are located in the initial portions of the CV waveform and are context-free. Stevens and Blumstein [J. Acoust. Soc. Am. 64, 1358-1368 (1978)] have described the perceptually relevant spectral properties of these cues as static, while Kewley-Port [J. Acoust. Soc. Am. 73, 322-335 (1983)] describes these cues as dynamic. Three perceptual experiments were conducted to test predictions derived from these accounts. Experiment 1 confirmed that acoustic cues for place of articulation are located in the initial 20-40 ms of natural stop-vowel syllables. Next, short synthetic CV's modeled after natural syllables were generated using either a digital, parallel-resonance synthesizer in experiment 2 or linear prediction synthesis in experiment 3. One set of synthetic stimuli preserved the static spectral properties proposed by Stevens and Blumstein. Another set of synthetic stimuli preserved the dynamic properties suggested by Kewley-Port. Listeners in both experiments identified place of articulation significantly better from stimuli which preserved dynamic acoustic properties than from those based on static onset spectra. Evidently, the dynamic structure of the initial stop-vowel articulatory gesture can be preserved in context-free acoustic cues which listeners use to identify place of articulation.

8.
The contribution of the nasal murmur and the vocalic formant transitions to perception of the [m]-[n] distinction in utterance-initial position preceding [i,a,u] was investigated, extending the recent work of Kurowski and Blumstein [J. Acoust. Soc. Am. 76, 383-390 (1984)]. A variety of waveform-editing procedures were applied to syllables produced by six different talkers. Listeners' judgments of the edited stimuli confirmed that the nasal murmur makes a significant contribution to place of articulation perception. Murmur and transition information appeared to be integrated at a genuinely perceptual, not an abstract cognitive, level. This was particularly evident in [-i] context, where only the simultaneous presence of murmur and transition components permitted accurate place of articulation identification. The perceptual information seemed to be purely relational in this case. It also seemed to be context specific, since the spectral change from the murmur to the vowel onset did not follow an invariant pattern across front and back vowels.

9.
An important problem in speech perception is to determine how humans extract the perceptually invariant place of articulation information in the speech wave across variable acoustic contexts. Although analyses have been developed that attempted to classify the voiced stops /b/ versus /d/ from stimulus onset information, most of the human perceptual research to date suggests that formant transition information is more important than onset information. The purpose of the present study was to determine if animal subjects, specifically Japanese macaque monkeys, are capable of categorizing /b/ versus /d/ in synthesized consonant-vowel (CV) syllables using only formant transition information. Three monkeys were trained to differentiate CV syllables with a "go-left" versus a "go-right" label. All monkeys first learned to differentiate a /za/ versus /da/ manner contrast and easily transferred to three new vowel contexts /[symbol: see text], epsilon, I/. Next, two of the three monkeys learned to differentiate a /ba/ versus /da/ stop place contrast, but were unable to transfer it to the different vowel contexts. These results suggest that animals may not use the same mechanisms as humans do for classifying place contrasts, and call for further investigation of animal perception of formant transition information versus stimulus onset information in place contrasts.

10.
This study investigated whether any perceptually useful coarticulatory information is carried by the release burst of the first of two successive, nonhomorganic stop consonants. The CV portions of natural VCCV utterances were replaced with matched synthetic stimuli from a continuum spanning the three places of stop articulation. There was a sizable effect of coarticulatory cues in the natural-speech portion on the perception of the second stop consonant. Moreover, when the natural VC portions including the final release burst were presented in isolation, listeners were significantly better than chance at guessing the identity of the following, "missing" syllable-initial stop. The hypothesis that the release burst of a syllable-final stop contains significant coarticulatory information about the place of articulation of a following, nonhomorganic stop was further confirmed in acoustic analyses which revealed significant effects of CV context on the spectral properties of the release bursts. The relationship between acoustic stimulus properties and listeners' perceptual responses was not straightforward, however.

11.
This study assessed the acoustic and perceptual effect of noise on vowel and stop-consonant spectra. Multi-talker babble and speech-shaped noise were added to vowel and stop stimuli at -5 to +10 dB S/N, and the effect of noise was quantified in terms of (a) spectral envelope differences between the noisy and clean spectra in three frequency bands, (b) presence of reliable F1 and F2 information in noise, and (c) changes in burst frequency and slope. Acoustic analysis indicated that F1 was detected more reliably than F2 and the largest spectral envelope differences between the noisy and clean vowel spectra occurred in the mid-frequency band. This finding suggests that in extremely noisy conditions listeners must be relying on relatively accurate F1 frequency information along with partial F2 information to identify vowels. Stop consonant recognition remained high even at -5 dB despite the disruption of burst cues due to additive noise, suggesting that listeners must be relying on other cues, perhaps formant transitions, to identify stops.
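A per-band spectral envelope difference of the kind described above can be sketched as an RMS log-spectral distance between matched clean and noisy frames. The band edges below (low/mid/high at 800 and 2500 Hz) and the function name are illustrative assumptions, not the study's actual analysis parameters.

```python
import numpy as np

def band_envelope_rms_db(clean, noisy, sr, edges=(0.0, 800.0, 2500.0, 8000.0)):
    """RMS log-spectral difference (dB) between clean and noisy frames,
    computed separately in low, mid, and high bands.

    `edges` gives the band boundaries in Hz; the values here are
    illustrative, not taken from the study.
    """
    win = np.hanning(len(clean))
    c_db = 20.0 * np.log10(np.abs(np.fft.rfft(clean * win)) + 1e-12)
    n_db = 20.0 * np.log10(np.abs(np.fft.rfft(noisy * win)) + 1e-12)
    freqs = np.fft.rfftfreq(len(clean), 1.0 / sr)
    diffs = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        diffs.append(float(np.sqrt(np.mean((c_db[mask] - n_db[mask]) ** 2))))
    return diffs  # [low, mid, high] in dB
```

Larger values in a band indicate that additive noise distorted the spectral envelope more heavily there.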

12.
Previous studies [Lisker, J. Acoust. Soc. Am. 57, 1547-1551 (1975); Summerfield and Haggard, J. Acoust. Soc. Am. 62, 435-448 (1977)] have shown that voice onset time (VOT) and the onset frequency of the first formant are important perceptual cues of voicing in syllable-initial plosives. Most prior work, however, has focused on speech perception in quiet environments. The present study seeks to determine which cues are important for the perception of voicing in syllable-initial plosives in the presence of noise. Perceptual experiments were conducted using stimuli consisting of naturally spoken consonant-vowel syllables by four talkers in various levels of additive white Gaussian noise. Plosives sharing the same place of articulation and vowel context (e.g., /pa,ba/) were presented to subjects in two alternate forced choice identification tasks, and a threshold signal-to-noise-ratio (SNR) value (corresponding to the 79% correct classification score) was estimated for each voiced/voiceless pair. The threshold SNR values were then correlated with several acoustic measurements of the speech tokens. Results indicate that the onset frequency of the first formant is critical in perceiving voicing in syllable-initial plosives in additive white Gaussian noise, while the VOT duration is not.
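Estimating the threshold SNR at the 79%-correct point from 2AFC identification scores can be sketched by interpolating percent correct against SNR. This is a simplified stand-in: the study does not state its estimation procedure here, and in practice a psychometric-function fit (e.g., logistic) would typically be used instead of piecewise-linear interpolation.

```python
import numpy as np

def threshold_snr(snrs_db, pct_correct, criterion=79.0):
    """SNR (dB) at which percent correct crosses `criterion`.

    Uses linear interpolation between the bracketing measurement points
    and assumes percent correct rises monotonically with SNR. A fitted
    psychometric function would normally replace this interpolation.
    """
    snrs = np.asarray(snrs_db, dtype=float)
    pc = np.asarray(pct_correct, dtype=float)
    order = np.argsort(snrs)               # interp needs increasing x values
    return float(np.interp(criterion, pc[order], snrs[order]))
```

For example, with scores of 50/70/90/100% at -10/-5/0/+5 dB, the 79% point falls between -5 and 0 dB.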

13.
Two experiments determined the just noticeable difference (jnd) in onset frequency for speech formant transitions followed by a 1800-Hz steady state. Influences of transition duration (30, 45, 60, and 120 ms), transition-onset region (above or below 1800 Hz), and the rate of transition were examined. An overall improvement in discrimination with duration was observed suggesting better frequency resolution and, consequently, better use of pitch/timbre cues with longer transitions. In addition, falling transitions (with onsets above 1800 Hz) were better discriminated than rising, and changing onset to produce increments in transition rate-of-change in frequency yielded smaller jnd's than changing onset to produce decrements. The shortest transitions displayed additional rate-related effects. This last observation may be due to differences in the degree of dispersion of activity in the cochlea when high-rate transitions are effectively treated as non-time-varying, wideband events. The other results may reflect mechanisms that extract the temporal envelopes of signals: Envelope slope and magnitude differences are proposed to provide discriminative cues that supplement or supplant weaker spectrally based pitch/timbre cues for transitions in the short-to-moderate duration range. It is speculated that these cues may also support some speech perceptual decisions.

14.
15.
A method for distinguishing burst onsets of voiceless stop consonants in terms of place of articulation is described. Four speakers produced the voiceless stops in word-initial position in six vowel contexts. A metric was devised to extract the characteristic burst-friction components at burst onset. The burst-friction components, derived from the metric as sensory formants, were then transformed into log frequency ratios and plotted as points in an auditory-perceptual space (APS). In the APS, each place of articulation was seen to be associated with a distinct region, or target zone. The metric was then applied to a test set of words with voiceless stops preceding ten different vowel contexts as produced by eight new speakers. The present method of analyzing voiceless stops in English enabled us to distinguish place of articulation in these new stimuli with 70% accuracy.

16.
Previous studies have shown that infants discriminate voice onset time (VOT) differences for certain speech contrasts categorically. In addition, investigations of nonspeech processing by infants also yield evidence of categorical discrimination of temporal-order differences. These findings have led some researchers to argue that common auditory mechanisms underlie the infant's discrimination of timing differences in speech and nonspeech contrasts [e.g., Jusczyk et al., J. Acoust. Soc. Am. 67, 262-270 (1980)]. Nevertheless, some discrepancies in the location of the infant's category boundaries for different kinds of contrasts have been noted [e.g., Eilers et al. (1980)]. Because different procedures were used to study the different kinds of contrasts, interpretation of the discrepancies between the studies has been difficult. In the present study, three different continua were examined: [ba]-[pa] stimuli, which differed in VOT; [du]-[tu] stimuli, which differed in VOT but which lacked formant transitions; nonspeech formant onset time (FOT) stimuli that varied in the time that lower harmonics increased in amplitude. An experiment with adults indicated a close match between the perceptual boundaries for the three series. Similarly, tests with 2-month-old infants using the high-amplitude sucking procedure yielded estimates of perceptual category boundaries between 20 and 40 ms for all three stimulus series.

17.
This study concentrates on one of the commonly occurring phonetic variations in English: the stop-like modification of the dental fricative /ð/. The variant exhibits a drastic change from the canonical /ð/; the manner of articulation is changed from one that is fricative to one that is stop-like. Furthermore, the place of articulation of stop-like /ð/ has been a point of uncertainty, leading to confusion between stop-like /ð/ and /d/. In this study, acoustic and spectral moment measures were taken from 100 stop-like /ð/ and 102 /d/ tokens produced by 59 male and 23 female speakers in the TIMIT corpus. Data analysis indicated that stop-like /ð/ is significantly different from /d/ in burst amplitude, burst spectrum shape, burst peak frequency, second formant at following-vowel onset, and spectral moments. Moreover, the acoustic differences from /d/ are consistent with those expected for a dental stop-like /ð/. Automatic classification experiments involving these acoustic measures suggested that they are salient in distinguishing stop-like /ð/ from /d/.
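Spectral moment measures of the kind used above treat the power spectrum of a burst frame as a probability distribution over frequency and compute its first four moments. A minimal sketch (windowing choice and function name are illustrative; the study's exact analysis settings are not given here):

```python
import numpy as np

def spectral_moments(frame, sr):
    """First four spectral moments of a windowed frame:
    centroid (Hz), variance (Hz^2), skewness, and excess kurtosis.

    The power spectrum is normalized to sum to 1 and treated as a
    probability distribution over frequency.
    """
    power = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    p = power / power.sum()
    centroid = float((freqs * p).sum())
    dev = freqs - centroid
    variance = float((dev ** 2 * p).sum())
    sd = np.sqrt(variance)
    skew = float((dev ** 3 * p).sum() / sd ** 3)
    kurt = float((dev ** 4 * p).sum() / sd ** 4 - 3.0)  # excess kurtosis
    return centroid, variance, skew, kurt
```

A dental stop-like token would be expected to differ from /d/ in centroid and skewness of the burst spectrum, which is what such measures are meant to capture.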

18.
These studies investigated formant frequency discrimination by Japanese macaques (Macaca fuscata) using an AX discrimination procedure and techniques of operant conditioning. Nonhuman subjects were significantly more sensitive to increments in the center frequency of either the first (F1) or second (F2) formant of single-formant complexes than to corresponding pure-tone frequency shifts. Furthermore, difference limens (DLs) for multiformant signals were not significantly different than those for single-formant stimuli. These results suggest that Japanese monkeys process formant and pure-tone frequency increments differentially and that the same mechanisms mediate formant frequency discrimination in single-formant and vowel-like complexes. The importance of two of the cues available to mediate formant frequency discrimination, changes in the phase and the amplitude spectra of the signals, was investigated by independently manipulating these two parameters. Results of the studies indicated that phase cues were not a significant feature of formant frequency discrimination by Japanese macaques. Rather, subjects attended to relative level changes in harmonics within a narrow frequency range near F1 and F2 to detect formant frequency increments. These findings are compared to human formant discrimination data and suggest that both species rely on detecting alterations in spectral shape to discriminate formant frequency shifts. Implications of the results for animal models of speech perception are discussed.

19.
The phonetic identification ability of an individual (SS) who exhibits the best, or equal to the best, speech understanding of patients using the Symbion four-channel cochlear implant is described. It has been found that SS: (1) can use aspects of signal duration to form categories that are isomorphic with the phonetic categories established by listeners with normal auditory function; (2) can combine temporal and spectral cues in a normal fashion to form categories; (3) can use aspects of fricative noises to form categories that correspond to normal phonetic categories; (4) uses information from both F1 and higher formants in vowel identification; and (5) appears to identify stop consonant place of articulation on the basis of information provided by the center frequency of the burst and by the abruptness of frequency change following signal onset. SS has difficulty identifying stop consonants from the information provided by formant transitions and cannot differentially identify signals that have identical F1's and relatively low-frequency F2's. SS's performance suggests that simple speech processing strategies (filtering of the signal into four bands) and monopolar electrode design are viable options in the design of cochlear prostheses.

20.
In order to assess the limitations imposed on a cochlear implant system by a wearable speech processor, the parameters extracted from a set of 11 vowels and 24 consonants were examined. An estimate of the fundamental frequency EF0 was derived from the zero crossings of the low-pass filtered envelope of the waveform. Estimates of the first and second formant frequencies EF1 and EF2 were derived from the zero crossings of the waveform, which was filtered in the ranges 300-1000 and 800-4000 Hz. Estimates of the formant amplitudes EA1 and EA2 were derived by peak detectors operating on the outputs of the same filters. For vowels, these parameters corresponded well to the first and second formants and gave sufficient information to identify each vowel. For consonants, the relative levels and onset times of EA1 and EA2 and the EF0 values gave cues to voicing. The variation in time of EA1, EA2, EF1, and EF2 gave cues to the manner of articulation. Cues to the place of articulation were given by EF1 and EF2. When pink noise was added, the parameters were gradually degraded as the signal-to-noise ratio decreased. Consonants were affected more than vowels, and EF2 was affected more than EF1. Results for three good patients using a speech processor that coded EF0 as an electric pulse rate, EF1 and EF2 as electrode positions, and EA1 and EA2 as electric current levels confirmed that the parameters were useful for recognition of vowels and consonants. Average scores were 76% for recognition of 11 vowels and 71% for 12 consonants in the hearing-alone condition. The error rates were 4% for voicing, 12% for manner, and 25% for place.
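The processor above derives its formant estimates from zero crossings of bandpass-filtered speech: within a band containing one dominant resonance, the zero-crossing rate tracks that resonance's frequency. A minimal sketch of the zero-crossing estimator itself (the bandpass filtering stage is omitted, and the function name is illustrative):

```python
import numpy as np

def zero_crossing_freq(x, sr):
    """Estimate the dominant frequency of `x` from its zero-crossing rate.

    Each full cycle of a (roughly) periodic signal contributes two sign
    changes, so f is approximately crossings * sr / (2 * (N - 1)).
    In the processor described above, `x` would be the output of a
    bandpass filter (e.g., 300-1000 Hz for an F1 estimate).
    """
    signs = np.signbit(x)
    crossings = np.count_nonzero(signs[1:] != signs[:-1])
    return crossings * sr / (2.0 * (len(x) - 1))
```

The estimate is only meaningful when one spectral component dominates the band, which is why the filtering stage precedes it.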
