Similar Literature
20 similar documents found.
1.
The research presented here concerns the simultaneous grouping of the components of a vocal sound source. McAdams [J. Acoust. Soc. Am. 86, 2148-2159 (1989)] found that when three simultaneous vowels at different pitches were presented with subaudio frequency modulation, subjects judged them as being more prominent than when no vibrato was present. In a normal voice, when the harmonics of a vowel undergo frequency modulation they also undergo an amplitude modulation that traces the spectral envelope. Hypothetically, this spectral tracing could be one of the criteria used by the ear to group the components of each vowel, which may help explain the lack of effect of frequency modulation coherence among different vowels in the previous study. In this experiment, two types of vowel synthesis were used in which the component amplitudes of each vowel either remained constant under frequency modulation or traced the spectral envelope. The stimuli for the experiment were chords of three different vowels at pitch intervals of five semitones (ratio 1.33). All the vowels of a given stimulus were produced by the same synthesis method. The subjects' task involved rating the prominence of each vowel in the stimulus. It was assumed that subjects would judge this prominence to be lower when they were not able to distinguish the vowel from the background sound. Also included as stimulus parameters were the different permutations of the three vowels at three pitches and a number of modulation conditions in which vowels were unmodulated, modulated alone, and modulated either coherently with, or independently of, the other vowels. Spectral tracing did not result in increased ratings of vowel prominence compared to stimuli where no spectral tracing was present; it would therefore seem that spectral tracing has no effect on grouping the components of sound sources. Modulated vowels received higher prominence ratings than unmodulated vowels. Vowels modulated alone were judged to be more prominent than vowels modulated together with other vowels. There was, however, no significant difference between coherent and independent modulation of the three vowels. Differences among modulation conditions were more marked when the modulation width was 6% than when it was 3%.
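
As a point of reference, the five-semitone interval corresponds to the equal-tempered frequency ratio below; the abstract's 1.33 is this value rounded:

$$r = 2^{5/12} \approx 1.335$$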

2.
The purpose of this study was to determine the accuracy with which listeners could identify the gender of a speaker from a synthesized isolated vowel based on the natural production of that speaker when (1) the fundamental frequency was consistent with the speaker's gender, (2) the fundamental frequency was inconsistent with the speaker's gender, and (3) the speaker was transgendered. Ten male-to-female transgendered persons, 10 men, and 10 women served as subjects. Each speaker produced the vowels /i/, /u/, and /ɑ/. These vowels were analyzed for fundamental frequency and the first three formant frequencies and bandwidths. The formant frequency and bandwidth information was used to synthesize two vowel tokens for each speaker, one at a fundamental frequency of 120 Hz and one at 240 Hz. Listeners were asked to listen to these tokens and determine whether the original speaker was male or female. Listeners were not aware of the use of transgendered speakers. Results showed that, in all cases, gender identifications were based on fundamental frequency, even when fundamental frequency and formant frequency information were contradictory.

3.
This study examines cross-linguistic variation in the location of shared vowels in the vowel space across five languages (Cantonese, American English, Greek, Japanese, and Korean) and three age groups (2-year-olds, 5-year-olds, and adults). The vowels /a/, /i/, and /u/ were elicited in familiar words using a word repetition task. The productions of target words were recorded and transcribed by native speakers of each language. For correctly produced vowels, first and second formant frequencies were measured. In order to remove the effect of vocal tract size on these measurements, a normalization approach that calculates distance and angular displacement from the speaker centroid was adopted. Language-specific differences in the location of the shared vowels in the normalized formant space, as well as in the shape of the vowel spaces, were observed for both adults and children.
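
The centroid normalization mentioned above lends itself to a compact implementation. The following is a minimal sketch, assuming per-speaker F1/F2 measurements; function and variable names are illustrative, not taken from the paper:

```python
import numpy as np

def centroid_normalize(f1, f2):
    """Normalize vowel tokens by expressing each (F1, F2) point as
    distance and angular displacement from the speaker's centroid.

    f1, f2 : arrays of formant frequencies (Hz) for one speaker.
    Returns (distance, angle) pairs, which factor out overall
    vocal-tract size while preserving the vowel-space configuration.
    """
    pts = np.column_stack([f1, f2]).astype(float)
    centroid = pts.mean(axis=0)               # speaker-specific center
    offsets = pts - centroid
    distance = np.hypot(offsets[:, 0], offsets[:, 1])
    angle = np.arctan2(offsets[:, 1], offsets[:, 0])  # radians
    return distance, angle

# Example: three corner vowels from a hypothetical speaker
f1 = np.array([300.0, 850.0, 320.0])   # /i/, /a/, /u/
f2 = np.array([2400.0, 1350.0, 800.0])
d, theta = centroid_normalize(f1, f2)
```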

4.
Psychophysical results using double vowels imply that subjects are able to use the temporal aspects of neural discharge patterns. To investigate the possible temporal cues available, the responses of fibers in the cochlear nerve of the anesthetized guinea pig to synthetic vowels were recorded at a range of sound levels up to 95 dB SPL. The stimuli were the single vowels /i/ [fundamental frequency (f0) 125 Hz], /a/ (f0 100 Hz), and /ɔ/ (f0 100 Hz), and the double vowels /a(100),i(125)/ and /ɔ(100),i(125)/. Histograms synchronized to the period of the double vowels were constructed, and locking of the discharge to individual harmonics was estimated from them by Fourier transformation. One possible cue for identifying the f0's of the constituents of a double vowel is modulation of the neural discharge with a period of 1/f0. Such modulation was found at frequencies between the formant peaks of the double vowel, with modulation at the periods of 100 and 125 Hz occurring at different places in the fiber array. Generation of a population response based on synchronized responses [the average localized synchronized rate (ALSR); see Young and Sachs, J. Acoust. Soc. Am. 66, 1381-1403 (1979)] allowed estimation of the f0's by a variety of methods, and subsampling the population response at the harmonics of the f0 of a constituent vowel achieved a good reconstruction of its spectrum. Other analyses using interval histograms and autocorrelation, which overcome some problems associated with the ALSR approach, also allowed f0 identification and vowel segregation. The present study has demonstrated unequivocally that the timing of the impulses in auditory-nerve fibers provides copious possible cues for the identification of the fundamental frequencies and spectra associated with each of the constituents of double vowels.
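
The ALSR measure referenced above (Young and Sachs, 1979) averages, at each stimulus harmonic, the phase-locked discharge of fibers tuned near that harmonic. A minimal sketch of the idea, with illustrative parameter choices (e.g., the CF window) rather than the paper's exact analysis:

```python
import numpy as np

def synchronized_rate(spike_times, f, duration):
    """Magnitude of the spike train's Fourier component at frequency f
    (vector strength times rate), a standard measure of phase locking."""
    phases = 2 * np.pi * f * np.asarray(spike_times)
    return np.abs(np.sum(np.exp(1j * phases))) / duration

def alsr(fibers, harmonics, duration, cf_tolerance=0.25):
    """ALSR-style profile: for each harmonic, average the synchronized
    rate over fibers whose characteristic frequency (CF) lies within
    +/- cf_tolerance octaves of that harmonic.

    fibers: list of (cf_hz, spike_times) pairs."""
    profile = []
    for h in harmonics:
        rates = [synchronized_rate(st, h, duration)
                 for cf, st in fibers
                 if abs(np.log2(cf / h)) <= cf_tolerance]
        profile.append(np.mean(rates) if rates else 0.0)
    return np.array(profile)
```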

5.
An experiment investigated the effects of amplitude ratio (-35 to 35 dB in 10-dB steps) and fundamental frequency difference (0%, 3%, 6%, and 12%) on the identification of pairs of concurrent synthetic vowels. Vowels as weak as -25 dB relative to their competitor were easier to identify in the presence of a fundamental frequency difference (ΔF0). Vowels as weak as -35 dB were not. Identification was generally the same at ΔF0 = 3%, 6%, and 12% for all amplitude ratios: unfavorable amplitude ratios could not be compensated for by larger ΔF0's. Data for each vowel pair and each amplitude ratio, at ΔF0 = 0%, were compared to the spectral envelope of the stimulus at the same ratio, in order to determine which spectral cues determined identification. This information was then used to interpret the pattern of improvement with ΔF0 for each vowel pair, to better understand mechanisms of F0-guided segregation. Identification of a vowel was possible in the presence of strong cues belonging to its competitor, as long as cues to its own formants F1 and F2 were prominent. ΔF0 enhanced the prominence of a target vowel's cues, even when the spectrum of the target was up to 10 dB below that of its competitor at all frequencies. The results are incompatible with models of segregation based on harmonic enhancement, beats, or channel selection.

6.
Two studies were conducted to assess the sensitivity of perioral muscles to vowel-like auditory stimuli. In one study, normal young adults produced an isometric lip rounding gesture while listening to a frequency modulated tone (FMT). The fundamental of the FMT was modulated over time in a sinusoidal fashion near the frequency ranges of the first and second formants of the vowels /u/ and /i/ (rate of modulation = 4.5 or 7 Hz). In another study, normal young adults produced an isometric lip rounding gesture while listening to synthesized vowels whose formant frequencies were modulated over time in a sinusoidal fashion to simulate repetitive changes from the vowel /u/ to /i/ (rate of modulation = 2 or 4 Hz). The FMTs and synthesized vowels were presented binaurally via headphones at 75 and 60 dB SL, respectively. Muscle activity from the orbicularis oris superior and inferior and from lip retractors was recorded with surface electromyography (EMG). Signal averaging and spectral analysis of the rectified and smoothed EMG failed to show perioral muscle responses to the auditory stimuli. Implications for auditory feedback theories of speech control are discussed.

7.
Questions exist as to the intelligibility of vowels sung at extremely high fundamental frequencies, especially when the fundamental frequency (F0) produced is above the region where the first vowel formant (F1) would normally occur. Can such vowels be correctly identified and, if so, does context provide the necessary information or are acoustical elements also operative? To this end, 18 professional singers (5 males and 13 females) were recorded singing 3 isolated vowels at high and low pitches at both loud and soft levels. Aural-perceptual studies employing four types of auditors were carried out to determine the identity of these vowels and the nature of the confusions with other vowels. Subsequent acoustical analysis focused on the actual fundamental frequencies sung plus the frequencies defining the first two vowel formants. It was found that F0 change had a profound effect on vowel perception; one of the more important observations was that the target tended to shift toward vowels with an F1 just above the sung frequency.

8.
The relation between the spatial configuration of the vocal tract as determined by magnetic resonance imaging (MRI) and the acoustical signal produced was investigated. A male subject carried out a set of phonatory tasks, comprising the utterance of the sustained vowels /i/ and /a/, each in a single articulation, and the vowel /ɛ/ with his larynx positioned variously on a vertical axis. Two- and three-dimensional measurements of the vocal tract were performed. The results of these measurements were used to calculate resonance frequencies, according to predictions from acoustical theory. Finally, the calculated frequencies were compared with the resonance frequencies actually measured in the audio signal. We found a strong relation between the acoustical signal produced and the spatial configuration for the first resonance frequencies of the articulations of the vowel /ɛ/, and for the first two resonance frequencies of the vowels /a/ and /i/. The capability to accurately determine vocal tract dimensions is a major advantage of this imaging technique.
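
The acoustical theory referred to is the standard resonator account of the vocal tract. Under the simplest assumption, a uniform tube closed at the glottis and open at the lips, the resonances fall at odd quarter-wavelength multiples; the sketch below illustrates that baseline calculation (real MRI-derived area functions require more elaborate models):

```python
def quarter_wave_resonances(length_m, n=3, c=350.0):
    """Resonances of a uniform tube closed at one end and open at the
    other: F_k = (2k - 1) * c / (4 * L). c is the speed of sound in
    warm, humid air (m/s); real vocal tracts deviate from this ideal."""
    return [(2 * k - 1) * c / (4 * length_m) for k in range(1, n + 1)]

# A 0.175 m (adult male) tract gives roughly 500, 1500, 2500 Hz,
# the classic neutral-vowel formant estimates.
print(quarter_wave_resonances(0.175))
```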

9.
Formant frequencies in an old Estonian folk song performed by two female voices were estimated for two back vowels, /a/ and /u/, and two front vowels, /e/ and /i/. Comparison of these estimates with formant frequencies in spoken Estonian vowels indicates a trend for the vowels to cluster into two sets, front and back, in the F1/F2 plane. Similar clustering has previously been shown to occur in opera and choir singing, especially with increasing fundamental frequency. The clustering in the present song, however, may also be due to a tendency for a mid vowel to be realized as a higher-beginning diphthong, which is characteristic of the North-Estonian coastal dialect area where the singers come from. No evidence of a "singer's formant" was found.

10.
Recent studies have demonstrated that mothers exaggerate phonetic properties of infant-directed (ID) speech. However, these studies focused on a single acoustic dimension (frequency), whereas speech sounds are composed of multiple acoustic cues. Moreover, little is known about how mothers adjust phonetic properties of speech to children with hearing loss. This study examined mothers' production of frequency and duration cues to the American English tense/lax vowel contrast in speech to profoundly deaf (N = 14) and normal-hearing (N = 14) infants, and to an adult experimenter. First and second formant frequencies and vowel duration of tense (/i/, /u/) and lax (/ɪ/, /ʊ/) vowels were measured. Results demonstrated that for both infant groups mothers hyperarticulated the acoustic vowel space and increased vowel duration in ID speech relative to adult-directed speech. Mean F2 values were decreased for the /u/ vowel and increased for the /ɪ/ vowel, and vowel duration was longer for the /i/, /u/, and /ɪ/ vowels in ID speech. However, neither acoustic cue differed in speech to hearing-impaired versus normal-hearing infants. These results suggest that both the formant frequencies and the vowel durations that differentiate American English tense/lax vowel contrasts are modified in ID speech regardless of the hearing status of the addressee.

11.
The purpose of this experiment was to study the effects of changes in speaking rate on both the attainment of acoustic vowel targets and the relative time and speed of movements toward these presumed targets. Four speakers produced a number of different CVC and CVCVC utterances at slow and fast speaking rates. Spectrographic measurements showed that the midpoint formant frequencies of the different vowels did not vary as a function of rate. However, for fast speech the onset frequencies of second formant transitions were closer to their target frequencies while CV transition rates remained essentially unchanged, indicating that movement toward the vowel simply began earlier in fast speech. Changes in speaking rate and in lexical stress had different effects. For stressed vowels, an increase in speaking rate was accompanied primarily by a decrease in duration. However, destressed vowels, even when they were of the same duration as quickly produced stressed vowels, were reduced in overall amplitude, fundamental frequency, and, to some extent, vowel color. These results suggest that speaking rate and lexical stress are controlled by two different mechanisms.

12.
Human listeners are better able to identify two simultaneous vowels if the fundamental frequencies of the vowels are different. A computational model is presented which, for the first time, is able to simulate this phenomenon at least qualitatively. The first stage of the model is based upon a bank of bandpass filters and inner hair-cell simulators that simulate approximately the most relevant characteristics of the human auditory periphery. The output of each filter/hair-cell channel is then autocorrelated to extract pitch and timbre information. The pooled autocorrelation function (ACF) based on all channels is used to derive a pitch estimate for one of the component vowels from a signal composed of two vowels. Individual channel ACFs showing a pitch peak at this value are combined and used to identify the first vowel using a template matching procedure. The ACFs in the remaining channels are then combined and used to identify the second vowel. Model recognition performance shows a rapid improvement in correct vowel identification as the difference between the fundamental frequencies of two simultaneous vowels increases from zero to one semitone in a manner closely resembling human performance. As this difference increases up to four semitones, performance improves further only slowly, if at all.
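
The processing chain the abstract describes (peripheral filterbank and hair-cell stage, per-channel autocorrelation, pooled-ACF pitch estimate, grouping of channels by pitch peak, template matching) can be sketched compactly. The following is a simplified stand-in rather than the paper's model: Butterworth bandpass filters replace the cochlear filterbank, and half-wave rectification replaces the hair-cell simulation; band edges, lag limits, and templates are assumed inputs.

```python
import numpy as np
from scipy.signal import butter, lfilter

def channel_acf(x, fs, lo, hi, max_lag):
    """One peripheral channel: bandpass (stand-in for a cochlear filter),
    half-wave rectify (stand-in for the hair cell), then autocorrelate
    at lags 1..max_lag-1."""
    b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    y = np.maximum(lfilter(b, a, x), 0.0)
    return np.array([np.dot(y[:-k], y[k:]) for k in range(1, max_lag)])

def identify_double_vowel(x, fs, bands, templates, max_lag=400):
    """Pooled-ACF pitch estimate for one constituent vowel, grouping of
    channels by whether their ACF peaks at that pitch period, then
    template matching of each group's summed ACF against stored
    vowel ACF profiles (dict: vowel name -> profile array)."""
    acfs = [channel_acf(x, fs, lo, hi, max_lag) for lo, hi in bands]
    pooled = np.sum(acfs, axis=0)
    lo_lag = int(fs / 500)                 # ignore pitches above 500 Hz
    period = lo_lag + int(np.argmax(pooled[lo_lag:])) + 1
    group1 = [a for a in acfs if np.argmax(a) + 1 == period]
    group2 = [a for a in acfs if np.argmax(a) + 1 != period]

    def best_match(group):
        profile = np.sum(group, axis=0) if group else np.zeros(max_lag - 1)
        return max(templates, key=lambda v: float(np.dot(templates[v], profile)))

    return best_match(group1), best_match(group2)
```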

13.
For each of five vowels [i e a o u] following [t], a continuum from non-nasal to nasal was synthesized. Nasalization was introduced by inserting a pole-zero pair in the vicinity of the first formant in an all-pole transfer function. The frequencies and spacing of the pole and zero were systematically varied to change the degree of nasalization. The selection of stimulus parameters was determined from acoustic theory and the results of pilot experiments. The stimuli were presented for identification and discrimination to listeners whose language included a non-nasal–nasal vowel opposition (Gujarati, Hindi, and Bengali) and to American listeners. There were no significant differences between language groups in the 50% crossover points of the identification functions. Some vowels were more influenced by range and context effects than were others. The language groups showed some differences in the shape of the discrimination functions for some vowels. On the basis of the results, it is postulated that (1) there is a basic acoustic property of nasality, independent of the vowel, to which the auditory system responds in a distinctive way regardless of language background; and (2) there are one or more additional acoustic properties that may be used to various degrees in different languages to enhance the contrast between a nasal vowel and its non-nasal congener. A proposed candidate for the basic acoustic property is a measure of the degree of prominence of the spectral peak in the vicinity of the first formant. Additional secondary properties include shifts in the center of gravity of the low-frequency spectral prominence, leading to a change in perceived vowel height, and changes in overall spectral balance.
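
The nasalization manipulation, inserting a pole-zero pair near F1 into an otherwise all-pole transfer function, can be sketched directly in the z-domain. Formant frequencies, bandwidths, and the pole/zero placement below are illustrative values, not the stimulus parameters from the study:

```python
import numpy as np
from scipy.signal import zpk2tf, freqz

def resonance_pair(f, bw, fs):
    """Complex-conjugate pair for a resonance at f Hz with bandwidth bw Hz."""
    r = np.exp(-np.pi * bw / fs)
    w = 2 * np.pi * f / fs
    return [r * np.exp(1j * w), r * np.exp(-1j * w)]

fs = 10000
# All-pole vowel: formants at 500, 1500, 2500 Hz (illustrative values)
poles = sum((resonance_pair(f, 80, fs) for f in (500, 1500, 2500)), [])
# Nasalization: extra pole pair at 400 Hz and zero pair at 600 Hz,
# flanking F1; widening the pole-zero spacing deepens the nasalization.
nasal_poles = poles + resonance_pair(400, 100, fs)
nasal_zeros = resonance_pair(600, 100, fs)
b, a = zpk2tf(nasal_zeros, nasal_poles, 1.0)
w, h = freqz(b, a, fs=fs)   # inspect the resulting spectral envelope
```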

14.
This study examined intraproduction variability in jitter measures from elderly speakers' sustained vowel productions and tried to determine whether mean jitter levels (percent) and intraspeaker variability on jitter measures are affected significantly by the segment of the vowel selected for measurement. Twenty-eight healthy elderly men (mean age 75.6 years) and women (mean age 72.0 years) were tape recorded producing 25 repeat trials of the vowels /i/, /a/, and /u/, as steadily as possible. Jitter was analyzed from two segments of each vowel production: (a) the initial 100 cycles after 1 s of phonation, and (b) 100 cycles from the most stable-appearing portion of the production. Results indicated that the measurement point selected for jitter analysis was a significant factor both in the mean jitter level obtained and in the variability of jitter observed across repeat productions.
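
Jitter (percent) is conventionally the mean absolute difference between consecutive fundamental periods, expressed as a percentage of the mean period. A minimal sketch of that computation over one 100-cycle segment (the study's own analysis system is not specified here, and the data below are synthetic):

```python
import numpy as np

def jitter_percent(periods):
    """Mean absolute cycle-to-cycle period difference, as a percentage
    of the mean period, over a sequence of fundamental periods (s)."""
    p = np.asarray(periods, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(p))) / np.mean(p)

# e.g., 100 cycles around a 200 Hz phonation (5 ms mean period)
rng = np.random.default_rng(0)
periods = 0.005 + rng.normal(0.0, 2e-5, size=100)
print(f"jitter = {jitter_percent(periods):.2f}%")
```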

15.
In this study we assessed age-related differences in the perception and production of American English (AE) vowels by native Mandarin speakers as a function of the amount of exposure to the target language. Participants included three groups of native Mandarin speakers: 87 children, adolescents, and young adults living in China; 77 recent arrivals who had lived in the U.S. for two years or less; and 54 past arrivals who had lived in the U.S. between three and five years. The latter two groups arrived in the U.S. between the ages of 7 and 44 years. Discrimination of six AE vowel pairs /i-ɪ/, /ɪ-eɪ/, /ɛ-æ/, /æ-ɑ/, /ɑ-ʌ/, and /u-ɑ/ was assessed with a categorial AXB task. Production of the eight vowels /i, ɪ, eɪ, ɛ, æ, ʌ, ɑ, u/ was assessed with an immediate imitation task. Age-related differences in performance accuracy changed from an older-learner advantage among participants in China, to no age differences among recent arrivals, to a younger-learner advantage among past arrivals. Performance on individual vowels and vowel contrasts indicated the influence of the Mandarin phonetic/phonological system. These findings support a combined environmental and L1 interference/transfer theory as an explanation of the long-term younger-learner advantage in mastering L2 phonology.

16.
Abilities to detect and discriminate ten synthetic steady-state English vowels were compared in Old World monkeys (Cercopithecus, Macaca) and humans using standard animal psychophysical procedures and positive-reinforcement operant conditioning techniques. Monkeys' detection thresholds were close to humans' for the front vowels /i-ɪ-e-ɛ-æ/, but 10-20 dB higher for the back vowels /ʌ-ɑ-ɔ-ʊ-u/. Subjects were subsequently presented with groups of vowels to discriminate. All monkeys experienced difficulty with spectrally similar pairs such as /ʌ-ɑ/, /ɛ-æ/, and /ʊ-u/, but macaques were superior to Cercopithecus monkeys. Humans discriminated all vowels at 100% correct levels, but their increased response latencies reflected spectral similarity and correlated with higher error rates by monkeys. Varying the intensity level of the vowel stimuli had little effect on either monkey or human discrimination, except at the lowest levels tested. These qualitative similarities in monkey and human vowel discrimination suggest that some monkey species may provide useful models of human vowel processing at the sensory level.

17.
18.
This study addresses two questions: (1) How much nasality is present in classical Western singing? (2) What are the effects of frequency range, vowel, dynamic level, and gender on nasality in amateur and classically trained singers? The Nasometer II 6400 by KayPENTAX (Lincoln Park, NJ) was used to obtain nasalance values from 21 amateur singers and 25 classically trained singers while singing an ascending five-tone scalar passage in low, mid, and high frequency ranges. Each subject sang the scalar passage at both piano and mezzo-forte dynamic loudness levels on each of the five cardinal vowels (/a/, /e/, /i/, /o/, /u/). A repeated mixed-model analysis indicated a significant main effect for the amateur/classically trained distinction, dynamic loudness level, and vowel, but not for frequency range or gender. The amateur singers had significantly higher nasalance scores than classically trained singers in all ranges and on all vowels except /o/. Dynamic loudness level had a significant effect on nasalance for all subject groups except female majors in the mid and high frequency ranges. The vowel /i/ received significantly higher nasalance than all of the other vowels. Although the results of this study show that dynamic loudness level, vowel, and level of training in classical singing have a significant effect on nasality, nasalance scores for most subjects were relatively low. Only six of the subjects, all of them amateur singers, had average nasalance scores that could be considered hypernasal (i.e., a nasalance average of 22 or above).
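
Nasalance, as returned by instruments such as the Nasometer, is conventionally the nasal acoustic energy expressed as a percentage of combined nasal-plus-oral energy. A minimal sketch of that ratio, assuming time-aligned nasal and oral microphone signals (the function and signal names are illustrative, not the device's internals):

```python
import numpy as np

def nasalance_percent(nasal, oral):
    """Nasal RMS amplitude as a percentage of combined nasal + oral
    RMS amplitude, over time-aligned nasal and oral channel signals."""
    n = np.sqrt(np.mean(np.asarray(nasal, dtype=float) ** 2))
    o = np.sqrt(np.mean(np.asarray(oral, dtype=float) ** 2))
    return 100.0 * n / (n + o)

# In this study, an average score of 22 or above was treated as hypernasal.
```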

19.
The purpose of this study was to examine the effect of reduced vowel working space on dysarthric talkers' speech intelligibility using both acoustic and perceptual approaches. In experiment 1, the acoustic-perceptual relationship between vowel working space area and speech intelligibility was examined in Mandarin-speaking young adults with cerebral palsy. Subjects read aloud 18 bisyllabic words containing the vowels /i/, /a/, and /u/ at their normal speaking rate. Each talker's words were identified by three normal listeners. The percentages of correctly identified vowels and words were calculated as vowel intelligibility and word intelligibility, respectively. Results revealed that talkers with cerebral palsy exhibited smaller vowel working space areas than ten age-matched controls. The vowel working space area was significantly correlated with vowel intelligibility (r=0.632, p<0.005) and with word intelligibility (r=0.684, p<0.005). Experiment 2 examined whether tokens of expanded vowel working spaces were perceived as better vowel exemplars and represented with greater perceptual spaces than tokens of reduced vowel working spaces. The results of the perceptual experiment support this prediction. The distorted vowels of talkers with cerebral palsy occupy a smaller acoustic space that results in shrunken intervowel perceptual distances for listeners.
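
The vowel working space area is typically computed as the area of the polygon, here the /i/-/a/-/u/ triangle, spanned by a talker's mean (F1, F2) points; the shoelace formula below is one standard way to compute it (the formant values shown are illustrative, not the study's data):

```python
def vowel_space_area(corners):
    """Vowel working space area (Hz^2) via the shoelace formula, where
    corners is a list of mean (F1, F2) points for the corner vowels,
    e.g. the /i/, /a/, /u/ triangle used in this study."""
    s = 0.0
    for (x1, y1), (x2, y2) in zip(corners, corners[1:] + corners[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# Illustrative mean formant values (Hz) for /i/, /a/, /u/
print(vowel_space_area([(300, 2300), (800, 1300), (350, 850)]))
```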

20.
A series of experiments measured the discrimination by human listeners of frequency-modulated complex tones which differed only in the coherence of frequency modulation (FM). For the coherently modulated tones, all components were modulated by the same 5-Hz sinusoid, and by the same percentage of their starting frequencies, whereas for the incoherently modulated tones the modulation of one (target) component differed from that of the rest. When the 400-ms complex was composed of consecutive harmonics of a common fundamental, performance improved monotonically with increases in modulator delay, and was nearly perfect at the longest delays. When the complex was inharmonic, performance was near chance at all modulator delays, both for component frequencies between 1500 and 2500 Hz and for component frequencies between 400 and 800 Hz. It is argued that listeners detected incoherence in harmonic complexes by detecting the resulting mistuning of the target component. This conclusion was supported by the finding that listeners were usually at least as good at detecting a fixed mistuning of the center component of a harmonic complex as they were at detecting a modulator phase delay imposed on it. A final experiment, with a stimulus duration of 1 s and slower modulation rates, showed that listeners could detect incoherence for some inharmonic complexes. However, detection was worse than for harmonic complexes and was, it is argued, based on weak harmonicity cues. The results of all experiments point to the absence of an across-frequency mechanism specific to the detection of FM incoherence.
