Similar documents: 20 results found
1.
The budgerigar (Melopsittacus undulatus) has an extraordinarily complex, learned vocal repertoire consisting of both the long rambling warble song of males and a number of short calls produced by both sexes. In warble, the most common elements (>30%) bear a strong resemblance to the highly frequency-modulated, learned contact calls that the birds produce as single utterances. However, aside from this apparent similarity, little else is known about the relationship between contact calls and warble call elements. Here, both types of calls were recorded from four male budgerigars. Signal analysis and psychophysical testing showed that these two vocalization types are acoustically different and are perceived as distinct by the birds. This suggests that warble call elements are not simple insertions of contact calls but are most likely different acoustic elements, created de novo and used solely in warble. Results show that, like contact calls, warble call elements contain information about signaler identity. The fact that contact calls and warble call elements are acoustically and perceptually distinct suggests that they probably represent two phonological systems in the budgerigar vocal repertoire, both of which arise by production learning.

2.
Songbirds and parrots deafened as nestlings fail to develop normal vocalizations, while birds deafened as adults show a gradual deterioration in the quality and precision of vocal production. Beyond this, little is known about the effect of hearing loss on the perception of vocalizations. Here, we induced temporary hearing loss in budgerigars with kanamycin and tested several aspects of their hearing, including the perception of complex, species-specific vocalizations. The ability of these birds to discriminate among acoustically distinct vocalizations was not impaired, but the ability to make fine-grain discriminations among acoustically similar vocalizations was affected, even weeks after the basilar papilla had been repopulated with new hair cells. Interestingly, these birds were initially unable to recognize previously familiar contact calls in a classification task, suggesting that previously familiar vocalizations sounded unfamiliar with new hair cells. Eventually, in spite of slightly elevated absolute thresholds, the performance of birds on vocalization discrimination and recognition tasks returned to original levels. Thus, even though vocalizations may initially sound different with new hair cells, there are only minimal long-term effects of temporary hearing loss on auditory perception, recognition of species-specific vocalizations, or other aspects of acoustic communication in these birds.

3.
The present study examined auditory distance perception cues in a non-territorial songbird, the zebra finch (Taeniopygia guttata), and in a non-songbird, the budgerigar (Melopsittacus undulatus). Using operant conditioning procedures, three zebra finches and three budgerigars were trained to identify 1- (Near) and 75-m (Far) recordings of three budgerigar contact calls, one male zebra finch song, and one female zebra finch call. Once the birds were trained on these endpoint stimuli, other stimuli were introduced into the operant task. These stimuli included recordings at intermediate distances and artificially altered stimuli simulating changes in overall amplitude, high-frequency attenuation, reverberation, and all three cues combined. By examining distance cues (amplitude, high-frequency attenuation, and reverberation) separately, this study sought to determine which cue was the most salient for the birds. The results suggest that both species could scale the stimuli on a continuum from Near to Far and that amplitude was the most important cue for these birds in auditory distance perception, as in humans and other animals.
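The three manipulated cues can be sketched in code. Below is a minimal, self-contained simulation of a "Far" version of a call, assuming a sampled waveform; the attenuation, cutoff, and reverberation values are illustrative, not those used in the study.

```python
import numpy as np

def simulate_distance(signal, fs, attenuation_db=12.0, cutoff_hz=4000.0, reverb_t=0.3):
    """Apply three coarse distance cues to a waveform (illustrative values)."""
    # 1. Overall amplitude: uniform attenuation in dB.
    out = signal * 10 ** (-attenuation_db / 20.0)

    # 2. High-frequency attenuation: one-pole low-pass filter.
    alpha = np.exp(-2 * np.pi * cutoff_hz / fs)
    lp = np.empty_like(out)
    prev = 0.0
    for i, x in enumerate(out):
        prev = (1 - alpha) * x + alpha * prev
        lp[i] = prev

    # 3. Reverberation: convolve with an exponentially decaying noise tail.
    rng = np.random.default_rng(0)
    n_tail = int(reverb_t * fs)
    ir = np.zeros(n_tail)
    ir[0] = 1.0
    ir[1:] = 0.3 * rng.standard_normal(n_tail - 1) * np.exp(-5.0 * np.arange(1, n_tail) / n_tail)
    ir /= np.sqrt(np.sum(ir ** 2))  # unit-energy IR so reverb does not boost level
    return np.convolve(lp, ir)[: len(lp)]

fs = 22050
t = np.arange(int(0.2 * fs)) / fs
call = np.sin(2 * np.pi * 3000 * t)  # stand-in for a contact call
far = simulate_distance(call, fs)    # all three cues applied
```

Each cue can also be applied in isolation (e.g., attenuation only) to build the single-cue stimulus sets the study describes.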

4.
Auditory feedback influences the development of vocalizations in songbirds and parrots; however, little is known about the development of hearing in these birds. The auditory brainstem response (ABR) was used to track the development of auditory sensitivity in budgerigars from hatch to 6 weeks of age. Responses were first obtained from 1-week-old birds at high stimulation levels, at frequencies at or below 2 kHz, showing that budgerigars do not hear well at hatch. Over the next week, thresholds improved markedly, and responses were obtained for almost all test frequencies throughout the range of hearing by 14 days. By 3 weeks posthatch, birds' best sensitivity shifted from 2 to 2.86 kHz, and the shape of the ABR audiogram became similar to that of adult budgerigars. About a week before leaving the nest, ABR audiograms of young budgerigars are very similar to those of adult birds. These data complement what is known about vocal development in budgerigars and show that hearing is fully developed by the time that vocal learning begins.

5.
The effect of diminished auditory feedback on monophthong and diphthong production was examined in postlingually deafened Australian-English speaking adults. The participants were 4 female and 3 male speakers with severe to profound hearing loss, who were compared to 11 age- and accent-matched normally hearing speakers. The test materials were 5 repetitions of hVd words containing 18 vowels. Acoustic measures that were studied included F1, F2, discrete cosine transform coefficients (DCTs), and vowel duration information. The durational analyses revealed increased total vowel durations with a maintenance of the tense/lax vowel distinctions in the deafened speakers. The deafened speakers preserved a differentiated vowel space, although some gender-specific differences were seen. For example, there was a retraction of F2 in the front vowels for the female speakers that did not occur in the males. However, all deafened speakers showed a close correspondence between the monophthong and diphthong formant movements that did occur. Gaussian classification highlighted vowel confusions resulting from changes in the deafened vowel space. The results support the view that postlingually deafened speakers maintain reasonably good speech intelligibility, in part by employing production strategies designed to bolster auditory feedback.

6.
Budgerigars were trained to produce specific vocalizations (calls) using operant conditioning and food reinforcement. The bird's call was compared to a digital representation of the call stored in a computer to determine a match. Once birds were responding at a high level of precision, we measured the effect of several manipulations upon the accuracy and the intensity of call production. Also, by differentially reinforcing other aspects of vocal behavior, budgerigars were trained to produce a call that matched another bird's contact call and to alter the latency of their vocal response. Both the accuracy of vocal matching and the intensity level of vocal production increased significantly when the bird could hear the template immediately before each trial. Moreover, manipulating the delay between the presentation of an acoustic reference and the onset of vocal production did not significantly affect either vocal intensity or matching accuracy. Interestingly, the vocalizations learned and reinforced in these operant experiments were only occasionally used in more natural communicative situations, such as when birds called back and forth to one another in their home cages.
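The abstract does not specify the matching algorithm, so the sketch below is a hypothetical stand-in for the computer comparison: it scores a produced call against a stored template by correlating time-averaged log-magnitude spectra.

```python
import numpy as np

def mean_log_spec(x, n_fft=512):
    """Time-averaged log-magnitude spectrum of a waveform."""
    n_frames = len(x) // n_fft
    frames = x[: n_frames * n_fft].reshape(n_frames, n_fft)
    spec = np.abs(np.fft.rfft(frames * np.hanning(n_fft), axis=1))
    return np.log(spec + 1e-9).mean(axis=0)

def template_score(call, template):
    """Normalized correlation of average log spectra: 1.0 = perfect match."""
    a, b = mean_log_spec(call), mean_log_spec(template)
    a, b = a - a.mean(), b - b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

fs = 22050
t = np.arange(fs) / fs
template = np.sin(2 * np.pi * 2800 * t)    # stored contact-call stand-in
good = np.sin(2 * np.pi * 2800 * t + 0.5)  # close imitation (same spectrum)
poor = np.sin(2 * np.pi * 1200 * t)        # off-target call
```

A threshold on this score could then gate reinforcement, in the spirit of the operant procedure described above.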

7.
The acoustic structure of loud calls ("wahoos") recorded from free-ranging male baboons (Papio cynocephalus ursinus) in the Moremi Game Reserve, Botswana, was examined for differences between and within contexts, using calls given in response to predators (alarm wahoos), during male contests (contest wahoos), and when a male had become separated from the group (contact wahoos). Calls were recorded from adolescent, subadult, and adult males. In addition, male alarm calls were compared with those recorded from females. Despite their superficial acoustic similarity, the analysis revealed a number of significant differences between alarm, contest, and contact wahoos. Contest wahoos are given at a much higher rate, exhibit lower frequency characteristics, and have a longer "hoo" duration and a relatively louder "hoo" portion than alarm wahoos. Contact wahoos are acoustically similar to contest wahoos, but are given at a much lower rate. Both alarm and contest wahoos also exhibit significant differences among individuals. Some of the acoustic features that vary in relation to age and sex presumably reflect differences in body size, whereas others are possibly related to male stamina and endurance. The finding that calls serving markedly different functions constitute variants of the same general call type suggests that vocal production in nonhuman primates is evolutionarily constrained.

8.
Songs of humpback whales (Megaptera novaeangliae) have been studied for several years to gain deeper insight into intraspecific social interactions. Such a complex acoustic display is thought to play an important role in both the mating ritual and male-male interactions. Hence, the need to classify the unit constituents of a song objectively and systematically has become crucial for processing large data sets. We propose a new approach to song segmentation based on the definition of subunits. Songs of humpback whales collected in Madagascar in August 2008 and 2009 were segmented using an energy detector with a double threshold and classified automatically with a clustering algorithm using MFCCs. The results, which were checked against a manual classification, showed that using the subunit rather than the unit as the basic constituent of a song produces a more accurate classification of the calls. Such results were expected given that subunits are generally shorter in duration and less variable in their frequency content, so their characteristics are more easily captured by an automatic classifier. Analysis of songs from other years and other areas of the world is necessary to corroborate the repeatability of the proposed method.
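The segmentation stage can be illustrated with a minimal double-threshold energy detector in plain NumPy: a unit opens when frame energy rises above a high threshold and closes when it falls below a lower one, which suppresses chatter near a single boundary. Frame length and threshold values here are illustrative, not those from the study.

```python
import numpy as np

def detect_units(x, fs, frame_ms=10.0, high_db=-20.0, low_db=-30.0):
    """Return (start_s, end_s) pairs found by a double-threshold energy detector."""
    n = int(fs * frame_ms / 1000)
    n_frames = len(x) // n
    frames = x[: n_frames * n].reshape(n_frames, n)
    e = 10 * np.log10(np.mean(frames ** 2, axis=1) + 1e-12)
    e -= e.max()  # energy in dB relative to the loudest frame

    units, start, active = [], 0, False
    for i, v in enumerate(e):
        if not active and v > high_db:     # rising edge: unit starts
            active, start = True, i
        elif active and v < low_db:        # falling edge: unit ends
            active = False
            units.append((start * n / fs, i * n / fs))
    if active:                             # unit still open at end of signal
        units.append((start * n / fs, n_frames * n / fs))
    return units

fs = 8000
t = np.arange(int(2.0 * fs)) / fs
x = 0.001 * np.random.default_rng(1).standard_normal(len(t))          # background
x[int(0.3 * fs):int(0.6 * fs)] += np.sin(2 * np.pi * 400 * t[:int(0.3 * fs)])  # unit 1
x[int(1.2 * fs):int(1.5 * fs)] += np.sin(2 * np.pi * 700 * t[:int(0.3 * fs)])  # unit 2
units = detect_units(x, fs)
```

Each detected segment would then be passed to MFCC extraction and clustering; that classification stage is omitted here.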

9.
Zebra finches produce a learned song that is rich in harmonic structure and highly stereotyped. More is generally known about how birds learn and produce this song than how they perceive it. Here, zebra finches were trained with operant techniques to discriminate changes in natural and synthetic song motifs. Results show that zebra finches are quite insensitive to changes to the overall envelope of the motif since they were unable to discriminate more than a doubling in inter-syllable interval durations. By contrast, they were quite sensitive to changes in individual syllables. A series of tests with synthetic song syllables, including some made of frozen noise and Schroeder harmonic complexes, showed that birds used a suite of acoustic cues in normal listening but they could also distinguish among syllables simply on the basis of the temporal fine structure in the waveform. Thus, while syllable perception is maintained by multiple redundant cues, temporal fine structure features alone are sufficient for syllable discrimination and may be more important for communication than previously thought.
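Schroeder harmonic complexes, mentioned above, are equal-amplitude harmonic sums whose component phases follow a quadratic rule, which spreads energy in time and flattens the temporal envelope while leaving the magnitude spectrum unchanged. A minimal sketch, using one common phase convention (±πn(n+1)/N); f0, harmonic count, and duration are illustrative:

```python
import numpy as np

def schroeder_complex(f0, n_harmonics, fs, dur, sign=1):
    """Equal-amplitude harmonic complex with Schroeder phases
    (convention: sign * pi * n * (n + 1) / N), peak-normalized."""
    t = np.arange(int(dur * fs)) / fs
    x = np.zeros_like(t)
    for n in range(1, n_harmonics + 1):
        x += np.cos(2 * np.pi * n * f0 * t + sign * np.pi * n * (n + 1) / n_harmonics)
    return x / np.max(np.abs(x))

fs = 44100
flat = schroeder_complex(200, 30, fs, 0.1)  # Schroeder phases: flat envelope
t = np.arange(int(0.1 * fs)) / fs
peaky = sum(np.cos(2 * np.pi * n * 200 * t) for n in range(1, 31))
peaky = peaky / np.max(np.abs(peaky))       # cosine phases: peaky envelope
```

Flipping `sign` reverses the within-period frequency sweep without changing the spectrum, which is why such stimuli are useful for isolating temporal fine structure cues.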

10.
Vocal recognition is common among songbirds, and provides an excellent model system to study the perceptual and neurobiological mechanisms for processing natural vocal communication signals. Male European starlings, a species of songbird, learn to recognize the songs of multiple conspecific males by attending to stereotyped acoustic patterns, and these learned patterns elicit selective neuronal responses in auditory forebrain neurons. The present study investigates the perceptual grouping of spectrotemporal acoustic patterns in starling song at multiple temporal scales. The results show that permutations in sequencing of submotif acoustic features have significant effects on song recognition, and that these effects are specific to songs that comprise learned motifs. The observations suggest that (1) motifs form auditory objects embedded in a hierarchy of acoustic patterns, (2) that object-based song perception emerges without explicit reinforcement, and (3) that multiple temporal scales within the acoustic pattern hierarchy convey information about the individual identity of the singer. The authors discuss the results in the context of auditory object formation and talker recognition.  相似文献   

11.
During the breeding season, the underwater vocalizations and calling rates of adult male leopard seals are highly stereotyped. In contrast, sub-adult males have more variable acoustic behavior. Although adult males produce only five stereotyped broadcast calls as part of their long-range underwater breeding displays, the sub-adults have a greater repertoire that includes the adult-like broadcast calls as well as variants of these. Whether this extended repertoire has a social function is unknown due to the paucity of behavioral data for this species. The broadcast calls of the sub-adults are less stereotyped in their acoustic characteristics, and they have a more variable calling rate. These age-related differences have major implications for geographic variation studies, where the acoustic behavior of different populations is compared, as well as for acoustic surveying studies, where numbers of calls are used to indicate numbers of individuals present. Sampling regimes that unknowingly include recordings from sub-adult animals will artificially exaggerate differences between populations and numbers of calling animals. The acoustic behavior of sub-adult and adult male leopard seals was significantly different, and although this study does not show evidence that these differences reflect vocal learning in the male leopard seal, it does suggest that contextual learning may be present.

12.
Relatively few empirical data are available concerning the role of auditory experience in nonverbal human vocal behavior, such as laughter production. This study compared the acoustic properties of laughter in 19 congenitally, bilaterally, and profoundly deaf college students and in 23 normally hearing control participants. Analyses focused on degree of voicing, mouth position, air-flow direction, temporal features, relative amplitude, fundamental frequency, and formant frequencies. Results showed that laughter produced by the deaf participants was fundamentally similar to that produced by the normally hearing individuals, which in turn was consistent with previously reported findings. Finding comparable acoustic properties in the sounds produced by deaf and hearing vocalizers confirms the presumption that laughter is importantly grounded in human biology, and that auditory experience with this vocalization is not necessary for it to emerge in species-typical form. Some differences were found between the laughter of deaf and hearing groups; the most important being that the deaf participants produced lower-amplitude and longer-duration laughs. These discrepancies are likely due to a combination of the physiological and social factors that routinely affect profoundly deaf individuals, including low overall rates of vocal fold use and pressure from the hearing world to suppress spontaneous vocalizations.

13.
We provide a direct demonstration that nonhuman primates spontaneously perceive changes in formant frequencies in their own species-typical vocalizations, without training or reinforcement. Formants are vocal tract resonances leading to distinctive spectral prominences in the vocal signal, and provide the acoustic determinant of many key phonetic distinctions in human languages. We developed algorithms for manipulating formants in rhesus macaque calls. Using the resulting computer-manipulated calls in a habituation/dishabituation paradigm, with blind video scoring, we show that rhesus macaques spontaneously respond to a change in formant frequencies within the normal macaque vocal range. Lack of dishabituation to a "synthetic replica" signal demonstrates that dishabituation was not due to an artificial quality of synthetic calls, but to the formant shift itself. These results indicate that formant perception, a significant component of human voice and speech perception, is a perceptual ability shared with other primates.

14.
Two experiments investigating the effects of auditory stimulation delivered via a Nucleus multichannel cochlear implant upon vowel production in adventitiously deafened adult speakers are reported. The first experiment contrasts vowel formant frequencies produced without auditory stimulation (implant processor OFF) to those produced with auditory stimulation (processor ON). Significant shifts in second formant frequencies were observed for intermediate vowels produced without auditory stimulation; however, no significant shifts were observed for the point vowels. Higher first formant frequencies occurred in five of eight vowels when the processor was turned ON versus OFF. A second experiment contrasted productions of the word "head" produced with a FULL map, OFF condition, and a SINGLE channel condition that restricted the amount of auditory information received by the subjects. This experiment revealed significant shifts in second formant frequencies between FULL map utterances and the other conditions. No significant differences in second formant frequencies were observed between SINGLE channel and OFF conditions. These data suggest auditory feedback information may be used to adjust the articulation of some speech sounds.

15.
This study investigates the effects of speaking condition and auditory feedback on vowel production by postlingually deafened adults. Thirteen cochlear implant users produced repetitions of nine American English vowels prior to implantation, and at one month and one year after implantation. There were three speaking conditions (clear, normal, and fast), and two feedback conditions after implantation (implant processor turned on and off). Ten normal-hearing controls were also recorded once. Vowel contrasts in the formant space (expressed in mels) were larger in the clear than in the fast condition, both for controls and for implant users at all three time samples. Implant users also produced differences in duration between clear and fast conditions that were in the range of those obtained from the controls. In agreement with prior work, the implant users had contrast values lower than did the controls. The implant users' contrasts were larger with hearing on than off and improved from one month to one year postimplant. Because the controls and implant users responded similarly to a change in speaking condition, it is inferred that auditory feedback, although demonstrably important for maintaining normative values of vowel contrasts, is not needed to maintain the distinctiveness of those contrasts in different speaking conditions.
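Expressing formant contrasts in mels, as above, presumably uses the standard engineering formula mel = 2595·log10(1 + f/700). A minimal sketch of a two-formant vowel distance in mel space; the /i/ and /a/ formant values are illustrative textbook-style numbers, not data from the study:

```python
import math

def hz_to_mel(f):
    """O'Shaughnessy's engineering mel formula."""
    return 2595.0 * math.log10(1.0 + f / 700.0)

def vowel_distance_mels(v1, v2):
    """Euclidean distance between two (F1, F2) vowels, computed in mels."""
    d1 = hz_to_mel(v1[0]) - hz_to_mel(v2[0])
    d2 = hz_to_mel(v1[1]) - hz_to_mel(v2[1])
    return math.hypot(d1, d2)

# /i/ vs /a/ with illustrative adult male formant values (Hz)
contrast = vowel_distance_mels((270, 2290), (730, 1090))
```

Averaging such pairwise distances over a vowel set gives a single contrast measure per speaker and condition, which is the kind of quantity being compared across clear, normal, and fast speech.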

16.
Modifying the vocal tract alters a speaker's previously learned acoustic-articulatory relationship. This study investigated the contribution of auditory feedback to the process of adapting to vocal-tract modifications. Subjects said the word /tas/ while wearing a dental prosthesis that extended the length of their maxillary incisor teeth. The prosthesis affected /s/ productions and the subjects were asked to learn to produce "normal" /s/'s. They alternately received normal auditory feedback and noise that masked their natural feedback during productions. Acoustic analysis of the speakers' /s/ productions showed that the distribution of energy across the spectra moved toward that of normal, unperturbed production with increased experience with the prosthesis. However, the acoustic analysis did not show any significant differences in learning dependent on auditory feedback. By contrast, when naive listeners were asked to rate the quality of the speakers' utterances, productions made when auditory feedback was available were evaluated to be closer to the subjects' normal productions than when feedback was masked. The perceptual analysis showed that speakers were able to use auditory information to partially compensate for the vocal-tract modification. Furthermore, utterances produced during the masked conditions also improved over a session, demonstrating that the compensatory articulations were learned and available after auditory feedback was removed.

17.
This investigation determined whether the signal provided by the Cochlear Corporation Nucleus cochlear implant can convey enough speech information to induce a response to delayed auditory feedback (DAF), and whether prelingually deafened children who received a cochlear implant relatively late in their speech development are susceptible. Ten children with the Nucleus cochlear implant spoke simple phrases, first without and then with DAF. Three prelingually deafened subjects and the only two postlingually deafened subjects demonstrated longer phrase durations when speaking with DAF than without it. Two of the prelingually deafened subjects who demonstrated a response received their cochlear implants at the age of 5 years.

18.
The speech of a postlingually deafened preadolescent was recorded and analyzed while a single-electrode cochlear implant (3M/House) was in operation, on two occasions after it failed (1 day and 18 days) and on three occasions after stimulation of a multichannel cochlear implant (Nucleus 22) (1 day, 6 months, and 1 year). Listeners judged 3M/House tokens to be the most normal until the subject had one year's experience with the Nucleus device. Spectrograms showed less aspiration, better formant definition and longer final frication and closure duration post-Nucleus stimulation (6 MO. NUCLEUS and 1 YEAR NUCLEUS) relative to the 3M/House and no auditory feedback conditions. Acoustic measurements after loss of auditory feedback (1 DAY FAIL and 18 DAYS FAIL) indicated a constriction of vowel space. Appropriately higher fundamental frequency for stressed than unstressed syllables, an expansion of vowel space and improvement in some aspects of production of voicing, manner and place of articulation were noted one year post-Nucleus stimulation. Loss of auditory feedback results are related to the literature on the effects of postlingual deafness on speech. Nucleus and 3M/House effects on speech are discussed in terms of speech production studies of single-electrode and multichannel patients.

19.
The effect of auditory feedback on speech production was investigated in five postlingually deafened adults implanted with the 22-channel Nucleus device. Changes in speech production were measured before implant and 1, 6, and 24 months postimplant. Acoustic measurements included: F1 and F2 of vowels in word-in-isolation and word-in-sentence context, voice-onset-time (VOT), spectral range of sibilants, fundamental frequency (F0) of word-in-isolation and word-in-sentence context, and word and sentence duration. Perceptual ratings of speech quality were done by ten listeners. The significant changes after cochlear implantation included: a decrease of F0, word and sentence duration, and F1 values, and an increase of voiced plosives' voicing lead (from positive to negative VOT values) and fricatives' spectral range. Significant changes occurred until 2 years postimplant, when most measured values fell within Hebrew norms. Listeners were found to be sensitive to the acoustic changes in the speech from preimplant to 1, 6, and 24 months postimplant. Results suggest that when hearing is restored in postlingually deafened adults, calibration of speech is not immediate and occurs over time depending on the age-at-onset of deafness, years of deafness, and perception skills. The results also concur with the hypothesis that the observed changes in some speech parameters are an indirect consequence of intentional changes in other articulatory parameters.

20.
Field studies indicate that Japanese macaque (Macaca fuscata) communication signals vary with the social situation in which they occur [S. Green, "Variation of vocal pattern with social situation in the Japanese monkey (Macaca fuscata): A field study," in Primate Behavior, edited by L. A. Rosenblum (Academic, New York, 1975), Vol. 4]. A significant acoustic property of the contact calls produced by these primates is the temporal position of a frequency peak within the vocalization, that is, an inflection from rising to falling frequency [May et al., "Significant features of Japanese macaque communication sounds: A psychophysical study," Anim. Behav. 36, 1432-1444 (1988)]. The experiments reported here are based on the hypothesis that Japanese macaques derive meaning from this temporally graded feature by parceling the acoustic variation inherent in natural contact calls into two functional categories, and thus exhibit behavior that is analogous to the categorical perception of speech sounds by humans. To test this hypothesis, Japanese macaques were trained to classify natural contact calls by performing operant responses that signified either an early or late frequency peak position. Then, the subjects were tested in a series of experiments that required them to generalize this behavior to synthetic calls representing a continuum of peak positions. Demonstration of the classical perceptual effects noted for human listeners suggests that categorical perception reflects a principle of auditory information processing that influences the perception of sounds in the communication systems not only of humans, but of animals as well.
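The graded feature, peak position within the call, can be illustrated with a toy early/late labeler over a pitch contour; the contours below and the 0.5 category boundary are illustrative, not the animals' measured boundary.

```python
import numpy as np

def peak_position(contour):
    """Relative position (0 to 1) of the frequency peak in a pitch contour."""
    contour = np.asarray(contour, dtype=float)
    return float(np.argmax(contour) / (len(contour) - 1))

def classify(contour, boundary=0.5):
    """Label a contour 'early' or 'late', analogous to the operant responses."""
    return "early" if peak_position(contour) < boundary else "late"

t = np.linspace(0.0, 1.0, 101)
# Rise-then-fall contours with peaks at 20% and 80% of call duration.
early_call = np.where(t < 0.2, 600 + 1000 * t, 800 - 250 * (t - 0.2))
late_call = np.where(t < 0.8, 600 + 250 * t, 800 - 1000 * (t - 0.8))
```

Sweeping the peak position across a continuum of such synthetic contours and plotting the proportion of "late" responses is the shape of the generalization test described above.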
