Similar Articles
Found 20 similar articles.
1.
The present study examined auditory distance perception cues in a non-territorial songbird, the zebra finch (Taeniopygia guttata), and in a non-songbird, the budgerigar (Melopsittacus undulatus). Using operant conditioning procedures, three zebra finches and three budgerigars were trained to identify 1-m (Near) and 75-m (Far) recordings of three budgerigar contact calls, one male zebra finch song, and one female zebra finch call. Once the birds were trained on these endpoint stimuli, other stimuli were introduced into the operant task. These stimuli included recordings at intermediate distances and artificially altered stimuli simulating changes in overall amplitude, high-frequency attenuation, reverberation, and all three cues combined. By examining the distance cues (amplitude, high-frequency attenuation, and reverberation) separately, this study sought to determine which cue was the most salient for the birds. The results suggest that both species could scale the stimuli on a continuum from Near to Far and that amplitude was the most important cue for these birds in auditory distance perception, as it is in humans and other animals.
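
The three cues manipulated in this study lend themselves to simple signal-processing approximations. The sketch below is illustrative only, not the study's stimulus-preparation procedure; the spreading law is standard physics, but the lowpass cutoff rule and the reverberation constants are hypothetical placeholders.

```python
import numpy as np
from scipy.signal import butter, sosfilt, fftconvolve

def simulate_distance(x, fs, r, r_ref=1.0, rt60=1.0):
    """Toy rendering of the three auditory distance cues.

    - amplitude: spherical spreading, -6 dB per doubling of distance
    - high-frequency attenuation: first-order lowpass whose cutoff
      falls with distance (hypothetical absorption model)
    - reverberation: decaying-noise tail whose level relative to the
      direct sound grows with distance
    """
    g = r_ref / r                                  # 1/r spreading loss
    direct = g * x
    cutoff = min(12000.0 * np.sqrt(g), 0.45 * fs)  # illustrative rule only
    sos = butter(1, cutoff, btype="low", fs=fs, output="sos")
    direct = sosfilt(sos, direct)
    n = int(rt60 * fs)
    ir = np.random.randn(n) * np.exp(-6.9 * np.arange(n) / n)  # ~60 dB decay
    reverb = 0.01 * fftconvolve(x, ir)[: len(x)]   # tail level fixed w.r.t. r
    return direct + reverb
```

Because the reverberant tail is held at a fixed level while the direct sound scales as 1/r, the direct-to-reverberant ratio falls with simulated distance, as it does in real environments.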

2.
The ability of three species of birds to discriminate among selected harmonic complexes with fundamental frequencies varying from 50 to 1000 Hz was examined in behavioral experiments. The stimuli were synthetic harmonic complexes with waveform shapes altered by component phase selection, holding spectral and intensive information constant. Birds were able to discriminate between waveforms with randomly selected component phases and those with all components in cosine phase, as well as between positive and negative Schroeder-phase waveforms with harmonic periods as short as 1-2 ms. By contrast, human listeners are unable to make these discriminations at periods less than about 3-4 ms. Electrophysiological measures, including cochlear microphonic and compound action potential measurements to the same stimuli used in behavioral tests, showed differences between birds and gerbils paralleling, but not completely accounting for, the psychophysical differences observed between birds and humans. It appears from these data that birds can hear the fine temporal structure in complex waveforms over very short periods. These data show birds are capable of more precise temporal resolution for complex sounds than is observed in humans and perhaps other mammals. Physiological data further show that at least part of the mechanisms underlying this high temporal resolving power resides at the peripheral level of the avian auditory system.
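
Schroeder-phase complexes have flat amplitude spectra but maximally dispersed component phases; flipping the phase sign time-reverses the fine structure within each harmonic period without changing the spectrum. A minimal sketch of how such stimuli are commonly synthesized (component count, f0, and duration here are arbitrary examples):

```python
import numpy as np

def schroeder_complex(f0, n_harm, fs, dur, sign=+1):
    """Harmonic complex with Schroeder (1970) component phases.

    sign=+1 and sign=-1 give positive- and negative-Schroeder waveforms:
    identical flat amplitude spectra, time-reversed fine structure
    within each harmonic period of 1/f0 seconds.
    """
    t = np.arange(int(round(fs * dur))) / fs
    x = np.zeros_like(t)
    for n in range(1, n_harm + 1):
        phi = sign * np.pi * n * (n - 1) / n_harm
        x += np.cos(2 * np.pi * n * f0 * t + phi)
    return x / np.abs(x).max()  # normalize to unit peak

# A 2-ms harmonic period, near the birds' behavioral limit reported above:
pos = schroeder_complex(f0=500, n_harm=40, fs=44100, dur=0.4, sign=+1)
neg = schroeder_complex(f0=500, n_harm=40, fs=44100, dur=0.4, sign=-1)
```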

3.

Background  

Previously we have found that cannabinoid treatment of zebra finches during sensorimotor stages of vocal development alters song patterns produced in adulthood. Such persistently altered behavior must be attributable to changes in physiological substrates responsible for song. We are currently working to identify the nature of such physiological changes, and to understand how they contribute to altered vocal learning. One possibility is that developmental agonist exposure results in altered expression of elements of endocannabinoid signaling systems. To test this hypothesis we have studied effects of the potent cannabinoid receptor agonist WIN55212-2 (WIN) on endocannabinoid levels and densities of CB1 immunostaining in zebra finch brain.

4.
The ability of normally hearing and hearing-impaired subjects to use temporal fine structure information in complex tones was measured. Subjects were required to discriminate a harmonic complex tone from a tone in which all components were shifted upwards by the same amount in Hz, in a three-alternative, forced-choice task. The tones either contained five equal-amplitude components (non-shaped stimuli) or contained many components, but were passed through a fixed bandpass filter to reduce excitation pattern changes (shaped stimuli). Components were centered at nominal harmonic numbers (N) 7, 11, and 18. For the shaped stimuli, hearing-impaired subjects performed much more poorly than normally hearing subjects, with most of the former scoring no better than chance when N=11 or 18, suggesting that they could not access the temporal fine structure information. Performance for the hearing-impaired subjects was significantly improved for the non-shaped stimuli, presumably because they could benefit from spectral cues. It is proposed that normal-hearing subjects can use temporal fine structure information provided the spacing between fine structure peaks is not too small relative to the envelope period, but subjects with moderate cochlear hearing loss make little use of temporal fine structure information for unresolved components.
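
The core stimulus manipulation, shifting every component upward by the same ΔF so that the envelope repetition rate stays near F0 while the temporal fine structure changes, is easy to sketch; the same construction underlies the frequency-shift task revisited in item 20 below. This is a bare sketch with illustrative parameter values, and it omits the fixed bandpass shaping and background noise the study used to suppress excitation-pattern cues:

```python
import numpy as np

def shifted_complex(f0, n_lowest, n_comps, df_hz, fs, dur):
    """Components at n*f0 + df_hz: df_hz=0 gives the harmonic reference
    tone, df_hz>0 the inharmonic comparison. The envelope repetition
    rate stays near f0; only the fine structure shifts."""
    t = np.arange(int(round(fs * dur))) / fs
    x = np.zeros_like(t)
    for n in range(n_lowest, n_lowest + n_comps):
        x += np.sin(2 * np.pi * (n * f0 + df_hz) * t)
    return x

ref = shifted_complex(f0=200, n_lowest=11, n_comps=5, df_hz=0, fs=44100, dur=0.3)
tgt = shifted_complex(f0=200, n_lowest=11, n_comps=5, df_hz=50, fs=44100, dur=0.3)
```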

5.
Evidence is presented that the basic vocalized sound produced by some cockatoos, specifically the Australian sulfur-crested cockatoo (Cacatua galerita) and the gang-gang cockatoo (Callocephalon fimbriatum), has a chaotic acoustic structure rather than the harmonic structure characteristic of most birdsongs. These findings support those of Fee et al. [Nature (London) 395, 67-71 (1998)] on nonlinear period-doubling transitions in the song of the zebra finch (Taeniopygia guttata). It is suggested that syllables with chaotic structure may be a feature of the songs of many birds.

6.
Two recent accounts of the acoustic cues which specify place of articulation in syllable-initial stop consonants claim that they are located in the initial portions of the CV waveform and are context-free. Stevens and Blumstein [J. Acoust. Soc. Am. 64, 1358-1368 (1978)] have described the perceptually relevant spectral properties of these cues as static, while Kewley-Port [J. Acoust. Soc. Am. 73, 322-335 (1983)] describes these cues as dynamic. Three perceptual experiments were conducted to test predictions derived from these accounts. Experiment 1 confirmed that acoustic cues for place of articulation are located in the initial 20-40 ms of natural stop-vowel syllables. Next, short synthetic CV's modeled after natural syllables were generated using either a digital, parallel-resonance synthesizer in experiment 2 or linear prediction synthesis in experiment 3. One set of synthetic stimuli preserved the static spectral properties proposed by Stevens and Blumstein. Another set of synthetic stimuli preserved the dynamic properties suggested by Kewley-Port. Listeners in both experiments identified place of articulation significantly better from stimuli which preserved dynamic acoustic properties than from those based on static onset spectra. Evidently, the dynamic structure of the initial stop-vowel articulatory gesture can be preserved in context-free acoustic cues which listeners use to identify place of articulation.

7.
Colonies or communities of animals such as fishes, frogs, seabirds, or marine mammals can be noisy. Although vocal communication between clearly identified sender(s) and receiver(s) has been well studied, the properties of the noisy sound that results from the acoustic network of a colony of gregarious animals have received less attention. The resulting sound could nonetheless convey some information about the emitting group. Using custom-written software for automatic detection of vocalizations occurring over many hours of recordings, this study reports acoustic features of communal vocal activities in a gregarious species, the zebra finch (Taeniopygia guttata). By biasing the sex ratio and using two different housing conditions (individual versus communal housing), six groups of zebra finches were generated, with six different social structures that varied both in terms of sex-composition and proportion of paired individuals. The results showed that the rate of emission and the acoustic dynamic both depended on the social structure. In particular, the vocal activity of a group of zebra finches depended mainly on the number of unpaired birds, i.e., individuals not part of a stably bonded pair.
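
The study's detector is custom software whose internals are not given here. As a rough illustration of the general approach such tools take, a naive energy-threshold detector over framed signal power might look like this (frame length, threshold, and minimum duration are all hypothetical values):

```python
import numpy as np

def detect_calls(x, fs, thresh_db=-30.0, win_s=0.01, min_dur_s=0.03):
    """Return (onset_s, offset_s) pairs where framed power exceeds a
    threshold (dB re the loudest frame) for at least min_dur_s."""
    win = int(win_s * fs)
    frames = x[: len(x) // win * win].reshape(-1, win)
    db = 10 * np.log10(np.mean(frames ** 2, axis=1) + 1e-12)
    active = (db - db.max()) > thresh_db
    events, start = [], None
    for i, a in enumerate(np.append(active, False)):  # sentinel closes runs
        if a and start is None:
            start = i
        elif not a and start is not None:
            if (i - start) * win_s >= min_dur_s:
                events.append((start * win_s, i * win_s))
            start = None
    return events
```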

8.
Three zebra finches were trained with operant techniques to respond to pure tones. Absolute thresholds were obtained for nine durations of a 3-kHz tone and five durations of a 1-kHz tone. The temporal integration functions were described using the negative exponential function proposed by Plomp and Bouman [J. Acoust. Soc. Am. 31, 749-758 (1959)]. The time constants obtained for zebra finches are about 250 ms, which are similar to those reported for a number of species, including humans and other bird species.
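
In that model, threshold intensity for a tone of duration t is proportional to 1/(1 - e^(-t/tau)), so with tau ~ 250 ms the predicted threshold elevation in dB relative to a long-duration tone can be computed directly:

```python
import numpy as np

def threshold_shift_db(duration_s, tau=0.25):
    """Threshold elevation (dB) predicted by the Plomp and Bouman (1959)
    exponential integration model; tau ~ 0.25 s as found for zebra finches."""
    return -10 * np.log10(1 - np.exp(-duration_s / tau))

for d in (0.01, 0.05, 0.25, 1.0):
    print(f"{d * 1000:6.0f} ms: +{threshold_shift_db(d):4.1f} dB")
# prints roughly +14.1, +7.4, +2.0, and +0.1 dB
```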

9.

Background  

Recent evidence suggests that some sex differences in brain and behavior might result from direct genetic effects, and not solely from the organizational effects of steroid hormones. The present study examined the potential role for sex-biased gene expression during development of sexually dimorphic singing behavior and associated song nuclei in juvenile zebra finches.

10.

Background  

Steroids affect many tissues, including the brain. In the zebra finch, the estrogenic steroid estradiol (E2) is especially effective at promoting growth of the neural circuit specialized for song. In this species, only the males sing and they have a much larger and more interconnected song circuit than females. Thus, it was surprising that the gene for 17β-hydroxysteroid dehydrogenase type 4 (HSD17B4), an enzyme that converts E2 to a less potent estrogen, had been mapped to the Z sex chromosome. As a consequence, it was likely that HSD17B4 was differentially expressed in males (ZZ) and females (ZW) because dosage compensation of Z chromosome genes is incomplete in birds. If a higher abundance of HSD17B4 mRNA in males than females was translated into functional enzyme in the brain, then contrary to expectation, males could produce less E2 in their brains than females.

11.
A population study of auditory nerve responses in the bullfrog, Rana catesbeiana, analyzed the relative contributions of spectral and temporal coding in representing a complex, species-specific communication signal at different stimulus intensities and in the presence of background noise. At stimulus levels of 70 and 80 dB SPL, levels that approximate those received during communication in the natural environment, average rate profiles plotted over fiber characteristic frequency do not reflect the detailed spectral fine structure of the synthetic call. Rate profiles do not change significantly in the presence of background noise. In ambient (no noise) and low noise conditions, both amphibian papilla and basilar papilla fibers phase lock strongly to the waveform periodicity (fundamental frequency) of the synthetic advertisement call. The higher harmonic spectral fine structure of the synthetic call is not accurately reflected in the timing of fiber firing, because firing is "captured" by the fundamental frequency. Only a small number of fibers synchronize preferentially to any harmonic in the call other than the first, and none synchronize to any higher than the third, even when fiber characteristic frequency is close to one of these higher harmonics. Background noise affects fiber temporal responses in two ways: It can reduce synchronization to the fundamental frequency, until fiber responses are masked; or it can shift synchronization from the fundamental to the second or third harmonic of the call. This second effect results in a preservation of temporal coding at high noise levels. These data suggest that bullfrog eighth nerve fibers extract the waveform periodicity of multiple-harmonic stimuli primarily by a temporal code.
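
Phase locking of this kind is conventionally quantified with the Goldberg and Brown vector strength. A minimal implementation (spike times in seconds; the example spike train is synthetic):

```python
import numpy as np

def vector_strength(spike_times, freq_hz):
    """Vector strength: 1.0 = every spike at the same stimulus phase,
    ~0.0 = spikes uniformly distributed across the stimulus cycle."""
    phases = 2 * np.pi * freq_hz * np.asarray(spike_times)
    return float(np.hypot(np.cos(phases).mean(), np.sin(phases).mean()))

# Random spike times give near-zero synchronization to any test frequency:
spikes = np.sort(np.random.default_rng(0).uniform(0, 1.0, 200))
print(vector_strength(spikes, 100.0))
```

Evaluating the same spike train at the fundamental and at each harmonic frequency is how one would test, fiber by fiber, which component "captures" the timing of firing.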

12.
Behavioral responses obtained from chinchillas trained to discriminate a cosine-phase harmonic tone complex from wideband noise indicate that the perception of 'pitch' strength in chinchillas is largely influenced by periodicity information in the stimulus envelope. The perception of 'pitch' strength was examined in chinchillas in a stimulus generalization paradigm after animals had been retrained to discriminate infinitely iterated rippled noise from wideband noise. Retrained chinchillas gave larger behavioral responses to test stimuli having strong fine structure periodicity, but weak envelope periodicity. That is, chinchillas learn to use the information in the fine structure and consequently, their perception of 'pitch' strength is altered. Behavioral responses to rippled noises having similar periodicity strengths, but large spectral differences were also tested. Responses to these rippled noises were similar, suggesting a temporal analysis can be used to account for the behavior. Animals were then retested using the cosine-phase harmonic tone complex as the expected signal stimulus. Generalization gradients returned to those obtained originally in the naïve condition, suggesting that chinchillas do not remain "fine structure listeners," but rather revert back to being "envelope listeners" when the periodicity strength in the envelope of the expected stimulus is high.
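
Infinitely iterated rippled noise is standardly generated with a feedback delay-and-add network, y[n] = x[n] + g * y[n - d], which is a one-line recursive filter; the gain magnitude must stay below 1 for stability. A sketch with illustrative parameter values:

```python
import numpy as np
from scipy.signal import lfilter

def iirn(noise, delay_samples, gain=0.9):
    """Infinitely iterated rippled noise via a feedback comb filter,
    y[n] = x[n] + gain * y[n - delay]. Yields strong fine-structure
    periodicity at fs / delay_samples."""
    a = np.zeros(delay_samples + 1)
    a[0], a[-1] = 1.0, -gain
    return lfilter([1.0], a, noise)

fs = 44100
x = np.random.default_rng(1).standard_normal(fs)
y = iirn(x, delay_samples=fs // 250)  # 'pitch' near 250 Hz
```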

13.
It has been suggested [e.g., Strange et al., J. Acoust. Soc. Am. 74, 695-705 (1983); Verbrugge and Rakerd, Language Speech 29, 39-57 (1986)] that the temporal margins of vowels in consonantal contexts, consisting mainly of the rapid CV and VC transitions of CVC's, contain dynamic cues to vowel identity that are not available in isolated vowels and that may be perceptually superior in some circumstances to cues which are inherent to the vowels proper. However, this study shows that vowel-inherent formant targets and cues to vowel-inherent spectral change (measured from nucleus to offglide sections of the vowel itself) persist in the margins of /bVb/ syllables, confirming a hypothesis of Nearey and Assmann [J. Acoust. Soc. Am. 80, 1297-1308 (1986)]. Experiments were conducted to test whether listeners might be using such vowel-inherent, rather than coarticulatory information to identify the vowels. In the first experiment, perceptual tests using "hybrid silent center" syllables (i.e., syllables which contain only brief initial and final portions of the original syllable, and in which speaker identity changes from the initial to the final portion) show that listeners' error rates and confusion matrices for vowels in /bVb/ syllables are very similar to those for isolated vowels. These results suggest that listeners are using essentially the same type of information in essentially the same way to identify both kinds of stimuli. Statistical pattern recognition models confirm the relative robustness of nucleus and vocalic offglide cues and can predict reasonably well listeners' error patterns in all experimental conditions, though performance for /bVb/ syllables is somewhat worse than for isolated vowels. The second experiment involves the use of simplified synthetic stimuli, lacking consonantal transitions, which are shown to provide information that is nearly equivalent phonetically to that of the natural silent center /bVb/ syllables (from which the target measurements were extracted). Although no conclusions are drawn about other contexts, for speakers of Western Canadian English coarticulatory cues appear to play at best a minor role in the perception of vowels in /bVb/ context, while vowel-inherent factors dominate listeners' perception.

14.
Young deaf children using a cochlear implant develop speech abilities on the basis of speech temporal-envelope signals distributed over a limited number of frequency bands. A Headturn Preference Procedure was used to measure looking times in 6-month-old, normal-hearing infants during presentation of repeating or alternating sequences composed of different tokens of /aba/ and /apa/ processed to retain envelope information below 64 Hz while degrading temporal fine structure cues. Infants attended longer to the alternating sequences, indicating that they perceive the voicing contrast on the basis of envelope cues alone in the absence of fine spectral and temporal structure information.

15.
Channel vocoders using either tone or band-limited noise carriers have been used in experiments to simulate cochlear implant processing in normal-hearing listeners. Previous results from these experiments have suggested that the two vocoder types produce speech of nearly equal intelligibility in quiet conditions. The purpose of this study was to further compare the performance of tone and noise-band vocoders in both quiet and noisy listening conditions. In each of four experiments, normal-hearing subjects were better able to identify tone-vocoded sentences and vowel-consonant-vowel syllables than noise-vocoded sentences and syllables, both in quiet and in the presence of either speech-spectrum noise or two-talker babble. An analysis of consonant confusions for listening in both quiet and speech-spectrum noise revealed significantly different error patterns that were related to each vocoder's ability to produce tone or noise output that accurately reflected the consonant's manner of articulation. Subject experience was also shown to influence intelligibility. Simulations using a computational model of modulation detection suggest that the noise vocoder's disadvantage is in part due to the intrinsic temporal fluctuations of its carriers, which can interfere with temporal fluctuations that convey speech recognition cues.
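
A channel vocoder of the kind compared here splits the input into bands, extracts each band's envelope, and reimposes it on either a sine or a noise carrier. The sketch below is a minimal illustration, not the study's processing chain; band edges, filter order, and the Hilbert-envelope choice are all assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def vocode(x, fs, edges, carrier="tone"):
    """Minimal channel-vocoder sketch: bandpass analysis, Hilbert
    envelope extraction, and resynthesis on a sine carrier at each
    band's geometric center or on a band-limited noise carrier."""
    rng = np.random.default_rng(0)
    t = np.arange(len(x)) / fs
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        env = np.abs(hilbert(sosfiltfilt(sos, x)))
        if carrier == "tone":
            c = np.sin(2 * np.pi * np.sqrt(lo * hi) * t)  # deterministic
        else:
            c = sosfiltfilt(sos, rng.standard_normal(len(x)))  # fluctuating
        out += env * c
    return out / np.abs(out).max()
```

The carrier choice embodies the modeling point of the abstract: a sine carrier has no intrinsic envelope fluctuations, whereas a band-limited noise carrier fluctuates randomly and can interfere with the speech envelope it carries.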

16.
At a cocktail party, listeners must attend selectively to a target speaker and segregate that speaker's speech from distracting speech sounds uttered by other speakers. To solve this task, listeners can draw on a variety of vocal, spatial, and temporal cues. Recently, Vestergaard et al. [J. Acoust. Soc. Am. 125, 1114-1124 (2009)] developed a concurrent-syllable task to control temporal glimpsing within segments of concurrent speech, and this allowed them to measure the interaction of glottal pulse rate and vocal tract length and reveal how the auditory system integrates information from independent acoustic modalities to enhance recognition. The current paper shows how the interaction of these acoustic cues evolves as the temporal overlap of syllables is varied. Temporal glimpses as short as 25 ms are observed to improve syllable recognition substantially when the target and distracter have similar vocal characteristics, but not when they are dissimilar. The effect of temporal glimpsing on recognition performance is strongly affected by the form of the syllable (consonant-vowel versus vowel-consonant), but it is independent of other phonetic features such as place and manner of articulation.

17.
Better place-coding of the fundamental frequency in cochlear implants
In current cochlear implant systems, the fundamental frequency F0 of a complex sound is encoded by temporal fluctuations in the envelope of the electrical signals presented on the electrodes. In normal hearing, the lower harmonics of a complex sound are resolved, in contrast with a cochlear implant system. In the present study, it is investigated whether "place-coding" of the first harmonic improves the ability of an implantee to discriminate complex sounds with different fundamental frequencies. Therefore, a new filter bank was constructed, for which the first harmonic is always resolved in two adjacent filters, and the balance between both filter outputs is directly related to the frequency of the first harmonic. The new filter bank was compared with a filter bank that is typically used in clinical processors, both with and without the presence of temporal cues in the stimuli. Four users of the LAURA cochlear implant participated in a pitch discrimination task to determine detection thresholds for F0 differences. The results show that these thresholds decrease noticeably for the new filter bank, if no temporal cues are present in the stimuli. If temporal cues are included, the differences between the results for both filter banks become smaller, but a clear advantage is still observed for the new filter bank. This demonstrates the feasibility of using place-coding for the fundamental frequency.
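
The paper's filter bank is not specified in this abstract; as a toy illustration of the balance principle it describes, linear cross-fading between two adjacent channels can encode the first harmonic's frequency by place (the linear weighting is an assumption, not the authors' design):

```python
def channel_balance(f1, f_lo, f_hi):
    """Toy place-coding: as the first harmonic f1 moves from the lower
    filter's center f_lo to the upper filter's center f_hi, output
    energy pans linearly from the lower to the upper channel."""
    w = (f1 - f_lo) / (f_hi - f_lo)
    w = min(max(w, 0.0), 1.0)  # clamp outside the crossover region
    return 1.0 - w, w

print(channel_balance(150.0, 125.0, 250.0))  # -> (0.8, 0.2)
```

With such a scheme, a small change in F0 shifts the relative stimulation of two fixed electrodes even when no temporal envelope cue is present.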

18.
A central aspect of the motor control of birdsong production is the capacity to generate diverse respiratory rhythms, which determine the coarse temporal pattern of song. The neural mechanisms that underlie this diversity of respiratory gestures and the resulting acoustic syllables are largely unknown. We show that the respiratory patterns of the highly complex and variable temporal organization of song in the canary (Serinus canaria) can be generated as solutions of a simple model describing the integration between song control and respiratory centers. This example suggests that subharmonic behavior can play an important role in providing a complex variety of responses with minimal neural substrate.
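
The subharmonic behavior invoked here is a generic property of periodically forced nonlinear oscillators: for suitable drive amplitudes and frequencies the response can lock at 1/2, 1/3, ... of the drive frequency, yielding several distinct rhythms from one substrate. The sketch below is a generic Duffing-type oscillator, not the authors' model, and every constant is an arbitrary illustration; whether subharmonics appear depends on the parameter region explored.

```python
import numpy as np

def forced_oscillator(f_drive, amp, dur=2.0, fs=20000.0):
    """Semi-implicit Euler integration of a damped, periodically forced
    Duffing-type oscillator:
        x'' + g*x' + w0**2 * x + b*x**3 = amp * cos(2*pi*f_drive*t).
    In some drive regimes the response period is an integer multiple of
    the drive period (subharmonic locking)."""
    g, w0, b = 30.0, 2 * np.pi * 30.0, 1e5   # arbitrary example constants
    dt = 1.0 / fs
    n = int(dur * fs)
    x, v = np.zeros(n), 0.0
    for i in range(1, n):
        t = i * dt
        a = (amp * np.cos(2 * np.pi * f_drive * t)
             - g * v - w0**2 * x[i - 1] - b * x[i - 1]**3)
        v += a * dt
        x[i] = x[i - 1] + v * dt
    return x

resp = forced_oscillator(f_drive=60.0, amp=2e4)  # inspect for period-2 locking
```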

19.
Thresholds for discriminating the fundamental frequency (F0) of a complex tone, F0DLs, are small when low harmonics are present, but increase when the number of the lowest harmonic, N, is above eight. To assess whether the relatively small F0DLs for N in the range 8-10 are based on (partly) resolved harmonics or on temporal fine structure information, F0DLs were measured as a function of N for tones with three successive harmonics which were added either in cosine or alternating phase. The center frequency was 2000 Hz, and N was varied by changing the mean F0. A background noise was used to mask combination tones. The value of F0 was roved across trials to force subjects to make within-trial comparisons. N was roved by +/-1 for every stimulus, to prevent subjects from using excitation pattern cues. F0DLs were not influenced by component phase for N = 6 or 7, but were smaller for cosine than for alternating phase once N exceeded 7, suggesting that temporal fine structure plays a role in this range. When the center frequency was increased to 5000 Hz, performance was much worse for low N, suggesting that phase locking is important for obtaining low F0DLs with resolved harmonics.
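
The phase manipulation is simple to state in code: in "alternating" phase, successive components alternate between cosine and sine phase, which doubles the envelope repetition rate relative to all-cosine phase while leaving the power spectrum unchanged. A sketch (levels, roving, and the background noise are omitted; the f0 in the usage example just places the center component near 2000 Hz):

```python
import numpy as np

def three_harmonic_complex(f0, n_lowest, fs, dur, phase="cosine"):
    """Three successive harmonics (n_lowest .. n_lowest + 2) in cosine
    phase (phi = 0, 0, 0) or alternating phase (phi = 0, pi/2, 0);
    alternating phase halves the envelope period."""
    t = np.arange(int(round(fs * dur))) / fs
    x = np.zeros_like(t)
    for k, n in enumerate(range(n_lowest, n_lowest + 3)):
        phi = 0.0 if phase == "cosine" else (np.pi / 2) * (k % 2)
        x += np.cos(2 * np.pi * n * f0 * t - phi)
    return x

# N = 8 with the center component (harmonic 9) at 2000 Hz:
cos_ph = three_harmonic_complex(2000 / 9, 8, 44100, 0.2, "cosine")
alt_ph = three_harmonic_complex(2000 / 9, 8, 44100, 0.2, "alternating")
```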

20.
Moore and Sęk [J. Acoust. Soc. Am. 125, 3186-3193 (2009)] measured discrimination of a harmonic complex tone and a tone in which all harmonics were shifted upwards by the same amount in Hertz. Both tones were passed through a fixed bandpass filter and a background noise was used to mask combination tones. Performance was well above chance when the fundamental frequency was 800 Hz, and all audible components were above 8000 Hz. Moore and Sęk argued that this suggested the use of temporal fine structure information at high frequencies. However, the task may have been performed using excitation-pattern cues. To test this idea, performance on a similar task was measured as a function of level. The auditory filters broaden with increasing level, so performance based on excitation-pattern cues would be expected to worsen as level increases. The results did not show such an effect, suggesting that the task was not performed using excitation-pattern cues.
