Similar References
20 similar records found
1.
Relatively few empirical data are available concerning the role of auditory experience in nonverbal human vocal behavior, such as laughter production. This study compared the acoustic properties of laughter in 19 congenitally, bilaterally, and profoundly deaf college students and in 23 normally hearing control participants. Analyses focused on degree of voicing, mouth position, air-flow direction, temporal features, relative amplitude, fundamental frequency, and formant frequencies. Results showed that laughter produced by the deaf participants was fundamentally similar to that produced by the normally hearing individuals, which in turn was consistent with previously reported findings. Finding comparable acoustic properties in the sounds produced by deaf and hearing vocalizers confirms the presumption that laughter is importantly grounded in human biology, and that auditory experience with this vocalization is not necessary for it to emerge in species-typical form. Some differences were found between the laughter of deaf and hearing groups, the most important being that the deaf participants produced lower-amplitude and longer-duration laughs. These discrepancies are likely due to a combination of the physiological and social factors that routinely affect profoundly deaf individuals, including low overall rates of vocal fold use and pressure from the hearing world to suppress spontaneous vocalizations.

2.
Acoustic measurements believed to reflect glottal characteristics were made on recordings collected from 21 male speakers. The waveforms and spectra of three nonhigh vowels (/æ, ʌ, ɛ/) were analyzed to obtain acoustic parameters related to first-formant bandwidth, open quotient, spectral tilt, and aspiration noise. Comparisons were made with previous results obtained for 22 female speakers [H. M. Hanson, J. Acoust. Soc. Am. 101, 466-481 (1997)]. While there is considerable overlap across gender, the male data show lower average values and less interspeaker variation for all measures. In particular, the amplitude of the first harmonic relative to that of the third formant is 9.6 dB lower for the male speakers than for the female speakers, suggesting that spectral tilt is an especially significant parameter for differentiating male and female speech. These findings are consistent with fiberscopic studies which have shown that males tend to have a more complete glottal closure, leading to less energy loss at the glottis and less spectral tilt. Observations of the speech waveforms and spectra suggest the presence of a second glottal excitation within a glottal period for some of the male speakers. Possible causes and acoustic consequences of these second excitations are discussed.
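The tilt measure discussed here, the level of the first harmonic relative to the strongest harmonic near the third formant (often written H1-A3), can be estimated from a short voiced frame of speech. The Python sketch below is a minimal illustration, assuming that F0 and F3 estimates are supplied externally; the function name, window choice, and peak-search widths are illustrative assumptions, not details taken from the study.

import numpy as np

def spectral_tilt_h1_a3(frame, fs, f0, f3, nfft=4096):
    """Rough H1-A3 estimate (dB): level of the first harmonic minus the level
    of the strongest harmonic in the vicinity of the third formant."""
    windowed = frame * np.hanning(len(frame))
    spec = 20 * np.log10(np.abs(np.fft.rfft(windowed, nfft)) + 1e-12)
    freqs = np.fft.rfftfreq(nfft, 1 / fs)

    def peak_level(center, halfwidth):
        band = (freqs >= center - halfwidth) & (freqs <= center + halfwidth)
        return spec[band].max()

    h1 = peak_level(f0, 0.15 * f0)   # first harmonic
    a3 = peak_level(f3, 1.5 * f0)    # strongest harmonic near F3
    return h1 - a3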

3.
The purpose of this study was to determine the amount of variation for several vocal parameters across three times of the day (morning, noon, and afternoon). Connected speech samples from normal adult males (N = 10) and females (N = 10) were recorded during morning, early afternoon, and late afternoon. Results showed that males produced a statistically significant increase in speaking fundamental frequency (SFF) from morning to afternoon. Females did not demonstrate a statistically significant change in SFF across the three time periods. Vocal amplitude did not change significantly for either group. The SFF variability was higher for the females than for the males. Analysis of individual data revealed that the patterns of vocal change across the three times of day were not consistent among the subjects.
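Speaking fundamental frequency (SFF) is typically obtained by averaging frame-level F0 estimates over the voiced portions of a connected-speech sample. The abstract does not say which pitch-extraction method was used; the sketch below shows one common frame-level estimator (autocorrelation peak picking) purely for orientation, with an illustrative function name and search range.

import numpy as np

def frame_f0_autocorr(frame, fs, fmin=60.0, fmax=400.0):
    """Crude autocorrelation F0 estimate (Hz) for a single voiced frame;
    use frames of roughly 30-40 ms so the longest candidate lag fits."""
    frame = frame - np.mean(frame)
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + np.argmax(ac[lo:hi])   # strongest peak in the plausible lag range
    return fs / lag

# SFF for a sample = mean of frame_f0_autocorr over its voiced frames.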

4.
Inner hair cell (IHC) responses to tone-burst stimuli were measured from three locations in the apical half of the guinea pig cochlea. In addition to the measurement of ac receptor potentials, average intracellular voltages, reflecting both ac and dc components of the receptor potential, were computed and compared to determine how bandwidth changes with level. Companion phase measures were also obtained and evaluated. Data collected from turn 2, where best frequency (BF) is approximately 4000 Hz, indicate that frequency response functions are asymmetrical with steeper slopes above the best frequency of the cell. However, in turn 4, where BF is around 250 Hz, the opposite behavior is observed and the steepest slopes are measured below BF. The data imply that cochlear filters are generally asymmetrical with steeper slopes above BF. High-pass filtering by the middle ear serves to reduce this asymmetry in turn 3 and to reverse it in turn 4. Apical response patterns are used to assess the degree to which the middle ear transfer function, the IHC's velocity dependence, and the shunting effect of the helicotrema influence low-frequency hearing in guinea pigs. Implications for low-frequency hearing in man are also discussed.

5.
The purpose of this study was to investigate the relation between vocal tract deformation patterns obtained from statistical analyses of a set of area functions representative of a vowel repertoire, and the acoustic properties of a neutral vocal tract shape. Acoustic sensitivity functions were calculated for a mean area function based on seven different speakers. Specific linear combinations of the sensitivity functions corresponding to the first two formant frequencies were shown to possess essentially the same amplitude variation along the vocal tract length as the statistically derived deformation patterns reported in previous studies.
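For orientation, acoustic sensitivity functions are usually defined following the Fant-Pauli perturbation analysis (the paper's exact normalization is not reproduced here): the relative change in formant frequency F_n produced by a small change in the area function is, to first order,

\[
\frac{\Delta F_n}{F_n} \;\approx\; \int_0^{L} S_n(x)\,\frac{\Delta A(x)}{A(x)}\,dx,
\qquad
S_n(x) \;=\; \frac{KE_n(x) - PE_n(x)}{\int_0^{L}\big[KE_n(x') + PE_n(x')\big]\,dx'},
\]

where KE_n(x) and PE_n(x) are the kinetic and potential acoustic energy densities of mode n at distance x from the glottis and L is the vocal tract length. A linear combination of S_1 and S_2 therefore describes how a given deformation pattern moves F1 and F2 together, which is the comparison made in the abstract above.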

6.
This paper reports on a methodology for acoustically analyzing tone production in Cantonese. F0 offset versus F0 onset is plotted for a series of tokens for each of the six tones in the language. These are grouped according to tone type into a set of six ellipses. Qualitative visual observations regarding the degree of differentiation of the ellipses within the tonal space are summarized numerically using two indices, referred to here as Index 1 and Index 2. Index 1 is the ratio of the area of the speaker's tonal space to the average of the areas of the ellipses of the three target tones making up the tonal space. Index 2 is the ratio of the average distance between all six tonal ellipses to the average of the sum of the two axes for all six tone ellipses. Using this methodology, tonal differentiation is compared for three groups of speakers: normally hearing adults, normally hearing children aged 4-6 years, and prelinguistically deafened cochlear implant users aged 4-11 years. A potential conundrum regarding how tone production abilities can outstrip tone perception abilities is explained using the data from the acoustic analyses. It is suggested that young children of the age range tested are still learning to normalize for pitch level differences in tone production. Acoustic analysis of the data thus supports results from tone perception studies and suggests that the methodology is suitable for use in studies investigating tone production in both clinical and research contexts.
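As described, both indices are simple ratios computed from the fitted ellipses. The Python sketch below follows the two definitions directly; the input representation (ellipse areas, centers, and axis lengths) is assumed for illustration and is not taken from the paper.

import numpy as np
from itertools import combinations

def index1(tonal_space_area, target_tone_areas):
    """Index 1: area of the speaker's tonal space divided by the mean area
    of the ellipses of the three target tones defining that space."""
    return tonal_space_area / np.mean(target_tone_areas)

def index2(ellipse_centers, ellipse_axes):
    """Index 2: mean distance between all pairs of the six tone ellipses
    divided by the mean sum of the two axes over the six ellipses."""
    dists = [np.linalg.norm(np.subtract(a, b))
             for a, b in combinations(ellipse_centers, 2)]
    axis_sums = [major + minor for major, minor in ellipse_axes]
    return np.mean(dists) / np.mean(axis_sums)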

7.
Speech recognition was measured as a function of spectral resolution (number of spectral channels) and speech-to-noise ratio in normal-hearing (NH) and cochlear-implant (CI) listeners. Vowel, consonant, word, and sentence recognition were measured in five normal-hearing listeners, ten listeners with the Nucleus-22 cochlear implant, and nine listeners with the Advanced Bionics Clarion cochlear implant. Recognition was measured as a function of the number of spectral channels (noise bands or electrodes) at signal-to-noise ratios of +15, +10, +5, and 0 dB, and in quiet. Performance with three different speech processing strategies (SPEAK, CIS, and SAS) was similar across all conditions, and improved as the number of electrodes increased (up to seven or eight) for all conditions. For all noise levels, vowel and consonant recognition with the SPEAK speech processor did not improve with more than seven electrodes, while for normal-hearing listeners, performance continued to increase up to at least 20 channels. Speech recognition on more difficult speech materials (word and sentence recognition) showed a marginally significant increase in Nucleus-22 listeners from seven to ten electrodes. The average implant score on all processing strategies was poorer than scores of NH listeners with similar processing. However, the best CI scores were similar to the normal-hearing scores for that condition (up to seven channels). CI listeners with the highest performance level increased in performance as the number of electrodes increased up to seven, while CI listeners with low levels of speech recognition did not increase in performance as the number of electrodes was increased beyond four. These results quantify the effect of number of spectral channels on speech recognition in noise and demonstrate that most CI subjects are not able to fully utilize the spectral information provided by the number of electrodes used in their implant.
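For readers wanting to reproduce the masking conditions, mixing speech with noise at a nominal signal-to-noise ratio only requires scaling the masker to the desired power ratio. The sketch below is a generic illustration; the function name and the RMS-based SNR definition are assumptions rather than details from the study.

import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise so the speech-to-noise power ratio equals snr_db,
    then return the mixture (noise is truncated to the speech length)."""
    noise = noise[:len(speech)]
    p_s = np.mean(speech ** 2)
    p_n = np.mean(noise ** 2)
    gain = np.sqrt(p_s / (p_n * 10 ** (snr_db / 10)))
    return speech + gain * noise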

8.
As an alternative to subjective methods, an acoustic head simulator was constructed for hearing protector evaluation. The primary purpose of the device is for hearing protector testing and research under high-level steady-state and impulse noise environments. The design is based on the KEMAR manikin and therefore approximates the physical dimensions and the acoustical eardrum impedance of the median human adult. The head simulator includes a mechanical reproduction of the human circumaural and intraaural tissues with a silicone rubber material. A compliant head-neck system was constructed to approximate the vibrational characteristics of the human head in a sound field in order to simulate the inertia effect of earmuffs. The bone-conducted sounds are not mechanically reproduced in the design. Applications for the device are reported in a companion article [C. Giguère and H. Kunov, J. Acoust. Soc. Am. 85, 1197-1205 (1989)].

9.
The performance of traditional techniques of passive localization in ocean acoustics such as time-of-arrival (phase differences) and amplitude ratios measured by multiple receivers may be degraded when the receivers are placed on an underwater vehicle due to effects of scattering. However, knowledge of the interference pattern caused by scattering provides a potential enhancement to traditional source localization techniques. Results based on a study using data from a multi-element receiving array mounted on the inner shroud of an autonomous underwater vehicle show that scattering causes the localization ambiguities (side lobes) to decrease in overall level and to move closer to the true source location, thereby improving localization performance, for signals in the frequency band 2-8 kHz. These measurements are compared with numerical modeling results from a two-dimensional time domain finite difference scheme for scattering from two fluid-loaded cylindrical shells. Measured and numerically modeled results are presented for multiple source aspect angles and frequencies. Matched field processing techniques quantify the source localization capabilities for both measurements and numerical modeling output.
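The abstract does not state which matched-field processor was used. For orientation only, the conventional (Bartlett) processor builds an ambiguity surface by correlating the measured cross-spectral matrix of the array data with modeled replica vectors for each candidate source location, as in this hedged sketch (function and variable names are illustrative).

import numpy as np

def bartlett_mfp(K, replicas):
    """Conventional (Bartlett) matched-field ambiguity values:
    B_j = w_j^H K w_j for each unit-normalized replica vector w_j, where K is
    the measured cross-spectral density matrix of the array data."""
    out = []
    for w in replicas:
        w = w / np.linalg.norm(w)
        out.append(np.real(np.conj(w) @ K @ w))
    return np.array(out)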

10.
This work presents an application of principal velocity patterns in the analysis of the structural acoustic design optimization of an eight-ply composite cylindrical shell. The approach consists of performing structural acoustic optimizations of a composite cylindrical shell subject to external harmonic monopole excitation. The ply angles are used as the design variables in the optimization. The results of the ply angle design variable formulation are interpreted using the singular value decomposition of the interior acoustic potential energy. The decomposition of the acoustic potential energy provides surface velocity patterns associated with lower levels of interior noise. These surface velocity patterns are shown to correspond to those from the structural acoustic optimization results. Thus, it is demonstrated that the capacity to design multi-ply composite cylinders for quiet interiors is determined by how well the cylinder can be designed to exhibit particular surface velocity patterns associated with lower noise levels.
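One way to picture the decomposition step: if the interior acoustic potential energy can be written as a Hermitian quadratic form in the vector of structural surface velocities, its eigen/singular decomposition ranks velocity patterns by how strongly they drive interior noise, and the weakly radiating patterns become the design targets. The sketch below illustrates that idea under this assumed formulation; it is not the paper's actual matrices or procedure.

import numpy as np

def ranked_velocity_patterns(Q):
    """Assume E_p = v^H Q v with Q Hermitian positive semi-definite.
    Columns of V are candidate surface-velocity patterns ordered from the
    quietest interior response (smallest eigenvalue) to the loudest."""
    w, V = np.linalg.eigh(Q)   # eigenvalues returned in ascending order
    return w, V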

11.
Phase-shifting interferometry is a highly accurate data acquisition technique that efficiently utilizes several frames of information for each measurement. In this work, the advantages of phase shifting have been applied to a conventional moiré interferometer, yielding a system capable of recording phase-shifted fringe patterns for both in-plane displacement components. Using this method, the phase of a wavefront of interest can be determined at each detector location, so that the resolution of the phase measurements is limited primarily by the detector discrimination and geometry. Unlike traditional Fourier fringe analysis, the noise rejection of phase-shift processing algorithms does not degrade image fidelity in the presence of edges and discontinuities. A general discussion of both the phase-shifting technique and the Fourier fringe analysis method is included to provide insight into the problems of processing discontinuous fringe patterns.
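As one concrete example of how phase is recovered at each detector location, the widely used four-step algorithm (90-degree shifts between frames) computes the wrapped phase directly from four intensity images. Whether this particular algorithm was used in the work above is not stated, so treat the sketch as generic.

import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase (radians) at each pixel from four fringe images acquired
    with 90-degree phase shifts: phi = atan2(I4 - I2, I1 - I3)."""
    return np.arctan2(i4 - i2, i1 - i3)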

12.
The perception of fundamental pitch for two-harmonic complex tones was examined in musically experienced listeners with cochlear-based high-frequency hearing loss. Performance in a musical interval identification task was measured as a function of the average rank of the lowest harmonic for both monotic and dichotic presentation of the harmonics at 14 dB Sensation Level. Listeners with hearing loss demonstrated excellent musical interval identification at low fundamental frequencies and low harmonic numbers, but abnormally poor identification at higher fundamental frequencies and higher average ranks. The upper frequency limit of performance in the listeners with hearing loss was similar in both monotic and dichotic conditions. These results suggest that something other than frequency resolution per se limits complex-tone pitch perception in listeners with hearing loss.

13.
The pitch of stimuli was studied under conditions where place-of-excitation was held constant, and where pitch was therefore derived from "purely temporal" cues. In experiment 1, the acoustical and electrical pulse trains consisted of pulses whose amplitudes alternated between a high and a low value, and whose interpulse intervals alternated between 4 and 6 ms. The attenuated pulses occurred after the 4-ms intervals in condition A, and after the 6-ms intervals in condition B. For both normal-hearing subjects and cochlear implantees, the period of an isochronous pulse train equal in pitch to this "4-6" stimulus increased from near 6 ms at the smallest modulation depth to nearly 10 ms at the largest depth. Additionally, the modulated pulse trains in condition A were perceived as being lower in pitch than those in condition B. Data are interpreted in terms of increased refractoriness in condition A, where the larger pulses are more closely followed by the smaller ones than in condition B. Consistent with this conclusion, the A-B difference was reduced at longer interpulse intervals. These findings provide a measure of supra-threshold effects of refractoriness on pitch perception, and increase our understanding of coding of temporal information in cochlear implant speech processing schemes.
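To make the "4-6" stimulus concrete, the sketch below generates a click train whose interpulse intervals alternate between 4 and 6 ms and whose every other pulse is attenuated, with a flag selecting whether the attenuated pulses follow the 4-ms intervals (condition A) or the 6-ms intervals (condition B). Sampling rate, duration, and the single-sample impulse representation are illustrative assumptions, not the study's acoustic or electrical stimulus parameters.

import numpy as np

def alternating_pulse_train(fs, dur_s, att_db, condition="A"):
    """Pulse train with interpulse intervals alternating 4 ms / 6 ms and every
    other pulse attenuated by att_db dB. Condition A: attenuated pulses follow
    the 4-ms intervals; condition B: they follow the 6-ms intervals."""
    n = int(fs * dur_s)
    x = np.zeros(n)
    gap_after_big = 0.004 if condition == "A" else 0.006
    gap_after_small = 0.010 - gap_after_big
    t, big = 0.0, True
    while int(round(t * fs)) < n:
        x[int(round(t * fs))] = 1.0 if big else 10 ** (-att_db / 20)
        t += gap_after_big if big else gap_after_small
        big = not big
    return x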

14.
15.
This paper assesses the effect of filter spacing on melody recognition by normal-hearing (NH) and cochlear implant (CI) subjects. A new semitone filter spacing is proposed for music. The quality of melodies processed by the various filter spacings is also evaluated. Results from NH listeners showed nearly perfect melody recognition with only four channels of stimulation, and results from CI users indicated significantly higher scores with a 12-channel semitone spacing compared to the spacing used in their daily processor. The quality of melodies processed by the semitone filter spacing was preferred over melodies processed by the conventional logarithmic filter spacing.
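The proposed semitone spacing places filter band edges at equal musical-semitone steps (each step multiplies frequency by 2^(1/12)), in contrast to a conventional logarithmic spacing over the whole speech range. A minimal sketch, with an assumed starting frequency and band width:

import numpy as np

def semitone_band_edges(f_start_hz, n_bands, semitones_per_band=1):
    """Band edges spaced in equal semitone steps:
    edge[k] = f_start * 2**(k * semitones_per_band / 12)."""
    k = np.arange(n_bands + 1) * semitones_per_band
    return f_start_hz * 2.0 ** (k / 12.0)

# e.g. semitone_band_edges(220.0, 12) spans one octave (220-440 Hz) in 12 bands.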

16.
The benefits of combined electric and acoustic stimulation (EAS) in terms of speech recognition in noise are well established; however, the underlying factors responsible for this benefit are not clear. The present study tests the hypothesis that having access to acoustic information in the low frequencies makes it easier for listeners to glimpse the target. Normal-hearing listeners were presented with vocoded speech alone (V), low-pass (LP) filtered speech alone, combined vocoded and LP speech (LP+V), and with vocoded stimuli constructed so that the low-frequency envelopes were easier to glimpse. Target speech was mixed with two types of maskers (steady-state noise and competing talker) at -5 to 5 dB signal-to-noise ratios. Results indicated no advantage of LP+V in steady noise, but a significant advantage over V in the competing talker background, an outcome consistent with the notion that it is easier for listeners to glimpse the target in fluctuating maskers. A significant improvement in performance was noted with the modified glimpsed stimuli over the original vocoded stimuli. These findings taken together suggest that a significant factor contributing to the EAS advantage is the enhanced ability to glimpse the target.
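A combined LP+V stimulus of the kind described can be approximated by adding a low-pass filtered copy of the target to its vocoded version. The cutoff frequency, filter order, and function name below are illustrative assumptions; the paper's exact processing parameters are not reproduced here.

import numpy as np
from scipy.signal import butter, filtfilt

def combine_lp_and_vocoded(target, vocoded, fs, cutoff_hz=600.0, order=6):
    """LP+V stimulus: low-pass filtered target speech added to its vocoded
    version (both assumed to be equal-length arrays sampled at fs)."""
    b, a = butter(order, cutoff_hz, btype="lowpass", fs=fs)
    return filtfilt(b, a, target) + vocoded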

17.
Three alternative speech coding strategies suitable for use with cochlear implants were compared in a study of three normally hearing subjects using an acoustic model of a multiple-channel cochlear implant. The first strategy (F2) presented the amplitude envelope of the speech and the second formant frequency. The second strategy (F0 F2) included the voice fundamental frequency, and the third strategy (F0 F1 F2) presented the first formant frequency as well. Discourse level testing with the speech tracking method showed a clear superiority of the F0 F1 F2 strategy when the auditory information was used to supplement lipreading. Tracking rates averaged over three subjects for nine 10-min sessions were 40 wpm for F2, 52 wpm for F0 F2, and 66 wpm for F0 F1 F2. Vowel and consonant confusion studies and a test of prosodic information were carried out with auditory information only. The vowel test showed a significant difference between the strategies, but no differences were found for the other tests. It was concluded that the amplitude and duration cues common to all three strategies accounted for the levels of consonant and prosodic information received by the subjects, while the different tracking rates were a consequence of the better vowel recognition and the more natural quality of the F0 F1 F2 strategy.

18.
The attenuation characteristics of hearing protection devices (HPDs) were measured using a modular acoustic head simulator. The effect of changes in the head configuration was assessed in a steady-state diffuse sound field. The use of artificial circumaural skin had a relatively small influence on the insertion loss of earmuffs (max. 6-7 dB at low frequencies). This contrasts with the very large effects found for the artificial intraaural skin on the insertion loss of earplugs (in excess of 40 dB at low frequencies for some devices). Results were also compared with real-ear attenuation at threshold (REAT) data (ANSI S3.19-1974). In general, there is good agreement between the two methods, especially for earmuffs. Design improvements are proposed for earplugs. The results of an exploratory study aimed at measuring the complex (amplitude and phase) insertion loss of HPDs using an impulse noise source are also reported.
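For clarity, insertion loss on a head simulator is simply the band-level difference at the artificial eardrum with and without the protector fitted, measured under the same sound field (in contrast to REAT, which is a behavioral threshold-shift measure). A trivial sketch of that definition:

import numpy as np

def insertion_loss_db(level_open_db, level_occluded_db):
    """Insertion loss per frequency band: eardrum level without the hearing
    protector minus the level with it in place (dB)."""
    return np.asarray(level_open_db) - np.asarray(level_occluded_db)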

19.
The present study measured the recognition of spectrally degraded and frequency-shifted vowels in both acoustic and electric hearing. Vowel stimuli were passed through 4, 8, or 16 bandpass filters and the temporal envelopes from each filter band were extracted by half-wave rectification and low-pass filtering. The temporal envelopes were used to modulate noise bands which were shifted in frequency relative to the corresponding analysis filters. This manipulation not only degraded the spectral information by discarding within-band spectral detail, but also shifted the tonotopic representation of spectral envelope information. Results from five normal-hearing subjects showed that vowel recognition was sensitive to both spectral resolution and frequency shifting. The effect of a frequency shift did not interact with spectral resolution, suggesting that spectral resolution and spectral shifting are orthogonal in terms of intelligibility. High vowel recognition scores were observed for as few as four bands. Regardless of the number of bands, no significant performance drop was observed for tonotopic shifts equivalent to 3 mm along the basilar membrane, that is, for frequency shifts of 40%-60%. Similar results were obtained from five cochlear implant listeners, when electrode locations were fixed and the spectral location of the analysis filters was shifted. Changes in recognition performance in electrical and acoustic hearing were similar in terms of the relative location of electrodes rather than the absolute location of electrodes, indicating that cochlear implant users may at least partly accommodate to the new patterns of speech sounds after long-term exposure to their normal speech processor.
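The processing chain described (bandpass analysis, half-wave rectification plus low-pass filtering to extract temporal envelopes, then modulation of frequency-shifted noise carriers) can be sketched as below. Band allocation, filter orders, the 160-Hz envelope cutoff, and the shift factor are illustrative assumptions rather than the paper's exact settings.

import numpy as np
from scipy.signal import butter, filtfilt, sosfiltfilt

def shifted_noise_vocoder(x, fs, n_bands=8, f_low=100.0, f_high=6000.0, shift=1.3):
    """Noise-band vocoder whose carrier bands are shifted upward in frequency
    by the factor `shift` relative to the analysis bands."""
    edges = np.geomspace(f_low, f_high, n_bands + 1)   # log-spaced analysis bands
    b_env, a_env = butter(2, 160.0, btype="lowpass", fs=fs)
    noise = np.random.randn(len(x))
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        env = np.maximum(sosfiltfilt(sos, x), 0.0)      # half-wave rectification
        env = filtfilt(b_env, a_env, env)               # envelope low-pass (160 Hz)
        carrier_band = [lo * shift, min(hi * shift, 0.45 * fs)]
        sos_c = butter(4, carrier_band, btype="bandpass", fs=fs, output="sos")
        out += env * sosfiltfilt(sos_c, noise)          # modulate the shifted noise band
    return out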

20.
The purpose of this study was to explore the potential advantages, both theoretical and applied, of preserving low-frequency acoustic hearing in cochlear implant patients. Several hypotheses are presented that predict that residual low-frequency acoustic hearing along with electric stimulation for high frequencies will provide an advantage over traditional long-electrode cochlear implants for the recognition of speech in competing backgrounds. A simulation experiment in normal-hearing subjects demonstrated a clear advantage for preserving low-frequency residual acoustic hearing for speech recognition in a background of other talkers, but not in steady noise. Three subjects with an implanted "short-electrode" cochlear implant and preserved low-frequency acoustic hearing were also tested on speech recognition in the same competing backgrounds and compared to a larger group of traditional cochlear implant users. Each of the three short-electrode subjects performed better than any of the traditional long-electrode implant subjects for speech recognition in a background of other talkers, but not in steady noise, in general agreement with the simulation studies. When compared to a subgroup of traditional implant users matched according to speech recognition ability in quiet, the short-electrode patients showed a 9-dB advantage in the multitalker background. These experiments provide strong preliminary support for retaining residual low-frequency acoustic hearing in cochlear implant patients. The results are consistent with the idea that better perception of voice pitch, which can aid in separating voices in a background of other talkers, was responsible for this advantage.
