2.
This study investigated the role of uncertainty in masking of speech by interfering speech. Target stimuli were nonsense sentences recorded by a female talker. Masking sentences were recorded from ten female talkers and combined into pairs. Listeners' recognition performance was measured with both target and masker presented from a front loudspeaker (nonspatial condition) or with a masker presented from two loudspeakers, with the right leading the front by 4 ms (spatial condition). In Experiment 1, the sentences were presented in blocks in which the masking talkers, spatial configuration, and signal-to-noise (S-N) ratio were fixed. Listeners' recognition performance varied widely among the masking talkers in the nonspatial condition, much less so in the spatial condition. This result was attributed to variation in effectiveness of informational masking in the nonspatial condition. The second experiment increased uncertainty by randomizing masking talkers and S-N ratios across trials in some conditions, and reduced uncertainty by presenting the same token of masker across trials in other conditions. These variations in masker uncertainty had relatively small effects on speech recognition.
3.
Two experiments compared the effect of supplying visual speech information (e.g., lipreading cues) on the ability to hear one female talker's voice in the presence of steady-state noise or a masking complex consisting of two other female voices. In the first experiment intelligibility of sentences was measured in the presence of the two types of maskers with and without perceived spatial separation of target and masker. The second study tested detection of sentences in the same experimental conditions. Results showed that visual cues provided more benefit for both recognition and detection of speech when the masker consisted of other voices (versus steady-state noise). Moreover, visual cues provided greater benefit when the target speech and masker were spatially coincident versus when they appeared to arise from different spatial locations. The data obtained here are consistent with the hypothesis that lipreading cues help to segregate a target voice from competing voices, in addition to the established benefit of supplementing masked phonetic information.
4.
To gain information from complex auditory scenes, it is necessary to determine which of the many loudness, pitch, and timbre changes originate from a single source. Grouping sound into sources based on spatial information is complicated by reverberant energy bouncing off multiple surfaces and reaching the ears from directions other than the source's location. The ability to localize sounds despite these echoes has been explored with the precedence effect: Identical sounds presented from two locations with a short stimulus onset asynchrony (e.g., 1-5 ms) are perceived as a single source with a location dominated by the lead sound. Importantly, echo thresholds, the shortest onset asynchrony at which a listener reports hearing the lag sound as a separate source about half of the time, can be manipulated by presenting sound pairs in contexts. Event-related brain potentials elicited by physically identical sounds in contexts that resulted in listeners reporting either one or two sources were compared. Sound pairs perceived as two sources elicited a larger anterior negativity 100-250 ms after onset, previously termed the object-related negativity, and a larger posterior positivity 250-500 ms. These results indicate that the models of room acoustics listeners form based on recent experience with the spatiotemporal properties of sound modulate perceptual as well as later higher-level processing.
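The lead-lag stimuli described above are simple to construct. A minimal sketch, assuming stereo presentation, a 1-ms ramped click, and a 44.1 kHz sample rate (none of which are specified in the abstract):

```python
import numpy as np

def lead_lag_pair(soa_ms, fs=44100, click_ms=1.0):
    """Stereo lead-lag click pair for a precedence-effect trial:
    the same brief click goes to both channels, with the lag channel
    delayed by the stimulus onset asynchrony (SOA).
    Channel layout and click shape are illustrative assumptions."""
    n_click = int(fs * click_ms / 1000)
    click = np.hanning(2 * n_click)[:n_click]     # ramped burst
    delay = int(round(fs * soa_ms / 1000))
    stereo = np.zeros((n_click + delay, 2))
    stereo[:n_click, 0] = click                   # lead (e.g., left)
    stereo[delay:delay + n_click, 1] = click      # lag (e.g., right)
    return stereo

# SOAs of 1-5 ms are typically heard as one fused source near the lead:
trial = lead_lag_pair(soa_ms=4.0)
```

At SOAs above a listener's echo threshold, the lag channel would begin to be reported as a second source.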
5.
A major problem for an auditory system exposed to sound in a reverberant environment is to distinguish reflections from true sound sources. Previous research indicates that the process of recognizing reflections is malleable from moment to moment. Three experiments report how ongoing input can prevent or disrupt the fusion of the delayed sound with the direct sound, a necessary component of the precedence effect. The buildup of fusion can be disrupted by presenting stimuli in alternation that simulate different reflecting surfaces. If buildup of fusion is accomplished first and then followed by an aberrant configuration, breakdown of the precedence effect occurs but it depends on the duration of the new sound configuration. The Djelani and Blauert (2001) finding that a brief disruption has no effect on fusion was confirmed; however, it was found that a more lengthy disruption produces breakdown.
7.
Temporal masking curves were obtained from 12 normal-hearing and 16 hearing-impaired listeners using 200-ms, 1000-Hz pure-tone maskers and 20-ms, 1000-Hz fixed-level probe tones. For the delay times used here (greater than 40 ms), temporal masking curves obtained from both groups can be well described by an exponential function with a single level-independent time constant for each listener. Normal-hearing listeners demonstrated time constants that ranged between 37 and 67 ms, with a mean of 50 ms. Most hearing-impaired listeners, with significant hearing loss at the probe frequency, demonstrated longer time constants (range 58-114 ms) than those obtained from normal-hearing listeners. Time constants were found to grow exponentially with hearing loss according to the function tau = 52e^(0.011·HL), when the slope of the growth of masking is unity. The longest individual time constant was larger than normal by a factor of 2.3 for a hearing loss of 52 dB. The steep slopes of the growth-of-masking functions typically observed at long delay times in hearing-impaired listeners' data appear to be a direct result of longer time constants. When iterative fitting procedures included a slope parameter, the slopes of the growth of masking from normal-hearing listeners varied around unity, while those from hearing-impaired listeners tended to be less (flatter) than normal. Predictions from the results of these fixed-probe-level experiments are consistent with the results of previous fixed-masker-level experiments, and they indicate that deficiencies in the ability to detect sequential stimuli should be considerable in hearing-impaired listeners, partially because of extended time constants, but mostly because forward masking involves a recovery process that depends upon the sensory response evoked by the masking stimulus. Large sensitivity losses reduce the sensory response to high-SPL maskers so that the recovery process is slower, much like the recovery process for low-level stimuli in normal-hearing listeners.
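The fitted relation between time constant and hearing loss is easy to evaluate numerically. A minimal sketch; the function names and the single-exponential recovery illustration are assumptions for clarity, not the study's fitting procedure:

```python
import math

def time_constant_ms(hearing_loss_db):
    """Fitted growth of the forward-masking time constant with
    hearing loss: tau = 52 * e^(0.011 * HL), HL in dB."""
    return 52.0 * math.exp(0.011 * hearing_loss_db)

def masking_remaining(delay_ms, tau_ms):
    """Single-time-constant exponential recovery: fraction of masking
    left at a given masker-probe delay (illustrative form only)."""
    return math.exp(-delay_ms / tau_ms)

tau_normal = time_constant_ms(0)     # 52.0 ms at 0 dB HL (cf. the 50 ms normal mean)
tau_impaired = time_constant_ms(52)  # ~92 ms, within the reported 58-114 ms range
```

A longer time constant means `masking_remaining` decays more slowly, which is how the extended time constants produce the steep growth-of-masking slopes at long delays.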
8.
Older individuals often report difficulty coping in situations with multiple conversations in which they at times need to "tune out" the background speech and at other times seek to monitor competing messages. The present study was designed to simulate this type of interaction by examining the cost of requiring listeners to perform a secondary task in conjunction with understanding a target talker in the presence of competing speech. The ability of younger and older adults to understand a target utterance was measured with and without requiring the listener to also determine how many masking voices were presented time-reversed. Also of interest was how spatial separation affected the ability to perform these two tasks. Older adults demonstrated slightly reduced overall speech recognition and obtained less spatial release from masking, as compared to younger listeners. For both younger and older listeners, spatial separation increased the costs associated with performing both tasks together. The meaningfulness of the masker had a greater detrimental effect on speech understanding for older participants than for younger participants. However, the results suggest that the problems experienced by older adults in complex listening situations are not necessarily due to a deficit in the ability to switch and/or divide attention among talkers.
9.
The effect of perceived spatial differences on masking release was examined using a 4AFC speech detection paradigm. Targets were 20 words produced by a female talker. Maskers were recordings of continuous streams of nonsense sentences spoken by two female talkers and mixed into each of two channels (two talker, and the same masker time reversed). Two masker spatial conditions were employed: "RF" with a 4 ms time lead to the loudspeaker 60 degrees horizontally to the right, and "FR" with the time lead to the front (0 degrees) loudspeaker. The reference nonspatial "F" masker was presented from the front loudspeaker only. Target presentation was always from the front loudspeaker. In Experiment 1, target detection threshold for both natural and time-reversed spatial maskers was 17-20 dB lower than that for the nonspatial masker, suggesting that significant release from informational masking occurs with spatial speech maskers regardless of masker understandability. In Experiment 2, the effectiveness of the FR and RF maskers was evaluated as the right loudspeaker output was attenuated until the two-source maskers were indistinguishable from the F masker, as measured independently in a discrimination task. Results indicated that spatial release from masking can be observed with barely noticeable target-masker spatial differences.
10.
Channel vocoders using either tone or band-limited noise carriers have been used in experiments to simulate cochlear implant processing in normal-hearing listeners. Previous results from these experiments have suggested that the two vocoder types produce speech of nearly equal intelligibility in quiet conditions. The purpose of this study was to further compare the performance of tone and noise-band vocoders in both quiet and noisy listening conditions. In each of four experiments, normal-hearing subjects were better able to identify tone-vocoded sentences and vowel-consonant-vowel syllables than noise-vocoded sentences and syllables, both in quiet and in the presence of either speech-spectrum noise or two-talker babble. An analysis of consonant confusions for listening in both quiet and speech-spectrum noise revealed significantly different error patterns that were related to each vocoder's ability to produce tone or noise output that accurately reflected the consonant's manner of articulation. Subject experience was also shown to influence intelligibility. Simulations using a computational model of modulation detection suggest that the noise vocoder's disadvantage is in part due to the intrinsic temporal fluctuations of its carriers, which can interfere with temporal fluctuations that convey speech recognition cues.
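A channel vocoder of the general kind used in such simulations can be sketched in a few steps: band-split the input, extract each band's envelope, and re-impose the envelopes on tone or noise carriers. The FFT-based filterbank, the band edges, and the 50 Hz envelope cutoff below are illustrative assumptions, not the parameters of this study:

```python
import numpy as np

def bandpass_fft(x, lo, hi, fs):
    """Zero out spectral components outside [lo, hi] Hz (crude brick-wall filter)."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    X[(f < lo) | (f > hi)] = 0.0
    return np.fft.irfft(X, len(x))

def envelope(x, fs, cutoff=50.0):
    """Half-wave-style envelope: rectify, then lowpass at `cutoff` Hz."""
    return bandpass_fft(np.abs(x), 0.0, cutoff, fs)

def vocode(x, fs, edges, carrier="tone"):
    """Channel vocoder: per-band envelope modulates a tone at the band's
    geometric centre, or band-limited white noise."""
    rng = np.random.default_rng(0)
    t = np.arange(len(x)) / fs
    out = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        env = np.maximum(envelope(bandpass_fft(x, lo, hi, fs), fs), 0.0)
        if carrier == "tone":
            c = np.sin(2 * np.pi * np.sqrt(lo * hi) * t)
        else:
            c = bandpass_fft(rng.standard_normal(len(x)), lo, hi, fs)
        out += env * c
    return out
```

With the noise carrier, each band's own random amplitude fluctuations ride on top of the speech envelope, which is the sort of intrinsic carrier fluctuation the modulation-detection model implicates.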