Similar Articles
10 similar articles found.
1.
This experiment investigates the effect of images of differently colored sports cars on the loudness of a simultaneously perceived car sound. Still images of a sports car, colored in red, light green, blue, and dark green, were displayed to subjects during a magnitude estimation task. The sound of an accelerating sports car was used as a stimulus. Statistical analysis suggests that the color of the visual stimulus may have a small influence on loudness judgments. The observed loudness differences are generally equivalent to a change in sound level of about 1 dB, with maximum individual differences of up to 3 dB.
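To put the roughly 1 dB shift reported above in perspective, a common rule of thumb (Stevens' power law for loudness) holds that perceived loudness roughly doubles for every 10 dB increase in level. A minimal sketch of the corresponding arithmetic (the function name is ours, not the paper's):

```python
# Rule-of-thumb illustration, assuming Stevens' power law:
# loudness roughly doubles per 10 dB increase in sound level.
def loudness_ratio(delta_db):
    """Perceived loudness ratio corresponding to a level change of delta_db decibels."""
    return 2.0 ** (delta_db / 10.0)

for delta in (1.0, 3.0):
    print(f"{delta:.0f} dB -> loudness factor {loudness_ratio(delta):.3f}")
```

Under this assumption, the 1 dB effect corresponds to roughly a 7% loudness change, and the 3 dB individual maximum to roughly 23%.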

2.
The car interior is becoming quieter, exposing other sounds to the user's perception, such as the sound produced by interface buttons when actuated. The functional role of the button sound in interface operation, and its aesthetic and emotional role in the user experience, are therefore more important than before. However, little research or design effort has been devoted to understanding how to design buttons that produce a pleasant sound. Moreover, the button-sound requirements received by interface manufacturers are ill-defined, insufficient, or even nonexistent; consequently, their conversion into manufacturing specifications is problematic and leads to long and costly development processes. The purpose of this paper is to help identify relevant acoustic parameters that explain users' sound preferences. Subjective preference judgments were collected and the buttons' acoustic signals were measured, allowing the development of preference models based on partial least squares regression and neural network methods. The former was successful in selecting the relevant parameters that describe the preference ratings of the button sounds; the latter, suited to the non-linear nature of acoustic perception, was able to predict preferences from those parameters.
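The partial-least-squares step named above can be sketched in miniature: project the acoustic parameters onto the single direction of maximum covariance with the preference ratings, then regress preference on that latent score. Everything below is synthetic and illustrative (the parameter names and data are ours, not the study's), using a one-component NIPALS-style PLS:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical acoustic parameters per button (e.g. loudness, sharpness,
# duration) and simulated preference ratings; all values are synthetic.
X = rng.normal(size=(40, 3))
y = X @ np.array([0.8, -0.5, 0.2]) + 0.1 * rng.normal(size=40)

# One-component PLS (NIPALS form): project onto the direction of
# maximum covariance with the ratings, then regress on the latent score.
Xc, yc = X - X.mean(axis=0), y - y.mean()
w = Xc.T @ yc
w /= np.linalg.norm(w)           # weight vector: relative importance of each parameter
t_score = Xc @ w                 # latent score per button
b = (t_score @ yc) / (t_score @ t_score)
y_hat = y.mean() + b * t_score   # predicted preference ratings
r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum(yc ** 2)
print("weights:", np.round(w, 2), " R^2:", round(r2, 3))
```

The weight vector plays the "parameter selection" role the abstract attributes to PLS; a full analysis would use multiple components and cross-validation.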

3.
Previous studies have shown a loss in the precision of horizontal localization responses of older hearing-impaired (HI) individuals, along with potentially poorer neural representations of sound-source location. These deficits could be the result or corollary of greater difficulty in discriminating spatial images and of insensitivity to punctate sound sources. This hypothesis was tested in three headphone-presentation experiments varying interaural coherence (IC), the cue most associated with apparent auditory source width. First, thresholds for differences in IC were measured for a broad sampling of participants. Older HI participants were significantly worse at discriminating IC across reference values than younger normal-hearing participants. These results are consistent with senescent increases in temporal jitter. Performance decreased with age, a finding corroborated in a second discrimination experiment using a separate group of participants matched for hearing loss. This group also completed a third, visual experiment, with both a cross-mapping task, in which they drew the size of the sound they heard, and an identification task, in which they chose the image that best corresponded to what they heard. The results from the visual tasks indicate that older HI individuals do not hear punctate images and are relatively insensitive to changes in width based on IC.
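Interaural coherence is commonly quantified as the normalized correlation between the left- and right-ear signals (the full measure searches over interaural lags; the sketch below uses zero lag only). A minimal, hedged illustration of how headphone stimuli with a target IC can be generated by mixing a shared noise with independent noises, and then verified:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
common = rng.normal(size=n)      # noise component shared by both ears

def ear_signals(ic):
    """Left/right noise pair whose expected interaural coherence is `ic`
    (measured here as the zero-lag normalized correlation)."""
    a, b = np.sqrt(ic), np.sqrt(1.0 - ic)
    return (a * common + b * rng.normal(size=n),
            a * common + b * rng.normal(size=n))

for target in (0.2, 0.6, 1.0):
    left, right = ear_signals(target)
    measured = np.corrcoef(left, right)[0, 1]
    print(f"target IC {target:.1f} -> measured {measured:.2f}")
```

Lower IC widens the perceived auditory image, which is the dimension the drawing and image-identification tasks probe.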

4.
Influence of visual setting on sound ratings in an urban environment   Cited by: 1 (self-citations: 0, by others: 1)
We assessed how listeners' judgments of a set of urban sound environments were affected by co-occurring visual settings. In artificial audiovisual environments, subjects rated eight urban sound environments (recordings) paired with five visual settings (four color slides varying in degree of urbanization and a control condition with no slide), along two sound scales (Unpleasant-Pleasant and Stressful-Relaxing). In general, the more urban the visual setting, the more negative the sound ratings. However, this influence depended on the type of sound. It was marked for recordings that did not include human sounds (particularly strong for bird song and weaker for traffic noise), but was absent for all recordings that included human sounds (footsteps and voices). Results are discussed in terms of the degree of matching between visual and sound information, and the degree of the perceiver's involvement with these sound environments.

5.
This paper describes the effects of meaningful and meaningless external acoustic noise, at various sound pressure levels, on participants performing a mental task. The authors focused on the psychological impression of "annoyance" caused by noise, and on "performance" as indicated by factors such as the percentage of correct answers and reaction time. More specifically, the authors discuss how these two measures depend on the sound pressure level of the noise, and how they change with meaningful versus meaningless noise. The difference between subjective feelings of "fatigue" before and after the task, both with and without noise, was also considered, as was how the above measures change with aural versus visual task presentation. The task was the probe-digit task, a short-term memory task. The results demonstrated the importance of reducing meaningful external noise even at low sound pressure levels.

6.
This study examined perceptual learning of spectrally complex nonspeech auditory categories in an interactive multi-modal training paradigm. Participants played a computer game in which they navigated through a three-dimensional space while responding to animated characters encountered along the way. Characters' appearances in the game correlated with distinctive sound category distributions, exemplars of which repeated each time the characters were encountered. As the game progressed, the speed and difficulty of required tasks increased and characters became harder to identify visually, so quick identification of approaching characters by sound patterns was, although never required or encouraged, of gradually increasing benefit. After 30 min of play, participants performed a categorization task, matching sounds to characters. Despite not being informed of audio-visual correlations, participants exhibited reliable learning of these patterns at posttest. Categorization accuracy was related to several measures of game performance, and category learning was sensitive to category distribution differences modeling acoustic structures of speech categories. Category knowledge resulting from the game was qualitatively different from that gained from an explicit unsupervised categorization task involving the same stimuli. Results are discussed with respect to information sources and mechanisms involved in acquiring complex, context-dependent auditory categories, including phonetic categories, and to multi-modal statistical learning.

7.
One of the most important issues in aircraft noise monitoring systems is the correct detection and marking of aircraft sound events in their measurement profiles, as this influences the reported results. In the recent ISO 20906 (unattended monitoring of aircraft sound in the vicinity of airports) this marking task is split into: detection from the sound level time history, classification of probable aircraft sound events, and the concluding identification of aircraft sound events through non-acoustic features.

An experiment was designed to evaluate the factors that influence the marking tasks and to quantify their contribution to the uncertainty of the reported monitoring results for some specific cases. Several noise time histories, recorded at three different locations affected by flyover noise, were analyzed by practitioners selected according to three different expertise levels. The analysis was carried out considering three types of complementary information: noise recordings, a list of aircraft events, and no information at all. Five European universities and over 60 participants were involved in this experiment.

The results showed no significant differences attributable to factors such as the participant's institution or the practitioners' level of expertise. Nonetheless, other factors, such as the dynamic range of the noise event or the type of help used for marking, had a statistically significant influence on the marking tasks. They increase the uncertainty of the reported monitoring results and can lead to changes in the overall results.

The experiment showed that, even when there are no classification or identification errors, the detection stage causes uncertainty in the results. The standard uncertainty for detection ranges from 0.3 dB in acoustic environments where aircraft are clearly detectable to almost 2 dB in more difficult environments.
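The detection-stage uncertainty described above can be mimicked with a toy level time history: marking the samples above a detection threshold and computing the event's sound exposure level (SEL) shows how the marking choice alone shifts the reported result. All numbers below are invented for illustration and are not from the paper:

```python
import numpy as np

# Hypothetical flyover: a parabolic level bump over a steady background.
t = np.arange(0.0, 60.0, 0.5)              # time axis, 0.5 s samples
background = 55.0                           # steady background level, dB(A)
flyover = 80.0 - 0.5 * (t - 30.0) ** 2      # flyover peaking at 80 dB(A) at t = 30 s
level = 10 * np.log10(10 ** (background / 10) + 10 ** (flyover / 10))

def event_sel(level, dt, threshold):
    """Sound exposure level (dB) of the samples a simple threshold detector
    marks as 'event': energy-sum all samples at or above the threshold."""
    marked = level[level >= threshold]
    return 10 * np.log10(np.sum(10 ** (marked / 10) * dt))

for thr in (60.0, 70.0):
    print(f"detection threshold {thr:.0f} dB -> event SEL {event_sel(level, 0.5, thr):.2f} dB")
```

A lower threshold marks more of the event's tails, so the two thresholds report slightly different SELs from identical data; in harder environments (smaller event-to-background dynamic range) this detection-driven spread grows, which is the effect the paper quantifies.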

8.
Visual information from a speaker's face profoundly influences auditory perception of speech. However, relatively little is known about the extent to which visual influences may depend on experience, and the extent to which new sources of visual speech information can be incorporated into speech perception. In the current study, participants were trained on completely novel visual cues for phonetic categories. Participants learned to accurately identify phonetic categories based on these novel visual cues. The newly learned visual cues influenced identification responses to auditory speech stimuli, but not to the same extent as visual cues from a speaker's face. The novel methods and results of the current study raise theoretical questions about the nature of information integration in speech perception, and open up possibilities for further research on learning in multimodal perception, which may have applications in improving speech comprehension among the hearing-impaired.

9.
Previous studies have shown that auditory cues contribute to the identification of several characteristics of a public space, such as its volume and the type of activity to which the space is dedicated. This paper demonstrates that solutions to improve way-finding in a public place can be based on providing additional auditory information. A methodical approach in three phases is proposed and applied to the case of a train station. First, problems encountered by travellers in a train station are identified by way of an ergonomic study under real conditions with recruited travellers. The results reveal three kinds of problems: orientation errors, lack of confirmation of direction, and lack of information about the remaining distance to be covered. In the second phase, functional and environmental specifications were developed in order to create sound signals for each identified problem. A sound designer proposed several non-speech sound signals based on two schemas: a pair of sounds for the orientation and confirmation functions, and a timeline sequence for the remaining distance. Finally, in the third phase, the sound signals were installed in the train station using an experimental broadcasting system and were evaluated in a second ergonomic study using the same method. The results show that the number of orientation errors decreased and that participants felt more confident during their walk. Sound signals for the orientation and confirmation functions were understood and used by the participants. However, the timeline sequence signalling remaining distance was not understood.

10.
Although many audio-visual speech experiments have focused on situations where the presence of an incongruent visual speech signal influences the perceived utterance heard by an observer, there are also documented examples of a related effect in which the presence of an incongruent audio speech signal influences the perceived utterance seen by an observer. This study examined the effects that different distracting audio signals had on performance in a color and number keyword speechreading task. When the distracting sound was noise, time-reversed speech, or continuous speech, it had no effect on speechreading. However, when the distracting audio signal consisted of speech that started at the same time as the visual stimulus, speechreading performance was substantially degraded. This degradation did not depend on the semantic similarity between the target and masker speech, but it was substantially reduced when the onset of the audio speech was shifted relative to that of the visual stimulus. Overall, these results suggest that visual speech perception is impaired by the presence of a simultaneous mismatched audio speech signal, but that other types of audio distracters have little effect on speechreading performance.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号