Similar Documents
 20 similar documents retrieved (search time: 31 ms)
1.
Realistic image reproduction based on a sigmoid-function local visual adaptation model
To address the dynamic-range mismatch between image capture and display devices, tone-mapping techniques establish a high-to-low dynamic-range mapping that can be used for realistic reproduction of ordinary images. At different luminance adaptation levels the human eye exhibits different response characteristics and contrast sensitivities, which allows it to respond simultaneously to light intensities of very different magnitudes. On this basis a reproduction algorithm with local visual adaptation is established: a parameter-controlled sigmoid function models the S-shaped nonlinearity of visual adaptation, yielding compression curves for different local luminance adaptation levels. The algorithm therefore balances global contrast, brightness adjustment, and local enhancement, brightening dark regions while largely preserving detail in bright regions. Evaluation combining subjective assessment with statistical feature parameters shows that the algorithm achieves effective dynamic-range compression, preserves image detail while avoiding artifacts, exhibits a degree of color constancy, and has low computational complexity and good practicality.
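A minimal sketch of this kind of locally adapted sigmoid tone-mapping operator (not the authors' exact formulation; the Naka-Rushton-style form, the exponent n, and the Gaussian adaptation window are assumptions for illustration):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sigmoid_tone_map(luminance, n=0.7, sigma_px=15, eps=1e-6):
    """Compress HDR luminance with a locally adapted sigmoid curve.

    luminance : 2-D array of scene luminance (arbitrary linear units).
    n         : exponent controlling the slope of the S-shaped curve.
    sigma_px  : spatial scale (pixels) of the local adaptation estimate.
    """
    # Local adaptation level: low-pass filtered luminance around each pixel.
    adaptation = gaussian_filter(luminance, sigma=sigma_px) + eps
    # Sigmoid (Naka-Rushton style) compression: output lies in [0, 1).
    ln = luminance ** n
    return ln / (ln + adaptation ** n)
```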

2.
An impression of a surface seen through holes is created when one fuses dichoptic pairs of discs with one member of each pair black and the other white. This is referred to as the 'sieve effect'. The stimulus contains no positional disparities. The impression of depth in the sieve effect is most evident when the size, contrast, and rim thickness of the rivalrous patterns are such as to produce exclusive rivalry. I investigated how long it took for the sieve effect to recover from exclusive rivalry suppression. The magnitude of perceived depth in the effect was measured after exclusive rivalry suppression of one half-image of the sieve-effect stimulus. The results showed that the sieve effect takes approximately 630 ms to recover from exclusive rivalry suppression, compared with 200 ms for disparity-based stereopsis. Considered together with the previous report [Matsumiya and Howard: Invest. Ophthalmol. Visual Sci. 42 (2001) S403] that the sieve effect is positively correlated with the rate of exclusive rivalry, these findings suggest that the sieve effect and exclusive rivalry are processed in the same channel.

3.
A human-visual-system-based quality evaluation method for false-color fused images
With the development of image fusion, fusion algorithms continue to proliferate, and in many cases the final fused image is viewed by human observers, so image-fusion quality evaluation based on the human visual system is particularly important. To simulate how the eye perceives a fused image and obtain an objective measure of fused-image quality, this paper proposes an evaluation method for false-color fused images based on color-difference theory. The source images and the fused image are first transformed into the uniform CIE L*a*b* color space and filtered in the frequency domain with a contrast sensitivity function. The color differences within the filtered fused image are used to judge its detail content (to some extent, larger color differences indicate richer information), while the color differences between the fused image and the source images are used to judge their correlation; the higher the correlation, the better the fusion algorithm. The quality of a fusion algorithm is thus assessed from these two quantities. Experiments show that, compared with other evaluation methods, the proposed method agrees more closely with human observation.
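A hedged sketch of the color-difference part of such an evaluation: the mean CIE L*a*b* color difference (ΔE*ab) between a fused image and a source image. The rgb2lab conversion from scikit-image and the plain ΔE*ab formula are assumptions; the CSF filtering step described in the abstract is omitted here.

```python
import numpy as np
from skimage.color import rgb2lab

def mean_delta_e(fused_rgb, source_rgb):
    """Mean CIE L*a*b* colour difference (Delta E*ab) between two RGB images.

    Both inputs are float arrays in [0, 1] with shape (H, W, 3).
    A larger value indicates a larger perceptual difference.
    """
    lab_f = rgb2lab(fused_rgb)
    lab_s = rgb2lab(source_rgb)
    delta_e = np.sqrt(np.sum((lab_f - lab_s) ** 2, axis=-1))
    return float(delta_e.mean())
```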

4.
Measurement of the neural contrast sensitivity of the human eye based on wavefront technology
赵豪欣  戴云  周逸峰  张雨东 《光学学报》2012,32(4):433001-319
A neural contrast sensitivity function (NCSF) measurement system based on wavefront aberrometry was built. While measuring the spatial contrast sensitivity function (CSF) of the eye, the system simultaneously measures the ocular wavefront aberration with a Hartmann-Shack wavefront sensor, from which the NCSF is computed. Compared with obtaining the NCSF by measuring the full-visual-pathway CSF and the wavefront aberration on two separate instruments, this approach avoids the influence of aberration fluctuations between test conditions and simplifies the procedure; compared with the traditional laser-interferometry method of measuring the NCSF, it avoids coherent noise and laser speckle, and by changing the luminance and color of the test targets the NCSF can be obtained at different luminances and wavelengths. The NCSF of four normal eyes was measured with a green target. The results show that the system can simultaneously obtain the full-visual-pathway CSF, the modulation transfer function of the ocular refractive system, and the NCSF; at equal luminance the NCSF differs between individual eyes; and, for a given subject, the spatial frequency at the peak of the NCSF curve is somewhat higher than that at the peak of the whole-eye spatial CSF curve.
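The NCSF is commonly obtained by dividing the measured full-pathway sensitivity by the optical transfer, i.e. NCSF(f) = CSF_total(f) / MTF_optics(f). A minimal numerical sketch of that step (the clipping floor and array layout are assumptions):

```python
import numpy as np

def neural_csf(freqs_cpd, csf_total, mtf_optics, mtf_floor=1e-3):
    """Estimate the neural CSF by factoring out the optical MTF.

    freqs_cpd  : spatial frequencies (cycles/degree) at which both curves are sampled.
    csf_total  : measured full-visual-pathway contrast sensitivity at those frequencies.
    mtf_optics : optical modulation transfer function derived from the measured
                 wavefront aberration, sampled at the same frequencies.
    """
    mtf = np.clip(np.asarray(mtf_optics, float), mtf_floor, None)  # avoid dividing by ~0
    return np.asarray(csf_total, float) / mtf
```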

5.
Zernike polynomials were used to fit discrete refractive-index data of an optical material measured with a Zygo interferometer, and ray tracing was then used to evaluate the imaging quality of the system. Because the refractive-index distribution of the material is irregular, when simulating and optimizing a real optical system containing such an inhomogeneous medium one must take into account that lenses cut from different parts of the material affect the imaging quality differently, and that during assembly the rotation angle of a finished lens about the optical axis likewise affects imaging quality. By computer simulation, the best part of the material and the best assembly orientation can be selected in advance, thereby improving the performance of the optical system.
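A hedged sketch of the fitting step: least-squares fitting of scattered index measurements with a few low-order Zernike terms. The particular term set and the unnormalised polynomial forms are assumptions for illustration, not necessarily the authors' basis:

```python
import numpy as np

def zernike_fit(x, y, values, radius):
    """Least-squares fit of scattered data with a few low-order Zernike terms.

    x, y   : sample coordinates on the part (same units as radius).
    values : measured refractive-index (or OPD) values at those points.
    radius : normalisation radius of the fit aperture.
    Returns the fitted coefficients and the design matrix used.
    """
    rho = np.hypot(x, y) / radius
    theta = np.arctan2(y, x)
    # Piston, tilt x, tilt y, defocus, astigmatism 0/45 deg (unnormalised forms).
    basis = np.column_stack([
        np.ones_like(rho),
        rho * np.cos(theta),
        rho * np.sin(theta),
        2.0 * rho**2 - 1.0,
        rho**2 * np.cos(2.0 * theta),
        rho**2 * np.sin(2.0 * theta),
    ])
    coeffs, *_ = np.linalg.lstsq(basis, values, rcond=None)
    return coeffs, basis
```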

6.
A review of image enhancement algorithms
王浩  张叶  沈宏海  张景忠 《中国光学》2017,10(4):438-448
Image enhancement algorithms improve the global and local contrast of an image and highlight its details, making the enhanced image better suited to the characteristics of human vision and easier for machines to recognize; they are widely used in both military and civilian applications. Starting from the principles of image enhancement, this paper summarizes four classes of widely used algorithms and their refinements: histogram equalization, wavelet-transform enhancement, partial-differential-equation enhancement, and Retinex-based enhancement. Refinements that incorporate human visual characteristics, noise suppression, brightness preservation, and entropy maximization can further improve image quality while maintaining high contrast. Nine representative enhancement algorithms were implemented; their results were compared using subjective and objective evaluation methods, their advantages and disadvantages analyzed, and their computation times reported. Further study of these algorithms should push image enhancement to a higher level and allow it to play an important role in many fields.
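As a concrete example of the first of the four classes surveyed, here is a minimal global histogram-equalization sketch for an 8-bit grayscale image (an illustration of the technique, not any of the nine implementations evaluated in the paper):

```python
import numpy as np

def histogram_equalize(gray_u8):
    """Global histogram equalisation of an 8-bit grayscale image.

    Maps intensities through the normalised cumulative histogram so that
    the output histogram is approximately uniform, raising global contrast.
    """
    hist = np.bincount(gray_u8.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf[-1] - cdf.min())   # normalise to [0, 1]
    lut = np.round(cdf * 255).astype(np.uint8)        # lookup table
    return lut[gray_u8]
```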

7.
An eye-mouse interface that allows a computer to be operated by eye movements is described. We developed this eye-tracking system for the rehabilitation of eye-motion disabilities. When the user watches the computer screen, a charge-coupled device captures images of the user's eye and transmits them to the computer. A program based on a new cross-line tracking and stabilizing algorithm locates the centre of the pupil in the images. Calibration factors and energy factors are designed for coordinate mapping and for blink functions. After mapping the pupil-centre coordinates in the image to display coordinates, the system determines the point on the display at which the user is gazing and passes that location to a game subroutine. We used this eye-tracking system as a joystick to play a game within a multimedia application. The experimental results verify the feasibility and validity of this eye-game system and its rehabilitation effects on the user's visual movement.
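A hedged sketch of the coordinate-mapping step: fitting an affine calibration from pupil-centre coordinates to screen coordinates by least squares over a few calibration fixations. The affine form and the function names are assumptions, not the paper's exact calibration-factor scheme:

```python
import numpy as np

def fit_affine_calibration(pupil_xy, screen_xy):
    """Fit an affine map from pupil-centre coordinates to screen coordinates.

    pupil_xy, screen_xy : (N, 2) arrays collected while the user fixates
    N known calibration points (N >= 3).
    Returns a 3x2 matrix M such that [px, py, 1] @ M ~= [sx, sy].
    """
    A = np.column_stack([pupil_xy, np.ones(len(pupil_xy))])
    M, *_ = np.linalg.lstsq(A, screen_xy, rcond=None)
    return M

def pupil_to_screen(M, pupil_xy):
    """Apply the fitted calibration to new pupil-centre coordinates."""
    pts = np.atleast_2d(pupil_xy)
    return np.column_stack([pts, np.ones(len(pts))]) @ M
```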

8.
We discuss a novel minimal model for binocular rivalry (and, more generally, perceptual dominance) effects. The model has only three state variables, but nonetheless exhibits a wide range of input- and noise-dependent switching. The model has two reciprocally inhibiting input variables that represent perceptual processes active during the recognition of one of the two possible states, and a third variable that represents the perceived output. Sensory inputs affect only the input variables. We observe, for rivalry-inducing inputs, the appearance of winnerless competition in the perceptual system. This gives rise to behaviour that conforms to well-known principles describing binocular rivalry (the Levelt propositions, in particular proposition IV: monotonic response of residence time as a function of image contrast) down to very low levels of stimulus intensity.
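Since the abstract does not give the equations, the following is only a generic illustration of the architecture it describes: two reciprocally inhibiting input variables plus a third read-out variable, integrated with the Euler method under noise. All functional forms and parameter values are assumptions:

```python
import numpy as np

def simulate_rivalry(i_left, i_right, t_max=20.0, dt=1e-3,
                     beta=2.0, tau=0.1, tau_out=0.5, noise=0.05, seed=0):
    """Euler integration of a generic 3-variable rivalry-like model.

    Two input variables x1, x2 inhibit each other; a third variable y
    relaxes toward their difference and is read out as the dominant percept.
    (Illustrative only; not the equations of the cited paper.)
    """
    rng = np.random.default_rng(seed)
    n = int(t_max / dt)
    x1 = x2 = y = 0.0
    trace = np.empty(n)
    for k in range(n):
        dx1 = (-x1 + np.tanh(i_left - beta * x2)) / tau
        dx2 = (-x2 + np.tanh(i_right - beta * x1)) / tau
        dy = (-y + (x1 - x2)) / tau_out
        x1 += dt * dx1 + noise * np.sqrt(dt) * rng.standard_normal()
        x2 += dt * dx2 + noise * np.sqrt(dt) * rng.standard_normal()
        y += dt * dy
        trace[k] = y          # sign of y indicates the currently dominant eye
    return trace
```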

9.
We present a binocular adaptive optics vision analyzer capable of fully controlling both the amplitude and phase of the complex pupil functions of the subject's two eyes. A special feature of the instrument is its comparatively simple setup. A single reflective liquid-crystal-on-silicon spatial light modulator working in pure phase modulation generates the phase profiles for both pupils simultaneously. In addition, another liquid crystal spatial light modulator working in transmission operates in pure intensity modulation to produce a large variety of pupil masks for each eye. Subjects perform visual tasks through any predefined variation of the complex pupil function for both eyes. As an example of the system's efficiency, we recorded images of the stimuli through the system as they were projected onto the subject's retina. This instrument proves extremely versatile for designing and testing novel ophthalmic elements and simulating visual outcomes, as well as for further research on binocular vision.

10.
We have developed an adaptive optics (AO) fundus camera to obtain high-resolution retinal images. We use a liquid crystal phase modulator to compensate the aberrations of the eye for better resolution and better contrast in the images. The liquid crystal phase modulator has a wider dynamic range for compensating aberrations than most mechanical deformable mirrors, and its linear phase generation makes it easy to follow eye movements. The wavefront aberration was measured in real time at a sampling rate of 10 Hz, and the closed-loop system was operated at around 2 Hz. We developed software tools to align consecutively acquired images. In our experiments with three eyes, the aberrations of normal eyes were reduced to less than 0.1 μm (RMS) in less than three seconds by the liquid crystal phase modulator. We confirmed that this method is adequate for measuring eyes with large aberrations, including keratoconic eyes. Finally, using the liquid crystal phase modulator, high-resolution images of retinas could be obtained.

11.
When we fixate our gaze on a stable object, our eyes move continuously with extremely small, involuntary and autonomous movements of which we are unaware even as they occur. One role of these fixational eye movements is to prevent the adaptation of the visual system to continuous illumination and thereby inhibit fading of the image. These small random movements are constrained at long time scales so as to keep the target at the centre of the field of view. In addition, the synchronisation between the two eyes is related to binocular coordination, which supports stereopsis. We investigated the roles of behaviours at different time scales, especially how they are expressed in the different spatial directions (vertical versus horizontal), and we also tested the synchronisation between the two eyes. The results show different scaling behaviour for horizontal and vertical movements. When the small ballistic movements, i.e., microsaccades, are removed, the scaling behaviour in the two axes becomes similar. Our findings suggest that microsaccades enhance persistence at short time scales mostly in the horizontal component and much less in the vertical component. We also applied the phase-synchronisation decay method to study the synchronisation between six combinations of binocular fixational eye-movement components, and found that the vertical-vertical components of the right and left eyes are significantly more synchronised than the horizontal-horizontal components. These differences may be due to the need to move the eyes continuously in the horizontal plane in order to match the stereoscopic images for different viewing distances.
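One common way to quantify scaling (persistence) in such eye-position traces is detrended fluctuation analysis; the sketch below is that generic method, not necessarily the exact analysis of the study, and it assumes a trace with at least a few thousand samples:

```python
import numpy as np

def dfa_exponent(signal, scales=None):
    """Detrended fluctuation analysis (DFA) scaling exponent of a 1-D signal."""
    x = np.asarray(signal, float)
    profile = np.cumsum(x - x.mean())                 # integrated series
    if scales is None:
        scales = np.unique(
            np.logspace(np.log10(16), np.log10(len(x) // 4), 12).astype(int))
    flucts = []
    for n in scales:
        n_seg = len(profile) // n
        segs = profile[:n_seg * n].reshape(n_seg, n)
        t = np.arange(n)
        # Remove a linear trend from each segment, then take the RMS residual.
        coef = np.polyfit(t, segs.T, 1)               # shape (2, n_seg)
        trend = np.outer(t, coef[0]) + coef[1]        # shape (n, n_seg)
        flucts.append(np.sqrt(np.mean((segs - trend.T) ** 2)))
    # Scaling exponent alpha: slope of log fluctuation vs. log window size.
    alpha = np.polyfit(np.log(scales), np.log(flucts), 1)[0]
    return alpha
```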

12.
M. Dobeš  J. Martinek  Z. Dobešová 《Optik》2006,117(10):468-473
The precise localization of parts of a human face such as the mouth, nose, or eyes is important for image understanding and recognition. This paper presents a successful computer method for localizing eyes and eyelids using a modified Hough transform. The efficiency of the method was tested on two publicly available face-image databases and one private face-image database, with location correctness better than 96% for a single eye or eyelid and 92% for eye-and-eyelid pairs.
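For illustration, a plain circular Hough accumulator of the kind such eye localization builds on (the paper uses a modified variant; the vote discretization and radius search here are assumptions):

```python
import numpy as np

def circular_hough(edge_points, image_shape, radii):
    """Plain circular Hough transform over candidate radii.

    edge_points : (N, 2) array of (row, col) edge coordinates (e.g. from an
                  edge detector run on the eye region).
    radii       : iterable of candidate circle radii in pixels.
    Returns (row, col, radius) of the strongest circle, e.g. the iris rim.
    """
    h, w = image_shape
    best, best_votes = (0, 0, 0), -1
    thetas = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
    for r in radii:
        acc = np.zeros((h, w), dtype=np.int32)
        # Each edge point votes for all centres at distance r from it.
        cr = (edge_points[:, 0, None] - r * np.sin(thetas)).round().astype(int)
        cc = (edge_points[:, 1, None] - r * np.cos(thetas)).round().astype(int)
        ok = (cr >= 0) & (cr < h) & (cc >= 0) & (cc < w)
        np.add.at(acc, (cr[ok], cc[ok]), 1)
        idx = np.unravel_index(acc.argmax(), acc.shape)
        if acc[idx] > best_votes:
            best_votes = acc[idx]
            best = (idx[0], idx[1], r)
    return best
```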

13.
This study investigates the integration system of head (-to-trunk), eye (-to-head), and retinal position signals for hand pointing. In experiment 1, subjects changed their head and eye positions and pointed at a fixated visual stimulus by using an unseen pointer. In experiment 2, subjects fixated a visual stimulus and pointed at another visual stimulus. The results show that the head and eye position signals contributed linearly to perceptual direction (experiments 1 and 2), and that the coefficients of these signals decrease with peripheral vision and are smaller than the coefficient of the retinal position signal (experiment 2). These results collectively suggest that the integration algorithm of the position signals might be described by the linear summation equation and that the retinal position signal serves a more important role than the other position signals in the visual system.
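A hedged sketch of fitting the weights of such a linear-summation model by least squares across pointing trials (the variable names and the intercept term are assumptions):

```python
import numpy as np

def fit_position_signal_weights(head_pos, eye_pos, retinal_pos, pointed_dir):
    """Least-squares estimate of the weights in a linear-summation model:
    pointed_dir ~= a*head_pos + b*eye_pos + c*retinal_pos + d.

    All inputs are 1-D arrays (degrees) of equal length across trials.
    """
    X = np.column_stack([head_pos, eye_pos, retinal_pos, np.ones(len(head_pos))])
    coeffs, *_ = np.linalg.lstsq(X, pointed_dir, rcond=None)
    a, b, c, d = coeffs
    return a, b, c, d
```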

14.
Theoretical calculations of the polychromatic modulation transfer function (MTF) and wave-front aberration were performed with physiological eye models. These eye models have an amount of spherical aberration that is representative of a normal population of pseudophakic eyes implanted with two different types of intraocular lens (IOL) made from high-refractive-index silicone. The theoretical calculations were compared with the contrast sensitivity function (CSF) measured under mesopic lighting conditions and with the wave-front aberration (obtained with a Hartmann-Shack wave-front sensor) collected from 37 patients bilaterally implanted with the same types of lens. We investigated how the MTF and wave-front aberration predicted by the eye models relate to the CSF and wave-front aberration measured in eyes implanted with these IOLs. The improvements in MTF and wave-front aberration predicted by the models correlated well with the improvements measured in practice. Physiological eye models are therefore useful tools for IOL design.

15.
This study quantifies the effects of the surround luminance and noise of a given stimulus on the shape of the spatial luminance contrast sensitivity function (CSF) and proposes an adaptive image-quality evaluation method. The proposed method extends the square-root integral (SQRI) model. The non-linear behaviour of the human visual system is taken into account through the CSF: the model is defined as the square-root integral of the product of the display modulation transfer function and the CSF. The CSF term in the original SQRI is replaced by the surround-adaptive CSF quantified in this study, and it is divided by the Fourier transform of the given stimulus to compensate for noise adaptation.
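The SQRI is commonly written as (1/ln 2) ∫ sqrt(MTF(u)·CSF(u)) du/u, i.e. an integral over the logarithm of spatial frequency u. A minimal numerical sketch of that basic form, without the surround-adaptive CSF or the noise-compensation term added in the study:

```python
import numpy as np

def sqri(freqs_cpd, display_mtf, csf):
    """Square-root integral (SQRI) image-quality metric.

    freqs_cpd, display_mtf, csf are equal-length 1-D arrays sampled at the
    same spatial frequencies (cycles/degree).
    """
    u = np.asarray(freqs_cpd, float)
    integrand = np.sqrt(np.asarray(display_mtf, float) * np.asarray(csf, float))
    # Trapezoidal integration over ln(u); result is in just-noticeable-difference units.
    return np.trapz(integrand, np.log(u)) / np.log(2.0)
```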

16.
Begoña Domenech  David Mas  Carlos Illueca 《Optik》2010,121(24):2221-2223
The visual system tends to favour one eye over the other in perceptual or motor tasks. This effect, called ocular dominance, makes the small fixational movements of one eye smaller and more precise than those of the other. These dynamic effects are usually small, and static devices are not capable of detecting differences between the eyes. In recent years ophthalmic devices have become more and more precise, so they can be sensitive to such variability. The hypothesis posed here is that the variability of measures acquired in this way is affected by ocular dominance. With a Pentacam system we measured several parameters of the anterior segment of the eye. Our findings show that variables measured in the dominant eye are less dispersed than those measured in the non-dominant eye, although the limited accuracy of the device can mask this effect. The trend is confirmed by a contrast experiment and by previous work, so we accept the validity of our hypothesis. Our main conclusion is that systematically choosing the right eye in analyses of reliability or reproducibility can bias the variability of the results, and we therefore suggest considering dominance effects.

17.
Detecting and tracking dim, small targets in infrared images and videos is one of the most important techniques in many computer vision applications, such as video surveillance and precise infrared imaging guidance. Recently, more and more algorithms based on the human visual system (HVS) have been proposed to detect and track dim, small infrared targets. In general, the HVS involves at least three mechanisms: the contrast mechanism, visual attention, and eye movement. However, most existing algorithms simulate only one of these mechanisms, which leads to a number of drawbacks. A novel method that combines the three mechanisms is proposed in this paper. First, a group of difference-of-Gaussians (DOG) filters, which simulate the contrast mechanism, is used to filter the input image. Second, visual attention, simulated by a Gaussian window, is applied at a point near the target (the attention point) to further enhance the dim, small target. Finally, a proportional-integral-derivative (PID) algorithm is introduced, for the first time in this context, to predict the attention point in the next frame, simulating human eye movement. Experimental results on infrared images with different types of backgrounds demonstrate the high efficiency and accuracy of the proposed method in detecting and tracking dim, small targets.
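A hedged sketch of the three pieces named in the abstract: a difference-of-Gaussians contrast filter, a Gaussian attention window, and a PID-style predictor for the attention point. The filter scales and PID gains are assumptions, not the paper's values:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_enhance(frame, sigma_center=1.0, sigma_surround=3.0):
    """Difference-of-Gaussians filtering (contrast mechanism): small bright
    targets stand out against a smooth background."""
    return gaussian_filter(frame, sigma_center) - gaussian_filter(frame, sigma_surround)

def gaussian_attention(shape, center, sigma=10.0):
    """Gaussian attention window centred on the predicted attention point."""
    rows, cols = np.indices(shape)
    return np.exp(-((rows - center[0]) ** 2 + (cols - center[1]) ** 2) / (2 * sigma ** 2))

class PIDPredictor:
    """1-D PID corrector applied independently to the row and column of the
    attention point to anticipate its position in the next frame."""
    def __init__(self, kp=0.6, ki=0.05, kd=0.2):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, predicted, measured):
        error = measured - predicted
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        return predicted + self.kp * error + self.ki * self.integral + self.kd * derivative
```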

18.
An experiment was carried out to determine whether sudden loss of vision in one eye would result in a bias in sound localization in the direction of the viewing eye. Fifteen normal-sighted young adults were tested binocularly and with the right or left eye covered. Within each vision condition, sound localization was assessed using three different arrays of six loudspeakers, positioned frontally and on the right and left sides of space, in combination with two stimuli, a one-third octave noise band centred at 4 kHz and broadband noise. These assessed the utilization of mainly the interaural level difference cue and binaural and spectral cues in combination, respectively. One block of 90 speaker identification trials was presented for each of the 18 conditions. For the lateral arrays in combination with the broadband noise stimulus, monocular vision resulted in decreased accuracy on the contralateral side. Errors were in the direction of the viewing eye. While monocularity resulted in performance decrements with the 4-kHz stimulus, the error pattern was not consistent. These results support the hypothesis of visually guided auditory adaptation of binaural and spectral cues in combination in response to sudden deprivation of vision in one eye.

19.
Effects of motion, implied direction in a static stimulus, and displacement on postural control were examined independently. In Experiment 1, rotation of a random-dot stimulus was presented. In Experiments 2 and 3, photographic slides of natural scenes were used; participants closed their eyes during stimulus rotation to eliminate motion information. In Experiment 2, the stimulus was presented upright initially, then presented again with a tilt. In Experiment 3, the order was reversed to separate the effects of implied direction and displacement. Results showed that motion, implied direction, and displacement all had some effect on postural control, although visual motion information has been presumed to have the principal effect on postural control. The results suggest that the effects of implied direction might reflect immediate processing of information, whereas the effects of displacement and motion might reflect continuous processing of information.

20.
Perceptual distortions referred to as aftereffects may arise following exposure to an adapting sensory stimulus. The study of aftereffects has a long and distinguished history [Kohler and Wallach, Proc. Am. Philos. Soc. 88, 269-359 (1944)], and a range of aftereffects have been well described in sensory modalities such as the visual system [Barlow, in Vision: Coding and Efficiency (Cambridge University Press, Cambridge, 1990)]. In the visual system these effects have been interpreted as evidence for a population of cells or channels specific for certain features of a stimulus. However, there has been relatively little work examining auditory aftereffects, particularly with respect to spatial location. In this study we examined the effects of a stationary adapting noise stimulus on subsequent auditory localization in the vicinity of the adapting stimulus. All human subjects were trained to localize short bursts of noise in a darkened anechoic environment. Adaptation was achieved by presenting 4 min of continuous noise at the start of each block of trials and was maintained by a further 15-s noise burst between trials. The adapting stimulus was located either directly in front of the subject or 30 degrees to the right of the midline. Subjects were required to determine the location of noise-burst stimuli (150 ms) in the proximity of the adapting stimulus following each interstimulus period of adaptation. The results demonstrated that, following adaptation, perceived sound sources were displaced radially away from the location of the adapting stimulus. These data are more consistent with a channel-based or place-based process of sound localization than with a simple level-based adaptation model. A simple "distribution shift" model that assumes an array of overlapping spatial channels is advanced to explain the psychophysical data.
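For illustration, a toy version of such an overlapping-channel ("distribution shift") model: adaptation lowers the gain of channels tuned near the adapter, so the centroid of the population response is pushed away from it. All channel parameters are assumptions:

```python
import numpy as np

def perceived_azimuth(stimulus_deg, adapter_deg=None,
                      channel_centres=np.arange(-90, 91, 15),
                      tuning_width=30.0, adapt_width=30.0, adapt_strength=0.5):
    """Population (channel-based) estimate of sound azimuth.

    Overlapping Gaussian spatial channels respond to the stimulus; adaptation
    reduces the gain of channels near the adapter, shifting the estimate
    away from the adapter. (Illustrative only.)
    """
    centres = np.asarray(channel_centres, float)
    gain = np.ones_like(centres)
    if adapter_deg is not None:
        gain -= adapt_strength * np.exp(-0.5 * ((centres - adapter_deg) / adapt_width) ** 2)
    resp = gain * np.exp(-0.5 * ((centres - stimulus_deg) / tuning_width) ** 2)
    return float(np.sum(resp * centres) / np.sum(resp))

# Example: after adapting at 0 deg, a source at 10 deg is heard farther to the right.
print(perceived_azimuth(10.0), perceived_azimuth(10.0, adapter_deg=0.0))
```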
