Similar Literature
 19 similar records found
1.
Edge information is an important feature by which the human eye observes and recognizes objects. Since the edge characteristics of a blurred image change markedly relative to the sharp original, a no-reference blur assessment method based on edge sharpness is proposed. First, all step edges in the image are located by the method described in the paper; next, a suitable subset of edges is selected according to several rules; finally, the sharpness of the selected edges is computed as the measure of image blur. Experimental results show that, compared with the full-reference SSIM model, the method better evaluates Gaussian blur, defocus blur, and similar blur types, correlates strongly with subjective assessment results, better matches the characteristics of the human visual system, and is easy to implement.
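As a rough illustration of the edge-sharpness idea (not the paper's exact edge-selection rules), a no-reference score can be computed as the mean gradient magnitude over candidate step-edge pixels; the function name and threshold below are illustrative:

```python
import numpy as np

def edge_sharpness_score(img, grad_thresh=0.1):
    """No-reference sharpness: mean gradient magnitude over candidate
    step-edge pixels (a crude stand-in for the paper's edge selection)."""
    img = img.astype(float)
    gy, gx = np.gradient(img)            # vertical / horizontal gradients
    mag = np.hypot(gx, gy)
    edges = mag > grad_thresh * mag.max()  # crude step-edge candidates
    if not edges.any():
        return 0.0
    return float(mag[edges].mean())

# A sharp step edge scores higher than a smoothed one.
sharp = np.tile(np.array([0.0, 0.0, 1.0, 1.0]), (4, 1))
blurred = np.tile(np.array([0.0, 0.33, 0.67, 1.0]), (4, 1))
```

Blurring spreads each edge over more pixels, lowering the per-pixel gradient magnitude and hence the score.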

3.
To improve the consistency between objective scores and human visual perception in fused-image quality assessment, existing fusion assessment methods are analyzed and a method based on a complex-valued representation of image structure is proposed. The gradient of the luminance component is computed to form a complex gradient matrix that characterizes the image's structural information. Since mutual information and similar quantities cannot be computed directly on complex values, block-wise singular value decomposition is applied and the resulting matrix is taken as the measurement matrix, from which two fusion quality metrics are computed. Experimental results show that the method improves consistency with human visual perception, giving scores of 3.748 5 and 3.722 2 for the well-performing pyramid and wavelet fusion methods, agreeing with human vision better than traditional methods.

4.
夏振平  李晓华  陈磊  王坚 《光学学报》2015,35(1):111001
To assess the motion image quality of stereoscopic displays more objectively and accurately, a motion-blur assessment method for stereoscopic displays is established on the basis of a 2D motion-blur assessment model and stereoscopic motion perception experiments. Motion blur is simulated separately on the two binocular parallax images, which are then re-rendered in 3D, and visual perception experiments compare the actual and simulated appearance. The results show that binocular parallax has no significant effect on perceived blur, so an objective metric is built by averaging the blur of the two parallax images. The method makes the assessment of stereoscopic motion image quality more objective and accurate, and provides a reference for improving it.

5.
《光学技术》2015,(5):396-399
Sharpness is one of the most common indicators of image quality, yet existing sharpness models do not adequately account for the luminance-masking property of human vision. Building on root-mean-square (RMS) contrast and incorporating luminance masking, a no-reference objective sharpness model is constructed by computing the perceived contrast of the regions of visual interest (details, edges, and textures). Validation on the IVC database shows that, compared with four existing sharpness (blur) models, the proposed model agrees more closely with subjective perception while requiring little computation and a short running time, making it a simple and effective sharpness metric.
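A minimal sketch of the perceived-contrast idea, with a simple Weber-style normalization standing in for the luminance-masking model (the paper's exact formulation and region selection are not reproduced here):

```python
import numpy as np

def perceived_contrast(img, eps=1e-6):
    """RMS contrast normalized by mean luminance: a crude Weber-style
    stand-in for luminance masking (brighter backgrounds mask the same
    absolute deviation more strongly). Not the paper's exact model."""
    img = np.asarray(img, dtype=float)
    return float(img.std() / (img.mean() + eps))
```

Two patterns with the same absolute deviation but different mean luminance then receive different perceived contrasts, mimicking the masking effect.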

6.
Among image quality assessment methods, the structural similarity (SSIM) algorithm is simple, efficient, and reasonably accurate, but it evaluates images with local distortion or mixed distortion types poorly. To address SSIM's equal treatment of all image regions, and taking temporal properties of human vision into account, an improved assessment method based on regional contrast and structural similarity (RCSSIM) is proposed. The algorithm fuses regional gray-level contrast with SSIM, normalizing the weighted result into a contrast-structural-similarity score between the reference and distorted images. Experiments on the LIVE database show that, compared with SSIM, RCSSIM raises the Pearson linear correlation coefficient by about 0.015 and lowers the root-mean-square error by about 0.55, agreeing more closely with subjective test results and performing better overall.
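The contrast-weighted SSIM idea can be sketched as blockwise SSIM with weights derived from each reference block's contrast; the constants, block size, and weighting rule below are illustrative, not the paper's:

```python
import numpy as np

def block_ssim(x, y, C1=1e-4, C2=9e-4):
    """Standard SSIM index for one pair of blocks."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))

def rcssim(ref, dist, bs=8):
    """Contrast-weighted SSIM sketch: blocks with higher reference
    contrast (std) get larger weights, approximating region weighting."""
    scores, weights = [], []
    H, W = ref.shape
    for i in range(0, H - bs + 1, bs):
        for j in range(0, W - bs + 1, bs):
            rb, db = ref[i:i+bs, j:j+bs], dist[i:i+bs, j:j+bs]
            scores.append(block_ssim(rb, db))
            weights.append(rb.std() + 1e-6)  # avoid zero total weight
    w = np.array(weights)
    return float(np.dot(scores, w) / w.sum())
```

An identical pair scores 1.0; any distortion pulls the weighted score below 1, with high-contrast regions contributing more.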

7.
姚军财  刘贵忠 《物理学报》2018,67(10):108702-108702
Objective image quality assessment plays an important role in image and video transmission, codecs, and quality of service, yet existing methods often ignore image content features and their visual perception, leaving a gap between objective scores and subjective perception. Combining image-content complexity with the masking, contrast-sensitivity, and nonlinear luminance-perception properties of the human eye, an objective assessment method based on the perception of image content is proposed. The image is first transformed through a nonlinear luminance-perception model to obtain a perceived-intensity map. The intensities are then summed using the contrast-sensitivity value and the local mean contrast as weighting factors; the weighted sum serves as the perceived image content and defines a perception model. Finally, the model is applied to both the reference and distorted images, and the intensity difference between the two forms the basis of the objective quality score. Simulations on 47 reference and 1549 test images from the LIVE, TID2008, and CSIQ databases, with comparisons against typical models such as SSIM, VSNR, FSIM, and PSNRHVS, show that the Pearson linear and Spearman rank correlation coefficients between the proposed scores and subjective scores exceed those of SSIM by an average of 9.5402% and 3.2852% respectively, with still larger gains over PSNRHVS and VSNR. The method is therefore effective and feasible, and accounting for the perception of image content and its complexity improves the consistency between objective and subjective assessment.
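A heavily simplified sketch of the pipeline's shape, using a Stevens-style power law as a stand-in for the paper's nonlinear luminance-perception model; the actual model, weighting factors, and score mapping are not specified in the abstract and are assumed here:

```python
import numpy as np

def perceived_intensity(img, gamma=1.0 / 3.0):
    """Nonlinear luminance response (Stevens-style power law), a stand-in
    for the paper's luminance-perception model."""
    return np.power(np.clip(img, 0.0, 1.0), gamma)

def perceptual_score(ref, dist):
    """Quality from the mean perceived-intensity difference between the
    reference and distorted images; larger difference -> lower score."""
    d = np.abs(perceived_intensity(ref) - perceived_intensity(dist)).mean()
    return float(1.0 / (1.0 + d))
```

Identical images score exactly 1.0, and the score decreases monotonically as the perceived-intensity difference grows.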

8.
《光学技术》2015,(5):445-450
Since the high-frequency wavelet subbands carry the edges, textures, and other details to which the human eye attends, a stereoscopic image quality assessment method based on wavelet high-frequency reconstruction is proposed, comprising a left-right view quality term and a depth-perception term. The human visual system's processing of the original and distorted left and right views is simulated, and the structural similarity algorithm yields the view quality score. The weighted high-frequency subbands are reconstructed into detail maps, absolute-difference maps between the original and distorted detail maps are computed for each view, and structural similarity between the two absolute-difference maps yields the depth-perception score. The two scores are fused by weighting into the final stereoscopic quality score. Over the whole experiment, the Pearson linear correlation coefficient exceeds 0.94, the Spearman rank correlation coefficient exceeds 0.93, and the root-mean-square error is close to 5.5, in good agreement with human vision.

9.
An image fusion assessment method based on structural similarity and regions of interest   Cited by 3 (self-citations: 0, others: 3)
张勇  金伟其 《光子学报》2011,40(2):311-315
For the problem of evaluating infrared and visible image fusion, building on an analysis of the structural similarity algorithm and incorporating human visual characteristics, a fusion assessment method based on structural similarity and regions of interest is proposed. The differing image features produced by the infrared and visible sensors are used to partition each image into regions of interest and remaining regions. According to the importance the human eye assigns to each region, different weighting factors are applied, highlighting important image features more than previous assessment methods...

10.
A no-reference objective sharpness assessment method incorporating the sensitivity characteristics of human vision is proposed. High-pass filtering partitions the image into visually sensitive (detail) and insensitive regions; sharpness is computed for each region and the results are combined by weighted summation into a whole-image score. Tested against six existing no-reference sharpness models on four public image databases, the model's scores agree well with subjective scores and the computation is fast.
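The split-then-weight scheme might look roughly as follows, with a box-filter high-pass and gradient energy standing in for the paper's filters and per-region sharpness measure (all parameters illustrative):

```python
import numpy as np

def box_blur(img, k=3):
    """Simple k x k box filter with edge padding."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    h, w = img.shape
    return sum(p[i:i+h, j:j+w] for i in range(k) for j in range(k)) / (k * k)

def region_sharpness(img, w_sensitive=0.8):
    """Split pixels into detail (high-pass energy above the median) and
    smooth regions, score each by gradient energy, blend with weights."""
    img = img.astype(float)
    hp = np.abs(img - box_blur(img))       # crude high-pass response
    mask = hp > np.median(hp)              # visually sensitive region
    gy, gx = np.gradient(img)
    energy = gx ** 2 + gy ** 2
    s_sens = energy[mask].mean() if mask.any() else 0.0
    s_rest = energy[~mask].mean() if (~mask).any() else 0.0
    return float(w_sensitive * s_sens + (1 - w_sensitive) * s_rest)
```

Blurring an image lowers the gradient energy in the detail region, so the blurred version scores below the original.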

11.
柯洪昌  孙宏彬 《中国光学》2015,8(5):768-774
To address the weakness of traditional visual saliency models in top-down task guidance and dynamic information processing, a visual saliency model incorporating motion features is designed and implemented. The model extracts static features in the luminance, color, and orientation channels and extracts motion features by a multi-scale difference method; each channel is then filtered and differenced to produce a saliency map. When generating the global saliency map, a multi-channel parameter-estimation method is proposed that computes the similarity between image regions of interest and eye-movement regions of interest, so that the target position can be located accurately in the image. Experiments on 20 video sequences (50 frames each) show that the algorithm attains an average similarity of 0.87 for the extracted attention foci (target regions), and that choosing the weight parameters of each feature channel according to the task context effectively improves target-search efficiency.

12.
A perceptual sharpness metric is proposed to evaluate how well the human eye can discern the details and edges of infrared-visible color fusion images. First, a contrast sensitivity function model suppresses the frequency components the eye cannot perceive under the given viewing conditions. Next, a perceived-contrast model is constructed from a local band-limited contrast model combined with luminance masking. Finally, the perceived contrast of the regions of visual interest (detail and edge regions) is computed to score the perceptual sharpness of the fusion image. Experiments show that, compared with five existing objective sharpness (blur) metrics for color images, the metric, which accounts for human visual characteristics, agrees better with subjective perception and can effectively evaluate the sharpness of color fusion images.

13.
Infrared and visible image fusion is a key problem in the field of multi-sensor image fusion. To better preserve the significant information of the infrared and visible images in the final fused image, the saliency maps of the source images are introduced into the fusion procedure. Firstly, under the framework of the joint sparse representation (JSR) model, the global and local saliency maps of the source images are obtained from the sparse coefficients. Then, a saliency detection model is proposed that combines the global and local saliency maps into an integrated saliency map. Finally, a weighted fusion algorithm based on the integrated saliency map is developed to carry out the fusion process. The experimental results show that our method is superior to the state-of-the-art methods in terms of several universal quality evaluation indexes as well as visual quality.

14.
Since existing no-reference image quality assessment (IQA) algorithms are not consistent with subjective assessment, a novel no-reference IQA method is proposed that accounts for three types of image distortion: noise, blur, and blocking effects. Firstly, the standard deviation of image noise is estimated by a modified wavelet median estimator. Secondly, the blur degree of the image is obtained by counting edge pixels. Thirdly, the blocking effect is represented by characteristics of image pixel blocks. Finally, the assessment model is established by combining these three distortion measures, with weighting coefficients fitted to the differential mean opinion scores (DMOS) provided in the LIVE IQA database. The experimental results indicate that the algorithm's scores not only agree with PSNR in objective assessment but are also consistent with the DMOS in subjective assessment.
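The wavelet-median noise estimator is a standard robust technique: the noise standard deviation is the median absolute value of the finest diagonal (HH) wavelet coefficients divided by 0.6745. A plain Haar-based sketch, not the paper's modified variant:

```python
import numpy as np

def noise_sigma_estimate(img):
    """Robust noise estimate: median(|HH|) / 0.6745, where HH is the
    finest-level diagonal Haar wavelet subband. For i.i.d. Gaussian
    noise of std sigma, HH coefficients also have std sigma."""
    img = img.astype(float)
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    h = min(a.shape[0], b.shape[0], c.shape[0], d.shape[0])
    w = min(a.shape[1], b.shape[1], c.shape[1], d.shape[1])
    hh = (a[:h, :w] - b[:h, :w] - c[:h, :w] + d[:h, :w]) / 2.0
    return float(np.median(np.abs(hh)) / 0.6745)
```

The 0.6745 factor is the median of |N(0, 1)|, so the estimator is unbiased for Gaussian noise and robust to image edges, which affect only a minority of coefficients.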

15.
Multiview video plus depth is one of the mainstream representations of 3D scenes in emerging free-viewpoint video, which generates virtual 3D synthesized images through a depth-image-based-rendering (DIBR) technique. However, the inaccuracy of depth maps and imperfect DIBR techniques result in geometric distortions that seriously deteriorate the user's visual perception. An effective 3D synthesized image quality assessment (IQA) metric can simulate human visual perception and determine the application feasibility of the synthesized content. In this paper, a no-reference IQA metric for 3D synthesized images based on visual-entropy-guided multi-layer feature analysis is proposed. According to the energy entropy, the geometric distortions are divided into two visual attention layers, namely a bottom-up layer and a top-down layer. The feature of salient distortion is measured by regional proportion plus a transition threshold on the bottom-up layer. In parallel, the key distribution regions of insignificant geometric distortion are extracted by a relative total variation model, and the features of these distortions are measured by the interaction of decentralized and concentrated attention on the top-down layer. By integrating the features of both layers, a more visually perceptive quality evaluation model is built. Experimental results show that the proposed method is superior to the state-of-the-art in assessing the quality of 3D synthesized images.

16.
Fusion of visible and infrared images aims to combine source images of the same scene into a single image with more feature information and better visual performance. In this paper, the authors propose a fusion method for visible and infrared images based on multi-window visual saliency extraction. To extract feature information from the source images, we design a local-window-based frequency-tuned method: visual saliency maps are calculated for the feature information under different local windows, and these maps give the attention weight of each pixel and region. Fusion is then performed by a simple weighted combination. Compared with classical and state-of-the-art approaches, the experimental results demonstrate that the proposed approach runs efficiently and performs better than other methods, especially in visual performance and detail enhancement.
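Frequency-tuned saliency scores each pixel by its distance from the image mean after smoothing. This global sketch uses a box blur in place of the usual Gaussian and omits the paper's multi-window extension:

```python
import numpy as np

def ft_saliency(img):
    """Frequency-tuned saliency, global variant: per-pixel distance
    between a smoothed image and the global mean. A box blur stands in
    for the usual Gaussian; the local-window extension is omitted."""
    img = np.asarray(img, dtype=float)
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    smooth = sum(p[i:i+h, j:j+w] for i in range(3) for j in range(3)) / 9.0
    return np.abs(smooth - img.mean())
```

A region that deviates from the overall scene statistics, such as a bright blob on a dark background, receives higher saliency than the background.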

17.
Military, navigation, and concealed weapon detection applications need different imaging modalities, such as visible and infrared, to monitor a targeted scene. These modalities provide complementary information, and for better situational awareness it has to be integrated into a single image. Image fusion is the process of integrating complementary source information into a composite image. In this paper, we propose a new image fusion method based on saliency detection and two-scale image decomposition. The method is beneficial because the visual saliency extraction process introduced here highlights the salient information of the source images very well, and a new weight-map construction process based on visual saliency integrates that visually significant information into the fused image. In contrast to most multi-scale image fusion techniques, the proposed technique uses only a two-scale decomposition, so it is fast and efficient. Our method is tested on several image pairs and evaluated qualitatively by visual inspection and quantitatively using objective fusion metrics. Outcomes are compared with state-of-the-art multi-scale fusion techniques, and the results reveal that the proposed method performs comparably or better.
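A two-scale fusion sketch under simplifying assumptions: a box-filter base/detail split and a crude mean-distance saliency in place of the paper's saliency detection and weight-map construction:

```python
import numpy as np

def box_blur(img, k=5):
    """Simple k x k box filter with edge padding (base-layer extractor)."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    h, w = img.shape
    return sum(p[i:i+h, j:j+w] for i in range(k) for j in range(k)) / (k * k)

def two_scale_fuse(a, b):
    """Two-scale fusion sketch: base = blur, detail = residual; detail
    layers are combined with weights from a crude per-pixel saliency
    (distance to the image mean), base layers are averaged."""
    base_a, base_b = box_blur(a), box_blur(b)
    det_a, det_b = a - base_a, b - base_b
    sal_a = np.abs(a - a.mean())
    sal_b = np.abs(b - b.mean())
    w = sal_a / (sal_a + sal_b + 1e-6)          # per-pixel weight map
    fused_base = 0.5 * (base_a + base_b)        # average the base layers
    fused_det = w * det_a + (1 - w) * det_b     # saliency-weighted details
    return fused_base + fused_det
```

A quick sanity check is that fusing an image with itself returns (numerically) the image, since the weights cancel on identical detail layers.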

18.
方志明  崔荣一  金璟璇 《物理学报》2017,66(10):109501-109501
A video saliency detection algorithm combining spatial and temporal information is proposed. For a single frame, inspired by the hierarchical perception of the visual cortex and Gestalt visual psychology, a hierarchical static saliency detection method is proposed. At the bottom layer, feature images consistent with biological vision (double-opponent color and luminance feature images) are synthesized through a simplified nonlinear model, producing multiple candidate salient regions; at the middle layer, the most competitive candidates are selected as local salient regions according to the minimum Frobenius-norm (F-norm) property of matrices; at the top layer, the core theory of Gestalt psychology is used to integrate the local salient regions obtained at the middle layer into a holistically perceived spatial saliency map. For frame sequences, assuming that moving objects are consistent in position, motion magnitude, and motion direction, the optical-flow points detected by the Lucas-Kanade algorithm are binary-classified to exclude noise points, and the motion magnitude of the flow points measures the motion saliency of moving objects. Finally, based on the differing sensitivities of human vision to dynamic and static information, a general model for fusing the spatial and temporal saliency maps is proposed. Experimental results show that the method suppresses noise in the video background, handles problems such as sparse moving targets, and detects salient video regions well in complex scenes.

19.
We propose a novel image enhancement method based on salient region detection and a layered difference representation of 2D histograms. We first obtain the visually salient region corresponding to maximal human attention using saliency filters. Then, we obtain a difference vector for the salient region by solving a constrained optimization problem of the layered difference representation at a specified layer. Finally, the new difference vector and the difference vector of the original image are aggregated to enhance the salient region while protecting other regions from overstretching or brightness shift. Experimental results, including comparisons with other methods, show that the proposed algorithm produces better-enhanced images than existing algorithms.

