Similar Articles
 18 similar articles found (search time: 156 ms)
1.
Because visible images have poor visibility in low-light environments, a fusion algorithm based on contrast enhancement and Cauchy fuzzy functions is proposed to improve the fusion of infrared and low-light visible images. First, an improved guided-filter adaptive enhancement improves the visibility of dark regions in the low-light visible image. Second, the non-subsampled shearlet transform (NSST) decomposes the infrared image and the enhanced visible image into the corresponding low- and high-frequency sub-bands. Next, the low-frequency sub-bands are fused using a Cauchy membership function constructed from intuitionistic fuzzy sets, and the high-frequency sub-bands are fused using an adaptive dual-channel spiking cortical model. Finally, the inverse NSST reconstructs the fused image from the fused low- and high-frequency sub-bands. Experimental results show that, compared with other fusion algorithms, the proposed algorithm effectively enhances the dark regions of the low-light visible image and preserves more background information, improving the contrast and clarity of the fused image.
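The Cauchy-shaped membership used for the low-frequency fusion can be sketched as follows. The centre and scale parameters and the normalise-and-weight step are illustrative assumptions, not the paper's exact intuitionistic-fuzzy construction:

```python
def cauchy_membership(x, centre, scale):
    # Standard Cauchy-shaped membership: 1 at the centre, decaying with distance.
    return 1.0 / (1.0 + ((x - centre) / scale) ** 2)

def fuse_lowfreq(ir_coeff, vis_coeff, centre=128.0, scale=64.0):
    # Weight each source coefficient by its membership and normalise -- an
    # illustrative stand-in for the paper's intuitionistic-fuzzy rule.
    w_ir = cauchy_membership(ir_coeff, centre, scale)
    w_vis = cauchy_membership(vis_coeff, centre, scale)
    return (w_ir * ir_coeff + w_vis * vis_coeff) / (w_ir + w_vis)
```

A coefficient near the membership centre gets full weight; one a full scale away gets weight 1/2, so mid-range (well-exposed) content dominates the fused low-frequency band.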

2.
娄熙承  冯鑫 《光子学报》2021,50(3):180-193
To improve the visibility of fused images and address the missing edge features and blurred details of traditional infrared and visible image fusion algorithms, a fusion algorithm combining a convolutional neural network with guided filtering under a latent low-rank representation framework is proposed. The algorithm first decomposes the source images via latent low-rank representation into their low-rank and saliency components. Next, a convolutional neural network derives weight maps from the feature information of the source images. The weight maps are then edge-sharpened by guided filtering, and the optimized weight maps are fused with the low-rank and saliency components of the source images to obtain the low-rank and saliency components of the fused image. Finally, these two components are superimposed to produce the final fused image. Experimental results show that the algorithm outperforms traditional infrared and visible image fusion algorithms in both subjective evaluation and objective metrics.

3.
To enhance the visibility of infrared and visible image fusion and overcome detail loss, inconspicuous targets, and low contrast in fusion results, a fusion method based on two-scale decomposition and saliency extraction is proposed. First, based on human visual perception theory and the eye's differing sensitivity to different image regions, the source images in this cross-modal fusion task are decomposed at separate levels, avoiding the mixing of high- and low-frequency components and reducing halo effects: a two-scale decomposition splits the source infrared and visible images into base and detail layers. This decomposition represents the images well and runs in real time. For base-layer fusion, a weighted-average rule based on a visual saliency map (VSM) is proposed; the VSM extracts the salient structures and targets of the source images well, and fusing the base layers with the VSM-based weighted average avoids the contrast loss caused by a plain weighted-average strategy, giving the fused image better visibility. For detail-layer fusion, the Kirsch operator extracts saliency maps from the source images, a VGG-19 network then extracts features from the saliency maps to obtain weight maps, and these weight maps are fused with the detail layers to obtain the fused detail layer. The Kirsch operator quickly extracts image edges in eight directions, so the saliency maps contain more edge information and less noise, and the VGG-19 network extracts deeper image features, so the obtained weight maps will contain…
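The two-scale split into base and detail layers described above can be sketched with a plain mean filter. The filter choice and radius here are assumptions, since the abstract does not pin them down:

```python
def box_blur(img, r):
    """Mean filter of radius r with clamped borders -- produces the base layer."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            s, n = 0.0, 0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    s += img[yy][xx]
                    n += 1
            out[y][x] = s / n
    return out

def two_scale(img, r=1):
    # Base layer = smoothed image; detail layer = what the smoothing removed.
    base = box_blur(img, r)
    detail = [[img[y][x] - base[y][x] for x in range(len(img[0]))]
              for y in range(len(img))]
    return base, detail
```

By construction, base + detail reconstructs the input exactly, so the decomposition itself loses nothing; all fusion decisions happen in the two layers.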

4.
熊芳芳  肖宁 《光学技术》2019,45(3):355-363
To address the insufficient detail preservation and low target registration accuracy of current infrared (IR) and visible (VI) image fusion, a fusion algorithm coupling multi-scale 2D empirical mode decomposition with a non-subsampled directional filter bank (NSDFB) is designed. The entropies of the infrared and visible images are computed and compared, and the residual of the image with the larger entropy is computed. A multi-scale directional decomposition model built from 2D empirical mode decomposition (2D-EMD) and the NSDFB mechanism transforms the residual of the higher-entropy image and the lower-entropy image into high-frequency directional coefficients and low-frequency coefficients, capturing the detail and feature information of the source images. For the low-frequency coefficients, a weighted average serves as the fusion rule; for the high-frequency coefficients, the fusion rule is defined by regional energy contrast and sharpness. The inverse 2D-EMD multi-scale transform generates the new image from the fused low- and high-frequency coefficients. Experiments show that, compared with commonly used infrared and visible image fusion methods, the proposed algorithm achieves higher fusion quality, and the output images have better contrast and richer detail.

5.
To make the fusion result highlight targets and uncover more detail, an infrared and visible image fusion method based on target extraction and guided-filter enhancement is proposed. First, target regions are extracted from the infrared image using two-dimensional Tsallis entropy and a graph-based visual saliency model. The visible and infrared images are then each decomposed with the non-subsampled shearlet transform (NSST), and the resulting low-frequency components are enhanced by guided filtering. The low-frequency component of the fused image is obtained from the enhanced infrared and visible low-frequency components with a target-extraction-based fusion rule, while the high-frequency components are determined by directional sub-band information and a maximum-selection rule. Finally, the fused image is obtained by the inverse NSST. Extensive experiments show that the method enhances the spatial detail of the fused image while effectively highlighting targets, and outperforms methods based on the Laplacian pyramid, the wavelet transform, the stationary wavelet transform, the non-subsampled contourlet transform (NSCT), and target extraction with NSCT in information entropy, average gradient, and other metrics.
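The maximum-selection rule used here for high-frequency sub-bands (and in several of the other methods in this list) is coefficient-wise selection by absolute value; a minimal sketch:

```python
def fuse_highfreq_abs_max(hf_a, hf_b):
    # Keep, at each position, the coefficient with the larger magnitude:
    # larger absolute values in a high-frequency sub-band indicate stronger edges.
    return [[a if abs(a) >= abs(b) else b
             for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(hf_a, hf_b)]
```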

7.
To effectively overcome the random clutter in single-band forward-looking infrared (FLIR) images, such as point-like clutter, wave stripes, and locally bright regions, a sea-surface clutter smoothing method based on multi-band FLIR image fusion is studied. Exploiting the complementarity and differences between multi-band FLIR images, the method fuses their information to smooth and suppress the cluttered sea background while preserving the features of ship targets, providing a high-quality image for ship detection. First, the discrete wavelet transform decomposes the multi-band source images into low- and high-frequency sub-bands: the high-frequency sub-bands mainly contain the detail of the background and ship targets, while the low-frequency sub-bands mainly contain brightness and contrast information. For the high-frequency sub-bands, after a maximum-absolute-value rule produces the high-frequency fused image, the regional energy of each pixel modulates this image to suppress background detail while retaining ship-target detail. For the low-frequency sub-bands, averaging fuses the sub-bands and guided filtering smooths the low-frequency fused image. Finally, the inverse wavelet transform of the high- and low-frequency fused images reconstructs the fused image. Simulation experiments on actually acquired multi-band FLIR images compare the method with six image-smoothing filters: bilateral filtering, guided filtering, gradient minimization, relative total variation, bilateral texture filtering, and rolling guidance filtering. The results show that, by effectively fusing multi-band information and moving the smoothing from the spatial domain to the frequency domain, the method smooths random sea clutter well, preserves the structure, gray level, and contrast of ship targets, greatly improves their separability, and outperforms the six comparison methods.

8.
To address the insufficiently prominent targets, missing background, and inadequately preserved edge information of traditional infrared and visible image fusion algorithms, a fusion algorithm based on improved guided filtering and a dual-channel spiking cortical model (DCSCM) is proposed. First, the non-subsampled shearlet transform (NSST) decomposes the source images into the corresponding low- and high-frequency components. The low-frequency components are then fused with the improved guided-filtering algorithm and the high-frequency components with the DCSCM model. Finally, the inverse NSST of the fused low- and high-frequency components yields the final fused image. Experimental comparisons with several other methods show that the fused images of the proposed algorithm have prominent targets and rich background information, with advantages in image clarity, contrast, and information entropy.

9.
To address the edge blurring, detail loss, and reduced contrast and clarity caused by traditional image fusion methods, an infrared and visible image fusion algorithm based on intuitionistic fuzzy sets and regional contrast is proposed using the non-subsampled contourlet transform (NSCT). First, the NSCT decomposes the source images into their high- and low-frequency components. Next, exploiting the ability of intuitionistic fuzzy sets to describe fuzzy concepts flexibly and accurately, a double-Gaussian membership function is constructed to fuse the low-frequency components; exploiting the ability of regional contrast to describe image texture in detail, a fusion rule combining multi-region feature contrast with distance analysis fuses the high-frequency components. Finally, the inverse NSCT produces the fused image. Experimental results show that, compared with other fusion algorithms, this algorithm improves image contrast, preserves the edges and details of the source images, and achieves better objective evaluation scores.

10.
To address the low contrast, detail loss, and color distortion that follow the fusion of near-infrared and color visible images, a new fusion algorithm based on multi-scale transforms and an adaptive pulse coupled neural network (PCNN) is proposed. The color visible image is first converted to HSI (hue, saturation, intensity) space; since the three HSI components are mutually uncorrelated, they can be processed separately. The intensity component and the near-infrared image are each decomposed by a multi-scale transform, here the Tetrolet transform, into low- and high-frequency components. For the low-frequency components, an expectation-maximization fusion rule is proposed; for the high-frequency components, a difference-of-Gaussians operator adapts the threshold of the PCNN model, giving an adaptive PCNN fusion rule. The inverse Tetrolet transform of the processed components yields a new intensity image, which is mapped back to RGB space together with the original hue and saturation components to obtain the fused color image. To counter the image smoothing introduced by fusion and the uneven illumination of the original images, a colour and sharpness correction (CSC) mechanism is introduced to improve the quality of the fused image. To verify the method, five pairs of 1 024×680 near-infrared and color visible images were tested against four current high-performing fusion methods and against the proposed method without color correction. The results show that, with or without CSC, the method retains the most detail and texture and greatly improves visibility; under weak illumination it retains more detail and texture, with better contrast and good color reproduction, and it holds clear advantages in objective metrics such as information retention, color restoration, image contrast, and structural similarity.

11.
In this paper, an improved fusion algorithm for infrared and visible images based on multi-scale transform is proposed. First, the Morphology-Hat transform is applied to the infrared image and the visible image separately. The two images are then decomposed into high-frequency and low-frequency images by the contourlet transform (CT). The fusion strategy for the high-frequency images is based on the mean gradient, and the fusion strategy for the low-frequency images is based on principal component analysis (PCA). Finally, the fused image is obtained by the inverse contourlet transform (ICT). Experiments demonstrate that the proposed method significantly improves fusion performance, highlighting target information and achieving high contrast while preserving rich detail.
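The PCA fusion of the low-frequency images reduces to a 2×2 eigenproblem on the covariance of the two flattened images. A self-contained sketch, where normalising the principal eigenvector to sum-one weights is the conventional choice and assumed here:

```python
def pca_weights(xs, ys):
    """Principal-component weights for two flattened low-frequency images."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cxx = sum((x - mx) ** 2 for x in xs) / n
    cyy = sum((y - my) ** 2 for y in ys) / n
    cxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    # Larger eigenvalue of the covariance matrix [[cxx, cxy], [cxy, cyy]].
    lam = 0.5 * (cxx + cyy + ((cxx - cyy) ** 2 + 4 * cxy ** 2) ** 0.5)
    # Corresponding eigenvector (cxy, lam - cxx), normalised to sum to one;
    # fall back to equal weights when the problem is degenerate.
    v1, v2 = cxy, lam - cxx
    s = v1 + v2
    if abs(s) < 1e-12:
        return 0.5, 0.5
    return v1 / s, v2 / s
```

The fused low-frequency image is then `w1 * A + w2 * B` pixel by pixel, so the source carrying more variance contributes more.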

12.
The high-frequency components in traditional multi-scale transform methods are approximately sparse and can represent different detail information. In the low-frequency component, however, very few coefficients lie near zero, so the low-frequency image information cannot be sparsely represented. The low-frequency component contains the main energy of the image and depicts its profile, so fusing it directly is not conducive to a highly accurate fusion result. Therefore, this paper presents an infrared and visible image fusion method combining the multi-scale and top-hat transforms. On one hand, the new top-hat transform effectively extracts the salient features of the low-frequency component; on the other, the multi-scale transform extracts high-frequency detail at multiple scales and from diverse directions. Combining the two methods captures more characteristics and yields more accurate fusion results. Specifically, for the low-frequency component, a new type of top-hat transform extracts the low-frequency features, and different fusion rules are then applied to the low-frequency features and the low-frequency background; for the high-frequency components, the product-of-characteristics method integrates the detailed high-frequency information. Experimental results show that the proposed algorithm obtains more detailed information and clearer infrared-target fusion results than traditional multi-scale transform methods. Compared with state-of-the-art fusion methods based on sparse representation, the proposed algorithm is simple and efficacious, and its time consumption is significantly reduced.
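A classical white top-hat transform (the signal minus its morphological opening) illustrates how bright salient features are separated from the background; shown in 1D with a flat structuring element for brevity, whereas the paper uses a new 2D variant whose details the abstract does not give:

```python
def erode(sig, r):
    # Grayscale erosion with a flat window of radius r (clamped at the borders).
    n = len(sig)
    return [min(sig[max(i - r, 0):min(i + r + 1, n)]) for i in range(n)]

def dilate(sig, r):
    # Grayscale dilation with the same flat window.
    n = len(sig)
    return [max(sig[max(i - r, 0):min(i + r + 1, n)]) for i in range(n)]

def white_top_hat(sig, r):
    # Opening (erosion then dilation) removes bright features narrower than the
    # window; subtracting it from the signal isolates exactly those features.
    opened = dilate(erode(sig, r), r)
    return [s - o for s, o in zip(sig, opened)]
```

A narrow bright spike survives the top-hat untouched, while a plateau wider than the window is removed entirely, which is what makes the transform useful for pulling small salient structures out of the low-frequency background.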

13.
The aim of infrared and visible image fusion is to enhance the features of the infrared image and preserve the abundant detail of the visible image. Based on the facts that the human sensory system accepts an external stimulus only when its intensity exceeds a certain value and that the responses of neuronal cells have clear regional characteristics, an image fusion algorithm for infrared and visible images is proposed, based on region dual-channel unit-linking pulse coupled neural networks (RDU-PCNN) and independent component analysis (ICA) bases in the non-subsampled shearlet transform (NSST) domain. The constructed RDU-PCNN has clear regional characteristics and much lower computational cost. The ICA bases are trained on a number of images whose content and statistical properties are similar to those of the fusion images, and are applied as low-frequency ICA bases, which reduces calculation complexity. Experimental results demonstrate that the proposed method significantly improves fusion quality at a lower computational cost.

14.
The goal of infrared (IR) and visible image fusion is to produce a more informative image for human observation or some other computer vision tasks. In this paper, we propose a novel multi-scale fusion method based on visual saliency map (VSM) and weighted least square (WLS) optimization, aiming to overcome some common deficiencies of conventional methods. Firstly, we introduce a multi-scale decomposition (MSD) using the rolling guidance filter (RGF) and Gaussian filter to decompose input images into base and detail layers. Compared with conventional MSDs, this MSD can achieve the unique property of preserving the information of specific scales and reducing halos near edges. Secondly, we argue that the base layers obtained by most MSDs would contain a certain amount of residual low-frequency information, which is important for controlling the contrast and overall visual appearance of the fused image, and the conventional “averaging” fusion scheme is unable to achieve desired effects. To address this problem, an improved VSM-based technique is proposed to fuse the base layers. Lastly, a novel WLS optimization scheme is proposed to fuse the detail layers. This optimization aims to transfer more visual details and less irrelevant IR details or noise into the fused image. As a result, the fused image details would appear more naturally and be suitable for human visual perception. Experimental results demonstrate that our method can achieve a superior performance compared with other fusion methods in both subjective and objective assessments.
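The histogram-contrast form of a visual saliency map can be sketched as below; this is one common VSM construction, assumed here for illustration rather than taken verbatim from the paper:

```python
def visual_saliency_map(img, levels=256):
    # Histogram-contrast saliency: a pixel is salient when its intensity
    # differs from many other pixels.  S(p) = sum_i hist[i] * |I(p) - i|.
    hist = [0] * levels
    for row in img:
        for v in row:
            hist[v] += 1
    # Precompute the saliency of every intensity value, then look it up per pixel.
    sal = [sum(hist[i] * abs(v - i) for i in range(levels)) for v in range(levels)]
    return [[sal[v] for v in row] for row in img]
```

In a mostly dark frame, a lone bright (e.g. thermal) pixel receives the highest saliency, which is why such a map can drive the base-layer weighting toward infrared targets.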

15.
For infrared and visible image fusion, an NSCT-based fusion method is proposed. The low-frequency sub-band coefficients produced by the NSCT are fused with an adaptive weighting rule based on regional energy; the high-frequency sub-band coefficients use a hybrid rule: at the lower levels, the coefficient with the larger regional variance is selected, and at the higher levels, the coefficient with the larger absolute value. Experimental results show that the fusion algorithm captures more detail information and produces satisfactory fused images.
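The regional-energy weighted average for the low-frequency sub-band coefficients can be sketched as follows; the window radius is an assumption:

```python
def region_energy(img, y, x, r=1):
    # Sum of squared coefficients in the (2r+1) x (2r+1) window around (y, x).
    h, w = len(img), len(img[0])
    return sum(img[yy][xx] ** 2
               for yy in range(max(y - r, 0), min(y + r + 1, h))
               for xx in range(max(x - r, 0), min(x + r + 1, w)))

def fuse_lowfreq_region_energy(lf_a, lf_b, r=1):
    # Weight each source coefficient by its local energy -- regions carrying
    # more signal energy contribute more to the fused low-frequency sub-band.
    h, w = len(lf_a), len(lf_a[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ea = region_energy(lf_a, y, x, r)
            eb = region_energy(lf_b, y, x, r)
            total = ea + eb
            if total == 0:
                out[y][x] = (lf_a[y][x] + lf_b[y][x]) / 2
            else:
                out[y][x] = (ea * lf_a[y][x] + eb * lf_b[y][x]) / total
    return out
```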

16.
An infrared and visible image fusion algorithm based on regional characteristics   Cited by: 2 (self-citations: 2, other: 0)
叶传奇  王宝树  苗启广 《光子学报》2009,38(6):1498-1503
An infrared and visible image fusion algorithm based on region segmentation and the à trous wavelet transform is proposed. First, the infrared and visible images are segmented into regions and the regions are associated, and the energy and gradient information of the infrared and visible images is extracted for each region delineated by the association map. The infrared and visible images are then decomposed by the multi-scale à trous wavelet transform: the low-frequency parts are fused region by region according to the proposed regional energy ratio and regional sharpness ratio, and the high-frequency parts are fused with a maximum-absolute-value operator. Finally, reconstruction yields the fused image. Results show that the algorithm preserves the spectral information of the visible image while effectively capturing the thermal target information of the infrared image.

17.
A novel nonsubsampled contourlet transform (NSCT) based image fusion approach, implementing an adaptive-Gaussian (AG) fuzzy membership method, a compressed sensing (CS) technique, and a total variation (TV) based gradient descent reconstruction algorithm, is proposed for the fusion of infrared and visible images. Compared with the wavelet, contourlet, or any other multi-resolution analysis method, NSCT has many evident advantages, such as multi-scale and multi-direction analysis and translation invariance. As is known, a fuzzy set is characterized by its membership function (MF), and the commonly used Gaussian fuzzy membership degree can be introduced to establish adaptive control of the fusion process. The compressed sensing technique can sparsely sample the image information at a certain sampling rate, and the sparse signal can be recovered by solving a convex problem with a gradient-descent-based iterative algorithm. In the proposed fusion process, the pre-enhanced infrared image and the visible image are first decomposed into low-frequency and high-frequency subbands via NSCT. The low-frequency coefficients are fused using the adaptive regional average energy rule; the highest-frequency coefficients are fused using the maximum absolute selection rule; the other high-frequency coefficients are sparsely sampled, fused using the adaptive-Gaussian regional standard deviation rule, and then recovered by the total variation based gradient descent recovery algorithm. Experimental results and human visual perception illustrate the effectiveness and advantages of the proposed fusion approach. Its efficiency and robustness are also analyzed and discussed through different evaluation methods, such as standard deviation, Shannon entropy, root-mean-square error, mutual information, and an edge-based similarity index.

18.
Although the fused infrared and visible image exploits their complementarity, artifacts around infrared targets and vague edges seriously interfere with the fusion effect. To solve these problems, a fusion method based on infrared target extraction and sparse representation is proposed. First, the infrared target is detected and separated from the background based on regional statistical properties. Second, DENCLUE (a kernel density estimation clustering method) is used to classify the source images into a target region and a background region, and the infrared target region is accurately located in the infrared image. The background regions of the source images are then trained with a Kernel Singular Value Decomposition (KSVD) dictionary to obtain their sparse representation, retaining detail information while suppressing background noise. Finally, fusion rules are built to select the fusion coefficients of the two regions, and the coefficients are reconstructed to obtain the fused image. The fused image produced by the proposed method not only contains a clear outline of the infrared target but also has rich detail information.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号