Similar Documents
 A total of 19 similar documents were retrieved (search time: 187 ms)
1.
To address the low brightness and contrast, missing detail and contour information, and poor visibility found in traditional fusion algorithms for infrared and low-light visible images, an enhanced infrared and low-light visible image fusion method based on latent low-rank representation and composite filtering is proposed. The method first enhances the visible image with an improved high-dynamic-range compression enhancement to raise its brightness. It then decomposes the infrared image and the enhanced low-light visible image with a decomposition scheme based on latent low-rank representation and composite filtering, yielding the corresponding low-frequency and high-frequency layers. The low-frequency and high-frequency layers are fused with an improved contrast-enhanced visual saliency map fusion rule and an improved weighted-least-squares optimization fusion rule, respectively. Finally, the fused low-frequency and high-frequency layers are linearly superimposed to produce the final fused image. Comparative experiments against other methods show that the fused images obtained with this method are rich in detail, highly clear, and offer good visibility.
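
As an illustration of the brightness-enhancement step only, the sketch below brightens a low-light visible image with a simple adaptive gamma correction; it is a generic stand-in under stated assumptions, not the paper's improved high-dynamic-range compression enhancement, and the gamma heuristic and clipping range are assumptions.

```python
import numpy as np

def enhance_low_light(visible_gray: np.ndarray) -> np.ndarray:
    """Brighten a low-light 8-bit visible image with adaptive gamma correction.

    A generic stand-in for the improved high-dynamic-range compression
    enhancement: the gamma exponent is derived from the mean brightness so
    that darker images are lifted more strongly (a heuristic assumption).
    """
    img = visible_gray.astype(np.float32) / 255.0
    mean = float(img.mean())
    # Map mean brightness 0.5 -> gamma 1.0; darker images get gamma < 1.
    gamma = float(np.clip(np.log(0.5) / np.log(mean + 1e-6), 0.3, 1.0))
    return (np.power(img, gamma) * 255.0).astype(np.uint8)
```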

2.
To address the insufficiently prominent targets, missing background, and inadequately preserved edge information of traditional infrared and visible image fusion algorithms, an infrared and visible image fusion algorithm based on an improved guided filter and a dual-channel spiking cortical model (DCSCM) is proposed. First, the source images are decomposed with the non-subsampled shearlet transform (NSST) to obtain the corresponding low-frequency and high-frequency components. The low-frequency and high-frequency components are then fused with the improved guided filtering algorithm and the DCSCM model, respectively. Finally, the fused high- and low-frequency components are passed through the inverse NSST to obtain the final fused image. Experimental comparisons with several other methods show that the fused images of the proposed algorithm have prominent targets and rich background information, and are superior in image clarity, contrast, information entropy, and other metrics.
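
The NSST decomposition and the DCSCM model are beyond a short sketch; the snippet below only illustrates the idea of guided-filter-refined low-frequency fusion, assuming opencv-contrib-python for cv2.ximgproc. The Laplacian-based saliency measure and all parameter values are assumptions, not the paper's exact rule.

```python
import cv2
import numpy as np

def fuse_low_freq_guided(low_ir, low_vis, radius=8, eps=1e-3):
    """Fuse two low-frequency bands with a guided-filter-refined weight map.

    Saliency is approximated by blurred Laplacian magnitude; the binary
    comparison of the two saliency maps is smoothed with a guided filter
    so the weights follow image structure (requires opencv-contrib-python).
    """
    a = low_ir.astype(np.float32)
    b = low_vis.astype(np.float32)
    sal_a = cv2.GaussianBlur(np.abs(cv2.Laplacian(a, cv2.CV_32F)), (7, 7), 0)
    sal_b = cv2.GaussianBlur(np.abs(cv2.Laplacian(b, cv2.CV_32F)), (7, 7), 0)
    w = (sal_a >= sal_b).astype(np.float32)          # hard weight map
    w = cv2.ximgproc.guidedFilter(guide=a, src=w, radius=radius, eps=eps)
    w = np.clip(w, 0.0, 1.0)
    return w * a + (1.0 - w) * b
```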

3.
To enhance the visibility of infrared and visible image fusion and to overcome the detail loss, inconspicuous targets, and low contrast found in fusion results, an infrared and visible image fusion method based on two-scale decomposition and saliency extraction is proposed. First, following human visual perception theory, and because the human eye is sensitive to different image regions to different degrees, the source images in this cross-modal fusion task must be decomposed at different levels to avoid mixing high- and low-frequency components and to reduce halo effects; a two-scale decomposition is therefore applied to the source infrared and visible images to obtain their respective base and detail layers. This decomposition represents the images well and has good real-time performance. For fusing the base layers, a weighted-average fusion rule based on the visual saliency map (VSM) is proposed; the VSM method extracts the salient structures and targets of the source images well, and the VSM-based weighted-average rule effectively avoids the contrast loss caused by a plain weighted-average strategy, giving the fused image better visibility. For fusing the detail layers, the Kirsch operator is applied to the source images to obtain saliency maps, features are then extracted from these saliency maps with a VGG-19 network to obtain weight maps, and the weight maps are fused with the detail layers to obtain the fused detail layer. The Kirsch operator can extract image edges rapidly in eight directions, so the saliency maps contain more edge information and less noise, and the VGG-19 network can extract deeper feature information, so the obtained weight maps will ...
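
A minimal sketch of two ingredients named above, assuming grayscale inputs: a box-filter two-scale split into base and detail layers, and a Kirsch-operator saliency map taken as the maximum absolute response over the eight directional kernels (the VGG-19 weighting stage and the VSM base-layer rule are omitted, and the filter sizes are assumptions).

```python
import cv2
import numpy as np

def two_scale_split(img, size=31):
    """Split an image into a base layer (box filter) and a detail layer."""
    base = cv2.blur(img.astype(np.float32), (size, size))
    return base, img.astype(np.float32) - base

def kirsch_saliency(img):
    """Edge saliency: maximum absolute response over the 8 Kirsch kernels."""
    img32 = img.astype(np.float32)
    # Perimeter positions of a 3x3 kernel, walked clockwise from the top-left.
    ring = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    base = np.array([5, 5, 5, -3, -3, -3, -3, -3], dtype=np.float32)
    saliency = np.zeros_like(img32)
    for shift in range(8):                      # one kernel per 45-degree direction
        kernel = np.zeros((3, 3), dtype=np.float32)
        for (r, c), v in zip(ring, np.roll(base, shift)):
            kernel[r, c] = v
        response = np.abs(cv2.filter2D(img32, cv2.CV_32F, kernel))
        saliency = np.maximum(saliency, response)
    return saliency
```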

4.
Based on the guided filter and non-subsampled directional filter banks, a multi-scale directional guided filtering image fusion method is proposed. The method combines edge-preserving behavior with the ability to extract directional information, and can effectively extract the useful information of the source images. The proposed method applies multi-scale directional guided filtering to the source images to obtain a low-frequency component containing a low-frequency approximation part and a strong-edge part; these are then effectively separated by Gaussian low-pass filtering and fused with rules based on convolutional sparse representation and on adaptive regional-energy weighted averaging, respectively. For the high-frequency directional detail components, a fusion rule combining saliency with guided filtering is applied to maintain spatial consistency, yielding the corresponding fused high-frequency detail components. The results show that the proposed method better extracts the target feature information of the source images, retains rich background information, outperforms existing methods on objective evaluation metrics, and produces fusion results with better subjective visual quality.

5.
To make the fusion result highlight targets and uncover more details, an infrared and visible image fusion method based on target extraction and guided-filtering enhancement is proposed. First, the target region is extracted from the infrared image using two-dimensional Tsallis entropy and a graph-based visual saliency model. The visible and infrared images are then each decomposed with the non-subsampled shearlet transform (NSST), and the resulting low-frequency components are enhanced by guided filtering. The low-frequency component of the fused image is obtained from the enhanced infrared and visible low-frequency components using a fusion rule based on the extracted targets, while the high-frequency components are determined from the directional sub-band information with a choose-max rule. Finally, the fused image is obtained by the inverse NSST. Extensive experiments show that the method effectively highlights targets while enhancing the spatial details of the fused image, and outperforms fusion based on the Laplacian pyramid, the wavelet transform, the stationary wavelet transform, the non-subsampled contourlet transform (NSCT), and target extraction with NSCT in terms of information entropy, average gradient, and other metrics.

6.
邱春红 《光学技术》2022,(4):492-498
To address the limited fusion quality of infrared and visible images in outdoor environments, an infrared and visible outdoor image fusion method based on a convolutional neural network is proposed. The method first preprocesses the input infrared image with a rolling guidance filter to remove noise and useless information. The infrared and visible images are then decomposed into high-frequency and low-frequency coefficients with the curvelet transform; the high-frequency coefficients are fused with a deep-feature fusion rule based on a convolutional neural network, and the low-frequency coefficients are fused with a minimum-selection rule. Experimental results show that the fused images of this method achieve good results in both subjective visual and objective quantitative evaluations.
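
The curvelet and CNN fusion stages are not reproduced here; the sketch below covers only the rolling-guidance-filter preprocessing in its standard iterative form (Gaussian initialization followed by repeated joint bilateral filtering, which needs opencv-contrib-python). The sigma values and iteration count are assumptions.

```python
import cv2
import numpy as np

def rolling_guidance_filter(img, sigma_s=5.0, sigma_r=25.0, iterations=4):
    """Rolling guidance filtering: remove small structures, keep large edges.

    Standard iterative scheme: start from a Gaussian-blurred guide, then
    repeatedly joint-bilateral-filter the input using the current guide
    (cv2.ximgproc.jointBilateralFilter comes from opencv-contrib-python).
    """
    src = img.astype(np.float32)
    guide = cv2.GaussianBlur(src, (0, 0), sigma_s)   # iteration 1: pure smoothing
    d = int(2 * round(3 * sigma_s) + 1)              # neighbourhood diameter
    for _ in range(iterations - 1):
        guide = cv2.ximgproc.jointBilateralFilter(guide, src, d, sigma_r, sigma_s)
    return guide
```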

7.
Considering the characteristics of infrared and visible images, a fusion algorithm based on the wavelet packet transform is proposed. The algorithm first decomposes the source images with the wavelet packet transform to obtain the low-frequency component and the band-pass directional sub-band components, applies different fusion rules to the different components to obtain the fused coefficients, and then reconstructs the fused image with the inverse wavelet packet transform. The method can extract the detail information of the source images and achieves good fusion results.
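
As a hedged illustration, the sketch below fuses two images in a plain two-level discrete wavelet domain using PyWavelets, as a simplified stand-in for the wavelet packet transform: the approximation coefficients are averaged and each detail sub-band keeps the larger-magnitude coefficient. The wavelet choice, level, and fusion rules are assumptions.

```python
import numpy as np
import pywt

def dwt_fuse(img_ir, img_vis, wavelet="db2", level=2):
    """Fuse two same-sized images in a 2-D wavelet domain (a simplified
    stand-in for the wavelet *packet* transform used in the paper).

    Rule: average the approximation coefficients, keep the larger-magnitude
    coefficient in every detail sub-band, then reconstruct.
    """
    c_ir = pywt.wavedec2(img_ir.astype(np.float32), wavelet, level=level)
    c_vis = pywt.wavedec2(img_vis.astype(np.float32), wavelet, level=level)

    fused = [(c_ir[0] + c_vis[0]) / 2.0]             # approximation: average
    for dir_a, dir_b in zip(c_ir[1:], c_vis[1:]):    # details: choose-max
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(dir_a, dir_b)))
    return pywt.waverec2(fused, wavelet)
```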

8.
To address the fact that infrared and visible images express features differently across scenes, a saliency-based dual-discriminator generative adversarial network method is proposed to fuse the feature information of infrared and visible images. Unlike a conventional generative adversarial network, the algorithm uses two discriminators to separately discriminate the salient regions of the source images and of the fused image, taking the salient regions of the two source images as the discriminator inputs so that the fused image retains more salient features. A gradient constraint is also introduced into the loss function so that salient contrast and rich texture information are preserved in the fused image. Experimental results show that the proposed method outperforms the other compared algorithms on four evaluation metrics: entropy (EN), mean gradient (MG), spatial frequency (SF), and edge intensity (EI). The work achieves efficient fusion of infrared and visible images and is expected to find applications in fields such as target recognition.

9.
Infrared and visible image fusion algorithm based on PHLST
刘少鹏  郝群  宋勇 《光子学报》2011,40(1):107-111
To address edge handling and regional consistency in the image fusion process, a new infrared and visible image fusion algorithm based on the polyharmonic local sine transform (PHLST) is proposed. The polyharmonic component μ of the PHLST represents the slowly varying "trend" of the image and is fused by weighted averaging in the spatial domain, while the residual component υ reflects the "fluctuations" of the source images and is fused in the Fourier sine transform domain to fully extract the detail information of the visible image. Because no edge effects arise and, at the same time, the residual component ...

10.
To address the ghosting that appears at object edges in multi-focus image fusion, a multi-focus image fusion algorithm based on guided filtering and an improved pulse-coupled neural network (PCNN) is proposed. The algorithm uses a guided filter to perform a multi-scale edge-preserving decomposition of the source images, and the resulting base and detail images are preliminarily fused with different guided-filtering weighted fusion strategies. The preliminary fusion map is then used as the external stimulus of the improved PCNN model, and the source images are finally fused according to the fusion weight map to obtain the final fused image. Experimental results show that, compared with traditional fusion algorithms, the proposed method better preserves detail information such as edges, region boundaries, and textures of the source images, avoids ghosting at object edges, and improves the quality of the fused image.
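
The improved PCNN stage is not sketched; the snippet below only illustrates a guided-filter-based multi-scale edge-preserving decomposition, where each level is produced by guided filtering with a growing radius and the detail images are successive differences (requires opencv-contrib-python; the radii and epsilon are assumptions).

```python
import cv2
import numpy as np

def guided_multiscale_decompose(img, radii=(4, 8, 16), eps=1e-2):
    """Edge-preserving multi-scale decomposition with a guided filter.

    Each smoothing level uses the image itself as its guide; detail layers
    are the differences between consecutive levels, and the final smoothed
    level is kept as the base image.
    """
    current = img.astype(np.float32)
    details = []
    for r in radii:
        smooth = cv2.ximgproc.guidedFilter(guide=current, src=current,
                                           radius=r, eps=eps)
        details.append(current - smooth)
        current = smooth
    return details, current    # detail layers (fine -> coarse) and base image
```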

11.
Infrared and visible image fusion is a key problem in the field of multi-sensor image fusion. To better preserve the significant information of the infrared and visible images in the final fused image, the saliency maps of the source images are introduced into the fusion procedure. Firstly, under the framework of the joint sparse representation (JSR) model, the global and local saliency maps of the source images are obtained from the sparse coefficients. Then, a saliency detection model is proposed, which combines the global and local saliency maps to generate an integrated saliency map. Finally, a weighted fusion algorithm based on the integrated saliency map is developed to carry out the fusion. The experimental results show that our method is superior to state-of-the-art methods in terms of several universal quality evaluation indexes as well as in visual quality.

12.
Existing fusion rules focus on retaining detailed information from the source images, but since the thermal radiation information in infrared images is mainly characterized by pixel intensity, these rules are likely to reduce the saliency of the target in the fused image. To address this problem, we propose an infrared and visible image fusion model based on significant target enhancement, aiming to inject thermal targets from infrared images into visible images to enhance target saliency while retaining the important details of the visible images. First, the source images are decomposed with multi-level Gaussian curvature filtering to obtain background information with high spatial resolution. Second, the large-scale layers are fused using ResNet50 with weight maximization based on the average operator to improve detail retention. Finally, the base layers are fused by incorporating a new salient target detection method. Subjective and objective experimental results on the TNO and MSRS datasets demonstrate that our method achieves better results than other traditional and deep-learning-based methods.

13.
Fusion of visible and infrared images aims to combine source images of the same scene into a single image with more feature information and better visual quality. In this paper, the authors propose a fusion method based on multi-window visual saliency extraction for visible and infrared images. To extract feature information from the infrared and visible images, we design a local-window-based frequency-tuned method. With this idea, visual saliency maps are calculated for the varying feature information under different local windows. These maps express, per pixel and per region, how strongly each part of the image attracts visual attention. Fusion is then performed with a simple weight-combination scheme. Compared with classical and state-of-the-art approaches, the experimental results demonstrate that the proposed approach runs efficiently and performs better than other methods, especially in visual quality and detail enhancement.
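
A minimal sketch of how a local-window frequency-tuned saliency detector of this kind might look: the global mean of the original frequency-tuned detector is replaced by a box-filtered local mean, and the two saliency maps are turned into per-pixel fusion weights. The window size and blur sigma are assumptions.

```python
import cv2
import numpy as np

def local_ft_saliency(img, window=35, blur_sigma=2.0):
    """Frequency-tuned saliency computed inside local windows:
    |local mean - slightly blurred image| instead of |global mean - blurred|."""
    f = img.astype(np.float32)
    local_mean = cv2.blur(f, (window, window))
    smoothed = cv2.GaussianBlur(f, (0, 0), blur_sigma)
    return np.abs(local_mean - smoothed)

def saliency_weighted_fuse(img_ir, img_vis, window=35):
    """Fuse two images with per-pixel weights proportional to their saliency."""
    s_ir = local_ft_saliency(img_ir, window)
    s_vis = local_ft_saliency(img_vis, window)
    w = s_ir / (s_ir + s_vis + 1e-6)
    return w * img_ir.astype(np.float32) + (1.0 - w) * img_vis.astype(np.float32)
```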

14.
Image fusion for visible and infrared images is a significant task in image analysis. Both the target regions of the infrared image and the abundant detail information of the visible image should be carried into the fused result, so the details of the original images should be preserved or even enhanced during fusion. In this paper, an algorithm using pixel-value-based saliency detection and detail-preserving image decomposition is proposed. Firstly, a multi-scale decomposition of the original infrared and visible images is constructed with a weighted least squares filter. Secondly, a pixel-value-based saliency map is designed and used for image fusion at each decomposition level. Finally, the fusion result is reconstructed by synthesizing the different scales with synthetic weights. Since the information of the original signals is well preserved and enhanced by the saliency extraction and multi-scale decomposition process, the fusion algorithm performs robustly and excellently. The proposed approach is compared with other state-of-the-art methods on several image sets to verify its effectiveness and robustness.
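
A minimal sketch of a pixel-value-based saliency map of the kind described above, computed from the intensity histogram so that a pixel is salient when its gray level differs from most other pixels; the weighted-least-squares decomposition and the multi-scale synthesis are omitted, and the exact saliency definition is an assumption.

```python
import numpy as np

def pixel_value_saliency(img_gray):
    """Histogram-based pixel-value saliency for an 8-bit grayscale image.

    S(p) = sum_j hist(j) * |I(p) - j|: a pixel is salient when its intensity
    differs from most other pixels. Evaluated once per gray level via the
    histogram, then looked up per pixel and normalized to [0, 1].
    """
    hist = np.bincount(img_gray.ravel(), minlength=256).astype(np.float64)
    levels = np.arange(256, dtype=np.float64)
    # Saliency of each gray level i: sum_j hist[j] * |i - j|
    level_saliency = np.abs(levels[:, None] - levels[None, :]) @ hist
    saliency = level_saliency[img_gray]
    return (saliency / (saliency.max() + 1e-12)).astype(np.float32)
```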

15.
A fusion method based on the pseudo Wigner-Ville distribution (PWVD) is proposed. A one-dimensional sliding window of N pixels is used to apply the pseudo Wigner-Ville transform to each image to be fused along every direction, and the direction with the largest root mean square is selected as that image's PWVD decomposition direction, decomposing each image into energy spectrum maps of different frequency bands. For the energy spectrum maps of the different bands, the fusion rule selects the maximum regional energy in the high-frequency bands and the maximum energy variance in the low-frequency bands, forming fused energy spectrum maps for the different bands. Finally, the inverse PWVD is applied to the fused energy spectrum maps to form the fused image. Fusion experiments were carried out on infrared and visible images, multi-focus images, computed tomography (CT) and magnetic resonance (MR) images, and infrared and synthetic aperture radar (SAR) images, and the information entropy of the fused images was compared with that of the source images. The experimental results show that the fused images produced by the proposed algorithm retain the great majority of the information in the images to be fused.

16.
A dual-channel residual convolutional neural network with independent weights is built to extract features from target images in the visible and infrared bands and to generate multi-scale composite-band feature map groups. The saliency of the dual-band feature maps is computed from the Euclidean distance between image points, and the maps are fused adaptively according to the target's feature contribution in each imaging band. A thermal-source energy pooling kernel and a visual attention mechanism are then used to generate logical masks of the target's regions of interest in the two bands, which are superimposed on the fused image to highlight the target's features and suppress ...

17.
Military, navigation, and concealed weapon detection applications need different imaging modalities, such as visible and infrared, to monitor a targeted scene. These modalities provide complementary information, and for better situational awareness this complementary information has to be integrated into a single image. Image fusion is the process of integrating complementary source information into a composite image. In this paper, we propose a new image fusion method based on saliency detection and two-scale image decomposition. The method is beneficial because the visual saliency extraction process introduced here highlights the salient information of the source images very well. A new weight-map construction process based on visual saliency is proposed, which is able to integrate the visually significant information of the source images into the fused image. In contrast to most multi-scale image fusion techniques, the proposed technique uses only a two-scale decomposition, so it is fast and efficient. Our method is tested on several image pairs and is evaluated qualitatively by visual inspection and quantitatively using objective fusion metrics. The outcomes are compared with state-of-the-art multi-scale fusion techniques, and the results reveal that the proposed method performs comparably to or better than the existing methods.
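
The sketch below follows the two-scale structure described above with one common choice of saliency detector (the absolute difference between a mean filter and a median filter): base layers are averaged and detail layers are combined with saliency-derived weights. The detector and filter sizes are assumptions, not necessarily the paper's weight-map construction.

```python
import cv2
import numpy as np

def two_scale_saliency_fusion(img_ir, img_vis, base_size=31, sal_size=3):
    """Two-scale fusion: average the base layers, weight the detail layers
    by per-image saliency (|mean filter - median filter| as the detector)."""
    a = img_ir.astype(np.float32)
    b = img_vis.astype(np.float32)

    # Two-scale split: base layer from a box filter, detail layer as residual.
    base_a = cv2.blur(a, (base_size, base_size))
    base_b = cv2.blur(b, (base_size, base_size))
    detail_a, detail_b = a - base_a, b - base_b

    def saliency(x):
        return np.abs(cv2.blur(x, (sal_size, sal_size)) -
                      cv2.medianBlur(x, sal_size))

    s_a, s_b = saliency(a), saliency(b)
    w = s_a / (s_a + s_b + 1e-6)                     # per-pixel detail weight

    fused = 0.5 * (base_a + base_b) + w * detail_a + (1.0 - w) * detail_b
    return np.clip(fused, 0, 255).astype(np.uint8)
```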

18.
In this paper, an interesting fusion method, named NNSP, is developed for infrared and visible image fusion, in which non-negative sparse representation is used to extract the features of the source images. The characteristics of the non-negative sparse representation coefficients are described according to their activity levels and sparseness levels. Multiple methods are developed to detect the salient features of the source images, including the target and contour features of the infrared images and the texture features of the visible images. A regional consistency rule is proposed to obtain the fusion guide vector that determines the fused image automatically, so that the features of the source images are seamlessly integrated into the fused image. Compared with classical and state-of-the-art methods, our experimental results indicate that the NNSP method achieves better fusion performance in both noiseless and noisy situations.

19.
Although the fused infrared and visible image exploits the complementarity of the two modalities, artifacts around infrared targets and vague edges seriously degrade the fusion result. To solve these problems, a fusion method based on infrared target extraction and sparse representation is proposed. Firstly, the infrared target is detected and separated from the background based on regional statistical properties. Secondly, DENCLUE (a kernel density estimation clustering method) is used to classify the source images into a target region and a background region, and the infrared target region is accurately located in the infrared image. The background regions of the source images are then encoded sparsely over a dictionary trained with the K-SVD algorithm, so that detail information is retained and background noise is suppressed. Finally, fusion rules are built to select the fusion coefficients of the two regions, and the coefficients are reconstructed to obtain the fused image. The fused image produced by the proposed method not only contains a clear outline of the infrared target but also carries rich detail information.
