Similar Documents
20 similar documents found (search time: 15 ms)
1.
Infrared and visible image fusion is a key problem in the field of multi-sensor image fusion. To better preserve the significant information of the infrared and visible images in the final fused image, the saliency maps of the source images are introduced into the fusion procedure. Firstly, under the framework of the joint sparse representation (JSR) model, the global and local saliency maps of the source images are obtained from the sparse coefficients. Then, a saliency detection model is proposed that combines the global and local saliency maps into an integrated saliency map. Finally, a weighted fusion algorithm based on the integrated saliency map carries out the fusion. The experimental results show that our method is superior to state-of-the-art methods in terms of several universal quality evaluation indexes as well as visual quality.
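The weighted fusion step described above reduces to a per-pixel convex combination of the two sources. A minimal numpy sketch, assuming the integrated saliency maps are already given (it does not reproduce the authors' JSR-based saliency model):

```python
import numpy as np

def saliency_weighted_fusion(ir, vis, sal_ir, sal_vis, eps=1e-12):
    """Fuse two images with per-pixel weights derived from saliency maps.

    ir, vis        : float arrays of identical shape (source images)
    sal_ir/sal_vis : non-negative saliency maps of the same shape
    Each source's weight at a pixel is its share of the total saliency there.
    """
    w_ir = sal_ir / (sal_ir + sal_vis + eps)   # normalized weight in [0, 1]
    return w_ir * ir + (1.0 - w_ir) * vis      # convex combination

# Toy example: where the IR saliency dominates, the IR pixel dominates.
ir  = np.full((4, 4), 200.0)
vis = np.full((4, 4), 50.0)
sal_ir  = np.ones((4, 4)); sal_ir[:2] = 3.0   # top half: IR 3x more salient
sal_vis = np.ones((4, 4))
fused = saliency_weighted_fusion(ir, vis, sal_ir, sal_vis)
```

In the top half the IR weight is 3/4, so the fused pixel lies three quarters of the way toward the IR intensity; elsewhere the two sources are averaged.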

2.
Although the fused image of an infrared and a visible image exploits their complementarity, artifacts around infrared targets and blurred edges seriously degrade the fusion result. To solve these problems, a fusion method based on infrared target extraction and sparse representation is proposed. Firstly, the infrared target is detected and separated from the background using regional statistical properties. Secondly, DENCLUE (a kernel-density-estimation clustering method) classifies the source images into a target region and a background region, and the infrared target region is accurately located in the infrared image. The background regions of the source images are then sparsely represented over a dictionary trained with KSVD (K-singular value decomposition), which retains detail information while suppressing background noise. Finally, fusion rules are built to select the fusion coefficients of the two regions, and the coefficients are reconstructed to obtain the fused image. The fused image produced by the proposed method not only contains a clear outline of the infrared target but also has rich detail information.

3.
Image fusion techniques aim at transferring useful information from the input source images to the fused image. The common assumption in most fusion approaches is that useful information is defined by local features such as contrast, variance, and gradient; there is no consideration of the global visual attention over the whole source images, which indicates their “interesting” information. In this paper, we first review patch-based image fusion methods, which have attracted the attention and interest of many researchers. Then, a visual-attention-guided patch-based image fusion method is proposed. The visual attention maps of the source images are calculated from their sparse representation coefficients. The sparse coefficients are then fused under the guidance of the visual attention maps in order to emphasize the globally “interesting” objects in the source images. Finally, the fused image is reconstructed from the fused sparse coefficients. The new fusion strategy ensures that the objects that are “interesting” to our visual system are preserved in the fused image. The proposed approach is tested on infrared-visible, medical, and multi-focus images. Compared with traditional methods, the results show clear improvement in both objective and subjective quality measurements.

4.
娄熙承, 冯鑫. 《光子学报》 (Acta Photonica Sinica), 2021, 50(3): 180-193
To improve the visibility of fused images and address the loss of edge features and blurred details in traditional infrared and visible image fusion algorithms, a fusion algorithm is proposed that combines a convolutional neural network with guided filtering under a latent low-rank representation framework. First, latent low-rank representation decomposes each source image into a low-rank component and a salient component. Next, a convolutional neural network produces a weight map from the feature information of the source images. The weight map is then edge-sharpened by guided filtering, and the optimized weight map is used to fuse the low-rank and salient components of the source images, yielding the low-rank and salient components of the fused image. Finally, these two components are summed to obtain the final fused image. Experimental results show that the algorithm outperforms traditional infrared and visible image fusion algorithms in both subjective evaluation and objective metrics.

5.
With the nonsubsampled contourlet transform (NSCT), a novel region-segmentation-based fusion algorithm for infrared (IR) and visible images is presented. The IR image is segmented according to the physical features of the target. The source images are decomposed by the NSCT, and then different fusion rules for the target regions and the background regions are employed to merge the NSCT coefficients. Finally, the fused image is obtained by applying the inverse NSCT. Experimental results show that the proposed algorithm outperforms pixel-based methods, including the traditional wavelet-based method and the NSCT-based method.

6.
Existing fusion rules focus on retaining detailed information in the source image, but as the thermal radiation information in infrared images is mainly characterized by pixel intensity, these fusion rules are likely to result in reduced saliency of the target in the fused image. To address this problem, we propose an infrared and visible image fusion model based on significant target enhancement, aiming to inject thermal targets from infrared images into visible images to enhance target saliency while retaining important details in visible images. First, the source image is decomposed with multi-level Gaussian curvature filtering to obtain background information with high spatial resolution. Second, the large-scale layers are fused using ResNet50 and maximizing weights based on the average operator to improve detail retention. Finally, the base layers are fused by incorporating a new salient target detection method. The subjective and objective experimental results on TNO and MSRS datasets demonstrate that our method achieves better results compared to other traditional and deep learning-based methods.  相似文献   
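The two-scale pipeline described above (a smooth base layer plus a detail layer, with the detail fused by whichever source responds more strongly) can be sketched in reduced form. This is a hypothetical illustration: a plain box filter stands in for multi-level Gaussian curvature filtering, and a max-abs rule stands in for the ResNet50-based weighting:

```python
import numpy as np

def box_blur(img, r=1):
    """Mean filter with a (2r+1)x(2r+1) window, edge-replicated borders."""
    p = np.pad(img, r, mode="edge")
    acc = np.zeros_like(img, dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            acc += p[r + dy : r + dy + img.shape[0],
                     r + dx : r + dx + img.shape[1]]
    return acc / (2 * r + 1) ** 2

def two_scale_fuse(a, b, r=1):
    base_a, base_b = box_blur(a, r), box_blur(b, r)
    det_a, det_b = a - base_a, b - base_b          # detail = image - base
    det = np.where(np.abs(det_a) >= np.abs(det_b), det_a, det_b)  # max-abs
    base = 0.5 * (base_a + base_b)                 # simple average for base
    return base + det

a = np.zeros((6, 6)); a[3, 3] = 10.0   # source with a bright spot (detail)
b = np.full((6, 6), 4.0)               # flat source (background level)
f = two_scale_fuse(a, b)
```

The bright spot from the first source survives into the fused image on top of the averaged background level, which is the behavior the base/detail split is meant to guarantee.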

7.
To address the low brightness and contrast, missing detail and contour information, and poor visibility of traditional fusion algorithms for infrared and low-light visible images, an enhanced fusion method based on latent low-rank representation and composite filtering is proposed. First, an improved high-dynamic-range compression enhancement method brightens the visible image. The infrared image and the enhanced low-light visible image are then decomposed into low- and high-frequency layers by a decomposition scheme based on latent low-rank representation and composite filtering. The low-frequency layers are fused with an improved contrast-enhanced visual saliency map method, and the high-frequency layers with an improved weighted-least-squares optimization method. Finally, the fused low- and high-frequency layers are linearly superimposed to obtain the final fused image. Comparative experiments with other methods show that the fused images obtained by this method are rich in detail, sharp, and highly visible.

8.
An image fusion algorithm based on region segmentation and the Contourlet transform   Cited: 12 times (4 self-citations, 8 by others)
An image fusion algorithm based on region segmentation and the Contourlet transform is proposed. First, each source image is segmented into regions, and region information is measured and extracted using the concepts of regional energy ratio and regional clarity ratio. Each source image is then decomposed by a multi-scale nonsubsampled Contourlet transform; the high-frequency parts are fused with a maximum-absolute-value operator, while the low-frequency parts are fused with region-based rules and operators. Finally, the fused image is obtained by reconstruction. Fusion experiments on infrared and visible images were carried out and compared with pixel-based fusion using the à trous wavelet transform and the Contourlet transform. The results show that the fused image produced by this algorithm retains the spectral information of the visible image while inheriting the target information of the infrared image; its entropy is about 10% higher than that of the pixel-based methods, and its cross-entropy is only about 1% of theirs.
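Information entropy, the headline metric in this comparison, has a standard histogram definition. A minimal sketch over an 8-bit gray-level histogram (illustrative only, not tied to the paper's data):

```python
import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy (bits) of an image's gray-level histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins: 0*log(0) := 0
    return float(-(p * np.log2(p)).sum())

flat = np.zeros((8, 8))                        # one gray level -> 0 bits
two  = np.indices((8, 8)).sum(0) % 2 * 255.0   # half 0s, half 255s -> 1 bit
e_flat, e_two = image_entropy(flat), image_entropy(two)
```

A constant image carries zero entropy, a two-level checkerboard exactly one bit; a fused image that preserves more source structure scores higher on this measure.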

9.
Integration of infrared and visible images is an active and important topic in image understanding and interpretation. In this paper, a new fusion method is proposed based on the improved multi-scale center-surround top-hat transform, which can effectively extract the feature and detail information of the source images. Firstly, the multi-scale bright (dark) feature regions of the infrared and visible images are extracted at different scale levels by the improved multi-scale center-surround top-hat transform. Secondly, the feature regions at the same scale in both images are combined by a multi-judgment contrast fusion rule, and the final feature images are obtained by simply adding all scales of feature images together. Then, a base image is calculated by applying a Gaussian fuzzy-logic combination rule to the two smoothed source images. Finally, the fusion image is obtained by importing the extracted bright and dark feature images into the base image with a suitable strategy. Both objective assessment and subjective inspection of the experimental results indicate that the proposed method is superior to current popular MST-based and morphology-based methods in the field of infrared-visible image fusion.
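The bright-feature extraction described above rests on the white top-hat transform: an image minus its morphological opening, which keeps bright structures smaller than the structuring element. A reduced single-scale numpy sketch with a square window (hypothetical, not the paper's center-surround variant):

```python
import numpy as np

def _slide(img, r, op):
    """Apply min/max over a (2r+1)x(2r+1) window (edge-replicated)."""
    p = np.pad(img, r, mode="edge")
    stack = [p[r + dy : r + dy + img.shape[0], r + dx : r + dx + img.shape[1]]
             for dy in range(-r, r + 1) for dx in range(-r, r + 1)]
    return op(np.stack(stack), axis=0)

def white_top_hat(img, r=1):
    """img - opening(img): keeps bright structures smaller than the window."""
    eroded = _slide(img, r, np.min)           # erosion
    opened = _slide(eroded, r, np.max)        # dilation of erosion = opening
    return img - opened

img = np.zeros((7, 7)); img[3, 3] = 9.0       # small bright blob on flat bg
th = white_top_hat(img, r=1)
```

The isolated bright pixel survives the top-hat untouched, while a flat background (larger than the window) would be removed entirely; stacking this at several window sizes gives the multi-scale bright-feature regions.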

10.
When fusing infrared and visible images, traditional fusion methods cannot achieve good image quality. A fusion algorithm based on neighborhood characteristics and regionalization in the NSCT (nonsubsampled contourlet transform) domain is therefore proposed. Firstly, the NSCT decomposes the infrared and visible images at different scales and directions into low- and high-frequency coefficients; the low-frequency coefficients are fused with an improved regional weighting method based on neighborhood energy, and the high-frequency coefficients are fused with a multi-judgment rule based on neighborhood-characteristic regional processing. Finally, the coefficients are reconstructed to obtain the fused image. The experimental results show that, compared with three related methods, the proposed method attains the largest values of IE (information entropy), MI(VI,F) (mutual information from the visible image), MI(IR,F) (mutual information from the infrared image), MI (sum of mutual information), and QAB/F (edge retention). The proposed method preserves the information and details of the original images well, and the fused images have better visual effects.
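The neighborhood-energy weighting for the low-frequency coefficients described above can be sketched as follows. An illustrative reduction (assumed form, not the paper's exact rule): local energy is the windowed sum of squared coefficients, and each source's coefficient is weighted by its share of that energy:

```python
import numpy as np

def local_energy(coef, r=1):
    """Sum of squared coefficients over a (2r+1)x(2r+1) neighborhood."""
    p = np.pad(coef * coef, r, mode="edge")
    e = np.zeros_like(coef, dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            e += p[r + dy : r + dy + coef.shape[0],
                   r + dx : r + dx + coef.shape[1]]
    return e

def fuse_lowfreq(c1, c2, r=1, eps=1e-12):
    e1, e2 = local_energy(c1, r), local_energy(c2, r)
    w1 = e1 / (e1 + e2 + eps)          # energy share of source 1
    return w1 * c1 + (1.0 - w1) * c2

c1 = np.zeros((5, 5)); c1[2, 2] = 4.0  # strong coefficient in source 1
c2 = np.ones((5, 5)) * 0.1             # weak, uniform source 2
low = fuse_lowfreq(c1, c2)
```

Near the strong coefficient the energy share of source 1 approaches one, so its value dominates the fused low-frequency band; in flat areas the weak source passes through unchanged.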

11.
In this paper, we design an infrared (IR) and visible (VIS) image fusion method via unsupervised dense networks, termed TPFusion. Activity-level measurements and fusion rules are indispensable parts of conventional image fusion methods, but designing an appropriate fusion process is time-consuming and complicated. In recent years, deep-learning-based methods have been proposed to handle this problem; however, for multi-modality image fusion, a single shared network cannot extract effective feature maps from source images acquired by different image sensors. TPFusion avoids this issue. First, we extract the textural information of the source images. Two densely connected networks are then trained to fuse the textural information and the source images, respectively, so that more textural details are preserved in the fused image. Moreover, the loss functions that constrain the two densely connected convolutional networks are designed according to the characteristics of the textural information and the source images, so the fused image obtains more textural information from the sources. To prove the validity of our method, we conduct comparison and ablation experiments with qualitative and quantitative assessments. The ablation experiments confirm the effectiveness of TPFusion. Compared with existing advanced IR and VIS image fusion methods, our results are better in both objective and subjective aspects: qualitatively, our fused images have better contrast and abundant textural details; quantitatively, TPFusion outperforms existing representative fusion methods.

12.
This paper presents a fusion method for infrared-visible and infrared-polarization images based on the multi-scale center-surround top-hat transform, which can effectively extract the feature and detail information of the source images. Firstly, the multi-scale bright (dark) feature regions of the source images are extracted at different scale levels by the multi-scale center-surround top-hat transform. Secondly, the bright (dark) feature regions at different scale levels are refined by spatial scale to eliminate redundancies. Thirdly, the refined bright (dark) feature regions from different scales are combined by addition into the fused bright (dark) feature regions. Then, a base image is calculated by performing dilation and erosion on the source images with the largest-scale outer structuring element. Finally, the fusion image is obtained by importing the fused bright and dark features into the base image with a reasonable strategy. Experimental results indicate that the proposed fusion method achieves state-of-the-art performance in both objective assessment and subjective visual quality.

13.
Considering the characteristics of infrared and visible images, a fusion algorithm based on the wavelet packet transform is proposed. The source images are first decomposed by the wavelet packet transform into a low-frequency component and band-pass directional sub-band components; different fusion rules are applied to the different components to obtain the fused coefficients, and the fused image is then obtained by wavelet packet reconstruction. The method extracts the detail information of the source images and achieves good fusion results.

14.
Infrared and visible image fusion has long been a research hotspot in the image field; fusion can compensate for the deficiencies of a single sensor and provide a good imaging basis for image understanding and analysis. Owing to limitations of manufacturing technology and cost, the resolution of infrared detectors is far lower than that of visible-light detectors, and this resolution gap between source images hinders practical application to some extent. To address the resolution mismatch between infrared and visible images, a multi-task convolutional network framework for infrared image super-resolution reconstruction and fusion is proposed and applied to multi-resolution image fusion. Regarding the network structure, a dual-channel network is first designed to extract infrared and visible features separately, so that the algorithm is not restricted by the resolutions of the source images. A feature upsampling module is then proposed: bilinear interpolation first increases the number of pixels, and a multilayer perceptron then finely fits the mapping between the pixel-smooth space and the high-frequency space, enabling infrared image upsampling at arbitrary scales without retraining the model. Linear attention is further introduced into the network to learn nonlinear relationships between spatial positions in the feature space, suppressing irrelevant information and strengthening the network's representation of global information. Regarding the loss function, a gradient loss is proposed that keeps the filter responses with the larger absolute values from the infrared and visible images and computes the Frobenius norm between these values and the responses of the reconstructed fused image, so that a fused image can be generated without an ideal fused image as ground-truth supervision. In addition, optimizing the multi-task model jointly under the gradient loss and pixel loss allows simultaneous reconstruction of the fused image and the high-resolution infrared image...

15.
Considering that infrared polarization and intensity images contain both shared and unique information, an image fusion method based on the dual-tree complex wavelet transform (DTCWT) and sparse representation is proposed. First, the DTCWT extracts the high- and low-frequency components of the source images, and the fused high-frequency component is obtained with a maximum-absolute-value rule. The low-frequency components are then assembled into a joint matrix, a redundant dictionary is trained for this matrix with the K-SVD method, and the sparse coefficients of each low-frequency component are computed over this dictionary; the positions of the non-zero entries of the sparse coefficients distinguish the shared information from the unique information, which are fused with corresponding rules. Finally, the fused high- and low-frequency coefficients are transformed back by the inverse DTCWT to obtain the fused image. Experimental results show that the proposed algorithm not only highlights the shared information of the source images but also preserves their unique information well, and the fused images have high contrast and rich detail.

16.
周浦城, 韩裕生, 薛模根, 王峰, 张磊. 《光子学报》 (Acta Photonica Sinica), 2014, 39(9): 1682-1687
To overcome the shortcomings of traditional pseudo-color fusion methods for polarization images, an image fusion method based on non-negative matrix factorization and the IHS (Intensity Hue Saturation) color model is proposed. The polarization-parameter images obtained from polarization analysis are first used as the original data set for non-negative matrix factorization, yielding three characteristic basis images that contain most of the polarization information of the scene. After histogram matching, the three characteristic basis images are mapped to the three channels of the IHS color model and finally transformed into RGB color space to obtain the fused image. Experimental results show that the method not only has good color expressiveness but also effectively highlights the detail information of the target and improves image interpretability.

17.
Multi-focus image fusion is a crucial branch of image processing, and many methods have been developed from different perspectives to solve this problem. Among them, sparse representation (SR)-based and convolutional neural network (CNN)-based fusion methods are widely used. Fusing source image patches, the SR-based model is essentially a local method with a nonlinear fusion rule; by contrast, the direct mapping between the source images follows a decision map learned via CNN, a global fusion with a linear rule. Combining the advantages of these two methods, a novel fusion method in which a CNN assists SR is proposed in order to obtain a fused image with more precise and abundant information. In the proposed method, source image patches are fused based on SR and the new weight obtained by the CNN. Experimental results demonstrate that the proposed method not only clearly outperforms the SR and CNN methods in terms of visual perception and objective evaluation metrics, but is also significantly better than other state-of-the-art methods, while its computational complexity is greatly reduced.

18.
冯鑫, 李川, 胡开群. 《物理学报》 (Acta Physica Sinica), 2014, 63(18): 184202
To overcome noise interference and artifacts that blur target contours and lower contrast when fusing infrared and visible images, an image fusion method based on deep-model segmentation is proposed. First, a deep Boltzmann machine learns target and background contour priors of the infrared and visible images to build a deep contour-segmentation model, and the optimal energy segmentation of the infrared and visible image contours is obtained with the Split Bregman iterative algorithm. The source images are then decomposed by the nonsubsampled contourlet transform, and the coefficients in the segmented background contours are combined with a structural-similarity rule. Finally, the fused image is reconstructed by the inverse nonsubsampled contourlet transform. Numerical experiments show that the algorithm effectively obtains fused images with clear target and background contours; the fusion results have high contrast and suppress noise.

19.
Multi-focus image fusion combines multiple source images with different focus points into one image, so that the resulting image appears all in focus. To improve the accuracy of focused-region detection and the fusion quality, a novel multi-focus image fusion scheme based on robust principal component analysis (RPCA) and a pulse-coupled neural network (PCNN) is proposed. In this method, registered source images are decomposed into principal component matrices and sparse matrices by RPCA decomposition. Local sparse features computed from the sparse matrix form a composite feature space that represents the important information of the source images and serves as the input that motivates the PCNN neurons. The focused regions of the source images are detected from the firing maps of the PCNN and are integrated to construct the final fused image. Experimental results demonstrate the superiority of the proposed scheme over existing methods and highlight its expediency and suitability.

20.
A fusion method based on the pseudo Wigner-Ville distribution (PWVD) is proposed. A one-dimensional N-pixel sliding window applies the pseudo Wigner transform to each source image in every direction, and the direction with the largest root mean square is chosen as the PWVD decomposition direction for that image, decomposing it into energy spectra of different frequency bands. The spectra are then fused band by band: in the high-frequency bands the regional energy maximum is taken, and in the low-frequency bands the energy-variance maximum is taken, forming fused energy spectra for the different bands. Finally, the inverse PWVD is applied to the fused spectra to form the fused image. Fusion experiments were performed on infrared and visible images, multi-focus images, computed tomography (CT) and magnetic resonance (MR) images, and infrared and synthetic aperture radar (SAR) images, and the information entropy of the fused and source images was compared. The results show that the fused images produced by this algorithm retain most of the information of the source images.
