Similar Documents
20 similar documents found.
1.
Military, navigation and concealed-weapon-detection applications need different imaging modalities, such as visible and infrared, to monitor a target scene. These modalities provide complementary information, which must be integrated into a single image for better situational awareness. Image fusion is the process of integrating complementary source information into a composite image. In this paper, we propose a new image fusion method based on saliency detection and two-scale image decomposition. The visual-saliency extraction process introduced in this paper highlights the salient information of the source images very well, and a new weight-map construction process based on visual saliency integrates the visually significant information of the source images into the fused image. In contrast to most multi-scale image fusion techniques, the proposed technique uses only a two-scale decomposition, so it is fast and efficient. Our method is tested on several image pairs and evaluated qualitatively by visual inspection and quantitatively using objective fusion metrics. The outcomes of the proposed method are compared with state-of-the-art multi-scale fusion techniques; the results reveal that its performance is comparable or superior to that of existing methods.
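The two-scale pipeline described above can be sketched as follows. The mean-filter base/detail split and the smoothed-absolute-detail saliency measure are illustrative stand-ins, since the abstract does not specify the exact saliency detector or weight-map construction.

```python
import numpy as np

def mean_filter(img, r):
    """Box (mean) filter with edge padding over a (2r+1)x(2r+1) window."""
    pad = np.pad(img.astype(float), r, mode="edge")
    h, w = img.shape
    k = 2 * r + 1
    out = np.zeros((h, w))
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + h, dx:dx + w]
    return out / (k * k)

def two_scale_saliency_fusion(a, b, r=3):
    """Fuse two registered grayscale images via two-scale decomposition.

    Base layers come from a mean filter; detail layers are the residuals.
    Saliency is approximated by the locally averaged absolute detail
    (an assumption -- the paper's saliency extraction is not given here).
    """
    base_a, base_b = mean_filter(a, r), mean_filter(b, r)
    detail_a, detail_b = a - base_a, b - base_b
    sal_a = mean_filter(np.abs(detail_a), r)
    sal_b = mean_filter(np.abs(detail_b), r)
    w = (sal_a >= sal_b).astype(float)          # binary saliency weight map
    fused_base = w * base_a + (1 - w) * base_b
    fused_detail = w * detail_a + (1 - w) * detail_b
    return fused_base + fused_detail            # two-scale reconstruction
```

Because only two scales are involved, the whole pipeline is a handful of box filters, which is what makes this family of methods fast compared with deep multi-scale pyramids.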

2.
Multimodal medical image fusion aims to fuse images with complementary multisource information. In this paper, we propose a novel multimodal medical image fusion method using a pulse-coupled neural network (PCNN) and a weighted sum of eight-neighborhood-based modified Laplacian (WSEML) integrating guided image filtering (GIF) in the non-subsampled contourlet transform (NSCT) domain. First, the source images are decomposed by NSCT into several low- and high-frequency sub-bands. Second, a PCNN-based fusion rule is used to process the low-frequency components, and the GIF-WSEML fusion model is used to process the high-frequency components. Finally, the fused image is obtained by integrating the fused low- and high-frequency sub-bands. The experimental results demonstrate that the proposed method achieves better multimodal medical image fusion performance, with clear advantages on the objective evaluation indexes VIFF, QW, API, SD and EN, as well as in time consumption.

3.
Adaptive image fusion algorithm based on the Shearlet transform
石智, 张卓, 岳彦刚. 《光子学报》 2013, 42(1): 115-120
Considering the imaging characteristics of multi-focus images and of multispectral and panchromatic images, and exploiting the Shearlet transform's ability to sparsely represent image features, a new image fusion rule is proposed, and on this basis an adaptive image fusion algorithm using the Shearlet transform. In the multi-focus fusion algorithm, the differently focused images are each decomposed by the Shearlet transform, and the resulting high- and low-frequency coefficients are fused under the proposed rule. Comparative experiments with several algorithms show that images fused by the proposed algorithm have higher clarity and richer detail. For multispectral and panchromatic image fusion, a method combining the Shearlet transform with the HSV transform is proposed: the multispectral image is first converted to HSV; the V component and the panchromatic image are decomposed and fused in the Shearlet domain, with specific fusion criteria applied to the decomposition coefficients; finally, the newly fused component is combined with the H and S components and inverse HSV-transformed to produce the fused RGB image. The algorithm strikes a good balance between spatial resolution and spectral fidelity: the fused image effectively improves spatial resolution while reducing spectral distortion. Simulation experiments show that, compared with traditional multispectral-panchromatic fusion algorithms, the proposed algorithm achieves better fusion performance and visual quality.

4.
To address the low brightness and contrast, missing detail and contour information, and poor visibility of traditional fusion algorithms for infrared and low-light visible images, an enhanced fusion method based on latent low-rank representation and composite filtering is proposed. First, an improved high-dynamic-range compression enhancement method is applied to raise the brightness of the visible image. Then, a decomposition based on latent low-rank representation and composite filtering splits the infrared image and the enhanced low-light visible image into corresponding low- and high-frequency layers. The low-frequency layers are fused with an improved contrast-enhanced visual-saliency-map fusion method, and the high-frequency layers with an improved weighted-least-squares optimization fusion method. Finally, the fused low- and high-frequency layers are linearly superimposed to obtain the final fused image. Comparative experiments with other methods show that fused images obtained by this method are rich in detail, highly clear, and of good visibility.
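The enhancement step above lifts the brightness of the low-light visible image before decomposition. A minimal sketch, assuming a grayscale image in [0, 1], is a logarithmic dynamic-range compression; this is an illustrative stand-in, since the abstract does not give the exact form of its improved HDR compression method.

```python
import numpy as np

def drc_enhance(img, alpha=50.0):
    """Logarithmic dynamic-range compression for a low-light image.

    `img` is assumed to be grayscale in [0, 1]. The formula and the
    gain `alpha` are assumptions for illustration, not the paper's
    improved enhancement method.
    """
    x = np.clip(img.astype(float), 0.0, 1.0)
    # log1p is concave, so dark values are lifted more than bright ones,
    # raising overall brightness while keeping the output in [0, 1]
    return np.log1p(alpha * x) / np.log1p(alpha)
```

For example, a pixel at 0.1 maps to about 0.46 with the default gain, while a pixel already at 1.0 stays at 1.0.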

5.
A medical image fusion method based on bi-dimensional empirical mode decomposition (BEMD) and a dual-channel PCNN is proposed in this paper. The multi-modality medical images are decomposed into intrinsic mode function (IMF) components and a residue component, and the IMF components are divided into high-frequency and low-frequency components based on component energy. Fusion coefficients are obtained by the following rule: the high-frequency components and the residue component are superimposed to retain more texture, while the low-frequency components, which contain more details of the source images, are fed into the dual-channel PCNN to select fusion coefficients. The fused medical image is then obtained by the inverse BEMD transformation. BEMD is a self-adaptive tool for analyzing nonlinear and non-stationary data; it does not need a predefined filter or basis function. The dual-channel PCNN reduces computational complexity and selects fusion coefficients well, so the combined application of BEMD and the dual-channel PCNN extracts image detail more effectively. The experimental results show that the proposed algorithm yields better fusion results and has advantages over traditional fusion algorithms.

6.
To improve the fusion accuracy of remote sensing and multi-focus images, and exploiting the ability of the non-subsampled shearlet transform (NSST) to capture image detail, an image fusion method combining NSST with weighted regional features is proposed. The source images are decomposed by NSST into multi-scale, multi-directional low- and high-frequency sub-bands. The low-frequency sub-band coefficients are fused by non-negative matrix factorization (NMF) with an improved gradient projection, and the high-frequency sub-band coefficients by a strategy combining weighted regional energy and regional variance; the fused image is then obtained by the inverse NSST. Experimental results show that the method preserves the useful information of multiple images well in terms of subjective visual quality, and comparison results with other fusion algorithms are given on the objective evaluation metrics information entropy (EN), mutual information (MI) and weighted edge information (QAB/F).

7.
In order to improve multi-focus image fusion quality, a novel fusion algorithm based on window empirical mode decomposition (WEMD) is proposed. WEMD is an improved form of bidimensional empirical mode decomposition (BEMD) whose decomposition process uses an adding-window principle, effectively resolving the signal-concealment problem. We use WEMD for multi-focus image fusion and formulate different fusion rules for the bidimensional intrinsic mode function (BIMF) components and the residue component. For fusion of the BIMF components, the sum-modified-Laplacian is used and a scheme based on visual feature contrast is adopted; when choosing the residue coefficients, the pixel value with the higher local visibility is selected. We carried out four groups of multi-focus image fusion experiments and compared objective evaluation criteria with three other fusion methods. The experimental results show that the proposed approach is effective and fuses multi-focus images better than some traditional methods.
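The sum-modified-Laplacian focus measure named above can be sketched as follows; the window radius is an assumption, and the WEMD decomposition itself is omitted, so this only illustrates the focus measure used in the BIMF fusion rule.

```python
import numpy as np

def modified_laplacian(img):
    """Per-pixel modified Laplacian:
    |2I - I_left - I_right| + |2I - I_up - I_down| (edge-padded)."""
    p = np.pad(img.astype(float), 1, mode="edge")
    c = p[1:-1, 1:-1]
    return (np.abs(2 * c - p[1:-1, :-2] - p[1:-1, 2:])
            + np.abs(2 * c - p[:-2, 1:-1] - p[2:, 1:-1]))

def sum_modified_laplacian(img, r=1):
    """Sum the modified Laplacian over a (2r+1)x(2r+1) window,
    giving a local focus (sharpness) measure per pixel."""
    ml = modified_laplacian(img)
    pad = np.pad(ml, r, mode="edge")
    h, w = ml.shape
    k = 2 * r + 1
    out = np.zeros((h, w))
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + h, dx:dx + w]
    return out
```

A sharp region (strong second differences) scores high, a defocused or flat region scores near zero, which is why the measure discriminates focused from blurred BIMF coefficients.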

8.
Multi-focus image fusion combines multiple source images with different focus points into one image, so that the resulting image appears all in focus. In order to improve the accuracy of focused-region detection and fusion quality, a novel multi-focus image fusion scheme based on robust principal component analysis (RPCA) and a pulse-coupled neural network (PCNN) is proposed. In this method, registered source images are decomposed into principal component matrices and sparse matrices by RPCA decomposition. Local sparse features computed from the sparse matrix construct a composite feature space representing the important information of the source images, which is fed to the PCNN to motivate its neurons. The focused regions of the source images are detected from the firing maps of the PCNN and integrated to construct the final fused image. Experimental results demonstrate the superiority of the proposed scheme over existing methods and highlight its expediency and suitability.

9.
Image fusion algorithm based on region segmentation and the Contourlet transform
An image fusion algorithm based on region segmentation and the Contourlet transform is proposed. First, each source image is segmented into regions, and region information is measured and extracted using the concepts of regional energy ratio and regional clarity ratio. Then, each source image undergoes a multi-scale non-subsampled Contourlet decomposition: the high-frequency parts are fused with a maximum-absolute-value operator, while the low-frequency parts are fused with region-based fusion rules and operators. Finally, reconstruction yields the fused image. Fusion experiments on infrared and visible images were conducted and compared with pixel-based fusion using the à trous wavelet transform and the Contourlet transform. The results show that the fused image obtained by this algorithm retains both the spectral information of the visible image and the target information of the infrared image; its entropy is about 10% higher than that of the pixel-based methods, and its cross-entropy is only about 1% of theirs.

10.
Fusion of two-color mid-wave infrared images based on the support value transform and top-hat decomposition
To address the limited contrast improvement and heavy edge-region distortion that often occur when two-color mid-wave infrared images are fused by multi-scale top-hat decomposition alone, a fusion method combining the support value transform with top-hat decomposition is proposed. The two-color mid-wave images are first decomposed by the support value transform into low-frequency images and support-value image sequences. Bright and dark information is then extracted from the last-level low-frequency images by multi-scale top-hat decomposition, and the bright and dark information is fused separately by taking the larger gray value. The fused bright- and dark-information images are enhanced by gray-level normalization and Gaussian filtering and then fused with the two low-frequency images. Taking this result as the new low-frequency image, an inverse support value transform is performed together with the support-value fusion sequence (obtained by taking the larger gray value) to produce the final fused image. Compared with fusion by the support value transform alone or by multi-scale top-hat decomposition alone, the experimental results show contrast improved by 11.69%, distortion reduced by 63.42%, and local roughness increased by 38.12%. Extracting bright and dark information from the low-frequency image, fusing and enhancing it separately, and then fusing it back into the low-frequency image effectively resolves the conflict between raising contrast and reducing edge-region distortion in infrared image fusion, providing a new way to improve fusion quality.

11.
In multi-focus image fusion, decomposing the source images into fixed-size blocks can cause blocking artifacts, blurred edges and even focus errors in the fused image. To overcome this, a new multi-focus image fusion method based on artificial-fish-swarm-optimized block partitioning is proposed. First, the source images are divided into non-overlapping blocks, the blocks of higher clarity are selected by a focus criterion, and the selected blocks are merged to reconstruct an initial fused image. Then an improved artificial fish swarm optimization algorithm searches, according to a fitness value, for the optimal block size, producing a better fused image. Experiments comparing this method with spatial-domain, frequency-domain and other optimization-based fusion methods show that it yields fused images with better objective quality and subjective visual appearance.

12.
A mathematical-morphology-based fusion method for digital holographic reconstructed images
潘锋, 闫贝贝, 肖文, 刘烁, 李艳. 《中国光学》 2015, 8(1): 60-67
For reconstructed images in digital holography that carry different focus information at different reconstruction distances, a multi-focus fusion method based on mathematical morphology is proposed to effectively extend the imaging depth of field. The high- and low-frequency components of the source images are first obtained by a wavelet-Contourlet transform. Then, given the speckle noise characteristic of digital holography, the high-frequency components are fused by a regional-energy method based on mathematical morphology, and the low-frequency components by a weighted-contrast method. Finally, the fused coefficients are inverse-transformed to obtain the fused image. Through effectiveness analysis and experimental verification, the proposed method was compared with fusion without mathematical morphology. The results show that the morphology-based method suppresses speckle noise well, retains more detail, and effectively extends the imaging depth of field to 11.5 cm. For a dice with a rough surface and little information, the spatial gradient operator improved by 11.8% and the entropy by 2.7%; for a coin with a smooth surface and more information, the spatial gradient operator improved by 13.6% and the entropy by 2.8%.

13.
A new multi-focus image fusion method using spatial frequency (SF) and morphological operators is proposed. First, the focused regions are detected using an SF criterion. Then morphological operators are used to smooth the regions. Finally, the fused image is constructed by cutting and pasting the focused regions of the source images. Experimental results show that the proposed algorithm performs well for multi-focus image fusion.
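The SF criterion and the cut-and-paste step can be sketched as below. The block size is an assumption, block-wise selection is an illustrative simplification of focused-region detection, and the morphological smoothing step is omitted.

```python
import numpy as np

def spatial_frequency(block):
    """SF = sqrt(RF^2 + CF^2), computed from row and column first
    differences -- a standard block sharpness measure."""
    b = block.astype(float)
    rf2 = np.mean((b[:, 1:] - b[:, :-1]) ** 2)   # row frequency squared
    cf2 = np.mean((b[1:, :] - b[:-1, :]) ** 2)   # column frequency squared
    return np.sqrt(rf2 + cf2)

def block_sf_fusion(a, b, bs=8):
    """Cut-and-paste fusion: for each bs x bs block, keep the source
    block with the higher spatial frequency (i.e. the sharper one)."""
    fused = np.empty(a.shape, dtype=float)
    for i in range(0, a.shape[0], bs):
        for j in range(0, a.shape[1], bs):
            pa = a[i:i + bs, j:j + bs]
            pb = b[i:i + bs, j:j + bs]
            if spatial_frequency(pa) >= spatial_frequency(pb):
                fused[i:i + bs, j:j + bs] = pa
            else:
                fused[i:i + bs, j:j + bs] = pb
    return fused
```

In the paper, morphological opening/closing would then smooth the binary focus-decision map before pasting, which removes isolated misclassified blocks.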

14.
Pyramid decomposition in the NSCT is a band-pass filtering process in the frequency domain in which images at different scales are orthogonal. From the perspective of image content, however, correlation is likely to exist between the images being fused, and this kind of decomposition makes images at different scales contain redundant information, so the fused image may not capture subtle information from the original images. To overcome this problem, an effective image fusion method based on redundant-lifting non-separable wavelet multi-directional analysis (NSWMDA) and an adaptive pulse-coupled neural network (PCNN) is proposed. The original images are first decomposed by NSWMDA into several sub-bands in order to retain texture detail and contrast information; the adaptive PCNN algorithm is then applied to the high-frequency directional sub-bands to extract high-frequency information, while the low-frequency sub-bands are fused by a Gaussian-kernel-weighted average with a choose-maximum fusion rule. Experimental results show that the proposed method makes the fused image retain more texture detail and contrast information.

15.
Infrared polarization and intensity imagery provide complementary, discriminative information for image understanding and interpretation. In this paper, a novel fusion method is proposed that effectively merges this information with various combination rules. It uses both the low-frequency and high-frequency image components from the support value transform (SVT) and applies fuzzy logic in the combination process. The images to be fused (both infrared polarization and intensity images) are first decomposed by the SVT into low-frequency component images and support value image sequences. The low-frequency component images are then combined using a fuzzy rule blending three sub-combination methods: (1) region-feature maximum, (2) region-feature weighted average, and (3) pixel-value maximum; the support value image sequences are merged using a fuzzy rule fusing two sub-combination methods: (1) pixel-energy maximum and (2) region-feature weighting. With two newly defined features as variables, i.e. the low-frequency difference feature for the low-frequency component images and the support-value difference feature for the support value image sequences, trapezoidal membership functions are developed to tune the fuzzy fusion process. Finally, the fused image is obtained by inverse SVT operations. Experimental results from visual inspection and quantitative evaluation both indicate the superiority of the proposed method over its counterparts in fusing infrared polarization and intensity images.

16.
To solve the fusion problem for multifocus images of the same scene, a novel algorithm based on focused-region detection and multiresolution is proposed. To integrate the advantages of spatial-domain and transform-domain fusion methods, we use focused-region detection together with a new multiscale-transform (MST) fusion method to guide pixel combination. First, an initial fused image is acquired with a novel multiresolution image fusion method. Pixels of the original images that are similar to the corresponding pixels of the initial fused image are considered to lie in sharply focused regions; in this way the initial focused regions are determined, and morphological opening and closing are employed for post-processing. The pixels within the focused regions of each source image are then selected as pixels of the fused image, while the initial fused-image pixels located at the focused-region borders are retained as pixels of the final fused image. The experimental results show that the proposed approach is effective and fuses multi-focus images better than some current methods.

17.
For infrared and visible image fusion, an image fusion method based on the NSCT is proposed. The low-frequency sub-band coefficients of the NSCT decomposition are fused by an adaptive weighting rule based on regional energy, while the high-frequency sub-band coefficients use a hybrid method: for the lower decomposition levels, the coefficient with the larger regional variance is selected; for the highest level, the coefficient with the larger absolute pixel value is selected. Experimental results show that the algorithm captures more detail and yields a more satisfactory fused image.
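The region-energy adaptive weighting for the low-frequency sub-band can be sketched as below. The window radius and the normalized-energy weight form are assumptions, since the abstract does not give the exact weighting formula.

```python
import numpy as np

def region_energy(coeffs, r=1):
    """Sum of squared coefficients over a (2r+1)x(2r+1) window."""
    sq = np.pad(coeffs.astype(float) ** 2, r, mode="edge")
    h, w = coeffs.shape
    k = 2 * r + 1
    out = np.zeros((h, w))
    for dy in range(k):
        for dx in range(k):
            out += sq[dy:dy + h, dx:dx + w]
    return out

def fuse_lowfreq_region_energy(la, lb, r=1):
    """Adaptively weight two low-frequency sub-bands by their
    normalized regional energies (per-pixel soft weighting)."""
    ea, eb = region_energy(la, r), region_energy(lb, r)
    w = ea / (ea + eb + 1e-12)   # epsilon guards against division by zero
    return w * la + (1 - w) * lb
```

Unlike a hard choose-maximum rule, the soft weight varies continuously with local energy, which avoids seams between regions dominated by different sources.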

18.
Methods based on convolutional neural networks have demonstrated powerful information-integration ability in image fusion. However, most existing neural-network-based methods are applied to only part of the fusion process. In this paper, an end-to-end multi-focus image fusion method based on a multi-scale generative adversarial network (MsGAN) is proposed that makes full use of image features by combining multi-scale decomposition with a convolutional neural network. Extensive qualitative and quantitative experiments on the synthetic and Lytro datasets demonstrate the effectiveness and superiority of the proposed MsGAN over state-of-the-art multi-focus image fusion methods.

19.
Infrared polarization and intensity images contain both shared and modality-specific information. Exploiting this characteristic, an image fusion method based on the dual-tree complex wavelet transform (DTCWT) and sparse representation is proposed. First, the DTCWT extracts the high- and low-frequency components of the source images, and the fused high-frequency component is obtained by the maximum-absolute-value rule. Then the low-frequency components are assembled into a joint matrix, a redundant dictionary is trained on this matrix by the K-SVD method, and the sparse coefficients of each low-frequency component are computed over that dictionary; shared and modality-specific information is identified from the positions of the non-zero values in the sparse coefficients and fused with corresponding rules. Finally, the fused high- and low-frequency coefficients are inverse DTCWT-transformed to obtain the fused image. Experimental results show that the proposed algorithm not only highlights the information shared by the source images but also preserves their specific information well, while the fused image has high contrast and rich detail.

20.
The high-frequency components in traditional multi-scale transform methods are approximately sparse and can represent different detail information. In the low-frequency component, however, few coefficients lie near zero, so the low-frequency image information cannot be sparsely represented. The low-frequency component contains the main energy of the image and depicts its profile, and fusing it directly is not conducive to obtaining a highly accurate fusion result. Therefore, this paper presents an infrared and visible image fusion method combining the multi-scale and top-hat transforms. On one hand, the new top-hat transform effectively extracts the salient features of the low-frequency component; on the other hand, the multi-scale transform extracts high-frequency detail at multiple scales and from diverse directions. The combination of the two is conducive to acquiring more characteristics and a more accurate fusion result. For the low-frequency component, a new type of top-hat transform is used to extract the low-frequency features, and different fusion rules are then applied to fuse the low-frequency features and the low-frequency background; for the high-frequency components, the product-of-characteristics method is used to integrate the detailed high-frequency information. Experimental results show that the proposed algorithm obtains more detailed information and clearer infrared-target fusion results than traditional multi-scale transform methods. Compared with state-of-the-art fusion methods based on sparse representation, the proposed algorithm is simple and efficacious, and its time consumption is significantly reduced.
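The classical grayscale white and black top-hat transforms, which extract the bright and dark salient features of a low-frequency image, can be sketched with flat square structuring elements. The paper's "new type of top-hat transform" is not specified in the abstract, so this shows only the standard operators it builds on.

```python
import numpy as np

def _windows(img, r):
    """All (2r+1)^2 shifted views of an edge-padded image."""
    p = np.pad(img, r, mode="edge")
    h, w = img.shape
    k = 2 * r + 1
    return [p[dy:dy + h, dx:dx + w] for dy in range(k) for dx in range(k)]

def erode(img, r=1):
    """Grayscale erosion: window minimum (flat square structuring element)."""
    return np.min(_windows(img, r), axis=0)

def dilate(img, r=1):
    """Grayscale dilation: window maximum."""
    return np.max(_windows(img, r), axis=0)

def white_top_hat(img, r=1):
    """Image minus its opening: bright features smaller than the element."""
    return img - dilate(erode(img, r), r)

def black_top_hat(img, r=1):
    """Closing minus the image: dark features smaller than the element."""
    return erode(dilate(img, r), r) - img
```

Applied to the low-frequency sub-band, the white top-hat isolates small bright structures (e.g. hot infrared targets) and the black top-hat isolates small dark structures, which can then be fused separately from the smooth background.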
