Similar Documents
20 similar documents found (search time: 31 ms)
1.
In this paper, an interesting fusion method, named NNSP, is developed for infrared and visible image fusion, where non-negative sparse representation is used to extract the features of the source images. The characteristics of the non-negative sparse representation coefficients are described by their activity levels and sparseness levels. Multiple methods are developed to detect the salient features of the source images, including the target and contour features in the infrared images and the texture features in the visible images. A regional consistency rule is proposed to obtain the fusion guide vector that determines the fused image automatically, so that the features of the source images are seamlessly integrated into the fused image. Experimental results indicate that, compared with classical and state-of-the-art methods, NNSP achieves better fusion performance in both noiseless and noisy situations.
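As a rough illustration of how such coefficient characteristics could be measured, the sketch below computes an l1-norm activity level and the Hoyer sparseness measure for a non-negative coefficient vector; these are common choices, not necessarily the exact definitions used in the paper, and the example vectors are hypothetical.

```python
import numpy as np

def activity_level(coeffs):
    """Activity of a non-negative sparse coefficient vector: its l1 norm."""
    return float(np.sum(np.abs(coeffs)))

def hoyer_sparseness(coeffs, eps=1e-12):
    """Hoyer's sparseness measure in [0, 1]; 1 means a single non-zero entry."""
    n = coeffs.size
    l1 = np.sum(np.abs(coeffs))
    l2 = np.sqrt(np.sum(coeffs ** 2)) + eps
    return float((np.sqrt(n) - l1 / l2) / (np.sqrt(n) - 1))

# Hypothetical coefficients of an infrared patch (sparse, strong target response)
# and a visible patch (denser texture response) under a non-negative dictionary.
c_ir = np.array([0.0, 2.5, 0.0, 0.1, 0.0])
c_vis = np.array([0.4, 0.3, 0.5, 0.2, 0.3])
print(activity_level(c_ir), hoyer_sparseness(c_ir))
print(activity_level(c_vis), hoyer_sparseness(c_vis))
```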

2.
A novel technique for image fusion based on improved non-negative matrix factorization (INMF) in the non-subsampled shearlet transform (NSST) domain is proposed. Firstly, the NSST, which has much lower computational complexity than other conventional multi-resolution tools, is adopted to perform the multi-scale and multi-directional decompositions of the source images. Secondly, the traditional basic NMF model is updated to an improved NMF (INMF). The INMF is utilized to capture the marked characteristics in a series of sub-band components from a purely mathematical point of view, without destroying the two-dimensional structural information of the image. Thirdly, the fused sub-images are obtained with the INMF and a local directional contrast (LDC) model. Finally, the fused image is obtained by applying the inverse NSST. Experimental results demonstrate that the presented technique outperforms other typical NMF-based methods in both visual effect and objective evaluation criteria.
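The NSST and the improved NMF model are specific to the paper, but the underlying idea of using NMF to merge two sub-bands can be sketched with a plain rank-1 factorization: the flattened sub-bands form the columns of the data matrix, and the single basis vector serves as the fused sub-band. This is only an assumed baseline, using scikit-learn's standard NMF rather than INMF.

```python
import numpy as np
from sklearn.decomposition import NMF

def nmf_fuse_subband(sub_a, sub_b, max_iter=500):
    """Fuse two same-sized sub-band images with a rank-1 NMF.

    Each flattened sub-band is one column of the data matrix V; the single
    NMF basis vector captures features shared by both sources and is
    reshaped back into the fused sub-band.
    """
    shape = sub_a.shape
    shift = min(sub_a.min(), sub_b.min())          # NMF needs non-negative input
    V = np.column_stack([(sub_a - shift).ravel(),
                         (sub_b - shift).ravel()])
    model = NMF(n_components=1, init='random', max_iter=max_iter, random_state=0)
    W = model.fit_transform(V)                     # (n_pixels, 1) basis
    H = model.components_                          # (1, 2) mixing weights
    fused = (W @ H).mean(axis=1).reshape(shape) + shift
    return fused

# Usage with two hypothetical sub-band arrays of identical shape:
# fused_band = nmf_fuse_subband(band_ir, band_vis)
```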

3.
With the nonsubsampled contourlet transform (NSCT), a novel region-segmentation-based fusion algorithm for infrared (IR) and visible images is presented. The IR image is segmented according to the physical features of the target. The source images are decomposed by the NSCT, and then different fusion rules for the target regions and the background regions are employed to merge the NSCT coefficients, respectively. Finally, the fused image is obtained by applying the inverse NSCT. Experimental results show that the proposed algorithm outperforms the pixel-based methods, including the traditional wavelet-based method and the NSCT-based method.
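A minimal sketch of the region-dependent merging step is given below: given a binary target mask obtained by segmenting the IR image, IR coefficients are kept inside the target regions and an absolute-maximum rule is applied in the background. The specific rules and the NSCT decomposition itself are not reproduced here; the mask and coefficient arrays are assumed to have the same size, which holds for the fully non-subsampled NSCT.

```python
import numpy as np

def region_fuse_coeffs(c_ir, c_vis, target_mask):
    """Merge one pair of sub-band coefficient arrays with region-dependent rules.

    target_mask: boolean array marking target regions segmented from the IR image.
    Background: absolute-maximum rule keeps the stronger detail from either source.
    Target:     IR coefficients are kept to preserve the target signature.
    """
    fused = np.where(np.abs(c_ir) >= np.abs(c_vis), c_ir, c_vis)
    fused[target_mask] = c_ir[target_mask]
    return fused
```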

4.
Multifocus image fusion aims to overcome the finite depth of field of imaging cameras by combining information from multiple images of the same scene. For this fusion problem, a novel algorithm is proposed based on multiscale products of the lifting stationary wavelet transform (LSWT) and an improved pulse coupled neural network (PCNN), in which the linking strength of each neuron is chosen adaptively. To select the coefficients of the fused image properly when the source multifocus images are noisy, the selection principles for the low-frequency subband coefficients and the bandpass subband coefficients are discussed separately. For the low-frequency subband coefficients, a new sum-modified-Laplacian (NSML) of the low-frequency subband, which effectively represents the salient features and sharp boundaries of the image in the LSWT domain, is the input that motivates the PCNN neurons; for the high-frequency subband coefficients, a novel local neighborhood sum of the Laplacian of multiscale products is developed and taken as a high-frequency feature to motivate the PCNN neurons. The coefficients in the LSWT domain with large firing times are selected as coefficients of the fused image. Experimental results demonstrate that the proposed fusion approach outperforms the traditional discrete wavelet transform (DWT) based, LSWT-based, and LSWT-PCNN-based image fusion methods in both visual quality and objective evaluation, even when the source images are noisy.
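The sum-modified-Laplacian used to drive the PCNN can be sketched as follows; border handling with np.roll and the window size are simplifications, and the paper's exact NSML definition and PCNN linking stage are not reproduced.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sum_modified_laplacian(band, window=3):
    """Window sum of the modified Laplacian
    |2*I(x,y) - I(x-1,y) - I(x+1,y)| + |2*I(x,y) - I(x,y-1) - I(x,y+1)|.
    np.roll wraps at the borders, which is acceptable away from the image edge."""
    ml = (np.abs(2 * band - np.roll(band, 1, axis=0) - np.roll(band, -1, axis=0)) +
          np.abs(2 * band - np.roll(band, 1, axis=1) - np.roll(band, -1, axis=1)))
    return uniform_filter(ml, size=window) * window ** 2
```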

5.
In this paper, a fusion method is proposed for merging a high-resolution panchromatic image and a low-resolution multispectral image. The algorithm is based on the discrete wavelet transform (DWT). It applies a correlation-moment rule to the low-frequency bands and a local-deviation rule to the high-frequency bands separately. Experimental results indicate that the proposed approach outperforms the traditional methods.
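A local-deviation rule of the kind mentioned for the high-frequency bands might look like the sketch below, which keeps, pixel by pixel, the detail coefficient with the larger local standard deviation; the window size is an assumption, and the correlation-moment rule for the low-frequency bands is not reproduced.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_deviation(band, size=3):
    """Local standard deviation of a detail band (a simple activity measure)."""
    mean = uniform_filter(band, size=size)
    mean_sq = uniform_filter(band ** 2, size=size)
    return np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))

def fuse_detail_bands(d_pan, d_ms, size=3):
    """Keep, per pixel, the coefficient whose neighbourhood shows larger deviation."""
    return np.where(local_deviation(d_pan, size) >= local_deviation(d_ms, size),
                    d_pan, d_ms)
```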

6.
In this paper, a novel image fusion method based on the expectation maximization (EM) algorithm and the steerable pyramid is proposed. The registered images are first decomposed using the steerable pyramid. The EM algorithm is used to fuse the image components in the low-frequency band. A selection method based on an informative importance measure is applied to those in the high-frequency band. The final fused image is then computed by taking the inverse transform of the composite coefficient representations. Experimental results show that the proposed method outperforms conventional image fusion methods.

7.
A remote sensing image fusion algorithm based on the steerable pyramid frame transform   Cited by: 18 (self-citations: 6, by others: 12)
To exploit the complementary information between multispectral and panchromatic remote sensing images, a steerable pyramid frame transform (SPFT) is proposed, and a remote sensing image fusion algorithm based on this transform is presented. Each band of the multispectral image is fused with the high-resolution panchromatic image separately. First, the panchromatic image is histogram-matched to the multispectral band to be fused; then both the band image and the histogram-matched panchromatic image are decomposed with the SPFT. Fusion consists of combining the SPFT coefficients of the two decomposed images, and applying the inverse SPFT to the combined coefficients yields the fused image of that band and the high-resolution panchromatic image. Experimental results show that the algorithm outperforms fusion methods based on the intensity-hue-saturation (IHS) color space transform and on the discrete wavelet frame transform (DWFT), especially when registration errors exist between the source images.
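The histogram-matching step that precedes the SPFT decomposition is standard and can be sketched with scikit-image; the SPFT itself and the coefficient-combination rules are not reproduced here, and the array names are illustrative.

```python
import numpy as np
from skimage.exposure import match_histograms

def match_pan_to_band(pan, ms_band):
    """Histogram-match the high-resolution panchromatic image to one
    multispectral band before decomposing both with the pyramid transform."""
    return match_histograms(pan, ms_band)

# Usage with hypothetical arrays: pan_matched = match_pan_to_band(pan, ms[:, :, 0])
```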

8.
A novel nonsubsampled contourlet transform (NSCT) based image fusion approach, combining an adaptive-Gaussian (AG) fuzzy membership method, the compressed sensing (CS) technique, and a total variation (TV) based gradient descent reconstruction algorithm, is proposed for the fusion of infrared and visible images. Compared with the wavelet, the contourlet, or any other multi-resolution analysis method, the NSCT has evident advantages, such as multi-scale, multi-directional, and translation-invariant analysis. A fuzzy set is characterized by its membership function (MF), and the commonly used Gaussian fuzzy membership degree is introduced to establish adaptive control of the fusion process. The compressed sensing technique sparsely samples the image information at a certain sampling rate, and the sparse signal is recovered by solving a convex problem with a gradient descent based iterative algorithm. In the proposed fusion process, the pre-enhanced infrared image and the visible image are first decomposed into low-frequency and high-frequency subbands via the NSCT. The low-frequency coefficients are fused using the adaptive regional average energy rule; the highest-frequency coefficients are fused using the maximum absolute selection rule; the other high-frequency coefficients are sparsely sampled, fused using the adaptive-Gaussian regional standard deviation rule, and then recovered by the total variation based gradient descent recovery algorithm. Experimental results and human visual perception illustrate the effectiveness and advantages of the proposed fusion approach. Its efficiency and robustness are also analyzed and discussed using different evaluation measures, such as the standard deviation, Shannon entropy, root-mean-square error, mutual information, and the edge-based similarity index.
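Two of the stated sub-band rules are simple enough to sketch directly: the regional average energy rule for the low-frequency coefficients and the maximum-absolute rule for the finest coefficients. The adaptive weighting, the fuzzy membership control, and the CS/TV recovery of the remaining high-frequency coefficients are not reproduced; the window size is an assumption.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def regional_energy(band, size=3):
    """Local average energy of a sub-band within a (size x size) window."""
    return uniform_filter(band ** 2, size=size)

def fuse_lowpass(low_ir, low_vis, size=3):
    """Regional-average-energy rule: keep the coefficient from the source with
    larger local energy (the paper's adaptive weighting is not reproduced)."""
    return np.where(regional_energy(low_ir, size) >= regional_energy(low_vis, size),
                    low_ir, low_vis)

def fuse_highpass_max_abs(h_ir, h_vis):
    """Maximum-absolute-selection rule for the finest-scale coefficients."""
    return np.where(np.abs(h_ir) >= np.abs(h_vis), h_ir, h_vis)
```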

9.
Multi-focus image fusion combines the focused parts of several source images of the same scene into a single all-in-focus image. The key to multi-focus image fusion is accurately detecting the focused regions, especially when the source images captured by cameras exhibit anisotropic blur and misregistration. This paper proposes a new multi-focus image fusion method based on the multi-scale decomposition of complementary information. First, the method uses two structurally complementary groups of large-scale and small-scale decomposition schemes to perform a two-scale, double-layer singular value decomposition of each image, obtaining low-frequency and high-frequency components. Then, the low-frequency components are fused by a rule that integrates local image energy with edge energy. The high-frequency components are fused by the parameter-adaptive pulse-coupled neural network (PA-PCNN) model, and, according to the feature information contained in each decomposition layer of the high-frequency components, different detail features are selected as the external stimulus input of the PA-PCNN. Finally, from the structurally complementary two-scale decomposition of the source images and the fusion of the high- and low-frequency components, two initial decision maps with complementary information are obtained; refining these initial decision maps yields the final fusion decision map, which completes the image fusion. The proposed method is compared with 10 state-of-the-art approaches to verify its effectiveness. The experimental results show that it distinguishes focused from non-focused areas more accurately, whether or not the source images are pre-registered, and that its subjective and objective evaluation indicators are slightly better than those of existing methods.
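The PA-PCNN is not reproduced here, but a generic simplified PCNN shows the firing-count mechanism that typically drives the high-frequency fusion decision; the linking kernel, parameters, and iteration count below are illustrative defaults rather than the adaptively derived values of the paper.

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn_firing_map(stimulus, beta=0.2, alpha_e=0.7, v_e=20.0, iters=110):
    """Simplified pulse-coupled neural network: returns how many times each
    neuron fires when driven by `stimulus` (e.g. a detail-feature map)."""
    s = stimulus.astype(float)
    s = (s - s.min()) / (s.max() - s.min() + 1e-12)   # normalise the stimulus
    kernel = np.array([[0.5, 1.0, 0.5],
                       [1.0, 0.0, 1.0],
                       [0.5, 1.0, 0.5]])              # linking weights
    y = np.zeros_like(s)
    e = np.ones_like(s)
    fire_count = np.zeros_like(s)
    for _ in range(iters):
        l = convolve(y, kernel, mode='constant')      # linking input from neighbours
        u = s * (1.0 + beta * l)                      # internal activity
        y = (u > e).astype(float)                     # pulse output
        e = np.exp(-alpha_e) * e + v_e * y            # dynamic threshold
        fire_count += y
    return fire_count

# Fusion rule: at each position keep the coefficient whose feature map fired more often.
```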

10.
A medical image fusion method based on bi-dimensional empirical mode decomposition (BEMD) and a dual-channel PCNN is proposed in this paper. The multi-modality medical images are decomposed into intrinsic mode function (IMF) components and a residue component. The IMF components are divided into high-frequency and low-frequency components based on component energy. Fusion coefficients are obtained by the following rule: the high-frequency components and the residue component are superimposed to retain more texture, while the low-frequency components, which contain more details of the source images, are input into the dual-channel PCNN to select the fusion coefficients; the fused medical image is then obtained by the inverse BEMD transformation. BEMD is a self-adaptive tool for analyzing nonlinear and non-stationary data; it requires no predefined filters or basis functions. The dual-channel PCNN reduces the computational complexity and is well suited to selecting fusion coefficients. Combining BEMD with the dual-channel PCNN extracts detailed image information more effectively. Experimental results show that the proposed algorithm achieves better fusion results and offers more advantages than traditional fusion algorithms.

11.
A novel image fusion algorithm based on homogeneity similarity is proposed in this paper, aimed at fusing both clean and noisy multifocus images. First, an initial fused image is obtained with a multiresolution image fusion method. Pixels of the source images that are similar to the corresponding pixels of the initial fused image are considered to lie in sharply focused regions; in this way the initial focused regions are determined, and morphological opening and closing are employed as post-processing to improve the fusion performance. Second, the homogeneity similarity is introduced and used to fuse the clean and noisy multifocus images. Finally, the fused image is obtained by weighting the neighborhood pixels of the source-image points located in the focused regions. Experimental results demonstrate that, for clean multifocus images, the proposed method performs better than several popular image fusion methods in both subjective and objective quality. Furthermore, when the source multifocus images are corrupted by white Gaussian noise, it resolves the image restoration and fusion problems simultaneously and again outperforms the conventional methods.
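The focused-region detection step can be sketched as below: a pixel is assigned to the source whose neighbourhood is closer to the initial fused image, and the resulting decision map is cleaned with morphological opening and closing. The homogeneity-similarity weighting itself is not reproduced, and the window and structuring-element sizes are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter, binary_opening, binary_closing

def initial_focus_map(src_a, src_b, initial_fused, size=5):
    """True where source A's neighbourhood matches the initial fused image better
    than source B's; opening/closing remove small spurious regions."""
    err_a = uniform_filter((src_a - initial_fused) ** 2, size=size)
    err_b = uniform_filter((src_b - initial_fused) ** 2, size=size)
    mask_a = err_a < err_b
    structure = np.ones((3, 3), dtype=bool)
    return binary_closing(binary_opening(mask_a, structure), structure)
```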

12.
This paper presents a fusion method for infrared-visible and infrared-polarization images based on the multi-scale center-surround top-hat transform, which can effectively extract the feature and detail information of the source images. First, the multi-scale bright (dark) feature regions of the source images at different scale levels are extracted by the multi-scale center-surround top-hat transform. Second, the bright (dark) feature regions at the different scale levels are refined across spatial scales to eliminate redundancy. Third, the refined bright (dark) feature regions from the different scales are combined into the fused bright (dark) feature regions by addition. A base image is then calculated by performing dilation and erosion on the source images with the largest-scale outer structuring element. Finally, the fused image is obtained by importing the fused bright and dark features into the base image with a reasonable strategy. Experimental results indicate that the proposed fusion method achieves state-of-the-art performance in both objective assessment and subjective visual quality.
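A minimal sketch of the multi-scale bright/dark feature extraction is given below using SciPy's grey-scale top-hat transforms; the centre-surround structuring elements, the scale-wise refinement, and the final import strategy of the paper are simplified here to a per-pixel maximum over illustrative scales.

```python
import numpy as np
from scipy.ndimage import white_tophat, black_tophat

def multiscale_tophat_features(img, scales=(3, 7, 11)):
    """Bright and dark feature maps from top-hat transforms at several
    structuring-element sizes, combined by a per-pixel maximum over scales."""
    bright = np.max([white_tophat(img, size=s) for s in scales], axis=0)
    dark = np.max([black_tophat(img, size=s) for s in scales], axis=0)
    return bright, dark
```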

13.
Attention mechanisms can improve the performance of neural networks, but recent attention networks incur greater computational overhead while improving performance. How to maintain model performance while reducing complexity is therefore an active research topic. In this paper, a lightweight Mixture Attention (MA) module is proposed to improve network performance while reducing model complexity. First, the MA module uses a multi-branch architecture to process the input feature map and extract its multi-scale feature information. Second, to reduce the number of parameters, each branch uses group convolution independently, and the feature maps extracted by the different branches are fused along the channel dimension. Finally, the fused feature maps are processed by a channel attention module that extracts statistical information over the channels. The proposed method is efficient yet effective: for example, the network parameters and computational cost are reduced by 9.86% and 7.83%, respectively, while the Top-1 performance is improved by 1.99% compared with ResNet50. Experimental results on commonly used benchmarks, including CIFAR-10 for classification and PASCAL-VOC for object detection, demonstrate that the proposed MA module significantly outperforms current SOTA methods, achieving higher accuracy with lower model complexity.
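A rough PyTorch sketch of a mixture-attention-style block is given below: multi-branch group convolutions with different kernel sizes, channel-wise concatenation of the branch outputs, and an SE-style channel attention. The branch count, kernel sizes, group number, and reduction ratio are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class MixtureAttention(nn.Module):
    """Multi-branch group convolutions + channel concatenation + channel attention."""

    def __init__(self, channels, kernel_sizes=(3, 5, 7, 9), groups=4, reduction=16):
        super().__init__()
        assert channels % len(kernel_sizes) == 0
        branch_ch = channels // len(kernel_sizes)
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, branch_ch, k, padding=k // 2, groups=groups, bias=False)
            for k in kernel_sizes
        ])
        self.attn = nn.Sequential(                      # SE-style channel attention
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        feats = torch.cat([b(x) for b in self.branches], dim=1)  # multi-scale features
        return feats * self.attn(feats)                          # channel re-weighting

# x = torch.randn(1, 64, 32, 32); y = MixtureAttention(64)(x)  # y.shape == x.shape
```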

14.
A human-visual-system-based quality assessment method for false-color fused images   Cited by: 1 (self-citations: 1, by others: 0)
With the development of image fusion technology, fusion algorithms continue to emerge, and in many cases the final fused image is observed by the human eye, so image fusion quality assessment based on the human visual system is particularly important. To simulate the human perception of fused images and obtain an objective assessment of fused image quality, this paper proposes a quality assessment method for false-color fused images based on color-difference theory. The source and fused images are first converted to the CIE L*a*b* uniform color space and filtered in the frequency domain with a contrast sensitivity function. The color difference computed within the filtered fused image is used to judge its detail information; to some extent, the larger the color difference, the richer the information. The color difference between the fused image and the source images is used to judge their correlation; the higher the correlation, the better the fusion algorithm. Based on these two quantities (the color difference of the fused image and its correlation with the source images), the merit of a fusion algorithm is determined. Experiments show that, compared with other assessment methods, the proposed method agrees well with human visual observation.
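The color-difference computation at the core of the metric can be sketched with scikit-image; the contrast-sensitivity-function filtering applied before it is omitted here.

```python
import numpy as np
from skimage.color import rgb2lab, deltaE_cie76

def mean_color_difference(img_a, img_b):
    """Mean CIE76 color difference between two RGB images in CIE L*a*b* space."""
    return float(np.mean(deltaE_cie76(rgb2lab(img_a), rgb2lab(img_b))))

# A smaller mean difference between the fused image and a source image indicates
# a stronger correlation with that source.
```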

15.
In multi-focus image fusion, dividing the source images into fixed-size blocks leads to blocking artifacts, blurred edges, and even focus errors in the fused image. To overcome this problem, a new multi-focus image fusion method based on artificial-fish-swarm-optimized block partitioning is proposed. First, the source images are divided into non-overlapping blocks, the sharper blocks are selected according to a focus criterion, and the selected blocks are merged to reconstruct an initial fused image. Then, an improved artificial fish swarm optimization algorithm searches, according to a fitness value, for the block partition of optimal size, yielding a better fused image. The method is compared experimentally with spatial-domain, frequency-domain, and other optimization-based fusion methods; the results show that the fused images obtained by this method have better objective quality and subjective visual appearance.
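The fixed-block baseline that the fish-swarm search improves on can be sketched as below, using block variance as the clarity criterion; the artificial fish swarm algorithm would wrap such a routine and score candidate block sizes with a fitness function, which is not reproduced here.

```python
import numpy as np

def blockwise_fuse(img_a, img_b, block=16):
    """Baseline fixed-block fusion for two 2-D greyscale sources of equal shape:
    per non-overlapping block, keep the source block with higher variance."""
    h, w = img_a.shape
    fused = img_b.copy()
    for y in range(0, h, block):
        for x in range(0, w, block):
            a = img_a[y:y + block, x:x + block]
            b = img_b[y:y + block, x:x + block]
            if a.var() > b.var():
                fused[y:y + block, x:x + block] = a
    return fused
```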

17.
Image fusion using non-separable wavelet frame   Cited by: 6 (self-citations: 0, by others: 6)
In this paper, an image fusion method based on the non-separable wavelet frame (NWF) is proposed for merging a high-resolution panchromatic image and a low-resolution multispectral image. The low-frequency part of the panchromatic image is directly replaced by the multispectral image. As a result, the multispectral information of the multispectral image is preserved effectively in the fused image. Because a multiscale method is used to enhance the high-frequency parts of the panchromatic image, the spatial information of the fused image is improved. Experimental results indicate that the proposed method outperforms the intensity-hue-saturation (IHS) transform, the discrete wavelet transform, and the separable wavelet frame in preserving spectral and spatial information.

18.
Zernike polynomials are used to fit the discrete refractive-index data of a material measured with a Zygo interferometer, and ray tracing is then used to evaluate the imaging quality of the optical system. Because the refractive-index distribution of the material is irregular, the simulation and optimization of a real optical system containing such an inhomogeneous medium must account for the fact that lenses made from different parts of the material affect the imaging quality differently, and that rotating a finished lens about the optical axis to different angles during assembly likewise affects the imaging quality. By selecting the best part of the material in advance and finding the best assembly orientation through computer simulation, the performance of the optical system is improved.
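A minimal sketch of the Zernike fitting step: the first few (unnormalised) Zernike polynomials are evaluated on unit-disk coordinates and fitted to the measured index samples by least squares. The number of terms and the normalisation are assumptions; the ray-tracing evaluation is not reproduced.

```python
import numpy as np

def zernike_basis(x, y):
    """First few Zernike polynomials on the unit disk in Cartesian form:
    piston, tilt x, tilt y, defocus, astigmatism (0 and 45 degrees)."""
    r2 = x ** 2 + y ** 2
    return np.column_stack([
        np.ones_like(x),      # Z0: piston
        x,                    # Z1: tilt x
        y,                    # Z2: tilt y
        2 * r2 - 1,           # Z3: defocus
        x ** 2 - y ** 2,      # Z4: astigmatism 0 deg
        2 * x * y,            # Z5: astigmatism 45 deg
    ])

def fit_index_map(x, y, n_measured):
    """Least-squares Zernike coefficients for scattered refractive-index samples
    (x, y normalised to the unit disk)."""
    coeffs, *_ = np.linalg.lstsq(zernike_basis(x, y), n_measured, rcond=None)
    return coeffs
```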

19.
Multi-scale transform (MST) is an efficient tool for image fusion. Recently, many fusion methods have been developed based on different MSTs, and they have shown potential in many application fields. In this paper, we propose an effective infrared and visible image fusion scheme in the nonsubsampled contourlet transform (NSCT) domain, in which the NSCT is first employed to decompose each source image into a series of high-frequency subbands and one low-frequency subband. To improve the fusion performance, two new activity measures are designed for fusing the lowpass and highpass subbands. These measures build on the fact that the human visual system (HVS) perceives image quality mainly according to certain low-level features. Selection principles for the different subbands are then presented based on the corresponding activity measures. Finally, the merged subbands are constructed according to these selection principles, and the final fused image is produced by applying the inverse NSCT to the merged subbands. Experimental results demonstrate the effectiveness and superiority of the proposed method over state-of-the-art fusion methods in terms of both visual effect and objective evaluation.

20.
The unavoidable noise often present in synthetic aperture radar (SAR) images, such as speckle noise, negatively impacts their subsequent processing. Moreover, given that the human visual system is sensitive to color and SAR images are gray, it is not easy to find appropriate applications for SAR images. A noisy SAR image fusion method based on nonlocal matching and generative adversarial networks is therefore presented in this paper. In the pre-processing step, a nonlocal matching method groups the source images into sets of similar blocks. Adversarial networks are then employed to generate the final noise-free fused SAR image blocks: the generator aims to produce a noise-free SAR image block with color information, while the discriminator drives up the spatial resolution of the generated block, ensuring that the fused block contains both high resolution and color information. Finally, the fused image is obtained by aggregating all the image blocks. Extensive comparative experiments on the SEN1–2 dataset and the source images show that the proposed method not only produces better fusion results but is also robust to image noise, indicating its superiority over state-of-the-art methods for noisy SAR image fusion.
