1.
When fusing infrared and visible images, traditional fusion methods often fail to produce good image quality. A fusion algorithm based on neighborhood characteristics and region processing in the NSCT (Nonsubsampled Contourlet Transform) domain is therefore proposed. First, NSCT is used to decompose the infrared and visible images at different scales and directions into low- and high-frequency coefficients; the low-frequency coefficients are fused with an improved regional weighted fusion rule based on neighborhood energy, and the high-frequency coefficients are fused with a multi-judgment rule based on regional processing of neighborhood characteristics. Finally, the coefficients are reconstructed to obtain the fused image. The experimental results show that, compared with three related methods, the proposed method obtains the largest values of IE (information entropy), MI(VI,F) (mutual information from the visible image), MI(IR,F) (mutual information from the infrared image), MI (total mutual information), and QAB/F (edge retention). The proposed method preserves sufficient information and detail from the original images, and the fused images have better visual effects.
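A minimal sketch of the neighborhood-energy weighting idea for the low-frequency coefficients described above; the NSCT decomposition itself and the paper's multi-judgment high-frequency rule are not reproduced, and the window size and normalized-weight rule are illustrative assumptions:

```python
# Illustrative sketch (not the paper's exact method): neighborhood-energy
# weighted fusion of two low-frequency coefficient planes. The NSCT
# decomposition is assumed to have been performed elsewhere.
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_lowfreq_neighborhood_energy(low_ir, low_vis, win=3):
    """Weight each pixel by the local (neighborhood) energy of the two bands."""
    e_ir = uniform_filter(low_ir ** 2, size=win)    # local energy of the IR band
    e_vis = uniform_filter(low_vis ** 2, size=win)  # local energy of the visible band
    w_ir = e_ir / (e_ir + e_vis + 1e-12)            # normalized per-pixel weights
    return w_ir * low_ir + (1.0 - w_ir) * low_vis

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a, b = rng.random((64, 64)), rng.random((64, 64))
    print(fuse_lowfreq_neighborhood_energy(a, b).shape)
```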
2.
An infrared and visible image fusion algorithm based on target separation and sparse representation
Although a fused infrared-visible image exploits the complementarity of the two modalities, artifacts around infrared targets and blurred edges seriously degrade the fusion result. To solve these problems, a fusion method based on infrared target extraction and sparse representation is proposed. First, the infrared target is detected and separated from the background by relying on regional statistical properties. Second, DENCLUE (a kernel density estimation clustering method) is used to classify the source images into a target region and a background region, and the infrared target region is accurately located in the infrared image. The background regions of the source images are then trained with a Kernel Singular Value Decomposition (KSVD) dictionary to obtain their sparse representation, so that detail information is retained and background noise is suppressed. Finally, fusion rules are designed to select the fusion coefficients of the two regions, and the coefficients are reconstructed to obtain the fused image. The fused image produced by the proposed method not only contains a clear outline of the infrared target but also has rich detail information.
3.
In this paper, an improved fusion algorithm for infrared and visible images based on a multi-scale transform is proposed. First, a morphological top-hat transform is applied to the infrared image and the visible image separately. The two images are then decomposed into high-frequency and low-frequency components by the contourlet transform (CT). The fusion strategy for the high-frequency components is based on the mean gradient, and the fusion strategy for the low-frequency components is based on Principal Component Analysis (PCA). Finally, the fused image is obtained by the inverse contourlet transform (ICT). The experiments and results demonstrate that the proposed method significantly improves fusion performance, highlighting target information and achieving high contrast while preserving rich detail information.
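A PCA-based low-frequency rule of the kind mentioned above is commonly implemented by weighting the two bands with the leading eigenvector of their covariance matrix; a small sketch under that assumption (not the paper's full pipeline):

```python
# Illustrative sketch of PCA-based weighting for two low-frequency images,
# as commonly used in PCA fusion rules; not the paper's exact method.
import numpy as np

def pca_fusion_weights(img1, img2):
    """Return (w1, w2) from the leading eigenvector of the 2x2 covariance matrix."""
    data = np.stack([img1.ravel(), img2.ravel()])   # 2 x N data matrix
    cov = np.cov(data)                              # 2 x 2 covariance
    vals, vecs = np.linalg.eigh(cov)                # eigenvalues in ascending order
    v = np.abs(vecs[:, np.argmax(vals)])            # leading eigenvector
    return v[0] / v.sum(), v[1] / v.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    a, b = rng.random((32, 32)), rng.random((32, 32))
    w1, w2 = pca_fusion_weights(a, b)
    fused = w1 * a + w2 * b                         # weights sum to 1
    print(round(w1 + w2, 6))
```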
4.
In this paper, a fusion method named NNSP is developed for infrared and visible image fusion, in which non-negative sparse representation is used to extract the features of the source images. The non-negative sparse representation coefficients are characterized by their activity levels and sparseness levels. Several detectors are developed to extract the salient features of the source images, including the target and contour features in the infrared images and the texture features in the visible images. A regional consistency rule is proposed to obtain the fusion guide vector that determines the fused image automatically, so that the features of the source images are seamlessly integrated into the fused image. Compared with classical and state-of-the-art methods, our experimental results indicate that the NNSP method achieves better fusion performance in both noiseless and noisy situations.
5.
A luminance-contrast transfer fusion algorithm for infrared and color visible images
Taking infrared and color visible images as the research objects, a color image fusion algorithm based on luminance-contrast transfer (LCT) is proposed. First, a grayscale fusion method is used to fuse the infrared image with the luminance component of the color visible image; then the LCT technique improves the brightness and contrast of the grayscale fusion result; finally, a fast YCbCr transform fusion strategy generates the color fused image directly in RGB space. Pixel-averaging fusion and multi-resolution fusion are adopted as alternative grayscale fusion schemes to meet the requirements of high real-time performance and high fusion quality, respectively. Experimental results show that the fusion results of the proposed algorithm not only exhibit natural colors close to those of the input color visible image but also have satisfactory brightness and contrast; even when the computationally simple pixel-averaging method is used for grayscale fusion, good fusion results can still be obtained.
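A rough sketch of one plausible reading of the luminance-contrast transfer step: the grayscale fusion result is rescaled to the mean and standard deviation of the visible luminance and recombined with the visible chrominance. OpenCV's YCrCb conversion stands in for the paper's fast YCbCr transform, and the function name is illustrative:

```python
# Illustrative sketch (assumed interpretation) of luminance-contrast transfer:
# match the grey fused image's mean/std to the visible luminance, then reuse
# the visible image's chrominance channels.
import cv2
import numpy as np

def lct_color_fusion(gray_fused, visible_bgr):
    ycrcb = cv2.cvtColor(visible_bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    y_vis = ycrcb[:, :, 0]
    f = gray_fused.astype(np.float32)
    # Luminance-contrast transfer: rescale fused luminance to visible statistics.
    f = (f - f.mean()) * (y_vis.std() / (f.std() + 1e-6)) + y_vis.mean()
    ycrcb[:, :, 0] = np.clip(f, 0, 255)
    return cv2.cvtColor(ycrcb.astype(np.uint8), cv2.COLOR_YCrCb2BGR)
```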
6.
Infrared and visible image fusion has been an important and popular topic in imaging science. Dual-band image fusion aims to carry both the target regions of the infrared image and the abundant detail information of the visible image into the fused result, preserving or even enhancing the information inherited from the source images. In this study, we propose an optimization-based fusion method that combines global entropy with gradient-constrained regularization. We design a cost function that takes global maximum entropy as the data term and a gradient constraint as the regularization term. In this cost function, global maximum entropy makes the fused result inherit as much information as possible from the sources, while the gradient constraint gives the fused result clear details and edges with noise suppression. Fusion is achieved by minimizing the cost function with a weight matrix. Experimental results indicate that the proposed method performs well and has clear advantages over other typical algorithms in both subjective visual performance and objective criteria.
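One plausible form of such a cost function, written out only as an illustration (the weight matrix W, the λ coefficients, and the exact gradient term are assumptions, not the authors' formulation):

```latex
% Maximize the entropy H(F) of the fused image F while keeping its gradients
% close to a weighted combination of the source-image gradients.
\begin{equation}
  E(F) \;=\; -\,\lambda_{1}\, H(F)
  \;+\; \lambda_{2} \sum_{p} \bigl\| \nabla F(p)
  \;-\; W(p)\,\nabla I_{\mathrm{ir}}(p)
  \;-\; \bigl(1 - W(p)\bigr)\,\nabla I_{\mathrm{vis}}(p) \bigr\|_{2}^{2}
\end{equation}
```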
7.
This paper proposes a novel image fusion scheme based on the contrast pyramid (CP) with teaching-learning-based optimization (TLBO) for visible and infrared images of complicated scenes captured in different spectral bands. First, CP decomposition is applied to every level of each original image. Then, TLBO is introduced to optimize the fusion coefficients, which are updated in the teacher phase and the learner phase of TLBO, so that the weighted coefficients are automatically adjusted according to a fitness function, namely the image quality evaluation criteria. Finally, the fusion result is obtained by the inverse CP transform. Compared with existing methods, experimental results show that our method is effective and the fused images are more suitable for further human visual or machine perception.
8.
Weiwei Kong, Optik, 2014
A novel image fusion technique based on improved non-negative matrix factorization (INMF) in the non-subsampled shearlet transform (NSST) domain is proposed. First, NSST, which has much lower computational complexity than other conventional multi-resolution tools, is adopted to perform the multi-scale, multi-directional decomposition of the source images. Second, the traditional NMF model is updated into an improved NMF (INMF), which is used to capture the salient characteristics of the sub-band components from a purely mathematical point of view without destroying the two-dimensional structural information of the image. Third, the fused sub-images are obtained with INMF and a local directional contrast (LDC) model. Finally, the fused image is obtained by the inverse NSST. Experimental results demonstrate that the presented technique outperforms other typical NMF-based methods in both visual effect and objective evaluation criteria.
9.
Infrared and visible image fusion is a key problem in the field of multi-sensor image fusion. To better preserve the significant information of the infrared and visible images in the final fused image, saliency maps of the source images are introduced into the fusion procedure. First, within the framework of the joint sparse representation (JSR) model, global and local saliency maps of the source images are obtained from the sparse coefficients. Then, a saliency detection model is proposed that combines the global and local saliency maps into an integrated saliency map. Finally, a weighted fusion algorithm based on the integrated saliency map is developed to perform the fusion. The experimental results show that our method is superior to the state-of-the-art methods in terms of several universal quality evaluation indexes as well as visual quality.
10.
A novel nonsubsampled contourlet transform (NSCT) based image fusion approach, employing an adaptive-Gaussian (AG) fuzzy membership method, the compressed sensing (CS) technique, and a total variation (TV) based gradient descent reconstruction algorithm, is proposed for fusing infrared and visible images. Compared with the wavelet, contourlet, or any other multi-resolution analysis method, NSCT has evident advantages such as multi-scale and multi-directional analysis and translation invariance. A fuzzy set is characterized by its membership function (MF), and the commonly used Gaussian fuzzy membership degree can be introduced to establish adaptive control of the fusion processing. The compressed sensing technique can sparsely sample the image information at a certain sampling rate, and the sparse signal can be recovered by solving a convex problem with gradient descent based iterative algorithms. In the proposed fusion process, the pre-enhanced infrared image and the visible image are first decomposed into low-frequency and high-frequency subbands via NSCT. The low-frequency coefficients are fused using an adaptive regional average energy rule; the highest-frequency coefficients are fused using the maximum absolute selection rule; the remaining high-frequency coefficients are sparsely sampled, fused using an adaptive-Gaussian regional standard deviation rule, and then recovered by the total variation based gradient descent recovery algorithm. Experimental results and human visual perception illustrate the effectiveness and advantages of the proposed fusion approach. The efficiency and robustness are also analyzed and discussed through different evaluation measures, such as standard deviation, Shannon entropy, root-mean-square error, mutual information, and the edge-based similarity index.
11.
Fusion of visible and infrared images aims to combine source images of the same scene into a single image with more feature information and better visual quality. In this paper, the authors propose a fusion method for visible and infrared images based on multi-window visual saliency extraction. To extract feature information from the infrared and visible images, we design a local-window-based frequency-tuned method. With this idea, visual saliency maps are calculated for the feature information under different local windows. These maps indicate how much visual attention each pixel and region attracts. Fusion is then performed through a simple weighted combination. Compared with classical and state-of-the-art approaches, the experimental results demonstrate that the proposed approach runs efficiently and performs better than other methods, especially in visual quality and detail enhancement.
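A minimal sketch of a local-window frequency-tuned saliency map and the weighted combination it drives; the window size, the small Gaussian blur, and the normalization are illustrative assumptions rather than the paper's exact multi-window scheme:

```python
# Illustrative sketch: frequency-tuned saliency computed inside sliding local
# windows, then used as per-pixel weights for fusion.
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def local_frequency_tuned_saliency(gray, win=31, sigma=1.0):
    g = gray.astype(np.float64)
    local_mean = uniform_filter(g, size=win)   # mean inside each local window
    blurred = gaussian_filter(g, sigma=sigma)  # small-scale smoothing
    return np.abs(local_mean - blurred)        # per-pixel saliency

def saliency_weighted_fusion(ir, vis, win=31):
    s_ir = local_frequency_tuned_saliency(ir, win)
    s_vis = local_frequency_tuned_saliency(vis, win)
    w = s_ir / (s_ir + s_vis + 1e-12)          # normalized weight map
    return w * ir + (1.0 - w) * vis
```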
12.
Military, navigation, and concealed weapon detection applications need different imaging modalities, such as visible and infrared, to monitor a targeted scene. These modalities provide complementary information. For better situational awareness, the complementary information of these images has to be integrated into a single image. Image fusion is the process of integrating complementary source information into a composite image. In this paper, we propose a new image fusion method based on saliency detection and two-scale image decomposition. The method is beneficial because the visual saliency extraction process introduced in this paper highlights the salient information of the source images very well. A new weight map construction process based on visual saliency is proposed, which integrates the visually significant information of the source images into the fused image. In contrast to most multi-scale image fusion techniques, the proposed technique uses only a two-scale image decomposition, so it is fast and efficient. Our method is tested on several image pairs and is evaluated qualitatively by visual inspection and quantitatively using objective fusion metrics. The outcomes of the proposed method are compared with state-of-the-art multi-scale fusion techniques. Results reveal that the proposed method's performance is comparable or superior to the existing methods.
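A compact sketch of the generic two-scale pipeline the abstract describes: an averaging filter gives the base layer, the residual gives the detail layer, and a simple saliency-driven weight map merges them. The saliency measure and weight construction here are assumptions, not the paper's actual procedure:

```python
# Illustrative two-scale fusion sketch (assumed reading of the abstract).
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def two_scale_fuse(ir, vis, base_size=31):
    ir, vis = ir.astype(np.float64), vis.astype(np.float64)
    base_ir, base_vis = uniform_filter(ir, base_size), uniform_filter(vis, base_size)
    det_ir, det_vis = ir - base_ir, vis - base_vis        # detail layers
    # Simple saliency: smoothed magnitude of the detail layers.
    s_ir = gaussian_filter(np.abs(det_ir), 5)
    s_vis = gaussian_filter(np.abs(det_vis), 5)
    w = (s_ir >= s_vis).astype(np.float64)                # hard weight map
    fused_base = 0.5 * (base_ir + base_vis)               # average the base layers
    fused_detail = w * det_ir + (1.0 - w) * det_vis       # keep the more salient details
    return fused_base + fused_detail
```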
13.
Recovering scene depth from images is a key problem in computer vision. Considering that depth estimation from a single image type is limited by varying scene illumination, a scene depth estimation method based on progressive adaptive fusion of infrared and visible images (PF-CNN) is proposed. The method comprises a two-stream network with partially coupled filters, an adaptive multimodal feature fusion network, and an adaptive progressive feature fusion network. In the two-stream convolution, partially coupling the filters of the infrared and visible streams enhances the features of both; the adaptive multimodal feature fusion network learns the residual features of the infrared and visible images and fuses them with adaptive weights, making full use of their complementary information; the progressive feature fusion network learns to combine fused features across multiple layers, exploiting the different features of different convolutional layers. Experimental results show that PF-CNN achieves good performance on the test set, improving the threshold metric by 5% and clearly outperforming other methods.
15.
With the nonsubsampled contourlet transform (NSCT), a novel region-segmentation-based fusion algorithm for infrared (IR) and visible images is presented. The IR image is segmented according to the physical features of the target. The source images are decomposed by the NSCT, and then different fusion rules for the target regions and the background regions are employed to merge the NSCT coefficients respectively. Finally, the fused image is obtained by applying the inverse NSCT. Experimental results show that the proposed algorithm outperforms pixel-based methods, including the traditional wavelet-based method and the NSCT-based method.
16.
In this paper, a new method based on the nonsubsampled contourlet transform (NSCT) is proposed to fuse an infrared image and a visible-light image, producing a fused image in which the target can be identified more easily. First, the two original images are decomposed into low-frequency subband coefficients and bandpass directional subband coefficients using NSCT. Second, the selection of the low-frequency subband coefficients and the bandpass directional subband coefficients is discussed in detail. The low-frequency subband coefficients are selected based on regional visual characteristics. For the selection of bandpass directional subband coefficients, this paper proposes a minimum regional cross-gradient method, in which the cross-gradient is obtained by calculating the gradient between a pixel of the bandpass subbands and the adjacent pixels in the fused image of the low-frequency components. Comparison experiments have been performed on different image sets, and the experimental results demonstrate that the proposed method performs better in both subjective and objective quality.
17.
To address shortcomings of current image fusion procedures, and exploiting the high directional sensitivity and parabolic scaling of the finite discrete shearlet, an image fusion algorithm based on the finite discrete shearlet transform is proposed. First, the strictly registered multi-sensor images are transformed with the finite discrete shearlet transform to obtain low-frequency subband coefficients and high-frequency subband coefficients at different scales and directions. Then, the low-frequency subband coefficients are fused with a rule that combines the difference between global feature values and individual pixels with the regional spatial-frequency matching degree, while the high-frequency directional subband coefficients are fused with a scheme that combines directionally weighted contrast with the relative regional average gradient and the relative regional variance. Finally, the fused image is obtained by the inverse finite discrete shearlet transform. Experimental results show that, compared with other fusion algorithms, the proposed algorithm not only produces good subjective visual quality, but also improves the objective evaluation metrics of the three test image pairs by average margins of 0.9%, 3.8%, 3.1%; 2.6%, 3.8%, 2.9%; and 1.5%, 125%, 59%, respectively, fully demonstrating its superiority.
18.
Integration of infrared and visible images is an active and important topic in image understanding and interpretation. In this paper, a new fusion method is proposed based on an improved multi-scale center-surround top-hat transform, which can effectively extract the feature information and detail information of the source images. First, the multi-scale bright (dark) feature regions of the infrared and visible images are extracted at different scale levels by the improved multi-scale center-surround top-hat transform. Second, the feature regions at the same scale in both images are combined by a multi-judgment contrast fusion rule, and the final feature images are obtained by simply adding the feature images of all scales together. Then, a base image is calculated by applying a Gaussian fuzzy logic combination rule to the two smoothed source images. Finally, the fused image is obtained by importing the extracted bright and dark feature images into the base image with a suitable strategy. Both objective assessment and subjective visual inspection of the experimental results indicate that the proposed method is superior to current popular MST-based and morphology-based methods for infrared-visible image fusion.
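The bright/dark feature extraction can be illustrated with standard white and black top-hat transforms applied at several structuring-element scales; this sketch uses plain OpenCV morphology and does not reproduce the paper's improved center-surround operator or its fusion rules:

```python
# Illustrative sketch: multi-scale bright/dark feature extraction with
# white (MORPH_TOPHAT) and black (MORPH_BLACKHAT) top-hat transforms.
import cv2
import numpy as np

def multiscale_tophat_features(gray, scales=(3, 7, 11)):
    bright = np.zeros_like(gray, dtype=np.float64)
    dark = np.zeros_like(gray, dtype=np.float64)
    for s in scales:
        se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (s, s))
        bright += cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, se).astype(np.float64)
        dark += cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, se).astype(np.float64)
    return bright, dark  # summed bright/dark feature images over all scales
```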
19.
A multi-focus image fusion algorithm based on an improved dual-channel PCNN in the NSCT domain
This paper presents a multi-focus image fusion algorithm based on a dual-channel PCNN in the NSCT domain. Fusion algorithms based on multi-scale transforms are prone to pseudo-Gibbs effects and are not effective for fusing dim or partially bright images. To solve these problems, the algorithm decomposes the two images into sub-images of different frequencies using the NSCT transform; the selection principles for the different subband coefficients obtained by the NSCT decomposition are discussed in detail, and the bandpass subband coefficients are determined by fusing the images with the improved dual-channel PCNN; finally, the fused image is obtained by the inverse NSCT transform. The fusion rules based on the dual-channel PCNN reduce the complexity of the PCNN parameter settings and the long computing time. The experimental results show that the algorithm overcomes the defects of traditional multi-focus image fusion algorithms and improves the fusion effect.
20.
An infrared-visible color image fusion algorithm in the YCbCr domain based on wavelet transform and region segmentation is proposed. Taking wavelet-transform fusion as the basis, the fusion result is used as a YCbCr-domain component, and color transfer from a reference image is performed on the basis of region segmentation. Experimental results show that the proposed method achieves better color image fusion than the traditional linear color-transfer method, adapts well to different reference images, and is suitable for fusing both single images and video.