Similar literature
19 similar documents found.
1.
To address the loss of target information and the low contrast seen in current fusion results of synthetic aperture radar (SAR) and visible images, an enhancement-based fusion algorithm using texture segmentation and the top-hat transform is proposed. The entropy texture feature map computed from the gray-level co-occurrence matrix (GLCM) of the SAR image is thresholded to extract the region of interest (ROI) of the SAR image. The SAR and visible images are decomposed by the non-subsampled contourlet transform (NSCT). The low-frequency coefficients are fused with a region-based rule: within the ROI, the SAR low-frequency coefficients are selected; a top-hat transform applied to the low-frequency coefficients yields salient bright and dark detail features, which are added to the low-frequency coefficients to form the composite low-frequency coefficients. The high-frequency sub-band coefficients are fused by selecting the coefficient with the larger local directional information-entropy saliency factor. The fused coefficients are transformed back by the inverse NSCT to obtain the final fused image. Experiments demonstrate the effectiveness of the algorithm.
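A minimal Python sketch of the ROI-extraction and detail-salience steps described above, assuming standard tools: a local-entropy map stands in for the GLCM entropy texture feature, Otsu thresholding gives the ROI, and OpenCV's top-hat/black-hat operators supply the bright/dark details added to a stand-in low-frequency band (the NSCT itself is not implemented here). File names, window size, and structuring-element size are illustrative assumptions.

```python
import numpy as np
import cv2
from scipy.ndimage import uniform_filter

def local_entropy(img, win=15, levels=16):
    """Per-pixel entropy of a quantized neighborhood (texture feature map)."""
    q = (img.astype(np.float32) / 256.0 * levels).astype(np.int32)
    ent = np.zeros(img.shape, np.float32)
    for k in range(levels):
        p = uniform_filter((q == k).astype(np.float32), size=win)  # local probability of level k
        p = np.clip(p, 1e-12, 1.0)
        ent -= p * np.log2(p)
    return ent

sar = cv2.imread("sar.png", cv2.IMREAD_GRAYSCALE)          # hypothetical input file
ent = local_entropy(sar)
ent8 = cv2.normalize(ent, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
_, roi = cv2.threshold(ent8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # ROI mask

# Bright/dark details via top-hat / black-hat, added to a (stand-in) low-frequency band.
se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
low = cv2.GaussianBlur(sar.astype(np.float32), (0, 0), 4)   # stand-in for the NSCT low band
bright = cv2.morphologyEx(low, cv2.MORPH_TOPHAT, se)
dark = cv2.morphologyEx(low, cv2.MORPH_BLACKHAT, se)
low_enhanced = low + bright - dark
```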

2.
Because visible images have poor visibility in low-light environments, a fusion algorithm based on contrast enhancement and a Cauchy fuzzy function is proposed to improve the fusion of infrared and low-light visible images. First, an improved guided-filter-based adaptive enhancement raises the visibility of dark regions in the low-light visible image. Second, the infrared image and the enhanced low-light visible image are decomposed by the non-subsampled shearlet transform into the corresponding low- and high-frequency sub-bands. The low-frequency sub-bands are then fused with a Cauchy membership function constructed from intuitionistic fuzzy sets, and the high-frequency sub-bands with an adaptive dual-channel spiking cortical model. Finally, the fused low- and high-frequency sub-bands are reconstructed by the inverse non-subsampled shearlet transform to obtain the fused image. Experimental results show that, compared with other fusion algorithms, this algorithm effectively enhances the dark regions of the low-light visible image and preserves more background information, thereby improving the contrast and clarity of the fused image.
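A small sketch, assuming numpy only, of a Cauchy-membership weighting for the low-frequency sub-bands; the center/scale parameters and the weighting scheme are illustrative assumptions rather than the paper's exact intuitionistic-fuzzy construction.

```python
import numpy as np

def cauchy_membership(x, center, scale):
    """Cauchy-type membership: close to 1 near the center, decaying with distance."""
    return 1.0 / (1.0 + ((x - center) / scale) ** 2)

def fuse_low(low_ir, low_vis):
    m_ir = cauchy_membership(low_ir, center=low_ir.mean(), scale=low_ir.std() + 1e-6)
    m_vis = cauchy_membership(low_vis, center=low_vis.mean(), scale=low_vis.std() + 1e-6)
    w = m_ir / (m_ir + m_vis + 1e-12)          # per-pixel weight from relative membership
    return w * low_ir + (1.0 - w) * low_vis
```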

3.
熊芳芳  肖宁 《光学技术》2019,45(3):355-363
To address the insufficient detail preservation and low target registration accuracy of current infrared (IR) and visible (VI) image fusion, a fusion algorithm coupling multi-scale 2D empirical mode decomposition (2D-EMD) with non-subsampled directional filter banks (NSDFB) is designed. The entropies of the infrared and visible images are computed and compared, and the residual of the image with the larger entropy is calculated. A multi-scale directional decomposition model built from 2D-EMD and the NSDFB transforms the residual of the higher-entropy image and the lower-entropy image into high-frequency directional coefficients and low-frequency coefficients, capturing the detail and feature information of the source images. For the low-frequency coefficients, weighted averaging is introduced as the fusion rule; for the high-frequency coefficients, the fusion rule is defined from regional energy contrast and sharpness. The inverse multi-scale 2D-EMD then generates the new image from the fused low- and high-frequency coefficients. Experiments show that, compared with commonly used infrared and visible fusion methods, the proposed algorithm achieves higher fusion quality, and its output images have better contrast and richer detail.
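A minimal sketch of the entropy comparison used to decide which source image is decomposed further; inputs are assumed to be 8-bit grayscale arrays.

```python
import numpy as np

def image_entropy(img):
    """Shannon entropy of an 8-bit grayscale image."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# decompose_further = ir if image_entropy(ir) > image_entropy(vis) else vis
```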

4.
Nonlinear enhancement algorithm for infrared images based on the contourlet transform
To address the low contrast and heavy noise of infrared images, a nonlinear infrared image enhancement algorithm based on the contourlet transform is proposed. The contourlet transform is an effective directional multi-scale analysis method that can decompose an image into arbitrary directions at any scale. The image is first decomposed by the contourlet transform into low-frequency sub-band coefficients and band-pass directional sub-band coefficients at multiple scales and directions. An incomplete Beta function is introduced to process the low-frequency sub-band coefficients and raise the overall contrast of the image, while a nonlinear gain function processes the band-pass directional sub-band coefficients: a threshold is set from the estimated noise level, coefficients whose absolute values fall below the threshold are suppressed, and coefficients above it are amplified. The enhanced image is obtained by the inverse contourlet transform. Experiments on real images show that the method effectively enhances low-contrast infrared images, is clearly superior to histogram equalization and wavelet-based enhancement in both visual quality and quantitative contrast measures, preserves more contour features, and overcomes the over-amplification of noise and insufficient detail enhancement of those methods.
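A hedged sketch of the two nonlinear mappings described above: scipy's regularized incomplete Beta function stretches the normalized low-frequency band, and a simple noise-derived threshold with piecewise gain stands in for the paper's nonlinear gain function. The parameters a, b, gain, and k are illustrative assumptions.

```python
import numpy as np
from scipy.special import betainc

def beta_stretch(low, a=2.0, b=2.0):
    """Map normalized low-frequency coefficients through the incomplete Beta function."""
    lo, hi = low.min(), low.max()
    x = (low - lo) / (hi - lo + 1e-12)
    return betainc(a, b, x) * (hi - lo) + lo

def nonlinear_gain(band, gain=1.5, k=3.0):
    """Suppress coefficients below a noise-derived threshold, amplify those above it."""
    sigma = np.median(np.abs(band)) / 0.6745        # robust noise estimate
    t = k * sigma
    return np.where(np.abs(band) < t, 0.3 * band, gain * band)
```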

5.
Fusing mid-wave infrared images (3.7–4.8 μm) with long-wave infrared images (8–14 μm) suffers from low scene contrast, insufficiently prominent salient targets, and severe artifacts. Fast and adaptive bidimensional empirical mode decomposition (FABEMD) is therefore used to decompose the mid-wave and long-wave images into bidimensional intrinsic mode functions (BIMFs) and a residual component. Each BIMF layer is fused with an improved local-energy-window rule: weighting operators are first configured to increase the energy share of the central pixel of the regional window, and experiments verify that suitable weighting operators effectively highlight the energy features of the mid-wave and long-wave images. The phase information of the BIMFs is then exploited: when the phases are opposite, energy-weighted averaging is used to resolve the ambiguity in the sign of the fused coefficients; when the phases agree, the energy gap between the two is assessed and a fusion rule, defined from the gray-level difference characteristics of the mid-wave and long-wave images, is selected according to its magnitude. For the residual component, the mid-wave infrared image and a maximum-symmetric-surround saliency weight map built on an improved regional energy window guide the fusion of the base-layer coefficients; the adaptive local surround window makes full use of low-frequency saliency information, suppresses useless background very effectively, and highlights salient objects in images with complex backgrounds, yielding a guidance image with rich detail and clear contrast. The fused image is finally reconstructed by the inverse FABEMD process. Four groups of mid- and long-wave infrared images with different backgrounds and sizes, all acquired by a multi-band infrared acquisition system and strictly registered, were evaluated subjectively and objectively against seven related algorithms. Subjectively, salient objects stand out and sharpness is high; objectively, the method performs best on the average gradient and spatial frequency metrics, verifying its effectiveness.
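A sketch of the center-weighted local-energy rule for one BIMF layer, assuming a 3x3 weighting operator and using sign agreement as the phase test; the exact operators and thresholds in the paper may differ.

```python
import numpy as np
from scipy.ndimage import convolve

W = np.array([[1, 2, 1],
              [2, 4, 2],
              [1, 2, 1]], dtype=np.float32)
W /= W.sum()  # the center pixel carries the largest share of the window energy

def local_energy(bimf):
    return convolve(bimf.astype(np.float32) ** 2, W, mode="nearest")

def fuse_bimf(b1, b2):
    e1, e2 = local_energy(b1), local_energy(b2)
    same_phase = (b1 * b2) >= 0
    pick = np.where(e1 >= e2, b1, b2)                  # phases agree: keep higher-energy coefficient
    avg = (e1 * b1 + e2 * b2) / (e1 + e2 + 1e-12)      # phases oppose: energy-weighted average
    return np.where(same_phase, pick, avg)
```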

6.
王慧斌  廖艳  沈洁  王鑫 《光子学报》2014,43(5):510004
A hierarchical multi-scale fusion method for underwater polarization images is proposed. First, non-negative matrix factorization is used to fuse and enhance the polarization-parameter images, yielding a fused polarization-parameter image that retains complete local feature information with low redundancy. On this basis, the fused polarization-parameter image and the polarization intensity image are each decomposed by bidimensional empirical mode decomposition, and the resulting high- and low-frequency sub-images are fused by weighted averaging, with the weights obtained by exhaustive search. Finally, the fused high- and low-frequency results are inverse-transformed to obtain the final fused image. Simulation results show that the method significantly enhances image detail and improves the contrast of underwater polarization images.
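A sketch of the NMF fusion step using scikit-learn's NMF as a stand-in implementation: the polarization-parameter images are vectorized into a non-negative matrix and the single basis vector serves as the fused image. The component count and the normalization are assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF

def nmf_fuse(images):
    """images: list of same-sized 2-D non-negative arrays (polarization-parameter images)."""
    h, w = images[0].shape
    V = np.stack([im.reshape(-1) for im in images], axis=1)   # pixels x images
    model = NMF(n_components=1, init="random", random_state=0, max_iter=500)
    W = model.fit_transform(V)                                # pixels x 1 basis vector
    fused = W[:, 0].reshape(h, w)
    return (fused - fused.min()) / (fused.max() - fused.min() + 1e-12)
```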

7.
To effectively suppress the random clutter in single-band forward-looking infrared (FLIR) imagery, such as point clutter, wave stripes, and locally bright regions, a sea-surface clutter smoothing method based on multi-band FLIR image fusion is studied. The complementarity and differences among multi-band FLIR images are exploited and their information fused, with the aim of smoothing the cluttered sea background while preserving ship-target features, so as to provide a high-quality image for ship detection. The multi-band source images are first decomposed by the discrete wavelet transform into low- and high-frequency sub-bands; the high-frequency sub-bands mainly carry the detail of the background and the ship targets, while the low-frequency sub-bands mainly carry brightness and contrast. For the high-frequency sub-bands, after an absolute-maximum rule produces the high-frequency fused image, the regional energy at each pixel is computed to modulate that image, suppressing background detail while retaining ship-target detail. For the low-frequency sub-bands, averaging is used for fusion and the low-frequency fused image is smoothed with a guided filter. The reconstruction obtained by the inverse wavelet transform of the fused high- and low-frequency images is the final fused image. Experiments on actually acquired multi-band FLIR images compare the method with six smoothing filters: bilateral filtering, guided filtering, gradient minimization, relative total variation, bilateral texture filtering, and rolling filtering. The results show that by fusing multi-band information and moving the smoothing from the spatial domain into the frequency domain, the method smooths the random sea clutter well, preserves the structure, gray level, and contrast of ship targets, greatly improves the separability of ship targets, and outperforms the six reference methods.
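A sketch of a one-level DWT fusion with the absolute-maximum high-frequency rule and averaged low-frequency rule, assuming PyWavelets; the regional-energy modulation and guided-filter smoothing steps are omitted here.

```python
import numpy as np
import pywt

def dwt_fuse(img_a, img_b, wavelet="db2"):
    ca, (ha, va, da) = pywt.dwt2(img_a.astype(np.float32), wavelet)
    cb, (hb, vb, db_) = pywt.dwt2(img_b.astype(np.float32), wavelet)
    low = 0.5 * (ca + cb)                                         # average the approximation bands
    pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)    # absolute-maximum detail rule
    return pywt.idwt2((low, (pick(ha, hb), pick(va, vb), pick(da, db_))), wavelet)
```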

8.
Infrared polarization and intensity images contain both common and complementary unique information. A fusion method based on the dual-tree complex wavelet transform (DT-CWT) and sparse representation is therefore proposed. First, the DT-CWT extracts the high- and low-frequency components of the source images, and the fused high-frequency component is obtained with the absolute-maximum rule. The low-frequency components are then assembled into a joint matrix, an overcomplete dictionary is trained on this matrix by K-SVD, and the sparse coefficients of each low-frequency component are computed over the dictionary; the positions of the non-zero entries in the sparse coefficients distinguish common information from unique information, and each is fused with its own rule. Finally, the fused high- and low-frequency coefficients are transformed back by the inverse DT-CWT to obtain the fused image. Experimental results show that the proposed algorithm not only highlights the common information of the source images but also preserves their unique information, while the fused image retains high contrast and rich detail.
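A hedged sketch of the sparse-representation step: scikit-learn dictionary learning with OMP coding stands in for K-SVD, a joint dictionary is trained over patches of both low-frequency components, and the stronger activation per atom is kept. Patch size, number of atoms, sparsity, and the fusion rule here are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

def sparse_fuse_low(low_a, low_b, patch=8, atoms=64):
    pa = extract_patches_2d(low_a, (patch, patch)).reshape(-1, patch * patch)
    pb = extract_patches_2d(low_b, (patch, patch)).reshape(-1, patch * patch)
    dico = MiniBatchDictionaryLearning(n_components=atoms, random_state=0)
    dico.fit(np.vstack([pa, pb]))                     # joint dictionary over both sources
    ca = sparse_encode(pa, dico.components_, algorithm="omp", n_nonzero_coefs=5)
    cb = sparse_encode(pb, dico.components_, algorithm="omp", n_nonzero_coefs=5)
    cf = np.where(np.abs(ca) >= np.abs(cb), ca, cb)   # keep the stronger activation per atom
    pf = (cf @ dico.components_).reshape(-1, patch, patch)
    return reconstruct_from_patches_2d(pf, low_a.shape)
```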

9.
To remedy the unsatisfactory visual quality of wavelet-based infrared image enhancement, an infrared image enhancement method based on the stationary wavelet transform and Retinex is proposed, using Retinex to improve visual quality and brightness uniformity. First, multi-scale Retinex enhancement is applied to the largest-scale low-frequency sub-band of the stationary wavelet decomposition. The high-frequency sub-bands are then denoised by Bayesian shrinkage thresholding, and their gain coefficients are computed from the local contrast of the low-frequency sub-band together with fuzzy rules, giving the enhanced high-frequency sub-bands. Finally, the enhanced image is reconstructed from the low- and high-frequency sub-bands. Experiments on a large set of images, with qualitative and quantitative evaluation of the enhancement and comparisons against bi-directional histogram equalization, second-generation wavelet, curvelet, and multi-scale Retinex methods, show that the proposed method enhances detail, suppresses noise, and clearly improves the overall visual quality of the images.
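A minimal multi-scale Retinex (MSR) sketch for a single band, using conventional Gaussian scales; the scales and the output normalization are assumptions, not necessarily the paper's settings.

```python
import numpy as np
import cv2

def msr(band, sigmas=(15, 80, 250)):
    x = band.astype(np.float32) + 1.0
    out = np.zeros_like(x)
    for s in sigmas:
        blur = cv2.GaussianBlur(x, (0, 0), s)
        out += np.log(x) - np.log(blur + 1e-6)        # single-scale Retinex at scale s
    out /= len(sigmas)
    return cv2.normalize(out, None, 0, 255, cv2.NORM_MINMAX)
```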

10.
To address the edge blur, detail loss, and reduced contrast and sharpness caused by traditional image fusion methods, an infrared and visible image fusion algorithm based on intuitionistic fuzzy sets and regional contrast in the non-subsampled contourlet transform (NSCT) domain is proposed. First, the source images are decomposed by the NSCT into high- and low-frequency components. Then, exploiting the ability of intuitionistic fuzzy sets to describe fuzzy concepts flexibly and accurately, a double-Gaussian membership function is constructed to fuse the low-frequency components; exploiting the ability of regional contrast to describe image texture in detail, a fusion rule combining multi-region feature contrast with distance analysis is used for the high-frequency components. Finally, the inverse NSCT yields the fused image. Experimental results show that, compared with other fusion algorithms, this algorithm improves image contrast, preserves the edges and details of the source images, and achieves better objective evaluation scores.
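A sketch of a regional-contrast measure for the high-frequency bands with a choose-max rule; the window size and the contrast definition (local standard deviation over local mean) are assumptions standing in for the paper's multi-region feature contrast with distance analysis.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def region_contrast(band, win=7):
    band = band.astype(np.float32)
    mean = uniform_filter(band, size=win)
    var = uniform_filter(band * band, size=win) - mean * mean
    return np.sqrt(np.maximum(var, 0.0)) / (np.abs(mean) + 1e-6)   # local std / local mean

def fuse_high(h_ir, h_vis):
    return np.where(region_contrast(h_ir) >= region_contrast(h_vis), h_ir, h_vis)
```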

11.
This paper presents a fusion method for infrared–visible and infrared-polarization images based on the multi-scale center-surround top-hat transform, which can effectively extract the feature and detail information of the source images. Firstly, the multi-scale bright (dark) feature regions of the source images are extracted at different scale levels by the multi-scale center-surround top-hat transform. Secondly, the bright (dark) feature regions at different scale levels are refined by spatial scale to eliminate redundancies. Thirdly, the refined bright (dark) feature regions from different scales are combined into the fused bright (dark) feature regions by addition. Then, a base image is calculated by performing dilation and erosion on the source images with the largest-scale outer structuring element. Finally, the fusion image is obtained by importing the fused bright and dark features into the base image with a reasonable strategy. Experimental results indicate that the proposed fusion method achieves state-of-the-art performance in both objective assessment and subjective visual quality.
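A hedged sketch of this pipeline in which the classic multi-scale top-hat/black-hat stands in for the center-surround operator: bright and dark features of the two sources are merged by maximum at each scale, summed over scales, and imported into a base image built by averaging dilations and erosions with the largest structuring element. The maximum rule and the base-image formula are assumptions, not the paper's exact strategy.

```python
import numpy as np
import cv2

def fuse_tophat(img_a, img_b, sizes=(5, 11, 17)):
    a, b = img_a.astype(np.float32), img_b.astype(np.float32)
    bright = np.zeros_like(a)
    dark = np.zeros_like(a)
    for s in sizes:
        se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (s, s))
        bright += np.maximum(cv2.morphologyEx(a, cv2.MORPH_TOPHAT, se),
                             cv2.morphologyEx(b, cv2.MORPH_TOPHAT, se))
        dark += np.maximum(cv2.morphologyEx(a, cv2.MORPH_BLACKHAT, se),
                           cv2.morphologyEx(b, cv2.MORPH_BLACKHAT, se))
    se_max = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (sizes[-1],) * 2)
    base = 0.25 * (cv2.dilate(a, se_max) + cv2.erode(a, se_max) +
                   cv2.dilate(b, se_max) + cv2.erode(b, se_max))   # smooth base image
    return np.clip(base + bright - dark, 0, 255).astype(np.uint8)
```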

12.
Integration of infrared and visible images is an active and important topic in image understanding and interpretation. In this paper, a new fusion method is proposed based on the improved multi-scale center-surround top-hat transform, which can effectively extract the feature and detail information of the source images. Firstly, the multi-scale bright (dark) feature regions of the infrared and visible images are extracted at different scale levels by the improved multi-scale center-surround top-hat transform. Secondly, the feature regions at the same scale in both images are combined by a multi-judgment contrast fusion rule, and the final feature images are obtained by simply adding together the feature images of all scales. Then, a base image is calculated by applying a Gaussian fuzzy-logic combination rule to the two smoothed source images. Finally, the fusion image is obtained by importing the extracted bright and dark feature images into the base image with a suitable strategy. Both objective assessment and subjective inspection of the experimental results indicate that the proposed method is superior to current popular MST-based and morphology-based methods in the field of infrared-visible image fusion.

13.
In this paper, an improved fusion algorithm for infrared and visible images based on multi-scale transform is proposed. First of all, the morphological top-hat (Morphology-Hat) transform is applied to the infrared image and the visible image separately. Then the two images are decomposed into high-frequency and low-frequency images by the contourlet transform (CT). The fusion strategy for the high-frequency images is based on the mean gradient, and the fusion strategy for the low-frequency images is based on principal component analysis (PCA). Finally, the fused image is obtained by the inverse contourlet transform (ICT). The experiments and results demonstrate that the proposed method can significantly improve image fusion performance, highlight target information, achieve high contrast, and preserve rich detail at the same time.
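A minimal sketch of the PCA low-frequency rule: fusion weights come from the leading eigenvector of the 2x2 covariance of the two vectorized low-frequency images. This is the conventional PCA-fusion recipe; the paper's variant may differ in detail.

```python
import numpy as np

def pca_fuse(low_a, low_b):
    X = np.stack([low_a.ravel(), low_b.ravel()])     # 2 x n_pixels
    cov = np.cov(X)
    vals, vecs = np.linalg.eigh(cov)                 # eigenvalues in ascending order
    v = np.abs(vecs[:, -1])                          # leading eigenvector
    w = v / v.sum()
    return w[0] * low_a + w[1] * low_b
```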

14.
The high-frequency components in traditional multi-scale transform methods are approximately sparse and can represent different detail information. In the low-frequency component, however, few coefficients lie around zero, so the low-frequency information cannot be sparsely represented. The low-frequency component contains the main energy of the image and depicts its profile, and fusing it directly is not conducive to obtaining a highly accurate fusion result. Therefore, this paper presents an infrared and visible image fusion method combining the multi-scale and top-hat transforms. On one hand, the new top-hat transform can effectively extract the salient features of the low-frequency component; on the other hand, the multi-scale transform can extract high-frequency detail information at multiple scales and from diverse directions. Combining the two methods is conducive to acquiring more characteristics and more accurate fusion results. Specifically, for the low-frequency component, a new type of top-hat transform is used to extract low-frequency features, and different fusion rules are then applied to fuse the low-frequency features and the low-frequency background; for the high-frequency components, the product-of-characteristics method is used to integrate the high-frequency detail information. Experimental results show that the proposed algorithm obtains more detailed information and clearer infrared targets than traditional multi-scale transform methods. Compared with state-of-the-art fusion methods based on sparse representation, the proposed algorithm is simple and effective, and its time consumption is significantly reduced.

15.
To enhance images efficiently, a novel algorithm using multi-scale image features extracted by the top-hat transform is proposed in this paper. Firstly, the multi-scale bright and dim regions are extracted through the top-hat transform using structuring elements of the same shape and increasing sizes. Then, two types of multi-scale image features, namely the multi-scale bright and dim image regions at each scale and the multi-scale image details between neighboring scales, are extracted and used to form the final extracted bright and dim image regions. Finally, the image is enhanced by enlarging the contrast between the final extracted bright and dim image features. Experimental results on images from different applications verify that the proposed algorithm efficiently enhances the contrast and details of the image while producing few noise artifacts.
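A compact sketch of this enhancement scheme under assumed structuring-element sizes: bright/dim regions per scale, details between neighboring scales, maxima across scales, then contrast enlargement as f + bright - dark.

```python
import numpy as np
import cv2

def multiscale_tophat_enhance(img, sizes=(3, 7, 11, 15)):
    f = img.astype(np.float32)
    brights, darks = [], []
    for s in sizes:
        se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (s, s))
        brights.append(cv2.morphologyEx(f, cv2.MORPH_TOPHAT, se))    # bright regions at scale s
        darks.append(cv2.morphologyEx(f, cv2.MORPH_BLACKHAT, se))    # dim regions at scale s
    # details between neighboring scales
    db = [np.maximum(brights[i + 1] - brights[i], 0) for i in range(len(sizes) - 1)]
    dd = [np.maximum(darks[i + 1] - darks[i], 0) for i in range(len(sizes) - 1)]
    bright_final = np.maximum(np.max(brights, axis=0), np.max(db, axis=0))
    dark_final = np.maximum(np.max(darks, axis=0), np.max(dd, axis=0))
    return np.clip(f + bright_final - dark_final, 0, 255).astype(np.uint8)
```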

16.
This paper presents a multi-focus image fusion algorithm based on a dual-channel PCNN in the NSCT domain. Fusion algorithms based on multi-scale transforms are prone to pseudo-Gibbs effects and are not effective for fusing dim or partially bright images. To solve these problems, the proposed algorithm decomposes the two images into sub-images of different frequencies with the NSCT, discusses in detail the selection principles for the different sub-band coefficients obtained from the NSCT decomposition, determines the band-pass sub-band coefficients by fusion with an improved dual-channel PCNN, and finally obtains the fused image by the inverse NSCT. The dual-channel PCNN fusion rules address the complexity of PCNN parameter settings and the long computation time. Experimental results show that the algorithm overcomes the defects of traditional multi-focus image fusion algorithms and improves the fusion quality.
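A heavily simplified, hedged dual-channel PCNN-style firing model for selecting band-pass coefficients, where the inputs are assumed to be non-negative activity maps such as absolute coefficient values; the linking kernel, the parameters, and the stopping rule are illustrative assumptions and not the paper's exact model.

```python
import numpy as np
from scipy.ndimage import convolve

def dual_channel_pcnn(s1, s2, iters=30, beta=0.2, alpha_theta=0.2, v_theta=20.0):
    K = np.array([[0.5, 1.0, 0.5], [1.0, 0.0, 1.0], [0.5, 1.0, 0.5]])
    Y = np.zeros_like(s1)
    theta = np.ones_like(s1) * s1.max()
    fire1 = np.zeros_like(s1)
    fire2 = np.zeros_like(s1)
    for _ in range(iters):
        L = convolve(Y, K, mode="nearest")                   # linking input from firing neighbors
        u1 = s1 * (1.0 + beta * L)                           # channel-1 internal activity
        u2 = s2 * (1.0 + beta * L)                           # channel-2 internal activity
        Y = (np.maximum(u1, u2) > theta).astype(float)       # neurons that fire this iteration
        fire1 += Y * (u1 >= u2)
        fire2 += Y * (u2 > u1)
        theta = np.exp(-alpha_theta) * theta + v_theta * Y   # threshold decay plus refractory boost
    return np.where(fire1 >= fire2, s1, s2)                  # keep the coefficient whose channel fired more
```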

17.
To address the low brightness and contrast, missing detail and contour information, and poor visibility of traditional infrared and low-light visible image fusion algorithms, a fusion method for infrared and enhanced low-light visible images based on latent low-rank representation and composite filtering is proposed. The method first enhances the visible image with an improved high-dynamic-range compression enhancement to raise its brightness. It then decomposes the infrared image and the enhanced low-light visible image with a decomposition scheme based on latent low-rank representation and composite filtering, obtaining the corresponding low- and high-frequency layers. The low-frequency layers are fused with an improved contrast-enhanced visual-saliency-map fusion method, and the high-frequency layers with an improved weighted-least-squares optimization fusion method. Finally, the fused low- and high-frequency layers are linearly superimposed to obtain the final fused image. Comparative experiments with other methods show that the fused images obtained by this method are rich in detail, sharp, and highly visible.

18.
To address the low contrast, detail loss, and color distortion that arise when fusing near-infrared and color visible images, a new fusion algorithm based on multi-scale transforms and an adaptive pulse coupled neural network (PCNN) is proposed. The color visible image is first converted to HSI (hue, saturation, intensity) space; the three HSI components are mutually uncorrelated, so they can be processed separately. The intensity component and the near-infrared image are each decomposed by a multi-scale transform, for which the Tetrolet transform is chosen, yielding low- and high-frequency components. For the low-frequency components, an expectation-maximization-based fusion rule is proposed; for the high-frequency components, a difference-of-Gaussians operator adjusts the threshold of the PCNN model, and the resulting adaptive PCNN serves as the fusion rule. The processed low- and high-frequency components are reconstructed by the inverse Tetrolet transform into a new intensity image, which is then mapped back to RGB space together with the original hue and saturation components to obtain the fused color image. To counter the smoothing introduced by fusion and the uneven illumination of the original images, a colour and sharpness correction (CSC) mechanism is introduced to improve the quality of the fused image. To verify the method, five groups of near-infrared and color visible images at a resolution of 1 024 × 680 were tested and compared with four currently efficient fusion methods and with the proposed method without color correction. The results show that, compared with the other fusion algorithms, the method retains the most detail and texture with or without CSC, greatly improves visibility, yields more detail and texture under weak illumination, and provides better contrast and good color reproduction, with clear advantages on objective metrics of information preservation, color restoration, image contrast, and structural similarity.
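A sketch of the color-space workflow only, assuming OpenCV: HSV stands in for HSI, and a simple weighted average of the intensity channel and the near-infrared image stands in for the Tetrolet/adaptive-PCNN fusion. The file names and weights are hypothetical.

```python
import numpy as np
import cv2

vis_bgr = cv2.imread("visible.png")                        # hypothetical color visible image
nir = cv2.imread("nir.png", cv2.IMREAD_GRAYSCALE)          # hypothetical near-infrared image

hsv = cv2.cvtColor(vis_bgr, cv2.COLOR_BGR2HSV)             # HSV used as a stand-in for HSI
h, s, v = cv2.split(hsv)
fused_v = cv2.addWeighted(v, 0.5, nir, 0.5, 0)             # placeholder for the multi-scale fusion
fused_bgr = cv2.cvtColor(cv2.merge([h, s, fused_v]), cv2.COLOR_HSV2BGR)
```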

19.
Infrared polarization and intensity imagery provide complementary and discriminative information for image understanding and interpretation. In this paper, a novel fusion method is proposed that merges this information with various combination rules. It makes use of both the low-frequency and high-frequency image components from the support value transform (SVT) and applies fuzzy logic in the combination process. The images to be fused (both infrared polarization and intensity images) are first decomposed by the SVT into low-frequency component images and support value image sequences. The low-frequency component images are then combined using a fuzzy combination rule blending three sub-combination methods: (1) region feature maximum, (2) region feature weighted average, and (3) pixel value maximum; the support value image sequences are merged using a fuzzy combination rule fusing two sub-combination methods: (1) pixel energy maximum and (2) region feature weighting. With two newly defined features as variables, i.e. the low-frequency difference feature for the low-frequency component images and the support-value difference feature for the support value image sequences, trapezoidal membership functions are developed to tune the fuzzy fusion process. Finally, the fused image is obtained by inverse SVT operations. Experimental results from visual inspection and quantitative evaluation both indicate the superiority of the proposed method over its counterparts in fusing infrared polarization and intensity images.
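A minimal sketch of a trapezoidal membership function over a difference feature and its use to blend two sub-combination outputs; the breakpoints and the blending scheme are illustrative assumptions.

```python
import numpy as np

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership with breakpoints a <= b <= c <= d."""
    x = np.asarray(x, dtype=np.float64)
    rise = np.clip((x - a) / (b - a + 1e-12), 0, 1)
    fall = np.clip((d - x) / (d - c + 1e-12), 0, 1)
    return np.minimum(rise, fall)

def blend(diff_feature, rule_a_out, rule_b_out):
    """Membership of the difference feature steers the mix of two sub-combination results."""
    w = trapezoid(diff_feature, 0.1, 0.3, 0.7, 0.9)   # membership of a 'moderate difference' set
    return w * rule_a_out + (1 - w) * rule_b_out
```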

