Near-Infrared and Color Visible Image Fusion Based on PCNN in the Transform Domain
Affiliation: School of Electronic and Information Engineering, Lanzhou Jiaotong University, Lanzhou 730070, Gansu, China
Funding: National Natural Science Foundation of China (61861025); Longyuan Youth Innovation and Entrepreneurship Talent Project of 2021; Scientific Research Project of Gansu Higher Education Institutions (2016A-018); Youth Foundation of Lanzhou Jiaotong University (2015005)
Abstract: To address the low contrast, loss of detail, and color distortion that arise when near-infrared and color visible images are fused, a new fusion algorithm based on multi-scale transformation and an adaptive pulse coupled neural network (PCNN) is proposed. First, the color visible image is converted to HSI (hue, saturation, intensity) space; because the intensity, hue, and saturation components of HSI space are mutually uncorrelated, they can be processed separately. The intensity component and the near-infrared image are each decomposed by a multi-scale transform, for which the Tetrolet transform is chosen, yielding low-frequency and high-frequency components. For the low-frequency components, a fusion rule based on expectation maximization is proposed; for the high-frequency components, a difference-of-Gaussians operator is used to adjust the threshold of the PCNN model, and the resulting adaptive PCNN serves as the fusion rule. The fused low- and high-frequency components are passed through the inverse Tetrolet transform to obtain a new intensity image, which, together with the original hue and saturation components, is mapped back to RGB space to give the fused color image. To counteract the smoothing introduced by fusion and the uneven illumination of the original images, a color and sharpness correction (CSC) mechanism is introduced to improve the quality of the fused image. To verify the effectiveness of the method, five pairs of near-infrared and color visible images with a resolution of 1 024×680 were tested and compared against four current efficient fusion methods as well as the proposed method without color correction. The experimental results show that, with or without CSC, the proposed method retains the most detail and texture among the compared algorithms and greatly improves visibility; under weak illumination its results contain more detail and texture, with better contrast and good color reproduction, and it shows clear advantages in objective metrics such as information retention, color restoration, image contrast, and structural similarity.
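The abstract outlines the processing chain but gives no implementation details. The Python sketch below shows one way the intensity-channel fusion could be wired together, under several stated assumptions: PyWavelets with a db2 wavelet at three levels stands in for the Tetrolet transform, simple averaging stands in for the expectation-maximization low-frequency rule, magnitude selection stands in for the adaptive PCNN high-frequency rule (a separate PCNN sketch follows the English abstract below), the HSI round trip is approximated by rescaling the RGB channels with the ratio of new to old intensity, and the CSC step is omitted. All function names (fuse_nir_visible, fuse_intensity, fuse_low, fuse_high) are hypothetical helpers, not the authors' code.

```python
import numpy as np
import pywt  # PyWavelets; used here as a stand-in for the Tetrolet transform


def fuse_low(a, b):
    # Placeholder for the paper's expectation-maximization low-frequency rule:
    # a plain average, only to make the pipeline runnable end to end.
    return 0.5 * (a + b)


def fuse_high(a, b):
    # Placeholder for the adaptive PCNN high-frequency rule (sketched further below):
    # keep the coefficient with the larger magnitude at each position.
    return np.where(np.abs(a) >= np.abs(b), a, b)


def fuse_intensity(i_vis, nir, wavelet="db2", level=3):
    """Fuse the visible intensity channel with the NIR image in a wavelet domain."""
    c_vis = pywt.wavedec2(i_vis, wavelet, level=level)
    c_nir = pywt.wavedec2(nir, wavelet, level=level)
    fused = [fuse_low(c_vis[0], c_nir[0])]
    for d_vis, d_nir in zip(c_vis[1:], c_nir[1:]):
        fused.append(tuple(fuse_high(a, b) for a, b in zip(d_vis, d_nir)))
    out = pywt.waverec2(fused, wavelet)
    # Crop in case the reconstruction is one pixel larger than the input.
    return np.clip(out[: i_vis.shape[0], : i_vis.shape[1]], 0.0, 1.0)


def fuse_nir_visible(rgb, nir):
    """rgb: float array in [0, 1], shape (H, W, 3); nir: float array in [0, 1], shape (H, W)."""
    i_old = rgb.mean(axis=2)            # HSI intensity I = (R + G + B) / 3
    i_new = fuse_intensity(i_old, nir)
    ratio = i_new / (i_old + 1e-6)      # rescale RGB so hue and saturation are roughly kept
    return np.clip(rgb * ratio[..., None], 0.0, 1.0)
```

Rescaling each RGB channel by the ratio of the fused intensity to the original intensity is a common shortcut that approximately preserves hue and saturation, which mirrors the idea of replacing only the intensity component before mapping back to RGB.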

Keywords: color image fusion; Tetrolet transform; expectation-maximization algorithm; adaptive pulse coupled neural network
Received: 2019-09-27

Research on Near Infrared and Color Visible Fusion Based on PCNN in Transform Domain
Authors: SHEN Yu, YUAN Yu-bin, PENG Jing
Institution: School of Electronic and Information Engineering, Lanzhou Jiaotong University, Lanzhou 730070, China
Abstract: Aiming at the problems of low contrast, loss of detail and color distortion after fusion of near-infrared and color visible images, a new fusion algorithm for infrared and color visible images based on multi-scale transformation and an adaptive pulse coupled neural network (PCNN) is proposed. Firstly, the color visible image is transformed into HSI (hue, saturation, intensity) space. The HSI color space contains three components, intensity, hue and saturation, which are uncorrelated with each other and can therefore be processed separately. The intensity component and the near-infrared image are each decomposed by a multi-scale transform, for which the Tetrolet transform is chosen, yielding low-frequency and high-frequency components. For the low-frequency components, a fusion rule based on expectation maximization is proposed. For the high-frequency components, the threshold of the PCNN model is adjusted by a difference-of-Gaussians operator, and the resulting adaptive PCNN model is used as the fusion rule. The fused low- and high-frequency components are passed through the inverse Tetrolet transform to obtain a new intensity image. The new intensity image and the original hue and saturation components are then mapped back to RGB space to obtain the fused color image. To address the smoothing introduced by fusion and the uneven illumination of the original images, a color and sharpness correction (CSC) mechanism is introduced to improve the quality of the fused image. To verify the effectiveness of the proposed method, five pairs of near-infrared and color visible images with a resolution of 1 024×680 were selected for experiments and compared with four current efficient fusion methods as well as the proposed method without color correction. The experimental results show that, compared with the other image fusion algorithms, the proposed method retains the most detail and texture with or without CSC correction and greatly improves visibility; under weak illumination its results contain more detail and texture, with better contrast and good color reproduction, and it shows clear advantages in objective metrics such as information retention, color restoration, image contrast and structural similarity.
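The high-frequency rule is the method's main novelty: a difference-of-Gaussians (DoG) operator adjusts the PCNN threshold, and the fused coefficient is chosen by the network's firing behaviour. The sketch below is a minimal, simplified PCNN intended only to illustrate that idea; the Gaussian sigmas, decay constant, threshold amplitude, iteration count, 3×3 linking window, the exact form of the DoG modulation, and the firing-count comparison are all illustrative assumptions rather than the authors' settings. It could replace fuse_high in the pipeline sketch above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter


def dog(x, sigma1=1.0, sigma2=2.0):
    """Difference-of-Gaussians response of a coefficient map (illustrative sigmas)."""
    return gaussian_filter(x, sigma1) - gaussian_filter(x, sigma2)


def pcnn_firing_map(coeff, iterations=50, beta=0.3, alpha_theta=0.2, v_theta=20.0):
    """Accumulated firing counts of a simplified PCNN fed with |coeff|.

    The per-pixel threshold recharge is modulated by the local DoG magnitude,
    which is the role the paper assigns to the Gaussian-difference operator;
    the modulation direction and all numeric values are assumptions.
    """
    s = np.abs(coeff)
    s = s / (s.max() + 1e-12)                # normalized feeding input
    salience = np.abs(dog(s))
    v = v_theta / (1.0 + salience)           # salient detail -> lower recharge -> more firings
    y = np.zeros_like(s)
    theta = np.ones_like(s)
    fire_count = np.zeros_like(s)
    for _ in range(iterations):
        link = uniform_filter(y, size=3)     # 3x3 linking field from previous pulses
        u = s * (1.0 + beta * link)          # internal activity
        y = (u > theta).astype(s.dtype)      # pulse output
        theta = np.exp(-alpha_theta) * theta + v * y
        fire_count += y
    return fire_count


def fuse_highpass(c_nir, c_vis):
    """Keep, per pixel, the high-frequency coefficient whose neuron fires more often."""
    mask = pcnn_firing_map(c_nir) >= pcnn_firing_map(c_vis)
    return np.where(mask, c_nir, c_vis)
```

Comparing accumulated firing counts is one common way of turning PCNN activity into a coefficient-selection map; the paper does not state its exact decision rule, so this choice is also an assumption.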
Keywords: color image fusion; Tetrolet transform; expectation-maximization algorithm; adaptive pulse coupled neural network