Multi-Scale Mixed Attention Network for CT and MRI Image Fusion
Authors: Yang Liu, Binyu Yan, Rongzhu Zhang, Kai Liu, Gwanggil Jeon, Xiaoming Yang
Institution: 1. College of Electronics and Information Engineering, Sichuan University, Chengdu 610064, China; 2. College of Electrical Engineering, Sichuan University, Chengdu 610064, China; 3. Department of Embedded Systems Engineering, Incheon National University, Incheon 22012, Korea
Abstract: Recently, the rapid development of the Internet of Things has contributed to the emergence of telemedicine. However, online diagnosis requires doctors to analyze multiple multi-modal medical images, which is inconvenient and inefficient. Multi-modal medical image fusion has been proposed to solve this problem. Owing to their outstanding feature extraction and representation capabilities, convolutional neural networks (CNNs) have been widely used in medical image fusion. However, most existing CNN-based medical image fusion methods compute their weight maps with a simple weighted-average strategy, which degrades the quality of the fused images because inessential information is weighted as heavily as salient information. In this paper, we propose a CNN-based CT and MRI image fusion method (MMAN) that adopts a visual saliency-based strategy to preserve more useful information. First, a multi-scale mixed attention block is designed to extract features; this block gathers more helpful information and refines the extracted features at both the channel and spatial levels. Then, a visual saliency-based fusion strategy is used to fuse the feature maps. Finally, the fused image is obtained via reconstruction blocks. Experimental results show that, compared with other state-of-the-art methods, our method preserves more textural detail, clearer edge information, and higher contrast.
Keywords: convolutional neural network; image fusion; attention; visual saliency
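The abstract gives no implementation details, so the following is a minimal, hypothetical PyTorch sketch of the two ideas it describes: a multi-scale mixed attention block with channel- and spatial-level refinement, and a visual-saliency-weighted fusion of the CT and MRI feature maps. All layer sizes, kernel choices, and the local-activity saliency measure are illustrative assumptions, not the authors' MMAN architecture.

```python
# Hypothetical sketch (not the authors' released code) of a multi-scale mixed
# attention block and a visual-saliency-weighted fusion of CT/MRI feature maps.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiScaleMixedAttention(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        # Multi-scale feature extraction with 3x3, 5x5 and 7x7 kernels (assumed sizes).
        self.branch3 = nn.Conv2d(channels, channels, 3, padding=1)
        self.branch5 = nn.Conv2d(channels, channels, 5, padding=2)
        self.branch7 = nn.Conv2d(channels, channels, 7, padding=3)
        self.fuse = nn.Conv2d(3 * channels, channels, 1)
        # Channel-level attention: squeeze-and-excitation style gating.
        self.channel_fc = nn.Sequential(
            nn.Linear(channels, channels // 4),
            nn.ReLU(inplace=True),
            nn.Linear(channels // 4, channels),
            nn.Sigmoid(),
        )
        # Spatial-level attention: 7x7 conv over pooled channel descriptors.
        self.spatial_conv = nn.Conv2d(2, 1, 7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feat = self.fuse(torch.cat(
            [self.branch3(x), self.branch5(x), self.branch7(x)], dim=1))
        # Refine at the channel level.
        b, c, _, _ = feat.shape
        w = self.channel_fc(F.adaptive_avg_pool2d(feat, 1).view(b, c)).view(b, c, 1, 1)
        feat = feat * w
        # Refine at the spatial level.
        pooled = torch.cat([feat.mean(dim=1, keepdim=True),
                            feat.amax(dim=1, keepdim=True)], dim=1)
        return feat * torch.sigmoid(self.spatial_conv(pooled))


def saliency_fusion(f_ct: torch.Tensor, f_mri: torch.Tensor, k: int = 7) -> torch.Tensor:
    """Weight each modality's features by a simple visual-saliency proxy:
    local average of absolute activations, normalised across the two modalities."""
    s_ct = F.avg_pool2d(f_ct.abs().mean(dim=1, keepdim=True), k, stride=1, padding=k // 2)
    s_mri = F.avg_pool2d(f_mri.abs().mean(dim=1, keepdim=True), k, stride=1, padding=k // 2)
    w_ct = s_ct / (s_ct + s_mri + 1e-8)
    return w_ct * f_ct + (1.0 - w_ct) * f_mri


# Usage sketch: extract per-modality features, fuse by saliency, then pass the
# fused maps to reconstruction blocks (not shown) to obtain the fused image.
if __name__ == "__main__":
    block = MultiScaleMixedAttention(channels=64)
    feat_ct = block(torch.randn(1, 64, 128, 128))
    feat_mri = block(torch.randn(1, 64, 128, 128))
    fused = saliency_fusion(feat_ct, feat_mri)
    print(fused.shape)  # torch.Size([1, 64, 128, 128])
```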