High quality and fast compressed sensing MRI reconstruction via edge-enhanced dual discriminator generative adversarial network
Institution:1. Departments of Electronic Science and Communication Engineering, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, Xiamen University, Xiamen 361005, China;2. Jiangsu Key Laboratory of Meteorological Observation and Information Processing, Jiangsu Technology and Engineering Center of Meteorological Sensor Network, School of Electronic and Information Engineering, Nanjing University of Information Science and Technology, Nanjing 210044, China;3. Department of Hematology and Lymphoma Research Center, Peking University Third Hospital, Peking University, Beijing 100191, China;4. Fujian Key Laboratory of Sensing and Computing for Smart City, School of Information Science and Engineering, Xiamen University, Xiamen 361005, China
Abstract:Generative adversarial networks (GANs) are widely used for fast compressed sensing magnetic resonance imaging (CSMRI) reconstruction. However, most existing methods struggle to strike an effective trade-off between abstract global high-level features and edge features, which often leaves significant residual aliasing artifacts and noticeably over-smoothed reconstruction details. To tackle these issues, we propose a novel edge-enhanced dual discriminator generative adversarial network architecture, called EDDGAN, for high-quality CSMRI reconstruction. In this model, we extract effective edge features by fusing edge information from different depths. Then, leveraging the relationship between abstract global high-level features and edge features, we introduce a three-player game to control the hallucination of details and stabilize the training process. The resulting EDDGAN places greater focus on edge restoration and de-aliasing. Extensive experimental results demonstrate that our method consistently outperforms state-of-the-art methods and produces reconstructed images with rich edge details. Our method also generalizes remarkably well, and reconstructing each 256 × 256 image takes approximately 8.39 ms.
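The three-player game described above can be sketched in miniature: a generator is scored jointly by an image-domain discriminator, an edge-domain discriminator applied to an edge map of the reconstruction, and a pixel-wise fidelity term. The sketch below is a simplified illustration under assumptions, not the paper's implementation: the `sobel_edges` extractor is a hypothetical stand-in for EDDGAN's learned multi-depth edge-feature fusion, the discriminators are treated as opaque callables returning a realness probability, and the loss weights `lam_pix` and `lam_edge` are illustrative.

```python
import numpy as np

def sobel_edges(img):
    """Sobel-magnitude edge map (valid convolution).

    Hypothetical stand-in for the paper's learned edge-feature extractor.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)

def generator_loss(recon, target, d_image, d_edge,
                   lam_pix=10.0, lam_edge=1.0):
    """Three-player adversarial loss for the generator (illustrative weights).

    d_image judges the reconstructed image; d_edge judges its edge map;
    a pixel-wise MSE term anchors the reconstruction to the ground truth.
    """
    adv_image = -np.log(d_image(recon) + 1e-8)          # fool image critic
    adv_edge = -np.log(d_edge(sobel_edges(recon)) + 1e-8)  # fool edge critic
    pix = np.mean((recon - target) ** 2)                # data fidelity
    return adv_image + lam_edge * adv_edge + lam_pix * pix
```

With toy discriminators that output a constant 0.5, a perfect reconstruction incurs only the adversarial terms, while any pixel error is penalized through the fidelity term; in training, the two discriminators would be updated alternately with the generator, which is the stabilizing mechanism the abstract attributes to the three-player game.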
This article is indexed in ScienceDirect and other databases.