

Deep-learning approach with convolutional neural network for classification of maximum intensity projections of dynamic contrast-enhanced breast magnetic resonance imaging
Institution:1. Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, No.106, Zhongshan 2nd road, Guangzhou 510080 Guangdong, PR China;2. Graduate College, Shantou University Medical College, Shantou, Guangdong, PR China;3. The School of Computer Science and Engineering, South China University of Technology, Guangzhou, Guangdong, PR China;4. Department of Radiology, Foshan Fetal Medicine Institute, Foshan Maternity and Children's Healthcare Hospital Affiliated to Southern Medical University, Foshan, Guangdong, PR China;5. Department of Biomedical Engineering, Stony Brook University, Stony Brook, New York;6. Department of Radiology, Stony Brook Medicine, Stony Brook, New York;7. Department of Psychiatry, Stony Brook Medicine, Stony Brook, New York
Abstract:

Purpose: We aimed to evaluate a deep-learning approach with convolutional neural networks (CNNs) for discriminating between benign and malignant lesions on maximum intensity projections of dynamic contrast-enhanced breast magnetic resonance imaging (MRI).

Methods: We retrospectively gathered maximum intensity projections of dynamic contrast-enhanced breast MRI from 106 benign (including 22 normal) and 180 malignant cases as training and validation data. CNN models were constructed to calculate the probability of malignancy using six architectures (DenseNet121, DenseNet169, InceptionResNetV2, InceptionV3, NasNetMobile, and Xception), each trained for 500 epochs, and were then evaluated on test data comprising 25 benign (including 12 normal) and 47 malignant cases. Two human readers also interpreted the test data and scored the probability of malignancy for each case using the Breast Imaging Reporting and Data System. Sensitivity, specificity, accuracy, and area under the receiver operating characteristic curve (AUC) were calculated.

Results: The CNN models showed a mean AUC of 0.830 (range, 0.750–0.895). The best-performing model was InceptionResNetV2. This model, Reader 1, and Reader 2 had sensitivities of 74.5%, 72.3%, and 78.7%; specificities of 96.0%, 88.0%, and 80.0%; and AUCs of 0.895, 0.823, and 0.849, respectively. There was no significant difference between the CNN models and the human readers (p > 0.125).

Conclusion: Our CNN models showed diagnostic performance comparable to that of human readers in differentiating between benign and malignant lesions on maximum intensity projections of dynamic contrast-enhanced breast MRI.
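The evaluation described in the Methods (sensitivity, specificity, accuracy, and AUC computed from per-case malignancy probabilities against a benign/malignant ground truth) can be sketched as follows. This is a minimal illustration only, not the authors' code: the function name, the 0.5 decision threshold, and the toy probabilities are assumptions, and the AUC is computed via the Mann-Whitney pairwise-ranking formulation rather than any specific library the study may have used.

```python
def diagnostic_metrics(y_true, y_prob, threshold=0.5):
    """Sensitivity, specificity, accuracy, and AUC for a binary
    benign (0) / malignant (1) task, given per-case probabilities."""
    y_pred = [1 if p >= threshold else 0 for p in y_prob]
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

    # AUC as the Mann-Whitney U statistic: the fraction of
    # (benign, malignant) pairs ranked correctly; ties count 0.5.
    pos = [p for t, p in zip(y_true, y_prob) if t == 1]
    neg = [p for t, p in zip(y_true, y_prob) if t == 0]
    auc = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg) / (len(pos) * len(neg))

    return {
        "sensitivity": tp / (tp + fn),          # true-positive rate
        "specificity": tn / (tn + fp),          # true-negative rate
        "accuracy": (tp + tn) / len(y_true),
        "auc": auc,
    }

# Toy example: 4 benign and 4 malignant cases (invented values).
y_true = [0, 0, 0, 0, 1, 1, 1, 1]
y_prob = [0.1, 0.2, 0.4, 0.7, 0.3, 0.6, 0.8, 0.9]
metrics = diagnostic_metrics(y_true, y_prob)
```

Note that sensitivity and specificity depend on the chosen threshold, whereas the AUC summarizes performance over all thresholds, which is why the study reports both kinds of measures.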
This article is indexed in ScienceDirect and other databases.