Novel Designs of Radiomics-Integrated Deep Learning Models

Purpose: To investigate the feasibility of integrating radiomics and deep learning in computer-aided medical imaging analysis.

Methods: Two approaches to integrating radiomics and deep learning were investigated on two independent tasks. In the first approach, a 2D sliding kernel was implemented to map the impulse response of radiomic features throughout the entire chest X-ray image; thus, each feature is rendered as a 2D map with the same dimensions as the X-ray image. For each of the three investigated deep neural network architectures (VGG-16, VGG-19, and DenseNet-121), a pilot model was trained using X-ray images only. Subsequently, two radiomic feature maps (RFMs) were selected based on cross-correlation analysis in reference to the pilot model's saliency map results. The radiomics-boosted model was then trained on the same deep neural network architecture using the X-ray images plus the selected RFMs as input. The proposed radiomics-boosted design was developed using 812 chest X-ray images comprising 262/288/262 COVID-19/Non-COVID-19 pneumonia/healthy cases, with 649/163 cases assigned to the training-validation/independent test sets. For each model, 50 runs were trained with random assignments of training/validation cases at a 7:1 ratio within the training-validation set. Sensitivity, specificity, accuracy, and ROC curves with Area-Under-the-Curve (AUC) values were evaluated for all three deep neural network architectures.

In the second approach, a cohort of 235 GBM patients with complete surgical resection was divided into short-term/long-term survival groups using a 1-year survival time threshold. Each patient received a pre-surgery multi-parametric MRI exam with 4 scans: T1, contrast-enhanced T1 (T1ce), T2, and FLAIR. Three tumor subregions were segmented by neuroradiologists, and the whole dataset was divided into training, validation, and test groups at a 7:1:2 ratio.
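The 2D sliding-kernel feature mapping in the first approach can be sketched as follows. This is a minimal illustration, not the thesis implementation: the kernel size, bin count, and the choice of histogram entropy as the mapped radiomic feature are all illustrative assumptions (the thesis evaluates many radiomic features and selects two RFMs via cross-correlation with the pilot model's saliency maps).

```python
import numpy as np

def radiomic_feature_map(image, kernel_size=15, n_bins=32):
    """Slide a square kernel over the image and evaluate a first-order
    radiomic feature (here, Shannon entropy of the local intensity
    histogram) at each position, yielding a feature map with the same
    dimensions as the input X-ray image."""
    pad = kernel_size // 2
    padded = np.pad(image, pad, mode="reflect")  # keep output size equal to input
    h, w = image.shape
    fmap = np.zeros((h, w), dtype=np.float64)
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + kernel_size, j:j + kernel_size]
            hist, _ = np.histogram(patch, bins=n_bins)
            p = hist / hist.sum()
            p = p[p > 0]                      # drop empty bins before log
            fmap[i, j] = -np.sum(p * np.log2(p))
    return fmap
```

A feature map produced this way can be stacked with the original X-ray as an additional input channel to the network.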
The developed model comprises three data-source branches. In the first (radiomics) branch, 456 radiomic features (RF) were calculated from the three tumor subregions of each patient's MR images. In the second (deep learning) branch, an encoding neural network architecture was trained for survival group prediction using each single MR modality, and high-dimensional parameters from the last two network layers were extracted as deep features (DF). The extracted radiomic features and deep features were processed by a feature selection procedure to reduce the dimensionality of each feature space. In the third branch, patient-specific clinical features (PSCF), including patient age and the volumes of the three tumor subregions, were collected from the dataset. Finally, data from all three branches were fused as an integrated input to a support vector machine (SVM) for survival group prediction. Different model design strategies were investigated in comparison studies, including 1) 2D- vs. 3D-based image analysis, 2) different radiomics feature space dimension reduction methods, and 3) different data source combinations in the SVM input design.

Results: In the first approach, all three investigated deep neural network architectures demonstrated improved sensitivity, specificity, accuracy, and ROC AUC results in COVID-19 and healthy-individual classification. VGG-16 showed the largest improvement in COVID-19 classification ROC (AUC from 0.963 to 0.993), and DenseNet-121 showed the largest improvement in healthy-individual classification ROC (AUC from 0.962 to 0.989). The reduced variations suggested improved robustness of the model to data partition. For the challenging Non-COVID-19 pneumonia classification task, the radiomics-boosted implementations of VGG-16 (AUC from 0.918 to 0.969) and VGG-19 (AUC from 0.964 to 0.970) improved ROC results, while DenseNet-121 showed a slight but insignificant reduction in ROC performance (AUC from 0.963 to 0.949).
The highest accuracies achieved for COVID-19/Non-COVID-19 pneumonia/healthy-individual classification were 0.973 (VGG-19)/0.936 (VGG-19)/0.933 (VGG-16), respectively. In the second approach, the model achieved 0.638 prediction accuracy on the test set when using patient-specific clinical features alone, which was higher than the results using radiomic features or deep features as the sole SVM input in both 2D- and 3D-based analysis. Including radiomic features or deep features alongside patient-specific clinical features improved accuracy in 3D analysis. The most accurate models in 2D/3D analysis both reached the highest accuracy of 0.745 with different combinations of dissimilarity-selected radiomic features, deep features, and patient-specific clinical features, with corresponding ROC AUC results of 0.69 (2D) and 0.71 (3D), respectively.
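The three-branch fusion described in the second approach can be sketched with scikit-learn. This is a hedged illustration under assumed dimensions: the feature-block widths, the RBF kernel, and the random placeholder data are all assumptions for demonstration; the thesis uses selected radiomic features, extracted deep features, and real clinical features (age plus three subregion volumes).

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical per-patient feature blocks (widths are illustrative):
# selected radiomic features (RF), deep features (DF), and
# patient-specific clinical features (PSCF: age + 3 subregion volumes).
n = 40
rf = rng.normal(size=(n, 10))
df = rng.normal(size=(n, 8))
pscf = rng.normal(size=(n, 4))
y = rng.integers(0, 2, size=n)  # short- vs. long-term survival label

# Fuse the three branches by simple concatenation, then classify
# with an SVM; standardization keeps branches on comparable scales.
X = np.concatenate([rf, df, pscf], axis=1)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
pred = clf.predict(X)
```

Dropping one of the blocks from the `np.concatenate` call reproduces the single- and dual-source input designs compared in the study.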

Conclusions: The integration of radiomic analysis into deep learning model design improved the performance and robustness of computer-aided diagnosis and outcome prediction, which holds great potential for clinical applications and provides a radiomics perspective for deep learning interpretation.





Hu, Zongsheng (2022). Novel Designs of Radiomics-Integrated Deep Learning Models. Master's thesis, Duke University. Retrieved from


Duke's student scholarship is made available to the public under a Creative Commons Attribution-NonCommercial-NoDerivatives (CC BY-NC-ND) license.