Deriving Lung Ventilation MAP Directly from Auto Segmented CT Images Using Deep Convolutional Neural Network (CNN)

dc.contributor.advisor

Yin, Fang Fang

dc.contributor.author

Li, Nan

dc.date.accessioned

2022-06-15T20:01:46Z

dc.date.issued

2022

dc.department

DKU - Medical Physics Master of Science Program

dc.description.abstract

Lung cancer has been the most commonly occurring cancer (J. Ferlay, 2018), with the highest fatality rate worldwide. Lung cancer patients undergoing radiation therapy typically experience many side effects. To reduce these adverse effects, lung function (ventilation state)-guided radiation therapy has been highly recommended. "Functional Lung Avoidance Radiation Therapy" (FLA-RT) can selectively avoid high-dose irradiation to the well-functioning regions of the lungs and reduce lung injury during RT (Azza Ahmed Khalil, 2021). FLA-RT, however, requires information on lung function for the treatment process. The conventional techniques for acquiring a lung function map (S. W. Harders, 2013) include the 99mTc SPECT technique (Suga, 2002), the 99mTc MRI technique (Lindsay Mathew, 2012), and the 68Ga PET technique (Jason Callahan, 2013). Nevertheless, these techniques share several issues: high cost, a labor-intensive preparation process, and low accessibility for radiation oncology departments. This research aims to investigate whether lung function images can be generated from routine planning CT images using a CNN. This study also develops an image segmentation method to automatically and accurately segment the lung volume in chest CT images. This study retrospectively analyzed 99mTc DTPA SPECT scans of 21 cases. These were open-source data from "VAMPIRE (Ventilation Archive for Medical Pulmonary Image Registration Evaluation)," established by John Kipritidis, Henry C. Woodruff, and Paul J. Keall of the Radiation Physics Laboratory at the University of Sydney, Australia. The CT images and the reference mask images were 512 × 512 matrices with a pixel size of 2.5 × 2.5 mm² and a 3 mm slice thickness. The SPECT images were reconstructed in 512 × 512 matrices with a 2.5 × 2.5 × 2.5 mm³ voxel size. The CT, reference mask, and SPECT images are all in ".mha" format for each study case. This study contains two major components.
First, a deep-learning model was developed to auto-segment the lung region from the CT images. Second, another deep-learning model was developed to take the segmented lung CT images as input and predict the lung ventilation function map. To accomplish the first task, we used the CT images as the network input and the reference mask images as the network output, trained a designated 2D U-shape backbone network, and obtained the first model. To test the model's performance, Pixel Accuracy, Pixel Recall, Pixel Precision, and Intersection over Union (IoU) were used as assessment criteria to evaluate the quality of the model-generated lung masks against the ground truth masks. To accomplish the second task, we used the segmented lung CT images as the network input and the SPECT images as the network output, and trained another designated 3D U-shape backbone network to obtain the second model. To test the performance of the second model, the correlation coefficient (Spearman's coefficient) (Piantadosi) was used as the assessment criterion to evaluate the correlation between the model-generated lung function images and the ground truth SPECT images. To achieve the optimal outcome, this study ran parallel experiments comparing the influence of different training strategies on the outcome (see Chapter 3.3.2). The training strategies cover two aspects for DL Model 1 and four aspects for DL Model 2. Training the designed network with three-channel data as input provided the best image segmentation results. For test case 1, the Pixel Accuracy is 0.935±0.033, the Pixel Recall is 0.942±0.029, the Pixel Precision is 0.942±0.032, and the IoU is 0.891±0.042. For test case 2, the Pixel Accuracy is 0.950±0.024, the Pixel Recall is 0.961±0.015, the Pixel Precision is 0.943±0.028, and the IoU is 0.909±0.036.
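For illustration, the four segmentation metrics named above can be sketched on flat binary masks (1 = lung, 0 = background). The function name and the toy masks below are hypothetical, not taken from the thesis:

```python
def segmentation_metrics(pred, truth):
    """Pixel Accuracy, Pixel Recall, Pixel Precision, and IoU
    for a pair of flat binary masks of equal length."""
    tp = sum(p == 1 and t == 1 for p, t in zip(pred, truth))  # true positives
    tn = sum(p == 0 and t == 0 for p, t in zip(pred, truth))  # true negatives
    fp = sum(p == 1 and t == 0 for p, t in zip(pred, truth))  # false positives
    fn = sum(p == 0 and t == 1 for p, t in zip(pred, truth))  # false negatives
    return {
        "accuracy": (tp + tn) / len(pred),   # Pixel Accuracy
        "recall": tp / (tp + fn),            # Pixel Recall
        "precision": tp / (tp + fp),         # Pixel Precision
        "iou": tp / (tp + fp + fn),          # Intersection over Union
    }

# Toy 8-pixel masks (hypothetical data for illustration)
pred = [1, 1, 0, 0, 1, 0, 1, 0]
truth = [1, 0, 0, 0, 1, 1, 1, 0]
m = segmentation_metrics(pred, truth)
```

Note that IoU penalizes both over- and under-segmentation through the same term, which is why it is typically the lowest of the four values reported above.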
For deriving the lung function images, training the designed network with the chest CT segmented by the ground truth mask, [-1, 1] normalization, and a 32 × 32 × 64 training patch size as inputs provided the best results. The Spearman's correlation coefficients for test cases 1 and 2 were 0.8689±0.038 and 0.8716±0.036, respectively. This preliminary study using the designed U-shape backbone convolutional neural networks (CNNs) achieved satisfactory auto-segmentation results and derived promising lung function maps. It indicates the feasibility of directly deriving the lung ventilation state (SPECT-like images) from CT images. The CNN-derived "SPECT-like" lung functional images might serve as a reference for FLA-RT.
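Spearman's coefficient, the assessment criterion used above, is the Pearson correlation computed on rank-transformed voxel values, so it measures monotonic rather than linear agreement between the derived map and the SPECT ground truth. A minimal pure-Python sketch (helper names are illustrative, not from the thesis):

```python
def rank(values):
    """1-based ranks, with tied values assigned their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1  # extend over the tie group
        avg = (i + j) / 2 + 1  # average 1-based rank of the group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5
```

In practice the two flattened lung-voxel arrays (predicted map and SPECT) would be passed to `spearman`; any strictly monotonic rescaling of either map, such as the [-1, 1] normalization, leaves the coefficient unchanged.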

dc.identifier.uri

https://hdl.handle.net/10161/25317

dc.subject

Physics

dc.subject

Computer science

dc.subject

CNNs

dc.subject

Convolutional Neural Network

dc.subject

Deep Learning

dc.subject

FLA-RT

dc.subject

Image Segmentation

dc.subject

Lung Function MAP

dc.title

Deriving Lung Ventilation MAP Directly from Auto Segmented CT Images Using Deep Convolutional Neural Network (CNN)

dc.type

Master's thesis

duke.embargo.months

23.375342465753423

duke.embargo.release

2024-05-26T00:00:00Z
