Browsing by Author "Lo, Joseph Y"
Item (Open Access)
Adaptability index: quantifying CT tube current modulation performance from dose and quality informatics (2017-03-17)
Ria, F; Wilson, JM; Zhang, Y; Samei, E

The balance between risk and benefit in modern CT scanners is governed by the automatic adaptation mechanisms that adjust x-ray flux to accommodate patient size and achieve certain image noise values. The effectiveness of this adaptation is an important aspect of CT performance and should ideally be characterized in the context of real patient cases. The objective of this study was to characterize CT performance with an index that includes image noise and radiation dose across a clinical patient population. The study included 1526 examinations performed on three scanners, from two vendors, used for two clinical protocols (abdominopelvic and chest). The dose-patient size and noise-patient size dependencies were linearized, and a 3D fit was performed for each protocol and each scanner with a planar function. From the fit residual plots, root mean square error (RMSE) values were estimated as a metric of CT adaptability across the patient population. The RMSE values ranged from 0.0215 HU^(1/2) to 0.0344 HU^(1/2): different scanners offer varying degrees of reproducibility of noise and dose across the population. This analysis could be performed with phantoms, but phantom data would only provide information concerning specific exposure parameters for a scan; a general population comparison instead yields new information related to the clinically relevant adaptability of scanner models. A theoretical relationship between image noise, CTDIvol, and patient size was determined based on real patient data. This relationship may provide a new index related to scanners' adaptability concerning image quality and radiation dose across a patient population. © (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
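The planar-fit/RMSE idea above can be made concrete with a short sketch. The snippet below is an illustrative reconstruction, not the authors' code: the exact linearizing transforms for dose and noise are assumptions (a log transform of CTDIvol and a square root of noise are used here only so the residuals carry the HU^(1/2) units quoted in the abstract), and the function name and inputs are hypothetical.

```python
# Hedged sketch: planar least-squares fit of (transformed) noise against patient
# size and (transformed) dose, with the residual RMSE used as an "adaptability"
# metric. The transforms below are assumptions, not the published linearization.
import numpy as np

def adaptability_rmse(size_cm, ctdi_vol, noise_hu):
    """Fit noise_t = a*size + b*dose_t + c and return the residual RMSE and coefficients.

    size_cm  : patient size surrogate (e.g. effective diameter), 1D array
    ctdi_vol : CTDIvol per exam, 1D array
    noise_hu : measured image noise (HU), 1D array
    """
    dose_t = np.log(ctdi_vol)        # assumed linearization of dose vs. size
    noise_t = np.sqrt(noise_hu)      # units of HU^(1/2), matching the reported RMSE

    # Design matrix for the plane z = a*x + b*y + c
    A = np.column_stack([size_cm, dose_t, np.ones_like(size_cm)])
    coef, *_ = np.linalg.lstsq(A, noise_t, rcond=None)

    residuals = noise_t - A @ coef
    return np.sqrt(np.mean(residuals**2)), coef
```

Read this way, a lower residual RMSE means the scanner's tube current modulation keeps exams closer to a single noise-dose-size surface across the patient population, which is what the proposed adaptability index rewards.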
Item (Open Access)
Multi-label annotation of text reports from computed tomography of the chest, abdomen, and pelvis using deep learning. (BMC Medical Informatics and Decision Making, 2022-04)
D'Anniballe, Vincent M; Tushar, Fakrul Islam; Faryna, Khrystyna; Han, Songyue; Mazurowski, Maciej A; Rubin, Geoffrey D; Lo, Joseph Y

Background
There is progress to be made in building artificially intelligent systems for detecting abnormalities that are not only accurate but can also handle the true breadth of findings that radiologists encounter in body (chest, abdomen, and pelvis) computed tomography (CT). Currently, the major bottleneck for developing multi-disease classifiers is a lack of manually annotated data. The purpose of this work was to develop high-throughput multi-label annotators for body CT reports that can be applied across a variety of abnormalities, organs, and disease states, thereby mitigating the need for human annotation.

Methods
We used a dictionary approach to develop rule-based algorithms (RBA) for extraction of disease labels from radiology text reports. We targeted three organ systems (lungs/pleura, liver/gallbladder, kidneys/ureters) with four diseases per system, selected based on their prevalence in our dataset. To expand the algorithms beyond pre-defined keywords, attention-guided recurrent neural networks (RNN) were trained using the RBA-extracted labels to classify reports as positive for one or more diseases or normal for each organ system. The effects of random initialization versus pre-trained embeddings, as well as training dataset size, on disease classification performance were evaluated. The RBA was tested on a subset of 2158 manually labeled reports, and performance was reported as accuracy and F-score. The RNN was tested against a test set of 48,758 reports labeled by the RBA, and performance was reported as area under the receiver operating characteristic curve (AUC), with 95% CIs calculated using the DeLong method.
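As a rough illustration of the dictionary-style RBA step, the sketch below labels a single report for one organ system. The keyword lists, the sentence-level negation cues, and the function name are hypothetical placeholders; the published RBA vocabularies and rules are more extensive than this.

```python
# Minimal sketch of a dictionary-based rule-based annotator (RBA) for report text.
# The disease terms and negation handling here are illustrative assumptions, not
# the authors' actual rules.
import re

# Hypothetical disease dictionaries for one organ system (lungs/pleura).
DISEASE_TERMS = {
    "atelectasis": ["atelectasis", "atelectatic"],
    "nodule": ["nodule", "nodular opacity"],
    "emphysema": ["emphysema", "emphysematous"],
    "effusion": ["pleural effusion"],
}
NEGATION_CUES = ["no ", "without ", "negative for "]

def label_report(report_text):
    """Return a dict of disease -> 0/1 labels plus a 'normal' flag."""
    text = report_text.lower()
    sentences = re.split(r"[.;]", text)
    labels = {disease: 0 for disease in DISEASE_TERMS}
    for sentence in sentences:
        # Crude sentence-level negation check: skip any sentence with a negation cue.
        negated = any(cue in sentence for cue in NEGATION_CUES)
        for disease, terms in DISEASE_TERMS.items():
            if not negated and any(term in sentence for term in terms):
                labels[disease] = 1
    labels["normal"] = int(not any(labels.values()))
    return labels

print(label_report("Lungs: no pleural effusion. Small nodule in the right upper lobe."))
# expected: {'atelectasis': 0, 'nodule': 1, 'emphysema': 0, 'effusion': 0, 'normal': 0}
```

In the pipeline described above, labels produced this way over the full report corpus then serve as weak supervision for training the attention-guided RNN classifiers.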
Results
Manual validation of the RBA confirmed 91-99% accuracy across the 15 different labels. Our models extracted disease labels from 261,229 radiology reports of 112,501 unique subjects. Pre-trained models outperformed random initialization across all diseases. As the training dataset size was reduced, performance was robust except for a few diseases with a relatively small number of cases. Pre-trained classification AUCs reached > 0.95 for all four disease outcomes and normality across all three organ systems.

Conclusions
Our label-extraction pipeline was able to encompass a variety of cases and diseases in body CT reports with exceptional accuracy by generalizing beyond strict rules. The method described can be easily adapted to enable automated labeling of hospital-scale medical datasets for training image-based disease classifiers.