Browsing by Author "Bedoya, Armando D"
Now showing 1 - 4 of 4
Item Open Access
Development, Implementation, and Evaluation of an In-Hospital Optimized Early Warning Score for Patient Deterioration. (MDM Policy & Practice, 2020-01-10) O'Brien, Cara; Goldstein, Benjamin A; Shen, Yueqi; Phelan, Matthew; Lambert, Curtis; Bedoya, Armando D; Steorts, Rebecca C
Background. Identification of patients at risk of deteriorating during their hospitalization is an important concern. However, many off-the-shelf scores have poor in-center performance. In this article, we report our experience developing, implementing, and evaluating an in-hospital score for deterioration. Methods. We abstracted 3 years of data (2014-2016) and identified patients on medical wards who died or were transferred to the intensive care unit. We developed a time-varying risk model and then implemented the model over a 10-week period to assess prospective predictive performance. We compared performance to our currently used tool, the National Early Warning Score. To aid clinical decision making, we transformed the quantitative score into a three-level clinical decision support tool. Results. The developed risk score had an average area under the curve of 0.814 (95% confidence interval = 0.79-0.83) versus 0.740 (95% confidence interval = 0.72-0.76) for the National Early Warning Score. We found the proposed score was able to respond to acute changes in patients' clinical status. Upon implementing the score, we achieved the desired positive predictive value but needed to retune the thresholds to reach the desired sensitivity. Discussion. This work illustrates the potential for academic medical centers to build, refine, and implement risk models that are targeted to their patient population and workflow.
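The item above describes converting a continuous deterioration risk score into a three-level clinical decision support tool whose thresholds had to be retuned after deployment. A minimal sketch of that thresholding and tuning step is shown below; the cutoff values, target positive predictive value, and tier names are hypothetical and are not taken from the study.

```python
# Sketch only (not the authors' implementation): map a continuous risk score
# to a three-level alert tier, and pick a high-risk cutoff that meets a target
# positive predictive value on a validation cohort. All constants are hypothetical.
import numpy as np

HIGH_RISK_CUTOFF = 0.30    # hypothetical: triggers bedside evaluation
MEDIUM_RISK_CUTOFF = 0.10  # hypothetical: triggers increased monitoring

def risk_tier(score: float) -> str:
    """Map a model risk score in [0, 1] to a three-level alert tier."""
    if score >= HIGH_RISK_CUTOFF:
        return "high"
    if score >= MEDIUM_RISK_CUTOFF:
        return "medium"
    return "low"

def tune_high_cutoff(scores: np.ndarray, outcomes: np.ndarray,
                     target_ppv: float = 0.25) -> float:
    """Return the lowest cutoff whose positive predictive value meets the target.

    scores: predicted risks for a validation cohort; outcomes: 0/1 deterioration labels.
    """
    for cutoff in np.unique(scores):          # candidate cutoffs, ascending
        flagged = scores >= cutoff
        if flagged.sum() == 0:
            break
        if outcomes[flagged].mean() >= target_ppv:
            return float(cutoff)
    return 1.0  # no cutoff meets the target
```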
Item Open Access
Incorporating informatively collected laboratory data from EHR in clinical prediction models. (BMC Medical Informatics and Decision Making, 2024-07) Sun, Minghui; Engelhard, Matthew M; Bedoya, Armando D; Goldstein, Benjamin A
Background
Electronic Health Records (EHR) are widely used to develop clinical prediction models (CPMs). However, one challenge is that there is often a degree of informative missingness: laboratory measures, for example, are typically ordered only when a clinician judges them to be needed. When data are Not Missing at Random (NMAR), analytic strategies based on other missingness mechanisms are inappropriate. In this work, we compare the impact of different strategies for handling missing data on CPM performance.
Methods
We considered a predictive model for rapid inpatient deterioration as an exemplar implementation. This model incorporated twelve laboratory measures with varying levels of missingness: five labs had missingness rates around 50%, and the other seven had missingness rates around 90%. We included them based on the belief that their missingness status could itself be highly informative for prediction. We explicitly compared several missing-data strategies: mean imputation, normal-value imputation, conditional imputation, categorical encoding, and missingness embeddings, some of them also combined with last observation carried forward (LOCF). We implemented logistic LASSO regression, multilayer perceptron (MLP), and long short-term memory (LSTM) models as the downstream classifiers. We compared AUROC on the testing data and used bootstrapping to construct 95% confidence intervals.
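As a rough illustration of how some of the simpler strategies compared in the Methods above operate, the sketch below applies mean imputation, and LOCF followed by normal-value imputation, to a toy lab table. The column names and "normal value" reference numbers are hypothetical, not the study's data dictionary.

```python
# Minimal sketch of two of the missing-data strategies named above.
import pandas as pd

# Hypothetical table: one row per encounter time step, one column per lab.
labs = pd.DataFrame({
    "encounter_id": [1, 1, 1, 2, 2],
    "lactate":      [None, 2.1, None, None, None],
    "creatinine":   [0.9, None, 1.4, None, 1.1],
})
lab_cols = ["lactate", "creatinine"]
NORMAL_VALUES = {"lactate": 1.0, "creatinine": 1.0}  # hypothetical reference values

# Mean imputation: fill each lab with its observed column mean.
mean_imputed = labs[lab_cols].fillna(labs[lab_cols].mean())

# LOCF within each encounter, then normal-value imputation for labs never measured.
locf = labs.groupby("encounter_id")[lab_cols].ffill()
locf_plus_normal = locf.fillna(value=NORMAL_VALUES)

print(pd.concat([labs[["encounter_id"]], locf_plus_normal], axis=1))
```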
Results
We had 105,198 inpatient encounters, of which 4.7% experienced the deterioration outcome of interest. LSTM models generally outperformed the cross-sectional models, with embedding approaches and categorical encoding yielding the best results among them. For the cross-sectional models, normal-value imputation with LOCF generated the best results.
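The Methods above report test-set AUROC with bootstrapped 95% confidence intervals. A generic sketch of that evaluation recipe follows; it is not the study's exact resampling scheme.

```python
# Sketch: bootstrap a 95% confidence interval for test-set AUROC.
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auroc_ci(y_true, y_score, n_boot=1000, seed=0):
    rng = np.random.default_rng(seed)
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    aurocs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), size=len(y_true))  # resample with replacement
        if y_true[idx].min() == y_true[idx].max():
            continue  # resample contained only one class; AUROC undefined
        aurocs.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(aurocs, [2.5, 97.5])
    return roc_auc_score(y_true, y_score), (lo, hi)
```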
Conclusion
Strategies that accounted for the possibility of NMAR missing data yielded better model performance than those that did not. The embedding method had the advantage of not requiring prior clinical knowledge. Using LOCF can enhance the performance of cross-sectional models but can have the opposite effect in LSTM models.

Item Open Access
Machine learning for early detection of sepsis: an internal and temporal validation study. (JAMIA Open, 2020-07) Bedoya, Armando D; Futoma, Joseph; Clement, Meredith E; Corey, Kristin; Brajer, Nathan; Lin, Anthony; Simons, Morgan G; Gao, Michael; Nichols, Marshall; Balu, Suresh; Heller, Katherine; Sendak, Mark; O'Brien, Cara
Objective
To determine whether deep learning detects sepsis earlier and more accurately than other models, and to evaluate model performance using implementation-oriented metrics that simulate clinical practice.
Materials and methods
We trained and then internally and temporally validated a deep learning model (multi-output Gaussian process and recurrent neural network [MGP-RNN]) to detect sepsis using encounters from adult hospitalized patients at a large tertiary academic center. Sepsis was defined as the presence of 2 or more systemic inflammatory response syndrome (SIRS) criteria, a blood culture order, and at least one element of end-organ failure. The training dataset included demographics, comorbidities, vital signs, medication administrations, and labs from October 1, 2014 to December 1, 2015, while the temporal validation dataset was from March 1, 2018 to August 31, 2018. Comparisons were made to 3 machine learning methods (random forest [RF], Cox regression [CR], and penalized logistic regression [PLR]) and 3 clinical scores used to detect sepsis (SIRS, quick Sequential Organ Failure Assessment [qSOFA], and the National Early Warning Score [NEWS]). Traditional discrimination statistics such as the C-statistic, as well as metrics aligned with operational implementation, were assessed.
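The sepsis label described above is rule-based: two or more SIRS criteria, a blood culture order, and at least one element of end-organ failure. A schematic sketch of such a label follows; the specific variables, thresholds, and time windows used in the study are not given in the abstract, so these details are illustrative only.

```python
# Schematic, illustrative version of a rule-based sepsis label.

def sirs_count(temp_c, heart_rate, resp_rate, wbc):
    """Count standard SIRS criteria met at a given time point."""
    criteria = [
        temp_c > 38.0 or temp_c < 36.0,  # temperature
        heart_rate > 90,                 # heart rate
        resp_rate > 20,                  # respiratory rate
        wbc > 12.0 or wbc < 4.0,         # white blood cell count (x10^9/L)
    ]
    return sum(criteria)

def meets_sepsis_label(temp_c, heart_rate, resp_rate, wbc,
                       blood_culture_ordered, organ_failure_elements):
    """organ_failure_elements: number of end-organ failure indicators present."""
    return (sirs_count(temp_c, heart_rate, resp_rate, wbc) >= 2
            and blood_culture_ordered
            and organ_failure_elements >= 1)
```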
Results
The training set and internal validation included 42,979 encounters, while the temporal validation set included 39,786 encounters. The C-statistic for predicting sepsis within 4 h of onset was 0.88 for the MGP-RNN, compared to 0.836 for RF, 0.849 for CR, 0.822 for PLR, 0.756 for SIRS, 0.619 for NEWS, and 0.481 for qSOFA. The MGP-RNN detected sepsis a median of 5 h in advance. Temporal validation continued to show the MGP-RNN outperforming all 7 clinical risk score and machine learning comparisons.
Conclusions
We developed and validated a novel deep learning model to detect sepsis. Using our data elements and feature set, our modeling approach outperformed other machine learning methods and clinical scores.

Item Open Access
The Eyes Have It-for Idiopathic Pulmonary Fibrosis: a Preliminary Observation. (Pulmonary Therapy, 2022-09) Pleasants, Roy A; Bedoya, Armando D; Boggan, Joel M; Welty-Wolf, Karen; Tighe, Robert M
Introduction
The disease origins of idiopathic pulmonary fibrosis (IPF), which occurs at higher rates in certain races/ethnicities, are not understood. The highest rates occur in white persons of European descent, particularly those with light skin, who are also susceptible to lysosomal organelle dysfunction of the skin leading to fibroproliferative disease. We had observed clinically that the vast majority of patients with IPF had light-colored eyes, suggesting a phenotypic characteristic.
Methods
We pursued this observation through a research database from the US Veterans Administration, a population with a high occurrence of IPF due to its predominance of elderly male smokers. Using this medical records database, which included facial photos, we compared the frequency of light (blue, green, hazel) and dark (light brown, brown) eyes among white patients diagnosed with IPF versus a control group with lung granuloma only (no other radiologic evidence of interstitial lung disease).
Results
Light eye color was significantly more prevalent in patients with IPF than in the control group with lung granuloma (114/147 [77.6%] versus 129/263 [49.0%]; p < 0.001), indicating that light-colored eyes are a phenotype associated with IPF.
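The abstract reports the eye-color comparison as 114/147 versus 129/263 with p < 0.001 but does not state the statistical test used. A chi-square test of the 2x2 table, shown below as one reasonable choice, gives a p-value well below that threshold.

```python
# Quick check of the reported two-group comparison using the published counts.
from scipy.stats import chi2_contingency

#        light eyes, dark eyes
table = [[114, 147 - 114],   # IPF cases (n = 147)
         [129, 263 - 129]]   # granuloma-only controls (n = 263)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.2e}")  # p is far below 0.001
```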
Conclusion
We provide evidence that light eye color is predominant among white persons with IPF.