Browsing by Subject "Big Data"
Now showing 1 - 3 of 3
Item Open Access
Big Data Analytics and Sensor-Enhanced Activity Management to Improve Effectiveness and Efficiency of Outpatient Medical Rehabilitation. (International Journal of Environmental Research and Public Health, 2020-01)
Jones, Mike; Collier, George; Reinkensmeyer, David J; DeRuyter, Frank; Dzivak, John; Zondervan, Daniel; Morris, John

Numerous societal trends are compelling a transition from inpatient to outpatient venues of care for medical rehabilitation. While there are advantages to outpatient rehabilitation (e.g., lower cost, greater relevance to home and community function), there are also challenges, including a lack of information about how patient progress observed in the outpatient clinic translates into improved functional performance at home. At present, outpatient providers must rely on patient-reported information about functional progress (or lack thereof) at home and in the community. Information and communication technologies (ICT) offer another option: data collected on the patient's adherence, performance, and progress on home exercises could be used to guide course corrections between clinic visits, enhancing the effectiveness and efficiency of outpatient care. In this article, we describe our efforts to explore the use of sensor-enhanced home exercise and big data analytics in medical rehabilitation. The goal of this work is to demonstrate how sensor-enhanced exercise can improve rehabilitation outcomes for patients with significant neurological impairment (e.g., from stroke, traumatic brain injury, or spinal cord injury). We provide an overview of big data analysis and explain how it may be used to optimize outpatient rehabilitation, creating a more efficient model of care. We describe our planned development efforts to build advanced analytic tools to guide home-based rehabilitation and our proposed randomized trial to evaluate the effectiveness and implementation of this approach.

Item Open Access
Making Clinical Practice Guidelines Pragmatic: How Big Data and Real World Evidence Can Close the Gap. (Annals of the Academy of Medicine, Singapore, 2018-12)
Chew, Si Yuan; Koh, Mariko S; Loo, Chian Min; Thumboo, Julian; Shantakumar, Sumitra; Matchar, David B

Clinical practice guidelines (CPGs) have become ubiquitous in every field of medicine today, but there has been limited success in their implementation and in improving health outcomes. Guidelines are largely based on the results of traditional randomised controlled trials (RCTs), which adopt a highly selective process to maximise the intervention's chance of demonstrating efficacy; the resulting trials have high internal validity but lack external validity. Guidelines based on these RCTs therefore often suffer from a gap between trial efficacy and real-world effectiveness, which is one of the common reasons for poor guideline adherence by physicians. "Real World Evidence" (RWE) can complement RCTs in CPG development. RWE, in the form of data from integrated electronic health records, represents the vast and varied collective experience of frontline doctors and patients. RWE has the potential to fill the gap in current guidelines by balancing information about whether a test or treatment works (efficacy) with data on how it works in real-world practice (effectiveness). RWE can also advance the agenda of precision medicine in everyday practice by engaging frontline stakeholders in pragmatic biomarker studies.
This will enable guideline developers to determine more precisely not only whether a clinical test or treatment is recommended, but for whom and when. Singapore is well positioned to ride the big data and RWE wave, as we have the advantages of high digital interconnectivity, an integrated National Electronic Health Record (NEHR), and governmental support in the form of the Smart Nation initiative.

Item Open Access
Neuroimaging-based classification of PTSD using data-driven computational approaches: A multisite big data study from the ENIGMA-PGC PTSD consortium. (NeuroImage, 2023-12)
Zhu, Xi; Kim, Yoojean; Ravid, Orren; He, Xiaofu; Suarez-Jimenez, Benjamin; Zilcha-Mano, Sigal; Lazarov, Amit; Lee, Seonjoo; Abdallah, Chadi G; Angstadt, Michael; Averill, Christopher L; Baird, C Lexi; Baugh, Lee A; Blackford, Jennifer U; Bomyea, Jessica; Bruce, Steven E; Bryant, Richard A; Cao, Zhihong; Choi, Kyle; Cisler, Josh; Cotton, Andrew S; Daniels, Judith K; Davenport, Nicholas D; Davidson, Richard J; DeBellis, Michael D; Dennis, Emily L; Densmore, Maria; deRoon-Cassini, Terri; Disner, Seth G; Hage, Wissam El; Etkin, Amit; Fani, Negar; Fercho, Kelene A; Fitzgerald, Jacklynn; Forster, Gina L; Frijling, Jessie L; Geuze, Elbert; Gonenc, Atilla; Gordon, Evan M; Gruber, Staci; Grupe, Daniel W; Guenette, Jeffrey P; Haswell, Courtney C; Herringa, Ryan J; Herzog, Julia; Hofmann, David Bernd; Hosseini, Bobak; Hudson, Anna R; Huggins, Ashley A; Ipser, Jonathan C; Jahanshad, Neda; Jia-Richards, Meilin; Jovanovic, Tanja; Kaufman, Milissa L; Kennis, Mitzy; King, Anthony; Kinzel, Philipp; Koch, Saskia BJ; Koerte, Inga K; Koopowitz, Sheri M; Korgaonkar, Mayuresh S; Krystal, John H; Lanius, Ruth; Larson, Christine L; Lebois, Lauren AM; Li, Gen; Liberzon, Israel; Lu, Guang Ming; Luo, Yifeng; Magnotta, Vincent A; Manthey, Antje; Maron-Katz, Adi; May, Geoffery; McLaughlin, Katie; Mueller, Sven C; Nawijn, Laura; Nelson, Steven M; Neufeld, Richard WJ; Nitschke, Jack B; O'Leary, Erin M; Olatunji, Bunmi O; Olff, Miranda; Peverill, Matthew; Phan, K Luan; Qi, Rongfeng; Quidé, Yann; Rektor, Ivan; Ressler, Kerry; Riha, Pavel; Ross, Marisa; Rosso, Isabelle M; Salminen, Lauren E; Sambrook, Kelly; Schmahl, Christian; Shenton, Martha E; Sheridan, Margaret; Shih, Chiahao; Sicorello, Maurizio; Sierk, Anika; Simmons, Alan N; Simons, Raluca M; Simons, Jeffrey S; Sponheim, Scott R; Stein, Murray B; Stein, Dan J; Stevens, Jennifer S; Straube, Thomas; Sun, Delin; Théberge, Jean; Thompson, Paul M; Thomopoulos, Sophia I; van der Wee, Nic JA; van der Werff, Steven JA; van Erp, Theo GM; van Rooij, Sanne JH; van Zuiden, Mirjam; Varkevisser, Tim; Veltman, Dick J; Vermeiren, Robert RJM; Walter, Henrik; Wang, Li; Wang, Xin; Weis, Carissa; Winternitz, Sherry; Xie, Hong; Zhu, Ye; Wall, Melanie; Neria, Yuval; Morey, Rajendra A

Background
Recent advances in data-driven computational approaches have been helpful in devising tools to objectively diagnose psychiatric disorders. However, current machine learning studies are limited to small homogeneous samples, and their differing methodologies and imaging collection protocols limit the ability to directly compare and generalize their results. Here we aimed to classify individuals with PTSD versus controls and to assess generalizability using large, heterogeneous brain datasets from the ENIGMA-PGC PTSD Working Group.

Methods
We analyzed brain MRI data from 3,477 structural MRI, 2,495 resting-state fMRI, and 1,952 diffusion MRI scans. First, we identified the brain features that best distinguish individuals with PTSD from controls using traditional machine learning methods. Second, we assessed the utility of the denoising variational autoencoder (DVAE) and evaluated its classification performance. Third, we assessed the generalizability and reproducibility of both models using a leave-one-site-out cross-validation procedure for each modality.
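The leave-one-site-out procedure described above can be approximated with scikit-learn's LeaveOneGroupOut, treating the acquisition site as the grouping variable. The sketch below is a minimal illustration only: the synthetic feature matrix, the logistic-regression classifier, and the subject and site counts are assumptions for demonstration, not the consortium's actual pipeline.

```python
# Minimal leave-one-site-out cross-validation sketch (illustrative data and model).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_subjects, n_features = 600, 150
X = rng.normal(size=(n_subjects, n_features))   # stand-in for per-subject imaging features
y = rng.integers(0, 2, size=n_subjects)         # 1 = PTSD, 0 = control
sites = rng.integers(0, 20, size=n_subjects)    # acquisition site per subject

logo = LeaveOneGroupOut()
aucs = []
for train_idx, test_idx in logo.split(X, y, groups=sites):
    # Train on all sites except one; evaluate on the held-out site.
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    model.fit(X[train_idx], y[train_idx])
    if len(np.unique(y[test_idx])) == 2:        # AUC needs both classes in the test site
        prob = model.predict_proba(X[test_idx])[:, 1]
        aucs.append(roc_auc_score(y[test_idx], prob))

print(f"mean leave-one-site-out test AUC: {np.mean(aucs):.2f}")
```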
Results
We found lower performance in classifying PTSD vs. controls with data from over 20 sites (60% test AUC for s-MRI, 59% for rs-fMRI, and 56% for d-MRI) compared with other studies run on single-site data. Performance increased when classifying PTSD against healthy controls (HC) without a trauma history in each modality (75% AUC). Classification performance remained intact when applying the DVAE framework, which reduced the number of features. Finally, we found that the DVAE framework achieved better generalization to unseen datasets than the traditional machine learning frameworks, although performance remained only slightly above chance.
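For context on the feature-reduction step mentioned above: a denoising variational autoencoder compresses noise-corrupted input features into a small latent code, and a downstream classifier operates on that code rather than on the raw features. The PyTorch sketch below is illustrative only; the layer sizes, latent dimension, and noise level are assumptions and do not reflect the consortium's architecture.

```python
# Minimal denoising variational autoencoder sketch (illustrative sizes).
import torch
import torch.nn as nn

class DenoisingVAE(nn.Module):
    def __init__(self, n_features=150, latent_dim=16, noise_std=0.1):
        super().__init__()
        self.noise_std = noise_std
        self.encoder = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU())
        self.mu = nn.Linear(64, latent_dim)        # mean of q(z|x)
        self.logvar = nn.Linear(64, latent_dim)    # log-variance of q(z|x)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                     nn.Linear(64, n_features))

    def forward(self, x):
        x_noisy = x + self.noise_std * torch.randn_like(x)        # denoising corruption
        h = self.encoder(x_noisy)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)    # reparameterization trick
        return self.decoder(z), mu, logvar

def vae_loss(x, recon, mu, logvar):
    # Reconstruction error plus KL divergence to the standard normal prior.
    recon_loss = nn.functional.mse_loss(recon, x, reduction="mean")
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + kl

model = DenoisingVAE()
x = torch.randn(32, 150)                  # a batch of standardized imaging features
recon, mu, logvar = model(x)
loss = vae_loss(x, recon, mu, logvar)
loss.backward()
# After training, the low-dimensional `mu` is the reduced feature set a
# classifier would be trained on, which is what makes the model less site-specific.
```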
Conclusion
These results have the potential to provide a baseline classification performance for PTSD when using large-scale neuroimaging datasets. Our findings show that the choice of control group can heavily affect classification performance. The DVAE framework provided better generalizability for the multi-site data. This may be more significant in clinical practice, since the neuroimaging-based diagnostic DVAE classification models are much less site-specific, rendering them more generalizable.