Browsing by Author "Hong, Hwanhee"
Item (Open Access): Caveat emptor: the combined effects of multiplicity and selective reporting. (Trials, 2018-09-17)
Authors: Li, Tianjing; Mayo-Wilson, Evan; Fusco, Nicole; Hong, Hwanhee; Dickersin, Kay
Abstract: Clinical trials and systematic reviews of clinical trials inform healthcare decisions. There is growing concern, however, about results from clinical trials that cannot be reproduced. Reasons for nonreproducibility include that outcomes are defined in multiple ways, results can be obtained using multiple methods of analysis, and trial findings are reported in multiple sources ("multiplicity"). Multiplicity combined with selective reporting can influence the dissemination of trial findings and decision-making. In particular, users of evidence might be misled by exposure to selected sources and overly optimistic representations of intervention effects. In this commentary, drawing from our experience in the Multiple Data Sources in Systematic Reviews (MUDS) study and evidence from previous research, we offer practical recommendations to enhance the reproducibility of clinical trials and systematic reviews.

Item (Open Access): Comparison of methods that combine multiple randomized trials to estimate heterogeneous treatment effects. (Statistics in Medicine, 2024-03)
Authors: Brantner, Carly Lupton; Nguyen, Trang Quynh; Tang, Tengjie; Zhao, Congwen; Hong, Hwanhee; Stuart, Elizabeth A
Abstract: Individualized treatment decisions can improve health outcomes, but using data to make these decisions in a reliable, precise, and generalizable way is challenging with a single dataset. Leveraging multiple randomized controlled trials allows for the combination of datasets with unconfounded treatment assignment to better estimate heterogeneous treatment effects. This article discusses several nonparametric approaches for estimating heterogeneous treatment effects using data from multiple trials. We extend single-study methods to a scenario with multiple trials and explore their performance through a simulation study, with data-generation scenarios that have differing levels of cross-trial heterogeneity. The simulations demonstrate that methods that directly allow for heterogeneity of the treatment effect across trials perform better than methods that do not, and that the choice of single-study method matters depending on the functional form of the treatment effect. Finally, we discuss which methods perform well in each setting and then apply them to four randomized controlled trials to examine effect heterogeneity of treatments for major depressive disorder.
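To make the flavor of these approaches concrete, here is a minimal sketch of one of the simplest strategies for combining trials: pool the data and fit a single outcome regression (an "S-learner") that includes a trial indicator, so the learned treatment effect can vary with a covariate and across trials. This is an illustrative sketch only, not the article's implementation; the simulated data, the single covariate, the effect sizes, and the choice of a random forest are all assumptions made for the example.

```python
# Illustrative sketch: a pooled S-learner across several randomized trials.
# All data-generating values here are invented for demonstration purposes.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Simulate K trials whose true treatment effect varies with x and by trial.
K, n = 4, 500
X = rng.normal(size=(K * n, 1))              # one baseline covariate
S = np.repeat(np.arange(K), n)               # trial membership
A = rng.integers(0, 2, size=K * n)           # randomized treatment (0/1)
cate_true = 1.0 + X[:, 0] + 0.125 * S        # cross-trial heterogeneity
Y = X[:, 0] + A * cate_true + rng.normal(size=K * n)

# S-learner: one regression of Y on (x, treatment, trial); the forest
# learns treatment-by-covariate and treatment-by-trial interactions.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(np.column_stack([X, A, S]), Y)

def cate_hat(x, s):
    """Estimated CATE in trial s: prediction under A=1 minus under A=0."""
    d1 = np.column_stack([x, np.ones(len(x)), np.full(len(x), s)])
    d0 = np.column_stack([x, np.zeros(len(x)), np.full(len(x), s)])
    return model.predict(d1) - model.predict(d0)

x_grid = np.linspace(-2, 2, 5).reshape(-1, 1)
print(np.round(cate_hat(x_grid, s=0), 2))    # should roughly track 1 + x
```

Richer single-study learners can be substituted for the forest in the same pooled-data pattern, which is essentially the design question the article's simulation study examines.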
Item (Open Access): Correction to: Integrating multiple data sources (MUDS) for meta-analysis to improve patient-centered outcomes research: a protocol for a systematic review. (Alzheimer's Research & Therapy, 2018-02-16)
Authors: Mayo-Wilson, Evan; Hutfless, Susan; Li, Tianjing; Gresham, Gillian; Fusco, Nicole; Ehmsen, Jeffrey; Heyward, James; Vedula, Swaroop; Lock, Diana; Haythornthwaite, Jennifer; Payne, Jennifer L; Cowley, Theresa; Tolbert, Elizabeth; Rosman, Lori; Twose, Claire; Stuart, Elizabeth A; Hong, Hwanhee; Doshi, Peter; Suarez-Cuervo, Catalina; Singh, Sonal; Dickersin, Kay
Correction: The correct title of the article [1] should be "Integrating multiple data sources (MUDS) for meta-analysis to improve patient-centered outcomes research: a protocol". The article is a protocol for a methodological study, not a systematic review.

Item (Open Access): Methods for Integrating Trials and Non-experimental Data to Examine Treatment Effect Heterogeneity. (Statistical Science: A Review Journal of the Institute of Mathematical Statistics, 2023-11)
Authors: Brantner, Carly Lupton; Chang, Ting-Hsuan; Nguyen, Trang Quynh; Hong, Hwanhee; Di Stefano, Leon; Stuart, Elizabeth A
Abstract: Estimating treatment effects conditional on observed covariates can improve the ability to tailor treatments to particular individuals. Doing so effectively requires dealing with potential confounding and also having enough data to adequately estimate effect moderation. A recent influx of work has looked into estimating treatment effect heterogeneity using data from multiple randomized controlled trials and/or observational datasets. With many new methods available for assessing treatment effect heterogeneity using multiple studies, it is important to understand which methods are best used in which setting, how the methods compare to one another, and what needs to be done to continue progress in this field. This paper reviews these methods broken down by data setting: aggregate-level data, federated learning, and individual participant-level data. We define the conditional average treatment effect and discuss differences between parametric and nonparametric estimators, and we list key assumptions, both those required within a single study and those necessary for data combination. After describing existing approaches, we compare and contrast them and identify open areas for future research. This review demonstrates that there are many possible approaches for estimating treatment effect heterogeneity through the combination of datasets, but that substantial work remains to compare these methods through case studies and simulations, extend them to different settings, and refine them to account for the challenges present in real data.

Item (Embargo): Understanding Contributions of Indirect and Direct Evidence to Statistical Power in Bayesian Network Meta-Analysis: Simulation Studies and Real-World Applications (2024)
Author: Shen, Yicheng
Abstract: Network meta-analysis (NMA) has become a popular tool for simultaneously evaluating multiple interventions in biomedical studies. However, performing power analysis in NMAs can be challenging because power depends on network structures and on the forms of the hypotheses to be tested. In this study, we investigate how varying evidence structures within Bayesian NMAs influence the statistical power and bias of relative treatment effect estimates, using simulations and real-world case studies with binary outcomes. We first conduct a comprehensive simulation study to examine the properties of power in NMA under various scenarios, including different effect sizes, between-study heterogeneity, evidence composition, and model parameterizations. We then provide case studies to illustrate the practical application of our methods for networks of different sizes and structures. Our results suggest that power is notably sensitive to the type and strength of evidence for each hypothesis and can be improved by increasing sample sizes, reducing between-study heterogeneity, and adding more direct evidence. In addition, the power behaviors of contrast-based and arm-based parameterizations largely agree. The findings provide insight into the empirical implications of power analysis in Bayesian NMA, helping to optimize future NMA designs for more reliable and robust healthcare decision-making.
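Because the thesis abstract turns on simulation-based power, a stripped-down example may help fix ideas. The sketch below estimates power for the A-versus-B contrast in a three-treatment network with binary outcomes by combining pooled direct evidence with indirect evidence routed through a common comparator C. It is a simplified frequentist analogue, not the thesis's Bayesian models, and every numeric input (baseline risk, log odds ratios, heterogeneity, trial counts) is an invented assumption for illustration.

```python
# Illustrative sketch: simulation-based power for one contrast in a
# triangle network (A, B, C). All parameter values are invented.
import numpy as np

rng = np.random.default_rng(1)

def sim_logor(p0, logor, n, tau):
    """One two-arm trial: estimated log odds ratio and its variance."""
    d = logor + rng.normal(0, tau)               # between-study heterogeneity
    p1 = 1 / (1 + np.exp(-(np.log(p0 / (1 - p0)) + d)))
    y0 = rng.binomial(n, p0) + 0.5               # +0.5 continuity correction
    y1 = rng.binomial(n, p1) + 0.5
    est = np.log(y1 / (n - y1 + 1)) - np.log(y0 / (n - y0 + 1))
    var = 1 / y0 + 1 / (n + 1 - y0) + 1 / y1 + 1 / (n + 1 - y1)
    return est, var

def power(n_trials, n_per_arm, d_ab=0.4, d_ac=0.6, d_bc=0.2,
          tau=0.1, p0=0.3, reps=2000):
    hits = 0
    for _ in range(reps):
        def pooled(d):   # inverse-variance pooling of direct trials
            ests, vs = zip(*[sim_logor(p0, d, n_per_arm, tau)
                             for _ in range(n_trials)])
            w = 1 / np.asarray(vs)
            return np.sum(w * ests) / w.sum(), 1 / w.sum()
        ab, v_ab = pooled(d_ab)                   # direct A-vs-B evidence
        ac, v_ac = pooled(d_ac)                   # direct A-vs-C evidence
        bc, v_bc = pooled(d_bc)                   # direct B-vs-C evidence
        ind, v_ind = ac - bc, v_ac + v_bc         # indirect A-vs-B via C
        w_d, w_i = 1 / v_ab, 1 / v_ind
        net = (w_d * ab + w_i * ind) / (w_d + w_i)  # combined estimate
        se = np.sqrt(1 / (w_d + w_i))
        hits += abs(net / se) > 1.96              # reject H0: no A-B effect
    return hits / reps

print(power(n_trials=2, n_per_arm=100))
```

Re-running with a larger n_per_arm or n_trials, or a smaller tau, raises the estimated power, matching the abstract's observation that power improves with larger samples, less between-study heterogeneity, and more direct evidence.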