Browsing by Author "Kim, Kanghyun"
Now showing 1 - 5 of 5
Item Open Access
A multiple instance learning approach for detecting COVID-19 in peripheral blood smears. (PLOS Digital Health, 2022-08)
Cooke, Colin L; Kim, Kanghyun; Xu, Shiqi; Chaware, Amey; Yao, Xing; Yang, Xi; Neff, Jadee; Pittman, Patricia; McCall, Chad; Glass, Carolyn; Jiang, Xiaoyin Sara; Horstmeyer, Roarke

A wide variety of diseases are commonly diagnosed via visual examination of cell morphology within a peripheral blood smear. For certain diseases, such as COVID-19, the morphological impact across the multitude of blood cell types is still poorly understood. In this paper, we present a multiple instance learning-based approach that aggregates high-resolution morphological information across many blood cells and cell types to automatically diagnose disease at a per-patient level. We integrated image and diagnostic information from 236 patients to demonstrate not only that there is a significant link between blood and a patient's COVID-19 infection status, but also that novel machine learning approaches offer a powerful and scalable means to analyze peripheral blood smears. Our results both support and extend hematological findings relating blood cell morphology to COVID-19, and offer high diagnostic efficacy, with 79% accuracy and an ROC-AUC of 0.90.

Item Open Access
Gigapixel imaging with a novel multi-camera array microscope. (eLife, 2022-12)
Thomson, Eric E; Harfouche, Mark; Kim, Kanghyun; Konda, Pavan C; Seitz, Catherine W; Cooke, Colin; Xu, Shiqi; Jacobs, Whitney S; Blazing, Robin; Chen, Yang; Sharma, Sunanda; Dunn, Timothy W; Park, Jaehee; Horstmeyer, Roarke W; Naumann, Eva A

The dynamics of living organisms are organized across many spatial scales. However, current cost-effective imaging systems can measure only a subset of these scales at once.
We have created a scalable multi-camera array microscope (MCAM) that enables comprehensive high-resolution recording from multiple spatial scales simultaneously, ranging from structures that approach the cellular scale to large-group behavioral dynamics. By collecting data from up to 96 cameras, we computationally generate gigapixel-scale images and movies with a field of view over hundreds of square centimeters at an optical resolution of 18 µm. This allows us to observe the behavior and fine anatomical features of numerous freely moving model organisms on multiple spatial scales, including larval zebrafish, fruit flies, nematodes, carpenter ants, and slime mold. Further, the MCAM architecture allows stereoscopic tracking of the z-position of organisms using the overlapping field of view from adjacent cameras. Overall, by removing the bottlenecks imposed by single-camera image acquisition systems, the MCAM provides a powerful platform for investigating detailed biological features and behavioral processes of small model organisms across a wide range of spatial scales.

Item Open Access
Learned sensing: jointly optimized microscope hardware for accurate image classification. (Biomedical Optics Express, 2019-12)
Muthumbi, Alex; Chaware, Amey; Kim, Kanghyun; Zhou, Kevin C; Konda, Pavan Chandra; Chen, Richard; Judkewitz, Benjamin; Erdmann, Andreas; Kappes, Barbara; Horstmeyer, Roarke

Since its invention, the microscope has been optimized for interpretation by a human observer. With the recent development of deep learning algorithms for automated image analysis, there is now a clear need to re-design the microscope's hardware for specific interpretation tasks. To increase the speed and accuracy of automated image classification, this work presents a method to co-optimize how a sample is illuminated in a microscope, along with a pipeline to automatically classify the resulting image, using a deep neural network.
By adding a "physical layer" to a deep classification network, we are able to jointly optimize for specific illumination patterns that highlight the most important sample features for the particular learning task at hand, which may not be obvious under standard illumination. We demonstrate how our learned sensing approach for illumination design can automatically identify malaria-infected cells with up to 5-10% greater accuracy than standard and alternative microscope lighting designs. We show that this joint hardware-software design procedure generalizes to offer accurate diagnoses for two different blood smear types, and experimentally show how our new procedure can translate across different experimental setups while maintaining high accuracy.

Item Open Access
Parallelized computational 3D video microscopy of freely moving organisms at multiple gigapixels per second. (Nature Photonics, 2023-05)
Zhou, Kevin C; Harfouche, Mark; Cooke, Colin L; Park, Jaehee; Konda, Pavan C; Kreiss, Lucas; Kim, Kanghyun; Jönsson, Joakim; Doman, Thomas; Reamey, Paul; Saliu, Veton; Cook, Clare B; Zheng, Maxwell; Bechtel, John P; Bègue, Aurélien; McCarroll, Matthew; Bagwell, Jennifer; Horstmeyer, Gregor; Bagnat, Michel; Horstmeyer, Roarke

Wide field of view microscopy that can resolve 3D information at high speed and spatial resolution is highly desirable for studying the behaviour of freely moving model organisms. However, it is challenging to design an optical instrument that optimises all these properties simultaneously. Existing techniques typically require the acquisition of sequential image snapshots to observe large areas or measure 3D information, thus compromising on speed and throughput. Here, we present 3D-RAPID, a computational microscope based on a synchronized array of 54 cameras that can capture high-speed 3D topographic videos over an area of 135 cm2, achieving up to 230 frames per second at spatiotemporal throughputs exceeding 5 gigapixels per second.
3D-RAPID employs a 3D reconstruction algorithm that, for each synchronized snapshot, fuses all 54 images into a composite that includes a co-registered 3D height map. The self-supervised 3D reconstruction algorithm trains a neural network to map raw photometric images to 3D topography using stereo overlap redundancy and ray-propagation physics as the only supervision mechanism. The resulting reconstruction process is thus robust to generalization errors and scales to arbitrarily long videos from arbitrarily sized camera arrays. We demonstrate the broad applicability of 3D-RAPID with collections of several freely behaving organisms, including ants, fruit flies, and zebrafish larvae.

Item Open Access
Physics-enhanced machine learning for virtual fluorescence microscopy. (CoRR, 2020-04-08)
Cooke, Colin L; Kong, Fanjie; Chaware, Amey; Zhou, Kevin C; Kim, Kanghyun; Xu, Rong; Ando, D Michael; Yang, Samuel J; Konda, Pavan Chandra; Horstmeyer, Roarke

This paper introduces a new method of data-driven microscope design for virtual fluorescence microscopy. Our results show that by including a model of illumination within the first layers of a deep convolutional neural network, it is possible to learn task-specific LED patterns that substantially improve the ability to infer fluorescence image information from unstained transmission microscopy images. We validated our method on two different experimental setups, with different magnifications and different sample types, to show a consistent improvement in performance as compared to conventional illumination methods. Additionally, to understand the importance of learned illumination for the inference task, we varied the dynamic range of the fluorescent image targets (from one to seven bits), and showed that the margin of improvement for learned patterns increased with the information content of the target.
This work demonstrates the power of programmable optical elements in enabling better machine learning performance and in providing physical insight into the next generation of machine-controlled imaging systems.
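Several of the entries above (the learned-sensing and virtual-fluorescence papers) share one core idea: modeling the LED illumination as a trainable "physical layer" whose weights are optimized jointly with the downstream network. A minimal sketch of that idea, assuming the layer is a convex combination of single-LED images (the function and variable names here are illustrative, not taken from the papers):

```python
import numpy as np

# Sketch of a "physical layer": the microscope's K LED brightnesses are
# modeled as trainable weights w, and the network's input is the weighted
# sum of K single-LED images of the same sample.
rng = np.random.default_rng(0)

K, H, W = 8, 16, 16               # number of LEDs, image height, width
stack = rng.random((K, H, W))     # one simulated image per LED

w = rng.random(K)                 # trainable illumination weights

def physical_layer(stack, w):
    """Composite image the sensor would record under pattern w."""
    p = np.maximum(w, 0.0)        # brightnesses cannot be negative
    p = p / p.sum()               # normalize total illumination power
    # Weighted sum over the LED axis -> single (H, W) composite image,
    # which would then feed the classification/inference network.
    return np.tensordot(p, stack, axes=1)

img = physical_layer(stack, w)
print(img.shape)                  # (16, 16)
```

In the papers, the weights `w` would be updated by backpropagating the task loss through this layer along with the network's other parameters; after training, the learned pattern can be programmed directly onto the physical LED array.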