Browsing by Author "Horstmeyer, Roarke"
Item Open Access: A multiple instance learning approach for detecting COVID-19 in peripheral blood smears (PLOS Digital Health, 2022-08)
Cooke, Colin L; Kim, Kanghyun; Xu, Shiqi; Chaware, Amey; Yao, Xing; Yang, Xi; Neff, Jadee; Pittman, Patricia; McCall, Chad; Glass, Carolyn; Jiang, Xiaoyin Sara; Horstmeyer, Roarke

A wide variety of diseases are commonly diagnosed by visually examining cell morphology in a peripheral blood smear. For certain diseases, such as COVID-19, the morphological impact across the many blood cell types is still poorly understood. In this paper, we present a multiple instance learning-based approach that aggregates high-resolution morphological information across many blood cells and cell types to automatically diagnose disease at the per-patient level. We integrated image and diagnostic information from 236 patients to demonstrate not only that there is a significant link between blood and a patient's COVID-19 infection status, but also that novel machine learning approaches offer a powerful and scalable means of analyzing peripheral blood smears. Our results both back up and extend hematological findings relating blood cell morphology to COVID-19, and achieve high diagnostic efficacy, with 79% accuracy and an ROC-AUC of 0.90.

Item Open Access: Computational Bio-Optical Imaging with Novel Sensor Arrays (2023)
Xu, Shiqi

Optical imaging is an essential tool for the life sciences. Existing biomedical optical systems range from clinical microscopes that use wave-optics principles to examine pathological samples at high resolution, to photoplethysmography in everyday smartwatches that uses diffuse-optics technologies to monitor deep-tissue physiology. An optical system, such as a photography setup in a studio, typically consists of three parts: illumination, objects of interest, and recording devices.
Over the past decades, thanks to rapid advances in semiconductor manufacturing, numerous new and exciting optical devices have emerged: low-cost, small-form-factor LEDs and CMOS camera sensors in budget tablets, for example, and high-density time-of-flight detector arrays in recent generations of iPhones. Moore's law, meanwhile, has driven the development of powerful yet inexpensive computational tools. As a result, much as in other medical imaging modalities such as X-ray CT and MRI, multiplexed optical measurements that may not resemble the object of interest can now be recorded and post-processed to reconstruct images useful for human perception. This thesis discusses several new computational optical imaging techniques at different scales, ranging from vectorial tomographic microscopy for imaging anisotropic cells and tissue, to high-throughput imaging systems capable of recording eukaryotic colonies at mesoscopic scales, to novel single-photon-sensitive sensing methods for noninvasively imaging macroscopic transient dynamics deep within turbid volumes.
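The "record multiplexed measurements, then reconstruct computationally" idea described above can be reduced to a toy linear inverse problem. The sketch below is purely illustrative: the random forward matrix, the sizes, and the noiseless setting are all assumptions for demonstration, not any specific system from the thesis.

```python
import numpy as np

# Toy computational imaging: the recorded measurements y do not resemble the
# object x, but when the forward model A is known, x can be recovered
# computationally. All sizes and the random A are illustrative assumptions.
rng = np.random.default_rng(0)

n = 16                            # number of unknowns (pixels of a tiny "object")
m = 32                            # number of multiplexed measurements
x_true = rng.random(n)            # the object we wish to image

A = rng.normal(size=(m, n))       # multiplexing matrix (e.g., illumination patterns)
y = A @ x_true                    # recorded measurements: mixtures, not an image

# Reconstruction by least squares: x_hat = argmin_x ||A x - y||^2
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
```

In practice the forward model is physical (diffraction, scattering) and the reconstruction typically needs regularization, but the separation between measurement and computation is the same.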
Item Open Access: Deep image prior for undersampling high-speed photoacoustic microscopy (Photoacoustics, 2021-06)
Vu, Tri; DiSpirito, Anthony; Li, Daiwei; Wang, Zixuan; Zhu, Xiaoyi; Chen, Maomao; Jiang, Laiming; Zhang, Dong; Luo, Jianwen; Zhang, Yu Shrike; Zhou, Qifa; Horstmeyer, Roarke; Yao, Junjie

Photoacoustic microscopy (PAM) is an emerging imaging method combining light and sound. However, limited by the laser's repetition rate, state-of-the-art high-speed PAM technology often sacrifices spatial sampling density (i.e., undersamples) to increase imaging speed over a large field of view. Deep learning (DL) methods have recently been used to improve sparsely sampled PAM images; however, these methods often require time-consuming pre-training and large training datasets with ground truth. Here, we propose using a deep image prior (DIP) to improve the quality of undersampled PAM images. Unlike other DL approaches, DIP requires neither pre-training nor fully sampled ground truth, enabling flexible and fast implementation on various imaging targets. Our results demonstrate substantial improvement in high-speed PAM images with as few as 1.4% of the fully sampled pixels. Our approach outperforms interpolation, is competitive with a pre-trained supervised DL method, and readily translates to other high-speed, undersampled imaging modalities.

Item Open Access: Diffraction tomography with a deep image prior
Zhou, Kevin C; Horstmeyer, Roarke

We present a tomographic imaging technique, termed Deep Prior Diffraction Tomography (DP-DT), that reconstructs the 3D refractive index (RI) of thick biological samples at high resolution from a sequence of low-resolution images collected under angularly varying illumination. DP-DT processes the multi-angle data using a phase retrieval algorithm extended by a deep image prior (DIP), which reparameterizes the 3D sample reconstruction as the output of an untrained, deep generative 3D convolutional neural network (CNN).
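As a heavily simplified illustration of the deep image prior idea shared by the two entries above: the reconstruction is reparameterized through a convolutional generator and fit only to the measured (undersampled) pixels, with no pre-training and no fully sampled ground truth. In the sketch below a fixed smoothing convolution stands in for the untrained CNN and a 1-D signal stands in for the image; every size and parameter is an assumption for demonstration, not the papers' actual architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 64
signal = np.sin(2 * np.pi * np.arange(n) / 32)   # ground-truth "image" (1-D for brevity)
mask = rng.random(n) < 0.5                       # keep ~50% of pixels (undersampling)

# Fixed smoothing kernel: the convolutional "generator" whose output range
# acts as the prior (a stand-in for the untrained CNN).
k = np.exp(-0.5 * (np.arange(-7, 8) / 3.0) ** 2)
k /= k.sum()

u = rng.normal(size=n) * 0.1                     # trainable latent parameters
initial_mse = np.mean((np.convolve(u, k, mode="same") - signal)[mask] ** 2)

lr = 0.3
for _ in range(3000):
    x = np.convolve(u, k, mode="same")           # generator output = reconstruction
    g = 2 * mask * (x - signal)                  # grad of masked squared error w.r.t. x
    u -= lr * np.convolve(g, k, mode="same")     # chain rule: adjoint of the symmetric conv

recon = np.convolve(u, k, mode="same")
masked_mse = np.mean((recon - signal)[mask] ** 2)
```

Only the measured entries drive the fit, yet the generator's structure constrains the output everywhere; this is the spirit of DIP-based regularization, not its full form.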
We show that DP-DT effectively addresses the missing cone problem, which otherwise degrades the resolution and quality of standard 3D reconstruction algorithms. Because DP-DT requires neither pre-captured data nor pre-training, it is not biased towards any particular dataset; it is thus a general technique applicable to a wide variety of 3D samples, including scenarios in which large datasets for supervised training would be infeasible or expensive. We applied DP-DT to obtain 3D RI maps of bead phantoms and complex biological specimens, in both simulation and experiment, and show that DP-DT produces higher-quality results than standard regularization techniques. We further demonstrate the generality of DP-DT using two different scattering models, the first Born and multi-slice models. Our results point to the potential benefits of DP-DT for other 3D imaging modalities, including X-ray computed tomography, magnetic resonance imaging, and electron microscopy.

Item Open Access: Imaging dynamics beneath turbid media via parallelized single-photon detection (CoRR, 2021-07-03)
Xu, Shiqi; Yang, Xi; Liu, Wenhui; Jonsson, Joakim; Qian, Ruobing; Konda, Pavan Chandra; Zhou, Kevin C; Kreiss, Lucas; Dai, Qionghai; Wang, Haoqian; Berrocal, Edouard; Horstmeyer, Roarke

Noninvasive optical imaging through dynamic scattering media has numerous important biomedical applications but remains a challenging task. While standard diffuse imaging methods measure optical absorption or fluorescent emission, it is also well established that the temporal correlation of scattered coherent light diffuses through tissue much like optical intensity does. Few works to date, however, have aimed to experimentally measure and process such temporal correlation data to demonstrate deep-tissue video reconstruction of decorrelation dynamics.
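The temporal correlation referred to above is commonly quantified by the normalized intensity autocorrelation g2(tau), whose decay rate encodes how quickly the medium decorrelates. The toy computation below runs on a synthetic, shot-noise-limited photon-count trace; the trace and all parameters are assumptions for demonstration, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic photon-count trace whose underlying speckle intensity decorrelates
# with a ~20-sample time constant (all values assumed for illustration).
n = 5000
tau_c = 20                                        # decorrelation time, in samples
noise = rng.normal(size=n)
field = np.convolve(noise, np.exp(-np.arange(200) / tau_c), mode="same")
intensity = field ** 2                            # speckle intensity of a Gaussian field
counts = rng.poisson(5.0 * intensity / intensity.mean())  # shot-noise-limited counts

def g2(c, max_lag):
    """Normalized intensity autocorrelation <c(t) c(t+lag)> / <c>^2 for lag >= 1."""
    return np.array(
        [np.mean(c[:-lag] * c[lag:]) for lag in range(1, max_lag)]
    ) / c.mean() ** 2

curve = g2(counts, 100)  # decays from ~2 toward 1 as the lag exceeds tau_c
```

For a Gaussian speckle field the Siegert relation gives g2 = 1 + |g1|^2, so the curve starts near 2 at short lags and settles to 1 once the lag exceeds the decorrelation time.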
In this work, we use a single-photon avalanche diode (SPAD) array camera to simultaneously monitor, at the single-photon level, the temporal dynamics of speckle fluctuations at 12 different phantom tissue surface locations delivered via a customized fiber bundle array. We then apply a deep neural network to convert the acquired single-photon measurements into video of the scattering dynamics beneath rapidly decorrelating tissue phantoms. We demonstrate the ability to reconstruct images of transient (0.1-0.4 s) dynamic events occurring up to 8 mm beneath a decorrelating tissue phantom with millimeter-scale resolution, and highlight how our model can flexibly extend to monitoring flow speed within buried phantom vessels.

Item Open Access: Learned sensing: jointly optimized microscope hardware for accurate image classification (Biomedical Optics Express, 2019-12)
Muthumbi, Alex; Chaware, Amey; Kim, Kanghyun; Zhou, Kevin C; Konda, Pavan Chandra; Chen, Richard; Judkewitz, Benjamin; Erdmann, Andreas; Kappes, Barbara; Horstmeyer, Roarke

Since its invention, the microscope has been optimized for interpretation by a human observer. With the recent development of deep learning algorithms for automated image analysis, there is now a clear need to re-design the microscope's hardware for specific interpretation tasks. To increase the speed and accuracy of automated image classification, this work presents a method to co-optimize how a sample is illuminated in a microscope along with a pipeline that automatically classifies the resulting image using a deep neural network. By adding a "physical layer" to a deep classification network, we are able to jointly optimize for specific illumination patterns that highlight the most important sample features for the particular learning task at hand, which may not be obvious under standard illumination.
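A minimal sketch of such a "physical layer", under the assumption that incoherent image formation is linear in the LED weights (so the image under a mixed pattern is the weighted sum of per-LED images): the illumination weights w are updated jointly with a simple logistic classifier by gradient descent. The synthetic data, sizes, and classifier are illustrative stand-ins, not the paper's network.

```python
import numpy as np

rng = np.random.default_rng(3)

B, K, P = 200, 8, 64           # samples, LEDs, pixels (assumed sizes)
I = rng.random((B, K, P))      # per-LED images for each sample
y = rng.integers(0, 2, B).astype(float)

# Make the class signal visible mainly under LED 2 (a hidden "important" angle).
I[y == 1, 2, :32] += 1.0

w = np.ones(K) / K             # illumination pattern: the trainable physical layer
v = np.zeros(P)                # linear classifier weights

def forward(w, v):
    c = np.einsum("k,bkp->bp", w, I)   # composite image under pattern w (linearity)
    p = 1 / (1 + np.exp(-(c @ v)))     # logistic classifier on the composite
    return c, p

def bce(p):
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

lr = 0.1
_, p0 = forward(w, v)
loss0 = bce(p0)
for _ in range(600):
    c, p = forward(w, v)
    dz = (p - y) / B                         # d loss / d logits
    dc = np.outer(dz, v)                     # d loss / d composite (before v update)
    v -= lr * (c.T @ dz)                     # classifier update
    w -= lr * np.einsum("bp,bkp->k", dc, I)  # joint update of the illumination weights

_, p = forward(w, v)
final_loss = bce(p)
```

Because the composite is linear in w, gradients flow through the "optics" exactly as through any other linear layer, which is what lets the illumination be optimized for the classification task.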
We demonstrate how our learned sensing approach to illumination design can automatically identify malaria-infected cells with up to 5-10% greater accuracy than standard and alternative microscope lighting designs. We show that this joint hardware-software design procedure generalizes to offer accurate diagnoses for two different blood smear types, and experimentally show that the procedure translates across different experimental setups while maintaining high accuracy.

Item Open Access: Mesoscopic photogrammetry with an unstabilized phone camera (CVPR 2021, 2020-12-10)
Zhou, Kevin C; Cooke, Colin; Park, Jaehee; Qian, Ruobing; Horstmeyer, Roarke; Izatt, Joseph A; Farsiu, Sina

We present a feature-free photogrammetric technique that enables quantitative 3D mesoscopic (mm-scale height variation) imaging with tens-of-micron accuracy from sequences of images acquired by a smartphone at close range (several cm) under freehand motion, without additional hardware. Our end-to-end, pixel-intensity-based approach jointly registers and stitches all the images by estimating a coaligned height map, which acts as a pixel-wise radial deformation field that orthorectifies each camera image to allow homographic registration. The height maps themselves are reparameterized as the output of an untrained encoder-decoder convolutional neural network (CNN) with the raw camera images as input, which effectively removes many reconstruction artifacts. Our method also jointly estimates both the camera's dynamic 6D pose and its distortion using a nonparametric model, the latter of which is especially important in mesoscopic applications when using cameras not designed for short working distances, such as smartphone cameras. We also propose strategies for reducing computation time and memory that are applicable to other multi-frame registration problems.
Finally, we demonstrate our method on sequences of multi-megapixel images of a variety of samples (e.g., a painting's brushstrokes, a circuit board, and seeds) captured by an unstabilized smartphone.

Item Open Access: Parallelized computational 3D video microscopy of freely moving organisms at multiple gigapixels per second (Nature Photonics, 2023-05)
Zhou, Kevin C; Harfouche, Mark; Cooke, Colin L; Park, Jaehee; Konda, Pavan C; Kreiss, Lucas; Kim, Kanghyun; Jönsson, Joakim; Doman, Thomas; Reamey, Paul; Saliu, Veton; Cook, Clare B; Zheng, Maxwell; Bechtel, John P; Bègue, Aurélien; McCarroll, Matthew; Bagwell, Jennifer; Horstmeyer, Gregor; Bagnat, Michel; Horstmeyer, Roarke

Wide-field-of-view microscopy that can resolve 3D information at high speed and spatial resolution is highly desirable for studying the behaviour of freely moving model organisms. However, it is challenging to design an optical instrument that optimises all of these properties simultaneously. Existing techniques typically require the acquisition of sequential image snapshots to observe large areas or measure 3D information, compromising speed and throughput. Here we present 3D-RAPID, a computational microscope based on a synchronized array of 54 cameras that captures high-speed 3D topographic videos over an area of 135 cm², achieving up to 230 frames per second at spatiotemporal throughputs exceeding 5 gigapixels per second. For each synchronized snapshot, 3D-RAPID's reconstruction algorithm fuses all 54 images into a composite that includes a co-registered 3D height map. The self-supervised reconstruction trains a neural network to map raw photometric images to 3D topography, using stereo overlap redundancy and ray-propagation physics as the only supervision mechanism. The resulting reconstruction process is thus robust to generalization errors and scales to arbitrarily long videos from arbitrarily sized camera arrays.
We demonstrate the broad applicability of 3D-RAPID with collections of freely behaving organisms, including ants, fruit flies, and zebrafish larvae.

Item Open Access: Physics-enhanced machine learning for virtual fluorescence microscopy (CoRR, 2020-04-08)
Cooke, Colin L; Kong, Fanjie; Chaware, Amey; Zhou, Kevin C; Kim, Kanghyun; Xu, Rong; Ando, D Michael; Yang, Samuel J; Konda, Pavan Chandra; Horstmeyer, Roarke

This paper introduces a new method of data-driven microscope design for virtual fluorescence microscopy. Our results show that by including a model of illumination within the first layers of a deep convolutional neural network, it is possible to learn task-specific LED patterns that substantially improve the ability to infer fluorescence image information from unstained transmission microscopy images. We validated our method on two different experimental setups, with different magnifications and sample types, and observed a consistent improvement in performance compared with conventional illumination. Additionally, to understand the importance of learned illumination to the inference task, we varied the dynamic range of the fluorescent image targets (from one to seven bits) and showed that the margin of improvement for learned patterns increased with the information content of the target. This work demonstrates the power of programmable optical elements in enabling better machine learning performance and in providing physical insight into the next generation of machine-controlled imaging systems.

Item Open Access: Quantitative Jones matrix imaging using vectorial Fourier ptychography (2021-09-30)
Dai, Xiang; Xu, Shiqi; Yang, Xi; Zhou, Kevin C; Glass, Carolyn; Konda, Pavan Chandra; Horstmeyer, Roarke

This paper presents a microscopic imaging technique that uses variable-angle illumination to recover the complex polarimetric properties of a specimen at high resolution and over a large field of view.
The approach extends Fourier ptychography, a synthetic-aperture imaging technique that improves resolution from phaseless measurements, to additionally account for the vectorial nature of light. After images are acquired with a standard microscope outfitted with an LED illumination array and two polarizers, our vectorial Fourier ptychography (vFP) algorithm solves for the complex 2x2 Jones matrix of the anisotropic specimen of interest at each resolved spatial location. We introduce a new sequential Gauss-Newton solver that additionally jointly estimates and removes polarization-dependent imaging-system aberrations. We demonstrate effective vFP performance by generating large-area (29 mm²), high-resolution (1.24 μm full-pitch) reconstructions of sample absorption, phase, orientation, diattenuation, and retardance for a variety of calibration samples and biological specimens.
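To make the recovered polarimetric quantities concrete, the sketch below constructs a known 2x2 Jones matrix (a hypothetical linear retarder at 30 degrees) and reads its retardance and diattenuation back from its eigenvalues and singular values. This illustrates the quantities that a Jones matrix encodes, not the vFP solver itself.

```python
import numpy as np

def rot(theta):
    """2x2 rotation matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# Hypothetical specimen: a lossless linear retarder at 30 degrees with
# quarter-wave retardance (all values assumed for illustration).
theta = np.deg2rad(30)
delta = np.pi / 2
J = rot(theta) @ np.diag([np.exp(-1j * delta / 2),
                          np.exp(+1j * delta / 2)]) @ rot(-theta)

# Retardance: phase difference between the eigenvalues of J.
eigvals = np.linalg.eigvals(J)
retardance = abs(np.angle(eigvals[0]) - np.angle(eigvals[1]))

# Diattenuation: contrast between max/min transmitted power, from the
# singular values of J (zero for a pure retarder).
s = np.linalg.svd(J, compute_uv=False)
diattenuation = (s[0] ** 2 - s[1] ** 2) / (s[0] ** 2 + s[1] ** 2)
```

For this pure retarder the singular values are both 1 (no diattenuation) while the eigenvalue phases differ by the quarter-wave retardance, matching the physical construction above.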