Browsing by Author "Qian, Ruobing"
Item Open Access
Development of Novel Optical Design and Signal Processing Approaches in Optical Coherence Imaging (2020)
Qian, Ruobing
Optical coherence tomography (OCT) is a non-invasive optical imaging modality that provides high-resolution, cross-sectional images of the retina and cornea. It has become a standard of care in ophthalmology for the diagnosis and monitoring of ocular diseases. However, current OCT systems face several major challenges, including: (1) difficult alignment and fixation in pediatric retinal imaging; (2) limited cellular-level contrast for ophthalmic disease diagnosis; and (3) expensive hardware and intensive computation requirements for real-time high-speed 3D imaging.
This dissertation describes the development of several novel optical design and signal processing approaches in OCT and optical coherence imaging technologies to address these limitations. We first describe a long-working-distance swept-source OCT system to facilitate retinal imaging in young children (chapter 2). The system incorporates two custom lenses and a novel compact 2f retinal scanning configuration to achieve a working distance of 350 mm with a 16° OCT field of view. The system achieves high-quality retinal imaging of children as young as 21 months old without sedation in the clinic. We then present a spectroscopic OCT technology that utilizes time-frequency analysis to obtain quantitative diagnostic information about cellular responses in the anterior chamber of the eye, which can indicate ocular diseases such as hyphema and anterior uveitis. We demonstrate that this technology can differentiate and quantify the composition of anterior chamber blood cells, including red blood cells and subtypes of white blood cells (WBCs) such as granulocytes, lymphocytes, and monocytes (chapters 3 and 4). Finally, we describe a coherence-based 3D imaging technique that uses a grating for fast beam steering, a swept-source laser with long coherence length, and time-frequency analysis for depth retrieval (chapter 5). We demonstrate that the system can achieve high-speed 3D imaging with sub-millimeter axial resolution and an axial imaging range of tens of centimeters.
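The depth-retrieval idea from chapter 5, in which target depth maps to the beat frequency of a swept-source interference fringe and is recovered by time-frequency analysis, can be sketched on synthetic data. All parameters below (sample rate, beat frequency, window length) are illustrative assumptions, not values from the dissertation:

```python
import numpy as np
from scipy.signal import stft

# Illustrative parameters (assumptions, not values from the dissertation)
fs = 1e6                      # digitizer sample rate, Hz
f_beat = 50e3                 # beat frequency; in swept-source ranging,
                              # target depth is proportional to f_beat
t = np.arange(0, 0.01, 1 / fs)
fringe = np.cos(2 * np.pi * f_beat * t)   # idealized interference fringe

# Time-frequency analysis: short-time Fourier transform of the fringe
f, seg_times, Z = stft(fringe, fs=fs, nperseg=1024)

# Dominant beat frequency in each time segment -> depth over time
peak_freq = f[np.argmax(np.abs(Z), axis=0)]
# depth = peak_freq * c / (2 * sweep_rate), with sweep_rate in Hz/s
```

With a stationary target the recovered peak frequency stays near `f_beat` in every segment; a moving target would trace out a time-varying ridge in the spectrogram, which is what makes the time-frequency view useful for depth retrieval.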
Item Open Access
Imaging dynamics beneath turbid media via parallelized single-photon detection (CoRR, 2021-07-03)
Xu, Shiqi; Yang, Xi; Liu, Wenhui; Jonsson, Joakim; Qian, Ruobing; Konda, Pavan Chandra; Zhou, Kevin C; Kreiss, Lucas; Dai, Qionghai; Wang, Haoqian; Berrocal, Edouard; Horstmeyer, Roarke
Noninvasive optical imaging through dynamic scattering media has numerous important biomedical applications but remains a challenging task. While standard diffuse imaging methods measure optical absorption or fluorescent emission, it is also well established that the temporal correlation of scattered coherent light diffuses through tissue much like optical intensity. Few works to date, however, have aimed to experimentally measure and process such temporal correlation data to demonstrate deep-tissue video reconstruction of decorrelation dynamics. In this work, we utilize a single-photon avalanche diode (SPAD) array camera to simultaneously monitor the temporal dynamics of speckle fluctuations at the single-photon level from 12 different phantom tissue surface locations, delivered via a customized fiber bundle array. We then apply a deep neural network to convert the acquired single-photon measurements into video of scattering dynamics beneath rapidly decorrelating tissue phantoms.
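The speckle-decorrelation quantity underlying this kind of measurement, the normalized intensity autocorrelation g2(tau), can be sketched on a synthetic trace. The signal model and every parameter below are assumptions for illustration, not the paper's data or processing pipeline:

```python
import numpy as np

# Sketch: estimate the normalized speckle intensity autocorrelation
# g2(tau) = <I(t) I(t+tau)> / <I>^2 from an intensity time trace.
# The synthetic signal below is an illustrative stand-in for binned
# SPAD photon counts; tau_c, dt, and n are assumed values.
rng = np.random.default_rng(0)
tau_c, dt, n = 1e-3, 1e-4, 20000   # decorrelation time, bin width, length
phi = np.exp(-dt / tau_c)          # per-step field correlation (AR(1) model)
field = np.zeros(n)
for i in range(1, n):
    field[i] = phi * field[i - 1] + np.sqrt(1 - phi**2) * rng.standard_normal()
intensity = field**2               # speckle-like intensity (squared field)

def g2(I, max_lag):
    """Normalized intensity autocorrelation over lags 0..max_lag-1."""
    lags = []
    for k in range(max_lag):
        a = I[: len(I) - k] if k else I
        lags.append(np.mean(a * I[k:]))
    return np.array(lags) / np.mean(I) ** 2

curve = g2(intensity, 50)
# Faster sample dynamics (smaller tau_c) make g2 decay toward 1 sooner,
# which is the contrast mechanism exploited for imaging decorrelation.
```

The decay rate of this curve, rather than the mean intensity, is what encodes the buried dynamics in this class of measurement.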
We demonstrate the ability to reconstruct images of transient (0.1-0.4 s) dynamic events occurring up to 8 mm beneath a decorrelating tissue phantom with millimeter-scale resolution, and highlight how our model can flexibly extend to monitor flow speed within buried phantom vessels.

Item Open Access
Mesoscopic photogrammetry with an unstabilized phone camera (CVPR 2021, 2020-12-10)
Zhou, Kevin C; Cooke, Colin; Park, Jaehee; Qian, Ruobing; Horstmeyer, Roarke; Izatt, Joseph A; Farsiu, Sina
We present a feature-free photogrammetric technique that enables quantitative 3D mesoscopic (mm-scale height variation) imaging with tens-of-micron accuracy from sequences of images acquired by a smartphone at close range (several cm) under freehand motion, without additional hardware. Our end-to-end, pixel-intensity-based approach jointly registers and stitches all the images by estimating a coaligned height map, which acts as a pixel-wise radial deformation field that orthorectifies each camera image to allow homographic registration. The height maps themselves are reparameterized as the output of an untrained encoder-decoder convolutional neural network (CNN) with the raw camera images as the input, which effectively removes many reconstruction artifacts. Our method also jointly estimates both the camera's dynamic 6D pose and its distortion using a nonparametric model; the latter is especially important in mesoscopic applications when using cameras not designed for imaging at short working distances, such as smartphone cameras. We also propose strategies for reducing computation time and memory that are applicable to other multi-frame registration problems. Finally, we demonstrate our method on sequences of multi-megapixel images captured by an unstabilized smartphone on a variety of samples (e.g., painting brushstrokes, circuit board, seeds).
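The homographic-registration step mentioned in the last abstract, in which orthorectified frames are related by a 3x3 homography, can be illustrated with a minimal point-mapping sketch. The matrix H below is an arbitrary example for illustration, not an estimate from the paper:

```python
import numpy as np

# Sketch: mapping image points through a 3x3 homography H, the planar
# relation that holds between frames once each image has been
# orthorectified by the height map.
def apply_homography(H, pts):
    """Map Nx2 points through H using homogeneous coordinates."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # lift to (x, y, 1)
    mapped = pts_h @ H.T                              # projective transform
    return mapped[:, :2] / mapped[:, 2:3]             # perspective division

# Illustrative homography: small rotation, translation, and a slight
# projective term (assumed values, not from the paper)
theta = np.deg2rad(2.0)
H = np.array([[np.cos(theta), -np.sin(theta),  5.0],
              [np.sin(theta),  np.cos(theta), -3.0],
              [1e-5,           0.0,            1.0]])

pts = np.array([[0.0, 0.0], [100.0, 50.0]])
mapped = apply_homography(H, pts)   # the origin maps to (5.0, -3.0)
```

In a full pipeline, H would be estimated per frame rather than fixed, and the quality of this planar fit is exactly what the per-pixel height map (the radial deformation field) is optimized to enable.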