Computational 3D Optical Imaging Using Wavevector Diversity

The explosion in the popularity and success of deep learning in the past decade has accelerated the development of computationally efficient, GPU-accelerated frameworks, such as TensorFlow and PyTorch, for rapid prototyping of neural networks. In this dissertation, we show that these deep learning tools are also well-suited for computational 3D imaging problems, specifically optical diffraction tomography (ODT), photogrammetry, and our newly proposed optical coherence refraction tomography (OCRT). Underlying these computational 3D imaging techniques is a physical model that demands multiple measurements taken with either angular diversity, wavelength diversity, or both. This requirement can be compactly summarized as wavevector (or k-vector) diversity, where the magnitude and direction of the wavevector correspond to the color and angle of the light, respectively.
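The correspondence between a wavevector's components and the light's color and angle can be made concrete in a few lines. The following is an illustrative sketch (not from the dissertation): wavelength diversity changes the wavevector's magnitude, while angular diversity rotates its direction.

```python
import numpy as np

def wavevector(wavelength_m, theta_rad, n=1.0):
    """Return the 2D wavevector of light with a given wavelength and angle."""
    k_mag = 2 * np.pi * n / wavelength_m          # magnitude set by color
    return k_mag * np.array([np.sin(theta_rad),   # direction set by angle
                             np.cos(theta_rad)])

# Wavelength diversity rescales k; angular diversity rotates it.
k1 = wavevector(800e-9, 0.0)        # 800 nm, on-axis
k2 = wavevector(800e-9, np.pi / 6)  # same color, 30 degrees off-axis
assert np.isclose(np.linalg.norm(k1), np.linalg.norm(k2))
```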

To understand the importance of wavevector diversity for 3D imaging, this dissertation begins by advancing a unified k-space theory of optical coherence tomography (OCT), the most comprehensive theoretical description of OCT to date. This theory not only describes the transfer functions of all major forms of OCT and of other coherent techniques (e.g., confocal microscopy, holography, ODT), but also encompasses the fundamental phenomena of OCT, such as speckle, dispersion, aberration, and the tradeoff between lateral resolution and depth of focus (DOF).
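The lateral-resolution-DOF tradeoff can be illustrated with the standard Gaussian-beam scalings (a back-of-envelope sketch, not the dissertation's full k-space transfer functions): lateral resolution improves linearly with numerical aperture, while depth of focus shrinks quadratically.

```python
import numpy as np

def lateral_resolution(wavelength, na):
    return wavelength / (2 * na)      # scales as lambda / (2 NA)

def depth_of_focus(wavelength, na):
    return 2 * wavelength / na**2     # scales as 2 lambda / NA^2

lam = 1.3e-6  # 1.3 um center wavelength, typical for OCT
for na in (0.05, 0.1, 0.2):
    # doubling NA halves the spot size but quarters the depth of focus
    print(na, lateral_resolution(lam, na), depth_of_focus(lam, na))
```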

Consistent with this unified theory, we implement in TensorFlow a reconstruction algorithm for ODT, a technique that relies on illumination angular diversity to achieve 3D refractive index (RI) imaging. We propose a new method for filling the well-known “missing cone” of the ODT transfer function by reparameterizing the 3D sample as the output of an untrained neural network known as a deep image prior (DIP), which we show to outperform traditional regularization strategies.
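A heavily simplified, hypothetical toy of the DIP idea is sketched below (the dissertation uses an untrained convolutional network in TensorFlow, not this 1D example): instead of optimizing the sample directly, reparameterize it as the output of a network fed a fixed random code and fit only the network weights to the measured data. Because the weights are shared across the whole signal, fitting the measured entries also constrains the unmeasured ones, acting as an implicit regularizer.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 32, 5
z = rng.normal(size=n)                       # fixed random input code
h_true = np.ones(m) / m                      # hidden smoothing "weights"

def forward(h):
    """One shared-weight (convolutional) layer: sample = z convolved with h."""
    return np.convolve(z, h, mode="full")[:n]

x_true = forward(h_true)                     # ground-truth sample
mask = rng.random(n) < 0.5                   # only half the entries measured

h = np.zeros(m)                              # network weights to optimize
lr = 0.01
for _ in range(5000):
    resid = mask * (forward(h) - x_true)     # mismatch on measured entries only
    grad = np.array([np.sum(resid[j:] * z[:n - j]) for j in range(m)])
    h -= lr * grad                           # gradient step on the weights

# Weight sharing means fitting the measured half also fills in the rest.
```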

Next, we introduce OCRT, a computational extension of OCT that incorporates extreme angular diversity over OCT's already high wavelength diversity to enable resolution-enhanced, speckle-reduced reconstructions that overcome the lateral-resolution-DOF tradeoff. OCRT also jointly reconstructs quantitative RI maps of the sample using a ray-based physical model implemented in TensorFlow. We also demonstrate spectroscopic OCRT (SOCRT), an extension of spectroscopic OCT (SOCT) that overcomes its tradeoff between spectral and axial resolution.
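The flavor of ray-based physics underlying OCRT's RI model can be sketched with the standard vector form of Snell's law (the dissertation's actual model traces rays through a continuous RI map in TensorFlow; this is only an illustrative single-interface example).

```python
import numpy as np

def refract(d, normal, n1, n2):
    """Refract unit direction d at an interface with unit normal (Snell's law)."""
    cos_i = -np.dot(d, normal)
    r = n1 / n2
    sin2_t = r**2 * (1 - cos_i**2)
    if sin2_t > 1:
        return None  # total internal reflection
    cos_t = np.sqrt(1 - sin2_t)
    return r * d + (r * cos_i - cos_t) * normal

d = np.array([np.sin(np.radians(30)), -np.cos(np.radians(30))])  # 30 deg incidence
t = refract(d, np.array([0.0, 1.0]), 1.0, 1.33)                  # air -> water
# Snell check: n1 sin(theta_i) == n2 sin(theta_t)
assert np.isclose(1.0 * np.sin(np.radians(30)), 1.33 * t[0])
```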

Motivated to make OCRT more widely applicable, we propose a new use of conic-section (e.g., parabolic, ellipsoidal) mirrors to enable fast multi-view imaging over very wide angular ranges (up to 360°) using galvanometers, without requiring sample rotation. We theoretically characterize the achievable fields of view (FOVs) as a function of many imaging system parameters (e.g., NA, wavelength, incidence angle, focal length, and telecentricity). Based on these predictions, we construct a parabolic-mirror-based imaging system that captures multi-view OCT volumes with millimetric FOVs over angular ranges up to ±75°, which we combine to perform 3D OCRT reconstructions of zebrafish, fruit fly, and mouse tissue.
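The geometric property that makes a parabolic mirror attractive for multi-view scanning is a textbook one (sketched below; this is not the dissertation's full FOV analysis): every ray parallel to the axis reflects through the common focus, so steering a collimated beam across the mirror pivots it about a single point in the sample.

```python
import numpy as np

f = 25.0  # focal length (mm, illustrative value)

def reflect_parallel_ray(x0):
    """Reflect a downward ray hitting the parabola y = x^2/(4f) at x = x0."""
    y0 = x0**2 / (4 * f)
    n = np.array([-x0 / (2 * f), 1.0])
    n /= np.linalg.norm(n)                 # unit surface normal
    d = np.array([0.0, -1.0])              # incoming ray, parallel to axis
    r = d - 2 * np.dot(d, n) * n           # law of reflection
    return np.array([x0, y0]), r

for x0 in (5.0, 15.0, 30.0):
    p, r = reflect_parallel_ray(x0)
    t = (0.0 - p[0]) / r[0]                # march the ray to the optical axis
    assert np.isclose(p[1] + t * r[1], f)  # ...where it crosses at the focus
```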

Finally, we adapt the OCRT reconstruction algorithm to photogrammetric 3D mesoscopic imaging with tens-of-micron accuracy, using a sequence of smartphone camera images taken at close range under freehand motion. 3D estimation is possible thanks to the angular diversity afforded by the nontelecentricity of smartphone cameras, using a ray-based model similar to that of OCRT. We show that careful modeling of lens distortion and incorporation of a DIP are both pivotal to achieving high 3D accuracy with devices not designed for close-range imaging.
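The kind of radial lens distortion that close-range photogrammetry must model can be sketched with a Brown-Conrady-style polynomial (illustrative coefficients, not the dissertation's calibrated values); inverting it has no closed form, so a fixed-point iteration is a common choice.

```python
import numpy as np

def distort(xy, k1, k2):
    """Map ideal normalized image coordinates to radially distorted ones."""
    r2 = np.sum(xy**2, axis=-1, keepdims=True)
    return xy * (1 + k1 * r2 + k2 * r2**2)

def undistort(xy_d, k1, k2, iters=20):
    """Invert the distortion model by fixed-point iteration."""
    xy = xy_d.copy()
    for _ in range(iters):
        r2 = np.sum(xy**2, axis=-1, keepdims=True)
        xy = xy_d / (1 + k1 * r2 + k2 * r2**2)
    return xy

pts = np.array([[0.3, 0.2], [-0.5, 0.4]])
pts_d = distort(pts, k1=-0.1, k2=0.02)       # barrel-type distortion
assert np.allclose(undistort(pts_d, -0.1, 0.02), pts, atol=1e-7)
```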





Zhou, Kevin (2021). Computational 3D Optical Imaging Using Wavevector Diversity. Dissertation, Duke University.


Duke student scholarship is made available to the public using a Creative Commons Attribution / Non-commercial / No derivative (CC-BY-NC-ND) license.