Development of Coherent LiDAR for Mesoscopic-Scale Applications

Date

2025


Abstract

This dissertation presents the development of coherent Light Detection and Ranging (LiDAR) technologies, encompassing advances to existing methodologies and novel approaches. These systems address key challenges in mesoscopic-scale applications and demonstrate their potential in fields such as augmented/virtual reality (AR/VR), robotic vision, facial recognition, and biomedical imaging.

Many emerging technical domains demand fast, accurate three-dimensional (3D) surface imaging at the mesoscopic scale (~mm to m). While numerous 3D imaging techniques have been explored, LiDAR has gained significant attention, particularly in the autonomous driving industry for long-range imaging and ranging (~m to km). However, adapting LiDAR systems to mesoscopic-scale applications presents several challenges: (1) limited imaging speed and field of view (FOV) due to reliance on mechanical scanning, (2) insufficient depth accuracy, as many applications demand sub-millimeter precision, and (3) inefficient spectral bandwidth utilization, which drives up costs, especially for coherent LiDAR systems.

First, to overcome the limitations in imaging speed and depth accuracy, we developed a high-speed frequency-modulated continuous-wave (FMCW) LiDAR system. It integrates grating-based beam steering with compressed time-frequency analysis to achieve densely sampled 3D imaging at video rates, delivering sub-millimeter depth accuracy across tens of centimeters at a depth-voxel acquisition rate of 7.6 MHz. The system successfully imaged both static and dynamic targets, such as a flexing human hand. Despite these advantages, its field of view was constrained by the sample-arm design. Next, to address this limitation, we redesigned the system and incorporated a reflecting telescope into the sample arm.
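The FMCW ranging principle summarized above can be illustrated with a minimal numerical sketch. The parameters below (a 100 GHz chirp over 10 µs, a target at 25 cm) are hypothetical and not taken from the dissertation; the sketch only demonstrates the standard FMCW relation R = c·f_beat / (2γ), where γ is the chirp rate and f_beat the frequency of the beat note produced by mixing the returned light with the local oscillator.

```python
import numpy as np

# Hypothetical parameters (not from the dissertation): a 100 GHz chirp
# swept over 10 us, and a target at 0.25 m -- a mesoscopic-scale range.
c = 3e8                  # speed of light, m/s
B = 100e9                # chirp bandwidth, Hz
T = 10e-6                # chirp duration, s
R_true = 0.25            # target range, m

gamma = B / T            # chirp rate, Hz/s
tau = 2 * R_true / c     # round-trip delay, s
f_beat = gamma * tau     # beat frequency after coherent mixing, Hz

# Sample the beat note and locate its frequency with an FFT,
# as a stand-in for the system's time-frequency analysis.
fs = 200e6               # ADC sample rate, Hz
t = np.arange(0, T, 1 / fs)
signal = np.cos(2 * np.pi * f_beat * t)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
f_est = freqs[np.argmax(spectrum)]

R_est = c * f_est / (2 * gamma)   # invert the FMCW range relation
print(f"estimated range: {R_est * 100:.1f} cm")
```

With these numbers the FFT bin spacing corresponds to a range step of about 1.5 mm, so the recovered range lands within a few millimeters of the true value; achieving the sub-millimeter accuracy reported in the abstract requires the wider bandwidth and denser spectral analysis of the actual system.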
The improved system produces 3D depth maps at 33 Hz with an expanded FOV of 48° × 68° and a 32.8-cm depth range. At a resolution of 507 × 500 pixels, it captures quantitative depth, reflectivity, and velocity measurements of static and dynamic objects, including a moving robotic arm, significantly enhancing imaging capability for mesoscopic-scale applications.

Finally, we developed a phase-resolved coherent LiDAR system that combines synthetic-wavelength phase-based ranging with line-scan off-axis holography. This system achieves micron-scale depth precision, leveraging an akinetic tunable laser for rapid wavelength switching and a galvanometer mirror for slow-axis scanning. It operates with a field of view of 12.8 mm × 34 mm and a 50-ms image acquisition time while addressing challenges such as shot noise and speckle noise. By using spectral bandwidth efficiently, it reduces cost compared with other coherent LiDAR systems, such as FMCW LiDAR, and it demonstrated the ability to image objects made of a variety of materials.

These contributions advance the field of coherent LiDAR, enabling precise, high-resolution, high-speed depth sensing at the mesoscopic scale. The demonstrated systems lay the groundwork for transformative technologies in applications such as robotic vision, industrial inspection, and immersive digital experiences like AR/VR.
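Synthetic-wavelength phase-based ranging, the principle behind the third system, can also be sketched numerically. The wavelengths below are illustrative assumptions (two lines near 1550 nm, not values from the dissertation): the phases measured at two nearby wavelengths are individually wrapped every half optical wavelength, but their difference varies on the much longer synthetic wavelength Λ = λ₁λ₂ / |λ₁ − λ₂|, extending the unambiguous depth range while retaining interferometric precision.

```python
import numpy as np

# Hypothetical wavelengths (not from the dissertation): two lines near
# 1550 nm separated by 1 nm give a synthetic wavelength of ~2.4 mm.
lam1 = 1550e-9           # first wavelength, m
lam2 = 1551e-9           # second wavelength, m
Lam = lam1 * lam2 / abs(lam2 - lam1)   # synthetic wavelength, m

z_true = 300e-6          # surface height, within the unambiguous range Lam/2

# Each wavelength yields a wrapped round-trip phase (factor 4*pi/lambda
# because the light travels to the surface and back).
phi1 = (4 * np.pi / lam1) * z_true % (2 * np.pi)
phi2 = (4 * np.pi / lam2) * z_true % (2 * np.pi)

# The synthetic phase is the wrapped difference; it equals 4*pi*z/Lam.
dphi = (phi1 - phi2) % (2 * np.pi)

z_est = Lam * dphi / (4 * np.pi)   # invert the synthetic-wavelength relation
print(f"recovered height: {z_est * 1e6:.1f} um")
```

Here Λ ≈ 2.4 mm, so heights up to about 1.2 mm can be recovered without phase unwrapping, consistent with the micron-scale precision over mesoscopic depth ranges that the abstract describes.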

Subjects

Optics, Biomedical engineering, Electrical engineering, 3D imaging, Interferometer, LiDAR, OCT, Synthetic wavelength

Citation
Zhang, Jingkai (2025). Development of Coherent LiDAR for Mesoscopic-Scale Applications. Dissertation, Duke University. Retrieved from https://hdl.handle.net/10161/32661.

Except where otherwise noted, student scholarship that was shared on DukeSpace after 2009 is made available to the public under a Creative Commons Attribution / Non-commercial / No derivatives (CC-BY-NC-ND) license. All rights in student work shared on DukeSpace before 2009 remain with the author and/or their designee, whose permission may be required for reuse.