Optical Design for Parallel Cameras

The majority of imaging systems require optical lenses to increase light throughput and to form an isomorphic mapping. Advances in optical lenses improve observing power. However, as imaging resolution reaches the magnitude of $10^8$ pixels or higher, as in gigapixel cameras, the conventional monolithic lens architecture and processing routine are no longer sustainable, owing to the nonlinear growth of optical size, weight, complexity, and therefore overall cost. Information efficiency, measured in pixels per unit cost, drops drastically as the aperture size and field of view (FoV) march toward extreme values. On the one hand, reducing the up-scaled wavefront error to a fraction of a wavelength requires more surfaces and more complex surface figures. On the other hand, the scheme of sampling three-dimensional scenes with a single two-dimensional aperture does not scale well as the sampling space expands. Correction for shift-variant sampling and the non-uniform luminance aggravated by wide field angles can easily lead to an explosion in lens complexity.
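The scaling argument can be illustrated with a rough diffraction-limit estimate (a sketch with illustrative numbers of our own choosing, not figures from the dissertation): the number of resolvable spots a diffraction-limited aperture supports across a wide field already reaches gigapixel scale for quite modest apertures, so the bottleneck is lens complexity and cost, not diffraction.

```python
import math

def diffraction_limited_spots(aperture_m, fov_deg, wavelength_m=550e-9):
    """Rough count of resolvable spots across a square FoV for a
    diffraction-limited circular aperture (Rayleigh criterion,
    small-angle approximation; illustrative only)."""
    theta = 1.22 * wavelength_m / aperture_m   # angular resolution, rad
    fov_rad = math.radians(fov_deg)
    spots_per_axis = fov_rad / theta
    return spots_per_axis ** 2

# A modest 10 mm aperture over a 120-degree field already supports
# on the order of 10^9 resolvable spots -- gigapixel-class sampling.
n = diffraction_limited_spots(aperture_m=0.01, fov_deg=120)
print(f"{n:.2e}")
```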

Parallel cameras utilize multiple apertures and discrete focal planes to reduce camera complexity through the principle of divide and conquer, preserving the high information efficiency of small-aperture, narrow-FoV lenses. Modular design also gives flexibility in configuration and reconfiguration, allows easy adaptation, and keeps maintenance inexpensive.

Multiscale lens design utilizes optical elements at various size scales: large-aperture optics collect light coherently, while small-aperture optics enable efficient light processing. Monocentric multiscale (MMS) lenses exemplify this idea by adopting a multi-layered spherical lens as the front objective and an array of microcameras at the rear that segment and relay the wide-field image onto disjoint focal planes. First-generation as-constructed MMS lenses adopted the Keplerian style, which features a real intermediate image surface. In this dissertation, we investigate another design style, termed "Galilean", which eliminates the intermediate image surface and thereby leads to significantly reduced lens size and weight.

The FoV shape of a parallel camera is determined by the formation of its camera array. Arranging the array cameras in myriad formations allows the FoV to be captured in different shapes. This flexibility in FoV format facilitates customized camera applications and new visual experiences.

Parallel cameras can consist of dozens or even hundreds of imaging channels, each requiring an independent focusing mechanism for all-in-focus capture. The tight budget on packaging space and expense demands small, inexpensive focusing mechanisms. This dissertation addresses the problem with the voice coil motor (VCM) based focusing mechanisms found on mobile platforms. We propose miniaturized optics in long-focal-length designs, which reduces the traveling range of the focusing group and enables universal focus.
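The travel-range argument can be sketched with the thin-lens relation: starting from infinity focus, refocusing on an object at distance $o$ shifts the image plane by $\Delta = f^2/(o - f)$, so a focusing group with a shorter focal length needs far less travel. The numbers below are illustrative assumptions, not values from the dissertation.

```python
def focus_travel_mm(f_mm, object_mm):
    """Thin-lens image-plane shift when refocusing from infinity to an
    object at distance object_mm: delta = f^2 / (o - f)."""
    return f_mm ** 2 / (object_mm - f_mm)

# Refocusing at 1 m: a 25 mm focusing group must travel ~0.64 mm, while
# a miniaturized 5 mm group travels only ~0.025 mm -- comfortably within
# the stroke of a mobile-phone VCM.
print(round(focus_travel_mm(25, 1000), 3))   # 0.641
print(round(focus_travel_mm(5, 1000), 3))    # 0.025
```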

Along the same line of building cost-efficient, small lens systems, we explore ways of making thin lenses with low telephoto ratios. We illustrate a catadioptric design achieving a telephoto ratio of 0.35. The combination of high-index materials and metasurfaces can push this value down to 0.18, as shown by one of our design examples.
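The telephoto ratio is conventionally defined as total track length (front vertex to image plane) divided by effective focal length, so the quoted ratios directly bound lens thickness. A quick sketch of what they imply (the 100 mm focal length is an illustrative assumption):

```python
def total_track_mm(efl_mm, telephoto_ratio):
    """Total track length implied by a telephoto ratio,
    defined as TTL / EFL."""
    return efl_mm * telephoto_ratio

# For an illustrative 100 mm focal length:
print(total_track_mm(100, 0.35))  # ~35 mm  (catadioptric design)
print(total_track_mm(100, 0.18))  # ~18 mm  (high-index + metasurface)
```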





Pang, Wubin (2020). Optical Design for Parallel Cameras. Dissertation, Duke University. Retrieved from https://hdl.handle.net/10161/20986.


Duke's student scholarship is made available to the public using a Creative Commons Attribution / Non-commercial / No derivative (CC-BY-NC-ND) license.