Browsing by Author "Brady, David"
Item Open Access Coded Aperture X-ray Tomographic Imaging with Energy Sensitive Detectors (2017) Hassan, Mehadi
Coherent scatter imaging techniques have experienced a renaissance in the past two decades, driven by advances in detector technology and computational imaging techniques. X-ray diffraction requires precise knowledge of object location and is time consuming; transforming diffractometry into a practical imaging technique requires spatially resolving the sample in three dimensions and speeding up the measurement process. Introducing a coded aperture into a conventional X-ray diffraction system provides 3D localization of the scatterer as well as drastic reductions in acquisition time, owing to the ability to perform multiplexed measurements. This thesis presents two strategies that use coded apertures to address these challenges of X-ray coherent scatter measurement.
The first technique places the coded aperture between source and object to structure the incident illumination. A single pixel detector captures temporally modulated coherent scatter data from an object as it travels through the illumination. From these measurements, 2D spatial and 1D spectral information is recovered at each point within a planar slice of an object. Compared to previous techniques, this approach is able to reduce the overall scan time of objects by 1-2 orders of magnitude.
The second measurement technique demonstrates snapshot coherent scatter tomography. A planar slice of an object is illuminated by a fan beam and the scatter data is modulated by a coded aperture between object and detector. The spatially modulated data is captured with a linear array of energy sensitive detectors, and the recovered data shows that the system can image objects that are 13 mm in range and 2 mm in cross range with a fractional momentum transfer resolution of 15%. The technique also allows a 100x speedup when compared to pencil beam systems using the same components.
Continuing with the theme of snapshot tomography with energy sensitive detectors, I study the impact of detector properties such as detection area, choice of energies, and energy resolution for pencil and fan beam coded aperture coherent scatter systems. I simulate various detector geometries and determine that energy resolution has the largest impact for pencil beam geometries, while detector area has the largest impact for fan beam geometries. These results can be used to build detectors which can potentially help implement pencil and/or fan beam coded aperture coherent scatter systems in applications involving medicine and security.
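The coded-aperture coherent scatter systems described in this item all reduce, after discretization, to a linear measurement model inverted computationally. The following is a minimal sketch of that generic model, not the dissertation's code: the system matrix here is random purely for illustration, whereas a real system matrix encodes the aperture pattern and geometry.

```python
# Generic coded-aperture model y = H f + noise, inverted by damped least squares.
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
n_vox, n_meas = 500, 200                  # object voxels, multiplexed measurements
H = rng.random((n_meas, n_vox)) * (rng.random((n_meas, n_vox)) < 0.5)  # toy system matrix
f_true = np.zeros(n_vox)
f_true[rng.choice(n_vox, 10, replace=False)] = 1.0   # sparse scatter distribution

y = H @ f_true + 0.01 * rng.standard_normal(n_meas)  # multiplexed, noisy data

# Tikhonov-style damped least squares for the underdetermined inversion.
f_est = lsqr(H, y, damp=0.1)[0]
print("strongest recovered voxels:", np.sort(np.argsort(f_est)[-10:]))
```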
Item Open Access Coded Measurement for Imaging and Spectroscopy (2009) Portnoy, Andrew David
This thesis describes three computational optical systems and their underlying coding strategies. These codes are useful in a variety of optical imaging and spectroscopic applications. Two multichannel cameras are described. They both use a lenslet array to generate multiple copies of a scene on the detector. Digital processing combines the measured data into a single image. The visible system uses focal plane coding, and the long wave infrared (LWIR) system uses shift coding. With proper calibration, the multichannel interpolation results recover contrast for targets at frequencies beyond the aliasing limit of the individual subimages. This thesis also describes a LWIR imaging system that simultaneously measures four wavelength channels, each with narrow bandwidth. In this system, lenses, aperture masks, and dispersive optics implement a spatially varying spectral code.
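The idea of recovering detail beyond each subimage's aliasing limit can be illustrated with a shift-and-add toy model (my own construction under idealized assumptions, not the thesis implementation): channels with known subpixel shifts interleave onto a finer grid.

```python
# Four half-pixel-shifted channels give a 2x denser sampling grid.
import numpy as np

def shift_and_add(subimages, shifts, factor):
    """subimages: list of HxW channels; shifts: integer offsets on the fine grid."""
    H, W = subimages[0].shape
    fine = np.zeros((H * factor, W * factor))
    weight = np.zeros_like(fine)
    for img, (dy, dx) in zip(subimages, shifts):
        fine[dy::factor, dx::factor] += img
        weight[dy::factor, dx::factor] += 1.0
    return fine / np.maximum(weight, 1)

scene = np.random.rand(128, 128)
offsets = [(dy, dx) for dy in (0, 1) for dx in (0, 1)]
subs = [scene[dy::2, dx::2] for dy, dx in offsets]      # aliased low-res channels
recon = shift_and_add(subs, offsets, factor=2)
assert np.allclose(recon, scene)                         # ideal case: exact recovery
```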
Item Open Access Coding Strategies and Implementations of Compressive Sensing (2016) Tsai, Tsung-Han
This dissertation studies the coding strategies of computational imaging to overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of optics, such as aperture size, detector pixels, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager. Increasing sensitivity in any one dimension can significantly compromise the others.
This research implements various coding strategies for optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to extract greater bandwidth and sensitivity from conventional sensors. We discuss the hardware architecture, compression strategies, sensing-process modeling, and reconstruction algorithm of each sensing system.
Optical multidimensional imaging measures three or more dimensions of the optical signal. Traditional multidimensional imagers acquire extra dimensional information at the cost of degraded temporal or spatial resolution. Compressive multidimensional imaging multiplexes the transverse spatial, spectral, temporal, and polarization information onto a two-dimensional (2D) detector. The corresponding spectral, temporal, and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplexed measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal penalty in spatial resolution and noise while maintaining or improving temporal resolution. The experimental results show that appropriate coding strategies can increase sensing capacity by a factor of several hundred.
The human auditory system has an astonishing ability to localize, track, and filter selected sound sources or information from a noisy environment. Accomplishing the same task with engineering usually requires multiple detectors, advanced computational algorithms, or artificial intelligence systems. Compressive acoustic sensing incorporates acoustic metamaterials into compressive sensing theory to emulate sound localization and selective attention. This research investigates and optimizes the sensing capacity and spatial sensitivity of the acoustic sensor. The well-modeled acoustic sensor can localize multiple speakers in both stationary and dynamic auditory scenes, and can distinguish mixed conversations from independent sources with a high audio recognition rate.
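The multi-speaker localization just described is, in compressive sensing terms, a sparse recovery problem. A hedged sketch of the general idea, with a random matrix standing in for the metamaterial sensor's (assumed known) responses to candidate source directions:

```python
# Sparse source localization y = A x via orthogonal matching pursuit.
import numpy as np

def omp(A, y, k):
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))  # best-matching column
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((64, 180))            # 64 coded measurements, 180 candidate angles
A /= np.linalg.norm(A, axis=0)
x_true = np.zeros(180)
x_true[[30, 95]] = [1.0, 0.7]                 # two simultaneous speakers
x_est = omp(A, A @ x_true, k=2)
print(np.nonzero(x_est)[0])                   # expect angles 30 and 95
```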
Item Open Access Coding Strategies for X-ray Tomography (2016) Holmgren, Andrew
This work focuses on the construction and application of coded apertures to compressive X-ray tomography. Coded apertures can be made in a number of ways, each method having an impact on system background and signal contrast. Methods of constructing coded apertures for structuring X-ray illumination and scatter are compared and analyzed. Apertures can create structured X-ray bundles that investigate specific sets of object voxels. The tailored bundles of rays form a code (or pattern) and are later estimated through computational inversion. Structured illumination can be used to subsample object voxels and make inversion feasible for low dose computed tomography (CT) systems, or it can be used to reduce background in limited angle CT systems.
On the detection side, coded apertures modulate X-ray scatter signals to determine the position and radiance of scatter points. By forming object dependent projections in measurement space, coded apertures multiplex modulated scatter signals onto a detector. The multiplexed signals can be inverted with knowledge of the code pattern and system geometry. This work shows two systems capable of determining object position and type in a 2D plane, by illuminating objects with an X-ray "fan beam," using coded apertures and compressive measurements. Scatter tomography can help identify materials in security and medicine that may be ambiguous with transmission tomography alone.
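Decoding multiplexed data "with knowledge of the code pattern" can be illustrated in one dimension (a toy of my own, with a random binary code rather than the dissertation's apertures): the measurement is the scene convolved with the code, and correlating against the code concentrates each scatter point back to its position.

```python
# Coded-aperture multiplexing and correlation decoding in 1D.
import numpy as np

rng = np.random.default_rng(2)
code = rng.integers(0, 2, 101).astype(float)       # binary coded aperture
scene = np.zeros(400)
scene[[120, 250]] = 1.0                            # two scatter centers
measurement = np.convolve(scene, code, mode="same")            # multiplexed data
decoded = np.correlate(measurement, code - code.mean(), mode="same")
print(np.sort(np.argsort(decoded)[-2:]))           # peaks near 120 and 250
```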
Item Open Access Compressive holography. (2012) Lim, Se Hoon
Compressive holography estimates images from incomplete data by using sparsity priors. It combines digital holography and compressive sensing. Digital holography consists of computational image estimation from data captured by an electronic focal plane array; compressive sensing enables accurate reconstruction from such data using prior knowledge of the desired signal. Computational and optical co-design optimally supports compressive holography in the joint computational and optical domain. This dissertation explores two examples of compressive holography: estimation of 3D tomographic images from 2D data, and estimation of images from undersampled apertures. Compressive holography achieves single-shot holographic tomography using decompressive inference. In general, 3D image reconstruction suffers from underdetermined measurements with a 2D detector. Specifically, single-shot holographic tomography exhibits a uniqueness problem in the axial direction because the inversion is ill-posed; compressive sensing alleviates this by enforcing sparsity constraints. Holographic tomography is applied to video-rate microscopic imaging and diffuse object imaging. In diffuse object imaging, sparsity priors are not valid in a coherent image basis due to speckle, so incoherent image estimation is designed to preserve sparsity in an incoherent image basis with the support of multiple speckle realizations. High-pixel-count holography achieves high-resolution, wide field-of-view imaging. Coherent aperture synthesis is one method of increasing the effective aperture size of a detector; scanning-based synthetic aperture confronts a multivariable global optimization problem due to time-space measurement errors, and a hierarchical estimation strategy divides the global problem into multiple local problems with the support of computational and optical co-design. Compressive sparse aperture holography is another method: because object-scattered fields are locally redundant, compressive sparse sampling collects most of the significant field information with a small fill factor. Incoherent image estimation is adopted for the expanded modulation transfer function and compressive reconstruction.
Item Open Access Compressive Spectral and Coherence Imaging (2010) Wagadarikar, Ashwin Ashok
This dissertation describes two computational sensors that were used to demonstrate applications of generalized sampling of the optical field. The first sensor was an incoherent imaging system designed for compressive measurement of the power spectral density in the scene (spectral imaging). The other sensor was an interferometer used to compressively measure the mutual intensity of the optical field (coherence imaging) for imaging through turbulence. Each sensor made anisomorphic measurements of the optical signal of interest, and digital post-processing of these measurements was required to recover the signal. The optical hardware and post-processing software were co-designed to permit acquisition of the signal of interest with sub-Nyquist-rate sampling, given the prior information that the signal is sparse or compressible in some basis.
Compressive spectral imaging was achieved by a coded aperture snapshot spectral imager (CASSI), which used a coded aperture and a dispersive element to modulate the optical field and capture a 2D projection of the 3D spectral image of the scene in a snapshot. Prior information of the scene, such as piecewise smoothness of objects in the scene, could be enforced by numerical estimation algorithms to recover an estimate of the spectral image from the snapshot measurement.
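The CASSI measurement just described can be written schematically in a few lines. This is a sketch under simplifying assumptions (unit magnification, one detector pixel of dispersion per spectral channel), not the exact instrument model:

```python
# Schematic CASSI forward model: mask, shear, and sum onto a 2D detector.
import numpy as np

rng = np.random.default_rng(3)
H, W, L = 64, 64, 8                     # rows, columns, spectral channels
cube = rng.random((H, W, L))            # scene datacube (x, y, lambda)
mask = rng.integers(0, 2, (H, W))       # binary coded aperture

snapshot = np.zeros((H, W + L - 1))     # detector, widened by the dispersion
for l in range(L):
    snapshot[:, l:l + W] += mask * cube[:, :, l]   # shear-and-sum measurement

print(snapshot.shape)                   # (64, 71): one 2D frame encodes the 3D cube
```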
Hypothesizing that turbulence between the scene and CASSI would introduce spectral diversity of the point spread function, CASSI's snapshot spectral imaging capability could be used to image objects in the scene through the turbulence. However, no turbulence-induced spectral diversity of the point spread function was observed experimentally. Thus, coherence functions, which are multi-dimensional functions that completely determine optical fields observed by intensity detectors, were considered. These functions have previously been used to image through turbulence after extensive and time-consuming sampling of such functions. Thus, compressive coherence imaging was attempted as an alternative means of imaging through turbulence.
Compressive coherence imaging was demonstrated by using a rotational shear interferometer to measure just a 2D subset of the 4D mutual intensity, a coherence function that captures the optical field correlation between all the pairs of points in the aperture. By imposing a sparsity constraint on the possible distribution of objects in the scene, both the object distribution and the isoplanatic phase distortion induced by the turbulence could be estimated with the small number of measurements made by the interferometer.
Item Open Access Computational Mass Spectrometry (2015) Chen, Evan Xuguang
Conventional mass spectrometry sensing is isomorphic in nature: it measures the input mass spectrum abundance function with an assembly of delta-function-like responses to avoid ambiguity. However, this delta-function character of the traditional sensing approach imposes trade-offs between mass resolution and throughput/mass analysis time. This dissertation proposes a new approach to mass spectrometry sensing that combines computational signal processing with hardware modification to break these trade-offs. We introduce the concept of a generalized sensing matrix, or discretized forward model, to the mass spectrometry field. The forward model bridges the gap between sensing-system hardware design and computational sensing algorithms, including compressive sensing, feature/variable-selection machine learning algorithms, and state-of-the-art inversion algorithms.
Throughout this dissertation, the main theme is sensing matrix/forward model design subject to the physical constraints of various types of mass analyzers. For quadrupole ion trap systems, we develop a new compressive and multiplexed mass analysis approach, multi-Resonant Frequency Excitation (mRFE) ejection, which can reduce mass analysis time by a factor of 3-6 without losing mass-spectral specificity for chemical classification. A new information-theoretic adaptive sensing and classification framework is proposed for quadrupole mass filter systems; it significantly reduces the number of measurements needed while achieving a high level of classification accuracy. Furthermore, we present a coded aperture sector mass spectrometer that yields an order-of-magnitude throughput gain without compromising mass resolution compared to a conventional single-slit sector mass spectrometer.
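The throughput argument behind such coded apertures is classical Hadamard multiplexing. A back-of-the-envelope illustration (my own toy; the dissertation's aperture codes differ): with an S-matrix code, roughly half of all channels are open in every measurement instead of a single slit, yet the code remains exactly invertible, so resolution is preserved.

```python
# S-matrix multiplexing: many open channels per measurement, exact recovery.
import numpy as np
from scipy.linalg import hadamard

n = 32
S = (1 - hadamard(n)[1:, 1:]) // 2      # (n-1)x(n-1) binary S-matrix, ~n/2 ones per row
x = np.random.rand(n - 1)               # spectrum over n-1 bins
y = S @ x                               # each measurement sums ~n/2 channels at once
x_hat = np.linalg.solve(S, y)           # the code is invertible
print(int(S.sum(axis=1)[0]), np.allclose(x_hat, x))   # 16 open channels; exact recovery
```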
Item Open Access Computational Optical Imaging Systems for Spectroscopy and Wide Field-of-View Gigapixel Photography (2013) Kittle, David S.
This dissertation explores computational optical imaging methods to circumvent the physical limitations of classical sensing. An ideal imaging system would maximize resolution in time, spectral bandwidth, three-dimensional object space, and polarization. Practically, increasing any one parameter will correspondingly decrease the others.
Spectrometers strive to measure the power spectral density of the object scene. Traditional pushbroom spectral imagers acquire high spectral and spatial resolution at the expense of acquisition time, whereas multiplexed spectral imagers acquire spectral and spatial information at each instant of time. Using a coded aperture and a dispersive element, the coded aperture snapshot spectral imagers (CASSI) described here leverage correlations between voxels in the spatial-spectral data cube to compressively sample the power spectral density with minimal loss in spatial-spectral resolution while maintaining high temporal resolution.
Photography is limited by similar physical constraints. Low f/# systems are required for high spatial resolution, to circumvent diffraction limits and allow more photon transfer to the film plane, but they require larger optical volumes and more optical elements. Wide-field systems similarly suffer from increasing complexity and optical volume. By incorporating a multi-scale optical system, the f/#, resolving power, optical volume, and field of view become much less coupled. This system uses a single objective lens that images onto a curved spherical focal plane, which is relayed by small micro-optics to discrete focal planes. This design methodology allows for gigapixel designs at low f/# that weigh only a few pounds and are smaller than a one-foot hemisphere.
Computational imaging systems add the necessary steps of forward modeling and calibration. Since the mapping from object space to image space is no longer directly readable, post-processing is required to recover the desired data. The CASSI system uses an undersampled measurement matrix that requires inversion, while the multi-scale camera requires image stitching and compositing methods for the billions of pixels in the image. Calibration methods and a testbed developed specifically for these computational imaging systems are demonstrated.
Item Open Access Computational spectral microscopy and compressive millimeter-wave holography (2010) Fernandez, Christy Ann
This dissertation describes three computational sensors. The first sensor is a scanning multi-spectral aperture-coded microscope containing a coded aperture spectrometer that is vertically scanned through a microscope intermediate image plane. The spectrometer aperture code spatially encodes the object spectral data, and nonnegative least squares inversion combined with a series of reconfigured two-dimensional (2D spatial-spectral) scanned measurements enables three-dimensional (3D) (x, y, λ) object estimation. The second sensor is a coded aperture snapshot spectral imager that employs a compressive optical architecture to record a spectrally filtered projection of a 3D object data cube onto a 2D detector array. Two nonlinear and adapted TV-minimization schemes are presented for 3D (x, y, λ) object estimation from a 2D compressed snapshot. Both sensors are interfaced to laboratory-grade microscopes and applied to fluorescence microscopy. The third sensor is a millimeter-wave holographic imaging system that is used to study the impact of 2D compressive measurement on 3D (x, y, z) data estimation. Holography is a natural compressive encoder, since a 3D parabolic slice of the object band volume is recorded onto a 2D planar surface. An adapted nonlinear TV-minimization algorithm is used for 3D tomographic estimation from a 2D hologram and from a sparse 2D hologram composite. This strategy aims to reduce the scan-time costs associated with millimeter-wave image acquisition using a single-pixel receiver.
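TV-minimization of the kind invoked above can be sketched generically as proximal gradient descent with a total-variation denoiser as the proximal step. This assumes scikit-image for the TV step and a random measurement matrix; the dissertation's adapted solvers and physical forward operators differ.

```python
# TV-regularized reconstruction by proximal gradient descent (generic sketch).
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def tv_pgd(A, y, shape, step=0.1, weight=0.05, iters=100):
    x = np.zeros(shape)
    for _ in range(iters):
        grad = (A.T @ (A @ x.ravel() - y)).reshape(shape)          # data-fidelity gradient
        x = denoise_tv_chambolle(x - step * grad, weight=weight)   # TV proximal step
    return x

rng = np.random.default_rng(4)
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0                                 # piecewise-constant object
A = rng.standard_normal((512, 1024)) / np.sqrt(512)   # 2x-compressed measurement
y = A @ img.ravel()
rec = tv_pgd(A, y, img.shape)
print(float(np.abs(rec - img).mean()))                # small residual error
```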
Item Open Access Implicit and Explicit Codes For Diffraction Tomography (2014) Mrozack, Alexander
Diffraction tomography estimates the scattering density of an object from measurements of a scattered coherent field. This work aims to overcome many of the constraints and limitations of the current state of the art. In general, these constraints present themselves as physical and cost limitations. The limitations "encode" the data, giving rise to the title of this dissertation. Implicit coding is the encoding of the data by the acquisition system; for instance, coherent scatter is bound to be sampled on specific arcs in the Fourier space of the scattering density. Explicit coding is the choice of how the data is sampled within the implicit coding limitations: the beam patterns of an antenna may be optimized to better detect certain types of targets, or datasets may be subsampled if prior knowledge of the scene is introduced in some way.
We investigate both of these types of data coding, introduce a method for sampling a particular type of scene with high efficiency, and present strategies for overcoming a specific type of implicit data encoding, known as speckle, which is detrimental to "pure" image estimation. The final chapter of this dissertation incorporates both implicit and explicit coding strategies, to demonstrate the importance of taking both into account for a new paradigm in diffraction tomography known as frequency diversity imaging. Frequency diversity imaging explicitly encodes coherent fields on the illumination wavelength. Combining this paradigm with speckle estimation requires a new way to evaluate the quality of explicit codes.
Item Open Access Improving Radar Imaging with Computational Imaging and Novel Antenna Design (2017) Zhu, Ruoyu
Traditional radar imaging systems are implemented using the focal plane technique, steering-beam antennas, or synthetic aperture imaging. These conventional methods require either a large number of sensors to form a focal plane array, similar to the idea of an optical camera, or a single transceiver mechanically scanning the field of view. The former results in expensive systems, whereas the latter results in long acquisition times. Computational imaging methods are widely used for their ability to acquire information beyond the recorded pixels, and are thus ideal options for reducing the number of sensors in radar imaging systems. Novel antenna designs, such as frequency diverse antennas, make it possible to optimize antennas for computational imaging algorithms. This thesis seeks to improve the efficiency of radar imaging using a method that combines computational imaging and novel antenna designs. The thesis first proposes two solutions that address the two sides of the tradeoff, namely the number of sensors and mechanical scanning. A time-of-flight imaging algorithm with a sparse array of antennas is proposed to reduce the number of sensors required to estimate a reflective surface. An adaptive algorithm based on the Bayesian compressive sensing framework is proposed to minimize mechanical scanning in synthetic aperture imaging systems. The thesis then explores the feasibility of further improving radar imaging systems by combining computational imaging and antenna design methods. A rapid prototyping method for manufacturing custom-designed antennas is developed so that antenna designs can be implemented quickly in a laboratory environment. This method facilitated the design of a frequency diverse antenna based on a leaky waveguide, which can be used within a computational imaging framework to perform 3D imaging. The proposed system is capable of imaging and target localization using only one antenna and no mechanical scanning, and is thus a promising solution for ultimately improving the efficiency of radar imaging.
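The time-of-flight idea with a sparse antenna array reduces, in its simplest form, to solving range equations. A toy version (my construction, not the thesis code): each antenna measures a round-trip delay to a reflector, and nonlinear least squares recovers the reflector position.

```python
# Time-of-flight localization with a sparse antenna array.
import numpy as np
from scipy.optimize import least_squares

C = 3e8
antennas = np.array([[0.0, 0], [0.5, 0], [1.0, 0], [1.5, 0]])   # sparse linear array (m)
target = np.array([0.8, 2.3])
tof = 2 * np.linalg.norm(antennas - target, axis=1) / C          # measured round trips
ranges = C * tof / 2                                             # equivalent one-way ranges

def residuals(p):
    return np.linalg.norm(antennas - p, axis=1) - ranges

est = least_squares(residuals, x0=np.array([0.0, 1.0])).x
print(est)   # approximately [0.8, 2.3]
```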
Item Open Access Inline holographic coherent anti-Stokes Raman microscopy. (Opt Express, 2010-04-12) Xu, Qian; Shi, Kebin; Li, Haifeng; Choi, Kerkil; Horisaki, Ryoichi; Brady, David; Psaltis, Demetri; Liu, Zhiwen
We demonstrate a simple approach for inline holographic coherent anti-Stokes Raman scattering (CARS) microscopy, in which a layer of uniform nonlinear medium is placed in front of a specimen to be imaged. The reference wave created by four-wave mixing in the nonlinear medium can interfere with the CARS signal generated in the specimen to produce an inline hologram. We experimentally and theoretically investigate inline CARS holography and show that it has chemical selectivity and can allow for three-dimensional imaging.
Item Open Access New Urban Structural Change and Racial and Ethnic Inequality in Wages, Homeownership, and Health (2013) Finnigan, Ryan
In 2010, approximately 84% of the American population lived in a metropolitan area. Different metropolitan areas are characterized by distinct labor markets and economies, housing markets and residential patterns, socioeconomic and demographic factors, and according to some, even distinct 'spirits.' The nature and influence of such structural factors lie at the heart of urban sociology, and have particularly profound effects on patterns of racial and ethnic stratification. This dissertation examines new urban structural changes arising within recent decades and their implications for racial/ethnic stratification. Specifically, I study the transition to the 'new economy' and racial/ethnic wage inequality; increases in the level and inequality of housing prices and racial/ethnic stratification in homeownership; and increased income inequality, combined with population aging, and racial/ethnic disparities in disability and poor health. I measure metropolitan-level structural factors and racial/ethnic inequalities with data from 5% samples of the 1980, 1990, and 2000 Censuses; the 2010 American Community Survey (ACS); and the 1999-2001 and 2009-2011 Current Population Surveys (CPS). Cross-sectional multilevel regression models examine the spatial distributions of structural factors and racial/ethnic inequality, and fixed-effects regression models identify the impact of changes in structural factors over time on observed trends in racial stratification. Additionally, I distinguish between effects on minority-white gaps in resource access and effects on minorities' levels of resource access. This dissertation also makes novel contributions to the field by empirically documenting complex patterns of inequality among the country's four largest racial and ethnic groups. Perhaps most relevant to theories of racial stratification, this dissertation demonstrates that seemingly race-neutral structural changes can have racially stratified effects.
Chapter 1 describes the foundational literature in urban sociology and racial/ethnic stratification, and provides an overview of the subsequent chapters. Chapter 2 measures the transition to the 'new economy' with six structural factors of labor markets: skill-biased technological change, financialization, the rise of the creative class, employment casualization, immigration, and deunionization. Overall, the results indicate the observed Latino-white wage gap may be up to 40% larger in 2010 than in the theoretical absence of the new economy, and the black-white wage gap may be up to 31% larger. Chapter 3 focuses on the long-term trend toward higher and more unequally distributed home prices within local housing markets, epitomized by the housing crisis of the late 2000s. Increases in housing market inequality worsen the Asian-white homeownership gap, but narrow the black-white and Latino-white gaps. However, the level of homeownership is reduced for all groups. Chapter 4 empirically tests the frequently debated Income Inequality Hypothesis, that macro-level income inequality undermines population health, and hypothesizes that any negative effect on health is stronger in areas with greater population aging. The results provide no support for the Income Inequality Hypothesis or any of its proposed extensions, but the chapter's analytic approach may be fruitfully applied to future examinations of structural determinants of health. The theoretical and substantive conclusion of the dissertation is that metropolitan areas represent salient and changing structural contexts that significantly shape patterns of racial/ethnic stratification in America.
Item Open Access Optical Design for Parallel Cameras (2020) Pang, Wubin
The majority of imaging systems require optical lenses to increase light throughput as well as to form an isomorphic mapping. Advances in optical lenses improve observing power. However, as imaging resolution reaches the order of 10^8 pixels or higher, as in gigapixel cameras, the conventional monolithic lens architecture and processing routine is no longer sustainable, owing to nonlinear growth in optical size, weight, and complexity, and therefore in overall cost. The information efficiency, measured in pixels per unit cost, drops drastically as the aperture size and field of view (FoV) march toward extreme values. On the one hand, reducing the up-scaled wavefront error to a fraction of a wavelength requires more surfaces and more complex figures. On the other hand, the scheme of sampling 3-dimensional scenes with a single 2-dimensional aperture does not scale well when the sampling space is extended: correcting for shift-variant sampling and the non-uniform luminance aggravated by wide field angles can easily lead to an explosion in lens complexity.
Parallel cameras utilize multiple apertures and discrete focal planes to reduce camera complexity via the principle of divide and conquer. The high information efficiency of lenses with small apertures and narrow FoV is preserved. Also, modular design gives flexibility in configuration and reconfiguration, and provides easy adaptation and inexpensive maintenance.
Multiscale lens design utilizes optical elements at various size scales. Large-aperture optics collect light coherently, and small-aperture optics enable efficient light processing. Monocentric multiscale (MMS) lenses exemplify this idea by adopting a multi-layered spherical lens as the front objective and an array of microcameras at the rear for segmenting and relaying the wide-field image onto disjoint focal planes. First-generation as-constructed MMS lenses adopted the Keplerian style, which features a real intermediate image surface. In this dissertation, we investigate another design style, termed "Galilean", which eliminates the intermediate image surface and therefore leads to significantly reduced lens size and weight.
The FoV shape of a parallel camera is determined by the formation of the camera arrays. Arranging array cameras in myriad formations allows FoV to be captured in different shapes. This flexibility in FoV format arrangement facilitates customized camera applications and new visual experiences.
Parallel cameras can consist of dozens or even hundreds of imaging channels, each requiring an independent focusing mechanism for all-in-focus capture. The tight budget on packaging space and cost calls for a small, inexpensive focusing mechanism. This dissertation addresses the problem with the voice coil motor (VCM) focusing mechanism found on mobile platforms. We propose miniaturized optics in long focal length designs, which reduces the travel range of the focusing group and enables universal focus.
Along the same line of building cost-efficient, small lens systems, we explore ways of making thin lenses with low telephoto ratios. We illustrate a catadioptric design achieving a telephoto ratio of 0.35; the combination of high-index materials and metasurfaces could push this value down to 0.18, as shown by one of our design examples.
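For context, telephoto ratio is conventionally the total track length (front vertex to image plane) divided by the effective focal length, so the quoted ratios translate directly into physical lens length. A quick check with an assumed 100 mm focal length, chosen purely for illustration:

```python
# What the quoted telephoto ratios imply for a hypothetical 100 mm EFL lens.
for name, ratio in [("catadioptric design", 0.35), ("high-index + metasurface design", 0.18)]:
    efl_mm = 100.0                        # assumed focal length, for illustration
    print(f"{name}: a {efl_mm:.0f} mm EFL fits in {ratio * efl_mm:.0f} mm of track length")
```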
Item Open Access Physical Designs in Artificial Neural Imaging (2022) Huang, Qian
Artificial neural networks fundamentally shift the paradigm of computational imaging. Powerful neural processing is not only taking the place of conventional algorithms, but also embracing radical and physically plausible forward models that better sample the high-dimensional light field. Physical designs of sampling in turn tailor simulation and neural algorithms for optimal inverse estimation. Sampling, simulation, and neural algorithms are three essential components composing a novel imaging paradigm, artificial neural imaging, in which they interact and improve one another in an upward spiral.
Here we present three concrete examples of artificial neural imaging and the important roles physical designs play. In all-in-focus imaging, autofocus, sampling, and fusion algorithms are redefined to optimize the image quality of a camera with limited depth of field. Image-based neural autofocus acts 5-10x faster than traditional algorithms. Focus control based on rules or reinforcement learning dynamically estimates the environment and optimizes the focus trajectory. Along with the neural fusion algorithm, the pipeline outperforms traditional focal stacking approaches in static and dynamic scenes. In scatter ptychography, we show that imaging the secondary scatter reflected by a remote target under coherent illumination can create a synthetic aperture on the scatterer. Reconstructing the object through phase retrieval algorithms can drastically exceed the resolution of directly viewing the target; in a lab experiment we demonstrate a 32x resolution improvement relative to direct imaging using error-reduction and plug-and-play algorithms. In array camera imaging, we demonstrate heterogeneous multiaperture designs that have better sampling structures, together with physics-aware transformers for feature-based data fusion. The proposed transformer incorporates the physical information of the camera array into its receptive fields, demonstrating superior image compositing on array cameras with diverse resolutions, focal lengths, focal planes, color spaces, and exposures. We also demonstrate a scalable pipeline for synthesizing training data through computer graphics software.
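The error-reduction algorithm named above is a standard phase retrieval iteration; a textbook sketch (not the paper's exact pipeline) alternates between imposing the measured Fourier modulus and an object-domain support constraint. Convergence is not guaranteed and the iteration can stagnate, but for a compact nonnegative object it typically works.

```python
# Error-reduction phase retrieval with known support and nonnegativity.
import numpy as np

rng = np.random.default_rng(5)
obj = np.zeros((64, 64))
obj[24:40, 24:40] = rng.random((16, 16))      # true object on a known support
support = obj > 0
modulus = np.abs(np.fft.fft2(obj))            # measured Fourier magnitudes

x = rng.random(obj.shape) * support           # random start inside the support
for _ in range(500):
    X = np.fft.fft2(x)
    X = modulus * np.exp(1j * np.angle(X))    # impose the measured modulus
    x = np.real(np.fft.ifft2(X))
    x = np.where(support & (x > 0), x, 0.0)   # impose support and nonnegativity
print(float(np.abs(x - obj).mean()))          # small residual on success
```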
The examples above justify artificial neural imaging and the physical designs interwoven with it. We expect better designs in sampling, simulation, and neural algorithms, and eventually better estimation of the light field.
Item Open Access Poverty and Place in the Context of the American South (2015) Baker, Regina Smalls
In the United States, poverty has been historically higher and disproportionately concentrated in the American South. Despite this fact, much of the conventional poverty literature in the United States has focused on urban poverty in cities, particularly in the Northeast and Midwest. Relatively less American poverty research has focused on the enduring economic distress in the South, which Wimberley (2008:899) calls “a neglected regional crisis of historic and contemporary urgency.” Accordingly, this dissertation contributes to the inequality literature by focusing much-needed attention on poverty in the South.
Each empirical chapter focuses on a different aspect of poverty in the South. Chapter 2 examines why poverty is higher in the South relative to the Non-South. Chapter 3 focuses on poverty predictors within the South and whether there are differences in the sub-regions of the Deep South and Peripheral South. These two chapters compare the roles of family demography, economic structure, racial/ethnic composition and heterogeneity, and power resources in shaping poverty. Chapter 4 examines whether poverty in the South has been shaped by historical racial regimes.
The Luxembourg Income Study (LIS) United States datasets (2000, 2004, 2007, 2010, and 2013) (derived from the U.S. Census Current Population Survey (CPS) Annual Social and Economic Supplement) provide all the individual-level data for this study. The LIS sample of 745,135 individuals is nested in rich economic, political, and racial state-level data compiled from multiple sources (e.g. U.S. Census Bureau, U.S. Department of Agriculture, University of Kentucky Center for Poverty Research, etc.). Analyses involve a combination of techniques including linear probability regression models to predict poverty and binary decomposition of poverty differences.
Chapter 2 results suggest that power resources, followed by economic structure, are most important in explaining the higher poverty in the South. This underscores the salience of political and economic contexts in shaping poverty across place. Chapter 3 results indicate that individual-level economic factors are the largest predictors of poverty within the South, and even more so in the Deep South. Moreover, divergent results between the South, Deep South, and Peripheral South illustrate how the impact of poverty predictors can vary in different contexts. Chapter 4 results show significant bivariate associations between historical race regimes and poverty among Southern states, although regression models fail to yield significant effects. Conversely, historical race regimes do have a small, but significant effect in explaining the Black-White poverty gap. Results also suggest that employment and education are key to understanding poverty among Blacks and the Black-White poverty gap. Collectively, these chapters underscore why place is so important for understanding poverty and inequality. They also illustrate the salience of micro and macro characteristics of place for helping create, maintain, and reproduce systems of inequality across place.
Item Open Access Power, Policy and Health in Rich Democracies (2014) Reynolds, Megan M.
Comparative social scientists have offered rich insights into how macro-level political factors affect stratification processes such as class, gender and racial inequality. Medical sociologists, on the other hand, have long emphasized the importance of stratification for health and health inequalities at the individual level. Yet, only recently has research in either field attended to the macro-level factors that impact health. This dissertation contributes to the growing scholarship in that area by investigating the influence of public healthcare social policies, organized labor and Left party power on infant mortality, life expectancy at birth and life expectancy at age 65. I do so using the framework of power resources, a theory which has been only sparsely applied to the study of health.
The analyses include country-level pooled time series models of 22 rich democracies between 1960 and 2010. Data is drawn from the Comparative Welfare States dataset (Brady et al 2014), which provides information on indicators of welfare state development, its causes, and its consequences from 1960 to 2011. I use fixed effects regression models to examine the influence on health of two forms of healthcare spending, six forms of non-health social welfare transfers, and the triad of union density, Left parties, and socialized medicine. I also test the sensitivity of the results to a variety of alternative estimation techniques.
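The fixed-effects setup described here can be sketched schematically (synthetic data; the variable names are placeholders, not Comparative Welfare States fields): country and year dummies absorb stable national differences and common shocks, so the remaining coefficient reflects within-country change over time.

```python
# Two-way fixed effects panel regression on synthetic country-year data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
df = pd.DataFrame({
    "country": np.repeat([f"c{i}" for i in range(22)], 50),
    "year": np.tile(np.arange(1960, 2010), 22),
    "health_spend": rng.random(1100),
})
df["life_exp"] = 70 + 3 * df["health_spend"] + rng.standard_normal(1100)

fit = smf.ols("life_exp ~ health_spend + C(country) + C(year)", data=df).fit()
print(fit.params["health_spend"])   # recovers the within-country effect (~3)
```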
Chapter 1 discusses the foundational literature on the social determinants of health and the political economy of health. Chapter 2 focuses on the role of public healthcare effort and socialized medicine as predictors of countries' infant mortality and life expectancy at birth and at age 60. I show that socialized medicine (as represented by its share of total health spending) improves all population health outcomes in addition to, and adjusted for, the effect of healthcare effort (as represented by the share of GDP). Moreover, socialized medicine is a better predictor of population health than healthcare effort, and its effect sizes are comparable to those of GDP per capita. Chapter 3 examines the association of infant mortality and life expectancy with old-age-survivor transfers, incapacity transfers, family transfers, active labor market transfers, unemployment transfers, housing transfers and education transfers. For infant mortality, overall and educational spending matter, whereas for life expectancy, incapacity transfers do. Family transfers matter only for life expectancy at birth. For all outcomes, unemployment transfers are beneficial, while housing and old-age-survivor benefits are not significant. Chapter 4 investigates the association of organized labor with infant mortality and life expectancy, and devotes additional attention to the potential role of Left parties and social policy in this relationship. Results suggest that in nations where a greater proportion of the labor force is unionized, more lives are lost below the age of one and individuals live shorter lives. These results are contrary to the hypotheses generated by power resources theory and allied research.
This dissertation contributes to literatures in medical sociology, sociology of inequality and political sociology. This dissertation highlights the pertinence of power resources theory to the subject of health and further encourages its application to this relatively new domain. Additionally, by highlighting the importance of institutions and politics for health, it extends research on macro-level sources of inequality to the outcome of health and complements the existing emphasis in medical sociology on the fundamental, distal causes of health.
Item Open Access Sampling and Signal Estimation in Computational Optical Sensors (2007-12-14) Shankar, Mohan
Computational sensing utilizes non-conventional sampling mechanisms along with processing algorithms to accomplish various sensing tasks, providing additional flexibility in designing imaging or spectroscopic systems. This dissertation analyzes sampling and signal estimation techniques through three computational sensing systems, each built for a specific task. The first is a thin long-wave infrared imaging system based on multichannel sampling. A significant reduction in optical system thickness is obtained over a conventional system by modifying conventional sampling mechanisms and applying reconstruction algorithms. In addition, an information-theoretic analysis of sampling in conventional as well as multichannel imaging systems is performed, and the feasibility of multichannel sampling for imaging is demonstrated using an information-theoretic metric. The second system is an application of the multichannel approach to the design of compressive low-power video sensors; two sampling schemes are demonstrated that utilize spatial as well as temporal aliasing. The third is a novel computational spectroscopic system for detecting chemicals, which utilizes surface plasmon resonances to encode information about the chemicals under test.
Item Open Access Sampling in Computational Cameras (2022) Wang, Chengyu
This dissertation contributes to computational imaging by studying the intersection of sampling and artificial intelligence (AI). It has been demonstrated that AI shows superior performance in various image processing problems, ranging from super-resolution to classification. In this work we demonstrate that combining AI with intelligent data sampling enables new camera capabilities. We start with traditional image signal processing (ISP) in digital cameras and show that AI has significantly improved the performance of ISP functions, such as demosaicing, denoising, and white balance. We then demonstrate a deep-learning (DL)-based image signal processor that regroups the ISP functions and achieves fast image processing with an end-to-end network. We further study image compression strategies and show that AI is also a helpful tool for imaging system design. Following the study on image processing, we turn to camera autofocus control. With the demonstration of a DL-based autofocus pipeline and a saliency detection network, we show that AI achieves 5-10x faster autofocus compared to traditional contrast maximization and allows content-based autofocus control. We also demonstrate an all-in-focus imaging pipeline to produce all-in-focus images or videos. This shows that AI extends the concept of camera control from optimizing an instantaneous image to producing the control trajectory that optimizes sampling efficiency and long-term image or video quality. Next we consider coherent phase retrieval. We first study the Fisher information and the Cramér-Rao lower bound on the mean squared error of coherent signal estimation from the squared modulus of its linear transform. We then demonstrate two coding strategies that achieve optimal phase retrieval, with simulations showing practical implementations of these strategies; the simulations take advantage of well-developed deep learning libraries. Finally we focus on Fourier ptychography, a technique combining aperture synthesis and phase retrieval.
We build a snapshot ptychography system using a camera array and deep neural estimation, which achieves a 6.7x improvement in resolution compared to a single camera. We also present simulations considering various aperture distributions and multiple snapshots to show the design considerations of such a system.
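For reference, the traditional contrast-maximization autofocus baseline mentioned in this abstract can be sketched in a few lines (a common Laplacian-energy sharpness proxy with illustrative parameters, not the dissertation's implementation): sweep focus positions and keep the frame with the highest focus score.

```python
# Contrast-maximization autofocus: pick the sharpest frame in a focus sweep.
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

rng = np.random.default_rng(7)
sharp = np.kron(rng.integers(0, 2, (16, 16)).astype(float), np.ones((8, 8)))
focal_stack = [gaussian_filter(sharp, s) for s in (4, 2, 1, 0.1)]   # simulated focus sweep

scores = [float(np.var(laplace(frame))) for frame in focal_stack]   # sharpness metric
print(int(np.argmax(scores)))   # index 3: the least-blurred frame wins
```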
Item Open Access Sampling Strategies and Neural Processing for Array Cameras (2023) Hu, Minghao
Artificial intelligence (AI) reshapes computational imaging systems. Deep neural networks (DNNs) not only show superior reconstruction performance over conventional algorithms handling the same sampling systems; these new reconstruction algorithms also call for new sampling strategies. In this dissertation, we study how DNN reconstruction algorithms and sampling strategies can be jointly designed to boost system performance.
First, two DNNs for sensor fusion tasks, based on convolutional neural networks (CNNs) and transformers, are proposed. They are able to fuse frames with different resolutions, different wave bands, or different temporal windows. The number of frames can also vary, showing great flexibility and scalability. A reasonable computational load is achieved by a receptive field design that balances flexibility and complexity. Visually pleasing reconstruction results are achieved.
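A minimal two-stream fusion block conveys the idea (assumes PyTorch; a sketch of the general pattern, far smaller than the networks described here): a low-resolution frame is upsampled, concatenated with the high-resolution frame, and fused by convolutions whose receptive field spans both inputs.

```python
# Tiny CNN fusion of a high-res frame with an upsampled low-res frame.
import torch
import torch.nn as nn

class TinyFusion(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.fuse = nn.Sequential(
            nn.Conv2d(2, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1),
        )

    def forward(self, hi, lo):                  # hi: (B,1,H,W), lo: (B,1,H/2,W/2)
        return self.fuse(torch.cat([hi, self.up(lo)], dim=1))

net = TinyFusion()
out = net(torch.rand(1, 1, 64, 64), torch.rand(1, 1, 32, 32))
print(out.shape)   # torch.Size([1, 1, 64, 64])
```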
Then we demonstrate how DNN reconstruction algorithms favor certain sampling strategies over others, using the snapshot compressive imaging (SCI) task as an example. Using synthetic datasets, we compare quasi-random coded sampling and multi-aperture multi-scale manifold sampling under DNN reconstruction. The latter sampling strategy requires a much simpler physical setup, yet gives comparable, if not better, reconstructed image quality.
Finally, we design and build a multifocal array camera suited to DNN reconstruction. With commercial off-the-shelf cameras and lenses, the array camera achieves a nearly 70-degree field of view (FoV), a 0.1 m - 17.1 m depth of field (DoF), and the ability to resolve objects with 2 mm granularity. One final output image contains about 33M RGB pixels.
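Depth-of-field figures like these follow from the standard thin-lens DoF formulas. A worked example with guessed parameters (illustrative only, not the actual camera specs) shows how the near and far limits are computed:

```python
# Thin-lens depth of field from hyperfocal distance (illustrative parameters).
f_len, N, c = 4e-3, 2.8, 3e-6            # focal length, f-number, circle of confusion (m)
H = f_len**2 / (N * c) + f_len           # hyperfocal distance, ~1.91 m here
s = 1.0                                  # focus distance (m)
near = H * s / (H + (s - f_len))
far = H * s / (H - (s - f_len))          # finite because s < H
print(f"hyperfocal {H:.2f} m; focused at {s} m, sharp from {near:.2f} m to {far:.2f} m")
```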
Overall, we explore the joint design of DNN reconstruction algorithms and physical sampling. With this research, we hope to develop computational imaging systems that are more compact, more accurate, and cover a larger range.