Browsing by Subject "Algorithm"
Item Open Access A Q-Learning Approach to Minefield Characterization from Unmanned Aerial Vehicles (2012) Daugherty, Stephen Greyson
The treasure hunt problem is to determine how a computational agent can maximize its ability to detect and/or classify multiple targets located in a region of interest (ROI) populated with multiple obstacles. One particular instance of this problem involves optimizing the performance of a sensor mounted on an unmanned aerial vehicle (UAV) flying over a littoral region in order to detect mines buried underground.
Buried objects (including non-metallic ones) affect the thermal conductivity and heat retention of the soil in which they reside. As a result, objects that are not very deep below the surface often create measurable thermal anomalies at the soil surface. Infrared (IR) sensors therefore have the potential to find mines and minelike objects (referred to in this thesis as clutters).
As the sensor flies over the ROI, it collects data in the form of pixelated infrared light signatures. From these signatures, ground temperature measurements are recorded and used to generate a two-dimensional thermal profile of the field of view (FOV), which is then mapped onto the geography of the ROI.
The input stream of thermal data is then passed to an image processor that estimates the size and shape of the detected target. Then a Bayesian Network (BN) trained from a database of known mines and clutters is used to provide the posterior probability that the evidence obtained by the IR sensor for each detected target was the result of a mine or a clutter. The output is a confidence level (CL), and each target is classified as a mine or a clutter according to the most likely explanation (MLE) for the sensor evidence. Though the sensor may produce incomplete, noisy data, inferences from the BN attenuate the problem.
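As a minimal illustration of the classification step just described, the sketch below applies Bayes' rule to a single detected target and labels it according to the most likely explanation; the likelihoods and prior are hypothetical placeholders rather than values from the thesis, and the full BN is reduced to a two-hypothesis update.

    # Minimal sketch of posterior-based target classification (mine vs. clutter).
    # Likelihoods and prior are illustrative placeholders, not thesis values.

    def classify_target(p_evidence_given_mine, p_evidence_given_clutter, p_mine=0.5):
        """Return (label, confidence) via Bayes' rule and the most likely explanation."""
        joint_mine = p_evidence_given_mine * p_mine
        joint_clutter = p_evidence_given_clutter * (1.0 - p_mine)
        posterior_mine = joint_mine / (joint_mine + joint_clutter)  # P(mine | evidence)
        if posterior_mine >= 0.5:
            return "mine", posterior_mine      # confidence level (CL)
        return "clutter", 1.0 - posterior_mine

    # Example: sensor evidence that is twice as likely under the mine hypothesis.
    print(classify_target(p_evidence_given_mine=0.6, p_evidence_given_clutter=0.3))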
Since sensor performance depends on altitude and environmental conditions, the value of the IR information can be further improved by choosing the flight path intelligently. This thesis assumes that the UAV is flying through an environmentally homogeneous ROI and addresses the question of how the optimal altitude can be determined for any given multi-dimensional environmental state.
In general, high altitudes result in poor resolution, whereas low altitudes result in very limited FOVs. The problem of weighing these tradeoffs can be addressed by creating a scoring function that is directly dependent on a comparison between sensor outputs and ground truth. The scoring function provides a flexible framework through which multiple mission objectives can be addressed by assigning different weights to correct detections, correct non-detections, false detections, and false non-detections.
The scoring function provides a metric of sensor performance that can be used as feedback to optimize the sensor altitude as a function of the environmental conditions. In turn, the scoring function can be empirically evaluated over a number of different altitudes and then converted to empirical Q scores that also weigh future rewards against immediate ones. These values can be used to train a neural network (NN). The NN filters the data and interpolates between discrete Q-values to provide information about the optimal sensor altitude.
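As a rough illustration of the scoring-and-learning loop described above, the sketch below combines a weighted detection score with a standard one-step Q-learning update over a small set of discrete altitudes; the weights, learning rate, discount factor, and altitude values are illustrative assumptions, and the neural-network interpolation step is omitted.

    # Illustrative sketch: weighted sensor score used as the reward in a
    # one-step Q-learning update over discrete altitudes. All numeric values
    # (weights, learning rate, discount, altitudes) are assumptions.

    def sensor_score(true_pos, true_neg, false_pos, false_neg,
                     w_tp=1.0, w_tn=0.25, w_fp=-0.5, w_fn=-1.0):
        """Weigh correct detections/non-detections against false ones."""
        return w_tp * true_pos + w_tn * true_neg + w_fp * false_pos + w_fn * false_neg

    def q_update(Q, altitude, next_altitude, reward, alpha=0.1, gamma=0.9):
        """One-step Q-learning: the current altitude is the state, the altitude flown next is the action."""
        best_next = max(Q[next_altitude].values())
        Q[altitude][next_altitude] += alpha * (
            reward + gamma * best_next - Q[altitude][next_altitude])

    altitudes = [100, 200, 300]  # meters, hypothetical discretization
    Q = {a: {b: 0.0 for b in altitudes} for a in altitudes}
    reward = sensor_score(true_pos=8, true_neg=40, false_pos=3, false_neg=2)
    q_update(Q, altitude=200, next_altitude=100, reward=reward)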
The research described in this thesis can be used to determine the optimal control policy for an aircraft in two different situations. The global maximum of the Q-function can be used to determine the altitude at which a UAV should cruise over an ROI for which the environmental conditions are known a priori. Alternatively, the local maxima of the Q-function can be used to determine the altitude to which a UAV should move if the environmental variables change during flight.
This thesis includes the results of computer simulations of a sensor flying over an ROI. The ROI is populated with targets whose characteristics are based on actual mines and minelike objects. The IR sensor itself is modeled by using a BN to create a stochastic simulation of the sensor performance. The results demonstrate how Q-learning can be applied to signals from a UAV-mounted IR sensor whose data stream is preprocessed by a BN classifier in order to determine an optimal flight policy for a given set of environmental conditions.
Item Open Access Algorithms for Public Decision Making (2019) Fain, Brandon Thomas
In public decision making, we are confronted with the problem of aggregating the conflicting preferences of many individuals about outcomes that affect the group. Examples of public decision making include allocating shared public resources and social choice or voting. We study these problems from the perspective of an algorithm designer who takes the preferences of the individuals and the constraints of the decision making problem as input and efficiently computes a solution with provable guarantees with respect to fairness and welfare, as defined on individual preferences.
Concerning fairness, we develop the theory of group fairness as core or proportionality in the allocation of public goods. The core is a stability-based notion adapted from cooperative game theory, and we show extensive algorithmic connections between the core solution concept and optimizing the Nash social welfare, the geometric mean of utilities. We explore applications in public budgeting, multi-issue voting, memory sharing, and fair clustering in unsupervised machine learning.
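As a toy illustration of the Nash social welfare objective mentioned above (the geometric mean of utilities), the snippet below compares two hypothetical outcomes with equal total utility; the utility numbers are made up for illustration.

    # Toy illustration of the Nash social welfare (geometric mean of utilities).
    # Utility values are hypothetical.
    import math

    def nash_social_welfare(utilities):
        return math.prod(utilities) ** (1.0 / len(utilities))

    outcome_a = [4.0, 4.0, 4.0]   # balanced across three agents
    outcome_b = [10.0, 1.0, 1.0]  # same total utility, highly unequal
    print(nash_social_welfare(outcome_a))  # ~4.0
    print(nash_social_welfare(outcome_b))  # ~2.15: Nash welfare favors the balanced outcome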
Regarding welfare, we extend recent work in implicit utilitarian social choice to choose approximately optimal public outcomes with respect to underlying cardinal valuations using limited ordinal information. We propose simple randomized algorithms with strong utilitarian social cost guarantees when the space of outcomes is metric. We also study many other desirable properties of our algorithms, including approximating the second moment of utilitarian social cost. We explore applications in voting for public projects, preference elicitation, and deliberation.
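One classic randomized ordinal rule in the metric setting is random dictatorship: pick a voter uniformly at random and adopt that voter's top choice, which gives bounded utilitarian distortion. It is shown below only as a simple illustration of how ordinal ballots can yield social cost guarantees, not as the specific algorithm proposed in the thesis; the ballots are hypothetical.

    # Illustration only: random dictatorship, a simple randomized ordinal rule
    # with bounded distortion in metric spaces (not necessarily the thesis's rule).
    import random

    def random_dictatorship(rankings, rng=random.Random(0)):
        """Pick a voter uniformly at random and return their top-ranked outcome."""
        voter = rng.choice(list(rankings))
        return rankings[voter][0]

    # Hypothetical ordinal ballots over three public projects.
    ballots = {
        "voter1": ["park", "library", "road"],
        "voter2": ["library", "park", "road"],
        "voter3": ["park", "road", "library"],
    }
    print(random_dictatorship(ballots))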
Item Open Access Correlated Polarity Noise Reduction: Development, Analysis, and Application of a Novel Noise Reduction Paradigm (2013) Wells, Jered R
Image noise is a pervasive problem in medical imaging. It is a property endemic to all imaging modalities and one especially familiar in those modalities that employ ionizing radiation. Statistical uncertainty is a major limiting factor in the reduction of ionizing radiation dose; patient exposure must be minimized but high image quality must also be achieved to retain the clinical utility of medical images. One way to achieve the goal of radiation dose reduction is through the use of image post processing with noise reduction algorithms. By acquiring images at lower than normal exposure followed by algorithmic noise reduction, it is possible to restore image noise to near normal levels. However, many denoising algorithms degrade the integrity of other image quality components in the process.
In this dissertation, a new noise reduction algorithm is investigated: Correlated Polarity Noise Reduction (CPNR). CPNR is a novel noise reduction technique that uses a statistical approach to reduce noise variance while maintaining excellent resolution and a "normal" noise appearance. In this work, the algorithm is developed in detail with the introduction of several methods for improving polarity estimation accuracy and maintaining the normality of the residual noise intensity distribution. Several image quality characteristics are assessed in the production of this new algorithm including its effects on residual noise texture, residual noise magnitude distribution, resolution effects, and nonlinear distortion effects. An in-depth review of current linear methods for medical imaging system resolution analysis will be presented along with several newly discovered improvements to existing techniques. This is followed by the presentation of a new paradigm for quantifying the frequency response and distortion properties of nonlinear algorithms. Finally, the new CPNR algorithm is applied to computed tomography (CT) to assess its efficacy as a dose reduction tool in 3-D imaging.
It was found that the CPNR algorithm can be used to reduce x-ray dose in projection radiography by a factor of at least two without objectionable degradation of image resolution. This is comparable to other nonlinear image denoising algorithms such as the bilateral filter and wavelet denoising. However, CPNR can accomplish this level of dose reduction with few edge effects and negligible nonlinear distortion of the anatomical signal as evidenced by the newly developed nonlinear assessment paradigm. In application to multi-detector CT, XCAT simulations showed that CPNR can be used to reduce noise variance by 40% with minimal blurring of anatomical structures under a filtered back-projection reconstruction paradigm. When an apodization filter was applied, only 33% noise variance reduction was achieved, but the edge-preserving qualities were largely retained. In application to cone-beam CT for daily patient positioning in radiation therapy, up to 49% noise variance reduction was achieved with as little as 1% reduction in the task transfer function measured from reconstructed data at the cutoff frequency.
This work concludes that the CPNR paradigm shows promise as a viable noise reduction tool that can be used to maintain current standards of clinical image quality at almost half of normal radiation exposure. This algorithm has favorable resolution and nonlinear distortion properties as measured using a newly developed set of metrics for nonlinear algorithm resolution and distortion assessment. Simulation studies and the initial application of CPNR to cone-beam CT data reveal that CPNR may be used to reduce CT dose by 40%-49% with minimal degradation of image resolution.
Item Open Access Essays on Identification and Promotion of Game-Theoretic Cooperation (2018) Moon, Catherine
This dissertation looks at how to identify and promote cooperation in a multiagent system, first theoretically through the lens of computational game theory and later empirically through a human subject experiment. Chapter 2 studies the network dynamics leading to a potential unraveling of cooperation and identifies the subset of agents with whom an enforceable cooperative agreement can be formed. This is an important problem, because cooperation is harder to sustain when information about defection, and thus the consequent punishment, spreads slowly through the network structure of a larger community. Chapter 3 examines a model that studies cooperation in a broader strategic context where agents may interact in multiple different domains, or games, simultaneously. Even if a game independently does not give an agent sufficient incentive to play the cooperative action, there may be hope for cooperation when multiple games with compensating asymmetries are put together. Exploiting compensating asymmetries, we can find an institutional arrangement that would either ensure maximum incentives for cooperation or require minimum subsidy to establish sufficient incentives for cooperation. Lastly, Chapter 4 studies a two-layered public good game to empirically examine whether community enforcement through existing bilateral relationships can encourage cooperation in a social dilemma situation. Here, it is found that how the situation is presented matters greatly to real-life agents, as their understanding of whether they are in a cooperative or a competitive strategic setting changes the level of overall cooperation.
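As a toy numeric check of the compensating-asymmetries idea from Chapter 3: in a repeated interaction, an agent cooperates only if the discounted stream of future cooperation value outweighs the one-shot gain from deviating, and pooling two games lets an agent's surplus incentive in one game cover a deficit in the other, since a deviation in either game forfeits cooperation in both. All payoffs and the discount factor below are hypothetical.

    # Toy check of the repeated-game cooperation constraint, per game and pooled.
    # Payoffs and discount factor are hypothetical.

    def supports_cooperation(deviation_gain, per_period_value, discount):
        """True if discounted future cooperation value outweighs a one-shot deviation gain."""
        return discount / (1.0 - discount) * per_period_value >= deviation_gain

    discount = 0.6
    game1 = dict(deviation_gain=5.0, per_period_value=2.0)  # fails alone for this agent
    game2 = dict(deviation_gain=1.0, per_period_value=3.0)  # surplus incentive to spare

    print(supports_cooperation(**game1, discount=discount))  # False
    print(supports_cooperation(**game2, discount=discount))  # True
    # Pooled: a single deviation forfeits cooperation in both games at once.
    print(supports_cooperation(
        deviation_gain=game1["deviation_gain"] + game2["deviation_gain"],
        per_period_value=game1["per_period_value"] + game2["per_period_value"],
        discount=discount))  # True: game2's slack covers game1's deficit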
Item Open Access Methodologies and analysis of metabolic flux in mammalian systems (2021) Liu, Shiyu
Quantification of metabolic fluxes has broad applications in studying metabolic physiology. Isotope tracing with heavy-labeled carbon (13C-labeled metabolic flux analysis, 13C-MFA) is a promising strategy due to its ability to compute metabolic fluxes from isotope tracing profiles without relying on assumptions such as metabolic objectives or enzyme kinetic parameters. However, most current 13C-MFA methods have limitations in model scope, algorithmic efficacy, and user interaction, especially given the availability of modern mass spectrometry-based metabolomics techniques and computational resources. In this study, a new 13C-MFA framework is developed, with emphasis on flux resolution in larger metabolic networks. Novel simulation methods for isotope tracing data are also used to guide algorithm development. The new MFA methodology has been applied to an isotope-labeled cultured cancer cell line and to isotope-infused mice. In the cultured cancer cell line, the new MFA framework enabled the discovery of long-term interactions from one-carbon metabolism to the pentose phosphate pathway and the TCA cycle. In isotope-infused mice, the new MFA framework directly measured the systemic glucose and lactate contributions to the TCA cycle under physiological conditions, confirming the conventional knowledge that glucose is the main direct energy source in the body. Taken together, the new MFA methodology will offer unprecedented opportunities for expanding research in metabolic physiology.
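As a deliberately tiny sketch of the kind of flux fitting that 13C-MFA performs (not the framework developed in the thesis), the snippet below fits two converging fluxes, one from a fully 13C-labeled source and one from an unlabeled source, to a measured labeling fraction and total production rate by least squares; all measurements are hypothetical.

    # Tiny 13C-MFA-style example (not the thesis framework): fit two fluxes
    # feeding one product pool to hypothetical labeling and flux measurements.
    import numpy as np
    from scipy.optimize import least_squares

    measured_labeled_fraction = 0.4   # hypothetical mass-spec labeling fraction
    measured_total_flux = 10.0        # hypothetical production rate

    def residuals(fluxes):
        v_labeled, v_unlabeled = fluxes
        total = v_labeled + v_unlabeled
        return [v_labeled / total - measured_labeled_fraction,
                total - measured_total_flux]

    fit = least_squares(residuals, x0=[1.0, 1.0], bounds=(0.0, np.inf))
    print(fit.x)  # approximately [4.0, 6.0]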
Item Open Access PSTD Method for Thermoacoustic Tomography (TAT) and Related Experimental Investigation (2009) Ye, Gang
In this work, the simulation (forward problem) and reconstruction (inverse problem) in Thermoacoustic Tomography (TAT) are studied using a pseudospectral time-domain (PSTD) method with 4th-order time integration.
The objective of the TAT simulation is to solve for the thermoacoustic pressure field in an inhomogeneous medium. Using the PSTD method, the spatial derivatives of the pressure field and particle velocity can be obtained using the fast Fourier transform (FFT). Since the Fourier transforms used to represent the spatial derivatives of smooth functions are exact, only 2 points per wavelength are needed in the spatial discretization. The time integration is achieved by a 4th-order method to effectively reduce the computational time. The results of the algorithm are validated against analytical solutions. Perfectly Matched Layers (PMLs) are applied to absorb the outgoing waves and avoid the "wraparound" effect. The maximum attenuation coefficient of the PMLs has an optimum value that minimizes the reflections due to discretization and the wraparound effect for 2D and 3D problems. Different PML profiles are also compared; a quadratic profile is chosen because it minimizes the overall reflection. Spatial smoothing is needed for PSTD to avoid the Gibbs phenomenon in the modeling of a point source, and the effect of the smoothing function is studied.
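A minimal sketch of the spectral-derivative step at the heart of a PSTD scheme follows: the spatial derivative of a smooth periodic field is computed via the FFT, which is exact for band-limited data and is why only about 2 points per wavelength are needed. The grid size and test field are illustrative, and the 4th-order time integration and PML layers are omitted.

    # Minimal PSTD-style spectral derivative: d/dx of a smooth periodic field via FFT.
    # Grid size and test field are illustrative; time stepping and PMLs are omitted.
    import numpy as np

    n, length = 64, 1.0
    x = np.linspace(0.0, length, n, endpoint=False)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=length / n)    # spectral wavenumbers

    pressure = np.sin(2.0 * np.pi * 5.0 * x / length)    # 5 wavelengths across the grid
    dpdx = np.real(np.fft.ifft(1j * k * np.fft.fft(pressure)))

    exact = (2.0 * np.pi * 5.0 / length) * np.cos(2.0 * np.pi * 5.0 * x / length)
    print(np.max(np.abs(dpdx - exact)))  # near machine precision for this smooth field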
In the TAT reconstruction problem, the PSTD method is used to reconstruct the thermoacoustic sources by solving the thermoacoustic wave equations in reversed temporal order within the framework of time reversal imaging. The back-propagated pressure waves then refocus at the spatial locations of the original sources. Most other TAT reconstruction algorithms are based on the assumption that the tissue medium is acoustically homogeneous. In practice, however, even mild tissue inhomogeneity will cause large phase errors, leading to spatial misplacement and distortion of the sources. The proposed PSTD method utilizes a two-step process to solve this problem. In the first step, a homogeneous time reversal reconstruction is performed. Since an inhomogeneity is usually itself a source because of its spatially dependent electrical conductivity (and thus microwave absorption), the spatial location and shape of the inhomogeneity can be estimated. In the second step, the updated acoustic property map is loaded, followed by an inhomogeneous reconstruction. Numerical results show that this method greatly improves the reconstruction results. Images with improved quality are reconstructed from experimental data.
A 3D PSTD algorithm is developed and validated. Numerical results show that the PSTD algorithm with the 4th-order time integration is capable of simulating large 3D acoustic problems accurately and efficiently. A 3D breast phantom model is used to study the inhomogeneous reconstruction in 3D. Improved results over the homogeneous method are observed.
A preliminary study of the Thermoacoustic Tomography (TAT) using continuous-wave (CW) modulated microwaves is summarized. The theoretical background, system configuration, experiment setup, and measurement results are presented.
Item Open Access Signal Improvement and Contrast Enhancement in Magnetic Resonance Imaging (2015) Han, Yi
This thesis reports advances in magnetic resonance imaging (MRI), with the ultimate goal of improving signal and contrast in biomedical applications. More specifically, novel MRI pulse sequences have been designed to characterize microstructure, enhance signal and contrast in tissue, and image functional processes. In this thesis, rat brain and red bone marrow images are acquired using iMQCs (intermolecular multiple quantum coherences) between spins that are 10 μm to 500 μm apart. As an important application, iMQC images in different directions can be used for anisotropy mapping. We investigate tissue microstructure by analyzing anisotropy maps. At the same time, we simulate the images expected from a rat brain without microstructure and compare them with experimental results to show that the dipolar field from the overall shape makes only a small contribution to the experimental iMQC signal. Besides the magnitude of the iMQC signal, its phase should be studied as well. The phase anisotropy maps built by our method can clearly show susceptibility information in kidneys and may provide meaningful diagnostic information. To study susceptibility in more depth, the modified-CRAZED sequence is developed. Combining phase data from modified-CRAZED images with phase data from iMQC images is a promising route to constructing microstructure maps. The phase images in all of the above techniques need to be high-contrast and clear; to achieve this, algorithmic tools from Susceptibility-Weighted Imaging (SWI) and Susceptibility Tensor Imaging (STI) prove especially useful in our system.
Item Open Access Unified Design and Optimization Tools for Digital Microfluidic Biochips (2011) Zhao, Yang
Digital microfluidics is an emerging technology that provides fluid-handling capability on a chip. Biochips based on digital microfluidics have therefore enabled the automation of laboratory procedures in biochemistry. By reducing the rate of sample and reagent consumption, digital microfluidic biochips allow continuous sampling and analysis for real-time biochemical analysis, with application to clinical diagnostics, immunoassays, and DNA sequencing. Recent advances in technology and applications serve as a powerful driver for research on computer-aided design (CAD) tools for biochips.
This thesis research is focused on a design automation framework that addresses chip synthesis, droplet routing, control-pin mapping, testing and diagnosis, and error recovery. In contrast to prior work on automated design techniques for digital microfluidics, the emphasis here is on practical CAD optimization methods that can target different design problems in a unified manner. Constraints arising from the underlying technology and the application domain are directly incorporated in the optimization framework.
The avoidance of cross-contamination during droplet routing is a key design challenge for biochips. As a first step in this thesis research, a droplet-routing method based on disjoint droplet routes has been developed to avoid cross-contamination during the design of droplet flow paths. A wash-operation synchronization method has been developed to synchronize wash-droplet routing steps with sample/reagent droplet-routing steps by controlling the order of arrival of droplets at cross-contamination sites.
In pin-constrained digital microfluidic biochips, concurrently-implemented fluidic operations may involve pin-actuation conflicts if they are not carefully synchronized. A two-phase optimization method has been proposed to identify and synchronize these fluidic operations. The goal is to implement these fluidic operations without pin-actuation conflicts and to minimize the duration of the resulting actuation sequence after synchronization.
Due to the interdependence between droplet routing and pin-count reduction, this thesis presents two optimization methods to concurrently solve the droplet-routing and the pin-mapping design problems. First, an integer linear programming (ILP)-based optimization method has been developed to minimize the number of control pins. Next an efficient heuristic approach has been developed to tackle the co-optimization problem.
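As a hedged sketch of the pin-sharing idea behind pin-count reduction (a greedy first-fit grouping, not the ILP formulation or heuristic developed in the thesis): electrodes may share a control pin only if their actuation sequences never conflict at any time step, where 'x' marks a don't-care. The electrode names and sequences below are hypothetical.

    # Greedy first-fit pin sharing (illustration only, not the thesis's ILP).
    # Each electrode has an actuation sequence: '1' actuate, '0' ground, 'x' don't care.

    def compatible(seq_a, seq_b):
        """Two sequences may share a pin if they never demand conflicting states."""
        return all(a == b or a == "x" or b == "x" for a, b in zip(seq_a, seq_b))

    def greedy_pin_mapping(sequences):
        """Map each electrode to a pin index by first-fit grouping of compatible sequences."""
        groups, mapping = [], {}
        for electrode, seq in sequences.items():
            for pin, members in enumerate(groups):
                if all(compatible(seq, sequences[m]) for m in members):
                    members.append(electrode)
                    mapping[electrode] = pin
                    break
            else:
                groups.append([electrode])
                mapping[electrode] = len(groups) - 1
        return mapping

    # Hypothetical actuation sequences for four electrodes over five time steps.
    seqs = {"e1": "1x0x1", "e2": "1x0x1", "e3": "0x1x0", "e4": "xx0xx"}
    print(greedy_pin_mapping(seqs))  # {'e1': 0, 'e2': 0, 'e3': 1, 'e4': 0}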
Dependability is an important system attribute for microfluidic biochips. Robust testing methods are therefore needed to ensure correct results. This thesis presents a built-in self-test (BIST) method for digital microfluidic biochips. This method utilizes digital microfluidic logic gates to implement the BIST architecture. A cost-effective fault diagnosis method has also been proposed to locate a single defective cell, multiple rows/columns with defective cells, as well as an unknown number of rows/columns-under-test with defective cells. A BIST method for on-line testing of digital microfluidic biochips has been proposed. An automatic test pattern generation (ATPG) method has been proposed for non-regular digital microfluidic chips. A pin-count-aware online testing method has been developed for pin-constrained designs to support the execution of both fault testing and the target bioassay protocol.
To better monitor and manage the execution of bioassays, control flow has been incorporated in the design and optimization framework. A synthesis method has been developed to incorporate control paths and an error-recovery mechanism during chip design. This method addresses the problem of recovering from fluidic errors that occur during on-chip bioassay execution.
In summary, this thesis research has led to a set of unified design tools for digital microfluidics. This work is expected to reduce human effort during biochip design and biochip usage, and enable low-cost manufacture and more widespread adoption for laboratory procedures.