Browsing by Subject "Electrical engineering"
Item Open Access A CG-FFT Based Fast Full Wave Imaging Method and its Potential Industrial Applications (2015) Yu, Zhiru. This dissertation focuses on an FFT-based forward EM solver and its application in inverse problems. The main contributions of this work are twofold. On the one hand, it presents the first scaled lab experiment system in the oil and gas industry for through-casing hydraulic fracture evaluation. This system is established to validate the feasibility of contrast-enhanced fracture evaluation. On the other hand, this work proposes an FFT-based VIE solver for hydraulic fracture evaluation; such an efficient solver is needed for the numerical analysis of this problem. The solver is then generalized to accommodate scattering simulations for anisotropic inhomogeneous magnetodielectric objects. The inverse problem for anisotropic objects is also studied.
Before going into the details of specific applications, some background knowledge is presented. The dissertation starts with an introduction to inverse problems, followed by a discussion of algorithms for the forward and inverse problems. The discussion of the forward problem focuses on the VIE formulation and a frequency-domain solver; the discussion of inverse problems focuses on iterative methods.
The rest of the dissertation is organized by the two categories of inverse problems, namely the inverse source problem and the inverse scattering problem.
The inverse source problem is studied via an application in microelectronics. In this application, an FFT-based inverse source solver is applied to process near-field data obtained by near-field scanners. Examples show that, with the help of this inverse source solver, the resolution of the unknown current source images on a device under test is greatly improved. This improvement in resolution, in turn, gives the near-field scan system more flexibility.
Both the forward and inverse solvers for inverse scattering problems are studied in detail. As a forward solver for inverse scattering problems, a fast FFT-based method for solving the VIE for magnetodielectric objects with large electromagnetic contrasts is presented, motivated by the increasing interest in contrast-enhanced full wave EM imaging. This newly developed VIE solver assigns basis functions of different orders to expand the flux densities and the vector potentials; it is therefore called the mixed-order BCGS-FFT method. The mixed-order BCGS-FFT method retains the benefits of high-order basis functions for the VIE while enforcing the correct boundary conditions on the flux densities and vector potentials. Examples show that this method performs excellently on both isotropic and anisotropic objects with high contrasts, and that it remains valid at both high and low frequencies. Based on the mixed-order BCGS-FFT method, an inverse scattering solver for anisotropic objects is studied. The inverse problem is formulated and solved by the variational Born iterative method. An example in this section shows a successful inversion of an anisotropic magnetodielectric object.
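For reference, a standard frequency-domain electric-field VIE of the kind such solvers discretize (a textbook form, not quoted from the dissertation) reads

    $\mathbf{E}(\mathbf{r}) = \mathbf{E}^{inc}(\mathbf{r}) + (k_b^2 + \nabla\nabla\cdot) \int_V g_b(\mathbf{r},\mathbf{r}')\, \chi(\mathbf{r}')\, \mathbf{E}(\mathbf{r}')\, d\mathbf{r}', \qquad \chi(\mathbf{r}) = \frac{\epsilon(\mathbf{r}) - \epsilon_b}{\epsilon_b},$

where $g_b$ is the homogeneous-background Green's function. The convolution with $g_b$ on a uniform grid is what the BCGS-FFT machinery accelerates, while the choice of expansion functions for the flux densities and vector potentials is where the mixed-order bases described above enter.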
Finally, a lab-scale hydraulic fracture evaluation system for oil/gas reservoirs, based on the previously discussed inverse solver, is presented. This system has been set up to verify the numerical results obtained from the previously described inverse solvers. These scaled experiments verify the accuracy of the forward solver as well as the performance of the inverse solver. Examples show that the inverse scattering model is able to evaluate contrast-enhanced hydraulic fractures in a shale formation. Furthermore, this system verifies, for the first time in the oil and gas industry, that hydraulic fractures can be imaged through a metallic casing.
Item Open Access A Hybrid Spectral-Element / Finite-Element Time-Domain Method for Multiscale Electromagnetic Simulations (2010) Chen, Jiefu. In this study we propose a fast hybrid spectral-element time-domain (SETD) / finite-element time-domain (FETD) method for transient analysis of multiscale electromagnetic problems, where electrically fine structures with details much smaller than a typical wavelength and electrically coarse structures comparable to or larger than a typical wavelength coexist.
Simulations of multiscale electromagnetic problems, such as electromagnetic interference (EMI), electromagnetic compatibility (EMC), and electronic packaging, can be very challenging for conventional numerical methods. In terms of spatial discretization, conventional methods use a single mesh for the whole structure, so the high discretization density required to capture the geometric characteristics of electrically fine structures inevitably leads to a large number of wasted unknowns in the electrically coarse parts. This issue becomes especially severe for the orthogonal grids used by the popular finite-difference time-domain (FDTD) method. In terms of temporal integration, dense meshes in electrically fine domains make the time step size extremely small for numerical methods with explicit time-stepping schemes. Implicit schemes can surpass the stability criterion imposed by the Courant-Friedrichs-Lewy (CFL) condition. However, due to the large system matrices generated by conventional methods, it is almost impossible to apply implicit schemes to the whole structure for time-stepping.
To address these challenges, we propose an efficient hybrid SETD/FETD method for transient electromagnetic simulations by taking advantage of the strengths of these two methods while avoiding their weaknesses in multiscale problems. More specifically, a multiscale structure is divided into several subdomains based on the electrical size of each part, and a hybrid spectral-element / finite-element scheme is proposed for spatial discretization. The hexahedron-based spectral elements with higher interpolation degrees are efficient in modeling electrically coarse structures, and the tetrahedron-based finite elements with lower interpolation degrees are flexible in discretizing electrically fine structures with complex shapes. A non-spurious finite element method (FEM) as well as a non-spurious spectral element method (SEM) is proposed to make the hybrid SEM/FEM discretization work. For time integration we employ hybrid implicit / explicit (IMEX) time-stepping schemes, where explicit schemes are used for electrically coarse subdomains discretized by coarse spectral element meshes, and implicit schemes are used to overcome the CFL limit for electrically fine subdomains discretized by dense finite element meshes. Numerical examples show that the proposed hybrid SETD/FETD method is free of spurious modes, is flexible in discretizing sophisticated structures, and is more efficient than conventional methods for multiscale electromagnetic simulations.
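As a toy illustration of the implicit/explicit (IMEX) idea described above (a minimal sketch, not the dissertation's SETD/FETD formulation), consider a semi-discrete system $du/dt = (A_c + A_f)u$ in which the stiff block $A_f$ stands in for a densely meshed fine subdomain and is advanced implicitly, while $A_c$ is advanced explicitly:

    import numpy as np

    # First-order IMEX (forward/backward Euler) step for du/dt = (A_c + A_f) u:
    # the non-stiff "coarse" part A_c is treated explicitly, the stiff "fine"
    # part A_f implicitly, so the step size is not limited by the fine block.
    def imex_step(u, A_c, A_f, dt):
        rhs = u + dt * (A_c @ u)                                 # explicit piece
        return np.linalg.solve(np.eye(len(u)) - dt * A_f, rhs)   # implicit solve

    A_c = np.array([[-1.0, 0.1],
                    [ 0.0, 0.0]])
    A_f = np.array([[ 0.0, 0.0],
                    [ 0.1, -1e3]])   # stiff block: explicit stepping would need dt < 2e-3
    u = np.array([1.0, 1.0])
    for _ in range(100):
        u = imex_step(u, A_c, A_f, dt=1e-2)   # stable well beyond the explicit limit
    print(u)

In this toy example a purely explicit scheme would be restricted to a time step two orders of magnitude smaller by the stiff block, which is exactly the CFL bottleneck the hybrid scheme sidesteps.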
Item Open Access A Molecular-scale Programmable Stochastic Process Based On Resonance Energy Transfer Networks: Modeling And Applications (2016) Wang, Siyang. While molecular and cellular processes are often modeled as stochastic processes, such as Brownian motion, chemical reaction networks, and gene regulatory networks, there are few attempts to program a molecular-scale process to physically implement stochastic processes. DNA has been used as a substrate for programming molecular interactions, but its applications are restricted to deterministic functions, and unfavorable properties such as slow processing, thermal annealing, aqueous solvents, and difficult readout limit them to proof-of-concept purposes. To date, whether there exists a molecular process that can be programmed to implement stochastic processes for practical applications remains unknown.
In this dissertation, a fully specified Resonance Energy Transfer (RET) network between chromophores is accurately fabricated via DNA self-assembly, and the exciton dynamics in the RET network physically implement a stochastic process, specifically a continuous-time Markov chain (CTMC), which has a direct mapping to the physical geometry of the chromophore network. Excited by a light source, a RET network generates random samples in the temporal domain in the form of fluorescence photons which can be detected by a photon detector. The intrinsic sampling distribution of a RET network is derived as a phase-type distribution configured by its CTMC model. The conclusion is that the exciton dynamics in a RET network implement a general and important class of stochastic processes that can be directly and accurately programmed and used for practical applications of photonics and optoelectronics. Different approaches to using RET networks exist with vast potential applications. As an entropy source that can directly generate samples from virtually arbitrary distributions, RET networks can benefit applications that rely on generating random samples such as 1) fluorescent taggants and 2) stochastic computing.
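For context, the phase-type form referred to above is standard CTMC theory: if $\mathbf{S}$ is the generator restricted to the transient (chromophore) states, $\boldsymbol{\alpha}$ the initial excitation distribution, and absorption corresponds to photon emission, then the time to a detected photon has density

    $f(t) = \boldsymbol{\alpha}\, e^{\mathbf{S}t}\, \mathbf{s}_0, \qquad \mathbf{s}_0 = -\mathbf{S}\mathbf{1},$

so programming the network geometry, and hence $\mathbf{S}$, directly programs the sampling distribution.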
By using RET networks between chromophores to implement fluorescent taggants with temporally coded signatures, the taggant design is not constrained by resolvable dyes and has a significantly larger coding capacity than spectrally or lifetime coded fluorescent taggants. Meanwhile, the taggant detection process becomes highly efficient, and the Maximum Likelihood Estimation (MLE) based taggant identification guarantees high accuracy even with only a few hundred detected photons.
Meanwhile, RET-based sampling units (RSU) can be constructed to accelerate probabilistic algorithms for wide applications in machine learning and data analytics. Because probabilistic algorithms often rely on iteratively sampling from parameterized distributions, they can be inefficient in practice on the deterministic hardware traditional computers use, especially for high-dimensional and complex problems. As an efficient universal sampling unit, the proposed RSU can be integrated into a processor / GPU as specialized functional units or organized as a discrete accelerator to bring substantial speedups and power savings.
Item Open Access A Semi-Empirical Monte Carlo Method of Organic Photovoltaic Device Performance in Resonant, Infrared, Matrix-Assisted Pulsed Laser Evaporation (RIR-MAPLE) Films (2015) Atewologun, Ayomide. Utilizing the power of Monte Carlo simulations, a novel, semi-empirical method for investigating the performance of organic photovoltaics (OPVs) in resonant infrared, matrix-assisted pulsed laser evaporation (RIR-MAPLE) films is explored. Emulsion-based RIR-MAPLE offers a unique and powerful alternative to solution processing in depositing organic materials for use in solar cells: in particular, its usefulness in controlling the nanoscale morphology of organic thin films and the potential for creating novel hetero-structures make it a suitable experimental backdrop for investigating trends through simulation and gaining a better understanding of how different thin film characteristics impact OPV device performance.
The work presented in this dissertation explores the creation of a simulation tool that relies heavily on measurable properties of RIR-MAPLE films that impact efficiency and can be used to inform film deposition and dictate the paths for future improvements in OPV devices. The original nanoscale implementation of the Monte Carlo method for investigating OPV performance is transformed to enable direct comparison between simulation and experimental external quantum efficiency results. Next, a unique microscale formulation of the Dynamic Monte Carlo (DMC) model is developed based on the observable, fundamental differences between the morphologies of RIR-MAPLE and solution-processed bulk heterojunction (BHJ) films. This microscale model enables us to examine the sensitivity of device performance to various structural and electronic properties of the devices. Specifically, using confocal microscopy, we obtain an average microscale feature size for the RIR-MAPLE P3HT:PC61BM (1:1) BHJ system that represents a strategic starting point for utilizing the DMC as an empirical tool.
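A minimal sketch of the kind of Monte Carlo reasoning involved (illustrative only; the 1D lattice, hop length, and diffusion length below are hypothetical placeholders, not the dissertation's microscale parameters): excitons random-walk within a donor or acceptor domain and are counted as harvested only if they reach an interface before decaying, which makes the harvesting probability fall off as the domain feature size grows.

    import numpy as np

    rng = np.random.default_rng(0)

    def harvest_probability(domain_nm, diff_len_nm=10.0, hop_nm=1.0, trials=2000):
        """Fraction of excitons reaching a domain edge (interface) before decay,
        in a 1D toy domain of width domain_nm; survival lasts ~(diff_len/hop)^2 hops."""
        max_hops = int((diff_len_nm / hop_nm) ** 2)
        harvested = 0
        for _ in range(trials):
            x = rng.uniform(0.0, domain_nm)           # random generation site
            for _ in range(max_hops):
                x += rng.choice((-hop_nm, hop_nm))    # nearest-neighbor hop
                if x <= 0.0 or x >= domain_nm:        # reached an interface
                    harvested += 1
                    break
        return harvested / trials

    for size in (10, 50, 200):                         # nm feature sizes
        print(size, harvest_probability(size))         # probability drops with size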
Building on this, the RIR-MAPLE P3HT:PC61BM OPV system is studied using input simulation parameters obtained from films with different material ratios and overall device structures based on characterization techniques such as grazing incidence-wide angle X-ray scattering (GI-WAXS) and X-ray photoelectron spectroscopy (XPS). The results from the microscale DMC simulation compare favorably to experimental data and allow us to articulate a well-informed critique on the strengths and limitations of the model as a predictive tool. The DMC is then used to analyze a different RIR-MAPLE BHJ system: PCPDTBT:PC71BM, where the deposition technique itself is investigated for differences in the primary solvents used during film deposition.
Finally, a multi-scale DMC model is introduced where morphology measurements taken at two different size scales, as well as structural and electrical characterization, provide a template that mimics the operation of OPVs. This final, semi-empirical tool presents a unique simulation opportunity for exploring the different properties of RIR-MAPLE deposited OPVs, their effects on OPV performance and potential design routes for improving device efficiencies.
Item Open Access Accelerated Sepsis Diagnosis by Seamless Integration of Nucleic Acid Purification and Detection (2014) Hsu, BangNing. Background: The diagnosis of sepsis is challenging because the infection can be caused by more than 50 species of pathogens that might exist in the bloodstream in very low concentrations, e.g., less than 1 colony-forming unit/ml. As a result, among the current sepsis diagnostic methods there is an unsatisfactory trade-off between the assay time and the specificity of the derived diagnostic information. Although the present qPCR-based test is more specific than biomarker detection and faster than culturing, its 6 ~ 10 hr turnaround remains suboptimal relative to the 7.6%/hr rapid deterioration of the survival rate, and the 3 hr hands-on time is labor-intensive. To address these issues, this work aims to utilize the advances in microfluidic technologies to expedite and automate the "nucleic acid purification - qPCR sequence detection" workflow.
Methods and Results: This task is evaluated to be best approached by combining immiscible phase filtration (IPF) and digital microfluidic droplet actuation (DM) on a fluidic device. In IPF, as nucleic acid-bound magnetic beads are transported from an aqueous phase to an immiscible phase, the carryover of aqueous contaminants is minimized by the high interfacial tension. Thus, unlike a conventional bead-based assay, the necessary degree of purification can be attained in a few wash steps. After IPF reduces the sample volume from a milliliter-sized lysate to a microliter-sized eluent, DM can be used to automatically prepare the PCR mixture. This begins with partitioning the eluent according to the desired number of multiplex qPCR reactions, and then transporting droplets of the PCR reagents to mix with the eluent droplets. Under the outlined approach, the IPF - DM integration should lead to a notably reduced turnaround and a hands-free "lysate-to-answer" operation.
As the first step towards such a diagnostic device, the primary objective of this thesis is to verify the feasibility of the IPF - DM integration. This is achieved in four phases. First, the suitable assays, fluidic device, and auxiliary systems are developed. Second, the extent of purification obtained per IPF wash, and hence the number of washes needed for uninhibited qPCR, are estimated via off-chip UV absorbance measurement and on-chip qPCR. Third, the performance of on-chip qPCR, particularly the copy number - threshold cycle correlation, is characterized. Lastly, the above developments culminate in an experiment that includes the following on-chip steps: DNA purification by IPF, PCR mixture preparation via DM, and target quantification using qPCR - thereby demonstrating the core procedures in the proposed approach.
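For reference, the copy number - threshold cycle correlation follows from standard exponential-amplification arithmetic (not data from this thesis): with amplification efficiency $E$, the threshold quantity $N_q$ is reached when $N_0 (1+E)^{C_t} = N_q$, so

    $C_t = \frac{\log_{10}(N_q/N_0)}{\log_{10}(1+E)} \approx -3.32\,\log_{10} N_0 + \text{const} \quad (E = 1),$

which is the linear standard-curve behavior used to quantify the purified target on-chip.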
Conclusions: It is proposed to expedite and automate qPCR-based multiplex sparse pathogen detection by combining IPF and DM on a fluidic device. As a start, this work demonstrated the feasibility of the IPF - DM integration. However, a more thermally robust device structure will be needed for later quantitative investigations, e.g., improving the bead - buffer mixing. Importantly, evidence indicates that future iterations of the IPF - DM fluidic device could reduce the sample-to-answer time by 75% to 1.5 hr and decrease the hands-on time by 90% to approximately 20 min.
Item Open Access Accurate and Efficient Methods for the Scattering Simulation of Dielectric Objects in a Layered Medium (2019) Huang, Weifeng. Electromagnetic scattering in a layered medium (LM) is important for many engineering applications, including hydrocarbon exploration. Various computational methods for tackling well logging simulations are summarized. Given their advantages and limitations, the main attention is devoted to the surface integral equation (SIE) and its hybridization with the finite element method (FEM).
The thin dielectric sheet (TDS) based SIE, i.e., TDS-SIE, is introduced to the simulation of fractures. Its accuracy and efficiency are extensively demonstrated by simulating both conductive and resistive fractures. Fractures of various apertures, conductivities, dipping angles, and extensions are also simulated and analyzed. With the aid of layered medium Green's functions (LMGFs), TDS-SIE is extended into the LM, which results in the solver entitled LM-TDS-SIE.
In order to consider the borehole effect, the well-known loop and tree basis functions are utilized to overcome low-frequency breakdown of the Poggio, Miller, Chang, Harrington, Wu, and Tsai (PMCHWT) formulation. This leads to the loop-tree (LT) enhanced PMCHWT, which can be hybridized with TDS-SIE to simulate borehole and fracture together. The resultant solver referred to as LT-TDS is further extended into the LM, which leads to the solver entitled LM-LT-TDS.
For inhomogeneous or complex structures, SIE is not suitable for their scattering simulations. It becomes advantageous to hybridize FEM with SIE in the framework of domain decomposition method (DDM), which allows independent treatment of each subdomain and nonconformal meshes between them. This hybridization can be substantially enhanced by the adoption of LMGFs and loop-tree bases, leading to the solver entitled LM-LT-DDM. In comparison with LM-LT-TDS, this solver is more powerful and able to handle more general low-frequency scattering problems in layered media.
Item Open Access Adaptive Brain-Computer Interface Systems For Communication in People with Severe Neuromuscular Disabilities (2016) Mainsah, Boyla O. Brain-computer interfaces (BCI) have the potential to restore communication or control abilities in individuals with severe neuromuscular limitations, such as those with amyotrophic lateral sclerosis (ALS). The role of a BCI is to extract and decode relevant information that conveys a user's intent directly from brain electro-physiological signals and translate this information into executable commands to control external devices. However, the BCI decision-making process is error-prone due to noisy electro-physiological data, representing the classic problem of efficiently transmitting and receiving information via a noisy communication channel.
This research focuses on P300-based BCIs which rely predominantly on event-related potentials (ERP) that are elicited as a function of a user's uncertainty regarding stimulus events, in either an acoustic or a visual oddball recognition task. The P300-based BCI system enables users to communicate messages from a set of choices by selecting a target character or icon that conveys a desired intent or action. P300-based BCIs have been widely researched as a communication alternative, especially in individuals with ALS who represent a target BCI user population. For the P300-based BCI, repeated data measurements are required to enhance the low signal-to-noise ratio of the elicited ERPs embedded in electroencephalography (EEG) data, in order to improve the accuracy of the target character estimation process. As a result, BCIs have relatively slower speeds when compared to other commercial assistive communication devices, and this limits BCI adoption by their target user population. The goal of this research is to develop algorithms that take into account the physical limitations of the target BCI population to improve the efficiency of ERP-based spellers for real-world communication.
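The reason repeated measurements are needed can be sketched with a toy simulation (illustrative amplitudes only, not data from this work): averaging $N$ stimulus-locked epochs leaves the ERP intact while shrinking the background EEG noise by roughly $\sqrt{N}$, which is exactly what drives the speed-accuracy trade-off addressed below.

    import numpy as np

    rng = np.random.default_rng(1)
    fs, dur = 256, 0.8                                   # Hz, seconds per epoch
    t = np.arange(int(fs * dur)) / fs
    erp = 5e-6 * np.exp(-((t - 0.3) / 0.05) ** 2)        # toy P300-like bump (~5 uV at 300 ms)
    noise_sd = 20e-6                                     # background EEG >> ERP amplitude

    def snr_after_averaging(n_epochs):
        epochs = erp + rng.normal(0.0, noise_sd, size=(n_epochs, t.size))
        avg = epochs.mean(axis=0)
        return erp.max() / (avg - erp).std()             # crude SNR of the average

    for n in (1, 4, 16, 64):
        print(n, round(snr_after_averaging(n), 2))       # grows roughly like sqrt(n)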
In this work, it is hypothesised that building adaptive capabilities into the BCI framework can potentially give the BCI system the flexibility to improve performance by adjusting system parameters in response to changing user inputs. The research in this work addresses three potential areas for improvement within the P300 speller framework: information optimisation, target character estimation and error correction. The visual interface and its operation control the method by which the ERPs are elicited through the presentation of stimulus events. The parameters of the stimulus presentation paradigm can be modified to modulate and enhance the elicited ERPs. A new stimulus presentation paradigm is developed in order to maximise the information content that is presented to the user by tuning stimulus paradigm parameters to positively affect performance. Internally, the BCI system determines the amount of data to collect and the method by which these data are processed to estimate the user's target character. Algorithms that exploit language information are developed to enhance the target character estimation process and to correct erroneous BCI selections. In addition, a new model-based method to predict BCI performance is developed, an approach which is independent of stimulus presentation paradigm and accounts for dynamic data collection. The studies presented in this work provide evidence that the proposed methods for incorporating adaptive strategies in the three areas have the potential to significantly improve BCI communication rates, and the proposed method for predicting BCI performance provides a reliable means to pre-assess BCI performance without extensive online testing.
Item Open Access An Information-Theoretic Analysis of X-Ray Architectures for Anomaly Detection (2018) Coccarelli, David Scott. X-ray scanning equipment currently establishes a first line of defense in the aviation security space. The efficacy of these scanners is crucial to preventing the harmful use of threatening objects and materials. In this dissertation, I introduce a principled approach to the analysis of these systems by exploring performance limits of system architectures and modalities. Moreover, I validate the use of simulation as a design tool with experimental data, and I extend the use of simulation to create high-fidelity realizations of real-world system measurements.
Conventional performance analysis of detection systems confounds the effects of the system architecture (sources, detectors, system geometry, etc.) with the effects of the detection algorithm. We disentangle the performance of the system hardware and detection algorithm so as to focus on analyzing the performance of just the system hardware. To accomplish this, we introduce an information-theoretic approach to this problem. This approach is based on a metric derived from the Cauchy-Schwarz mutual information and is analogous to the channel capacity concept from communications engineering. We develop and utilize a framework that can produce thousands of system simulations representative of a notional baggage ensemble. These simulations and the prior knowledge of the virtual baggage allow us to analyze the system as it relays information pertinent to a detection task.
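For reference, the general Cauchy-Schwarz mutual information underlying such a metric (the standard definition; the dissertation's estimator may differ in detail) is the Cauchy-Schwarz divergence between the joint density of object class $X$ and measurement $Y$ and the product of their marginals,

    $I_{CS}(X;Y) = \log \frac{\left(\int p(x,y)^2\,dx\,dy\right)\left(\int p(x)^2 p(y)^2\,dx\,dy\right)}{\left(\int p(x,y)\,p(x)\,p(y)\,dx\,dy\right)^2},$

which is zero exactly when the measurements carry no information about the class, and so serves as a channel-capacity-like figure of merit for the hardware alone.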
In this dissertation, I discuss the application of this information-theoretic approach to study variations of X-ray transmission architectures as well as novel screening systems based on X-ray scatter and phase. The results show how effective use of this metric can impact design decisions for X-ray systems. Moreover, I introduce a database of experimentally acquired X-ray data both as a means to validate the simulation approach and to produce a database ripe for further reconstruction and classification investigations. Next, I show the implementation of improvements to the ensemble representation in the information-theoretic material model. Finally I extend the simulation tool toward high-fidelity representation of real-world deployed systems.
Item Open Access Analysis of μECoG Design Elements for Optimized Signal Acquisition (2022) Williams, Ashley Jerri. High density electrode arrays that can record spatially and temporally detailed neural information provide a new horizon for the scientific exploration of the brain. Chief amongst these new tools is micro-electrocorticography (µECoG), which has grown in usage over the past two decades. As µECoG arrays increase in contact number, density, and complexity, the form factors of arrays will also have to change in tandem – particularly the size and spacing (pitch) of electrode contacts on the array. The continued growth of the field of µECoG research and innovation is hampered by a lack of understanding of how fundamental design aspects of the arrays may impact the information obtained from µECoG in different recording bands of interest and animal models. Utilizing thin-film fabrication to create novel experimental arrays and novel analysis techniques, the work in this dissertation provides an understanding of how differences in electrode contact size and spacing can impact neural metric acquisition in four experimentally and clinically relevant frequency bands of local field potential (LFP), high gamma (HGB), spike band power (SBP), and high frequency broadband (HFB). This dissertation provides innovative arrays that allow for experimental variation within a recording session, unlike much of the work previously published comparing contact size and pitch.
This dissertation presents my work designing, testing, and implementing novel μECoG arrays to explore how contact size and pitch may impact neural metrics in rodents and non-human primates (NHPs). In Chapter 2, I used a novel 60-channel array with four different contact size diameters in rodents to explore how contact size, impedance, and noise may impact the neural metrics we collect in auditory experiments. We determined that contact size may selectively play a role in neural metric information content acquisition, and that the factors of impedance and noise can impact them significantly in higher frequency bands. This work also showed the ability to resolve multi-unit spiking activity from the surface of the brain. In Chapter 3, I show results obtained using a 61-channel array with different contact pitch in rodents, clarifying how the spatial sampling of the neural field may be impacted by the pitch of the electrode contacts used. These results suggest that the neural field in higher frequency bands shows greater changes at shorter field lengths than in lower frequency bands. In Chapter 4, I utilized a larger 244-channel array in an NHP with varied contact sizes to explore how contact size may impact information content obtained from NHPs in the motor-related areas of the brain. Chapter 5 concludes the investigation of how design characteristics may impact neural information content by using an array with a local reference electrode contact to explore how local re-referencing can improve the neural metrics obtained.
The results from this dissertation provide a comprehensive understanding of how the information in the neural field may be impacted by the electrode designs chosen. The utilization of novel in-house fabricated arrays provides a method to explore these neuroscience questions rapidly and at low cost.
Item Open Access Appearance-based Gaze Estimation and Applications in Healthcare (2020) Chang, Zhuoqing. Gaze estimation, the ability to predict where a person is looking, has become an indispensable technology in healthcare research. Current tools for gaze estimation rely on specialized hardware and are typically used in well-controlled laboratory settings. Novel appearance-based methods directly estimate a person's gaze from the appearance of their eyes, making gaze estimation possible with ubiquitous, low-cost devices, such as webcams and smartphones. This dissertation presents new methods on appearance-based gaze estimation as well as applying this technology to solve challenging problems in practical healthcare applications.
One limitation of appearance-based methods is the need to collect a large amount of training data to learn the highly variant eye appearance space. To address this fundamental issue, we develop a method to synthesize novel images of the eye using data from a low-cost RGB-D camera and show that this data augmentation technique can improve gaze estimation accuracy significantly. In addition, we explore the potential of utilizing visual saliency information as a means to transparently collect weakly-labelled gaze data at scale. We show that the collected data can be used to personalize a generic gaze estimation model to achieve better performance on an individual.
In healthcare applications, the possibility of replacing specialized hardware with ubiquitous devices when performing eye-gaze analysis is a major asset that appearance-based methods bring to the table. In the first application, we assess the risk of autism in toddlers by analyzing videos of them watching a set of expert-curated stimuli on a mobile device. We show that appearance-based methods can be used to estimate their gaze position on the device screen and that differences between the autistic and typically-developing populations are significant. In the second application, we attempt to detect oculomotor abnormalities in people with cerebellar ataxia using video recorded from a mobile phone. By tracking the iris movement of participants while they watch a short video stimulus, we show that we are able to achieve high sensitivity and specificity in differentiating people with smooth pursuit oculomotor abnormalities from those without.
Item Open Access Applied Millimeter Wave Radar Vibrometry (2023) Centers, Jessica. In this dissertation, novel uses of millimeter-wave (mmW) radars are developed and analyzed. While automotive mmW radars have been ubiquitous in advanced driver assistance systems (ADAS), their ability to sense motions at sub-millimeter scale allows them to also find application in systems that require accurate measurements of surface vibrations. While laser Doppler vibrometers (LDVs) are routinely used to measure such vibrations, the lower size, weight, power, and cost (SWAPc) of mmW radars make vibrometry viable for a variety of new applications. In this work, we consider two such applications: everything-to-vehicle (X2V) wireless communications and non-acoustic human speech analysis.
Within this dissertation, a wireless communication system that uses the radar as a vibrometer is introduced. This system, termed vibrational radar backscatter communications (VRBC), receives messages by observing phase modulations on the radar signal that are caused by vibrations on the surface of a transponder over time. It is shown that this form of wireless communication provides the ability to simultaneously detect, isolate, and decode messages from multiple sources thanks to the spatial resolution of the radar. Additionally, VRBC requires no RF emission on the end of the transponder. Since automotive radars and the conventional X2V solutions are often at odds for spectrum allocations, this characteristic of VRBC is incredibly valuable.
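A toy sketch of the underlying demodulation idea (illustrative wavelength, vibration, and message; not the system built in this work): a surface displacement $d(t)$ in a fixed range bin appears in the slow-time samples as a phase $\phi(t) = 4\pi d(t)/\lambda$, which can be unwrapped and energy-detected symbol by symbol.

    import numpy as np

    rng = np.random.default_rng(0)
    lam = 3.9e-3                        # ~77 GHz wavelength (m), illustrative
    prf = 2000.0                        # slow-time sample rate (Hz)
    t = np.arange(0, 0.1, 1 / prf)      # 100 ms observation

    bits = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])      # hypothetical message
    sym = np.repeat(bits, t.size // bits.size)           # 10 symbols, on/off keyed
    d = 50e-6 * sym * np.sin(2 * np.pi * 400 * t)        # 50 um vibration when bit = 1

    # Slow-time samples of the transponder's range bin: phase-modulated return + noise
    rx = np.exp(1j * 4 * np.pi * d / lam)
    rx += 0.02 * (rng.standard_normal(t.size) + 1j * rng.standard_normal(t.size))

    phi = np.unwrap(np.angle(rx))                        # recovered surface motion
    energy = np.array([seg.var() for seg in np.split(phi, bits.size)])
    print((energy > energy.mean()).astype(int))          # recovers `bits` at this SNR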
Using an off-the-shelf, resonant transponder, a real VRBC data collection is presented and used to demonstrate the signal processing techniques necessary to decode a VRBC message. This real data collection achieves a data rate of just under 100 bps at a distance of approximately 5 meters. Rates of this scale can provide warning messages or concise situational awareness information in applications such as X2V, but higher rates are naturally desirable. For that reason, this dissertation includes discussion of how to design a more optimal VRBC system via transponder design, messaging scheme choice, and any afforded flexibility in radar parameter choice.
Through the use of an analytical upper bound on VRBC rate and simulation results, we see that rates closer to 1 kbps should be achievable for a transponder approximately the size of a license plate at ranges under 200 meters. The added benefits of requiring no RF spectrum or network scheduling protocols uniquely positions VRBC as a desirable solution in spaces like X2V over commonly considered, higher rate solutions such as direct short range communications (DSRC).
Upon implementing a VRBC system, a handful of complications were encountered. This document devotes a full chapter to addressing them. This includes properly modeling intersymbol interference caused by resonant surfaces and utilizing sequence detection methods rather than single-symbol maximum likelihood methods to improve detection in these cases. Additionally, an analysis of what an ideal clutter filter should look like and how it can begin to be achieved is presented. Lastly, a method for mitigating platform vibrational noise at both the radar and the transponder is presented. Using these methods, message detection errors are better avoided, though the achievable rates remain fundamentally limited by the overall system design.
Towards non-acoustic human speech analysis, it is shown in this dissertation that the vibrations of a person's throat during speech generation can be accurately captured using a mmW radar. These measurements prove to be similar to those achieved by the more expensive vibrometry alternative of an LDV, with less than 10 dB of SNR degradation at the first two speech harmonics in the signal's spectrogram. Furthermore, we find that mmW radar vibrometry data resembles a low-pass filtered version of its corresponding acoustic data. We show that this type of data achieves 53% performance in a speaker identification system as opposed to 11% in a speech recognition system. This performance suggests potential for mmW radar vibrometry in context-blind speaker identification systems, provided the performance of the speaker identification system can be further improved without making the context of the speech more recognizable.
In this dissertation, mmW radar vibrational returns are modelled and signal processing chains are provided to allow for these vibrations to be estimated and used in application. In many cases, the work outlined could be used in other areas of mmW radar vibrometry even though it was originally motivated by potentially unrelated applications. It is the hope of this dissertation that the provided models, signal processing methods, visualizations, analytical bound, and results not only justify mmW radar in human speech analysis and backscatter communications, but that they also contribute to the community's understanding of how certain vibrational movements can be best observed, processed, and made useful more broadly.
Item Open Access Architecture Framework for Trapped-ion Quantum Computer based on Performance Simulation Tool (2015) Ahsan, Muhammad. The challenge of building a scalable quantum computer lies in striking an appropriate balance between designing a reliable system architecture from a large number of faulty computational resources and improving the physical quality of system components. The detailed investigation of how performance varies with the physics of the components and the system architecture requires an adequate performance simulation tool. In this thesis we demonstrate a software tool capable of (1) mapping and scheduling the quantum circuit on a realistic quantum hardware architecture with physical resource constraints, (2) evaluating performance metrics such as the execution time and the success probability of the algorithm execution, and (3) analyzing the constituents of these metrics and visualizing resource utilization to identify system components which crucially define the overall performance.
Using this versatile tool, we explore the vast design space for a modular quantum computer architecture based on trapped ions. We find that while the success probability is uniformly determined by the fidelity of the physical quantum operations, the execution time is a function of the system resources invested at various layers of the design hierarchy. At the physical level, the number of lasers performing quantum gates impacts the latency of the fault-tolerant circuit block execution. When these blocks are used to construct meaningful arithmetic circuits such as quantum adders, the number of ancilla qubits for complicated non-Clifford gates and the entanglement resources needed to establish long-distance communication channels become the major performance-limiting factors. Next, in order to factorize large integers, these adders are assembled into the modular exponentiation circuit comprising the bulk of Shor's algorithm. At this stage, the overall scaling of resource-constrained performance with the size of the problem describes the effectiveness of the chosen design. By matching the resource investment with the pace of advancement in hardware technology, we find optimal designs for different types of quantum adders. In conclusion, we show that the 2,048-bit Shor's algorithm can be reliably executed within a resource budget of 1.5 million qubits.
Item Open Access Attack Countermeasure Trees: A Non-state-space Approach Towards Analyzing Security and Finding Optimal Countermeasure Set (2010) Roy, Arpan. Attack tree (AT) is one of the widely used non-state-space models in security analysis. The basic formalism of AT does not take into account defense mechanisms. Defense trees (DTs) have been developed to investigate the effect of defense mechanisms using measures such as attack cost, security investment cost, return on attack (ROA) and return on investment (ROI). DT, however, places defense mechanisms only at the leaf nodes, and the corresponding ROI/ROA analysis does not incorporate the probabilities of attack. In attack response trees (ART), attack and response are both captured, but ART suffers from the problem of state-space explosion, since the solution of ART is obtained by means of a state-space model. In this paper, we present a novel attack tree paradigm called attack countermeasure tree (ACT) which avoids the generation and solution of the state-space model and takes into account attacks as well as countermeasures (in the form of detection and mitigation events). In ACT, detection and mitigation are allowed not just at the leaf nodes but also at the intermediate nodes, while at the same time the state-space explosion problem is avoided in its analysis. We use single and multiobjective optimization to find optimal countermeasures under different constraints. We illustrate the features of ACT using several case studies.
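A minimal sketch of how success probabilities can be propagated through such a tree without a state-space model (the tree structure and numbers below are made up for illustration, not one of the paper's case studies): AND/OR gates combine the children's attack probabilities, and a countermeasure pair (detection, mitigation) attached to any node scales that node's success probability by $1 - p_{det}\,p_{mit}$, assuming independent events.

    # Toy ACT-style evaluation. A node is ("leaf", p), ("and", children) or
    # ("or", children); each child is a (node, countermeasure) pair, where the
    # countermeasure is None or a (p_detect, p_mitigate) tuple at that node.
    def p_success(node, cm=None):
        kind = node[0]
        if kind == "leaf":
            p = node[1]
        elif kind == "and":
            p = 1.0
            for child, child_cm in node[1]:
                p *= p_success(child, child_cm)
        else:  # "or"
            q = 1.0
            for child, child_cm in node[1]:
                q *= 1.0 - p_success(child, child_cm)
            p = 1.0 - q
        if cm is not None:
            p_det, p_mit = cm
            p *= 1.0 - p_det * p_mit      # countermeasure thwarts the attack event
        return p

    # Hypothetical goal: (A or B) and C, with a countermeasure covering C.
    tree = ("and", [
        (("or", [(("leaf", 0.6), None), (("leaf", 0.4), None)]), None),
        (("leaf", 0.7), (0.9, 0.8)),      # detection 0.9, mitigation 0.8 on C
    ])
    print(p_success(tree))                # ~0.149 with the countermeasure in place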
Item Open Access Automatic Volumetric Analysis of the Left Ventricle in 3D Apical Echocardiographs (2015) Wald, Andrew James. Apically-acquired 3D echocardiographs (echoes) are becoming a standard data component in the clinical evaluation of left ventricular (LV) function. Ejection fraction (EF) is one of the key quantitative biomarkers derived from echoes and used by echocardiographers to study a patient's heart function. In present clinical practice, EF is either grossly estimated by experienced observers, approximated using orthogonal 2D slices and Simpson's method, determined by manual segmentation of the LV lumen, or measured using semi-automatic proprietary software such as Philips QLab-3DQ. Each of these methods requires particular skill by the operator, and may be time-intensive, subject to variability, or both.
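For reference (the standard definition, not specific to this thesis), EF follows directly from the end-diastolic and end-systolic LV lumen volumes that any of the above methods ultimately provide,

    $EF = \frac{EDV - ESV}{EDV},$

so a fully automatic EF pipeline reduces to segmenting the LV lumen at end diastole and end systole in the 3D echo.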
To address this, I have developed a novel, fully automatic method for LV segmentation in 3D echoes that offers EF calculation on clinical datasets at the push of a button. The solution is built on a pipeline that utilizes a number of image processing and feature detection methods specifically adapted to the 3D ultrasound modality. It is designed to be reasonably robust at handling dropout and missing features typical in clinical echocardiography. It is hypothesized that this method can displace the need for sonographer input, yet provide results statistically indistinguishable from those of experienced sonographers using QLab-3DQ, the current gold standard employed at Duke University Hospital.
A pre-clinical validation set, which was also used for iterative algorithm development, consisted of 70 cases previously seen at Duke. Of these, manual segmentations of 7 clinical cases were compared to the algorithm. The final algorithm predicts EF within ± 0.02 ratio units for 5 of them, and within ± 0.09 units for the remaining 2 cases, within common clinical tolerance. Another 13 of the cases, often used for sonographer training and rated as having good image quality, were analyzed using QLab-3DQ, of which 11 cases showed concordance (± 0.10) with the algorithm. The remaining 50 cases, retrospectively recruited at Duke and representative of everyday image quality, showed 62% concordance (± 0.10) of QLab-3DQ with the algorithm. The fraction of concordant cases is highly dependent on image quality, and concordance improves greatly upon disqualification of poor-quality images. Visual comparison of the QLab-3DQ segmentation to my algorithm overlaid on top of the original echoes also suggests that my method may be preferable or of high utility even in cases of EF discordance. This paper describes the algorithm and offers justifications for the adopted methods. The paper also discusses the design of a retrospective clinical trial now underway at Duke with 60 additional unseen cases intended only for independent validation.
Item Open Access Autonomous Robot Packing of Complex-shaped Objects (2020) Wang, Fan. With the unprecedented growth of the E-Commerce market, robotic warehouse automation has attracted much interest and capital investment. Compared to a conventional labor-intensive approach, an automated robot warehouse brings potential benefits such as increased uptime, higher total throughput, and lower accident rates. To date, warehouse automation has mostly developed in inventory mobilization and object picking.
Recently, one area that has attracted a lot of research attention is automated packaging or packing, a process during which robots stow objects into small confined spaces, such as shipping boxes. Automatic item packing is complementary to item picking in warehouse settings. Packing items densely improves the storage capacity, decreases the delivery cost, and saves packing materials. However, it is a demanding manipulation task that has not been thoroughly explored by the research community.
This dissertation focuses on packing objects of arbitrary shapes and weights into a single shipping box with a robot manipulator. I seek to advance the state of the art in robot packing with regard to optimizing container size for a set of objects, planning object placements for stability and feasibility, and increasing the robustness of packing execution with a robot manipulator.
The three main innovations presented in this dissertation are:
1. The implementation of a constrained packing planner that outputs stable and collision-free placements of objects when packed with a robot manipulator. Experimental evaluation of the method is conducted with a realistic physical simulator on a dataset of scanned real-world items, demonstrating stable and high-quality packing plans compared with other 3D packing methods.
2. The proposal and implementation of a framework for evaluating the ability to pack a set of known items presented in an unknown order of arrival within a given container size. This allows packing algorithms to work in more realistic warehouse scenarios, as well as provides a means of optimizing container size to ensure successful packing under unknown item arrival order conditions.
3. The systematic evaluation of the proposed planner under real-world uncertainties such as vision, grasping, and modeling errors. To conduct this evaluation, I built a hardware and software packing testbed that is representative of the current state-of-the-art in sensing, perception, and planning. An evaluation of the testbed is then performed to study the error sources and to model their magnitude. Subsequently, robustness measures are proposed to improve the packing success rate under such errors.
Overall, empirical results demonstrate that a success rate of up to 98% can be achieved by a physical robot despite real-world uncertainties, demonstrating that these contributions have the potential to realize robust, dense automatic object packing.
Item Open Access Bayesian and Information-Theoretic Learning of High Dimensional Data (2012) Chen, Minhua. The concept of sparseness is harnessed to learn a low dimensional representation of high dimensional data. This sparseness assumption is exploited in multiple ways. In the Bayesian Elastic Net, a small number of correlated features are identified for the response variable. In the sparse Factor Analysis for biomarker trajectories, the high dimensional gene expression data is reduced to a small number of latent factors, each with a prototypical dynamic trajectory. In the Bayesian Graphical LASSO, the inverse covariance matrix of the data distribution is assumed to be sparse, inducing a sparsely connected Gaussian graph. In the nonparametric Mixture of Factor Analyzers, the covariance matrices in the Gaussian Mixture Model are forced to be low-rank, which is closely related to the concept of block sparsity.
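As one concrete instance of this sparseness assumption (the standard, non-Bayesian graphical lasso objective, stated here only for orientation; the thesis develops its Bayesian counterpart), a sparse inverse covariance $\boldsymbol{\Theta}$ is obtained from the sample covariance $\mathbf{S}$ by solving

    $\hat{\boldsymbol{\Theta}} = \arg\max_{\boldsymbol{\Theta} \succ 0}\; \log\det\boldsymbol{\Theta} - \mathrm{tr}(\mathbf{S}\boldsymbol{\Theta}) - \lambda \|\boldsymbol{\Theta}\|_1,$

where the $\ell_1$ penalty drives many off-diagonal entries of $\boldsymbol{\Theta}$ to zero and hence prunes edges of the Gaussian graph.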
Finally in the information-theoretic projection design, a linear projection matrix is explicitly sought for information-preserving dimensionality reduction. All the methods mentioned above prove to be effective in learning both simulated and real high dimensional datasets.
Item Open Access Bayesian Nonparametric Modeling of Latent Structures (2014) Xing, Zhengming. Unprecedented amounts of data have been collected in diverse fields such as social networks, infectious disease, and political science in this information-explosive era. The high dimensional, complex, and heterogeneous data impose tremendous challenges on traditional statistical models. Bayesian nonparametric methods address these challenges by providing models that can fit the data with growing complexity. In this thesis, we design novel Bayesian nonparametric models for datasets from three different fields: hyperspectral image analysis, infectious disease, and voting behavior.
First, we consider analysis of noisy and incomplete hyperspectral imagery, with the objective of removing the noise and inferring the missing data. The noise statistics may be wavelength-dependent, and the fraction of data missing (at random) may be substantial, including potentially entire bands, offering the potential to significantly reduce the quantity of data that need to be measured. We achieve this objective by employing a Bayesian dictionary learning model, considering two distinct means of imposing sparse dictionary usage and drawing the dictionary elements from a Gaussian process prior, thereby imposing structure on the wavelength dependence of the dictionary elements.
Second, a Bayesian statistical model is developed for analysis of the time-evolving properties of infectious disease, with a particular focus on viruses. The model employs a latent semi-Markovian state process, and the state-transition statistics are driven by three terms: (i) a general time-evolving trend of the overall population, (ii) a semi-periodic term that accounts for effects caused by the days of the week, and (iii) a regression term that relates the probability of infection to covariates (here, specifically, to the Google Flu Trends data).
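Schematically (paraphrasing the three terms above on a logistic scale; this is an illustrative form, not the paper's exact specification), the infection/state-transition probability at time $t$ can be thought of as

    $\mathrm{logit}\, p_t = f(t) + g(t \bmod 7) + \boldsymbol{\beta}^{\top}\mathbf{x}_t,$

with $f$ the overall population trend, $g$ the day-of-week effect, and $\mathbf{x}_t$ the covariates such as the Google Flu Trends signal.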
Third, extensive information on 3 million randomly sampled United States citizens is used to construct a statistical model of constituent preferences for each U.S. congressional district. This model is linked to the legislative voting record of the legislator from each district, yielding an integrated model for constituency data, legislative roll-call votes, and the text of the legislation. The model is used to examine the extent to which legislators' voting records are aligned with constituent preferences, and the implications of that alignment (or lack thereof) on subsequent election outcomes. The analysis is based on a Bayesian nonparametric formalism, with fast inference via a stochastic variational Bayesian analysis.
Item Open Access Calibrating and Beamforming Distributed Arrays in Passive Sonar Environments (2022) Ganti, Anil. This dissertation presents methods for calibrating and beamforming a distributed array for detecting and localizing sources of interest using passive sonar. Passive sonar is critical for underwater acoustic surveillance, marine life tracking, and environmental monitoring but is increasingly difficult with greater shipping traffic and other man-made noise sources. Large aperture hydrophone arrays are needed to suppress these sources of interference and find weak targets of interest. Traditionally, large hydrophone arrays are densely sampled uniform arrays which are expensive and time-consuming to deploy and maintain. There is growing interest instead in forming distributed arrays out of low-cost, individually small arrays which are coherently processed to achieve high gain and resolution. Conventional array processing methods are not well suited to this end and this dissertation develops new methods for array calibration and beamforming which ultimately enable high resolution passive sonar at low-cost. This work develops estimation methods for array parameters in uncalibrated, unsynchronized collections of acoustic sensors and also develops adaptive beamforming techniques on such arrays in complex and uncertain ocean environments.
Methods for estimating sampling rate offset (SRO) are developed using a single narrowband source of opportunity whose parameters need not be estimated. A search-free method which jointly estimates all SRO parameters in an acoustic sensor network is presented and shown to improve as the network size increases. A second SRO estimation method is developed for unsynchronized sub-arrays to enable SRO estimation with a source that has a bearing rate. This is of particular value in ocean environments where transiting cargo ships are the most prevalent calibration sources.
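To make the role of SRO concrete (a toy model with made-up numbers, not the estimators developed in this work): a relative clock offset $\delta$ makes a narrowband tone of frequency $f_0$ appear shifted by roughly $f_0\delta$ when the recording is interpreted at the nominal rate, so the offset can be read from the residual phase drift after demodulating against the nominal tone.

    import numpy as np

    rng = np.random.default_rng(0)
    f0, fs, T = 200.0, 1000.0, 60.0            # tone (Hz), nominal rate (Hz), duration (s)
    sro = 25e-6                                # 25 ppm clock offset (hypothetical)
    n = np.arange(int(T * fs))

    # A sensor whose ADC clock runs (1 + sro) fast records the tone at f0/(1 + sro).
    x = np.cos(2 * np.pi * f0 * n / (fs * (1 + sro))) + 0.1 * rng.standard_normal(n.size)

    # Demodulate against the nominal tone; the residual phase drifts linearly at
    # about -2*pi*f0*sro/fs radians per sample, from which sro is read off.
    baseband = x * np.exp(-2j * np.pi * f0 * n / fs)
    blk = 500                                  # block-average to suppress the 2*f0 term
    z = baseband[: (n.size // blk) * blk].reshape(-1, blk).mean(axis=1)
    phase = np.unwrap(np.angle(z))
    slope = np.polyfit(np.arange(z.size) * blk, phase, 1)[0]   # rad per sample
    print(-slope * fs / (2 * np.pi * f0))      # close to 25e-6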
Next, a technique for continuously estimating multiple sub-array positions using a single, tonal moving source is presented. Identical, well-calibrated sub-arrays with unknown relative positions exhibit a rotational invariance in the signal structure which is exploited to blindly estimate the inter-array spatial wavefronts. These wavefront measurements are used in an Unscented Kalman Filter (UKF) to continuously improve sub-array position estimates.
Lastly, this work addresses adaptive beamforming in uncertain, complex propagation environments where the modeled wavefronts will inevitably not match the true wavefronts. Adaptive beamforming techniques are developed which maintain gain even with significant signal mismatch due to unknown or uncertain source wavefronts by estimating a target-free covariance matrix from the received data and using just a single gain constraint in the beamformer optimization. Target-free covariances are estimated using an eigendecomposition of the received data and assuming that modes which potentially contain sources of interest can be identified. This method is applied to a distributed array where only part of the array wavefront is explicitly modeled and shown to improve interference suppression and the output signal-to-interference-plus-noise ratio (SINR).
This idea is then extended to realistic environments and a method for finding potential target components is developed. Blind source separation (BSS) methods using second-order statistics are adopted for wideband source separation in shallow-water environments. BSS components are associated with either target or interference based on their received temporal spectra and are automatically labeled with a convolutional neural network (CNN). This method is applicable when sources have overlapping but distinct transmitted spectra, but also when the channel itself colors the received spectra due to range-dependent frequency-selective fading. Simulations in realistic shallow-water environments demonstrate the ability to blindly separate and label uncorrelated components based on frequency-selective fading patterns. These simulations then validate the robustness of the developed wavefront adaptive sensing (WAS) beamformer compared to a standard minimum variance distortionless response (MVDR) beamformer. Finally, this method is demonstrated using real shallow-water data from the SWellEx96 S59 experiment off the coast of Southern California. A simulated target is injected into this data and masked behind a loud towed source. It is shown that the WAS beamformer is able to suppress the towed source and achieve a target output SINR which is close to that of the optimal beamformer.
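For reference, the standard MVDR weights used as the baseline above are (textbook form), with $\mathbf{R}$ the interference-plus-noise covariance and $\mathbf{a}$ the modeled wavefront (steering vector),

    $\mathbf{w}_{MVDR} = \frac{\mathbf{R}^{-1}\mathbf{a}}{\mathbf{a}^{H}\mathbf{R}^{-1}\mathbf{a}},$

which makes explicit why mismatch between the modeled and true wavefront degrades performance, and why the WAS approach instead estimates a target-free covariance and relaxes the explicit wavefront model.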
Item Open Access Classical Coding Approaches to Quantum Applications (2020) Rengaswamy, Narayanan. Quantum information science strives to leverage the quantum-mechanical nature of our universe in order to achieve large improvements in certain information processing tasks. Such tasks include quantum communications and fault-tolerant quantum computation. In this dissertation, we make contributions to both of these applications.
In deep-space optical communications, the mathematical abstraction of the binary phase shift keying (BPSK) modulated pure-loss optical channel is called the pure-state channel. It takes classical inputs and delivers quantum outputs that are pure (qubit) states. To achieve optimal information transmission, if classical error-correcting codes are employed over this channel, then one needs to develop receivers that collectively measure all output qubits in order to optimally identify the transmitted message. In general, it is hard to determine these optimal collective measurements and even harder to realize them in practice. So, current receivers first measure each qubit channel output and then classically post-process the measurements. This approach is sub-optimal. We investigate a recently proposed quantum algorithm for this task, which is inspired by classical belief-propagation algorithms, and analyze its performance on a simple $5$-bit code. We show that the algorithm makes optimal decisions for the value of each bit and it appears to achieve optimal performance when deciding the full transmitted message. We also provide explicit circuits for the algorithm in terms of standard gates. For deep-space optical communications, this suggests a near-term quantum advantage over the aforementioned sub-optimal scheme. Such a communication advantage appears to be the first of its kind.
Quantum error correction is vital to building a universal fault-tolerant quantum computer. An $[\![ n,k,d ]\!]$ quantum error-correcting code (QECC) protects $k$ information (or logical) qubits by encoding them into quantum states of $n > k$ physical qubits such that any undetectable error must affect at least $d$ physical qubits. In this dissertation we focus on stabilizer QECCs, which are the most widely used type of QECCs. Since we would like to perform universal (i.e., arbitrary) quantum computation on the $k$ logical qubits, an important problem is to determine fault-tolerant $n$-qubit physical operations that induce the desired logical operations. Our first contribution here is a systematic algorithm that can translate a given logical Clifford operation on a stabilizer QECC into all (equivalence classes of) physical Clifford circuits that realize that operation. We exploit binary symplectic matrices to make this translation efficient and call this procedure the Logical Clifford Synthesis (LCS) algorithm.
In order to achieve universality, a quantum computer also needs to implement at least one non-Clifford logical operation. We develop a mathematical framework for a large subset of diagonal (unitary) operations in the Clifford hierarchy, and we refer to these as Quadratic Form Diagonal (QFD) gates. We show that all $1$- and $2$-local diagonal gates in the hierarchy are QFD, and we rigorously derive their action on Pauli matrices. This framework of QFD gates includes many non-Clifford gates and could be of independent interest. Subsequently, we use the QFD formalism to characterize all $[\![ n,k,d ]\!]$ stabilizer codes whose code subspaces are preserved under the transversal action of $T$ and $T^{-1}$ gates on the $n$ physical qubits. The $T$ and $T^{-1}$ gates are among the simplest non-Clifford gates to engineer in the lab. By employing a "reverse LCS" strategy, we also discuss the logical operations induced by these physical gates. We discuss some important corollaries related to triorthogonal codes and the optimality of CSS codes with respect to $T$ and $T^{-1}$ gates. We also describe a few purely-classical coding problems motivated by physical constraints arising from fault-tolerance. Finally, we discuss several examples of codes and determine the logical operation induced by physical $Z$-rotations on a family of quantum Reed-Muller codes. A conscious effort has been made to keep this dissertation self-contained, by including necessary background material on quantum information and computation.
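A small sketch of the binary-symplectic bookkeeping behind the LCS algorithm (standard stabilizer formalism; the CNOT example is only illustrative): an $n$-qubit Clifford acts on Pauli $(x|z)$ vectors over GF(2) through a $2n \times 2n$ matrix $F$ that must preserve the symplectic form, $F\Omega F^{T} = \Omega \pmod 2$.

    import numpy as np

    n = 2
    I = np.eye(n, dtype=int)
    Z = np.zeros((n, n), dtype=int)
    Omega = np.block([[Z, I], [I, Z]])      # symplectic form on (x | z) vectors

    # Binary symplectic matrix induced by CNOT (control 1, target 2):
    #   X1 -> X1 X2,  X2 -> X2,  Z1 -> Z1,  Z2 -> Z1 Z2
    F_cnot = np.array([[1, 1, 0, 0],
                       [0, 1, 0, 0],
                       [0, 0, 1, 0],
                       [0, 0, 1, 1]])

    print(np.array_equal((F_cnot @ Omega @ F_cnot.T) % 2, Omega))   # True: symplectic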
Item Open Access Classification and Characterization of Heart Sounds to Identify Heart Abnormalities (2019) LaPorte, Emma. The main function of the human heart is to act as a pump, facilitating the delivery of oxygenated blood to the many cells within the body. Heart failure (HF) is the medical condition in which a heart cannot adequately pump blood to the body, often resulting from other conditions such as coronary artery disease, previous heart attacks, high blood pressure, diabetes, or abnormal heart valves. HF afflicts approximately 6.5 million adults in the US alone [1] and often manifests itself as fatigue, shortness of breath, increased heart rate, confusion, and more, resulting in a lower quality of life for those afflicted. At the earliest stage of HF, an adequate treatment plan could be relatively manageable, including healthy lifestyle changes such as eating better and exercising more. However, the symptoms (and the heart) worsen over time if left untreated, requiring more extreme treatment such as surgical intervention and/or a heart transplant [2]. Given the magnitude of this condition, there is potential for large impact both in (1) automating (and thus expediting) the diagnosis of HF and (2) improving HF treatment options and care. These topics are explored in this work.
An early diagnosis of HF is beneficial because HF left untreated results in an increasingly severe condition, requiring more extreme treatment and care. Typically, HF is first diagnosed by a physician during auscultation, which is the act of listening to sounds from the heart through a stethoscope [3]. Therefore, physicians are trained to listen to heart sounds and identify them as normal or abnormal. Heart sounds are the acoustic result of the internal pumping mechanism of the heart. When the heart is functioning normally, the resulting acoustic spectrum represents normal heart sounds, which a physician listens to and identifies as normal. However, when the heart is functioning abnormally, the resulting acoustic spectrum differs from that of normal heart sounds, and a physician identifies it as abnormal [3]–[5].
One goal of this work is to automate the auscultation process by developing a machine learning algorithm to identify heart sounds as normal or abnormal. An algorithm is developed for this work that extracts features from a digital stethoscope recording and classifies the recording as normal or abnormal. An extensive feature extraction and selection analysis is performed, ultimately resulting in a classification algorithm with an accuracy score of 0.85. This accuracy score is comparable to current high-performing heart sound classification algorithms [6].
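A minimal sketch of such a pipeline (hypothetical frequency bands, features, and classifier; not the specific feature set or model selected in this work) is shown below to make the feature-extraction-plus-classification structure concrete.

    import numpy as np
    from scipy.signal import butter, sosfilt
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    fs = 2000  # Hz, assumed digital-stethoscope sampling rate

    def features(x):
        """Hypothetical feature vector: log energy in a few bands plus simple statistics."""
        feats = []
        for lo, hi in [(25, 45), (45, 80), (80, 200), (200, 400)]:
            sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
            y = sosfilt(sos, x)
            feats.append(np.log(np.mean(y ** 2) + 1e-12))
        feats += [np.std(x), np.mean(np.abs(np.diff(x)))]
        return np.array(feats)

    # X_raw: list of 1-D recordings, y: 0 = normal, 1 = abnormal (placeholder data here)
    rng = np.random.default_rng(0)
    X_raw = [rng.standard_normal(10 * fs) for _ in range(40)]
    y = rng.integers(0, 2, size=40)

    X = np.vstack([features(x) for x in X_raw])
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    print(cross_val_score(clf, X, y, cv=5).mean())   # accuracy on the placeholder data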
The purpose of the first portion of this work is to automate the HF diagnosis process, allowing for more frequent diagnosis at an earlier stage of HF. For an individual already diagnosed with HF, there is potential to improve current treatment and care. Specifically, if the HF is extreme, an individual may require a surgically implanted medical device called a Left Ventricular Assist Device (LVAD). The purpose of an LVAD is to assist the heart in pumping blood when the heart cannot adequately do so on its own. Although life-saving, LVADs have a high complication rate, and these complications are difficult to identify prior to a catastrophic outcome. Therefore, there is a need to monitor LVAD patients to identify these complications. Current methods of monitoring individuals and their LVADs are invasive or require an in-person hospital visit. Acoustical monitoring has the potential to non-invasively and remotely monitor LVAD patients to identify abnormalities at an earlier stage. However, this is made difficult because the LVAD pump noise obscures the acoustic spectrum of the native heart sounds.
The second portion of this work focuses on this specific case of HF, in which an individual’s treatment plan includes an LVAD. A signal processing pipeline is proposed to extract the heart sounds in the presence of the LVAD pump noise. The pipeline includes down sampling, filtering, and a heart sound segmentation algorithm to identify states of the cardiac cycle: S1, S2, systole, and diastole. These states are validated using two individuals’ digital stethoscope recordings by comparing the labeled states to the characteristics expected of heart sounds. Both subjects’ labeled states closely paralleled the expectations of heart sounds, validating the signal processing pipeline developed for this work.
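A sketch of the front end of such a pipeline (illustrative sampling rates and cutoffs; the S1/systole/S2/diastole segmentation itself is only represented here by the envelope on which it would operate):

    import numpy as np
    from scipy.signal import decimate, butter, sosfiltfilt

    def preprocess(x, fs_in=44100, fs_out=1050, band=(25.0, 400.0)):
        """Down-sample a stethoscope recording and band-pass it to the heart-sound band."""
        q = int(round(fs_in / fs_out))
        y = decimate(x, q, ftype="fir", zero_phase=True)     # anti-aliased down-sampling
        fs = fs_in / q
        sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
        return sosfiltfilt(sos, y), fs                       # zero-phase band-pass

    def envelope(y, fs, win_s=0.02):
        """Smoothed amplitude envelope on which cardiac states would be segmented."""
        w = max(1, int(win_s * fs))
        return np.convolve(np.abs(y), np.ones(w) / w, mode="same")

    rng = np.random.default_rng(0)
    x = rng.standard_normal(10 * 44100)      # placeholder for a 10 s recording
    y, fs = preprocess(x)
    print(fs, envelope(y, fs).shape)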
This exploratory analysis can be furthered with the ongoing data collection process. With enough data, the goal is to extract clinically relevant information from the underlying heart sounds to assess cardiac function and identify LVAD dysfunction prior to a catastrophic outcome. Ultimately, this non-invasive, remote model will allow for earlier diagnosis of LVAD complications.
In total, this work serves two main purposes: the first is developing a machine learning algorithm that automates the HF diagnosis process; the second is extracting heart sounds in the presence of LVAD noise. Both of these topics further the goal of earlier diagnosis and therefore better outcomes for those afflicted with HF.