Browsing by Subject "Electrical engineering"
Item Open Access: A CG-FFT Based Fast Full Wave Imaging Method and its Potential Industrial Applications (2015), Yu, Zhiru
This dissertation focuses on an FFT-based forward EM solver and its application to inverse problems. The main contributions of this work are twofold. On the one hand, it presents the first scaled lab experiment system in the oil and gas industry for through-casing hydraulic fracture evaluation. This system is established to validate the feasibility of contrast-enhanced fracture evaluation. On the other hand, this work proposes an FFT-based VIE solver for hydraulic fracture evaluation; such an efficient solver is needed for the numerical analysis of this problem. The solver is then generalized to accommodate scattering simulations for anisotropic inhomogeneous magnetodielectric objects. The inverse problem for anisotropic objects is also studied.
Before going into the details of specific applications, some background knowledge is presented. The dissertation starts with an introduction to inverse problems, then discusses algorithms for the forward and inverse problems. The discussion of the forward problem focuses on the VIE formulation and a frequency-domain solver; the discussion of inverse problems focuses on iterative methods.
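The defining trait of CG-FFT-family solvers is that, on a uniform grid, the VIE operator splits into a diagonal contrast term plus a convolution with the background Green's function, so every matrix-vector product inside the iterative solver costs O(N log N) via the FFT. Below is a minimal one-dimensional sketch of that structure, with a made-up kernel and contrast profile rather than an actual electromagnetic Green's function:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, bicgstab

n = 256
grid = np.arange(n)

# Hypothetical convolution kernel standing in for the background Green's function.
r = np.abs(grid - n // 2)
green = 0.05 * np.exp(1j * 0.3 * r) / (1.0 + r)

# Contrast function chi: nonzero only on the scatterer's support (toy values).
chi = np.zeros(n, dtype=complex)
chi[100:150] = 1.5

G_hat = np.fft.fft(np.fft.ifftshift(green))  # kernel spectrum, precomputed once

def vie_matvec(u):
    """Apply (I - G*chi) in O(n log n): the core of a CG-FFT/BCGS-FFT solver."""
    return u - np.fft.ifft(G_hat * np.fft.fft(chi * u))

A = LinearOperator((n, n), matvec=vie_matvec, dtype=complex)
e_inc = np.exp(1j * 0.3 * grid)          # incident field sampled on the grid
e_tot, info = bicgstab(A, e_inc)         # iterative solve, one FFT pair per step
print(info, np.linalg.norm(A.matvec(e_tot) - e_inc))
```

The same structure carries over to 3D and to the mixed-order basis expansions discussed below; only the kernel and the diagonal term change.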
The rest of the dissertation is organized by the two categories of inverse problems, namely the inverse source problem and the inverse scattering problem.
The inverse source problem is studied via an application in microelectronics. In this application, an FFT-based inverse source solver is applied to process near-field data obtained by near-field scanners. Examples show that, with the help of this inverse source solver, the resolution of unknown current source images on a device under test is greatly improved. This improvement in resolution, in turn, gives the near-field scan system more flexibility.
Both the forward and inverse solvers for inverse scattering problems are studied in detail. As a forward solver for inverse scattering problems, a fast FFT-based method for solving the VIE of magnetodielectric objects with large electromagnetic contrasts is presented, motivated by the increasing interest in contrast-enhanced full wave EM imaging. This newly developed VIE solver assigns basis functions of different orders to expand the flux densities and vector potentials; it is therefore called the mixed-order BCGS-FFT method. The mixed-order BCGS-FFT method retains the benefits of high-order basis functions for the VIE while keeping the correct boundary conditions for flux densities and vector potentials. Examples show that this method performs excellently on both isotropic and anisotropic objects with high contrasts, and that it is valid at both high and low frequencies. Based on the mixed-order BCGS-FFT method, an inverse scattering solver for anisotropic objects is studied. The inverse problem is formulated and solved by the variational Born iterative method. An example in this section shows a successful inversion of an anisotropic magnetodielectric object.
Finally, a lab-scale hydraulic fracture evaluation system for oil/gas reservoirs, based on the previously discussed inverse solver, is presented. This system has been set up to verify the numerical results obtained from the previously described solvers. These scaled experiments verify the accuracy of the forward solver as well as the performance of the inverse solver. Examples show that the inverse scattering model is able to evaluate contrast-enhanced hydraulic fractures in a shale formation. Furthermore, this system, for the first time in the oil and gas industry, verifies that hydraulic fractures can be imaged through a metallic casing.
Item Open Access: A Hybrid Spectral-Element / Finite-Element Time-Domain Method for Multiscale Electromagnetic Simulations (2010), Chen, Jiefu
In this study we propose a fast hybrid spectral-element time-domain (SETD) / finite-element time-domain (FETD) method for transient analysis of multiscale electromagnetic problems, in which electrically fine structures with details much smaller than a typical wavelength coexist with electrically coarse structures comparable to or larger than a typical wavelength.
Simulations of multiscale electromagnetic problems, such as electromagnetic interference (EMI), electromagnetic compatibility (EMC), and electronic packaging, can be very challenging for conventional numerical methods. In terms of spatial discretization, conventional methods use a single mesh for the whole structure, so the high discretization density required to capture the geometric characteristics of electrically fine structures inevitably leads to a large number of wasted unknowns in the electrically coarse parts. This issue becomes especially severe for the orthogonal grids used by the popular finite-difference time-domain (FDTD) method. In terms of temporal integration, dense meshes in electrically fine domains force the time step size to be extremely small for numerical methods with explicit time-stepping schemes. Implicit schemes can bypass the stability criterion imposed by the Courant-Friedrichs-Lewy (CFL) condition. However, due to the large system matrices generated by conventional methods, it is almost impossible to apply implicit time-stepping to the whole structure.
To address these challenges, we propose an efficient hybrid SETD/FETD method for transient electromagnetic simulations that takes advantage of the strengths of these two methods while avoiding their weaknesses in multiscale problems. More specifically, a multiscale structure is divided into several subdomains based on the electrical size of each part, and a hybrid spectral-element / finite-element scheme is proposed for spatial discretization. The hexahedron-based spectral elements with higher interpolation degrees are efficient in modeling electrically coarse structures, and the tetrahedron-based finite elements with lower interpolation degrees are flexible in discretizing electrically fine structures with complex shapes. A non-spurious finite element method (FEM) as well as a non-spurious spectral element method (SEM) is proposed to make the hybrid SEM/FEM discretization work. For time integration we employ hybrid implicit / explicit (IMEX) time-stepping schemes, where explicit schemes are used for electrically coarse subdomains discretized by coarse spectral element meshes, and implicit schemes are used to overcome the CFL limit for electrically fine subdomains discretized by dense finite element meshes. Numerical examples show that the proposed hybrid SETD/FETD method is free of spurious modes, is flexible in discretizing sophisticated structures, and is more efficient than conventional methods for multiscale electromagnetic simulations.
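For reference, the CFL stability limit that the implicit subdomains are built to escape takes the following form on a standard FDTD grid (the precise constant for spectral or finite elements depends on element shape and interpolation order):

```latex
\Delta t \;\le\; \frac{1}{c\,\sqrt{\dfrac{1}{\Delta x^{2}} + \dfrac{1}{\Delta y^{2}} + \dfrac{1}{\Delta z^{2}}}}
```

So refining the smallest cells of a mesh by a factor of ten forces an explicit scheme to take roughly ten times as many time steps, which is exactly the cost that confining implicit stepping to the finely meshed subdomains avoids.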
Item Open Access: A Molecular-scale Programmable Stochastic Process Based On Resonance Energy Transfer Networks: Modeling And Applications (2016), Wang, Siyang
While molecular and cellular processes are often modeled as stochastic processes, such as Brownian motion, chemical reaction networks, and gene regulatory networks, there are few attempts to program a molecular-scale process to physically implement stochastic processes. DNA has been used as a substrate for programming molecular interactions, but its applications have been restricted to deterministic functions, and unfavorable properties such as slow processing, thermal annealing, aqueous solvents, and difficult readout limit them to proof-of-concept purposes. To date, whether there exists a molecular process that can be programmed to implement stochastic processes for practical applications has remained unknown.
In this dissertation, a fully specified Resonance Energy Transfer (RET) network between chromophores is accurately fabricated via DNA self-assembly, and the exciton dynamics in the RET network physically implement a stochastic process, specifically a continuous-time Markov chain (CTMC), which has a direct mapping to the physical geometry of the chromophore network. Excited by a light source, a RET network generates random samples in the temporal domain in the form of fluorescence photons which can be detected by a photon detector. The intrinsic sampling distribution of a RET network is derived as a phase-type distribution configured by its CTMC model. The conclusion is that the exciton dynamics in a RET network implement a general and important class of stochastic processes that can be directly and accurately programmed and used for practical applications of photonics and optoelectronics. Different approaches to using RET networks exist with vast potential applications. As an entropy source that can directly generate samples from virtually arbitrary distributions, RET networks can benefit applications that rely on generating random samples such as 1) fluorescent taggants and 2) stochastic computing.
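To make the CTMC mapping concrete, here is a toy Gillespie-style simulation of exciton hopping on a hypothetical three-chromophore network with an absorbing emission state; the rate matrix below is invented for illustration and is not one of the dissertation's fabricated networks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Off-diagonal entries are transfer rates (1/ns) between chromophore sites;
# state 3 is the absorbing "fluorescence photon emitted" state.
Q = np.array([[0.0, 2.0, 0.5, 1.0],
              [1.0, 0.0, 3.0, 0.2],
              [0.1, 1.5, 0.0, 4.0],
              [0.0, 0.0, 0.0, 0.0]])  # illustrative rates only

def photon_arrival_time(start=0, absorbing=3):
    """One sample from the phase-type distribution defined by the CTMC."""
    t, s = 0.0, start
    while s != absorbing:
        rates = Q[s]
        total = rates.sum()
        t += rng.exponential(1.0 / total)            # exponential holding time
        s = rng.choice(len(rates), p=rates / total)  # jump to the next site
    return t

samples = [photon_arrival_time() for _ in range(10_000)]
print(np.mean(samples))  # empirical mean of the temporal photon distribution
```

The network geometry enters only through the rate matrix, which is what makes the sampling distribution directly programmable.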
By using RET networks between chromophores to implement fluorescent taggants with temporally coded signatures, the taggant design is not constrained by the number of resolvable dyes and has a significantly larger coding capacity than spectrally or lifetime coded fluorescent taggants. Meanwhile, the taggant detection process becomes highly efficient, and the Maximum Likelihood Estimation (MLE) based taggant identification guarantees high accuracy even with only a few hundred detected photons.
Meanwhile, RET-based sampling units (RSUs) can be constructed to accelerate probabilistic algorithms for a wide range of applications in machine learning and data analytics. Because probabilistic algorithms often rely on iteratively sampling from parameterized distributions, they can be inefficient in practice on the deterministic hardware traditional computers use, especially for high-dimensional and complex problems. As an efficient universal sampling unit, the proposed RSU can be integrated into a processor / GPU as specialized functional units or organized as a discrete accelerator to bring substantial speedups and power savings.
Item Open Access: A Semi-Empirical Monte Carlo Method of Organic Photovoltaic Device Performance in Resonant, Infrared, Matrix-Assisted Pulsed Laser Evaporation (RIR-MAPLE) Films (2015), Atewologun, Ayomide
Utilizing the power of Monte Carlo simulations, a novel, semi-empirical method for investigating the performance of organic photovoltaics (OPVs) in resonant infrared, matrix-assisted pulsed laser evaporation (RIR-MAPLE) films is explored. Emulsion-based RIR-MAPLE offers a unique and powerful alternative to solution processing in depositing organic materials for use in solar cells: in particular, its usefulness in controlling the nanoscale morphology of organic thin films and the potential for creating novel hetero-structures make it a suitable experimental backdrop for investigating trends through simulation and gaining a better understanding of how different thin film characteristics impact OPV device performance.
The work presented in this dissertation explores the creation of a simulation tool that relies heavily on measurable properties of RIR-MAPLE films that impact efficiency, and that can be used to inform film deposition and dictate paths for future improvements in OPV devices. The original nanoscale implementation of the Monte Carlo method for investigating OPV performance is transformed to enable direct comparison between simulated and experimental external quantum efficiency results. Next, a unique microscale formulation of the Dynamic Monte Carlo (DMC) model is developed based on the observable, fundamental differences between the morphologies of RIR-MAPLE and solution-processed bulk heterojunction (BHJ) films. This microscale model enables us to examine the sensitivity of device performance to various structural and electronic properties of the devices. Specifically, using confocal microscopy, we obtain an average microscale feature size for the RIR-MAPLE P3HT:PC61BM (1:1) BHJ system that represents a strategic starting point for utilizing the DMC as an empirical tool.
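The kernel of a DMC model of this kind is typically an exciton random walk in which Förster-type hops compete with radiative decay until the exciton either decays or dissociates at a donor/acceptor interface. The following toy sketch illustrates that competition on a random binary morphology; every parameter value is invented for illustration, not a fitted RIR-MAPLE quantity:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50                      # lattice sites per side (toy morphology)
tau = 0.5                   # exciton lifetime, ns (illustrative)
R0, a = 2.0, 1.0            # Forster radius and lattice constant, nm (illustrative)
phase = rng.integers(0, 2, size=(N, N, N))   # 0 = donor, 1 = acceptor
steps = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]

def exciton_dissociates(pos):
    """Walk one exciton until radiative decay or arrival at an interface."""
    k_hop = (1.0 / tau) * (R0 / a) ** 6      # Forster rate to each neighbor
    while True:
        k_tot = 6 * k_hop + 1.0 / tau        # total event rate
        if rng.random() < (1.0 / tau) / k_tot:
            return False                     # decayed before dissociating
        step = steps[rng.integers(6)]
        new = tuple((p + s) % N for p, s in zip(pos, step))
        if phase[new] != phase[pos]:
            return True                      # crossed a donor/acceptor interface
        pos = new

frac = np.mean([exciton_dissociates((N // 2,) * 3) for _ in range(2000)])
print(f"fraction of excitons reaching an interface: {frac:.2f}")
```

A semi-empirical version of such a model replaces the random morphology and toy rates with measured feature sizes and material parameters, which is the role the confocal-microscopy feature size plays above.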
Building on this, the RIR-MAPLE P3HT:PC61BM OPV system is studied using input simulation parameters obtained from films with different material ratios and overall device structures, based on characterization techniques such as grazing incidence-wide angle X-ray scattering (GI-WAXS) and X-ray photoelectron spectroscopy (XPS). The results from the microscale DMC simulation compare favorably to experimental data and allow us to articulate a well-informed critique of the strengths and limitations of the model as a predictive tool. The DMC is then used to analyze a different RIR-MAPLE BHJ system, PCPDTBT:PC71BM, for which the effect of the deposition technique itself is investigated by varying the primary solvents used during film deposition.
Finally, a multi-scale DMC model is introduced where morphology measurements taken at two different size scales, as well as structural and electrical characterization, provide a template that mimics the operation of OPVs. This final, semi-empirical tool presents a unique simulation opportunity for exploring the different properties of RIR-MAPLE deposited OPVs, their effects on OPV performance and potential design routes for improving device efficiencies.
Item Open Access: Accelerated Sepsis Diagnosis by Seamless Integration of Nucleic Acid Purification and Detection (2014), Hsu, BangNing
Background: The diagnosis of sepsis is challenging because the infection can be caused by more than 50 species of pathogens that might exist in the bloodstream in very low concentrations, e.g., less than 1 colony-forming unit/ml. As a result, the current sepsis diagnostic methods suffer from an unsatisfactory trade-off between assay time and the specificity of the derived diagnostic information. Although the present qPCR-based test is more specific than biomarker detection and faster than culturing, its 6-10 hr turnaround remains suboptimal relative to the rapid 7.6%/hr deterioration of the survival rate, and the 3 hr hands-on time is labor-intensive. To address these issues, this work aims to utilize advances in microfluidic technologies to expedite and automate the "nucleic acid purification - qPCR sequence detection" workflow.
Methods and Results: This task is evaluated to be best approached by combining immiscible phase filtration (IPF) and digital microfluidic droplet actuation (DM) on a fluidic device. In IPF, as nucleic acid-bound magnetic beads are transported from an aqueous phase to an immiscible phase, the carryover of aqueous contaminants is minimized by the high interfacial tension. Thus, unlike in a conventional bead-based assay, the necessary degree of purification can be attained in a few wash steps. After IPF reduces the sample volume from a milliliter-sized lysate to a microliter-sized eluent, DM can be used to automatically prepare the PCR mixture. This begins with partitioning the eluent according to the desired number of multiplex qPCR reactions, and then transporting droplets of the PCR reagents to mix with the eluent droplets. Under the outlined approach, the IPF - DM integration should lead to a notably reduced turnaround and a hands-free "lysate-to-answer" operation.
As the first step towards such a diagnostic device, the primary objective of this thesis is to verify the feasibility of the IPF - DM integration. This is achieved in four phases. First, the suitable assays, fluidic device, and auxiliary systems are developed. Second, the extent of purification obtained per IPF wash, and hence the number of washes needed for uninhibited qPCR, are estimated via off-chip UV absorbance measurement and on-chip qPCR. Third, the performance of on-chip qPCR, particularly the copy number - threshold cycle correlation, is characterized. Lastly, the above developments culminate in an experiment that includes the following on-chip steps: DNA purification by IPF, PCR mixture preparation via DM, and target quantification using qPCR - thereby demonstrating the core procedures in the proposed approach.
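For context, the copy number - threshold cycle correlation characterized here is the standard qPCR calibration curve: for an amplification efficiency E, the threshold cycle C_t falls linearly with the logarithm of the initial template copy number N_0,

```latex
C_t = -\frac{1}{\log_{10}(1+E)}\,\log_{10} N_0 \;+\; b
```

so a perfectly efficient reaction (E = 1) yields the familiar slope of roughly -3.32 cycles per tenfold increase in template.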
Conclusions: It is proposed to expedite and automate qPCR-based multiplex sparse pathogen detection by combining IPF and DM on a fluidic device. As a start, this work demonstrated the feasibility of the IPF - DM integration. However, a more thermally robust device structure will be needed for later quantitative investigations, e.g., improving the bead - buffer mixing. Importantly, evidence indicates that future iterations of the IPF - DM fluidic device could reduce the sample-to-answer time by 75% to 1.5 hr and decrease the hands-on time by 90% to approximately 20 min.
Item Open Access: Accurate and Efficient Methods for the Scattering Simulation of Dielectric Objects in a Layered Medium (2019), Huang, Weifeng
Electromagnetic scattering in a layered medium (LM) is important for many engineering applications, including hydrocarbon exploration. Various computational methods for tackling well logging simulations are summarized. Given their respective advantages and limitations, the main attention is devoted to the surface integral equation (SIE) and its hybridization with the finite element method (FEM).
The thin dielectric sheet (TDS) based SIE, i.e., TDS-SIE, is introduced for the simulation of fractures. Its accuracy and efficiency are extensively demonstrated by simulating both conductive and resistive fractures. Fractures of varying apertures, conductivities, dipping angles, and extensions are also simulated and analyzed. With the aid of layered medium Green's functions (LMGFs), TDS-SIE is extended into the LM, which results in the solver named LM-TDS-SIE.
In order to account for the borehole effect, the well-known loop and tree basis functions are utilized to overcome the low-frequency breakdown of the Poggio, Miller, Chang, Harrington, Wu, and Tsai (PMCHWT) formulation. This leads to the loop-tree (LT) enhanced PMCHWT, which can be hybridized with TDS-SIE to simulate the borehole and fracture together. The resulting solver, referred to as LT-TDS, is further extended into the LM, which leads to the solver named LM-LT-TDS.
For inhomogeneous or complex structures, the SIE is not suitable for scattering simulations. It becomes advantageous to hybridize the FEM with the SIE in the framework of the domain decomposition method (DDM), which allows independent treatment of each subdomain and nonconformal meshes between subdomains. This hybridization can be substantially enhanced by the adoption of LMGFs and loop-tree bases, leading to the solver named LM-LT-DDM. In comparison with LM-LT-TDS, this solver is more powerful and able to handle more general low-frequency scattering problems in layered media.
Item Open Access: Adaptive Brain-Computer Interface Systems For Communication in People with Severe Neuromuscular Disabilities (2016), Mainsah, Boyla O.
Brain-computer interfaces (BCIs) have the potential to restore communication or control abilities in individuals with severe neuromuscular limitations, such as those with amyotrophic lateral sclerosis (ALS). The role of a BCI is to extract and decode relevant information that conveys a user's intent directly from brain electro-physiological signals and translate this information into executable commands to control external devices. However, the BCI decision-making process is error-prone due to noisy electro-physiological data, representing the classic problem of efficiently transmitting and receiving information via a noisy communication channel.
This research focuses on P300-based BCIs which rely predominantly on event-related potentials (ERP) that are elicited as a function of a user's uncertainty regarding stimulus events, in either an acoustic or a visual oddball recognition task. The P300-based BCI system enables users to communicate messages from a set of choices by selecting a target character or icon that conveys a desired intent or action. P300-based BCIs have been widely researched as a communication alternative, especially in individuals with ALS who represent a target BCI user population. For the P300-based BCI, repeated data measurements are required to enhance the low signal-to-noise ratio of the elicited ERPs embedded in electroencephalography (EEG) data, in order to improve the accuracy of the target character estimation process. As a result, BCIs have relatively slower speeds when compared to other commercial assistive communication devices, and this limits BCI adoption by their target user population. The goal of this research is to develop algorithms that take into account the physical limitations of the target BCI population to improve the efficiency of ERP-based spellers for real-world communication.
In this work, it is hypothesised that building adaptive capabilities into the BCI framework can potentially give the BCI system the flexibility to improve performance by adjusting system parameters in response to changing user inputs. The research in this work addresses three potential areas for improvement within the P300 speller framework: information optimisation, target character estimation and error correction. The visual interface and its operation control the method by which the ERPs are elicited through the presentation of stimulus events. The parameters of the stimulus presentation paradigm can be modified to modulate and enhance the elicited ERPs. A new stimulus presentation paradigm is developed in order to maximise the information content that is presented to the user by tuning stimulus paradigm parameters to positively affect performance. Internally, the BCI system determines the amount of data to collect and the method by which these data are processed to estimate the user's target character. Algorithms that exploit language information are developed to enhance the target character estimation process and to correct erroneous BCI selections. In addition, a new model-based method to predict BCI performance is developed, an approach which is independent of stimulus presentation paradigm and accounts for dynamic data collection. The studies presented in this work provide evidence that the proposed methods for incorporating adaptive strategies in the three areas have the potential to significantly improve BCI communication rates, and the proposed method for predicting BCI performance provides a reliable means to pre-assess BCI performance without extensive online testing.
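As a hedged illustration of the adaptive estimation strategy described above (not the dissertation's exact algorithm), the sketch below updates a Bayesian posterior over candidate characters after each stimulus flash, seeds it with a toy language-model prior, and stops data collection early once one character is sufficiently probable; all score distributions and thresholds are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
chars = list("ABCDEFGH")
lm_prior = np.array([0.3, 0.2, 0.1, 0.1, 0.1, 0.1, 0.05, 0.05])  # toy language model

def classifier_score(is_target):
    # toy ERP classifier output: target flashes score higher on average
    return rng.normal(1.0 if is_target else 0.0, 1.0)

def spell_one(target, threshold=0.95, max_flashes=300):
    post = lm_prior.copy()
    n = 0
    for n in range(1, max_flashes + 1):
        i = rng.integers(len(chars))             # character flashed this round
        y = classifier_score(chars[i] == target)
        # Bayes update: likelihood of y if chars[i] is the target vs. is not
        like = np.full(len(chars), np.exp(-0.5 * y ** 2))   # non-target density
        like[i] = np.exp(-0.5 * (y - 1.0) ** 2)             # target density
        post *= like
        post /= post.sum()
        if post.max() > threshold:               # dynamic stopping rule
            break
    return chars[post.argmax()], n

print(spell_one("B"))  # e.g. ('B', 23): fewer flashes when the evidence is strong
```

The dynamic-stopping threshold is what trades speed against accuracy, which is the lever an adaptive speller tunes per user.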
Item Open Access: Addressing Scalability, Stability, and Sensitivity in Nanomaterial-based Electronic Biosensors (2024), Albarghouthi, Faris Maher
The inaccessibility of medical facilities in remote locations has pushed for scientific advances in the development of point-of-care (POC) systems that would enable timely and accurate detection of diseases. Currently, diagnostic tests rely heavily on the use of centralized primary care facilities due to the need for expensive, bulky, and complex diagnostic tools, leading to greater disparity in outlook between individuals with access to centralized medical facilities and those in remote or underserved locations. Electrical biosensors, including transistor-based devices (i.e., BioFETs), have the potential to offer versatile biomarker detection in a simple, low-cost, and POC manner. Semiconducting carbon nanotubes (CNTs) are among the most explored nanomaterial candidates for BioFETs owing to their high electrical sensitivity and compatibility with diverse fabrication approaches such as direct-write printing, which enables rapid and low-cost fabrication of electronics. However, while CNT-based BioFETs have the power to transform the medical landscape by providing early and POC disease detection, they face several key challenges that have hindered their mass proliferation as diagnostic tools. Among these, challenges in scalability, stability, and sensitivity present a significant hindrance to their utility.
The work presented in this dissertation focuses on tackling these three key BioFET issues. Scalability has been a persistent issue for printed BioFETs: to date, it has been difficult to achieve down-scaled printing of the transistor (the transduction component of the BioFET), particularly to submicron dimensions, due to the limited resolution of current printing techniques (10-30 µm). Overcoming this limitation is important because a large device area means that large patient samples (e.g., blood) are required, with a greater likelihood of differences in signal across various regions of the BioFET, preventing accurate detection. In this dissertation, a novel capillary flow printing (CFP) technique is used to repeatably create fully printed submicron carbon nanotube thin-film transistors (CNT-TFTs). The versatility of this printing technique is demonstrated by printing conducting, semiconducting, and insulating inks, the three necessary components for creating BioFETs, on several types of substrates (SiO2, Kapton, and paper) and through the fabrication of various device architectures. Notably, CFP of these submicron CNT-TFTs yielded on-currents of 1.12 mA/mm, demonstrating the strong transistor performance achievable with CFP. This work highlights the viability of CFP as a fabrication method for down-scaled printed transistors (and thus BioFETs), helping overcome BioFET scaling limitations.
Next, stability limitations of BioFETs are addressed through exploration of various passivation strategies. The incompatibility of electronic devices with ionic liquids (including blood, saliva, etc.) presents a challenge to the stability of BioFETs, and this is especially evident in the detrimental leakage currents of solution-gated devices, which often obscure signal detectability. The work presented in this dissertation highlights this often-ignored incompatibility and shows how it produces instability and degraded performance for BioFETs in ionic solutions. By exploring the effects of various passivation strategies on the performance and stability of a CNT BioFET, this work finds that encapsulating metal contacts with a thick photoresist (SU-8) and a thin high-k dielectric (HfO2) over the entire chip provides the lowest average leakage current in solution (~2 nA), the best initial performance metrics, large-scale device yield, and stability throughout long-duration cycling in phosphate buffered saline. This finding not only provides insight into the importance of passivation strategy in solution-gated devices, particularly BioFETs, but also enabled the creation of a robust CNT-based biosensing platform that was then used to tackle another stability limitation: signal drift. As binding events take place in a complex milieu above a BioFET, time-based drift of the sensor response can occur, obscuring biomarker detection and adversely affecting device performance. However, as shown in this work, these effects are drastically mitigated through the implementation of a rigorous testing methodology. Thus, by highlighting these often-ignored sources of BioFET instability and demonstrating ways of overcoming them, this work paves the way toward stable and reliable BioFETs.
Finally, BioFET sensitivity is improved using a Debye length extending polymer in conjunction with a novel D4-TFT device architecture. One of the main hindrances to electronic biosensing is Debye length screening, where counterions in an ionic solution form an electrical double layer around the BioFET and obscure charge detection. By growing a polyethylene glycol-like polymer brush interface (POEGMA) above the BioFET surface, the Debye length is significantly increased (from ~0.7 nm to upwards of 25 nm), allowing for accurate detection of large antibody-antigen complexes. When combined with a self-amplifying D4-TFT structure, this enabled realization of one of the highest sensitivities demonstrated in a stable antibody-based BioFET to date (22 aM).
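For reference, the screening scale at issue here is the Debye length of the electrolyte; for an aqueous solution of ionic strength I (in mol/m^3) it is

```latex
\lambda_D = \sqrt{\frac{\varepsilon_r \varepsilon_0 k_B T}{2 N_A e^{2} I}}
```

which comes to roughly 0.7 nm in physiological saline. Since an antibody-antigen complex extends well beyond that distance from the surface, extending the effective sensing range to tens of nanometers with the polymer brush is what makes charge-based detection of such complexes possible.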
The findings of this dissertation represent a key step toward solving a few of the most pertinent issues in the field of electronic biosensing. While BioFETs hold great promise for changing the medical landscape through the widespread use of POC biosensors, that promise can only be realized if fundamental issues like scalability, stability, and sensitivity are addressed. Combined, the contributions outlined in this dissertation make significant progress toward that goal, paving the way toward the development of a robust, dependable, and accurate POC BioFET.
Item Open Access: An Information-Theoretic Analysis of X-Ray Architectures for Anomaly Detection (2018), Coccarelli, David Scott
X-ray scanning equipment currently establishes a first line of defense in the aviation security space. The efficacy of these scanners is crucial to preventing the harmful use of threatening objects and materials. In this dissertation, I introduce a principled approach to the analysis of these systems by exploring performance limits of system architectures and modalities. Moreover, I validate the use of simulation as a design tool with experimental data, and extend the use of simulation to create high-fidelity realizations of real-world system measurements.
Conventional performance analysis of detection systems confounds the effects of the system architecture (sources, detectors, system geometry, etc.) with the effects of the detection algorithm. We disentangle the performance of the system hardware and the detection algorithm so as to focus on analyzing the performance of the system hardware alone. To accomplish this, we introduce an information-theoretic approach based on a metric derived from Cauchy-Schwarz mutual information, analogous to the channel capacity concept from communications engineering. We develop and utilize a framework that can produce thousands of system simulations representative of notional baggage ensembles. These simulations and the prior knowledge of the virtual baggage allow us to analyze the system as it relays information pertinent to a detection task.
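The metric in question is typically built from the Cauchy-Schwarz divergence between the joint density of object and measurement and the product of their marginals; in its usual form,

```latex
I_{CS}(X;Y) \;=\; -\log\,
\frac{\left(\displaystyle\int p(x,y)\,p(x)\,p(y)\,dx\,dy\right)^{2}}
     {\displaystyle\int p(x,y)^{2}\,dx\,dy \;\displaystyle\int p(x)^{2}\,p(y)^{2}\,dx\,dy}
```

which, like Shannon mutual information, is nonnegative and vanishes when X and Y are independent, but is far more tractable to estimate from large simulated measurement ensembles.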
In this dissertation, I discuss the application of this information-theoretic approach to studying variations of X-ray transmission architectures as well as novel screening systems based on X-ray scatter and phase. The results show how effective use of this metric can inform design decisions for X-ray systems. Moreover, I introduce a database of experimentally acquired X-ray data, both as a means to validate the simulation approach and as a resource ripe for further reconstruction and classification investigations. Next, I show the implementation of improvements to the ensemble representation in the information-theoretic material model. Finally, I extend the simulation tool toward high-fidelity representation of real-world deployed systems.
Item Open Access: Analysis of μECoG Design Elements for Optimized Signal Acquisition (2022), Williams, Ashley Jerri
High density electrode arrays that can record spatially and temporally detailed neural information provide a new horizon for the scientific exploration of the brain. Chief amongst these new tools is micro-electrocorticography (µECoG), which has grown in usage over the past two decades. As µECoG arrays increase in contact number, density, and complexity, the form factors of arrays will also have to change in tandem, particularly the size and spacing (pitch) of electrode contacts on the array. The continued growth of the field of µECoG research and innovation is hampered by a lack of understanding of how fundamental design aspects of the arrays may impact the information obtained from µECoG in different recording bands of interest and animal models. Utilizing thin-film fabrication to create novel experimental arrays and novel analysis techniques, the work in this dissertation provides an understanding of how differences in electrode contact size and spacing can impact neural metric acquisition in four experimentally and clinically relevant frequency bands: local field potential (LFP), high gamma (HGB), spike band power (SBP), and high frequency broadband (HFB). This dissertation provides innovative arrays that allow for experimental variation within a recording session, unlike much of the previously published work comparing contact size and pitch.
This dissertation presents my work designing, testing, and implementing novel μECoG arrays to explore how contact size and pitch may impact neural metrics in rodents and non-human primates (NHPs). In Chapter 2, I used a novel 60-channel array with four different contact diameters in rodents to explore how contact size, impedance, and noise may impact the neural metrics we collect in auditory experiments. We determined that contact size may selectively play a role in the information content of acquired neural metrics, and that impedance and noise can impact them significantly in higher frequency bands. This work also demonstrated the ability to resolve multi-unit spiking activity from the surface of the brain. In Chapter 3, I show results obtained using a 61-channel array with varied contact pitch in rodents, clarifying how the spatial sampling of the neural field is affected by the pitch of the electrode contacts used. These results suggest that the neural field in higher frequency bands shows greater changes over shorter distances than in lower frequency bands. In Chapter 4, I utilized a larger 244-channel array in an NHP with varied contact sizes to explore how contact size may impact the information content obtained from motor-related areas of the brain. Chapter 5 concludes the investigation of how design characteristics may impact neural information content by using an array with a local reference electrode contact to explore how local re-referencing can improve the neural metrics obtained.
The results from this dissertation provide a comprehensive understanding of how the information in the neural field may be impacted by the electrode designs chosen. The utilization of novel in-house fabricated arrays provides a method to explore these neuroscience questions rapidly and at low cost.
Item Open Access: Appearance-based Gaze Estimation and Applications in Healthcare (2020), Chang, Zhuoqing
Gaze estimation, the ability to predict where a person is looking, has become an indispensable technology in healthcare research. Current tools for gaze estimation rely on specialized hardware and are typically used in well-controlled laboratory settings. Novel appearance-based methods directly estimate a person's gaze from the appearance of their eyes, making gaze estimation possible with ubiquitous, low-cost devices such as webcams and smartphones. This dissertation presents new methods for appearance-based gaze estimation and applies this technology to challenging problems in practical healthcare applications.
One limitation of appearance-based methods is the need to collect a large amount of training data to learn the highly variant eye appearance space. To address this fundamental issue, we develop a method to synthesize novel images of the eye using data from a low-cost RGB-D camera and show that this data augmentation technique can improve gaze estimation accuracy significantly. In addition, we explore the potential of utilizing visual saliency information as a means to transparently collect weakly-labelled gaze data at scale. We show that the collected data can be used to personalize a generic gaze estimation model to achieve better performance on an individual.
In healthcare applications, the possibility of replacing specialized hardware with ubiquitous devices when performing eye-gaze analysis is a major asset that appearance-based methods bring to the table. In the first application, we assess the risk of autism in toddlers by analyzing videos of them watching a set of expert-curated stimuli on a mobile device. We show that appearance-based methods can be used to estimate their gaze position on the device screen and that differences between the autistic and typically-developing populations are significant. In the second application, we attempt to detect oculomotor abnormalities in people with cerebellar ataxia using video recorded from a mobile phone. By tracking the iris movement of participants while they watch a short video stimulus, we show that we are able to achieve high sensitivity and specificity in differentiating people with smooth pursuit oculomotor abnormalities from those without.
Item Open Access: Applied Millimeter Wave Radar Vibrometry (2023), Centers, Jessica
In this dissertation, novel uses of millimeter-wave (mmW) radars are developed and analyzed. While automotive mmW radars have become ubiquitous in advanced driver assistance systems (ADAS), their ability to sense motions at sub-millimeter scale allows them to also find application in systems that require accurate measurements of surface vibrations. While laser Doppler vibrometers (LDVs) are routinely used to measure such vibrations, the lower size, weight, power, and cost (SWAPc) of mmW radars make vibrometry viable for a variety of new applications. In this work, we consider two such applications: everything-to-vehicle (X2V) wireless communications and non-acoustic human speech analysis.
Within this dissertation, a wireless communication system that uses the radar as a vibrometer is introduced. This system, termed vibrational radar backscatter communications (VRBC), receives messages by observing phase modulations on the radar signal that are caused by vibrations on the surface of a transponder over time. It is shown that this form of wireless communication provides the ability to simultaneously detect, isolate, and decode messages from multiple sources thanks to the spatial resolution of the radar. Additionally, VRBC requires no RF emission on the part of the transponder. Since automotive radars and conventional X2V solutions are often at odds over spectrum allocations, this characteristic of VRBC is incredibly valuable.
Using an off-the-shelf resonant transponder, a real VRBC data collection is presented and used to demonstrate the signal processing techniques necessary to decode a VRBC message. This real data collection achieves a data rate just under 100 bps at approximately 5 meters distance. Rates of this scale can provide warning messages or concise situational awareness information in applications such as X2V, but naturally higher rates are desirable. For that reason, this dissertation includes discussion of how to design a more optimal VRBC system via transponder design, messaging scheme choice, and any afforded flexibility in radar parameter choice.
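The demodulation underlying radar vibrometry (here and in the speech work below) amounts to tracking the phase of the slow-time return from the vibrating surface: a displacement d(t) changes the two-way path and shifts the phase by 4*pi*d(t)/lambda. A minimal synthetic sketch of this idea, with illustrative parameters (a 77 GHz carrier is assumed):

```python
import numpy as np

fs = 1000.0                  # slow-time (pulse) rate, Hz
lam = 3e8 / 77e9             # wavelength of an assumed 77 GHz radar, m
t = np.arange(0, 1.0, 1.0 / fs)
d = 50e-6 * np.sin(2 * np.pi * 40 * t)       # 50 um surface vibration at 40 Hz

# complex slow-time samples of the target's range bin, plus receiver noise
rng = np.random.default_rng(3)
z = np.exp(1j * 4 * np.pi * d / lam) + 0.01 * (
    rng.normal(size=t.size) + 1j * rng.normal(size=t.size))

phase = np.unwrap(np.angle(z))               # phase demodulation
d_hat = phase * lam / (4 * np.pi)            # displacement estimate, m
print(np.abs(d_hat - d).max())               # residual error, set by the SNR
```

A transponder encodes bits by shaping d(t); the decoding problem is then symbol detection on the recovered phase waveform.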
Through the use of an analytical upper bound on VRBC rate and simulation results, we see that rates closer to 1 kbps should be achievable for a transponder approximately the size of a license plate at ranges under 200 meters. The added benefit of requiring no RF spectrum or network scheduling protocols uniquely positions VRBC as a desirable solution in spaces like X2V, even against commonly considered higher-rate solutions such as dedicated short-range communications (DSRC).
Upon implementing a VRBC system, a handful of complications were encountered, and this document devotes a full chapter to solving them. This includes properly modeling the intersymbol interference caused by resonant surfaces and utilizing sequence detection methods, rather than single-symbol maximum likelihood methods, to improve detection in these cases. Additionally, an analysis of what an ideal clutter filter should look like and how it can begin to be achieved is presented. Lastly, methods for mitigating platform vibrational noise at both the radar and the transponder are presented. Using these methods, message detection errors are better avoided, though achievable rates ultimately remain limited by fundamental system design choices.
Towards non-acoustic human speech analysis, it is shown in this dissertation that the vibrations of a person's throat during speech generation can be accurately captured using a mmW radar. These measurements prove to be similar to those achieved by the more expensive vibrometry alternative of an LDV, with less than 10 dB of SNR depreciation at the first two speech harmonics in the signal's spectrogram. Furthermore, we find that mmW radar vibrometry data resembles a low-pass filtered version of its corresponding acoustic data. We show that this type of data achieves 53% performance in a speaker identification system as opposed to 11% in a speech recognition system. This performance suggests potential for mmW radar vibrometry in context-blind speaker identification systems, provided the performance of the speaker identification system can be further improved without making the context of the speech more recognizable.
In this dissertation, mmW radar vibrational returns are modelled and signal processing chains are provided to allow for these vibrations to be estimated and used in application. In many cases, the work outlined could be used in other areas of mmW radar vibrometry even though it was originally motivated by potentially unrelated applications. It is the hope of this dissertation that the provided models, signal processing methods, visualizations, analytical bound, and results not only justify mmW radar in human speech analysis and backscatter communications, but that they also contribute to the community's understanding of how certain vibrational movements can be best observed, processed, and made useful more broadly.
Item Open Access: Architecture Framework for Trapped-ion Quantum Computer based on Performance Simulation Tool (2015), Ahsan, Muhammad
The challenge of building a scalable quantum computer lies in striking an appropriate balance between designing a reliable system architecture from a large number of faulty computational resources and improving the physical quality of system components. Detailed investigation of how performance varies with the physics of the components and the system architecture requires an adequate performance simulation tool. In this thesis we demonstrate a software tool capable of (1) mapping and scheduling a quantum circuit onto a realistic quantum hardware architecture with physical resource constraints, (2) evaluating performance metrics such as the execution time and the success probability of the algorithm execution, and (3) analyzing the constituents of these metrics and visualizing resource utilization to identify the system components that crucially determine overall performance.
Using this versatile tool, we explore the vast design space for a modular quantum computer architecture based on trapped ions. We find that while the success probability is uniformly determined by the fidelity of the physical quantum operations, the execution time is a function of the system resources invested at various layers of the design hierarchy. At the physical level, the number of lasers performing quantum gates impacts the latency of fault-tolerant circuit block execution. When these blocks are used to construct meaningful arithmetic circuits such as quantum adders, the number of ancilla qubits for complicated non-Clifford gates and the entanglement resources needed to establish long-distance communication channels become the major performance-limiting factors. Next, in order to factor large integers, these adders are assembled into the modular exponentiation circuit that comprises the bulk of Shor's algorithm. At this stage, the overall scaling of resource-constrained performance with problem size describes the effectiveness of the chosen design. By matching the resource investment with the pace of advancement in hardware technology, we find optimal designs for different types of quantum adders. In conclusion, we show that the 2,048-bit Shor's algorithm can be reliably executed within a resource budget of 1.5 million qubits.
Item Open Access: Attack Countermeasure Trees: A Non-state-space Approach Towards Analyzing Security and Finding Optimal Countermeasure Set (2010), Roy, Arpan
Attack tree (AT) is one of the widely used non-state-space models in security analysis. The basic formalism of AT does not take into account defense mechanisms. Defense trees (DTs) have been developed to investigate the effect of defense mechanisms using measures such as attack cost, security investment cost, return on attack (ROA) and return on investment (ROI). DT, however, places defense mechanisms only at the leaf nodes, and the corresponding ROI/ROA analysis does not incorporate the probabilities of attack. In attack response trees (ART), attack and response are both captured, but ART suffers from the problem of state-space explosion, since the solution of ART is obtained by means of a state-space model. In this paper, we present a novel attack tree paradigm called attack countermeasure tree (ACT) which avoids the generation and solution of the state-space model and takes into account attacks as well as countermeasures (in the form of detection and mitigation events). In ACT, detection and mitigation are allowed not just at the leaf nodes but also at the intermediate nodes, while at the same time the state-space explosion problem is avoided in its analysis. We use single and multiobjective optimization to find optimal countermeasures under different constraints. We illustrate the features of ACT using several case studies.
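To sketch the flavor of the non-state-space analysis and countermeasure selection (with a toy tree and invented numbers, not one of the paper's case studies): attack success probability is propagated bottom-up through AND/OR gates, countermeasures discount the probabilities of the events they cover, and an optimal countermeasure set is found by searching over subsets under a cost budget:

```python
from itertools import combinations

# Leaf attack events and their baseline success probabilities (illustrative).
p_leaf = {"a1": 0.6, "a2": 0.5, "a3": 0.7}
# Countermeasures: (cost, residual factor applied to covered leaves),
# modeling imperfect detection + mitigation of that attack step.
cms = {"cm1": (30, {"a1": 0.2}), "cm2": (20, {"a2": 0.5}), "cm3": (25, {"a3": 0.3})}

def p_goal(active):
    """Toy ACT: goal = OR(AND(a1, a2), a3), assuming independent events."""
    q = dict(p_leaf)
    for c in active:
        for leaf, factor in cms[c][1].items():
            q[leaf] *= factor
    p_and = q["a1"] * q["a2"]                     # AND gate
    return 1.0 - (1.0 - p_and) * (1.0 - q["a3"])  # OR gate

budget = 50
best = (p_goal(()), ())
for r in range(1, len(cms) + 1):
    for subset in combinations(cms, r):
        if sum(cms[c][0] for c in subset) <= budget:
            best = min(best, (p_goal(subset), subset))
print(best)  # feasible countermeasure set minimizing attack success probability
```

Because the probabilities are computed directly on the tree, no underlying Markov state space ever has to be generated, which is the point of the ACT formalism.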
Item Open Access: Automatic Volumetric Analysis of the Left Ventricle in 3D Apical Echocardiographs (2015), Wald, Andrew James
Apically-acquired 3D echocardiographs (echoes) are becoming a standard data component in the clinical evaluation of left ventricular (LV) function. Ejection fraction (EF) is one of the key quantitative biomarkers derived from echoes and used by echocardiographers to study a patient's heart function. In present clinical practice, EF is either grossly estimated by experienced observers, approximated using orthogonal 2D slices and Simpson's method, determined by manual segmentation of the LV lumen, or measured using semi-automatic proprietary software such as Philips QLab-3DQ. Each of these methods requires particular skill by the operator, and may be time-intensive, subject to variability, or both.
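For reference, ejection fraction is the fractional change in LV lumen volume over the cardiac cycle,

```latex
\mathrm{EF} \;=\; \frac{\mathrm{EDV} - \mathrm{ESV}}{\mathrm{EDV}}
```

where EDV and ESV are the end-diastolic and end-systolic volumes; an automatic EF pipeline therefore reduces to segmenting the LV lumen at, at minimum, those two frames of the 3D sequence.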
To address this, I have developed a novel, fully automatic method for LV segmentation in 3D echoes that offers EF calculation on clinical datasets at the push of a button. The solution is built on a pipeline that utilizes a number of image processing and feature detection methods specifically adapted to the 3D ultrasound modality. It is designed to be reasonably robust at handling dropout and missing features typical in clinical echocardiography. It is hypothesized that this method can displace the need for sonographer input, yet provide results statistically indistinguishable from those of experienced sonographers using QLab-3DQ, the current gold standard employed at Duke University Hospital.
A pre-clinical validation set, which was also used for iterative algorithm development, consisted of 70 cases previously seen at Duke. Of these, manual segmentations of 7 clinical cases were compared to the algorithm. The final algorithm predicts EF within ± 0.02 ratio units for 5 of them, and ± 0.09 units for the remaining 2 cases, within common clinical tolerance. Another 13 of the cases, often used for sonographer training and rated as having good image quality, were analyzed using QLab-3DQ, of which 11 cases showed concordance (± 0.10) with the algorithm. The remaining 50 cases retrospectively recruited at Duke and representative of everyday image quality showed 62% concordance (± 0.10) of QLab-3DQ with the algorithm. The fraction of concordant cases is highly dependent on image quality, and concordance improves greatly upon disqualification of poor quality images. Visual comparison of the QLab-3DQ segmentation to my algorithm overlaid on top of the original echoes also suggests that my method may be preferable or of high utility even in cases of EF discordance. This paper describes the algorithm and offers justifications for the adopted methods. The paper also discusses the design of a retrospective clinical trial now underway at Duke with 60 additional unseen cases intended only for independent validation.
Item Open Access: Autonomous Robot Packing of Complex-shaped Objects (2020), Wang, Fan
With the unprecedented growth of the E-Commerce market, robotic warehouse automation has attracted much interest and capital investment. Compared to a conventional labor-intensive approach, an automated robot warehouse brings potential benefits such as increased uptime, higher total throughput, and lower accident rates. To date, warehouse automation has mostly developed in inventory mobilization and object picking.
Recently, one area that has attracted a lot of research attention is automated packaging or packing, a process during which robots stow objects into small confined spaces, such as shipping boxes. Automatic item packing is complementary to item picking in warehouse settings. Packing items densely improves the storage capacity, decreases the delivery cost, and saves packing materials. However, it is a demanding manipulation task that has not been thoroughly explored by the research community.
This dissertation focuses on packing objects of arbitrary shapes and weights into a single shipping box with a robot manipulator. I seek to advance the state-of-the-art in robot packing with regard to optimizing container size for a set of objects, planning object placements for stability and feasibility, and increasing the robustness of packing execution with a robot manipulator.
The three main innovations presented in this dissertation are:
1. The implementation of a constrained packing planner that outputs stable and collision-free placements of objects when packed with a robot manipulator. Experimental evaluation of the method is conducted with a realistic physical simulator on a dataset of scanned real-world items, demonstrating stable and high-quality packing plans compared with other 3D packing methods.
2. The proposal and implementation of a framework for evaluating the ability to pack a set of known items presented in an unknown order of arrival within a given container size. This allows packing algorithms to work in more realistic warehouse scenarios, as well as provides a means of optimizing container size to ensure successful packing under unknown item arrival order conditions.
3. The systematic evaluation of the proposed planner under real-world uncertainties such as vision, grasping, and modeling errors. To conduct this evaluation, I built a hardware and software packing testbed that is representative of the current state-of-the-art in sensing, perception, and planning. An evaluation of the testbed is then performed to study the error sources and to model their magnitude. Subsequently, robustness measures are proposed to improve the packing success rate under such errors.
Overall, empirical results demonstrate that a success rate of up to 98% can be achieved by a physical robot despite real-world uncertainties, demonstrating that these contributions have the potential to realize robust, dense automatic object packing.
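For a flavor of the placement search involved, though far simpler than the stability- and manipulator-aware planner evaluated above, here is a heightmap-based deepest-bottom-left-style heuristic for axis-aligned boxes (a common baseline in 3D packing, with invented dimensions):

```python
import numpy as np

def place(heightmap, footprint, item_h, bin_h):
    """Greedy placement: lowest, then bottom-left-most feasible position."""
    H, W = heightmap.shape
    fh, fw = footprint
    best = None
    for y in range(H - fh + 1):
        for x in range(W - fw + 1):
            z = heightmap[y:y + fh, x:x + fw].max()  # item rests on tallest cell
            if z + item_h <= bin_h and (best is None or (z, y, x) < best):
                best = (z, y, x)
    if best is None:
        return None                                  # item does not fit
    z, y, x = best
    heightmap[y:y + fh, x:x + fw] = z + item_h       # commit the placement
    return best

bin_map = np.zeros((10, 10))                         # 10x10-cell container
print(place(bin_map, (4, 3), 2, 10))                 # -> (0, 0, 0)
print(place(bin_map, (5, 5), 3, 10))                 # packed beside the first item
```

A planner of the kind described above additionally scores candidate placements for static stability and for collision-free reachability of the gripper, which is what separates it from geometric baselines like this one.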
Item Open Access: Bayesian and Information-Theoretic Learning of High Dimensional Data (2012), Chen, Minhua
The concept of sparseness is harnessed to learn a low dimensional representation of high dimensional data. This sparseness assumption is exploited in multiple ways. In the Bayesian Elastic Net, a small number of correlated features are identified for the response variable. In the sparse Factor Analysis for biomarker trajectories, the high dimensional gene expression data is reduced to a small number of latent factors, each with a prototypical dynamic trajectory. In the Bayesian Graphical LASSO, the inverse covariance matrix of the data distribution is assumed to be sparse, inducing a sparsely connected Gaussian graph. In the nonparametric Mixture of Factor Analyzers, the covariance matrices in the Gaussian Mixture Model are forced to be low-rank, which is closely related to the concept of block sparsity.
Finally in the information-theoretic projection design, a linear projection matrix is explicitly sought for information-preserving dimensionality reduction. All the methods mentioned above prove to be effective in learning both simulated and real high dimensional datasets.
Item Open Access: Bayesian Nonparametric Modeling of Latent Structures (2014), Xing, Zhengming
An unprecedented amount of data has been collected in diverse fields such as social networks, infectious disease, and political science in this era of information explosion. The high dimensional, complex, and heterogeneous data poses tremendous challenges to traditional statistical models. Bayesian nonparametric methods address these challenges by providing models that can fit the data with growing complexity. In this thesis, we design novel Bayesian nonparametric models for datasets from three different fields: hyperspectral image analysis, infectious disease, and voting behavior.
First, we consider the analysis of noisy and incomplete hyperspectral imagery, with the objective of removing the noise and inferring the missing data. The noise statistics may be wavelength-dependent, and the fraction of data missing (at random) may be substantial, including potentially entire bands, offering the potential to significantly reduce the quantity of data that needs to be measured. We achieve this objective by employing a Bayesian dictionary learning model, considering two distinct means of imposing sparse dictionary usage, and drawing the dictionary elements from a Gaussian process prior that imposes structure on the wavelength dependence of the dictionary elements.
Second, a Bayesian statistical model is developed for analysis of the time-evolving properties of infectious disease, with a particular focus on viruses. The model employs a latent semi-Markovian state process, and the state-transition statistics are driven by three terms: (i) a general time-evolving trend of the overall population, (ii) a semi-periodic term that accounts for effects caused by the days of the week, and (iii) a regression term that relates the probability of infection to covariates (here, specifically, to the Google Flu Trends data).
Third, extensive information on 3 million randomly sampled United States citizens is used to construct a statistical model of constituent preferences for each U.S. congressional district. This model is linked to the legislative voting record of the legislator from each district, yielding an integrated model for constituency data, legislative roll-call votes, and the text of the legislation. The model is used to examine the extent to which legislators' voting records are aligned with constituent preferences, and the implications of that alignment (or lack thereof) on subsequent election outcomes. The analysis is based on a Bayesian nonparametric formalism, with fast inference via a stochastic variational Bayesian analysis.
Item Open Access: Calibrating and Beamforming Distributed Arrays in Passive Sonar Environments (2022), Ganti, Anil
This dissertation presents methods for calibrating and beamforming a distributed array for detecting and localizing sources of interest using passive sonar. Passive sonar is critical for underwater acoustic surveillance, marine life tracking, and environmental monitoring, but is increasingly difficult with greater shipping traffic and other man-made noise sources. Large aperture hydrophone arrays are needed to suppress these sources of interference and find weak targets of interest. Traditionally, large hydrophone arrays are densely sampled uniform arrays, which are expensive and time-consuming to deploy and maintain. There is growing interest instead in forming distributed arrays out of low-cost, individually small arrays which are coherently processed to achieve high gain and resolution. Conventional array processing methods are not well suited to this end, and this dissertation develops new methods for array calibration and beamforming which ultimately enable high resolution passive sonar at low cost. This work develops estimation methods for array parameters in uncalibrated, unsynchronized collections of acoustic sensors and also develops adaptive beamforming techniques for such arrays in complex and uncertain ocean environments.
Methods for estimating sampling rate offset (SRO) are developed using a single narrowband source of opportunity whose parameters need not be estimated. A search-free method which jointly estimates all SRO parameters in an acoustic sensor network is presented and shown to improve as the network size increases. A second SRO estimation method is developed for unsynchronized sub-arrays to enable SRO estimation with a source that has a bearing rate. This is of particular value in ocean environments where transiting cargo ships are the most prevalent calibration sources.
Next, a technique is presented for continuously estimating multiple sub-array positions using a single moving tonal source. Identical, well-calibrated sub-arrays with unknown relative positions exhibit a rotational invariance in the signal structure, which is exploited to blindly estimate the inter-array spatial wavefronts. These wavefront measurements are used in an unscented Kalman filter (UKF) to continuously improve the sub-array position estimates.
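A toy example of the underlying rotational-invariance idea (in the spirit of ESPRIT; the dissertation's full algorithm is more general) is sketched below, with all geometry and signal parameters assumed for illustration.

```python
import numpy as np

# Toy illustration of rotational invariance between two identical sub-arrays:
# both see the same source signal up to an inter-array phase factor, which
# can be estimated blindly from the dominant eigenvector of the covariance.

rng = np.random.default_rng(3)
c, f0 = 1500.0, 200.0                    # sound speed (m/s), tone (Hz)
wavelen = c / f0
m = 4                                    # elements per sub-array
theta = np.deg2rad(30.0)                 # source bearing (assumed)
sep = 7.3                                # unknown sub-array separation (m)

# Steering vectors for each identical sub-array; sub-array 2 is displaced.
pos = np.arange(m) * wavelen / 2
a1 = np.exp(2j * np.pi * pos * np.sin(theta) / wavelen)
a2 = a1 * np.exp(2j * np.pi * sep * np.sin(theta) / wavelen)

# Snapshots of a single tonal source plus noise across both sub-arrays.
n_snap = 200
s = rng.normal(size=n_snap) + 1j * rng.normal(size=n_snap)
noise = 0.05 * (rng.normal(size=(2 * m, n_snap))
                + 1j * rng.normal(size=(2 * m, n_snap)))
X = np.concatenate([np.outer(a1, s), np.outer(a2, s)]) + noise

# Dominant eigenvector of the sample covariance spans the signal subspace;
# the inter-array phase is the rotational invariance between its halves.
R = X @ X.conj().T / n_snap
u = np.linalg.eigh(R)[1][:, -1]
phase = np.angle(np.vdot(u[:m], u[m:]))
print("measured inter-array phase (rad):", round(phase, 3))
# A UKF would fuse such measurements over time, as the source moves, to
# track the sub-array positions.
```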
Lastly, this work addresses adaptive beamforming in uncertain, complex propagation environments, where the modeled wavefronts will inevitably not match the true wavefronts. Adaptive beamforming techniques are developed that maintain gain even under significant signal mismatch due to unknown or uncertain source wavefronts; they do so by estimating a target-free covariance matrix from the received data and using just a single gain constraint in the beamformer optimization. Target-free covariances are estimated using an eigendecomposition of the received data, assuming that the modes that potentially contain sources of interest can be identified. This method is applied to a distributed array in which only part of the array wavefront is explicitly modeled, and it is shown to improve interference suppression and the output signal-to-interference-plus-noise ratio (SINR).
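A minimal sketch of this style of beamformer follows, assuming the target-bearing eigenmode has already been identified; the diagonal loading and the toy covariance are illustrative details, not the dissertation's.

```python
import numpy as np

# Minimal sketch (assumed details) of beamforming with a target-free
# covariance: eigen-decompose the sample covariance, drop the mode assumed
# to contain the target, rebuild the covariance from the remaining modes,
# and apply a single gain constraint w^H a = 1 toward the target wavefront.

def target_free_weights(R_sample, a_target, target_mode):
    """Beamformer weights using a covariance with the target mode removed."""
    vals, vecs = np.linalg.eigh(R_sample)
    keep = np.ones(len(vals), dtype=bool)
    keep[target_mode] = False                    # assumed-identified mode
    R_tf = (vecs[:, keep] * vals[keep]) @ vecs[:, keep].conj().T
    # Small diagonal loading (an illustrative choice) keeps R_tf invertible.
    R_tf += 1e-3 * np.trace(R_tf).real / len(vals) * np.eye(len(vals))
    w = np.linalg.solve(R_tf, a_target)
    return w / (a_target.conj() @ w)             # single gain constraint

# Usage with a toy covariance and steering vector:
rng = np.random.default_rng(4)
n = 8
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
R = A @ A.conj().T + n * np.eye(n)
a = np.exp(2j * np.pi * 0.25 * np.arange(n))
w = target_free_weights(R, a, target_mode=-1)    # drop the dominant mode
print("distortionless gain:", np.round(w.conj() @ a, 3))
```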
This idea is then extended to realistic environments, and a method for finding potential target components is developed. Blind source separation (BSS) methods using second-order statistics are adopted for wideband source separation in shallow-water environments. BSS components are associated with either target or interference based on their received temporal spectra and are automatically labeled with a convolutional neural network (CNN). This method is applicable not only when sources have overlapping but distinct transmitted spectra, but also when the channel itself colors the received spectra through range-dependent frequency-selective fading. Simulations in realistic shallow-water environments demonstrate the ability to blindly separate and label uncorrelated components based on frequency-selective fading patterns. These simulations also validate the robustness of the developed wavefront adaptive sensing (WAS) beamformer relative to a standard minimum variance distortionless response (MVDR) beamformer. Finally, the method is demonstrated on real shallow-water data from the SWellEx96 S59 experiment off the coast of Southern California. A simulated target is injected into these data and masked behind a loud towed source; the WAS beamformer is shown to suppress the towed source and achieve a target output SINR close to that of the optimal beamformer.
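As one concrete example of second-order BSS (AMUSE-style whitening plus a lagged-covariance eigendecomposition, an assumed choice rather than necessarily the method used here), consider the following sketch:

```python
import numpy as np

# Hedged sketch of second-order BSS: whiten the mixtures using the zero-lag
# covariance, then eigen-decompose a time-lagged covariance, whose distinct
# eigenvalues separate sources with different temporal spectra.

rng = np.random.default_rng(5)
n = 5000
t = np.arange(n)
s1 = np.sin(2 * np.pi * 0.01 * t)                  # narrowband "interferer"
s2 = np.convolve(rng.normal(size=n), np.ones(20) / 20, mode="same")  # colored
S = np.vstack([s1, s2])
X = np.array([[1.0, 0.6], [0.4, 1.0]]) @ S         # unknown mixing

# Whitening from the zero-lag covariance.
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / n)
Z = (E @ np.diag(d ** -0.5) @ E.T) @ X

# Symmetrized lagged covariance; its eigenvectors give the unmixing rotation.
tau = 3
R_tau = Z[:, tau:] @ Z[:, :-tau].T / (n - tau)
R_tau = 0.5 * (R_tau + R_tau.T)
_, U = np.linalg.eigh(R_tau)
Y = U.T @ Z                                        # recovered sources
corr = np.corrcoef(Y[0], s1)[0, 1], np.corrcoef(Y[1], s1)[0, 1]
print("correlation of outputs with source 1:", np.round(corr, 2))
# In the pipeline described above, separated components would then be
# labeled as target vs. interference by a CNN on their temporal spectra.
```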
Item Open Access Carbon-based Inks and Printing Processes for Environmentally Friendly Sensors and Transistors(2024) Smith, Brittany NicoleThe semiconductor industry currently fabricates electronic devices using materials that are difficult to recycle and energy-intensive processes with significant waste products, resulting in considerable environmental impact. Yet, as the Internet-of-Things (IoT) continues to add sensors and circuitry to everyday objects, there is no end in sight to the proliferation of electronics throughout society, particularly devices that enable new flexible or wearable applications. Additive manufacturing, notably printing, could be a powerful tool for fabricating robust flexible electronics for the IoT through the use of energy-efficient processes and recyclable or biodegradable materials. The work outlined in this dissertation presents contributions to the advancement of printed materials that have environmentally sustainable attributes with the eventual goal of developing manufacturing-ready print processes for electronics that have no environmental impact.
A versatile device for enabling many applications of sustainable electronics, from sensors to circuits, is the thin-film transistor (TFT). Realizing a TFT requires the deposition of semiconducting, conducting, and insulating materials into a multilayered device structure. Carbon nanotubes (CNTs) are ideal semiconducting candidates for TFTs due to their high thin-film mobility, compatibility with a wide range of printing approaches, and low environmental impact. Further, CNT-TFTs may be used as a platform to benchmark the electrical properties of insulating and conducting printed materials. Among the layers comprising a CNT-TFT, the gate insulator (i.e., dielectric) has proven the most difficult to print due to challenges in film uniformity and post-processing requirements. This challenge was overcome by using nanocellulose as a printable ionic dielectric, which showed better device performance than common ionic dielectrics used in the field. Further, the influence of form factor and surface groups on the ionic dielectric performance of nanocellulose was explored, revealing a potential dependence of subthreshold swing (SS) on surface group. Aqueous inks of high-work-function metals were also studied for their impact on CNT-TFT performance, demonstrating Au nanoparticle (AuNP) and Pt nanoparticle (PtNP) contacts to the CNT channel. Impressively, the aqueous AuNP contacts had lower contact resistance than contacts from previous solvent-based Au inks.
While most of this dissertation work was completed using aerosol jet printing (AJP), significant material waste and limits on resolution (> 10 µm) reduce the environmental benefits of printing sustainable materials with AJP. Therefore, a direct-write process called capillary-flow printing (CFP) was used to realize as-printed submicron CNT-TFTs without chemical modifications or physical manipulation. While reducing ink waste and energy consumption, CFP enabled fully printed submicron devices in three simple print steps, with performance competitive with state-of-the-art TFT technology for display backplanes, which relies on materials and processes with sizeable environmental footprints.
Although the majority of the work discussed in this dissertation focuses on discoveries and advancements in printed CNT-TFTs, the final work presented highlights the versatility of an aqueous graphene ink aerosol-jet printed into 3D microstructures with broad applicability beyond transistors. By developing this 3D print process on an aerosol jet printer, which typically prints in 2D, conductive 3D graphene microstructures were achieved without any post-processing. Extensive characterization showed that pillars can be printed at angles below 45° with respect to the substrate without altering the angle of the nozzle or substrate, a capability comparable to that of extrusion-based 3D printers. Additionally, 3D microstructures were functionalized onto a graphene humidity sensor, showing that the use of 3D graphene trusses yields a nearly twofold improvement in device sensitivity.
The work described herein marks a significant leap in reducing the environmental impact of printed electronics, both through the use of aqueous inks for transistor and sensing applications and by pushing the resolution limits of printing with a capillary-flow printer that creates submicron transistors while minimizing material waste. Through this work, we showed that adopting environmentally conscious printing can continue to push the electrical performance and resolution limits of printed electronics, ushering in a new era of printing. The world is eager for sustainable solutions woven seamlessly into the manufacturing of electronics, as opposed to dealing with environmental impacts after manufacturing. As the printed electronics industry works to integrate new practices such as those outlined in this work, much work remains in exploring the vast range of materials, inks, and print processes suitable not only for well-established applications but also for the new uses of printed electronics that continue to emerge.