Browsing by Subject "Engineering"
Item Open Access: A Modular Multilevel Series/Parallel Converter for a Wide Frequency Range Operation (IEEE Transactions on Power Electronics, 2019-10-01). Li, Z; Ricardo Lizana, F; Yu, Z; Sha, S; Peterchev, AV; Goetz, SM
When providing ac output, modular multilevel converters (MMCs) experience power fluctuation in the phase arms. The power fluctuation causes voltage ripple on the module capacitors, which grows with the output power and inversely with the output frequency. Thus, low-frequency operation of MMCs, e.g., for motor drives, requires injecting common-mode voltages and circulating currents, and strict dc voltage output relative to ground is impossible. To address this problem, this paper introduces a novel module topology that allows parallel module connectivity in addition to the series and bypass states. The parallel state directly transfers power across the modules and arms to cancel the power fluctuations and hence suppresses the capacitor voltage ripple. The proposed series/parallel converter can operate over a wide frequency range down to dc without common-mode voltages or circulating currents; it also allows sensorless operation and full utilization of the components at higher output frequencies. We present detailed simulation and experimental results to characterize the advantages and limitations of the proposed solution.

Item Open Access: A New Approach to Model Order Reduction of the Navier-Stokes Equations (2012). Balajewicz, Maciej
A new method of stabilizing low-order, proper orthogonal decomposition based reduced-order models of the Navier-Stokes equations is proposed. Unlike traditional approaches, this method does not rely on empirical turbulence modeling or modification of the Navier-Stokes equations. It provides spatial basis functions different from the usual proper orthogonal decomposition basis functions in that, in addition to optimally representing the solution, the new proposed basis functions also yield stable reduced-order models.
The proposed approach is illustrated with two test cases: two-dimensional flow inside a square lid-driven cavity and a two-dimensional mixing layer.
Item Open Access: A New Method for Modeling Free Surface Flows and Fluid-structure Interaction with Ocean Applications (2016). Lee, Curtis
The computational modeling of ocean waves and ocean-faring devices poses numerous challenges. Among these are the need to stably and accurately represent both the fluid-fluid interface between water and air as well as the fluid-structure interfaces arising between solid devices and one or more fluids. As techniques are developed to stably and accurately balance the interactions between fluid and structural solvers at these boundaries, a similarly pressing challenge is the development of algorithms that are massively scalable and capable of performing large-scale three-dimensional simulations on reasonable time scales. This dissertation introduces two separate methods for approaching this problem, with the first focusing on the development of sophisticated fluid-fluid interface representations and the second focusing primarily on scalability and extensibility to higher-order methods.
We begin by introducing the narrow-band gradient-augmented level set method (GALSM) for incompressible multiphase Navier-Stokes flow. This is the first use of the high-order GALSM for a fluid flow application, and its reliability and accuracy in modeling ocean environments are tested extensively. The method demonstrates numerous advantages over the traditional level set method, among these a heightened conservation of fluid volume and the representation of subgrid structures.
Next, we present a finite-volume algorithm for solving the incompressible Euler equations in two and three dimensions in the presence of a flow-driven free surface and a dynamic rigid body. In this development, the chief concerns are efficiency, scalability, and extensibility (to higher-order and truly conservative methods). These priorities informed a number of important choices: The air phase is substituted by a pressure boundary condition in order to greatly reduce the size of the computational domain, a cut-cell finite-volume approach is chosen in order to minimize fluid volume loss and open the door to higher-order methods, and adaptive mesh refinement (AMR) is employed to focus computational effort and make large-scale 3D simulations possible. This algorithm is shown to produce robust and accurate results that are well-suited for the study of ocean waves and the development of wave energy conversion (WEC) devices.
Item Open Access: A Novel Integrated Biotrickling Filter-Anammox Bioreactor System for the Complete Treatment of Ammonia in Air with Nitrification and Denitrification (2020). Tang, Lizhan
An integrated biotrickling filter (BTF)-Anammox bioreactor system was established for the complete treatment of ammonia. A shortcut nitrification process was successfully achieved in the biotrickling filter through free ammonia and free nitrous acid inhibition of nitrite-oxidizing bacteria. During transients, while nitrogen loading was increasing, free ammonia was the main factor inhibiting the activity of ammonia-oxidizing bacteria (AOB) and nitrite-oxidizing bacteria (NOB). During steady-state operation, free nitrous acid was mainly responsible for inhibition of NOB due to the accumulation of nitrite at relatively low pH. Ammonia removal by the BTF reached up to 50 g N m-3 h-1, with 100% removal at an inlet concentration of 403 ppm and a gas residence time of 20.8 s. Average removal of ammonia during stable operation was 95%. The Anammox bioreactor could remove 75% of the total nitrogen discharged by the BTF when the two reactors were connected. The possibility of operating the liquid in complete closed-loop mode was investigated. However, because the Anammox bioreactor had limited activity or was undersized, recycling the Anammox effluent back to the BTF caused accumulation of nitrite in the system, which further inhibited Anammox activity and progressively caused failure of the system.
A conceptual model of both bioreactors was also developed to optimize the integrated system. The model was developed by including mass balances of nitrogen in the system and inhibition factors in the microbial kinetics. Parameters such as hydraulic residence time (HRT), empty bed residence time (EBRT), and pH had a significant impact on the partial nitritation process in the BTF. Model simulations also indicated that implementing a recycle stream for the Anammox bioreactor was needed to reduce the inhibitory effect of nitrite on system performance.
Item Embargo: A Vertically Oriented Passive Microfluidic Device for Automated Point-Of-Care Testing Directly from Complex Samples (2023). Kinnamon, David Stanley
Detection and quantification of biomarkers directly from complex clinical specimens is desired and often required by healthcare professionals for the effective diagnosis and screening of disease, and for general patient care. Current methodologies for this task have critical shortcomings. Laboratory immunoassays, most notably the enzyme-linked immunosorbent assay (ELISA), require extensive clinical infrastructure and complex user intervention steps to generate results, and are often accompanied by a lengthy time-to-result. Conversely, available point-of-care (POC) diagnostic solutions, most notably lateral flow immunoassays (LFIAs), often struggle with sensitivity and specificity in complex fluids, lack quantitative output, and are not easily multiplexed. In this dissertation I discuss the design, fabrication, testing, and refinement of an all-in-one fluorescence microarray integrated into a passive microfluidic fluid-handling system to create a versatile and automated POC platform that can detect biomarkers from complex samples for disease management with the relative ease-of-use of an LFIA and the performance of a laboratory-grade test. The platform is driven by capillary and gravitational forces and automates all intervention steps after the addition of the sample and running buffer at the start of testing. The microfluidic cassette is built on a poly(oligo(ethylene glycol) methyl ether methacrylate) (POEGMA) polymer brush, which imparts two key functionalities: (1) it eliminates cellular and protein binding and, combined with the vertical orientation of the microfluidic cassette, prevents settling of debris during all assay steps.
This allows impressive sensitivities and specificities to be obtained from samples as complex as undiluted whole blood, even when relying on gentle capillary and hydrostatic pressures for cassette operation. (2) Printed biorecognition elements can be stably and non-covalently immobilized in the POEGMA brush, allowing all reagents needed to conduct a single-step sandwich immunoassay to be easily inkjet printed as spatially discrete spots, which the brush also stabilizes at room temperature. Additionally, the microfluidic cassette is compatible with the “D4Scope,” a handheld fluorescence detector that can quantify the output of the microfluidic cassette in seconds at the POC and is the only piece of auxiliary equipment required to operate the test.
This dissertation discusses early cassette prototypes and characterizes the performance of major device iterations (Chapter 2) before moving to three clinical applications of the cassette. First, a multiplexed serological test to detect antibodies against different proteins of the SARS-CoV-2 virus was developed (Chapter 3). Second, a multiplexed COVID-19 diagnostic test that simultaneously identifies which variant the patient is infected with was developed (Chapter 4). Third, a sensitive fungal-infection test for the diagnosis of talaromycosis was developed (Chapter 5). Finally, a rapidly iterative yet highly scalable injection-molding fabrication process flow was created and characterized to improve the performance and translatability of the cassette (Chapter 6).
Item Open Access: Accelerated Sepsis Diagnosis by Seamless Integration of Nucleic Acid Purification and Detection (2014). Hsu, BangNing
Background: The diagnosis of sepsis is challenging because the infection can be caused by more than 50 species of pathogens that may exist in the bloodstream in very low concentrations, e.g., less than 1 colony-forming unit/ml. As a result, among current sepsis diagnostic methods there is an unsatisfactory trade-off between assay time and the specificity of the derived diagnostic information. Although the present qPCR-based test is more specific than biomarker detection and faster than culturing, its 6-10 hr turnaround remains suboptimal relative to the 7.6%/hr deterioration of the survival rate, and the 3 hr hands-on time is labor-intensive. To address these issues, this work aims to use advances in microfluidic technologies to expedite and automate the "nucleic acid purification - qPCR sequence detection" workflow.
Methods and Results: This task is best approached by combining immiscible phase filtration (IPF) and digital microfluidic droplet actuation (DM) on a single fluidic device. In IPF, as nucleic acid-bound magnetic beads are transported from an aqueous phase to an immiscible phase, the carryover of aqueous contaminants is minimized by the high interfacial tension. Thus, unlike a conventional bead-based assay, the necessary degree of purification can be attained in a few wash steps. After IPF reduces the sample volume from a milliliter-sized lysate to a microliter-sized eluent, DM can be used to automatically prepare the PCR mixture. This begins with partitioning the eluent according to the desired number of multiplex qPCR reactions, and then transporting droplets of the PCR reagents to mix with the eluent droplets. Under the outlined approach, the IPF-DM integration should lead to a notably reduced turnaround and a hands-free "lysate-to-answer" operation.
As the first step toward such a diagnostic device, the primary objective of this thesis is to verify the feasibility of the IPF-DM integration. This is achieved in four phases. First, the suitable assays, fluidic device, and auxiliary systems are developed. Second, the extent of purification obtained per IPF wash, and hence the number of washes needed for uninhibited qPCR, are estimated via off-chip UV absorbance measurement and on-chip qPCR. Third, the performance of on-chip qPCR, particularly the correlation between copy number and threshold cycle, is characterized. Lastly, the above developments culminate in an experiment that includes the following on-chip steps: DNA purification by IPF, PCR mixture preparation via DM, and target quantification using qPCR, thereby demonstrating the core procedures in the proposed approach.
Conclusions: It is proposed to expedite and automate qPCR-based multiplex sparse pathogen detection by combining IPF and DM on a fluidic device. As a start, this work demonstrated the feasibility of the IPF-DM integration. However, a more thermally robust device structure will be needed for later quantitative investigations, e.g., improving the bead-buffer mixing. Importantly, evidence indicates that future iterations of the IPF-DM fluidic device could reduce the sample-to-answer time by 75% to 1.5 hr and decrease the hands-on time by 90% to approximately 20 min.
Item Open Access: Acoustics-induced Fluid Motions (2021). Chen, Chuyi
Acoustic waves, as a form of mechanical vibration, not only exert force directly on an object but also induce motion of the medium through which they propagate. The study of acoustofluidics focuses on the underlying mechanisms coupling acoustic waves and fluid motion, and on the methodology for applying this technique to practical applications. With its contactless, versatile, and biocompatible capabilities, the acoustofluidic method is an ideal tool for biosample handling. Because the majority of bio-related samples (e.g., cells, small organisms, exosomes) exist natively within liquids, there is an urgent need to study acoustically induced fluid motion alongside the development of acoustic tweezing techniques. While both theoretical studies and application exploration are well established for the combination of acoustics and microfluidics, fluid motion on larger scales remains under-developed. One reason is that, although acoustofluidic methods hold great potential in various biomedical applications, there are limited ways to form organized motion in a larger fluid domain, which can lead to imprecise manipulation of the target. Moreover, the theoretical framework for the microfluidic domain rests on simplified models whose assumptions, when applied to larger fluid domains, significantly affect both accuracy and computational cost. In this dissertation, we first develop a series of theoretical and numerical methods to provide insight into acoustofluidic phenomena at different domain scales. Specifically, we explore the nonlinear acoustic dynamics in fluids with perturbation theory and Reynolds' stress theory.
We then show that vortex streaming can be predicted and designed with our theoretical and numerical framework, which can be applied to various fluid systems and extended to practical biomedical applications. Boundary-driven streaming and Reynolds' stress-induced streaming are studied and applied to a digital acoustofluidic droplet-handling platform and a droplet-spinning system, respectively. We demonstrate that within the digital acoustofluidic platform, droplets can be manipulated on an oil layer in a dynamic and biocompatible manner. Meanwhile, in the droplet-spinning system, we can predict and guide the periodic liquid-air interface deformation, as well as the particle motion inside the droplet. We demonstrate through theoretical and experimental study that this platform can be used for nanoscale particle (e.g., DNA molecule and exosome) concentration, separation, and transport. Next, based on our study of acoustically induced fluid motion, we develop an integrated acoustofluidic rotational tweezing platform for rapid rotation (~1 s/rotation), multi-spectral imaging, and phenotyping of zebrafish larvae. In this study, we conduct a systematic investigation including theory development, acoustofluidic device design/fabrication, and flow-system implementation. Moreover, we explore a multidisciplinary extension combining the acoustofluidic zebrafish phenotyping device with computer-vision-based 3D model reconstruction and characterization. With this method, we can obtain substantial information from a single zebrafish sample, including the 3D model, volume, surface area, and deformation ratio. Furthermore, with the design of a continuous flow system, a flow-cytometry-like system was developed for morphological phenotyping of zebrafish larvae.
In this study, a standard workflow is established that directly converts groups of samples into a statistical digital readout and provides a new guideline for applying acoustofluidic techniques to biomedical applications. This work represents a complete fusion of acoustofluidic theory, experimental demonstration, and practical application implementation.
Item Open Access: Acoustofluidic Innovations for Cellular Processing On-Chip (2018). Ohiri, Korine
The advent of increasingly proficient cell-handling tools has led to a drastic maturation of our understanding of life on the microscale. Thus far, impressive strides have been made toward creating efficient and compact systems for manipulating cells on-chip. Multiple exciting cell-handling methodologies have been explored and incorporated into cell separation and analysis tools, ranging from passive hydrodynamic systems to active magnetic and optoelectronic systems. Of these tools, acoustofluidic technologies, which employ sound waves to manipulate cells in microfluidic environments, show great promise. These technologies offer a myriad of benefits, including labeled or unlabeled cell manipulation, gentle handling of cells, long-range manipulation of cells, and simplified fabrication compared to other active systems (e.g., magnetic, optoelectronic). Accordingly, in this dissertation I develop novel acoustofluidic tools that can combine with existing technologies and expand the array of systems available to scientists in biology and medicine for cell handling.
In my first experimental chapter, I characterize and develop elastomeric magnetic microparticles that can be used in future applications for multi-target cell separation from complex mixtures. These particles consist of varying loadings of magnetic nanoparticles evenly and stably distributed throughout a silicone matrix. Consequently, these particles uniquely exhibit a “dual contrast,” whereby they undergo positive magnetophoresis and negative acoustophoresis in water. Further, I show that these particles can be functionalized with biomolecules via chemical modification by linking a biotin group to the surface-accessible amine groups on the particles using carbodiimide chemistry. These functionalized particles can then specifically and non-covalently bind to streptavidin molecules. Additionally, I characterize both the magnetic and acoustic properties of these particles by quantifying the magnetic susceptibilities and the extent of acoustic focusing after 3 seconds for each particle formulation (i.e., 3, 6, 12, 24, and 48 wt. % magnetite in solids), respectively. Finally, I demonstrate a simple ternary separation of the 12 wt. % formulation from unlabeled human umbilical vein cells (HUVECs, non-magnetic) and magnetic beads that exhibit positive acoustic contrast, using magnetic and acoustic separations to highlight the unique properties of these magnetic negative acoustic contrast particles (mNACPs).
In my second experimental chapter, I characterize and develop a novel acoustofluidic chip that uses a trap-and-transfer approach to organize a high-density array of single cells in spacious compartments. My approach exploits a combination of microfluidic weirs and acoustic streaming vortices to first trap single cells in specific locations of a microfluidic device, and then transfer the cells into adjacent low-shear compartments with an acoustic switch. This highly adaptable, compact system allows imaging with standard bright-field and fluorescence microscopes, and can array more than 3,000 individual cells on a chip the size of a standard glass slide. I optimize the hydrodynamic resistance ratios through the primary trap site, the bypass channel, and the adjacent compartment region such that particles first enter the trap, subsequent particles enter the bypass, and particles enter the compartment regions of a clean acoustofluidic chip upon acoustic excitation. Further, I optimize the acoustic switching parameters (e.g., frequency and voltage) and show, using particle-tracking methods, that acoustic switching occurs due to the generation of steady streaming vortices. Uniquely, my system demonstrates for the first time the manipulation of single cells with an array of streaming vortices in a highly parallel format to compartmentalize cells and generate a single-cell array.
Finally, in my third experimental chapter, I demonstrate the biological relevance of the acoustofluidic chip designed in my second experimental chapter. First, I determine the trapping and arraying efficiencies of cells in my acoustofluidic chip to be 80% and 67%, respectively. Here, the arraying efficiency represents the percentage of single cells in the compartment regions and depends on both the trapping efficiency and the acoustic switching efficiency (roughly 84%). Additionally, I observe the adhesion, division, and escape of single PC9 cells from the compartment regions of my acoustofluidic chip at 8-hour increments over 24 hours and identify potential obstacles to quantitative analysis of cell behavior for motile populations. In these studies, I found that it is possible to incubate arrayed single cells on-chip. Finally, I demonstrate that single cells can be stained on-chip in a rapid and facile manner with ~100% efficiency, either before or after adhesion to the surface of the microfluidic chip.
While the studies described herein address but a small fraction of the wider need for next-generation cellular manipulation and analysis tools, I present meaningful knowledge that can expand our understanding of the utility of acoustofluidic devices. Importantly, I (i) characterize for the first time a new type of particle that exhibits both negative acoustic contrast and positive magnetic contrast and (ii) develop a novel acoustofluidic chip that exploits steady acoustic streaming vortices to generate a single-cell array.
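The efficiencies reported for the acoustofluidic chip above are mutually consistent: the arraying efficiency is approximately the product of the trapping and acoustic switching efficiencies (a quick check, not from the dissertation):

```python
# The arraying efficiency depends on both trapping and switching succeeding.
trapping = 0.80    # fraction of trap sites that capture a single cell
switching = 0.84   # fraction of trapped cells transferred acoustically
arraying = trapping * switching

assert round(arraying, 2) == 0.67   # matches the 67% reported above
```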
Item Embargo: Advancing Compact, Multiplexed, and Wavefront-Controlled Designs for Coherent Optical Systems (2023). Hagan, Kristen Elizabeth
The development of non-invasive retinal imaging systems has revolutionized the care and treatment of patients in ophthalmology clinics. Using high-resolution modalities such as scanning laser ophthalmoscopy (SLO) and optical coherence tomography (OCT), physicians and vision scientists are able to detect previously unseen features of the retina, which can 1) provide information for diagnosis, 2) identify disease biomarkers, 3) inform treatment or clinical trial regimens, and 4) improve understanding of underlying disease processes. Traditional SLO and OCT devices are designed as tabletop systems, which are unable to accommodate vulnerable populations including intrasurgical patients and young children. Thus, miniaturizing these systems into compact, handheld form factors is of great interest in both biomedical optics/imaging and medical research, as such devices are essential to the proper care of these patients. Previous studies have shown that handheld systems are instrumental in assessing the overall health of young children and disease progression in subjects of all ages. However, handheld systems are limited in optical performance because hardware selection is restricted to components of small size and low weight. Additionally, aberrations induced by both the system optics and the human eye degrade the resolution of the images. This work focuses on integrating adaptive optics (AO) technology into handheld form factors to correct for aberrations and provide in vivo visualization of single cells such as cone photoreceptors and retinal pigment epithelium cells. We present two devices that constitute the first ever dual-modality AO-SLO and AO-OCT handheld imaging devices, pushing the limits of comprehensive, cellular-resolution retinal imaging.
Finally, we investigate the use of 3x3 fused fiber couplers as a simple, compact coherent receiver design. Our novel balanced-detection topology achieves shot-noise limited performance in the presence of excess noise and shows improved SNR as compared to previous implementations. We detail its ability to enable instantaneous quadrature projection for applications in LiDAR, phase imaging, and optical communications.
Item Open Access: An Econophysics Approach to Short Time-Scale Dynamics of the Equities Markets (2017). Swingler, Ashleigh Jane
Financial markets have evolved drastically over the last decade due to the advent of high-frequency trading and the ubiquitous influence of algorithmic trading. Analyzing the equities markets has become an extremely data-intensive and noisy undertaking. This work explores the information content of equity order book data beyond the inside price. First, an object-oriented library is presented to efficiently construct and maintain the order books of individual securities by parsing and processing NASDAQ TotalView-ITCH data files. This library is part of the NASDAQ Order Processing software suite developed through this research effort. A framework for forecasting stock returns that combines vector autoregression and principal component analysis is presented to determine whether additional order book data, such as the volume of canceled and deleted orders, affects the price dynamics of stocks. Although the resulting model did not provide an adequate methodology for reliably forecasting prices, it was determined that models of market dynamics should include order book information beyond returns and order volume. This research also presents a novel visualization technique for viewing market dynamics and limit order book structure.
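The forecasting framework described above, principal component analysis followed by vector autoregression, can be sketched in a few lines of numpy. The feature layout, the VAR order of 1, and all names here are illustrative assumptions, not the dissertation's actual specification:

```python
import numpy as np

def pca_var1_forecast(features, k=3):
    """Reduce feature dimensionality with PCA, fit a first-order vector
    autoregression (VAR(1)) on the principal components, and forecast the
    next step in the original feature space."""
    mean = features.mean(axis=0)
    X = features - mean
    # PCA via SVD; rows of Vt are principal directions
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    W = Vt[:k].T                       # (d, k) projection matrix
    Z = X @ W                          # (T, k) component scores
    # VAR(1): Z[t] ~= Z[t-1] @ A, fit by least squares
    A, *_ = np.linalg.lstsq(Z[:-1], Z[1:], rcond=None)
    z_next = Z[-1] @ A                 # one-step-ahead component forecast
    return z_next @ W.T + mean         # map back to feature space

rng = np.random.default_rng(1)
feats = rng.standard_normal((200, 6)).cumsum(axis=0)  # toy order-book features
pred = pca_var1_forecast(feats)
assert pred.shape == (6,)
```

In practice the feature columns would be order-book quantities such as returns, order volume, and canceled/deleted order volume, which is exactly the augmentation the abstract investigates.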
Item Open Access: An Empirically Based Stochastic Turbulence Simulator with Temporal Coherence for Wind Energy Applications (2016). Rinker, Jennifer Marie
In this dissertation, we develop a novel methodology for characterizing and simulating nonstationary, full-field, stochastic turbulent wind fields.
In this new method, nonstationarity is characterized and modeled via temporal coherence, which is quantified in the discrete frequency domain by probability distributions of the differences in phase between adjacent Fourier components.
The empirical distributions of the phase differences can also be extracted from measured data, and the resulting temporal coherence parameters can quantify the occurrence of nonstationarity in empirical wind data.
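The phase-difference construction described above can be sketched with a short numpy snippet (a minimal illustration; the function name and the wrapping convention are assumptions, not the dissertation's code):

```python
import numpy as np

def adjacent_phase_differences(x):
    """Phase differences between adjacent Fourier components of a signal.

    For stationary noise these differences are roughly uniform on [-pi, pi);
    nonstationarity shows up as a non-uniform concentration of them, which
    is what the temporal coherence parameters quantify.
    """
    X = np.fft.rfft(x)
    phases = np.angle(X[1:-1])        # drop the DC and Nyquist components
    dphi = np.diff(phases)
    return (dphi + np.pi) % (2 * np.pi) - np.pi   # wrap into [-pi, pi)

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)         # stationary white-noise record
dphi = adjacent_phase_differences(x)
assert dphi.size == x.size // 2 - 2
assert np.all((dphi >= -np.pi) & (dphi < np.pi))
```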
This dissertation (1) implements temporal coherence in a desktop turbulence simulator, (2) calibrates empirical temporal coherence models for four wind datasets, and (3) quantifies the increase in lifetime wind turbine loads caused by temporal coherence.
The four wind datasets were intentionally chosen from locations around the world so that they had significantly different ambient atmospheric conditions.
The prevalence of temporal coherence and its relationship to other standard wind parameters were modeled through empirical joint distributions (EJDs), which involved fitting marginal distributions and calculating correlations.
EJDs have the added benefit of being able to generate samples of wind parameters that reflect the characteristics of a particular site.
Lastly, to characterize the effect of temporal coherence on design loads, we created four models in the open-source wind turbine simulator FAST based on the WindPACT turbines, fit response surfaces to them, and used the response surfaces to calculate lifetime turbine responses to wind fields simulated with and without temporal coherence.
The training data for the response surfaces was generated from exhaustive FAST simulations that were run on the high-performance computing (HPC) facilities at the National Renewable Energy Laboratory.
This process was repeated for wind field parameters drawn from the empirical distributions and for wind samples drawn using the procedure recommended in the IEC wind turbine design standard.
The effect of temporal coherence was calculated as a percent increase in the lifetime load over the base value with no temporal coherence.
Item Open Access: An Induced Pluripotent Stem Cell-derived Tissue Engineered Blood Vessel Model of Hutchinson-Gilford Progeria Syndrome for Disease Modeling and Drug Testing (2018). Atchison, Leigh Joan
Hutchinson-Gilford Progeria Syndrome (HGPS) is a rare, accelerated-aging disorder caused by nuclear accumulation of progerin, an altered form of the lamin A protein. The primary causes of death are stroke and cardiovascular disease at an average age of 14 years. It is known that loss or malfunction of smooth muscle cells (SMCs) in the vasculature leads to cardiovascular defects; however, the exact mechanisms are still not understood. The contribution of other vascular cell types, such as endothelial cells, is still not known due to the current limitations of studying such a rare disorder. Because of the limitations of 2D cell culture, mouse models, and the limited HGPS patient pool, there is a need to develop improved models of HGPS to better understand the development of the disease and discover novel therapeutics.
To address these limitations, we produced a functional, three-dimensional tissue model of HGPS that replicates an arteriole-scale tissue engineered blood vessel (TEBV) using induced pluripotent stem cell (iPSC)-derived cell sources from HGPS patients. To isolate the specific effects of HGPS SMCs, we initially used human cord blood-derived endothelial progenitor cells (hCB-EPCs) from a separate, healthy donor and iPSC-derived SMCs (iSMCs). TEBVs fabricated from HGPS patient iSMCs and hCB-EPCs (HGPS iSMC TEBVs) showed disease attributes such as reduced vasoactivity, increased medial wall thickness, increased calcification, excessive extracellular matrix protein deposition, and cell apoptosis relative to TEBVs fabricated from primary mesenchymal stem cells (MSCs) and hCB-EPCs or from normal-patient iSMCs with hCB-EPCs. Treatment of HGPS iSMC TEBVs for one week with the rapamycin analog Everolimus (RAD001) increased HGPS iSMC TEBV vasoactivity and iSMC differentiation in TEBVs.
To improve the sensitivity of our HGPS TEBV model and study the effects of endothelial cells on the HGPS cardiovascular phenotype, we adopted a modified differentiation protocol to produce iPSC-derived vascular smooth muscle cells (viSMCs) and endothelial cells (viECs) from normal and Progeria patient iPSC lines to create iPSC-derived vascular TEBVs (viTEBVs). Normal viSMCs and viECs showed structural and functional characteristics of vascular SMCs and ECs in 2D culture, while HGPS viSMCs and viECs showed various disease characteristics and reduced function compared to healthy controls. Normal viTEBVs had comparable structure and vasoactivity to MSC TEBVs, while HGPS viTEBVs showed reduced vasoactivity, increased vessel wall thickness, calcification, apoptosis and excess ECM deposition. In addition, HGPS viTEBVs showed markers of cardiovascular disease associated with the endothelium such as decreased response to acetylcholine, increased inflammation, and altered expression of flow-associated genes.
The treatment of viTEBVs with multiple Progeria therapeutics was evaluated to determine the potential of the HGPS viTEBV model to serve as a platform for drug efficacy and toxicity testing, as well as to further elucidate the mechanisms behind each drug's mode of action. Treatment of viTEBVs with therapeutic levels of the farnesyl-transferase inhibitor (FTI) Lonafarnib or of Everolimus improved different aspects of HGPS viTEBV structure and function. Treatment with Everolimus alone increased response to phenylephrine, improved SMC differentiation, and cleared progerin through autophagy. Lonafarnib improved acetylcholine response, decreased ECM deposition, decreased calcification, and improved nitric oxide production. Most significantly, combined therapeutic treatment with both drugs showed an additive effect by improving overall vasoactivity, increasing cell density, increasing viSMC and viEC differentiation, and decreasing calcification and apoptosis in treated HGPS viTEBVs. On the other hand, toxic doses of both drugs combined resulted in significantly diminished HGPS viTEBV function through increased cell death. In summary, this work shows the ability of a tissue engineered vascular model to serve as an in vitro personalized medicine platform to study HGPS and potentially other rare diseases of the vasculature using iPSC-derived cell sources. It has also further identified a potential role of the endothelium in HGPS. Finally, this HGPS viTEBV model has proven effective as a drug testing platform to determine therapeutic and toxic doses of proposed therapeutics based on their specific effects on HGPS viTEBV structure and function.
Item Open Access An Information-driven Approach for Sensor Path Planning(2011) Lu, WenjieThis thesis addresses the problem of information-driven sensor path planning for the purpose of target detection, measurement, and classification using non-holonomic mobile sensor agents (MSAs). Each MSA is equipped with two types of sensors. One is a measuring sensor with a small field of view (FOV), while the other is a detecting sensor with a large FOV. The measuring sensor could be a ground penetrating radar (GPR), while the detecting sensor could be an infrared (IR) sensor. The classification of a target can be reduced to the problem of estimating one or more random variables associated with the target from partial or imperfect sensor measurements [Stengel], and can be represented by a probability mass function (PMF). Previous work shows that the performance of MSAs can be greatly improved by planning their motion and control laws based on their sensing objectives. Because of the stochastic nature of the sensing objectives, the expected measurement benefit of a target, i.e., its information value, is defined as the expected entropy reduction of its classification PMF achieved by the next measurement of that target. The information value of targets is combined with other robot motion-planning methods to address the sensor planning problem.
By definition, the entropy reduction can be represented by conditional mutual information of PMF given a measurement. MSAs are deployed in an obstacle-populated environment, and must avoid collisions with obstacles, as well as, in some cases, targets.
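As a concrete illustration, the information value defined above — the expected entropy reduction of a classification PMF, i.e., the mutual information between the target class and the measurement — can be computed directly from a prior PMF and a measurement model. The sketch below is purely illustrative; the prior and likelihood values are made-up examples, not taken from the thesis:

```python
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a discrete PMF, ignoring zero entries."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def information_value(prior, likelihood):
    """Expected entropy reduction I(C; Z) of a classification PMF.

    prior:      shape (C,)  current PMF over target classes
    likelihood: shape (Z, C) p(z | c) for each measurement outcome z
    """
    evidence = likelihood @ prior                    # p(z), shape (Z,)
    expected_posterior_entropy = 0.0
    for z, pz in enumerate(evidence):
        if pz > 0:
            posterior = likelihood[z] * prior / pz   # Bayes update p(c | z)
            expected_posterior_entropy += pz * entropy(posterior)
    return entropy(prior) - expected_posterior_entropy

# Uncertain two-class target (uniform prior) with an informative sensor
prior = np.array([0.5, 0.5])
likelihood = np.array([[0.9, 0.2],    # p(z=0 | c)
                       [0.1, 0.8]])   # p(z=1 | c)
print(information_value(prior, likelihood))
```

A perfectly discriminating sensor yields an information value equal to the prior entropy, while an uninformative sensor yields zero, which is why the sampling and potential-field strategies below can use this quantity to rank targets.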
This thesis first presents a modified rapidly-exploring random trees (RRTs) approach with a novel milestone sampling method. The sampling function for RRTs takes into account the information value of targets, sensor measurements of obstacle locations, and the MSAs' configurations (e.g., position and orientation) and velocities to generate new milestones for expanding the trees online. By considering the information value, the sampling function favors expansions toward targets with higher expected measurement benefit. After sampling, the MSAs navigate to the selected milestones based on a critic function and take measurements of targets within their FOVs. The thesis then introduces an information potential method (IPM) that combines the information values of targets with potential functions. Targets with high information value have a larger influence distance and a higher probability of being measured by the MSAs. Additionally, the information potential field is used to generate milestones in a local probabilistic roadmap method that helps MSAs escape local minima.
The two proposed methods are applied to a landmine classification problem. It is assumed that the geometries and locations of some obstacles and targets are available as prior information, along with previous classification-relevant measurements of targets. The experiments show that the paths of MSAs using the modified RRTs and the IPM take advantage of the information value by favoring targets with high information value. Furthermore, the results show that the IPM outperforms other approaches, such as the modified RRTs with information value and a classical potential field method that does not take target information value into account.
Item Open Access Antisense Gene Silencing and Bacteriophages as Novel Disinfection Processes for Engineered Systems(2014) WorleyMorse, ThomasThe growth and proliferation of invasive bacteria in engineered systems is an ongoing problem. While there are a variety of physical and chemical processes to remove and inactivate bacterial pathogens, there are many situations in which these tools are no longer effective or appropriate for the treatment of a microbial target. For example, certain strains of bacteria are becoming resistant to commonly used disinfectants, such as chlorine and UV. Additionally, the overuse of antibiotics has contributed to the spread of antibiotic resistance, and there is concern that wastewater treatment processes are contributing to the spread of antibiotic resistant bacteria.
Due to the continually evolving nature of bacteria, it is difficult to develop methods for universal bacterial control in a wide range of engineered systems, as many of our treatment processes are static in nature. Still, invasive bacteria are present in many natural and engineered systems, where the application of broad acting disinfectants is impractical, because their use may inhibit the original desired bioprocesses. Therefore, to better control the growth of treatment resistant bacteria and to address limitations with the current disinfection processes, novel tools that are both specific and adaptable need to be developed and characterized.
In this dissertation, two possible biological disinfection processes were investigated for use in controlling invasive bacteria in engineered systems. First, antisense gene silencing, which is the specific use of oligonucleotides to silence gene expression, was investigated. This work was followed by the investigation of bacteriophages (phages), which are viruses that are specific to bacteria, in engineered systems.
For the antisense gene silencing work, a computational approach was used to quantify the number of off-targets and to determine their effects in prokaryotic organisms. For Escherichia coli K-12 MG1655 and Mycobacterium tuberculosis H37Rv, the mean number of off-targets was found to be 15.0 ± 13.2 and 38.2 ± 61.4, respectively, which results in a reduction of greater than 90% in the effective oligonucleotide concentration. It was also demonstrated that the number of off-targets varies widely over the length of a gene but that, on average, no general gene location could be targeted to reduce off-targets; this analysis therefore needs to be performed for each gene in question. Furthermore, the thermodynamic binding energy between the oligonucleotide and the mRNA accounted for 83% of the variation in the silencing efficiency, whereas the number of off-targets explained 43% of the variance, suggesting that optimizing thermodynamic parameters should be prioritized over minimizing the number of off-targets. In summary, off-target hybrids can account for a greater than 90% reduction in the concentration of the silencing oligonucleotides, and the effective concentration can be increased by rationally designing silencing targets to minimize off-target hybrids.
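The variance fractions quoted above (83% vs. 43%) are coefficients of determination (R²) from univariate fits. A minimal sketch of computing R² for a binding-energy predictor follows; the data here are synthetic placeholders (a made-up linear trend plus noise), not the dissertation's measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative synthetic data: stronger (more negative) oligo:mRNA binding
# energy -> higher silencing efficiency, with measurement noise
binding_energy = rng.uniform(-40.0, -10.0, size=50)              # kcal/mol
silencing = -0.02 * binding_energy + rng.normal(0, 0.05, 50)     # efficiency

def r_squared(x, y):
    """Coefficient of determination for a one-variable linear fit."""
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    ss_res = np.sum(residuals**2)
    ss_tot = np.sum((y - y.mean())**2)
    return 1.0 - ss_res / ss_tot

print(f"R^2 = {r_squared(binding_energy, silencing):.2f}")
```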
Regarding the work with phages, the disinfection rates of bacteria in the presence of phages were determined. The disinfection rates of E. coli K12 MG1655 in the presence of coliphage Ec2 ranged up to 2 h⁻¹ and were dependent on both the initial phage and bacterial concentrations. Increasing initial phage concentrations resulted in increasing disinfection rates, and generally, increasing initial bacterial concentrations resulted in increasing disinfection rates. However, disinfection rates were found to plateau at higher bacterial and phage concentrations. A multiple linear regression model was used to predict the disinfection rates as a function of the initial phage and bacterial concentrations, and this model was able to explain 93% of the variance in the disinfection rates. The disinfection rates were also modeled with a particle aggregation model. The results from these model simulations suggested that at lower phage and bacterial concentrations there are not enough collisions to support active disinfection, which limits the conditions and systems where phage-based bacterial disinfection is possible. Additionally, the particle aggregation model overpredicted the disinfection rates at higher phage and bacterial concentrations of 10⁸ PFU/mL and 10⁸ CFU/mL, suggesting other interactions were occurring at these higher concentrations. Overall, this work highlights the need for alternative models to more accurately describe the dynamics of this system across a range of phage and bacterial concentrations. Finally, the minimum required hydraulic residence time was calculated for a continuous stirred-tank reactor (CSTR) and a plug flow reactor (PFR) as a function of both the initial phage and bacterial concentrations, which suggested that phage treatment in a PFR is theoretically possible.
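A multiple linear regression of this kind can be sketched as follows; the data points below are synthetic placeholders (the dissertation's raw measurements are not reproduced in the abstract), fitting the rate against log-transformed initial concentrations:

```python
import numpy as np

# Illustrative synthetic observations of disinfection rate k (1/h)
log_phage = np.array([5, 6, 7, 8, 5, 6, 7, 8], dtype=float)    # log10 PFU/mL
log_bact  = np.array([5, 5, 6, 6, 7, 7, 8, 8], dtype=float)    # log10 CFU/mL
rate      = np.array([0.2, 0.5, 0.9, 1.3, 0.4, 0.8, 1.5, 2.0]) # 1/h

# Fit k = b0 + b1*log10(P0) + b2*log10(B0) by ordinary least squares
X = np.column_stack([np.ones_like(log_phage), log_phage, log_bact])
coef, *_ = np.linalg.lstsq(X, rate, rcond=None)

pred = X @ coef
r2 = 1 - np.sum((rate - pred)**2) / np.sum((rate - rate.mean())**2)
print(coef, r2)
```

On these placeholder data both fitted slopes are positive, mirroring the qualitative finding that higher initial phage and bacterial concentrations both increase the disinfection rate.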
In addition to determining disinfection rates, the long-term bacterial growth inhibition potential was determined for a variety of phages with both Gram-negative and Gram-positive bacteria. It was determined that, on average, phages can be used to inhibit bacterial growth for up to 24 h, and that this effect was concentration dependent for various phages at specific time points. Additionally, it was found that a phage cocktail was no more effective at inhibiting bacterial growth over the long term than the best-performing phage in isolation.
Finally, for an industrial application, the use of phages to inhibit invasive Lactobacilli in ethanol fermentations was investigated. It was demonstrated that phage 8014-B2 can achieve a greater than 3-log inactivation of Lactobacillus plantarum during a 48 h fermentation. Additionally, it was shown that phages can be used to protect final product yields and maintain yeast viability. By modeling the fermentation system with differential equations, it was determined that there is a 10 h window at the beginning of the fermentation run in which phage addition protects final product yields; after 20 h, no additional benefit of phage addition was observed.
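The basic structure of such a differential-equation model can be sketched with a heavily stripped-down phage-bacterium system (illustrative rate constants, and no yeast or product terms — this is not the dissertation's calibrated fermentation model):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Lotka-Volterra-style phage predation sketch:
#   dB/dt = mu*B - k*P*B        bacterial growth minus phage infection
#   dP/dt = beta*k*P*B - d*P    phage burst minus phage decay
# mu [1/h], k [mL/(PFU*h)], beta [phages/cell], d [1/h] are illustrative.
mu, k, beta, d = 0.5, 1e-8, 50.0, 0.1

def rhs(t, y):
    B, P = y
    return [mu * B - k * P * B, beta * k * P * B - d * P]

# 48 h fermentation window, phage added at t = 0
sol = solve_ivp(rhs, (0, 48), [1e5, 1e7], rtol=1e-8, atol=1.0, max_step=0.5)
B_end, P_end = sol.y[:, -1]

# Phage-free control grows unchecked at rate mu over the same window
B_control = 1e5 * np.exp(mu * 48)
```

Even this toy system reproduces the qualitative behavior described above: the invading bacterial population is suppressed by orders of magnitude relative to the phage-free control.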
In conclusion, this dissertation improved the current methods for designing antisense gene silencing targets for prokaryotic organisms, and characterized phages from an engineering perspective. First, the current design strategy for antisense targets in prokaryotic organisms was improved through the development of an algorithm that minimized the number of off-targets. For the phage work, a framework was developed to predict the disinfection rates in terms of the initial phage and bacterial concentrations. In addition, the long-term bacterial growth inhibition potential of multiple phages was determined for several bacteria. In regard to the phage application, phages were shown to protect both final product yields and yeast concentrations during fermentation. Taken together, this work suggests that the rational design of phage treatment is possible and further work is needed to expand on this foundation.
Item Open Access Application of Numerical Methods to Study Arrangement and Fracture of Lithium-Ion Microstructure(2016) Stershic, Andrew JosephThe focus of this work is to develop and employ numerical methods that provide characterization of granular microstructures, dynamic fragmentation of brittle materials, and dynamic fracture of three-dimensional bodies.
We first propose the fabric tensor formalism to describe the structure and evolution of lithium-ion electrode microstructure during the calendering process. Fabric tensors are directional measures of particulate assemblies based on inter-particle connectivity, relating to the structural and transport properties of the electrode. Applying this technique to X-ray computed tomography of cathode microstructure, we show that fabric tensors capture the evolution of the inter-particle contact distribution and are therefore good measures for the internal state of, and electronic transport within, the electrode.
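The second-order fabric tensor is simply the average outer product of the unit contact normals; a minimal sketch (on synthetic normals, not the tomography data) is:

```python
import numpy as np

def fabric_tensor(normals):
    """Second-order fabric tensor N = <n (x) n> of inter-particle contact
    normals (one unit vector per contact), a standard directional measure
    of particulate microstructure."""
    n = np.array(normals, dtype=float)
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    return np.einsum('ci,cj->ij', n, n) / len(n)

# Isotropic contact distribution -> fabric tensor close to I/3
rng = np.random.default_rng(1)
iso = rng.normal(size=(20000, 3))   # Gaussian directions are uniform on the sphere
N = fabric_tensor(iso)

# Scalar anisotropy from the deviatoric part (zero for perfect isotropy)
dev = N - np.trace(N) / 3 * np.eye(3)
anisotropy = np.sqrt(1.5 * np.sum(dev * dev))
```

The trace of the fabric tensor is always one, so the deviatoric part isolates directional bias in the contact distribution, which is the quantity that evolves during calendering.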
We then shift focus to the development and analysis of fracture models within finite element simulations. A difficult problem to characterize in the realm of fracture modeling is that of fragmentation, wherein brittle materials subjected to a uniform tensile loading break apart into a large number of smaller pieces. We explore the effect of numerical precision in the results of dynamic fragmentation simulations using the cohesive element approach on a one-dimensional domain. By introducing random and non-random field variations, we discern that round-off error plays a significant role in establishing a mesh-convergent solution for uniform fragmentation problems. Further, by using differing magnitudes of randomized material properties and mesh discretizations, we find that employing randomness can improve convergence behavior and provide a computational savings.
The Thick Level-Set model is implemented to describe brittle media undergoing dynamic fragmentation as an alternative to the cohesive element approach. This non-local damage model features a level-set function that defines the extent and severity of degradation and uses a length scale to limit the damage gradient. In terms of energy dissipated by fracture and mean fragment size, we find that the proposed model reproduces the rate-dependent observations of analytical approaches, cohesive element simulations, and experimental studies.
Lastly, the Thick Level-Set model is implemented in three dimensions to describe the dynamic failure of brittle media, such as the active material particles in the battery cathode during manufacturing. The proposed model matches expected behavior from physical experiments, analytical approaches, and numerical models, and mesh convergence is established. We find that the use of an asymmetrical damage model to represent tensile damage is important to producing the expected results for brittle fracture problems.
The impact of this work is that designers of lithium-ion battery components can employ the numerical methods presented herein to analyze the evolving electrode microstructure during manufacturing, operational, and extraordinary loadings. This allows for enhanced designs and manufacturing methods that advance the state of battery technology. Further, these numerical tools have applicability in a broad range of fields, from geotechnical analysis to ice-sheet modeling to armor design to hydraulic fracturing.
Item Open Access Application of Stochastic Processes in Nonparametric Bayes(2014) Wang, YingjianThis thesis presents theoretical studies of some stochastic processes and their applications in Bayesian nonparametric methods. The stochastic processes discussed in the thesis are mainly the ones with independent increments - the Levy processes. We develop new representations for the Levy measures of two representative examples of the Levy processes, the beta and gamma processes. These representations are manifested in terms of an infinite sum of well-behaved (proper) beta and gamma distributions, with the truncation and posterior analyses provided. The decompositions provide new insights into the beta and gamma processes (and their generalizations), and we demonstrate how the proposed representation unifies some properties of the two, as these are of increasing importance in machine learning.
Next a new Levy process is proposed for an uncountable collection of covariate-dependent feature-learning measures; the process is called the kernel beta process. Available covariates are handled efficiently via the kernel construction, with covariates assumed observed with each data sample ("customer"), and latent covariates learned for each feature ("dish"). The dependencies among the data are represented with the covariate-parameterized kernel function. The beta process is recovered as a limiting case of the kernel beta process. An efficient Gibbs sampler is developed for computations, and state-of-the-art results are presented for image processing and music analysis tasks.
Last is a non-Levy process example: the multiplicative gamma process applied to the low-rank representation of tensors. The multiplicative gamma process is applied along the super-diagonal of tensors in the rank decomposition, and its shrinkage property nonparametrically learns the rank from the multiway data. This model is constructed to be conjugate for the continuous multiway data case. For the nonconjugate binary multiway data, the Polya-Gamma auxiliary variable is sampled to elicit closed-form Gibbs sampling updates. This rank decomposition of tensors driven by the multiplicative gamma process yields state-of-the-art performance on various synthetic and benchmark real-world datasets, with desirable model scalability.
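The shrinkage mechanism of the multiplicative gamma process can be sketched in a few lines; the hyperparameter values a1 and a2 below are illustrative choices, not the thesis's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Multiplicative gamma process: the precision on rank component h is a
# cumulative product of gamma multipliers, tau_h = prod_{l<=h} delta_l.
# With a2 > 1 the multipliers exceed 1 on average, so precisions grow and
# the prior variances 1/tau_h shrink toward zero for later components --
# this is what nonparametrically prunes the effective tensor rank.
a1, a2, H = 2.0, 3.0, 10
delta = np.concatenate([rng.gamma(a1, 1.0, size=1),
                        rng.gamma(a2, 1.0, size=H - 1)])
tau = np.cumprod(delta)          # tau_h, increasing in expectation
prior_var = 1.0 / tau            # shrinking prior variance per component
```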
Item Open Access Assessing the Injury Tolerance of the Human Spine(2017) Schmidt, Allison LindseyChronic and acute back injuries are widespread, affecting people in environments where they are exposed to vibration and repeated shock. These issues have been widely reported among personnel on aircraft and small watercraft; operators of heavy industrial or construction equipment may also experience morbidity associated with cyclic loading. To prevent these types of injuries, an improved understanding is needed of the spine’s tolerance to fatigue injury and of the factors that affect fatigue tolerance.
These types of vibration and shock exposures are addressed by international standards that propose limitations on the length and severity of the accelerations to which an individual is subjected. However, the current standard, ISO 2631-5:2004, provides an imprecise health hazard assessment. In this dissertation, a detailed technical critique is presented to examine the assumptions on which ISO 2631-5:2004 is based. An original analysis of existing data yields an age-based regression of the ultimate strength of lumbar spinal units and demonstrates sources of error in the strength regression in the standard. This dissertation also demonstrates that, contradicting earlier assumptions, the ultimate strength of the spine does not lie on a power-law S-N curve, and fatigue tolerance cannot be extrapolated from ultimate strength tests.
An alternative approach is presented for estimating the injury risk due to repeated loading. Drawing from existing data in the literature, a large dataset of in vitro fatigue tests of lumbar spinal segments was assembled. Using this fatigue data, a survival analysis approach was used to estimate the risk of failure based on several factors. The number of cycles, load amplitude, sex, and age were all significant predictors of bony failure in the spinal column. The parameter described by ISO 2631-5:2004 to quantify repeated loading exposure was modified, and an injury risk model was developed based on this modified parameter that relates the risk of vertebral failure to repeated compressive loading. Notably, the effect of sex on fatigue tolerance persisted after normalizing by area, emphasizing the need for men and women to be addressed separately in the creation of injury risk predictions and occupational guidelines.
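A Weibull-type risk function is one common way to express such a cycles-and-covariates injury model; the sketch below uses made-up coefficients purely to show the functional form (the dissertation's fitted values are not reproduced in the abstract):

```python
import numpy as np

def failure_risk(cycles, load_mpa, age, female, beta=1.2):
    """Illustrative Weibull injury-risk sketch: probability of vertebral
    failure after a given number of loading cycles. All coefficients are
    hypothetical, not the dissertation's fitted values.

    Accelerated-failure-time form: the characteristic life eta shrinks
    with load amplitude, age, and the female indicator,
      log(eta) = b0 - b1*load - b2*age - b3*female.
    """
    log_eta = 20.0 - 1.5 * load_mpa - 0.08 * age - 0.5 * female
    eta = np.exp(log_eta)
    return 1.0 - np.exp(-(cycles / eta) ** beta)

# Risk grows with cycle count and with load amplitude
print(failure_risk(1e4, load_mpa=2.0, age=40, female=0))
print(failure_risk(1e5, load_mpa=2.0, age=40, female=0))
```

This form captures the qualitative findings above: risk increases monotonically with cycles, load, and age, and differs by sex through a covariate rather than a separate model.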
Posture has also been implicated in altering the injury mechanisms and tolerance to fatigue loading. However, few previous investigations of cyclic loading have addressed non-neutral postures. To assess the influence of dynamic flexion on the fatigue tolerance of the lumbar spine, a series of tests was conducted that combined a cyclic compressive force with a dynamic flexing motion. A study of 17 spinal segments from six young male cadavers was conducted, with tests ranging from 1000 to 500 000 cycles. Of the 17 specimens, 7 failed during testing. These failures were analyzed using a Cox proportional hazards model. As in compressive fatigue behavior, the significant factors were the magnitude of the applied load and the age of the specimen. However, when the dynamically flexed specimens in these tests were compared to the specimens in the axial fatigue dataset, the flexion condition did not have a detectable effect on fatigue tolerance.
The Hybrid III dummy is a critical tool in the assessment of such loading. Although the Hybrid III was originally designed for automotive frontal impact testing, these dummies have since been used to measure exposures and estimate injury risks in a wide variety of scenarios. These scenarios often involve using the dummy under non-standard temperatures or with little recovery interval between tests. A series of tests was conducted on the Hybrid III neck and lumbar components to assess the effects of rest intervals and a range of temperatures. Variations in rest interval had little effect on the response of either component. However, both components were extremely sensitive to changes in temperature. For the 50th percentile male HIII neck, the stiffness fell by 18% between 25°C and 37.5°C; at 0°C, the stiffness more than doubled, increasing by 115%. Temperature variation had an even more pronounced effect on the HIII lumbar. Compared to room temperature, the lumbar stiffness at 37.5°C fell by 40%, and at 12.5°C, the stiffness more than doubled, increasing by 115%.
This dissertation has advanced the state of knowledge about the fatigue characteristics of the spine. An injury risk function has been developed that can serve as a tool for health hazard assessment in occupational standards. It has also contributed a fatigue dataset with dynamic flexion. This work will improve the scientific community’s ability to prevent repeated loading injuries. This dissertation has also demonstrated the immense sensitivity to temperature of the Hybrid III spinal components. This finding has major implications for the interpretation of previously published work using the Hybrid III, for the conduct of future research, and for future dummy design.
Item Open Access Association of pre-treatment radiomic features with lung cancer recurrence following stereotactic body radiation therapy.(Physics in medicine and biology, 2019-01-08) Lafata, Kyle J; Hong, Julian C; Geng, Ruiqi; Ackerson, Bradley G; Liu, Jian-Guo; Zhou, Zhennan; Torok, Jordan; Kelsey, Chris R; Yin, Fang-FangThe purpose of this work was to investigate the potential relationship between radiomic features extracted from pre-treatment x-ray CT images and clinical outcomes following stereotactic body radiation therapy (SBRT) for non-small-cell lung cancer (NSCLC). Seventy patients who received SBRT for stage-1 NSCLC were retrospectively identified. The tumor was contoured on pre-treatment free-breathing CT images, from which 43 quantitative radiomic features were extracted to collectively capture tumor morphology, intensity, fine-texture, and coarse-texture. Treatment failure was defined based on cancer recurrence, local cancer recurrence, and non-local cancer recurrence following SBRT. The univariate association between each radiomic feature and each clinical endpoint was analyzed using Welch's t-test, and p-values were corrected for multiple hypothesis testing. Multivariate associations were based on regularized logistic regression with a singular value decomposition to reduce the dimensionality of the radiomics data. Two features demonstrated a statistically significant association with local failure: Homogeneity2 (p = 0.022) and Long-Run-High-Gray-Level-Emphasis (p = 0.048). These results indicate that relatively dense tumors with a homogenous coarse texture might be linked to higher rates of local recurrence. Multivariable logistic regression models produced maximum [Formula: see text] values of [Formula: see text], and [Formula: see text], for the recurrence, local recurrence, and non-local recurrence endpoints, respectively. 
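The univariate screen described above — Welch's t-test per radiomic feature, corrected for multiple hypothesis testing — can be sketched as follows. The data here are synthetic, and Benjamini-Hochberg is used as an example correction (the paper's exact correction procedure is not specified in the abstract):

```python
import numpy as np
from scipy import stats

def welch_bh(group_a, group_b, alpha=0.05):
    """Welch's (unequal-variance) t-test for each feature, followed by the
    Benjamini-Hochberg step-up correction. group_a/group_b are lists of
    per-feature value arrays for the two outcome groups."""
    pvals = np.array([stats.ttest_ind(a, b, equal_var=False).pvalue
                      for a, b in zip(group_a, group_b)])
    m = len(pvals)
    order = np.argsort(pvals)
    # Largest k with p_(k) <= alpha * k / m; reject the k smallest p-values
    below = pvals[order] <= alpha * np.arange(1, m + 1) / m
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True
    return pvals, rejected

# One genuinely shifted feature among five (synthetic data)
rng = np.random.default_rng(0)
failures = [rng.normal(2.0 if i == 0 else 0.0, 1.0, 30) for i in range(5)]
controls = [rng.normal(0.0, 1.0, 30) for _ in range(5)]
pvals, rejected = welch_bh(failures, controls)
```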
The CT-based radiomic features used in this study may be more associated with local failure than non-local failure following SBRT for stage I NSCLC. This finding is supported by both univariate and multivariate analyses.Item Open Access Automatic Behavioral Analysis from Faces and Applications to Risk Marker Quantification for Autism(2018) Hashemi, JordanThis dissertation presents novel methods for behavioral analysis with a focus on early risk marker identification for autism. We present current contributions, including a method for pose-invariant facial expression recognition, a self-contained mobile application for behavioral analysis, and a framework to calibrate a trained deep model with data synthesis and augmentation. First we focus on pose-invariant facial expression recognition. It is known that 3D features have higher discrimination power than 2D features; however, 3D features are usually not readily available at testing time. For pose-invariant facial expression recognition, we utilize multi-modal features at training and exploit the cross-modal relationship at testing. We extend our pose-invariant facial expression recognition method and present other methods to characterize a multitude of risk behaviors related to risk marker identification for autism. In practice, identification of children with neurodevelopmental disorders requires low-specificity screening with questionnaires followed by time-consuming, in-person observational analysis by highly trained clinicians. To alleviate this time- and resource-expensive risk identification process, we develop a self-contained, closed-loop mobile application that records a child's face while he/she is watching specific, expertly curated movie stimuli and automatically analyzes the behavioral responses of the child. We validate our methods against those of expert human raters.
Using the developed methods, we present findings on group differences in behavioral risk markers for autism and on interactions between motivational framing context, facial affect, and memory outcome. Lastly, we present a framework that uses face synthesis to calibrate trained deep models to deployment scenarios they have not been trained on. Face synthesis involves creating novel realizations of an image of a face and is an effective method that is predominantly employed only at training time and in a blind manner (e.g., blindly synthesizing as much as possible). We present a framework that optimally selects synthesis variations and employs them both during training and at testing, leading to more efficient training and better performance.
Item Open Access Automatic Identification of Training & Testing Data for Buried Threat Detection using Ground Penetrating Radar(2017) Reichman, DanielGround penetrating radar (GPR) is one of the most popular and successful sensing modalities that has been investigated for landmine and subsurface threat detection. The radar is attached to the front of a vehicle and collects measurements along the path of travel. At each spatial location queried, a time-series of measurements is collected, and the measured data are often visualized as images within which the signals corresponding to buried threats exhibit a characteristic appearance. This appearance is typically hyperbolic and has been leveraged to develop several automated detection methods. Many of the detection methods applied to this task are supervised, and therefore require labeled examples of threat and non-threat data for training. Labeled examples are typically obtained by collecting data over deliberately buried threats at known spatial locations. However, uncertainty exists with regard to the temporal locations in depth at which the buried threat signal exists in the imagery. This uncertainty is an impediment to obtaining labeled examples of buried threats to provide to the supervised learning model. The focus of this dissertation is on overcoming the problem of identifying training data for supervised learning models for GPR buried threat detection.
The ultimate goal is to be able to apply the lessons learned in order to improve the performance of buried threat detectors. Therefore, a particular focus of this dissertation is to understand the implications of particular data selection strategies, and to develop principled general strategies for selecting the best approaches. This is done by identifying three factors that are typically considered in the literature with regards to this problem. Experiments are conducted to understand the impact of these factors on detection performance. The outcome of these experiments provided several insights about the data that can help guide the future development of automated buried threat detectors.
The first set of experiments suggest that a substantial number of threat signatures are neither hyperbolic nor regular in their appearance. These insights motivated the development of a novel buried threat detector that improves over the state-of-the-art benchmark algorithms on a large collection of data. In addition, this newly developed algorithm exhibits improved characteristics of robustness over those algorithms. The second set of experiments suggest that automating the selection of data corresponding to the buried threats is possible and can be used to replace manually designed methods for this task.