Browsing by Subject "Optimization"
Item Open Access 3D dynamic in vivo imaging of joint motion: application to measurement of anterior cruciate ligament function (2019) Englander, Zoë Alexandra
More than 400,000 anterior cruciate ligament (ACL) injuries occur annually in the United States, 70% of which are non-contact. A severe consequence of ACL injury is the increased risk of early-onset osteoarthritis (OA). Importantly, the increased risk of OA persists even if the ACL is surgically reconstructed. Thus, due to the long-term physical consequences and high financial burden of treatment, injury prevention and improved reconstruction techniques are critical. However, the causes of non-contact ACL injuries remain unclear, which has hindered efforts to develop effective training programs targeted at preventing these injuries. Improved understanding of the knee motions that increase the risk of ACL injury can inform more effective injury prevention strategies. Furthermore, there is presently limited in vivo data describing the function of the ACL under dynamic loading conditions. Understanding how the ACL functions to stabilize the knee joint under physiologic loading conditions can inform design criteria for grafts used in ACL reconstruction. Grafts that more accurately mimic the native function of the ACL may help prevent these severe long-term degenerative changes in the knee joint after injury.
To this end, measurements of in vivo ACL function during knee motion are critical to understanding how non-contact ACL injuries occur and the function of the ACL in stabilizing the joint during activities of daily living. Specifically, identifying the knee motions that increase ACL length and strain can elucidate the mechanisms of non-contact ACL injury, as a taut ligament is more likely to fail. Furthermore, measuring ACL elongation patterns during dynamic activity can inform the design criteria for grafts used in reconstructive surgery. To obtain measurements, 3D imaging techniques that can be used to measure dynamic in vivo ACL elongation and strain at high temporal and spatial resolution are needed.
Thus, in this dissertation a method of measuring knee motion and ACL function during dynamic activity in vivo using high-speed biplanar radiography in combination with magnetic resonance (MR) imaging was developed. In this technique, 3D surface models of the knee joint are created from MR images and registered to high-speed biplanar radiographs of knee motion. The use of MR imaging to model the joint allows for visualization of bone and soft tissue anatomy, in particular the attachment site footprints of the ligaments. By registering the bone models to biplanar radiographs using software developed in this dissertation, the relative positions of the bones and associated ligament attachment site footprints at the time of radiographic imaging can be reproduced. Thus, measurements of knee kinematics and ligament function during dynamic activity can be obtained at high spatial and temporal resolution.
We have applied the techniques developed in this dissertation to obtain novel dynamic in vivo measurements of the mechanical function of the knee joint. Specifically, the physiologic elongation and strain behaviors of the ACL during gait and single-legged jumping were measured. Additionally, the dynamic function of the patellar tendon during single legged jumping was measured. The findings of this dissertation have helped to elucidate the knee kinematics that increase ACL injury vulnerability by identifying the dynamic motions that result in elongation and strain in the ACL. Furthermore, the findings of this dissertation have provided critical data to inform design criteria for grafts used in reconstructive surgery such that reconstructive techniques better mimic the physiologic function of the ACL.
The methodologies described in this dissertation can be applied to study the mechanical behavior of other joints such as the spine, and other soft tissues, such as articular cartilage, under various loading conditions. Therefore, these methods may have a significant impact on the field of biomechanics as a whole, and may have applicability to a number of musculoskeletal applications.
Item Embargo 3D Tissue Modelling: Laser-based Multi-modal Surface Reconstruction, Crater Shape Prediction and Pathological Mapping in Robotic Surgery (2023) Ma, Guangshen
In surgical robotics, fully automated tumor removal is an important topic that includes three main tasks: tissue classification for cancer diagnosis, pathological mapping for tumor localization, and tissue resection using a laser scalpel. Generating a three-dimensional (3D) pathological tissue model with fully non-contact sensors can provide invaluable information to assist surgeons in decision-making and enable the use of surgical robots for efficient tissue manipulation. To collect comprehensive information about a biological tissue target, robotic laser systems with complementary sensors (e.g., an optical coherence tomography (OCT) sensor and stereovision) can play important roles in providing non-contact laser scalpels (i.e., a cutting laser scalpel) for tissue removal, applying photonics-based sensors for pathological tissue classification (i.e., laser-based endogenous fluorescence), and aligning multi-sensing information to generate a 3D pathological map. However, there are three main challenges in integrating multiple laser-based sensors into the robotic laser system: 1) modelling laser beam transmission in 3D free space to achieve accurate laser-tissue manipulation under geometric constraints, 2) studying the complex physics of laser-tissue interaction for tissue differentiation and 3D shape modelling to ensure safe tissue removal, and 3) integrating information from multiple sensing devices under sensor noise and uncertainties from system calibration.
Targeting these three research problems separately, a computational framework is proposed that provides kinematics and calibration algorithms to control and direct the 3D laser beam through a system with multiple rotary mirrors (to transmit the laser beam in free space) and laser-based sensor inputs. This framework can serve as a base platform for optics-based robotic system design and for solving motion planning problems related to laser-based robot systems. Simulation experiments have verified the feasibility of the proposed framework, and actual experiments have been conducted with an existing robotic laser system on phantom and ex-vivo biological tissues.
To study the complex physics of laser-tissue interaction, a 3D data-driven method is developed to model the geometric relation between the laser energy distribution, laser incident angles, and the tissue deformation resulting from photoablation. The results of the phantom studies have demonstrated the feasibility of applying the trained model for laser crater shape predictions during the surgical planning.
Finally, a research platform, referred to as "TumorMapping", is developed to collect multimodal sensing information from complementary sensors to build a 3D pathological map of a mouse tumor surface. This robot system includes a sensor module attached to a 6-DOF robot arm end-effector, based on laser-induced fluorescence spectroscopy for tissue classification and a fiber-coupled cutting laser for tissue resection. A benchtop sensor platform is built with an OCT sensor and a stereovision system with a two-lens camera to collect tissue information in a non-contact manner. The robot-sensor and complementary-sensor sub-systems are integrated in a unified platform for 3D pathological map reconstruction.
In summary, the research contributions include important advancements in laser-based sensor fusion for surgical decision-making, enabling new capabilities in 3D pathological mapping combined with intelligent robot planning and control algorithms for robotic surgery.
Item Open Access A Collimator Setting Optimization Algorithm for Dual-arc Volumetric Modulated Arc Therapy in Pancreas Stereotactic Body Radiation Therapy (2019) Li, Xinyi
Purpose: To develop an automatic collimator setting optimization algorithm to improve the dosimetric quality of pancreas Volumetric Modulated Arc Therapy (VMAT) plans for Stereotactic Body Radiation Therapy (SBRT).
Methods: Fifty-five pancreas SBRT cases were retrospectively studied. In contrast to the conventional practice of initializing collimator settings manually, the proposed algorithm simultaneously optimizes the collimator angles and jaw positions, which are customized to the patient geometry. The algorithm includes two key steps: an iterative optimization algorithm based on simulated annealing that generates a set of candidate collimator settings, and a scoring system that chooses the final collimator settings based on organ-at-risk (OAR) sparing criteria and the dose prescription. The scoring system penalizes three factors: 1) the ratio of the jaw opening in the Y direction to that in the X direction; 2) the unmodulated MLC area within the jaw aperture in a dynamic MLC sequence; and 3) the OAR shielding capability of the MLC under MLC aperture control constraints. For validation, another 16 pancreas SBRT cases were analyzed. Two dual-arc plans were generated for each validation case: an optimized plan (Planopt) and a conventional plan (Planconv). Each plan was generated using the same set of auxiliary planning structures and dose-volume-histogram (DVH) constraints in inverse optimization. Dosimetric results were analyzed and compared. All results were tested by Wilcoxon signed-rank tests.
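As a rough illustration of the two-step approach described above (it is not the authors' code), the sketch below runs a simulated-annealing search over a collimator angle and jaw openings and scores each candidate with three placeholder penalty terms standing in for the aspect-ratio, unmodulated-area, and OAR-shielding factors; all function names, weights, and ranges are assumptions.

```python
# Illustrative sketch only: simulated annealing over collimator settings with a
# three-term score analogous to the factors described above. Penalties are stand-ins.
import math
import random

def score(settings):
    angle, y_jaw, x_jaw = settings
    aspect_penalty = (y_jaw / x_jaw) ** 2                   # 1) Y/X jaw opening ratio
    unmodulated_penalty = max(0.0, x_jaw * y_jaw - 80.0)    # 2) unmodulated area proxy
    oar_penalty = abs(math.sin(math.radians(angle)))        # 3) OAR shielding proxy
    return 1.0 * aspect_penalty + 0.1 * unmodulated_penalty + 5.0 * oar_penalty

def anneal(init, n_iter=2000, t0=1.0, cooling=0.995, seed=0):
    rng = random.Random(seed)
    current, best = list(init), list(init)
    t = t0
    for _ in range(n_iter):
        cand = [current[0] + rng.uniform(-5, 5),                # perturb collimator angle (deg)
                max(1.0, current[1] + rng.uniform(-1, 1)),      # perturb Y jaw (cm)
                max(1.0, current[2] + rng.uniform(-1, 1))]      # perturb X jaw (cm)
        delta = score(cand) - score(current)
        if delta < 0 or rng.random() < math.exp(-delta / t):   # accept downhill or by chance
            current = cand
            if score(current) < score(best):
                best = list(current)
        t *= cooling
    return best, score(best)

if __name__ == "__main__":
    best_settings, best_score = anneal([30.0, 10.0, 10.0])
    print(best_settings, best_score)
```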
Results: The two plan groups showed no statistically significant differences in target dose coverage V95% (p=0.84) or Root Conformity Index (p=0.30). Mean doses to OARs were improved or comparable. In comparison with Planconv, Planopt reduced the maximum dose (D0.03cc) to the stomach (-49.5 cGy, p=0.03), duodenum (-63.5 cGy, p<0.01), and bowel (-62.5 cGy, p=0.01). Planopt also showed a lower modulation complexity score (p=0.02), which implies a higher modulation complexity of the dynamic MLC sequence.
Conclusions: The proposed collimator setting optimization algorithm successfully improved dosimetric performance for dual-arc VMAT plans in pancreas SBRT. The proposed algorithm demonstrated strong clinical feasibility and readiness.
Item Open Access Aerodynamic Optimization of Helicopter Rotors using a Harmonic Balance Lifting Surface Technique (2018) Tedesco, Matthew Braxton
This thesis concerns the optimization of the aerodynamic performance of conventional helicopter rotors, given a set of design variables to control the rotor's pitching angle, twist, and chord distributions. Two models are presented for use. The lifting line model is a vortex lattice model that uses assumptions on the size and shape of the blade to simplify the model, but is unable to account for unsteady and small-aspect-ratio effects. The lifting surface model removes these assumptions and allows for a wider variety of accurate solutions, at the cost of overall computational complexity. The lifting surface model is chosen for analysis, and then condensed using static condensation and harmonic balance. The final system is discretized, and pertinent values of power, force, and moment are calculated using Kelvin's theorem and the unsteady Bernoulli equation. This system is then optimized in one of two ways: using a direct linear solve where possible, or the open-source package IPOPT where necessary. The results of single-point and multi-point optimization demonstrate that for low-speed forward flight, the lifting line model is sufficient for modeling purposes. As the speed of the rotor increases, unsteady effects become more prominent in the system, and therefore the lifting surface model becomes more necessary. When conducting a chord optimization on the rotor, hysteresis effects and local minima are observed in the non-linear optimization. The global minimum within the set of captured local minima can be found through simple data visualization, and the global minimum is confirmed to exhibit behavior similar to the lifting line results: a large spike in induced power at a critical advance ratio, with a sharp decline in induced power as the rotor flies faster. Within the realm of practical forward flight speeds of a conventional rotor, smooth, continuous results are demonstrated.
Item Open Access Aeroelastic Modeling of Blade Vibration and its Effect on the Trim and Optimal Performance of Helicopter Rotors using a Harmonic Balance Approach (2020) Tedesco, Matthew
This dissertation concerns the optimization of the aeroelastic performance of conventional helicopter rotors, considering various design variables such as cyclic and higher harmonic controls. A finite element model is introduced to model the structural effects of the blade, and a coupled induced velocity/projected force model is used to couple this structural model to the aerodynamic model constructed in previous works. The system is then optimized using two separate objective functions: minimum power and minimum vibrational loading at the hub. The model is validated against several theoretical and experimental models, and good agreement is demonstrated in each case. Results of the rotor in forward flight demonstrate that, for realistic advance ratios, the original lifting surface model is sufficient for modeling normalized induced power. Through use of the dynamics model, the vibrational loading minimization is shown to be extremely significant, especially when using additional higher harmonic control. However, this decrease comes at an extreme cost to performance in the form of the normalized induced power nearly doubling. More realistic scenarios can be created using multi-objective optimization, where it is shown that vibrational loading can be decreased by around 60% for a 5% increase in power.
Item Open Access Analysis of High-Temperature Solar Selective Coating (2018) Xiao, Qingyu
Abundant and widely available solar energy is one possible solution to the increasing demand for clean energy. The Thermodynamics and Sustainable Energy Laboratory (T-SEL) at Duke University has been dedicated to investigating methods to harness solar energy. The Hybrid Solar System (HSS) is one promising method of using solar energy, as it absorbs sunlight to produce hydrogen, which can then electrically power equipment through fuel cells. Hydrogen is produced through a biofuel reforming process, which occurs at a high temperature (over 700℃ for methane). Methods to design a high-temperature solar selective coating are investigated in this thesis.
The solar irradiance spectrum was assumed to be the same as Air Mass (AM) 1.5. A transfer-matrix method was adopted in this work to calculate the optical properties of NREL #6, a nine-layer solar selective coating design. Based on the NREL #6 coating design, a Differential Evolution (DE) algorithm was introduced to optimize this design. Two objective functions were considered: a selectivity-oriented function and an efficiency-oriented function, yielding the designs of Revision #1 and Revision #2, respectively. The results showed a high selectivity (around 13) with low efficiency (66.6%) for Revision #1 and a high efficiency (82.6%) with moderate selectivity (around 9) for Revision #2.
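For readers unfamiliar with DE, a minimal sketch of this kind of optimization is shown below using SciPy's differential_evolution; the spectral_performance objective, layer bounds, and settings are placeholders for the thesis's transfer-matrix calculation over the AM1.5 spectrum, not the actual model.

```python
# Illustrative sketch only: optimizing nine layer thicknesses with differential evolution.
# spectral_performance() is a stand-in for the transfer-matrix absorptance/emittance model.
import numpy as np
from scipy.optimize import differential_evolution

def spectral_performance(thicknesses_nm):
    # Placeholder objective: a real workflow would evaluate the coating's selectivity
    # or photothermal efficiency over the AM1.5 spectrum and return its negative.
    target = np.linspace(20.0, 200.0, thicknesses_nm.size)
    return float(np.sum((thicknesses_nm - target) ** 2))

bounds = [(5.0, 300.0)] * 9   # assumed allowable thickness range (nm) for each of nine layers

result = differential_evolution(spectral_performance, bounds, seed=1, maxiter=200, polish=True)
print(result.x)    # optimized layer thicknesses
print(result.fun)  # objective value at the optimum
```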
Item Open Access Attack Countermeasure Trees: A Non-state-space Approach Towards Analyzing Security and Finding Optimal Countermeasure Set (2010) Roy, Arpan
Attack tree (AT) is one of the widely used non-state-space models in security analysis. The basic formalism of AT does not take into account defense mechanisms. Defense trees (DTs) have been developed to investigate the effect of defense mechanisms using measures such as attack cost, security investment cost, return on attack (ROA), and return on investment (ROI). DT, however, places defense mechanisms only at the leaf nodes, and the corresponding ROI/ROA analysis does not incorporate the probabilities of attack. In attack response tree (ART), attack and response are both captured, but ART suffers from the problem of state-space explosion, since the solution of ART is obtained by means of a state-space model. In this paper, we present a novel attack tree paradigm called attack countermeasure tree (ACT) which avoids the generation and solution of the state-space model and takes into account attacks as well as countermeasures (in the form of detection and mitigation events). In ACT, detection and mitigation are allowed not just at the leaf nodes but also at the intermediate nodes, while at the same time the state-space explosion problem is avoided in its analysis. We use single and multi-objective optimization to find optimal countermeasures under different constraints. We illustrate the features of ACT using several case studies.
Item Open Access Beam Optimization for Whole Breast Radiation Therapy Planning (2018) Wang, Wentao
Purpose: To develop an automated program that can generate the optimal beams for whole breast radiation therapy (WBRT).
Methods and Materials: A total of twenty patients receiving WBRT were included in this study. The computed tomography (CT) simulation images and structures of all 20 patients were used to develop and validate the program. All patients had the breast planning target volume (PTV) contour drawn by physicians and radio-opaque catheters placed on the skin during CT simulation. First, an initial beam was calculated based on the CT images, the radio-opaque catheters, and the breast PTV contour. The beam includes five main parameters: the gantry angles, the isocenter location, the field size, the collimator angles, and the initial multi-leaf collimator (MLC) shape.
To optimize the beam parameters, a geometry-based objective function was constructed to optimize target coverage and organ-at-risk (OAR) sparing. The objective function is the weighted sum of the squares of the relative volume of the PTV outside the field and the relative volume of the ipsilateral lung inside the field. Due to the curvature of the chest wall, a portion of the ipsilateral lung will be included in the irradiated volume. The balance between PTV coverage and OAR sparing is embodied by the relative weight of the lung volume in the objective function, which was trained and validated on the clinical plans of the twenty patients. Two different optimization schemes were developed to minimize the objective function: an exhaustive search and a local search. The search was conducted on a 2-dimensional grid with the gantry angle (1° increments) and the isocenter location (1 mm increments) as the two axes and the initial beam as the origin point. For the exhaustive search, the ranges of the gantry angle and the isocenter location are ±12° and ±21 mm. The local search does not require a search range. The beam with the minimal objective function value in the grid is considered optimal. The optimal beam was transferred to an in-house automatic fluence optimization program developed specifically for WBRT. The automatic plans were compared with the manually generated clinical plans for target coverage, dose conformity and homogeneity, and OAR dose.
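A hedged sketch of the search described above is given below: it minimizes a weighted sum of squared relative volumes over the stated grid of gantry angles (±12°, 1° steps) and isocenter shifts (±21 mm, 1 mm steps); the geometry functions and weight are illustrative stand-ins, not the program's actual volume calculations.

```python
# Illustrative sketch of the geometry-based objective and exhaustive grid search.
# The two "volume fraction" functions are placeholders for projecting the PTV and
# ipsilateral lung contours through the candidate beam aperture.
import itertools
import math

LUNG_WEIGHT = 0.5  # illustrative relative weight; in practice trained on clinical plans

def ptv_outside_fraction(gantry_deg, iso_shift_mm):
    return abs(math.sin(math.radians(gantry_deg))) * 0.05 + abs(iso_shift_mm) * 0.001

def lung_inside_fraction(gantry_deg, iso_shift_mm):
    return abs(math.cos(math.radians(gantry_deg))) * 0.10 + abs(iso_shift_mm) * 0.002

def objective(gantry_deg, iso_shift_mm):
    return (ptv_outside_fraction(gantry_deg, iso_shift_mm) ** 2
            + LUNG_WEIGHT * lung_inside_fraction(gantry_deg, iso_shift_mm) ** 2)

def exhaustive_search(gantry0=50.0, iso0=0.0):
    # Grid centered on the initial beam: +/-12 deg in 1 deg steps, +/-21 mm in 1 mm steps.
    grid = itertools.product(range(-12, 13), range(-21, 22))
    return min(((gantry0 + dg, iso0 + di) for dg, di in grid),
               key=lambda p: objective(*p))

print(exhaustive_search())
```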
Results: The calculation time of the beam optimization was under one minute for all cases. The local search (~15 s) took less time than the exhaustive search (~45 s), and the two methods produced the same result for the same patient. The automatic plans have overall comparable plan quality to the clinical plans, which usually take 1 to 4 hours to create. Generally, the PTV coverage is improved while the dose to the ipsilateral lung and the heart is similar. The breast PTV Eval V95% is above 95% for all cases, and the mean V95% (97.7%) is increased compared with the clinical plans (96.8%). The ipsilateral lung V16Gy is reduced for 14 out of 20 cases, and the mean V16Gy is decreased in the automatic plans (12.6% vs. 13.6%). The average heart mean dose is slightly increased in the automatic plans (2.06% vs. 1.99%).
Conclusion: Optimal beams for WBRT can be automatically generated in one minute given the patient’s simulation CT images and structures. The automated beam setup program offers a valuable tool for WBRT planning, as it provides clinically relevant solutions based on previous clinical practice as well as patient specific anatomy.
Item Open Access CFD Optimization of Small Gas Ejectors Used in Navy Diving Systems (2012) Cornman, Jacob Kenneth
Optimization of small gas ejectors is typically completed by selecting a single set of operating conditions and optimizing the geometry for the specified conditions. The U.S. Navy is interested in utilizing a small gas ejector design in multiple diving systems with varying operational conditions. This thesis is directed at developing a Quasi Newton-Raphson Multivariate Optimization method using Computational Fluid Dynamics (CFD) to evaluate finite difference approximations. These approximations are then used as inputs to the gradient vector and the Hessian matrix of the standard Newton-Raphson multivariate optimization method. This optimization method was shown to be timely enough for use in the design phase of a multiple-parameter system.
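The following sketch (assumptions noted in comments) shows the generic structure of such a quasi Newton-Raphson update, with finite-difference approximations feeding the gradient vector and Hessian matrix; in the thesis each cost evaluation would be a CFD run, whereas here a simple analytic placeholder is used.

```python
# Minimal sketch, not the thesis code: Newton-Raphson steps built from
# finite-difference gradient and Hessian estimates of a placeholder cost function.
import numpy as np

def cost(x):
    # Placeholder for the CFD-evaluated ejector cost (e.g., negative efficiency).
    return (x[0] - 1.5) ** 2 + 2.0 * (x[1] - 0.7) ** 2 + 0.5 * x[0] * x[1]

def fd_gradient(f, x, h=1e-4):
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)      # central difference
    return g

def fd_hessian(f, x, h=1e-3):
    n = x.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4 * h * h)
    return H

x = np.array([1.0, 1.0])
for _ in range(10):  # quasi Newton-Raphson iterations
    step = np.linalg.solve(fd_hessian(cost, x), fd_gradient(cost, x))
    x = x - step
print(x, cost(x))
```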
CFD investigation of the level curves of the simulation cost function hypersurface verified that the presented method successfully optimized each independent parameter. Additional CFD simulations were used to investigate the ejector performance for operational conditions deviating from those used during optimization. A correlation was developed for selecting the optimum throat diameter, and the corresponding maximum efficiency, as functions of the input conditions only. Experimental models were manufactured using fused deposition modeling and evaluated with good agreement with the CFD simulation results.
Item Open Access Charged Particle Optics Simulations and Optimizations for Miniature Mass and Energy Spectrometers (2021) DiDona, Shane
Computer simulation and modeling is a powerful tool for the analysis of physical systems; in this work we consider the use of computer modeling and optimization to improve the focusing properties of a variety of charged particle optics systems. The combined use of several software packages and custom computer code allows for modeling electrostatic and magnetostatic fields and the trajectories of particles through them. Several applications of this functionality are shown. The code presented forms the starting point of an integrated charged particle simulation and optimization tool with a focus on optimization. The applications shown are mass spectrographs and electron energy spectrographs. Simulation allowed additional information about the systems in question to be determined. In this work, coded apertures are shown to be compatible with sector instruments, though architectural challenges exist. Next, simulation allowed for the discovery of a new class of mass spectrograph which addresses these challenges and is compatible with computational sensing, allowing for both high resolution and high sensitivity, with a 1.8x improvement in spot size. Finally, a portion of this new spectrograph was adapted for use as an electron energy spectrograph, with a resolution 9.1x and an energy bandwidth 2.1x that of traditional instruments.
Item Open Access Computational Optical Imaging Systems: Sensing Strategies, Optimization Methods, and Performance Bounds (2012) Harmany, Zachary Taylor
The emerging theory of compressed sensing has been nothing short of a revolution in signal processing, challenging some of the longest-held ideas in signal processing and leading to the development of exciting new ways to capture and reconstruct signals and images. Although the theoretical promises of compressed sensing are manifold, its implementation in many practical applications has lagged behind the associated theoretical development. Our goal is to elevate compressed sensing from an interesting theoretical discussion to a feasible alternative to conventional imaging, a significant challenge and an exciting topic for research in signal processing. When applied to imaging, compressed sensing can be thought of as a particular case of computational imaging, which unites the design of both the sensing and reconstruction of images under one design paradigm. Computational imaging tightly fuses modeling of scene content, imaging hardware design, and the subsequent reconstruction algorithms used to recover the images.
This thesis makes important contributions to each of these three areas through two primary research directions. The first direction primarily attacks the challenges associated with designing practical imaging systems that implement incoherent measurements. Our proposed snapshot imaging architecture using compressive coded aperture imaging devices can be practically implemented, and comes equipped with theoretical recovery guarantees. It is also straightforward to extend these ideas to a video setting where careful modeling of the scene can allow for joint spatio-temporal compressive sensing. The second direction develops a host of new computational tools for photon-limited inverse problems. These situations arise with increasing frequency in modern imaging applications as we seek to drive down image acquisition times, limit excitation powers, or deliver less radiation to a patient. By an accurate statistical characterization of the measurement process in optical systems, including the inherent Poisson noise associated with photon detection, our class of algorithms is able to deliver high-fidelity images with a fraction of the required scan time, as well as enable novel methods for tissue quantification from intraoperative microendoscopy data. In short, the contributions of this dissertation are diverse, further the state-of-the-art in computational imaging, elevate compressed sensing from an interesting theory to a practical imaging methodology, and allow for effective image recovery in light-starved applications.
Item Embargo Computational Tools to Improve Stereo-EEG Implantation and Resection Surgery for Patients with Epilepsy (2024) Thio, Brandon
Approximately 1 million Americans live with drug-resistant epilepsy. Surgical resection of the brain areas where seizures originate can be curative. However, successful surgical outcomes require delineation of the epileptogenic zone (EZ), the minimum amount of tissue that needs to be resected to eliminate a patient’s seizures. EZ localization is often accomplished using stereo-EEG, where 5-30 wires are implanted into the brain through small holes drilled through the skull to map widespread regions of the epileptic network. However, despite the technical advances in surgical planning and epilepsy monitoring, seizure freedom rates following epilepsy surgery have remained at ~60% for decades. In part, seizure freedom rates have not increased because epilepsy neurologists do not have appropriate software tools to optimize stereo-EEG. In this dissertation, we report on the development and analysis of foundational models and software tools to improve the use of stereo-EEG technology and ultimately increase seizure-freedom rates following epilepsy surgery. We developed an automated image-based head-modeling pipeline to generate patient-specific models for stereo-EEG analysis. We assessed the key dipole source model assumption, which assumes that voltages generated by a population of active neurons can be simplified to a single dipole. We found that the dipole source model is appropriate to reproduce the spatial voltage distribution generated by neurons and for source localization applications. Our findings validate a key model parameter for stereo-EEG head-models, which are foundational to all computational tools developed to optimize stereo-EEG. Using the dipole source model, we systematically assessed the origin of recorded brain electrophysiological signals using computational models. We found that, counter to dogma, action potentials contribute appreciably to brain electrophysiological signals. Our findings reshape the cellular interpretation of brain electrophysiological signals and should impact modeling efforts to reproduce neural recordings. We also developed a recording sensitivity metric, which quantifies the cortical areas that are recordable by a set of stereo-EEG electrodes. We used the recording sensitivity metric to develop two software tools to visualize the recording sensitivity on patient-specific brain geometry and to optimize the trajectories of stereo-EEG electrodes. Using the same number of electrodes, our optimization approach identified trajectories that had greater recording sensitivity than clinician-defined trajectories. Using the same target recording sensitivity, our optimization approach found trajectories that mapped the same amount of cortex with fewer electrodes compared to the clinician-defined trajectories. Thus, our optimization approach can improve the outcomes following epilepsy surgery by increasing the chances that an electrode records from the EZ or reduce the risk of surgery by minimizing the number of necessary implanted electrodes. We finally developed a propagating source reconstruction algorithm using a novel TEmporally Dependent Iterative Expansion approach (TEDIE).
TEDIE takes as inputs stereo-EEG recordings and patient-specific anatomical images, produces movies of dynamic (moving) neural activity displayed on patient-specific anatomy, and distills the immense intracranial stereo-EEG dataset into an objective reconstruction of the EZ. We validated TEDIE using seizure recordings from 40 patients from two centers. TEDIE consistently localized the EZ closer to the resected regions for patients who are currently seizure-free. Further, TEDIE identified new EZs in 13 of the 23 patients who are currently not seizure-free. Therefore, TEDIE is expected to improve the accuracy of the evaluation of surgical epilepsy candidates, result in increased numbers of patients advancing to surgery, and increase the proportion of patients who achieve seizure freedom through surgery. Together, our suite of software tools constitute important advances to optimize stereo-EEG implantation and analysis, which should lead to more patients achieving seizure freedom following epilepsy surgery.
Item Open Access Continuous-Time Models of Arrival Times and Optimization Methods for Variable Selection (2018) Lindon, Michael Scott
This thesis naturally divides itself into two sections. The first two chapters concern the development of Bayesian semi-parametric models for arrival times. Chapter 2 considers Bayesian inference for a Gaussian process modulated temporal inhomogeneous Poisson point process, made challenging by an intractable likelihood. The intractable likelihood is circumvented by two novel data augmentation strategies which result in Gaussian measurements of the Gaussian process, connecting the model with a larger literature on modelling time-dependent functions, from Bayesian non-parametric regression to time series. A scalable state-space representation of the Matern Gaussian process in 1 dimension is used to provide access to linear-time filtering algorithms for performing inference. An MCMC algorithm based on Gibbs sampling with slice-sampling steps is provided and illustrated on simulated and real datasets. The MCMC algorithm exhibits excellent mixing and scalability.
Chapter 3 builds on the previous model to detect specific signals in temporal point patterns arising in neuroscience. The firing of a neuron over time in response to an external stimulus generates a temporal point pattern or "spike train". Of special interest is how neurons encode information from dual simultaneous external stimuli. Among many hypotheses is the presence of multiplexing: interleaving periods of firing as the neuron would for each individual stimulus in isolation. Statistical models are developed to quantify evidence for a variety of experimental hypotheses. Each experimental hypothesis translates to a particular form of intensity function for the dual stimuli trials. The dual stimuli intensity is modelled as a dynamic superposition of single stimulus intensities, defined by a time-dependent weight function that is modelled non-parametrically as a transformed Gaussian process. Experiments on simulated data demonstrate that the model is able to learn the weight function very well, but learns other model parameters, which have meaningful physical interpretations, less well.
Chapters 4 and 5 concern mathematical optimization and theoretical properties of Bayesian models for variable selection. Such optimizations are challenging due to non-convexity, non-smoothness and discontinuity of the objective. Chapter 4 presents advances in continuous optimization algorithms based on relating mathematical and statistical approaches defined in connection with several iterative algorithms for penalized linear regression. I demonstrate the equivalence of parameter mappings using EM under several data augmentation strategies - location-mixture representations, orthogonal data augmentation and LQ design matrix decompositions. I show that these model-based approaches are equivalent to algorithmic derivation via proximal gradient methods. This provides new perspectives on model-based and algorithmic approaches, connects across several research themes in optimization and statistics, and provides access, beyond EM, to relevant theory from the proximal gradient and convex analysis literatures.
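As a concrete reference point for the proximal-gradient connection mentioned above (not code from the thesis), the sketch below implements ISTA for the lasso: a gradient step on the least-squares term followed by soft-thresholding, the proximal operator of the l1 penalty.

```python
# Minimal ISTA sketch for penalized linear regression (lasso); data are simulated.
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(X, y, lam, n_iter=500):
    n, p = X.shape
    beta = np.zeros(p)
    step = 1.0 / np.linalg.norm(X, 2) ** 2          # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y)                 # gradient of the least-squares term
        beta = soft_threshold(beta - step * grad, step * lam)  # proximal (l1) step
    return beta

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
beta_true = np.zeros(10); beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + 0.1 * rng.standard_normal(100)
print(np.round(ista(X, y, lam=5.0), 2))
```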
Chapter 5 presents a modern and technologically up-to-date approach to discrete optimization for variable selection models through their formulation as mixed integer programming models. Mixed integer quadratic and quadratically constrained programs are developed for the point-mass-Laplace and g-prior. Combined with warm-starts and optimality-based bounds tightening procedures provided by the heuristics of the previous chapter, the MIQP model developed for the point-mass-Laplace prior converges to global optimality in a matter of seconds for moderately sized real datasets. The obtained estimator is demonstrated to possess superior predictive performance over that obtained by cross-validated lasso in a number of real datasets. The MIQCP model for the g-prior struggles to match the performance of the former and highlights the fact that the performance of the mixed integer solver depends critically on the ability of the prior to rapidly concentrate posterior mass on good models.
Item Open Access Control of Vibratory Energy Harvesters in the Presence of Nonlinearities and Power-Flow Constraints (2012) Cassidy, Ian Lerner
Over the past decade, a significant amount of research activity has been devoted to developing electromechanical systems that can convert ambient mechanical vibrations into usable electric power. Such systems, referred to as vibratory energy harvesters, have a number of useful applications, ranging in scale from self-powered wireless sensors for structural health monitoring in bridges and buildings to energy harvesting from ocean waves. One of the most challenging aspects of this technology concerns the efficient extraction and transmission of power from transducer to storage. Maximizing the rate of power extraction from vibratory energy harvesters is further complicated by the stochastic nature of the disturbance. The primary purpose of this dissertation is to develop feedback control algorithms which optimize the average power generated from stochastically excited vibratory energy harvesters.
This dissertation will illustrate the performance of various controllers using two vibratory energy harvesting systems: an electromagnetic transducer embedded within a flexible structure, and a piezoelectric bimorph cantilever beam. Compared with piezoelectric systems, large-scale electromagnetic systems have received much less attention in the literature despite their ability to generate power at the watt-to-kilowatt scale. Motivated by this observation, the first part of this dissertation focuses on developing an experimentally validated predictive model of an actively controlled electromagnetic transducer. Following this experimental analysis, linear-quadratic-Gaussian control theory is used to compute unconstrained state feedback controllers for two ideal vibratory energy harvesting systems. This theory is then augmented to account for competing objectives, nonlinearities in the harvester dynamics, and non-quadratic transmission loss models in the electronics.
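A minimal sketch of the LQG-style gain computation referenced above is shown below, assuming a generic two-state oscillator in place of the dissertation's identified harvester models; the matrices and penalties are illustrative only.

```python
# Illustrative sketch: optimal state-feedback gain from the continuous-time
# algebraic Riccati equation for a generic oscillator (stand-in for a harvester).
import numpy as np
from scipy.linalg import solve_continuous_are

# x = [displacement, velocity]; u = transducer force (control input)
A = np.array([[0.0, 1.0],
              [-100.0, -0.5]])   # assumed stiffness/mass = 100, light damping
B = np.array([[0.0],
              [1.0]])
Q = np.diag([1.0, 0.1])          # state penalty (proxy for competing objectives)
R = np.array([[0.01]])           # control-effort / transmission-loss penalty

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)  # optimal state-feedback gain, u = -K x
print(K)
```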
In many vibratory energy harvesting applications, employing a bi-directional power electronic drive to actively control the harvester is infeasible due to the high levels of parasitic power required to operate the drive. For the case where a single-directional drive is used, a constraint on the directionality of power-flow is imposed on the system, which necessitates the use of nonlinear feedback. As such, a sub-optimal controller for power-flow-constrained vibratory energy harvesters is presented, which is analytically guaranteed to outperform the optimal static admittance controller. Finally, the last section of this dissertation explores a numerical approach to compute optimal discretized control manifolds for systems with power-flow constraints. Unlike the sub-optimal nonlinear controller, the numerical controller satisfies the necessary conditions for optimality by solving the stochastic Hamilton-Jacobi equation.
Item Open Access Cumulon: Simplified Matrix-Based Data Analytics in the Cloud (2016) Huang, Botong
Cumulon is a system aimed at simplifying the development and deployment of statistical analysis of big data in public clouds. Cumulon allows users to program in their familiar language of matrices and linear algebra, without worrying about how to map data and computation to specific hardware and cloud software platforms. Given user-specified requirements in terms of time, monetary cost, and risk tolerance, Cumulon automatically makes intelligent decisions on implementation alternatives, execution parameters, as well as hardware provisioning and configuration settings, such as what type of machines and how many of them to acquire. Cumulon also supports clouds with auction-based markets: it effectively utilizes computing resources whose availability varies according to market conditions, and suggests best bidding strategies for them. Cumulon explores two alternative approaches toward supporting such markets, with different trade-offs between system and optimization complexity. Experimental study is conducted to show the efficiency of Cumulon's execution engine, as well as the optimizer's effectiveness in finding the optimal plan in the vast plan space.
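As a loose illustration of this kind of provisioning decision (not Cumulon's actual optimizer or cost model), the sketch below enumerates hypothetical machine types and cluster sizes, keeps configurations whose predicted completion time meets a deadline, and picks the cheapest.

```python
# Illustrative provisioning sketch with made-up prices, speeds, and a toy cost model.
MACHINE_TYPES = {                       # hypothetical hourly prices and relative speeds
    "small": {"price_per_hour": 0.10, "speedup": 1.0},
    "large": {"price_per_hour": 0.35, "speedup": 4.0},
}
WORK_HOURS_ON_ONE_SMALL = 120.0         # assumed total work, measured on one "small" machine
STARTUP_OVERHEAD_HOURS = 0.2            # assumed per-run provisioning overhead
DEADLINE_HOURS = 10.0                   # user-specified completion-time requirement

def predicted_runtime(machine, count):
    spec = MACHINE_TYPES[machine]
    return WORK_HOURS_ON_ONE_SMALL / (spec["speedup"] * count) + STARTUP_OVERHEAD_HOURS

def predicted_cost(machine, count):
    return predicted_runtime(machine, count) * MACHINE_TYPES[machine]["price_per_hour"] * count

feasible = [(m, n) for m in MACHINE_TYPES for n in range(1, 65)
            if predicted_runtime(m, n) <= DEADLINE_HOURS]
best = min(feasible, key=lambda cfg: predicted_cost(*cfg))
print(best, round(predicted_cost(*best), 2))
```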
Item Open Access Design, Optimization, and Test Methods for Micro-Electrode-Dot-Array Digital Microfluidic Biochips (2017) Li, Zipeng
Digital microfluidic biochips (DMFBs) are revolutionizing many biochemical analysis procedures, e.g., high-throughput DNA sequencing and point-of-care clinical diagnosis. However, today's DMFBs suffer from several limitations: (1) constraints on droplet size and the inability to vary droplet volume in a fine-grained manner; (2) the lack of integrated sensors for real-time detection; (3) the need for special fabrication processes and the associated reliability/yield concerns.
To overcome the above limitations, DMFBs based on a micro-electrode-dot-array (MEDA) architecture have recently been proposed. Unlike conventional digital microfluidics, where electrodes of equal size are arranged in a regular pattern, the MEDA architecture is based on the concept of a sea-of-micro-electrodes. The MEDA architecture allows microelectrodes to be dynamically grouped to form a micro-component that can perform different microfluidic operations on the chip.
Design-automation tools can reduce the difficulty of MEDA biochip design and help to ensure that the manufactured biochips are versatile and reliable. In order to fully exploit MEDA-specific advantages (e.g., real-time droplet sensing), new design, optimization, and test problems are tackled in this dissertation.
The dissertation first presents a droplet-size aware synthesis approach that can configure the target bioassay on a MEDA biochip. The proposed synthesis method targets reservoir placement, operation scheduling, module placement, and routing of droplets of various sizes. An analytical model for droplet velocity is proposed and experimentally validated using a fabricated MEDA chip.
Next, this dissertation presents an efficient error-recovery strategy to ensure the correctness of assays executed on MEDA biochips. By exploiting MEDA-specific advances in droplet sensing, the dissertation presents a novel probabilistic timed automata (PTA)-based error-recovery technique to dynamically reconfigure the biochip using real-time data provided by on-chip sensors. An on-line synthesis technique and a control flow are also proposed to connect local-recovery procedures with global error recovery for the complete bioassay.
A potentially important application of MEDA biochips lies in sample preparation via a series of dilution steps. Sample preparation in digital microfluidic biochips refers to the generation of droplets with target concentrations for on-chip biochemical applications. The dissertation presents the first droplet size-aware and error-correcting sample-preparation method for MEDA biochips. In contrast to previous methods, the proposed approach considers droplet sizes and incorporates various mixing models in sample preparation.
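For orientation, the sketch below shows the classic fixed-ratio (1:1) mixing scheme that many droplet sample-preparation methods build on, approximating a target concentration to a given bit precision; the dissertation's droplet-size-aware, error-correcting method goes beyond this and is not reproduced here.

```python
# Bit-serial 1:1 mixing sketch: each step mixes the current droplet with either
# pure sample (1) or pure buffer (0), processing the target's binary digits LSB-first.
def one_to_one_mix_sequence(target, bits=6):
    """Return the per-step reagent choices (1 = sample, 0 = buffer) and the
    concentration achieved after the sequence of 1:1 mixes."""
    assert 0.0 <= target < 1.0
    scaled = round(target * (1 << bits))
    choices = [(scaled >> i) & 1 for i in range(bits)]   # binary digits, LSB first
    conc = 0.0                                           # start from pure buffer
    for bit in choices:
        conc = (conc + bit) / 2.0                        # 1:1 mix with sample (1.0) or buffer (0.0)
    return choices, conc

choices, achieved = one_to_one_mix_sequence(0.3, bits=6)
print(choices, achieved)   # achieved concentration equals the target rounded to 1/2**bits
```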
In order to ensure high confidence in the outcome of biochemical experiments, MEDA biochips must be adequately tested before they can be used for bioassay execution. The dissertation presents efficient structural and functional test techniques for MEDA biochips. The proposed structural test techniques can effectively detect defects and identify faulty microcells, and the proposed functional test techniques address fundamental fluidic operations on MEDA biochips.
In summary, the dissertation tackles important problems related to key stages of MEDA chip design and usage. The results emerging from this dissertation provide the first set of comprehensive design-automation solutions for MEDA biochips. It is anticipated that MEDA chip users will also benefit from these optimization methods.
Item Open Access Efficient Design of Electricity Market Clearing Mechanisms with Increasing Levels of Renewable Generation and Carbon Price (2017) Daraeepour, Ali
Increased use of wind energy in electricity systems can help reduce greenhouse gas emissions and enhance energy security. However, the traditional scheduling and dispatching processes used to ensure the cost-effective and reliable supply of electricity in wholesale energy and ancillary service markets are not designed to deal with wind production uncertainty and variability. The growing variability and uncertainty of wind resources misinform the scheduling and dispatching processes and ultimately cause economic and environmental inefficiencies. Various approaches have been proposed to integrate wind uncertainty and variability into the electricity market clearing processes and enhance their economic and environmental efficiency. This dissertation develops a framework that enables quantifying the inefficiencies caused by wind uncertainty and assessing the economic and environmental efficiency that could be gained by integrating the uncertainty into the market clearing design.
To assess the potential inefficiencies posed by wind uncertainty, three objectives are addressed. (1) Elucidate the incentives that wind uncertainty might create for electricity markets’ demand-side participants to develop market manipulation strategies and determine the factors that might contribute to or mitigate such market power. (2) Estimate the economic and environmental costs of wind uncertainty and the improvements that could be achieved by various approaches for integrating the wind uncertainty into the market clearing design. (3) Investigate how CO2 pricing policies that affect the priority order of generators in the supply curve and the grid’s overall flexibility impact the uncertainty costs and the improvements that could be achieved by integrating the uncertainty into the market clearing design.
First, in order to highlight the opportunities that wind uncertainty creates for the demand-side strategic behavior, this paper explores the effects of allowing large, price-responsive consumers to provide reserves in a power system with significant penetration of wind energy when the market is cleared using stochastic market clearing (SMC). The problem is formulated as a bilevel optimization problem representing a Stackelberg game between the large consumer and the other market participants. The study highlights how a large price-responsive consumer takes advantage of the wind uncertainty and leverages its ramp reserve deployment capability to understate its demand in the day-ahead market (DAM) and reduce the overall day-ahead (DA) and real-time (RT) prices to minimize the total daily cost of purchasing electricity in the DA and RT markets. The study also reveals how wind uncertainty, reserve deployment capacity, and transmission congestion contribute to the market power of large consumers that should be limited to mitigate their market power.
Next, to estimate the economic and environmental inefficiencies of the wind uncertainty, a framework is developed that replicates the operation of wholesale energy market clearing under the traditional design and adjusted designs that indirectly or directly integrate the uncertainty into the market clearing mechanisms. The indirect integration, referred to as augmented deterministic design, maintains the deterministic nature of market clearing mechanisms, i.e., DA unit commitment (DAUC) and economic dispatch (DAED), and deals with the uncertainty through scheduling ramp capability requirements, which are quantified exogenously to the market clearing processes based on the wind uncertainty characterization. The direct integration requires transition to the stochastic market clearing design in which stochastic optimization models are used for direct integration of the wind uncertainty characterization in the DAUC and DAED processes. The stochastic design allows endogenous quantification of the ramp capability requirements and optimizes energy and ramp capability reserve schedules by accounting for the expected cost of recourse actions taken to reconcile the RT balance mismatch caused by the deviation of wind energy producers from their DA production schedules.
The proposed framework resolves the differences of adjusted market clearing designs in terms of pricing, settlement, and reliability management to ensure a fair comparison of their dispatch, economic, and environmental outcomes. The comparative analysis reveals that the augmented deterministic and the stochastic designs enhance the economic and environmental outcomes, yet the stochastic design is superior as it offers more efficient and flexible energy and reserve schedules that are well coordinated with the anticipation of RT wind energy realizations. As a result, the stochastic design’s schedules can be adjusted more conveniently and cost-effectively to reconcile the deviations leading to greater operation and startup fuel cost savings; lower cycling of slow producers, higher wind integration and finally lower air emissions. Furthermore, stochastic design offers more efficient prices that reflect the system’s operation costs and wind uncertainty more effectively, provide greater remuneration of operational flexibility by producers, and reduce the revenue sufficiency guarantee payments that collectively improve the social surplus to a higher extent with respect to the augmented deterministic design.
Lastly, the developed market simulation framework is extended to include another adjusted deterministic design, referred to as hybrid deterministic design that uses stochastic optimization for direct integration of the wind uncertainty characterization to the residual unit commitment (RUC) stage. Then the economic and environmental outcomes of alternative market clearing designs are simulated under two carbon-pricing scenarios to evaluate their sensitivity to the introduction of a carbon price that alters the merit order of generation technologies in the supply curve.
The results imply that the stochastic market clearing design is superior to all adjusted deterministic designs. With the introduction of a CO2 price, the augmented and hybrid deterministic designs lose their effectiveness due to the shift in the merit order of producers. However, the stochastic market clearing design maintains its superior performance, and its advantage over the adjusted deterministic designs increases.
Item Open Access Generalized and Scalable Optimal Sparse Decision Trees (2020) Zhong, Chudi
Decision tree optimization is notoriously difficult from a computational perspective but essential for the field of interpretable machine learning. Despite efforts over the past 40 years, only recently have optimization breakthroughs been made that allow practical algorithms to find optimal decision trees. These new techniques have the potential to trigger a paradigm shift in which it is possible to construct sparse decision trees that efficiently optimize a variety of objective functions without relying on the greedy splitting and pruning heuristics that often lead to suboptimal solutions. The contribution of this work is a general framework for decision tree optimization that addresses the two significant open problems in the area: treatment of imbalanced data and full optimization over continuous variables. We present techniques that produce optimal decision trees over a variety of objectives including F-score, AUC, and partial area under the ROC convex hull. We also introduce a scalable algorithm that produces provably optimal results in the presence of continuous variables and speeds up decision tree construction by several orders of magnitude relative to the state of the art.
Item Open Access Interpretable Almost-Matching Exactly with Instrumental Variables (2019) Liu, Yameng
We aim to create the highest possible quality of treatment-control matches for categorical data in the potential outcomes framework.
The method proposed in this work matches units on a weighted Hamming distance, taking into account the relative importance of the covariates. To match units on as many relevant variables as possible, the algorithm creates a hierarchy of covariate combinations on which to match (similar to downward closure), in the process solving an optimization problem for each unit in order to construct the optimal matches. The algorithm uses a single dynamic program to solve all of the units' optimization problems simultaneously. Notable advantages of our method over existing matching procedures are its high-quality interpretable matches, versatility in handling different data distributions that may have irrelevant variables, and ability to handle missing data by matching on as many available covariates as possible. We also adapt the matching framework, using instrumental variables (IV), to the presence of observed categorical confounding that breaks the randomness assumptions, and propose an approximate algorithm which speedily generates high-quality interpretable solutions. We show that our algorithms construct better matches than other existing methods on simulated datasets and produce interesting results in applications to crime intervention and political canvassing.
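A minimal sketch of matching on a weighted Hamming distance (not the paper's dynamic-programming algorithm) is shown below; covariate codes, weights, and the nearest-control rule are illustrative assumptions.

```python
# Illustrative weighted-Hamming matching: pair each treated unit with the control
# whose categorical covariates disagree on the least total covariate weight.
import numpy as np

def weighted_hamming(a, b, weights):
    return float(np.sum(weights * (a != b)))

def match_treated_to_controls(treated, controls, weights):
    """Return, for each treated unit, the index of its closest control."""
    matches = []
    for t in treated:
        dists = [weighted_hamming(t, c, weights) for c in controls]
        matches.append(int(np.argmin(dists)))
    return matches

weights = np.array([3.0, 1.0, 0.5])          # assumed relative covariate importance
treated = np.array([[1, 0, 2], [0, 1, 1]])   # categorical covariates coded as integers
controls = np.array([[1, 0, 1], [0, 2, 1], [1, 1, 2]])
print(match_treated_to_controls(treated, controls, weights))
```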
Item Open Access Intrinsically Disordered Protein Polymer Libraries as Tools to Understand Protein Hydrophobicity (2019) Tang, Nicholas Chen
Intrinsically disordered protein polymers (IDPPs) are repetitive biopolymers that, when enriched with prolines, glycines, and aliphatic amino acids, exhibit observable lower critical solution temperature (LCST) phase transition behavior at physiologically relevant temperatures and concentrations. This behavior is a striking feature of disordered proteins in nature, where chemical or physical stimuli lead to sharp conformational or phase transitions. Accordingly, protein-based polymers have been designed to mimic these behaviors, leading to a broad range of biotechnological applications. This work is driven by two approaches. In our science-focused approach, we developed a polymer-physics-based framework for understanding IDPP hydrophobicity using the relationship between phase transition temperature and globule surface tension. This physics-based framework has allowed us to better understand the unified contributions of chain length, concentration, temperature, and individual amino acid side chains to IDPP hydrophobicity by studying phase transition data. In our engineering-focused approach, we developed novel tools that enable the high-throughput discovery of new proteins that exhibit phase transitions, in order to increase the number of known stimuli-responsive peptide sequence motifs beyond the limits of bioinspired design. The exhaustive discovery of new proteins that exhibit phase transitions consists of gene synthesis and protein screening. We developed two key technologies that have enabled (1) the scalable synthesis of repetitive gene libraries using a novel graph-theoretic gene optimization approach (Codon Scrambling) and (2) the pooled synthesis of large complex gene libraries from libraries of oligonucleotides. Combined with pipelines for screening phase transition behavior, these technologies have enabled us to generate the diverse library of protein sequences necessary to validate our theoretical models. Finally, we developed an algorithm for the de novo design of nonrepetitive protein sequences that exhibit phase transition behavior, further broadening the sequence space of stimuli-responsive synthetic IDPPs.