Nonlinear Energy Harvesting With Tools From Machine Learning
Energy harvesting is a process by which self-powered electronic devices scavenge ambient energy and convert it to electrical power. Traditional linear energy harvesters, which operate based on linear resonance, work well only when the excitation frequency is close to their natural frequency. While various control methods can tune a harvester's resonant frequency, they are either energy-consuming or exhibit low efficiency under multi-frequency excitations. To overcome these limitations of linear energy harvesters, researchers have recently suggested exploiting "nonlinearity" to achieve a broad-band frequency response.
Building on existing investigations of nonlinear energy harvesting, this dissertation introduces a novel class of energy harvester designs that achieve space efficiency and intentional nonlinearity through translational-to-rotational conversion. Two dynamical systems are presented: 1) vertically forced rocking elliptical disks, and 2) a non-contact magnetic transmission. Both systems realize translational-to-rotational conversion and exhibit nonlinear behaviors that benefit broad-band energy harvesting.
This dissertation also explores novel methods to overcome a key limitation of nonlinear energy harvesting -- the presence of coexisting attractors. A control method is proposed to keep a nonlinear harvesting system operating on the desired attractor. The method is based on reinforcement learning and is shown to work under various control constraints while optimizing energy consumption.
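The attractor-selection idea can be illustrated with a minimal tabular Q-learning sketch. This is not the dissertation's actual controller: the Duffing-type dynamics, the state discretization, the action set, and the reward below are all illustrative assumptions. The agent applies a small bounded force to steer a bistable oscillator toward the desired equilibrium while being penalized for control effort.

```python
import numpy as np

# Illustrative bistable (Duffing-type) oscillator: x'' = x - x^3 - c*x' + u.
# Goal: learn to hold the system near the desired attractor x* = +1.
rng = np.random.default_rng(0)
DT, DAMPING = 0.05, 0.3
ACTIONS = np.array([-0.4, 0.0, 0.4])      # bounded control forces (assumed)
BINS = np.linspace(-2.0, 2.0, 21)         # coarse state discretization

def step(state, u):
    x, v = state
    a = x - x**3 - DAMPING * v + u        # Duffing acceleration with control
    v += DT * a
    x += DT * v
    # Reward: stay near the desired attractor, penalize control effort.
    reward = -(x - 1.0) ** 2 - 0.1 * u ** 2
    return (x, v), reward

def discretize(state):
    x, v = state
    return (int(np.digitize(x, BINS)), int(np.digitize(v, BINS)))

Q = np.zeros((len(BINS) + 1, len(BINS) + 1, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.95, 0.2        # learning rate, discount, exploration

for episode in range(200):
    state = (-1.0 + 0.1 * rng.standard_normal(), 0.0)  # start in the "wrong" well
    for t in range(200):
        s = discretize(state)
        if rng.random() < eps:
            a_idx = int(rng.integers(len(ACTIONS)))    # explore
        else:
            a_idx = int(np.argmax(Q[s]))               # exploit
        nxt, r = step(state, ACTIONS[a_idx])
        s2 = discretize(nxt)
        # Standard one-step Q-learning update.
        Q[s + (a_idx,)] += alpha * (r + gamma * Q[s2].max() - Q[s + (a_idx,)])
        state = nxt
```

The effort penalty in the reward is what lets this kind of controller trade off attractor selection against actuation cost; a practical version would also have to respect hardware constraints on the force, which here appear only as the bounded action set.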
Apart from the investigations of energy harvesting, several techniques are presented to improve the efficiency of analyzing generic linear and nonlinear dynamical systems: 1) an analytical method for stroboscopically sampling general periodic functions with arbitrary frequency sweep rates, and 2) a model-free sampling method for estimating basins of attraction using hybrid active learning.
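The second technique can be sketched in a simplified form. The dissertation's hybrid active-learning scheme is more elaborate; the uncertainty measure, the black-box dynamics, and all parameter values below are assumptions for illustration. The idea is to treat the simulator as a labeling oracle (which attractor does an initial condition settle to?) and to concentrate new samples where nearby labeled points disagree, i.e. near the basin boundary.

```python
import numpy as np

rng = np.random.default_rng(1)

def label(ic, dt=0.05, steps=2000, damping=0.5):
    # Black-box oracle: simulate an unforced bistable oscillator
    # x'' = x - x^3 - c*x' from initial condition ic = (x0, v0) and
    # report which well it settles into (1 for x > 0, else 0).
    x, v = ic
    for _ in range(steps):
        v += dt * (x - x**3 - damping * v)
        x += dt * v
    return 1 if x > 0 else 0

def knn_uncertainty(candidates, X, y, k=5):
    # Disagreement among the k nearest labeled samples:
    # 0.0 = all neighbors agree, 0.5 = maximally uncertain.
    d = np.linalg.norm(candidates[:, None, :] - X[None, :, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]
    frac = y[nearest].mean(axis=1)
    return np.minimum(frac, 1.0 - frac)

# Seed with a small random design, then repeatedly query the most
# uncertain candidate, which tends to lie near the basin boundary.
X = rng.uniform(-2, 2, size=(20, 2))
y = np.array([label(ic) for ic in X])
for _ in range(40):
    cand = rng.uniform(-2, 2, size=(200, 2))
    pick = cand[np.argmax(knn_uncertainty(cand, X, y))]
    X = np.vstack([X, pick])
    y = np.append(y, label(pick))
```

Because each oracle call is a full simulation, the payoff of this kind of sampling is that far fewer simulations are spent in the interior of each basin, where labels are easy to predict, than a uniform grid would spend.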
Energy
Computer science
Active Learning
Energy Harvesting
Machine Learning
Nonlinear Dynamics
Optimal Control
Reinforcement Learning

This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License.
Rights for Collection: Duke Dissertations
Works are deposited here by their authors, and represent their research and opinions, not those of Duke University. Some materials and descriptions may include offensive content.