CMOS and Emerging Device Circuits for Biologically-Plausible Learning
dc.contributor.advisor | Li, Hai | |
dc.contributor.author | Taylor, Brady Garland | |
dc.date.accessioned | 2025-07-02T19:03:00Z | |
dc.date.available | 2025-07-02T19:03:00Z | |
dc.date.issued | 2025 | |
dc.department | Electrical and Computer Engineering | |
dc.description.abstract | Neuromorphic computing has embraced mixed-signal computing and emerging devices for fast, efficient execution of machine learning models. In particular, edge computing, IoT, and mobile computing, where computing resources are limited, power budgets are low, and data is captured locally, are attractive application domains for mixed-signal and analog neuromorphic computing. Data can be processed quickly and at low cost before signals are digitized. However, training machine learning models for edge computing is usually completed offline by centralized compute clusters. Updating a pre-trained model with new data, or training a new model from scratch on an edge device, requires extra on-chip compute circuits and energy to digitize data and compute model updates. Training could be performed offline by sending data to a larger server, but this incurs communication costs and raises privacy concerns. Offline training methods would furthermore need to account for hardware nonidealities that could adversely affect performance when models are deployed on analog or mixed-signal hardware. Enabling on-chip learning on neuromorphic computing platforms, particularly in resource-constrained or edge applications, could mitigate concerns about data transfer energy costs, privacy, and adaptation to hardware nonidealities. Unfortunately, conventional machine learning training approaches are not synergistic with the hardware used for executing the models, and additional hardware is often necessary to propagate information backwards through neural networks and compute weight updates for on-chip training. By comparison, biological neural systems, on which neuromorphic systems are based, learn efficiently in noisy and nonideal "hardware". Taking inspiration from biology, I explore biologically-inspired learning rules implemented in mixed-signal hardware. 
Biologically-inspired learning rules are particularly synergistic with on-chip learning because of their emphasis on "local learning": using only locally available information to update a model. Furthermore, the dynamics of analog devices already used for executing machine learning models can be leveraged to efficiently compute and apply weight updates in-situ, or directly in memory, with minimal additional circuitry. While many biologically-plausible learning algorithms, particularly local learning rules, have been designed with nonvolatile memories in mind (the same memories used for executing feedforward operations), these works do little to explore practical circuit designs that implement the algorithms. Furthermore, the nonideal behaviors of nonvolatile memories are often not considered when simulating in-situ learning. In this dissertation, I explore how the dynamics of emerging nonvolatile memories can support efficient, biologically-inspired learning rules, and I design mixed-signal CMOS circuits to implement these algorithms. I first implement a plasticity-based supervised learning rule in-situ in memristive memory. I also design an analog CMOS neuron circuit for performing a local learning algorithm to train a neural network. Informed by these results regarding both the use of nonvolatile memories for learning and the feasibility of a biologically-inspired local learning algorithm in analog hardware, I incorporate resistive memory dynamics into the learning algorithms and examine the effects of device nonidealities on accuracy and performance. I design corresponding input and output neuron circuits for implementing local learning rules and interfacing with the resistive synapses. Simulating local learning with these designs under various sources of noise, variation, and quantization, I analyze which nonidealities and which aspects of local learning contribute to better or worse accuracy. 
Finally, I examine how high-quality stochastic spike-rates can be used for in-situ learning and how these spike-rates can be efficiently generated with microelectronic devices. The purpose of this research is not only to provide explicit circuit designs for biologically-plausible learning and to examine the impact of memory nonidealities on learning accuracy, but also to show researchers in the field where more work is needed: this work informs algorithm designers of potential weak points in current local learning rules and informs device engineers of which nonidealities are most concerning for in-situ learning in nonvolatile memory. | |
dc.identifier.uri | ||
dc.rights.uri | ||
dc.subject | Computer engineering | |
dc.subject | Artificial intelligence | |
dc.title | CMOS and Emerging Device Circuits for Biologically-Plausible Learning | |
dc.type | Dissertation | |
duke.embargo.months | 7 | |
duke.embargo.release | 2026-05-19 |