Li, Hai
Li, Ziru
2024-06-06
2024
https://hdl.handle.net/10161/30953

Artificial intelligence (AI) algorithms play critical roles in a variety of application scenarios in our daily lives. The sizes of state-of-the-art large-scale AI models widely adopted across domains have proliferated to tens of billions of parameters. Dedicated AI hardware tailored to these data-intensive and computation-intensive algorithms consumes tremendous power due to the transmission of model parameters and massive computation. Solutions for boosting the power efficiency of AI hardware are twofold. On the one hand, continuous research effort has been devoted to the search for more efficient computing paradigms for neural networks. For instance, the bio-inspired neuromorphic computing paradigm stems from the investigation of the natural nervous system. Neuromorphic spiking neural networks (SNNs) emulate the human brain, which transmits information efficiently through spike events. On the other hand, hardware designers have been seeking architecture- and circuit-level solutions to reduce memory-access and computation costs. The processing-in-memory (PIM) paradigm, one of the promising solutions, eliminates the power and latency of data transmission by performing data operations directly within the memory.

In this dissertation, my research work on power-efficient neuromorphic designs is introduced. These neuromorphic designs harness spike-based data processing and the in-memory-computing paradigm. With the help of architecture-level techniques and dedicated circuits based on CMOS and emerging memory devices, the proposed designs achieve significant improvements in power efficiency and performance.

https://creativecommons.org/licenses/by-nc-nd/4.0/
Computer engineering; ASIC design; In-sensor-processing; Neuromorphic computing; Processing-in-memory; Spiking-neural-network; Time-to-first-spike encoding
Power-efficient Spiking Neuromorphic Designs using CMOS and Emerging Devices
Dissertation
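
To illustrate the spike-based encoding referenced in the abstract and the "time-to-first-spike encoding" keyword, the sketch below shows a minimal leaky integrate-and-fire neuron in Python. It is not taken from the dissertation; all parameter values (threshold, leak factor, step count) are assumptions chosen for demonstration. The point is that a stronger input crosses the firing threshold earlier, so information is carried in spike timing rather than in dense activation values.

```python
# Minimal sketch of time-to-first-spike (TTFS) encoding with a leaky
# integrate-and-fire (LIF) neuron. All constants are illustrative
# assumptions, not parameters from the dissertation.

def time_to_first_spike(input_current, threshold=1.0, leak=0.9,
                        dt=1.0, max_steps=100):
    """Return the time step of the first spike, or None if no spike occurs.

    Each step, the membrane potential leaks and then integrates the
    constant input current; a spike is emitted once the threshold is crossed.
    """
    v = 0.0  # membrane potential
    for t in range(max_steps):
        v = leak * v + input_current * dt  # leaky integration
        if v >= threshold:
            return t  # the first spike time encodes the input strength
    return None  # input too weak to ever reach the threshold


if __name__ == "__main__":
    # Stronger inputs spike earlier; very weak inputs never spike.
    for current in (0.05, 0.2, 0.5, 1.5):
        print(current, "->", time_to_first_spike(current))
```

In a TTFS scheme, each neuron fires at most once per inference, which is one reason spike-based designs can reduce switching activity and, together with in-memory computing, lower overall power.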