Our group focuses on the design of energy-efficient, low-latency, and high-accuracy communications, signal processing, and machine learning systems in nanoscale silicon process technologies. Our research spans system design, resource-constrained algorithm development, efficient architectures, integrated circuit design, and laboratory testing as a final proof-of-concept. In addition to such cross-layer optimizations, we employ a Shannon-inspired statistical computing framework, developed in our group over two decades, to push the limits of energy, latency, and accuracy. At these limits, the circuit fabric exhibits a low signal-to-noise ratio (SNR), i.e., it makes computational errors. These errors are then corrected via statistical techniques, i.e., statistical error compensation (SEC). Our research projects strive to take such a systems-to-circuits journey. Our research during 2013-17 was conducted under the SONIC umbrella; see the SONIC outcomes paper for more details. Our approach, as applied to realizing machine learning systems in silicon, is described in detail in this position paper and the accompanying presentation.

Research Projects

Resource-constrained Machine Learning Algorithms

We consider the problem of designing machine learning algorithms under stringent platform resource constraints (computational, storage, and communication). Our goal in these projects is to maximize the information extraction and decision-making capabilities of embedded platforms such as human-centric (wearables), autonomous (drones), and IoT devices. Specific topics include: minimum precision requirements of deep neural networks, principled design of reduced-complexity neural networks, learning algorithms for emerging deep in-memory and in-sensor architectures, and prototyping of these algorithms on resource-constrained platforms. See our 2017 ICML paper on obtaining analytical guarantees on the minimum precision requirements of deep neural networks.
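To build intuition for why precision is a first-order resource knob, the sketch below uniformly quantizes a weight vector at several bit widths and reports the resulting signal-to-quantization-noise ratio (SQNR), which grows by roughly 6 dB per added bit. This is an illustrative toy, not the analysis from the ICML paper; the quantizer and test vector are assumptions.

```python
import numpy as np

def quantize(x, bits):
    """Uniform symmetric quantizer: snap x onto 2**bits levels
    spanning [-max|x|, +max|x|] (illustrative, not the paper's method)."""
    scale = np.max(np.abs(x))
    step = 2 * scale / (2**bits - 1)
    return np.round(x / step) * step

rng = np.random.default_rng(0)
w = rng.standard_normal(1000)          # stand-in for a layer's weights

for bits in (2, 4, 8):
    wq = quantize(w, bits)
    sqnr_db = 10 * np.log10(np.mean(w**2) / np.mean((w - wq)**2))
    print(f"{bits}-bit weights: SQNR = {sqnr_db:.1f} dB")
```

In a real design flow, one would propagate such quantization noise through the network and pick the smallest bit width that leaves the classification accuracy within tolerance.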

Deep In-memory Architectures (DIMA)

DIMA addresses the high energy and latency costs of data movement between processor and memory by embedding analog computations deeply into the periphery of the memory core, i.e., the bitcell array (BCA). In doing so, DIMA addresses the dominant source of energy and latency costs in machine learning implementations in silicon. Silicon prototypes (three so far) of DIMA have demonstrated significant energy and throughput benefits, e.g., up to a 56X reduction in energy-delay product (EDP) over fixed-function digital counterparts, thanks to DIMA's low-swing analog processing and multi-row read per precharge cycle. DIMA is inherently a low-SNR fabric and thus invites the application of SEC to further enhance energy efficiency and latency, without any loss in decision-making accuracy. See our multifunctional DIMA prototype chip paper and the recent IEEE Spectrum article highlighting our work.
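A functional (behavioral, not circuit-level) way to picture a DIMA-style read is a dot product computed as a low-swing analog quantity with additive noise, rather than as an exact digital sum. The sketch below is purely illustrative: the swing and noise values are assumptions chosen to show the low-SNR character of the fabric, not measurements from the prototype chips.

```python
import numpy as np

rng = np.random.default_rng(1)

def dima_dot(weights, x, swing=0.1, noise_std=0.005):
    """Behavioral model of a multi-row analog read: the bitline
    develops a low-swing voltage proportional to the weight-input
    dot product, corrupted by analog noise (assumed Gaussian)."""
    ideal = weights @ x                       # exact digital reference
    v = swing * ideal / len(x)                # low-swing analog value
    v_noisy = v + rng.normal(0.0, noise_std)  # analog non-idealities
    return v_noisy * len(x) / swing           # scale back to digital units

w = rng.integers(0, 2, 64)   # binary weights stored in the bitcell array
x = rng.integers(0, 2, 64)   # binary inputs applied during the read
print(dima_dot(w, x), w @ x) # noisy analog result vs. exact dot product
```

The gap between the two printed values is the "low SNR" that SEC is designed to absorb, letting the fabric run at aggressive swing and timing without sacrificing decision accuracy.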

Enabling Beyond CMOS Systems

Can Shannon-inspired information processing help compensate for the intrinsic stochastic behavior of beyond-CMOS fabrics, and make these fabrics competitive with CMOS? In these projects, we apply SEC to enhance the energy-efficiency, latency, and accuracy trade-off when realizing inference kernels in beyond-CMOS fabrics such as spin and graphene. These projects are done in collaboration with device researchers in academia and industry. See our work on Shannon-inspired spintronics for more details.

Statistical Error Compensation (SEC)

Shannon-inspired information processing calls for the compensation of system-level errors caused by low-SNR circuit fabrics operating at the limits of energy efficiency and latency. SEC leverages the signal and noise statistics to compensate for high raw error rates (up to 80%) efficiently, i.e., with logic overheads between 5% and 20%. SEC is based on concepts from statistical inference and is therefore much more efficient than conventional approaches based on fault-tolerant computing. Here, we investigate new approaches for compensating computational errors using Shannon (communications) theory, and use these to realize efficient machine learning, communications, and signal processing systems in silicon. See our work on the design of a subthreshold ECG classifier for more details.
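One form such statistical compensation can take is a detect-and-correct scheme: a full-precision main block runs on the error-prone low-SNR fabric, while a cheap reduced-precision replica (which is error-free but coarse) serves as a statistical estimator; the main output is kept only when the two agree within a threshold. The Python sketch below is illustrative only: the multiplier kernel, injected error model, bit widths, and threshold are assumptions, not values from our chips.

```python
import numpy as np

rng = np.random.default_rng(2)

def main_block(x, y, err_rate=0.3):
    """Full-precision multiply on a low-SNR fabric: with some
    probability a large-magnitude error is injected (assumed model)."""
    z = x * y
    if rng.random() < err_rate:
        z += rng.choice([-1, 1]) * 2**15     # large MSB-type soft error
    return z

def estimator(x, y, bits=4):
    """Reduced-precision replica: error-free but coarse (drops 4 LSBs
    of each 8-bit operand before multiplying)."""
    q = lambda v: (int(v) >> bits) << bits
    return q(x) * q(y)

def detect_and_correct(x, y, threshold=2**14):
    """Keep the main output unless it disagrees with the estimator
    by more than the threshold; otherwise fall back to the estimator."""
    z_main, z_est = main_block(x, y), estimator(x, y)
    return z_main if abs(z_main - z_est) < threshold else z_est

pairs = rng.integers(0, 256, size=(1000, 2))
err_raw = [abs(main_block(x, y) - x * y) for x, y in pairs]
err_sec = [abs(detect_and_correct(x, y) - x * y) for x, y in pairs]
print(f"raw RMS error:       {np.sqrt(np.mean(np.square(err_raw))):.0f}")
print(f"corrected RMS error: {np.sqrt(np.mean(np.square(err_sec))):.0f}")
```

Even with a 30% raw error rate, the corrected output's residual error is bounded by the estimator's quantization error, which illustrates how statistics, rather than redundancy-heavy fault tolerance, can absorb a high raw error rate at low overhead.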


We gratefully acknowledge the sponsorship of our research by the National Science Foundation, the Defense Advanced Research Projects Agency, the Semiconductor Research Corporation, the Gigascale Systems Research Center, the Systems on Nanoscale Information fabriCs (SONIC) Center, Texas Instruments, Micron, IBM, GlobalFoundries, Intel Corporation, National Semiconductor, Rockwell, Analog Devices, and FutureWei Technologies.