Prospects of In- and Near-Memory Computing for Future AI Systems
Technical seminar with the following abstract:
Future data-intensive workloads, particularly from artificial intelligence, have pushed conventional computing architectures to their limits of energy efficiency and throughput, due to the scale of both the computations and the data they involve. In- and near-memory computing are breakthrough paradigms that provide approaches for overcoming this. However, in doing so, they introduce new fundamental tradeoffs that span the device, circuit, and architectural levels. This presentation starts by describing the methods by which in/near-memory computing derive their gains, and then examines the critical tradeoffs, looking concretely at recent designs across memory technologies (SRAM, RRAM, MRAM). Its focus then turns to key architectural considerations, and how these are likely to drive future technological needs and application alignments. Finally, the presentation analyzes the potential for leveraging application-level relaxations (e.g., reduced noise sensitivity) through algorithmic approaches.
Location
- MacLeod Building, Room MCLD 3038
- 2356 Main Mall
- Vancouver, British Columbia
- Canada V6T 1Z4
Hosts
- sudip@ece.ubc.ca
Speakers
Naveen Verma of Princeton University
Biography:
Naveen Verma received the B.A.Sc. degree in Electrical and Computer Engineering from the University of British Columbia (UBC), Vancouver, Canada, in 2003, and the M.S. and Ph.D. degrees in Electrical Engineering from MIT in 2005 and 2009, respectively. Since July 2009 he has been at Princeton University, where he is currently the Ralph H. and Freda I. Augustine Professor of Electrical and Computer Engineering. His research focuses on advanced sensing and computing systems, including large-area flexible sensors, energy-efficient computing architectures and circuits, and machine-learning and statistical-signal-processing algorithms. Prof. Verma has been involved in a number of technology-transfer activities, including founding start-up companies. Most recently, he co-founded EnCharge AI, together with industry leaders in AI computing systems, to commercialize foundational technology developed in his lab. Prof. Verma has served as a Distinguished Lecturer of the IEEE Solid-State Circuits Society, and on a number of conference program committees and advisory groups. He is the recipient of numerous teaching and research awards, including, with his students, several best-paper awards.