Compute-in-Memory Designs: Trends and Prospects





  Date and Time

  • Date: 21 Apr 2022
  • Time: 05:00 PM to 06:00 PM
  • All times are (GMT-05:00) US/Eastern

  Hosts

  • bshubha@ieee.org
  • francis.x.oconnell@ieee.org

  Registration

  • Starts 07 February 2022 07:17 PM
  • Ends 21 April 2022 05:17 PM
  • All times are (GMT-05:00) US/Eastern
  • No Admission Charge


  Speakers

Dr. Jaydeep Kulkarni

Topic:

Compute-in-Memory Designs: Trends and Prospects

The unprecedented growth in Deep Neural Networks (DNN) model size has resulted in massive data movement from off-chip memory to on-chip processing cores in modern Machine Learning (ML) accelerators. Compute-In-Memory (CIM) designs performing DNN computations within memory arrays are being explored to mitigate this ‘Memory Wall’ bottleneck of latency and energy overheads. Multiple memory technologies with unique attributes are being explored to enable energy-efficient CIM designs.

This talk will present trends in recent CIM designs and highlight fundamental principles utilized for performing multi-bit Multiply-Accumulate (MAC) computations using analog and digitally-intensive approaches. The design trade-offs among bit-precision, throughput, energy efficiency, data converter overheads, and computational accuracies will be discussed. In addition, the prospects of compute-in-memory designs for applications beyond DNNs will also be presented.
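To make the multi-bit MAC idea concrete, the sketch below is a minimal Python/NumPy model (not code from the talk) of the bit-serial scheme many analog CIM macros use: activations are applied one bit-plane per cycle, each analog column sum is digitized by a limited-precision ADC, and the quantized partial sums are shift-added. The function name, bit-widths, and the uniform ADC-range model are illustrative assumptions, included only to show where data converter precision enters the computation.

import numpy as np

def cim_bitserial_mac(x, w, in_bits=4, adc_bits=5):
    """Approximate dot(x, w) for unsigned in_bits-wide inputs x and signed
    integer weights w, modeling per-bit-plane ADC quantization."""
    x = np.asarray(x, dtype=np.int64)
    w = np.asarray(w, dtype=np.int64)
    acc = 0.0
    for b in range(in_bits):                  # one input bit-plane per cycle
        x_bit = (x >> b) & 1                  # 0/1 activations on the bitlines
        col_sum = int(np.dot(x_bit, w))       # analog charge/current summation
        full_scale = int(np.abs(w).sum())     # assumed ADC range calibration
        step = max(full_scale, 1) / (2**adc_bits - 1)
        q = round(col_sum / step) * step      # ADC-quantized partial sum
        acc += q * (1 << b)                   # shift-and-add per input bit weight
    return acc

x = np.random.randint(0, 16, size=256)        # 4-bit activations
w = np.random.randint(-8, 8, size=256)        # 4-bit signed weights
print("ideal:", int(np.dot(x, w)), " CIM model:", round(cim_bitserial_mac(x, w)))

Raising adc_bits in this toy model shrinks the gap between the ideal and modeled results, which mirrors the accuracy-versus-converter-overhead trade-off discussed in the talk.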

Biography:

Jaydeep Kulkarni (S’03–M’09–SM’15) received a B.E. degree from the University of Pune, India, in 2002, an M.Tech. degree from the Indian Institute of Science (IISc), Bangalore, in 2004, and a Ph.D. degree from Purdue University in 2009. From 2009 to 2017, he worked as a Research Scientist at the Intel Circuit Research Lab in Hillsboro, OR. He is currently an assistant professor in the Department of Electrical and Computer Engineering at the University of Texas at Austin, where he holds the Fellow of Silicon Labs Chair in Electrical Engineering and the Fellow of AMD Chair in Computer Engineering.

Dr. Kulkarni has filed 36 patents and published two book chapters and more than 100 papers in refereed journals and conferences. His research focuses on machine learning hardware accelerators, in-memory computing, DTCO for emerging nano-devices, heterogeneous and 3D integrated circuits, hardware security, and cryogenic computing. He received the best M.Tech. student award from IISc Bangalore, the Intel Foundation Ph.D. Fellowship, the Purdue School of ECE Outstanding Doctoral Dissertation Award, the 2015 IEEE Transactions on VLSI Systems Best Paper Award, the SRC Outstanding Industrial Liaison Award, Micron Foundation Faculty Awards, the 2020 Intel Rising Star Faculty Award, and an NSF CAREER Award. He has served on the technical program committees of the CICC, A-SSCC, DAC, ICCAD, ISLPED, and AICAS conferences. He currently serves as an associate editor for IEEE Solid-State Circuits Letters and IEEE Transactions on VLSI Systems, as a distinguished lecturer for the IEEE Solid-State Circuits Society and the IEEE Electron Devices Society, and as chair of the IEEE Solid-State Circuits Society and Circuits and Systems Society Central Texas joint chapter. He is a senior member of IEEE and the National Academy of Inventors.

Address: University of Texas at Austin, Austin, TX