A Review of Computing in Memory: Devices and Architecture for Deep Learning


Artificial Intelligence (AI) and Deep Learning (DL) are shaping the modern world. AI is driving advances in computer vision, natural language processing, autonomous vehicles, security, and industrial production. However, the existing AI infrastructure is largely built on von Neumann computing technology, which faces the memory wall, the heat wall, and the data-transfer bottleneck in meeting the overwhelming demands of big-data computing. Computing-in-Memory (CIM) is an emerging computing paradigm that addresses the data-transfer bottleneck in modern computing architectures for DL and AI applications, promising higher throughput and energy efficiency than existing architectures. Emerging non-volatile memory (NVM) devices such as Resistive Random Access Memory (RRAM), along with conventional Static Random Access Memory (SRAM), are the main candidates for CIM architectures. This review begins with an introduction to the memory devices used for CIM. We then discuss the modes of operation of macro- and system-level architectures and review single- and multi-bit macro operations. Finally, the review discusses the limitations of CIM architectures and the prospects of the CIM research area.
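To make the CIM idea concrete, the sketch below (not part of the talk; all values are hypothetical) models the core operation behind most RRAM-crossbar CIM designs for deep learning: input activations are applied as word-line voltages, each cell's conductance encodes a weight, and the bit-line currents sum the products via Ohm's law and Kirchhoff's current law, so the multiply-accumulate happens inside the memory array without moving the weights.

```python
def crossbar_mvm(voltages, conductances):
    """Bit-line currents of an idealized crossbar: I[j] = sum_i V[i] * G[i][j].

    voltages:     word-line read voltages (volts), one per row.
    conductances: cell conductances (siemens), rows x columns.
    """
    cols = len(conductances[0])
    return [sum(v * row[j] for v, row in zip(voltages, conductances))
            for j in range(cols)]

# Hypothetical 3x2 weight array stored as conductances (siemens).
G = [[1e-6, 2e-6],
     [3e-6, 1e-6],
     [2e-6, 2e-6]]
V = [0.1, 0.2, 0.1]        # input activations encoded as read voltages (volts)
I = crossbar_mvm(V, G)     # column currents sensed at the bit lines
```

In a real macro these analog currents would be digitized by per-column ADCs; device non-idealities (conductance variation, IR drop, sneak paths) are among the limitations the review addresses.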



  Date and Time



  • Date: 30 Dec 2022
  • Time: 11:00 AM to 11:30 AM
  • All times are (UTC-05:00) Eastern Time (US & Canada)
  • Starts 29 December 2022 05:00 PM
  • Ends 30 December 2022 11:20 AM
  • No Admission Charge


  Speakers

Shahanur Alam

Topic:

A Review of Computing in Memory: Devices and Architecture for Deep Learning


Biography:

Md Shahanur Alam is a PhD candidate in the Department of Electrical Engineering at the University of Dayton. He received his MSEE from Wright State University in May 2018 and completed his BSc in Applied Physics and Electronics at the University of Dhaka, Bangladesh, in 2010. His research interests include Computing-in-Memory, analog neuromorphic architectures, and on-chip training systems utilizing emerging non-volatile memory devices. Mr. Alam is a student member of IEEE.