DEEP LEARNING AND NEUROMORPHIC COMPUTING – TECHNOLOGY, HARDWARE AND IMPLEMENTATION

#Neuromorphic #HW/SW #co-design #memristor

Following technology advances in high-performance computing systems and the rapid growth of data acquisition, machine learning, especially deep learning, has achieved remarkable success in many research areas and applications. This success, to a great extent, is enabled by large-scale deep neural networks (DNNs) that learn from huge volumes of data. Deploying such big models, however, is both computation-intensive and memory-intensive. Although hardware acceleration for neural networks has been studied extensively, hardware development still falls far behind the upscaling of DNN models at the software level. We envision that hardware/software co-design is necessary for accelerating deep neural networks. In this talk, I will start with the trends of machine learning research in academia and industry, followed by our study on how to run sparse and low-precision neural networks, as well as our investigation of memristor-based computing engines.
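
As background for the techniques the abstract names, here is a minimal NumPy sketch (an illustration under assumed parameters, not material from the talk; the 4-bit width, pruning threshold, and all variable names are hypothetical) of how a layer can be made sparse by magnitude pruning, reduced to low precision by uniform quantization, and mapped onto a memristor crossbar, where Ohm's law and Kirchhoff's current law evaluate the matrix-vector product in analog:

  import numpy as np

  rng = np.random.default_rng(0)

  # Dense weights of one (hypothetical) DNN layer.
  W = rng.standard_normal((4, 8))

  # Sparsity: magnitude pruning zeroes out small weights.
  W_sparse = np.where(np.abs(W) > 0.5, W, 0.0)

  # Low precision: uniform symmetric 4-bit quantization.
  half_range = (2 ** 4 - 1) // 2                 # 7 levels per sign
  scale = np.abs(W_sparse).max() / half_range
  W_q = np.round(W_sparse / scale) * scale

  # Crossbar mapping: device conductances are non-negative, so a
  # signed weight is split over two arrays (W = G_pos - G_neg).
  G_pos = np.maximum(W_q, 0.0)
  G_neg = np.maximum(-W_q, 0.0)

  # Inputs arrive as row voltages; Ohm's law (I = G * V) and
  # Kirchhoff's current law sum the per-device currents on each
  # column, yielding the matrix-vector product in one step.
  v = rng.standard_normal(8)
  i_out = G_pos @ v - G_neg @ v

  # The analog result equals the quantized digital computation.
  assert np.allclose(i_out, W_q @ v)

In real devices, finite conductance range, programming noise, and wire resistance perturb the column currents, which is part of why the abstract argues for hardware/software co-design rather than hardware mapping alone.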



  Date and Time

  • Starts: 12 September 2019, 04:01 AM UTC
  • Ends: 24 October 2019, 05:30 PM UTC

  Location

  • 3140 Market St., Drexel University
  • Philadelphia, Pennsylvania, United States 19104
  • Building: Bossone Research Enterprise Center
  • Room Number: 302

  Hosts

  • Co-sponsored by the ECE Dept., Drexel University

  Registration

  • No Admission Charge


  Speakers

Hai "Helen" Li, Center for Computational Evolutionary Intelligence, Duke University

Topic:

DEEP LEARNING AND NEUROMORPHIC COMPUTING – TECHNOLOGY, HARDWARE AND IMPLEMENTATION

Biography:

Dr. Hai “Helen” Li is the Clare Boothe Luce Associate Professor in the Department of Electrical and Computer Engineering at Duke University. She received her B.S. and M.S. from Tsinghua University and her Ph.D. from Purdue University. At Duke, she co-directs the Duke University Center for Computational Evolutionary Intelligence. Her research interests include machine learning acceleration and security, neuromorphic circuits and systems for brain-inspired computing, conventional and emerging memory design and architecture, and software/hardware co-design. She has received the NSF CAREER Award (2012), the DARPA Young Faculty Award (2013), the TUM-IAS Hans Fischer Fellowship from Germany (2017), seven best paper awards, and eight additional best paper nominations. Dr. Li is a Fellow of the IEEE and a Distinguished Member of the ACM. For more information, please see her webpage at http://cei.pratt.duke.edu/.

Address: 701 W. Main St., Suite 400, Duke University, Durham, North Carolina, United States, 27708