Hardware/Software Co-Design of Deep Learning Accelerators

#"Hardware/Software #Co-Design #of #Deep #Learning #Accelerators" #By #Dr. #Yiyu #Shi #University #Notre #Dame
Share

The prevalence of deep neural networks today is supported by a variety of powerful hardware platforms, including GPUs, FPGAs, and ASICs. A fundamental question arises in almost every implementation of deep neural networks: given a specific task, what are the optimal neural architecture and the tailor-made hardware in terms of accuracy and efficiency? Earlier approaches attempted to address this question through hardware-aware neural architecture search (NAS), where features of a fixed hardware design are taken into consideration when designing neural architectures. However, we believe that the best practice is to design the neural architecture and the hardware simultaneously, identifying the pairs that maximize both test accuracy and hardware efficiency. In this talk, we will present novel co-exploration frameworks for neural architectures and various hardware platforms, including FPGA, NoC, ASIC, and Computing-in-Memory, all of which are the first in the literature. We will demonstrate that our co-exploration concept greatly expands the design freedom and pushes forward the Pareto frontier between hardware efficiency and test accuracy for better design tradeoffs.
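
For readers unfamiliar with the co-exploration idea, the toy Python sketch below illustrates a joint search over architecture and hardware choices with Pareto filtering of (accuracy, latency) trade-offs. Every search space, knob, and proxy metric in it is a hypothetical placeholder for illustration only; it is not the frameworks presented in this talk.

# Toy illustration of joint (architecture, hardware) search with Pareto filtering.
# Every knob and metric below is a hypothetical placeholder, not the talk's method.
import itertools
import random

ARCH_SPACE = {"depth": [8, 12, 16], "width": [32, 64, 128]}   # candidate networks
HW_SPACE = {"pe_array": [8, 16, 32], "bitwidth": [4, 8, 16]}  # candidate accelerators

def evaluate(arch, hw):
    """Return (accuracy, latency) proxies; a real framework trains or estimates these."""
    accuracy = (0.70
                + 0.01 * ARCH_SPACE["depth"].index(arch["depth"])
                + 0.02 * ARCH_SPACE["width"].index(arch["width"])
                + random.uniform(-0.005, 0.005))
    latency_ms = (arch["depth"] * arch["width"]) / (hw["pe_array"] * hw["bitwidth"])
    return accuracy, latency_ms

def pareto_front(points):
    """Keep designs not dominated in both accuracy (higher is better) and latency (lower is better)."""
    front = []
    for p in points:
        dominated = any(q["acc"] >= p["acc"] and q["lat"] <= p["lat"]
                        and (q["acc"] > p["acc"] or q["lat"] < p["lat"])
                        for q in points)
        if not dominated:
            front.append(p)
    return front

# Exhaustive co-exploration of the tiny joint space; real frameworks use reinforcement
# learning, evolutionary search, or differentiable relaxations instead.
designs = []
for d, w, pe, b in itertools.product(ARCH_SPACE["depth"], ARCH_SPACE["width"],
                                     HW_SPACE["pe_array"], HW_SPACE["bitwidth"]):
    arch, hw = {"depth": d, "width": w}, {"pe_array": pe, "bitwidth": b}
    acc, lat = evaluate(arch, hw)
    designs.append({"arch": arch, "hw": hw, "acc": acc, "lat": lat})

for p in pareto_front(designs):
    print(p["arch"], p["hw"], "acc=%.3f" % p["acc"], "lat=%.2f ms" % p["lat"])

Searching the two spaces jointly, rather than fixing the hardware first, is what allows the Pareto frontier between accuracy and efficiency to move outward.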

Please register for this event using the vTools Registration Link.

Please join the event using the Zoom meeting Link.



  Date and Time

  • Date: 10 Feb 2021
  • Time: 02:00 PM to 03:00 PM
  • All times are (GMT-05:00) US/Eastern

  Location

  • 1000 River Road
  • Teaneck, New Jersey, United States 07666
  • Building: Muscarelle Center
  • Room Number: M105

  Hosts

  • Contact Event Hosts
  • Co-sponsored by the North Jersey Section, Signal Processing Chapter

  Registration

  • Registration starts: 01 January 2021 01:11 PM
  • Registration ends: 10 February 2021 02:00 PM
  • All times are (GMT-05:00) US/Eastern
  • No Admission Charge


  Speakers

Dr. Yiyu Shi of the University of Notre Dame

Topic:

Hardware/Software Co-Design of Deep Learning Accelerators

Biography:

Dr. Yiyu Shi is currently an associate professor in the Department of Computer Science and Engineering at the University of Notre Dame, the site director of the NSF I/UCRC Alternative and Sustainable Intelligent Computing, and a visiting scientist at Boston Children’s Hospital, the primary pediatric program of Harvard Medical School. His current research interests focus on hardware intelligence with biomedical applications. He has published over 200 peer-reviewed papers in premier venues such as Nature research journals, including more than a dozen best-paper awards or nominations at top conferences. He is also the recipient of the IBM Invention Achievement Award, the Japan Society for the Promotion of Science (JSPS) Faculty Invitation Fellowship, the Humboldt Research Fellowship, the IEEE St. Louis Section Outstanding Educator Award, the Academy of Science (St. Louis) Innovation Award, the Missouri S&T Faculty Excellence Award, the NSF CAREER Award, the IEEE Region 5 Outstanding Individual Achievement Award, the Air Force Summer Faculty Fellowship, the IEEE Computer Society Mid-Career Research Achievement Award, and a Facebook Research Award. He has served on the technical program committees of many international conferences. He serves on the executive committee of ACM SIGDA, as deputy editor-in-chief of the IEEE VLSI CAS Newsletter, and as an associate editor of various IEEE and ACM journals.

Address: New Jersey, United States




