IEEE Swiss CAS Distinguished Lecture / Accelerator Architectures for Deep Neural Networks: Inference and Training
Dear members,
We will be hosting a lecture by Prof. Keshab K. Parhi, an IEEE CAS Distinguished Lecturer, at ETHZ on 01.12.2021 from 13:00 to 14:00. We hope to see you there. A second lecture by Prof. Parhi will be given at EPFL; the room and date information will be sent out later.
Regards,
Shih-Chii Liu
Date and Time
- Date: 01 Dec 2021
- Time: 01:00 PM to 02:00 PM
- All times are (UTC+01:00) Bern
Location
- ETHZ
- Sternwartstrasse 7
- 8092 Zurich, Switzerland
- Building: ETF
- Room Number: E1
Hosts
Host for lecture: Prof. Christoph Studer (studer@iis.ee.ethz.ch).
Speakers
Prof. Keshab K. Parhi, University of Minnesota, USA
Accelerator Architectures for Deep Neural Networks: Inference and Training
Machine learning and data analytics continue to expand the fourth industrial revolution and affect many aspects of our lives. This talk will explore hardware accelerator architectures for deep neural networks (DNNs). I will first present a brief review of the history of neural networks (OJCAS-2020). I will then describe our recent work on Perm-DNN, which is based on permuted-diagonal interconnections in deep convolutional neural networks, and show how this structured sparsity can reduce the energy consumption associated with memory access in these systems (MICRO-2018). Next, I will discuss reducing latency and memory access in accelerator architectures for training DNNs by gradient interleaving using systolic arrays (ISCAS-2020). Finally, I will present our recent work on LayerPipe, an approach to training deep neural networks that enables simultaneous intra-layer and inter-layer pipelining (ICCAD-2021). This approach can increase processor utilization and training speed without increasing communication costs.
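For readers unfamiliar with the structured-sparsity idea mentioned in the abstract, the short Python/NumPy sketch below is a toy illustration of our own (it is not code from the Perm-DNN / MICRO-2018 work): every p x p block of a weight matrix keeps non-zeros only on one circularly shifted diagonal, so a block can be stored as p weight values plus a single shift index, which is where the reduction in weight storage, and hence in memory accesses, comes from.

# Toy illustration of permuted-diagonal structured sparsity (a sketch of the
# general idea only, not the actual Perm-DNN architecture).
import numpy as np

def permuted_diagonal_block(values, shift):
    """Build a p x p block whose only non-zeros lie on a circularly shifted diagonal."""
    p = len(values)
    block = np.zeros((p, p))
    rows = np.arange(p)
    cols = (rows + shift) % p          # circular shift of the main diagonal
    block[rows, cols] = values
    return block

def build_weight_matrix(blocks_r, blocks_c, p, rng):
    """Assemble a (blocks_r*p) x (blocks_c*p) matrix from permuted-diagonal blocks."""
    W = np.zeros((blocks_r * p, blocks_c * p))
    for i in range(blocks_r):
        for j in range(blocks_c):
            values = rng.standard_normal(p)   # p stored weights per block
            shift = rng.integers(p)           # one shift index per block
            W[i*p:(i+1)*p, j*p:(j+1)*p] = permuted_diagonal_block(values, shift)
    return W

rng = np.random.default_rng(0)
p = 4
W = build_weight_matrix(blocks_r=2, blocks_c=3, p=p, rng=rng)
dense_params = W.size
stored_params = (W.size // (p * p)) * p       # only p values kept per block
print(f"dense: {dense_params} weights, structured: {stored_params} weights "
      f"({dense_params // stored_params}x fewer weights to fetch per layer)")

In this toy setting the storage (and weight-fetch traffic) shrinks by a factor of p relative to a dense layer, while the block structure keeps the sparsity pattern regular enough for hardware to exploit.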
Biography:
Keshab K. Parhi received the B.Tech. degree from the Indian Institute of Technology (IIT), Kharagpur, in 1982, the M.S.E.E. degree from the University of Pennsylvania, Philadelphia, in 1984, and the Ph.D. degree from the University of California, Berkeley, in 1988. He has been with the University of Minnesota, Minneapolis, since 1988, where he is currently Distinguished McKnight University Professor and Edgar F. Johnson Professor of Electronic Communication in the Department of Electrical and Computer Engineering. He has published over 650 papers, is the inventor of 32 patents, and has authored the textbook VLSI Digital Signal Processing Systems (Wiley, 1999) and coedited the reference book Digital Signal Processing for Multimedia Systems (Marcel Dekker, 1999). His current research addresses VLSI architecture design of machine learning systems, hardware security, data-driven neuroscience, and molecular/DNA computing. Dr. Parhi is the recipient of numerous awards, including the 2017 Mac Van Valkenburg Award and the 2012 Charles A. Desoer Technical Achievement Award from the IEEE Circuits and Systems Society, the 2004 F. E. Terman Award from the American Society for Engineering Education, and the 2003 IEEE Kiyo Tomiyasu Technical Field Award. He served as the Editor-in-Chief of the IEEE Transactions on Circuits and Systems, Part I, during 2004 and 2005. He is a Fellow of IEEE, ACM, AAAS, and the National Academy of Inventors.
Address: Dept. of Electrical & Computer Engineering, University of Minnesota, Minneapolis, Minnesota, United States
Agenda
Speaker: Prof. Keshab K. Parhi
Date: 01.12.2021
Time: 13:00-14:00
Place: ETF E1, ETH, Sternwartstrasse 7, 8092 Zurich
Abstract: see above.