BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:Europe/Zurich
BEGIN:DAYLIGHT
DTSTART:20220327T020000
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=3
TZNAME:CEST
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20211031T030000
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10
TZNAME:CET
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20211211T140206Z
UID:62E0C828-3B08-40B4-9EA3-257381FE035C
DTSTART;TZID=Europe/Zurich:20211201T130000
DTEND;TZID=Europe/Zurich:20211201T140000
DESCRIPTION:Dear members\,\n\nWe will be hosting a lecture given by Prof. K
 eshab K. Parhi\, an IEEE CAS Distinguished Lecturer\, at ETHZ from 13:00-
 14:00\, 01.12.2021. Hope to see you there. A second lecture will be given 
 by Prof. Parhi at EPFL. The room and date information will be sent out la
 ter.\n\nRegards\,\n\nShih-Chii Liu\n\nSpeaker(s): Prof. Keshab K. Parhi
 \n\nAgenda: \nSpeaker: Prof. Keshab Parhi\n\nDate: 01.12.2021\n\nTime: 13
 :00-14:00\n\nPlace: ETF E1\, ETH\, Sternwartstrasse 7\, 8092 Zurich\n\nAbs
 tract: Machine learning and data analytics continue to drive the fourth 
 industrial revolution and affect many aspects of our lives. The talk will 
 explore hardware accelerator architectures for deep neural networks (DNN
 s). I will present a brief review of the history of neural networks (OJCA
 S-2020). I will talk about our recent work on Perm-DNN based on permuted-
 diagonal interconnections in deep convolutional neural networks and how s
 tructured sparsity can reduce energy consumption associated with memory a
 ccess in these systems (MICRO-2018). I will then talk about reducing late
 ncy and memory access in accelerator architectures for training DNNs by g
 radient interleaving using systolic arrays (ISCAS-2020). Finally\, I will 
 present our recent work on LayerPipe\, an approach for training deep neur
 al networks that leads to simultaneous intra-layer and inter-layer pipeli
 ning (ICCAD-2021). This approach can increase processor utilization effic
 iency and speed up training without increasing communication costs.\n\nRo
 om: E1\, Bldg: ETF\, ETHZ\, Sternwartstrasse 7\, Zurich\, Switzerland\, 
 8092\, Virtual: https://events.vtools.ieee.org/m/289904
LOCATION:Room: E1\, Bldg: ETF\, ETHZ\, Sternwartstrasse 7\, Zurich\, Switze
 rland\, 8092\, Virtual: https://events.vtools.ieee.org/m/289904
ORGANIZER:mailto:shih@ini.uzh.ch
SEQUENCE:7
SUMMARY:IEEE SWISS CAS Distinguished Lecture / Accelerator Architectures fo
 r Deep Neural Networks: Inference and Training
URL;VALUE=URI:https://events.vtools.ieee.org/m/289904
X-ALT-DESC;FMTTYPE=text/html:Description: <br /><p>Dear members\,</p>\n<p>
 We will be hosting a lecture given by Prof. Keshab K. Parhi\, an IEEE CAS 
 Distinguished Lecturer\, at ETHZ from 13:00-14:00\, 01.12.2021. Hope to s
 ee you there. A second lecture will be given by Prof. Parhi at EPFL. The 
 room and date information will be sent out later.</p>\n<p>Regards\,</p>\n
 <p>Shih-Chii Liu</p><br /><br />Agenda: <br /><p>Speaker: Prof. Keshab Pa
 rhi</p>\n<p>Date: 01.12.2021</p>\n<p>Time: 13:00-14:00</p>\n<p>Place: ETF 
 E1\, ETH\, Sternwartstrasse 7\, 8092 Zurich</p>\n<p><br />Abstract: Machi
 ne learning and data analytics continue to drive the fourth industrial re
 volution and affect many aspects of our lives. The talk will explore hard
 ware accelerator architectures for deep neural networks (DNNs). I will pr
 esent a brief review of the history of neural networks (OJCAS-2020). I wi
 ll talk about our recent work on Perm-DNN based on permuted-diagonal inte
 rconnections in deep convolutional neural networks and how structured spa
 rsity can reduce energy consumption associated with memory access in thes
 e systems (MICRO-2018). I will then talk about reducing latency and memor
 y access in accelerator architectures for training DNNs by gradient inter
 leaving using systolic arrays (ISCAS-2020). Finally\, I will present our 
 recent work on LayerPipe\, an approach for training deep neural networks 
 that leads to simultaneous intra-layer and inter-layer pipelining (ICCAD-
 2021). This approach can increase processor utilization efficiency and sp
 eed up training without increasing communication costs.</p>
END:VEVENT
END:VCALENDAR