BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:Canada/Pacific
BEGIN:DAYLIGHT
DTSTART:20210314T030000
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
RRULE:FREQ=YEARLY;BYDAY=2SU;BYMONTH=3
TZNAME:PDT
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20211107T010000
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=11
TZNAME:PST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20210427T000142Z
UID:3F9081D2-9450-42C7-9C87-973DE8EC01A4
DTSTART;TZID=Canada/Pacific:20210427T130000
DTEND;TZID=Canada/Pacific:20210427T150000
DESCRIPTION:Presenter: Professor Mieszko Lis (University of British Columbi
 a)\n\nAccelerating DNN inference in hardware has been extensively explored
  in recent years\, and many accelerator architectures have been proposed. 
 How much performance and efficiency is still left for future researchers?\
 n\nIn this talk\, Professor Lis will argue that extracting significantly m
 ore performance requires us to design multiple system components — algori
 thms\, models\, and hardware — to work together. Architects need to under
 stand not just the computations that DNN inference and training perform\, 
 but also how models are designed and how optimization algorithms train tho
 se models.\n\nHe will discuss examples of this approach: an efficient spar
 se-from-scratch training algorithm\, and a technique that modifies existin
 g CNNs by fusing adjacent convolutional layers. Both of these achieve 4–6
 × speedups at iso-accuracy\, but neither would be possible without co-des
 igning the algorithm\, the model\, and the hardware accelerator.\n\nRegist
 ration is required for access to the Zoom meeting link. This way we can en
 sure quality discussions among participants from industry and academia.\n\
 nVirtual: https://events.vtools.ieee.org/m/270843
LOCATION:Virtual: https://events.vtools.ieee.org/m/270843
ORGANIZER:mailto:Bob_Gill@bcit.ca
SEQUENCE:2
SUMMARY:Algorithm-model-hardware codesign for accelerating deep neural netw
 ork inference and training
URL;VALUE=URI:https://events.vtools.ieee.org/m/270843
X-ALT-DESC;FMTTYPE=text/html:Description: <br /><p><strong>Presenter:</str
 ong>&nbsp\;Professor Mieszko Lis (University of British Columbia)</p>\n<p>
 Accelerating DNN inference in hardware has been extensively explored in re
 cent years\, and many accelerator architectures have been proposed. How mu
 ch performance and efficiency is still left for future researchers?</p>\n<
 p>In this talk\, Professor Lis will argue that extracting significantly mo
 re performance requires us to design multiple system components &mdash\; a
 lgorithms\, models\, and hardware &mdash\; to work together. Architects ne
 ed to understand not just the computations that DNN inference and training
  perform\, but also how models are designed and how optimization algorithm
 s train those models.</p>\n<p>He will discuss examples of this approach: a
 n efficient sparse-from-scratch training algorithm\, and a technique that 
 modifies existing CNNs by fusing adjacent convolutional layers. Both of th
 ese achieve 4&ndash\;6&times\; speedups at iso-accuracy\, but neither woul
 d be possible without co-designing the algorithm\, the model\, and the har
 dware accelerator.</p>\n<div class="dc-content">\n<div class="dc-modules">
 \n<div class="dc-modules__item">\n<div class="eds-l-mar-bot-12 eds-l-lg-ma
 r-bot-14">\n<div class="dc-modules__item--text">\n<p><strong>Registration 
 is required for access to the Zoom meeting link.</strong>&nbsp\;This way w
 e can ensure quality discussions among participants from industry and acad
 emia.</p>\n</div>\n</div>\n</div>\n</div>\n</div>
END:VEVENT
END:VCALENDAR