BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:US/Pacific
BEGIN:DAYLIGHT
DTSTART:20220313T020000
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
RRULE:FREQ=YEARLY;BYDAY=2SU;BYMONTH=3
TZNAME:PDT
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20211107T020000
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=11
TZNAME:PST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20211118T133533Z
UID:AF99FE39-9084-41CE-8DE8-9EF0D4BE8346
DTSTART;TZID=US/Pacific:20211123T180000
DTEND;TZID=US/Pacific:20211123T191500
DESCRIPTION:The current resurgence of artificial intelligence is due to adv
 ances in deep learning. Systems based on deep learning now exceed human ca
 pability in speech recognition\, object classification\, and playing games
  like Go. Deep learning has been enabled by powerful\, efficient computing
  hardware. The algorithms used have been around since the 1980s\, but it h
 as only been in the last decade - when powerful GPUs became available to t
 rain networks - that the technology has become practical. Advances in DL a
 re now gated by hardware performance. This talk will review the current st
 ate of deep learning hardware and explore a number of directions to contin
 ue performance scaling in the absence of Moore’s Law. Topics discussed w
 ill include number representation\, sparsity\, memory organization\, optim
ized circuits\, and analog computation.\n\nSpeaker(s): Bill Dally\n\nVirt
 ual: https://events.vtools.ieee.org/m/290553
LOCATION:Virtual: https://events.vtools.ieee.org/m/290553
ORGANIZER:mailto:virbahu@ieee.org
SEQUENCE:1
SUMMARY:Industry Spotlight - Deep Learning Hardware: Past\, Present\, and F
 uture
URL;VALUE=URI:https://events.vtools.ieee.org/m/290553
X-ALT-DESC;FMTTYPE=text/html:Description: <br /><p>The current resurgen
 ce of artificial intelligence is due to advances in deep learning. Sys
 tems based on deep learning now exceed human capability in speech reco
 gnition\, object classification\, and playing games like Go. Deep lear
 ning has been enabled by powerful\, efficient computing hardware. The 
 algorithms used have been around since the 1980s\, but it has only bee
 n in the last decade - when powerful GPUs became available to train ne
 tworks - that the technology has become practical. Advances in DL are 
 now gated by hardware performance. This talk will review the current s
 tate of deep learning hardware and explore a number of directions to c
 ontinue performance scaling in the absence of Moore&rsquo;s Law. Topic
 s discussed will include number representation\, sparsity\, memory org
 anization\, optimized circuits\, and analog computation.</p>
END:VEVENT
END:VCALENDAR