BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:Europe/Paris
BEGIN:DAYLIGHT
DTSTART:20240331T020000
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=3
TZNAME:CEST
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20241027T030000
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10
TZNAME:CET
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20240918T061548Z
UID:FB9DBBF2-58BF-4915-8821-F79B6D249520
DTSTART;TZID=Europe/Paris:20240917T110000
DTEND;TZID=Europe/Paris:20240917T140000
DESCRIPTION:The size and complexity of recent deep learning models continue
  to grow exponentially\, incurring significant hardware overhead for train
 ing those models. Unlike inference-only hardware\, neural network training
  is highly sensitive to computation errors\; training processors must ther
 efore support high-precision computation to avoid a large performance drop
 \, which severely limits their processing efficiency. This talk introduces
  a comprehensive design approach for arriving at an optimal training proce
 ssor design. More specifically\, it discusses in depth the key design deci
 sions for training processors\, including i) hardware-friendly training al
 gorithms\, ii) optimal data formats\, and iii) processor architecture for
  high precision and utilization.\n\nRoom: Amphithéâtre Denis Papin\,
  INSA Centre Val de Loire\, 3 Rue de la Chocolaterie\, Blois\, Centre\,
  France\, 41034
LOCATION:Room: Amphithéâtre Denis Papin\, INSA Centre Val de Loire\, 3 Ru
 e de la Chocolaterie\, Blois\, Centre\, France\, 41034
SEQUENCE:12
SUMMARY:DL : Designing an optimal hardware solution for deep neural network
  training
URL;VALUE=URI:https://events.vtools.ieee.org/m/433172
END:VEVENT
END:VCALENDAR