BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:Asia/Jerusalem
BEGIN:DAYLIGHT
DTSTART:20250328T030000
TZOFFSETFROM:+0200
TZOFFSETTO:+0300
RRULE:FREQ=YEARLY;BYDAY=-1FR;BYMONTH=3
TZNAME:IDT
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20241027T010000
TZOFFSETFROM:+0300
TZOFFSETTO:+0200
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10
TZNAME:IST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20241121T161453Z
UID:4C09DB50-F8DE-4A9A-AA7B-88685B7A1635
DTSTART;TZID=Asia/Jerusalem:20241120T173000
DTEND;TZID=Asia/Jerusalem:20241120T190000
DESCRIPTION:Abstract:\n\nDeep learning can provide superior performance in
 many fields of application. However\, the cost of implementing deep
 learning models in practical applications is high. Deep learning models
 are both computation intensive and memory intensive. Computation is an
 important aspect of deep learning: it determines the latency\, that is\,
 how fast the results can be obtained. In this seminar\, computer
 arithmetic for deep learning will be discussed. The lecture will start by
 discussing the computation requirements of deep learning models and
 layers. Then\, several computer arithmetic designs for deep learning from
 the literature will be used as examples. Finally\, future trends of
 computer arithmetic for deep learning computation will be
 discussed.\n\nPosit is designed as an alternative to the IEEE 754
 floating-point format for many applications. It has a non-uniform number
 distribution\, and it can provide a much larger dynamic range than the
 IEEE floating-point format. These properties make posit especially
 suitable for deep learning applications. In recent years\, more and more
 posit-based deep learning hardware accelerators have appeared in the
 literature. In this lecture\, the basics of the posit format and the
 corresponding posit-based arithmetic units available in the literature\,
 including the adder\, multiplier\, multiply-accumulate unit\, and quire
 operator\, will be discussed. Then\, several posit-based deep learning
 processors for deep learning inference and training will be discussed.
 Finally\, the trends and challenges of posit arithmetic units and
 posit-based deep learning processors will be discussed to motivate more
 related research.\n\nDeep learning applications will be shared with the
 audience.\n\nBio:\n\nSeokbum Ko is currently a Professor at the
 Department of Electrical and Computer Engineering and the Division of
 Biomedical Engineering\, University of Saskatchewan\, Canada. He received
 his PhD from the University of Rhode Island\, USA\, in 2002.\n\nHis areas
 of research interest include computer architecture/arithmetic\, efficient
 hardware implementation of compute-intensive applications\, deep learning
 processor architecture and biomedical engineering.\n\nHe is an IEEE
 Circuits and Systems Society Distinguished Lecturer (2024-2025)\, a
 senior member of the IEEE Circuits and Systems Society and an associate
 editor for IEEE TVLSI\, IEEE TCAS-II\, IEEE Access and IET Computers &
 Digital Techniques. He is an active member of the IEEE CAS Technical
 Committee\, IEEE P3109\, IEEE 754-2029\, the IEEE Domain-Specific
 Accelerators Standards Committee and the IEEE Emerging Processor Systems
 Standards Committee. He was an associate editor for IEEE TCAS-I
 (2019-2021).\n\nThe webinar is free but registration is
 required.\n\nZoom link will be sent after registration.\n\nVirtual:
 https://events.vtools.ieee.org/m/446950
LOCATION:Virtual: https://events.vtools.ieee.org/m/446950
ORGANIZER:mailto:shahar@ee.technion.ac.il
SEQUENCE:127
SUMMARY:Efficient Hardware Implementation of Deep Learning Computation and 
 its Application - Prof. Seokbum Ko\, University of Saskatchewan\, Canada
URL;VALUE=URI:https://events.vtools.ieee.org/m/446950
X-ALT-DESC;FMTTYPE=text/html:Description: <br /><p><strong>Abstract:
 </strong></p>\n<p>Deep learning can provide superior performance in many
 fields of application. However\, the cost of implementing deep learning
 models in practical applications is high. Deep learning models are both
 computation intensive and memory intensive. Computation is an important
 aspect of deep learning: it determines the latency\, that is\, how fast
 the results can be obtained. In this seminar\, computer arithmetic for
 deep learning will be discussed. The lecture will start by discussing
 the computation requirements of deep learning models and layers. Then\,
 several computer arithmetic designs for deep learning from the
 literature will be used as examples. Finally\, future trends of computer
 arithmetic for deep learning computation will be
 discussed.</p>\n<p>Posit is designed as an alternative to the IEEE 754
 floating-point format for many applications. It has a non-uniform number
 distribution\, and it can provide a much larger dynamic range than the
 IEEE floating-point format. These properties make posit especially
 suitable for deep learning applications. In recent years\, more and more
 posit-based deep learning hardware accelerators have appeared in the
 literature. In this lecture\, the basics of the posit format and the
 corresponding posit-based arithmetic units available in the literature\,
 including the adder\, multiplier\, multiply-accumulate unit\, and quire
 operator\, will be discussed. Then\, several posit-based deep learning
 processors for deep learning inference and training will be discussed.
 Finally\, the trends and challenges of posit arithmetic units and
 posit-based deep learning processors will be discussed to motivate more
 related research.</p>\n<p>Deep learning applications will be shared with
 the audience.</p>\n<p><strong>Bio:&nbsp\;</strong></p>\n<p>Seokbum Ko is
 currently a Professor at the Department of Electrical and Computer
 Engineering and the Division of Biomedical Engineering\, University of
 Saskatchewan\, Canada. He received his PhD from the University of Rhode
 Island\, USA\, in 2002.</p>\n<p>His areas of research interest include
 computer architecture/arithmetic\, efficient hardware implementation of
 compute-intensive applications\, deep learning processor architecture
 and biomedical engineering.</p>\n<p>He is an IEEE Circuits and Systems
 Society Distinguished Lecturer (2024-2025)\, a senior member of the IEEE
 Circuits and Systems Society and an associate editor for IEEE TVLSI\,
 IEEE TCAS-II\, IEEE Access and IET Computers &amp\; Digital Techniques.
 He is an active member of the IEEE CAS Technical Committee\, IEEE
 P3109\, IEEE 754-2029\, the IEEE Domain-Specific Accelerators Standards
 Committee and the IEEE Emerging Processor Systems Standards Committee.
 He was an associate editor for IEEE TCAS-I
 (2019-2021).</p>\n<p><strong>The webinar is free but registration is
 required.</strong></p>\n<p><strong>Zoom link will be sent after
 registration.&nbsp\;</strong></p>
END:VEVENT
END:VCALENDAR

