BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:US/Eastern
BEGIN:DAYLIGHT
DTSTART:20190310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RRULE:FREQ=YEARLY;BYDAY=2SU;BYMONTH=3
TZNAME:EDT
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20191103T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=11
TZNAME:EST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20200317T183441Z
UID:FEAA0332-E098-4B17-AF77-BD9E022C60ED
DTSTART;TZID=US/Eastern:20191024T133000
DTEND;TZID=US/Eastern:20191024T143000
DESCRIPTION:Following advances in high-performance computing systems an
 d the rapid growth of data acquisition\, machine learning\, especiall
 y deep learning\, has achieved remarkable success in many research ar
 eas and applications. This success is enabled\, to a great extent\, b
 y large-scale deep neural networks (DNNs) that learn from huge volume
 s of data. Deploying such big models\, however\, is both computation- a
 nd memory-intensive. Although hardware acceleration for neural networ
 ks has been extensively studied\, hardware development still falls fa
 r behind the upscaling of DNN models at the software level. We envisi
 on that hardware/software co-design is necessary for accelerating dee
 p neural networks. In this talk\, I will start with the trends of mac
 hine learning research in academia and industry\, followed by our wor
 k on running sparse and low-precision neural networks\, as well as ou
 r investigation of a memristor-based computing engine.\n\nCo-sponsore
 d by: ECE Dept.\, Drexel University\n\nSpeaker(s): Hai "Helen" Li\n\n
 Room: 302\, Bldg: Bossone Research Enterprise Center\, 3140 Market St
 .\, Drexel University\, Philadelphia\, Pennsylvania\, United States\, 1
 9104
LOCATION:Room: 302\, Bldg: Bossone Research Enterprise Center\, 3140 Market
  St.\, Drexel University\, Philadelphia\, Pennsylvania\, United States\, 1
 9104
ORGANIZER:mailto:ziauddin.ahmad.us@ieee.org
SEQUENCE:2
SUMMARY:DEEP LEARNING AND NEUROMORPHIC COMPUTING – TECHNOLOGY\, HARDWARE 
 AND IMPLEMENTATION
URL;VALUE=URI:https://events.vtools.ieee.org/m/204622
X-ALT-DESC;FMTTYPE=text/html:<p>Following advances in high-performance c
 omputing systems and the rapid growth of data acquisition\, machine le
 arning\, especially deep learning\, has achieved remarkable success i
 n many research areas and applications. This success is enabled\, to a g
 reat extent\, by large-scale deep neural networks (DNNs) that learn fr
 om huge volumes of data. Deploying such big models\, however\, is both c
 omputation- and memory-intensive. Although hardware acceleration for n
 eural networks has been extensively studied\, hardware development sti
 ll falls far behind the upscaling of DNN models at the software level. W
 e envision that hardware/software co-design is necessary for accelerat
 ing deep neural networks. In this talk\, I will start with the trends o
 f machine learning research in academia and industry\, followed by our w
 ork on running sparse and low-precision neural networks\, as well as o
 ur investigation of a memristor-based computing engine.</p>
END:VEVENT
END:VCALENDAR