BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:Europe/Zurich
BEGIN:DAYLIGHT
DTSTART:20190331T030000
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=3
TZNAME:CEST
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20191027T020000
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10
TZNAME:CET
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20190815T210435Z
UID:8BFE3A64-54BF-4846-9EFF-A087218ACF88
DTSTART;TZID=Europe/Zurich:20190726T140000
DTEND;TZID=Europe/Zurich:20190726T150000
DESCRIPTION:Seminar title: From Deep Scaling To Deep Intelligence\n\nMoore
 ’s law\, which has driven the advancement of the semiconductor industry
  for decades\, has come to a screeching halt\, and many researchers are
  convinced that it is almost dead. Following the revival and promise of
  artificial intelligence (AI)\, enabled by the increased computational p
 erformance and memory bandwidth that Moore’s law delivered\, there is o
 verwhelming enthusiasm among researchers for accelerating the pace of th
 e VLSI industry. AI relies on neural network techniques whose computatio
 n involves training and inference. Advancing AI requires energy-efficien
 t\, low-power hardware systems\, especially for servers\, main processor
 s\, Internet of Things (IoT) and system-on-chip (SoC) applications\, and
  newer applications in cognitive computing. In light of AI\, this talk f
 ocuses on advanced technology issues and important circuit techniques fo
 r lowering power and improving performance and functionality in nanoscal
 e VLSI design in the midst of variability. The same techniques can be ap
 plied to AI-specific accelerators. Accelerator development for power red
 uction and throughput improvement\, for both edge and data-centric accel
 erators compared to GPUs used for Convolutional Neural Network (CNN) and
  Deep Neural Network (DNN) workloads\, is described. The talk covers mem
 ory (volatile and nonvolatile) solutions for CNN/DNN applications at ext
 remely low Vmin. Finally\, the talk summarizes challenges and future dir
 ections for circuit applications for edge and data-centric accelerators.
 \n\nCo-sponsored by: David Atienza\n\nSpeaker(s): Dr. Rajiv Joshi\n\nRoo
 m: 328\, Bldg: INF\, EPFL Lausanne\, Route Cantonale\, 1015 Lausanne\, S
 witzerland
LOCATION:Room: 328\, Bldg: INF\, EPFL Lausanne\, Route Cantonale\, 1015 La
 usanne\, Switzerland
ORGANIZER:mailto:david.atienza@epfl.ch
SEQUENCE:4
SUMMARY:IEEE Swiss ED DL Lecture by Dr. Rajiv Joshi
URL;VALUE=URI:https://events.vtools.ieee.org/m/201225
X-ALT-DESC;FMTTYPE=text/html:<p><u>Seminar title: <strong>From Deep Scali
 ng To Deep Intelligence</strong></u></p>\n<p>Moore&rsquo;s law\, which h
 as driven the advancement of the semiconductor industry for decades\, ha
 s come to a screeching halt\, and many researchers are convinced that it
  is almost dead. Following the revival and promise of artificial intelli
 gence (AI)\, enabled by the increased computational performance and memo
 ry bandwidth that Moore&rsquo;s law delivered\, there is overwhelming en
 thusiasm among researchers for accelerating the pace of the VLSI industr
 y. AI relies on neural network techniques whose computation involves tra
 ining and inference. Advancing AI requires energy-efficient\, low-power
  hardware systems\, especially for servers\, main processors\, Internet
  of Things (IoT) and system-on-chip (SoC) applications\, and newer appli
 cations in cognitive computing. In light of AI\, this talk focuses on ad
 vanced technology issues and important circuit techniques for lowering p
 ower and improving performance and functionality in nanoscale VLSI desig
 n in the midst of variability. The same techniques can be applied to AI-
 specific accelerators. Accelerator development for power reduction and t
 hroughput improvement\, for both edge and data-centric accelerators comp
 ared to GPUs used for Convolutional Neural Network (CNN) and Deep Neural
  Network (DNN) workloads\, is described. The talk covers memory (volatil
 e and nonvolatile) solutions for CNN/DNN applications at extremely low V
 min. Finally\, the talk summarizes challenges and future directions for
  circuit applications for edge and data-centric accelerators.</p>
END:VEVENT
END:VCALENDAR