BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:Canada/Pacific
BEGIN:DAYLIGHT
DTSTART:20220313T030000
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
RRULE:FREQ=YEARLY;BYDAY=2SU;BYMONTH=3
TZNAME:PDT
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20221106T010000
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=11
TZNAME:PST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20220610T160629Z
UID:FA7897E6-FE32-4C3C-83C6-F09F95AC3A5F
DTSTART;TZID=Canada/Pacific:20220606T160000
DTEND;TZID=Canada/Pacific:20220606T173000
DESCRIPTION:Neural networks\, graph analytics\, and other big-data applica
 tions have become vitally important in many domains. This has led to a s
 earch for computing systems that can efficiently utilize the tremendous
  data parallelism associated with these applications. G
 enerally\, we depend on data centers and high-performance computing (HPC) 
 clusters to run various big-data applications. However\, the design of dat
 a centers is dominated by power\, thermal\, and physical constraints. In c
 ontrast\, emerging heterogeneous manycore processing platforms that co
 nsist of CPU and GPU cores along with memory controllers (MCs) and acceler
 ators have small footprints. Moreover\, they offer power and area-efficien
 t tradeoffs for running big-data applications. Consequently\, heterogeneou
 s manycore computing platforms represent a powerful alternative to the dat
 a center-oriented type of computing. However\, typical Network-On-Chip (No
 C) infrastructures employed on conventional manycore platforms are highly 
 sub-optimal for handling the specific needs of CPUs\, GPUs\, and acceler
 ators. To address this challenge\, we need a holistic approach to design
 ing an optimal NoC as the interconnection backbone for heterogeneous man
 ycore platforms that can handle CPU\, GPU\, and applica
 tion-specific accelerator communication requirements efficiently. We will 
 discuss the design of a hybrid NoC architecture suitable for heterogeneo
 us manycore platforms. We will also highlight the effectiveness of machi
 ne learning-inspired multi-objective optimization (MOO) algorithms to qu
 ickly find a NoC that satisfies both CPU and GPU communication requireme
 nts. Widely used
  MOO techniques (e.g.\, NSGA-II or simulated annealing based AMOSA) can re
 quire significant amounts of time due to their exploratory nature. Therefo
 re\, more efficient and scalable ML-based optimization techniques are re
 quired. We will discuss various features of a generalized applicat
 ion-agnostic heterogeneous NoC design that achieves similar levels of perf
 ormance (latency\, throughput\, energy\, and temperature) as application-s
 pecific designs.\n\nSpeaker(s): Partha Pratim Pande\n\nRoom: KAIS 2020\,
  Bldg: Fred Kaiser Building\, University of British Columbia\, 2332 Main
  Mall\, Vancouver\, British Columbia\, Canada\, V6T 1Z4\, Virtual: https:/
 /events.vtools.ieee.org/m/313190
LOCATION:Room: KAIS 2020\, Bldg: Fred Kaiser Building\, University of Brit
 ish Columbia\, 2332 Main Mall\, Vancouver\, British Columbia\, Canada\, V6
 T 1Z4\, Virtual: https://events.vtools.ieee.org/m/313190
ORGANIZER:mailto:ljilja@cs.sfu.ca
SEQUENCE:10
SUMMARY:Interconnect Meets Architecture: On-Chip Communication in the Age o
 f Heterogeneity
URL;VALUE=URI:https://events.vtools.ieee.org/m/313190
X-ALT-DESC:Description: &lt;br /&gt;&lt;p&gt;Neural networks\, graph analytics\, and ot
 her big-data applications have become vitally important in many domains.
  This has led to a search for computing systems that can efficiently uti
 lize the tremendous data parallelism associated with these applications.
  Generally\, we depend on data centers and high-perform
 ance computing (HPC) clusters to run various big-data applications. Howeve
 r\, the design of data centers is dominated by power\, thermal\, and physi
 cal constraints. In contrast\, emerging heterogeneous manycore process
 ing platforms that consist of CPU and GPU cores along with memory controll
 ers (MCs) and accelerators have small footprints. Moreover\, they offer po
 wer and area-efficient tradeoffs for running big-data applications. Conseq
 uently\, heterogeneous manycore computing platforms represent a powerful a
 lternative to the data center-oriented type of computing. However\, typica
 l Network-On-Chip (NoC) infrastructures employed on conventional manycore 
 platforms are highly sub-optimal for handling the specific needs of CPUs
 \, GPUs\, and accelerators. To address this challenge\, we need a holist
 ic approach to designing an optimal NoC as the interconnection backbone
  for heterogeneous manycore platforms that can handle CPU\, GPU\, and ap
 plication-specific accelerator communication requirements efficiently. W
 e will discuss the design of a hybrid NoC architecture suitable for hete
 rogeneous manycore platforms. We will also highlight the effectiveness
  of machine learning-inspired multi-objective optimization (MOO) algorithm
 s to quickly find a NoC that satisfies both CPU and GPU communication requ
 irements. Widely used MOO techniques (e.g.\, NSGA-II or simulated annealin
 g based AMOSA) can require significant amounts of time due to their explor
 atory nature. Therefore\, more efficient and scalable ML-based optimizat
 ion techniques are required. We will discuss various features of a
  generalized application-agnostic heterogeneous NoC design that achieves s
 imilar levels of performance (latency\, throughput\, energy\, and temperat
 ure) as application-specific designs.&amp;nbsp\;&lt;/p&gt;
END:VEVENT
END:VCALENDAR