Interconnect Meets Architecture: On-Chip Communication in the Age of Heterogeneity


Neural networks, graph analytics, and other big-data applications have become vitally important in many domains. This has led to a search for computing systems that can efficiently exploit the tremendous data parallelism associated with these applications. We generally depend on data centers and high-performance computing (HPC) clusters to run big-data applications, but data-center design is dominated by power, thermal, and physical constraints. In contrast, emerging heterogeneous manycore processing platforms that integrate CPU and GPU cores along with memory controllers (MCs) and accelerators have small footprints and offer power- and area-efficient trade-offs for running big-data applications. Consequently, heterogeneous manycore computing platforms represent a powerful alternative to data center-oriented computing. However, the typical network-on-chip (NoC) infrastructures employed in conventional manycore platforms are highly sub-optimal for handling the specific needs of CPUs, GPUs, and accelerators. To address this challenge, we need a holistic approach to designing an optimal NoC, as the interconnection backbone of a heterogeneous manycore platform, that efficiently handles CPU, GPU, and application-specific accelerator communication requirements. We will discuss the design of a hybrid NoC architecture suitable for heterogeneous manycore platforms. We will also highlight the effectiveness of machine learning-inspired multi-objective optimization (MOO) algorithms in quickly finding a NoC that satisfies both CPU and GPU communication requirements. Widely used MOO techniques (e.g., NSGA-II or the simulated annealing-based AMOSA) can require significant amounts of time due to their exploratory nature; therefore, more efficient and scalable ML-based optimization techniques are required. We will discuss various features of a generalized, application-agnostic heterogeneous NoC design that achieves levels of performance (latency, throughput, energy, and temperature) similar to those of application-specific designs.
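To make the multi-objective flavor of this design problem concrete, the following is a minimal, illustrative Python sketch (not taken from the talk) of filtering candidate NoC configurations with competing latency and energy objectives down to a Pareto front. The mesh size, link budget, and cost models are hypothetical placeholders; the actual work described above uses ML-inspired MOO algorithms and detailed NoC evaluation rather than this toy model.

```python
import random

# Toy model: a NoC "design" is a set of extra long-range links added to a mesh.
# Each design is scored on two competing objectives (latency proxy, energy proxy),
# and only Pareto-optimal designs are kept. All parameters here are assumptions.

MESH_DIM = 8          # 8x8 tile grid (hypothetical)
NUM_EXTRA_LINKS = 6   # long-range links the optimizer may place (hypothetical)

def random_design():
    """Pick NUM_EXTRA_LINKS random long-range links between distinct tiles."""
    tiles = [(x, y) for x in range(MESH_DIM) for y in range(MESH_DIM)]
    return [tuple(random.sample(tiles, 2)) for _ in range(NUM_EXTRA_LINKS)]

def objectives(design):
    """Crude proxies: longer shortcuts reduce hop count (latency) but cost
    more wire energy. A real flow would call a cycle-accurate NoC simulator."""
    avg_span = sum(abs(a[0] - b[0]) + abs(a[1] - b[1]) for a, b in design) / len(design)
    latency = 1.0 / (1.0 + avg_span)   # lower is better
    energy = avg_span                  # lower is better
    return latency, energy

def dominates(p, q):
    """p dominates q if it is no worse in every objective and better in at least one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def pareto_front(candidates):
    scored = [(objectives(d), d) for d in candidates]
    return [d for s, d in scored
            if not any(dominates(t, s) for t, _ in scored if t != s)]

if __name__ == "__main__":
    front = pareto_front([random_design() for _ in range(200)])
    print(f"{len(front)} Pareto-optimal candidate designs found")
```

In practice, exhaustive or purely exploratory searches over such spaces are what make NSGA-II- or AMOSA-style methods slow, which motivates the ML-guided optimization discussed in the talk.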



  Date and Time

  • Date: 06 Jun 2022
  • Time: 04:00 PM to 05:30 PM
  • All times are (GMT-08:00) Canada/Pacific

  Location

  • University of British Columbia
  • 2332 Main Mall
  • Vancouver, British Columbia
  • Canada V6T 1Z4
  • Building: Fred Kaiser Building
  • Room Number: KAIS 2020

  Hosts

  • Andre Ivanov <ivanov@ece.ubc.ca>

  Registration

  • Starts 29 April 2022 04:00 PM
  • Ends 06 June 2022 05:30 PM
  • All times are (GMT-08:00) Canada/Pacific
  • No Admission Charge


  Speakers

Partha Pratim Pande of Washington State University

Topic:

Interconnect Meets Architecture: On-Chip Communication in the Age of Heterogeneity


Biography:

Partha Pratim Pande is a professor and holder of the Boeing Centennial Chair in Computer Engineering at the School of Electrical Engineering and Computer Science, Washington State University, Pullman, USA. He is currently the director of the school. His current research interests include novel interconnect architectures for manycore chips, on-chip wireless communication networks, heterogeneous architectures, and ML for EDA. Dr. Pande currently serves as the Editor-in-Chief (EiC) of IEEE Design & Test (D&T). He is on the editorial boards of IEEE Transactions on VLSI (TVLSI), the ACM Journal on Emerging Technologies in Computing Systems (JETC), and IEEE Embedded Systems Letters. He served as technical program committee chair of the IEEE/ACM Network-on-Chip Symposium (2015) and CASES (2019-2020), and he serves on the program committees of many reputed international conferences. He won the NSF CAREER Award in 2009 and received the Anjan Bose Outstanding Researcher Award from the College of Engineering, Washington State University, in 2013. He is a Fellow of the IEEE.

Email:

Address: Electrical Engineering and Computer Science, Washington State University, Pullman, United States