IEEE Sweden Signal Processing Society Chapter, SPS Day Event - Seminar by Professor Subhrakanti Dey

#processing #signal #sp #seminar

Title: Speeding up distributed learning: towards low-complexity communication-efficient algorithms with superlinear convergence



  Date and Time

  • Date: 10 Jun 2024
  • Time: 03:00 PM to 04:00 PM
  • All times are (UTC+02:00) Stockholm

  Location

  • Zoom: https://ltu-se.zoom.us/j/61281597741?pwd=ZE50WEgwaS9odEI5bjQ5Tmhpc1lGUT09

  Hosts

  • IEEE Sweden Signal Processing Society Chapter

  Registration

  • Starts 13 May 2024 12:00 PM
  • Ends 10 June 2024 12:00 AM
  • All times are (UTC+02:00) Stockholm
  • No Admission Charge


  Speakers

Subhrakanti Dey of Uppsala University, Professor of Signal Processing and Head of the Division of Signals and Systems

Topic:

Title: Speeding up distributed learning: towards low-complexity communication-efficient algorithms with superlinear convergence

Abstract:

The next generation of networked cyber-physical systems will support a range of application domains, e.g., connected autonomous vehicular networks, collaborative robotics in smart factories, and many other mission-critical applications. With the advent of massive machine-to-machine communication and IoT networks, huge volumes of data can be collected and processed with low latency through edge computing facilities. Distributed machine learning enables cross-device collaborative learning without exchanging raw data, ensuring privacy and reducing communication cost. Learning over wireless networks poses significant challenges due to limited communication bandwidth and channel variability, limited computational resources at the IoT devices, the heterogeneous nature of distributed data, and randomly time-varying network topologies. In this talk, we will present (i) low-complexity, communication-efficient Federated Learning (FL) algorithms based on approximate Newton-type optimization techniques employed at the local agents, which achieve a superlinear convergence rate as opposed to the linear rates achieved by state-of-the-art gradient-descent-based algorithms, and (ii) fully distributed network-Newton-type algorithms based on a distributed version of the well-known GIANT algorithm. While consensus-based distributed optimization algorithms are naturally limited to linear convergence rates, we will show that one can design finite-time-consensus-based distributed network-Newton-type algorithms that achieve superlinear convergence, albeit at the cost of an increased number of consensus rounds. We will conclude with some new directions and results on zeroth-order techniques that can also achieve superlinear convergence rates in Federated Learning.
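As a rough illustration of the kind of approximate Newton-type update referred to in (i), the sketch below shows one GIANT-style round for distributed ridge regression. This is not the speaker's algorithm: the synthetic data, the ridge-regression objective, the regularization value, and all function names are assumptions made purely for illustration.

```python
# Minimal sketch of one GIANT-style approximate Newton round (illustrative only).
# Each worker holds a local shard (X_i, y_i) of a ridge-regression problem.
import numpy as np

rng = np.random.default_rng(0)

def make_shards(num_workers=4, samples_per_worker=200, dim=10):
    """Generate synthetic local datasets (X_i, y_i) for each worker."""
    w_true = rng.normal(size=dim)
    shards = []
    for _ in range(num_workers):
        X = rng.normal(size=(samples_per_worker, dim))
        y = X @ w_true + 0.01 * rng.normal(size=samples_per_worker)
        shards.append((X, y))
    return shards

def local_gradient(X, y, w, lam):
    """Gradient of the local ridge objective at the current model w."""
    n = X.shape[0]
    return X.T @ (X @ w - y) / n + lam * w

def local_newton_direction(X, y, w, lam, grad):
    """Each worker solves its *local* Hessian system against the *global* gradient."""
    n, d = X.shape
    H = X.T @ X / n + lam * np.eye(d)
    return np.linalg.solve(H, grad)

def giant_style_round(shards, w, lam):
    # Round trip 1: workers send local gradients, server averages them.
    grad = np.mean([local_gradient(X, y, w, lam) for X, y in shards], axis=0)
    # Round trip 2: workers send local Newton directions, server averages them.
    p = np.mean([local_newton_direction(X, y, w, lam, grad) for X, y in shards], axis=0)
    return w - p, np.linalg.norm(grad)

if __name__ == "__main__":
    lam = 1e-3
    shards = make_shards()
    w = np.zeros(10)
    for t in range(5):
        w, gnorm = giant_style_round(shards, w, lam)
        print(f"round {t}: ||grad|| = {gnorm:.2e}")
```

The communication-efficiency point is that, per round, each worker exchanges only d-dimensional vectors (a local gradient, then a local Newton direction) rather than a d x d Hessian. In the fully distributed setting of (ii), the server-side averaging steps would instead be carried out by (finite-time) consensus rounds over the network, as discussed in the abstract.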

 

Biography:

Subhrakanti Dey
Professor of Signal Processing,
Head, Division of Signals and Systems
Dept. of Electrical Engineering, Uppsala University

Bio:

Subhrakanti Dey received the Ph.D. degree from the Department of Systems Engineering, Research School of Information Sciences and Engineering, Australian National University, Canberra, in 1996.

He is currently a Professor and Head of the Division of Signals and Systems in the Department of Electrical Engineering at Uppsala University, Sweden. He has also held professorial positions at the National University of Ireland (NUI) Maynooth, Ireland, and the University of Melbourne, Australia. His current research interests include networked control systems, distributed machine learning and optimization, and detection and estimation theory for wireless sensor networks. He is a Senior Editor for the IEEE Transactions on Control of Network Systems and IEEE Control Systems Letters, and an Associate Editor for Automatica. He is a Fellow of the IEEE.

 

Homepage:

http://www.signal.uu.se/Staff/sd/sd.html

 

Email:

Address: Uppsala University, Division of Signals and Systems, Uppsala, Sweden




