BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:Europe/Stockholm
BEGIN:DAYLIGHT
DTSTART:20240331T020000
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=3
TZNAME:CEST
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20241027T030000
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10
TZNAME:CET
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20240614T091223Z
UID:313355A2-2EE2-4864-9117-19B612CC650D
DTSTART;TZID=Europe/Stockholm:20240610T150000
DTEND;TZID=Europe/Stockholm:20240610T160000
DESCRIPTION:Title: Speeding up distributed learning: towards low-complexity
  communication-efficient algorithms with superlinear convergence\n\nSpeake
 r(s): Subhrakanti Dey\n\nAbstract:\n\nNext generation of networked cyber-
 physical
  systems will support a number of application domains e.g. connected auton
 omous vehicular networks\, collaborative robotics in smart factories\, and
  many other mission-critical applications. With the advent of massive mach
 ine-to-machine communication and IoT networks\, huge volumes of data can b
 e collected and processed with low latency through edge computing faciliti
 es. Distributed machine learning enables cross-device collaborative learni
 ng without exchanging raw data\, ensuring privacy and reducing communicati
 on cost. Learning over wireless networks poses significant challenges due
  to limited communication bandwidth and channel variability\, limited comp
 utational resources at the IoT devices\, the heterogeneous nature of distr
 ibuted data\, and also randomly time-varying network topologies. In this t
 alk\, we will present (i) low-complexity communication-efficient Federated
  Learning (FL) algorithms based on approximate Newton-type optimization te
 chniques employed at the local agents\, which achieve superlinear converge
 nce rate as opposed to linear rates achieved by state-of-the-art gradient-
 descent-based algorithms\, and (ii) fully distributed network Newton-type 
 algorithms based on a distributed version of the well-known GIANT algorith
 m. While consensus-based distributed optimization algorithms are naturally
  limited to linear convergence rates\, we will show that one can design fi
 nite-time consensus-based distributed network-Newton-type algorithms that 
 can achieve superlinear convergence\, albeit at the cost of increased numb
 ers of consensus rounds. We will conclude with some new results on zeroth-
 order techniques that can also achieve superlinear convergen
 ce rates in Federated Learning.\n\nVirtual: https://events.vtools.ieee.org
 /m/420689
LOCATION:Virtual: https://events.vtools.ieee.org/m/420689
ORGANIZER:mailto:amlkel@utu.fi
CONTACT:elisa.barney@ltu.se
SEQUENCE:14
SUMMARY:IEEE Sweden Signal Processing Society Chapter\, SPS Day Event - Sem
 inar by Professor Subhrakanti Dey
URL;VALUE=URI:https://events.vtools.ieee.org/m/420689
END:VEVENT
END:VCALENDAR