BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:Asia/Kolkata
BEGIN:STANDARD
DTSTART:19451014T230000
TZOFFSETFROM:+0630
TZOFFSETTO:+0530
TZNAME:IST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20241015T051014Z
UID:0FF9C4BB-27BC-4CE9-B766-DFC751E18C91
DTSTART;TZID=Asia/Kolkata:20241006T170000
DTEND;TZID=Asia/Kolkata:20241006T180000
DESCRIPTION:Training Large Language Models (LLMs) requires very large
 -scale infrastructure\, such as dedicated clouds or even a group of cl
 ouds. In addition\, training such models in a reasonable time requires
  dedicated hardware. In my talk\, I will explain why specialized hardw
 are for training is needed\, present some of the currently proposed so
 lutions\, and discuss the challenges that future systems still need to
  address.\n\nSpeaker(s): Avi Mendelson\n\nVirtual: https://events.vtool
 s.ieee.org/m/436195
LOCATION:Virtual: https://events.vtools.ieee.org/m/436195
ORGANIZER:mailto:vinitkumargunjan@ieee.org
SEQUENCE:16
SUMMARY:Computer Architectures for Training LLM Systems – Past\, Present
 \, and Challenges of Future Systems
URL;VALUE=URI:https://events.vtools.ieee.org/m/436195
X-ALT-DESC;FMTTYPE=text/html:<p class="MsoNormal" style="text-align: ju
 stify\;"><span lang="IT">Training Large Language Models (LLMs) require
 s very large-scale infrastructure\, such as dedicated clouds or even a
  group of clouds. In addition\, training such models in a reasonable t
 ime requires dedicated hardware.&nbsp\;</span><span lang="IT">In my ta
 lk\, I will explain why specialized hardware for training is needed\,
  present some of the currently proposed solutions\, and discuss the ch
 allenges that future systems still need to address.</span></p>
END:VEVENT
END:VCALENDAR