BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RRULE:FREQ=YEARLY;BYDAY=2SU;BYMONTH=3
TZNAME:EDT
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=11
TZNAME:EST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20240314T200107Z
UID:7A1C59EE-0DEA-4B39-BF39-2772C1D8DFD3
DTSTART;TZID=America/New_York:20240307T180000
DTEND;TZID=America/New_York:20240307T190000
DESCRIPTION:Special Presentation on “LMMs as Universal Foundation Models 
 for AI-Native Wireless Systems”\n\nby Dr. Christo K. Thomas (Virginia Te
 ch\, USA)\n\nHosted by Future Networks Artificial Intelligence & Machine L
 earning (AIML) Working Group\n\nDate/Time: Thursday\, March 7th\, 2024 @ 6
  PM EST (3 PM PST)\n\nTopic:\n\nLarge Multi-Modal Models (LMMs) as Univers
 al Foundation Models for AI-Native Wireless Systems\n\nAbstract:\n\nFounda
 tion models such as large language models (LLMs) have recently been touted
  as game-changers for 6G systems. However\, previous efforts on LLMs for w
 ireless networks are limited to directly applying existing language models
  designed for natural language processing (NLP) applications. Contrary to 
 this\, in this talk\, we present a comprehensive vision of how to design u
 niversal foundation models that are tailored towards the unique needs of n
 ext-generation wireless systems\, thereby paving the way towards the deplo
 yment of artificial intelligence (AI)-native networks. These LMMs are driv
 en by three distinct characteristics: 1) integration of multi-modal sensin
 g data\, 2) grounding sensory input via causal reasoning and retrieval-aug
 mented generation (RAG)\, and 3) instructibility to environmental feedback
  through logical and mathematical reasoning enabled by neuro-symbolic AI. 
 These attributes are crucial for developing "universal foundation models" 
 capable of addressing interconnected cross-layer networking challenges in 
 AI-native wireless systems while ensuring alignment of objectives across d
 iverse domains. We also discuss preliminary results from experimental eval
 uation that demonstrate the efficacy of grounding using RAG in LMMs\, and 
 showcase the alignment of LMMs with wireless system designs. Furthermore\,
  compared to vanilla LLMs\, the enhanced rationale exhibited in the respon
 ses to mathematical questions by LMMs demonstrates the logical and mathema
 tical reasoning capabilities inherent in LMMs. Building on those results\,
  we present a set of open questions and challenges for LMMs\, including
  intent-based networks\, resilient wireless systems\, semantic communicati
 ons\, and many more.\n\nCo-sponsored by: IEEE Future Networks\n\nSpeaker(s
 ): Dr. Christo K. Thomas \n\nVirtual: https://events.vtools.ieee.org/m/407
 262
LOCATION:Virtual: https://events.vtools.ieee.org/m/407262
ORGANIZER:mailto:c.polk@comsoc.org
SEQUENCE:12
SUMMARY:LMMs for AI-Native Wireless Systems - AI/ML webinar
URL;VALUE=URI:https://events.vtools.ieee.org/m/407262
X-ALT-DESC;FMTTYPE=text/html:<p>Special Presentation on &ldquo\;<strong>LM
 Ms as Universal Foundation Models for AI-Native Wireless Systems</strong>&rd
 quo\;</p>\n<p>by<strong> Dr. Christo K. Thomas (Virginia Tech\, USA)</stron
 g></p>\n<p>Hosted by Future Networks<strong> Artificial Intelligence &amp\;
  Machine Learning (AIML) Working Group</strong></p>\n<p><strong>Date/Time</
 strong>: <strong>Thursday\, March 7<sup>th</sup>\, 2024 @ 6 PM EST (3 PM PS
 T)</strong></p>\n<p><strong><u>Topic</u></strong><strong>:</strong></p>\n<p
 ><strong>Large Multi-Modal Models (LMMs) as Universal Foundation Models for
  AI-Native Wireless Systems</strong></p>\n<p><strong><u>Abstract</u></stron
 g><strong>:</strong></p>\n<p>Foundation models such as large language model
 s (LLMs) have recently been touted as game-changers for 6G systems. However
 \, previous efforts on LLMs for wireless networks are limited to directly a
 pplying existing language models designed for natural language processing (
 NLP) applications. Contrary to this\, in this talk\, we present a comprehen
 sive vision of how to design universal foundation models that are tailored
  towards the unique needs of next-generation wireless systems\, thereby pav
 ing the way towards the deployment of artificial intelligence (AI)-native n
 etworks. These LMMs are driven by three distinct characteristics: 1) integr
 ation of multi-modal sensing data\, 2) grounding sensory input via causal r
 easoning and retrieval-augmented generation (RAG)\, and 3) instructibility
  to environmental feedback through logical and mathematical reasoning enabl
 ed by neuro-symbolic AI. These attributes are crucial for developing "unive
 rsal foundation models" capable of addressing interconnected cross-layer ne
 tworking challenges in AI-native wireless systems while ensuring alignment
  of objectives across diverse domains. We also discuss preliminary results
  from experimental evaluation that demonstrate the efficacy of grounding us
 ing RAG in LMMs\, and showcase the alignment of LMMs with wireless system d
 esigns. Furthermore\, compared to vanilla LLMs\, the enhanced rationale exh
 ibited in the responses to mathematical questions by LMMs demonstrates the
  logical and mathematical reasoning capabilities inherent in LMMs. Building
  on those results\, we present a set of open questions and challenges for L
 MMs\, including intent-based networks\, resilient wireless systems\, semant
 ic communications\, and many more.</p>
END:VEVENT
END:VCALENDAR