BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:America/Chicago
BEGIN:DAYLIGHT
DTSTART:20250309T020000
TZOFFSETFROM:-0600
TZOFFSETTO:-0500
RRULE:FREQ=YEARLY;BYDAY=2SU;BYMONTH=3
TZNAME:CDT
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20251102T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0600
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=11
TZNAME:CST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20250312T200934Z
UID:CFD2E7A3-6BFC-44BF-BA16-7BA827280528
DTSTART;TZID=America/Chicago:20250312T120000
DTEND;TZID=America/Chicago:20250312T130000
DESCRIPTION:Garrett Hall\, a Research Engineer at the Southwest Research In
 stitute\, will deliver an introductory presentation on Large Language Mode
 ls. This talk is the first in a two-part series and covers several fundame
 ntal concepts\, including tokenization\, vector embeddings\, and positiona
 l encoding.\n\nTokenization is the process of converting words or phrases 
 into numerical values that machine learning models can understand. By brea
 king down text into smaller units called tokens\, the model can more effec
 tively process and analyze the data.\n\nVector embeddings are a crucial ne
 xt step\, where these tokens are transformed into dense vector representat
 ions. These vectors capture semantic meaning\, enabling the model to under
 stand relationships between words based on their contextual usage. Embeddi
 ngs essentially map tokens into high-dimensional space where similar words
  are located closer together.\n\nPositional encoding provides additional i
 nformation about the order of the words in a sentence\, establishing a fou
 ndation for sentence structure. It embeds positional information within th
 e tokenized data so that the model can recognize the sequence and context 
 of words\, which is essential for understanding the meaning of the text as
  a whole.\n\nFinally\, the presentation will illustrate Retrieval-Augmente
 d Generation (RAG) processes. RAG combines retrieval-based and generative 
 models to enhance the generation of relevant and accurate text by incorpor
 ating external information sources. This section will demonstrate how the 
 preceding concepts of tokenization\, embeddings\, and positional encoding 
 come together in RAG to create more coherent and contextually appropriate 
 text.\n\nCookies and refreshments will be served.\n\nTalk is restricted to
  US citizens.\n\nRegistration required by COB Monday 3/10 for admittance t
 o SwRI grounds on day of event.\n\nSpeaker(s): Garrett Hall\n\nBldg: Buil
 ding 51\, 6220 Culebra Rd\, San Antonio\, Texas\, United States\, 78238
LOCATION:Bldg: Building 51\, 6220 Culebra Rd\, San Antonio\, Texas\, United
  States\, 78238
ORGANIZER;CN=Garrett Hall:mailto:garrett.hall@swri.org
SEQUENCE:35
SUMMARY:IEEE AESS - Part I: LLM Basics
URL;VALUE=URI:https://events.vtools.ieee.org/m/472758
X-ALT-DESC;FMTTYPE=text/html:<p>Garrett Hall\, a Research Engineer at the
  Southwest Research Institute\, will deliver an introductory presentatio
 n on Large Language Models. This talk is the first in a two-part series
  and covers several fundamental concepts\, including tokenization\, vect
 or embeddings\, and positional encoding.</p>\n<p>Tokenization is the pro
 cess of converting words or phrases into numerical values that machine l
 earning models can understand. By breaking down text into smaller units
  called tokens\, the model can more effectively process and analyze the
  data.</p>\n<p>Vector embeddings are a crucial next step\, where these t
 okens are transformed into dense vector representations. These vectors c
 apture semantic meaning\, enabling the model to understand relationships
  between words based on their contextual usage. Embeddings essentially m
 ap tokens into high-dimensional space where similar words are located cl
 oser together.</p>\n<p>Positional encoding provides additional informati
 on about the order of the words in a sentence\, establishing a foundatio
 n for sentence structure. It embeds positional information within the to
 kenized data so that the model can recognize the sequence and context of
  words\, which is essential for understanding the meaning of the text as
  a whole.</p>\n<p>Finally\, the presentation will illustrate Retrieval-A
 ugmented Generation (RAG) processes. RAG combines retrieval-based and ge
 nerative models to enhance the generation of relevant and accurate text
  by incorporating external information sources. This section will demons
 trate how the preceding concepts of tokenization\, embeddings\, and posi
 tional encoding come together in RAG to create more coherent and context
 ually appropriate text.</p>\n<p>Cookies and refreshments will be served.
 </p>\n<p>Talk is restricted to US citizens.</p>\n<p>Registration require
 d by COB Monday 3/10 for admittance to SwRI grounds on day of event.</p>
END:VEVENT
END:VCALENDAR