BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
DTSTART:20260308T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RRULE:FREQ=YEARLY;BYDAY=2SU;BYMONTH=3
TZNAME:EDT
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20261101T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=11
TZNAME:EST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20260414T190818Z
UID:5D5CC7B4-9FA1-4872-83EF-7B6BB2488A13
DTSTART;TZID=America/New_York:20260410T120000
DTEND;TZID=America/New_York:20260410T130000
DESCRIPTION:Large Language Models (LLMs) are powerful but suffer from t
 wo primary limitations: knowledge cutoff (they only know what they wer
 e trained on) and hallucinations (they confidently invent facts). Retri
 eval-Augmented Generation (RAG) addresses both by grounding generation i
 n external\, verifiable knowledge sources and is emerging as a core arc
 hitectural pattern for production-ready AI assistants because it overco
 mes the closed-world and staleness limitations of stand-alone LLMs. Ins
 tead of relying solely on pre-training\, a RAG system ingests heterogen
 eous documents\, indexes them in a vector database\, retrieves the most
  relevant snippets at query time\, and injects them into the prompt so t
 hat responses are accurate\, up-to-date\, and aligned with private or d
 omain-specific data. This talk presents a practical\, end-to-end bluepr
 int for RAG pipelines\, emphasizing that most failures stem from the re
 trieval layer rather than from the LLM itself.\n\nSpeaker(s): Dr. Deepa
 k Garg\n\nRoom: 205\, Bldg: Becton Hall\, 960 River Road\, TEANECK\, Ne
 w Jersey\, United States\, 07666\, Virtual: https://events.vtools.ieee.
 org/m/546172
LOCATION:Room: 205\, Bldg: Becton Hall\, 960 River Road\, TEANECK\, New Jer
 sey\, United States\, 07666\, Virtual: https://events.vtools.ieee.org/m/54
 6172
ORGANIZER:mailto:avatsa@fdu.edu
SEQUENCE:15
SUMMARY:Building Production-Grade Agentic Retrieval-Augmented Generation
  (RAG)
URL;VALUE=URI:https://events.vtools.ieee.org/m/546172
X-ALT-DESC;FMTTYPE=text/html:<p>Large Language Models (LLMs) are powerf
 ul but suffer from two primary limitations: knowledge cutoff (they only
  know what they were trained on) and hallucinations (they confidently i
 nvent facts). Retrieval-Augmented Generation (RAG) addresses both by gr
 ounding generation in external\, verifiable knowledge sources and is em
 erging as a core architectural pattern for production-ready AI assistan
 ts because it overcomes the closed-world and staleness limitations of s
 tand-alone LLMs. Instead of relying solely on pre-training\, a RAG syst
 em ingests heterogeneous documents\, indexes them in a vector database\,
  retrieves the most relevant snippets at query time\, and injects them i
 nto the prompt so that responses are accurate\, up-to-date\, and aligne
 d with private or domain-specific data. This talk presents a practical\,
  end-to-end blueprint for RAG pipelines\, emphasizing that most failure
 s stem from the retrieval layer rather than from the LLM itself.</p>
END:VEVENT
END:VCALENDAR