BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:US/Pacific
BEGIN:DAYLIGHT
DTSTART:20230312T030000
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
RRULE:FREQ=YEARLY;BYDAY=2SU;BYMONTH=3
TZNAME:PDT
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20231105T010000
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=11
TZNAME:PST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20231025T054622Z
UID:F63B1097-D0F7-46F9-ADA7-1B0461B48DCF
DTSTART;TZID=US/Pacific:20231024T170000
DTEND;TZID=US/Pacific:20231024T180000
DESCRIPTION:Leveraging Large Language Models (LLMs) has marked a significan
 t milestone in recent months\, notably with the introduction of ChatGPT in
  late 2022. These models have demonstrated remarkable potential in addres
 sing straightforward queries and tasks. However\, to fully exploit their c
 apabilities in handling routine inquiries\, adept prompt engineering is es
 sential. Furthermore\, the adaptability of LLMs to novel tasks and domains
  is pivotal. It is crucial to recognize that each company or research fiel
 d possesses unique requirements\, necessitating tailored adaptations of LL
 Ms. The specificity of these needs often hinges on domain-specific data\, 
 demanding meticulous consideration. How can these models be tailored to cl
 assify your data effectively? What strategies can be employed when dealing
  with a limited dataset? Complex scenarios\, such as querying vast reposit
 ories of textual files stored in directories\, underscore the challenges. 
 These files encompass diverse modalities\, formats\, and structures\, rang
 ing from structured to entirely unstructured content.\n\nIn this research 
 presentation\, we will delve into an exploration of LLM capabilities and p
 inpoint the areas where they encounter limitations. Subsequently\, we will
  elucidate various techniques for fine-tuning these models\, especially in
  scenarios where data availability is constrained. By addressing these cha
 llenges\, we aim to provide valuable insights into harnessing the full pot
 ential of LLMs\, ensuring their optimal performance in diverse and data-in
 tensive applications.\n\nSpeaker(s): Fatemeh Hendijani Fard\, PhD\n\nRoom:
  EME 112\, Bldg: EME\, UBC Okanagan\, Kelowna\, British Columbia
 \, Canada\, V1V 1V7\, Virtual: https://events.vtools.ieee.org/m/378215
LOCATION:Room: EME 112\, Bldg: EME\, UBC Okanagan\, Kelowna\, British Col
 umbia\, Canada\, V1V 1V7\, Virtual: https://events.vtools.ieee.org
 /m/378215
ORGANIZER:mailto:youry@ieee.org
SEQUENCE:6
SUMMARY:Unleashing Potential: Harnessing Foundation Models (Large Language 
 Models) in Business and Research
URL;VALUE=URI:https://events.vtools.ieee.org/m/378215
X-ALT-DESC;FMTTYPE=text/html:Description: <br /><p>Leveraging Large Lang
 uage Models (LLMs) has marked a significant milestone in recent months\,
  notably with the introduction of ChatGPT in late 2022. These models ha
 ve demonstrated remarkable potential in addressing straightforward quer
 ies and tasks. However\, to fully exploit their capabilities in handlin
 g routine inquiries\, adept prompt engineering is essential. Furthermor
 e\, the adaptability of LLMs to novel tasks and domains is pivotal. It
  is crucial to recognize that each company or research field possesses
  unique requirements\, necessitating tailored adaptations of LLMs. The
  specificity of these needs often hinges on domain-specific data\, dem
 anding meticulous consideration. How can these models be tailored to c
 lassify your data effectively? What strategies can be employed when de
 aling with a limited dataset? Complex scenarios\, such as querying vas
 t repositories of textual files stored in directories\, underscore the
  challenges. These files encompass diverse modalities\, formats\, and
  structures\, ranging from structured to entirely unstructured content
 .</p>\n<p>In this research presentation\, we will delve into an explor
 ation of LLM capabilities and pinpoint the areas where they encounter
  limitations. Subsequently\, we will elucidate various techniques for
  fine-tuning these models\, especially in scenarios where data availab
 ility is constrained. By addressing these challenges\, we aim to provi
 de valuable insights into harnessing the full potential of LLMs\, ensu
 ring their optimal performance in diverse and data-intensive applicati
 ons.</p>
END:VEVENT
END:VCALENDAR