BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
DTSTART:20250309T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RRULE:FREQ=YEARLY;BYDAY=2SU;BYMONTH=3
TZNAME:EDT
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20251102T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=11
TZNAME:EST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20250925T121425Z
UID:5797C9E4-49EE-41AE-8CA4-65FDFF524C2B
DTSTART;TZID=America/New_York:20250923T190000
DTEND;TZID=America/New_York:20250923T200000
DESCRIPTION:Large language models (LLMs) have shown remarkable potential in
  various domains\, particularly in code optimization and generation for ma
 chine programming. However\, their performance in low-resource scenarios\,
  such as parallel code generation\, remains limited. In this talk\, I will
  discuss our recent advancements in enhancing LLMs' capabilities for these
  challenges through agent-based approaches\, fine-tuning techniques\, and 
 strategic data curation. Additionally\, I will introduce our work on repre
 sentation learning for code data\, focusing on tasks like compiler optimiz
 ation. I will also demonstrate the use of machine learning methods for eff
 icient workload partitioning in system optimization\, which can also facil
 itate scalable distributed training and inference of large-scale machine l
 earning models. Finally\, I will share insights into future research direc
 tions in these areas.\n\nSpeaker(s): Guixiang Ma\n\nVirtual: https://events.
 vtools.ieee.org/m/473029
LOCATION:Virtual: https://events.vtools.ieee.org/m/473029
ORGANIZER:mailto:ieee.lvs.wie@gmail.com
SEQUENCE:10
SUMMARY:Women in AI Series 2025 - Empowering LLMs for Scalable and Efficien
 t Machine Programming: Guixiang Ma
URL;VALUE=URI:https://events.vtools.ieee.org/m/473029
X-ALT-DESC;FMTTYPE=text/html:Description: <br /><p>Large language models (LLMs) have shown re
 markable potential in various domains\, particularly in code optimization 
 and generation for machine programming. However\, their performance in low
 -resource scenarios\, such as parallel code generation\, remains limited. 
  In this talk\, I will discuss our recent advancements in enhancing LLMs' c
 apabilities for these challenges through agent-based approaches\, fine-tun
 ing techniques\, and strategic data curation. Additionally\, I will introd
 uce our work on representation learning for code data\, focusing on tasks 
 like compiler optimization. I will also demonstrate the use of machine lea
 rning methods for efficient workload partitioning in system optimization\,
  which can also facilitate scalable distributed training and inference of 
 large-scale machine learning models. Finally\, I will share insights into 
 future research directions in these areas.</p>
END:VEVENT
END:VCALENDAR

