Women in AI Series 2025 - Empowering LLMs for Scalable and Efficient Machine Programming: Guixiang Ma
Large language models (LLMs) have shown remarkable potential in various domains, particularly in code optimization and generation for machine programming. However, their performance in low-resource scenarios, such as parallel code generation, remains limited. In this talk, I will discuss our recent advancements in enhancing LLMs' capabilities for these challenges through agent-based approaches, fine-tuning techniques, and strategic data curation. Additionally, I will introduce our work on representation learning for code data, focusing on tasks like compiler optimization. I will also demonstrate the use of machine learning methods for efficient workload partitioning in system optimization, which can also facilitate scalable distributed training and inference of large-scale machine learning models. Finally, I will share insights into future research directions in these areas.
Date and Time
- Date: 23 Sep 2025
- Time: 11:00 PM to 12:00 AM UTC
Speakers
Guixiang Ma of Intel
Biography:
Guixiang Ma is a Senior AI Research Scientist at Intel. She has six years of industrial research experience in artificial intelligence and machine learning and has led multiple initiatives on machine learning and systems research at Intel Labs. Her research focuses on the foundations of large-scale machine learning algorithms, with emphasis on the modeling and analysis of relational graph data, text data, dynamic/spatio-temporal data, multimodal data, and system performance optimization. She has published over 40 peer-reviewed research papers in leading AI conferences and journals such as NeurIPS, MLSys, KDD, AAAI, ICML, Nature Communications, and others. She also holds 2 patents.