IEEE WIE Virtual Seminar

#WIE #human-in-the-loop #computer-science #computer-vision #data-science #bioinformatics

Title: Multimodal Knowledge Generation through Team Data Science

Abstract:

In this presentation, I will discuss my experience working with diverse teams to advance human-in-the-loop computing capabilities for multimodal data. I will review use cases and challenges in working with large repositories of text, genomic, sensor, and imaging data in secure environments.

Bio:
Dr. Ioana Danciu is a research scientist and group leader at Oak Ridge National Laboratory (ORNL). Her research focuses on adapting scalable computational methods to multimodal data (imaging, text, sensor, and omics) for domain science use cases. Her interest areas are domain-specific machine learning, interpretable AI, large-scale computation, learning with limited labeled data, human-computer interaction, and healthcare privacy and security. Before joining ORNL in 2018, she worked for over a decade as an engineer and researcher at Vanderbilt University and Vanderbilt University Medical Center in Nashville, TN.


Title: Reliable AI for Safety-Critical Applications

Abstract: 

The increasing adoption of large language models (LLMs) and other transformer-based systems in safety-critical domains such as national security and healthcare raises fundamental challenges related to reliability, robustness, and trustworthiness. This talk will present recent work on developing reliable AI methodologies for safety-critical applications, focusing on the use of LLMs for information extraction, question answering, and classification, where hallucinations and miscalibrated confidence pose significant risks. The presentation will discuss mitigation strategies for improving factual consistency, confidence calibration, and error detection, as well as retrieval-augmented generation (RAG) architectures for grounding model outputs in external evidence and the associated challenges in evaluating RAG systems. Beyond nominal settings, the talk will explore adversarial robustness issues, including data poisoning, adversarial patch generation using LLMs, and jailbreak attacks that expose critical failure modes in real-world deployment. Finally, the presentation will briefly address interpretable predictive modeling approaches that combine transformer architectures with attribution methods to support transparency and accountability, positioning reliability as a multi-faceted technical challenge for deploying AI in safety-critical environments.

Bio: 

Dr. Maria Mahbub is a Research Associate in the Cyber Resilience and Intelligence Division of the National Security Sciences Directorate at ORNL. She earned her PhD in Computer Science from the University of Tennessee, Knoxville, in August 2023. At ORNL, she leads research in multimodal AI, large language models, and natural language processing for high-stakes applications. Her work focuses on developing modern AI systems and strengthening their reliability in deployment, combining expertise in information extraction, interpretable predictive modeling, and retrieval-augmented generation. She works extensively with multimodal data, including structured records, unstructured text, and genetic information, across national security and bioinformatics applications. Her research also spans adversarial robustness and computer vision, with a focus on model behavior in real-world settings. She has received the People's Choice Award as well as Second Place in the FY25 Your Science in a Nutshell Competition, and she is an awardee of the FY26 LDRD Early Career Competition. She aims to advance AI systems that are trustworthy, robust, and empirically grounded.


