IEEE CIS & CS Schenectady Chapters Technical Lecture on "Towards Data-Efficient, Trustworthy & Generalizable Neural Symbolic AI Models"

#TechEvent #AIevent #TroyNY #SchenectadyNY #CapitalRegionTech #UpstateNY #RPI #RensselaerPolytechnic #IEEE #IEEELecture #ComputerSociety #ComputationalIntelligence #ArtificialIntelligence #MachineLearning #DeepLearning #TechTalk #STEM #Engineering #ComputerScience #DataScience #TrustworthyAI #ExplainableAI #XAI #DataEfficientAI #GeneralizableAI #NeuralSymbolicAI #AI #NeuralNetworks #BayesianDeepLearning #CausalAI #CausalInference #SymbolicAI #AIModels #UncertaintyQuantification

"Towards Data-Efficient, Trustworthy & Generalizable Neural Symbolic AI Models"


AI has achieved remarkable progress and is increasingly integrated into a wide range of fields, fueling what many call the fourth industrial revolution. However, behind the widespread enthusiasm lie fundamental limitations. Today’s AI systems face three major challenges: (1) an insatiable need for large-scale data, (2) limited trustworthiness due to inadequate uncertainty quantification, and (3) poor generalization across domains. These challenges cannot be overcome by simply scaling compute and data; instead, they require foundational advances in theory and methodology.

In this talk, the speaker will present recent research from his lab that addresses these challenges for various computer vision tasks. To improve data efficiency and generalization, he will introduce work on knowledge-augmented deep learning, where prior knowledge from diverse sources is systematically identified, encoded, and integrated with data-driven neural networks. This results in hybrid neural-symbolic models that are both data-efficient and generalizable. To enhance model trustworthiness and explainability, he will discuss advances in Bayesian deep learning. First, he will describe recent efforts to improve the accuracy and efficiency of uncertainty quantification in deep models. Next, he will present his work on uncertainty attribution, which identifies the sources of uncertainty and uses this information for uncertainty mitigation to improve model performance. Finally, he will highlight work in causal deep learning aimed at addressing domain generalization. In particular, he will introduce a neural causal model that learns domain-invariant representations by eliminating spurious correlations resulting from data biases.
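To give a flavor of the uncertainty quantification and attribution themes mentioned above, the following sketch shows a standard decomposition used in Bayesian deep learning: total predictive uncertainty splits into aleatoric (data) and epistemic (model) components, computed from an ensemble or Monte Carlo dropout samples. This is a generic illustration, not the speaker's specific method; the toy ensembles are invented for demonstration.

```python
import numpy as np

def entropy(p, axis=-1):
    # Shannon entropy in nats; small epsilon guards against log(0)
    return -np.sum(p * np.log(p + 1e-12), axis=axis)

def decompose_uncertainty(probs):
    """probs: (n_samples, n_classes) softmax outputs for ONE input,
    drawn from an ensemble or multiple MC-dropout forward passes."""
    mean_p = probs.mean(axis=0)
    total = entropy(mean_p)            # total predictive uncertainty
    aleatoric = entropy(probs).mean()  # expected data uncertainty
    epistemic = total - aleatoric      # mutual information: model uncertainty
    return total, aleatoric, epistemic

# Toy example: ensemble members that agree vs. disagree on a 2-class input
agree = np.array([[0.90, 0.10], [0.88, 0.12], [0.92, 0.08]])
disagree = np.array([[0.90, 0.10], [0.10, 0.90], [0.50, 0.50]])

_, _, ep_agree = decompose_uncertainty(agree)
_, _, ep_disagree = decompose_uncertainty(disagree)
print(ep_agree < ep_disagree)  # True: disagreement signals model uncertainty
```

Attributing high epistemic uncertainty to specific inputs (or input regions) is the starting point for the mitigation strategies the abstract alludes to, e.g., targeted data collection or model refinement.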



  Date and Time

  Location

  Hosts

  Registration





  • 51 College Ave
  • Troy, New York
  • United States 12180
  • Building: DCC 337 (Rensselaer Polytechnic Institute)
  • Room Number: 337

  • Contact Event Hosts
  • Co-sponsored by the IEEE CIS and CS Schenectady Chapters
  • Starts 07 October 2025 04:00 AM UTC
  • Ends 29 October 2025 08:30 PM UTC
  • No Admission Charge


  Speakers

Qiang Ji of Rensselaer Polytechnic Institute

Topic:

Towards Data-Efficient, Trustworthy & Generalizable Neural Symbolic AI Models



Biography:

Dr. Qiang Ji received his Ph.D. degree in Electrical Engineering from the University of Washington. He is currently a Professor in the Department of Electrical, Computer, and Systems Engineering at Rensselaer Polytechnic Institute (RPI). From 2009 to 2010, he served as a program director at the National Science Foundation (NSF), where he managed NSF's computer vision and machine learning programs. He has also held teaching and research positions at the University of Illinois at Urbana-Champaign, Carnegie Mellon University, the University of Nevada, and the Air Force Research Laboratory.

Prof. Ji's research interests are in computer vision, probabilistic graphical models, probabilistic deep learning, and their applications in various fields. He has published over 300 papers in peer-reviewed journals and conferences and has received multiple awards for his work. Prof. Ji has served as an editor for several related IEEE and international journals and as a general chair, program chair, technical area chair, and program committee member for numerous international conferences and workshops. Prof. Ji is a fellow of the IEEE and the IAPR.
