Morality and Social Emergence in Agentic AI Systems

#AI #agents #multi-agent-systems


As AI agents become more autonomous and capable of interacting in complex environments, they begin to exhibit emergent social behaviors such as cooperation, hierarchy formation, and negotiation. However, these interactions also raise profound questions about morality: Can agentic AI systems develop ethical norms? How do emergent behaviors align, or conflict, with human moral values? This talk explores the intersection of morality and social emergence in AI, examining how multi-agent systems, from reinforcement learning environments to LLM-based agents, navigate trust, fairness, and collective decision-making. We will discuss theoretical models of moral emergence, real-world AI applications, and the implications for AI governance and alignment. Understanding these dynamics is essential to ensuring that AI systems evolve in ways that are not only intelligent but also aligned with ethical principles.
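To make the setting concrete, here is a minimal, illustrative sketch of the kind of multi-agent environment the abstract alludes to: two tabular Q-learning agents repeatedly playing a prisoner's dilemma, each conditioning only on its opponent's previous move. The agent class, payoff values, and hyperparameters below are assumptions chosen for illustration, not the speaker's models; depending on the seed and parameters, the pair may drift toward mutual defection or sustain cooperation, which is the sort of emergent social outcome the talk examines.

import random
from collections import defaultdict

# Prisoner's dilemma payoffs: (my_action, opponent_action) -> my reward.
# 'C' = cooperate, 'D' = defect. Values are illustrative assumptions.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}
ACTIONS = ['C', 'D']

class QAgent:
    """Tabular Q-learner whose state is the opponent's previous action."""
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)          # (state, action) -> estimated value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        if random.random() < self.epsilon:   # occasional exploration
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def learn(self, state, action, reward, next_state):
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])

def run(rounds=20000, seed=0):
    random.seed(seed)
    a, b = QAgent(), QAgent()
    state_a = state_b = 'C'                  # each agent observes the other's last move
    coop = 0
    for _ in range(rounds):
        act_a, act_b = a.act(state_a), b.act(state_b)
        a.learn(state_a, act_a, PAYOFF[(act_a, act_b)], act_b)
        b.learn(state_b, act_b, PAYOFF[(act_b, act_a)], act_a)
        state_a, state_b = act_b, act_a
        coop += (act_a == 'C') + (act_b == 'C')
    print(f"cooperation rate: {coop / (2 * rounds):.2f}")

if __name__ == "__main__":
    run()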

Local Date / Time:
28 April 2025 at 06:00 PM (GMT-05:00) America/New_York

  Date and Time
  • Date: 28 Apr 2025
  • Time: 10:00 PM UTC to 12:00 AM UTC

  Location
  • Fordham University, Lincoln Center Campus
  • 150 W 62nd St
  • New York, NY 10023
  • United States
  • Building: Fordham Law School
  • Room Number: 3-02

  Hosts
  • Dr. D. Frank Hsu
    hsu@fordham.edu







Agenda

Speaker: Djallel Bouneffouf, Ph.D., IEEE Fellow, Senior Research Scientist, IBM

Biography:
Dr. Djallel Bouneffouf has dedicated many years to the field of online machine learning and data mining, with a primary research focus on developing autonomous systems that can learn, adapt, and make decisions in uncertain environments. His work spans both the public and private sectors, contributing to advancements in artificial intelligence, reinforcement learning, and trustworthy AI.
He spent five years at Nomalys, a mobile app development company in Paris, France, where he developed a risk-aware recommender system leveraging data mining techniques to enhance user experience. He then spent a year at Orange Labs in Lannion, France, where he worked on active learning for dialogue systems, optimizing data-driven decision-making in human-computer interactions.
During his two years at the BC Cancer Agency in Vancouver, Canada, he played a crucial role in assisting biologists with analyzing vast amounts of unstructured data, applying data mining and clustering algorithms to extract meaningful insights from complex biomedical datasets.
Over the past decade at IBM in the USA and Ireland, Dr. Bouneffouf has contributed to a wide range of projects involving reinforcement learning, brain modeling, and the development of AI systems with enhanced trustworthiness and interpretability. His work in data mining has been instrumental in improving decision-making in large-scale AI systems by efficiently extracting patterns and knowledge from diverse datasets.
According to Google Scholar, Dr. Bouneffouf is the 10th most cited scientist in his field, with over 100 publications in top-tier conferences, more than 3,000 citations, and service as a program committee member for over 20 conferences. His contributions continue to shape the landscape of AI, machine learning, and data mining.