Toward Trustworthy AI/ML in 6G Networks through Explainable Reasoning

#AI #ML #6G #XAI #explainableAI #reasoning #trustworthyAI #responsibleAI #transparency #communications #networking #futurenetworks #ieee

Special Presentation by Dr. Farhad Rezazadeh (CTTC, Spain)

Hosted by the Future Networks Artificial Intelligence & Machine Learning (AIML) Working Group

Date/Time: Thursday, November 21st, 2024 @ 12:00 UTC

Topic:

Toward Trustworthy AI/ML in 6G Networks through Explainable Reasoning

Abstract:

This talk emphasizes the importance of trustworthy Artificial Intelligence (AI) in 6G networks in response to growing global attention on AI governance. Notable initiatives such as the White House's Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI, DARPA's Assured Neuro-Symbolic Learning and Reasoning and eXplainable AI (XAI) programs, and the European Union's AI Act highlight the increasing regulatory focus on AI transparency and responsibility. As 6G networks transition from AI-native to automation-native, explainability and trustworthiness become critical, especially in mission-critical and high-stakes applications. Traditional post-hoc explainability methods, which aim to explain AI decisions after they are made, are no longer adequate in complex network environments. Instead, in-hoc explainability, or explanation-guided techniques in which explanations guide the learning process itself, is emerging as a crucial approach for establishing trust in AI systems from the ground up. Indeed, integrating explanatory mechanisms directly within AI learning models enables transparent decision-making and enhances learning. Furthermore, incorporating neuro-symbolic approaches, which combine neural networks with symbolic reasoning, provides a robust framework for tackling the growing complexity of 6G networks. By integrating these approaches, AI systems can make more explainable, contextually guided decisions, boosting trust and performance while mitigating the risks associated with black-box AI models.
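
To make the abstract's notion of in-hoc, explanation-guided learning concrete, below is a minimal sketch in the spirit of explanation-regularized training. It is not from the presentation; it assumes PyTorch, and the toy data, feature mask, and penalty weight are all illustrative. An input-gradient explanation is computed inside the training loop, and attribution falling on features declared irrelevant is penalized, so the explanation shapes the model during learning rather than being inspected after the fact.

# Minimal sketch of explanation-guided ("in-hoc") learning, assuming PyTorch.
# Toy data and all names are illustrative assumptions, not details from the talk.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy setup: 8 input features; by construction only the first 4 carry signal,
# while the last 4 are confounders the explanation should avoid.
X = torch.randn(256, 8)
y = (X[:, :4].sum(dim=1) > 0).float()
irrelevant = torch.tensor([0., 0., 0., 0., 1., 1., 1., 1.])  # mask of features to ignore

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()
X.requires_grad_(True)  # needed to compute input-gradient attributions

for epoch in range(200):
    logits = model(X).squeeze(-1)
    task_loss = bce(logits, y)

    # Input-gradient attribution: sensitivity of the output to each input.
    # create_graph=True keeps the graph so the penalty itself is differentiable.
    (attr,) = torch.autograd.grad(logits.sum(), X, create_graph=True)

    # In-hoc penalty: attribution mass falling on irrelevant features.
    explanation_loss = (attr * irrelevant).pow(2).mean()

    loss = task_loss + 10.0 * explanation_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"task loss: {task_loss.item():.3f}  explanation penalty: {explanation_loss.item():.5f}")

In a network-automation setting, the mask could encode domain knowledge, such as KPIs a resource-allocation policy must not base its decisions on; the same pattern extends to other attribution methods used as training-time constraints.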

Speaker:

Farhad Rezazadeh received his Ph.D. degree (Excellent Cum Laude) in Signal Theory and Communications from the Technical University of Catalonia (UPC), Barcelona, Spain. He is currently a researcher (Senior Applied AI Engineer) at the Telecommunications Technological Center of Catalonia (CTTC), Barcelona, Spain. He has participated in eight European and national 5G/B5G/6G R&D projects, taking on leadership and technical roles in applied AI. His AI innovation in B5G/6G resource allocation was recognized as a great EU-funded innovation by the European Commission's Innovation Radar, and he was awarded the first patent connected to the H2020 5G-SOLUTIONS project. He completed a secondment at NEC Lab Europe and scientific missions at TUM and TUHH in Germany and UdG in Spain. He is a Marie Sklodowska-Curie Ph.D. grantee and has won five IEEE/IEEE ComSoc grants, two European Cooperation in Science and Technology (COST) grants, and a Catalan Government Ph.D. grant. An active member of ACM Professional, IEEE Young Professionals, and IEEE Spain Technical Activities and Standards, he has authored more than 29 publications in top-tier journals, conferences, and book chapters. He serves as an organizer, chair, reviewer, and TPC member for IEEE venues and as a guest editor for Elsevier, with over 140 verified reviews for peer-reviewed publications. He coordinates the IEEE Trustworthy Internet of Things (TRUST-IoT) working group within the IEEE IoT Community.

Date and Time:

  • Date: 21 Nov 2024
  • Time: 12:00 PM to 01:00 PM
  • All times are (UTC+00:00) UTC

Location:

  • Virtual event; attendance information is available on the event page.

Hosts:

  • Contact: Baw Chng [baw@ieee.org]

Registration:

  • Starts: 29 October 2024, 12:00 AM
  • Ends: 21 November 2024, 01:00 PM
  • All times are (UTC+00:00) UTC
  • No admission charge