Responsible AI in Malware Detection: Balancing Innovation and Ethics

#ResponsibleAI #ArtificialIntelligence #AI #MalwareDetection #DetectionSystems #Ethics #Malware #Knowledge #Reasoning #DecisionMaking #Cybersecurity

Malware continues to evolve in sophistication and scale, targeting platforms and users across diverse digital ecosystems. Artificial Intelligence (AI) has emerged as a powerful tool for malware detection, offering scalable, adaptive, and automated defence mechanisms. Yet, as these systems advance, they raise pressing questions around responsibility, ethics, and trust. Challenges such as adversarial evasion, bias in training data, privacy concerns, and the lack of explainability in AI-driven decisions can undermine both technical effectiveness and public confidence. This talk will examine how responsible and ethical AI principles can be embedded into the design of malware detection systems, combining advances in cyber security with frameworks of transparency, fairness, and accountability. It will also explore how bridging technical innovation with ethical safeguards can lead to malware detection systems that are not only effective but also trustworthy and socially responsible.








  • Contact Event Host
  • m.garcia-constantino@ulster.ac.uk

  • Co-sponsored by Ulster University


  Speakers

Dr. Fauzia Idrees Abro of Royal Holloway, University of London, UK

Topic:

Responsible AI in Malware Detection: Balancing Innovation and Ethics


Biography:

Dr. Fauzia Idrees Abro is an Associate Professor and Programme Director of the Cyber Security Distance Learning programme at Royal Holloway, University of London. She holds a PhD in Information Security Engineering, an MSc in Information Security, and an MBA in Entrepreneurship, alongside her background as an Electronics Engineer. With over 25 years of professional experience spanning industry and academia, Dr. Abro has developed broad expertise in cybersecurity. Her research focuses on malware analysis, network security, and secure software development. She also serves as the Global Ambassador for Responsible AI for the UK on the Global Council for Responsible AI.