Advancing Speech Enhancement with Machine Learning: Lightweight Models

#learning #machine-learning #IIT #IEEE_IIT_SB #SMC #IEEE_SYSTEMS_COUNCIL

IEEE SMC IIT SBC x IEEE Systems Council Tunisia Section

 

IEEE SMC IIT SBC, in collaboration with the IEEE Systems Council Tunisia Section, organized a technical talk entitled “Advancing Speech Enhancement with Machine Learning: Lightweight Models,” delivered by Dr. Nasir Saleem. The session focused on recent advances in speech enhancement for resource-constrained environments such as smartphones, hearing aids, and embedded systems. The main points discussed during the talk included: 

  • lightweight and efficient machine learning models for speech enhancement
  • model compression techniques for reducing computational complexity
  • efficient neural network architectures for real-time applications
  • low-complexity time-frequency representations
  • speech enhancement in noisy and real-world acoustic conditions
  • recent developments in audio-visual speech enhancement
  • methods for achieving low-latency and energy-efficient performance
  • maintaining speech quality and intelligibility in practical deployment scenarios
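Several of these topics build on short-time time-frequency processing of the speech signal. As a toy illustration only, not material from the talk itself, the sketch below implements classical spectral subtraction with NumPy, one of the simplest low-complexity time-frequency enhancement methods: the noise power is estimated from a few leading frames assumed to be speech-free, and a per-bin gain attenuates noise-dominated bins. All parameters (frame size, hop, number of noise-estimation frames, gain floor) are illustrative assumptions.

```python
import numpy as np

def stft(x, n_fft=256, hop=128):
    # Frame the signal with a Hann window and take the FFT of each frame.
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win
              for i in range(0, len(x) - n_fft + 1, hop)]
    return np.fft.rfft(np.array(frames), axis=1)

def istft(X, n_fft=256, hop=128):
    # Weighted overlap-add reconstruction with a Hann synthesis window.
    win = np.hanning(n_fft)
    frames = np.fft.irfft(X, n=n_fft, axis=1) * win
    out = np.zeros(hop * (len(frames) - 1) + n_fft)
    norm = np.zeros_like(out)
    for i, f in enumerate(frames):
        out[i * hop:i * hop + n_fft] += f
        norm[i * hop:i * hop + n_fft] += win ** 2
    return out / np.maximum(norm, 1e-8)

def enhance(noisy, n_fft=256, hop=128, noise_frames=10, floor=0.05):
    # Spectral subtraction: estimate noise power from the first frames
    # (assumed speech-free) and apply a per-bin Wiener-style gain,
    # floored to limit musical-noise artifacts.
    X = stft(noisy, n_fft, hop)
    noise_psd = np.mean(np.abs(X[:noise_frames]) ** 2, axis=0)
    gain = np.maximum(1.0 - noise_psd / (np.abs(X) ** 2 + 1e-12), floor)
    return istft(X * gain, n_fft, hop)
```

The gain floor is the classic trade-off knob here: a lower floor removes more noise but introduces more musical-noise artifacts, which is exactly the kind of quality/complexity balance the lightweight learned models discussed in the talk aim to improve on.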

 The talk provided valuable insights for students and researchers interested in speech processing, machine learning, and intelligent audio systems. 




  • Starts 09 April 2026 03:25 PM UTC
  • Ends 10 April 2026 11:00 PM UTC
  • No Admission Charge


  Speakers

Nasir Saleem

Biography:

Dr. Nasir Saleem is an Associate Professor at Hainan Bielefeld University of Applied Sciences, China.
Previously, he was an Associate Professor at Gomal University, Pakistan; a Research Fellow at Edinburgh
Napier University, UK, working on a UKRI EPSRC-funded research programme focused on next-generation
AI-driven hearing and communication systems; and a Postdoctoral Fellow at IIUM, Malaysia. His
research expertise lies in machine learning for speech processing, with a particular emphasis on
lightweight and real-time speech enhancement models for resource-constrained environments. He
has extensive experience in designing efficient deep neural architectures for noisy and real-world
acoustic conditions, including multimodal audio-visual speech enhancement, federated learning, and
privacy-preserving AI. Dr. Saleem has authored numerous publications in leading journals such as
IEEE Transactions and Applied Acoustics, and has contributed to international projects on edge-AI,
assistive technologies, and secure cyber-physical systems.