Distinguished Industry Speaker - Dr. Tomohiro Nakatani (@Northeast Brazil SPS Chapter)
📌 Brought to you by the IEEE Signal Processing Society – Northeast Brazil Chapter
Federal University of Ceará (UFC), Fortaleza
DISTINGUISHED INDUSTRY SPEAKER PROGRAM
SPEAKER:
Dr. Tomohiro Nakatani, NTT Communication Science Laboratories, Japan
The IEEE Signal Processing Society Northeast Brazil Chapter is pleased to invite you to an insightful seminar on:
Convolutional Beamformer for Joint Denoising, Dereverberation, and Source Separation
Dr. Nakatani is a leading expert in source separation and speech enhancement. In this session, he will present advanced approaches to convolutional beamforming, highlighting their applications in denoising, dereverberation, and source separation.
This event offers a valuable opportunity for students, researchers, and professionals to gain cutting-edge insights into signal processing and interact with a distinguished industry expert.
✨ Don’t miss this chance to learn and network!
Location:
- Federal University of Ceará (UFC), LESC
- Building: Department of Teleinformatics Engineering, Room BLOCO 723
- Fortaleza, Ceará, Brazil 60455-970
For any queries, please contact fazalasim@ieee.org.
Speakers
Dr. Tomohiro Nakatani of NTT Communication Science Laboratories, NTT, Inc., Japan
Convolutional Beamformer for Joint Denoising, Dereverberation, and Source Separation
When speech is captured by distant microphones in everyday environments, the signals are often contaminated by
background noise, reverberation, and overlapping voices. The convolutional beamformer (CBF) is a signal processing
technique that recovers clean, close-microphone-quality speech from such complex mixtures. By jointly performing
denoising, dereverberation, and source separation, CBF enhances both human listening experiences and automatic speech
recognition (ASR) accuracy. Potential applications include hearing assistive devices, meeting transcription systems, and other
real-world speech technologies.
This talk begins by introducing the concept of CBF, including its formal definition, mechanism for joint enhancement, and
optimization via maximum likelihood estimation. CBF is defined as a series of beamformers estimated at each frequency in
the short-time Fourier transform (STFT) domain and convolved with the observed signal to achieve the desired
enhancement. The presentation then shows that CBF can be factorized into Multichannel Linear Prediction (MCLP) for
dereverberation and Beamforming (BF) for denoising and separation, highlighting the practical advantages of this
decomposition. Related work is reviewed, including Weighted Prediction Error (WPE) dereverberation, mask-based
beamforming, and guided source separation, with emphasis on strong results in challenging tasks such as the CHiME-8
distant ASR challenge.
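To make the MCLP/WPE component mentioned above concrete, here is a minimal NumPy sketch of WPE-style multichannel linear prediction at a single frequency bin. The function name, parameter choices, and simplifications are illustrative assumptions for this announcement, not the speaker's implementation; production WPE systems process all STFT bins and use more careful regularization.

```python
import numpy as np

def wpe_dereverb(X, taps=8, delay=3, iterations=3, eps=1e-8):
    """WPE-style MCLP dereverberation at one STFT frequency bin.

    X: (channels, frames) complex STFT coefficients.
    Predicts late reverberation from delayed past frames and
    subtracts it. Illustrative sketch only.
    """
    C, T = X.shape
    # Stacked, delayed observation matrix (taps*C, T): column t holds
    # x[t-delay], ..., x[t-delay-taps+1] for all channels.
    Xbar = np.zeros((taps * C, T), dtype=complex)
    for k in range(taps):
        shift = delay + k
        Xbar[k * C:(k + 1) * C, shift:] = X[:, :T - shift]

    D = X.copy()
    for _ in range(iterations):
        # Time-varying power of the current estimate (averaged over mics),
        # used to weight the correlation statistics.
        lam = np.maximum(np.mean(np.abs(D) ** 2, axis=0), eps)
        Xw = Xbar / lam                          # column-wise weighting
        R = Xw @ Xbar.conj().T                   # (taps*C, taps*C)
        P = Xw @ X.conj().T                      # (taps*C, C)
        G = np.linalg.solve(R + eps * np.eye(taps * C), P)
        # Subtract the predicted late reverberation.
        D = X - G.conj().T @ Xbar
    return D
```

The `delay` parameter keeps the earliest frames out of the prediction so that the direct sound and early reflections are preserved, which is what distinguishes dereverberation from naive linear prediction.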
Further extensions are presented, including blind CBF for unknown recording conditions, switching CBF for improved
performance with a limited number of microphones, and integration with neural networks, notably the DiffCBF framework,
which combines CBF with diffusion-based speech enhancement models. Experimental results demonstrate state-of-the-art
speech quality, even with relatively few microphones and limited training data.
Biography:
Tomohiro Nakatani received the B.E., M.E., and Ph.D. degrees from Kyoto University, Kyoto, Japan, in 1989, 1991, and 2002,
respectively.
He is currently a Senior Distinguished Researcher at NTT Communication Science Laboratories, NTT, Inc., Japan. In 2005, he
was a Visiting Scholar at the Georgia Institute of Technology, USA, and from 2008 to 2017, he served as a Visiting Associate
Professor in the Department of Media Science at Nagoya University, Japan. Since joining NTT as a Researcher in 1991, he has
focused on developing audio signal processing technologies for intelligent human–machine interfaces, including
dereverberation, denoising, source separation, and robust automatic speech recognition (ASR).
Dr. Nakatani served as an Associate Editor for the IEEE Transactions on Audio, Speech, and Language Processing from 2008
to 2010. He was a member of the IEEE SPS Audio and Acoustic Signal Processing Technical Committee from 2009 to 2014,
the IEEE SPS Speech and Language Processing Technical Committee from 2016 to 2021, and the IEEE SPS Fellow Evaluating
Committee in 2024 and 2025. He has been serving as an IEEE SPS Distinguished Industry Speaker since 2025. He was Co-Chair
of the 2014 REVERB Challenge Workshop and General Co-Chair of IEEE ASRU 2017. His accolades include the 2005
IEICE Best Paper Award, the 2009 ASJ Technical Development Award, the 2012 Japan Audio Society Award, an Honorable
Mention for the 2015 IEEE ASRU Best Paper Award, the 2017 Maejima Hisoka Award, and the 2018 IWAENC Best Paper
Award. He has been an IEEE Fellow since 2021 and an IEICE Fellow since 2022.