October DISTINGUISHED INDUSTRY SPEAKER Talk: Convolutional Beamformer for Joint Denoising, Dereverberation, and Source Separation (HYBRID)
Abstract
When speech is captured by distant microphones in everyday environments, the signals are often contaminated by background noise, reverberation, and overlapping voices. The convolutional beamformer (CBF) is a signal processing technique that recovers clean, close-microphone-quality speech from such complex mixtures. By jointly performing denoising, dereverberation, and source separation, CBF improves both the human listening experience and automatic speech recognition (ASR) accuracy. Potential applications include hearing assistive devices, meeting transcription systems, and other real-world speech technologies. This talk begins by introducing the concept of CBF, including its formal definition, its mechanism for joint enhancement, and its optimization via maximum likelihood estimation. CBF is defined as a set of beamformers, estimated at each frequency in the short-time Fourier transform (STFT) domain, that are convolved with the observed signal to achieve the desired enhancement. The presentation then shows how CBF can be factorized into multichannel linear prediction (MCLP) for dereverberation and beamforming (BF) for denoising and separation, highlighting the practical advantages of this decomposition. Related work is reviewed, including weighted prediction error (WPE) dereverberation, mask-based beamforming, and guided source separation, with emphasis on strong results in challenging tasks such as the CHiME-8 distant ASR challenge. Further extensions are presented, including blind CBF for unknown recording conditions, switching CBF for improved performance with a limited number of microphones, and integration with neural networks, notably the DiffCBF framework, which combines CBF with diffusion-based speech enhancement models. Experimental results demonstrate state-of-the-art speech quality, even with relatively few microphones and limited training data.
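For orientation, here is a minimal sketch of the formulation the abstract refers to; the notation below is illustrative and may differ from the speaker's own. A CBF convolves per-frequency filter taps with current and past frames of the multichannel STFT observation, and the same filter can be factorized into an MCLP (WPE-style) dereverberation stage followed by an instantaneous beamformer.

```latex
% Illustrative sketch (assumed notation): x(t,f) is the M-channel STFT observation,
% w_tau(f) are the convolutional beamformer taps, Delta is a prediction delay,
% G_tau(f) are MCLP coefficients, and q(f) is an instantaneous beamformer.

% Convolutional beamformer output at frame t and frequency f:
\hat{s}(t,f) \;=\; \mathbf{w}_0(f)^{\mathsf H}\,\mathbf{x}(t,f)
             \;+\; \sum_{\tau=\Delta}^{\Delta+L-1} \mathbf{w}_\tau(f)^{\mathsf H}\,\mathbf{x}(t-\tau,f)

% Factorization into MCLP dereverberation followed by beamforming:
\mathbf{z}(t,f) \;=\; \mathbf{x}(t,f) \;-\; \sum_{\tau=\Delta}^{\Delta+L-1} \mathbf{G}_\tau(f)^{\mathsf H}\,\mathbf{x}(t-\tau,f),
\qquad
\hat{s}(t,f) \;=\; \mathbf{q}(f)^{\mathsf H}\,\mathbf{z}(t,f)
```

Under this decomposition, the MCLP stage subtracts late reverberation predicted from past frames, and the beamformer then handles denoising and source separation on the dereverberated signal, which is the practical advantage the talk highlights.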
Location
- 300 3rd Ave SW, Rochester, Minnesota 55902, United States
- Building: Medical Sciences Building
- Room: Mann Hall
Speakers
Tomohiro Nakatani, Ph.D.
Biography:
Tomohiro Nakatani received the B.E., M.E., and Ph.D. degrees from Kyoto University, Kyoto, Japan, in 1989, 1991, and 2002, respectively. He is currently a Senior Distinguished Researcher at NTT Communication Science Laboratories, NTT, Inc., Japan. In 2005, he was a Visiting Scholar at the Georgia Institute of Technology, USA, and from 2008 to 2017, he served as a Visiting Associate Professor in the Department of Media Science at Nagoya University, Japan. Since joining NTT as a Researcher in 1991, he has focused on developing audio signal processing technologies for intelligent human–machine interfaces, including dereverberation, denoising, source separation, and robust automatic speech recognition (ASR).
Agenda
6:30 - 7:00 Social half hour to grab food and drink
7:00 - 8:00 Technical talk