HiFi-GAN: High-Fidelity Denoising and Dereverberation Based on Speech Deep Features in Adversarial Networks

Author(s): Su, Jiaqi; Jin, Zeyu; Finkelstein, Adam

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1kz73
Abstract: Real-world audio recordings are often degraded by factors such as noise, reverberation, and equalization distortion. This paper introduces HiFi-GAN, a deep learning method that transforms recorded speech to sound as though it had been recorded in a studio. We use an end-to-end feed-forward WaveNet architecture, trained with multi-scale adversarial discriminators in both the time domain and the time-frequency domain. The method relies on deep feature matching losses from the discriminators to improve the perceptual quality of the enhanced speech. The proposed model generalizes well to new speakers, new speech content, and new environments, and it significantly outperforms state-of-the-art baseline methods in both objective and subjective experiments.
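The deep feature matching loss mentioned in the abstract compares a real and a generated signal through a discriminator's intermediate activations rather than through the raw waveforms. A minimal NumPy sketch of this idea follows; the function name and the use of a mean L1 distance per layer are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def feature_matching_loss(real_feats, fake_feats):
    """Illustrative deep feature matching loss (assumed form).

    real_feats / fake_feats: lists of discriminator feature maps
    (NumPy arrays), one per intermediate layer, for the clean
    reference signal and the enhanced signal respectively.
    Returns the mean L1 distance averaged over layers.
    """
    assert len(real_feats) == len(fake_feats)
    per_layer = [np.mean(np.abs(r - f))
                 for r, f in zip(real_feats, fake_feats)]
    return sum(per_layer) / len(per_layer)

# Toy usage with two fake "layers" of discriminator activations.
real = [np.zeros((4, 8)), np.zeros((4, 4))]
fake = [np.ones((4, 8)), np.ones((4, 4))]
loss = feature_matching_loss(real, fake)  # 1.0 for these inputs
```

Training a generator against such a loss (summed over each discriminator and combined with the adversarial terms) encourages the enhanced speech to match the clean reference in perceptually relevant feature spaces, which is the mechanism the abstract credits for the improved perceptual quality.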
Publication Date: 2020
Citation: Su, Jiaqi, Zeyu Jin, and Adam Finkelstein. "HiFi-GAN: High-Fidelity Denoising and Dereverberation Based on Speech Deep Features in Adversarial Networks." Proc. Interspeech (2020): pp. 4506-4510. doi:10.21437/Interspeech.2020-2143
DOI: 10.21437/Interspeech.2020-2143
Pages: 4506-4510
Type of Material: Conference Article
Journal/Proceeding Title: Proc. Interspeech
Version: Author's manuscript
Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.