Perceptually-motivated Environment-specific Speech Enhancement

Author(s): Su, Jiaqi; Finkelstein, Adam; Jin, Zeyu

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1wn86
Full metadata record
dc.contributor.author    Su, Jiaqi
dc.contributor.author    Finkelstein, Adam
dc.contributor.author    Jin, Zeyu
dc.date.accessioned      2021-10-08T19:45:39Z
dc.date.available        2021-10-08T19:45:39Z
dc.date.issued           2019
dc.identifier.citation   Su, Jiaqi, Adam Finkelstein, and Zeyu Jin. "Perceptually-motivated Environment-specific Speech Enhancement." IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (2019): pp. 7015-7019. IEEE, 2019. doi:10.1109/ICASSP.2019.8683654
dc.identifier.issn       1520-6149
dc.identifier.uri        http://arks.princeton.edu/ark:/88435/pr1wn86
dc.description.abstract  This paper introduces a deep learning approach to enhance speech recordings made in a specific environment. A single neural network learns to ameliorate several types of recording artifacts, including noise, reverberation, and non-linear equalization. The method relies on a new perceptual loss function that combines adversarial loss with spectrogram features. Both subjective and objective evaluations show that the proposed approach improves on state-of-the-art baseline methods.
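The abstract describes a perceptual loss that combines an adversarial term with spectrogram features. As a rough illustration only (the paper's exact formulation is not reproduced here; the FFT size `n_fft`, hop length `hop`, weight `alpha`, and the non-saturating adversarial term are all assumptions for the sketch), such a combined loss might look like:

```python
import numpy as np

def log_spectrogram(x, n_fft=512, hop=128):
    # Frame the signal, apply a Hann window, and take the
    # log-magnitude of the real FFT of each frame.
    window = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * window
              for i in range(0, len(x) - n_fft + 1, hop)]
    mag = np.abs(np.fft.rfft(np.stack(frames), axis=-1))
    return np.log(mag + 1e-8)

def spectrogram_loss(enhanced, clean):
    # L1 distance between log-magnitude spectrograms
    # (one plausible "spectrogram feature" term).
    return np.mean(np.abs(log_spectrogram(enhanced) - log_spectrogram(clean)))

def total_loss(enhanced, clean, disc_score, alpha=1.0):
    # Adversarial term: the enhancer is rewarded when the
    # discriminator's score on its output approaches 1 ("real").
    adv = -np.log(disc_score + 1e-8)
    return spectrogram_loss(enhanced, clean) + alpha * adv
```

In practice such a loss would drive the enhancement network's gradient updates, with the discriminator trained jointly; the sketch above only shows how the two terms combine.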
dc.format.extent         7015 - 7019
dc.language.iso          en_US
dc.relation.ispartof     IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
dc.rights                Author's manuscript
dc.title                 Perceptually-motivated Environment-specific Speech Enhancement
dc.type                  Conference Article
dc.identifier.doi        10.1109/ICASSP.2019.8683654
dc.identifier.eissn      2379-190X
pu.type.symplectic       http://www.symplectic.co.uk/publications/atom-terms/1.0/conference-proceeding

Files in This Item:
PerceptuallyMotivatedSpeechEnhancement.pdf  (267.63 kB, Adobe PDF)


Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.