A Differentiable Perceptual Audio Metric Learned from Just Noticeable Differences

Author(s): Manocha, Pranay; Finkelstein, Adam; Zhang, Richard; Bryan, Nicholas J; Mysore, Gautham J; Jin, Zeyu

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1c846
Full metadata record
DC Field | Value | Language
dc.contributor.author | Manocha, Pranay | -
dc.contributor.author | Finkelstein, Adam | -
dc.contributor.author | Zhang, Richard | -
dc.contributor.author | Bryan, Nicholas J | -
dc.contributor.author | Mysore, Gautham J | -
dc.contributor.author | Jin, Zeyu | -
dc.date.accessioned | 2021-10-08T19:51:07Z | -
dc.date.available | 2021-10-08T19:51:07Z | -
dc.date.issued | 2020 | en_US
dc.identifier.citation | Manocha, Pranay, Adam Finkelstein, Richard Zhang, Nicholas J. Bryan, Gautham J. Mysore, and Zeyu Jin. "A Differentiable Perceptual Audio Metric Learned from Just Noticeable Differences." Proc. Interspeech (2020): pp. 2852-2856. doi:10.21437/Interspeech.2020-1191 | en_US
dc.identifier.uri | https://arxiv.org/pdf/2001.04460.pdf | -
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/pr1c846 | -
dc.description.abstract | Many audio processing tasks require perceptual assessment. The “gold standard” of obtaining human judgments is time-consuming, expensive, and cannot be used as an optimization criterion. On the other hand, automated metrics are efficient to compute but often correlate poorly with human judgment, particularly for audio differences at the threshold of human detection. In this work, we construct a metric by fitting a deep neural network to a new large dataset of crowdsourced human judgments. Subjects are prompted to answer a straightforward, objective question: are two recordings identical or not? These pairs are algorithmically generated under a variety of perturbations, including noise, reverb, and compression artifacts; the perturbation space is probed with the goal of efficiently identifying the just-noticeable difference (JND) level of the subject. We show that the resulting learned metric is well-calibrated with human judgments, outperforming baseline methods. Since it is a deep network, the metric is differentiable, making it suitable as a loss function for other tasks. Thus, simply replacing an existing loss (e.g., deep feature loss) with our metric yields significant improvement in a denoising network, as measured by subjective pairwise comparison. | en_US
dc.format.extent | 2852 - 2856 | en_US
dc.language.iso | en_US | en_US
dc.relation.ispartof | Proc. Interspeech | en_US
dc.rights | Author's manuscript | en_US
dc.title | A Differentiable Perceptual Audio Metric Learned from Just Noticeable Differences | en_US
dc.type | Conference Article | en_US
dc.identifier.doi | 10.21437/Interspeech.2020-1191 | -
pu.type.symplectic | http://www.symplectic.co.uk/publications/atom-terms/1.0/conference-proceeding | en_US
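The abstract's core recipe — extract deep features of two recordings, take a distance between them, and map that distance through a logistic head trained on same/different JND labels — can be sketched in a few lines of NumPy. This is a toy illustration only: the layer shapes, random weights, and head parameters below are hypothetical stand-ins, not the paper's actual architecture.

```python
import numpy as np

def deep_features(x, weights):
    """Toy 'deep' feature extractor: a stack of linear layers with ReLU.
    Stands in for the convolutional network described in the paper."""
    acts, h = [], x
    for W in weights:
        h = np.maximum(W @ h, 0.0)  # linear layer followed by ReLU
        acts.append(h)
    return acts

def perceptual_distance(x, y, weights):
    """Sum of L2 gaps between activations at every layer, mirroring the
    deep-feature-distance idea; zero for identical recordings."""
    fx, fy = deep_features(x, weights), deep_features(y, weights)
    return sum(np.linalg.norm(a - b) for a, b in zip(fx, fy))

def p_different(x, y, weights, w=1.0, b=-0.5):
    """Logistic head mapping distance to P(listener hears a difference);
    in the paper this is fit with cross-entropy to the JND labels."""
    d = perceptual_distance(x, y, weights)
    return 1.0 / (1.0 + np.exp(-(w * d + b)))

rng = np.random.default_rng(0)
weights = [0.1 * rng.standard_normal((16, 64)),
           0.1 * rng.standard_normal((8, 16))]
clean = rng.standard_normal(64)
noisy = clean + 0.3 * rng.standard_normal(64)  # a perturbed copy

print(perceptual_distance(clean, clean, weights))  # 0.0 for identical inputs
print(p_different(clean, noisy, weights))
```

Because every operation above is differentiable almost everywhere, the same forward pass could serve as a training loss for, e.g., a denoiser — which is the substitution the abstract reports improving a denoising network.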

Files in This Item:
File | Description | Size | Format
DifferentiablePerceptual.pdf | | 1.03 MB | Adobe PDF


Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.