A New Paradigm for Sound Design

Author(s): Misra, Ananya; Cook, Perry R; Wang, Ge

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1pr88
Full metadata record
DC Field | Value | Language
dc.contributor.author | Misra, Ananya | -
dc.contributor.author | Cook, Perry R | -
dc.contributor.author | Wang, Ge | -
dc.date.accessioned | 2021-10-08T19:46:03Z | -
dc.date.available | 2021-10-08T19:46:03Z | -
dc.date.issued | 2006 | en_US
dc.identifier.citation | Misra, Ananya, Perry R. Cook, and Ge Wang. "A New Paradigm for Sound Design." Proceedings of the International Conference on Digital Audio Effects (DAFx-06) (2006): pp. 319-324. | en_US
dc.identifier.issn | 2413-6700 | -
dc.identifier.uri | https://www.dafx.de/paper-archive/2006/papers/p_319.pdf | -
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/pr1pr88 | -
dc.description.abstract | A sound scene can be defined as any "environmental" sound that has a consistent background texture, with one or more potentially recurring foreground events. We describe a data-driven framework for analyzing, transforming, and synthesizing high-quality sound scenes, with flexible control over the components of the synthesized sound. Given one or more sound scenes, we provide well-defined means to: (1) identify points of interest in the sound and extract them into reusable templates, (2) transform sound components independently of the background or other events, (3) continually re-synthesize the background texture in a perceptually convincing manner, and (4) controllably place event templates over the background, varying key parameters such as density, periodicity, relative loudness, and spatial positioning. Contributions include: techniques and paradigms for template selection and extraction, independent sound transformation and flexible re-synthesis; extensions to a wavelet-based background analysis/synthesis; and user interfaces to facilitate the various phases. Given this framework, it is possible to completely transform an existing sound scene, dynamically generate sound scenes of unlimited length, and construct new sound scenes by combining elements from different sound scenes. URL: http://taps.cs.princeton.edu/ | en_US
dc.format.extent | 319 - 324 | en_US
dc.language.iso | en_US | en_US
dc.relation.ispartof | Proceedings of the International Conference on Digital Audio Effects (DAFx) | en_US
dc.rights | Author's manuscript | en_US
dc.title | A New Paradigm for Sound Design | en_US
dc.type | Conference Article | en_US
dc.identifier.eissn | 2413-6689 | -
pu.type.symplectic | http://www.symplectic.co.uk/publications/atom-terms/1.0/conference-proceeding | en_US
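The abstract's step (4), controllably placing event templates over a background texture with parameters such as density and relative loudness, can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's implementation: event onsets are drawn from a Poisson process, and the background is simply looped rather than perceptually re-synthesized as the paper describes. All function and parameter names here are invented for illustration.

```python
import random

def synthesize_scene(background, event, duration, sr=44100,
                     density=0.5, gain=0.8, seed=0):
    """Place copies of an event template over a looping background.

    background, event: lists of float samples (a stand-in for audio buffers).
    density: expected number of events per second (Poisson onset model,
             a simplification of the paper's density/periodicity controls).
    gain: relative loudness of events versus the background.
    """
    rng = random.Random(seed)
    n = int(duration * sr)
    # Extend the background texture to the requested length by looping.
    # (The paper instead re-synthesizes it in a perceptually convincing way.)
    out = [background[i % len(background)] for i in range(n)]
    # Draw successive event onsets from an exponential inter-arrival model.
    t = rng.expovariate(density)
    while t < duration:
        start = int(t * sr)
        # Mix the event template into the output at the chosen onset.
        for i, sample in enumerate(event):
            if start + i < n:
                out[start + i] += gain * sample
        t += rng.expovariate(density)
    return out
```

Because event placement is parametric rather than fixed, a scene of any length can be generated from the same templates, which mirrors the abstract's claim of dynamically generating sound scenes of unlimited length.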

Files in This Item:
File | Description | Size | Format
NewParadigmSoundDesign.pdf | - | 2.25 MB | Adobe PDF


Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.