Abstract: A sound scene can be defined as any "environmental" sound that has a consistent background texture, with one or more potentially recurring foreground events. We describe a data-driven framework for analyzing, transforming, and synthesizing high-quality sound scenes, with flexible control over the components of the synthesized sound. Given one or more sound scenes, we provide well-defined means to: (1) identify points of interest in the sound and extract them into reusable templates, (2) transform sound components independently of the background or other events, (3) continually re-synthesize the background texture in a perceptually convincing manner, and (4) controllably place event templates over the background, varying key parameters such as density, periodicity, relative loudness, and spatial positioning. Contributions include: techniques and paradigms for template selection and extraction, independent sound transformation, and flexible re-synthesis; extensions to a wavelet-based background analysis/synthesis; and user interfaces to facilitate the various phases. Given this framework, it is possible to completely transform an existing sound scene, dynamically generate sound scenes of unlimited length, and construct new sound scenes by combining elements from different sound scenes. URL: http://taps.cs.princeton.edu/

Citation: Misra, Ananya, Perry R. Cook, and Ge Wang. "A New Paradigm for Sound Design." Proceedings of the International Conference on Digital Audio Effects (DAFx-06) (2006): pp. 319-324.

Pages: 319-324

Type of Material: Conference Article

Journal/Proceeding Title: Proceedings of the International Conference on Digital Audio Effects (DAFx)
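To make step (4) of the abstract concrete, the following is a minimal, hypothetical sketch of scheduling event templates over a background texture with the parameters the abstract names (density, periodicity, relative loudness, spatial positioning). It is not the authors' implementation; the function name, the periodicity blend, and the gain/pan conventions are illustrative assumptions.

```python
import random

def place_events(duration, density, loudness_db, periodicity, seed=0):
    """Schedule event-template onsets over a background of `duration` seconds.

    density:     mean number of events per second
    loudness_db: event level relative to the background, in dB
    periodicity: 1.0 gives a strictly periodic grid, 0.0 gives
                 uniformly random onsets; values between blend the two
    Returns a sorted list of (onset_seconds, linear_gain, pan) triples,
    where pan is a stereo position in [-1.0, 1.0].
    """
    rng = random.Random(seed)
    n = max(1, round(density * duration))
    period = duration / n
    gain = 10 ** (loudness_db / 20)        # dB -> linear amplitude scale
    events = []
    for i in range(n):
        grid = i * period                   # periodic placement
        jitter = rng.uniform(0.0, duration) # fully random placement
        onset = periodicity * grid + (1 - periodicity) * jitter
        pan = rng.uniform(-1.0, 1.0)        # random stereo position
        events.append((onset, gain, pan))
    return sorted(events)
```

With `periodicity=1.0` and `density=2.0` over a 10-second background, this yields 20 onsets spaced exactly 0.5 s apart; lowering `periodicity` progressively randomizes the spacing while keeping the expected event count fixed.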
Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.