Abstract: The integration of the visual and auditory modalities during human speech perception is the default mode of speech processing. That is, visual speech perception is not a capacity that is "piggybacked" onto auditory-only speech perception. Visual information from the mouth and other parts of the face is used by all perceivers to enhance auditory speech. This integration is ubiquitous and automatic, and is similar across individuals and cultures. The two modalities appear to be integrated even at the earliest stages of human cognitive development. If multisensory speech is the default mode of perception, then this should be reflected in the evolution of vocal communication. The purpose of this review is to describe the data revealing that human speech is not uniquely multisensory. In fact, the default mode of communication is multisensory in nonhuman primates as well, though it may emerge along a different developmental trajectory. Speech production, however, exhibits a unique bimodal rhythmic structure in that both the acoustic output and the movements of the mouth are rhythmic and tightly correlated. This structure is absent in most monkey vocalizations. One hypothesis is that the bimodal speech rhythm may have evolved through the rhythmic facial expressions of ancestral primates, as indicated by mounting comparative evidence focusing on the lip-smacking gesture.
Electronic Publication Date: 21-Feb-2013
Citation: Ghazanfar, Asif A. (2013). Multisensory vocal communication in primates and the evolution of rhythmic speech. Behavioral Ecology and Sociobiology, 67(9), 1441-1448. doi:10.1007/s00265-013-1491-z
Pages: 1441-1448
Type of Material: Journal Article
Journal/Proceeding Title: Behavioral Ecology and Sociobiology
Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.