
Iterated Learning in Dynamic Social Networks

Author(s): Chazelle, Bernard; Wang, Chu

Full metadata record

dc.contributor.author: Chazelle, Bernard
dc.contributor.author: Wang, Chu
dc.identifier.citation: Chazelle, Bernard, and Chu Wang. "Iterated Learning in Dynamic Social Networks." Journal of Machine Learning Research 20, no. 29 (2019): 1-28.
dc.description.abstract: A classic finding of Kalish et al. (2007) shows that no language can be learned iteratively by rational agents in a self-sustained manner. In other words, if A teaches a foreign language to B, who then teaches what she learned to C, and so on, the language will quickly be lost and the agents will wind up teaching their own common native language. How, then, can linguistic novelty ever be sustained? We address this apparent paradox by considering iterated learning in a social network: we show that, by varying the lengths of the learning sessions over time or by keeping the network dynamic, iterated learning can endure forever with arbitrarily small loss.
dc.format.extent: 1-28
dc.relation.ispartof: Journal of Machine Learning Research
dc.rights: Final published version. This is an open access article.
dc.title: Iterated Learning in Dynamic Social Networks
dc.type: Journal Article
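The teacher-to-learner chain described in the abstract can be sketched as a toy simulation. This is a hedged illustration of the general iterated-learning phenomenon, not the paper's model: each learner here estimates a single Bernoulli "language" parameter from finitely many utterances, under a Beta prior biased toward a native variant, so the chain drifts back toward the prior; all parameter names and values below are assumptions chosen for illustration.

```python
import random

def iterated_chain(p0, n_samples, prior_a, prior_b, generations, seed=0):
    """Simulate a teacher -> learner -> learner chain (illustrative sketch).

    Each generation observes n_samples utterances drawn from the current
    teacher's language, modeled as a Bernoulli parameter p, then adopts
    the posterior-mean estimate under a Beta(prior_a, prior_b) prior that
    encodes a bias toward the agents' common native language.
    """
    random.seed(seed)
    p = p0
    history = [p]
    for _ in range(generations):
        # Count uses of the "foreign" variant in the teacher's utterances.
        k = sum(random.random() < p for _ in range(n_samples))
        # Beta-Bernoulli posterior mean: pulls the estimate toward the prior.
        p = (k + prior_a) / (n_samples + prior_a + prior_b)
        history.append(p)
    return history

# Foreign language starts at p = 0.9; the native prior Beta(1, 9) favors
# p near 0.1, so the chain loses the foreign language over generations.
hist = iterated_chain(p0=0.9, n_samples=10, prior_a=1, prior_b=9, generations=50)
```

With these (hypothetical) settings the fixed point of the expected update is p = 0.1, the prior mean, which mirrors the abstract's point that a finite-data chain of rational learners reverts to the common native language.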

Files in This Item:
File: IteratedLearningDynamicSocialNetworks.pdf
Size: 368.19 kB
Format: Adobe PDF

Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.