Abstract: We introduce a new embedding model to represent movie characters and their interactions in a dialogue by encoding in the same representation the language used by these characters as well as information about the other participants in the dialogue. We evaluate the performance of these new character embeddings on two tasks: (1) character relatedness, using a dataset we introduce consisting of a dense character interaction matrix for 4,378 unique character pairs over 22 hours of dialogue from eighteen movies; and (2) character relation classification, for fine- and coarse-grained relations, as well as sentiment relations. Our experiments show that our model significantly outperforms the traditional Word2Vec continuous bag-of-words and skip-gram models, demonstrating the effectiveness of the character embeddings we introduce. We further show how these embeddings can be used in conjunction with a visual question answering system to improve over previous results.

Citation: Azab, Mahmoud, Noriyuki Kojima, Jia Deng, and Rada Mihalcea. "Representing Movie Characters in Dialogues." Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL) (2019): pp. 99-109. doi:10.18653/v1/K19-1010

Pages: 99 - 109

Type of Material: Conference Article

Journal/Proceeding Title: Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)
Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.