
SpanBERT: Improving Pre-training by Representing and Predicting Spans

Author(s): Joshi, Mandar; Chen, Danqi; Liu, Yinhan; Weld, Daniel S.; Zettlemoyer, Luke; Levy, Omer

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr15c2f
Full metadata record
DC Field | Value | Language
dc.contributor.author | Joshi, Mandar | -
dc.contributor.author | Chen, Danqi | -
dc.contributor.author | Liu, Yinhan | -
dc.contributor.author | Weld, Daniel S | -
dc.contributor.author | Zettlemoyer, Luke | -
dc.contributor.author | Levy, Omer | -
dc.date.accessioned | 2021-10-08T19:50:04Z | -
dc.date.available | 2021-10-08T19:50:04Z | -
dc.date.issued | 2020 | en_US
dc.identifier.citation | Joshi, Mandar, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. "SpanBERT: Improving Pre-training by Representing and Predicting Spans." Transactions of the Association for Computational Linguistics 8 (2020): 64-77. doi:10.1162/tacl_a_00300 | en_US
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/pr15c2f | -
dc.description.abstract | We present SpanBERT, a pre-training method that is designed to better represent and predict spans of text. Our approach extends BERT by (1) masking contiguous random spans, rather than random tokens, and (2) training the span boundary representations to predict the entire content of the masked span, without relying on the individual token representations within it. SpanBERT consistently outperforms BERT and our better-tuned baselines, with substantial gains on span selection tasks such as question answering and coreference resolution. In particular, with the same training data and model size as BERT-large, our single model obtains 94.6% and 88.7% F1 on SQuAD 1.1 and 2.0, respectively. We also achieve a new state of the art on the OntoNotes coreference resolution task (79.6% F1), strong performance on the TACRED relation extraction benchmark, and even gains on GLUE. | en_US
dc.format.extent | 64 - 77 | en_US
dc.language.iso | en_US | en_US
dc.relation.ispartof | Transactions of the Association for Computational Linguistics | en_US
dc.rights | Final published version. This is an open access article. | en_US
dc.title | SpanBERT: Improving Pre-training by Representing and Predicting Spans | en_US
dc.type | Journal Article | en_US
dc.identifier.doi | 10.1162/tacl_a_00300 | -
pu.type.symplectic | http://www.symplectic.co.uk/publications/atom-terms/1.0/journal-article | en_US
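
The abstract describes two changes to BERT pre-training: masking contiguous spans rather than individual tokens (the paper samples span lengths from a geometric distribution with p = 0.2, clipped at 10, until about 15% of the sequence is masked), and a span boundary objective (SBO) that predicts each masked token from the encodings just outside the span plus a relative position embedding. The PyTorch sketch below is a minimal illustration of those two ideas under the stated assumptions; the names (sample_span_masks, SpanBoundaryHead) are hypothetical, whole-word masking details are omitted, and this is not the authors' released implementation.

    import random

    import torch
    import torch.nn as nn


    def sample_span_masks(num_tokens, budget=0.15, p=0.2, max_len=10, rng=None):
        """Sketch of contiguous span masking: span lengths ~ Geo(p), clipped
        at max_len, sampled until roughly `budget` of the tokens are masked."""
        rng = rng or random.Random(0)
        target = int(num_tokens * budget)
        masked = set()
        while len(masked) < target:
            length = 1
            while length < max_len and rng.random() > p:  # geometric draw
                length += 1
            length = min(length, num_tokens)
            start = rng.randrange(0, num_tokens - length + 1)
            masked.update(range(start, start + length))
        return sorted(masked)


    class SpanBoundaryHead(nn.Module):
        """Sketch of the span boundary objective (SBO) head: predict a masked
        token from the boundary encodings x_{s-1} and x_{e+1} plus an
        embedding of the token's relative position inside the span."""

        def __init__(self, hidden, vocab_size, max_span_len=10):
            super().__init__()
            self.pos = nn.Embedding(max_span_len, hidden)
            # Two feed-forward layers with GeLU and layer norm, as described
            # in the paper; exact sizes here are illustrative assumptions.
            self.ff = nn.Sequential(
                nn.Linear(3 * hidden, hidden), nn.GELU(), nn.LayerNorm(hidden),
                nn.Linear(hidden, hidden), nn.GELU(), nn.LayerNorm(hidden),
            )
            self.decoder = nn.Linear(hidden, vocab_size)

        def forward(self, left, right, rel_pos):
            # left, right: (batch, hidden); rel_pos: (batch,) ints in [0, max_span_len)
            h = torch.cat([left, right, self.pos(rel_pos)], dim=-1)
            return self.decoder(self.ff(h))  # (batch, vocab_size) logits

For example, sample_span_masks(512) returns the positions to corrupt; as in BERT, the paper then replaces selected tokens with [MASK] 80% of the time, a random token 10%, and leaves them unchanged 10%, applied uniformly within each span.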

Files in This Item:
File | Description | Size | Format
SpanBert.pdf |  | 303.34 kB | Adobe PDF


Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.