
Can Rationalization Improve Robustness?

Author(s): Chen, Howard; He, Jacqueline; Narasimhan, Karthik; Chen, Danqi

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr18s4jp4r
Full metadata record

DC Field: Value (Language)

dc.contributor.author: Chen, Howard
dc.contributor.author: He, Jacqueline
dc.contributor.author: Narasimhan, Karthik
dc.contributor.author: Chen, Danqi
dc.date.accessioned: 2023-12-14T14:31:28Z
dc.date.available: 2023-12-14T14:31:28Z
dc.date.issued: 2022-07 (en_US)
dc.identifier.citation: Chen, Howard, He, Jacqueline, Narasimhan, Karthik and Chen, Danqi. "Can Rationalization Improve Robustness?" Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (2022): 3792-3805. doi:10.18653/v1/2022.naacl-main.278 (en_US)
dc.identifier.uri: http://arks.princeton.edu/ark:/88435/pr18s4jp4r
dc.description.abstract: A growing line of work has investigated the development of neural NLP models that can produce rationales, i.e., subsets of the input that explain the model's predictions. In this paper, we ask whether such rationale models can also provide robustness to adversarial attacks, in addition to their interpretability. Since these models must first generate a rationale ("rationalizer") before making a prediction ("predictor"), they have the potential to ignore noise or adversarially added text by simply masking it out of the generated rationale. To this end, we systematically generate various types of "AddText" attacks for both token- and sentence-level rationalization tasks and perform an extensive empirical evaluation of state-of-the-art rationale models across five different tasks. Our experiments reveal that rationale models show promise in improving robustness against AddText attacks, but struggle in certain scenarios, e.g., when the rationalizer is sensitive to position bias or to lexical choices in the attack text. Further, leveraging human rationales as supervision does not always translate to better performance. Our study is a first step toward exploring the interplay between interpretability and robustness in the rationalize-then-predict framework. (en_US)
dc.format.extent: 3792-3805 (en_US)
dc.language.iso: en_US (en_US)
dc.relation.ispartof: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (en_US)
dc.rights: Final published version. This is an open access article. (en_US)
dc.title: Can Rationalization Improve Robustness? (en_US)
dc.type: Conference Article (en_US)
dc.identifier.doi: 10.18653/v1/2022.naacl-main.278
pu.type.symplectic: http://www.symplectic.co.uk/publications/atom-terms/1.0/conference-proceeding (en_US)
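
The abstract above describes a two-stage rationalize-then-predict pipeline: a rationalizer first selects a subset of the input, and the predictor sees only that subset, so adversarially appended text can be masked out before it influences the prediction. Below is a minimal, self-contained Python sketch of that pattern. The keyword-overlap rationalizer and keyword-count predictor (select_rationale, predict) are illustrative assumptions made for this sketch; they are not the trained models or the AddText attack generation procedure evaluated in the paper.

from typing import List


def select_rationale(tokens: List[str], query: List[str], k: int) -> List[str]:
    """Toy rationalizer: keep the k tokens most relevant to the query.

    A learned rationalizer would be trained for this selection; here we
    score tokens by exact overlap with the query so the sketch stays
    self-contained and runnable.
    """
    query_set = {q.lower() for q in query}
    # Rank indices: query-overlapping tokens first; earlier positions break ties.
    ranked = sorted(
        range(len(tokens)),
        key=lambda i: (tokens[i].lower() in query_set, -i),
        reverse=True,
    )
    kept = sorted(ranked[:k])  # restore original word order
    return [tokens[i] for i in kept]


def predict(rationale: List[str]) -> str:
    """Toy predictor: keyword-count sentiment over the rationale only."""
    positive = {"good", "great", "excellent"}
    negative = {"bad", "terrible", "awful"}
    score = sum(t.lower() in positive for t in rationale)
    score -= sum(t.lower() in negative for t in rationale)
    return "positive" if score >= 0 else "negative"


if __name__ == "__main__":
    query = ["movie", "good"]
    clean = "the movie was good and the acting was great".split()
    # AddText-style attack: append distractor text unrelated to the query.
    attacked = clean + "terrible awful weather ruined the picnic".split()
    for name, doc in [("clean", clean), ("attacked", attacked)]:
        rationale = select_rationale(doc, query, k=4)
        print(f"{name}: rationale={rationale} -> {predict(rationale)}")

In this toy run the appended distractor tokens never enter the rationale, so the prediction on the attacked input matches the clean one; this is the masking mechanism the abstract hypothesizes as a source of robustness.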

Files in This Item:

File: RationalizationImproveRobustness.pdf
Size: 846.68 kB
Format: Adobe PDF


Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.