Short Intro:

This is a collection of 218 NLP Explanation studies:
- Each paper includes: Paper Title, Paper Link, Published Year, Published Venue, and Form Annotations.
- You can click the table headers to sort each column and find what you need.
- If you need the table as a '.csv' file, please download it here (a minimal loading sketch follows below).
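
If you work with the '.csv' export offline, here is a minimal Python sketch for loading, sorting, and filtering the table. The filename "explanation_forms.csv", the exact column headers, and the cell markers are assumptions; adjust them to match the downloaded file.

import pandas as pd

# Load the downloaded table; "explanation_forms.csv" is a hypothetical filename.
df = pd.read_csv("explanation_forms.csv")

# Sort by publication year, mirroring the sortable table headers.
df = df.sort_values("Year")

# Keep papers annotated with the Feature Attribution (FAT) form, assuming cells
# other than "-" mark the presence of that form in the export.
fat = df[df["Feature Attribution (FAT)"].fillna("-") != "-"]

print(fat[["Title", "Year", "Venue"]].to_string(index=False))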

Cite the Forms:
@article{human-centered-exnlp:2021:hxcai,
  author  = {Shen, Hua and Huang, Ting-Hao (Kenneth)},
  title   = {Explaining the Road Not Taken},
  journal = {ACM CHI Workshop on Operationalizing Human-Centered Perspectives in Explainable AI},
  year    = {2021}
}

In the form columns below, "Quote" marks that the paper is annotated with that explanation form, and "-" marks that it is not.
ID Title Year Venue Feature Attribution (FAT) Probing (PRB) Tuple/Graph (TUP) Projection Space (PSP) Rule/Grammar (RUL) Free Text (FRT) Concept/Sense (CPT) Example (EXP) Trigger (TRG) Word Cloud (WCL) Images (IMG) Confidence Score (CFD)
1 " Why should I trust you?" Explaining the predictions of any classifier 2016 KDD Quote - - - - - - - - - - -
2 A causal framework for explaining the predictions of black-box sequence-to-sequence models 2017 EMNLP Quote - Quote - - - - - - - - -
3 A Diagnostic Study of Explainability Techniques for Text Classification 2020 EMNLP Quote - - - - - - - - - - -
4 A Meaning-based English Math Word Problem Solver with Understanding, Reasoning and Explanation 2016 COLING - - - - Quote Quote - - - - - -
5 A primer in bertology: What we know about how bert works 2020 TACL Quote - Quote - - - - - - - - -
6 A Shared Attention Mechanism for Interpretation of Neural Automatic Post-Editing Systems 2018 ACL Quote - - - - - - - - - - -
7 A structural probe for finding syntax in word representations 2019 NAACL - - Quote - - - - - - - - -
8 A Survey of the State of Explainable AI for Natural Language Processing 2020 AACL-IJCNLP Quote - - - Quote Quote - - - - - -
9 Allennlp interpret: A framework for explaining predictions of nlp models 2019 EMNLP Quote - - - Quote - - - - - - -
10 An Information Bottleneck Approach for Controlling Conciseness in Rationale Extraction 2020 EMNLP Quote - - - - - - - - - - -
11 An Interpretable Knowledge Transfer Model for Knowledge Base Completion 2017 ACL - - - - - - Quote - - - - -
12 An Interpretable Reasoning Network for Multi-Relation Question Answering 2018 COLING - - - - - - Quote - - - - -
13 Analysing the potential of seq-to-seq models for incremental interpretation in task-oriented dialogue 2018 BlackboxNLP Quote Quote - - - - - - - - - -
14 Analysis methods in neural language processing: A survey 2019 TACL Quote Quote - - - - - Quote - - - -
15 Analytical methods for interpretable ultradense word embeddings 2019 EMNLP - - - - - - Quote - - - - -
16 Analyzing the Structure of Attention in a Transformer Language Model 2019 BlackboxNLP Quote - - - - - - - - - - -
17 Anchors: High-Precision Model-Agnostic Explanations 2018 AAAI - - - - Quote - - - - - - -
18 Are sixteen heads really better than one? 2019 NeurIPS Quote - - - - - - - - - - -
19 Assessing social and intersectional biases in contextualized word representations 2019 NeurIPS - Quote - - - - - - - - - -
20 Attention interpretability across nlp tasks 2019 Arxiv Quote - - - - - - - - - - -
21 Attention is not Explanation 2019 NAACL Quote - - - - - - - - - - -
22 Attention is not not Explanation 2019 EMNLP Quote - - - - - - - - - - -
23 AttentionMeSH: Simple, Effective and Interpretable Automatic MeSH Indexer 2018 Proceedings of the 6th BioASQ Workshop: A challenge on large-scale biomedical semantic indexing and question answering Quote - - - - - - - - - - -
24 Auditing deep learning processes through kernel-based explanatory models. 2019 EMNLP-IJCNLP - - - - - - - Quote - - - -
25 Automatic rule extraction from long short term memory networks 2017 ICLR Quote - - - - - - - - - - -
26 BERT Rediscovers the Classical NLP Pipeline 2019 ACL - Quote - - - - - - - - - -
27 Beyond Word Importance: Contextual Decomposition to Extract Interactions from LSTMs 2018 ICLR Quote - - - - - - - - - - -
28 Captum: A unified and generic model interpretability library for PyTorch 2020 Arxiv Quote - - - - - - - - - - -
29 Chains-of-Reasoning at TextGraphs 2019 Shared Task: Reasoning over Chains of Facts for Explainable Multi-hop Inference 2019 EMNLP - - Quote - - - - - - - - -
30 CNM: An Interpretable Complex-valued Network for Matching 2019 NAACL Quote - - - - - - - - - - -
31 COGS: A Compositional Generalization Challenge Based on Semantic Interpretation 2020 EMNLP - - Quote - - - - - - - - -
32 Cold-Start and Interpretability: Turning Regular Expressions into Trainable Recurrent Neural Networks 2020 EMNLP - - - - Quote - - - - - - -
33 Comparing Automatic and Human Evaluation of Local Explanations for Text Classification 2018 NAACL Quote - - - - - - - - - - -
34 Compositional Explanations of Neurons 2020 NeurIPS - - - - - - Quote - - - - -
35 Constructing Interpretive Spatio-Temporal Features for Multi-Turn Responses Selection 2019 ACL Quote - - - - - - - - - - -
36 Deconfounded lexicon induction for interpretable social science 2018 NAACL Quote - - - - - - - - - - -
37 Deconfounded Lexicon Induction for Interpretable Social Science 2018 NAACL - - - - - - Quote - - - - -
38 Designing and Interpreting Probes with Control Tasks 2019 EMNLP - Quote Quote - - - - - - - - -
39 Detecting and Explaining Causes From Text For a Time Series Event 2017 EMNLP - - Quote - - - - - - - - -
40 Detecting Linguistic Characteristics of Alzheimer’s Dementia by Interpreting Neural Models 2018 NAACL Quote - - Quote - - - - - - - -
41 Did the Model Understand the Question? 2018 ACL Quote - - - - - - - - - - -
42 Dissonance Between Human and Machine Understanding 2019 CSCW - - - - - - - - - - - -
43 Do Human Rationales Improve Machine Explanations? 2019 BlackboxNLP Quote - - - - - - - - - - -
44 Do Multi-hop Readers Dream of Reasoning Chains? 2019 ACL - - Quote - - - - - - - - -
45 Do Neural Dialog Systems Use the Conversation History Effectively? An Empirical Study 2019 ACL - - - - - - - - Quote - - -
46 Does it care what you asked? Understanding Importance of Verbs in Deep Learning QA System 2018 BlackboxNLP Quote - - - - - - - - - - -
47 Does String-Based Neural MT Learn Source Syntax? 2016 EMNLP - - - - Quote - - - - - - -
48 Does String-Based Neural MT Learn Source Syntax? 2016 EMNLP - - Quote - - - - - - - - -
49 DTCA: Decision Tree-based Co-Attention Networks for Explainable Claim Verification 2020 ACL Quote - Quote - - - - - - - - -
50 e-snli: Natural language inference with natural language explanations 2018 NeurIPS Quote - - - - Quote - - - - - -
51 EditNTS: An Neural Programmer-Interpreter Model for Sentence Simplification through Explicit Editing 2019 ACL Quote - - - Quote - - - - - - -
52 Educe: Explaining model decisions through unsupervised concepts extraction 2019 Arxiv - - - - - - Quote - - - - -
53 Ensembling Visual Explanations for VQA 2017 Proceedings of the NIPS 2017 workshop on Visually-Grounded Interaction and Language (ViGIL) Quote - - - - - - - - - - -
54 Eraser: A benchmark to evaluate rationalized nlp models 2020 ACL Quote - - - - - - - - - - -
55 Evaluating and Characterizing Human Rationales 2020 EMNLP Quote - - - - - - - - - - -
56 Evaluating Explainable AI: Which Algorithmic Explanations Help Users Predict Model Behavior? 2020 ACL - - - - Quote - - Quote - - - Quote
57 Evaluating neural network explanation methods using hybrid documents and morphosyntactic agreement 2018 ACL Quote - - - - - - - - - - -
58 exbert: A visual analysis tool to explore learned representations in transformer models 2020 ACL Quote - - - Quote - - Quote - - - Quote
59 ExpBERT: Representation Engineering with Natural Language Explanations 2020 ACL - - - - Quote - - - - - - -
60 Explain Yourself! Leveraging Language Models for Commonsense Reasoning 2019 ACL - - - - - Quote - - - - - -
61 Explainable Active Learning (XAL): An Empirical Study of How Local Explanations Impact Annotator Experience 2020 CSCW Quote - - - - - - - - - - -
62 Explainable Automated Fact-Checking for Public Health Claims 2020 EMNLP - - - - - Quote - - - - - -
63 Explainable Clinical Decision Support from Text 2020 EMNLP Quote - - - - - - - - - - -
64 Explainable Prediction of Medical Codes from Clinical Text 2018 NAACL Quote - - - - - - - - - - -
65 Explaining Black Box Predictions and Unveiling Data Artifacts through Influence Functions 2020 ACL - - - - - - - Quote - - - -
66 Explaining Character-Aware Neural Networks for Word-Level Prediction: Do They Discover Linguistic Rules? 2018 EMNLP Quote - - - - - - - - - - -
67 Explaining non-linear Classifier Decisions within Kernel-based Deep Architectures 2018 BlackboxNLP - - - - - - - Quote - - - -
68 Explaining Simple Natural Language Inference 2019 ACL - - - - - Quote - - - - - -
69 Explaining the Stars: Weighted Multiple-Instance Learning for Aspect-Based Sentiment Analysis 2014 EMNLP - - - - - - - Quote - Quote - -
70 Exploiting Structure in Representation of Named Entities using Active Learning 2018 COLING - - - - Quote - - - - - - -
71 Explore, Propose, and Assemble: An Interpretable Model for Multi-Hop Reading Comprehension 2019 ACL - - Quote - - - - - - - - -
72 Exploring Interpretability in Event Extraction: Multitask Learning of a Neural Event Classifier and an Explanation Decoder 2020 ACL - - - - Quote - - - - - - -
73 F1 is Not Enough! Models and Evaluation Towards User-Centered Explainable Question Answering 2020 EMNLP - - - - - Quote - - - - - -
74 FIND: Human-in-the-Loop Debugging Deep Text Classifiers 2020 EMNLP - - - - - - - - - Quote - -
75 Fine-grained analysis of sentence embeddings using auxiliary prediction tasks 2017 ICLR - Quote - - - - - - - - - -
76 Generating Fact Checking Explanations 2020 ACL Quote - - - - - - - - - - -
77 Generating question relevant captions to aid visual question answering 2019 ACL Quote - - - - Quote - - - - - -
78 Generating Token-Level Explanations for Natural Language Inference 2019 NAACL Quote - - - - - - - - - - -
79 GEval: Tool for Debugging NLP Datasets and Models 2019 BlackboxNLP Quote - - - - - - - - - - Quote
80 Global model interpretation via recursive partitioning 2018 IEEE 4th International Conference on Data Science and Systems (HPCC/SmartCity/DSS) - - Quote - - - - - - - - -
81 GLUCOSE: GeneraLized and COntextualized Story Explanations 2020 EMNLP - - - - Quote - - - - - - -
82 Guiding the Flowing of Semantics: Interpretable Video Captioning via POS Tag 2019 EMNLP - - - - - - Quote - - - - -
83 HEIDL: Learning Linguistic Expressions with Deep Learning and Human-in-the-Loop 2019 ACL - - - - Quote - - Quote - - - -
84 HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering 2018 EMNLP Quote - - - - - - - - - - -
85 How contextual are contextualized word representations? Comparing the geometry of BERT, ELMo, and GPT-2 embeddings 2019 EMNLP-IJCNLP - Quote - Quote - - - - - - - -
86 How do Decisions Emerge across Layers in Neural Models? Interpretation with Differentiable Masking 2020 EMNLP Quote - - - - - - - - - - -
87 How Important is a Neuron 2019 ICLR Quote - - - - - - - - - - -
88 How much should you ask? On the question structure in QA systems 2018 BlackboxNLP Quote - - - - - - - - - - -
89 How Useful Are the Machine-Generated Interpretations to General Users? A Human Evaluation on Guessing the Incorrectly Predicted Labels 2020 HCOMP Quote - - - - - - - - - - -
90 Human Attention in Visual Question Answering: Do Humans and Deep Networks look at the same regions? 2016 EMNLP Quote - - - - - - - - - - -
91 Human Attention Maps for Text Classification: Do Humans and Neural Networks Focus on the Same Words? 2020 ACL Quote - - - - - - - - - - -
92 Human-grounded Evaluations of Explanation Methods for Text Classification 2019 EMNLP-IJCNLP Quote - - - - - - - - - - -
93 Identification, interpretability, and Bayesian word embeddings 2019 NAACL - - - - - - Quote - - - - -
94 Identifying and Controlling Important Neurons in Neural Machine Translation 2019 ICLR Quote - - - - - - - - - - -
95 Imparting Interpretability to Word Embeddings while Preserving Semantic Structure 2018 IEEE/ACM Transactions on Audio, Speech, and Language Processing - - - - - - Quote - - - - -
96 Improving Abstractive Document Summarization with Salient Information Modeling 2019 ACL Quote - - - - - - - - - - -
97 Interpretable emoji prediction via label-wise attention LSTMs 2018 EMNLP Quote - - - - - - - - - - -
98 Interpretable Entity Representations through Large-Scale Typing 2020 EMNLP - - - - - - Quote - - - - -
99 Interpretable Multi-dataset Evaluation for Named Entity Recognition 2020 EMNLP Quote - Quote - - - - - - - - -
100 Interpretable Neural Architectures for Attributing an Ad’s Performance to its Writing Style 2018 BlackboxNLP Quote - - - - - - - - - - -
101 Interpretable neural predictions with differentiable binary variables 2019 ACL Quote - - - - - - - - - - -
102 Interpretable Question Answering on Knowledge Bases and Text 2019 ACL Quote - - - - - - - - - - -
103 Interpretable Question Answering on Knowledge Bases and Text 2019 ACL Quote - - - - - - - - - - -
104 Interpretable Relevant Emotion Ranking with Event-Driven Attention 2019 EMNLP Quote - - - - - - - - - - -
105 Interpretable Word Embeddings via Informative Priors 2019 EMNLP - - - - - - Quote - - - - -
106 Interpretation of Natural Language Rules in Conversational Machine Reading 2018 EMNLP - - - - Quote - - - - - - -
107 Interpretation of NLP models through input marginalization 2020 EMNLP Quote - - - - - - - - - - -
108 Interpreting Neural Network Hate Speech Classifiers 2018 EMNLP Quote - - - - - - - - - - -
109 Interpreting neural networks to improve politeness comprehension. 2016 EMNLP Quote - - Quote - - - - - - - -
110 Interpreting Neural Networks with Nearest Neighbors 2018 BlackboxNLP Quote - - - - - - - - - - -
111 Interpreting Open-Domain Modifiers: Decomposition of Wikipedia Categories into Disambiguated Property-Value Pairs 2020 EMNLP Quote - - - - - - - - - - -
112 Interpreting Pretrained Contextualized Representations via Reductions to Static Embeddings 2020 ACL - - - - - - Quote - - - - -
113 Interpreting recurrent and attention-based neural models: a case study on natural language inference 2018 EMNLP Quote - - - - - - - - - - -
114 Invariant Rationalization 2020 ICML Quote - - - - - - - - - - -
115 Investigating Robustness and Interpretability of Link Prediction via Adversarial Modifications 2019 NAACL - - - - Quote - - - - - - -
116 Is attention interpretable? 2019 ACL Quote - - - - - - - - - - -
117 Iterative Recursive Attention Model for Interpretable Sequence Classification 2018 BlackboxNLP Quote - - - - - - - - - - -
118 Joint Concept Learning and Semantic Parsing from Natural Language Explanations 2017 EMNLP - - - - - - Quote - - - - -
119 KERMIT: Complementing Transformer Architectures with Encoders of Explicit Syntactic Interpretations 2020 EMNLP Quote - Quote - - - - - - - - -
120 Knowledge Aware Conversation Generation with Explainable Reasoning over Augmented Graphs 2019 EMNLP - - Quote - - - - - - - - -
121 Latent alignment and variational attention 2018 NeurIPS Quote - - - - - - - - - - -
122 Leakage-Adjusted Simulatability: Can Models Generate Non-Trivial Explanations of Their Behavior in Natural Language? 2020 EMNLP Findings - - - - - Quote - - - - - -
123 LEAN-LIFE: A Label-Efficient Annotation Framework Towards Learning from Explanation 2020 ACL Quote - Quote - - Quote - - - - - -
124 Learning concept embeddings for dataless classification via efficient bag-of-concepts densification 2019 Knowledge and Information Systems - - - - - - Quote - - - - -
125 Learning Corresponded Rationales for Text Matching 2019 ICLR Quote - - - - - - - - - - -
126 Learning credible deep neural networks with rationale regularization 2019 ICDM Quote - - - - - - - - - - -
127 Learning Dynamics of Attention: Human Prior for Interpretable Machine Reasoning 2019 NeurIPS Quote - - - - - - - - - - -
128 Learning Explainable Linguistic Expressions with Neural Inductive Logic Programming for Sentence Classification 2020 EMNLP - - - - Quote - - - - - - -
129 Learning Explanations from Language Data 2018 BlackboxNLP Quote - - - - - - - - - - -
130 Learning from Explanations with Neural Execution Tree 2020 ICLR - - - - Quote - - - - - - -
131 Learning interpretable negation rules via weak supervision at document level: A reinforcement learning approach 2019 NAACL - - - - Quote - - - - - - -
132 Learning Interpretable Relationships between Entities, Relations and Concepts via Bayesian Structure Learning on Open Domain Facts 2020 ACL - - Quote - - - Quote - - - - -
133 Learning to Explain Entity Relationships in Knowledge Graphs 2015 ACL - - Quote - - - - - - - - -
134 Learning to Explain: Answering Why-Questions via Rephrasing 2019 ACL - - - - - Quote - - - - - -
135 Learning to Explain: Datasets and Models for Identifying Valid Reasoning Chains in Multihop Question-Answering 2020 EMNLP - - - - Quote - - - - - - -
136 Learning Variational Word Masks to Improve the Interpretability of Neural Text Classifiers 2020 EMNLP Quote - - - - - - - - Quote - -
137 Lightly-supervised representation learning with global interpretability 2019 NAACL - - - Quote - - - - - - - -
138 Linguistic Knowledge and Transferability of Contextual Representations 2019 NAACL - Quote - - - - - - - - - -
139 LISA: Explaining Recurrent Neural Network Judgments via Layer-wIse Semantic Accumulation and Example to Pattern Transformation 2018 BlackboxNLP Quote - - Quote - - - - - - - -
140 Localizing Moments in Video With Natural Language 2017 ICCV - - - - - Quote - - - - - -
141 Lstmvis: A tool for visual analysis of hidden state dynamics in recurrent neural networks 2017 IEEE transactions on visualization and computer graphics Quote - Quote Quote Quote - - Quote - - - -
142 Machine Guides, Human Supervises: Interactive Learning with Global Explanations 2020 Arxiv - - - Quote - - - - - - - -
143 MathQA: Towards interpretable math word problem solving with operation-based formalisms. 2019 NAACL - - - - Quote - - - - - - -
144 Modeling Paths for Explainable Knowledge Base Completion 2019 ACL - - Quote - - - - - - - - -
145 Multi-Granular Text Encoding for Self-Explaining Categorization 2019 ACL Quote - Quote - - - - - - - - -
146 Multi-hop question answering via reasoning chains 2019 Arxiv - - Quote - - - - - - - - -
147 Multimodal language analysis in the wild: Cmu-mosei dataset and interpretable dynamic fusion graph 2018 ACL - - Quote - - - - - - Quote - -
148 Multimodal Routing: Improving Local and Global Interpretability of Multimodal Language Analysis 2020 EMNLP - - - - - - Quote - - - - -
149 Natural Language Rationales with Full-Stack Visual Reasoning: From Pixels to Semantic Frames to Commonsense Graphs 2020 EMNLP - - - - - Quote - - - - - -
150 Neural vector conceptualization for word vector space interpretation 2019 NAACL - - - - - - Quote - - - - -
151 No Explainability without Accountability: An Empirical Study of Explanations and Feedback in Interactive ML 2020 CHI Quote - - - - - - - - - - -
152 Obtaining Faithful Interpretations from Compositional Neural Networks 2020 ACL - - - - Quote - - - - - - -
153 Open Sesame: Getting Inside BERT's Linguistic Knowledge 2019 BlackboxNLP - Quote - - - - - - - - - -
154 OpenDialKG: Explainable Conversational Reasoning with Attention-based Walks over Knowledge Graphs 2019 ACL - - Quote - - - - - - - - -
155 Pathologies of Neural Models Make Interpretations Difficult 2018 EMNLP Quote - - - - - - - Quote - - -
156 Perturbed Masking: Parameter-free Probing for Analyzing and Interpreting BERT 2020 ACL - Quote - - - - - - - - - -
157 Predicting and interpreting embeddings for out of vocabulary words in downstream tasks 2018 BlackboxNLP Quote - - - - - - - - - - -
158 Principles of Explanatory Debugging to Personalize Interactive Machine Learning 2015 IUI Quote - - - - - - - - - - -
159 Probing Emergent Semantics in Predictive Agents via Question Answering 2020 Arxiv - Quote - - - - - - - - - -
160 Probing for semantic evidence of composition by means of simple classification tasks 2016 Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP - Quote - - - - - - - - - -
161 Probing Neural Dialog Models for Conversational Understanding 2020 ACL-NLP4ConvAI - Quote - - - - - - - - - -
162 Program Induction by Rationale Generation: Learning to Solve and Explain Algebraic Word Problems 2017 ACL Quote - - - - Quote - - - - - -
163 PROVER: Proof Generation for Interpretable Reasoning over Rules 2020 EMNLP - - - - Quote - - - - - - -
164 Quick and (not so) Dirty: Unsupervised Selection of Justification Sentences for Multi-hop Question Answering 2020 EMNLP Quote - - - - - - - - - - -
165 Quint: Interpretable question answering over knowledge bases. 2017 EMNLP - - Quote - - - - - - - - -
166 Rationalizing Neural Predictions 2016 EMNLP Quote - - - - - - - - - - -
167 Rethinking Cooperative Rationalization: Introspective Extraction and Complement Control 2019 EMNLP Quote - - - - - - - - - - -
168 Saliency-driven word alignment interpretation for neural machine translation 2019 ACL Quote - - - - - - - - - - -
169 Self-Assembling Modular Networks for Interpretable Multi-Hop Reasoning 2019 EMNLP Quote - - - Quote - - - - - - -
170 Self-Critical Reasoning for Robust Visual Question Answering 2019 NeurIPS Quote - - Quote - - - - - - - -
171 Self-Explaining Structures Improve NLP Models 2020 Arxiv Quote - - - - - - - - - - -
172 Seq2seq-vis: A visual debugging tool for sequence-to-sequence models 2018 IEEE transactions on visualization and computer graphics Quote - Quote Quote - - - Quote - - - -
173 Show, attend and tell: Neural image caption generation with visual attention 2015 ICML Quote - - - - - - - - - - -
174 SPINE: SParse Interpretable Neural Embeddings 2018 AAAI - - - - - - Quote - - - - -
175 Tell-and-answer: Towards explainable visual question answering using attributes and captions 2018 EMNLP - - - - - Quote - - - - - -
176 The elephant in the interpretability room: Why use attention as explanation when we have saliency methods? 2020 BlackboxNLP Quote - - - - - - - - - - -
177 The Language Interpretability Tool: Extensible, Interactive Visualizations and Analysis for NLP Models 2020 EMNLP Quote - Quote Quote Quote Quote - Quote - - - Quote
178 The Promise and Peril of Human Evaluation for Model Interpretability 2019 NeurIPS 2017 Symposium on Interpretable Machine Learning - - - Quote - Quote - - - - - -
179 Toward Machine-Guided, Human-Initiated Explanatory Interactive Learning 2020 TAILOR workshop at ECAI - - - Quote - - - - - - - -
180 Towards Accountable AI: Hybrid Human-Machine Analyses for Characterizing System Failure 2018 HCOMP - - Quote - - - Quote - - - - -
181 Towards Explainable NLP: A Generative Explanation Framework for Text Classification 2019 ACL - - - - - Quote - - - - - -
182 Towards Faithfully Interpretable NLP Systems: How should we define and evaluate faithfulness? 2020 ACL - - - - - - - - - - - -
183 Towards Interpretable Reasoning over Paragraph Effects in Situation 2020 EMNLP Quote - - - - - - - - - - -
184 Towards Transparent and Explainable Attention Models 2020 ACL Quote - - - - - - - - - - -
185 Train, Sort, Explain: Learning to Diagnose Translation Models 2019 NAACL Quote - - - - - - - - - - -
186 Training Classifiers with Natural Language Explanations 2018 ACL - - - - Quote Quote - - - - - -
187 Transformers as Soft Reasoners over Language 2020 IJCAI - - - - Quote Quote - - - - - -
188 Trick Me If You Can: Human-in-the-Loop Generation of Adversarial Examples for Question Answering 2019 TACL Quote - - - - - - - - - - -
189 Under the Hood: Using Diagnostic Classifiers to Investigate and Improve how Language Models Track Agreement Information 2018 BlackboxNLP - Quote - - - - - - - - - -
190 Understanding black-box predictions via influence functions 2017 ICML - - - - - - - Quote - - - -
191 Understanding Convolutional Neural Networks for Text Classification 2018 EMNLP Quote - - - - - - - - - - -
192 Understanding Neural Abstractive Summarization Models via Uncertainty 2020 EMNLP - - - - - - - - - - - Quote
193 Understanding neural networks through representation erasure 2016 Arxiv Quote - - - - - - - - - - -
194 Universal adversarial triggers for attacking and analyzing NLP 2019 EMNLP-IJCNLP - - - - - Quote - - Quote - - -
195 Unsupervised Alignment-based Iterative Evidence Retrieval for Multi-hop Question Answering 2020 ACL - - - - - - - Quote - - - -
196 Unsupervised Discrete Sentence Representation Learning for Interpretable Neural Dialog Generation 2018 ACL - - - - - - Quote - - - - -
197 Unsupervised Token-wise Alignment to Improve Interpretation of Encoder-Decoder Models 2018 BlackboxNLP Quote - - - - - - - - - - -
198 Unsupervised, Knowledge-Free, and Interpretable Word Sense Disambiguation 2017 EMNLP - - - - - - - Quote - - Quote -
199 Using “Annotator Rationales” to Improve Machine Learning for Text Categorization 2007 NAACL Quote - - - - - - - - - - -
200 Using Explanations to Improve Ensembling of Visual Question Answering Systems 2017 Proceedings of the IJCAI 2017 Workshop on Explainable Artificial Intelligence (XAI) Quote - - - - - - - - - - -
201 Using regional saliency for speech emotion recognition. 2017 ICASSP (IEEE International Conference on Acoustics, Speech and Signal Processing) Quote - - - - - - - - - - -
202 Visualisation and 'diagnostic classifiers' reveal how recurrent and recursive neural networks process hierarchical structure 2018 Journal of Artificial Intelligence Research Quote Quote Quote - - - - - - - - -
203 Visualizing and Understanding Neural Machine Translation 2018 ACL Quote - - - - - - - - - - -
204 Visualizing and Understanding Neural Models in NLP 2016 NAACL Quote - Quote - - - - - - - - -
205 Visualizing and Understanding the Effectiveness of BERT 2019 EMNLP-IJCNLP - - - Quote - - - - - - - -
206 Vokenization: Improving Language Understanding with Contextualized, Visual-Grounded Supervision 2020 EMNLP - - - - - - - - - - Quote -
207 Vqa-e: Explaining, elaborating, and enhancing your answers for visual questions 2018 ECCV - - Quote - - Quote - - - - - -
208 What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models 2020 TACL - Quote - - - - - - - - - -
209 What can AI do for me? evaluating machine learning interpretations in cooperative play 2019 IUI Quote - - - - - - Quote - - - Quote
210 What do Neural Machine Translation Models Learn about Morphology? 2017 ACL - Quote - - - - - - - - - -
211 What do you learn from context? Probing for sentence structure in contextualized word representations 2019 ICLR - Quote - - - - - - - - - -
212 What Does BERT Learn about the Structure of Language? 2019 ACL - Quote Quote Quote - - - - - - - -
213 What does bert look at? an analysis of bert's attention 2019 BlackboxNLP - Quote Quote Quote - - - - - - - -
214 What does this word mean? explaining contextualized embeddings with natural language definition 2019 EMNLP - - - - - - Quote - - - - -
215 What is one grain of sand in the desert? analyzing individual neurons in deep nlp models 2019 AAAI Quote - - - - - - - - - - -
216 What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties 2018 ACL - Quote - - - - - - - - - -
217 Why Attention is Not Explanation: Surgical Intervention and Causal Reasoning about Neural Models 2020 LREC Quote - - - - - - - - - - -
218 Word2Sense: sparse interpretable word embeddings 2019 ACL - - - - - - Quote - - - - -