ACL 2023 Tutorial:
Complex Reasoning in Natural Language

Wenting Zhao1, Mor Geva2, Bill Yuchen Lin3, Michihiro Yasunaga4, Aman Madaan5, Tao Yu6
1Cornell, 2Google, 3AI2, 4Stanford, 5CMU, 6HKU
*Equal Contribution

Sunday, July 9, 9:00–12:30 (EDT) @ Metropolitan Centre
We will take Q&A through Rocket.Chat: https://acl.rocket.chat/channel/tutorial-2

Slides may be updated, and the full paper lists will be posted soon.

About this tutorial

Teaching machines to reason over texts has been a long-standing goal of natural language processing (NLP). To this end, researchers have designed a diverse set of complex reasoning tasks that involve compositional reasoning, knowledge retrieval, grounding, commonsense reasoning, etc.

A standard choice for building systems that perform a desired type of reasoning is to fine-tune or prompt a language model (LM) on specific downstream tasks. However, recent research has demonstrated that this straightforward approach is often brittle and that the reasoning capabilities of such models remain at the surface level, i.e., they exploit patterns in the data rather than reason. Consequently, augmenting LMs with techniques that make their reasoning robust and effective has become an active research area.

This tutorial provides an overview of complex reasoning tasks where the standard application of pretrained language models fails, and then reviews recent promising directions for tackling them. Specifically, we focus on the following groups of approaches that explicitly consider problem structure: (1) knowledge-augmented methods, which incorporate knowledge either during fine-tuning or during pretraining; (2) few-shot prompting methods, which effectively guide models to follow instructions (a minimal sketch follows below); (3) neuro-symbolic methods, which produce explicit intermediate representations; and (4) rationale-based methods, one of the most popular forms of neuro-symbolic methods, which highlight subsets of the input as explanations for individual model predictions.
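To make approach (2) concrete, here is a minimal sketch of few-shot chain-of-thought prompting in Python: worked exemplars with step-by-step rationales are prepended to a new question so the LM imitates the reasoning format. The exemplar contents and the commented-out complete() call are illustrative assumptions, not part of the tutorial materials; any LM completion API can be substituted.

# A minimal sketch of few-shot (chain-of-thought) prompting, as covered
# in Section 5. The exemplar is illustrative; `complete` is a hypothetical
# stand-in for any LM completion API.

EXEMPLARS = [
    {
        "question": "Roger has 5 tennis balls. He buys 2 cans of 3 balls each. "
                    "How many balls does he have now?",
        "rationale": "Roger starts with 5 balls. 2 cans of 3 balls is 6 balls. "
                     "5 + 6 = 11.",
        "answer": "11",
    },
]

def build_prompt(question: str) -> str:
    """Prepend worked exemplars so the LM imitates step-by-step reasoning."""
    parts = []
    for ex in EXEMPLARS:
        # Each exemplar pairs a question with an explicit rationale,
        # ending in a fixed answer-extraction phrase.
        parts.append(f"Q: {ex['question']}\nA: {ex['rationale']} "
                     f"The answer is {ex['answer']}.")
    # The new question is appended last, with an open "A:" for the LM to fill.
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)

if __name__ == "__main__":
    print(build_prompt("A baker has 3 trays of 12 cookies and sells 10. "
                       "How many cookies are left?"))
    # In practice, the prompt is sent to an LM, e.g.:
    # answer = complete(build_prompt(question))  # hypothetical LM call

The key design choice is that the exemplars demonstrate the reasoning format (intermediate steps followed by "The answer is ..."), so the model's output can be both inspected and parsed consistently.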

Schedule

Our tutorial will be held on July 9 (all times are in EDT, i.e., Toronto local time):

Time Section Presenter
9:00–9:15 Section 1: Introduction [Slides] Wenting
9:15–9:40 Section 2: Benchmarks & Evaluation [Slides] Mor
9:40–10:05 Section 3: Knowledge-Augmented Approaches after Pretraining [Slides] Yuchen
10:05–10:30 Section 4: Knowledge-Augmented Approaches during Pretraining [Slides] Michi
10:30–11:00 Coffee break
11:00–11:30 Section 5: Few-Shot Prompting Approaches [Slides] Aman
11:30–12:00 Section 6: Neuro-Symbolic Approaches: LLMs + Tool Use [Slides] Tao
12:00–12:30 Section 7: Rationale-Based Approaches & Conclusions [Slides] Wenting

BibTeX

@article{complex-reasoning-tutorial,
  author  = {Zhao, Wenting and Geva, Mor and Lin, Bill Yuchen and Yasunaga, Michihiro and Madaan, Aman and Yu, Tao},
  title   = {ACL 2023 Tutorial: Complex Reasoning in Natural Language},
  journal = {ACL 2023},
  year    = {2023},
}