arXiv Paper – Reverse Question Answering: Can an LLM Write a Question so Hard (or Bad) that it Can’t Answer?

In this episode, we discuss Reverse Question Answering: Can an LLM Write a Question so Hard (or Bad) that it Can’t Answer? by Nishant Balepur, Feng Gu, Abhilasha Ravichander, Shi Feng, Jordan Boyd-Graber, and Rachel Rudinger. The paper studies the reverse question answering (RQA) task, in which a model must generate a question for a given answer, and benchmarks 16 large language models (LLMs) on RQA against traditional question answering (QA). The study finds that LLMs are less accurate at RQA for numerical answers than for textual ones, yet they can often correctly answer the flawed questions they themselves generate in traditional QA, indicating that RQA errors are not solely due to knowledge gaps. The findings also show that RQA errors correlate with question difficulty and are inversely related to how frequently the answer appears in the data corpus, highlight the difficulty of generating valid multi-hop questions, and point to areas for improving LLM reasoning in RQA.
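To make the setup concrete, here is a minimal sketch of the RQA-to-QA round trip the paper evaluates: generate a question for a given answer, feed that question back to the same model, and check whether the forward answer matches. The `query_llm` helper is a hypothetical stand-in for whatever model client you use, and the crude string-match scoring is an assumption of ours, not the paper's actual evaluation protocol.

```python
import re

def query_llm(prompt: str) -> str:
    """Hypothetical placeholder for an LLM call; swap in your client of choice."""
    raise NotImplementedError

def normalize(text: str) -> str:
    """Lowercase and strip punctuation/whitespace for a crude answer match."""
    return re.sub(r"[^a-z0-9]+", " ", text.lower()).strip()

def rqa_round_trip(answer: str) -> dict:
    """Run one RQA -> QA round trip:
    1. RQA: ask the model to write a question whose answer is `answer`.
    2. QA: feed the generated question back to the same model.
    3. Check whether the forward answer matches the original answer.
    """
    question = query_llm(
        f"Write a question whose answer is exactly: {answer}"
    ).strip()
    forward_answer = query_llm(
        f"Answer concisely: {question}"
    ).strip()
    # Loose string match; a real evaluation would score matches more carefully.
    consistent = normalize(forward_answer) == normalize(answer)
    return {
        "question": question,
        "forward_answer": forward_answer,
        "consistent": consistent,
    }
```

A gap between RQA and QA accuracy in this loop is the kind of signal the paper uses to argue that RQA failures are not purely knowledge gaps: if the model can answer the question it just generated, the knowledge was there all along.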

