Roee Aharoni will lecture on:
Measuring Factuality in Text Generation: When Language Models Are Twisting the Facts
Text generation is at the core of many NLP tasks, such as question answering, dialogue generation, machine translation, and text summarization. While current text generation models produce text that seems fluent and informative, their outputs often contain factual inconsistencies with respect to the inputs they rely on (a.k.a. "hallucinations"), making it hard to deploy such models in real-world applications.
In this talk I will present two of our recent works addressing these issues. First, I will describe KoBE (Gekhman et al., 2020), a knowledge-based approach for evaluating the quality of machine translation models, which uses multilingual entity resolution instead of human reference translations. I will then present Q^2 (Honovich et al., 2021), an automatic evaluation metric that combines question generation, question answering, and natural language inference to validate the outputs of dialogue generation models.
Roee Aharoni is a Research Scientist at Google Research, where he works on natural language processing. Prior to Google, Roee completed his Ph.D. in Computer Science at BIU's NLP lab under the supervision of Prof. Yoav Goldberg, and his M.Sc. in Computer Science under the supervision of Prof. Moshe Koppel.
Zoom link: https://us02web.zoom.us/j/83383478356
Building 216, Room 201