Why language models hallucinate

Hallucinations are plausible but false statements generated by language models. They can show up in surprising ways, even for seemingly straightforward questions. For example, when we asked a widely used chatbot for the title of the PhD dissertation by Adam Tauman Kalai (an author of this paper), it confidently produced three different answers—none of them correct. When we asked for his birthday, it gave three different dates, likewise all wrong. 
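This repeated-query behavior is easy to reproduce. Below is a minimal sketch, assuming the OpenAI Python SDK and a hypothetical model name, that asks the same factual question several times and checks whether the sampled answers agree; mutually inconsistent answers are a simple warning sign of hallucination, since at most one of them can be right.

```python
# A minimal sketch, not the paper's method: sample one factual question several
# times and compare the answers for self-consistency.
from collections import Counter

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTION = "What is the title of Adam Tauman Kalai's PhD dissertation?"


def sample_answers(question: str, n: int = 3, model: str = "gpt-4o-mini") -> list[str]:
    """Ask the same question n times and collect the model's answers."""
    answers = []
    for _ in range(n):
        response = client.chat.completions.create(
            model=model,  # assumption: any chat-capable model name works here
            messages=[{"role": "user", "content": question}],
            temperature=1.0,  # ordinary sampling, so separate runs can differ
        )
        answers.append(response.choices[0].message.content.strip())
    return answers


if __name__ == "__main__":
    answers = sample_answers(QUESTION)
    counts = Counter(answers)
    print("Distinct answers:", len(counts))
    for answer, count in counts.items():
        print(f"  ({count}x) {answer}")
    # Several mutually inconsistent answers suggest the model is guessing
    # rather than recalling a known fact.
```

Note that agreement across samples is only a heuristic: a model can also be consistently wrong, as in the dissertation example above.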
