Statue of Socrates with a laptop on his lap

From Socrates to ChatGPT: The Ancient Lesson AI-powered Language Models Have Yet to Learn

OMSCS instructor Santosh Vempala is a co-author of "Why Language Models Hallucinate", a research study from OpenAI and Georgia Tech released in September. He says there is a direct correlation between an LLM's hallucination rate and how often it misclassifies whether a given response is valid. "This means that if the model can't tell fact from fiction, it will hallucinate."
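The connection between misclassification and hallucination can be illustrated with a toy simulation (a hedged sketch, not the paper's actual analysis): imagine a generator that emits a candidate response only when its internal validity classifier labels it valid. With balanced valid and invalid candidates and a symmetric classifier error rate p, the fraction of emitted responses that are actually invalid (hallucinations) comes out to roughly p.

```python
import random

def hallucination_rate(misclassify_p, n=100_000, seed=0):
    """Toy model: emit a candidate response only if an (imperfect)
    validity classifier labels it valid, then measure how many emitted
    responses were actually invalid. This is an illustrative sketch of
    the correlation described above, not the paper's formal result."""
    rng = random.Random(seed)
    emitted = hallucinated = 0
    while emitted < n:
        actually_valid = rng.random() < 0.5  # balanced valid/invalid pool
        # classifier flips the true label with probability misclassify_p
        classified_valid = actually_valid ^ (rng.random() < misclassify_p)
        if classified_valid:  # generator emits whatever it believes is valid
            emitted += 1
            hallucinated += not actually_valid
    return hallucinated / emitted
```

Running it with a 20% misclassification rate yields a hallucination rate near 20%, matching the intuition that a model unable to tell fact from fiction will hallucinate at a comparable rate.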
Read more at cc.gatech.edu
