
> since it trusts inherently its training data and considers that the basis of all its responses.

Doesn't that make "hallucination" the better term? The LLM is "seeing" something in its training data that isn't actually reflected in reality. "Confabulation", by contrast, would imply that the LLM is creating data out of thin air, which would render the training data immaterial.

Both words, as they have historically been used, have to be stretched quite far to fit an artificial creation that bears little resemblance to what they originally described. At that point any word is about as good as any other, but "hallucination" requires less stretching. So I am curious why you like "confabulation" so much better. Perhaps it simply has a better ring to your ear?

But either way, these strained human analogies have grown tired. It is time to call it what it really is: Snorfleblat.


