
> bootstrap the initial KGs from existing LLMs

LLMs generate responses from statistical patterns in their training data; they do not understand or store an "absolute source of truth." Any KG bootstrapped from an LLM will therefore inherit not only the model's insights but also its inaccuracies and biases (hallucinations). Keep in mind that these hallucinations are not errors of logic: they are artifacts of training on vast, diverse datasets, and they reflect the statistical patterns in that data.
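
One mitigation, if you do bootstrap from an LLM, is to treat everything the model proposes as a candidate and gate it through an independent check before it enters the graph. A rough sketch in Python; llm_extract_triples is a hypothetical LLM wrapper, and trusted stands in for whatever external verifier you have (a curated database, retrieval against sources, human review):

    def llm_extract_triples(text):
        # Hypothetical LLM call returning candidate
        # (subject, predicate, object) tuples; may include
        # hallucinated facts.
        ...

    def bootstrap_kg(corpus, trusted):
        kg = set()
        for doc in corpus:
            for triple in llm_extract_triples(doc) or []:
                # Drop anything not independently supported.
                if triple in trusted:
                    kg.add(triple)
        return kg

The filter doesn't make the model truthful; it just keeps unverified facts out of the graph, at the cost of recall.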

Maybe you could build a retrieval model that way, but not a generative one.
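
That distinction is easy to make concrete: a retrieval-only answerer over a verified KG can return stored facts or abstain, but it cannot invent anything. A toy sketch, with purely illustrative triples:

    KG = {
        ("Paris", "capital_of", "France"),
        ("Berlin", "capital_of", "Germany"),
    }

    def retrieve(subject, predicate):
        matches = [o for (s, p, o) in KG
                   if s == subject and p == predicate]
        return matches or None  # abstain instead of guessing

    retrieve("Paris", "capital_of")  # ['France']
    retrieve("Paris", "population")  # None: no stored fact, no hallucination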



I thought adding "logical" constraints to the existing training loop, using KGs and logical validation, would help reduce wrong semantic formation during training itself. But your point stands: what if the whole knowledge graph is hallucinated during training in the first place?
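
To make the idea concrete, one way such a constraint could look (a sketch, not a worked-out method) is a consistency term added to the usual LM loss that penalizes the model when its triple scores disagree with the KG. The score_triples head and the batch layout are assumptions:

    import torch
    import torch.nn.functional as F

    def training_step(model, batch, lam=0.1):
        # Standard language-modeling objective.
        lm_loss = model.lm_loss(batch["text"])
        # Hypothetical scoring head: logits for whether each triple holds.
        scores = model.score_triples(batch["triples"])
        # labels: 1.0 if the KG asserts the triple, 0.0 if it marks it false.
        kg_loss = F.binary_cross_entropy_with_logits(scores, batch["labels"])
        # Joint objective: lam trades fluency against KG consistency.
        return lm_loss + lam * kg_loss

Of course this only pushes the model toward the KG; if the KG itself was bootstrapped from a hallucinating model, the constraint just bakes those errors in, which is exactly the circularity you point out.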

I don't have an answer to the hallucinated-KG problem. My feeling was that far fewer KG representations fit a logical world than fit into the vast vector space of a network's weights and biases. But that's just an idea. The whole thing stems from an internal intuition that language is secondary to my thought process: internally I feel I can play with concepts without language. What kind of Large X models would have that kind of capability, I don't know!



