
Or you could understand the tool you are using and be skeptical of any of its output.

So many people just want to believe, instead of accepting the reality that LLMs are quite unreliable.

Personally, it's usually fairly obvious to me when LLMs are bullshitting, probably because I have lots of experience detecting it in humans.



An LLM is only useful if it gives a shortcut to information with reasonable accuracy. If I need to double-check everything, it's just an extra step.

In this case I just happened to be a domain expert and knew it was wrong. For someone less experienced, verifying everything would have required significant effort.



