
In-context learning (i.e., steering the LLM by adding all relevant info to the prompt) seems to be preferred over fine-tuning. I'm not sure exactly why; perhaps because fine-tuning is fairly expensive and often doesn't make sense in a dynamic setting? On the other hand, the entire vector-DB pipeline only exists because context size is relatively limited.
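To make the vector-DB point concrete: the whole pipeline boils down to "embed the query, retrieve the most similar documents, and stuff only those into the prompt." Here's a minimal sketch of that retrieve-then-prompt pattern; the bag-of-words "embedding" and the in-memory document list are toy stand-ins for a real embedding model and vector database, and all the names (`embed`, `build_prompt`, etc.) are made up for illustration:

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words vector; a real pipeline would call an embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Stand-in for a vector database: documents indexed by their embeddings.
documents = [
    "Fine-tuning updates model weights on a labeled dataset.",
    "A vector database stores document embeddings for similarity search.",
    "In-context learning puts relevant documents directly into the prompt.",
]

def build_prompt(question, docs, top_k=2):
    q = embed(question)
    # Retrieve the top_k most similar documents...
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    # ...because only a limited amount of text fits in the context window.
    context = "\n".join(ranked[:top_k])
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

prompt = build_prompt("How does a vector database help with prompts?", documents)
print(prompt)
```

The retrieval step exists purely to work around the limited context size: if the window were unbounded, you could skip the vector DB and paste everything in.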

I'd love to see a comparison of the two in terms of output accuracy and the degree of hallucination.


