
Is there a reason human preference data is even needed? Don't LLMs already have a strong enough notion of question complexity to build a dataset for routing?


> a strong enough notion of question complexity

A.k.a. wisdom. No, LLMs don't have that. Neither do I; I usually have to step into the rabbit holes in order to detect them.


"Do you think you need to do high/medium/low amount of thinking to answer X?" seems well within an LLMs wheelhouse if the goal is to build an optimized routing engine.


How do you think an LLM would come by that information? Do you think LLM vendors are logging performance and feeding it back into the model, or is there some other mechanism?


Why not something dumb like this: https://chatgpt.com/share/68b60199-b6ac-8009-b50d-3e7cfff1d7... (gpt-4o)


Yes, that's why they keep getting better, and why Anthropic is switching its privacy policy defaults to "please eat my data."


LLMs don't have notions ... they are pattern matchers against a vast database of human text.


Please do a SELECT * from this database


What was the name of the rocket that brought the first humans into space?


This is like asking someone to make you a sandwich and expecting them to read your mind to determine what kind of sandwich you want.



