
Humans also respond differently when prompted in different ways. For example, politeness often begets politeness. I would expect that to be reflected in training data.




If I, a moron, hire a PhD to crack a tough problem for me, I don't need to go back and forth prompting him at a PhD level. I can set him loose on my problem and he'll come back to me with a solution.

> hire a PhD to crack a tough problem for me, I don't need to go back and forth prompting him at a PhD level. I can set him loose on my problem and he'll come back to me with a solution.

In my experience, many PhDs are just as prone to getting off track or falling back on their pet techniques as LLMs! And many find it very hard to translate their work into everyday language, too...


The PhD can't read minds; the quality of the request from a moron would be worse than the quality of a request from someone of average intelligence, and the output would probably differ noticeably as a result.

Unless your problem fits the PhD's very narrow but very deep area of expertise, you're not going to get anything. The PhDs I have worked with can't tie their shoes because that wasn't in their dissertation.

Well, if it ever becomes a full replacement for PhDs, you'll know, because it will have already replaced you.

I think that's what is happening. It's simulating a conversation, after all. A bit like code-switching.

That seems like something you wouldn't want from your tools. Humans have that, and that's fine; people are people and have emotions. But I don't want my power drill asking me why I only call when I need something.

> Humans also respond differently when prompted in different ways.

And?



