
> this depends on the antecedent claim that much if not all of reason is strictly representational and strictly linguistic. It's not obvious to me that this is the case

I'm with you on this. Software engineers talk about being in the flow when they are at their most productive. For me, the telltale sign of being in the flow is that I'm no longer thinking in English, but somehow navigating the problem / solution space more intuitively. The same thing happens in many other domains. We learn to walk long before we have the language for all the cognitive processes required. I don't think we deeply understand what's going on in these situations, so how are we going to build something to emulate it? I certainly don't consciously predict the next token, especially when I'm in the flow.

And why would we try to emulate how we do it? I'd much rather have technology that complements. I want different failure modes and different abilities so that we can achieve more with these tools than we could by just adding subservient humans. The good news is that everything we've built so far is succeeding at this!

We'll know that society is finally starting to understand these technologies and how to apply them when we can get away from using science fiction tropes to talk about them. The people I know who develop LLMs for a living, and the others I know who are creating the most interesting applications of them, already talk about them as tools without any need to anthropomorphize. It's sad to watch their frustration as they are slowed down every time a person in power shows up with a vision based on assumptions of human-like qualities rather than a vision informed by the actual qualities of the technology.

Maybe I'm being too harsh or impatient? I suppose we had to slowly come to understand the unique qualities of a "car" before we could stop limiting our thinking by referring to it as a "horseless carriage".



Couldn't agree more. I look forward to the other side of this current craze where we actually have reasonable language around what these machines are best for.

On a more general level, I also never understood this urge to build machines that are "just like us". Like you, I want machines that, arguably, are best characterized by the ways in which they are not like us—more reliable, more precise, serving a specific function. It's telling that critiques of the failures of LLMs are often met with "humans have the same problems"—why are humans the bar? We have plenty of humans. We don't need more humans. If we're investing so much time and energy, shouldn't the bar be better than humans? And if it isn't, why isn't it? Oh, right, it's because human error is actually good enough, and the real benefit of these tools is that they are humans that can work without break, don't have autonomy, and that you don't need to listen to or pay. The main beneficiaries of this path are capital owners who just want free labor. That's literally all this is. People who actually want to build stuff want precision machines that are tailored for the task at hand, not some grab bag of sort-of-works-sometimes stochastic doohickeys.



