
I don't share any of the excitement that people like you appear to be feeling, and I recoil from the grandiose claims being made by people who, in my opinion, are being fooled by the 21st century equivalent of a ventriloquist's doll.

(I did feel excitement while following the development of AlphaZero and the Go matches it played, but that was because it was revealing greater depths and beauty in the human-created game of Go. And I maintain some interest in following the development of self-driving, particularly by Tesla.)

With regard to LLMs I can see how they could be useful. I think they're particularly useful when they work from a constrained corpus, so the user can know what they're drawing from (and thus the limitations of that knowledge base). The example site that has been posted by its maker to HN [1], where you can ask questions against a particular book, is a good one for showing the use of the tool I think. But it's just a tool and it's not in any way a breakthrough in our understanding of ourselves, of cognition, or anything like that. I think the people who are making these claims can't distinguish science fiction from actual reality. They are fantasists and I think they are leading themselves and others into delusion.

[1] https://portal.konjer.xyz/



I am not excited. I am terrified.

Right at the beginning of the current wave (2010-2012) of ML approaches I did some work on ML systems and NLP, and back then I clearly saw that nothing truly outstanding was happening; we were only starting to figure out what GPUs were capable of.

So all of this was fun: NLP, ML, vintage AI. But nothing about it felt groundbreaking, or like it would solve the fundamental problems of true AGI, or even come close.

Yet, 10 years later, here we are. Language is solved. In most areas I know /something/ about (programming, ML, NLP, compilers) this is huge and makes mountains of knowledge obsolete.


For me AlphaZero was boring. :-) The solution space is vast but the rules are simple. It was only a question of time before somebody put the pieces together. There was nothing unknowable about it, unlike natural languages, which were always a mystery to me. Even with all the syntax, grammars, linguistic knowledge, NLP... Something was lacking.


Interesting to have this contrast in perspectives. For me, the language generated by ChatGPT is flat and boring. No spark of human creativity or originality or flair. And this cheap trick of getting it to write in rhyme or 'in the style of' such and such I find awfully tacky.

I'm not saying AlphaZero was creative either. But because it was operating inside a system that was already beautiful and which had such a vast 'solution space' as you put it, its exploration into greater depths of that space I found intriguing.

I think that's the contrast for me. Machine learning can be useful and even intriguing inside constrained spaces. That's why I liked AlphaZero, working inside a very constrained (but deep) space. And why I also find Tesla's progress with self-driving interesting. It's a constrained task, even though it has a huge range of variables. And again why I find ChatGPT potentially useful in drawing from a constrained corpus but still don't find the language it generates appealing. It comes across as exactly what it is - machine-generated text.


The breakthrough of ChatGPT is not a brilliant literary work per se.

It's how it interprets what people write and provides coherent answers. This was not possible previously.

AlphaZero and chess algos do not have to break this barrier; they work from a very clear and well-defined input. It was clear that a mixture of machine brute force and smart thinking would eventually beat us at these games. No magic here. Alpha family algos are /very/ understandable.

Language, on the contrary, is fundamentally not very well defined. It is flawed, fluid, diverse... not possible to formalize and make properly machine-readable. All the smaller bits (words, syntax, etc.) are easy. But how these things come together - this can only be vaguely described through rigid formal grammars, never fully.
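To make that concrete, here is a minimal toy sketch in Python (the grammar rules and sentences are made up purely for illustration): the individual rules are trivial to write down, yet any fixed rule set only ever recognizes a sliver of real language.

    # Toy context-free grammar: each rule is trivial on its own,
    # but the rule set as a whole covers almost nothing of real language.
    GRAMMAR = {
        "S":  [["NP", "VP"]],
        "NP": [["the", "N"]],
        "VP": [["V", "NP"]],
        "N":  [["dog"], ["cat"]],
        "V":  [["chased"], ["saw"]],
    }

    def derives(symbols, words):
        """True if the symbol sequence can derive exactly the given word list."""
        if not symbols:
            return not words
        head, rest = symbols[0], symbols[1:]
        if head in GRAMMAR:  # non-terminal: try each expansion
            return any(derives(list(exp) + rest, words) for exp in GRAMMAR[head])
        # terminal: must match the next word
        return bool(words) and words[0] == head and derives(rest, words[1:])

    print(derives(["S"], "the dog chased the cat".split()))        # True
    print(derives(["S"], "the cat saw the dog".split()))           # True
    print(derives(["S"], "language is fluid and flawed".split()))  # False: outside the rules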

Compare that to how, on the lowest level, we understand our brain very well. Every neuron is a trivial building brick. It's how super-complex functions of input to output arise from these trivial pieces - that's amazing. Every neural network is unique. Abstractions, layers of knowledge - everything is there. And it's kind of unique for every human, so unknowable in the general case...
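As a rough illustration only (a toy sketch with random, untrained weights, not a model of a real brain): each unit is a trivial weighted sum plus a squashing function, and all the complexity of the input-to-output map comes from wiring many of them together.

    import numpy as np

    rng = np.random.default_rng(0)

    def unit_layer(x, w, b):
        # each "neuron" is trivial: weighted sum, then a squashing nonlinearity
        return np.tanh(w @ x + b)

    x = rng.normal(size=4)                                  # some input
    w1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)    # 8 simple units
    w2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)    # 1 more on top

    hidden = unit_layer(x, w1, b1)
    output = unit_layer(hidden, w2, b2)   # composed into a hard-to-describe function
    print(output)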


Your third paragraph describes language pretty well (although I'd quibble with formal grammars only being 'vague' in their coverage - I think they do a pretty good job although I agree they can never be perfect). And I appreciate the achievement of LLMs in being able to take in language prompts and return useful responses. So it's an achievement that is useful certainly in providing a 'natural language' querying and information collating tool. (I agree here with your second paragraph.)

But it remains a tool and a derivative one. You will see people in these recent HN threads making grandiose claims about LLMs 'reasoning' and 'innovating' and 'trying new things' (I replied negatively under a comment just like this in this thread). LLMs can't and will never be able to do these things because, as I've already said, they are completely derivative. They may, by collating information and presenting it to the user, provoke new insights in the human user's mind. But they won't be forming any new insights themselves, because they are machines and machines are not alive, they are not intelligent, and they cannot think or reason (even if a machine model can 'learn').


> LLMs can't and will never be able to do these things because, as I've already said, they are completely derivative.

I agree, they are completely derivative. And so are you and I. We have copied everything we know, either from other humans or from whatever we have learned from our simple senses.

I'm not asking you to bet that LLMs will do any of those things really, I suppose it's not a guarantee that anything will improve to a certain point. But I am cautioning not to bet heavily against it because, after witnessing what this generation of LLM is capable of, I no longer believe there's anything fundamentally different about human brains, so, to me, it's like asking if an x86-64 PC will ever be able to emulate a PS5. Maybe not today, but I don't see any reason why a typical PC in 10 or 15 years would have trouble.


Well... complaining about people online or in the media making grandiose claims is like fighting the wind.

I totally see your point about inherent "derivativeness" of LLMs. This is true.

But note how "being alive" or "being intelligent" or "being able to think" are hard to define. I'd go for the "duck test" approach: if it is not possible to distinguish a simulation from the original, then it doesn't make sense to draw a line.

Anyways, yes, LLMs are boring. I am just not sure we people are not boring as well.





