
No, because there are lots of things people can do that it still can't do.


"If it is still possible to put a goalpost somewhere - and we don't care where - then it's not AGI."


LLMs are what they are; calling them "AGI" won't make them any more useful or exciting than they are, it's just going to devalue the term "AGI", which has revolutionary, disease-curing, humanity-saving connotations. What are you looking for us to say exactly?

1. We aren't even close to AGI and it's unclear that we'll ever get there, but it would change the course of humanity in a significant way if we ever do.

2. Wow we've reached AGI but now I'm realizing that AGI is lame, we need a new term for the humanity-saving sales pitch that we were promised!


I think getting out of the binary is good for the long run. We have something which is artificial, intelligent, and general in scope. We're there. Is it perfect? No. Is it even good? Sometimes! Do airplanes flap their wings? Also no, but they do a lot of stuff nonetheless.


That's where we disagree: I do not consider a system that isn't capable of learning, improving, or reasoning to be generally intelligent. My most basic criterion for "AGI" is a system that can absorb and integrate new knowledge through repetition and experience in real time, just like a human would.

Further, their statements, knowledge, and "beliefs" should be reasonably self-consistent. That's where I'm usually told that humans aren't self-consistent either, which is true! But if I ever met a human that was as inconsistent as LLMs usually are, I'd recommend that they get checked for brain damage.

Of course the value of LLMs isn't binary - they're useful tools in many ways - but the sales pitch was always AGI == human-like, not AGI == human-sounding, and that's quite clearly not where we are right now.


Yeah, this is in 'flies like a plane, not like a bird' territory. But I think it's closer than you think.

The systems do learn and have improved rapidly over the last year. Humans have two learning modes - short-term in-context learning, and then longer-term learning that occurs with practice and across sleep cycles. In particular, humans tend to suck at new tasks until they've gotten in some practice and then slept on it (unless the new task is a minor deviation from a task they are already familiar with).

This is true for LLMs as well. They have some ability to adapt to the context of the current conversation, but they don't perform model weight updates at that stage. Weight updates happen over a longer period, as pre-training and fine-tuning data are updated. That longer-phase training is where we get the integration of new knowledge through repetition.
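The distinction is easy to see in code. Here's a minimal PyTorch sketch (toy model and made-up data, not anything from an actual LLM stack): "in-context" adaptation only changes the input at inference time, while fine-tuning takes a gradient step that actually changes the weights.

    import torch
    import torch.nn as nn

    # Toy stand-in for a language model; weights stay fixed unless we train them.
    model = nn.Linear(8, 8)

    prompt = torch.randn(1, 8)         # base input
    extra_context = torch.randn(1, 8)  # stand-in for examples added to the prompt

    # 1) In-context adaptation: the output changes because the input changed,
    #    but model.parameters() are untouched - no gradient step is taken.
    with torch.no_grad():
        out_plain = model(prompt)
        out_with_context = model(prompt + extra_context)

    # 2) Fine-tuning: a gradient step updates the weights, which is the
    #    longer-phase learning described above (pre-training / fine-tuning).
    target = torch.randn(1, 8)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    loss = nn.functional.mse_loss(model(prompt), target)
    loss.backward()
    optimizer.step()  # the weights are now different, and the change persists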

In terms of reasoning, what we've got now is somewhere between a small child and a math prodigy, apparently, depending on how much cash you're willing to burn on the results. But a small child is still a human.




