The people who, from Trinity (or before), were worried about global annihilation and scrambled to build systems to prevent it were correct. The people saying “it’s just another weapon” were incorrect.
It's kind of infuriating to see people put global thermonuclear conflict or a sudden change in atmospheric conditions (something that has caused 4 of the 5 biggest mass extinctions in the entire history of the planet) on the same pedestal as a really computationally intense text generator.
My worries about AI are more about the societal impact it will have. Yes, it's a fancy sentence generator; the problem is that you already have greedy bastards talking about replacing millions of people with that fancy sentence generator.
I truly think it's going to lead to a massive increase in economic inequality, one that favors not the masses but the psychopathic C-suite: Altman and his ilk.
I'm personally least worried about short-term unemployment resulting from AI progress. Structural unemployment, and the poverty that follows it, happens when a region loses an industry that was effectively its single employer and the people affected lack the means to move elsewhere or change careers.
AI is going to replace jobs that can be done remotely from anywhere in the world. The people affected will (for the first time in history!) not mostly be the poorest and disenfranchised parts of society.
Therefore, as long as countries can keep political power in the hands of their populations, the labor market transition will mostly be fine. The "keep political power in the hands of populations" part is what worries me personally. AI enables mass surveillance and personalized propaganda. Let's see how we deal with those when they appear, which will be sudden by history's standards... The printing press (the Thirty Years' War, witch-hunts) and radio (Hitler, the Rwandan genocide) might turn out to be slow, small innovations compared to what might be to come.
I don't think existing media channels will continue to be an effective way to disseminate information. The noise destroys their usefulness. I think people will stop coming to platforms for news and entertainment as they begin to distrust them.
The surveillance prospect however, is frightening.
I think people aren't thinking about these things in the aggregate enough. In the long term, this does a lot of damage to existing communication infrastructure. Productivity alone isn't necessarily a virtue.
I've recently switched to a dumb phone. Why keep an internet browsing device in my pocket if the internet's largest players are designing services that will turn a lot of its output into noise?
I don't know if I'll stick with the change, but so far I'm having fun with the experience.
The Israel/Gaza war is a large factor - I don't know what to believe when I read about it online. I can be slower and more careful about what I read and consume from my desktop, from trusted sources. I'm insulated from viral images sent hastily to me via social media, from thumbnails of Twitter threads by people with no care whether they're right or wrong, from texts containing links with juicy headlines that I have no hope of critically examining while briefly checking my phone in traffic.
This is all infinitely worse in a world where content can be generated by multi-modal LLMs.
I have no way to know whether any of the horrific images and videos I've already seen through the outlets I've identified were real or AI-generated. I'll never know, but it's too important to leave to chance. For that reason I'm trying something new to set myself up for success. I'm still informed, but my information intake is deliberately slowed. I think others may follow in time, in various ways.
It’s kind of infuriating to see people put trench warfare or mustard gas on the same pedestal as a tiny reaction that couldn’t even light a lightbulb.
There are different sets of concerns for the current crop of “really computationally intense text generators” and the overall trajectory of AI and the field’s governance track record.
...you do realize that a year or two into the earliest investigations of nuclear reactions, the energy emission you'd have measured was less than that of a lit match, right?
The question is, "Can you create a chain reaction that grows?", and the answer is unclear right now with AI, but it's hard to say with any confidence that the answer is "no". Most experts five years ago would have confidently declared that passing the Turing test was decades to centuries away, if it ever happened, but it turned out to just require beefing up an architecture that was already around and spending some serious cash. I have similarly low faith that the experts today have a good sense that e.g. you can't train an LLM to do meaningful LLM research. Once that's possible, the sky is the limit, and there's really no predicting what these systems could or could not do.
It seems like a very flawed line of reasoning to compare very early days nuclear science to an AI system that has already scaled up substantially.
Regarding computing technology, I think the positive feedback you're describing happened with chip design and VLSI, e.g. better computers help design the next generation of chips or lead to materials breakthroughs. I'm willing to believe LLMs have a macro effect on knowledge work similar to search engines, but as you said, it remains to be seen whether the models can feed back into their own development. From what I can tell, GPU speed and efficiency, along with better data sets, are the most important inputs for these things. Maybe synthetic data works out, who knows.
The people who thought Trinity was “scaled up” were also wrong.
The only reason we stopped making larger nuclear weapons is that they were way, way, way beyond useful for anything. There's no reason to believe the physical universe imposes an upper bound on intelligence (especially given how tiny and energy-efficient the human brain is, we're definitely nowhere near one), and there's no reason to believe an upper bound exists on the usefulness of marginally more intelligence. Especially when you're competing for resources with other nearly-as-intelligent superintelligences.