
The problem with humanity is we are really poor at recognizing all the ramifications of things when they happen.

Did the indigenous people of North America recognize, when a boat showed up, that they'd be driven to near extinction within a few hundred years? Even if they did, could they have done anything about it? The germs and viruses that would lead to their destruction had already been planted.

Many people focus on the pseudo-religious connotations of a technological singularity instead of the more traditional "loss of predictability" definition. Decreasing predictability of the future state of the world is far more likely to destabilize us than a FOOM event. If you can't predict your enemy's actions, you're more apt to take offensive action. If you can't (at least somewhat) predict the future state of the market, you may pull all your investment. The AI doesn't have to do the hard work here; with economic collapse and war on the table, humans have shown they're perfectly capable of putting themselves at risk.

And the existential risks are the improbable ones. The "Big Brother LLM" scenario, where a sentiment-analysis AI watches you for your entire life and you disappear forever if you try to hide from it, is a far more likely, and still terrible, outcome.



> The problem with humanity is we are really poor at recognizing all the ramifications of things when they happen.

Zero percent of humanity can recognize "all the ramifications" due to the butterfly effect and various other issues.

Some small fraction of bona fide super geniuses can likely recognize the majority, but beyond that is just fantasy.


And as uncertainty increases, even the super genius recognizes less...


That's already happening, unfortunately. Voiceprint analysis in call centers is pretty much omniscient, inferring your identity, age, gender, mood, etc. on a call. They do it in the name of "security", naturally. But nobody ever asked your permission beyond the blanket "your call may be recorded for training purposes" notice. (Training purposes? How convenient that models are also "trained".) Anonymity and privacy could be eliminated tomorrow, technologically speaking. The only things holding that back are laziness and inertia. There is no serious pushback. If you want to solve an AI risk, there's one right here, but because there's an unchecked human at one end of a powerful machine, no one pays attention.
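To make the sentiment-analysis piece concrete, here's a deliberately crude, purely hypothetical sketch of mood scoring over a call transcript. Real call-center systems model acoustics and voiceprints, not keyword counts; every name and word list below is made up for illustration:

```python
# Toy mood scorer for a call transcript. Illustrative only: production
# systems use trained acoustic/language models, not keyword matching.
NEGATIVE = {"angry", "cancel", "terrible", "refund", "frustrated"}
POSITIVE = {"thanks", "great", "happy", "perfect", "resolved"}

def mood_score(transcript: str) -> float:
    """Return a crude mood score in [-1, 1] from word counts."""
    words = (w.strip(".,!?") for w in transcript.lower().split())
    pos = neg = 0
    for w in words:
        if w in POSITIVE:
            pos += 1
        elif w in NEGATIVE:
            neg += 1
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total
```

The point of the toy is that even this trivial signal, attached silently to every recorded call, is surveillance nobody meaningfully consented to.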




