What percentage likelihood of being wrong would convince you that caution was necessary? Personally, I wouldn't be comfortable with even a 1% chance of humanity being annihilated.
Caution is absolutely necessary. But not primarily because there is a chance of superhuman AI being evil or destroying humanity by accident; rather, because on our way to superhuman AI we are going to have many powerful AIs controlled by humans.
I don't disagree that those are legitimate problems that will need to be dealt with. But... why not primarily? The potential annihilation of humanity feels like quite a primary issue to me. I can't tell if you're implying that the odds of it happening are zero or negligible, which we could debate, but ignoring and sidestepping the issue entirely strikes me as suicidally insane.
Because I think it is way, way more likely that a human-controlled powerful AI wipes out humanity than that a real superhuman AI does. A paperclip maximizer is far more likely to be a human-controlled powerful AI than a proper superhuman AI, simply because maximizing paperclips is, in the end, dumb as f#ck.