> I see exponentially stronger algorithms every year
You do? I don't. My sense, as somebody near (but not in) the ML research community, is that while there have been recent flashy new things like GPT-3 or the new protein-folding results, these are more like solid improvements to applications of things we mostly already know. Are you referring to something else?
You can take almost any deep learning task we had 5 years ago and train it today about 1000x cheaper, thanks to hardware improvements (which are slowing down) and algorithmic improvements (which don't seem to be). Jeff Dean had an overview article about it, I think.
Also, many people forget that we have working self-driving cars on the road; 20 years ago nobody thought it would happen this fast.
I see exponentially stronger algorithms every year, but so far there has been essentially no progress on being able to set limits on them.