Exponentially better than a polynomial in N is a polynomial in log N. See the abstract of [1] for the precise complexity claim about the algorithm mentioned in the article. As good as N^3 sounds, it's still not great on enormous systems.
It's funny you put it that way. O(N^3) sounds horrific to me: only practical for small problems, and for anything else I'd need to look for approximations instead. O(2^N), meanwhile, translates in my mind to completely impracticable.
But I guess it all depends on what kind of Ns you are looking at.
One thousand cubed is one billion. Of course it depends on what the Big-O notation is hiding, but one billion 'steps' is generally okay. But if you had tested it with ten thousand players it would go from one billion to one trillion steps -- a thousand times as long, which we'd expect to take more than a day on the same machine. That scaling is pretty brutal. Whereas something like sorting a list with O(N log N) would only have gone from about ten thousand steps to about one hundred thirty thousand. In order to hit one trillion steps we'd need to be sorting a list of nearly thirty billion items.
I sometimes find myself working with ten thousand 'things', but very rarely with thirty billion.
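Here's a quick back-of-the-envelope script for those step counts (Python, ignoring whatever constant factors the Big-O is hiding, and using base-2 logs):

```
# Back-of-the-envelope step counts for O(N^3) vs O(N log N).
# Constant factors hidden by the Big-O are ignored; logs are base 2.
import math

for n in (1_000, 10_000):
    print(f"N = {n:>6}:  N^3 = {n**3:.1e},  N log N = {n * math.log2(n):.1e}")

# How long a list would O(N log N) sorting need before it hits one trillion steps?
target = 1e12
lo, hi = 2, 2
while hi * math.log2(hi) < target:
    lo, hi = hi, hi * 2          # double until we overshoot the target
while hi - lo > 1:
    mid = (lo + hi) // 2         # then binary-search for the crossover
    if mid * math.log2(mid) < target:
        lo = mid
    else:
        hi = mid
print(f"N log N reaches 1e12 at roughly N = {hi:.2e}")
```

That last number comes out around 2.9e10, which is where the "nearly thirty billion items" figure comes from.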
From the abstract of the algorithm they are implementing:
> We consider the case where one doesn't need to know the solution x itself, but rather an approximation of the expectation value of some operator associated with x, e.g., x'Mx for some matrix M. In this case, when A is sparse, N by N and has condition number kappa, classical algorithms can find x and estimate x'Mx in O(N sqrt(kappa)) time. Here, we exhibit a quantum algorithm for this task that runs in poly(log N, kappa) time, an exponential improvement over the best classical algorithm.
The computational complexity is way better, though, so even if each run takes a few seconds, it makes systems tractable at sizes that wouldn't be practical otherwise (though first you need a large enough quantum computer).
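To put that in perspective, here's a rough growth comparison based on the abstract's claim. For the quantum side I'm assuming kappa^2 * log N, which is the scaling usually quoted for HHL; sparsity, precision, and constant factors are all ignored, so only the trend with N is meaningful:

```
# Rough growth comparison for the abstract's claim: classical ~ N * sqrt(kappa)
# vs. quantum ~ poly(log N, kappa).  The kappa^2 * log N form for the quantum
# side is an assumption for illustration, not the paper's exact bound.
import math

kappa = 100  # assumed condition number, just for illustration
for exp in (6, 9, 12):
    n = 10 ** exp
    classical = n * math.sqrt(kappa)
    quantum = kappa ** 2 * math.log2(n)
    print(f"N = 10^{exp:<2}:  classical ~ {classical:.1e},  quantum ~ {quantum:.1e}")
```

The classical column grows by a factor of a thousand per row while the quantum column barely moves -- that's what "exponential improvement" buys you, assuming kappa stays tame as the system grows.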