The article shows that the Swift approach produces different values for length depending on operating system and text library versions. Is that really intuitive?
The Swift approach can't reach perfection in isolation because data from the future can always break it.
That's why the article shows Swift on Ubuntu 14.04 returning len==2 while the same code on Ubuntu 18.04 returns len==1 for the same emoji string.
IMO there's a big philosophical question here: do we accept that "string length" means something you can't compute for arbitrary strings unless your code is receiving annual updates containing the latest Unicode segmentation rules?
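To make that concrete, here's what such an emoji looks like at the code-point level (a Python sketch; the specific emoji is my choice, not necessarily the one from the article). If I recall correctly, UAX #29 only added the rule that keeps ZWJ sequences together as one grapheme cluster in Unicode 9.0, so an older ICU on 14.04 was allowed to break inside the sequence and count more clusters:

```python
import unicodedata

# "Man facepalming, medium-light skin tone" as a ZWJ sequence --
# five code points that newer segmentation rules treat as ONE cluster.
s = "\U0001F926\U0001F3FC\u200d\u2642\ufe0f"

for ch in s:
    # unicodedata.name raises for unnamed code points, so pass a default
    print(f"U+{ord(ch):05X} {unicodedata.name(ch, '<unnamed>')}")
```

Whether those five code points are one "character" or several is exactly the part that lives in versioned Unicode data tables, not in the string itself.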
Swift includes its own Unicode data tables with the standard library since last year, so it’s now tied to the stdlib version rather than some other library that may or may not be updated on the system.
Your example shows an improvement, which proves my point. (Also, don't drop the word "asymptotically": nothing can ever be perfect, and that's not the issue. Being closer to perfect is a positive.)
And you can compute it: pin a Unicode version and ship its tables with the language if those platform differences are unbearable (so you can, in fact, isolate it and simply ignore the future :))
The bigger philosophical question to me: how much longer do we accept that "string length" doesn't measure the most intuitive notion of a string's length, and keep calling a byte a char?
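For what it's worth, "length" already has several competing answers for the same string, and only the grapheme count needs versioned Unicode data. A quick Python sketch (the emoji is my example, not taken from the thread):

```python
# One user-perceived character, many "lengths".
s = "\U0001F926\U0001F3FC\u200d\u2642\ufe0f"  # man facepalming + skin tone

print(len(s))                           # code points: 5
print(len(s.encode("utf-8")))           # UTF-8 bytes: 17
print(len(s.encode("utf-16-le")) // 2)  # UTF-16 code units: 7
# Grapheme clusters: 1 -- but computing that needs the UAX #29 rules,
# which Python's stdlib doesn't ship; Swift's String.count does.
```

The byte, code-point, and code-unit counts are stable forever; the grapheme count is the one that moves with the Unicode version, which is the trade-off this whole thread is about.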