There are languages which put at least some effort into parallelism / concurrency (Go would be one of those, along with Java, Erlang, Ada, Clojure, even C++ to some extent).
Then there are languages which outsource everything to the system, e.g. Lua and Ruby. They have a way in the language to make a system call, so if the system can create multiple processes or multiple threads, they can use that.
There are languages that have no way to do even that: JavaScript, XSLT or SQL, for example. Surprisingly, a lot of these handle concurrency very well, thanks to automatic parallelization performed by the runtime (not the language).
Python is the language that has neither a coherent design nor discernible goals here. It has some parallelism in the language, but important components are missing: they are either outsourced to the system or not there at all. Because of the randomness of these "design decisions", Python can also not be reliably auto-parallelized, nor do developers have reliable tools for building parallel applications, especially not in a modular way, because different modules may not agree on how to go about parallelization.
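A minimal sketch of that split, using only the standard library: threads are part of the language, but CPython's GIL serializes CPU-bound threads, so actual CPU parallelism is outsourced to operating-system processes (each with its own interpreter and its own GIL). The workload and sizes here are illustrative assumptions, not a benchmark.

```python
import concurrent.futures as cf

def busy(n):
    # CPU-bound pure-bytecode work: under the GIL, threads running
    # this take turns instead of running in parallel.
    total = 0
    for i in range(n):
        total += i
    return total

def main():
    # Threads exist in the language...
    with cf.ThreadPoolExecutor(max_workers=4) as ex:
        threaded = list(ex.map(busy, [100_000] * 4))

    # ...but real parallelism is delegated to the OS: separate
    # processes, coordinated by pickling arguments and results.
    with cf.ProcessPoolExecutor(max_workers=4) as ex:
        forked = list(ex.map(busy, [100_000] * 4))

    # Same answers, very different execution models.
    assert threaded == forked
    return threaded

if __name__ == "__main__":
    main()
```

The awkward part is exactly the outsourcing: the process pool only works for picklable functions defined at module top level, a constraint that leaks straight out of the implementation into your program's structure.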
Python has always been a language where you need to be really knowledgeable about things outside of Python, and about Python's own implementation details, to get ahead. If all you knew was Python, you'd do very poorly. This is in contrast to languages like Java, which put a great deal of effort into making sure that even the dumbest programmer will not screw up too much.
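One concrete example of the implementation-detail knowledge in question (a CPython behavior, not a guarantee of the Python language): the GIL happens to make single-bytecode operations like `list.append` effectively atomic, so unlocked shared-state code like the sketch below "works" on CPython, and knowing which operations get this treatment is interpreter trivia rather than language semantics.

```python
import threading

# Shared mutable state with no lock. On CPython this is safe only
# because the GIL makes each list.append atomic; on an interpreter
# without a GIL, this same code is a data race.
items = []

def producer(start):
    for i in range(start, start + 1000):
        items.append(i)

threads = [threading.Thread(target=producer, args=(k * 1000,))
           for k in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# All 4000 appends land, none lost -- courtesy of the GIL.
assert len(items) == 4000
```

Swap `items.append(i)` for something compound like `items[0] += 1` on a shared counter and the atomicity evaporates; telling the two cases apart requires knowing how CPython's bytecode interpreter works, which is precisely the point.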
Now the people who know how to use Python well are gone, and the language is gradually transforming into Java. But it still has a very long road ahead before it can do enough hand-holding for the losers. Parallelism is one of those areas where the goals are very distant and, so far, mostly unattainable.