Can you please show me some examples of what good Python code looks like, and point out the overhead? Next month I will be working on a major redesign of a fairly complicated simulations code (Pytorch) with the goal of making it more flexible, and incorporating some new features. This code will be used by many others, so I want to follow good software engineering practices.
I don't have any off the top of my head, but I didn't want to leave you hanging too long without saying anything either.
The two main things that get me all bent out of shape are purposefully disjoint types passed in as the same variable, with a bunch of logic around handling each of them (often in very different ways), and never, ever checking the types or ranges of values.
If you start throwing in type hinting and make use of mypy, it keeps your own code pretty coherent. If you do need to have disjoint types coming in, spend a lot of time thinking about how you want it to work. It may be reasonably cheap to force everything into a single type from the many possible types coming in, which should simplify things a lot. If that won't work, consider wrapping any of these types in a composite object that unifies the _how_ of accessing the data inside the type into a single low-cost abstraction. Whatever you do, don't let the logic about how to operate over your abstract data input bleed into the logic of how you're building off of it.
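A minimal sketch of that "force everything into one type at the boundary" idea. The names and types here are made up for illustration (nothing from your actual codebase); the point is that all the "which type did I get?" logic lives in one conversion function, and nothing downstream ever sees the union again:

```python
from __future__ import annotations

from pathlib import Path


def as_lines(source: str | Path | list[str]) -> list[str]:
    # All type-dispatch logic lives here, once, at the boundary.
    if isinstance(source, Path):
        return source.read_text().splitlines()
    if isinstance(source, str):
        return source.splitlines()
    if isinstance(source, list):
        return list(source)
    # Fail loudly on anything we didn't agree to support.
    raise TypeError(f"unsupported source type: {type(source).__name__}")


def count_nonempty(source: str | Path | list[str]) -> int:
    # Downstream code only ever deals with list[str]; the union never leaks.
    lines = as_lines(source)
    return sum(1 for line in lines if line.strip())
```

If the inputs can't be cheaply collapsed like this, the same shape works with a small wrapper class: one `__init__` that does the dispatch, one uniform accessor, and everything else programs against the accessor.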
This is one of those things that Python makes hard, not because it purposefully stops you, but because it makes it _so very easy_ to spew implementation details through every single function call. It's easy, and programming is hard, and people have deadlines and it's something you can easily convince yourself you don't need to do, and suddenly you're writing bad code and nothing and nobody is there to stop you from it.
I'm a big fan of keeping my type hints inline in my code rather than shunting them off into typeshed-style stub files; I WANT people to see what types I'm agreeing to support. You shouldn't have to agonize over documentation or dig 14 levels deep into my code to see what I'm handling. The goal is for someone to read my function signature and go "oh, okay, I know exactly what I need to provide."
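For example (both functions are hypothetical, and the untyped one is deliberately bad): the annotated signature is the whole contract, readable without opening the body:

```python
from __future__ import annotations

from collections.abc import Sequence


# Unhelpful: the reader has to dig through the body (or the call chain)
# to learn what `data` and `window` are allowed to be.
def smooth(data, window):
    ...


# The contract is in the signature: a sequence of floats in, a list of
# floats out, and the body enforces the one constraint hints can't express.
def moving_average(values: Sequence[float], window: int) -> list[float]:
    if window <= 0:
        raise ValueError(f"window must be positive, got {window}")
    return [
        sum(values[i : i + window]) / window
        for i in range(len(values) - window + 1)
    ]
```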
Another vexing thing that comes up ALL the time in dsci/ml code is single-letter identifiers. A lot of this is because the paper says this variable is `p` or maybe even `epsilon` so... that's what the variables get named. I've even seen `f()`, `g()`, and `h()` in the wild, and of course there wasn't a lick of documentation around them.
Unless your audience is only ever the people who wrote the paper itself, or those who have studied it rigorously (more so than just reading it), these are terrible choices.
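To make it concrete, here's an invented before-and-after (the entropy computation and all the names are illustrative, not from any particular paper):

```python
import math


# As it often lands straight from the paper: unreadable without the paper.
def f(p, e):
    return -p * math.log(p + e)


# Same computation, named for a maintainer, with the paper tie-in recorded
# in the docstring instead of encoded in the identifiers.
def entropy_term(probability: float, epsilon: float = 1e-12) -> float:
    """One term of a Shannon-entropy sum: -p * log(p).

    `epsilon` guards against log(0). If this came from a paper, cite the
    paper and equation number here, so the mapping survives.
    """
    return -probability * math.log(probability + epsilon)
```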
This advice isn't Python-specific - it's language-agnostic - but I notice it most in Python, solely because that's where I most often run into non-devs taking a stab at writing their first libraries (or, rather, putting a bit of polish on their initial prototype). In general, write your code not so that it's easy to write, but so that it's easy to read, easy to reason about, and doesn't require a chain of whitepapers to understand. The goal is that someone reading it later doesn't have to literally be you, at this exact moment in time, to understand what's going on. You want your code to live long past your current attention on it, so write it in a way that is easy for another maintainer to pick up and run with.
In the end, other languages make some of this table stakes - not the naming, obviously, but the types? Range checking? Handling error conditions explicitly? Python gives you all the rope in the world necessary to bind yourself into a knot with, so it's all up to the developer to do the right thing. When I lament python, it's not because it's inherently bad, but because allowing people to play fast and loose with the rules means you're going to find a lot of people who just don't give a shit.
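And since "range checking" and "handling error conditions explicitly" are doing a lot of work in that paragraph, here's the kind of boundary validation I mean. The function and its bounds are purely illustrative; the pattern is what matters:

```python
def set_learning_rate(lr: float) -> float:
    """Fail loudly at the boundary instead of letting a bad value propagate
    silently into a long simulation run.

    Hypothetical setter; the (0, 1) bound is an illustrative choice, not a
    recommendation.
    """
    if not isinstance(lr, float):
        raise TypeError(f"lr must be a float, got {type(lr).__name__}")
    if not 0.0 < lr < 1.0:
        raise ValueError(f"lr must be in (0, 1), got {lr}")
    return lr
```

Three lines of checks, and a typo'd config value dies with a clear message at the call site instead of surfacing as NaNs eight hours into a run.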