I'm a big fan of modern bash rather than POSIX sh, and to all the complainers about dash and ash, I say "tough cookies".
Bash has arithmetic, dictionaries, regex matching, nice string manipulation, co-processes, and TCP pipes built in; it's pretty nice.
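Most of those features fit in a few lines. A quick sketch (bash 4+, since associative arrays and `${s^^}` need it; the values are made up):

```shell
#!/usr/bin/env bash
# Arithmetic, an associative array ("dictionary"), regex matching with
# capture groups, and string manipulation -- all built in:
(( x = 6 * 7 ))
declare -A ages=([ada]=36 [alan]=41)
s="hello world"
if [[ "bash-5.2" =~ ^bash-([0-9]+) ]]; then
    major=${BASH_REMATCH[1]}
fi
echo "${s^^} | ${s/world/bash} | $x | ${ages[ada]} | $major"
# prints: HELLO WORLD | hello bash | 42 | 36 | 5
```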
However, bash has very low tolerance for incompetency. It's a brutal friend. It's cooking dinner with a pocket knife. You gotta actually read the docs and be sincere in your studies.
I was around when supporting Solaris, HP-UX, AIX, and IRIX was a real concern. Those times sucked, and those platforms are so minor that I stopped caring about them, except for enthusiast projects, many years ago. (And I still have an HP-UX machine.)
Sometimes I'll even use zsh. It can even do floating point.
Here's an example of a modern tool I have written for a subject I call "music discovery".
You'll see many languages in there but you won't see posix sh. It's slow, it's clunky, it's a pain in the ass.
If you don't like my practice then I guess don't use it. I've been using/developing these particular tools nearly every day for over 3 years and it works well for me.
I'm not going to say bash is awesome but it's pretty great for programming.
I use zsh as my interactive shell though, it's miles ahead of bash in that department. Watching people who really know what they're doing in zsh is a total mindfuck.
I think the real point is that most of the time when someone writes a bashism, it's unnecessary. You don't really lose anything 99% of the time by writing posix shell. If you need bash, write bash, and just change the shebang.
I think real arrays are the exception to this rule. They're the only non-posix shell feature I use consistently. Otherwise, whenever you want to gather a list of strings or filenames and then iterate through them, you need to worry about delimiters and splitting and parsing and IFS and that's simply not worth the dubious benefit of not including a bash shebang.
If you really need to run the script on alpine, just install bash. If you're really space-constrained, then sure, go back and refactor the arrays into newline-delimited strings or whatever, but I've never had to do that and I doubt I ever will. Getting to use arrays means my scripts can avoid dealing with the horrible quirks of shell tokenization, and that's 100% worth it to me.
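For instance, the gather-and-iterate pattern looks like this with real arrays (bash; the `*.txt` glob is just an example):

```shell
#!/usr/bin/env bash
# Collect matching filenames into a real array; spaces and newlines in
# names survive because we never join and re-split a delimited string.
files=()
for f in *.txt; do
    [ -e "$f" ] && files+=("$f")   # skip the literal '*.txt' when nothing matches
done

echo "found ${#files[@]} file(s)"
for f in "${files[@]}"; do         # quoted [@] expands one word per element
    printf '%s\n' "$f"
done
```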
About a third of the debug sessions involved staring at the screen, peeved, going "yes, obviously I meant gsed and ggrep, you dumbass."
You go through the man page of the vendor supplied version like "What do you mean you don't support feature x? This can't be real. I swear I'm being recorded right now. A bunch of people are sitting around a large conference table in mountain view right now laughing at my pain"
Eventually I just read them as parody. Things like this can be really funny if you allow them to be.
I'm not going to throw shade on earlier eras of computing. I study them intensely.
However, I will throw shade on pegging things to those eras when everyone else has moved on.
It'd be like still having to grapple with near and far pointers because that was a reality of the 16-bit world. I remember them; it was a brilliant system given the limitations, but they've been gone for 25 or so years, and good riddance. They were also bastards.
Also, when I see these things, it's like I've been in some kind of suppressed-memory therapy as these horrid events from my past come forth to haunt me again. One man's competency is another man's post-traumatic stress disorder.
Back in the 90s, Solaris was the big hold out -- IRIX and HPUX both came with at least a subset. First thing I had to do on Solaris was install the GNU tools, and, thankfully, eventually there were packages to manage that installation.
I cringe when I hear people (often senior) saying things like: "Yikes, a 500-line shell script. We must rewrite it in Python for maintainability!!!!1!" -- without even looking at how it has been written.
Yes. Shell scripts can be terrible, and most are. But if one approaches them thinking that “the shell is a real language and must be written as such”, with solid principles in mind, it is perfectly possible to write clean, maintainable, and testable shell scripts.
Oh, and that 500-line shell script probably ends up being a 5000-line Python monster anyway.
consider this sentence:
"arrays do not exist in posix shell"
...
"NOOO!" you shout, because you like data structures a LOT
"now THAT'S a good reason to use bash! bash supports arrays!"
absolutely not!
i believe that you should use shell if your problem is:
- small and scoped (~200 lines of shell or less)
- unlikely to increase in size and scope as time goes on
- not very complex
if you find yourself desiring arrays, good error handling,
static typing, structs, etc, then your problem should
be solved using a different language, not shell.
I tend to agree with this heuristic.
It's also why you see people bothering to use perl 5 at all in this day and age. It just happens to be preinstalled on a decent majority of *nix systems.
A young traveller came to visit Emperor Sh, and found him sitting in his
sparsely furnished temple.
“Emperor Sh,” he said, “I am told you are the greatest scholar of shell that
the world has known.”
Emperor Sh made no reply. The traveller continued.
“I have come to ask your advice. I am thinking of developing a character-based
graphing tool. It will interactively change the plot based on key presses.
Which shell commands should I use?”
“Don’t do it in shell,” said Emperor Sh, curtly.
The young traveller was confused. He tried again. “Well… I am also working on
a database audit script. It needs to verify that certain characters do not
appear in any fields in several tables. What should—”
“Don’t do it in shell,” interrupted Emperor Sh.
The traveller began to despair. “I have journeyed one thousand miles, tried
thirteen distributions of my operating system, and waded through hundreds of
badly-written manual pages,” he cried, “and now I have finally come to Emperor
Sh, the greatest shell programmer in the world, and I am told to use no shell
at all! Perhaps I should do what my Python user friends told me to do, and
just pipe together some scripts in venv environments!”
“Good idea,” said Emperor Sh. “Do it in shell.”
Enlightenment crushed down on the young traveller and he bowed to the Emperor,
sobbing with reverent joy.
The "unlikely to increase in size or scope" is the big one. It basically eliminates almost all uses of shell scripts (as it should).
I would add a 4th condition: you aren't going to share the script with anyone else (including CI) - it's going to stay strictly on your computer and nobody else will ever read it.
Yeah, but perl 5 hasn’t broken compatibility (as far as I’ve noticed) since 2000 or so. Python breaks compatibility every week, in practice. Does LSB include some sort of magical frozen in time version of python 2 or something?
In my 12 years of Python I was hit with compatibility problems once, in a school project where someone started in Python 2 and we changed to Python 3. Which libraries break that often, so I can stay away from them?
So is the RPM format. I like LSB particularly as a reference for obscure ABI details, but I don’t think anybody actually builds either systems or applications to it.
The official Let's Encrypt client is written in Python, and the core 'executable' is much longer, and in addition it pulls in a boatload of dependencies:
100% disagree. Standard /bin/sh has terrible error handling facility, which on its own is enough to disqualify it as a "proper" programming language in my book. Even C is not that bad, and C is pretty terrible.
I'm proficient at POSIX shell scripting and use it all the time, but every time I discover that a tool I need to maintain or, god forbid, improve upon is written in pure shell I immediately get a migraine.
The shell is not a real language and must be written as such.
There's no such thing as "standard /bin/sh". You're talking about busybox ash, or dash, or something else that is shipped as /bin/sh in your distro.
Take a look at winetricks, or clitest, or crosstool-ng, or shellspec. They are modular, easy-to-read, well-written pieces of software that are written in shell.
In the 2000s, people used to say that it was impossible to write large stuff in JavaScript, and that it was not a real language. They were wrong, because they only saw the browser differences and the issues with them, not the great things about it. The Bourne shell is in a similar state.
I meant POSIX shell or more specifically a loosely defined subset of all these shell dialects that is reasonably portable in practice.
I'm not saying that it's impossible to do, I'm saying that it's very difficult, requires every contributor to be extremely well versed in shell coding and the particular conventions of the codebase, and small issues can easily snowball out of control. It's doable, but it's almost never worth it IMO. One notable exception is when you need max portability with effectively zero dependencies.
>In the 2000s, people used to say that it was impossible to write large stuff in JavaScript, and it was not a real language.
They were partly right, which is why modern web development bears very little resemblance to early 2000s JS hacking. Remember that even jQuery only appeared in 2006. And those were the dark days of IE6...
But IMO that's not exactly the same thing: the issue with Javascript was less with its syntax and more with the very poor Javascript API and standard lib (or lack thereof), shell scripting is the other way around in a sense: you have access to all the power of the OS and all the programs in it, but it's the actual programming language that's lacking.
Actually a decent language that would transpile to POSIX sh would be interesting, now that I think about it.
The POSIX standard for sh is broken, it was born broken.
For example, all modern shells support some form of `local` for declaring local variables. There are some small subtleties, but consistent behavior can be achieved across interpreters. It's not in POSIX though. Shellspec complains about it (but it shouldn't).
It sounds small, but it's a huge thing to have local variables instead of a huge global scope.
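A sketch of what `local` buys (dash, ash, and bash all accept it, even though it isn't in POSIX):

```shell
greet() {
    local name="$1"      # confined to the function in dash/ash/bash/zsh
    echo "hello $name"
}

name="global"
greet "world"            # prints "hello world"
echo "$name"             # still "global"; without `local`, the function
                         # would have clobbered the outer variable
```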
There are many other features that are widely available but with slightly different interfaces. Kind of what happened with ActiveX/XmlHttpRequest in JavaScript.
What is missing is precisely documentation on the subset of stuff that _can_ be portable with coding standards, polyfills and adapters.
I'm not talking about core utilities here. No `cat` or `grep` or anything like it. Basically, with `PATH=''`, only builtins are allowed. Even with this constraint, the shell is usable and powerful.
Unfortunately, most people are well versed in bash+coreutils, not raw portable shell. It requires learning a few things that are not straightforward at first (variable substitution, how the shell forks processes, making evals without making a mess, nested quoting, etc.).
> Actually a decent language that would transpile to POSIX sh would be interesting, now that I think about it.
That would be nice. I believe it can be written in shell itself, self-hosted and portable. It's hard work though (on the same level of writing babel from scratch).
This is terrible advice. Senior people say don’t do this because they’ve seen it done and it’s an absolute nightmare (I’ve seen it happen multiple times).
It’s untyped, untestable, arcane, and unmaintainable. Even the link says don’t do this. Just don’t.
Congrats on having the chops (and neckbeard) to pull this off. If you want to do it in a personal project, more power to you. But it doesn’t scale at organizations.
Anyone writing code in a language they don't know will write nightmarish code. No matter if it is shell, Python, Go, or what have you. I have seen it happen multiple times as well, with all of these languages. And /that/'s the problem.
So, it depends on the context. "Organization" is a very broad term. If the team you are in is familiar with the shell, and the shell is the right tool for the job, there isn't that much of a problem.
I'm not arguing for writing large shell scripts when there are better alternatives. But, sometimes, it's the right choice and it can be written properly. And even if it isn't... 500 lines of poorly written shell aren't that many in any case, and they aren't that bad compared to 5000 lines of poorly written Python for example.
> Anyone writing code in a language they don't know will write nightmarish code. No matter if it is shell, Python, Go, or what have you. I have seen it happen multiple times as well, with all of these languages. And /that/'s the problem.
I fail to see why you think it would take more lines to do in Python.
And the problem with bash is similar to C: many people think they can write correct C code, but as it has been shown plenty of times, that doesn’t really work out in the end. Languages have different tradeoffs and bash really has no redeeming qualities, imo.
> I fail to see why you think it would take more lines to do in Python.
Because Bourne shell is not a Python-class language like Ruby or perhaps even Scheme or AWK are. Things that are easy in shell can be hard in Python and vice versa: awkward wrangling of subprocess.Popen in Python can be a single line of shell; simple tree munging with lxml in Python can easily be a hundred or more lines of wasteful xmlstarlet invocations in shell. Parallel processing, even without the heavy artillery of make or GNU parallel, is easy in shell but hard in Python. Structured data is hard in shell but easy in Python.
(Similarly, I wouldn’t use ML or Haskell to write a web scraper, and I wouldn’t use Python to write a compiler.)
Regarding parallel processing, I have to disagree. It is bad in shell, because it makes best practices such as "set -e" impractical. The usual failure mode is a script that does not stop correctly on SIGTERM, and its children have to be hunted down and killed separately.
Perhaps a better way to put it would be that sloppy concurrency is easy. This doesn’t sound all that good, but it’s easy enough that I find myself parallelizing one-off things in shell that would be such a hassle to do nonsequentially in Python (compared to the magnitude of the problem) that the option wouldn’t even surface in my brain. (If I rewrite the shell script in Python later, the rewritten version is often sequential.)
The shell error handling story for “run stream processing steps in lockstep” parallelism is much better than for “spawn a bunch of identical children” parallelism, though, and that’s often enough.
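The "spawn a bunch of identical children" style is indeed only a few lines in bash (a sketch; `work` is a stand-in for a real job):

```shell
#!/usr/bin/env bash
work() { sleep 0.1; echo "done $1"; }   # placeholder for the real task

pids=()
for i in 1 2 3; do
    work "$i" &                 # fan out: each job in the background
    pids+=($!)
done

status=0
for p in "${pids[@]}"; do
    wait "$p" || status=1       # collect each child's exit code (crudely)
done
echo "overall: $status"
```

This is the sloppy concurrency mentioned above: it doesn't propagate SIGTERM to the children, which is exactly the failure mode the parent comment describes.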
$ shellcheck myscript
Line 1:
#!/bin/bash -eux -o pipefail
^-- SC2096 (error): On most OS, shebangs can only specify a single parameter.
Line 4:
trap "rm -rf $TMPDIR" SIGINT SIGTERM ERR EXIT
^-- SC2064 (warning): Use single quotes, otherwise this expands now rather than when signalled.
Line 6:
cd $TMPDIR
^-- SC2086 (info): Double quote to prevent globbing and word splitting.
Did you mean: (apply this, apply all SC2086)
cd "$TMPDIR"
Line 9:
baz $TMP | tee baz.out &
^-- SC2086 (info): Double quote to prevent globbing and word splitting.
Did you mean: (apply this, apply all SC2086)
baz "$TMP" | tee baz.out &
Line 13:
tar -cf - -C $TEMPDIR | zstd -6 -z - | pv | ssh server.com bash -c "cat - > out.tag.zstd"
^-- SC2086 (info): Double quote to prevent globbing and word splitting.
^-- SC2153 (info): Possible misspelling: TEMPDIR may not be assigned. Did you mean TMPDIR?
Did you mean: (apply this, apply all SC2086)
tar -cf - -C "$TEMPDIR" | zstd -6 -z - | pv | ssh server.com bash -c "cat - > out.tag.zstd"
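For reference, a sketch of the offending fragments after applying ShellCheck's suggestions (the `baz`/`tar`/`ssh` steps just need the same quoting shown in the "Did you mean" lines):

```shell
#!/bin/bash
set -eux -o pipefail           # SC2096: options belong here, not in the shebang

TMPDIR=$(mktemp -d)
# SC2064: single quotes, so $TMPDIR expands when the trap fires, not now
trap 'rm -rf "$TMPDIR"' SIGINT SIGTERM ERR EXIT

cd "$TMPDIR"                   # SC2086: quoted against spaces and globbing
```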
Python wins the triathlon by being second best in every event. Piping things between programs is about the only thing bash is good at, nothing else; of course it is going to look good when unconditionally piping things straight through standard programs is the only requirement. It's like saying C++ is fastest at calculating pi: duh, nobody will argue against that. Now, instead, put that script into a bigger context where some actual program logic or other integrations are needed. It is going to explode in unreadability, bugs, and only supporting happy cases. It is only simple if your original problem is simple.
Python is untyped and untestable. The only way I’ve seen people use it is by importing whatever libraries were popular in whatever year they wrote it, so it’s also arcane.
Also, to understand someone else’s python, you have to read at least 10x more lines of line noise, so it somehow manages to be less maintainable than shell.
Concrete example: Every python code base I’ve encountered has some function named exec that poorly reimplements shell job control and argument passing. As the python code bases grow, they end up with more than one of these, all of which have different bugs.
I usually discover this about 6 hours after the script has been sent to me, since someone who has heard the words “supply chain vulnerability” locked down whatever machine I need to run the script on.
The last ten times this has happened (at multiple companies, with multiple teams) it ended up being faster to figure out what shell commands the python script wanted to invoke, port a few things to jq and then reimplement the rest in a mix of shell and perl.
The result is always 1-10% the lines of code of the python, and actually runs without downloading an entire python distribution, dependency manager, and hundreds of broken packages at runtime.
Completely untrue. You're conflating static typing with strong typing. And pytest is a great test framework.
And you're recommending shell for better typing and testability? The language where everything is a string, there are no proper arrays, and functions don't have local variables? You have to be trolling.
I’ve probably spent two weeks (80+ active hours) debugging stuff like ‘“” is not None’. At this point, I’d prefer a language with only string types to one that only checks types in production.
pytest is terrible, at least from how I’ve seen people use it. It hides the diagnostic error explaining why the test failed in a random place. Its output is not parseable. If you have a long running test that times out, then it buffers the stdout and stderr so you have to wait for the fallback test time out every. single. time.
You've got exactly the same problem of undefined vs empty in sh (`-n` not being the exact inverse of `-z`, string comparison with != "", different behavior if -u is set, etc.). Just check the Stack Overflow answers on how to test whether a bash string is empty or undefined; there isn't even consensus on the best way to do it.
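For the record, the distinction is expressible in POSIX sh, just not obvious:

```shell
unset maybe
empty=""

# ${var+x} expands to x only when var is set, even if set to "":
[ -z "${maybe+x}" ] && echo "maybe: unset"
[ -n "${empty+x}" ] && [ -z "$empty" ] && echo "empty: set but empty"

# Under `set -u` a bare $maybe aborts the script;
# ${maybe-default} supplies a fallback instead:
set -u
echo "default: ${maybe-none}"
# prints: maybe: unset / empty: set but empty / default: none
```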
Mypy has had strict Optional checking on as default since 2018, almost entirely eliminating any None-related errors, meaning it is actually stricter and safer than Java or other traditionally considered static languages in that regard, (bar any dynamic misuse in deserialization boundaries). Assuming you use mypy of course, but that's a given today imo, like using shellcheck is for sh.
I cringe every time my manager tells me not to write shell scripts but JavaScript instead. There is ShellCheck, there is Bats, and there are extensions for e.g. Visual Studio Code or JetBrains IDEA that make writing shell and bash scripts a pleasure. I find it generally much more productive to write a shell script because I already know all the GNU/unix userland tools. And I know this shell script will be portable and still running decades from now, unlike some version-specific node script with dependencies.
> it is perfectly possible to write clean, maintainable, and testable shell scripts.
Possible? Maybe. Easy? No. Especially the “testable” part.
The vast majority of engineers do not know how to write clean shell, let alone testable shell. There’s a reason for this: it’s extremely difficult and non-obvious to do, and generally isn’t worth investing the time in learning (you can just use a different language instead: perhaps one with support for advanced features like… arrays…)
So sure, you can write your 500 line shell script and it may well end up being beautiful. But what happens in 5 years when someone who isn’t as enlightened as you has to maintain it?
(This all without mentioning the worst aspect of shell scripting: security. Good luck handling secrets without leaking them in one way or another).
> Possible? Maybe. Easy? No. Especially the “testable” part.
a testable shell script? Never seen one.
Thinking about scripts I've read in the past, I remember seeing Jason Donenfeld's bash script for WireGuard (wg-quick) and thinking how productive and readable it was.
I see your point, but what happens in your alternate world of having to maintain the 5000-line Python version 5 years down the line? You face the same issues you describe. The people that wrote the original beautiful code aren't there anymore, and whoever comes touching the code "messes it up". Most Python "scripts" I have had to deal with that had been around for a while were pretty terrible as well.
First, I doubt that the SLOC conversion ratio between sh and Python is really as high as 10x.
Second, yes, I do think the 5000 line Python script would be more maintainable than the 500 line sh script. I'd choose real types and data structures any day of the week.
Bad code is bad code, regardless of language. At least with Python, I more or less know all of the idioms and can at a glance understand what the flow of a program is trying to accomplish. Shell has a lot of esoteric voodoo syntax to replicate better string/path/array handling that is just a given in other languages.
I don’t think you do see my point: my point is that writing clean, testable, maintainable Python is both easier and a more widespread skill than writing equivalently clean, testable and maintainable shell. Both of these factors will mean that the Python remains maintainable for longer.
The fact that posix shell is somehow state of the art of concurrent pipeline and filesystem interface is really more a critique of state of the art than a praise of shell.
I'm doing programming language research and it's just infuriating how retrograde shell is. Its design is a fossil of history, and boy did our understanding of what makes a good language evolve. As proof of this, look at the jq tool. It is so powerful and clean. The implementation is simple and reasonably fast. It was designed by a researcher with a functional-languages background (an OCaml guy). That is a way to build a language around pipelining. At a lesser level, look at how moving from vimscript to lua in neovim has brought so many high-quality new plugins, and look at how many people now tinker with low-level stuff and write high-perf libraries now that C/C++ is not the only choice. Surface language, and having very few, very orthogonal concepts, matters.
Some core implementation patterns for PLs are very old, but there have been lots of simplifications and rationalizations in surface languages: error handling, value hygiene, scoping, simple data structures. POSIX shell might have been a feat in the 70s; now it is mostly a hack, and that is a fact. Let's be a bit less superficial when honoring the unix mantras and dream about something better (making a better shell for its core ideas(!), that is, handling processes and fds).
Long ago, some dude created a couple of Korn shell scripts on AIX to automate file uploads and ETL-related tasks.
With time, that work moved from AIX onto Red Hat Linux, still Korn shell, with the adaptations from AIX to GNU/Linux command line parameters and such.
I got tasked to port them into Java, because the team that got the job to maintain the servers lacked UNIX experience, and they thought selling the porting project was a good idea.
Feedback from customer, slower than those scripts and not as easy to change, what a surprise.
I use Korn shell (ksh93) for quick scripts. Besides arrays and such (it's approximately a superset of bash, since bash copied some but not all of ksh), the killer feature is a declarative extended `getopts` that makes it trivial to provide even the quickest and dirtiest script with short/long option processing with full help, so that the purpose and usage of the script remain comprehensible five days or five years later.
https://gist.github.com/kpschoedel/91fdcfa934111f3846efb5034...
I understood that Stephen Bourne's love of ALGOL was the chief structural influence on the Bourne shell (so much so that he used the C preprocessor to turn his source code into fake ALGOL).
"Stephen Bourne's coding style was influenced by his experience with the ALGOL 68C compiler that he had been working on at Cambridge University... Moreover, – although the v7 shell is written in C – Bourne took advantage of some macros to give the C source code an ALGOL 68 flavor. These macros (along with the finger command distributed in Unix version 4.2BSD) inspired the International Obfuscated C Code Contest (IOCCC)."
The question is if the script outgrew its original intended lifespan.
In my opinion, shell scripts should be used either as glue or due to portability reasons.
Rewriting a perfectly good and working script just because makes no sense.
So if the script started as some quick and easy way to deploy to prod but slowly becomes a massive cli "one tool to rule them all" kind of thing.. maybe it's time to change course.
There are even great unit testing frameworks for testing across a variety of shells and versions. This can be integrated pretty easily into existing build tools and run the same way the other code it may (or may not) support is run.
One important con that isn't mentioned: It's so unnecessarily confusing how to write a shell script that can properly deal with whitespace, dollar signs or other special characters in filenames.
Of course it's possible and learnable. But even if you know the tricks, it's tedious and error-prone. I had a team mate (many years ago) who used to test the local admins at any new workplace by putting files named "$HOME" and similar in his home directory. He claimed it wasn't that rare for this to crash some backup script.
Yet this isn't inherently hard. In fact it's trivial in literally any "proper" programming language, no matter which one you end up picking. I get why shell is "scripting of last resort", I regularly write shell scripts myself. I don't get why people would choose it over anything else where choices are available.
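For what it's worth, the trick that makes such filenames survivable in shell is NUL-delimited iteration (bash sketch; the demo names are deliberately hostile):

```shell
#!/usr/bin/env bash
dir=$(mktemp -d)
touch "$dir/\$HOME" "$dir/has space" "$dir/-rf"

# -print0 / read -d '' pass names through byte-for-byte: no word
# splitting, no glob expansion, no surprise expansion of $HOME.
find "$dir" -type f -print0 |
while IFS= read -r -d '' f; do
    printf 'saw: %s\n' "$f"
done

rm -rf "$dir"
```

The fact that the safe version needs this much ceremony, while any "proper" language handles it by default, is exactly the complaint.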
The backup script's job is to back up all files, not crash fatally because someone used a dollar sign, a tilde, or a space, or `rm -rf` in a file name. The sysadmin’s job is to write robust tools, not expect users to understand arcane UNIX stuff.
You are right of course. Just my knee jerk reaction that if someone is deliberately creating such files, they know exactly how much peril they are creating for themselves. Owing to so many years on the terminal, I still hesitate to name files with a space - a practice which the uninitiated find odd.
This would be considered silly if a web dev said this about e.g. a script breaking because someone entered a single quote into a text field and it was interpolated directly into an SQL statement. We should not tolerate this from sysadmins either.
Right, it is unwise to do that precisely because there are so many shitty bash scripts in modern systems.
I think the point was he was trying to find those shitty bash scripts - and ideally get them changed into something more robust (essentially anything else).
Many people advise against using shell scripts for non-trivial stuff. Is there a language as convenient as shell for calling and piping external programs, but with proper typing and error handling?
People advise using Python but I don't see myself using subprocess and Popen for gluing programs together and end up writing shell scripts anyway, I'm pretty comfortable with posix shell.
If you code in Python, you probably should use the language as much as possible and avoid calling shell commands.
E.G:
- manipulate the file system with pathlib
- do hashes with hashlib
- zip with zipfile
- set error code with sys.exit
- use os.environ for env vars
- print to stderr with print(..., file=...)
- sometimes you'll need to install a lib. E.g., if you want to manipulate a git repo, instead of calling the git command, use gitpython (https://gitpython.readthedocs.io/en/stable/)
But if you don't feel like installing too many libs, or just really want to call commands because you know them well, then the "sh" lib is going to make things smoother:
Also, enjoy the fact that Python comes with argparse to parse script arguments (or, if you feel like installing stuff, use typer). It sucks to do it in bash.
If what you need is more build oriented, like something to replace "make", then I would instead recommend "doit":
It's the only task runner that I haven't run away from yet.
Remember to always do everything in a venv. But you can have a giant venv for all the scripts and just shebang the venv Python executable so that it's transparent. Things don't have to be difficult.
Finally, sometimes it's just easier to do things in ipython directly. Indeed, with a good PYTHONSTARTUP script you don't have to import everything, you get superb completion and help, and you can call any bash command by just doing "!cmd". In fact, the result can be stored in a Python variable. Also, you can set autocall so that you don't have to write parentheses in simple function calls.
Consider babashka:
- It's a flavor of Lisp (Clojure, specifically), a language whose flexibility makes it ideally suited for gluing together programs and working with data
- Compared to alternative scripting shells, babashka is very pragmatic and 100% production-ready today, as it's built on top of Java, GraalVM and Clojure
- Even though it's Java under the hood, it's FAST (`time bb -e '(+ 1 1)'` -> 0.014 ms)
- Support for filesystem operations (e.g. globbing) is as good or better than Bash
- Working with subprocesses is better than in bash, because there are fewer gotchas
- Piping is easier in bash, but most of the time you _don't_ really want to use pipes. `cat /etc/fstab | grep /usr` should be a set of function calls, not subprocesses
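That last point holds in plain shell too: the pipe buys nothing when the command can read the file itself (the sample file below is made up):

```shell
# Stand-in for /etc/fstab:
printf 'proc\n/usr/bin\nsys\n' > /tmp/fstab.demo

cat /tmp/fstab.demo | grep /usr    # two processes plus an extra data copy
grep /usr /tmp/fstab.demo          # same output, one process
```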
Write 1 file shell scripts for convenient zero dependency command line tools that will work on most systems. If things start to get rough due to shell limitations, then Python scripts without dependencies is another 1 file solution that's quite portable. You can even mix and match (mostly use shell and call Python from it for the complicated bits as a last resort).
The takeaway is that for 99% of cases, using IFS or Bash's string replace in a variable is great, but if you have strict parsing requirements then maybe going with Python is worth it. The post links to a massive SO answer that demonstrates how hard string manipulation in shell scripting is.
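The "99% case" tools in question look like this (bash; the path is an arbitrary example):

```shell
path="/usr/local/bin"

echo "${path#/usr/}"       # strip a prefix        -> local/bin
echo "${path//\//:}"       # replace every /       -> :usr:local:bin

IFS=/ read -r _ a b c <<< "$path"   # split on / into variables
echo "$a-$b-$c"            #                       -> usr-local-bin
```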
I tried https://plumbum.readthedocs.io/en/latest/ recently and it was quite good. You might want to write a wrapper function depending on what error or newline splitting behaviour you want.
If you have a single command it’s fine. Not great because of the list syntax, but fine. But I’ve found when I’m doing shell commands and using Python as glue, the Python takes up most of the script while doing barely any of the work.
Don’t get me wrong, I like Python more and can barely write shell (ChatGPT/Copilot has made it easy). But it is not as convenient for calling and piping commands as shell.
>in short; when the problem you're solving is small, well defined, and unlikely to change, consider shell.
If you have to caveat it this much, it's a pretty weak statement. Any language works when the problem is small, well defined, and unlikely to change!
Strict POSIX compatibility is something of a chimera. You don't just need to read the POSIX shell spec, because you're not just programming in POSIX shell, you have to use the coreutils as well to get anything done. Do you remember which flags on grep are POSIX-only, and which are extensions provided by the GNU version? And then the same for sed, ls, and the rest of the UNIX zoo? There's an order of magnitude more shit you have to remember and worry about.
Needing strict POSIX compatibility is an increasingly niche use-case. I find it baffling that, on the one hand, we have these modern software isolation and distribution systems like containers and Nix and so on, technologies that are meant to let us run anything anywhere reliably, and then we forget we have these marvels and wear the POSIX hair-shirt anyway. Why can't you just include a better language in the container? A container is supposed to contain things! Stop hitting yourself in the face repeatedly!
Finally if you somehow really do need to live in the POSIX cave, you don't need to use shell. POSIX also specifies a much saner language: awk.
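To illustrate the point: associative arrays, field splitting, and arithmetic are all there in POSIX awk, no bashisms required (the input is made up):

```shell
# Count login shells per user -- an associative array in POSIX awk,
# sorted afterwards so the output order is deterministic.
printf 'alice /bin/sh\nbob /bin/sh\ncarol /bin/csh\n' |
  awk '{ count[$2]++ } END { for (s in count) print s, count[s] }' |
  sort
```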
Shellcheck is useful if you are learning shell, but it gets really annoying over time with the number of false positives. It's still a static checker that does not understand context. I've seen too many scripts where half of all comment lines were shellcheck disables.
In my experience false positives are rare. You need to take some time to actually understand the warnings to know how to avoid them; often there is some edge case lurking in even the most trivial code. I see a lot of people who are way too quick to dismiss the warnings, because they are overconfident that they know shell, when in fact most people don't have the slightest clue how many footguns are hidden in it.
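A sketch of the kind of edge case meant here -- the unquoted-expansion warning (SC2086), where word splitting happens before the command ever sees its arguments:

```shell
# shellcheck flags `set -- $f` as SC2086: unquoted, the value is
# word-split on whitespace before the command sees it.
f="my file.txt"

set -- $f                   # unquoted: splits into two words
echo "unquoted: $# args"    # -> unquoted: 2 args

set -- "$f"                 # quoted: one argument, as intended
echo "quoted: $# args"      # -> quoted: 1 args
```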
It's such an essential tool. Always such a beast to install it though, definitely keep it off any servers.
It's saved me from weird, hard-to-see mistakes more times than I could count. Without that tool I might not even see an error until runtime, which is a terrible place to discover one.
>Always such a beast to install it though, definitely keep it off any servers.
I do not follow. Isn't it just a static executable? I have only ever apt-get installed, but I am unsure where the complication rests, unless you are compiling from source.
I haven't seen anyone mention one important reason for writing POSIX shell over other shells. The article implies it, but never says it directly: portability.
I wrote a 2100 line monster of a POSIX shell script. [1]
The purpose was entirely for portability. That script allows me to use a straight POSIX Makefile for building my bc. The combination of the two means that my bc builds without modification on any semi-POSIX system.
That said, it was hard. I basically had to constrain myself to POSIX utilities and still implement a template language. I got it done, and it was worth it, but if you don't have to, don't do it.
Shell has its place as a glue language between commands. I tried writing a good CLI in bash this week and, yikes, it was hard. The lack of a package manager and good libraries really hurts. If you could “pip install” bash libraries, like an argparse equivalent, it would help a lot.
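For what it's worth, POSIX `getopts` is the closest built-in stand-in for argparse; a minimal sketch (the `-v`/`-o` flags are illustrative, not from any real tool):

```shell
# Minimal argparse-like flag handling with POSIX getopts,
# wrapped in a function so OPTIND stays local.
parse_args() {
  local OPTIND=1 opt verbose=0 outfile=out.txt
  while getopts 'vo:' opt; do
    case $opt in
      v) verbose=1 ;;
      o) outfile=$OPTARG ;;
      *) echo "usage: [-v] [-o file] args..." >&2; return 2 ;;
    esac
  done
  shift $((OPTIND - 1))
  echo "verbose=$verbose outfile=$outfile rest=$*"
}

parse_args -v -o report.txt extra   # -> verbose=1 outfile=report.txt rest=extra
```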
For legacy reasons, my filesystem is full of files and directories with spaces. Are they impossible to deal with in shell? No. Quoting, print0, it's not even hard. But it's like taking a Sunday stroll through an unburied minefield. I'd rather walk in the park.
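A sketch of the NUL-delimited pattern meant here (the file names are made up; note that `read -d ''` and `sort -z` go beyond POSIX, which is rather the point):

```shell
# NUL-delimited paths survive spaces -- and even newlines -- in names.
mkdir -p demo && touch 'demo/a song.flac' 'demo/b side.flac'

find demo -name '*.flac' -print0 | sort -z |
  while IFS= read -r -d '' f; do
    printf 'found: %s\n' "$f"
  done

rm -r demo
```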
My rule of thumb is that one needs to switch to another language when one needs maps or complex array operations. Arrays for proper processing of arguments, or for storing a sequence of results, are doable in POSIX shell with a couple of utilities.
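A sketch of the POSIX-only approach: the positional parameters are the one real array you get, rebuilt with `set --`:

```shell
# POSIX sh's one real array: "$@", rebuilt via set --
set -- "first result" "second result"
set -- "$@" "third result"     # append

echo "count: $#"               # -> count: 3
for item in "$@"; do
  printf '[%s]\n' "$item"      # each element intact, spaces and all
done
```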
I used to be one of the write-.sh people. Now I just ensure my shell scripts end in .bash and don’t insist people limit themselves to an effectively unmaintained standard.
> posix shell is portable: it'll run on debian, ubuntu, on openbsd, in an alpine container, on illumos! arch! even horrible old AIX!
> lists three Linux distributions and three Unixes: calls POSIX shell 'portable'.
You know where POSIX shell scripts don't run natively? Windows. If you're going to conveniently exclude about half the developer demographic and about 90% of the user demographic to push a not-very-valid point... But given the aesthetic of the website, I somehow get the feeling the author couldn't care less about Windows (or its users).
Please use a proper glue language with nice types and syntax, like Python or PowerShell. IMO PowerShell >> POSIX shells and children; interop with full .NET already gives it a several-point lead over POSIX shells.
P.S., I notice increasingly more bloggers abandoning sentence case, and it makes for very difficult reading.
Opening "man bash" transformed my experience with bash, which has now risen to the top in the list of my favourite languages.
I don't get all the hate.
I used fish for a while and really liked it, but I realise now this was only because I didn't really know bash. (That, and it came with some nice defaults out of the box, like prefix-sensitive command history, which requires a one-line .inputrc change in bash.)
But now every time I try fish I'm like, eh, and immediately switch back to bash.
I haven't tried zsh, but it just looks gimmicky and "appleish" to me.
Rust may actually be a solution in this space. It is better typed and more testable than Python, it has fewer issues with environment dependencies, and it is easy to install and build.
There is ongoing work to rewrite the coreutils in Rust, and if they were exposed through an ergonomic library interface, they would provide a lot of the value of shell with much better safety.
Rust is _a lot_ slower to develop than sh/bash/Python, though. If your script is in reality a semi-complex CLI tool, then Rust is great, but if it's a 1k line shell script, Python/Ruby/JS (especially with Deno) are better options.
For sure! I used to think that python was the next logical step for a growing project that started out life as a shell script, but now I think we have even better choices like rust.
My reason for staying away from posix shell as much as possible is that it has so many ways to let you shoot yourself in the foot.
This is only touched on indirectly in the article.
I'd rather write Python scripts 10x as long than shoot myself in the foot with posix shell again.