> Honestly this is not a bad idea. Since bringing up a new instance VM will always be in exactly the same state, the shell script is completely deterministic.
Wrong, kind of. There are two sources of variability: 1. everything outside the VM, and 2. the script itself.
1 is mostly dependent on what you're doing. If you're just calculating digits of pi, then yes, it's quite probably deterministic; if you're deploying software that's pulled from github, running some initialization scripts, and attaching some storage, then you're going to run into variables. All of those actions have failed for me: github.com might be down (a rarity, but it happened this week!), your scripts contain new code that's not quite up to par, and the cloud provider says the storage is attached to the VM, but it doesn't actually show up.
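A plain script that wants to survive that kind of flakiness ends up hand-rolling its own retries; a minimal sketch (the URL and retry counts are made up):

    # Retry a flaky external fetch a few times before giving up.
    # (hypothetical URL; tune attempts/backoff to taste)
    for attempt in 1 2 3; do
        wget -q https://github.com/example/repo/archive/master.tar.gz && break
        echo "fetch failed (attempt $attempt), retrying..." >&2
        sleep $(( attempt * 5 ))
    done
    [ -f master.tar.gz ] || exit 1   # still missing after three tries: bail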
2 is that the script is probably in a VCS, and people are changing it. Someone is bound to write a line that doesn't work. (In fact this seems to happen quite often when tests are absent…)
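Even a cheap check on every commit catches a lot of that; a sketch, assuming the script is called deploy.sh and shellcheck is available:

    bash -n deploy.sh        # parse only: catches syntax errors before anything runs
    shellcheck deploy.sh     # static analysis: catches quoting and logic traps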
> I can't even think of a drawback.
I can. The biggest one is that bash's arcane syntax is a deathtrap. It's a great shell, but for stuff that needs to work and work reliably, it's riddled with holes. Take the article's script:
    sudo apt-get -y install build-essential zlib1g-dev libssl-dev libreadline6-dev libyaml-dev
    cd /tmp
    wget http://ftp.ruby-lang.org/pub/ruby/2.0/ruby-2.0.0-p247.tar.gz
    tar -xzf ruby-2.0.0-p247.tar.gz
    cd ruby-2.0.0-p247
    ./configure --prefix=/usr/local
    make
    sudo make install
    rm -rf /tmp/ruby*
Several of these (apt-get, wget, tar, did you just install code downloaded over an insecure channel onto a server?!, ./configure, make, make install) can easily fail; if they do, your fucking shell script will keep plowing along as if nothing happened. Depending on the next action, this can be meh, or WAT. Since it ends with `rm -rf ...`, I think if it does blow up horribly, it'll still return success. You can say `set -e` at the top to make it bail sooner, but `set -e` won't catch failures in all commands (a failure on the left side of a pipe, like `false | true`, sails right through). Fucking shell scripts.
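A sketch of the usual mitigation (this helps, but it's still not a real error model):

    #!/usr/bin/env bash
    set -e
    false | true            # pipeline exits 0, so set -e does nothing
    echo "still running"    # reached!

    set -o pipefail
    false | true            # pipeline now exits 1; set -e aborts here
    echo "never reached"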
Don't get me wrong: shell scripts are great, especially if you need it to work NOW, and for one-off stuff. But if it's going to stick around a while, having something that automatically notices failures and raises exceptions/errors is great.
The thing I miss from a lot of these automation libraries is being able to annotate dependencies between commands. That wget and apt-get, say, can run together. (The rest pretty much has to run serially.)
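Hand-rolled in bash it looks like this (a sketch; exactly the kind of plumbing a library should own for you):

    # The two independent fetches run together, and we check both
    # before the strictly ordered build steps start.
    sudo apt-get -y install build-essential zlib1g-dev libssl-dev &
    apt_pid=$!
    wget http://ftp.ruby-lang.org/pub/ruby/2.0/ruby-2.0.0-p247.tar.gz &
    wget_pid=$!
    wait "$apt_pid" || exit 1
    wait "$wget_pid" || exit 1
    # ...tar/configure/make/make install continue serially from here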
Libraries also let people who really know how to make this stuff sing build the low-level functionality in. That `make` could be `make -j $(( $coeff * $number_of_cores ))`; `make install` could be similar. Maybe CFLAGS or CXXFLAGS could compile ruby with a few more options for a slightly more optimized install. We might extract the tar in a directory where `rm -rf /tmp/ruby*` won't inadvertently delete something (unlikely if you're on a new VM, but I find that's not always the case).
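A sketch of both points together (nproc and mktemp are coreutils; the coefficient of 2 is just an assumption):

    # Build in a private temp dir so cleanup can't clobber anything
    # else, and parallelize make across the cores we actually have.
    builddir=$(mktemp -d)                      # e.g. /tmp/tmp.Xxj3kQ
    tar -xzf ruby-2.0.0-p247.tar.gz -C "$builddir"
    ( cd "$builddir/ruby-2.0.0-p247" &&
      ./configure --prefix=/usr/local &&
      make -j "$(( $(nproc) * 2 ))" &&
      sudo make install )
    rm -rf "$builddir"                         # removes only what we created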
Shell scripts are a tool. They have a place. Nobody is saying get rid of them, nor is anyone saying get rid of them for deploys. I just want something a little more robust.
Thank you for explaining this with some clarity; it's early here, and I've been struggling to say this very thing.
Bash is fine, but it's not a sophisticated high-level language. For doing sophisticated things, a simplistic tool is not enough. We need processes that are deterministic and that can act intelligently. The "bash is fine" crowd, in my experience, tends to be the same crowd that thinks that servers are special snowflakes that we must feed and care for. Those days are over.