Nicely written up, but a superficial analysis with trite conclusions. It would be nice if someone took this comparative analysis of programming languages a little more seriously and went beyond the tiresome "should a semicolon be a separator or a terminator" or the naive imperative-functional distinctions.
Take for example ML (prototypical high-level functional) vs C (prototypical low-level imperative). What makes them essentially different? In ML you have the common imperative features (assignment, references) and in C you can write functions including higher-order (via pointers). The answer you are looking for is "memory management", automatic in ML versus manual in C. Everything comes out of this distinction, down to the fact that in C you don't have lambdas and currying (because no garbage collection means no closures).
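To make the closure point concrete, here is a minimal OCaml sketch (OCaml standing in for ML; the names are made up for the example): the partially applied function captures its argument in an environment that must outlive the call that created it, which is exactly what garbage collection makes safe and cheap, and what stack-disciplined C cannot do without managing that environment by hand.

    (* A curried adder: [add 1] returns a closure that captures [x]. *)
    let add x = fun y -> x + y

    (* The environment holding [x] must survive after [add 1] returns,
       so it lives on the GC-managed heap rather than on the stack. *)
    let increment = add 1

    let () = Printf.printf "%d\n" (increment 41)   (* prints 42 *)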
What makes ML and Algol 60 different? Both have a strikingly similar set of primitives, imperative and functional. The difference is call-by-value in ML vs. call-by-name in Algol 60, which leads to global effects in ML vs. local effects in Algol 60.
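A rough OCaml sketch of that evaluation-order difference (the function names are invented for the example): under call-by-value an argument's effects run exactly once, before the call; call-by-name can be simulated by passing a thunk, whose effects run only if and when the callee forces it.

    (* Call-by-value: the argument is evaluated, and its effect runs,
       before [ignore_it] is even entered, although the value is unused. *)
    let ignore_it _x = 0
    let _ = ignore_it (print_endline "evaluated eagerly"; 42)

    (* Simulated call-by-name: pass a thunk; the effect runs only if the
       callee chooses to force it, and here it never does. *)
    let ignore_thunk (_f : unit -> int) = 0
    let _ = ignore_thunk (fun () -> print_endline "never printed"; 42)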
What makes Lisp different from Scheme? The misunderstanding of alpha conversion in Lisp versus its correct implementation in Scheme.
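A small OCaml illustration of what getting alpha conversion right means in practice (OCaml, like Scheme, is lexically scoped): the closure keeps referring to the binding it was defined with, so renaming bound variables never changes the result; under the dynamic scoping of early Lisps, an identically named variable at the call site would have been picked up instead.

    (* Lexical scoping: [f] closes over the [x] bound at its definition. *)
    let f =
      let x = 10 in
      fun y -> x + y

    (* A different [x] at the call site cannot capture the closure's
       variable, so the result is 11, not 1001. *)
    let caller () =
      let x = 1000 in
      ignore x;            (* silence the unused-variable warning *)
      f 1

    let () = Printf.printf "%d\n" (caller ())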
And so on. It would be fun if someone who actually knows programming languages beyond the naive and the trivial took the time to write a proper potted history and/or comparative analysis.
This entire piece seems to have a bit of a fallacy running under it, that there exist ideal computer languages, and that we should treat all of them equally.
Rather, I'd argue that there are ideal computer languages for specific problems, and most of the holy wars he mocks have their roots in people working on different problems rightly concluding that their language X is ideal and language Y is inferior. The flame wars are peripheral.
That said, there are solid reasons for Dijkstra's critiques of BASIC and, to a lesser degree, Pascal and friends. Almost all the dialects of BASIC deserve to die.
I would love to see evidence of Dijkstra's critique of Pascal (which is NOT a BASIC dialect).
Dijkstra hated BASIC, Fortran, COBOL, and APL... but not Pascal. He wrote an Algol compiler and used Algol-based languages and formalisms throughout his entire career, even inventing some himself.
The reason is simple: Pascal is hard to hate (or love, for that matter). It's a reliable four-door automatic sedan.
The programming world isn't as balkanized as natural languages. If TIOBE is to be believed, the top 20 programming languages account for 85% of "use". By contrast, native speakers of the top 20 natural languages make up only about 50% of the world population. Here's a spreadsheet with the data.
http://j.mp/hiPGXA
Other factors to consider: learning a new programming language requires less than a tenth of the effort of learning a natural language (my estimate). That's for people. Getting my laptop to speak a new language is mostly a matter of downloading the interpreter or compiler. Granted, the interpreter or compiler has to be ported to the architecture or OS first. So not only is there less programming language balkanization, but the cost of balkanization is lower.
I think the differences between computer languages are more profound than the differences between human languages. Human language differences aren't necessarily "not profound", but for the most part they have the deep structural similarities mentioned in the article, and their interestingness comes from other sources. (I think a profoundly different language would have to be something that isn't a reshuffling of noun, verb, adverb, adjective, etc., such as the one in the classic Darmok episode of ST:TNG [1]. Which I think also can't work in practice, but at least it's different.)
The challenge of learning a new human language lies primarily in its sheer size; especially assuming two essentially unrelated languages like Chinese and English, you must relearn everything: the words, the phonology, the surface grammar, even as you still really have noun, verb, adverb, adjective. Learning a new programming language, on the other hand, can cause you to rethink everything you thought you knew about programming in general... but you can do it relatively quickly, because opening a file, reading some strings, chewing on them, and spitting them out into a new file just doesn't take several thousand verbs and nouns.
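For what it's worth, the whole "open a file, chew on the strings, write them back out" exercise really is tiny in most languages; a hedged OCaml sketch (the file names are placeholders) takes about a dozen lines:

    (* Read every line of input.txt, uppercase it, and write it to
       output.txt. The file names are just placeholders for the example. *)
    let () =
      let ic = open_in "input.txt" in
      let oc = open_out "output.txt" in
      (try
         while true do
           output_string oc (String.uppercase_ascii (input_line ic) ^ "\n")
         done
       with End_of_file -> ());
      close_in ic;
      close_out oc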
I make these observations without value judgment. I'm not saying one is better or worse than the other; both are worth pursuing. If I have a particular point at all, I guess it would be that using the learning of a computer or human language as a metaphor for the other doesn't seem like a great idea to me; there are too many relevant differences for the points people want to make.
I thought this a very nice, even-handed treatment of the various religious wars and the reasons for them over time. Of course, I also prefer Lisp, so I am predisposed to agree with the author.
The picture is an odd programming language family tree. Since when was CLOS a programming language, descended from Common Lisp? The Common Lisp Object System -- that's what CLOS stands for -- is just how Common Lisp does objects. You can make another object system as a library, if you want to. In fact, that's a pretty fun exercise.
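As a rough sketch of the "object system as a library" idea (in OCaml rather than Lisp, and only a toy, not how CLOS actually works): an "object" can simply be a record of closures over some private mutable state, with "methods" dispatched by ordinary field access.

    (* A toy object built from plain language features: a record of
       closures sharing private mutable state. No built-in object
       system is involved; this is the library-level encoding. *)
    type counter = {
      bump  : unit -> unit;
      value : unit -> int;
    }

    let make_counter () =
      let state = ref 0 in
      { bump  = (fun () -> state := !state + 1);
        value = (fun () -> !state) }

    let () =
      let c = make_counter () in
      c.bump ();
      c.bump ();
      Printf.printf "%d\n" (c.value ())   (* prints 2 *)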
But they place Common Lisp in 1984, probably because of the publication of the first edition of Common Lisp: The Language. CLOS came much later. And as the Common Lisp standard never included a full meta-object protocol, I think it's fair to list CLOS separately.
We almost did our web framework in Scheme back in the late 90s, but for some weird reason settled on PHP instead. Our project, and maybe the state of Open Source CMSs, could have looked different if we had chosen otherwise (Midgard, our framework, was the second free software CMS out there, and the first for PHP).
Anyway, this article made me buy the Wizard Book and Land of Lisp just to get back to the language... The last time I read (parts of) the Wizard Book was in the student commune sometime around '97.
The premise is the fatal disease. Programming languages are tools of expression. Asking which programming language is "best" is like asking which musical instrument is best. All the centuries of refinement and craftsmanship that go into making a good violin don't invalidate the piano.
The textual nature of the two aside, I think this is perhaps a superficial comparison. Western musical notation has survived largely unchanged for hundreds of years because it covers the bases of practically every major genre. While many computer languages are theoretically the same in terms of Turing completeness, they are also expressive media in that they each bring affordances to the table that facilitate or discourage particular concepts, abstractions and patterns.
There are some important things that are the same between the chant and the piece by Messiaen, but the notations are very, very different, and have accumulated some really important stuff over the years.
Saying that musical notation has been largely unchanged for hundreds of years is kind of like saying that English has remained largely unchanged for hundreds of years... in that it is incorrect.
I probably should have made myself clearer, but I was referring to the modern standard of notation, which, if I am not mistaken, is indeed largely the same as what you would find dating back to the 17th-18th centuries, i.e. several hundred years ago. A more instructive comparison, in this regard, would be between the Messiaen you linked and Bach's Well Tempered Clavier.
Yes, I'm familiar with the experimentation with notation of 20th century composers such as Xenakis, Cage, Stockhausen, etc., but they're beside the point (no offense to you or Xenakis; I happen to like his work). Their music is obscure and inaccessible to a great deal of Westerners, even many of those who could ordinarily be considered "musically literate", and few of these modern, experimental forms of notation, to my knowledge, have caught on in a way that's comparable to the popularity of even an "obscure" or "academic" computer language such as Haskell. There have been and always will be exceptions, not to mention music that falls outside the realm of Western tradition entirely, but you have to admit that the umbrella of expression that fits the modern standard of notation is pretty wide.
The point to take away, I think, is that once a standard came into common use that allowed composers to convey enough relevant information about the music they were creating, innovation in notation became a lot less important, for quite a long time. By a similar token, while English has of course been constantly evolving throughout its history, as all languages do, standard English orthography hasn't changed much in the last few hundred years either. You don't need a new kind of staff or notehead to represent chromaticism vs. baroque counterpoint, any more than you need new letters of the alphabet in order to write words and phrases that no one has ever heard before, in dialects that didn't exist at the time Walt Whitman wrote "Leaves of Grass", or what have you. As long as you're talking about notes over the chromatic scale, time and key signatures, common rhythmic subdivisions, and so on, you can notate whatever it is you like.
In any case, the point I was making earlier in the thread is that this seems fairly different from the development of computer languages, and I think that a lot of it has to do with the inherent nature of computer languages as tools of abstraction that extend beyond the merely representational. Hence the comparison with musical instruments – in my view, code is more like a hammer than it is a book.
Curl and Links are web-oriented programming languages? As I use and understand them, they are both web browsers... Maybe I haven't been keeping up with development.
EDIT: apparently both are languages as well. What confusing naming!
The author put an odd amount of emphasis on those languages. I'd never heard of them, and the emphasis he gave them seems disproportionate to the actual amount of interest there is in them.
No, he's a Lisp guy, from Boston. Curl is a Lisp programming language from the early noughties by some other Lisp guys from Boston, and Links was, as I understand it, a 2006 attempt by the Haskell guys to bring the goodness of Haskell to the web; Haskell being the language that took over the mantle of Most Advanced Functional Programming Language from Lisp sometime in the 1990s, a title Lisp had held since its inception in 1959. It's not surprising he'd mention the two of them.
Something that Wirth or Hoare wrote in 1970 deserves to be labeled old (and that's only because computing is such a new field - in quite a few other fields, that would be relatively recent). An article from 2006 is hardly old, and I think it still reflects the current state of the programming world.
It doesn't matter whether 2006 is old or not for an article on programming, I just like to know the context of things. Just like when I watch a movie I like to know what year it was made, who was the director and who wrote the script.
EDIT: sorry, this came across as terribly entitled. To put it differently, I think it helps to know the context of things, and the year something was made is part of that context. If the article submitter had linked to the actual article instead of the print version, the sidebar would have given the date. Alternatively, the submitter could have simply added the year to the title. Neither would have cost anything, and both would have added a bit more context to the submission.