> I've never really understood the advantage of dynamic typing.
It's easier to write code in dynamically-typed languages because (generally speaking; I'm sure there are exceptions to this rule) they are not as verbose as statically-typed languages.
The focus is now on writing bug-free programs as best we can, but I remember that when I started programming about 15 years ago there was still a vibe of "let's make it as easy as possible for people to program, so that everyone can write code; bugs don't matter (at least not that much)". That was the general vibe behind HTML and browsers (the way you can see the actual HTML "code" that displays this very page, even if it may be "broken" HTML), behind early JavaScript (does anyone remember how one was able to include different JS widgets on one's HTML page?) and behind very cool projects like One Laptop per Child [1] (which got killed by Intel and MS, among others, and which, presumably, allowed its users to see the Python source code of each program they were using). Those were the dying days (even though I didn't know it at the time) of "worse is better".
But in the meantime the general feeling around writing computer programs has changed: correctness and "bug-free-ness" are now seen as among the main mantras of being a programmer, and the programming profession itself has built quite a few walls around it (TDD was the first one; "use static types or you're not a real programmer" seems to be the latest) so that the general populace can be kept at the gates of the "programming profession". In a way that suits me, because I'm a programmer, and fewer people practicing this profession means more potential guaranteed earnings for me. But the idealist in me, who at one moment thought that programming could (and should) become as widespread as reading books and newspapers, feels let down by these industry trends.
> It's easier to write code in dynamically-typed languages because (generally speaking; I'm sure there are exceptions to this rule) they are not as verbose as statically-typed languages.
I really don't understand this argument. What typed languages have you used?
Decent type systems can infer most or all of your types, meaning you get the benefits of a type system without having to explicitly annotate everything.
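As a quick sketch of what inference buys you (TypeScript here, purely illustrative; the variable and function names are made up):

```typescript
// No explicit annotations on the values below, yet all of them are
// statically typed: the compiler infers the types from the expressions.
const nums = [1, 2, 3];               // inferred as number[]
const doubled = nums.map(n => n * 2); // inferred as number[]

function greet(name: string) {
  return `Hello, ${name}!`;           // return type inferred as string
}

// greet(42); // rejected at compile time: number is not assignable to string
console.log(greet("world"), doubled);
```

The only annotation written out is the parameter type of `greet`; everything else is inferred, so the "verbosity" cost is far lower than writing every type by hand.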
Also, when you code, is your bottleneck seriously how fast you can type? If it is, I would suggest you need a different approach.
> But in the meantime the general feeling around writing computer programs has changed, correctness and "bug-free-ness" are now seen as one of the main mantras of being a programmer
We are engineers, or at least we should be. Correctness has to be a goal, although I feel like even saying that is laughable right now. The amount of effort to prove a program is correct is extremely high for anything nontrivial. Robust type systems at least help cover our bases and help us reason about the structure of things, and more powerful type systems allow you to push even more invariants to the type system.
> Also, when you code, is your bottleneck seriously how fast you can type? If it is, I would suggest you need a different approach.
My brain can process only so much information; the fewer things it has to process (like type declarations), the better.
> We are engineers, or at least we should be. Correctness has to be a goal, although I feel like even saying that is laughable right now.
That’s what I was trying to explain, in a very convoluted way: back in the day, programmers were not intrinsically viewed as engineers, and that viewpoint was seen as a valid one. Engineers were seen as building cathedrals, while programmers (or hackers, if you will) were seen as building bazaars and other random stuff (like forums written in Lisp-like languages like Arc).
> My brain can process only so much information; the fewer things it has to process (like type declarations), the better.
Setting aside that I've already mentioned inference, which makes this a non-issue: my other problem is that just because you don't have a type system, or don't have to write types, doesn't mean the constraints behind that 'information' go away.

Your function still has a set of invariants that need to be maintained; you've just opted to make them invisible and have your program crash when they're violated, instead of knowing statically when you've made a mistake.
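To make that concrete, here is a small hypothetical sketch in TypeScript (the `User`/`sendReceipt` names are invented for illustration): the invariant "a user has an email" exists whether or not you write it down; the type just makes it checkable.

```typescript
// The invariant exists either way; the type makes it visible and checkable.
interface User {
  name: string;
  email: string;
}

function sendReceipt(user: User): string {
  // Without the static type, user.email could be undefined and this line
  // would only fail at runtime, deep inside toLowerCase().
  return `receipt sent to ${user.email.toLowerCase()}`;
}

// sendReceipt({ name: "Ada" }); // rejected at compile time: 'email' is missing
console.log(sendReceipt({ name: "Ada", email: "ADA@example.com" }));
```

In a dynamic language the equivalent mistake surfaces as a runtime crash (e.g. `AttributeError` or `TypeError`) at the call to `toLowerCase`, possibly long after the bad value was constructed.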
> It's easier to write code in dynamically-typed languages because (generally speaking; I'm sure there are exceptions to this rule) they are not as verbose as statically-typed languages.
I don't understand this reasoning though. Actually typing out the code takes a fraction of the time that you spend on writing software. Furthermore, we have autocomplete and had autocomplete for a long time in statically typed languages.
I was formally taught programming about a decade ago in Python, but I naturally gravitated towards Java (and C) because of static typing. The same arguments were made back then and I just didn't get them.
As my comment near the top of this comment tree hinted, I pretty much do find statically-typed languages to be superior overall, but the reason I find myself defending dynamically-typed languages all over this thread is the hard line being taken by many that they couldn't possibly have any advantages in any situation. I do plenty of scripting, to improve both my personal and professional productivity, and I don't even think twice about using Python instead of Kotlin or C++ or what-have-you. I iterate rapidly on my scripts, they're not part of large engineering systems that a reader may be bouncing around, the rare bug that may pop up isn't production-critical, and the ability to be both fluent and concise just makes the code faster to read and write, given the small scope of the program. This is a somewhat lazy example, because it's not far from the truth to say that scripting languages are suited to scripting and that's why they suck for engineering, but it illustrates the point that different situations can highlight or downplay the relative strengths and weaknesses of various languages and paradigms, and few programming-language characteristics are advantageous in every context[1].
The problem here is that half the people on this thread are looking for a simple, low-dimensional model to cram the issue into, requiring dynamic languages to have _zero_ advantages in order to sustain their view that statically-typed languages are superior overall.
I'm not really sure how to fix this tendency, and it's by no means limited to disputes within the field of programming. Simpler, black-and-white models are much easier for the brain to handle, so people gravitate towards them even when they don't match reality as accurately.
[1] though it is fair to say that a big chunk of the advantage of dynamically-typed languages is "accessibility to those who barely know how to code", which doesn't affect the calculus for someone who's capable of writing in either type of language
The problem is not that it takes a while, but that it takes a while when I'm in the middle of a complex task. Because of language verbosity, I might need to context-switch away from solving a hard problem.
For instance, if I am writing a complex function `frobnicate(x)` and I notice that I need to pass a configuration object to the function, I can just write `frobnicate(x, opts)` in the function declaration and `opts.foo` to access the relevant option, and go back to the complexity that is still fresh in my mind.
With a verbose type system, I need to pick a type for `opts` NOW, which means I need to define a new type NOW, which will typically involve a new file with at least a dozen lines. By the time I'm done generating those stubs, the hard bits of the original function will have faded from my mind.
The code I write, in either case, isn't the final code. Once I have an initial working version where the complexity has been embedded in actual code instead of floating around in my mind, I can start iterating, refactoring, annotating, and so on. In TypeScript, at this point I will define a new options type and annotate `opts` with it.
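In TypeScript specifically, that two-phase workflow doesn't even require a separate file: an inline type keeps the momentum, and the named type can be extracted later. A hedged sketch, reusing the commenter's hypothetical `frobnicate`/`opts`/`foo` names (`frobnicateV2` is my own label for the refactored version):

```typescript
// Phase 1: while exploring, a loose inline type keeps you in flow.
function frobnicate(x: number, opts: { foo?: number } = {}): number {
  return x + (opts.foo ?? 0);
}

// Phase 2: once the design settles, promote the shape to a named type
// and annotate the parameter with it.
interface FrobnicateOptions {
  foo?: number;
}

function frobnicateV2(x: number, opts: FrobnicateOptions = {}): number {
  return x + (opts.foo ?? 0);
}

console.log(frobnicate(1, { foo: 2 }), frobnicateV2(1));
```

The refactor from phase 1 to phase 2 changes no call sites and no behavior, so the "define the type NOW" cost can be deferred to the iteration pass the comment describes.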
Or you could just not. I mean, you could write out your code as if you already had the types specified, then fill in the boilerplate and fix the errors afterwards. That's essentially what happens with dynamic typing anyway, so the only major difference is that your IDE/editor might notice.
In a statically typed language, the code will not run until I have convinced the compiler that it should. If I wanted to explore the behaviour of `frobnicate` before committing to a design, I would still have to define all those satellite types that may well become unnecessary.
I think the main reason some people say they prefer dynamically-typed languages is because of a feature that historically correlated quite well with it: they provide almost instant gratification.
There are two parts to that: no required boilerplate (“Hello, world!” shouldn’t require more than twice that number of characters, and requiring a manual compile-link step is a no-no) and low time to first output (who cares that each value carries a hundred or more bits of type info and takes an indirection to access, as long as that “Hello, world!” makes it to the screen in <100ms).
Historical counterexamples to the claim that people prefer dynamic typing are the ROM-based BASICs of the 8-bit computer era: statically-typed, but popular, IMO because they ticked both boxes.
[1] https://en.wikipedia.org/wiki/One_Laptop_per_Child