> if you haven't coded in a language like OCaml, Haskell, ML etc
Same argument could be made the other way, if you haven't used Smalltalk, Clojure (and other languages that are strongly dynamic), you shouldn't discount them.
I've coded in Clojure enough to know that there are cases where it completely sucks (mostly when you have to do imperative low level code). I used to have a link to a standard library implementation of channels that was downright hideous to read as an example of this (and that code transformed to Java is actually easier to follow).
Clojure has some cases where it's insane how elegant you can make the solution, but frankly static languages with good type systems and tooling come close enough but don't have the scaling downsides.
You can find hideous Haskell code too. Especially in code for which lazy evaluation doesn't work and you need to coax the runtime into computing stuff in the right order.
Once upon a time I was writing a lot of Objective-C, and back then I found it really cool and fun to embrace the potential of dynamic dispatch. But when I did get bitten, the painful part was that runtime failures often surfaced far away from the source of the issue, so debugging was often extremely painful.
I'm not sure if Smalltalk and Clojure do this better, but since I've gotten into static languages, I basically haven't looked back, and I haven't missed the dynamism much at all.
You're making exactly the same point as seanwilson did above: "I have used this dynamic language and it was worse than this strong static type language", while not even having tried something like Smalltalk or Clojure.
Yes, languages that are truly dynamic (ship with a REPL, have an instant change-then-run-this-snippet workflow, can redefine anything in the runtime AT runtime) do work a lot better than Python; comparing Python to them is basically like comparing the type systems of Java and Haskell.
Give it a try. Worst that can happen is that you now have tried and learned something different, I'm sure it'll give you some knowledge that'll be helpful in any programming language :)
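Python can approximate at least a slice of that "redefine anything at runtime" workflow, because names are looked up late. A minimal sketch (the function names are mine, purely illustrative): redefining a function mid-session changes the behavior of code that was already wired up to it, without restarting anything.

```python
# Late binding in action: `handler` is looked up by name on each call,
# so rebinding it changes behavior without restarting or re-wiring.

def handler(msg):
    return f"v1: {msg}"

def dispatch(msg):
    # looks up the *current* global binding of `handler` at call time
    return handler(msg)

print(dispatch("hello"))  # v1: hello

# "Redefine anything at runtime": rebind handler mid-session
def handler(msg):
    return f"v2: {msg.upper()}"

print(dispatch("hello"))  # v2: HELLO
```

This is still a far cry from a Smalltalk image, where the whole running system (including its tools) is editable in place, but it hints at why late binding matters for that workflow.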
My comparison was more with the developer experience. In Smalltalk you live inside your program, doing edits as you go along and with dynamic introspection. In Objective-C you would edit your program from the outside, and then build it after doing changes, maybe introspect it from the outside.
Pharo is a nice implementation of Smalltalk, and they have a nice page that describes the top features of the language. Take a look and see if it's different from the developer experience you had with Objective-C: https://archive.is/uAWX5 (linking to an archive via archive.is as some images couldn't load)
On the surface, yes. However, in order to keep the "-C" side, Objective-C lost all of Smalltalk's dynamic REPL development experience in exchange for the traditional compile/run/repeat cycle.
In particular, like Smalltalk and unlike, for example, C++, Objective-C dispatches methods dynamically (you can send a message to an object through an empty interface) and has does-not-understand functionality.
Everything Is An Object in Smalltalk, and of course this can't be true in Objective-C due to its mixed heritage.
In Smalltalk you are always running inside a live system, so if you make a mistake, it's immediately apparent that it happened right then and there. Obj-C is a code-build-run environment, so the place you changed and the state you have to reach to exercise that change are far apart, and you might not catch the mistake until runtime / in the field.
A strongly typed dynamic language will raise a type error if you, for instance, try to add two incompatible types. In Python:
Python 3.9.0 (default, Oct 10 2020, 11:40:52)
[Clang 11.0.3 (clang-1103.0.32.62)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> 1+"1"
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for +: 'int' and 'str'
While a weakly typed dynamic language like JS will allow it:
Welcome to Node.js v16.0.0.
Type ".help" for more information.
> 1 + "1"
'11'
If the language allows it, the types are (ipso facto) not incompatible; "weak typing" is more a subjective statement about how well a language lines up with the speaker's mental model of what types should be compatible than an objective one about falsifiable properties of the type system.
I don't know, but JS and PHP implicit conversions are widely acknowledged to cause plenty of issues, which Python and Ruby do not have, since they have far fewer cases of implicit conversion.
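To make the contrast concrete, here is a small sketch of Python's side of this claim: its implicit conversions are essentially limited to a few numeric cases, and mixing unrelated types raises an error instead of coercing.

```python
# Python's implicit conversions are limited to a few numeric cases.
print(1 + 2.5)    # 3.5  (int widened to float)
print(True + 1)   # 2    (bool is a subtype of int)

# Unrelated types raise instead of coercing, unlike JS's 1 + "1" == "11".
try:
    1 + "1"
except TypeError as e:
    print("TypeError:", e)
```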
I contend there is no such thing as "weakly typed", because the term suggests it exists in opposition to strongly typed systems. Instead, "weak typing" should be understood as the absence of type-correctness checking by either the compiler or the runtime. The term should not be used for languages with plenty of implicit conversion between types, because in those languages (like JS) the implicit conversions are very, very well-defined and predictable ahead of time; TypeScript models JavaScript's implicit type conversions accurately, too.
JavaScript's more forgiving, and often surprising, implicit conversions are still rigidly defined.
Remember that implicit conversions should never (ostensibly) throw an exception or cause an error: an implicit conversion should always succeed, because it should only be used to represent an obvious and safe narrowing conversion. A narrowing conversion necessarily involves data loss, but it's safe when all of the data needed by the rest of the program is provably retained.
> I contend there is no such thing as "weakly-typed" because it suggests it exists in opposition to strongly-typed systems - but instead I feel "weakly-typing" should be considered the absence of type-correctness checking by either the compiler nor the runtime
This doesn't feel like a very useful definition. What languages would fall under the category of "weakly typed" by your definition? I can only think of C, and even that is only true for a subset of the language (when you're messing about with pointers).
> Remember that implicit conversions should never (ostensibly) throw any exception or cause any errors because implicit conversion should always succeed because it should only be used to represent an obvious and safe narrowing-conversion.
> const fun = (x, y) => (x + y) / y
> fun(1, 2)
1.5
> fun("1", 2)
6
No exception? Yes. No error? Nope, implicit conversion absolutely just caused an error. If I got that "1" from an input field and forgot to cast it to an int, I want to be told at the soonest possible juncture, I don't want to rely on me noticing that the result doesn't look right. As it is, "weak" feels like a good word for the types here. They don't hold their form well.
'Indeed, the phrases are not only poorly defined, they are also wrong, because the problem is not with the “strength” of the type checker but rather with the nature of the run-time system that backs them. The phrases are even more wrong because they fail to account for whether or not a theorem backs the type system.'
I don't know whether to feel disgusted or fascinated by this. I find the concept of prototype-based inheritance quite elegant, but this is a bit too insane.
There was an experimental language called Cecil that had "conditional inheritance". You could say things like
    class Rectangle:
        var w, h

    class Square(Rectangle if w == h):
        ...
You could then overload functions/methods on both `Square` and `Rectangle` and it would call the correct overload depending on the runtime values of `w` and `h`.
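Python has nothing like conditional inheritance built in, but the dispatch behavior can be emulated by hand. A rough sketch (all names here are mine, not Cecil's): a dispatcher that checks the `w == h` predicate at call time and routes to the "Square" overload when it holds.

```python
# Hand-rolled emulation of Cecil-style conditional dispatch:
# the overload is chosen from runtime field values, not just the class.

class Rectangle:
    def __init__(self, w, h):
        self.w, self.h = w, h

def describe(shape):
    # the "Square" overload applies only while the predicate w == h holds
    if isinstance(shape, Rectangle) and shape.w == shape.h:
        return describe_square(shape)
    return describe_rectangle(shape)

def describe_rectangle(r):
    return f"rectangle {r.w}x{r.h}"

def describe_square(s):
    return f"square with side {s.w}"

print(describe(Rectangle(2, 3)))  # rectangle 2x3
print(describe(Rectangle(4, 4)))  # square with side 4
```

The interesting part of Cecil was that the language did this routing for you, and membership in `Square` could change as `w` and `h` changed; here it has to be re-checked manually on every call.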
They are dynamic to a degree that facilitates interactivity and expression on a different level than one might be used to. Late binding, REPL, expression based, building up code while it is running without losing state etc.
But none of those things you mentioned are dependent on a dynamic type system: on the contrary, plenty of statically-typed languages offer late-binding, expressions (certainly enough to implement a REPL) - I know C# certainly has all of those things.
The last thing: being able to edit program code, add new types/members/functionality and so-on while it's running... that's not really a language limitation nor typing-system limitation either: that's just good tooling that is able to hot-swap executable code after the process has already started. Visual Studio has had "Edit & Continue" in debugging sessions for almost 3 decades now, and the runtime build system used by ASP.NET means that source code and raw aspx/cshtml views can still be FTP'd up to production and just work, because .NET supports runtime compilation.
------------
For ordinary business software, where the focus is on data modelling and data-transforming processes, programs basically just pass giant aggregates of business data around, so strong typing is essential to ensure we don't lose any properties/fields/values as these aggregate objects travel through the software proper. In this situation there is absolutely no "expressiveness" that could be gained by simply giving up on type-correctness and instead relying on unit tests (and hoping you have 100% test coverage).
> that's not really a language limitation nor typing-system limitation either: that's just good tooling that is able to hot-swap executable code after the process has already started.
It's more complex than that. The language absolutely has to define these operations for them to make any sense. Modifying some code inside a function is the easy case, and supported by many environments. Modifying types is much more difficult, and usually not supported (the stock .NET and JVM hot-swap facilities don't support structural type modification, i.e. adding or removing fields or methods). I'm not sure if the C++ debugger can handle this either. Common Lisp can do it, and it actually defines how existing objects behave if their type changes.
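Python offers a small taste of the Common Lisp behavior, though nothing as principled as `update-instance-for-redefined-class`: an existing object's class can be reassigned at runtime, and the instance carries its state into the new class. A sketch (class names are mine):

```python
# Changing an existing object's type at runtime:
# the instance keeps its state (__dict__) under the new class.

class V1:
    def greet(self):
        return "v1"

class V2:
    def greet(self):
        return "v2: " + self.name

obj = V1()
obj.name = "alice"       # instance state, stored on the object itself
print(obj.greet())       # v1

obj.__class__ = V2       # swap the object's type in place
print(obj.greet())       # v2: alice  (old state, new behavior)
```

Unlike Common Lisp, Python gives you no hook to migrate the old state when the layouts differ; if `V2` expects attributes `V1` instances never had, you find out with an AttributeError at use time.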
> For ordinary business software, where the software's focus is on data-modelling and data-transforming-processes, so all these programs basically just pass giant aggregates of business data around, so strong-typing is essential to ensure we don't lose any properties/fields/values as these aggregate objects travel through the software-proper.
There are also ways of dealing with this in dynamic languages. For example, Clojure has clojure.spec, which achieves the same results without relying on strong typing.
In the end, it's all tradeoffs. There is no "one solution" that will be perfect for everything.
REPL and expression-based are not limited to dynamic languages. REPL-oriented programming as it's understood by Lisp/Clojure users might be, I'm not sure.
> If you're not familiar with Clojure, you may be surprised that I describe the REPL as Clojure's most differentiating feature: after all, most industrial programming languages come with REPLs or 'shells' these days (including Python, Ruby, Javascript, PHP, Scala, Haskell, ...). However, I've never managed to reproduce the productive REPL workflow I had in Clojure with those languages; the truth is that not all REPLs are created equal.
Basically, the "REPL" you usually call a REPL is in fact a shell, and not a REPL as normally used in lisp languages.
I think FORTH has something very similar to the lisp REPL, if I'm understanding it properly. I know little of lisp, but everything I've read makes it seem that it and FORTH are almost evil twins of each other. Opposite but equal.
Thanks - from what I understand, the difference I drew between having a REPL and being able to do REPL-driven development is the one you make between a shell and a REPL.
Unlike most languages, Lisp defines how strings can be transformed to Lisp ASTs (s-expressions). In Scheme (which doesn't allow reader macros) this is a safe operation: you can read untrusted input without worry.
By contrast, most other languages only have eval() - which takes a string and parses and executes it as code. In Lisp, eval() takes a Lisp list object and executes that.
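Python actually sits somewhere in between: its `ast` module separates "read" from "eval" somewhat like Lisp, and `ast.literal_eval` is a rough analogue of the safe Scheme `read` (it only builds literal data, never runs code). A sketch of that split:

```python
import ast

# "Read": turn a string into an AST without executing anything.
tree = ast.parse("1 + 2 * 3", mode="eval")
print(type(tree.body).__name__)  # BinOp

# Safe evaluation of untrusted literal data, Scheme-read style:
print(ast.literal_eval("[1, 2, {'a': 3}]"))  # [1, 2, {'a': 3}]

# Anything that would execute code is rejected outright.
try:
    ast.literal_eval("__import__('os').system('echo pwned')")
except ValueError:
    print("rejected")
```

The remaining gap to Lisp is that Python's AST objects aren't the language's own ordinary data structures (lists and symbols), so manipulating them is nowhere near as natural as working with s-expressions.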