Everyone will have a different take on this based on the kind of software they write. The bottom line is that math is pervasive at all levels of software engineering but it impacts developers in different ways.
If you write video games or other graphics-related applications, or if you work on jet engines, modeling, or industrial engineering, you are obviously immersed in math on a daily basis. If you write servlets on a Java backend, you probably don't need to be as versed in math, physics, or chemistry.
Still, no matter how close to mathematics the software you write is, the bottom line is that you are using a programming language to do so, and all languages are rooted in math to various extents. Some are very closely connected to, and sometimes based on, specific mathematical fields (e.g. Haskell and category theory), while others are more loosely based on such principles.
I find that just like you don't need to know how an engine works in order to drive a car, you don't need to know a lot of math to be a decent developer, but it certainly doesn't hurt to read up on some of the theoretical foundations that underlie all of computer science.
Has there really been that much actual progress made lately, though?
I'd say that C++, Erlang and perhaps Haskell are the most recent languages to bring anything new to the table in a way that's at least somewhat usable in practice. And they're decades old at this point.
C++ helped make OO and generic programming feasible. Erlang helped with developing concurrent, distributed, fault-tolerant software. Haskell brought pure functional programming and laziness to a wider audience.
Otherwise, the widely-hyped languages of today, including Scala, Ruby, JavaScript, Go, Rust and even Kotlin generally just rehash what C++, Erlang, Haskell and other languages offered several decades ago.
Some of those languages, like JavaScript and even Go, are arguably worse in many ways than languages developed in the 1980s or way earlier.
At best, we're seeing small, incremental improvements. More realistically, we're just seeing old ideas rehashed again and again, with minor syntactic differences. Thus we aren't really seeing real "experimentation", and we aren't witnessing much "progress".
This sort of conversation is usually pretty subjective so it will be impossible to actually come to any agreement.
That said, ignoring the JVM in this conversation is near criminal. High performance GC, JIT, standardized profiling, etc. are all major steps forward. Are they language improvements? No, but they proved that a VM is a viable target platform and that was under serious debate in the late 90s.
I ignored the JVM because it really isn't that special at all, and its supposed benefits haven't been proven to exist in reality.
Various types of VMs predated it by decades, including ones that used various forms of GC and JIT compilation. See some of the Pascal and Smalltalk implementations from decades ago as examples of such systems. These did see a fair amount of use, in practice.
The JVM isn't particularly portable. It may support most of the major mainstream OSes used today, but it's not like portable C or C++ code (including other language implementations for Perl, Python, and so forth) that can run on all sorts of obscure and ancient systems.
Its performance isn't particularly remarkable, either, even with all of the effort that some very large companies have put into it. It's been pretty much relegated to server-side use at this point, and even then we're seeing more and more effort to move away from it. People are realizing that it's often better to go native, as we're seeing with newer languages like Go and Rust.
The JVM did get a lot of hype, and has seen a lot of use, but it's mediocre at best. There's nothing particularly earth-shattering about it.
I hate to debate you about Rust, but no other practical language offers the memory safety without garbage collection that Rust provides; its lifetime and ownership systems are fairly unique. AIUI, Cyclone had something similar to lifetimes, but in a different form, and without Rust's considerations for concurrency and data races.
If all the languages and frameworks that are being developed were progress, then sure. If they were meant as just experiments, then more power to them.
Here's the deal: we'll see a flood of "Why I chose (X) language" posts. A year later we'll see a flood of "Why I left (X) language" posts. Developers read too many blogs and follow too many fads. They are like golfers! "This'll get you an extra 10 yards on your swing!" B.S.
So what? I don't know about you, but I started programming for fun. Where's the fun if you never experiment or try new things?
You can't discourage people from experimenting with new languages, frameworks, design patterns, etcetera, just because some people might use it and fail.
I am not assuming that at all. Re-read my first two sentences.
The problem that I see is that developers only want to work on what is new. They don't want to work with C++ because it's old. They don't want to work with Java because it's old. They don't want to work with Ruby because even it is old. It's the same thing with frameworks. There has been an explosion of frameworks because developers can't be caught dead working on something older than 6 months.
If your intention is to rewrite entire applications every year, knock yourself out. But if you're writing apps for customers who expect them to be solid and to remain in the field for a useful amount of time, then you need the reliability of C++ or Java or C, not some language someone whipped together in a handful of weekends.
Another commenter mentioned that they began developing because it was fun. I did too. It still is, but I also grew to care about the products I develop. And no, it's not fun to chase issues with immature technology when a customer wants answers.
I agree with the positivity. It is always easiest to critique and stick with the familiar.
One thing we can take away from the slew of new languages is that languages are now easier than ever to create and implement. This should be a good thing: it means faster iteration can occur. Even if that iteration is not necessarily taking place, the sheer number of new languages demonstrates the ability.
> The problem with Android's layout system (if I understand it correctly) is that you specify how things are arranged relative to each other, so you never really know where something is.
RelativeLayout is just one of several layouts you can use on Android. Typical Android GUIs are usually a mix of it and a few others. This combination is very powerful and has been instrumental in making Android GUIs scale so well across many different devices with various resolutions and DPIs, a challenge that Apple will soon face.
> Sorry, we just wanted to give the background to the project and why in the post, rather than focussing too heavily on code.
Your landing page should contain what your users want to see, not what you want to put there. You might be excited about the motivation behind your project but nobody cares, really.
I clicked on the link and I spent ten minutes reading a wall of text hoping to find good reasons why I should switch from Angular, Backbone or Ember. Instead, I just closed the window without knowing anything about your framework.
> There's plenty of API docs for the core components
Still not a substitute for a user manual, even a tiny one.
A little harsh, don't you think? The link is to a blog post - not a landing page - and it seems to me it can contain whatever the author wanted it to. The post wasn't submitted here by anyone from &yet, but by Jeremy Ashkenas (the creator of Backbone which is partly the basis for ampersand).
But the landing page doesn't say much as well. The first section is "Why?," and the only "what" is "Ampersand.js is a well-defined approach to combining (get it?) a series of intentionally tiny modules."
I don't know what that means, and the reasons given are "simplicity of tiny modules and npm dependency," which are not exactly convincing by themselves, as I can do that just fine with plain Browserify. Then, going to the "Learn" page, I get my hands busy... with something I know nothing about.
This is heavily marketed towards Backbone users, I'm guessing.
> You might be excited about the motivation behind your project but nobody cares, really.
Yes, we'll try to improve the content on the homepage to make it more focussed. Though I guarantee that if we didn't talk about the motivation, we'd have people asking "why did you make another framework?!"
> Still not a substitute for a user manual, even tiny.
I'm not suggesting we've got it perfect by any stretch, but I'm not sure exactly what you're hoping to see? There's also http://ampersandjs.com/learn with some more detail around the various pieces.
I'm excited. Not so much by the fact that Google now provides this service (which I may or may not use), but because this is going to put a tremendous amount of pressure on other ISPs, and competition is badly needed in this field.
It was commissioned and paid for by the Catholic Church; of course it's going to represent concepts of the Catholic creed. If NASA ordered its rockets painted with science-based themes, you could hardly call such paintings "inspired by science".
"Inspired" implies that the artist woke up with an idea and painted it.
Michelangelo was mostly told to 'paint the ceiling'; what he painted was of his own devising. For sure he would probably not have been paid if he had painted a forest scene, but there is some evidence in his own hand that his inspiration was mostly scripture (the OT).
I'm pretty sure the reigning pope approved of the design, both in its execution and in its content.
But if he had done the job according to the normal way of decorating ceilings in that day we likely would not be having this discussion. It really is a masterpiece.
Why not? They would be the genesis of the idea, after all. It just seems bizarre to me to say that religion did not urge him to do this, which is the usual meaning of 'inspire.'
> Unless you use Erlang or OS processes you are sharing memory.
Actually, even actor-based systems share memory. If two actors A and B send a message to an actor C and expect a response from it, they are sharing memory: what's in C's state. Which can be different depending on whether C received A's message first or not.
> If two actors A and B send a message to an actor C and expect a response from it, they are sharing memory: what's in C's state.
Ok, in that respect there is just one big pile of shared memory in the whole world, isn't there (except maybe for military air-gapped systems)? That's the equivalent of saying that if A makes an HTTP POST to server C, then it shares memory. I'm not sure what you mean by "shared memory"; usually it means living in the same heap, so you can access it via a pointer or reference.
No apology there (nor should there be).