Then why not just require explicit initialization? If "performance" is your answer, a much safer alternative would be to teach the compiler to recognize explicit zero-initialization and skip those writes whenever the allocator already guarantees zeroed memory. Replacing one implicit behavior with another is hardly a huge success...
Operating systems usually initialize new memory pages to zero by default, for security reasons, so that a process can’t read another process’s old data. This gives you zero-initialization “for free” in many cases. Even when the in-process allocator has to zero a block itself, a single bulk clear is generally cheaper than the corresponding type-specific, field-by-field default initialization.
If you have a sparse array of values (they might be structs), you can use the zero value to mark an entry that isn’t currently in use, without the overhead of (re-)initializing the whole array up-front. This matters especially when a single byte per element would suffice as the marker, yet a compiler requiring explicit initialization would force you to initialize every element in full.
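Here is a minimal Go sketch of that idea (the Entry type and its fields are made-up names for illustration): the zero value of each slot doubles as the “not in use” marker, so allocating the slice is the only initialization needed.

    package main

    import "fmt"

    // Entry is a hypothetical sparse-array slot. Its zero value
    // (used == false) doubles as the "not in use" marker, so no
    // per-element initialization pass is needed.
    type Entry struct {
        used  bool
        value int
    }

    func main() {
        // make returns zeroed memory: every slot already reads as
        // "not in use" without an explicit loop over 1024 elements.
        entries := make([]Entry, 1024)

        entries[42] = Entry{used: true, value: 7}

        for i, e := range entries {
            if e.used {
                fmt.Printf("slot %d holds %d\n", i, e.value)
            }
        }
    }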
Similarly, there are often cases where a significant part of a struct typically stays at its default values. If those defaults are zero, which is commonly the case (or can commonly be made the case), you avoid a significant number of redundant writes.
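In Go terms (the Config struct below is a made-up illustration), a composite literal that names only the non-default fields leaves everything else covered by the single bulk zeroing of the allocation, rather than by field-by-field writes:

    package main

    import "fmt"

    // Config is a hypothetical options struct whose defaults are
    // deliberately the zero values: false, 0, "", nil.
    type Config struct {
        Verbose    bool
        MaxRetries int
        LogPath    string
        Tags       []string
    }

    func main() {
        // Only the field that deviates from its default is written;
        // the rest stay at their zero values for free.
        cfg := Config{MaxRetries: 3}
        fmt.Printf("%+v\n", cfg)
    }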
It also allows flexibility with algorithms that lazily initialize their memory. An algorithm may be guaranteed to eventually initialize all of its memory, but the compiler has no way to determine this statically, so you’d have to perform a dummy initialization up-front just to silence it.
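A sketch of that situation in Go (fillByPermutation is a contrived example): every slot of the result is written exactly once, but in a data-dependent order that no definite-assignment analysis could verify, so a language requiring explicit initialization would demand a dummy up-front pass.

    package main

    import "fmt"

    // fillByPermutation writes every index of out exactly once
    // (assuming perm is a valid permutation), but in an order
    // determined by runtime data. A compiler cannot prove statically
    // that perm covers every index, so under mandatory explicit
    // initialization you'd have to zero-fill first anyway.
    func fillByPermutation(perm []int) []int {
        out := make([]int, len(perm)) // zeroed by the runtime
        for i, p := range perm {
            out[p] = i * i
        }
        return out
    }

    func main() {
        fmt.Println(fillByPermutation([]int{2, 0, 3, 1}))
    }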
"Often enough" is what's introducing the risk for bugs here.
I "often enough" drive around with my car without crashing. But for the rare case that I might, I'm wearing a seatbelt and have an airbag. Instead of saying "well I better be careful" or running a static analyzer on my trip planning that guarantees I won't crash. We do that when lives are on the line, why not apply those lessons to other areas where people have been making the same mistakes for decades?
Please, can we stop assuming every single piece of software has actual lives on the line? These comment threads always devolve into implicit advertisements for Rust/Ada and other super-strict languages because “what about safety?!”
It is impossible to post about a language on this forum without the pearl clutching starting if the compiler is a bit lenient instead of triple-checking every single expression and making you sign a release of liability.
Sometimes, ergonomics and ease of programming win over extreme safety. You’ll find that billion-dollar businesses have been built on zero-as-default (as in Go), and often the people reaching for it, or for Go, are just writing small personal apps, not a cruise missile navigation system.
I'm actually with you on the ease of use. I don't see this as the opposite of safety. To me, making it harder to make mistakes means it's easier to use: easier to use right and harder to use wrong. I'm not a Rust or Ada advocate. I'm just saying that making it harder to repeat the same mistakes people have been making for decades would be a good thing. That would contribute to ease of use in my book, since there are fewer things you need to think about that could possibly go wrong.
Or are you saying that a certain level of bugs is fine and we are at that level? Are you fine with the quality of all the software out there? Then yes, this discussion is probably not for you.
> Are you fine with the quality of all the software out there?
This is the kind of generalisation I'm ranting against.
It is not constructive to extrapolate a discussion about a single, perhaps niche, programming language into advice applicable to "all the software out there". But you probably knew that already.
TL;DR: I disagree, and I will say upfront that my views on software are extreme. I think quality is a glaring issue in most software.
There is a lot of subpar software out there, and the rest is largely decent-but-not-great. If it's security I want, that's commonly lacking, and hugely so. If it's performance I want, that's commonly lacking[0]. If it's documentation...you get the idea. We should have rigor by default, and if that means software is produced slower, I frankly don't see the problem with that. (Although commercial viability has gone out the window unless big players comply.) Exceptions will be carved out depending on the scope of the program. It's much harder to add in rigor post hoc. The end goal is quality.
The other issue is that a program's scope is indeed broader than controlling lives, and yet there are many bad outcomes. If I just get my passwords stolen, or my computer crashes daily, or my messaging app takes a bit too long to load every time, what is the harm? Those are wildly different outcomes, of course, but I think at least the first two are obviously quality issues, and the third also matters. Why? When software is such an integral part of users' lives, minor issues cause friction that prompts workarounds or inefficiencies. [1] discusses a similar line of thought. I know I personally avoid some common actions (e.g. checking LinkedIn) because they involve pain points around waiting for my browser to load and whatnot; nothing major, but something that's always present. Software ("automation") in theory makes everything the user implicitly desires into a non-pain point.

An interesting blend of issues is system dialog password prompts, which users generally try to either avoid or click through on autopilot, which tends to reduce security. Or take system update restarts, which discourage updating frequently. Or take what is perhaps my favorite invective: blaming Electron apps. One Electron app can be inconvenient. Multiple Electron apps can be absurd. I feel like I shouldn't have to justify calling out Electron on HN, but I do, but I won't here.

And take unintended uses: if I need to lay an injured person across two chairs, I sure hope a chair doesn't break or something. Sure, that's not the intended use case of a chair, but I don't think it's unreasonable to expect a well-made chair to live up to that. I wouldn't put an elephant on the chair either way, because intuitively I don't expect that much. Even then, users may expect more out of software than is reasonable, but that should be remedied, not overlooked.
Do not mistake having users for having a quality product.
You seem to apply eager evaluation to usability, whereas in practice most people only need lazy evaluation. We assess the risk of going from point A to point B, two concrete points. You seem to apply risk assessment equivalent to JavaScript's array.flat(Infinity).
We don't need to talk about theoretical risks. Is there not something wrong with a calculator app asking for contacts and location permissions[0]? Are ads fine if they can be used to track every detail of what you do and want? Was it fine when CrowdStrike caused Windows systems in airports to BSOD and led to massive delays? I'm not even talking about threats to life here. There is plenty of evidence that a lot of software has...issues. If you haven't come across one that you consider indicative, try waiting a few years. Time doesn't heal bugs that don't get fixed, and the best-case scenario is a headache or some lost money. You can say "in practice most people only need lazy evaluation" not because software quality is actually fine, but because there aren't delightfully convenient alternatives to switch to. In practice, most people settle for something that somewhat works, even when it often doesn't, because complaining otherwise is unproductive.
And you seem to have ignored a lot of what I said, as if I was just talking about a few rare, critical problems.
"It is difficult to get a man to understand something, when his salary depends on his not understanding it."
- Upton Sinclair
and note (because it is not included in the quote) that a literal salary is neither necessary nor sufficient.

Really? Is this not just resigning yourself to subpar software? My counterpoint is that what is popular is not necessarily what should be popular. And I think you're still tunnel-visioning on a specific thing to criticize. Do I have to keep giving examples until I find one you will deign to agree is a serious issue? Just as hasty generalization is harmful, so is hasty specialization. Perhaps you personally don't see a problem, but there can be many reasons for that.
It's getting from point A to point B with whatever works best given the circumstances after considering all the pros and cons. Sometimes that is garbage software. I mean I've even used _____ once or twice! [edit] redacted to not throw any software under the bus
> It's getting from point A to point B with whatever works best given the circumstances after considering all the pros and cons.
I agree with you on this, but on this forum that's full of people who write software, I'm skeptical that making better software isn't often (usually?) the better choice. But I understand that this is one of those "critical mass" things where a few people can't do nearly as much.
The point is that "better" is relative to a whole bunch of trade-offs that you have to manage, picking and choosing per language, per project, per need. And just like in spoken language, there's no obvious Perfect Software Language. They all have trade-offs.