
I can't imagine error-checking if's being a source of slowness. How are exceptions generated in the first place? Using if's. And anyway, even well structured code is full of if's (e.g. array iteration), and it's not a problem. Especially for error-checking, when the error case is rare, the CPU will correctly predict the branch to take most of the time, so the cost of the if is negligible.
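
Something like this is what I have in mind (a made-up sketch; parse_one is hypothetical):

    // A rarely-taken error branch in an iteration loop. The branch predictor
    // quickly learns that parse_one() almost never fails, so the check itself
    // is nearly free on the happy path.
    #include <cstddef>

    int parse_one(int x);              // hypothetical: negative value on (rare) error

    int parse_all(const int *items, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i) {
            int r = parse_one(items[i]);
            if (r < 0)                 // rare: predicted not-taken after warm-up
                return r;              // propagate the error up
        }
        return 0;
    }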

The only place where if's should probably be minimized is on hot paths, where there shouldn't be any I/O or other exception-prone code anyway.

I would agree that manual stack unwinding (many stack frames deep) by propagating error values up the call chain is bad. The problem here is not the if's, but that the program structure is unnecessarily convoluted. This can happen when specific code is calling generic code a lot (which can be avoided by having the generic code call the specific code instead).
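
I.e. the pattern I consider bad looks like this (a made-up sketch): every intermediate layer does nothing with the error except forward it upward.

    int leaf(void);                    // hypothetical: 0 on success, negative on error

    int middle(void) {
        int r = leaf();
        if (r < 0) return r;           // just forwarding
        /* ... real work ... */
        return 0;
    }

    int outer(void) {
        int r = middle();
        if (r < 0) return r;           // forwarding again, frame after frame
        /* ... real work ... */
        return 0;
    }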

> You are writing bad code, and should be ashamed.

Whoa, hold your horses!



> I can't imagine error-checking if's being a source of slowness.

The overhead of doing this is measured in the article itself.


As far as I can see, only the overhead of throwing exceptions is measured. Not error-checking if's (which might be quite hard to measure).


That "if" that guards the throw is the same "if" as would guard the error return. It is the second "if", in the caller, checking the returned result, that up to doubles your branch-prediction cache footprint. On, yes, the hot path.
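
To make the count concrete, a minimal sketch (made-up functions, not the article's benchmark):

    // With exceptions there is one conditional, at the throw site; with error
    // returns there are two: one at the "throw" site and one at every call site.
    #include <stdexcept>

    int compute_throwing(int x) {
        if (x < 0) throw std::invalid_argument("x");  // if #1: guards the throw
        return x * 2;
    }

    int compute_returning(int x, int *out) {
        if (x < 0) return -1;                         // if #1: guards the error return
        *out = x * 2;
        return 0;
    }

    int caller(int x) {
        // Exception style: no check here; the unwinder handles the rare case.
        int a = compute_throwing(x);

        // Return-code style: if #2, executed on the hot path of every call.
        int b;
        if (compute_returning(x, &b) < 0)
            return -1;
        return a + b;
    }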

Program-structure corruption is another problem. The costs add up.

Bad code is a tax on all of us.


Which is what I said. And I still don't see how most code that potentially returns errors would be on a hot path. Code that is really on a hot path should probably not call into generic code anyway, and/or that called code should be inlined.

I don't know, as a C programmer, somehow I rarely run into these situations where I have to check error return values. I think the reason is that I work hard to avoid wrappers around wrappers around wrappers. Mostly error checking is required when interacting with the OS - i.e. for I/O, which is not on critical paths.

A simple example would be memory allocation. A good C programmer allocates memory upfront. A bad C++ programmer might go, "eh, since it's so convenient I'll simply declare this std::vector locally, and if allocation fails it'll throw an exception, and RAII and exceptions will solve the error handling magically without me typing a single keystroke". Of course allocating every time is a lot slower than allocating only a single time upfront, no matter how fast or convenient each individual allocation and its error check might be. This example shows how the perception of speed can often be warped because we're measuring the wrong thing entirely.
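
To illustrate the contrast (a sketch; process is a made-up worker function):

    // Allocate once up front and reuse, versus constructing a fresh
    // std::vector (one heap allocation) on every call.
    #include <vector>

    void process(int *scratch, int n);     // hypothetical worker, fills scratch

    // Allocates on every call; convenient, and bad_alloc propagates "for free",
    // but the allocation cost is paid each time.
    void per_call(int n) {
        std::vector<int> scratch(n);       // heap allocation here, every call
        process(scratch.data(), n);
    }

    // Allocates once; the hot loop never touches the allocator at all.
    void upfront(int n, int iterations) {
        std::vector<int> scratch(n);       // single allocation, reused below
        for (int i = 0; i < iterations; ++i)
            process(scratch.data(), n);
    }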

What is bad code is often not clear cut, and neither is how to improve it.


Inlined code still has all the "if" checks of the original code, just without the actual "jsr" and "ret" instructions in between.

Comparing a good C coder to a bad C++ coder is meaningless: By definition, a bad coder makes bad choices, whatever the language.

Bad code is slow in a place where speed matters. It is improved by making it fast. Language choice does not affect this.

What language choice does affect is whether you can afford to spend the attention needed to make the code fast. C++ provides tools to offload busywork, freeing your attention to apply to what matters. It remains the programmer's job to choose that.


I'm not sure what you had in mind, but this little example is compiled using a single test: https://godbolt.org/z/bcqPWEn4a
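
I can't paste the whole thing here, but the shape is roughly this (a hypothetical reduction, not the exact code behind the link):

    // Two checked helpers composed together; after inlining, optimizers can
    // often prove the caller-side check redundant and merge everything into
    // a single comparison.
    int step1(int x) {
        if (x < 0) return -1;      // check #1
        return x + 1;
    }

    int step2(int x) {
        int r = step1(x);
        if (r < 0) return -1;      // check #2, provably redundant after inlining
        return r * 2;
    }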

I still can't imagine a situation where speed matters and where exceptional situations are likely and also hard to handle in a performant way with explicit error checking. Do you have any examples?


It suffices that exceptional situations are possible and must be checked for.

It is true that in C++, as in Rust, it is much more common than in C to have very short functions calling other very short functions which, all composed together, do a job that, absent abstraction, might have been coded as a single larger function tailored to the specific use case. The compiler squeezes out most of the calls and generates object code as if for that single, longer function. That means there are many more places an error return would need to bubble up through, and handling that the naive way, without exceptions, would cripple performance.

It might be that some languages that have a native "result" type, and automatically generate checking code at the call site, can squeeze out most of the intermediate-level checks when composing inline functions together, e.g. if actual errors mostly occur at leaf nodes of the call tree. That could mitigate the heavy overhead cost of the method. But I don't know if any compilers do such an optimization, or if they do, how well it really works. Coding an optimizer is hard.
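
In C++ terms the pattern would look something like this (a sketch using C++23's std::expected; leaf/middle/outer are made-up names) - every composed call site grows its own generated check:

    // A native result type: each call site checks and forwards the error.
    // Whether the optimizer can squeeze these intermediate checks out of
    // composed inline functions is exactly the open question above.
    #include <expected>
    #include <string>

    std::expected<int, std::string> leaf();   // assume it fails only rarely

    std::expected<int, std::string> middle() {
        auto r = leaf();
        if (!r) return std::unexpected(r.error());  // per-call-site check
        return *r + 1;
    }

    std::expected<int, std::string> outer() {
        auto r = middle();
        if (!r) return std::unexpected(r.error());  // and another one
        return *r * 2;
    }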


> Therefore, there are many more places where an error return report would need to bubble up through, and that would cripple performance to handle the naive way, without exceptions.

Do you have any evidence for these claims? If a called function is quite long by itself, then there is hardly any added overhead when the client checks the returned code, even if that check is "redundant".

For example, if I make a syscall and check the return value, there is in a way a duplicated checking effort with the kernel code - but compared to the cost of calling into the kernel, that overhead is a negligible price for the benefit of modularization (kernel vs. userspace application). This overhead will be very close to unmeasurable, and does not justify introducing exceptions, which are an additional mechanism for returning values that brings syntactic and binary incompatibilities (i.e. non-orthogonal functionality).
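
Concretely (a sketch; read_some is a made-up wrapper):

    // The check after read() is a single compare against a result already in
    // a register, while the syscall itself costs on the order of hundreds to
    // thousands of cycles - the "redundant" userspace check is noise.
    #include <unistd.h>
    #include <errno.h>

    long read_some(int fd, char *buf, size_t len) {
        ssize_t n = read(fd, buf, len);   // kernel has already validated everything
        if (n < 0)                        // duplicated check: roughly one cycle
            return -errno;
        return n;
    }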

Oh, by the way - wouldn't exceptions and stack unwinding have to do just as much work per stack frame? It can't really just skip over all frames in a single instruction.

I think the best solution to cruft is generally not to have it in the first place - instead of introducing a mechanism to skip over layers upon layers of "abstraction", it's better to just not have these layers. There should not be a long call chain that unwinds multiple stack frames in a row. The best code just does what needs to be done, in a straightforward way, without any library cruft. Nothing to "optimize" away that way.

Give any concrete example where exceptions are useful in order to work around cruft, and we can see if there is a better way to write it that does not require exceptions.

Frankly it seems hilarious to me that we're having a discussion about optimizing a few if's because they allegedly add too much overhead for the work of skipping over layers of cruft.

In the past years I have seen the need to use longjmp() (which is, in a way, a mechanism that gives C exceptions) exactly once, when I was coding an interpreter that got embedded into a longer-running application. I didn't know how else to skip through the recursive layers of parsing calls in the recursive descent parser. But I still feel there is a better way to write the parser (like pushing the work to parse onto a data structure instead of representing it as nested stack frames). That could also benefit error reporting, for example.
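
The shape was roughly this (a heavily reduced sketch, not the actual interpreter code; note that in C++ this is only safe when no skipped frame has a nontrivial destructor):

    // Escaping arbitrarily deep recursive-descent recursion in one jump.
    #include <setjmp.h>

    static jmp_buf parse_error;

    static void expect(int ok) {
        if (!ok)
            longjmp(parse_error, 1);   // unwind every parse frame at once
    }

    static void parse_expr(const char **p);   // forward declaration

    static void parse_term(const char **p) {
        expect(**p != '\0');           // may fail deep inside the recursion
        if (**p == '(') {              // recurse for a parenthesized subexpression
            ++*p;
            parse_expr(p);
            expect(**p == ')');
            ++*p;
        } else {
            ++*p;                      // consume one atom
        }
    }

    static void parse_expr(const char **p) {
        parse_term(p);
        while (**p == '+') { ++*p; parse_term(p); }
    }

    int parse(const char *src) {
        if (setjmp(parse_error))       // nonzero: a longjmp landed here
            return -1;
        parse_expr(&src);
        return 0;
    }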


What you are calling "layers of cruft", other people call abstraction. That abstraction enables people to have code exactly tailored for a specific use without writing it over and over, slightly changed each time, for each use.

In C, of course, you have no choice; you write the whole function over and over. C programs are, e.g., filled with custom one-off hash tables, because you can't write a performant, general hash table library in C. People using C++ and Rust do not code their own hash tables, because they cannot match the performance of well-tested and tuned library components.

As a consequence, modern C++ and Rust coders get better-than-C performance for a lot less work, and ship with many fewer bugs, by relying on mature, well-tuned libraries. That is worth a lot of what you call cruft.

That said, it is not uncommon to find C++ and Rust coders aping what they see in general-purpose libraries by adding superfluous cruft in their own programs, that provides no such benefit. But that is a complaint for another day.


Somehow the discussion shifted away from the original topic...


> wouldn't exceptions and stack unwinding have to do just as much work per stack frame? It can't really just skip over all frames in a single instruction.

Taking that back. I forgot for a second that we're focusing on the cases where the exception is not thrown.




