I have always hated this crap; the fact that I'm not 100% sure what this evaluates to suggests that maybe the ++ operator (prefix or postfix) is something that should be avoided?
I don't do a lot of C anymore, but even when I did, I always put increments on separate lines, and I would use a += 1, or just a = a + 1. I never noticed a performance degradation, and I don't think my code was harder to read. In fact I think it was easier, since the semantics were less ambiguous.
I do remember that this particular code snippet (with a = 5, even) used to be popular as an interview question. I found such questions quite annoying because most interviewers who posed them seemed to believe that whatever output they saw with their compiler version was the correct answer. If you tried explaining that the code has undefined behaviour, the reactions generally ranged from mild disagreement to serious confusion. Most of them neither cared about nor understood 'undefined behaviour' or 'sequence points'.
I remember one particular interviewer who, after I explained that this was undefined behaviour and why, listened patiently and then explained to me that the correct answer was 17, because the two post-increments leave the variable as 6, so adding 6 twice to the original 5 gives 17.
I am very glad these types of interview questions have become less prevalent these days. They have, right? Right?
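The flavor of snippet in question, for anyone who hasn't met it (a minimal sketch with a = 5; not necessarily the exact interview version):

    int a = 5;
    int r = a++ + a++;  /* two unsequenced modifications of a: undefined behavior */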
In some sense, and without the interviewer knowing, that is actually a great scenario for an interview.
If you can convince someone in a position of authority that they’re wrong about something technical without upsetting them then you’re probably a good culture fit and someone who can raise the average effectiveness of your team.
I know I did recommend someone after the interview because I looked it up and they were right. Great person to work with. Though I fully understand why most would hesitate.
I just refuse to do interviews like that any more.
What's the reason that C didn't define the order of this?
The horrible undefined behavior of signed integer overflow can at least be explained by the fact that multiple CPU architectures that handled it differently existed (though the fact that C 'attracts' everything towards its ill-defined signed integers even when you're using unsigned ones, by returning a signed int when left-shifting a uint16_t by a uint16_t, for example, is not as forgivable imho).
But this here is something that could be completely defined at the language level; there's nothing CPU-dependent about it. They could simply have stated in the language specification that, e.g., the order of evaluation is left to right (and/or other rules, like post-increment taking effect after the full statement has finished; my point is not whether the rule I sketch here is complete, but that the language designers could have made the behavior completely defined).
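The promotion trap from the parenthetical above, as a sketch (assuming 32-bit int):

    #include <stdint.h>

    void demo(void) {
        uint16_t x = 0xFFFF;
        /* Integer promotion converts x to (signed) int before the shift, so
           x << 16 is a signed-int shift; with 32-bit int the result doesn't
           fit, and the behavior is undefined. */
        int r = x << 16;
        (void)r;
    }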
The short answer is because C was designed to give leeway to really dumb compilers on really diverse hardware.
This isn't quite the same case, but it's a good illustration of the effect: on gcc, if you have an expression f(a(), b()), the order in which a and b get evaluated is [1] dependent on the architecture and calling convention of f. If the calling convention pushes arguments from right to left, then b is evaluated first; otherwise, a is evaluated first. If you evaluate the arguments in the convention's order, then as each inner call returns you can push its result onto the stack immediately; in the wrong order, the result becomes a live value that has to be kept across another function call, which costs a couple more instructions. I don't have a specific example for increment/decrement instructions, but considering extremely register-poor machines and hardware support for increment/decrement addressing modes, it's not hard to imagine similar cases where forcing the compiler to insert the increment at the 'wrong' point is similarly expensive.
Now, with modern compilers using cross-architecture IRs as their main avenue of optimization, the benefit from this kind of flexibility is very limited, especially since the penalties on modern architectures for the 'wrong' order of things can be reduced to nothing with a bit more cleverness. But compiler developers tend to be loath to change observable behavior, and the standards committee unwilling to mandate that compiler developers have to modify their code, so the fact that some compilers have chosen to implement it in different manners means it's going to remain that way essentially forever. If you were making a new language from scratch, you could easily mandate a particular order of evaluation, and I imagine that every new language in the past several decades has in fact done that.
[1] Or at least was 20 years ago, when I was asked to look into this. GCC may have changed since then.
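A quick way to watch a given compiler's choice (a sketch; f, a and b are the placeholder names from the comment above):

    #include <stdio.h>

    static int a(void) { puts("a evaluated"); return 1; }
    static int b(void) { puts("b evaluated"); return 2; }
    static int f(int x, int y) { return x + y; }

    int main(void) {
        /* C leaves the order of the two trace lines unspecified */
        printf("%d\n", f(a(), b()));
        return 0;
    }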
With modern register allocators and larger register sets, the code-generation impact of following source evaluation order is of course lower than it used to be. Some CPUs can even involve stack slots in register renaming: https://www.agner.org/forum/viewtopic.php?t=41
On the other hand, even modern Scheme leaves evaluation order undefined. It's not just a C issue.
Anyway, yes, this one example has an obvious order in which it should be applied. But still, something like it shouldn't be allowed.
That would be nice, but don't forget the more general case of pointers and aliasing: the compiler cannot statically catch every possible instance of a statement where a variable is updated more than once.
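A sketch of the aliasing point: whether this function invokes UB depends on a runtime condition that no compiler can rule out statically:

    int mul_and_bump(int *p, int *q) {
        /* Fine when p and q point to different objects; if p == q, these
           are two unsequenced modifications of one object, i.e. UB. */
        return (*p)++ * (*q)++;
    }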
I think the history of this is that these operations were common with assembly programmers, so when C came along, these were included in the language to allow these developers to feel they weren't leaving lots of performance behind.
Look at the addressing modes for the PDP-11 in https://en.wikipedia.org/wiki/PDP-11_architecture and you'll see you can write (R0)+ to read the contents of the location pointed to by R0, and then increment R0 afterwards (so a post increment).
Back in the day, compilers were simple and optimisations weren't that common, so folding two statements into one and working out that there were no dependencies would have been tough with single pass compilers.
You could argue that without such instructions, C wouldn't have been embraced quite so enthusiastically for systems programming, and the world would have looked rather different.
C just wouldn't be C without things like a[i++]
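For a concrete taste of that lineage, the classic K&R string-copy loop is essentially the PDP-11 auto-increment mode spelled in C (a sketch):

    /* Copy the string at src into dst; both pointers post-increment,
       mirroring the PDP-11's (R0)+ addressing mode. */
    void copy(char *dst, const char *src) {
        while ((*dst++ = *src++) != '\0')
            ;
    }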
Python recently went the other way and added an assignment expression. I actually wish more languages would go further and add statement expressions instead of having to imitate them with IIFEs.
Those things are for pointer golf and writing your entire logic inside the if statement.
Both are favorite idioms of C developers. And they are ok if done correctly, clearer than the alternative. They are also unnecessary in modern languages, so those shouldn't copy it (yeah, Python specifically).
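For the record, the idioms being described look roughly like this (a sketch):

    #include <stdio.h>

    void echo(void) {
        int c;
        /* 'entire logic inside the condition': assign, test and loop at once */
        while ((c = getchar()) != EOF)
            putchar(c);
    }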
In any language where the practice of iteration isn't achieved via C-style for-loops, having an operator devoted to increment just doesn't make sense (let alone four operators, for each of pre/post-increment/decrement). This is one of those backwards things that just needs to be chucked in the bin for any language developed post-2010.
When used well it makes for compact readable code. I don't see what it has to do with for loops or operators specifically. For example you can do the same in scheme while iterating by means of tail recursion.
The only other reasonable option is to make such garbage a compile-time error. There is no reasonable definition of what code like that should do, and if you write it in the real world you need to find a better job fit. I'd normally say McDonald's is hiring, but they don't want people like that either.
It's valuable for compilers to be able to choose the instruction scheduling order. Standards authors try not to unnecessarily bind implementors. If post increment happened after the full statement is finished, then the original value has to be maintained until the next sequence point. Maybe the compiler will be smart enough to elide that, maybe not, but it's a lot more difficult to fix those kinds of edge cases than to say sequencing is undefined.
But this is not valuable if it produces different numerical results, and I think that will always happen if ++ is executed at different times; there's no point in a compiler optimizing pointless code in a way that can silently give different results elsewhere.
Your compiler does many optimizations that break numerical reproducibility, especially in floats. I reviewed a PR the other day that wrote
    X = A*B + (C*D) + E;
And when I checked 3 different compilers, each of them chose a different way to use FMAs.
Even with integer math, you can get different numerical results via UB (e.g. expressions with signed overflow one way and not another).
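For the floating-point case, a sketch of why the expression above is compiler-dependent (the names are placeholders; whether contraction happens depends on flags such as -ffp-contract):

    double f(double A, double B, double C, double D, double E) {
        /* A compiler may contract the multiplies and adds into FMAs in
           several ways, each rounding at different points:
             fma(A, B, fma(C, D, E))   -- two fused roundings
             fma(C, D, A*B + E)        -- mixed fused and separate
             A*B + (C*D) + E           -- round after every operation   */
        return A*B + (C*D) + E;
    }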
It's a language feature that was in K&R, and the rules around sequencing were introduced in C89. There were good reasons to believe that pinning down an order would pessimize code in the following decades. Dennis Ritchie himself pointed out that Thompson probably added the operators because compilers of the time were able to generate better code that way.
The C standard doesn't define things where two or more historical compilers disagreed and there wasn't an obviously correct way. This is defined behavior (left to right, assignment last) in Java, which is a different language.
Probably because when C was standardised there were already multiple implementations, and this was an area where implementations differed but it wasn't viewed as important enough to bring them in line with one approach.
I had to fight through school and university in India with my teachers who believed these were legit questions to ask in written exams. Can't 100% blame them since almost all standard-issue textbooks had them and claimed they'd give predictable output. I thought the same until I noticed the weirdness when running them across different compilers and after I read about UB, sequence points and similar quirks in books that are not total garbage.
Luckily, I ended up with smug smiles in all those cases after showing them the output from different compilers.
With undefined behavior, a conforming compiler can do anything it wants at all, including generating a program that segfaults or something else.
But what often happens in practice is that "Bill's Fly-By-Night-C-Compiler-originally-written-in-the-mid-nineties" implemented it in some specific way (probably by accident) and maintains it as a (probably informal) extension. And almost certainly has users who depend on it, and can't migrate for a myriad of reasons. Anyway, it's hard to sell an upgrade when users can't just drop the new compiler in and go.
At the language level it is undefined behavior, and any code that relies on it is buggy and non-portable.
Defining it would make those compilers non-conforming, instead of code merely being dependent on a compiler's choice to define something that the standard leaves undefined.
Probably the best way forward is to make this an error, instead of defining it in some way. That way you don't get silent changes in behavior.
Undefined behavior allows that to happen at the language level, but good implementations at least try not to break user code without warning.
Modern compilers with things like UBSan make changing the result of undefined behavior much less of an issue. But most UB is also "no diagnostic required", so users don't even know they have it in their code without the modern tools.
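For this particular class of bug it's actually the ordinary compile-time warnings that catch it; roughly (a sketch, and the exact wording varies by compiler and version):

    $ gcc -Wall -c snippet.c
    warning: operation on 'a' may be undefined [-Wsequence-point]

    $ clang -Wall -c snippet.c
    warning: multiple unsequenced modifications to 'a' [-Wunsequenced]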
> The interesting thing here is the Undefined Behavior (UB), well... actually two UBs, thanks to which there are three possible correct answers: 11, 12 and 13.
No, if you invoke undefined behavior any result at all is possible.
I feel we need another category - unspecified behavior. I think everyone would agree the compiler should put out ONE of those answers and that nasal demons would be out of spec.
The problem is that it’s not specified which should be picked, but all pick something.
I agree what you say seems reasonable at a glance. But (IIUC) the issue is that for optimization we want the compiler to assume that UB doesn't happen in order to constrain the possible code paths. So if it goes some distance down a possible execution branch and discovers UB it can trim the subtree. At that point "anything can happen" becomes an (approximate) reality.
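A concrete illustration of that "assume UB never happens" reasoning (a sketch; typical behavior of gcc and clang at -O2):

    /* Signed overflow is UB, so the compiler may assume it never occurs
       and fold this entire function to 'return 0;'. */
    int will_overflow(int x) {
        return x + 1 < x;   /* only true via signed overflow, i.e. UB */
    }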
The obvious counterpoint in this particular instance is that there's no good reason not to make such an awful expression a compile time error.
I also personally think that evaluation order should be strictly defined. I'm unclear whether the current arrangement ever offers noticeable benefits, but it is abundantly clear that it makes the language more difficult to reason about.
The C and C++ standards include "Implementation defined behavior", which means that a conforming implementation can do whatever it wants, as long as it specifically documents and sticks to that behavior.
This doesn't really help portability all that much.
The only point you can take away from these discussions, especially in an interview, is that it doesn't matter what the answer happens to be on $CC and $ARCH; you wouldn't want anyone to write stuff like that in the first place.
Failing to recognize the dangers would be an instant fail; knowing that something reeks of undefined behaviour, or even potential UB, is enough: you just write out explicitly what you want and skip the mind games.
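For the expression at the end of the thread, one explicit reading (a sketch, assuming you wanted strict left-to-right evaluation with each increment taking effect immediately; the whole point is that the original doesn't pin this down):

    int a = 5;
    ++a;                    /* a == 6 */
    int left = a;           /* left operand of the multiply */
    int right = a;          /* the value a++ would yield */
    a++;                    /* a == 7 */
    --a;                    /* a == 6 */
    a = left * right + a;   /* 36 + 6 == 42, with no mind games */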
> The interesting thing here is the Undefined Behavior (UB), well... actually two UBs, thanks to which there are three possible correct answers: 11, 12 and 13.
There’s UB, so any answer is possible, isn’t it?
I am, thankfully, out of this craziness now but it was fun solving a ton of such puzzles from Yashavant Kanetkar books while preparing for campus hiring interviews back in 2000. "Test Your C Skills" in particular. Fun times.
https://www.scribd.com/document/235004757/Test-Your-C-Skills...
Perhaps I'm just naive and/or have forgotten too much C, not that I knew that much, but I'm a bit perplexed as to why this is UB.
It seems like something that should trigger a "we should specify this" reaction when adding these operators, and there is at least one reasonable way to define it which is fairly trivial and easily implementable.
I love such puzzles! I used to use a lot of ternary operators in C++, but one day a friend of mine told me that I shouldn't nest ternary operators too much because the code becomes too complicated to read - he understood the code perfectly, he was just worried about younger programmers. Since then I have used longer versions of code instead of smart shortcuts, to improve readability.
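For example (a sketch), a nested ternary compresses three branches into one line at the cost of a double-take:

    int sign = x > 0 ? 1 : x < 0 ? -1 : 0;   /* nested ternary */

    /* the longhand version reads top to bottom */
    int sign2;
    if (x > 0)
        sign2 = 1;
    else if (x < 0)
        sign2 = -1;
    else
        sign2 = 0;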
> If you would like to test your compiler (posting back the results in the comments is really appreciated, especially from strange/uncommon compilers and other languages which support pre- / post- increment ....
Uh, 85% of them show the wrong result so 85% of them clearly do not support pre and post increment.
It's the standard technical C++ blog post everybody seems to write. The snippet in question:

    int a = 5;
    a = (++a * a++) + --a;
    /* a = ? */