Why are shortcuts like x += y considered good practice?

  • I have no idea what these are actually called, but I see them all the time. The Python implementation is something like:

    x += 5 as a shorthand notation for x = x + 5.
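
    The same shorthand exists for most binary operators, by the way, which is part of why it shows up everywhere. In C, for instance:

    int x = 8;
    x += 5;   /* x = x + 5   -> 13 */
    x -= 3;   /* x = x - 3   -> 10 */
    x *= 2;   /* x = x * 2   -> 20 */
    x /= 4;   /* x = x / 4   -> 5  */
    x <<= 2;  /* x = x << 2  -> 20 */
    x |= 1;   /* x = x | 1   -> 21 */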

    But why is this considered good practice? I've run across it in nearly every book or programming tutorial I've read for Python, C, R so on and so forth. I get that it's convenient, saving three keystrokes including spaces. But they always seem to trip me up when I'm reading code, and at least to my mind, make it less readable, not more.

    Am I missing some clear and obvious reason these are used all over the place?

    @EricLippert: Does C# handle this in the same way as the top answer described? Is it actually more efficient CLR-wise to say `x += 5` than `x = x + 5`? Or is it truly just syntactic sugar as you suggest?

    @blesh: The idea that small details of how one expresses an addition in source code have an impact on the *efficiency* of the resulting executable code might have been true in 1970; it certainly is not now. Optimizing compilers are good, and you have bigger worries than a nanosecond here or there. The idea that the += operator was developed "twenty years ago" is obviously false; the late Dennis Ritchie developed C from 1969 through 1973 at Bell Labs.

    Most functional programmers will consider this bad practice.

  • It's not shorthand.

    The += operator appeared in the C language in the 1970s and, in line with the C idea of a "smart assembler", corresponds to a clearly different machine instruction and addressing mode:

    Things like "i=i+1", "i+=1" and "++i", although they produce the same effect at an abstract level, correspond at a low level to different ways in which the processor works.

    In particular, consider those three expressions, and assume that the variable i resides at the memory address stored in a CPU register (let's name it D; think of it as a "pointer to int") and that the ALU of the processor takes a parameter and returns a result in an "accumulator" (let's call it A; think of it as an int).

    With these constraints (very common in all microprocessors from that period), the translation will most likely be:

    ;i = i+1;
    MOV A, (D)  //Load into A the content of the memory whose address is in D
    ADD A, 1    //Add an inlined constant
    MOV (D), A  //Move the result back to i (this is the '=' of the expression)

    ;i += 1;
    ADD (D), 1  //Add an inlined constant to a value stored at a memory address

    ;++i;
    INC (D)     //Just "tick" a memory-located counter
    

    The first way of doing it is suboptimal, but it is more general when operating with variables instead of constants (ADD A, B or ADD A, (D+x)) or when translating more complex expressions (these all boil down to pushing the low-priority operations onto a stack, calling the high-priority ones, popping, and repeating until all the arguments have been eliminated).

    The second is more typical of a "state machine": we are no longer "evaluating an expression" but "operating on a value": we still use the ALU, but we avoid moving values around, since the result is allowed to replace the parameter. This kind of instruction cannot be used where a more complicated expression is required: i = 3*i + i - 2 cannot be computed in place, since i is required more than once.

    The third, even simpler, does not even involve the idea of "addition": it uses more "primitive" (in the computational sense) circuitry for a counter. The instruction is shorter, loads faster and executes immediately, since the combinatorial network required to retrofit a register into a counter is smaller, and hence faster, than that of a full adder.

    With contemporary compilers (still speaking of C) and compiler optimization enabled, the correspondence can be swapped based on convenience, but there remains a conceptual difference in the semantics.

    x += 5 means:

    • Find the place identified by x
    • Add 5 to it

    But x = x + 5 means:

    • Evaluate x+5
      • Find the place identified by x
      • Copy x into an accumulator
      • Add 5 to the accumulator
    • Store the result in x
      • Find the place identified by x
      • Copy the accumulator to it

    Of course, optimization can intervene:

    • if "finding x" has no side effects, the two "findings" can be done once (and x becomes an address stored in a pointer register)
    • the two copies can be elided if the ADD can be applied to &x instead of to the accumulator

    thus making the optimized code coincide with the x += 5 one.
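
    One quick way to watch this collapse happen (a sketch of my own, with made-up function names; check it with gcc -O2 -S or on Compiler Explorer):

    void bump_compound(int *x)    { *x += 5; }     /* "find x, add 5 to it"        */
    void bump_spelled_out(int *x) { *x = *x + 5; } /* "evaluate *x + 5, store it"  */

    With optimization enabled, a modern gcc or clang typically emits the same single instruction for both on x86-64, something like addl $5, (%rdi).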

    But this can be done only if "finding x" has no side effects, otherwise

    *(x()) = *(x()) + 5;
    

    and

    *(x()) += 5;
    

    are semantically different, since the side effects of x() (assuming x() is a function that does weird things somewhere and returns an int*) will be produced twice or once, respectively.

    The equivalence between x = x + y and x += y is hence limited to the particular case where += and = are applied to a direct l-value.
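
    To make the side-effect case concrete, here is a minimal, compilable sketch (this x() is just a stand-in for any function that does something observable before returning an int*):

    #include <stdio.h>

    static int storage = 10;

    static int *x(void) {
        puts("x() called");                /* the observable side effect */
        return &storage;
    }

    int main(void) {
        *(x()) = *(x()) + 5;               /* "x() called" is printed twice */
        *(x()) += 5;                       /* "x() called" is printed once  */
        printf("storage = %d\n", storage); /* 20 */
        return 0;
    }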

    Moving on to Python: it inherited the syntax from C, but since there is no translation / optimization step before execution in an interpreted language, things are not necessarily so intimately related (there is one less parsing step). However, an interpreter can refer to different execution routines for the three types of expression, taking advantage of different machine code depending on how the expression is formed and on the evaluation context.
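
    As a sketch of that last point (a toy dispatch loop of my own devising, nothing like a real interpreter, but the same principle): the two spellings can be compiled to different opcodes, each with its own execution routine.

    #include <stdio.h>

    /* Two encodings of the same statement:
       "x = x + 5" as LOAD / ADD_CONST / STORE (expression evaluation),
       "x += 5"    as a single INPLACE_ADD (operate on the value).     */
    enum opcode { OP_LOAD, OP_ADD_CONST, OP_STORE, OP_INPLACE_ADD, OP_HALT };

    static int run(const enum opcode *pc, int x) {
        int acc = 0;
        for (; *pc != OP_HALT; pc++) {
            switch (*pc) {
            case OP_LOAD:        acc = x;  break; /* copy x into the accumulator */
            case OP_ADD_CONST:   acc += 5; break; /* add the inlined constant    */
            case OP_STORE:       x = acc;  break; /* copy the accumulator back   */
            case OP_INPLACE_ADD: x += 5;   break; /* one combined routine        */
            default:                       break;
            }
        }
        return x;
    }

    int main(void) {
        const enum opcode spelled_out[] = { OP_LOAD, OP_ADD_CONST, OP_STORE, OP_HALT };
        const enum opcode compound[]    = { OP_INPLACE_ADD, OP_HALT };
        printf("%d %d\n", run(spelled_out, 10), run(compound, 10)); /* prints "15 15" */
        return 0;
    }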


    For those who like more detail...

    Every CPU has an ALU (arithmetic-logic unit) that is, in its very essence, a combinatorial network whose inputs and outputs are "plugged" into the registers and/or memory, depending on the opcode of the instruction.

    Binary operations are typically implemented as "modify an accumulator register with an input taken from somewhere", where somewhere can be:

    • inside the instruction flow itself (typical for manifest constants: ADD A 5)
    • inside another register (typical for expression computation with temporaries: e.g. ADD A B)
    • inside memory, at an address given by a register (typical of data fetching, e.g. ADD A (H)); H, in this case, works like a dereferencing pointer

    With this pseudocode, x += 5 is

    ADD (X) 5
    

    while x = x+5 is

    MOVE A (X)
    ADD A 5
    MOVE (X) A
    

    That is, x+5 gives a temporary that is later assigned. x += 5 operates directly on x.

    The actual implementation depends on the real instruction set of the processor: if there is no ADD (.) c opcode, the first code has to become the second; there is no way around it.

    If there is such an opcode and optimization is enabled, the second expression, after eliminating the reverse moves and adjusting the register opcodes, becomes the first.

    +1 for the only answer explaining that it used to map to different (and more efficient) machine code back in the olden days.

    C is not an assembly language. Assembly language programs specify CPU instructions; C programs specify behavior.

    @KeithThompson That is true but one cannot deny that assembly had a huge influence over the design of the C language (and subsequently all C style languages)

    @MattDavey: Don't underestimate the influence that C has since had on the design of assembly. E.g. the slow introduction of multi-core CPUs can be linked to the absence of threading in ISO C, and the complexity of it in POSIX C (compared to, say, Erlang).

    Erm, "+=" doesn't map to "inc" (it maps to "add"), "++" maps to "inc".

    +1 I was going to put something about how the difference reflects the way the operation is done in assembly.

    I'm not sure this is true; I think Brendan's comment is onto something. Also, the answer is very specific to C / C++. Does this logic even hold up in something like Java / C#, which run in virtual machines?

    By "20 years ago", I think you mean "30 years ago". And BTW, COBOL had C beat by another 20 years with: ADD 5 TO X.

    @Andy The point is that the logic describes the thinking in the ancestral language (C) in which the syntax was invented; the descendant languages (Java, C#) have just inherited the syntax. It no longer matters whether the logic applies to those environments.

    +1 for discussion of side effects. It's a bit abstract, though; maybe it should be specified that `x()` returns an `int&`, or use an example of `std::map::operator[]` or the like.

    @fluffy I'd make it more clear that this is the historical reason, as the answer makes it sound like the only one. Since the question is tagged with R and Python, which appeared in the 90s and to which the answer won't apply (except that they borrowed from an older language).

    @Andy good point, I should pay closer attention to the tags. I'm not familiar enough with Python, but does it have a concept of property getters/setters? (Although I suppose in that case the two constructs would be equivalent.)

    Great in theory; wrong in facts. The x86 ASM INC only adds 1, so it doesn't affect the "add and assign" operator discussed here (this would be a great answer for "++" and "--" though).

    @joelFan: By "30 years ago" I think you mean "43 years ago".

    While "not inc, add" is correct, `+=` would still use a different addressing mode to `+`. In a naive translation, this could still avoid some memory accesses, replacing multiple instructions with just one. Proviso - I don't know the assembler that C was originally designed for, and am guessing based on general assembler principles.

    I really love the `*(x()) = *(x()) + 5; / *(x()) += 5;` comparison. It sums up the previous discussion just perfectly!

Licensed under CC-BY-SA with attribution


Content dated before 6/26/2020 9:53 AM