Why is 0 false?

  • This question may sound dumb, but why does 0 evaluate to false and any other [integer] value to true in most programming languages?

    String comparison

    Since the question seems a little too simple, I will explain myself a little more: first of all, it may seem evident to any programmer, but why wouldn't there be a programming language - there may actually be one, but none that I have used - where 0 evaluates to true and all the other [integer] values to false? That remark may seem random, but I have a few examples where it may have been a good idea. First of all, let's take the example of three-way string comparison; I will take C's strcmp as an example: any programmer trying C as their first language may be tempted to write the following code:

    if (strcmp(str1, str2)) {
        // Do something...
    }
    

    Since strcmp returns 0 - which evaluates to false - when the strings are equal, what the beginning programmer tried to do fails miserably, and they generally do not understand why at first. Had 0 evaluated to true instead, this function could have been used in its simplest expression - the one above - when comparing for equality, and the proper checks for -1 and 1 would have been done only when needed. We would have treated the return type as bool (in our minds, I mean) most of the time.

    Moreover, let's introduce a new type, sign, that takes only the values -1, 0 and 1. That can be pretty handy. Imagine there were a spaceship operator in C++ and we wanted it for std::string (well, there is already the compare function, but the spaceship operator is more fun). The declaration would then be the following:

    sign operator<=>(const std::string& lhs, const std::string& rhs);
    

    Had 0 evaluated to true, the spaceship operator wouldn't even need to exist, and we could have declared operator== this way:

    sign operator==(const std::string& lhs, const std::string& rhs);
    

    This operator== would have handled three-way comparison at once. It could still be used in the following check, while also letting us find out which string is lexicographically greater when needed:

    if (str1 == str2) {
        // Do something...
    }
    

    Old error handling

    We now have exceptions, so this part only applies to older languages where no such thing exists (C for example). If we look at C's standard library (and POSIX's too), we can see for sure that many, many functions return 0 when successful and a non-zero integer otherwise. I have sadly seen some people do this kind of thing:

    #define TRUE 0
    // ...
    if (some_function() == TRUE)
    {
        // Here, TRUE would mean success...
        // Do something
    }
    

    If we think about how we think in programming, we often have the following reasoning pattern:

    Do something
    Did it work?
    Yes ->
        That's ok, one case to handle
    No ->
        Why? Many cases to handle
    

    If we think about it again, it would have made sense to map the only neutral value, 0, to yes (and that's how C's functions work), while all the other values can be there to cover the many cases of the no. However, in all the programming languages I know (except maybe some experimental esoteric languages), that yes evaluates to false in an if condition, while all the no cases evaluate to true. There are many situations where "it works" represents one case while "it does not work" represents many probable causes. If we think about it that way, having 0 evaluate to true and the rest to false would have made much more sense.

    Conclusion

    My conclusion is essentially my original question: why did we design languages where 0 is false and the other values are true, taking into account my few examples above and maybe some more I did not think of?

    Follow-up: It's nice to see there are many answers with many ideas and as many possible reasons for it to be like that. I love how passionate you seem to be about it. I originally asked this question out of boredom, but since you seem so passionate, I decided to go a little further and ask about the rationale behind the Boolean choice for 0 and 1 on Math.SE :)

    `strcmp()` is not a good example of true or false, as it returns 3 different values. And you will be surprised when you start using a shell, where 0 means true and anything else means false.

    @ott-- Well, just asking about boolean values would be meaningless; my question is exactly about why things are converted that way to boolean values :) It's nice that you cited Bash, I had almost forgotten about that one.

    @ott--: In Unix shells, 0 means *success* and non-zero means *failure* -- not quite the same thing as "true" and "false".

    @KeithThompson Depends whether the truth tables work with *success* and *failure* in Bash, right?

    Just to mention that Lua does not imply that "0" corresponds to false. http://www.luafaq.org/gotchas.html#T2

    @MasonWheeler I could probably list the (reasonably popular) languages where `0 != FALSE` on the fingers of 1 hand -- Haskell, Ada and Java off the bat, maybe a couple of others.

    What are those languages you are talking about? In my personal experience, in most languages, `0` in a Boolean context is either **truthy** or a `TypeError`. The languages in which `0` is considered **falsy** seem to be a tiny minority, and the ones in which `0` actually *is* `false` can be counted on the fingers on one hand.

    @KeithThompson: In Bash (and other shells), "success" and "failure" really are the same as "true" and "false". Consider, for example, the statement `if true ; then ... ; fi`, where `true` is a command that returns zero and this tells `if` to run `...`.

    @TC1: You can add Common Lisp to the list (no pun intended): `false` is represented by `nil` (the empty list). When considered as boolean values, both `0` and `1` are equivalent to `T` (`true`), as are all other numbers.

    There are no booleans in hardware whatsoever, only binary numbers, and in most historical ISAs a non-zero number is considered as "true" in all the conditional branching instructions (unless they're using flags instead). So, the low level languages are by all means obliged to follow the underlying hardware properties.

    @Giorgio Not just Common, afaik, LISP in general, at least I'm sure of Scheme, Clojure and Elisp. They're all fairly similar on those kinds of things. Still, that's 4 then.

    @TC1: Right, in Scheme numbers are also equivalent to `true` = `#t`. Additionally, you have a special symbol for `false`, namely `#f`, which is not the same as the empty list `'()`.

    @MasonWheeler Having a boolean type doesn't imply anything. For example python *does* have a `bool` type but comparisons/if conditions etc. can have any return value.

    From a logic perspective, `false` (the strongest of conditions) is stronger than `true` (the weakest of conditions). Representing a (binary) zero could be expressed as *there are no ones*, which is stronger, and hence harder to satisfy, than *there could be ones*.

    I would think (and am likely wrong :) ) that 0 being false and 1 being true (not the case in C++, where any non-zero int is true) comes from the days of computers being programmed electrically. That is to say, an electrical signal either gets sent to a pin or it doesn't, kind of like a light switch where on is I and off is O. So 0 is no current, 1 is current flowing.

    You shouldn't evaluate integer values as booleans at all. It's like casting a type with many possible values to a type with only two possible values.

    `if (some_function() == TRUE)` I always cringe when I read code like this. That's just plain awkward.

    Is it? In C, 0 is false, all other values are true. However, in sh, the Unix shell, 0 is true and all other values are false. (C and sh were written around the same time, as part of Unix. Unix is mostly consistent, but not here.)

    Exit codes != `true/false`... _you_ may have designed your exit codes to react that way, but some of us use exit codes to determine all manner of things. Exit `0` _is_ the _standard_ return for a _successful exit_ of a program, but that doesn't somehow mean `0` in a shell somehow universally becomes synonymous with `true`.

    I find it odd that PayPal RESULT values use 1 (or any non-zero value) for "not approved" and 0 for "approved", which amounts to non-zero being false and 0 being true in this case (whether or not a transaction completed without error): https://www.paypalobjects.com/en_US/vhelp/paypalmanager_help/result_values_for_transaction_declines_or_errors.htm

    I lament my opinion that non-primes are false and primes are true has never really been taken seriously.

  • Jon Purdy

    Correct answer

    7 years ago

    0 is false because they’re both zero elements in common semirings. Even though they are distinct data types, it makes intuitive sense to convert between them because they belong to isomorphic algebraic structures.

    • 0 is the identity for addition and the zero for multiplication. This is true for integers and rationals, but not for IEEE-754 floating-point numbers: 0.0 * NaN = NaN and 0.0 * Infinity = NaN.

    • false is the identity for Boolean xor (⊻) and zero for Boolean and (∧). If Booleans are represented as {0, 1}—the set of integers modulo 2—you can think of ⊻ as addition without carry and ∧ as multiplication.

    • "" and [] are identities for concatenation, but there are several operations for which they make sense as zeros. Repetition is one, but repetition and concatenation do not distribute, so these operations don't form a semiring.

    Such implicit conversions are helpful in small programs, but in the large can make programs more difficult to reason about. Just one of the many tradeoffs in language design.

    Nice that you mentioned lists. (BTW, `nil` is both the empty list `[]` and the `false` value in Common Lisp; is there a tendency to merge identities from different data types?) You still have to explain why it is natural to consider false as an additive identity and true as a multiplicative identity, and not the other way around. Isn't it possible to consider `true` as the identity for `AND` and the zero for `OR`?

    +1 for referring to similar identities. Finally an answer which doesn't just boil down to "convention, deal with it".

    +1 for giving details of a concrete and very old maths in which this has been followed and long made sense

    Boolean OR has no inverse. How does that form a ring?

    I'm unclear whether the fact that they are isomorphic algebras is being proved or used as an assumption here.

    This answer doesn't make sense. `true` is also the identity and the zero of semirings (Boolean and/or). There is no reason, apart from convention, to consider `false` closer to 0 than `true`.

    @TonioElGringo: The difference between true and false is the difference between XOR and XNOR. One can form isomorphic rings using AND/XOR, where true is the multiplicative identity and false the additive one, or with OR and XNOR, where false is the multiplicative identity and true is the additive one, but XNOR is not usually regarded as a common fundamental operation the way XOR is.

    @supercat: On the other hand, one could argue that the main reason why XNOR is not regarded as fundamental is because it corresponds to the notion of coincidence, which can already be rendered as ‘==’. And equality /is/ undoubtedly a fundamental operator…

License under CC-BY-SA with attribution


Content dated before 6/26/2020 9:53 AM