Where did the notion of "one return only" come from?

  • I often talk to programmers who say "Don't put multiple return statements in the same method." When I ask them to tell me the reasons why, all I get is "The coding standard says so." or "It's confusing." When they show me solutions with a single return statement, the code looks uglier to me. For example:

    if (condition)
       return 42;
    return 97;

    "This is ugly, you have to use a local variable!"

    int result;
    if (condition)
       result = 42;
    else
       result = 97;
    return result;

    How does this 50% code bloat make the program any easier to understand? Personally, I find it harder, because the state space has just increased by another variable that could easily have been prevented.

    Of course, normally I would just write:

    return (condition) ? 42 : 97;

    But many programmers eschew the conditional operator and prefer the long form.

    Where did this notion of "one return only" come from? Is there a historical reason why this convention came about?

    This is somewhat connected to Guard Clause refactoring. http://stackoverflow.com/a/8493256/679340 Guard Clause will add returns to the beginning of your methods. And it makes code a lot cleaner in my opinion.
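    A minimal C sketch of the guard-clause style that answer describes (the function and the `-1` error sentinel are illustrative, not from any particular codebase):

```c
#include <stddef.h>

/* Nested single-exit style: the happy path is buried two levels deep. */
int price_nested(const char *item, int base) {
    int result = -1;                 /* error sentinel */
    if (item != NULL) {
        if (base >= 0) {
            result = base + 5;       /* hypothetical markup */
        }
    }
    return result;
}

/* Guard-clause style: invalid inputs exit immediately at the top,
 * and the normal logic reads straight down with no nesting. */
int price_guarded(const char *item, int base) {
    if (item == NULL) return -1;     /* guard: missing item */
    if (base < 0)     return -1;     /* guard: invalid base price */
    return base + 5;
}
```

    Both versions compute the same result; the guard clauses just move the error cases out of the way first.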

    It came from the notion of structured programming. Some may argue that having just one return allows you to easily modify the code to do something just before returning or to easily debug.

    I think the example is a simple enough case where I wouldn't have a strong opinion one way or the other. The single-entry-single-exit ideal is more to guide us away from crazy situations like 15 return statements and two more branches that don't return at all!

    John Carmack did it all the time in Doom. It's easier to get work done faster. Your tests should prove your work works, not mantras that make more work, like "goto is evil." https://github.com/id-Software/DOOM/blob/77735c3ff0772609e9c8d29e3ce2ab42ff54d20b/linuxdoom-1.10/p_doors.c#L207

    That is one of the worst articles I have ever read. It seems like the author spends more time fantasising about the purity of his OOP than actually figuring out how to achieve anything. Expression and evaluation trees have value but not when you can just write a normal function instead.

    You should remove the condition altogether. The answer is 42.

    Early returns to simplify a function and improve readability are usually good, but keeping one responsibility per function usually leads to more modular and readable code. If you have a condition that is not purely for an early return, chances are your function is breaching the single responsibility principle (if you care about that concept).

    I'm finding myself fighting against the single return habit in some performance-critical code, where I'm noticing it makes a speed difference in certain hot spots. In some cases, bailing out of the function early is cheaper than having to test a bunch of "ok to proceed" logic.
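    A hedged C sketch of the contrast that comment describes (the function names and the summing workload are made up for illustration; actual speed differences depend on the compiler and the hot spot):

```c
#include <stdbool.h>
#include <stddef.h>

/* Single-exit style: every validity check feeds an "ok to proceed"
 * flag that must be threaded through the rest of the function. */
int sum_flagged(const int *data, int n) {
    bool ok = (data != NULL);
    int sum = 0;
    if (ok) ok = (n > 0);
    if (ok) {
        for (int i = 0; i < n; i++) sum += data[i];
    }
    return ok ? sum : 0;
}

/* Early-return style: bad inputs bail out immediately, and the hot
 * loop runs with no flag bookkeeping at all. */
int sum_early(const int *data, int n) {
    if (data == NULL || n <= 0) return 0;
    int sum = 0;
    for (int i = 0; i < n; i++) sum += data[i];
    return sum;
}
```

    The two functions return identical results; the difference is only in how much "ok to proceed" logic survives into the main body.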

  • kevin cline

    Correct answer

    9 years ago

    "Single Entry, Single Exit" was written when most programming was done in assembly language, FORTRAN, or COBOL. It has been widely misinterpreted, because modern languages do not support the practices Dijkstra was warning against.

    "Single Entry" meant "do not create alternate entry points for functions". In assembly language, of course, it is possible to enter a function at any instruction. FORTRAN supported multiple entries to functions with the ENTRY statement:

          SUBROUTINE S(X, Y)
          R = SQRT(X*X + Y*Y)
    C     ALTERNATE ENTRY, USED WHEN R IS ALREADY KNOWN
          ENTRY S2(R)
          ...
          RETURN
          END

    C     USAGE
          CALL S(3,4)
    C     ALTERNATE USAGE
          CALL S2(5)

    "Single Exit" meant that a function should only return to one place: the statement immediately following the call. It did not mean that a function should only return from one place. When Structured Programming was written, it was common practice for a function to indicate an error by returning to an alternate location. FORTRAN supported this via "alternate return":

          SUBROUTINE QSOLVE(A, B, C, X1, X2, *)
          DISCR = B*B - 4*A*C
    C     NO REAL SOLUTIONS: RETURN TO THE ALTERNATE (ERROR) LOCATION
          IF (DISCR .LT. 0) RETURN 1
          SD = SQRT(DISCR)
          DENOM = 2*A
          X1 = (-B + SD) / DENOM
          X2 = (-B - SD) / DENOM
          RETURN
          END

    C     ON FAILURE, QSOLVE RETURNS TO LABEL 99 INSTEAD OF HERE
          CALL QSOLVE(1, 0, 1, X1, X2, *99)

    Both these techniques were highly error prone. Use of alternate entries often left some variable uninitialized. Use of alternate returns had all the problems of a GOTO statement, with the additional complication that the branch condition was not adjacent to the branch, but somewhere in the subroutine.

    And don't forget *spaghetti code*. It was not unknown for subroutines to exit using a GOTO instead of a return, leaving the function call parameters and return address on the stack. Single exit was promoted as a way to at least funnel all the code paths to a RETURN statement.

    @TMN: in the early days, most machines didn't have a hardware stack. Recursion generally wasn't supported. Subroutine arguments and return address were stored in fixed locations adjacent to the subroutine code. Return was just an indirect goto.

    @kevin: Yeah, but according to you this doesn't even mean anymore what it was invented as. (BTW, I'm actually reasonably sure that Fred asked where the preference for the _current_ interpretation of "Single Exit" comes from.) Also, C has had `const` since before many of the users here were born, so no need for capital constants anymore even in C. But Java preserved all those bad old C habits.

    So do exceptions violate this interpretation of Single Exit? (Or their more primitive cousin, `setjmp/longjmp`?)

    @Mason: The original article on SESE was focused on provable code. Unrestricted branching creates a combinatorial explosion in possible program states. The use of exceptions could also complicate a correctness proof, if the catch clause accesses local variables that may or may not have been initialized in the main line of code.

    Even though the OP asked about the current interpretation of single return, this answer is the one with the most historical roots. There's no point in using a single return as a **rule**, unless you want your language to match the awesomeness of VB (not .NET). Just remember to use non-short-circuit boolean logic as well.

    Is that an early form of the sort of continuation passing now seen in promise-oriented programming?

    In addition to Algol, Scheme also supports continuations, and enables some pretty cool options, if the dev can accept the risks.

    This doesn't just apply to assembly. There was also some research done at Microsoft (on C codebases) that found multiple returns contributed to higher bug frequency. See: https://www.amazon.com/Writing-Solid-Code-20th-Anniversary/dp/1570740550

    @JivanAmara: Multiple returns from functions that allocate resources can lead to resource leaks. In C memory allocation happens frequently. Multiple returns from functions that allocate memory or other resources should indeed be forbidden. Memory management should be separated from computation. C# and Java have the same problem with non-memory resources. This was somewhat ameliorated by the introduction of the C# using statement and Java's try variables (or Lombok @Cleanup).
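    A hedged C sketch of the leak risk described above, and the single-cleanup-exit idiom C programmers commonly use to avoid it (the function name, file, and sizes are illustrative):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Reads the first n bytes of a file into out. Returns 0 on success,
 * -1 on failure. Every early check jumps to one cleanup label, so
 * both resources are released exactly once on every path. A bare
 * "return -1" after the malloc would leak buf and the FILE handle. */
int copy_header(const char *src, char *out, size_t n) {
    int rc = -1;
    FILE *f = NULL;
    char *buf = NULL;

    f = fopen(src, "rb");
    if (f == NULL) goto cleanup;

    buf = malloc(n);
    if (buf == NULL) goto cleanup;

    if (fread(buf, 1, n, f) != n) goto cleanup;

    memcpy(out, buf, n);
    rc = 0;                     /* success */

cleanup:                        /* single exit funnels all paths here */
    free(buf);                  /* free(NULL) is a no-op */
    if (f != NULL) fclose(f);
    return rc;
}
```

    This pattern keeps the early-bailout readability while preserving the one place where resources are released — roughly what `using` in C# and try-with-resources in Java automate.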

    That's nonsense. It has nothing to do with a specific language, with the possible exception of assembly. A single return gives you a single point of return, at the risk of stating the obvious. You can set a single breakpoint there, for example, and you're sure to get there. Now, if you want an assertion mechanism or a pre-check of arguments at the head for bad arguments, that preserves your single return point for the normal logic of your method, without cluttering it with bad-argument checks.

    @RickO'Shea: How long does it take to set a breakpoint? I'm not willing to complexify the code to avoid having to set multiple breakpoints in an IDE. I keep my functions short enough to fit on the screen, so it's no problem to set a breakpoint on all the returns.

    @RickO'Shea My IDE lets me put a breakpoint on the final `}` of a function, and breaks there after any of the `return` statements is executed.

    "Single Exit meant that a function should only return _to_ one place" — {{citation needed}}. This answer could be greatly improved by a link to the place Dijkstra (or anyone else) is supposed to have talked about Single Exit with this particular meaning. As it is, it smells like a folk etymology to me.

Licensed under CC-BY-SA with attribution

Content dated before 6/26/2020 9:53 AM