Thou Shalt Have One Exit Point Per Subroutine -- A Monkey Cage Conundrum
Back when I was a wee lad in Engineerin' school, fussin' with
malloc and all of that hurty stuff, a particular lesson was drilled into us:
Thou Shalt Have One Exit Point Per Subroutine
This was given as gospel and we learnt it as such. It was such a fundamental principle of the coding style they drummed into us that the reasons for it were glossed over quickly -- "it makes maintenance easier".
I stuck to that principle for years -- until finally I thought about the underlying reasons, the trade-offs involved, and realised that the benefits were redundant now. The rule had become, mostly, bunk.
Yet, every time I write a subroutine that has more than one exit point, I find that other monkeys in my monkey cage jump up and down and screech and throw excrement and chatter excitedly.
So go and read about that monkey cage experiment now. I'll be here. Grinning demonically. Then we'll continue.
So why was that commandment so important once? And why is it no longer of such importance?
The big reason used to be around memory management.
You wanted to make sure that any allocated memory was cleaned up at the end of a routine, regardless of what other logical decisions were made during that routine.
Failure to clean up memory properly was terrible -- not just because it caused catastrophic failure. The real problem was that it caused symptoms that were very far removed from the cause itself. When you have a memory leak in a program, you can't simply read a stack trace that tells you where to go to debug the problem. You often end up having to crawl meticulously over the entire program.
In such cases you curse bitterly at any methods that have more than one exit point, because you can't check them quickly -- you instead have to analyse every last piece of logic.
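To make the old discipline concrete, here's a minimal C sketch (the function and its behaviour are hypothetical, invented for illustration): every path through the routine funnels down to a single `cleanup` label, so there is exactly one place to check that allocated memory gets freed.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical routine: concatenate two strings into a new buffer.
 * Every path funnels through the single exit at the bottom, so one
 * glance at the cleanup block tells you what gets freed. */
char *join(const char *a, const char *b)
{
    char *result = NULL;
    char *scratch = NULL;
    size_t len;

    if (a == NULL || b == NULL)
        goto cleanup;            /* never return from the middle */

    len = strlen(a) + strlen(b) + 1;
    scratch = malloc(len);
    if (scratch == NULL)
        goto cleanup;

    strcpy(scratch, a);
    strcat(scratch, b);

    result = malloc(len);
    if (result != NULL)
        memcpy(result, scratch, len);

cleanup:
    free(scratch);               /* runs on every path, success or failure */
    return result;               /* the single exit point */
}
```

With multiple `return` statements scattered through a routine like this, any one of them could silently skip the `free` -- which is exactly why the commandment existed.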
We're Finally Using the Right Tools
In this age of managed-memory, that sort of memory leak is not an issue (yes, there are still ways to leak managed-memory). And better still, we have two programming constructs (in C# and others) that give the exact kind of flow control we want -- without getting in the way of the program's intent. So if you've got objects to clean up -- use 'using' or 'try...finally'.
Well, the surface lesson is simple. The context of a 'Best Practice' changes over time, so a best practice shouldn't be considered in the absence of its context. And every practice must be revisited every once in a while, to see if the context still applies.
The deeper lesson is that monkeys can't be separated from the tortures they've undergone in the past. Maybe it's futile to try and teach old monkeys new coding paradigms. It's certainly likely to get you pelted with simian scat.
I don't like preaching -- so I'm going to stop right there. But if you're unconvinced, read up on guard clauses, and question your own beliefs a little. It's a deep deep vortex, the inner mind of the inner code monkey.
To finish that topic, here's a comment from Phil Haack that sums up my feelings well:
Get all the failure conditions out of the way at the top of the method so the body can focus on the real meat.
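In that spirit, here's a small hypothetical sketch of the guard-clause style (function name and behaviour invented for illustration): the failure conditions each bail out immediately at the top, and the body that follows handles only the real work.

```c
#include <ctype.h>
#include <stddef.h>

/* Hypothetical example of the guard-clause style: each failure
 * condition exits immediately at the top, so the body below
 * deals only with the "real meat". */
int count_digits(const char *s)
{
    if (s == NULL)
        return -1;       /* guard: no input at all */
    if (*s == '\0')
        return 0;        /* guard: nothing to count */

    int count = 0;
    for (; *s != '\0'; s++)
        if (isdigit((unsigned char)*s))
            count++;
    return count;
}
```

Three exit points, and the routine is easier to read for it -- with no manual cleanup in sight, the old reason for funnelling everything to one `return` simply doesn't apply.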
Ah -- one last detour...
I found a java thread on the same topic, wherein the discussion veered onto people doing things because 'that's the way they've always been'.
This led to a discussion of driving on the right side of the road (the dominant paradigm on earth) and JosAH claimed this had deep roots in times long past:
Riding a horse on the right side of the road in the dark ages made it nearly impossible for right handed sword fighters to combat each other when they were passing each other. It's the 'peaceful' side.
to which George123 replied:
And having the steering wheel on the left side makes it easier to hit the kids in the back seat. Just thought I'll clear that up.
I am enlightened. I am at one.