Agile and Test-Driven: A Marriage Made In Hell?
Agile and Test-Driven go together like a horse and carriage... or do they?
Thanks to an excellent series of articles at Raganwald, I'm currently rethinking some of my old prejudices against agile and test-driven practices, to see if I'll start adopting them as a more central part of my style. (Note: I'm not against either of these practices, but I'm certainly not a true believer.)
Thus I've been trying to uncover my hidden objections, to see if they can be overcome. But along the way a few boggling conundrums have presented themselves. See what you think.
Problem Number One.
One of the premises of test-driven development is that bugs cost hundreds of times more to fix if caught later. That's cool right?
One of the promises of agile is that when you follow its practices you flatten out the cost of making changes late in the game. That's also pretty cool.
But doesn't this mean that the use of one practice lowers the economic incentive to use the other?
I.e. when using agile practices, the benefits of test-driven development have less economic impact than they would have on a non-agile project. True? Crazy?
Problem Number Two.
One of the mantras of agile is 'YAGNI' -- You Aint Gonna Need It -- meaning, don't waste time writing extra code on the off chance it's needed later.
Test-Driven Development (TDD), on the other hand, states unequivocally that it's worth writing extra code, lots and lots of extra code, up to five times as much code as you'd write otherwise. The reasons for this are: it may stop bugs from escaping your desk, and it may help out when you need to alter the design later.
Hello? Maybe just maybe YAGNI can be applied to TDD? Or do we pick and choose when YAGNI applies?
So what gives? These seem like obvious reasons why TDD and Agile should be preferred in isolation from each other, not together. Why are they so often co-practiced?
An example where TDD trumps Agile?
If building a Mars Rover, go with TDD, not agile, because once the Mars Rover has landed on the surface you can't say "okay, let's do another iteration, and really get these features right."
An example where Agile trumps TDD?
On the other hand, if trying to develop software for a vague client, go with agile and forget TDD. There's little value in having a completely bug free system that the client will take one look at and say "y'know, I realise now that what I really want is something else altogether..."
Questions over. Your turn.
[note -- by saying "I'm not against either of these practices, but I'm certainly not a true believer" I realise I'm 'damning them with faint praise'. So be it ;-) ]
'Jason' on Sun, 11 Mar 2007 21:58:25 GMT, sez:
I thought TDD involved writing extra code for the tests themselves but YAGNI still applied to the application code where you only code enough to pass the tests and no more.
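A minimal sketch of that split, in Python (a hypothetical example, with made-up names): the test is the "extra" code TDD asks for, while YAGNI governs the application code, which does no more than the test demands.

```python
import unittest

# The test is the "extra" code TDD asks for: it pins down what the
# application must do today, and nothing more.
class TestWordCount(unittest.TestCase):
    def test_counts_whitespace_separated_words(self):
        self.assertEqual(word_count("the quick brown fox"), 4)

    def test_empty_string_has_no_words(self):
        self.assertEqual(word_count(""), 0)

# YAGNI applies to the application code: just enough to pass the tests.
# No locale handling, no punctuation stripping -- nothing the tests
# (i.e. the current requirements) don't ask for.
def word_count(text):
    return len(text.split())

# Run the suite programmatically so the example is self-contained.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestWordCount)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

If a new requirement arrives later (say, handling punctuation), a new test is written first and the implementation grows only enough to satisfy it.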
'lb' on Sun, 11 Mar 2007 22:07:27 GMT, sez:
But how do you know you're going to need the test itself?
Is there no limit to the number of possible tests that can be written?
[i'm overthinking this, i know ;-)]
Since tests don't prove correctness, you can often add more tests and still not be certain that you've covered every possible contingency. At some point you have to stop adding tests, based on yagni -- but it seems very arbitrary.
>only code enough to pass the tests and no more
So, this is the "Do the Simplest Thing That Works" mantra and it's similar to the YAGNI mantra.
'Haacked' on Sun, 11 Mar 2007 23:40:14 GMT, sez:
When agile methodologies discuss flattening out the cost of making changes, we're talking about changes due to stakeholders changing their minds as they see the application come to fruition.
I don't know about you, but I've *never* worked with a client who could describe in exacting detail everything they wanted at the very beginning of a project and never changed their minds afterwards about any feature.
'Owen' on Sun, 11 Mar 2007 23:49:09 GMT, sez:
<begin thought dump>
As noted at Coding Horror http://www.codinghorror.com/blog/archives/000801.html, TDD is a great tool when writing code for other people. There's always a need for developers who weren't involved from day one to understand the code. I can safely say that i didn't break anything major in an application i'd never seen before, because each of the 3000 tests covering the entire application still passes.
However, how many tests is enough? Well, that's the age-old question. During CITCON London (it's coming to Australia and i would recommend booking a seat!) an interesting metric came up: tests should run at around 100 per second (how many tests can you write that take only 1/100th of a second to run?). Coupled with CI, that keeps the suite at a sensible size: no one wants a build taking more than 10-20 mins, because no one will wait for the result, and tests slow the build, so we need to choose.
Remember, it's test DRIVEN development, not Test Development -- the tests should be driving something.
So I am a huge advocate of TDD when used correctly, and i really think it complements an agile iterative process because it allows more features to be considered plausible: sure, i can change that, because i don't mind breaking some code -- i know where to look to fix it.
'lb' on Sun, 11 Mar 2007 23:51:37 GMT, sez:
Haacked -- i agree that most projects involve stakeholders changing their minds (again and again).
OTOH, the mars rover project is an example of a real project where this isn't an option (and there's a whole class of projects like this) so "never say never"
so the costs are flattened out for changes to functionality -- but are they at all flattened out for the resolution of bugs??
and also if features are likely to change, does this diminish the need to make those features 100% bug free? (aka, if a job's not worth doing, it's not worth doing properly.)
'Haacked' on Sun, 11 Mar 2007 23:52:04 GMT, sez:
As for the Yagni issue, I agree with Jason.
The idea is, if your client wants the code to do X, just have it do X. If you also do Y because the client *might* want Y but never said so: YAGNI!
You've wasted time (== money) implementing something that may never be used, and created more possible points where your app could be buggy.
> But how do you know you're going to
> need the test itself?
Well, you don't know in which specific ways your client will change his/her mind, which is why YAGNI is applied there.
But you *do* know that somewhere down the line your code will have a bug or have to change -- you just don't know up front in which way.
Having automated unit tests will help identify regressions.
Not to mention, the benefit of TDD isn't just helping to reduce bugs. The act of writing a test forces you to think through your code from the caller's perspective. You're more likely to write clearer code.
For example, if you write a method you can't test, you should think about how to refactor it so it can be tested (NOTE: Not all methods can easily be unit tested. But at least you thought it through).
When do you stop writing tests? It is arbitrary. For some, it's when you have a certain code coverage percentage. For me, it's when I cover the main exception cases and the main logical cases I would expect.
I also focus more tests on code that is more complicated.
Lastly, part of TDD is also how you react to a bug. Say you find a bug in method X. First, write a unit test that exposes the bug. In other words, write a test that fails because of the bug, but should succeed if the bug doesn't exist. Now fix the bug. Make sure the test passes. And rejoice that you now have an automated regression test for that bug. It'll never get introduced (in that way) again.
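A hypothetical sketch of that workflow in Python (the function and the bug are invented for illustration): first a test that fails because of the bug, then the fix that makes it pass.

```python
import unittest

# Step 1: a test that exposes the bug. It fails against the buggy
# implementation and should pass once the bug is fixed.
class TestMedian(unittest.TestCase):
    def test_even_length_list(self):
        # The buggy version just took values[len // 2] and gave 3 here.
        self.assertEqual(median([1, 2, 3, 4]), 2.5)

    def test_odd_length_list(self):
        self.assertEqual(median([3, 1, 2]), 2)

# Step 2: the fix. Even-length lists now average the two middle values.
def median(values):
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2:  # odd length: single middle element
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2  # even: average the pair

# Run the suite programmatically so the example is self-contained.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestMedian)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Once the test passes, it stays in the suite as an automated regression test: if anyone reintroduces the naive middle-index logic, the build fails immediately.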
'Haacked' on Sun, 11 Mar 2007 23:55:42 GMT, sez:
LB, yeah, the Mars Rover might take a different approach. But I imagine most of your audience is not writing Mars Rover code.
For the Mars Rover, you'd probably use the "Clean Room" approach. But for the 99.9999% of developers not writing code for the Mars Rover, the Clean Room approach might take too much time to get anything done, and doesn't work because our clients are humans, not robots and physics.
> and also if features are likely to change,
> does this diminish the need to make those
> features 100% bug free?
Who said anything about 100% bug free? I assume you want the code to have as few bugs as possible. And again, because you cannot predict which features will change ahead of time, which features would you "allow" to be buggy?
'lb' on Sun, 11 Mar 2007 23:58:40 GMT, sez:
i understand when you say "i really think it [TDD] complements an agile iterative process" -- and i'm inclined to agree, but i want to agree for solid reasons that i can back up with logical arguments (or stats!).
specifically i want to overcome these particular conundrums i've stumbled onto, written above.
where i'm up to is this question:
Agile flattens the cost for changes made late in the game: does it flatten the cost for bug fixes made late in the game?
Does yagni apply to tdd? if not, why not? if so, when?
'Haacked' on Mon, 12 Mar 2007 00:00:50 GMT, sez:
I forgot to add. A large percentage (I forget which) of bugs are added when fixing bugs and making changes to code.
So for a vague client, I think TDD helps even more. You've probably seen it many times. Developer changes method A, breaks Method B written by another developer.
Nobody notices for a really long time.
'Haacked' on Mon, 12 Mar 2007 00:02:39 GMT, sez:
p.s. great topic.
In my last comment, I meant I forget the exact percentage of bugs that are introduced when making changes to code. But it's significant. It's in "Code Complete" somewhere.
So yes, I believe it does flatten the cost for bug fixes because it flattens the cost of making any changes.
'lb' on Mon, 12 Mar 2007 00:02:41 GMT, sez:
@Haacked -- so you write the least possible tests you think you can get away with today.
you don't say "hmmm, what are some freakishly hard edge cases that might one day defeat this function?" -- yagni.
you stop at "how do i expect __today__ that this code is going to be used?"
'lb' on Mon, 12 Mar 2007 00:09:41 GMT, sez:
@haaaaacked, owen et al.
so i think that Yagni both does and doesn't apply to TDD.
When the developer says "hmm, i may as well make this code extra generic so that if in the future..." -- the YAGNI card pops up and the agilists say 'stop! bad prediction, you don't know how it will be used in future'
But when the developer says "i don't know how this code is going to change later, so i'll write some tests around its behaviour today" -- then the YAGNI card is withheld, because the agilists say 'yep -- good prediction, you can safely expect it to change'
but what about this angle: a big percentage of projects/code get dropped and, even prior to that, a lot of features get dropped. so if a feature will ultimately be dropped -- we can save ourselves some effort by avoiding writing unit tests around that feature.
again: "if it's not worth doing, it's not worth doing properly" --
or to put it another way:
by putting unit tests around a feature you're making a bet that this particular feature won't be dropped, it will just be altered.
is that a safe bet? how so?
'Owen' on Mon, 12 Mar 2007 00:14:00 GMT, sez:
How often does a dropped feature mean just taking code out? In my experience there's often a feature taken out that affects the front end (perhaps an argument for the View -> Presenter model could go here) and not in essence correct the business logic behind. Also, why do you need to drop the whole feature? Surely the STTCPW (Simplest Thing That Can Possibly Work) would be just to unwire it without touching the logic, in which case the tests are just an added bonus if you want to do something later.
You don't code expecting features to be dropped do you?
Oh and my Agile and TDD dogma comes courtesy of an old employer. Cresta and Scott Syme ;-)
'Owen' on Mon, 12 Mar 2007 00:15:25 GMT, sez:
"and not in essence correct business logic" should read "and not in essence effect business logic"
oh and again i'm not backing myself up with numbers.
'lb' on Mon, 12 Mar 2007 00:16:32 GMT, sez:
"How often does a dropped feature mean just taking code out?"
"You don't code expecting features to be dropped do you?"
not exactly -- i just mean that there's a fairly good chance that a feature will be dropped.
ahh Cresta! SymeS! Great times!
'Owen' on Mon, 12 Mar 2007 01:04:07 GMT, sez:
on a further extension, TDD can help flatten the costs if used correctly, because it's not all about unit tests. What if (as I've experienced in the past), rather than having your specification, you used FIT acceptance tests to drive your development in the right direction? That's also part of TDD, but everyone seems to focus on unit-test-driven development. I really enjoy a situation where you can have tests as specification: it's more in line with customers' expectations and much more fluid to changing needs.
'Haacked' on Mon, 12 Mar 2007 02:00:29 GMT, sez:
Even if dropping a feature *just* means taking code out: would you rather do that and hope you didn't break anything else in the process?
Or would you rather have a degree of confidence thanks to a suite of regression tests?
Tell you what, remove the implementation of a random function in your code base and quickly tell me how many other features (and methods) are broken by that change.
A lot easier to do with some amount of test coverage.
'punky' on Mon, 12 Mar 2007 07:52:04 GMT, sez:
Great topic, and great discussion (the brew of bright minds is never bitter).
One of my personal problems with test code is how to test it. I think of it as a turtles-all-the-way-down kind of problem [see http://en.wikipedia.org/wiki/Turtles_all_the_way_down] that needs the pragmatism of yagni to reach a fixpoint.
'Scott' on Mon, 12 Mar 2007 09:11:36 GMT, sez:
"but what about this angle: a big percentage of projects/code get dropped and, even prior to that, a lot of features get dropped. so if a feature will ultimately be dropped -- we can save ourselves some effort by avoiding writing unit tests around that feature."
Remember that TDD should also be JIT. As Owen points out, the tests DRIVE the development. So if we have reached the point where the feature/code is due to be put in, that is the time to put in the unit tests. The test is the gate between the requirement and the code itself - no test = no code = no feature = no test ...
Bringing together some points above then: write the tests to cover the feature as foreseen/specified now. If a bug slips through, write a test that fails before the fix and passes after the fix has been written. If the feature changes, edit the tests to reflect the changes first, then run the tests - there might be less work than you thought to actually implement said changes.
IMHO you can employ TDD alongside several development approaches, not just Agile. On the other hand I feel that TDD is *essential* when using an Agile approach. The dynamic nature of Agile needs to be offset by a safety net of a structured approach such as TDD - together they make a powerful combination.
I changed from "using _the_ Agile approach", to "using _an_ Agile approach" above because this is another important point. Use existing methodologies (e.g. XP) as a starting point, but adapt them into something that works for the current team on the current project - and review your model regularly. Part of this process will be agreeing to what extent tests need to be written before coding begins.
Very interesting thread this one. I look forward to some more insightful comments.
'The Onion' on Mon, 12 Mar 2007 10:06:37 GMT, sez:
Hey lb is this article about you?
From the Onion: Man Who Plays Devil's Advocate Really Just Wants To Be Asshole
Only kidding ;->
'Michael' on Mon, 12 Mar 2007 12:44:47 GMT, sez:
>>by putting unit tests around a feature you're making a bet that this particular feature won't be dropped, it will just be altered.
By not putting unit tests around a feature you're betting that the feature actually works. HA!
Consider this: for many projects, unit tests are the front line and often the only line of [automated] defense against bugs. And let's face it: if your tests aren't automated then they might as well not exist.
Agile or not, TDD or not, that's the hard reality of life. People don't go back and write tests later, most projects don't have dedicated QA or test teams (or even a single person), and many projects don't have the time to look back. Therefore, if it doesn't get done the first time around then it isn't going to ever get done.
Without the developer testing his own code, you are making the (in my opinion wrong) assumption that someone else will test it for him. I guess there is always the end user.
'Helen' on Mon, 12 Mar 2007 16:40:44 GMT, sez:
The way I understand it is that the extra information TDD gives you about the side effects of changes gives you more freedom to make larger changes to your code without worrying as much about how they're affecting everything else.
So this complements the agile idea of refining things in iterations. If you didn't have the tests telling you when your changes impacted existing code, you would be less able to work in an agile way.
'Haacked' on Mon, 12 Mar 2007 22:54:45 GMT, sez:
Now that I read the Raganwald article that spurred this series, I have to ask if I've been "Feeding the Troll"? ;)
> On the face of it, an objection is an expression of a discrepancy between your idea and what someone wants.
So what is it you want? If your code isn't suffering from brittleness to change, then maybe TDD isn't for you.
'lb' on Mon, 12 Mar 2007 22:58:12 GMT, sez:
Good work Haacked!
My code is undiagnosed, but i suspect it has bubonic plague.
'DT' on Tue, 13 Mar 2007 01:27:03 GMT, sez:
Another way to phrase YAGNI is "do the simplest thing that will work". No more, no less. How do you know it works if it is untested? So TDD enables you to do the simplest thing that will work - everything else YouArentGoingToNeed. I think that's the theory anyway. :)
Is this too contrived to solve the TDD vs. Agile/YAGNI debate? :)
'Adi' on Tue, 13 Mar 2007 07:05:20 GMT, sez:
I think TDD is best practice when you are writing a framework or an API, I'm not so sure it's the best approach when you are writing a custom application.
'Eric' on Thu, 15 Mar 2007 19:21:29 GMT, sez:
problem # 1:
I'm not sure exactly what you mean by "agile", but I'll use scrum as an example.
TDD and scrum address two different scopes.
TDD without scrum will certainly improve code quality (and reduce stabilization time), but won't regularly expose functionality to users, so while the software may have decent quality, it may not be done on time or have the right mix of features.
Scrum without TDD will likely run into a lot of problems getting to "done done".
problem # 2:
Your argument here isn't against using the two together; it's an argument against TDD itself.
If you believe TDD is worthless, then YAGNI would say that you shouldn't write tests (though I think it's pretty much a tautology to say that if you don't think unit tests are important you won't write unit tests).
If you believe that TDD is the right way to go, then all the test code is code that you need, so YAGNI doesn't apply.
BTW, if you are writing 5x as much code to test, you are likely doing it wrong. Typical numbers I've seen are in the 1.3x to 2x range.
Yes, that's a lot of code, but it's cheap code to write, compared to mainline code.
TDD Trumps Agile:
Three problems with this example:
1) There is a considerable amount of effort pre-launch where agile would help.
2) Spacecraft can all be re-programmed remotely, to deal with issues that come up (typically degrading hardware, but sometimes software).
3) Avionics software is typically developed with very rigorous techniques - it's not something where you would use agile or TDD, because neither are stringent enough.
Agile trumps TDD:
Umm... If you're doing agile, you're asking the client at regular intervals (both in planning and at the end of iterations), so this isn't going to happen.
I can understand your skepticism with agile in general, as you need a team that is willing to try it. But TDD is easy to try yourself.
I recommend Michael Feathers' excellent "Working Effectively with Legacy Code".
'BusmasterJones' on Thu, 17 May 2007 14:26:37 GMT, sez:
The Mars Rover example is one where agile is a poor choice. The requirements would be (should be) relatively stable. Your target doesn't change (Mars) and your knowledge of it changes very little. The scope could certainly vary, but once it is determined, there is little that needs to be rethought.
You could develop within the confines of a delivery timeframe using Agile -- that is not inconsistent -- but the backlog should not grow much, nor the requirements change much. The biggest benefit would be iterative testing instead of a big bang, rather than the management of moving requirements.
'BarryH' on Sat, 26 May 2007 02:12:53 GMT, sez:
First, although it has been said, let me say it again, automated unit tests <> TDD. You may be able to add unit tests after the fact with other development methodologies, but if you don’t start with tests you are not doing TDD.
Next, who's to say that the 'Mars Rover' could not be created using Agile methods? The original point seems to be that the Agile approach would not work because we could not do another iteration once it left the earth, not because of the scientific/critical nature of the app. But who's to say that a Mars Rover-type application could not go through many agile iterations to get to the point where we are ready to send it off on its own? And if this were the case, I would definitely appreciate a suite of tests that I could look at to give me confidence. Sure, we may miss an iteration, but who's to say that the Rover is not up there right now with the team back on earth realizing they should have added a certain feature?
Finally, adding tests when you need them may not be that easy. I am in agreement with Haacked regarding the introduction of bugs. My current employment exists mainly due to the fact that fixing bugs produces bugs. And because the code was written without unit tests in mind, it is now next to impossible to introduce tests to help in fixing the bugs.
So, TDD is not an Agile methodology in and of itself. You could do TDD with any methodology you like. However, if you do not use TDD, or at least have a good understanding of coding for unit tests, adding the tests after the fact becomes a chore that may never get done. I wonder how many tests you could write up front for the cost of one test written after the fact?
'portrait artists' on Wed, 28 Nov 2007 01:29:20 GMT, sez:
One problem I noticed about agile is that it really requires members of a group to work together. In a company that utilizes outsourced people working from various parts of the world, agile doesn't give much benefit.
'Pablo' on Thu, 02 Jul 2009 21:21:00 GMT, sez:
Agile best practices - Doing the simplest things that work. Iterative development. Continuous refactoring. Embracing change. etc.
All of these are enhanced by TDD.
The complexity of a test is directly related to the complexity of the method being tested. TDD leads to simpler tests and more cohesive methods, which in turn lead to code that is more reusable, easier to change, and easier to refactor.