Yo Dawg, I Heard Your Build Failed Because You Refactored Unit Tests…
July 31, 2009
I have been working on a project with my wife the last few weeks. Since this is a totally personal project I am free to use whatever tools I want to use without having to justify my decisions to a team.
So I have been going nuts with the tooling. I set up an SVN repository on my Windows Home Server. I set up TeamCity and two separate CI builds using UppercuT that run NUnit tests, NCover code coverage analysis, and NDepend reports. I am writing my app using ASP.NET MVC and the S#arp Architecture frameworks. Heck, I’ve even set up Selenium tests that get run every night. And I have NHibernate Profiler wired in there too JUST BECAUSE I CAN. It’s a lot of infrastructure for one page so far. 8)
So when I couldn’t get my single Selenium web test to pass in the TeamCity nightly build, I became a bit obsessed with figuring out why. It turns out a modification I had made to a test base class was causing the problem. I didn’t discover the issue when I made the change because I typically don’t run Selenium tests prior to check-in; I prefer to let the nightly build do that. So it took me a couple of days to figure it out, which prompted this post to Twitter, which made its way to Facebook:
Yo Dawg, I heard your build failed because you refactored your unit tests. So, I wrote some unit tests for your unit tests, so you can…
To which my buddy Drew Welliver, an awesome developer over at L&I, commented:
Code so nice… I refactored it twice, then thrice, then…. At some point ya gotta wonder, wouldn’t it have been easier to just run the application yerself and find the friggin’ bugs?…
And followed up with:
And code coverage is cool, but scripts running code running scripts running code running scripts running… The users are still gonna break it, SOMEHOW…. And all that testing becomes academic. The reality SEEMS to be that when I’ve worked on systems that used testing frameworks, their bug lists never seemed to be any shorter than the ones that didn’t.
The Fallacy of Unit Testing Proving Quality
I don’t think any TDD-er will claim they don’t produce bugs because they have tests. I would hope that testing makes you think about your code a bit more and helps you discover more bugs before they get out, but you never know if you are testing the right assumptions.
I do know this: if I have a bug and I write a test that demonstrates it, I know the bug is fixed when that test passes. That test then gets run every time a check-in is made to source control, ensuring the bug never returns.
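That fix-it-once workflow can be sketched in a few lines. This is a minimal Python `unittest` analogy of the same idea (I actually do this with NUnit); `slugify` and its bug are invented purely for illustration:

```python
import unittest

# Hypothetical example: slugify() stands in for whatever code the bug
# lived in; the name and its behavior are made up for illustration.
def slugify(title):
    # The fixed behavior: lowercase the title and join words with
    # hyphens, collapsing any run of whitespace along the way.
    return "-".join(title.lower().split())

class SlugifyRegressionTests(unittest.TestCase):
    def test_consecutive_spaces_collapse(self):
        # Written the day the bug surfaced: it failed against the buggy
        # code, passed once the fix went in, and now runs on every
        # check-in so the bug can never silently return.
        self.assertEqual(slugify("Hello   World"), "hello-world")
```

Wire the suite into the CI build (TeamCity running NUnit in my case; `python -m unittest` here) and that regression test is exercised on every commit from then on.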
The value of unit tests is not only the immediate feedback that the code you are working on does what you think it does, but also the sustained confidence that it still does what you think it does throughout the life of the development effort and beyond.
Spending ten minutes writing a test that gets run thousands of times throughout the life of the project, and on into the maintenance phase of the product lifecycle, is value.
A tight feedback cycle of change => flaw => fix does yield better code. The wider the gap in that feedback cycle, the longer it will take you to fix the problem, because you have to reestablish all the context first.
If you have to monkey test over the course of days, how do you know which change might have caused the problem? (No offense to monkeys, of course.)
The bottom line comes down to this question:
“Have you ever hesitated to make a change in a large system because you had no idea what effects it might have elsewhere in that system?”
A logical, well-thought-out test suite gives you the confidence to make that change. If your tests start failing, you simply back the change out right then and there and try something else, while you still have all the context for what you are working on.
The alternative, waiting days to weeks for unmotivated, half-qualified monkey testers to find that bug and then wondering which of the 40 change sets checked in during that time might have caused the issue, is not one for me.
And Drew, I’ll take you up on that beer offer, brother. I am always down for testing out some beer.
UPDATE: And you want hard numbers? Here are some hard numbers for you.