Velocity often grabs a lot of managerial attention on a project. But while velocity is the effect, build time is a common cause. A slow build has several insidious side effects. Say a build takes 20 minutes:
10:00am | My pair and I start the day with a fresh checkout and kick off a local build. Twenty minutes to kill? Check email, talk to the analyst about my story, take a coffee break. OK, the build passes.
10:20am | Work begins.
10:50am | We are ready to check in. Oh no, we can't wait for a 20-minute build every half hour, can we? There goes the motivation for frequent check-ins. We decide to develop locally for at least another hour before trying to check in. Signs of discontinuity in continuous integration!
12:00pm | Ready to check in again. Run the local build.
12:20pm | Oops, the build broke. Our code broke an old test. Good catch by the test suite!
12:30pm | Attempt another build after fixing the code.
12:50pm | Build passed! Now we just need to check once more after merging with the latest code from the repository. Is the CI build green? Yes, so it is safe to check out from the repository. We check out: a couple of auto-merges, no conflicts. One more local build and we are good to go.
1:10pm | Local build passed. We are about to check in when we notice the build is yellow. Darn, someone just checked in. Now we have to wait for the build to go green, then check out and build locally again before we check in.
Anything short of this rigour leaves a window open for build failure on the CI server. But hey, surely you can't expect us to play the waiting game indefinitely. What if someone checks in again while we are diligently verifying locally? Besides, it's time for lunch. So we gamble and check out. No conflicts, not even auto-merges. A good sign. It isn't worth building locally again. We check in and go for lunch. Halfway through lunch, I get a call.
"@#$!, why did you checkin while the build was still running?"
"Why what happened? Don't tell me your build didn't go through"
"Yes it didn't and now I am having trouble fixing it on top of your changes"
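The full rigour the story demands can be sketched as a routine. Everything below is a hypothetical stub for illustration: `run_local_build`, `ci_build_is_green`, `update_from_repository` and `commit` stand in for whatever build tool, CI server and version control system your team uses, not any real API.

```python
# A sketch of the "full rigour" pre-check-in routine from the story.
# All four helpers are hypothetical stubs for illustration.

def run_local_build() -> bool:
    return True  # stub: pretend the local build passes

def ci_build_is_green() -> bool:
    return True  # stub: pretend nobody's build is in flight

def update_from_repository() -> bool:
    return True  # stub: pretend the merge produced no conflicts

def commit() -> None:
    print("checked in")

def disciplined_checkin() -> bool:
    """Only check in when every gate passes; otherwise back off."""
    if not run_local_build():
        return False  # fix our own breakage first
    if not ci_build_is_green():
        return False  # wait: someone else's build is still running
    if not update_from_repository():
        return False  # resolve conflicts, then start over
    if not run_local_build():
        return False  # the merged code must also build
    commit()
    return True

disciplined_checkin()
```

With a 20-minute build, each of the two `run_local_build` gates alone costs 20 minutes, which is exactly what makes the shortcut in the story so tempting.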
As build time increases, it tests my patience as a developer, and I am tempted to take shortcuts that occasionally backfire. When this becomes standard team behaviour, the occasional turns into the regular. A longer build also widens the syncing window (12:50pm to 1:10pm above). This window is at least as long as the build itself: the wider the window, the greater the chance that someone else checks in while I am still getting ready.
Now what if the build took five minutes instead? There is much less waiting attached to frequent, bite-sized check-ins, so frequent check-ins are encouraged. That in turn reduces the likelihood of merge conflicts when I sync with trunk. Finally, if someone else does check in during my syncing window, I only have to wait five more minutes to see if their build goes green.
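The syncing-window effect can be put in rough numbers. As a back-of-the-envelope model (my assumption, not a claim from the post), suppose the rest of the team checks in as a Poisson process at two check-ins per hour; then the chance that someone lands in my window is:

```python
import math

def collision_probability(window_minutes: float, checkins_per_hour: float) -> float:
    """P(at least one other check-in during my syncing window),
    modelling team check-ins as a Poisson process.
    The Poisson model and the rate are illustrative assumptions."""
    rate_per_minute = checkins_per_hour / 60.0
    return 1.0 - math.exp(-rate_per_minute * window_minutes)

# The window is at least as long as the build:
p_slow = collision_probability(20, 2)  # 20-minute build
p_fast = collision_probability(5, 2)   # 5-minute build
print(f"20-min build: {p_slow:.0%}, 5-min build: {p_fast:.0%}")
```

Under these assumptions, roughly half of my check-in attempts collide with someone else's when the build takes 20 minutes, versus about one in six at 5 minutes, which matches the lived experience in the timeline above.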
In summary, everything else held constant, team performance correlates inversely with build time.
1 comment:
How about executing only those tests that your change-set affects? If it is merely unit tests, this is surely possible.
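One way to sketch the commenter's idea: keep a mapping from source modules to the tests that exercise them, and run only the tests reachable from the change-set. The mapping and file names below are hand-written for illustration; real tools derive such a map from coverage or dependency analysis.

```python
# Select only the tests affected by a change-set, given an
# illustrative, hand-written map from source module to the
# test files that exercise it.

TESTS_BY_MODULE = {
    "billing.py":  {"test_billing.py", "test_invoices.py"},
    "accounts.py": {"test_accounts.py", "test_billing.py"},
    "ui.py":       {"test_ui.py"},
}

def affected_tests(changed_files):
    """Union of the tests exercising any changed module."""
    selected = set()
    for path in changed_files:
        selected |= TESTS_BY_MODULE.get(path, set())
    return sorted(selected)

print(affected_tests(["billing.py"]))
```

A change to `billing.py` then runs only the billing-related suites instead of the whole 20-minute build, though the safety of this depends entirely on the map being complete.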
IMO, the best approach is to do everything related to the task/feature in your own local repository (or a remote, dedicated day/task branch), and apply for a push when you feel you've achieved it all. Drastic merge conflicts shouldn't arise with a properly planned task list board (and a slightly domain-divided, yet multi-functional when needed, team).
For some reason, I couldn't stop thinking about the old Rational ClearCase SCM model as I read this post.