A recent thread on LinkedIn asks "Which is better: single checkouts or multiple checkouts?" In my experience, many, if not most, projects use a parallel checkout philosophy. I've worked on projects, both large and small, where exclusive checkout has been the norm; I highly recommend this approach.
I've often walked onto a project floor to find developers complaining about the constant need to merge and re-test software because of parallel changes made by other developers. Perhaps you've heard a question like this: "Who should do the merging, the programmer or the CM manager?"
If you change the rules and don't allow parallel checkouts, developers might complain that they can't get their work done because someone else has checked out a file they need.
Parallel checkouts result from a high level of file contention, and file contention grows dramatically with the average amount of time a file stays checked out. That leaves us with the key question: How can I minimize the average file checkout duration?
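To see why duration matters so much, here is a rough sketch of the relationship. This is an idealized model of my own (the rates and the single-server-queue approximation are assumptions, not figures from any project): if change requests for a file arrive at some average rate, the chance that a new request finds the file already locked grows in proportion to the average checkout duration.

```python
# Idealized contention model (an assumption for illustration, not project data):
# change requests for a file arrive at an average rate of `requests_per_day`,
# and each checkout holds the file for `checkout_days`. The fraction of time
# the file is locked -- and hence the rough chance that a new request hits
# contention -- is rate * duration, capped at 1, as in a simple
# single-server queue.

def contention_chance(requests_per_day: float, checkout_days: float) -> float:
    """Approximate probability that a change request finds the file locked."""
    return min(1.0, requests_per_day * checkout_days)

# With one change request every ten days on average:
print(contention_chance(0.1, 10))  # ten-day checkouts: the file is always busy
print(contention_chance(0.1, 2))   # two-day checkouts: contention drops to 20%
```

The model is crude, but the direction is the point: cut the average checkout duration and contention falls with it, which is exactly what the suggestions below aim at.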
Here are my suggestions:
1. Let developers check in files when work is completed. Many processes force them to hold on to their code until the next build is completed. Tools and processes must allow developers to check in code without having to "commit" it to the next build; otherwise checkout times will grow.
2. “Main trunk” philosophies cause problems at the beginning and end of "releases." Use a trunk-per-release philosophy as discussed in my article “Top Ten CM Best Practices #7,” or you'll have people holding on to their next-release changes until the current release "closes." That closing date tends to slip, causing even longer checkout times.
3. Break features into smaller changes. Do the interface and infrastructure work first, so that current functionality continues to work no matter how much of the feature is present; then do the rest. Instead of having a dozen files checked out for a month, you might have six checked out for four days, four for two weeks, and six more for another week—an average of two-thirds less checkout duration per file.
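The arithmetic in that example can be checked directly. A quick sketch, using the numbers above and taking a month as 30 days and a week as 7:

```python
# A dozen files checked out for a month (taking a month as 30 days):
before = 12 * 30                   # 360 file-days of checkout time

# The same work broken into smaller changes:
# six checkouts of 4 days, four of 14 days (two weeks), six of 7 days (a week).
after = 6 * 4 + 4 * 14 + 6 * 7     # 24 + 56 + 42 = 122 file-days

reduction = 1 - after / before     # about 0.66, i.e. two-thirds less
print(before, after, round(reduction, 2))
```

Total checkout time drops from 360 file-days to 122, which is the two-thirds reduction claimed above, and the shorter individual checkouts mean each file is available to other developers far more of the time.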
4. Break up large files into a few smaller ones, as discussed in Stack Overflow’s “Many vs Few Source Files.” When you have files that are thousands of lines long, not only will you have a higher rate of contention, but you'll lose productivity looking for a line of code, compiling, scrolling, etc.
5. Change your process so that the person most familiar with a file is the one changing it. This will reduce the time needed to do a change as well as the amount of rework needed—and improve quality.
6. If you have to make the same change in two parallel releases, do the less busy release first. The second change will go much faster: merge the first one across and retest.
7. Speed up your peer-code review process. Instead of forcing a meeting, put your work in the repository (still checked out) so that others can review it online. Make reviews high-priority items.
8. Avoid unnecessary branching. See my CM Journal article “To Branch or Not To Branch.” A branch is often like a long-duration checkout that eventually has to merge.
You’ll need adequate tools and processes to implement these suggestions. By implementing them, you'll find that exclusive checkouts work great, and your branch-and-merge requirements will drop dramatically.
Don't close the door on parallel checkouts; use them only as a last resort.
President and CEO of Neuma Technology, Joe Farah is a regular contributor to the CM Journal. Prior to cofounding Neuma in 1990, he was a director of software at Mitel. In the 1970s, Joe developed the Program Library System (PLS), still heavily used by Nortel (Bell-Northern Research), where he worked at the time. He's been a software developer since the late 1960s.