This may or may not become a series, but I wanted to dash off a few thoughts. I have a feeling this post will come across as very stream-of-consciousness but will be clarified by follow-up posts.
The essence of estimating projects is to evaluate two things:
- How long is it expected to take?
- What risks are there?
There are some brilliant writings on the first issue, but I haven’t seen much done on the second. I’d like to propose the following:
- Each task, in addition to whatever size rank you want to give it (e.g., Small, Medium, Large, eXtra Large), gets ranked for risk.
- For example, let’s suppose there is a “grab webpage” task, and every member of the team agrees it is Small. That implies low variance.
- Let’s suppose you want to parse the webpage you’ve just grabbed, and you get the following votes as to its size: S, S, L, M, XL. That implies a large variance, and higher risk. Note, however, that this level of disparity might also imply that the task isn’t well scoped, that different team members are using different assumptions, or some other definitional or implementation issue.
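To make this concrete, here’s a minimal sketch of turning size votes into a numeric risk signal. The point values assigned to each size are illustrative assumptions, not any standard scale:

```python
# Sketch: turn T-shirt-size votes into a rough risk signal.
# The point values below are illustrative assumptions, not a standard scale.
from statistics import mean, stdev

POINTS = {"S": 1, "M": 3, "L": 5, "XL": 8}

def risk_score(votes):
    """Return (mean size, spread) for a list of size votes.

    A high spread relative to the mean suggests the task is risky,
    or poorly scoped, and should be discussed before it is budgeted.
    """
    sizes = [POINTS[v] for v in votes]
    return mean(sizes), stdev(sizes)

grab = risk_score(["S", "S", "S", "S", "S"])    # everyone agrees: low variance
parse = risk_score(["S", "S", "L", "M", "XL"])  # wide disagreement: high variance
```

The spread of the “parse” votes is the cue to stop and ask whether the task is genuinely risky or just under-specified.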
- These variances should be accounted for in the budget and as part of burndown.
- One feature that I haven’t seen in any tool is the ability to do “actual” vs. “budgeted” burndown: a comparison of actual time spent on the project against the budget. The useful thing about taking risk into account upfront is that you may very well find that while your actual burn exceeds a straight-line estimate, the numbers still fall within your predicted risk range (Nate Silver can tell you all about this one). Of course, you can’t know until you have the tools to check. I’m currently exploring abstractions to allow various permutations of Kanban and other project monitoring and tie them to commits, support tickets, and testing.
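As a sketch of what “actual” vs. “budgeted” burndown with an upfront risk band might look like, the following compares cumulative actual burn against budget plus a per-period risk allowance. The numbers and the additive-allowance model are assumptions for illustration, not a method from any existing tool:

```python
# Sketch of "actual vs. budgeted" burndown with an upfront risk band.
# All numbers are made up; the band is simply budget + a cumulative
# risk allowance per period (an assumption, not an established method).

def burndown_check(budget, actual, risk_allowance):
    """For each period, report (cumulative budget, cumulative actual,
    whether actual stayed within budget + cumulative risk allowance)."""
    report = []
    cum_budget = cum_actual = cum_risk = 0.0
    for b, a in zip(budget, actual):
        cum_budget += b
        cum_actual += a
        cum_risk += risk_allowance
        within = cum_actual <= cum_budget + cum_risk
        report.append((cum_budget, cum_actual, within))
    return report

# A team burning hotter than the straight-line estimate, yet still
# inside the predicted risk range for every period:
report = burndown_check(budget=[10, 10, 10], actual=[12, 11, 12], risk_allowance=3)
```

Here every period exceeds the straight-line budget, but the run stays inside the risk band, which is exactly the distinction a plain burndown chart can’t show.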
- Now, one additional layer to consider is prospective maintenance cost. Here, parsing a webpage is a “brittle” task: if someone upstream changes the format of the page, you may very well have to start the task over from scratch, or the change may cause future outages. This “ongoing cost” or “brittleness” risk factor is something I’ve rarely seen accounted for in projects.
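One crude way to fold brittleness into an estimate is to treat it as expected rework: build cost plus the chance of upstream breakage times the cost of redoing the work. All the figures below are hypothetical:

```python
# Sketch: fold "brittleness" into a task's expected cost.
# All probabilities and cost figures are illustrative assumptions.

def expected_cost(build_cost, breaks_per_year, rework_cost, years):
    """Build cost plus expected rework from upstream breakage.

    Deliberately crude: assumes roughly breaks_per_year breakages a
    year, each costing rework_cost to repair.
    """
    return build_cost + breaks_per_year * rework_cost * years

# Parsing a scraped page: cheap to build, but brittle.
parse_page = expected_cost(build_cost=2, breaks_per_year=0.8, rework_cost=3, years=3)
# Using a stable upstream API: more upfront work, little expected rework.
use_api = expected_cost(build_cost=5, breaks_per_year=0.05, rework_cost=3, years=3)
```

Over a three-year horizon, the “cheap” brittle task ends up costing more in this model, which is the whole point of pricing brittleness upfront.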
- Finally, most groups don’t really estimate to account for interdependencies (especially internal ones) or for testing.
- Interdependencies are a major problem: how many times did the Boeing 787 slip? Obviously, you can’t ship just the wings of an airplane. But in software, you can often ship some intermediate product even if waiting on some “important” piece. In fact, sometimes software is better without that “essential” feature.
- I’m not necessarily a fan of writing tests before writing code, but I do think that organizations tend to focus on “features first” at the expense of some very brittle systems. For the love of Pete, at least build a regression suite as you close dev and support tickets.