Assessing risk (and pitfalls)

I’ve already mused a bit on project estimation, and I have an upcoming post about how to think about sunsetting systems.  It occurs to me that there are at least two more fundamental questions to address:  what does it mean to talk about the risk in a system, and what biases might we have when making that estimate?

What is risk?

Finance types define risk as standard deviation: a measure of how much things might not go strictly according to plan, and how far from your estimate they might send you.  If you say that a project’s estimated cost is $100 with no risk, you’re saying the cost should be exactly $100.  But a cost of $100 with a risk of $50 (we’ll naively assume it’s normally distributed) implies the actual cost could land well away from that $100: roughly a two-in-three chance of falling between $50 and $150, and real odds of landing outside even that range.
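To see what that $50 of risk buys you, here’s a minimal sketch (hypothetical numbers, and the same naive normal assumption as above):

    # A minimal sketch of what a $100 estimate with $50 of "risk" implies,
    # naively modeling cost as a normal distribution (hypothetical numbers).
    from statistics import NormalDist

    cost = NormalDist(mu=100, sigma=50)

    print(f"P(cost <= $150): {cost.cdf(150):.0%}")      # ~84%
    print(f"P(cost > $200):  {1 - cost.cdf(200):.0%}")  # ~2%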

I often see risk discussed by technical people in terms of roadblocks: “it might be really slow when we hook widget X up to database Y.”  “We might not be able to translate records directly from System Q to System W.”  These are fine things to know about, but if you can quantify them in terms of time lost, revenue lost, or raw dollars, you’ll be closer to being able to put a bottom line on your estimates.
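One rough way to price a roadblock like that is probability times impact, if you can put odds and a dollar figure on it (both numbers below are hypothetical):

    # Rough sketch: pricing a roadblock as probability times impact.
    # The probability and rework figure are hypothetical placeholders.
    p_slow_integration = 0.30   # chance the widget X / database Y hookup is too slow
    rework_cost = 20_000        # dollars of rework if it is

    expected_cost = p_slow_integration * rework_cost
    print(f"Expected cost of the integration risk: ${expected_cost:,.0f}")  # $6,000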

When doing estimates, teams often use a sort of placeholder.  Instead of estimating that a task will cost, say, $1000, we tend to say it’s a “Small” task, and other tasks might be “Medium”, “Large”, or “Extra-Large”.  This is fine.  However, I also think it would be useful to score each task for risk.  For example, I think most developers would agree that “fetch a web page” is a relatively low-risk task, while doing bleeding-edge machine vision is riskier.  From there, perhaps you extrapolate that the “fetch a web page” task is estimated to cost $800-1200, while a riskier task of similar size might be $500-2000.
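One way to act on those per-task ranges (a sketch; every dollar figure is made up, reusing the ranges above) is to Monte Carlo the sum, so the whole project gets a spread rather than a single number:

    # Sketch: per-task cost ranges (wider band = riskier task), Monte Carlo'd
    # into a whole-project spread. All dollar figures are made up.
    import random

    # (task, low estimate, high estimate) -- riskier tasks get wider bands.
    tasks = [
        ("fetch a web page",             800, 1_200),  # low risk, tight band
        ("bleeding-edge machine vision", 500, 2_000),  # similar size, riskier
    ]

    N = 10_000
    totals = sorted(
        sum(random.uniform(low, high) for _, low, high in tasks)
        for _ in range(N)
    )
    print(f"P10: ${totals[N // 10]:,.0f}")
    print(f"P50: ${totals[N // 2]:,.0f}")
    print(f"P90: ${totals[9 * N // 10]:,.0f}")

The output is a low/median/high band for the project as a whole, which is a more honest bottom line than a single point estimate.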

Bias

We all have biases; there are some well-worn tropes in industry, such as “not invented here” syndrome.  Most technical folks, I think, have some bias either toward platforms they’ve already used and understand, or toward the latest “sexy” technology.  I suffer from a variant of this: a preference for open-source products.

There’s another bias inherent to humans: the sunk cost fallacy.  We tend to impart value to things we’ve paid for, even when that past payment has nothing to do with the current decision.  Sunk cost bias is fairly intuitive once you think about it, but it’s difficult to consciously minimize when making decisions.  The essence: it doesn’t matter that you spent a million dollars last year.  The question is the cost of a new system versus the ongoing cost of the current one.