Imagine that we’ve scheduled a reader picnic for the weekend. But the weather forecast says there is a 5% chance of rain. We could rent a tent - or even reschedule - but with such a low likelihood of rain and the minor inconvenience it would cause, it’s not really worth it. The risk is minimal, and the effort to mitigate it would far outweigh the potential impact.
Or, consider a scenario where we’re organizing an online event. There’s a fairly high likelihood that Teams will fail with a large group, but we can easily mitigate that by moving to another video platform. In this case, the likelihood is higher, but the impact is easily mitigated.
As obvious as these examples are, I frequently see teams fail to look at both likelihood and impact as inputs to risk.
Failure Modes
Somewhere close to a million years ago, I came across Failure Mode and Effects Analysis (FMEA). It’s heavily documented and can feel like overkill, but conceptually, I’ve found it pretty effective. In its rawest form, it looks at “failure modes” (the ways in which something might fail) and “effects analysis” (the consequences of those failures). The analysis includes ratings for Severity (impact) and Occurrence (likelihood), along with root causes, mitigations, and a few other inputs. In my experience, the Severity and Occurrence ratings (along with a guess at mitigation approaches) are a fantastic way to look at risk from a much more balanced perspective.
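To make the Severity × Occurrence idea concrete, here’s a minimal sketch in Python. The rating scales, failure modes, and numbers are all made-up illustrations (loosely echoing the examples above), not entries from a real FMEA worksheet:

```python
# Minimal FMEA-style sketch. The 1-10 scales and the example
# failure modes below are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class FailureMode:
    name: str
    severity: int    # 1 (negligible) .. 10 (catastrophic)
    occurrence: int  # 1 (rare) .. 10 (near certain)
    mitigation: str  # planned mitigation, if any

    @property
    def risk_priority(self) -> int:
        # FMEA multiplies the ratings so that neither likelihood
        # nor impact alone dominates the ranking.
        return self.severity * self.occurrence


modes = [
    FailureMode("Rain at the picnic", severity=2, occurrence=1,
                mitigation="accept the risk"),
    FailureMode("Teams fails for large group", severity=4, occurrence=7,
                mitigation="switch video platforms"),
    FailureMode("Data loss after big release", severity=9, occurrence=2,
                mitigation="staged rollout + backups"),
]

# Rank failure modes by combined risk, highest first.
for m in sorted(modes, key=lambda m: m.risk_priority, reverse=True):
    print(f"{m.risk_priority:3d}  {m.name}  ->  {m.mitigation}")
```

Note that the high-severity item (data loss) doesn’t automatically top the list - a likely-but-mitigable failure can outrank it, which is exactly the balanced view the ratings are meant to give.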
In addition to FMEA, there’s the (even more document-heavy) ISO 31000, which also talks about risk from this perspective.
Uncertainty
Douglas Hubbard (author of How to Measure Anything) wrote another book that I like a lot, The Failure of Risk Management, which dives deeper into this subject and has some interesting insights.
Hubbard states that the “Risk = Likelihood × Impact” formula is too simple: it’s a useful starting point, but it misses critical aspects of real-world risk assessment. Hubbard talks a lot about the value of looking at uncertainty when evaluating risk. For instance, the likelihood of an earthquake in a region can be estimated, but the precise timing, magnitude, and secondary effects (e.g., tsunamis) remain uncertain. Hubbard suggests Monte Carlo simulations or similar techniques to look at how uncertainty may play into overall risk - but I’ve found that simply acknowledging uncertainty is a huge step forward in evaluating risk.
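Here’s a minimal sketch of the Monte Carlo idea. Every number and distribution below is an invented assumption for illustration - the point is only that instead of a single likelihood × impact estimate, we treat both the likelihood and the impact as uncertain ranges and look at the spread of simulated outcomes:

```python
# Monte Carlo sketch of risk under uncertainty. The probability
# range and loss distribution are invented for illustration.
import random

random.seed(42)  # deterministic for reproducibility


def simulate_annual_losses(trials: int = 100_000) -> list[float]:
    losses = []
    for _ in range(trials):
        # We're uncertain about the likelihood itself: somewhere
        # between a 2% and 10% chance of the bad event per year.
        p_event = random.uniform(0.02, 0.10)
        if random.random() < p_event:
            # Impact is also uncertain: a long-tailed cost,
            # with a median around $60k in this made-up example.
            losses.append(random.lognormvariate(mu=11, sigma=1))
        else:
            losses.append(0.0)
    return losses


losses = sorted(simulate_annual_losses())
mean_loss = sum(losses) / len(losses)
p95 = losses[int(0.95 * len(losses))]  # 95th-percentile annual loss
print(f"expected annual loss:  ${mean_loss:,.0f}")
print(f"95th percentile loss:  ${p95:,.0f}")
```

The output is a distribution rather than a single number, which is the whole point: the expected loss and the 95th-percentile loss can differ wildly, and a point estimate of likelihood × impact hides that gap.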
Huh, What?
An example could be helpful right now.
Imagine that your team is about to release a significant update to a widely-used software application or service. The update includes major architectural changes and new features. It’s big.
A bug here could cause widespread outages, data loss, or even reputational damage. But that doesn’t mean you shouldn’t release the new software. You also have to look at the likelihood that a major bug could be introduced. If you’ve done a lot of good testing in advance, the likelihood of a major error is much lower. Add in the mitigation plans that FMEA suggests, and the likelihood of a major impact is minimal.
The above paragraph is probably standard operating procedure for most groups. But it doesn’t look at uncertainty. Hubbard (and I) say that you especially need to look at the system - including everything you can’t control - when evaluating risk. If our new software relies on new third-party software or services, we need to consider those when evaluating risk. If the widget we’re using from foo.com changes constantly and has had prior issues, I should be a lot more worried about this release (or mitigate the worry by getting a hotline to foo.com’s CTO).
The short story is that risk is a system. You can’t just look at any single factor of risk and get an accurate take on what could actually go wrong.
Risk Is A System
Risk management isn’t about avoiding every possible problem—it’s about understanding the full picture. By looking at likelihood, impact, and uncertainty together, you can make smarter decisions about where to invest your time and resources. Frameworks like FMEA and insights from thinkers like Hubbard remind us that risk isn’t one-dimensional. It’s a dynamic, interconnected system that demands thoughtful analysis.
The next time you’re faced with a risky decision, take a step back. Don’t just react to the loudest concern or the most obvious threat. Consider the broader system, embrace the unknowns, and focus on the risks that truly matter. It’s not about eliminating risk entirely—because that’s impossible. It’s about managing it well enough to move forward with confidence.
-A
Excellent
Another great article Alan! One thing I've noticed is a tendency for certain types of "leaders" to ignore risk in the pursuit of results. I think results and accountability are very important, but some leaders become overly focused on the "what" without any consideration at all for the "how." These people tend to oversimplify everything and demand certainty from their teams, while refusing to engage in discussions about managing risk. The result is that their teams learn to sandbag and avoid risk, instead of taking (reasonable) risks in pursuit of innovation. This kind of results-only culture ultimately leads to churn and underperformance. By contrast, leaders who have past experience in delivering real work themselves (or at least the humility to understand what they *don't* understand) don't tend to fall into this trap. They know that things can and will go wrong, and foster a culture where risks are openly discussed. In such a culture, teams can debate the risks, spend time and money to explore risks and mitigation plans, and as a result are able to safely innovate.
It’s probably a futile hope, but I would like to see more companies stop the practice of hiring and promoting MBA-wielding fast talkers, and instead focus on developing leaders with a track record of sustainable results.