Special thanks to my friends Deepa Joshi and Sam DeCesare for input/feedback!
The 80/20 Rule, otherwise known as the Pareto Principle, has wide-reaching implications for productivity. If you can get 80% of the results you want for 20% of the effort in a given endeavour, you can choose an efficiency/thoroughness trade-off that’s right for your desired results. When you find a scenario that conforms to the 80/20 rule, there’s a real opportunity to save a whole lot of effort while still raking in big results. Unfortunately, a lot of scenarios in life don’t conform to this nice 80/20 distribution, so I’d like to propose another principle that I see pretty often.
The All-or-Nothing Principle: For many scenarios in life, there are 0% of the effects until there are 100% of the causes.
So while a Pareto distribution looks something like this:
…an All-or-Nothing distribution is much simpler; you’ve either got the effect or you don’t.
It applies only to systems where results are binary, and those can be found all over the place: everywhere from electronic examples like light switches or mouse-clicks, to the more biological — like being pregnant or being dead.
Like the Pareto Principle, the All-or-Nothing Principle can have a huge effect on productivity if you recognize it and handle it appropriately.
Here’s a simple example: Bridges are expensive to build. Building a bridge even 99% of the way across a river gets you 0% of the results. This isn’t at all the type of distribution that the Pareto Principle describes and so you can’t make the same efficiency/thoroughness trade-off with it. You simply can’t get 80% of the cars across with 20% of a bridge, so in order to extract maximum productivity from these scenarios, you have to approach the situation fundamentally differently. The important thing to remember in these scenarios (assuming you’ve identified one correctly) is that there is no difference at all in effect between 0% effort and 99% effort. If all you can give is 99% effort, you should instead give no effort at all.
Let’s talk through the bridge example a little more: if you were a bridge-building company with 99% of the resources and time needed to build a bridge, you’d be in a much better spot in general BEFORE you started than you would be at the 99% mark, even though the 99% mark is so much closer to completion. In fact, the almost-complete state is a sort of worst-case scenario where you’ve spent the most money/time possible and reaped the least benefit. Where 80/20 distributions reap quite a lot of reward for very little effort upfront, all-or-nothing scenarios reap no reward whatsoever until effort meets a specific criterion.
There are lots of examples of this in the real world too. Medical triage sometimes considers this trade-off, especially when resources are constrained, as in wars or other scenarios with massive casualties. Medical professionals will prioritize spending their time not necessarily on the people closest to death, but on the people close to death who have a reasonable chance of being saved. Any particular person’s survival is an all-or-nothing proposition. It’s a brutal trade-off, but of course the medical professionals know that if they don’t think they can save a person, they should instead spend their efforts on the other urgent cases with a better chance; the goal, after all, is to maximize lives saved. In the real world, probabilities about outcomes are never certain either, so I’m sure this makes these trade-offs even more harrowing. (Interestingly, if a triage system like this allows 80% of the affected population to survive by prioritizing 20% of them, the effort on the population as a whole conforms to the Pareto Principle.)
The All-or-Nothing Principle and Software Development
Fortunately for me, I work in software development, where being inefficient (at least in the product areas I work in) doesn’t cost anyone their life. All-or-nothing scenarios do show up in dozens of important areas of software development, though.
Of all the scenarios, I don’t think any has shaped the practices and processes of software development in the last 25 years as much as the all-or-nothing scenario of “shipping” or “delivering” product to actual users. The reason for that is that software only gets results when it’s actually put in front of users. It really doesn’t matter how much documenting, planning, development, or testing occurs if the software isn’t put in front of users.
This specific all-or-nothing scenario in software is particularly vicious because of gnarly problems on both sides of the “distribution”:
- Expectations of the effort required to get to a shippable state can be way off. You never really know until you actually ship.
- Expectations of the effectiveness of the software to meet our goals (customer value, learning, etc) can be way off. Will users like it? Will it work?
Where more linear effort-result distributions (including even 80/20 distributions) get you results and learning along the way, and a smoother feedback cycle on your tactics, all-or-nothing scenarios offer no such comforts. It’s not until the software crosses the threshold into the production environment that you truly know your efforts were not in vain. With cost and expected value being so difficult to determine in software development, the prospects become even more risky. These are the reasons that a core principle of the Agile Manifesto is that “Working software is the primary measure of progress.”
It should be no surprise then that one of the best solutions that we’ve come up with is to try to get clarity on both the cost and expected value as early as possible by shipping as early as possible. Ultimately shipping as early as possible requires shipping as little as possible, and when done repeatedly this becomes Continuous Delivery.
Pushing to cross the release threshold often leads us to massive waste reduction:
Users get value delivered to them earlier, and the product development team gets learnings earlier that in turn lead to better, more valuable solutions.
All of this of course depends on how inexpensive releasing is. In the above scenario, we’re assuming it’s free, and it can be virtually free in SaaS environments with comprehensive automated test suites. In other scenarios, the cost of release weighs more heavily. For example, mobile app users probably don’t want to download updates more often than every 2-4 weeks. Even in those cases, you probably want a proxy for production release, like beta users that are comfortable with a faster release schedule.
We often find ways to release software to smaller segments of users (beta groups, feature toggles, etc.) so we can both reduce some of the upfront effort required (maybe the software can’t yet handle full production load, isn’t internationalized for all locales, or doesn’t have full brand-compliant polish) and start seeing some value immediately, letting us extrapolate what kind of value to ultimately expect. The criterion of releasing to even one user has a surprising ability to force out most of the unknown unknowns that must be confronted to release to all users, and so it’s a super-valuable way of converging on the actual cost of delivering to all users. I’ve personally found these practices to be absolutely critical to setting proper expectations about the cost/value of a software project and to delivering reliably.
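To make the feature-toggle idea concrete, here’s a minimal sketch of a percentage-based rollout. The feature names, user ids, and the `is_enabled` helper are all hypothetical; real teams typically use a feature-flag library or service, but the core trick is the same: deterministically bucket users so the same small segment stays in the rollout as you ramp it up.

```python
import hashlib

def is_enabled(feature: str, user_id: str, rollout_percent: int) -> bool:
    """Deterministically place a user in a 0-99 bucket and enable the
    feature for users whose bucket falls under the rollout percentage.
    Hashing (feature, user) means each feature rolls out to a different
    slice of users, and a given user's answer never flip-flops."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_percent

# Start by releasing to ~1% of users, then ramp up as confidence grows.
beta_users = [u for u in ("alice", "bob", "carol")
              if is_enabled("new-checkout", u, 1)]
```

Because the bucketing is a pure function of the feature and user id, growing `rollout_percent` from 1 to 100 only ever adds users; nobody who already has the feature loses it mid-rollout.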
Furthermore, there is a lot more to learning early than just the knowledge gained. Learning early means you have the opportunity to actually change what you expected to deliver based on things you learned. It means that the final result of all this effort is actually closer to the ideal solution than the one that loaded more features into the release without learning along the way.
Navigating All-or-Nothing Scenarios
Again, the benefit of identifying All-or-Nothing scenarios is that there are particular ways of dealing with them that are often more effective than solutions that would be common with other types of scenarios (like 80/20 scenarios). I’ll try to dig into a few in more detail now.
Try to make them less all-or-nothing
We’ve seen from above how continuous delivery breaks up one all-or-nothing scenario into many, thereby reducing the risk and improving the flow of value.
More generally, it’s best to try to change unforgiving all-or-nothing scenarios into more linear scenarios where possible. That completely theoretical bridge builder could consider building single-lane bridges, or cheaper, less heavy-duty bridges that restrict heavier traffic, or even entirely different river-crossing mechanisms. These are just examples for the sake of example though… it’s probably pretty obvious by now that I know nothing about building bridges!
Software is much more malleable and forgiving than steel and concrete, so we software developers get a much better opportunity to start with much lighter (but shippable) initial passes and to iterate toward the better solution later. Continuous delivery, the avoidance of Big-Bang releases, is an example of this.
Minimize everything except quality
In all-or-nothing scenarios, reducing the scope of the effort is the easiest way to minimize the risk and the ever-increasing waste. It can be tempting sometimes to consider also reducing quality efforts as part of reducing the scope, but I’ve personally found that reducing the quality unilaterally often results in not actually releasing the software as intended. The result is that you leave a lot of easy value on the table, don’t capture the proper learnings early, and set yourself up to be interrupted by your own defects later on as they ultimately need to be addressed. If you don’t really care about the quality of some detail of a feature, it’s almost always better to just cut that detail out.
Realize that time magnifies waste
For the types of work that need to be complete before the core value of an endeavour can be realized, there’s not only a real cost to how long that endeavour remains incomplete; the more complete it becomes without actually being completed, the more it costs. In classical Lean parlance, this is called “inventory”. Like a retail shoe store holding a large unsold shoe inventory in a back storeroom (it turns out that selling a shoe is generally an all-or-nothing scenario too!), effort in an all-or-nothing scenario has costs of delay. A shoe store’s costs of delay include paying rent for the storeroom and risking buying shoes it can’t sell. It’s for this reason that Toyota employs a method called “Just-in-time” production. It’s a cornerstone of Lean manufacturing (which has also had a huge influence on software development!).
In the case of software, larger releases mean more effort is unrealized for longer. This results in less value for the user over time and less learning for the company. If you can imagine a company that only releases once a week, high-value features (or bug fixes) that are completed on Monday systematically wait 4 days before users can get value from them. Any feature worth building has a real cost of delay.
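The cost of that waiting is easy to put rough numbers on. This back-of-the-envelope sketch uses entirely made-up figures, and assumes features finish uniformly throughout a release cycle, so a finished feature sits on the shelf for half a cycle on average:

```python
def average_delay_days(release_cycle_days: float) -> float:
    """Average shelf time for finished work, assuming completions are
    spread uniformly across the release cycle."""
    return release_cycle_days / 2

weekly = average_delay_days(7)  # 3.5 days of average shelf time
daily = average_delay_days(1)   # 0.5 days

# If a feature is worth a (hypothetical) $1,000/day to users, each
# feature loses ~$3,500 of value under weekly releases vs ~$500 daily.
value_per_day = 1000
weekly_loss = weekly * value_per_day
daily_loss = daily * value_per_day
```

The numbers are invented, but the shape of the result isn’t: halving the release cycle halves the average cost of delay, which is the arithmetic behind shipping as often as you can afford to.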
Similarly, more parallel work in progress means there are multiple growing inventories of unfinished work. If a team can instead collaborate to reduce the calendar time of the single most valuable piece of work, this waste is greatly reduced.
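A tiny worked example (with made-up numbers) shows why. Suppose a team has three features, each needing ten days of the team’s full effort. Worked in parallel, nothing ships until day 30; worked one at a time, value starts flowing at day 10:

```python
# Three features, 10 team-days each, 30 team-days of effort either way.
parallel_finish = [30, 30, 30]  # all in progress at once; all done day 30
serial_finish = [10, 20, 30]    # one at a time; done days 10, 20, 30

avg_parallel = sum(parallel_finish) / 3  # 30 days average until value
avg_serial = sum(serial_finish) / 3      # 20 days average until value
```

The total effort is identical, but single-piece flow delivers value a third sooner on average, and the first feature starts earning (and teaching) 20 days earlier.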
Stay on the critical path
In all-or-nothing scenarios, it’s crucial to be well aware of the critical path, the sequence of things necessary to get you to the “all” side of an all-or-nothing scenario. These are the things that will affect calendar time and affect how quickly you learn from users and deliver value. In software engineering, this is often called the steel thread solution. If you find yourself prioritizing non-critical path work over critical path work (which is common if you’re not ever-vigilant), you’d do well to stop and get yourself back on track.
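Critical-path thinking can even be computed. Here’s an illustrative sketch with an entirely made-up task graph: each task has a duration and prerequisites, and the longest chain of dependent work bounds the earliest possible release date, so anything off that chain can wait:

```python
from functools import lru_cache

# Hypothetical tasks: duration in days, and prerequisite tasks.
durations = {"design": 2, "api": 3, "ui": 4, "polish": 2, "release": 1}
prereqs = {"design": [], "api": ["design"], "ui": ["design"],
           "polish": ["ui"], "release": ["api", "polish"]}

@lru_cache(maxsize=None)
def earliest_finish(task: str) -> int:
    """Earliest day a task can finish: it starts when its slowest
    prerequisite finishes, then takes its own duration."""
    start = max((earliest_finish(p) for p in prereqs[task]), default=0)
    return start + durations[task]

# Release can happen no earlier than day 9, via the critical path
# design -> ui -> polish -> release (2 + 4 + 2 + 1).
print(earliest_finish("release"))
```

In this made-up graph, “api” finishes at day 5 but release still can’t happen before day 9; polishing the “api” work early buys nothing, which is exactly the trap of prioritizing non-critical-path work.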
Pick your battles!
If you’re faced with an all-or-nothing scenario, and you know or come to realize that you can’t make the 100% effort required, the most productive thing you can do is to not even try. Like the 99% complete bridge, your efforts will just be pure waste.
Sometimes this can be extremely hard. It’s why the term death march exists for software projects and project management in general. People just have a really hard time admitting when a project cannot succeed, even when they know so intellectually. It can be particularly difficult when the project has already had effort spent on it, due to the sunk cost fallacy, but that doesn’t change the proper course of action.
If you find yourself in a truly unwinnable all-or-nothing scenario, it’s best to forgive yourself for not pursuing that project anymore and to put your efforts/resources somewhere with better prospects. It’s not quitting — it’s focusing.