I like theory too much. But hey, it’s what helps me think about problems. This simple feedback loop has proven its worth to me time and again. It’s inspired by the classic OODA Loop and is really just a simplified version of that concept, applied specifically to running a software product development team.
There are three stages:
- We start with ideas about what our product could be.
- We’re a software company, so what we do every day is turn ideas into code.
- Hopefully, we find out what happens when people use that code, creating data.
Giving rise to three verbs:
- Implement (programming!) where we turn ideas into code the best way possible.
- Measure what happened, as quickly as possible.
- Learn from the data, letting it influence our ideas for the next iteration through the loop.
So far, it’s all obvious. What helped me is the insight from the Theory of Constraints that, in a dynamic system, optimizing the sub-parts tends to sub-optimize the whole. Which is a fancy way of saying: focus on total time through the loop, not on the time of any individual activity.
Optimize speed through the whole loop. This sometimes steps on the favored opinions of functional specialists in any organization.
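The whole-loop argument can be sketched with some hypothetical numbers (the stage durations below are mine, invented for illustration, not from any real team):

```python
# Total cycle time is the sum of the stage times, so the loop is only
# as fast as the whole path through it, not its fastest stage.
# All numbers are hypothetical, for illustration only.

def cycle_time(stages):
    """Total time, in days, for one pass through the implement-measure-learn loop."""
    return sum(stages.values())

# Baseline: a balanced loop.
baseline = {"implement": 5, "measure": 2, "learn": 3}

# "Local optimization": rip out the monitoring code to implement faster,
# but now measuring and learning take far longer because the data is missing.
no_monitoring = {"implement": 3, "measure": 8, "learn": 6}

print(cycle_time(baseline))       # 10 days per iteration
print(cycle_time(no_monitoring))  # 17 days per iteration: faster coding, slower loop
```

The programmer's stage got 40% faster, but each trip around the loop got 70% slower, which is the sub-optimization the Theory of Constraints warns about.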
My personal favorite: “Code without data collection? Faster but…” Ever heard a programmer argue for ripping out all that pesky data monitoring code? It’s slowing down the system, wasting resources, creating ugly code and uglier scaling problems. If we just stopped measuring, we could write code a hell of a lot faster.
If you have worked with a Professional Data Warehouse Expert, you might have seen: “Measure 10,000 things? Comprehensive but…” No human being can learn from 10,000 graphs. It’s overwhelming. To turn data into learning, you have to focus on the few key pieces of data that everyone agrees are important. And you have to get the decision makers and implementers to look at (and believe!) the data on a regular basis.
How about documentation that nobody reads? Reports that go unnoticed? Alerts that go off so often that they get ignored? Split-test experiments that go on forever? All of these are true waste, and they generally happen because somebody is optimizing for their particular part of the puzzle, not for the team as a whole.