December 9, 2013
On Making Mistakes
The full cycle of productive mistake-making: acknowledgment, extraction of information, reconstruction. The mistakes that teach are the ones where the full cycle completes rather than short-circuiting.
5 min read
The Productive Mistake
There are mistakes that teach and mistakes that don't. The difference is not in the nature of the error but in what happens after it.
The mistakes that teach are the ones where the full cycle completes: you make the error, you notice it, you attribute it correctly (to your model or reasoning rather than to external factors), you extract the information it contains, and you update your model accordingly. The next time you encounter the same kind of situation, you are better equipped.
Most mistakes short-circuit this cycle. They are noticed but attributed to circumstances. They are acknowledged abstractly without extracting the specific information they contain. They produce "I'll do better next time" without specifying what doing better actually means.
Attribution
The bottleneck is usually attribution. Mistakes that can be attributed to external causes - bad luck, the failure of others, circumstances beyond your control - are comfortable. They require no update to your model of yourself or the world. They leave your capabilities and beliefs intact.
Mistakes that require internal attribution - "my model was wrong" or "my judgment failed" - are uncomfortable. They require updating something about how you understood the situation, and therefore about what you would do next time.
The external attribution is often partially correct. External factors usually do play a role. But when a mistake is attributed entirely to external factors, the internal portion of the cause goes unexamined, and the mistake teaches nothing.
The skill is holding the internal attribution even when external factors were involved - looking for the part of the cause that was yours, even when it was small.
Extracting Information
Once correctly attributed, a mistake contains a specific piece of information about your model of the world. The task is to identify what that information is.
The question is: what would you have had to believe about the situation for the action to have been correct? If you can answer that question, you have identified the model error. The belief that made the action seem sensible is the belief that needs updating.
This requires more specificity than most post-mortems achieve. "I underestimated how long it would take" is a description of the outcome. "I underestimated because I did not account for the coordination steps between the two teams, which I assumed would be fast" is a description of the model error. The second is what you can actually update.
The Update
The update should be specific to the model error identified. General resolutions - "I'll be more careful next time" - do not update the model. They just add a layer of caution that is likely to erode.
A specific update changes how you will think about a class of situations. If the error was underestimating coordination costs, the update is to add a coordination cost estimate to all projects involving multiple teams - not just to be careful in general.
The test of whether the update is specific enough: can you state what you would do differently in a future situation of the same type? If yes, the update is specific enough to affect behavior. If the answer is "just be more careful," the update is too general to be useful.