July 23, 2011
Anthropomorphism and Other Design Errors
On projecting human qualities onto systems, and the design mistakes that follow - seeing faces in clouds and intentions in algorithms.
6 min read
Seeing Faces Everywhere
Humans have a well-documented tendency to see faces in random patterns. Toast, clouds, electrical outlets, the front of cars. The technical term is pareidolia, and it reflects a deep cognitive bias: we are wired to detect agents in our environment, even when none exist.
This bias extends far beyond visual perception. We attribute intentions to weather systems. We describe markets as "wanting" to go up or being "nervous." We say our computers are "thinking" or "being stubborn." We treat algorithms as if they have preferences.
Each of these attributions is a small anthropomorphism. Individually, they are harmless figures of speech. Collectively, they shape how we design systems, how we interact with technology, and how we make decisions about complex processes.
The Design Consequences
When you think of a system as having human-like qualities, you design for a human-like interaction. This creates a specific category of errors.
Consider the thermostat. A thermostat has one job: maintain a target temperature. It has no concept of urgency, no sense of your discomfort, no desire to please. Yet people routinely set the thermostat to extreme values when they want faster results - cranking it to 90 when they want the room at 72. The implicit assumption is that the thermostat will "try harder" at a higher setting. It will not. It will simply overshoot.
This thermostat error is anthropomorphism in action. The user models the system as an agent that can be motivated by extreme requests. The system is actually a simple on/off control loop: the heater runs at full power whenever the room is below the setpoint, so a larger error signal does not make it heat any faster - it only changes when the heater shuts off.
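A minimal sketch makes the point concrete. The model below is invented for illustration (the starting temperature, heating rate, and step count are arbitrary): a bang-bang controller adds heat at a fixed rate whenever the room is below the setpoint, so raising the setpoint to 90 does not get the room to 72 any sooner.

```python
def simulate(setpoint, start=60.0, heat_rate=0.5, steps=120):
    """Toy bang-bang thermostat: returns (time step at which the room
    first reaches 72 degrees, final temperature)."""
    temp = start
    reached_72_at = None
    for t in range(steps):
        if temp < setpoint:      # heater is either fully on...
            temp += heat_rate    # ...or fully off; no "trying harder"
        if reached_72_at is None and temp >= 72.0:
            reached_72_at = t
    return reached_72_at, temp

t72, final72 = simulate(setpoint=72.0)
t90, final90 = simulate(setpoint=90.0)
# The room crosses 72 at the same time step under either setting;
# the 90 setting just keeps heating past the comfortable target.
```

Both runs reach 72 at the same step; the only difference the extreme setting makes is the overshoot at the end.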
The same pattern appears in software design. Users expect software to "understand" context, to "know" what they mean, to "figure out" ambiguous instructions. Designers who anthropomorphize their own systems make the same mistake from the other side - they assume the system has more interpretive capacity than it does and fail to build adequate constraints.
Intention Attribution in Organizations
The anthropomorphism problem becomes more subtle and more consequential in organizational contexts. People regularly attribute intentions to organizations as if they were individual agents. "The company decided to..." or "Management wants..." or "The market is telling us..."
None of these entities actually decide, want, or tell in the way that individual humans do. An organization is a collection of processes, incentives, and individuals. Its "decisions" emerge from the interaction of these components, not from a unified intention. But treating organizational outputs as intentional choices leads to a specific kind of confusion.
When a company produces a bad outcome, the intention-attribution bias leads people to search for the individual who "decided" to cause it. Sometimes there is such an individual. Often, though, the outcome emerged from a system in which everyone made locally reasonable choices that combined into a globally unreasonable result. No villain required.
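A toy calculation, with invented numbers, shows how this can happen. Suppose each team ships a service that meets a locally reasonable reliability bar, and a request must pass through all of them in series:

```python
# Invented figures for illustration only.
per_service_reliability = 0.99   # each team's locally reasonable bar
services_in_request_path = 20    # the request touches every service

# Reliability of the whole chain: every service must succeed.
system_reliability = per_service_reliability ** services_in_request_path
# Roughly 0.82: about one request in five fails end to end,
# though no individual team did anything unreasonable.
```

Every team cleared a defensible bar; the failure lives in the composition, not in any one choice.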
Searching for the villain in a systemic failure is an anthropomorphism error applied to organizational behavior. It feels satisfying - stories need characters - but it misidentifies the problem and therefore misdirects the solution.
The Temporal Dimension
Anthropomorphism has a temporal component that is particularly relevant to the tempo framework. When we attribute agency to systems, we implicitly expect them to operate on human timescales and with human-like temporal awareness.
We expect markets to "react" to news in the way a person would react - quickly processing the information and adjusting behavior. In reality, market responses emerge from thousands of independent actors operating on different timescales, with different information and different processing speeds.
We expect organizations to "learn" from mistakes in the way a person learns - experiencing the mistake, reflecting on it, and changing behavior. In reality, organizational learning is a slow, lossy process that depends on information flow, incentive structures, and institutional memory - none of which work like human cognition.
The temporal mismatch between human agency and system behavior is a persistent source of frustration. We get impatient with systems because they do not respond on our timescale. We get confused when systems produce outputs that seem "irrational" because we are applying a model of rational agency to a process that has no agent.
Better Models
The alternative to anthropomorphism is not cold mechanistic thinking. It is richer systems thinking. Instead of asking "what does the system want?" ask "what does the system do?" Instead of "why did it decide this?" ask "what processes produced this output?"
These questions lead to better understanding and better design. When you model a thermostat as a control loop, you understand why extreme settings cause overshoot. When you model an organization as a network of incentives, you understand why bad outcomes can emerge from good intentions.
The challenge is that systems thinking is harder than agent thinking. Our brains are optimized for reasoning about agents - their beliefs, desires, and intentions. Reasoning about feedback loops, emergent properties, and distributed processes requires more effort.
But the effort pays off. The design errors that follow from anthropomorphism - the cranked thermostat, the villain search, the impatient market participant - are all expensive in their own ways. Better models produce better outcomes, even when the models require more cognitive work.
The first step is noticing. The next time you catch yourself saying a system "wants" something, pause. Ask what it actually does. The answer will be less satisfying as a story but more useful as a guide to action.