The Unaccountability Machine, by Dan Davies


I read an online excerpt of this book and was immediately intrigued by the idea of an “accountability sink”, which is a mechanism by which “The communication between the decision-maker and the decided-upon has been broken – they have created a handy sink into which negative feedback can be poured without any danger of it affecting anything.” This lack of accountability where people shrug their shoulders and ask “What else could we do?” really frustrates me, as I believe that we should take responsibility for our actions and their consequences. So I bought the book and inhaled it in a few days.

The book starts by explaining the history of “management cybernetics”, the application of systems theory and cybernetics to management. While I think these ideas are interesting, I thought this part dragged because I didn’t really care how Stafford Beer came up with these ideas and tried (mostly unsuccessfully) to apply them. A couple of fun bits of trivia from these chapters: Beer was asked by Chile’s president Salvador Allende to apply these concepts to the Chilean economy, but Allende was deposed by General Augusto Pinochet in a coup before Beer’s system was complete. Later, Beer asked Brian Eno to take over his cybernetics work, but Eno decided to focus on music instead.

Beer’s main idea was “The purpose of a system is what it does”, abbreviated as POSIWID: we pay attention to the impact of a system, not the intentions of the humans involved, while also looking at the “black boxes” (subsystems) that compose the system, each with its own purpose. POSIWID is essential because any changes we make will be ineffective unless we start from the current system reality. Davies outlines the five subsystems that Beer deemed necessary for a healthy organization, and then uses that framework to identify where modern corporations have lost their accountability.

For those interested, here are the five systems of management cybernetics (a toy code sketch follows the list):

  1. System 1, aka operations: The part of the system that does things, the people who are making changes in the real world. This could be factory workers, professors delivering lectures, or engineers writing code, and it also includes support functions like cleaning, maintenance, and tech support.
  2. System 2, aka regulatory: The part of the system that enforces rules for sharing and scheduling resources, such that different parts of the organization are not clashing or conflicting, e.g. making sure each factory worker has their assigned time and task, or that different university classes are not scheduled at the same time. This is the part that puts bounds and limits on what System 1 can do.
  3. System 3, aka optimization or integration: This is what we typically think of as management or administration, coordinating among the various System 1 and System 2 operations to achieve a purpose. Davies observes that System 2 comprises “the management functions which everyone agrees to be necessary, as opposed to those that they complain about”; the functions people complain about are System 3 operators making resource tradeoffs and decisions.
  4. System 4, aka intelligence or policy: “System 3 manages things happening ‘here-and-now’, System 4 is responsible for ‘there-and-then’”. I understood this as System 3 optimizing for the current situation, while System 4 does strategic planning, anticipating what might change and directing System 3 and System 2 to reprioritize accordingly to plan for that potential future. There’s a tradeoff here: if System 4 grows too strong, the system will change too often and never optimize enough for any particular conditions, but if System 3 is too strong, the system will change too little and stagnate or founder when conditions change. Which leads to…
  5. System 5, aka philosophy or identity: I took this as the function that balances short-term System 3 priorities with long-term System 4 priorities in service of a larger purpose or identity. System 5 is ultimately “making the decisions which determine ‘what it does’ and, consequently, its purpose” (POSIWID).
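
To make the division of labor concrete, here is a toy, runnable sketch of the five systems wired into one feedback loop. To be clear, this is my own illustration, not Beer’s formal Viable System Model: the function names, numbers, and the single-weight “identity” are all invented assumptions; only the division of responsibilities follows the list above.

```python
import random

random.seed(0)

def system1_operate(capacity: int) -> int:
    """System 1 (operations): do the work; output scales with allocated capacity."""
    return capacity * random.randint(8, 12)

def system2_regulate(requested: int, limit: int) -> int:
    """System 2 (regulatory): enforce shared-resource limits so units don't clash."""
    return min(requested, limit)

def system3_optimize(demand: int) -> int:
    """System 3 (optimization): allocate capacity for the here-and-now."""
    return max(1, demand // 10)

def system4_forecast(history: list[int]) -> int:
    """System 4 (intelligence): project the there-and-then from the trend."""
    slope = (history[-1] - history[0]) // max(1, len(history) - 1)
    return history[-1] + slope

def system5_balance(now: int, later: int, weight: float = 0.7) -> int:
    """System 5 (identity): arbitrate the System 3 vs System 4 tension.
    Reducing identity to a single weight is a gross simplification."""
    return round(weight * now + (1 - weight) * later)

demand_history = [100, 110, 125]
now_capacity = system3_optimize(demand_history[-1])
future_capacity = system3_optimize(system4_forecast(demand_history))
allocated = system2_regulate(system5_balance(now_capacity, future_capacity), limit=15)
print("widgets produced this cycle:", system1_operate(allocated))
```

The point is the wiring, not the arithmetic: System 5 arbitrates between System 3’s present and System 4’s forecast, System 2 bounds the result, and System 1 does the actual work.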

While I hadn’t been previously familiar with this framework, I like how it maps nicely to a couple of my favorite management books:

  • The Advantage, by Patrick Lencioni, outlines six questions, which boil down to these systems, including mission (System 5), values (constraints on how we do things, aka System 2), strategy (System 4), tactics (System 3), and “What do we do?” (System 1). He also talks about the importance of a cohesive leadership team to create collective accountability and trust, presumably so they can function well as a System 5 unit to make consistent trade-off decisions for the company that balance among the different priorities.
  • In Built to Last, Jim Collins offers the principle that stable organizations must “Preserve the core, but stimulate progress”, again describing the tension between System 3 (optimize for what is working) and System 4 (prepare for the future) that must be managed by System 5. Also, the book had a great quote from David Packard which embodied System 5 thinking: “Profit is not the proper end and aim of management – it is what makes all of the proper ends and aims possible.” (we will return to this idea!)

Unsurprisingly, I think these ideas also map well onto my book. Most people are trapped by “parts” of themselves that were designed for a set of conditions and are continuing to operate as unconscious commitments that keep them from changing, which feel like Systems 1 (“You must do this”) and 2 (“You can’t do that”) running without oversight. In this framework, my book is designed to activate the reader’s System 5, creating a new identity and vision for the self, which then empowers System 4 to make changes to prepare for a different future, or for a current reality that is different from what System 3 is optimized for. In other words, “what got you here won’t get you there”, so you have to let go of previous behaviors and optimizations (particularly System 2 rules and constraints) to get somewhere new.

I also liked the management cybernetics idea that “Information only counts if it’s being delivered in a form in which it can be translated into action, and this means that it needs to arrive quickly enough.” In other words, information doesn’t matter unless it affects decisions, so even if one part of the organization (often System 1) “knows” something, it’s not meaningful information unless it gets to the leaders elsewhere in the organization who might recognize it as a signal to change (aka System 4). One of my pet peeves is people filling themselves up with information that is not relevant to how they act in their lives; they aren’t using it to make decisions. They read every email and Slack message, but that input doesn’t translate into different action, which means that none of it counts as information in the cybernetic sense.

Another cybernetic principle is that “Systems preserve their viability by dealing with problems as much as possible at the same level at which they arrive, but they also need to have communication channels that cross multiple levels of management, to deal with big shocks that require immediate change.” Toyota’s stop cord, where any worker could stop the assembly line by pulling a red handle if they saw a problem, is an example of how that system was designed with that principle in mind. Davies notes that management consultants can also serve as such a communication channel – they sometimes get paid to talk to the front-line workers (System 1) and communicate back what they learn to management (System 3 or 4) in companies where those links are broken.
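
Here is a minimal sketch of that principle, entirely my own illustration (the severity scale, threshold, and names are invented, not from the book): routine problems are absorbed at the level where they arrive, while anything past a threshold bypasses the hierarchy, stop-cord style.

```python
from dataclasses import dataclass

RED_HANDLE_THRESHOLD = 8  # invented severity at which we bypass the hierarchy

@dataclass
class Problem:
    description: str
    severity: int  # 1 (routine) .. 10 (line-stopping)

def handle(problem: Problem) -> str:
    """Deal with problems at the level they arrive, but keep a
    cross-level channel open for big shocks (the Toyota stop cord)."""
    if problem.severity >= RED_HANDLE_THRESHOLD:
        # Cross-level channel: System 1 reaches System 4/5 directly.
        return f"STOP THE LINE, alert leadership: {problem.description}"
    # Normal path: absorbed locally by System 1/2 without escalation.
    return f"handled locally: {problem.description}"

print(handle(Problem("one misaligned part", severity=3)))
print(handle(Problem("defect appearing on every unit", severity=9)))
```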

Counterintuitively, “accountability sinks” are actually a feature of the system, not a bug. There is far too much information and context in the world for the managers in Systems 3, 4 and 5 to absorb, so the system needs to compress the information transmitted by optimizing for “relevant” factors (and dropping the rest), and to compress the directions transmitted back by using decision-making principles and values. Accountability sinks are set up as a protocol for how to respond to certain situations without having to consult “management”. But problems arise when the protocol response no longer makes sense, unless there’s a way to transmit that information back to “management” to update the protocol, like the Toyota stop cord.
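
One way to picture an accountability sink in code, again a sketch of my own with invented situations and responses: the protocol table lets front-line staff respond without consulting management, and the gap list is the return channel that keeps the protocol from going stale.

```python
# An invented illustration of an accountability sink as a response protocol,
# plus the feedback channel that keeps it updatable.

PROTOCOL = {
    "lost_luggage": "file a claim form and offer a voucher",
    "delayed_flight": "rebook on the next available flight",
}

protocol_gaps: list[str] = []  # the return channel back to "management"

def respond(situation: str) -> str:
    if situation in PROTOCOL:
        # The sink at work: a fixed response, no decision-maker to appeal to.
        return PROTOCOL[situation]
    # Without this branch, novel situations get forced into stale rules
    # and the negative feedback is poured into the sink and lost.
    protocol_gaps.append(situation)
    return "escalate: no protocol covers this situation"

print(respond("lost_luggage"))
print(respond("pet_shipped_to_wrong_continent"))
print("protocol updates for management to consider:", protocol_gaps)
```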

How the information is compressed is a decision that biases the whole system:

“Every decision about what to measure is implicitly a decision about what not to measure, effectively deciding what aspects of environmental variety are going to be ignored or attenuated. … Everything in the model will be backed up with data; everything that the model leaves out will be ‘difficult to quantify’, soft and fuzzy. … it’s easy to create an information system that will always give a particular answer, whatever the truth is. And that answer will appear to be an objective fact, even though it’s actually the result of a lot of implicit assumptions.”

This idea resonated with me because when I was Chief of Staff at Google, people used to try to convince me with data because they thought that I was a numbers guy. What they didn’t realize is that because I was a numbers guy, I knew how to make the numbers say anything I wanted by choosing which numbers to include and making certain assumptions, so I didn’t actually trust “the numbers”. People tried to drive a decision by saying “Look what the data says!”, using their analysis as an accountability sink where they didn’t have to take responsibility, but I would pry apart their black boxes to deduce their desired goals from the implicit assumptions they had made (POSIWID).
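
The quoted mechanism is easy to demonstrate. Here’s a deliberately simple sketch (the project numbers and metric choices are all invented) of how deciding what to measure decides the answer before any analysis happens:

```python
# Invented numbers: two "data-driven" verdicts on the same project,
# determined entirely by which metrics the analyst chose to include.

project = {
    "revenue_growth": 15,       # easy to quantify
    "cost_savings": 10,         # easy to quantify
    "employee_turnover": 30,    # "soft and fuzzy", conveniently dropped
    "customer_complaints": 25,  # "soft and fuzzy", conveniently dropped
}

def health_score(data: dict[str, int], benefits: list[str], costs: list[str]) -> int:
    """Sum the included upsides, subtract the included downsides.
    Everything left out of both lists is silently attenuated away."""
    return sum(data[k] for k in benefits) - sum(data[k] for k in costs)

# Analyst A's model includes only the easy-to-quantify upside: +25, ship it!
print(health_score(project, ["revenue_growth", "cost_savings"], []))
# Analyst B also counts the "soft" factors A left out: -30, kill it!
print(health_score(project, ["revenue_growth", "cost_savings"],
                   ["employee_turnover", "customer_complaints"]))
```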

Davies then uses the management cybernetics framework to explore how modern corporations lost their System 5, and even System 4, capacities, which he traces to the rise of modern shareholder capitalism, as most popularly described by Milton Friedman. In one sense, Friedman’s theory of maximizing shareholder value was a brilliant compression; rather than asking managers to balance an increasing set of divergent stakeholders in a VUCA (volatile, uncertain, complex, ambiguous) world, he simplified their decision-making process to focus on share price, trusting that “the market” would integrate all available information into a single number for them to optimize.

Davies observes that “the market” then becomes the ultimate accountability sink. The managers don’t have to take responsibility for their decisions, because it’s what “the market” wanted. Investors in the stock don’t have to take responsibility, because it’s the managers making decisions about the company. There is nobody left to be responsible, as System 5 has been completely abandoned to “the market”, which, in practice, means System 4 is disempowered and System 3 optimizes for the current environment without constraint, leaving the whole system vulnerable to unanticipated changes (Davies analyzes the 2008 Great Recession through this lens).

This tendency was amplified by the rise of private equity and leveraged buyouts, which created shareholder profits by buying poorly operated companies, loading them up with debt, and thus forcing them to optimize for immediate cash flow to pay the interest on that debt. This is an example of System 3 over-optimization, which led to a catastrophic lack of resilience when conditions changed. Even though most companies aren’t owned by private equity, the threat of such a buyout means that most corporate leaders act to maximize short-term profits in response, which leads to a more volatile and unstable system.

This is the result one would expect for a system missing the Systems 4 and 5 needed to anticipate changes. But that lack of forward thinking isn’t a problem in a stable environment, such as the one experienced during what economists call the Great Moderation, from the mid-1980s to 2007. Companies run on the Milton Friedman assumptions took over the ecosystem by working well in that environment, outperforming companies that reserved some capacity to handle volatility. That’s how we ended up with what Davies calls The Unaccountability Machine, where the feedback links between what people wanted and what companies provided were broken by the over-emphasis on shareholder value.

Davies suggests that stable systems “don’t work by defining an objective function and seeking to maximise it.” Instead, they seek to survive by using part of their capacity to innovate and explore other possibilities, rather than solely maximizing for their current environment. They have a functional System 4 keeping an eye out for how things might be changing, and the System 5 capacity to reallocate resources to explore something new. Rather than following a private equity model focused on immediate profit, or a venture capital model where companies are pressed to chase exponential returns on investment (which paradoxically ensures that most of them will fail), an ecosystem designed for stability would allow companies to grow more slowly and retain the capacity to adapt to changing customer needs.

“By taking away the pressure to maximise a single metric [like shareholder value] (and therefore to throw away information that doesn’t relate to it), organisations could apply their decision-making capabilities much more effectively. They could innovate more, design more sustainable solutions and build less adversarial, longer-term relationships with their people.”

Davies closes the book by suggesting that

“what’s really intolerable about unaccountability is the broken feedback link, and that if we can solve the problem of communicating with the system – pay more attention to the ‘red-handle alert’ mechanisms that indicate an unbearable outcome – people might not be so furious about the death of personal responsibility. In general, people in my experience are a lot less angry about everything when they feel like they’re being listened to.”

Combining these last two points, I would love an ecosystem where companies listened to all of their stakeholders (not just the investors and customers), and where stability was valued over maximization. This will be particularly important as we observe the rise of AI, which will serve as the next accountability sink; managers will duck responsibility by deferring to the AI or algorithm, which can only pay attention to a biased, compressed set of factors in making decisions.

I found this book to be a fascinating introduction to management cybernetics that also tells a compelling narrative about how our economy got to be so broken, without having to assume evil intentions or ill will. That’s the beauty of the Unaccountability Machine – nobody is directly at fault for adopting the market mechanisms that led to these outcomes, even though we all suffer from the results (POSIWID). We can do better, but only if we start taking responsibility for our own actions, while holding others accountable for the implicit decisions that reinforce and perpetuate the current system.

2 thoughts on “The Unaccountability Machine, by Dan Davies”

  1. In one of Davies’s recent newsletters, he talked about a situation where “Something was outside the management information set until it started to scale up, and then it created enough of an alarm signal that the banks had to reorganise their information gathering to find out what was causing the problem; then they had to take action and change their operating procedures.” In other words, the system found a way to respond to new elements rather than ignore their feedback as he described in The Unaccountability Machine.

    His description reminded me of Bruno Latour’s work The Politics of Nature, which describes a similar process of how systems (which he calls “Collectives”) evolve. The “Collective” includes what is known and considered. Then something from outside that system arises and sends signals that the system is incomplete as is. The system considers the “petition” and decides whether or not to reconfigure to include that new element (as happened in Davies’s example, where the banks did reorganize to change their operating procedures).

    What I loved about Latour’s description of the process was that it mapped to both human and non-human systems. In politics, the concerns of women or minorities have in times past been excluded from the system (e.g. the original US Constitution), before those groups rose in political power and were eventually included. In non-human systems, he uses the example of quantum mechanics, where physicists were noticing weird experimental results for a couple of decades before they found a way to incorporate them into their theories. More examples and thoughts at https://www.nehrlich.com/blog/2005/05/09/politics-of-nature-part-2/

    This connects to The Unaccountability Machine in that the feedback of something outside the system knocking to be let in is exactly what’s broken in the systems Davies describes. They are missing a non-catastrophic way to consider what’s outside the system, and would benefit from a more structured process like the one Latour describes, so they can evolve more smoothly.
