AI and unaccountability sinks

Dan Davies’ “The Unaccountability Machine: Why Big Systems Make Terrible Decisions – and How The World Lost its Mind” is a brilliant book if you are into system dynamics. It is a book about the industrialisation of decision-making.

Building systems

The Unaccountability Machine is about the biggest problem of modern industrial life – the problem of being overloaded with information, of trying to get a drink from a firehose. When people are overwhelmed by information, they always react in the same way – by building systems and by narrowing the information space. That has implications:

  • As organisations grow, they become more complicated, making it difficult to add enough capacity to manage this complexity.
  • Because they narrow the information space, every decision-making system is, implicitly, a model.
  • It creates a fundamental change in the relationship between decision makers and those affected by their decisions.
  • Every year, more of the decisions that affect our lives are made not by people but by systems.

Data versus humans

You need to think about this. Systems are data-based, not human-based (read “Uncharted”). They run on mathematical models. When that happens, many other things follow: accountability takes on a different dimension, the connection between the worker and the product is broken, and the organisation starts to operate independently of humans.

The accountability sink

It creates an accountability sink (the system says “no”) and exposes the fundamental flaws in the dismal science of economics. Economics has a frighteningly simplistic view of the world, in which everything can be reduced to the single goal of shareholder value maximisation – and that view drives policy. We are not simple decision-making machines. We are not logical AI agents.

Trust

It shows you why you should not trust (big) organisations. Read:

Systems don’t have motivations, so they don’t have hidden motivations. If the system consistently produces a particular outcome, then that’s its purpose. But on the other hand, systems don’t make mistakes. Just as it’s impossible to get lost if you don’t know where you’re going, a decision-making system does what it does and then either lives with the consequences or dies of them. These organisations are set up to deliver lots of decisions, quickly, cheaply and with reasonable quality – on average. But that’s the problem; you can’t be fair to an average.

Skin in the game

Everything becomes more complicated, and the only way to deal with this is to start weakening the failsafe concepts of accountability that protect us from bad decisions and bad people. And so, unless we find a new way to make decisions at scale, we’re in trouble. The decline in individual accountability for unpopular decisions is not – or not only – a form of moral decline on the part of our rulers. Nearly all the commands and constraints which afflict the modern individual, the decisions which used to be made by identifiable rulers and bosses, are now the result of systems and processes. Read Nassim Nicholas Taleb’s “Skin in the Game”.

Criminogenic organisation

It creates the accountability shield, criminogenic organisations and self-organising control fraud. The 2008 banking crisis is a prime example. Investigations into some of the biggest scandals of the financial crisis found no paper trail between the low-level crooks and the bosses; in most cases, there was actually no connection. There are many more examples. The recipe combines unrealistic profitability targets with underinvestment in legal departments and compliance systems.

Polycrisis

That creates another crisis: the crisis of trust in the system. The author calls it a polycrisis – the loss of control by the previously existing hierarchy. Bad things happen when nothing seems to be anybody’s fault, particularly when there is an incompetent technocratic consensus and the relationship between experts, decision-makers and the general public becomes completely dysfunctional.

The rule book

The rule book for an accountability sink (see the sketch after this list):

  • Delegate the decision to a rule book.
  • Remove the human from the process, severing the connection that’s needed for the concept of accountability to make sense.
  • Allow no appeal of the decision to a higher level of management.
  • Prevent feedback from the person affected by the decision from influencing the operation of the system.
  • Accept no exceptions.
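
A minimal sketch of how such a sink might look in code (my own illustration, with invented rule names – not anything from the book):

```python
# Hypothetical rule-book decision process built as an accountability sink:
# the rule book decides, there is no appeal path, feedback changes nothing,
# and there are no exceptions.
RULE_BOOK = {"missed_payment": "deny", "thin_credit_file": "deny"}

def decide(application: dict) -> str:
    for flag, outcome in RULE_BOOK.items():
        if application.get(flag):
            return outcome            # a rule decides, not a person
    return "approve"

def appeal(decision: str) -> str:
    return decision                   # no higher level of management to reach

def register_feedback(complaint: str) -> None:
    pass                              # feedback never alters the system

print(appeal(decide({"thin_credit_file": True})))  # "deny" – and nobody to blame
```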

Examples

The book covers examples such as the shredded squirrels at Schiphol (you read that right), academia, insurance, politics, EU regulations, the euro, the IMF, France, Boeing, Milton Friedman, GE, central banks, and more.

Impersonal and abstract

If you trace back many important decisions of the last few decades, you will regularly come up against the uncomfortable sensation that the unacknowledged legislators are relatively junior civil servants who put placeholder numbers in spreadsheets. That’s one big reason why we have riots, Brexit, Trump, Bolsonaro, Le Pen, Farage, Wilders, etc.  It matters a lot to populations all over the world whether the policies they have to suffer under are being imposed on them by people and institutions they understand and feel they can communicate with, or by impersonal and abstract entities that have to be experienced as forces of nature.

A Rubik’s Cube has more than 43 quintillion possible states

As I wrote in the first paragraph, this book is full of system dynamics. Even moderately complex systems have so many working parts that they have to be understood as a whole or not at all. The trouble with complex systems is that combinations of things tend to multiply together rather than adding up, so the number of possible states gets out of control very quickly. A Rubik’s Cube has more than 43 quintillion possible states; clearly, a brain or an organisation has far more.
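
To see why “multiply rather than add” runs away so fast, here is a minimal sketch in Python. The cube calculation is the standard count of reachable positions; the n-versus-k contrast is my own illustration:

```python
from math import factorial

# Rubik's Cube: corner and edge arrangements and orientations multiply;
# dividing by 2 removes permutation parities no legal move can reach.
states = (factorial(8) * 3**7 * factorial(12) * 2**11) // 2
print(f"{states:,}")  # 43,252,003,274,489,856,000 – over 43 quintillion

# Additive versus multiplicative growth: k independent components with
# n states each have n**k joint states, not n*k.
n, k = 10, 12
print(n * k)   # 120 – the additive intuition
print(n ** k)  # 1,000,000,000,000 – the combinatorial reality
```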

Black boxes

Complex systems are like black boxes. In a complex system, it’s likely to be pointless – or even dangerous – to try to understand its inner workings and use that understanding to manipulate a precise outcome. A genuinely complex system is one in which you cannot hope to get full or perfect information about the internal structure, and cannot have any acceptable degree of confidence that the bits of information you don’t have can be safely ignored. The idea that systems can have properties that differ from the sums of their components – properties that can’t be deduced from those components – is a dangerous one.

Conspiracy versus cock-up 

The only sensible thing to say about a lot of important events is that the actions which led to them were carried out by organisations rather than individuals. They are black boxes; their purpose is just what they do. An organisation does things, and it systematically does some things rather than others. But that’s as far as it goes. Systems don’t make mistakes – if they do something, that’s their purpose. There’s just a network of cause and effect. We might think they’re conspiring, but they’re working within structures that made the outcome inevitable.

POSIWID

The purpose of a system is what it does (POSIWID). The book works through factors such as vibes, purpose, power, control, hierarchy, autonomy, delegation, bargaining, values, feedback, variety, strong and weak signals, exceptions, regulation, systems 1 to 5, balance, blind spots, boundaries, translation, transduction, communication and, most importantly, information.

Information

When you study data, you understand that “information” is itself a tricky concept, and that it can’t necessarily be treated like any other commodity. It can be tacit, implicit, explicit or emotional; part of a story, a fact, a belief, an interpretation, and more. You should read “Data, Strategy, Culture & Power: Win with Data-Centric AI by making human nature work for you”.

Markets as computing fabric

Markets as computing fabric really got me going. I am sure you understand why this is relevant to AI. If the economy is an information-processing system, does that mean that every corporation is an artificial intelligence? The science fiction writer Charlie Stross, for example, described corporations as ‘very old, very slow AIs’ in 2017. The basis of the idea is that ‘an algorithm’ is a mathematical concept, meant to be neutral with respect to the system that implements it. If the process which produces the output is inscrutable, then the output has to be regarded as coming from the system. After all, whether or not there is any intelligence present in a system, real or artificial, there are definitely decisions being made.

Corporations are decision-making systems

Corporations have homeostatic forces that aim to maintain their equilibrium, and higher-order decision-making systems that enable them to reorganise themselves in response to shocks beyond the scope of their original design. They are unable to respond to signals from long-term planning because the short-term planning function must operate within the constraints of the financial market disciplinary system.
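
Homeostasis is easy to make concrete. A minimal sketch (my own illustration, not from the book): a negative feedback loop that nudges some organisational quantity back toward its set point after a shock:

```python
# Negative feedback: correct a fraction of the deviation each step.
def homeostat(state: float, set_point: float, gain: float = 0.5) -> float:
    error = set_point - state      # deviation from equilibrium
    return state + gain * error   # proportional correction

state = 100.0   # e.g. staffing level or cash buffer
state -= 30.0   # an external shock
for step in range(8):
    state = homeostat(state, set_point=100.0)
    print(f"step {step}: {state:.1f}")  # converges back toward 100
```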

Clueless

Working inside a corporation (or any large organisation) is the quickest way to realise that you have only a partial understanding of how it works. If your workplace is a small or medium-sized company with no conflicts of interest among its owners, you might be able to understand or predict its activities; if it’s any larger or more complicated, you’d go crazy trying. It’s not what you’d call an attractive view of the world – most of us are just cogs in a machine, working on what’s in front of us while the big picture is determined elsewhere. It’s particularly unattractive if you’re a CEO or head of government, and you feel like you ought to be making the decisions.

The system does what the system does

Think of the investigation into the Grenfell Tower disaster, or into the Bloody Sunday massacre. They’re unsatisfactory because an inquiry is partly an attempt to investigate causation in a system too complicated to understand, and partly an attempt to assign responsibility where it’s hardly applicable. The idea that corporations are frightening alien intelligences has to be treated with scepticism. Still, the idea that merciless and incomprehensible decision-making systems are increasingly ruling our lives is practically mainstream.

AI

Surely the link with AI is not lost on you. AI is an accountability sink. The heart of a modern machine learning system is usually a big matrix where most of the entries are zeroes, and a lot of the science of ‘data science’ lies in the invention of techniques to exploit that fact and carry out your calculations quickly enough to be useful. The result is simplified and complex at the same time, with an urgent problem of ‘explainability’: a sufficiently complicated computerised system will soon produce decisions that are not immediately obvious, given the input data. The artificial intelligence guys also talk about the ‘responsibility gap’, which refers to the problem of assigning liability when, say, a self-driving car kills a pedestrian or when a scoring system turns down every single loan application from a Black neighbourhood. Computer algorithms are now close to becoming black box systems, and we instinctively think that’s problematic – but organisations of all kinds have been working at that level of complexity for centuries.
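
The “big matrix, mostly zeroes” point is easy to demonstrate with SciPy’s sparse formats (the matrix size and sparsity here are my own illustrative choices):

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
dense = rng.random((2_000, 2_000))
dense[dense < 0.999] = 0.0        # make ~99.9% of the entries zero

csr = sparse.csr_matrix(dense)    # store only the non-zero entries
print(dense.nbytes)               # 32,000,000 bytes for the dense array
print(csr.data.nbytes + csr.indices.nbytes + csr.indptr.nbytes)
# A tiny fraction of that – and multiplications skip the zeroes entirely:
y = csr @ rng.random(2_000)       # fast sparse matrix-vector product
```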

The systems

System 1, operations: the part of the organisation involved in making change in the real world.

System 2, coordination: enforcing rules for sharing and scheduling.

System 3, optimisation or integration: the here and now.

System 4, often described as the intelligence function: the there and then.

System 5, philosophy or identity.
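
As a compact restatement, here is the five-system architecture as a toy data structure (my own sketch of Stafford Beer’s Viable System Model, which the book builds on; the field names are mine):

```python
from dataclasses import dataclass

@dataclass
class Subsystem:
    number: int
    role: str
    horizon: str

VIABLE_SYSTEM_MODEL = [
    Subsystem(1, "operations – making change in the real world", "now"),
    Subsystem(2, "coordination – rules for sharing and scheduling", "now"),
    Subsystem(3, "optimisation and integration", "here and now"),
    Subsystem(4, "intelligence – scanning the environment", "there and then"),
    Subsystem(5, "philosophy and identity – balancing 3 and 4", "overall"),
]

for s in VIABLE_SYSTEM_MODEL:
    print(f"System {s.number}: {s.role} ({s.horizon})")
```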

Zombie organisation

Very old, very slow AIs can turn into decrepit organisations. An organisation in this situation is one that has, for one reason or another, stopped paying attention to some kinds of information. It’s only aware of its immediate surroundings – this quarter’s revenue, the current staffing level, things like that – and has lost the ability to make plans. Like the cat in the book’s example, it can continue to survive as long as nothing major changes, but the next time it encounters a shock, it will go into crisis.

Question the information diet of the system

Because AI and systems are data-driven, you need to question what feeds them. That means questioning the economics and the financial systems. We are more than profit and loss, balance sheets, metrics, and so on. Homo economicus does not exist. Information is a lot richer than numbers, and relying on numbers alone signals a complete lack of trust in human judgment.

Economics

Economists tend to: a) make a model of some feature of the economy, stripping away nearly all the complexity; b) make a lot of simplifying assumptions, often questionable in terms of their empirical relevance; c) show that their conclusion follows from their assumptions, which ought to be quite easy if they’ve made the assumptions strong enough; d) act as if their conclusion has now been proved in the real world.

Semantic drift

This ongoing focus on the number is part of what the author calls a semantic drift. The word ‘increase’ was replaced with ‘maximise’, to make the language consistent with that of economics. The shift from profit to value was partly a solution to a problem and partly a reflection of how the economy was changing. The greater the emphasis placed on accounting-based targets set by the CEO, the greater the filtering effect.

You get business hallucinations

The problem here is that unless a lot of effort is expended, it’s easy to create an information system that will always give a particular answer, regardless of the actual truth. And that answer will appear to be an objective fact. An accounting system is an almost perfect accountability sink – even the people responsible for constructing it don’t necessarily understand what they’re doing.
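
A minimal sketch of that failure mode (entirely my own illustration, with invented adjustment rules): an aggregation pipeline whose rules guarantee a reassuring number whatever the underlying signals say:

```python
def quarterly_report(signals: list[float]) -> str:
    # Rule 1: "one-off" losses are excluded from the adjusted figure.
    adjusted = [s for s in signals if s >= 0]
    # Rule 2: missing data is imputed with last year's (good) average.
    if not adjusted:
        adjusted = [4.2]
    # Rule 3: only the adjusted average is reported upward.
    return f"adjusted performance: {sum(adjusted) / len(adjusted):+.1f}"

print(quarterly_report([5.0, -40.0, 3.0]))  # +4.0 – the loss has vanished
print(quarterly_report([-10.0, -20.0]))     # +4.2 – still looks fine
```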

Management consultancy 

Management consultants often act as accountability sinks. Even if they’re not acting as scapegoats, though, management consultants will often work by telling a company something that its employees already know. When things go wrong with management consultancy, it’s more likely to be because the consultants are tackling a new problem and there isn’t anyone in the company who knows the answer. Written down in black and white, it’s pretty easy to see why that won’t work except by pure luck: the reason is invariably a failure to respect the complexity of the problem.

The failure engine

The blind spots of economics and the blind spots of management systems work together to produce a model of the world that can drift away from reality and start producing bad decisions. There’s no self-correcting tendency, because the problem is in the information-processing system itself. Consequently, the non-human decision-making systems of the world have begun to go a little bit mad. That applies to business, politics and policy.

The links are broken

Economics, financialisation, outsourcing, modelling, simplification, and systems have cut the ties with risk and accountability. They have broken the contract with the workers. Not only does this damage trust in the system, it also has enormous health implications: being in control of your life is extremely important as a source of well-being. Read “Dying for a Paycheck”. People are overloaded with information that they can’t process; the world requires more decisions from them than they’re capable of making, and the systems that are meant to shield them from that volatility have stopped doing the job.

We have a problem 

The world isn’t going to stop growing, so it will only get more complex. The populist movements of the 2010s all promised a simpler world; they were, in the words of J. K. Galbraith, taking on the great anxiety of their people and addressing it. They were also promising to restore the broken communication channels – to make voices heard, to force the managerial class to listen. You can’t promise a simpler world – that’s equivalent to claiming to be able to reverse the direction of time. In the future, ‘I blame the system’ is something we will have to get used to saying, and meaning it literally. The ultimate accountability sink.

The solution

  • Businesses ought to be like artists, not paperclip maximisers. An artist doesn’t have a successful career by maximising their art; they do it by repeatedly producing work that they are proud of. That’s what the world could look like if we got rid of the blind spots.
  • By removing the pressure to maximise a single metric (and therefore discarding information that doesn’t relate to it), organisations can apply their decision-making capabilities much more effectively.
  • Do something about private equity and limited liability. 
  • Viability over maximisation. If you remove maximisation pressure, the natural equilibrium of corporate decision-making systems will likely become less hostile to human life.
  • You can’t have the economists in charge, not in the way they currently are.
  • What’s really intolerable about unaccountability is the broken feedback link. Social media, in principle, could play a crucial role in conveying information from the grassroots into the heart of our decision-making systems.

Some solutions from other books

Also read “The Glass Case”. 

Loved it

Loved this book. The big question I am left with is how AI will impact the accountability sink. Having read it, I am less optimistic.


Daily #MindCandy

Subscribe to my (free!) near-daily scenario prompts—designed to spark strategic thinking. Each edition delivers fresh insights, emerging trends, thought-provoking prompts, and must-read business books to keep your mind bubbling and your strategy sharp.
