“Fatal Abstraction: Why the Managerial Class Loses Control of Software” is an indictment of large industries taking shortcuts in software, using examples from companies such as Boeing, Microsoft, Uber, Facebook, and Enron. Making money is the main focus, at a huge cost to people. It is not a pretty picture. A book that is a bit like “The Four”.
The lost promise of the digital age
How the promise of the digital age has turned to ash in front of our eyes. How the rhetoric of salvation and techno-optimism leads not only to petty disappointments but also to society-breaking catastrophes. The potential of software versus what the author calls “managerialism”: command and control, the rise of financialisation, and the dumbing down of business leadership. The lack of deep thinking about the consequences of decisions. A focus only on productivity, profitability, and shareholder returns—in some cases doubling them or more—through “good management” alone. Generalist MBA thinking.
Misunderstanding software
Combined with a complete misunderstanding of what software can (and, more importantly, cannot) do. Software development is invisible, intangible, and unmeasurable until the last possible moment. Once that software is finalised, it can be launched in the blink of an eye. Unlike previous innovations, software can be deployed with stunning speed and at a global scale—which means that its externalities follow close behind, at much the same pace.
Managerialism as the culprit
Fifty years ago, the computing revolution promised to reshape the world. It has done so, just not in the ways we expected. The default is managerialism, which reduces everything about a business to a simplistic financial abstraction and defines good and bad only in terms of impacts on cash flows and profits. Managers regard software in managerial terms only:
- A weapon that can help achieve market dominance
- A potential tool to exploit users
- A universal solution to cut costs and reduce headcount
However, software is an inherently flawed product with its own set of tradeoffs and limitations, many of which cannot be seen until it’s too late.
Software is dumb
The universal laws of physics govern all of these tools and technologies, regardless of whether or not we fully grasp them. Not software. Every time we write a new program, we must also teach it the laws that will govern it and the rules of the system in which it will operate.
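A toy sketch of this point: a simulated falling object obeys gravity only because we encode it by hand. The constant, the time step, and even the existence of the ground below are our assumptions, not anything the program “knows”.

```python
# Illustration only: a simulated ball falls because we wrote the rule,
# not because the software understands physics.

G = 9.81   # metres per second squared -- supplied by us, not "known" by the code
DT = 0.1   # simulation time step in seconds

def drop(height_m, steps):
    """Advance a falling object; it stops at the ground only because we check."""
    velocity = 0.0
    for _ in range(steps):
        velocity += G * DT          # the "law" of gravity we taught it
        height_m -= velocity * DT
        if height_m <= 0:           # the ground exists only because we test for it
            return 0.0
    return height_m

drop(100.0, 5)  # still mid-fall: the program follows our rules blindly
```

Delete the `if height_m <= 0` line and the ball falls forever; the program has no intuition that something has gone wrong.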
Software abhors chaos
The whole point of abstracting the real world into the digital one is to impose order. But chaos cannot be predicted or modelled using math and logic, a computer’s only tools for understanding the world. Chaos is an unsolvable flaw in software, a universal risk that every programmer can only minimise and never eliminate.
Pure logic
Computers are logic itself: they have no emotion, no intuition, no ability to improvise. They will follow every single one of your instructions to the letter, and they are unforgiving. A computer that could rival a human not just in a few abilities (like playing chess) but in all abilities would still act like a brute-force machine, going through each of its lines of code one by one, in sequence, until it found the one suited to its task. Ask it for something buried deep in its codebase, and it might take hours to make a decision that a human might make in milliseconds.
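The brute-force, literal-minded behaviour described above can be sketched as a linear scan. The rulebook here is invented purely for illustration:

```python
# Illustration only: a literal-minded lookup that checks every rule in
# order, exactly as written, with no intuition about where to look first.

rules = [(f"rule-{i}", i) for i in range(1_000_000)]  # a long, arbitrary rulebook

def find(name):
    """Scan every entry one by one until a match is found."""
    for rule_name, value in rules:   # no shortcuts, no guesses
        if rule_name == name:
            return value
    return None                      # nothing matched: the program has no fallback

find("rule-999999")  # examines every single entry before answering
```

A human asked the same question would jump straight to the end of the list; the machine cannot, unless someone explicitly programs that shortcut too.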
Software has no sense
Software has no sense of its limitations or its own powers. It has no grasp of external reality. It cannot tell right from wrong, helpful from hurtful, humane from inhumane. To the extent it understands its purpose, it is only the narrow purpose that managerialism gives it. And no single market mechanism puts sufficient pressure on companies to clean up their own messes before they cause irreversible damage. Despite the powers we ascribe to it, software has no ability to set its own limits on what it will or won’t do. Companies do.
Sell, sell, sell
Software, which is quick to scale and impossible to fully test, can reach global ubiquity even as it slips from the grasp of its creators. In fact, the managers who direct our tech companies almost encourage this loss of control. They dictate corporate strategy from behind their financial abstractions. They don’t care what their products do, only whether they sell.
Bad companies
The book argues that the tech disasters we have endured over the last twenty years all share a common origin: what the author calls managerial software. It has warped the decision-making processes of executives as well as the internal politics of the companies over which they preside. Managerial software has proven too powerful to be constrained by regulation, and too profitable to be reined in by its executive masters. Computers have overwhelmed the guardrails. They have all but eliminated the distance between thought and reality, making it easier for companies to bring a bad concept to market.
Corporate blindness
The dangerous union of corporate blindness and technological power has caused many of the global scandals of the last twenty years. Only in the last few years have people begun to speak out against the excesses of Big Tech, and they all share a common target: the managerial class that currently monopolises the modern tech industry. The warning signs are there: Facebook got rid of its civic integrity team, Microsoft its ethics and society team, and Amazon and Google dismissed several dozen AI ethics researchers as part of broader cost-saving measures.
Money, money, money
The book tells some chilling stories about how management made the 737 MAX unsafe, knowingly ignored the signals, and took fatal shortcuts to save money, prioritising profit over safety and people. Uber did the same with its self-driving car. The book also makes the link between PowerPoint and social media (deceit and dumbing down). I am not sure about that one. PowerPoint is not responsible for NASA’s space shuttle disaster or the Theranos fraud. There is no question, however, that you cannot trust large corporations with your well-being. It is all about money.
Doing it right is too expensive
Managerial companies don’t like building something right before releasing it into the world. It’s much easier on a balance sheet to launch a minimum viable product and then improve it later; you can generate revenue faster and keep upfront development costs relatively low. However, all software has limitations; good software respects those limitations.
Humans
Fortunately, humans are not as rigid in their thinking as computers. We also use bottom-up reasoning to make sense of the world. Rather than strict rulesets that fall apart in the face of contradictory data, bottom-up reasoning operates through heuristics: flexible rules-of-thumb that are likely to explain the situation at hand but that can be discarded or refined if one starts to receive information that doesn’t fit perfectly. This ability to spontaneously modify or generate heuristics comes from the plasticity of our neurons and synapses.
LLMs
In contrast, an LLM is a neural network that has been trained to process language. It has gorged on a vast dataset of text (and sometimes images) so that it can respond to a natural-language user query with a reasonably pertinent response. From the LLM’s perspective, though, it is only responding to its programming in the mechanistic, logical way of all software. LLMs have no mechanism for understanding reality, no way to check whether the content they generate is true or not. Like MCAS on the 737 MAX, the computer vision software on Uber’s ATG prototype, PowerPoint, Facebook, or TikTok, LLMs can only “see” the data they are given and process it in the narrow way their programming dictates.
ChatGPT is not human
ChatGPT’s latest model, for example, is rumoured to have 1.7 trillion parameters: its developers gave its neural network 1.7 trillion weighted connections that decide whether to fire for specific words or phrases, depending on the prompt. If you ask it for a hamburger recipe, the nodes for “beef”, “lettuce”, and “mustard” are all but guaranteed to rank highly, whereas those for “blue”, “January”, or “Aristotle” are not. Generative AI can now produce material that looks true to life, but that is not because it “understands” what it creates. It has simply become very good at learning formulas and conventions.
The danger
Once software reaches a certain complexity, it becomes difficult to understand. Once it reaches a certain scale, it becomes difficult to control. Generative AI may be the most pernicious software ever devised, precisely because it has reached a complexity and scale far beyond any of our previous creations in such a short amount of time.
Regulation
The most obvious response to the perils of technology put out by the managerial corporation is to call for stronger regulation. But regulators lack the tools to understand, let alone prevent, software catastrophes. They also lack the power and resources of the modern managerial corporation. They cannot contend with the millions of dollars that corporations spend on lobbying and campaign donations. And even if they had the resources, they would not know what to do with software. Meanwhile, tech companies are so powerful that they can reframe any attempt to control them as an attack on “free speech”, and can use the language of popular opinion to ward off any threats to their business practices.
New guard rails
New doctors must recite the Hippocratic Oath, new lawyers must swear to uphold the Constitution, and even architects must sign a six-part code of ethics before anyone trusts them to design a building. Not software engineers, however. This cannot be the permanent condition of software development. Creating software demands expertise, yes. But it also requires humility. Until we acknowledge this, our software will continue to make us less tolerant, joyful, and free: a digital straitjacket in which we bind ourselves ever more tightly, even as its embrace suffocates us. Technology was supposed to amplify our common humanity, not suppress it.
Our responsibility as creators
Software may be far more powerful than we are, but it is also far dumber. We must embrace our responsibility as its creators and teach it to safely exist in the world. Until we understand this fundamental truth about what it is like to be a computer, we will never be able to bring our greatest technology under control.