Making sense of model makers. Starting with computers and AI. As computation gets more advanced, it gets more mysterious. For example, if you subtract the number of possible chess moves from the number of possible moves in the Chinese game of go, the remainder is many times larger than the number of atoms in the universe. Deep learning’s algorithms work because they capture better than any human can the complexity, fluidity, and even beauty of a universe in which everything affects everything else, all at once.
The world is super complex
But now that our new tools, especially machine learning and the internet, are bringing home to us the immensity of the data and information around us, we’re beginning to accept that the true complexity of the world far outstrips the laws and models we devise to explain it.
Logic no longer applies
Since the ancient Greeks, we’ve defined ourselves as rational animals who are able to see the logic and order beneath the apparent chaos of the world. Giving up on this traditional self-image of our species is wrenching and painful. Evolution has given us minds tuned for survival and only incidentally for truth. Our claims about what makes our species special—emotion, intuition, creativity—are beginning to sound over-insistent and a bit desperate.
We have no idea. We do not know why things happen. We do not understand cause and effect, nor do we understand ourselves and our behaviour. For example, it’s entirely plausible that the factors affecting people’s preferences are microscopic and fleeting.
Hence the need for principles: they are what we use when we can't handle the fine grains of reality. Principles should be linked to morality. There is a danger in assigning moral decisions to AI systems. This approach skirts around the problems we humans have with applying principles. Behind the engineering design, there are, of course, values: we program autonomous cars to minimise fatalities because we value life. But the AI only has the instructions, not the values or principles. Operationalising values means getting as specific and exact as computers require. They are just machines, and yet not just machines. If we outsource morality to AI unchecked, the vulnerable can be tyrannised by faceless statistical engines that literally do not hear their voices.
Science of time
Taleb refers to the science of time in "Skin in the Game": it explains why we took aspirin (initially in the form of willow bark) for thousands of years before we understood why it works. The butterfly effect belongs to the same story.
The author distinguishes three levels of complexity: Level I, Level II and Level III. Your car is a Level I complex system because you can open up the hood and figure out how it works. A shopping mall owes its existence to Level II complexity: malls weren't feasible before there were cars, yet you could not predict their rise just by examining a car. An ambulance is explicable only as part of a Level III system that exists at the intersection of multiple systems: cars, roads, traffic laws, a health care system that relies on centralised facilities, and more.
Applying level 1 solutions to chaos
We continue to apply Level I solutions to Level III problems such as climate change. Climate change is chaos, and Chaos Theory helps us make sense of it. Machine learning lets us put data to use without always requiring us to understand how it fits together, and the internet lets us directly experience just how unpredictable a complex system can be. When a small effect produces a large change in how a system works, you've got a nonlinear system. Not long after Chaos Theory started taking shape, a related type of phenomenon became an object of study: complex adaptive systems.
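The butterfly-effect sensitivity of a nonlinear system is easy to demonstrate. The sketch below is my illustration, not from the book: it iterates the logistic map, a textbook nonlinear system, from two starting points that differ by one part in a billion. Within a few dozen steps the trajectories bear no resemblance to each other.

```python
def logistic_map(x0, r=4.0, steps=50):
    """Iterate the logistic map x -> r*x*(1-x) and return the trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_map(0.400000000)
b = logistic_map(0.400000001)  # differs by one part in a billion

# Early on the two trajectories are indistinguishable...
print(abs(a[1] - b[1]))
# ...but later they have diverged completely: the tiny initial
# difference roughly doubles each step until it fills the whole range.
print(max(abs(a[i] - b[i]) for i in range(30, 51)))
```

Even knowing the rule exactly, you cannot predict the long-run trajectory without measuring the starting point to impossible precision, which is the point being made about prediction.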
Such complex systems can have emergent effects that can’t be understood just by looking at their constituent parts: no matter how finely you dissect a brain, you won’t find an idea, a pain, or a person. Instead, we are beginning to see that the factors that determine what happens are so complex, so difficult, and so dependent on the finest-grained particularities of situations that to understand them, we have had to turn them into stories far simpler than the phenomena themselves. System dynamics.
We cannot predict
We’re coming to realise that even systems ruled by relatively simple physical laws can be so complex that they are subject to these cascades and to other sorts of causal weirdness. Our enhanced predictive powers are based on our new technology’s ability to grapple with a world so detailed, so densely interconnected, and so variable that its complexity overwhelms our understanding. When the future is so unknowable that we think of it as perpetually behind us, predictions are no more possible than are prayers for societies of atheists or limericks in languages that have no rhymes.
The world is not a clock
Therefore, the story of prediction is also the story of how we have understood how what happens happens. For a while, we thought the world was like a mechanical clock. Regular and incremental. Like a computer. Now we know that spreadsheets, statistics, models and data are no longer enough. When small changes can have giant effects, even when we know the rules, we may not be able to predict the future. To know it, we have to live through it. Anthropology.
A clock (AI) making sense
The promise of machine learning is that there are times when the machine’s inscrutable models may be more accurately predictive than manually constructed, human-intelligible ones. Deep learning systems escape human biases. Deep learning systems do not have to simplify the world to what humans can understand. Deep learning systems typically put their data through artificial neural networks to identify the factors (or “dimensions”) that matter and to discern their interrelationships. Simplification is no longer required to create a useful working model.
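To make "putting data through an artificial neural network" concrete, here is a minimal toy sketch (my illustration, with made-up sizes, not the systems the book describes): each hidden unit computes one learned factor, or "dimension", of the input, and the nonlinearity is what lets such models capture interrelationships that a hand-built, human-intelligible formula would have to spell out.

```python
import random

random.seed(0)

def relu(z):
    """Nonlinearity: without it the network collapses into a linear model."""
    return max(0.0, z)

def layer(inputs, weights, biases):
    # Each output unit is a weighted sum of all inputs plus a bias.
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

def forward(x, w1, b1, w2, b2):
    hidden = [relu(z) for z in layer(x, w1, b1)]  # learned "dimensions"
    return layer(hidden, w2, b2)                  # their interrelationship

# Toy shape: 5 input features -> 16 hidden dimensions -> 1 output.
w1 = [[random.gauss(0, 1) for _ in range(5)] for _ in range(16)]
b1 = [0.0] * 16
w2 = [[random.gauss(0, 1) for _ in range(16)]]
b2 = [0.0]

example = [random.gauss(0, 1) for _ in range(5)]
prediction = forward(example, w1, b1, w2, b2)
print(len(prediction))  # a single predicted value
```

In a real deep learning system the weights are learned from data rather than drawn at random, and there are millions of them across many layers, which is exactly why the resulting model is useful yet inscrutable.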
Everything is connected
As we gasp at what our machines can now do, we are also gasping at the clear proof of what we have long known but often suppressed: our old, oversimplified models were nothing more than the rough guess of a couple of pounds of brains trying to understand a realm in which everything is connected to, and influenced by, everything. When everything affects everything else, and when some of those relationships are complex and nonlinear—that is, tiny changes can dramatically change the course of events—butterflies can be as important as levers.
The only real continuity between our old types of models and our new ones is that both are representations of the world. But one is a representation that we have created based on our understanding, a process that works by reducing the complexity of what it encounters. The other is generated by a machine we have created and into which we have streamed oceans of data about everything we have thought might possibly be worth noticing.
We can adopt strategies of unanticipation. Applying agile. Using open-source. Introducing MVPs. From “anticipate and prepare” to “unanticipate and learn.” From “push” to “pull.” It is hard for even the most diligent of companies to anticipate customer needs because customers don’t know what they want. In a world with so much change, what is the shelf life of a best practice? A version of unlearning.
New models of change
We are so far down the path of this new model of change that we take for granted minimum viable products, open application programming interfaces (APIs), and so much more, even though they fly in the face of tens of thousands of years of anticipating the future and preparing for it.
Interoperability is the ability to use elements designed for one system in another system the designers may never have heard of and in ways that they did not anticipate. A 3.5 mm audio plug is a good example of interoperability because it can be plugged into so many different types of devices from many, many different manufacturers. We've gone with interoperable solutions because interoperability makes systems more efficient, flexible, sustainable, and expandable. When the resources and services designed for one system are interoperable with other systems, unexpected uses and combinations emerge.
Your thermostat can interact with your smartphone, your smartphone can interact with your baby monitor, your baby monitor can send JPGs to your smartwatch, and all of them can tweet at you angrily behind your back. (Be sure to check your toaster’s privacy statement.)
In our new world, interoperability does the job of connecting every object across every distance. The interoperable universe has gotten us more used to small events triggering huge ones: the invention of hashtags turns Twitter into a new type of news medium. A video from a mobile phone triggers weeks of demonstrations, a software program written by a college student ends up connecting billions of people. Small causes can trigger huge events because interoperability enables more pieces to interact with more pieces more easily in a universe that is already ineffably complex. Everything affects everything.
Control is gone
Control is fragile; interoperability is resilient. Control is the narrow path a flashlight shows. Interoperability is the way light illuminates, feeds, warms, and liberates, all depending on what it touches.
So, in an age as chaotic, uncontrollable, and unpredictable as the one we have entered, you'd think strategic focus (making the hard decisions about how to best use limited resources) would be more important than ever. Instead, many organisations are beginning to think about strategy differently, applying scenario planning to break managers out of existing models that assumed the business environment would continue pretty much as it was at the moment.
Developing a transient advantage
A strategy of continuous reconfiguration. Competitive advantage is no longer sustainable and "no longer relevant for more and more companies" because digitalisation, globalisation, and other factors have made the environment far too dynamic. An open-source approach to strategy, like Tesla and Google: letting go. They have decided to embrace the essential unpredictability of the interoperable universe rather than resist it. Tesla and Google are building ecosystems. Read "Rethinking strategy".
It is now quite common—the norm, even—for a business to think of itself as embedded in a messy network of suppliers, customers, partners, and even competitors. Functionally, this shift toward a networked, or permeable, view of business can be characterised as an increase in interoperability.
Two Facebook bots talking to one another in a language they invented. Two AlphaGo programs played each other and came up with what seemed like nonhuman strategies. When a machine learning system goes not from A to B but from A to G or perhaps from A to mauve, we have tick marks but no lines. We have advances but no story. We’ve built a world together in which anything can be connected in any way that one of us imagines. Precisely the reason why we go back to stories. We love long narratives more than ever. Storytelling is entrenching itself just about everywhere we look. Stories are a crucial tool but an inadequate architecture for understanding the future. There’s no harm in telling those stories to ourselves. There’s only harm in thinking they are the whole or highest truth. Read “Transforming the future”.
In an interoperable world in which everything affects everything else, the strategic path forward may be to open as many paths as possible and enable everyone to charge down them all at once, together and apart. Complexity beyond prediction. Quantum.
This future isn’t going to settle down, resolve itself, or yield to simple rules and expectations. We are at the beginning of a new paradox: We can control more of our future than ever, but our means of doing so reveals the world as further beyond our understanding than we’ve let ourselves believe. But awe always opens outward, letting the unthought ground our ideas and the winds wash through our words. One way or another, awe opens more of the world.
PS David Weinberger (the author) asked me to include the link to his website: https://everydaychaosbook.com