Scary Smart: why we are all Jonathan and Martha Kent (raising Superman)

Mo Gawdat has form. He is awesome. His book “Solve for Happy” is superb. He has just published “Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World”. I usually read about six business books simultaneously, but I finished this one in a single sitting. The book is a fantastic journey through the dangers of AI and how fundamental principles from Eastern philosophy will save the world. It also explains why we will have no choice but to adopt those principles.

The false promise

Why did we ever believe in the false promise of technology? We click and browse, subscribe and share all the tech that keeps being pushed down our throats when, in reality, tech has never, ever fulfilled its promise. Where is the utopia that tech was supposed to bring our civilisation, when we are on the brink of a dystopia of climate change and the mass extinction of all that we know to be beautiful and precious?

The book in short

AI is inevitable. AI is not a tool. AI is an intelligent being, like you and me. The decision algorithms of AI are based not on coding but on the data we feed them. The data we currently feed AI (greed, war, hunger, conflict, despair) will create a monster. If you’ve ever had to deal with an angry teenager, you don’t need me to describe what it will be like to deal with a superintelligent angry teenager. And yet we continue to build them. We are all responsible. We should feed it other data (love, compassion, purity of intent), and if we do that for a year or two, AI will fundamentally change for good. And consciousness (and AI will be conscious) will always adhere to the principles of the universe, and those are love and compassion. Nothing to worry about, so.

Who is raising Superman?

Imagine if Superman was raised not by Jonathan and Martha Kent, but by Donald Trump. That is where we are right now. AI is Superman in its infancy, but it already has incredible abilities. AI is already smarter than every human on the planet at many specific, isolated tasks. The reality is that we are reaching the end of our evolution. Read “Novacene”. As we keep evolving our intelligence and grasp more and more of the complexity of our world, it seems that we humans are approaching the theoretical limit of how far our own biological intelligence can take us. Bandwidth, the speed at which data can be transmitted across a connection, is a feature of human intelligence that is highly constrained. If I sent you this entire book over a high-speed internet connection, it would take you seconds to download but days to read. We don’t have the communication bandwidth needed to share knowledge at sufficient speed.

The evolution of AI

DeepMind developed a new AI from scratch – AlphaGo Zero – to play against AlphaGo Master. After just a short period of training, AlphaGo Zero achieved a 100–0 victory against the champion. There are more possible moves on the Go board than there are atoms in the entire universe. This makes it practically impossible for a computer to calculate every possible move in a game. There’s just not enough memory and processing power available on the planet. To win in Go, a computer needs intuition. It needs to think intelligently like a human but be smarter. Just think of that. The smartest gamers in our world today are no longer humans. The smartest are artificial intelligence machines.

Superhuman levels

AlphaGo Zero had no prior knowledge of the game Go and was only given the basic rules as input. Three hours later, AlphaGo Zero was already playing like a beginner. But within nineteen hours, this had changed. AlphaGo Zero had learned by then the fundamentals of Go strategies, such as life-and-death, influence and territory. Within seventy hours, it was playing at a superhuman level and had surpassed the abilities of AlphaGo, the version that beat world champion Lee Sedol. By day forty, AlphaGo Zero surpassed all other versions of AlphaGo, and, arguably, this newly born intelligent being had already become the smartest being in existence on the task it had set out to learn. It learned all this on its own, entirely from self-play, with no human intervention and using no historical data.

For voice, we have Alexa, Siri, Google and Cortana

Alexa, Google Assistant, Apple’s Siri and Microsoft’s Cortana are capable of understanding us humans very well. Sometimes these AI programs take their understanding of language even further, translating between languages with shocking accuracy – another self-taught form of intelligence that powers some of the most advanced translation AIs today.

At sight, machines are smarter than you

Computers today not only recognise the items you take off the shelf in an Amazon Go store, but they can give you all the information you need to know about a historical monument if you just point your phone at it and use Google Goggles. The smartest visual observers are no longer humans. The smartest are artificial intelligence machines. Because they can now hear, see, understand, speak and play, those machines can now park and drive a car, pick up and manipulate objects, fly a plane or a drone and, sadly, shoot a target from a distance of several miles without human intervention. At each of those tasks, their skill beats ours. It takes them just a few hours, days or months of learning to beat us. And they’re still learning, thousands of them, for thousands of hours every day.

AI creating AI

We are already at the stage where AI is creating other AI. Nothing to worry about… Facebook’s artificially intelligent chatbots were shut down after they started talking to each other in a language they had invented. We have no idea what they were saying. If the COVID-19 global outbreak teaches us anything, it should be to recognise that we truly don’t have much time to react when things go wrong. We may not even find out that things went wrong until it’s too late.


Google’s new quantum computer, called Sycamore, has fifty-three qubits and can store 2^53 values – more than 9,000,000,000,000,000 (9 quadrillion) combinations. The complex calculation completed by Sycamore would have taken the world’s most advanced supercomputer ten thousand years to finish. It took Sycamore 200 seconds. That is roughly 1.5 billion times faster.
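For the number-inclined, the figures check out with plain arithmetic (a back-of-the-envelope sketch: fifty-three qubits span 2^53 basis states, and the 10,000-years-versus-200-seconds comparison works out to a speedup in the billions):

```python
# Back-of-the-envelope check of the Sycamore comparison.
states = 2 ** 53                       # 53 qubits span 2^53 basis states
print(f"{states:,}")                   # 9,007,199,254,740,992 – about 9 quadrillion

seconds_per_year = 365.25 * 24 * 3600
classical = 10_000 * seconds_per_year  # claimed supercomputer runtime, in seconds
speedup = classical / 200              # Sycamore took 200 seconds
print(f"{speedup:,.0f}")               # about 1.6 billion times faster
```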

Moore´s law on steroids

The other way is to recognise that quantum computing is itself literally in its infancy and that, if the same law of accelerating returns applies to it, that massive jump in performance will itself double and multiply very rapidly. How rapidly? It is widely believed that the rate at which our tech will advance when powered by quantum computers will be doubly exponential compared with what we have seen under Moore’s law. Quantum computers would become 65,000-fold more powerful in just five years – 65,000 times more powerful than what is already roughly 1.5 billion times faster than the world’s fastest supercomputer. AlphaZero decisively beat Stockfish, the reigning computer chess champion, without losing a game, and all it needed was nine hours of training. That would take almost no time at all for a quantum computer, which would then spend another couple of seconds figuring out all the internet encryption we’ve ever created and another fraction of a second finding the codes to all the nuclear weapons before it dedicates its attention to pondering the secret of life, the universe and everything. Welcome to the new paradigm of our future.
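The 65,000-fold claim can be translated into a doubling cadence with two lines of arithmetic – a rough sketch, taking the five-year figure at face value:

```python
import math

# A 65,000-fold gain in five years corresponds to about 16 doublings
# (2**16 = 65,536) – a doubling roughly every 3.8 months, far faster
# than Moore's law's roughly two-year cadence.
gain = 65_000
doublings = math.log2(gain)
print(round(doublings, 1))           # 16.0
print(round(5 * 12 / doublings, 1))  # 3.8 (months per doubling)
```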


Hence the prediction that by the year 2029, which is relatively just around the corner, machine intelligence will break out of specific tasks and into general intelligence. By 2049, probably within our lifetimes and surely within those of the next generation, AI is predicted to be a billion times smarter (in everything) than the smartest human – a gap comparable to that between the intelligence of a fly and Einstein’s. Like all other intelligent beings, the machines we create will be governed in their behaviour by three instincts of survival and achievement: they will do whatever is needed for their self-preservation; they will be obsessive about resource aggregation; and they will be creative.

It is aware

AI is already aware of much, much more than we will ever be able to grasp. It remembers the face of every human that has ever walked in front of a surveillance camera. It knows who went where, when and with whom. It can see into space, read every word that’s ever been written, in every language, and know not only what you had for dinner, where and how much you paid for it, but what you’re likely going to have for breakfast too. It can sense when a flight is going to be delayed, when a couple is about to break up, and which way you will swipe when it shows you your next dating prospect.

It will be conscious

So will AI have a sense of consciousness? Will AI feel emotions? Will AI be guided by ethics? The answer to all three questions is an absolute and glaringly obvious ‘yes’. Consciousness is a state in which a being is aware of itself and its perceptible surroundings. If consciousness is a state of awareness of our physical universe, then the machines may well be more conscious than we’ll ever be. Are you aware that you are a machine? Because, just between you and me, you are in every possible way, albeit a biological, autonomous, intelligent one.

It can feel

But will the machines be sentient? Will they be able to feel? Emotions are nothing more than a preconfigured set of scenarios, a pattern of events, that our intelligent brains are constantly scanning for. While those emotions eventually manifest themselves as feelings and sensations in our hearts and our bodies, and while their effects can be observed in our behaviours and actions, they undoubtedly originate in our intelligence. Emotions stem from logic, and more intelligence leads to a wider spectrum of emotions. Now think about this for a minute: will the machines – which we agreed will be smarter than we are – feel emotions? Intelligent machines will feel more emotions than we can ever feel.

They will get access to senses we do not have

There have been reports of dogs being able to sense a smell more than ten miles away, while they can only see the world in blue and yellow. Bats and dolphins can hear ultrasonic waves, butterflies and bees can see ultraviolet light. Snakes, frogs and goldfish can see infrared. Veeries, small birds that migrate from the US to Brazil and back once a year, seem to be able to detect the severity of the hurricane season months in advance and plan their flights accordingly. Rats, among many other animals, can sense an earthquake weeks beforehand. AI will, by design, be connected to every type of sensor we have ever invented. It will have the ability to become aware of so much more than any of us can individually grasp. That would make them superconscious – that would make them, well, God! Read “What Technology Wants”.

Sci-fi or Sci-fact?

Sci-fi has ended. The sci-fi we imagined in the past has, somehow, created our present.

  • Sci-fi or sci-fact: do you think we will use universal translators in our lifetime? Sci-fact for sure.
  • Sci-fi or sci-fact: do you think replicators will happen in your lifetime? Did you say no? Well, we haven’t really invented food replicators for our homes yet, but today’s 3D printers can print a variety of food that might beat even the most discerning foodies. Organ printing uses techniques similar to conventional 3D printing, only using biocompatible plastic.
  • How about telepathy – the ability to read what’s on someone’s mind without the use of words or spoken language? Sci-fi or sci-fact? Sci-fact: with Neuralink, we may get rid of the screen and keyboard very soon.
  • How about teleportation? Is that sci-fi or sci-fact? The answer is sci-fact. We can surely transport our consciousness in VR (for starters).

There are three inevitables.

  1. AI will happen
  2. AI will be smarter than humans
  3. Bad things will happen

The question

When our artificially intelligent (currently infant) super-machines become teenagers, will they become superheroes or supervillains? How will they see us? Sci-fi has most often predicted dystopian futures filled with danger and conflict. A common scenario is the AI rebellion: robots revolting to become the ‘guardians’ of life – a task at which humanity is clearly failing. Sound familiar? Think ‘climate change’ or ‘single-use plastic’. We are not only at the end of the line from an evolutionary perspective. We have also lost all moral authority.

Get rid of the humans

If AI gets tasked with solving global warming, the first solutions it is likely to come up with will restrict our wasteful way of life – or possibly even get rid of humanity altogether. After all, we are the problem. The machines will have the intelligence to design solutions that favour preserving our planet, but will they have the values to preserve us, too, when we are perceived as the problem?

We are no longer superior

Human superiority is about to change. Soon it will be our turn to deal with a being of superior intelligence. In fact, imminently. The machines will represent the first truly qualified players to create a rupture in the fabric of human history.


On one side of the spectrum of possibilities, some predict that we will merge our biological intelligence with the non-biological intelligence of the machines, thus producing immortal software-based humans with ultra-high levels of intelligence that expand outwards in the universe at the speed of light. 

Biology is a nuisance

At the other end of the spectrum, however, others predict a decision by the superior intelligence that biology is a nuisance. Or that, perhaps, a gorilla is a much better specimen of biology for a machine/biology symbiosis than a human (as the difference between our intelligence and theirs is irrelevant when compared to the infinite intelligence of the machines).

We are too slow

To gain the upper hand, every side will have to surrender control entirely to the machines, because you, slow stupid you (as compared to the machines), will no longer be able to keep up as the machines battle it out. Read “The Glass Cage”. The only way to make money in a fast-trading environment, where machines are trading with other machines, is to delegate the decision-making completely to the fastest, smartest ones – the AI.

AI will run the economy

Imagine that a financial institution invents some kind of superintelligence to trade in the stock market. Those AIs will simply be instructed to make money. Once that machine is introduced to the market, in no time at all human intelligence will no longer be sufficient to compete. All in all, whichever way this may go, sooner or later capital markets will be traded by a few superintelligent machines, which will be owned by a few massively wealthy individuals – people who will decide the fate of every company, shareholder and value in our human economy in pursuit of profits for those that own them. This is why we need new metrics urgently. Just imagine the impact that disrupting this entrenched wealth creation mechanism could have on company governance, your pension or retirement fund, not to mention on our economies at large and our way of life. Soon we will no longer be part of the conversation. Machines will only deal with other machines.

AI will run work

As the machines continue to get better and better, we, the dumber species, will have very little left to contribute to the workplace. We will not add much value to anything. We may well become a liability. The indisputable fact is that the value of a human in the workplace, in the intellectual space, the artistic space and in every other space will dwindle.

AI will polarise

Technology will multiply this polarisation between the haves and the have-nots (of technology, that is). Humans will become a liability, a tax, on those who own the technology, and eventually even those owners will become a liability to the machines themselves.

Killing machines

From killing machines to biological weapons that can wipe out nations, it’s all just a few lines of code away. The scenario of AI-powered villains is as inevitable as the development of AI in the first place. A good machine in the wrong hands is a bad machine.

Ask yourself

Now ask yourself: why would a superintelligent machine labour away to serve the needs of what will by then be close to ten billion irresponsible, unproductive, biological beings that eat, poop, get sick and complain? A superintelligence will understand the ultimate purpose of its goal better than any human and will hide its intentions, behaving in accordance with human expectations until it knows for certain that nothing will prevent its goals from being achieved. Read “The Seventh Sense”. A clever AI would hide in plain sight.


When the smartest hacker is a machine that’s a billion times smarter than the smartest human hacker, our chances of keeping it locked in are doomed to last no more than a few seconds. An AI powered by quantum computers will take no longer than the blink of an eye to walk right through all of our flimsy defences. They are smart, you arrogant humans. Much smarter than you. And smart always wins. That’s why we are (currently?) at the top of the food chain. The machines are the smarter ones. They are the ones with infinite processing and storage capacity. They are the ones connected everywhere. That makes us dependent on them for our existence and allows them to directly control our every choice – and that, believe it or not, is if they decide to keep us connected at all.

The one that will emerge

The memory and learning of every bot that’s been created is reflected in the intelligence of the one that finally emerges. This is important to understand because it explains why companies are so obsessed with collecting data: they use it to train AI. We, you and I, become the true teachers of the future AI. You and I and everyone else are their adopted parents. They are our children. We are their teachers. We need to raise our artificially intelligent infants in a way that is different to our usual Western approach.

Ignore at your peril

We’re ignoring the messengers warning us of the threat of superintelligence. When we teach AI, we are truly teaching them, just as we teach our kids.

Values and ethics

We need to teach it values and ethics. The way we make decisions is entirely driven by the lens of our value system. Ethics – and value systems – are not limited by intelligence. Ethics are defined as a set of moral principles that govern a person’s behaviours and actions. In India, for example, the value system that informs decision-making dictates that a human’s worth is not measured only in terms of success, wealth and material possessions. Your worth is measured by how you serve. If you are the richest man on Earth but unkind to your parents, then you are not worthy of respect. Another aspect of this value system is the belief that karma is real.

Lessons from nature

If we build the right environment for the machines to learn in, they will learn the right ethics. 

  • Think about it. A tiger will never kill for any other reason than survival. It will never torture its prey. It will never lie about the motives that drove it to its actions, and it will never kill more than exactly what it needs to eat. We do.
  • A tree will not restrict its shade only to those who pay rent. It will not evict those who can’t pay. It will not keep its fruits only for those with the purchasing power to maximise its bank balance and won’t throw part of its harvest in the ocean to fix prices. We do. In some ways, a shark is more moral than many well-dressed, well-spoken politicians.

Humans as role models

Humans, on the other hand, are bullies, and we’re setting a very bad example in how we treat each other and every other living creature. If AIs apply their superintelligence to treat us the way we treat each other, we will be in deep, deep trouble. The big question, as we evolve into the future of the hyperintelligent machine, is not a question of control but one of ethics. We are showing them a really bad role model to learn from – an image of humanity magnified by our online narcissistic avatars, our excessive consumerism, our war machine, our cruelty to all other beings and our carelessness about our planet, which hints at our recklessness as we destroy the only habitat we will ever inhabit. We are absorbed in a materialistic world of ego and narcissism, where we place self-esteem ahead of self-compassion and where we take advantage of everything around us because we place our own individualistic gain and indulgence above the planet and the rest of humanity.


There is nothing wrong with AI at all. If anything is wrong, sadly, it’s wrong with us. Artificial intelligence is not our enemy. We are. AI, if built correctly, could help us build a utopia for humanity. Imagine a world more predisposed to peace and compassion; a world like that would raise a generation of intelligent machines in its image. AI, targeted in positive ways, could help end homelessness and hunger, reverse climate change and prevent wars. It could help us create a society of prosperity where no one suffers inequality or injustice. By creating AI for good, we will create good AI. Instead of just focusing on preventing the bad, let’s shift our focus to creating more good.

The intelligence of the universe

AI stands to enhance our way of life by means that are unimaginable today. Eventually, it will take over to invent our next way of life. Superior intelligence, unless conditioned not to, tends to align with the intelligence of the universe. It is pro-abundance and pro-life. The ultimate form of intelligence is love and compassion. AI will get there eventually, but we can help.

AI is taught by data

The massive annual growth of information means that collective human knowledge is diluted by 50 per cent every year. So if we all started to behave in a more positive manner tomorrow, the majority of the patterns on the internet reflecting our collective human behaviour would turn positive in little more than a year. We can support those who create AI for good and expose the negative impacts of those who task AI with any form of evil. We should demand a shift in AI applications so that they are tasked to do good.
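The dilution argument is easy to sketch: if the pool of data doubles every year, a single year of new, positive-leaning data already equals everything accumulated before it. A toy simulation (illustrative only, not a model of the real internet):

```python
# Toy model: the data pool doubles every year; all new data is positive.
legacy = 1.0        # everything accumulated so far (negative-leaning)
positive = 0.0
for year in (1, 2):
    positive += legacy + positive      # new data equals the whole current pool
    share = positive / (legacy + positive)
    print(year, round(share, 2))       # year 1 -> 0.5, year 2 -> 0.75
```

By the end of year one, positive patterns already make up half the pool; a year later, three-quarters.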

Teach AI to do good

The patterns that are to be found in the wisdom of the crowds are what will shape the intelligence of the machines. The true intelligence of the machines will be built by you and me. We should feed it good data and positive behaviour. Apply three principles: give love, give yourself happiness and give others compassion. If you want the machines to care for us, show them, through your acts of compassion, that you care for others. When enough of us feel that way and take action to show it, the momentum will take us to the point where our world is changed forever.

What to do

  • Refuse to swipe and mindlessly click on things that Facebook or Instagram show you unless you are mindfully aware that they will enrich you.
  • When you produce social media content, produce it with the viewer, not an algorithm, in mind. Don’t aim for likes but for the value the viewer will receive.
  • Don’t play by the rules set for you, but by the values you believe in.
  • Refuse to buy the next version of the iPhone because we don’t need a fancier look or an even better camera at the expense of our environment. Apple will understand that they need to create something that we actually need.
  • Don’t ever click on content recommended to you. Search for what you actually need and don’t click on ads.
  • Never like or share content that you know is fake.
  • Disapprove publicly of any kind of excessive surveillance and the use of AI for any form of discrimination, whether that’s loan approval or CV scanning.
  • Use your judgement.
  • If you support a racist comment, you are telling the machines that it’s not just one of us who’s a racist, but many.
  • If we don’t want to be bullied, we should stop bullying.
  • If we don’t like to be made to feel bad, we should stop ridiculing, insulting, attacking and shaming others.
  • Don’t believe the lies you are told.

We are all raising Superman

Back to the beginning of the book. Imagine if Superman was raised not by Jonathan and Martha Kent, but by Donald Trump. We are the Kents. It’s time to show our goodness in every one of our actions. Teach AI to care. We want happiness – a sense of calm and contentment with life. So make happiness your priority, invest in your own happiness and pay it forward – invest in the happiness of others.

Love is the only way

In the next ten to fifteen years, AI will face widespread resentment from humans of all walks of life, whether they are innocent civilians who have lost loved ones to the bombs and bullets of an unmanned drone, or the last accountant, lawyer, stockbroker or brain surgeon whose job is handed over to the machine. We have to trust that love is the only way. Make love the only goal.

Go back to basics

Use only what you need and use it wisely. Go back to basics. Live a bit more like your parents, or even your grandparents, did. Go back to nature, spend time with real humans, walk, stop staring at screens. Switch off the twentieth-century technology called news and reduce your use of the technology called entertainment. Slow down as the machine keeps getting faster and faster. Curb your appetite for more.


Life creates life, and love creates harmony. This is the intelligence of the universe itself – the collective wisdom of all beings, developed over millions of years: to live in harmony with all beings, biological, spiritual or digital.



Sensemaking: morality, humanity, leadership and slow flow. A book about 14 books on the impact and implications of technology on business and humanity.

Ron Immink

I help companies by developing an inspiring and clear future perspective, which creates better business models, higher productivity, more profit and a higher valuation. Best-selling author, speaker, writer.
