Narrow artificial intelligence for your business
AI is capturing the imagination of everyone I talk to, and most of us are concerned. It does not matter where on the spectrum you sit (from Skynet to Utopia): it will impact your business model. You can start by considering how narrow AI can improve your business model. It is broad AI that is a bit more existential.
Life 3.0
Max Tegmark’s “Life 3.0: Being Human in the Age of Artificial Intelligence” is a “What Technology Wants” type of book. It raises interesting questions about how AI will evolve and what that might mean, and about what defines intelligence, consciousness and life itself.
Intelligence
In his view, intelligence is the ability to accomplish complex goals. In the end, it is about the definition of those goals: doing good or doing evil. We can cure cancer, but we can also apply AI to phishing and to hacking your bank account. Picture Don Watson as the head of the five families. Imagine the Cali Cartel with an AI.
Genie in the lamp
The definition of goals is like the genie in the lamp with the three wishes: be careful what you ask or wish for, because everything is possible.
Robojudges
For example, consider robojudges (I am speaking at the LondonLaw conference in October, so this is of particular interest to me). Robojudges could in principle ensure that, for the first time in history, everyone becomes truly equal under the law: they could be programmed to all be identical and to treat everyone equally, transparently applying the law in a truly unbiased fashion.
Minority report
Moreover, recent studies have shown that if you train a deep learning system on massive amounts of prisoner data, it can predict who is likely to return to crime (and should, therefore, be denied parole) better than human judges can.
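The book stays at the level of the claim, but as a rough idea of what such a recidivism predictor involves, here is a minimal sketch in Python. The data, feature names and numbers are entirely invented for illustration; real risk-assessment systems are far more involved.

```python
# Toy sketch only: synthetic "prisoner data" and a simple classifier that
# predicts reoffending. Nothing here reflects real data or a real system.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000

# Hypothetical features: age at release, number of prior offences, months served.
X = np.column_stack([
    rng.integers(18, 70, n),   # age at release
    rng.poisson(2, n),         # prior offences
    rng.integers(1, 120, n),   # months served
])
# Synthetic label: "returned to crime within two years" (made up for the demo).
y = (0.04 * X[:, 1] - 0.01 * (X[:, 0] - 18) + rng.normal(0, 0.2, n)) > 0

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Held-out accuracy:", model.score(X_test, y_test))
```

The catch, of course, is that whatever bias sits in the historical data is learned and reproduced, which is exactly why truly unbiased robojudges are harder than they sound.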
AI lie detector
Machine-learning techniques have gotten better at analysing brain data from fMRI scanners to determine what a person is thinking about and, in particular, whether they’re telling the truth or lying. If AI-assisted brain scanning technology became commonplace in courtrooms, the currently tedious process of establishing the facts of a case could be dramatically simplified and expedited, enabling faster trials and fairer judgments.
Jailing an AI
But how do you judge the actions of Don Watson? Can you send an AI to jail? There are lots of questions.
Alibi
Once AI becomes able to generate entirely realistic fake videos of you committing crimes, will you vote for a system where the government tracks everyone’s whereabouts at all times and can provide you with an ironclad alibi if needed?
Dark or light
It can get dark very quickly: AI-driven killer drones, AI terrorism, AI cyber-attacks. But also AI curing cancer, AI in space, AI manufacturing, AI communication, AI managing energy, AI running transportation, and so on. AI for good.
We need to think these things through
- Who will insure the self-driving car? If machines such as cars are allowed to hold their own insurance policies, should they also be able to own money and property? If so, there’s nothing legally stopping smart computers from making money on the stock market and using it to buy online services.
- If AI systems eventually get better than humans at investing (which they already are in some domains), this could lead to a situation where most of our economy is owned and controlled by machines, which means machines would effectively be running our GDP. Why not let AI run our Gross Happiness Index? It reminds me of the scene in The Lord of the Rings where the elven queen is offered the Ring and contemplates what she would become: a creature of terrible beauty.
- If you’re OK with granting machines the rights to own property, then how about giving them the right to vote?
- Do you want there to be super-intelligence? Do you want humans to still exist, be replaced, cyborgised and/or uploaded/simulated? Do you want humans or machines in control? Do you want AIs to be conscious or not? Do you want to maximise positive experiences, minimise suffering or leave this to sort itself out? Do you want life spreading into the cosmos? Do you want a civilisation striving toward a higher purpose that you sympathise with, or are you OK with future life forms that appear content even if you view their goals as pointlessly banal?
- Do you want zombie AI (without emotion), benevolent dictators, an AI gatekeeper (to stop too many AIs roaming the world), a Zookeeper (minding people), an AI god, Dyson spheres, black hole power plants, Quasars, Sphalerons or a universal quantum computer?
The genie is out of the bottle
Because that is where it is going. The genie is out of the bottle. Figuring out how to align the goals of a superintelligent AI with our goals is not just crucial, but also very hard.
Be careful what you wish for
Humans work out what other people really mean so effortlessly that it’s easy to forget how hard the task is for a computer, and how easy it is to misunderstand. If you ask a future self-driving car to take you to the airport as fast as possible and it takes you literally, you’ll get there chased by helicopters and covered in vomit. The same theme recurs in many famous stories. In the ancient Greek legend, King Midas asked that everything he touched turn to gold, but was disappointed when this prevented him from eating and even more so when he inadvertently turned his daughter to gold. All these examples show that to figure out what people really want, you can’t merely go by what they say. You also need a detailed model of the world, including the many shared preferences that we tend to leave unstated because we consider them obvious, such as that we don’t like vomiting or eating gold.
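To make the airport example concrete, here is a tiny invented sketch of a “literal genie” planner: one objective scores only the stated wish (“as fast as possible”), while the other also scores the preferences we never bother to state. The routes and numbers are made up.

```python
# Toy sketch of goal misspecification: the literal wish versus the intended one.
routes = [
    # (description, minutes, comfort 0-1, legality 0-1) -- all values invented
    ("motorway, speed limits ignored", 14, 0.1, 0.0),
    ("motorway, normal driving",       22, 0.9, 1.0),
    ("back roads",                     35, 0.8, 1.0),
]

def literal_objective(route):
    """'As fast as possible', taken literally: only time counts."""
    _, minutes, _, _ = route
    return -minutes

def intended_objective(route):
    """The same wish plus the preferences we leave unstated."""
    _, minutes, comfort, legality = route
    return -minutes + 30 * comfort + 100 * legality

print("Literal genie picks:", max(routes, key=literal_objective)[0])
print("What you meant:     ", max(routes, key=intended_objective)[0])
```

The literal objective happily chooses the helicopter-chase route; only once the unstated preferences are scored does the planner pick the ride you actually wanted.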
Exponential AI
Here is the kicker: the time window during which you can load your goals into an AI may be quite short, the brief period between when it’s too dumb to understand you and too smart to let you change them.
We need a moral compass for AI
We face the task of defining humanity’s moral compass, or philosophy, with a deadline, before the AI decides for us. Maybe you should start with your own moral compass and that of your company. Let’s start a bottom-up movement to ensure we get the AI we deserve.