Is augmented intelligence the future of artificial intelligence? That is Luc Julia’s view: he invites us to forget the hackneyed ‘artificial intelligence’ moniker, whose origins he traces back to the 17th century. This was the gist of his speech at the 5th edition of the MaddyKeynote (an event organised by French Web news website Maddyness), held at the CENTQUATRE in Paris on 30 and 31 January. Visionary Marketing was there and didn’t miss a thing.
Luc Julia is the co-creator of Siri, vice-president of innovation at Samsung Electronics, and author of the book “There Is No Such Thing as Artificial Intelligence”.
Artificial intelligence goes back to the 17th century, augmented intelligence is here and now
During this conference, Luc Julia invited us to revisit the history of Artificial Intelligence even though, in his view, there is no such thing. He shared his take on the AI we have been hearing about for the last 4 or 5 years: AI as it’s debated in the media, AI that will kill us, that will vaporise* us, the Hollywood and Robocop version of AI.
*An open and obvious reference to a famous Orwellian opus.
Luc Julia traces artificial intelligence back to 1642, when Pascal invented the first arithmetic machine (dubbed the Pascaline). It could add and subtract in just 3 seconds and made no mistakes.
But officially, artificial intelligence was born in 1956. Researchers believed they had succeeded in modelling a neuron with mathematical functions. Having modelled a neuron, they thought they had created an artificial neural network, thereby mimicking the human brain in some form or fashion, hence, supposedly, intelligence.
They named this artificial intelligence, and that was a bad idea. They also tackled something a tad more complex: natural language. Their failure resulted in the first AI winter. When people realised they had been led to believe things that weren’t true, research budgets dried up.
From Artificial Intelligence to Augmented Intelligence
For Luc Julia, this situation could very well happen again today. It is quite possible that the public will tire of hearing AI stories, and funding might vanish again. So we should be very careful about how we handle AI and what we say about it.
Then came the expert systems, based on mathematics, which resulted in 1997 in the machine that beat Kasparov at chess. For Luc Julia, what was demonstrated back then was not that complex: chess is a finite game with fixed rules, not really something one could call ‘intelligence’.
From Machine Learning to Deep Learning
In the 1980s, work resumed on Machine Learning and neural networks. What was missing was data, which arrived in the mid-1990s thanks to the Internet, enabling what is now called Deep Learning.
(Read on this subject “Luc Julia’s deep learning under the microscope – with AI Paris 2019“).
In particular, there were many pictures of cats on the Internet, annotated by their owners.
This large annotated database allowed us to verify the neural network methods, the statistical methods that are Machine Learning and Deep Learning.
“Yet, to achieve a system capable of recognizing cats 98% of the time, about 100,000 images of cats were required … whereas a human being needs no more than two images of cats to recognize them, even at night!” Julia said.
“The equivalent of 440 kWh was required to power the machine that beat the Go player”
Another example shows that this kind of ‘intelligence’ cannot compare with what our brains are capable of: in 2016, a machine beat the world champion of Go.
Go is a more complicated game than chess, and this time 2,000 computers were used to play it. A small data centre consumed 440 kWh to play that one game, while a human brain runs on just 20 watts, and can still perform other tasks at the same time.
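To put the two figures quoted above on the same scale, here is a back-of-the-envelope calculation (a sketch only; the 440 kWh and 20 W figures come from the talk, everything else is simple arithmetic):

```python
# Back-of-the-envelope comparison of the energy figures quoted above.
MACHINE_ENERGY_KWH = 440   # energy the Go-playing data centre used for one game
BRAIN_POWER_W = 20         # rough power draw of a human brain

machine_energy_wh = MACHINE_ENERGY_KWH * 1000   # 440,000 Wh

# How long could a 20 W brain run on the energy the machine burned in one game?
brain_hours = machine_energy_wh / BRAIN_POWER_W
brain_years = brain_hours / (24 * 365)

print(f"{brain_hours:,.0f} hours, i.e. about {brain_years:.1f} years")
# → 22,000 hours, i.e. about 2.5 years
```

In other words, the energy spent on a single game of Go could power a human brain for roughly two and a half years.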
It’s important to think about what we want to do with machines, because there won’t be enough energy to power them all.
Also in 2016, Microsoft tried to make a chatbot on Twitter to promote its products and dialogue with customers. Tay, that chatbot, instantly became the most sexist and racist bot in the world.
The first problem was one of adaptation to its target audience: on Twitter, after 2 or 3 interactions, insults started to fly, and the system adapted to them all too well. Correcting that issue was rather straightforward: lowering the instability factor a bit was enough to control how the system behaved and thus regulate its message.
The other bug was a data bias issue. Unlike pictures of cats, good conversational data cannot easily be found on the Internet. Developers working on natural language processing have, however, had access since the 1950s to a database that brings together millions of transcribed conversations between Americans from all states and the call centres of their washing-machine vendors.
Microsoft had to pull the plug on its chatbot after just 16 hours, for it had grown utterly racist and sexist.
Similarly, Apple credit cards have recently discriminated against women, giving them 50% less credit than men with the same income and profile.
It’s a data bias issue: the system was working from biased premises. Luc Julia believes that in AI, everything can be explained. There is no such thing as the “black box” people often refer to. Everything can be explained by those who created the algorithms and those who chose the data.
It is possible to make mistakes in algorithms, in the choice of data, but one should always be able to explain what is happening. There is no such thing as mathematical black boxes. There can be practical issues, linked to the number of calculations that are made on millions and millions of data, and it is possible to lose track of the data, as with Microsoft’s chatbot.
One day it may take an AI to explain how another AI behaved, because the latter will have performed its calculations so quickly that a human would no longer be able to retrace its steps and understand what happened.
Augmented Intelligence: “The level five self-driving car will never exist”
Last but not least, Luc Julia set out to demonstrate that there is no such thing as Artificial Intelligence.
At CES 2018, self-driving cars were said to be available within 5 years. A year later, at CES 2019, one heard that they would be available within 15 years. Lastly, at CES 2020, self-driving cars were deemed to be available in 2050.
Take some of the worst congested areas of London such as Holland Road, Kensington or Wood Lane in White City, or the much-dreaded Place de l’Etoile around the Arc de Triomphe in Paris during peak traffic hours. Cars will not move. No need to read the highway code, it just doesn’t apply there. It’s every man for himself, Julia said.
Another example proposed by Luc Julia shows that the autonomous car will never be able to adapt to every situation the way a human being does. Among the training sessions that Waymo, the Alphabet subsidiary in charge of self-driving cars, publishes on YouTube, one video shows a car that repeatedly stops in the middle of a street for no apparent reason, then drives off again. The explanation came from a passer-by who was carrying a STOP sign sticking out of his bag, misleading the vehicle.
“Artificial intelligence is like a tool and we, humans, are in control.”
Rather than artificial intelligence, Luc Julia would rather we talked about augmented intelligence. This is not a discipline; it is simply a matter of using our own intelligence, augmented by tools.
In conclusion, artificial intelligence is only a tool. We are the ones in control, we are the ones calling the shots, we are the ones with the tools in our hands. But machines aren’t always used properly, and that’s why regulation must be put in place. So the show must go on, Julia concludes, and we should focus on creating more trustworthy systems. And we will have to make choices: use these systems to cure breast cancer, or to play Go, for instance.