
Deep learning under Luc Julia's microscope – with AI Paris 2019

Deep Learning is a term that has come up again and again in recent years, since the rebirth of artificial intelligence. But what exactly is Deep Learning? How does it differ from Machine Learning? And above all, what is it for? To find out, I asked Luc Julia, author of the recent best-selling book “Artificial Intelligence Doesn’t Exist” [in French only], to answer a few questions. For this interview, I did not want to return to the main theme of the book, which is widely discussed in the press and on blogs (I recommend the video below). Instead, I decided to take advantage of the competence and expertise of a veteran of AI in Silicon Valley to look at the positive side of things and narrow down the subject. This piece was prepared as part of the AI Paris 2019 symposium, of which Visionary Marketing was a media partner.

Deep Learning: what is it and what is it for?

Luc Julia “has lived in Silicon Valley for twenty years, is one of the fathers of Siri, the iPhone’s voice assistant, worked at Apple, and then at Samsung, where he is one of the few non-Koreans to hold a vice-presidency”, the French business newspaper Les Echos tells us. His professor was Jean-Gabriel Ganascia, author of the “Myth of the Singularity” [in French only]. Jean-Gabriel is also acclaimed as a philosopher, in addition to his status as an AI expert. Here, I wanted to focus on a rather mysterious subject, Deep Learning, which I asked Luc to define.

The History of AI to explain Deep Learning

We need to explore some history in order to fathom how Deep Learning works. It is necessary to understand how this concept has come to be known in the history of artificial intelligence in the last sixty years. With AI, we started by trying to model the brain. But very quickly, in the 1950s, we realized that it was very problematic and that we would not succeed. So, we switched to expert systems. Then, we advanced to neural networks, which is what we call Machine Learning.
It really started gaining ground in the 1980s, and then, in the 2000s, we saw a shift from machine learning to what is now called deep learning, which is an evolution of these neural networks.

How to describe neural networks?

Neural networks are a statistical approximation of any problem. It all starts from a large amount of available data. It is for this reason that machine learning and deep learning only developed after the emergence of the Internet. Everything accelerated with the advent of Big Data.
In machine learning, we import data, then tag it with labels that describe it as accurately as possible. Data scientists look at this data and label it, assigning it tags and characteristics. Statistical methods are then used to describe the problem. Finally, we create what is called a model, which will be reused the next time we encounter a similar problem, as the sketch below illustrates.
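To make this workflow concrete, here is a minimal sketch in Python, assuming scikit-learn and its toy iris dataset stand in for the labeled data Luc Julia describes; these specific tools are my example, not ones named in the interview.

```python
# Machine-learning workflow: labeled data in, reusable statistical model out.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# 1. Import data that has already been labeled
#    (each iris flower is tagged with its species).
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 2. A statistical method describes the problem from those labels.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# 3. The resulting model is reused the next time a similar problem appears.
print(f"Accuracy on unseen examples: {model.score(X_test, y_test):.2f}")
```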
Deep learning is very similar, but involves either one step less or one step more, depending on how you look at it. The change is that we abandon the labeling of the data, because the data will label itself. This means that we create a system that finds its own parameters (the ones we were talking about earlier for machine learning). These are no longer assigned by the data scientists who defined the problem.
This is achieved by feeding the machine with massive amounts of data and asking it to find parameters by organizing them into several layers of nodes (a node is a place where computation happens). In general, there were only one or two layers in the neural networks of machine learning, but with Deep Learning there is a multitude of layers.
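As a rough illustration of what “a multitude of layers” means in code, here is a sketch assuming PyTorch; the layer sizes are arbitrary and purely illustrative.

```python
import torch.nn as nn

# Each nn.Linear is one layer of nodes; stacking several of them is what
# makes the network "deep", compared with the one- or two-layer networks
# of classic machine learning.
deep_net = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),   # layer 1
    nn.Linear(128, 128), nn.ReLU(),  # layer 2
    nn.Linear(128, 128), nn.ReLU(),  # layer 3
    nn.Linear(128, 10),              # output layer
)

# These parameters are found by the machine during training,
# not assigned by data scientists.
n_params = sum(p.numel() for p in deep_net.parameters())
print(f"{n_params} parameters learned automatically")
```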
When a human being looks at the parameters the machine has found to describe a problem, it is not certain that they will understand the logic behind them. This simply means that the mathematical calculations made to find these labels automatically are not compatible with human logic, since no human being is involved in the process. Each time data is added, the parameters are automatically recalculated.

Some applications of deep learning

Deep Learning arrived in the mid-2000s and really boomed in the 2010s. The first application was image recognition. We feed an image to the computer, and it classifies it and deduces whether it is a cat, a dog or a cow, etc. It was already working very well with machine learning, but with Deep Learning the results were even better. This is due to the abundance of available images.
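As a hedged sketch of this use case, the snippet below classifies a photo with a pretrained deep network, assuming torchvision is installed; the file name "cat.jpg" is a hypothetical placeholder.

```python
import torch
from PIL import Image
from torchvision import models

# A network pretrained on millions of labeled images, ready to classify
# a photo into one of 1,000 categories (cats, dogs, cows, ...).
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()

preprocess = weights.transforms()             # resizing/normalization the model expects
image = Image.open("cat.jpg").convert("RGB")  # hypothetical input photo
batch = preprocess(image).unsqueeze(0)        # add a batch dimension

with torch.no_grad():
    probs = model(batch).softmax(dim=1)
top = probs.argmax(dim=1).item()
print(weights.meta["categories"][top])        # e.g. "tabby" for a cat photo
```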
Another area in which this technology has been applied with very good results, and where recent advances (over the past ten years or so) have been spectacular, is speech recognition and machine translation. It’s not perfect yet, but it works much better than in the past; you just have to feed the system with enough data.
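For machine translation, a minimal sketch using the Hugging Face transformers library gives the flavor; this specific library is my example, not one Luc Julia names, and the pretrained model downloads on first run.

```python
from transformers import pipeline

# A deep model trained on massive amounts of parallel text.
translator = pipeline("translation_en_to_fr")
result = translator("Deep learning is an evolution of neural networks.")
print(result[0]["translation_text"])
```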

What other positive implications of AI can we envision for the near future?

I believe very strongly in the application of machine learning and deep learning in the field of medicine, a field where we have a lot of data to exploit. Let’s take DNA as an example: it is statistically very interesting because of the huge mass of sequence data. We can imagine that with these AI technologies, we will be able to gather all the data from sick or healthy humans and create models that spot mutations, even before they happen. There is also medical imaging. We can apply what we have done to a range of activities, from recognizing images of animals to diagnosing breast cancer, for example. Millions of different breast cancer images can be provided to a machine, well labeled by doctors or not, and this machine will necessarily know much more than any doctor, however learned he or she may be. These machines will be extraordinary helpers. However, they will probably remain just that, because AI is just a tool. Ultimately, it is human beings who control what a system does.
Then, let’s take autonomous cars. From the statistical data generated by cars while driving, we can imagine that they will increasingly be able to react by themselves in cases where reaction speed is crucial and humans would not be fast enough. These sensors (radar, lidar or cameras) are also image recognition systems that extend what the vehicle can perceive. However, I personally do not think that, with current statistical techniques, a car can be 100% autonomous (Level 5, as specialists say).

A discipline yet in its infancy

We are still in the early stages of this domain, which is only about ten years old. In the future, we will find more fields of application. But what we need to be very specific and clear about is that these systems will remain compartmentalized. We often talk about universal AI, but this will not exist, because statistical systems, for all their diversity, cannot come close to the functioning of our brains. So we must not think that artificial intelligence will replace us; it is very important to repeat this, and I wrote the book to explain it.
On the other hand, in each of these possible fields of application, tremendous progress will be made in the coming years.


Yann Gourvennec

Yann Gourvennec created visionarymarketing.com in 1996. He is a speaker and the author of 6 books. In 2014, he went from intrapreneur to entrepreneur when he created his digital marketing agency.