AGI (Artificial General Intelligence), Myth or Reality?
While Ed Zitron castigates the major tech players responsible for the peak of inflated expectations surrounding AI, many tech pundits are still touting that AGI (Artificial General Intelligence) is within reach. To find out whether AGI is a myth or a reality, I interviewed J.G. Ganascia, a long-time AI researcher and philosopher. In the course of our discussion, I gathered that the singularity and AGI are not the same thing. This interview set the record straight on many points, particularly the notions of intelligence and sentience or consciousness. But its most striking conclusion is undoubtedly that, like Ray Bradbury, we should be less wary of pseudo-intelligent AIs, let alone AGI, than of the wily, intelligent humans behind these technologies.
Artificial General Intelligence (AGI), Myth or Reality?
The Singularity, AGI and Superintelligence
J.G. Ganascia. Transhumanism led to many projections about artificial intelligence, of which the technological singularity was one avatar. There are others today, like Nick Bostrom’s Superintelligence.
But these terms are not interchangeable.
The singularity, technological dream or nightmare
JGG. The technological singularity is an idea from the 1950s. It claimed that at some point machines would become as powerful as humans, causing a shift in human history.
This meant that, at some point, machines would take over. Either they would overtake us completely, and humanity as we know it would disappear; or humanity would submit to the power of machines, and humans would become their slaves.
Another possibility was that we would graft ourselves onto machines and download our consciousness onto computers, and that this consciousness could then be reincarnated in robots. According to this theory, we could then continue to exist beyond our biological bodies. This is what I described in a novel written under the pen name Gabriel Naëj, This Morning, Mum Was Uploaded (in French only).
This is the story of a young man whose mother decided that, once deceased, her consciousness should be downloaded and she should be reincarnated as a robot. What is very disconcerting for this young man is that she has chosen the most beautiful body possible: that of a sex robot!
AGI and superintelligence
JGG. What we call AGI, Artificial General Intelligence, is a different kettle of fish. It’s the idea that, with current artificial intelligence techniques, specific human cognitive functions can be mimicked by machines, and that one day we’ll be able to emulate them all.
It means there is a way of deciphering intelligence, and that once we find it, it opens up infinite possibilities. In essence it’s a gateway to superintelligence. The very principle of the technological singularity assumed that there was a general intelligence and that all cognitive capacities could be emulated by machines.
General intelligence isn’t quite on par with the technological singularity, yet it suggests the singularity is the ultimate goal. AGI has nothing to do with downloading human consciousness, though; it is just the ability to build machines with very high intellectual power.
This ties in with Nick Bostrom’s notion of superintelligence, which focuses on the day when the intelligence of machines surpasses that of humans.
There are links between these concepts, but they’re not quite the same thing.
As of 2024, is the singularity still a myth?
JGG. The early science fiction writers who mentioned the technological singularity, including Vernor Vinge, predicted that it would happen in 2023. Now, clearly, it’s not here yet. Unless we’ve all already been downloaded onto machines without knowing…
And yet these AIs are amazing!
JGG. Artificial intelligence has made considerable headway. Machines are capable of mastering language to the point where, when asked a question, they generate texts that are well formulated, even though not always relevant.
We can also produce images of people that bear an uncanny resemblance to real humans. Videos too. It’s all very intriguing.
Until now, we thought that language was first and foremost a matter of grammar, then syntax and vocabulary. Now we are realising that these linguistic abilities can be reproduced with just a few probabilities.
It’s really exciting from an intellectual point of view.
But that doesn’t mean that the machine will suddenly take over, or that it will have a will of its own. It doesn’t even mean that it will tell the truth.
These AIs almost write like humans. Most of the time their content is based on common knowledge. But sometimes this “common knowledge” is a little absurd. And as soon as you shift the situation a little, they produce results that are completely wrong. I often play tricks on them with logic puzzles and have great fun watching them fail.
It’s understandable, in fact, because that’s not what they were made for. They are just made of modules capable of selecting words based on probabilities.
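To make this concrete, here is a minimal sketch of what “selecting words based on probabilities” means, using a toy hand-written bigram table. The words and numbers are invented for illustration; a real language model learns billions of parameters rather than a small lookup table, so this shows the principle, not an actual LLM.

```python
import random

# A toy bigram table: for each word, the probabilities of the next word.
# These values are invented for illustration only.
bigram_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "machine": 0.2},
    "cat": {"sleeps": 0.6, "eats": 0.4},
    "dog": {"barks": 0.7, "sleeps": 0.3},
    "machine": {"computes": 0.8, "sleeps": 0.2},
}

def generate(start: str, length: int = 4) -> str:
    """Build a sentence by sampling each next word from a probability table."""
    words = [start]
    for _ in range(length):
        options = bigram_probs.get(words[-1])
        if not options:
            break  # no known continuation for this word
        candidates = list(options.keys())
        probabilities = list(options.values())
        words.append(random.choices(candidates, weights=probabilities)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sleeps" -- plausible, with no grasp of truth
```

Notice that nothing in this procedure checks whether the output is true or relevant; it only checks that each word is a probable continuation of the previous one.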
Yann LeCun is dead against GenAI, yet he believes in AGI. Are you prepared to change your mind on the subject?
JGG. Absolutely not! I think there’s a misunderstanding regarding the meaning of the term ‘intelligence’. Besides, artificial intelligence is a scientific discipline.
What AI does is simulate different cognitive functions. What are they? Perception, reasoning, memory (in the sense of processing information, not storing it) and communication. We have made considerable progress in these areas.
Take perception, for example. AI is capable of recognising an individual out of hundreds of thousands, whereas we ourselves can’t always remember the people we met the day before. These performances are extraordinary.
But there is a misunderstanding when one states that the machine will be more intelligent than man. Intelligence is a set of cognitive abilities. It may well be that each cognitive capacity is better emulated by machines than by humans. Yet that doesn’t mean that machines will be more intelligent than us, since they have no consciousness.
Machines do not “see” things, nor do they have a will of their own. In any case, consciousness is the crux of the problem.
There’s another meaning for the word ‘intelligence’, which is related to ingenuity or inventiveness.
An ingenious or clever pupil is said to be ‘intelligent’ because he or she can solve everyday or mathematical problems. Are machines more clever than we are, though? It depends. There are some cases, of course, where they outdo us. We’ve known for a very long time, 25 years now, that machines play chess better than we do. More recently, the same has become true for the game of Go. From that point of view, of course, they are more intelligent, but that doesn’t mean they’re better than we are. In any case, they have no willpower per se.
Blaise Pascal, almost four centuries ago, explained that his calculating machine came closer to thinking than anything animals could do, but that there was a limit to it.
The arithmetical machine produces effects which approach nearer to thought than all the actions of animals. But it does nothing which would enable us to attribute will to it, as to the animals.
Blaise Pascal, Pensées (Thoughts), fragment 340, page 69
As it happens, computers are like Blaise Pascal’s arithmetical machine. Their effects are closer to thought than anything done by any animal, including humans. But there’s nothing to say that they can have willpower like animals.
I think that’s where the misunderstanding really lies.
After that, of course, you can list all the performances of the machines, and you’d be right to label them as extraordinary. But they can’t be compared to man’s thinking.
When it comes to consciousness, we can dig a little further. One of the AI pioneers, Yoshua Bengio, co-authored last August a long 88-page article in which he explained that machines today are showing signs of consciousness. He has taken up the work of neuroscientists on consciousness and declares that machine consciousness is a possibility. Above all, he suggests that machines will soon have such sentience.
Once again, this is the result of a misunderstanding.
The term sentience, or consciousness, like the term intelligence, has many meanings.
First of all, we can say that a machine is sentient in the sense that we project a mind onto it. This is what happens with your mobile phone when you say “Siri is completely mistaken today”, as if Siri were a real person. Or with a robot vacuum cleaner when you say “Well, he went there because he knows there’s dust out there”. One tends to assume these inanimate objects are like humans, but they aren’t.
In technical terms, this is called a cognitive agent. The American philosopher Daniel Dennett calls such things ‘intentional systems’. And there’s nothing wrong with that.
The second meaning of sentience or consciousness is that of ‘musing’ or ‘reflecting’. It’s sentience as self-knowledge, as in “Know thyself!”. In other words, we are in the process of becoming aware of ourselves and wondering, “I’m doing this; now, is it the right thing to do?” That’s why we talk about moral awareness, where we can say to ourselves, “I’ve done this or that in the past, and I can do a lot better now”.
We can have machines, for example, that learn by looking at what they have done in the past, and then try to ensure that their future behaviour will be more effective.
If they have hesitated between different possible paths before, in a similar situation, they will no longer hesitate, but will only take the right path. The same applies to moral consciousness.
My team is working on computational ethics, which means that before acting, the machine tries to look at the consequences of its actions, and from that moment on, it will take the decisions that are most in line with the prescriptions given.
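To give an idea of what that can look like in practice, here is a minimal sketch of consequence-based action selection. The actions, the predicted consequences and the “no significant harm” rule are all invented for illustration; they are not taken from the team’s actual systems.

```python
# Candidate actions mapped to their predicted consequences.
# Both the actions and the numbers are invented for illustration.
predicted_consequences = {
    "brake":    {"harm_risk": 0.0, "delay_minutes": 2.0},
    "swerve":   {"harm_risk": 0.1, "delay_minutes": 0.5},
    "continue": {"harm_risk": 0.9, "delay_minutes": 0.0},
}

def violates_prescriptions(consequences: dict) -> bool:
    # Hypothetical hard prescription: never accept a significant risk of harm.
    return consequences["harm_risk"] > 0.05

def choose_action() -> str:
    # Keep only actions whose predicted consequences respect the prescriptions,
    # then prefer the least costly of the remaining ones.
    admissible = {action: c for action, c in predicted_consequences.items()
                  if not violates_prescriptions(c)}
    return min(admissible, key=lambda a: admissible[a]["delay_minutes"])

print(choose_action())  # -> "brake": the only action compatible with the rule
```

The key point is the order of operations: the machine first filters its options against the prescriptions it was given, and only then optimises among what remains. The prescriptions come from us, not from the machine.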
There is also a third meaning of sentience or consciousness, which is very likely to be the most important: that of emotion. Can a machine experience emotions? And what does that mean?
If a machine were to feel this way, it might think, “I want those good vibes!”, and if you asked it to do something at that moment, it wouldn’t give in. So you tell your autonomous car, “I want to go to the beach”, and it says, “No, because there’s too much sand over there. I’m going to take you to the pictures, to a place where there are very clean car parks.”
Such a machine would be a disaster. Fortunately, it doesn’t exist. It’s absolutely essential that machines don’t make decisions on their own; they must always remain subject to our will and control.
When major AI players like Sam Altman tell us that these machines are going to take over, we have to be wary. It’s a bit like them telling us:
We’re the ones with the knowledge, because we’re the pundits of artificial intelligence, and you don’t know anything. So leave it all to us and we will help you!
Like many of the engineers working for major digital companies, Altman is fascinated by these machines. So he thinks there are no limits to what they will do in the future. He simply means that they will do all sorts of tasks better than we can.
An open letter was signed by some major Internet players over a year ago. Sam Altman was not a signatory. But this initiative did include Yoshua Bengio, Geoffrey Hinton, Elon Musk… They told us we had to pause generative artificial intelligence because it’s a potential threat to us.
Should we develop non-human minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk losing control of our civilisation?
Pause Giant AI Experiments: An Open Letter
I’m sorry, but I strongly disagree with this vision. I’ve been working on artificial intelligence for years on end, and I have never seen a “non-human mind”. These machines compete with us on high-level tasks. More generally, cognitive science, and Howard Gardner in particular, has long told us that there are multiple intelligences. There are as many kinds of intelligence as there are people.
Functional neuroimaging allows us to visualise the active areas of our brain according to the tasks we perform, and these areas vary according to each individual. Similarly, when we map them out, we realise that the areas of the brain are not developed in the same way for all individuals, depending on their upbringing, genetics and so on.
All this suggests that intelligence cannot be general, since it varies for each individual.
The machine could, however, reprogram itself or correct some of its errors
JGG. That’s exactly the definition of machine learning: a machine that is capable of rewriting its own programme based on a certain number of observations and experiments. From that point of view, it’s nothing new.
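A minimal sketch of that idea: a one-parameter model adjusts itself from observations, so its behaviour changes without anyone rewriting the code by hand. The data and learning rate are invented for illustration.

```python
# Observations of an unknown relationship (roughly y = 2x); invented data.
observations = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]

w = 0.0              # the model's single parameter: it starts knowing nothing
learning_rate = 0.05

for _ in range(200):                      # repeated exposure to the observations
    for x, y in observations:
        error = w * x - y                 # how wrong the current behaviour is
        w -= learning_rate * error * x    # adjust the parameter to reduce the error

print(round(w, 2))  # close to 2.0: behaviour changed, yet no human rewrote the code
```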
The question is rather whether this machine has a will. That’s why Pascal poses the problem admirably.
Other philosophers, like Daniel Andler, aren’t sure that machines are not sentient, though
JGG. I think we also need to go back to the definition of the term sentience. Scientists have been musing about creative machines for a very long time. Alan Turing, in his 1950 article Computing Machinery and Intelligence, rebutted a number of objections to the idea that a machine could be intelligent. Among these objections was one that said “A machine cannot create”.
And his point was that a machine can very well create. But what is creation? It’s about producing something that will take us by surprise. He added that he could easily devise a very short programme of just a few lines whose behaviour could not be anticipated. From that point of view, one can make machines that create.
The view that machines cannot give rise to surprises is due, I believe, to a fallacy to which philosophers and mathematicians are particularly subject. This is the assumption that as soon as a fact is presented to a mind all consequences of that fact spring into the mind simultaneously with it. It is a very useful assumption under many circumstances, but one too easily forgets that it is false.
Alan Turing, Computing Machinery and Intelligence, 1950
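As an illustration of Turing’s point (our own choice of example, not his), here is a deterministic programme of a few lines whose output cannot realistically be anticipated without running it: the logistic map, a classic example of chaotic behaviour.

```python
# The logistic map: one line of arithmetic repeated, yet its long-run values
# are chaotic and cannot be predicted without actually running the loop.
x = 0.4
for _ in range(50):
    x = 3.9999 * x * (1.0 - x)

print(x)  # change the start value by 1e-10 and the result is entirely different
```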
There is a whole history of creativity in machines that predates generative AI.
The first poems, incidentally, date from 1957
JGG. In the Illiac Suite, the musical composition programmed by Lejaren Hiller and Leonard Isaacson (1957), the final movement included elements of randomness and creativity. Indeed, the use of randomness in this context was seen as a means of producing something ‘new’ or unpredictable, emulating a form of creativity.
Some artists have also used computers. This is the case with Pierre Barbaud (1911-1990), who was a great pioneer in that field. Painters too, including Vera Molnar (1924-2023), who created some magnificent paintings with her machines.
One could debate the quality of what is generated by AI. Just because I made a fake Van Gogh with AI doesn’t mean it has anything to do with Van Gogh or that it’s interesting.
But that’s beside the point.
Does this machine have a will of its own that could contradict ours? In other words, could it, at a given moment, decide to stop for no reason, or take you to a place you hadn’t imagined and that doesn’t correspond to the given objective?
I don’t think we need to worry about that.
Machines are not going to become autonomous. But society is changing. And the major issues are political, and that’s what we need to be very aware of.
In particular, we should be wary of those who own these technologies. So it’s Mr Sam Altman we need to be wary of. He has a tendency to mesmerise us, to cast a kind of smokescreen over his intentions.
Sam Altman, in fact, is the danger!
The same goes for Elon Musk, who wants to protect us against artificial intelligence by enhancing our cognitive abilities and putting chips in our heads. If we go his way, it will be Mr Elon Musk who decides what goes into our heads.
And it will be the worst dictatorship we’ve ever imagined. That’s the danger for the future!
You have to be vigilant, but you have to know where to look and what to be wary of.
The pseudo-intelligence of AIs, less dangerous than the harmful intelligences of humans?
JGG. Absolutely! Ray Bradbury, the author of Fahrenheit 451, wrote this famous line:
“No, I’m not afraid of robots, I’m afraid of people, people, people!”
Letter to Brian Sibley, 1974
Quote sourced from AZQuotes
About Jean-Gabriel Ganascia
Jean-Gabriel Ganascia
Chairman of the CNRS Ethics Committee
A professor at the Paris-based Université Pierre et Marie Curie (UPMC) and a member of the Institut Universitaire de France, Jean-Gabriel Ganascia was appointed chairman of the CNRS Ethics Committee in September 2016. An IT expert who holds a PhD and a State doctorate from the Université d’Orsay (Paris), he specializes in artificial intelligence. His current research focuses on machine learning, text mining, the literary side of the digital humanities and computational ethics. An IT professor at UPMC since 1988, he heads the Cognitive Agents and Symbolic Machine Learning (ACASA) team at the LIP6 computer science research laboratory. He also set up and led the Sciences de la cognition (“cognitive science”) scientific interest group at the CNRS. Jean-Gabriel Ganascia is a member of CERNA (the ethics in digital science research commission) at the Digital Science and Technologies Alliance, Allistene.