Just like the Trojan War, the AI “revolution” will not take place*. Today’s topic is the inevitable generative AI, and it is the perfect opportunity to discuss what a technological revolution is, or isn’t. Here are our thoughts, and we might as well warn you that we are sticking our necks out in a very counter-intuitive way. Admittedly, we’re taking a bit of a risk here, but keeping up with the Joneses isn’t a valid option for Visionary Marketers. Having trained and certified a thousand students on the subject, we are also well aware of the limitations of GenAI tools.
The AI “Revolution” Will Not Take Place
* a literary reference to Jean Giraudoux’s 1935 play The Trojan War Will Not Take Place (La guerre de Troie n’aura pas lieu)
“Generative AI” has been the buzzword of the year 2023, for better or worse. I wanted to dig somewhat deeper, though. So, what is and isn’t a technological ‘revolution,’ and is AI one? That is the question. A few days ago, I came across a LinkedIn post arguing, on the strength of a widely shared video, that “nothing would ever be the same again.” It showed a large crowd filming the 2023 fireworks on the Champs-Élysées with their mobile phones.
A technological “revolution”… really?
In addition to the ubiquitous mobile screens, there were giant LCD displays on the sides and the Arc de Triomphe itself was turned into a mammoth screen. Similar pictures could have been taken in London, New York or Ulan Bator. In a nutshell, what’s new in 2024 is that everyone owns a screen.
It’s laughable in more ways than one. It reminds me of my 2013 presentations on social media: I was already showing the same photo (above, taken from an NBC broadcast), which was supposed to prove a change in society.
Enough of that, let’s get back to our main topic, i.e., generative AI. The 2023 obligatory buzzword has undoubtedly been “revolution” as in “GenAI is a technological revolution.”
This term is ambiguous, though. As Merriam-Webster states, a revolution’s first meaning is that of a celestial body going round in an orbit and ending up where it started: a kind of standstill, in fact. In that first sense, a revolution means “back to square one.” In other words, as Alphonse Karr would have it, “The more things change, the more they stay the same.” On the other hand, the term is so widely used in innovation circles that it’s almost embarrassing.
Revolution 1 a (1): the action by a celestial body of going round in an orbit or elliptical course; also: apparent movement of such a body round the earth [Merriam-Webster]
Try to play down this ‘revolution’ business and people will mock you. It has happened to me. However, one should wonder what, in our daily lives, is bound to change so radically in the coming years.
Fear Is Irrational
Firstly, fear is everywhere, but is it justified? Should we consider AI a ‘revolution’ merely because we fear we might lose our jobs? What facts substantiate that feeling?
Secondly, uncertainty, or rather a feeling of uncertainty, is ubiquitous. One hears of a VUCA (Volatile, Uncertain, Complex, Ambiguous) world as if it were new. Yet the concept was coined in 1987. I also wonder what the people of the late 18th century, or of the first industrial revolution, would have made of it.
These feelings are shared by many, as this recent anecdote shows.
Some time ago I hosted a webinar on AI-assisted development. It’s a fairly technical subject and certainly not a revolutionary one; it reminds me of my Unisys days 40 years ago and the Xerox CASE systems. Right after the event, I received a number of phone calls and messages from people who were experts in certain areas of IT but who were panicking at this sci-fi-like, “dehumanised” vision of our future.
But beyond these irrational fears, is what we are experiencing today really a “revolution,” in the sense that everything is changing radically?
No one understands innovation (Berkun)
I don’t think anyone today understands much about innovation, an observation Scott Berkun already made over 10 years ago and which, in my opinion, remains entirely valid. So much the better: it gives us work for many years to come, which is reassuring. After all, not everything is bound to disappear.
Over the last few months, as I’ve been delving into the subject of AI and generative AI, I came across a programme on France Culture (Science Chrono, 21 October 2023, in French) in which Antoine Beauchamp described the first attempts to generate text using artificial intelligence. It wasn’t in 2023, nor even in 2010, but… 1956! Granted, the texts it produced were gibberish. But so were those written by the surrealists.
The buried figure exterminates the terrible dreams, the abysses and the solitary reapers are never a fierce anvil, crumpling with difficulty an ordinary sickle with the gleam, a blood mutilates the false twilight by a fertile sword.
Computer-generated text based on the lexicon of Victor Hugo – 1956
The generation of text, and poetry in particular, was one of the first playgrounds for artificial intelligence and mathematicians. Not all that revolutionary. What’s more, the presenter pointed out that the human brain is designed in such a way that when confronted with an incomprehensible text, it adapts to try and make sense of it. We certainly do the same when looking at the results delivered by ChatGPT and its competitors.
AI, Stochastic Approaches and The Surrealists
Many artists of past centuries experimented with methods similar to the stochastic approaches of generative AI: Stéphane Mallarmé, whose often hermetic texts prefigured the surrealists; Georges Perec and the other members of the Oulipo (short for Ouvroir de littérature potentielle, the “workshop of potential literature”); their master Raymond Queneau (also a mathematician, whose books were sometimes the result of what might be described as algorithms, as in The Skin of Dreams, aka Loin de Rueil, 1944); and, of course, the surrealists themselves with their infamous “exquisite corpses,” a century ago.
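The spirit of these experiments, from the 1956 word-salad generator quoted above to the exquisite corpse, can be sketched in a few lines of code as a word chain: record which words follow which in a corpus, then walk those links at random. This is a minimal illustration written for this article, not a reconstruction of any actual 1956 programme; the corpus and function names are invented.

```python
import random

def build_chain(text):
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    chain = {}
    for current, following in zip(words, words[1:]):
        chain.setdefault(current, []).append(following)
    return chain

def generate(chain, start, length=8, seed=0):
    """Walk the chain from a starting word, picking each next word at random."""
    rng = random.Random(seed)
    word = start
    output = [word]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:
            break  # dead end: no word ever followed this one
        word = rng.choice(followers)
        output.append(word)
    return " ".join(output)

# A toy corpus in the surrealist vein (invented for the example)
corpus = ("the buried figure exterminates the terrible dreams "
          "the solitary reapers crumple an ordinary sickle "
          "a blood mutilates the false twilight by a fertile sword")
chain = build_chain(corpus)
print(generate(chain, "the"))
```

The output is grammatical-looking gibberish, which is exactly the point: plausible word sequences with no intent behind them, the same principle that modern language models pursue at a vastly larger scale.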
So what can we derive from all this?
Generative artificial intelligence has been so successful in the eyes of the general public since the end of 2022 (and for longer as far as we are concerned) that it would be silly not to admit that this is a technological breakthrough for computing; no one would buy a denial of that. But is it a mere step forward or a “revolution”?
I’m leaning towards the step forward, even if it’s hard to substantiate this conclusion with facts. Readers versed in French are kindly advised to get to grips with philosopher, mathematician and computer scientist Daniel Andler’s book, Intelligence artificielle, intelligence humaine : la double énigme (Nrf, 2023). Here is an excerpt.
But what kind of enigma is it? Here is what I think: the target of AI is an artificial intelligence that is on par with human intelligence. But this target never seems to get any closer, even though AI is constantly progressing. Here are the two explanations I propose to solve this enigma. […] The first is that the pursuit of an artificial intelligence endowed with human intelligence is pointless: according to the conception of intelligence that I defend, intelligence in the human sense can only be attached to a human being. Artificial intelligence, in whatever form and at whatever level of development, is designed to solve problems, which is only a secondary task for human intelligence. The second conclusion concerns the efforts that AI is devoting, with a tenfold increase in energy, to designing ever more intelligent systems, that is, in its view, ever closer to human intelligence. It also aims to give these systems as much autonomy as possible, and ultimately total autonomy. This dual objective is incoherent, dangerous and pointless.
We will only truly understand the changes brought about by these technologies in a few years’ time, and with the benefit of hindsight. In the meantime, untimely enthusiasm about technological ‘revolutions’ should be taken with a pinch of salt.
We are told that white-collar jobs, notably copywriters and even developers, will disappear. Time will tell; I have my doubts. Jobs disappear and are replaced all the time anyway. There is nothing new in that.
The Booker Prize Awarded to ChatGPT?
Of course, ChatGPT and its clones know how to produce mathematically plausible texts. But will we be seeing the Booker Prize awarded to Bard, BingAI or ChatGPT in five or ten years’ time? I doubt it very much. On the contrary, some authors will make fun of these machines, hijack them and use them as creative material. This will last for a while, and then we’ll move on to something else.
Recently, I looked back on more than a year and a half of using generative AI to create images for this website. I realised that my own perception of the generated pictures had evolved over time. Initially, rather like children, we played with these tools (Midjourney and others) in an irrational way, producing images all over the place.
Many users are still doing that today. LinkedIn is awash with these plastic images made by AI. Half-scary, half-demonstrative, they come in garish, stereotypical colours and are instantly recognisable by anyone with even the slightest training.
AI revolution: over time, perceptions change.
What used to be a game has become boring. Repetition even triggers fierce reactions from readers. Over time, you learn to refrain from overusing AI. I’m at this stage now. You then treat the tool as one source of images among others, mixed with stock photos and more personal pictures, and avoid resorting to it systematically.
Of course, this won’t stop billions of users producing these gaudy, horrific images. But a more reasoned use of the machine can free us from these atrocities, and by rediscovering our technical skills and mixing several tools, we can find true creativity (combining, tearing apart, recombining, etc.).
All this to say that the ‘revolution’ will not take place, or rather that it will take place, but certainly not to the extent that we imagine today, and provided we wait (10, 15 or 20 years) and live long enough to witness the impact and true use of these platforms. Such impact will be undeniable for some uses — like picture generation — and much more debatable for others, in particular the generation of stochastic texts.
I know that some people will be disgruntled and dismiss my predictions; most will, in fact. Year after year, Internet and social media pundits predict new technical revolutions, and our history books on technology are full of stories of these failed technological upheavals.
I apologise for this, sincerely, but I’m not going to indulge in this exercise, which makes no sense whatsoever. I realise, however, that truth is less spectacular than fiction. People love their own dreams; this is precisely one of the things that will make humans better than machines.
In the meantime, I’m willing to bet that the digital word of the year 2024 won’t be “generative AI.”