AI may be fun to play with, but implementing it within businesses is a tad more complex. Visionary Marketing attended a round-table discussion on the impact of generative AI on businesses and the future of work at Big Data & AI Paris in September 2023. AI and IT experts from different backgrounds shed some light on the dos and don’ts of integrating artificial intelligence within enterprises. In a nutshell, there are four main areas one should never overlook: (1) cybersecurity, (2) experimentation, (3) problem-solution match, and (4) learning from failures.
Dos and Don’ts for AI implementation within businesses
“AI is old hat,” Thomas Pagbé, editorial manager of the leading French publication IT for Business, quite rightly announced in his introduction to the round table on “The future of work transformed by AI”. He raised the question of what had really changed in how businesses view artificial intelligence.
To answer this question and document the theme of the round table, he had brought together a panel of IT experts from diverse and complementary backgrounds:
- Stéphane Roder – CEO of AI Builders
- David Sebaoun – Executive Partner at IBM Consulting France
- Valentine Ferreol – Digital Factory Director – interim CIO at Citeo
- Nicolas Levillain – Managing Director at BCG Platinion France
What has really changed in companies’ vision of AI implementation
Sure, there were ChatGPT’s 100 million near-instant users, but is that enough to tell you how to implement AI within your business? Even if the numbers are huge, the share of visits to AI sites, compared with others, remains modest. What’s more, ChatGPT’s traffic is said to have declined since July.
Let’s return to our round table with this comment from Stéphane Roder. “Starting in 2016 we saw AI applications finding their way into our business processes and bringing value. This gives good results if you put them in the right places.”
The Democratisation of AI Is the True Revolution
So AI has ceased to be a technical subject; it has become strategic. Increasing operational efficiency, designing new offers… so many subjects once reserved for humans alone now have a “copilot”, as the name goes.
The role of generative AI in the “democratisation of artificial intelligence” is undeniable and should not be underestimated, “because it has made models available to users,” explained BCG’s Levillain.
Therefore, there is an undeniable generalisation of access to AI, but when it comes to disruption, we’re still not there, as IBM’s David Sebaoun points out. “Everything has changed and at the same time nothing has changed in AI,” he told us. “For me, generative AI is incremental.” For the IBM representative, it’s quantum computing that will “cause disruptions”.
No doubt, it’s all a matter of appreciation. Could this be the sign of a new era for the leading American IT company? Watson was often presented as a revolution, and it recently came to a rather unhappy end. Be that as it may, many AI experts agree with this “incremental” diagnosis.
In fact, whether or not we believe that ChatGPT is “revolutionary” – the history of technology will take care of that question without our comment – what’s important is that “we offer useful and effective solutions to decision makers”, added Valentine Ferreol.
The ultimate goal is to address “both operational and decision-making issues, in a collective manner”. This is a task in which Valentine sees “CIOs playing the role of technologists vis-à-vis other departments, such as marketing or finance.”
For my part, I see it as a reminder not to give in to technological “revolutions” too fast. Neither today nor tomorrow.
Do CEOs need to be convinced of the importance of AI?
Is there still such a need for evangelisation in AI? According to Stéphane Roder, this has already been done by OpenAI and Microsoft, and we can only be impressed by how fast they did the job. Even if Joe Bloggs’s mastery of these tools remains uncertain.
The Ultimate Goal of Artificial Intelligence Technologies
Like Valentine, Stéphane Roder insists on the ultimate purpose of these technologies. “The question is whether it really adds value, or whether it’s a toy. CEOs want to quantify the contribution of technologies and answer questions about confidentiality.”
It’s a fundamental problem, but one that is “in the process of being resolved,” he told us. My impression is that such issues are still only seldom addressed.
That being said, Stéphane foresees “massive adoption, because ML allows you to do exceptional things”.
Nicolas Levillain confirms: “We’ve moved on from needing to convince CEOs to educating them to ask what they can get out of these technologies: rethink how they work and how they interact with their customers, and find out whether they can create new business.”
And he adds that this is work that BCG X, the technology arm of the famous consultancy, is carrying out with banks. It remains to be seen whether this movement in banking, sometimes carried out in a more than brutal manner from a human point of view, is due to the ongoing transformation of the business or to a miraculous and timely technological invention.
AI implementation by Industry
David Sebaoun has another explanation. For him, it’s the fact that banks and insurance companies are entirely based on IT and technology. True, but it’s probably not the most cutting-edge technology; otherwise, what need would most established continental financial institutions have to buy budding pure players?
Banks were certainly the first to face new entrants, as he points out, but the digital transformation of banks, which we were calling for ten years ago, is still largely incomplete, to put it mildly.
[Above] The most up-to-date statistics on the use of generative AI by generation (not much impact there) and by industry (No! Healthcare isn’t the most likely candidate for AI implementation)… Our panellists argued that the evangelisation for AI is over, yet these figures are telling a very different story.
If banks are so interested in generative AI, it’s seemingly more to catch up with the movement described by Chris Skinner, which began in the UK and spread worldwide around the time of the 2008 crisis. ChatGPT is a good excuse for cutting costs quickly and effectively.
Where does artificial intelligence stand in terms of productivity gains?
So what impact will generative AI have on the workplace, and on productivity gains in particular? According to JP Morgan, it will be enormous: working hours will shrink, and decision-making processes will be turned upside down. In short, total disruption. It’s rather difficult to be that adamant about such subjects, though. Changes won’t happen overnight, contrary to what we hear.
Let’s wait and see.
The Big Data & AI Paris 2023 panellists, however, have observed productivity gains. “BCG X analysed customers using generative AI in their software factories,” says Levillain, “and we witnessed 40% productivity gains, improved quality, and fewer bugs.” Here again, not everyone agrees, starting with Thomas Gerbaud, data scientist, developer, and IT blogger.
Generative AI does produce code, but is it useful? Sometimes, according to a few studies (Chen et al. 2021, Cassano et al. 2022, Buscemi et al. 2023). I don’t find it all that interesting! The data scientists I work with prefer to think on their own and, when they hit a snag, browse Stack Overflow for an answer.
Alt-Gr.tech – the end of code
Valentine Ferreol agrees with him: “It’s true that AI can generate code, but only simple code,” she said.
This is not to say that ChatGPT and its competitors can’t help us generate code. But it does mean that they may be of more interest to novices and tinkerers and that pros have other ways of getting things done.
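To make this point concrete, here is a hypothetical illustration (not an example given by the panel) of the kind of “simple code” assistants reliably produce: a short, self-contained utility that requires no business context.

```python
# A hypothetical illustration of the "simple code" generative assistants
# handle well: a small, self-contained utility with no domain context.

def slugify(title: str) -> str:
    """Turn a title into a URL-friendly slug."""
    # Replace every non-alphanumeric character with a space, then
    # join the remaining words with hyphens.
    cleaned = "".join(c if c.isalnum() else " " for c in title.lower())
    return "-".join(cleaned.split())

print(slugify("The Future of Work Transformed by AI"))
# → the-future-of-work-transformed-by-ai
```

Anything beyond this scale, such as architecture, domain logic, or integration with legacy systems, is where the professionals quoted above see assistants fall short.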
Above all, this means it’s too early to panic, and time will tell whether the productivity gains will really be that enormous, whatever that means for white-collar workers. By the way, the latter were already facing problems before the recent AI boom. What’s more, some jobs disappear and others are invented; it has always been the case.
Let’s get straight to the point that interested me most in this round table: learning from AI implementation failures.
Learning from the post-mortems of AI projects
Here are the lessons I’ve learned from the panellists’ analysis of these AI implementation post-mortems.
- Securing AI is more difficult than you might think (Nicolas Levillain): new AI technological frameworks seem easily accessible and may give rise to the temptation to move very quickly in order to gain a competitive advantage. However, making this new breed of applications secure requires a great deal of testing. Holding one’s horses is Levillain’s advice.
- Experimentation vs ratiocination (Valentine Ferreol): Valentine reminds us of the basics of innovation, and particularly digital innovation: success is best guided by trial and error rather than theory. Familiarising oneself with a new technology is the right starting point, whereas possible applications come next. This rule of thumb “allows us to innovate faster, as long as we dynamically exert our critical eye”, she says. We couldn’t agree more.
- ‘An LLM isn’t a hammer to crack a nut’ (Stéphane Roder): Stéphane reminds us that we witnessed a lot of epic failures in AI until 2019. It died down for a while, and now it’s picking up again. “Everyone wants their XXXGPT,” he said. “The French railways completely screwed up their implementation.” We’re just discovering the underlying technology, and while “these models are fun to ‘play’ with, that’s not how it works within businesses”. He insists on the learning curve for professional IT teams: “We know of big e-commerce players who are now working hard on shrinking their data models. Ninety percent of use cases are indeed queries on simple document databases,” he warns us. A reminder that the size of data models is no guarantee of success and that there are many setbacks when databases become oversized. Even OpenAI seems to be taking a step backward: GPT-4 is reportedly less powerful than GPT-3.5 in some areas, and sometimes declining over time for certain tasks (download the Stanford report about the evolution of ChatGPT’s behaviour over time).
- There are three AI implementation failure factors which must be stressed (David Sebaoun):
a) Lack of adoption,
b) Implementing POCs (proofs of concept) rather than pilots, hence failing to scale (a main reason for failure we had already spotted in Big Data projects some ten years ago),
c) Lack of governance (for example, those US states that used AI-based unemployment fraud detection systems and ended up favouring fraudsters).
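Roder’s observation that ninety percent of use cases boil down to queries on simple document databases can be sketched in a few lines. The documents and function below are made up, purely for illustration:

```python
# A minimal sketch of the "simple document-database query" pattern:
# many business questions are answered by plain keyword retrieval,
# with no large language model involved. Data is hypothetical.

def search(documents, query):
    """Return documents whose text contains every query term (case-insensitive)."""
    terms = query.lower().split()
    return [doc for doc in documents
            if all(term in doc["text"].lower() for term in terms)]

docs = [
    {"id": 1, "text": "Invoice payment terms: 30 days net"},
    {"id": 2, "text": "Employee onboarding checklist and policies"},
    {"id": 3, "text": "Payment dispute resolution procedure"},
]

print([d["id"] for d in search(docs, "payment")])  # → [1, 3]
```

Putting an oversized model on top of such a use case adds cost and latency without adding accuracy, which is presumably why the big e-commerce players Roder mentions are now shrinking their data models.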
As Levillain rightly concludes, “The key success factors for these types of projects are 75% human. The remaining 25% depends on technology.”
So far, I feel as if I were attending a project management course back in the ’80s.
Yet Nicolas added: “What has changed is that these models bring together people who didn’t talk to each other before: data scientists, business people and developers.”
Granted! In that case, I agree that this is a revolution, one that we’ve been awaiting for decades. Let’s hope he’s right.