Podcast (English): Play in new window | Download (Duration: 25:35 — 17.1MB)
Subscribe: Apple Podcasts | Spotify | Android | RSS
AI is not a tool, or is it? Reports regarding the impact of AI on jobs, society and businesses are cropping up all over the world at the moment. Some of these reports announce forthcoming revolutions for both our societies and our economies, whereas others play down the impact of artificial intelligence and revive the good old Solow, aka productivity, paradox (“You can see the computer age everywhere but in the productivity statistics”; follow-up here and here). As a consequence, it is very hard to form an opinion, let alone advise business people and students with regard to what needs to be done in the future. Visionary Marketing has embarked on a mission to try and shed light on this topic in as rational and informed a way as possible.
AI is not a tool, or is it?

A lot of these predictions are guided by ideology. The authors, be they proponents or opponents of AI, have a personal agenda, often political or ideological, and are trying to make facts stick to this agenda. This is not very useful. But others are based on fact and careful analysis. I have decided to focus on two of these reports/predictions.
The first one is Fred Cavazza’s analysis of the impact of AI on society and the economy (original post in French), which describes Artificial Intelligence as a source of profound disruption. I have known Fred for years, and I know his deep knowledge of both subjects, which makes his report particularly valuable. With his kind permission, I have translated his piece from French to shed light on this subject.
The other report is by Forrester’s JP Gownder, whom I’ll be interviewing soon. I will test Fred’s assumptions on JP and see what he has to say about this idea of disruption by AI. Hopefully, our readers, and especially my students who have a lot of pending questions about this, will be able to separate the wheat from the chaff after these two interviews and podcasts.
AI is not a tool, it’s reshaping our society and economy
AI can’t be seen as just another technological innovation. By establishing itself as a major driver of productivity, automation and decision-making, it’s fundamentally disrupting the economic and social balance of our society. Whilst the productivity gains brought by AI are already transforming office jobs and creating a chasm between employees who’ve embraced it and those who haven’t, a fundamental question emerges: how do we integrate these synthetic entities into our collective organisations? Between appropriate taxation, legal personality and psychological resistance, there are numerous questions to debate before we can draft a new social contract.
AI IS NOT A TOOL — TLDR
- AI is triggering a disruption of our civilisation; it’s not just another tech breakthrough. It marks our genuine entry into the fourth industrial revolution by offloading, for the first time, human thinking and creativity to machines.
- AI’s productivity gains are already real and deeply uneven. A growing divide is opening up between workers who can work alongside AI and those stuck with 20th-century methods.
- AI agents are challenging how white-collar workers create value. Intelligent agents are transforming knowledge work, undermining certain business models and setting the stage for a rapid reshaping of office jobs.
- Integrating AI requires a new legal and fiscal framework. Like corporate entities, AI agents must be given a status that clarifies their responsibilities and reintegrates their value into the social contract.
- The socio-economic impacts reach far beyond just employment. AI affects our psychology, culture and demographics, making public debate crucial to head off looming social tensions.
AI on the Davos Agenda
This week, the world’s leaders are gathered at the World Economic Forum in Davos, and ecology isn’t on the agenda: AI, Big Tech and Trump Shine Most Brightly at the Davos Show.

AI is dominating every conversation, with considerations that extend far beyond technology:
- AI Is Poised to Take Over Language, Law and Religion, Historian Yuval Noah Harari Warns
- Palantir CEO says AI to make large-scale immigration obsolete
“Artificial intelligence will displace so many jobs that it will eliminate the need for mass immigration”
I’m not going to wade into commenting on everyone’s pronouncements, with their more or less biased viewpoints, but what’s certain is that major upheavals are on the horizon:
- AI and the Next Economy
- Nearly 80% of people feel unprepared to find a job in 2026
- The AI revolution is here. Will the economy survive the transition?
AI specialists are naturally the star guests at this 2026 edition of the Davos forum, invited to give their testimony and views: DeepMind and Anthropic CEOs expect AI to hit entry-level jobs and internships in 2026.
Looking at it this way, it seems absurd to sit back as spectators whilst the AI revolution unfolds and do nothing to limit the fallout from this productivity shock. But not all’s lost—at least not for everyone, as countries in the global south are already gearing up for it: The AI Revolution Needs Plumbers After All.
Productivity gains to be nuanced, but certainly not ignored
I’ve had plenty of chances to explain generative AI’s impact (Superintelligence will multiply our capacity to act tenfold and The digital divide is a problem no one can ignore). Whilst we’re largely in agreement about what widespread generative models mean, there’s serious disagreement over the timeline for AI’s arrival. The dominant narrative keeps insisting that general AI is a pipe dream and that human intelligence is and will remain superior to machines.
What is intelligence?
This is precisely where ambiguities crop up: firstly, intelligence comes in many forms (Theory of multiple intelligences and What’s your intelligence type?); secondly, not all office work requires emotional or social intelligence. What I’m getting at is that most service sector jobs boil down to shuffling information and data between systems. You don’t need to be a genius to do that—AI can handle it with ease.
To properly grasp the speed at which latest-generation AIs will gradually transform office jobs, I recommend you peruse the latest edition of Claude’s publisher’s macroeconomic barometer: Anthropic Economic Index 2026.
Anthropic’s Economic Index 2026
For this fourth edition, the study’s authors analysed thousands of people’s activities using increasingly precise indicators: New building blocks for understanding AI use.
This study yields several findings that demonstrate a strong progression in the adoption and capabilities of generative models. Notably, they observe an average 30% growth in Claude usage, driven mainly by the API rather than the chatbot—a sign of rapid adoption by advanced users (e.g., IT professionals) and slower uptake by ordinary users (white-collar workers using the web version).

The haves and the have-nots
A gap is therefore widening between those who’ve adopted new habits (working in tandem with AI) and those still working as they did in the 20th century. This gap is starting to become problematic, because the latest version of Claude (Opus 4.5) has capabilities comparable to those of an adult who’s benefited from over 14 years of education—the equivalent of a Bachelor’s degree.

The question therefore is: how much longer can an employer justify paying salaries or hiring young graduates when chunks of the work can be farmed out to an AI? Whilst average productivity gains remain modest (1.8% according to the latest figures), AI’s contribution to certain tasks is absolutely spectacular:
- an average of 14 minutes to write a long article, versus 3 hours without AI assistance;
- an average of 5 minutes to analyse a complex data table, versus 1 hour 45 minutes without AI assistance.

You might argue this data’s skewed because these spectacular scores come from employees who are whizzes at using AI (therefore logically hyper-performers), but that’s not the case—the study covers ordinary employees with a 67% success rate for outsourced tasks.
What this boils down to is that, for the two-thirds of tasks where delegation succeeds, AI slashes processing time by a factor of 10 to 20. If we apply some basic maths, AI can potentially triple efficiency, or, to put it another way, cut the average time needed to complete a task by roughly two-thirds. Which type of profile do you reckon managers will favour?
(hint: McKinsey challenges graduates to use AI chatbot in recruitment overhaul)
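To make that “basic maths” concrete, here is a minimal sketch of the calculation. It assumes, per the figures above, that roughly two-thirds of tasks see a 10-to-20-fold speed-up while the remaining third take as long as before; the 15x midpoint is my own simplifying assumption, not a figure from the study.

```python
# Speed-up factors for the two tasks cited in the Anthropic study
article = (3 * 60) / 14        # 3 h vs 14 min  -> roughly 13x
table = (1 * 60 + 45) / 5      # 1 h 45 vs 5 min -> 21x

# Blended time per task, as a fraction of the pre-AI baseline
accelerated_share = 2 / 3      # share of tasks where AI delegation succeeds
speedup = 15                   # assumed midpoint of the 10-20x range
remaining_time = accelerated_share / speedup + (1 - accelerated_share)

efficiency_gain = 1 / remaining_time

print(f"article speed-up: {article:.1f}x, table speed-up: {table:.0f}x")
print(f"average time left: {remaining_time:.0%} of baseline")
print(f"overall efficiency multiplier: ~{efficiency_gain:.1f}x")
```

Under these assumptions the average task takes about 38% of its former time, i.e. an efficiency multiplier of roughly 2.6x, which is consistent with the “triple efficiency / cut the time by two-thirds” shorthand in the text.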
The imminent arrival of agentic white-collar workers
Let me be clear: the productivity gains mentioned above relate to advanced AI usage, not just running searches in ChatGPT or asking Copilot to knock up meeting minutes. We’re talking about using generative models to their full potential, particularly intelligent agents (see Agentic Web: the revolution that won’t wait for you).
Intelligent agents
We’ve been banging on about these famous intelligent agents for a while now, but their potential only recently became blindingly obvious to ordinary employees (non-IT types) with the release of Claude Cowork, a very concrete wake-up call to the power of agentic AI: Claude Is Taking the AI World by Storm, and Even Non-Nerds Are Blown Away.

This awakening is shared by financial markets too, which are bracing for revenue drops at traditional software publishers, whilst one of France’s biggest IT services firms is axing jobs and European banks are preparing to follow suit:
- Claude’s new AI agent pushes down software stocks
- Capgemini plans to cut up to 2,400 jobs in France
- AI forecast to put 200,000 European banking jobs at risk by 2030
Adoption levels a matter for debate
This isn’t a topic to take lightly, even though adoption levels are debatable (as I explained earlier, it’s not binary) and gains vary wildly (Why AI Boosts Creativity for Some Employees but Not Others). What’s undeniable is that AI agents are forcing a major rethink of how white-collar workers create value, and more broadly for tertiary sector businesses that account for three-quarters of France’s GDP.
Whether you like it or not, whether you acknowledge it or not, we’re living through a civilisational shift, because AI’s arrival is turbocharging the fourth industrial revolution and unleashing upheavals whose full scope we’ve yet to grasp.
Fair enough, AI is a tricky concept to get your head round (We don’t need better AI, but a better understanding of AI). Yes, tools based on generative models require behavioural changes that’ll take ages to embed. Nevertheless, it’s crucial we prepare ourselves psychologically for the coming upheavals, because if we take even the slightest step back, we quickly realise they’re already underway.
AI is not just a tool: a shift beyond technology
Generative AI’s arrival and the march towards the first superintelligences aren’t just another turn of the technological wheel started by computers and smartphones. We’re witnessing a civilisational shift that marks our genuine entry into the fourth industrial revolution (Waves of change: Understanding the driving force of innovation cycles).

We’re not simply facing a new technological cycle, but a fundamental reshaping of economic and social foundations: for the first time, we’re offloading not physical power, but our thinking and creativity. Whether AGI arrives tomorrow or in ten years, we’re already living alongside autonomous entities capable of making decisions: synthetic agents, whether digital (AI agents) or physical (robots).

This situation throws up an unprecedented question: how do we integrate into our collective framework artificial entities that contribute massively to wealth creation whilst guzzling significant resources? History offers an imperfect but revealing precedent: how we’ve gradually integrated domesticated animals.
AI is not a tool: from biological analogy to legal reality
Humans get along perfectly well with domesticated animals because they’ve helped shape humanity’s development:
- Horses served to explore territories, wage war, plough the land, transport people and goods…
- Dogs were used for hunting, for guarding…
Insofar as animals contribute daily to our society, they benefit from services and rights:
- Guide dogs for the blind attend school and have status (a function = a job);
- Police dogs play a vital role in the fight against drugs; they’re entitled to retirement (they’re placed in a home for their old age).

From the moment animals make a direct contribution, they’re integrated into our society through their breeder and/or owner, who have obligations (identity tags and records for farm animals). They can benefit from protections (insurance, vaccination to fight epidemics…) and rights (laws against animal cruelty).
So what about AI that contributes value just as much, if not more, to our society?
Whilst it’s tempting to liken AI agents to a newly integrated species, much like domesticated animals, this analogy quickly hits ethical and legal buffers. Domesticated animals have rights because they’re sentient, conscious beings. AI, on the other hand, is an information processing system, software that has neither sentience nor consciousness.
The true parallel must be drawn with corporate entities (companies). Because, like a company, an AI:
- contributes to wealth creation (task automation, content generation…);
- exploits infrastructure and consumes critical resources (energy, rare earths, cooling water…);
- has rights (intellectual property) and responsibilities (transparency, explainability…);
- acts autonomously.
This is why the comparison is pertinent, as it enables us to evolve the legal and social framework.
The social contract of the synthetic era: responsibility and taxation
Integrating these intelligent agents into our society shouldn’t be done by granting anthropomorphic rights, which would be absurd for a computer system, but by giving them legal personality (like a company, association or local authority).
The real question isn’t whether AI deserves rights, but what legal status would clarify chains of responsibility. The avenue of electronic personality, debated in the European Parliament as early as 2017, aims precisely at this objective: not to recognise dignity in machines, but to organise their integration into our jurisdiction so as to protect humans, ensure they benefit from it, and see that this benefit is distributed fairly (avoiding an even greater concentration of wealth and power).
As robots and AI agents replace human labour, they erode the base of social contributions that rests on salaries. But since they contribute to economic activity and generate costs for the community (energy consumption, electronic waste management…), there’s no reason why they shouldn’t be integrated into our tax system.
This isn’t about taxing AI agents as individuals, but applying tax to the value they generate through their operation. In exchange for this contribution, the AI (or its publisher) doesn’t gain social rights (pension, healthcare), but gets a framework of civil responsibility (fiscal, legal, social). This would enable AI-caused damage to be covered without necessarily tracing responsibility back to the original developer, who’s often disconnected from what the model ends up doing.
Socio-economic upheavals whose scope we don’t fully grasp
Having said that, the question of AI’s place in 21st-century society mustn’t stop at economic considerations, as it extends far beyond.
Domesticated animals and AI
If we revisit the domesticated animal analogy, we observe today that dogs aren’t just pets; for some, they’re also considered assistance animals. The exact term is “emotional support animals”—those that give retirees or psychologically fragile people (with chronic depression) a reason to get up in the morning.

The same goes for domestic robots, which are one of the pillars of Japan’s Society 5.0 programme—those that will care for the elderly with a physical presence (assisting them with daily tasks and limiting their loss of autonomy), as well as psychologically (conversing with them to exercise their memory) and emotionally (keeping them company).

For Westerners, this prospect is terrifying, but for the Japanese, it’s the only solution to their demographic deficit. The same goes in China, where parents work so hard they lack the time to look after their children, and instead offer them AI-enhanced soft toys that tell them stories and answer their questions to satisfy their curiosity.
Furry robots
A trend that obviously came from Japan (Casio launches AI-powered furry robot pet that wants to replace your dog), but which can be experienced in the West (‘I love you too!’ My family’s creepy, unsettling week with an AI toy).

You might think all this is science fiction, Black Mirror-style, yet these are techno-sociological territories that have been explored for many years (Sony’s Aibo was launched in 1999).
Is philosophising about the merits of emotional support robots truly our priority? Apparently not, as there are more urgent matters. But it’s nonetheless an essential step, because let me remind you that AI adoption in Europe is rather low, not for functional or technological reasons, but purely emotional ones: strong resistance to change and major psychological barriers stemming from a misunderstanding of what AI actually is, resulting in barely 15% average enterprise adoption (EU Digital economy and society statistics).

So ultimately: Yes, we need to have this conversation and debate properly so we can come to terms with the changes ahead, anticipate the upheavals that’ll severely test our social system, and start rethinking our social contract (From Web 4.0 to Society 5.0).
Regulation as an integration factor
Don’t panic, I’m not about to launch into a lengthy sermon on the merits of universal basic income (an economic non-starter), but I will necessarily need to talk about regulation.
Indeed, living alongside synthetic agents (AI and robots) shouldn’t be thought of in terms of domestication, as with animals (to fit into our daily lives, dogs must be vaccinated and trained), but rather as regulating a synthetic workforce we can no longer afford to ignore.

The issue isn’t whether robots or AI deserve a pension, but how the wealth they produce can sustain our social model whilst regulating resource consumption, which creates economic tensions (electricity prices) and geopolitical ones (China’s monopoly on rare earths).
AI disrupting civilisation?
That was Fred Cavazza’s account of this forthcoming civilisational revolution. In my opinion, there’s a lot of truth in Fred’s vision about the future of AI and civilisation. Some of it sounds a bit like science fiction, but so much of the real world is mimicking SF (think of Altman’s obsession with Jonze’s Her) that he might well be right. As Fred states, the impact of AI might extend way beyond the technological breakthroughs that we are witnessing.
However, it’s still early days to my mind. I can well imagine what Anthropic’s Cowork could do in the future, but I can’t see it happening now, even though I’ve been a heavy and advanced user of Claude for years.
This will take time
It will take time to seamlessly blend these technologies to execute proper workflows and not just tasks. Agentic software is well and truly promising, and we are even able to catch glimpses of it. However, the productivity advances enabled by these technologies are often uneven. Even for advanced users.
The other day, after a one-and-a-half-hour mentoring meeting where I delivered strategic advice, I used my usual Claude project to build a second-to-none executive summary of my recommendations while frying some eggs for my wife. Yet it took three complex steps and several software suites to achieve that properly.
But don’t be mistaken, we will get there someday. It’s just that the timing is wrong; it’s not happening just yet. Innovation requires time and effort. As Fred points out, there is also a lot of resistance to change, as always with innovation, and not just in Europe, even though adoption on our continent is, as ever, lagging behind.
The impact of AI, even on jobs, will certainly be big, but it might take years to appear in the statistics, to put it in the words of Robert Solow.
That said, Forrester’s vision is more nuanced, and we will review it with JP Gownder very shortly. Time will tell whether the truth lies somewhere in the middle, as I have a hunch it does. It’s certainly less romantic, or less frightening, depending on your point of view, but 40 years of implementing tech innovation have taught me to keep a stiff upper lip.





