Are Gen AI and content compatible? Adobe organised a round-table discussion on that subject during their Experience Makers conference in Paris in early November. The debate brought together a few digital experts. During this discussion, I mentioned that there were limitations to Gen AI images, and that they weren't technical. Others contended that it was just a matter of prompt engineering. Writing a good prompt may be recommended, but the limitations of Gen AI image-generation tools extend far beyond that. Such is my point, which I substantiate in this piece with insights drawn from a year of using such software to illustrate this very website.
Gen AI and Content Marketing Lessons From Experience
This debate, organised by Adobe at the Louis Vuitton Foundation in Paris, was about generative AI and content marketing. It was an opportunity for me to take stock of a year's experience of using generative AI to produce images for Visionary Marketing.
Gen AI and Content: Excitement and Second Thoughts
At first, we were all very excited, and we did have fun producing images for all sorts of purposes. Then came a moment when hindsight was required, a time to take a step back and ponder the use of Gen AI. As I explained during the debate, it reminded me of the HDR filters when I started using Adobe Lightroom 12 years ago. At first, one used them every day. Five years later, in hindsight, one removed them all.
Here are a few thoughts on the use of these tools which, in my view, are more than ever worth investigating. Yet, one should look at them in the context of the widespread use of Gen AI tools by both Web users and the Media.
- On the one hand, what was initially pleasurable, at a time when we felt like trailblazers, ends up being repetitive and bland. We come across too many of these pictures in the Media and on the Internet. Some of my readers pointed this out to me. My co-author even says he can't understand why I don't make more use of my own photos, given that I am a photographer. He's both right and wrong, and I'll come back to that later. In the meantime, I insist that the featured image of this post is an original (and deliberately cryptic) photo by yours truly.
- On the other hand, these images, often produced in haste, end up looking the same. They are also often rather garish, with saturated colours that are very characteristic of virtual images. They're also rather banal and sometimes vulgar. I realise that this is a personal and biased statement. After all, though, when it comes to images, there is no such thing as objectivity.
- There's also a general trend towards 'heroic fantasy' type images, a genre I have nothing against, even though it's not to my liking. But this does seem to add fuel to the trivialisation of images. We can add to this the sci-fi-like illustrations, which are sometimes quite successful but also lend your content a déjà-vu quality.
- Lastly, there is a feeling of unease about images that are very realistic yet, at the same time, are not. This phenomenon is known in the digital world as the Uncanny Valley. We'll deal with this topic on this site in more detail at a later date.
Using Gen AI: Three Main Stages
In fact, at Visionary Marketing, we went through several stages. In the beginning, we only used images from my personal stock. All the Visionary Marketing content writers had to draw on this limited collection of images. These photos are personal, and therefore unique. Yet a feeling of déjà vu soon set in. Above all, we were often unable to illustrate certain concepts with those images. That makes sense, since this stock doesn't include all the metaphors one might require.
A second step was to add stock photos to these images. This allowed us to get away from that limitation. However, it also made our illustrations look more commonplace, which could have been damaging in some cases. Fortunately, we use Jumpstory, and this image bank is rather unusual, so we avoided that pitfall to some extent.
Over the Past Year: Generative AI
Since last year, in a third stage, we've been making more intensive use of generative AI to produce illustrations for our articles. In all three stages, we've come to the same conclusion: relying on the same image source all the time leads to a feeling of repetition, fatigue and trivialisation.
So you have to mix the different types of images and, above all, as I explained during the Adobe debate, you have to master the prompt so as to produce illustrations that differ from what we usually see on the Net.
The more abstract the prompt, the more eye-catching and different the image produced. That’s what makes you stand out from the crowd. This is rather counter-intuitive. Indeed, most self-proclaimed AI pundits on LinkedIn and elsewhere will be adamant that such prompts should be banned. What life has taught me, though, is that when the crowd produces A, producing non-A will make your work — and yourself — more distinctive.
Besides, advanced mastery of the tools, whether generative AI, Photoshop, Illustrator, or all of them combined, means that you retain total control over your images. Thus, you should be able to produce less commonplace pictures or illustrations for your content.
Last but not least, don’t hesitate to revisit the content to change illustrations that, with hindsight, seem too trivial, too stereotyped or too garish. Unless, of course, you like it that way.