Originally published by The Drum, June 27, 2025
Author: Alex McDonnell, Creative Director, Inizio Engage XD
I recently spent a few days queuing (mostly) and listening (sometimes) at London’s first SXSW. It was a bit like being a member of Soho House. Shell out loads of cash, only to discover there’s nowhere to sit and everyone is pitching something. But I did meet plenty of interesting people in the ever-present queues. And when you could get a seat, it was often worth the wait.
Of course, AI was 95% of the chatter. So, here are my three AI-ups and three AI-DOOOOMs.
When it comes to AI, everyone in the creative industries flips manically between ‘it’s just a new tool’ and ‘it’s gonna put us all out of work.’ To be honest, I do too. But perhaps it’s not creative Armageddon just yet.
I heard a lot of talk about digital watermarking tech that would make it easy to identify AI content and trace it back to its source. So, if I’m a voice artist who hears myself in a commercial I didn’t record, it should be possible to trace that recording back to its creator and even identify the training data used to create the voice.
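To make the idea concrete, here’s a minimal sketch of how that tracing could work. None of this is any specific vendor’s tech: the extract_watermark function and the registry are hypothetical stand-ins, and real schemes (C2PA-style content credentials, for example) are far more involved.

```python
# A minimal sketch of watermark-based provenance tracing.
# extract_watermark() and REGISTRY are hypothetical stand-ins,
# not any real platform's API.

from dataclasses import dataclass

@dataclass
class Provenance:
    generator: str        # the model/platform that produced the content
    training_voice: str   # whose voice the model was trained on
    licence: str          # terms the training data was licensed under

# A registry mapping embedded watermark IDs to provenance records.
REGISTRY = {
    "wm-7f3a": Provenance("HypotheticalVoiceGen v2", "Jane Doe", "non-commercial"),
}

def extract_watermark(audio_bytes: bytes) -> str | None:
    """Hypothetical: pull an embedded watermark ID out of the audio.

    Real watermarking hides the ID in imperceptible signal features.
    """
    return "wm-7f3a" if audio_bytes else None

def trace(audio_bytes: bytes) -> Provenance | None:
    wm = extract_watermark(audio_bytes)
    return REGISTRY.get(wm) if wm else None

if __name__ == "__main__":
    record = trace(b"...commercial audio...")
    if record:
        print(f"Generated by {record.generator}, "
              f"trained on {record.training_voice} ({record.licence})")
```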
Plus, the EU Artificial Intelligence Act means AI content within the EU will need to be clearly marked. Imagine all your content is suddenly plastered with ‘generated by AI’ legal copy. Wouldn’t that feel a bit ‘value-range’?
We could be in for a real Black Mirror moment when the first browser plugins appear that label AI content. How much will there be, I wonder?
AI and machine learning have helped drive the cost of sequencing one human genome from $2bn to $50. That absolutely blows my mind. Today, every child who goes into the ICU in the UK could theoretically have their genome sequenced.
The resulting dataset could transform millions of lives. Imagine a world where genetic diseases only take minutes to diagnose. Imagine if advances in CRISPR technology mean a quick genetic snip is all it takes to change everything.
Businesses like Exactly.ai and Spawning are just two examples of the burgeoning ‘ethical AI’ sector. These creator-focused platforms offer ways for artists to protect and expand their IP, rather than have it stripped for parts.
Exactly.ai’s founder, Tonia Samsonova, offered up a simple but effective definition of ‘ethical AI.’ She proposes four principles: the AI is trained on the creator’s data; there is a transparent and controllable pipeline; the creator owns the model and output; and the platform is additive and IP-expanding.
Of course, it’s naive to think the outputs of these models won’t just be pumped into other AI platforms that aren’t so ethically minded. But it’s a start.
For anyone now lacking in existential angst, here are my three AI-DOOOOMs.
I heard ethics claims from AI players that simply don’t stand up to scrutiny. Some companies in the space seem to be saying ‘we’re protecting creators’ while clearly developing technologies that replace them.
One voice marketplace has paid a much-touted $5m to its ‘voice creators.’ It has over 5,000 voices on its platform, and the top earner has reportedly made up to $10k per month. Doesn’t sound like there’s much left over for the bottom four thousand or so.
Another says it has a pool of company shares so it can grant options to select actors, alongside a promise to focus on corporate use rather than entertainment. Erm, do they have any idea how many people make a living from acting in corporate films and learning videos? More maths to be done there, I think.
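In fact, here’s that back-of-envelope maths for the voice marketplace. This is a rough sketch using only the reported figures; the 12-month payout window is my own illustrative assumption, not something the company disclosed.

```python
# Back-of-envelope maths on the voice marketplace's reported payouts.
# Reported: $5m paid out in total, 5,000+ voices on the platform,
# top earner making up to $10k/month.
# The 12-month window below is an assumption for illustration.

total_paid = 5_000_000   # total payouts to voice creators ($)
num_voices = 5_000       # voices on the platform
top_monthly = 10_000     # reported top earner, per month ($)
months = 12              # assumed payout window

average_per_voice = total_paid / num_voices
top_annual = top_monthly * months
average_for_the_rest = (total_paid - top_annual) / (num_voices - 1)

print(f"Average payout per voice: ${average_per_voice:,.0f}")   # $1,000
print(f"Top earner over {months} months: ${top_annual:,.0f}")   # $120,000
print(f"Average for the other {num_voices - 1:,} voices: "
      f"${average_for_the_rest:,.0f}")                          # ~$976
```

Roughly $1,000 per voice, total, before you even account for the inevitable skew towards a handful of popular voices.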
I’m not saying we should (or even could) stop the development of these tools. That feels like London taxi drivers waving their fists pointlessly at Uber. But, for god’s sake, let’s be honest about it. Then we can focus on helping people understand what’s coming next and develop the new business models and skills to make the best fist of it they can.
KPMG’s Trust in AI report surveyed nearly 50,000 people in 47 countries. Some 57% admitted to using AI and presenting the results as their own work. Nearly half (48%) reported uploading sensitive company information to public AI tools. Crikey.
We’re all using it, but AI-shame means we’re keeping it secret for fear of being judged or replaced. If we don’t get off our collective keisters and talk about it, I suspect we’re in for a very rough (and legally actionable) ride.
Then there was a talk featuring Jim McKelvey, the co-founder of merchant services business Square. He definitely got under my skin. His hypothesis? Search is (almost) dead.
According to the exceedingly well-informed McKelvey, 95% of people in the USA earning $125k or more already use Large Language Models (LLMs) to discover and choose brands. Plus, people are twice as likely to trust LLMs as the results of a Google search.
Why does that matter? It means that your brands, your products, and even you, will soon be what the LLMs think you are. And there’s not much you can do about it. As far as I know, you can’t issue ChatGPT with a takedown notice.
McKelvey suggests the result for marketing will be a massive growth in SEO’s replacement: AEO (answer engine optimization). But how would that even work? Absolutely saturate the web with content to be learned by the AIs? I’m not sure anyone knows, but I am sure that loads of people are going to be claiming they know any minute now.
If I put my optimist’s hat on, perhaps it’s the start of a new era of truthfulness (seems unlikely, I know). The LLMs might be learning your brand’s claims, but they’re also learning every customer review.
Ultimately, there was a lot of doom on show, but there was positivity too. In the end, I left SXSW with sore feet, a bunch of new contacts, and enough optimism to queue up next year. If I can persuade someone to buy me a ticket.