I’m pro-technology, but will AI destroy the creative arts?
Imagine you are a music producer with an inclination toward fraud. You come up with an excellent scheme. When you upload music, Spotify pays you in proportion to how many times your music is streamed. You realise that if you create and run fake accounts, and make them stream your music repeatedly, you earn more from the royalty payouts than your costs. It is a money-printing machine.
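The arithmetic behind the scheme is simple: fake-stream revenue only has to exceed the cost of running the bot accounts. A minimal sketch, using entirely hypothetical figures (the per-stream payout, stream counts and costs below are illustrative assumptions, not numbers from any real case):

```python
# Back-of-envelope economics of streaming fraud.
# All figures are hypothetical and purely illustrative.

def monthly_profit(num_songs, streams_per_song, payout_per_stream, monthly_costs):
    """Royalty income from fake streams, minus the cost of running them."""
    revenue = num_songs * streams_per_song * payout_per_stream
    return revenue - monthly_costs

# Assume 1,000 tracks, 200 fake streams per track per month,
# a $0.003 per-stream payout, and $200/month in bot-account costs.
profit = monthly_profit(1000, 200, 0.003, 200)
print(f"${profit:,.2f} per month")  # revenue of $600 minus $200 in costs
```

The point the scheme relies on is that the cost side stays nearly flat while the revenue side scales with the number of tracks and accounts.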
But you hit a snag. Spotify does not approve of fake streams, and takes great pains to detect and stop them. You can't scale your operation without it looking suspicious that a handful of your songs have suddenly become so popular. Luckily, you have a solution: artificial intelligence.
This is allegedly the solution embraced by a music producer in the US, who was indicted for massive streaming fraud this month. The indictment accuses him of buying up to 10,000 AI-generated songs a month from an AI music company, distributing them on multiple streaming platforms, and spreading out his fake streams among them to avoid detection. He reportedly made $12 million over five years from this venture.
There were only about five AI music companies in existence at the start of the period covered by the indictment, and I was the CEO of one of them. (Not this one, I hasten to add.) But if you came up with this scheme today, you wouldn't need to be in contact with an AI music company at all: you can now easily, and for free, use AI to generate as many songs as you like.
Some people are excited about our new-found AI-powered ability to create at will any digital media — images, videos, text, speech, music — such that it’s indistinguishable from the best human output. It may lead to cheap AI assistants, more people making art and even AI surpassing our skills of technological invention. But it also presents serious issues for which we currently have no good answers.
Fraud — of which you can be sure there are many more examples waiting to emerge — is just the first. Deepfakes are another, and are altogether more frightening. Children from the US to South Korea are being sent deepfaked pornographic images of themselves by classmates. Similar images of Taylor Swift were seen 27 million times on X (formerly Twitter) before they were taken down. People have been able to create material like this for years but the difficulty of its creation used to be a bottleneck. Thanks to AI, that bottleneck has been removed.
The deluge of AI-generated content also threatens our ability to sift truth from lies. A Google image search for Beethoven returns an AI image as the top hit. Faked images of Taylor Swift fans supporting Trump were reshared by the former president himself. AI chatbots, which many think will replace Google, all regularly mix fact with fiction. And that's before we come to the difficulty of examining students in the era of ChatGPT, and the huge impact on the creative job market that simple economics tells us will come from an explosion of supply.
I know lots of people at AI companies, and, in private, they are clear: AI will let anyone write a novel. AI will let people swing elections. Long before we get to Terminator-style end-of-world scenarios, we are entering a time when anyone can create any image, fake any audio, write any article, in a matter of seconds. I am generally pro-technology. But even the pro-technology among us should ask: have we really thought this through?
Ed Newton-Rex is chief executive of the AI non-profit Fairly Trained and a composer