Is artificial intelligence a threat to journalism or will the technology destroy itself?
Before we start, I want to let you know that a human wrote this article. The same can’t be said for many articles from News Corp, which is reportedly using generative AI to produce 3,000 Australian news stories per week. It isn’t alone. Media corporations around the world are increasingly using AI to generate content.
By now, I hope it’s common knowledge that large language models such as GPT-4 do not produce facts; rather, they predict language. We can think of ChatGPT as an automated mansplaining machine – often wrong, but always confident. Even with assurances of human oversight, we should be concerned when material generated this way is repackaged as journalism. Aside from the issues of inaccuracy and misinformation, it also makes for truly awful reading.
Content farms are nothing new; media outlets were publishing trash long before the arrival of ChatGPT. What has changed is the speed, scale and spread of this chaff. For better or worse, News Corp has huge reach across Australia, so its use of AI warrants attention. The generation of this material appears to be limited to local “service information” churned out en masse, such as stories about where to find the cheapest fuel or traffic updates. Yet we shouldn’t be too reassured, because it does signal where things might be headed.
In January, tech news outlet CNET was caught publishing articles generated by AI that were riddled with errors. Since then, many readers have been bracing themselves for an onslaught of AI-generated reporting. Meanwhile, CNET workers and Hollywood writers alike are unionising and striking in protest of (among other things) AI-generated writing, and they are calling for better protections and accountability regarding the use of AI. So, is it time for Australian journalists to join the call for AI regulation?
The use of generative AI is part of a broader shift of mainstream media organisations towards acting like digital platforms: data-hungry, algorithmically optimised, and desperate to monetise our attention. Media corporations’ opposition to crucial reforms to the Privacy Act, which would help impede this behaviour and better protect us online, makes this strategy abundantly clear. The longstanding problem of dwindling profits in traditional media in the digital economy has led some outlets to adopt digital platforms’ surveillance capitalism business model. After all, if you can’t beat ‘em, join ‘em. Adding AI-generated content into the mix will make things worse, not better.
What happens when the web becomes dominated by so much AI-generated content that new models are trained not on human-made material, but on AI outputs? Will we be left with some kind of cursed digital ouroboros eating its own tail?
It’s what Jathan Sadowski has dubbed Habsburg AI, referring to an infamously inbred European royal dynasty. Habsburg AI is a system that is so heavily trained on the outputs of other generative AIs that it becomes an inbred mutant, replete with exaggerated, grotesque features.
As it turns out, research suggests that large language models, like the one that powers ChatGPT, quickly collapse when the data they are trained on is created by other AIs instead of original material from humans. Other research found that without fresh data, an autophagous loop is created, doomed to a progressive decline in the quality of content. One researcher said “we’re about to fill the internet with blah”. Media organisations using AI to generate huge amounts of content are accelerating the problem. But maybe this is cause for a dark optimism; rampant AI-generated content could seed its own destruction.
AI in the media doesn’t have to be bad news. There are other AI applications that could benefit the public. For example, it can improve accessibility by helping with tasks such as transcribing audio content, generating image descriptions, or facilitating text-to-speech delivery. These are genuinely exciting applications.
Hitching a struggling media industry to the wagon of generative AI and surveillance capitalism won’t serve Australia’s interests in the long run. People in regional areas deserve better, genuine, local reporting, and Australian journalists deserve protection from the encroachment of AI on their jobs. Australia needs a strong, sustainable and diverse media to hold those in power to account and keep people informed – rather than a system that replicates the woes exported from Silicon Valley.
Samantha Floreani is a digital rights activist and writer based in Naarm.

Source: The Guardian