The Rise of AI-Generated Journalism

Claudine Murphy 8 Aug 2023

In the words of Irish Times editor Ruadhán Mac Cormaic: “If a good newspaper is a nation talking to itself, the Opinion section is where much of that talking happens.”

In May, the Irish Times published an opinion column under the headline ‘Irish women’s obsession with fake tan is problematic’, written by a contributor purporting to be a young immigrant woman in Ireland. The piece accused those who use fake tan of mocking people with naturally dark skin, and quickly became the subject of much online discussion and complaint.

Shortly after publication, the Irish Times confirmed that the article and the accompanying photo may have been produced “at least in part, using generative AI technology.” The article was a hoax. The contributor corresponding with the newspaper’s editorial department was not who they claimed to be, and Mr Mac Cormaic confirmed the paper “had fallen victim to a deliberate and coordinated deception.”

Such was the significance of the deception that the hoax made international headlines. Unsurprisingly, the incident exposed a gap in the paper’s internal pre-publication procedures, one the editor has acknowledged, along with the need to make those procedures more robust.

What is the fallout from AI journalism?

News of this hoax woke many readers up to the possibility that some of the content they read in long-established publications may not be the product of journalists’ own thoughts and ideas. Instead, publications could be relying on machine-learning software to produce articles, unbeknownst to their readers.

This incident leaves us with many questions. Firstly, how can we be sure that the articles we read and trust to present us with genuine opinions do just that (i.e., that they are written by a human and not by software)? Secondly, can we trust that rigorous pre-publication checks are in place, and enforced, within publications? Finally, in the interests of transparency and to avoid misleading readers, shouldn’t articles or images be clearly marked as AI-generated, where applicable?

The Concerns

As readers, we rely on online and print media to present us with the facts. In the case of opinion pieces, we instinctively understand these opinions to have been expressed by fellow humans. The reality, however, is that they may not have been, and this presents an added layer of concern in an era of misinformation.

Should such incidents become more commonplace, that concern could sow seeds of distrust in the content we absorb from our preferred publications. Indeed, if publications appear to take a laissez-faire attitude to the risks of AI-generated content in their journalism, how will readers know which articles they can trust, which to treat with scepticism, and which to ignore altogether?

Pre-publication checks will only grow in importance, as there are clearly still unidentified risks and challenges for publications across the world in the explosion of generative AI.

The Irish Times incident exemplifies the need for publication houses to maintain robust pre-publication identity checks and journalistic integrity. This has become alarmingly relevant when dealing with ad-hoc contributors, with whom editorial teams may not interact on a day-to-day basis.

Transparency

Concerns about the ability of generative AI (such as ChatGPT) to create content and visuals in mere seconds have led EU Commission Vice President Vera Jourova to ask Google, Meta, TikTok and the other companies signed up to the 27-nation bloc’s voluntary agreement on combating disinformation to work to tackle the issues AI presents.

For readers, transparency is crucial. In this respect, Jourova recently opined that companies offering services that may spread AI-generated disinformation should roll out technology to “recognise such content and clearly label this to users.”

While the imminent EU Digital Services Act will impose greater obligations on these tech companies to protect their users from disinformation, Jourova has said that such companies should start labelling AI-generated content immediately.
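To make “labelling” concrete, the sketch below shows one way a publication’s content system might attach a reader-facing disclosure to an article. It is purely illustrative: the Article record, its flag names and the label_ai_content helper are all hypothetical, and nothing here implements any real standard or DSA requirement.

```python
from dataclasses import dataclass, field

@dataclass
class Article:
    """A simplified article record in a hypothetical publishing system."""
    headline: str
    body: str
    # Flags set by editorial staff or upstream tooling; names are illustrative.
    ai_generated_text: bool = False
    ai_generated_images: bool = False
    disclosures: list[str] = field(default_factory=list)

def label_ai_content(article: Article) -> Article:
    """Attach a reader-facing disclosure whenever AI involvement is flagged.

    This mirrors, in spirit, the call to "recognise such content and
    clearly label this to users"; the wording and logic are assumptions.
    """
    if article.ai_generated_text:
        article.disclosures.append(
            "This article was produced, at least in part, using generative AI."
        )
    if article.ai_generated_images:
        article.disclosures.append(
            "One or more images in this article were generated by AI."
        )
    return article

article = label_ai_content(
    Article(headline="Example headline", body="...", ai_generated_images=True)
)
print(article.disclosures)
# ['One or more images in this article were generated by AI.']
```

In practice the flags would be set by editorial staff or by detection tooling, and the disclosure text would be rendered prominently alongside the article rather than merely stored.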

Print and online media would be wise to follow Jourova’s advice, particularly given their obligations under the IPSO Code of Practice “not to publish inaccurate, misleading or distorted information or images”.

What’s the fix?

In a world where ChatGPT is still in its infancy and the potential for the dissemination of misinformation steadily grows, it is essential that we, as readers, apply critical thinking to the articles we read and rely upon to form our own views and opinions.

While it can be tempting to be swayed by clickbait headlines (the famous meme “I read it on the internet – it must be true” springs to mind), it is helpful to consider what evidence exists in support of a writer’s views. Asking what narrative the writer or publication may be seeking to advance is another good starting point, especially in an era of cancel culture.

Ultimately, publications must take greater care to impose stringent pre-publication checks. Just as editorial and plagiarism checks are long established in the production of articles and research papers, internal checks and procedures to guard against such hoaxes surely need to be implemented, as sketched below.
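As a closing thought experiment, such internal procedures could be made explicit as a checklist that every submission must pass before it is cleared. The sketch below is hypothetical: the check names stand in for processes (identity verification, plagiarism screening, AI-content review) that a real newsroom would carry out with people and dedicated tools, not a few lines of Python.

```python
from typing import Callable

# Each check inspects a submission record and returns (passed, note).
# The checks themselves are placeholders for real editorial processes.
Check = Callable[[dict], tuple[bool, str]]

def identity_verified(submission: dict) -> tuple[bool, str]:
    # e.g. confirmed by a phone or video call with the contributor
    return bool(submission.get("identity_confirmed")), "contributor identity check"

def plagiarism_screened(submission: dict) -> tuple[bool, str]:
    return bool(submission.get("plagiarism_checked")), "plagiarism screening"

def ai_review_done(submission: dict) -> tuple[bool, str]:
    return bool(submission.get("ai_reviewed")), "AI-generation review"

PRE_PUBLICATION_CHECKS: list[Check] = [
    identity_verified,
    plagiarism_screened,
    ai_review_done,
]

def clear_for_publication(submission: dict) -> bool:
    """Run every check; refuse to clear the piece if any single check fails."""
    for check in PRE_PUBLICATION_CHECKS:
        passed, note = check(submission)
        print(f"[{'ok' if passed else 'FAILED'}] {note}")
        if not passed:
            return False
    return True

# An ad-hoc contributor whose identity was never confirmed is not cleared:
print(clear_for_publication({"plagiarism_checked": True, "ai_reviewed": True}))
```

The design choice worth noting is that the checklist fails closed: a submission that has not affirmatively passed every check is held back, which is precisely the discipline the Irish Times hoax showed to be missing for ad-hoc contributors.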