The liar’s dividend and OpenAI: Slovakia is proof that more must be done
OpenAI, the makers of ChatGPT, have laid out their election policies for the most significant year for democracy the world has ever seen. 2024 will see a historic number of elections: about half of the world’s adult population will have the chance to vote, in an age where synthetic media can spread across the 6.7 social media accounts the average person holds.
Will its policies be consequential compared to those of other generative AI companies? There is the potential for a deluge of AI-generated nonsense: manipulated images, deepfakes, and highly convincing political propaganda.
As an example, in the run-up to Slovakia’s 2023 election, the media entered a moratorium period in which they were not permitted to cover the election, intended to give voters space to make up their minds. In this brief lull, a snippet of audio circulated online that was later debunked as having been created by generative AI. The snippet purported to capture a candidate discussing buying votes from the country’s Romani minority, and a few days later that candidate lost the election. Though the audio may not have been the sole cause, it is a prime example of fake media circulating without the guardrails needed to prevent it from influencing an outcome.
Likely in response to the growing threat of lawsuits from publishers who claim their work has been pilfered, OpenAI has recently been licensing more and more news content. The result of this commercial arrangement is that high-quality news is surfaced in ChatGPT. Contrast this with voters on platforms such as Facebook, who will only see whatever appears in their feed.
Though one of the biggest players in the space has stated that it has identified the most obvious avenues for electoral abuse and shut them down, the question remains: what will happen with the smaller, open-source models that have no guardrails whatsoever? Trolls and campaign operatives will still be able to use these.
OpenAI’s new policies, while welcome, could simply be seen as a fig leaf: they will only be as successful as their enforcement. Others have also introduced new measures for the year ahead. DALL·E, for example, has guardrails that decline requests to generate images of real people, including candidates. Some US states are requiring political ads to disclose whether they contain AI-generated material. Meta has banned political advertisers from using its generative AI ad-creation tools, so a campaign cannot build its political ads with Meta’s technology. Google, likewise, will block political keywords altogether in its ad-creation AI tools and will watermark AI-generated imagery and audio so it is identifiable. X, notably, has remained silent. Researchers are also watching Telegram, which is known for its laissez-faire approach to content removal.
Notwithstanding the measures being put in place, an element of smoke and mirrors will remain. Academics have coined the term “the liar’s dividend” for those wishing to capitalise on the saturation of information and turn it to their advantage. No doubt 2024 will also have to contend with politicians who exploit a confused information environment: when threatening content surfaces, they can maintain support simply by claiming it was authored by generative AI.