5 Causes of Tomorrow #2: Disinformation
Here is what DALL-E 3 produced when given the prompt, “Artwork to illustrate the issue of AI-powered disinformation influencing the outcome of elections.”
What’s the issue?
The phenomenon of bad actors programming bot accounts to spread disinformation online isn’t new. As far back as the 2016 US presidential election, researchers estimated that 31% of the low-credibility information on X (then Twitter) was spread by just 6% of accounts, those identified as likely bots, sowing division in society and potentially swaying the outcome.
However, with the emergence of AI, we are entering a new era of “professionalised” disinformation: not only is it cheaper and easier to produce at vast scale, but the untruths themselves are more persuasive and harder to detect than ever before.
With half the world heading to the polls in 2024/25, including major geopolitical players like the US, India and the UK, we are facing a perfect disinformation storm, one that bolsters populists seeking election and could have damaging long-term consequences for the state of our democracies.
As a result, misinformation and disinformation top the list of risks over a two-year horizon in the World Economic Forum’s 2024 Global Risks Report.
What could brands do?
Aside from platform owners like X and Meta acting to stop malicious bot accounts at the source, brands can partner with disinformation experts and nonprofits on media literacy campaigns that teach people how to spot bot accounts and how to evaluate information and sources with a healthy dose of scepticism.
The seriousness of the issue need not dictate the tone of the cause, however, as training people to spot whether messages are from “a bot or not” is just the kind of thing that’s ripe for gamification.
Which brands could be a natural fit?
Social media platforms, search engines, news and media sites, brands that stand for truth and against inauthenticity, and brands invested in critical thinking and problem solving.
Further reading
How AI-powered bots work and how you can protect yourself from their influence
Why disinformation researchers are raising alarms about A.I. chatbots