In today’s fast-paced world, the internet has become an integral part of our lives, providing access to a vast amount of information on various topics. But this abundance of information has given rise to another problem: disinformation.
Disinformation refers to false or misleading information that is intentionally spread to deceive people. It can be difficult to detect and can spread rapidly, leading to significant harm.
The rise of disinformation has led many to ask whether technology such as artificial intelligence (AI) can solve this problem. AI has already shown promise in areas including healthcare, finance, and transportation. However, because AI itself depends on the very information being published at such an alarming rate, it is hard to say whether it can solve the issue of disinformation.
Ultimately, the answer to whether AI can solve the disinformation problem isn’t straightforward. While AI has the potential to identify and flag disinformation, it is not a silver bullet. There are several reasons why AI may not be able to solve the disinformation problem entirely.
AI Relies on Data
AI systems learn from the data they are trained on. If that data is biased or incomplete, the AI’s results will be skewed.
In the case of disinformation, the data used to train AI algorithms can be misleading, making it difficult for the AI to detect and flag disinformation accurately. For example, if an AI algorithm is trained on a dataset that contains a high amount of false information, it may struggle to distinguish between what is true and false.
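To make that failure mode concrete, here is a minimal, hypothetical sketch using scikit-learn. The headlines, labels, and model choice are all invented for illustration; the point is simply that the model can only learn whatever its labels say, right or wrong.

```python
# A toy classifier is only as good as its training labels. The headlines
# and labels below are invented; real systems train on far larger corpora.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

train_texts = [
    "Scientists confirm vaccine passed its clinical trials",
    "Official report shows unemployment fell last quarter",
    "Miracle cure doctors don't want you to know about",
    "Secret memo proves the election was stolen",
]
# 0 = reliable, 1 = disinformation. Train on corrupted labels (e.g., swap
# a few 0s and 1s) and the same model confidently learns the wrong boundary.
train_labels = [0, 0, 1, 1]

vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(train_texts), train_labels)

# The model can only echo the patterns its labels taught it.
test = ["Secret cure they don't want you to know about"]
print(model.predict_proba(vectorizer.transform(test)))
```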
Products like ChatGPT, whose pro version runs on the newer GPT-4 backend, were trained on a dataset with a 2021 cutoff, making much of what they know potentially outdated.
New Information is Constantly Being Published
Disinformation is a constantly evolving problem, and AI may not be able to keep up with the shifting online landscape and the influx of news. Tactics and techniques change continually, and AI may not adapt quickly enough to identify and flag new forms of disinformation.
Moreover, disinformation campaigns can be highly targeted, which makes them harder for AI to detect. For example, a campaign aimed at a specific group of people may use language and terminology unique to that group, vocabulary a general-purpose model is unlikely to recognize as a warning sign.
However, one branch of AI that can help track disinformation is natural language processing (NLP). NLP techniques can be used to analyze the sentiment, tone, and language of social media posts and other content for signs of disinformation.
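As a rough illustration of that idea, here is a sketch using the Hugging Face transformers library and its default sentiment model; the post is invented, and a single sentiment score is of course only one weak signal among many.

```python
# Minimal sketch: scoring the emotional tone of a post as one weak
# disinformation signal. Assumes the `transformers` package is installed;
# the default sentiment model is downloaded on first use.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

post = "BREAKING: They are HIDING the truth from you. Share before it gets deleted!!!"
print(classifier(post))
# e.g. [{'label': 'NEGATIVE', 'score': 0.99...}] -- high-arousal, strongly
# negative language is a hint for moderators, not proof of disinformation.
```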
At times, the very same NLP models are the source of disinformation: many publishers have already come out stating that they are being flooded with AI-generated content submissions.
However, developers have stated that the recent upgrade to ChatGPT improves the software’s ability to limit the production of disinformation and is more refined than previous versions. That said, how these tools are used still comes down to the writer’s choice.
Social Media as the Driver of Disinformation
Disinformation is often spread through social media platforms, and AI may not be able to address or solve this issue entirely without the cooperation of these platforms’ developers. Luckily, most social media platforms practice content moderation by using AI-powered algorithms to detect and remove disinformation and fake news from their platforms. These algorithms can scan text, images, and videos for keywords and other patterns associated with disinformation.
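In its simplest form, that kind of pattern scan looks something like the sketch below. The patterns are invented, and real platforms rely on learned models over text, images, and video rather than hand-written rules; this only shows the basic shape of the idea.

```python
# Deliberately simplified keyword/pattern scan. Real moderation pipelines
# use trained models; hand-written rules like these are easily evaded.
import re

SUSPECT_PATTERNS = [
    r"\bmiracle cure\b",
    r"\bthey don'?t want you to know\b",
    r"\bshare before .{0,20}deleted\b",
]

def flag_for_review(text: str) -> bool:
    """Return True if the post matches any pattern associated with disinformation."""
    return any(re.search(p, text, re.IGNORECASE) for p in SUSPECT_PATTERNS)

print(flag_for_review("This miracle cure is what THEY don't want you to know!"))  # True
print(flag_for_review("Unemployment fell last quarter, report says"))             # False
```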
Still, while social media platforms have the data and the resources to identify and remove disinformation, the human minds behind them may not always be willing to do so. Furthermore, social media platforms may have conflicting interests, such as protecting user privacy and promoting free speech, which may make it difficult for them to take decisive action against disinformation.
The Future is Unclear
Despite these challenges, AI can still play a significant role in tackling the disinformation problem. For example, AI can be used to identify patterns and anomalies in data that may indicate disinformation, detect and flag accounts and content that are associated with disinformation campaigns, or monitor social media platforms for disinformation.
However, to be effective, AI needs to be combined with human intelligence. AI can flag potential disinformation, but it still requires human moderators to review and verify the content. Human moderators can also provide context to the AI, helping it to better understand the nuances of language and culture.
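As a purely hypothetical sketch of that division of labor, imagine a review queue in which the model only nominates content and a human makes the final call; the threshold and scores here are placeholders, not any platform’s actual pipeline.

```python
# Human-in-the-loop sketch: the model flags, a person decides. The 0.8
# threshold and the scores passed in are invented placeholders.
from dataclasses import dataclass, field

@dataclass
class ReviewQueue:
    threshold: float = 0.8                 # flag posts scored above this
    pending: list[str] = field(default_factory=list)

    def triage(self, post: str, model_score: float) -> None:
        # The AI narrows the firehose; it never removes content by itself.
        if model_score >= self.threshold:
            self.pending.append(post)

    def human_verdict(self, post: str, is_disinfo: bool) -> str:
        # A moderator reviews the flagged post and makes the final call.
        self.pending.remove(post)
        return "remove" if is_disinfo else "restore"

queue = ReviewQueue()
queue.triage("Questionable claim spreading fast", model_score=0.93)
queue.triage("Ordinary vacation photo caption", model_score=0.12)  # not flagged
print(queue.human_verdict("Questionable claim spreading fast", is_disinfo=True))  # remove
```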
Addressing the disinformation problem requires a multi-disciplinary approach that involves stakeholders from different sectors. Governments, social media platforms, news organizations, and civil society groups all have a role to play in combating disinformation.
AI has the potential to be a valuable tool in the fight against disinformation, but it is not a panacea. While AI is a great tool, it still relies on human intelligence to verify the information it scans and to act on it, and on continuous updates to its algorithms to keep them accurate.
The author, Aaron Rafferty, is the CEO of StandardDAO and Co-Founder of BattlePACs, a subsidiary of Standard DAO.