In a world where fake news spreads at breakneck speed, discerning fact from fiction is increasingly difficult. And despite the countless hours of work and millions of dollars invested in projects and policies aimed at promoting truthful journalism, the situation remains far from ideal—and that was before artificial intelligence could be enlisted to create realistic-looking fake images.
In fact, fake news is considered free speech, according to the European Data Protection Supervisor, and fighting misinformation is almost impossible because “the sheer mass of fake news spread over social media cannot be handled manually.”
Fortunately, and fittingly, artificial intelligence can also play a part in unmasking fake news—no matter how it’s generated. This power comes primarily through the explosive growth of large language models like GPT-4.
For example, anonymous developers have launched AI Fact Checker, a tool created to use AI to fact-check information. Once a user enters a claim into the checker, the platform searches for reliable sources on the internet, analyzes the data, and compares that information with the provided claim. It then determines whether the claim or fact is true, false, or unclear and provides sources to back up its conclusions.
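In outline, that flow is: retrieve evidence for the claim, ask a language model to compare the claim against the evidence, and return a verdict with sources. Here is a minimal sketch of how such a pipeline could be wired together; the function names, prompt format, and stubbed search results are illustrative assumptions, not AI Fact Checker's actual code.

```python
# Hypothetical claim-checking pipeline: search for sources, hand the
# claim plus evidence to an LLM, and parse a true/false/unclear verdict.
import json

def search_sources(claim: str) -> list[dict]:
    """Stub: a real tool would call a web-search API here and return
    snippets with their URLs."""
    return [{"url": "https://example.com/article",
             "snippet": "Background reporting related to the claim."}]

def check_claim(claim: str, llm) -> dict:
    """Compare the claim against retrieved evidence using any LLM
    exposed as a str -> str callable; return a structured verdict."""
    evidence = search_sources(claim)
    prompt = (
        f"Claim: {claim}\n\n"
        f"Evidence (JSON): {json.dumps(evidence)}\n\n"
        'Reply with JSON only: {"label": "true" | "false" | "unclear", '
        '"sources": [URLs that support the answer]}'
    )
    return json.loads(llm(prompt))

# Canned "model" so the sketch runs end to end:
fake_llm = lambda _prompt: (
    '{"label": "unclear", "sources": ["https://example.com/article"]}'
)
print(check_claim("The moon is made of cheese.", fake_llm))
```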
Decrypt tested the tool, and it proved 100% accurate when fact-checking recent news, historical events, and miscellaneous information. However, when asked about the prices of goods, services, and investment vehicles, the tool stumbled, confusing price predictions with actual price behavior.
Other AI tools are aimed at the same problem: Google’s Fact Check Tools, Full Fact, and FactInsect are among the most renowned. There are even decentralized alternatives like Fact Protocol and the now-defunct Civil. But one alternative is beginning to stand out from the crowd due to its ease of use and accuracy: Microsoft’s AI-powered browser and search engine.
The software giant has integrated GPT-4 into its Edge browser, putting an AI bot at users’ fingertips. Unlike ChatGPT, which cannot search the web and relies on training data that cuts off in 2021, the new Bing with GPT-4 can browse the internet, so it can provide up-to-date answers instantly, with links to reliable sources for any question, including the confirmation or debunking of a dubious claim.
To verify fake news using GPT-4 with Bing, simply download the latest version of the Edge browser and click on the Bing logo icon in the top right corner. A sidebar menu with three options will open. Then, select the Chat option, ask what you want to know, and Microsoft’s AI will give you the answer.
Image: Microsoft’s GPT-powered AI validates Decrypt’s headline. We don’t spread fake news.
Users can also click the Insights option, and Microsoft’s GPT-powered AI will surface relevant information about the website publishing the news, including the topics it covers, recurring themes, page traffic, and common criticisms of the outlet. Unlike AI Fact Checker, it does not deliver a flat “this is true” or “this is false” verdict, but it provides enough information to reach a conclusion.
The other side of the coin
While AI can track and compare multiple information sources in a matter of seconds, there are also risks associated with using AI algorithms to verify news. Some AI shortcomings include:
Training on flawed data: If an AI model is trained on inaccurate or biased data, its output will reflect those flaws and produce incorrect results. Ensuring that the data used to train AI algorithms is accurate and representative is crucial.
AI hallucinations: AI algorithms can generate information that seems plausible but has no real basis. These hallucinations can lead to false conclusions when verifying news.
Vulnerability to manipulation: AI algorithms can be attacked through their training pipelines, for example by injecting fake or biased data into the learning process. These data-poisoning attacks are loosely analogous to a 51% attack in crypto: feed the model enough incorrect reinforcement, and it begins to treat the wrong data as true (a toy sketch follows this list).
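To make the poisoning idea concrete, here is a toy sketch using a simple scikit-learn classifier rather than a real LLM training pipeline; the dataset, model, and 40% flip rate are all assumptions for illustration. An “attacker” flips a chunk of the training labels, and the poisoned model’s test accuracy typically drops relative to the clean one.

```python
# Toy data-poisoning demo (illustrative only): flipping training labels
# degrades the resulting model. Requires numpy and scikit-learn.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Clean model: trained on correct labels.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poisoned model: the "attacker" flips 40% of the training labels,
# reinforcing wrong answers until the model learns them as true.
rng = np.random.default_rng(0)
poisoned_y = y_train.copy()
flip = rng.choice(len(poisoned_y), size=int(0.4 * len(poisoned_y)),
                  replace=False)
poisoned_y[flip] = 1 - poisoned_y[flip]
poisoned = LogisticRegression(max_iter=1000).fit(X_train, poisoned_y)

print("clean accuracy:   ", accuracy_score(y_test, clean.predict(X_test)))
print("poisoned accuracy:", accuracy_score(y_test, poisoned.predict(X_test)))
```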
“Artificial intelligence systems such as ChatGPT are susceptible to poisoning attacks. These types of attacks are relevant when threat actors attack training data sets, a proof-chain will eliminate such scenarios,” tweeted Gummo (@GummoXXX) on March 23, 2023.
This is especially concerning for models that rely on human interaction, or that are managed by a central entity… which is exactly what happens with OpenAI and ChatGPT.
Source: https://decrypt.co/137012/ai-journalism-fake-news-checker-detector-fact-check