Artificial intelligence (AI) could one day filter news from all the noise online, delivering only facts and screening out bias.
A new program developed by researchers at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), in collaboration with the Qatar Computing Research Institute (QCRI), looks beyond individual news stories to the outlets that publish them. The team has demonstrated a system that uses machine learning to judge whether a news source itself is accurate or politically biased.
"If a website has published fake news before, there's a good chance they'll do it again," says Ramy Baly, lead author of the new paper.
"By automatically scraping data about these sites, the hope is that our system can help figure out which ones are likely to do it in the first place."
Baly further explains that the new system needs around 150 articles to determine whether a news outlet is trustworthy. Using this AI, he believes, fake news outlets can be stamped out before they start spreading false stories. Researching and disproving fake news is a long and arduous process which, if done manually, can come too late, notes a release by MIT. By then the damage is usually done, and the true version of a story seldom has the same reach as its fake counterpart.
The researchers reportedly first collected data from Media Bias/Fact Check (MBFC), a website whose human fact-checkers analyse the accuracy and level of bias of more than 2,000 news sites, from outlets such as MSNBC and Fox News down to low-traffic content sites.
This data was then fed to a machine learning algorithm, training it to categorise news sites the way MBFC does. When the system came across a new news outlet, it detected whether the outlet had a high, medium or low level of factuality with 65 percent accuracy. As for political leaning, the AI was about 70 percent accurate at determining whether a news source leaned left, leaned right, or was moderate.
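The idea of labelling a whole source rather than a single story can be sketched in a few lines. The snippet below aggregates a source's articles into one word-frequency profile and assigns it the label of the most similar labelled profile. Everything here, from the toy word lists to the nearest-profile rule, is an invented illustration; the actual MIT/QCRI system uses far richer features and a trained classifier.

```python
from collections import Counter
import math

def profile(articles):
    """Merge a source's articles into one normalised bag-of-words."""
    counts = Counter()
    for text in articles:
        counts.update(text.lower().split())
    norm = math.sqrt(sum(c * c for c in counts.values()))
    return {word: c / norm for word, c in counts.items()}

def cosine(p, q):
    """Cosine similarity between two normalised profiles."""
    return sum(v * q.get(word, 0.0) for word, v in p.items())

def classify_source(articles, labelled_profiles):
    """Assign the label of the nearest labelled source profile."""
    prof = profile(articles)
    return max(labelled_profiles,
               key=lambda label: cosine(prof, labelled_profiles[label]))

# Hypothetical labelled sources standing in for MBFC annotations.
labelled = {
    "high": profile(["officials confirmed the report",
                     "data released by the agency shows"]),
    "low":  profile(["shocking secret they do not want you to know",
                     "unbelievable shocking truth revealed"]),
}

label = classify_source(
    ["shocking truth officials do not want you to know"], labelled)
print(label)  # → low
```

Classifying at the source level means a judgement can be made from a batch of articles, which is why the real system needs only around 150 of them per outlet.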
Researchers found that the most reliable way to spot both fake news and bias was to look at common linguistic features across a source's stories, including sentiment, complexity and structure, notes the release.
The researchers explain, for example, that fake-news propagators were more likely to employ hyperbole, be subjective, and lead with emotion. As for bias, left-leaning outlets tended to use language related to concepts of harm and care and of fairness and reciprocity, while right-leaning outlets favoured language tied to qualities such as loyalty, authority, and sanctity.
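The stylistic signals described above can be approximated with very simple text statistics. In the sketch below, the word lists and the features themselves are invented stand-ins for illustration; they are not the lexicons or feature set used in the study.

```python
import re

# Invented mini-lexicons standing in for real sentiment/subjectivity
# resources; chosen only to make the example self-contained.
HYPERBOLE = {"shocking", "unbelievable", "insane", "epic", "destroyed"}
SUBJECTIVE = {"i", "think", "feel", "believe", "clearly", "obviously"}

def style_features(text):
    """Crude per-article stylistic profile: hyperbole, subjectivity,
    emotional punctuation, and a rough complexity proxy."""
    words = re.findall(r"[a-z']+", text.lower())
    n = max(len(words), 1)
    return {
        "hyperbole_rate": sum(w in HYPERBOLE for w in words) / n,
        "subjective_rate": sum(w in SUBJECTIVE for w in words) / n,
        "exclaim_rate": text.count("!") / max(len(text), 1),
        "avg_word_len": sum(map(len, words)) / n,  # complexity proxy
    }

feats = style_features("SHOCKING! You won't believe this unbelievable story!")
```

Averaging such features over all of a source's articles would give the kind of per-source stylistic fingerprint a classifier can learn from.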
"Since it is much easier to obtain ground truth on sources [than on articles], this method is able to provide direct and accurate predictions regarding the type of content distributed by these sources," says Sibel Adali of Rensselaer Polytechnic Institute, who was not involved in the project.
The researchers caution that the system is, as of now, a work in progress. They add that, even with improvements, it would be best used alongside traditional fact-checkers like PolitiFact and Snopes.
"It's interesting to think about new ways to present the news to people," says Preslav Nakov, a senior scientist at QCRI and co-author of the paper. "Tools like this could help people give a bit more thought to issues and explore other perspectives that they might not have otherwise considered."