Over a period of 90 days spanning December 2019 to February 2020, people shared, liked and commented on content from sites spreading misleading or false information about the COVID-19 respiratory illness 142 times more than on information disseminated by the Centers for Disease Control and Prevention (CDC) and the World Health Organization (WHO). This was reported in March 2020 by NewsGuard, a New York-based service that rates the transparency and credibility of web news content and recently launched a Coronavirus Misinformation Tracking Center.
Combating fake news is an urgent priority, and AI and big data offer hope of spotting it. Big data, typically characterised by the 'Vs' of variety, volume and velocity, now faces the additional challenge of 'veracity'. In essence, fake news is a challenge of data veracity. However, since 'facts', or for that matter 'truth' itself, are very hard to discern, the definition of data veracity is itself blurred.
It is important to understand that 'misinformation dynamics' lie at the intersection of fake news and the big data concept of data veracity. Fake news is not the accidental inaccuracy found in bulk enterprise data; it is intentional and highly dynamic. To counter this, AI can learn behaviours through continuous and improving pattern recognition. Systems are trained to detect fake news using content that people have historically flagged as misinformation; this past information is gathered and used to sort fact from fiction.
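The idea of learning from historically flagged content can be sketched as a tiny word-frequency classifier. Everything here is illustrative: the two "flagged" and "trusted" headline lists are invented training data, and a real system would use far larger corpora and richer features than bag-of-words counts.

```python
import math
from collections import Counter

# Hypothetical training data: headlines previously flagged by users as
# misinformation vs. headlines from trusted sources (invented examples).
flagged = [
    "miracle cure doctors dont want you to know",
    "shocking secret the government is hiding",
]
trusted = [
    "health agency issues updated travel guidance",
    "researchers publish study on vaccine trial results",
]

def word_counts(docs):
    counts = Counter()
    for doc in docs:
        counts.update(doc.split())
    return counts

flagged_counts = word_counts(flagged)
trusted_counts = word_counts(trusted)
vocab = set(flagged_counts) | set(trusted_counts)

def log_prob(headline, counts):
    # Laplace-smoothed log-likelihood of the headline's words under one class.
    total = sum(counts.values())
    return sum(
        math.log((counts[w] + 1) / (total + len(vocab)))
        for w in headline.split()
    )

def looks_flagged(headline):
    # Classify as likely misinformation if the flagged-class likelihood wins.
    return log_prob(headline, flagged_counts) > log_prob(headline, trusted_counts)

print(looks_flagged("shocking miracle cure the government is hiding"))  # True
print(looks_flagged("researchers publish updated study results"))      # False
```

This is the naive Bayes intuition behind such detectors: past human flags become labels, and new content is scored against the patterns those labels reveal.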
In conjunction with AI, a few of the techniques being used to combat fake news are:
- Web page scoring – a method developed by Google that attempts to understand the context of a page without depending on links or 'cookies'.
- Fact weighing against reputed media sources – the headline, body text and geo-location of a story are examined by a natural language processing engine, and the facts are then verified by AI against other reputed sites reporting the same story.
- Discovery of sensational words in news headlines – keyword analytics through AI to detect and flag fake news headlines.
- Predictive analytics backed by machine learning – predicting the reputation of a news source or website from multiple features such as its Alexa web rank and domain name.
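The sensational-keyword technique in the list above can be sketched as a simple weighted lexicon match. The lexicon, weights and threshold below are assumptions for illustration; a production system would learn them from historically flagged headlines rather than hard-coding them.

```python
# Illustrative lexicon of sensational terms with hand-picked weights
# (assumed values, not a real production lexicon).
SENSATIONAL = {
    "shocking": 2.0,
    "miracle": 2.0,
    "secret": 1.5,
    "exposed": 1.5,
    "you won't believe": 3.0,
}

THRESHOLD = 2.0  # assumed cutoff: flag headlines scoring at or above this

def sensationalism_score(headline):
    # Sum the weights of every lexicon phrase found in the headline.
    text = headline.lower()
    return sum(weight for phrase, weight in SENSATIONAL.items() if phrase in text)

def flag_headline(headline):
    return sensationalism_score(headline) >= THRESHOLD

print(flag_headline("Shocking miracle cure EXPOSED"))        # True
print(flag_headline("Health agency issues travel guidance")) # False
```

In practice this keyword pass is a cheap first filter; headlines it flags would typically be routed to the heavier checks described above, such as fact weighing against reputed sources.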
However, not all researchers are convinced that AI and big data can tame fake news. AI may fail to build automated fake news filters because it cannot understand human expression the way people do. In its current form, AI can analyse the text of a topic or the style of the language, but it cannot grasp the context, tone or meaning behind the statements. For this, human intervention will still be required.
The other, and perhaps greater, problem is that the same AI tools that allow us to fight fake news are being used by fake news creators. Content created in this manner makes it even tougher to separate reality from fiction.
While a social platform like Facebook realises that fake news needs to be dealt with, it is aware that with trillions of posts from billions of users, no amount of manual fact checking will solve the problem. Outlining some of Facebook's efforts to combat fake news, its CEO, Mark Zuckerberg, recently said, "Historically, we have relied on our community to help us understand what is fake and what is not… We do not want to be arbiters of truth ourselves, but instead rely on our community and trusted third parties."
Facebook, however, understands that these strategies may be error-prone and that it needs to raise the bar on fake news by using Artificial Intelligence (AI). AI is already a core component of Facebook, driving the prioritisation of advertising and posts on its site, and the company uses it to detect patterns of words or phrases that might indicate fake information. However, given the mixed opinion about the efficacy of AI in combating fake news, Zuckerberg is guarded about revealing Facebook's plans for it. "The most important thing we can do is improve our ability to classify misinformation," he explains. "This means better technical systems to detect what people will flag as false before they do it themselves."
Meanwhile, as AI and big data get equipped to combat fake news, a few tools have worked well to identify and debunk it:
- Spike – identifies and predicts viral and break-out stories
- Hoaxy – helps users identify fake news sites
- Snopes – spots fake stories
- Google Trends – watches searches and trends
- CrowdTangle – early detection and monitoring of social content
As Tim O’Reilly, founder, CEO, and Chairman of O’Reilly Media, says, “The essence of algorithm design is not to eliminate all error, but to make results robust in the face of error. Much as we stop pandemics by finding infections at their source and keeping them from finding new victims, it isn’t necessary to eliminate all fake news, but only to limit its spread.”
Therefore, until AI, big data and machine-learning-enabled tools become more robust and reliable, it is critical to raise awareness among people not to take every story at face value, and to apply critical thinking before pressing the ‘share’ button!
The views and opinions published here belong to the author and do not necessarily reflect the views and opinions of the publisher.