• +1.866.400.4536
  • founders@recognant.com

Factual Evaluation

Today, Fake News is so prevalent that the US Government is investigating Facebook and Twitter for their role in spreading it. The UK Government is looking at sanctions against Facebook and Twitter for their role in promoting Fake News that led to changes in the EU Referendum.

But it isn’t limited to politics. Fake News creates strange and dangerous health trends, ranging from anti-vaxxers to people who ditch science-based cancer treatments for home remedies. Fake News is also responsible for a surge in flat-earthers.

Building the “gut instinct” an AI needs to detect Fake News requires a very powerful set of Natural Language Processing tools and a technology called an epistemology. (See my other LinkedIn articles for more on epistemology.) One of the challenges is a chicken-and-egg problem. Epistemologies build relationships between words and establish traits on the noun entities they contain. This is done by mining the internet. While using “trusted sources” can limit the wrong information the epistemology contains, there is a lot of information that never appears in trusted sources, because it is assumed the reader knows so much about the basic world that it isn’t worth mentioning. The phrase “mules have horizontal pupils” never appears on the internet, but an AI may need to know that it is true. If a fact doesn’t appear anywhere on the internet, consider how unlikely it is to appear in a trusted source like an encyclopedia.

Facts also change, and so they need to be updated over time. New leaders are elected, populations change, and the understanding of medicine changes. All of this new information takes a very long time to make it into trusted sources, which is why a “Truthiness” analyzer is needed for an AI to be able to function.
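The idea of an epistemology built from trust-weighted sources can be sketched in a few lines. Everything here is a hypothetical illustration, not Recognant’s actual design: facts are (entity, trait) pairs, and each source that asserts a fact contributes its own trust weight.

```python
# A minimal sketch of an epistemology: a store of (entity, trait) facts,
# each weighted by the trust of the sources asserting it. All class names,
# weights, and facts here are illustrative assumptions, not Recognant's code.

from collections import defaultdict

class Epistemology:
    def __init__(self):
        # (entity, trait) -> accumulated trust weight from all sources
        self.facts = defaultdict(float)

    def assert_fact(self, entity, trait, source_trust):
        """Record that a source (trust in 0..1) claims entity has trait."""
        self.facts[(entity, trait)] += source_trust

    def confidence(self, entity, trait):
        """Accumulated trust for a claim; 0.0 if never seen.

        A true claim that never appears on the internet (like "mules have
        horizontal pupils") scores 0.0 -- the gap described above."""
        return self.facts[(entity, trait)]

kb = Epistemology()
kb.assert_fact("mule", "is a hybrid", 0.5)    # an encyclopedia-grade source
kb.assert_fact("mule", "is a hybrid", 0.25)   # a blog repeating the claim
print(kb.confidence("mule", "is a hybrid"))           # 0.75
print(kb.confidence("mule", "has horizontal pupils")) # 0.0
```

A real system would also mine these facts automatically and decay them over time as the world changes; this sketch only shows the trust-accumulation step.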

Unpartial is a wrapper for the truthiness analyzer that Loki and Lobi, Recognant’s AIs, use for their fact validation and gathering. Some changes were needed for human users, such as explaining why an article was considered Fake News. To an AI, articles are mostly binary, trusted or untrusted, so when it reads an article, it doesn’t process the whole thing if the article appears to be untrustworthy. When analyzing for humans, the degree of untrustworthiness needs to be shared, so the entire article is processed.
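The two modes described above can be sketched as follows. This is a hypothetical illustration, not Unpartial’s implementation: the claim scorer, threshold, and function names are all assumptions made for the example.

```python
# Hypothetical sketch of the two analysis modes: an AI-facing binary gate
# that bails out early on an untrustworthy article, and a human-facing mode
# that scores every claim so the degree of untrustworthiness can be shown.
# The toy scorer and the 0.5 threshold are illustrative assumptions.

def score_claim(claim, known_facts):
    """Toy claim scorer: 1.0 if the claim matches a known fact, else 0.0."""
    return 1.0 if claim in known_facts else 0.0

def ai_mode(claims, known_facts, threshold=0.5):
    """Binary gate: stop at the first claim scoring below the threshold."""
    for claim in claims:
        if score_claim(claim, known_facts) < threshold:
            return "untrusted"  # early exit; the rest of the article is skipped
    return "trusted"

def human_mode(claims, known_facts):
    """Process the whole article and report a per-claim breakdown."""
    scores = {claim: score_claim(claim, known_facts) for claim in claims}
    overall = sum(scores.values()) / len(scores) if scores else 0.0
    return overall, scores

facts = {"vaccines prevent disease", "the earth is round"}
article = ["the earth is round", "the earth is flat"]
print(ai_mode(article, facts))        # untrusted (stops at the flat-earth claim)
print(human_mode(article, facts)[0])  # 0.5
```

The early exit in `ai_mode` captures why the AI never processes a whole untrustworthy article, while `human_mode` keeps going so every claim can be explained to a reader.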

Visit www.unpartial.com and enter a URL, or install the Chrome Extension. This link will take you directly to an analysis.

Get the API

Our API is self-service and easy to get started with: Natural Language Processing on demand, at scale, and under budget.