According to Nieman Journalism Lab:

“Dan Schultz, a graduate student at the MIT Media Lab (and newly named Knight-Mozilla fellow for 2012), is devoting his thesis to automatic bullshit detection. Schultz is building what he calls truth goggles — not actual magical eyewear, alas, but software that flags suspicious claims in news articles and helps readers determine their truthiness. It’s possible because of a novel arrangement: Schultz struck a deal with fact-checker PolitiFact for access to its private APIs.”

(via Bull beware: Truth goggles sniff out suspicious sentences in news » Nieman Journalism Lab.)

It’s a fascinating idea. Imagine a browser plug-in that could fact-check all sorts of claims against sources such as Wikipedia. It could have a huge impact on the future of news media. Imagine reading an article on, say, climate change in The Australian, and having this “truth goggles” plug-in point out all of the inconsistencies in its reporting.
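
As a rough illustration of how such a plug-in might work, here is a minimal TypeScript sketch of a content script that splits an article into sentences, asks a fact-checking service to rate each one, and highlights the dubious ones. Everything specific here is an assumption for illustration: the endpoint URL, the response shape, and the rating labels are made up, since PolitiFact’s API is private and Schultz’s actual implementation isn’t public.

```typescript
// Hypothetical sketch of a "truth goggles"-style content script.
// The fact-check endpoint and its response shape are invented here;
// PolitiFact's real API is private and its schema is not public.

interface ClaimVerdict {
  claim: string;
  rating: "true" | "mostly-true" | "half-true" | "false" | "unknown";
  sourceUrl?: string;
}

// Placeholder standing in for a real fact-checking service.
const FACT_CHECK_ENDPOINT = "https://example.org/api/check";

// Naive sentence splitter; a real tool would use proper NLP.
function splitIntoSentences(text: string): string[] {
  return text
    .split(/(?<=[.!?])\s+/)
    .map((s) => s.trim())
    .filter((s) => s.length > 0);
}

// Ask the (hypothetical) service to rate a single claim.
async function checkClaim(claim: string): Promise<ClaimVerdict> {
  const response = await fetch(FACT_CHECK_ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ claim }),
  });
  if (!response.ok) {
    return { claim, rating: "unknown" };
  }
  return (await response.json()) as ClaimVerdict;
}

// Walk the article's paragraphs and wrap dubious sentences in a highlight.
// Naive string replacement; entities and nested markup would break this
// in a real extension.
async function annotateArticle(article: HTMLElement): Promise<void> {
  for (const paragraph of Array.from(article.querySelectorAll("p"))) {
    const sentences = splitIntoSentences(paragraph.textContent ?? "");
    for (const sentence of sentences) {
      const verdict = await checkClaim(sentence);
      if (verdict.rating === "false" || verdict.rating === "half-true") {
        paragraph.innerHTML = paragraph.innerHTML.replace(
          sentence,
          `<mark title="Rated ${verdict.rating}">${sentence}</mark>`
        );
      }
    }
  }
}

// In a Chrome extension this would run as a content script on page load.
const articleBody = document.querySelector<HTMLElement>("article");
if (articleBody) {
  annotateArticle(articleBody);
}
```

The hard part, of course, isn’t the highlighting but the claim matching and rating behind that imagined endpoint, which is exactly what Schultz’s deal with PolitiFact is about.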

Or imagine reading about Hillary Clinton ramping up the case for invading Iran because it is weaponising uranium, and having “truth goggles” point out that there is no evidence to support this claim.

Of course, this process doesn’t *need* to be automated with an algorithm. Chrome extensions like “Glass” allow people to comment on websites. For example, see this screenshot of a comment I left using Glass on a story in the Brisbane Times today about News Ltd corruption allegations from former QLD senator Bill O’Chee.

Could we all use tools like Glass to subvert the ability of the mainstream media and certain blogs to spin bullshit to their readers? Of course, most sites have comments sections these days, but those tend to be moderated, and news sites promote comments from their faithful believers. Would Glass-like tools also get corrupted by flame wars? How do we keep them clean and useful? User moderation à la Wikipedia?
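
One way to picture that “user moderation à la Wikipedia” idea: annotations only stay visible while the community vouches for them. The sketch below is purely hypothetical; the data shape, thresholds, and voting model aren’t from Glass or any real tool.

```typescript
// Hypothetical community-moderation rule for reader annotations:
// hide anything heavily flagged as abuse or voted down overall.

interface Annotation {
  id: string;
  text: string;
  upvotes: number;
  downvotes: number;
  abuseFlags: number;
}

// The thresholds here are arbitrary placeholders, not a real policy.
function isVisible(a: Annotation): boolean {
  if (a.abuseFlags >= 3) return false;
  return a.upvotes - a.downvotes >= 0;
}

const example: Annotation = {
  id: "a1",
  text: "The study cited here does not actually support this claim.",
  upvotes: 12,
  downvotes: 2,
  abuseFlags: 0,
};

console.log(isVisible(example)); // true
```

Whether a rule that simple survives contact with an actual flame war is, of course, the open question.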