It’s clearly impossible for the web as a platform to objectively report that a stated fact is true or false. This would require a central authority of truth – let’s call it MiniTrue for short. It may, however, be possible for our browsers and social platforms to show us the conversation around an article or component fact. Currently, links on the web are contextless: if I link to the Mozilla Information Trust Initiative, there’s no definitive way for browsers, search engines or social platforms to know whether I agree or disagree with what is said within (for the record, I’m very much in agreement – but a software application would need some non-deterministic fuzzy NLP AI magic to work that out from this text).
Imagine, instead, if I could highlight a stated fact I disagree with in an article, and annotate it by linking to that exact segment from my website, from a post on a social network, from an annotations platform, or from a dedicated rating site like Tribeworthy. As a first step, it could be enough to link to the page as a whole. Browsers could then find backlinks to that segment or page and help me understand the conversation around it from everywhere on the web. There’s no censoring body, and decentralized technologies work well enough today that we wouldn’t need to trust any single company to host all of these backlinks. Each browser could then use its own algorithms to figure out which backlinks to display and how best to make sense of the information, making space for browser vendors to find a competitive advantage in providing context.
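Pieces of this already exist: the W3C Web Annotation Data Model defines selectors for targeting an exact text segment of a page. As a rough sketch (not a full implementation), here is what such an annotation could look like, constructed in Python; the URLs and quoted text are hypothetical placeholders:

```python
import json

# A minimal annotation shaped after the W3C Web Annotation Data Model.
# It targets an exact quoted segment of a page and attaches an assessment.
# All URLs and the quoted strings below are hypothetical placeholders.
annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "motivation": "assessing",            # standard motivation for reviews/assessments
    "creator": "https://example.com/me",  # hypothetical author profile
    "body": {
        "type": "TextualBody",
        "value": "I disagree with this claim; see my linked rebuttal.",
        "format": "text/plain",
    },
    "target": {
        "source": "https://example.org/article",  # hypothetical annotated page
        "selector": {
            # TextQuoteSelector pins the annotation to an exact text run,
            # disambiguated by its surrounding prefix and suffix.
            "type": "TextQuoteSelector",
            "exact": "the stated fact being disputed",
            "prefix": "some text just before it, ",
            "suffix": ", some text just after it",
        },
    },
}

print(json.dumps(annotation, indent=2))
```

A browser or aggregator that discovered annotations like this one could group them by the `target.source` and selector they point at, which is exactly the backlink-gathering step described above.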
There is a lot to think about.