Content algorithms shouldn’t be influenced by profit-making
by Oliver Sauter (@worldbrain) To effectively sort fact from fiction in the fake-news war, we need to give people the technologies and mental tools to think for themselves.
You’re a journalist. Tick tock. The clock on the wall watches you. Your deadline for an article on the Cape Town water crisis looms. You’ve scoured mountains of data but need one particular fact to establish a key point. You saw the research a week ago but, now that you need it, can’t recall where you found it.
Sorting fact from fiction
A Google search of “Cape Town water crisis” yields over 5m results. Poring over them, you confront your next problem: sorting fact from fiction. The Cape Town water crisis is fiercely contested by a number of political actors. “Lies, misinformation, genuine errors and conspiracy theories”, Daily Maverick’s Rebecca Davis writes, flow hard and fast.
WorldBrain Memex was born when I tried, and failed, to sort fact from fiction in the genetically modified organism (GMO) debate. I realised my ignorance — and the amount of work it took to understand this topic — made it easier for me to be misled. This was 2013, long before the spectre of fake news saturated our headlines.
Today, humanity is getting to grips with how dangerous the fake news problem is. In its latest report, Freedom House says democracy faces its most-serious crisis in decades. In the report, Freedom on the Net 2017: Manipulating Social Media to Undermine Democracy, the advocacy group reveals that “online manipulation and disinformation tactics played an important role in elections in at least 18 countries over the past year.” Freedom House states this has contributed to an overall decline in internet freedom.
But it’s not just critics outside the social titans who hold this view. In January 2018, Facebook’s product manager for civic engagement, Samidh Chakrabarti, admitted that Facebook (the world’s biggest social network with over 2bn users) was not good for democracy. In a blog post, Chakrabarti confessed that his company had been “far too slow” to recognise the manipulation of the platform by “bad actors”.
Facebook’s solution? Changing its algorithms and hiring some 10 000 more staff to fight fake news. But, days later, it was revealed that Facebook allowed “dangerous fake news about vaccines to go viral”. In the same week, Vanity Fair labelled Google’s salvo against fake news, an app called Bulletin, a “fake news disaster waiting to happen”.
I believe that the way the social media giants are trying to solve the misinformation problem is a waste of valuable resources. It is also likely to make matters worse. The go-to strategy is to spend massive amounts of time and money employing factcheckers and curators to deal with the rising mass of content. In tandem, Facebook, Twitter and Google are using new kinds of artificial intelligence to filter the ‘good’ from the ‘bad’.
Determining what is quality — and what is not — as an answer to the fake news crisis means the game is already lost. Any perspective that claims to be a single source of truth, or that makes a judgement about quality, is inherently biased. As such, it is at risk of being rejected by those who don’t share this bias, especially with emotionally loaded topics. This can lead to increased tribal thinking and filter bubbles.
One-sided arguments or single sources are never a good way of understanding societal issues. To understand issues deeply, one needs to have easy access to — and to draw from — as many perspectives as possible. After all, democracies function best with diverse media and an electorate whose opinions are based on a multiplicity of diverse perspectives.
The underlying problem the social media giants have to fight against is that clickbait and sensationalism are baked into their business model. Facebook, Twitter and Google can’t enjoy massive engagement — and profits — by countering misinformation at a deeper level. To effectively counter misinformation, the algorithms used to deliver content should not be influenced by profit-making.
Fixing the fake news crisis would mean the social giants would need to bring users face-to-face with cognitive dissonance, which would be bad for business. Cognitive dissonance is that uncomfortable feeling one has when considering two conflicting ideas, beliefs or values, or when established beliefs are challenged by new information.
My approach is to make it 10x faster for people to grasp the complexity of content they consume, and topics they research. To form better opinions, I want people to get deeper insight, faster understanding and to digest as many diverse perspectives as possible.
This is the second goal I’m working on. There’s a way to go, but my understanding is that countering fake news means empowering people to embrace cognitive dissonance and giving them access to a diversity of thought — people need to be able to think for themselves.
- The Guardian: Fake news is a threat to humanity, but scientists may have a solution
- The New Yorker: Fighting fake news is not the solution
Oliver Sauter (@worldbrain), a media entrepreneur, has been working on a solution to misinformation since 2014, before fake news became a thing. He is the founder of WorldBrain.io, the creators of Memex: a free, private browser extension that solves one of the most-frustrating experiences of doing online research: organising and finding websites again. Memex reduces the time it takes to research, thus giving knowledge professionals more time to focus on writing. Or relaxing.
“Motive” is a by-invitation-only column on MarkLives.com. Contributors are picked by the editors but generally don’t form part of our regular columnist lineup, unless the topic is off-column.