by Jacob O'Bryant

Algorithms don't cause filter bubbles, people do

FYI: I imported this post from Substack, and I am far too lazy to fix the tweet formatting.

As a pro-algorithm person, it always sticks out to me when I see people say some variation on “algorithms are bad because they cause filter bubbles.”

When I’ve seen this brought up, it’s usually taken for granted rather than supported with evidence. While I was trying (in vain) to find an old tweet I liked about a study on YouTube and filter bubbles, I found this:

So I’ll be sure to reference that next time the issue comes up. And now that I’ve sent it out here, I’ll actually be able to find it again instead of relying on Twitter search.

In fact, preventing filter bubbles (so to speak) is an explicit design goal of recommender systems. It’s referred to in the literature as “serendipity.” If you make a recommender system that only shows people things they are most likely to enjoy, they get bored. You end up showing them the same types of things, and you often stick to popular items. Each item is a good recommendation in the (very) short term, but not in the long term. So when designing a recommendation algorithm, you have to build in a mechanism for it to explore beyond the user’s current tastes.

This is related to a common trade-off in machine learning (and in life generally): exploration vs. exploitation. When you exploit, you use your current knowledge to get the best possible reward right now. When you explore, you increase your knowledge so that you can get better rewards in the future. This has been formalized as the multi-armed bandit problem, in which you face a row of slot machines, each with its own payout probability distribution, and you have to decide which one to pull at each turn.

One of the simplest strategies, called “epsilon-greedy,” works like this: some fixed percentage of the time, say 10%, you pick a slot machine at random. The rest of the time, you pick whichever machine has given you the highest average payout so far. I use this strategy myself in The Sample (you get a random newsletter issue 33% of the time) and Findka Essays (you get a random essay 15% of the time).
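Here’s a minimal sketch of epsilon-greedy in Python. The three machines and their payout probabilities are invented for illustration; in The Sample the “arms” are newsletters rather than slot machines, and this isn’t the production code, just the shape of the idea:

```python
import random

TRUE_PAYOUT_PROBS = [0.3, 0.5, 0.7]  # hypothetical machines
EPSILON = 0.10  # explore 10% of the time (The Sample uses 33%)

pulls = [0] * len(TRUE_PAYOUT_PROBS)      # times each machine was pulled
rewards = [0.0] * len(TRUE_PAYOUT_PROBS)  # total payout from each machine

def pick_machine():
    if random.random() < EPSILON:
        # Explore: pick a machine uniformly at random.
        return random.randrange(len(TRUE_PAYOUT_PROBS))
    # Exploit: pick the machine with the best average payout so far.
    averages = [r / n if n else 0.0 for r, n in zip(rewards, pulls)]
    return max(range(len(averages)), key=averages.__getitem__)

for _ in range(10_000):
    i = pick_machine()
    pulls[i] += 1
    rewards[i] += 1.0 if random.random() < TRUE_PAYOUT_PROBS[i] else 0.0

print(pulls)  # most pulls should concentrate on the 0.7 machine
```

Even this toy version shows the trade-off: the random 10% looks wasteful on any single pull, but it’s what lets you discover that the third machine pays best.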

There are plenty of other ways you can optimize for serendipity and diversity. I use a technique in Findka Essays which I’ve dubbed “popularity smoothing.” I put the essays into six bins based on how many times they’ve been recommended previously. When picking an exploit (i.e. non-random) recommendation, I first choose a bin at random. Then I pick the essay with the highest predicted rating for the given user. So the most popular essays (Paul Graham’s, in case you’re wondering) won’t be recommended more than 1/6 of the time.
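Here’s a simplified sketch of popularity smoothing in Python. The real implementation is more involved; `predicted_rating` stands in for whatever model actually scores essays per user, and the equal-size binning below is just one way to draw the six bins:

```python
import math
import random

NUM_BINS = 6
EPSILON = 0.15  # Findka Essays serves a completely random essay 15% of the time

def recommend(essays, times_recommended, predicted_rating):
    """essays: list of essay ids.
    times_recommended: essay id -> how often it's been recommended before.
    predicted_rating: essay id -> predicted rating for this user."""
    if random.random() < EPSILON:
        # Explore: random essay, ignoring popularity and predicted rating.
        return random.choice(essays)

    # Sort by prior recommendation count and split into six bins, so the
    # most popular essays all land together in the last bin.
    ranked = sorted(essays, key=lambda e: times_recommended[e])
    bin_size = math.ceil(len(ranked) / NUM_BINS)
    bins = [ranked[i:i + bin_size] for i in range(0, len(ranked), bin_size)]

    # Exploit: pick a bin uniformly at random, then take the essay in it
    # with the highest predicted rating for this user. Because the popular
    # essays share a single bin, they're served at most ~1/6 of the time.
    chosen_bin = random.choice(bins)
    return max(chosen_bin, key=lambda e: predicted_rating[e])

# Hypothetical usage with made-up data:
essays = [f"essay-{i}" for i in range(30)]
times_recommended = {e: random.randrange(100) for e in essays}
predicted_rating = {e: random.random() for e in essays}
print(recommend(essays, times_recommended, predicted_rating))
```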

Anyway. I’ve been thinking about this the past couple days because I made an extremely amusing update to The Sample’s landing page:

In other words: what’s the cure for filter bubbles? An algorithm, of course. I haven’t used that word (or “machine learning”) explicitly since this is meant for a general audience. I do on Findka Essays though, since I designed that landing page for people from Hacker News:

This one is funny because it hijacks the word “curated,” which usually means selected manually by a human. It’s my attempt to sell the algorithm while avoiding the negative connotations.

I digress. If you haven’t already, I highly recommend reading The Toxoplasma Of Rage from Slate Star Codex. In a nutshell: the ideas that spread are the ones that are good at spreading, not the ones that would benefit humanity if they spread. And rage + controversy spread pretty well. Thanks to the internet and social media, humans are far more connected, so those ideas spread much faster.

The sobering thing about The Toxoplasma Of Rage is that the essay offers no solutions. That was my main question after reading it: are we doomed to be victims of our own human nature?

That question, however, is one of the reasons I spend so much time working on recommender systems. There is always an algorithm. The default one promotes rage, controversy, and misinformation. But we don’t have to stick with the default.

Published 9 Mar 2021

