Tools for Online Speech
by Jacob O'Bryant

Filter bubbles vs. divisiveness

I have a small follow-up on my previous post about the idea of filter bubbles. As I was discussing the post with a friend, I realized that I conflated two different negative phenomena: filter bubbles and divisiveness.

"Filter bubbles" (or "echo chambers") are when you are exposed to only things that you like/agree with. To reiterate, there is little empirical evidence that algorithms cause filter bubbles. And if you understand a bit about how recommender systems work, that seems like a plausible result. I covered this in the first half of the post.

(This doesn't mean that we couldn’t use more variety in the content we consume. The whole reason I’m bullish on algorithms is that they have so much potential to reach far outside your normal sources, exposing you to great things you would never have come across on your own.)

But in the second half, I talked about divisiveness, about content that spreads precisely because it’s controversial. How do algorithms relate to that? I’m not aware of (nor have I looked for) any research on that topic, but it at least seems obvious to me that humans do a great job of spreading outrage and controversy even without the aid of algorithms. However, in this case I think it is likely that the algorithms of today do accelerate the problem, though I don’t know to what degree.

To illustrate: if you see a low-quality tweet from some barbarian, what do you do? You might be tempted to reply. You wouldn’t quote-tweet them, of course, but other, less-sophisticated tweeters often do. And all the algorithm sees is engagement, a positive signal. By interacting with the tweet, you train the algorithm to show it to more people.
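
To make that concrete, here’s a minimal sketch (in Python, with invented numbers) of what an engagement-only ranking score looks like: every like, reply, and quote-tweet counts the same, so a pile of angry replies lifts a tweet just as effectively as genuine appreciation. This isn’t Twitter’s actual ranking; it only illustrates the incentive.

```python
# Hypothetical engagement-only ranking score: every interaction is treated
# as a positive signal, regardless of the intent behind it.

from dataclasses import dataclass

@dataclass
class Tweet:
    likes: int
    replies: int
    quote_tweets: int

def engagement_score(t: Tweet) -> float:
    # An angry reply and an enthusiastic like are indistinguishable here.
    return 1.0 * t.likes + 1.0 * t.replies + 1.0 * t.quote_tweets

outrage = Tweet(likes=50, replies=400, quote_tweets=300)   # mostly angry dunks
helpful = Tweet(likes=500, replies=20, quote_tweets=10)    # quietly appreciated

# The controversial tweet outranks the helpful one: 750 vs. 530.
print(engagement_score(outrage), engagement_score(helpful))
```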

That being said, there are other signals. You can mute or block users, and the first item in the context menu is “Not interested in this tweet.” (I’m going to start using that.) Also, Twitter could theoretically use sentiment analysis to figure out whether engagement is actually an endorsement. For example, if you reply to someone with “your tweet is dumb,” Twitter could infer that your engagement should not count as a positive signal for that tweet.
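
Here’s a rough sketch of what sentiment-weighted engagement could look like. The `sentiment` function is a toy stand-in for a real classifier, and the scoring is invented; the point is only that a hostile reply can be made to count against a tweet instead of for it.

```python
# Hypothetical sentiment-weighted engagement: replies judged negative count
# against the tweet. sentiment() is a toy stand-in for a trained model.

def sentiment(text: str) -> float:
    """Returns a score in [-1, 1]; a real system would use a proper classifier."""
    negative_words = {"dumb", "awful", "wrong", "terrible"}
    words = (w.strip(".,!?") for w in text.lower().split())
    return -1.0 if any(w in negative_words for w in words) else 0.5

def adjusted_engagement(replies: list[str], likes: int) -> float:
    # Likes stay positive; each reply contributes its sentiment score,
    # so "your tweet is dumb" pushes the tweet down rather than up.
    return likes + sum(sentiment(r) for r in replies)

replies = ["your tweet is dumb", "this is awful", "great point!"]
print(adjusted_engagement(replies, likes=10))  # 10 - 1 - 1 + 0.5 = 8.5
```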

In practice, I have no idea how much weight these negative signals are given in Twitter’s (or anyone else’s) algorithm. I do think that negative, private feedback should be more prominent on social media, just as Pandora doesn’t hide the thumbs-down button in a context menu. We need more people to signal their preferences without inadvertently promoting bad stuff.
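
If a platform did lean on explicit negative feedback, the weighting might look something like this. The signal names and weights below are entirely made up; I have no idea what values any real platform uses.

```python
# Hypothetical weighting of explicit feedback signals; the weights are
# invented for illustration -- no platform's real values are known here.

SIGNAL_WEIGHTS = {
    "like": 1.0,
    "reply": 0.5,
    "quote_tweet": 0.5,
    "not_interested": -3.0,  # private, explicit negative feedback
    "mute_author": -5.0,
    "block_author": -8.0,
}

def feedback_score(signals: dict[str, int]) -> float:
    # Sum of (count * weight) for each signal type users sent.
    return sum(count * SIGNAL_WEIGHTS.get(name, 0.0)
               for name, count in signals.items())

# A tweet with lots of replies but a few "not interested" clicks can still
# end up with a negative score under this weighting.
print(feedback_score({"reply": 10, "not_interested": 3}))  # 5.0 - 9.0 = -4.0
```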

(Here’s the part where I plug The Sample. It uses 5-star ratings!)

There is a catch: if we start putting frowny face buttons everywhere so that the algorithms can figure out which stuff we don’t like… then filter bubbles might actually be a problem—not because algorithms inherently prevent you from seeing a variety of content (they don’t), but because now you’re explicitly training them to not show you certain types of things. (I’m reminded of this thread.)

That’s a potential second-order effect, but would it actually happen? I don’t know. It might depend on the user. Things should be fine as long as people upvote good content, even when they disagree with it. But at the system level, lots of people will downvote everything they disagree with. Will that affect only them, or will their ratings negatively influence everyone? Even if it’s just the former, will that harm society (not to mention the individuals)?

I have ideas about how you could counter it, but I’ll stop there. Clearly, recommendation algorithms are hard. They’re hard in a different way from other types of machine learning. If you need to classify an image as either a bird or a fire hydrant, there’s an objective answer. But recommendation deals with human taste. And that is a complicated thing.

When I was working on music recommendation, it struck me how easy it was for the algorithm to get thrown off. There are so many different things you have to account for: how recently a song has been played, how much data you’ve collected from the current user, how much variety you should give them, and so on. If you don’t get everything just right, you end up in a feedback loop where the algorithm keeps reinforcing its own mistakes.
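
As a rough illustration (not the actual system I worked on, and all the constants are invented), here’s one way those factors might be folded into a single track score: a recency penalty, a confidence term that shrinks predictions toward neutral when you have little data on the user, and a variety term that nudges the ranking away from over-played artists.

```python
# Hypothetical track-scoring sketch combining recency, confidence, and variety.

import math

def track_score(predicted_rating: float,
                hours_since_last_play: float,
                user_rating_count: int,
                artist_play_share: float) -> float:
    # Recency penalty: approaches 1.0 as time since the last play grows.
    recency = 1.0 - math.exp(-hours_since_last_play / 24.0)
    # Confidence: with few ratings from this user, shrink toward a neutral 3.0.
    confidence = user_rating_count / (user_rating_count + 20.0)
    blended = confidence * predicted_rating + (1.0 - confidence) * 3.0
    # Variety: penalize artists that already dominate the user's history.
    variety = 1.0 - artist_play_share
    return blended * recency * variety

# A well-liked but just-played, over-represented artist can rank below a
# merely decent prediction for something fresher.
print(track_score(4.8, hours_since_last_play=1, user_rating_count=200, artist_play_share=0.4))
print(track_score(3.9, hours_since_last_play=72, user_rating_count=200, artist_play_share=0.05))
```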

But with enough tweaking, eventually it works. There’s no shortcut; no way to abstract over all the hard details that often show up when you’re dealing with humans. But despite the effort required to make great recommender systems, I’m convinced the payoff will be worth it.

Published 16 Mar 2021

