Blame the Algorithm

It’s easy to blame a machine.

In looking at the effects of ranking algorithms on democracy, it’s easy to get tunnel vision and focus on the new technology involved. But racism, populism, and extremism are not new phenomena. Algorithms such as the ones powering Facebook’s News Feed or Google’s Search Results Page (SRP) “optimize” their contents to better suit their understanding of the user’s interests. Given recent episodes in which outdated ideologies were amplified, it’s almost a reflex response to try to regulate the unregulated machines. But if we assume these algorithms are not biased themselves, that they only confirm and amplify a person’s existing bias whatever it may be, shouldn’t we instead ask why, in 2017, so many people still harbor xenophobic tendencies?

Just like with “trash” television programming, it’s easy to fall into conspiracy territory and blame faceless corporations, networks, or even the government for the shoddy content. But TV is a business. Networks will play whatever gets them eyeballs for the ad spaces that go between the shows, and as is apparent from the ratings, people do want these terrible things blasted at them. Nobody is forced to tune into those programs, yet millions do on a daily basis. The real question is: why?

Facebook and Google work very similarly to television networks. The content they display is not really what’s important to them. The reason the News Feed and SRP generate this echo chamber of interests, where people can see themselves reflected in the content on the screen, is that they need people to keep coming back; they need to keep their attention. Why? Because the ads around the content are what drive their revenue stream.

Of course, the fact that these algorithms exist gives some people an opportunity to abuse them in order to amplify others’ already existing tendencies, for good or for evil. This is a relatively new phenomenon, but it has already happened, we’ve seen the effects, and even if we didn’t like them, it will keep happening. As humans, it’s tempting to look for scapegoats whenever we encounter something new that we don’t like or understand. Machines are especially convenient for this, since they can’t defend themselves. But the machines are innocent; the people are not. We can question the ad-based business models that drive the creation of these algorithms, sure (though, for the longest time, we haven’t, and it looks like we never will), but what we should really be questioning is: what is wrong with our society that leads people to hatred and radicalism?
