Algorithmic Scapegoats

Over the last couple of weeks (and months (and years)) I've found myself repeatedly taking part in conversations about how we can combat the rise of hate speech and fringe ideologies online. It's an important topic and one I enjoy thinking through, because there simply aren't any obvious answers, but one small yet consistent talking point has increasingly felt problematic.

No matter which way you approach the problem, at some point "algorithms" get thrown under the proverbial bus. Most of the time, people mean "dynamic" social-media timelines and feeds, but the term is also routinely used as a catch-all that people struggle to pin down when pressed. "Algorithms" are just something that Big Tech uses to manipulate people, with the conclusion being that this is directly responsible for a sizeable amount of the "radicalisation" of online discourse. Heck, even the satirical Death to 2020 film has a moment where its fictional tech CEO claims that they're aiming for their algorithms to bring down the time taken to radicalise a user's viewpoint from 30 minutes to "as low as 5"!

And "algorithms" aren't just cropping up in discussions around hate speech. In 2020, the eponymous "algorithm" has been blamed more widely than ever before. In the UK, "algorithms" were initially used as justification for our late lockdown policies; then, later in the year, they were at the centre of our school exam fiasco. At the same time, across the Channel in Europe, we've seen the creation of new laws which will force websites to "algorithmically determine" whether content breaches copyright law (alongside censorship laws, such as those targeting hate speech) on upload.

Now, to be clear, I'm not saying that Big Tech, social media, or even "algorithms" don't share some of the blame. I have no doubt that the (actual) algorithms used to determine student grade boundaries last summer were deeply flawed. Nor do I doubt that the echo chambers created by predictive, dynamic timelines on social media increase radicalisation for people who begin sliding down that slope. But the consistent fallback point when discussing many of society's wider issues increasingly seems to be to point the finger at the "algorithms", which just feels like a 21st-century version of scapegoating. It distracts from the harder conversations around human behaviour and societal construction, shifting the blame away from the hard topics to the "it's all the fault of technology" camp. It alleviates personal responsibility, and that worries me.

Which brings me on to a(n expectedly) brilliant article from Jeremy Keith that has been sat in my RSS inbox for weeks now. In Clean Advertising, Jeremy highlights some of the absurdities underlying our current privacy nightmare of behavioural advertising. Most importantly:

...there’s a problem with behavioural advertising. A big problem.

It doesn’t work.

Almost all of the data suggests that behavioural advertising would work if the prediction models were accurate. And of course, those prediction models are, well, algorithms.

The problem is that those algorithms are terrible. Which isn't surprising. The biggest algorithmically driven companies in the world – Twitter, Facebook, et al. – still can't get predictive feeds to work, despite literally owning all of the data needed to build them, so why on earth would something as complex as consumer relationship mapping be possible?
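
To make concrete what kind of thing these "prediction models" actually are, here's a deliberately naive sketch in TypeScript. It's my own illustration, with made-up names and weights rather than any real ad platform's code, but it captures the basic shape: guess some interest weights for a user, then serve whichever ad best matches those guesses.

```typescript
// A toy model of behavioural ad targeting (illustrative only, not any real
// platform's code): score ads against an inferred interest profile.
type Ad = { id: string; keywords: string[] };

interface UserProfile {
  // Interest weights inferred from tracking data, e.g. { gaming: 0.8 }.
  interests: Record<string, number>;
}

function scoreAd(ad: Ad, profile: UserProfile): number {
  // Sum the user's inferred weight for each keyword the ad is tagged with.
  return ad.keywords.reduce(
    (score, keyword) => score + (profile.interests[keyword] ?? 0),
    0
  );
}

function pickAd(ads: Ad[], profile: UserProfile): Ad {
  // Serve whichever ad scores highest against the inferred profile.
  return ads.reduce((best, ad) =>
    scoreAd(ad, profile) > scoreAd(best, profile) ? ad : best
  );
}

// Example run with invented data:
const profile: UserProfile = { interests: { gaming: 0.8, parenting: 0.1 } };
const ads: Ad[] = [
  { id: "games-console", keywords: ["gaming"] },
  { id: "pram", keywords: ["parenting"] },
];
console.log(pickAd(ads, profile).id); // "games-console"
```

The selection logic itself is trivial; everything fragile lives in that `interests` object, which real systems have to guess from the clicks, page visits, and purchases hoovered up by third-party trackers. If those guesses are wrong, the whole edifice quietly serves irrelevant ads while still reporting impressive-sounding numbers.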

Of course, that's not what we're told. We're told that online advertising is so effective that it can predict you're gay before you know it yourself; guess when you're pregnant; or know how you're going to vote. After all, this is a trillion-dollar industry we're talking about! There has to be something more to it, right? Well, Jeremy puts it better than I ever could:

Suppose someone told you that they keep tigers out of their garden by turning on their kitchen light every evening. You might think their logic is flawed, but they’ve been turning on the kitchen light every evening for years and there hasn’t been a single tiger in the garden the whole time. That’s the logic used by ad tech companies to justify trackers.

In other words, it's really hard to prove a negative. Any time someone criticises the industry, it points to vanity metrics like clickthrough rates as "proof", even though most of these metrics are devised by the industry, monitored by the industry, and (crucially) near-impossible to categorically tie back to the ads themselves.
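
As a purely illustrative aside (the numbers below are invented), this is all a clickthrough rate actually is: one network-reported count divided by another. Nothing in the calculation links a click to a sale, let alone proves that the behavioural targeting caused either.

```typescript
// Illustrative only: a clickthrough rate is derived entirely from numbers
// the ad network itself records.
const impressions = 100_000; // times the ad was served (network-reported)
const clicks = 150;          // times the ad was clicked (network-reported)

const clickThroughRate = clicks / impressions;
console.log(`CTR: ${(clickThroughRate * 100).toFixed(2)}%`); // "CTR: 0.15%"

// Whether any of those 150 clicks turned into revenue, and whether the
// targeting (rather than placement or chance) drove them, is exactly the
// part these metrics never show.
```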

The bigger smoking gun is that, right now, proof of the positive – that online tracking actually does lead to increased revenue – is also mysteriously absent. Sure, everyone knows about some successful online ad campaigns, but how many of those were successful due to a combination of virality, third-party talking points (such as blog posts and social media coverage), and old-school contextual advertising (e.g. showing ads for a video game on a tech website)?
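
For contrast, here's a rough sketch of what old-school contextual advertising boils down to (again, the inventory and function names are made up for illustration): the page's own topic is the only signal, and no user profile is involved at all.

```typescript
// Contextual advertising, illustrated: pick an ad from the page's topic.
// No tracking, no behavioural profile; the page itself is the only signal.
const adsByTopic: Record<string, string[]> = {
  technology: ["New graphics card", "Indie video game sale"],
  cooking: ["Cast-iron pans", "Recipe box subscription"],
};

function pickContextualAd(pageTopic: string): string | undefined {
  const candidates = adsByTopic[pageTopic] ?? [];
  return candidates[Math.floor(Math.random() * candidates.length)];
}

console.log(pickContextualAd("technology")); // e.g. "Indie video game sale"
```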

Jeremy makes a fantastic case in his article for why the blind adherence to behavioural advertising is terrible for the web:

  1. It's bad for users, because it serves largely irrelevant ads;
  2. It's bad for advertisers, because it wastes their time and money (not just in terms of serving ads poorly, but also in terms of the huge cost of developing tracking software and analysing the largely pointless results in the first place);
  3. And, of course, it's extremely bad for the web, causing page bloat, impacting performance, and resulting in terrible user experiences (cookie banners, GDPR consent forms, JavaScript requirements, cookie walls, etc.);
  4. Plus (secret option four), it's bad for the planet, too!

But I think it's equally problematic from a societal perspective. Across the Western world, we're seeing laws being debated (and even approved) that put a huge amount of faith in "the algorithm", such as that EU copyright initiative. Increasingly, politicians are turning to "algorithms" to answer hard questions of governance, like how to grade students when exams aren't possible. And worse, the general public is starting to argue that algorithms need to be changed in situations where societal change is far more necessary.

What does that have to do with ads? I think they're closely intertwined. I think that faith in online behavioural advertising – and the mythologising that has grown up around it about its effectiveness – accounts for a large amount of the false faith people now have in "algorithms". The scary reality is that the actual algorithms simply don't work. They aren't fantastical magic boxes that can solve all our problems, or generate all our evils. They're just fairly hard-to-unpack data models, built on biases and fragile methodologies that rarely stand up to any kind of scrutiny.

Jeremy rounds out his article by arguing that there is hope for a future without cross-site tracking and online behavioural advertising. As more browsers block third-party cookies by default, and more users opt in to third-party script blockers, the tenuous claims of the ad-serving agencies become easier to expose. I can only pile some more hope on top of that: if "algorithm-based" ads are finally shown to be a false economy, then maybe that will start to chip away at the narrative around "algorithms" in general.

It would be a nice dream.
