Mods! Mods!

Some thoughts about moderators and the decline & fall of platforms

This isn’t a thought-out blog post, but I thought it would be fun/good for me to just write something without thinking too hard about it.


My position is generally anti-newsletter (we do not need more content!), but I will always click on Ryan Broderick’s Garbage Day, a weekly curation of the weirdest, most disgusting and also just plain fun stuff online. It’s an antidote to Twitter discourse and to certain newsletters that then try to synthesise said discourse.

In one of Ryan’s last free posts, he ruminated on @maplecocaine’s now-infamous post about Twitter’s ‘main character’, and how, at the end of the day, as long as you aren’t on the trending bar, you’ve done good. Ryan’s observation about user experience was an interesting point that I didn’t really consider until after reading: he suggests that as the notion of the ‘main character’ becomes the site’s dominant characteristic, it essentially erodes the platform’s function, until it either ends up Balkanizing the user base or atrophying the functionality completely. Here’s how he describes the pathology that underpins the decline and fall of platforms:

I broadly agree with most of this. But I would take it a step further and argue that even if Twitter is “dying” (and I don’t think ‘dying’ should be considered in the same vein as, say, Myspace, Friendster or whatever came before it. Piczo?), I’m hard pressed to imagine anyone building a platform that doesn’t end up following the above rules, in even faster time. One of the things I’ve been fascinated with ever since I started studying the internet full time is Platform Logic, and how the dominance of platforms has resulted in an internet driven by the continuous assetization of data, its reproduction into assets. So long as that continues, any social communicative structure that depends on scaling en masse in a short period of time ends up at the same point: with disparate, confusing and chaotic ecosystems that will always render the user a voyeur to increasing levels of ironic absurdism. Moreover, the pursuit of monopoly/monopsony probably means there is little, if any, incentive to structurally change the social architecture of platforms, and that self-policed micro-communities on the platform (as seems to be the case with Facebook) are a model that more popular platforms will end up having to accept.

Personally, I think Ryan’s last point is the most interesting - namely, how an absence of structural moderators ends up with users becoming volunteer police officers online. There are some interesting thoughts in this paper (£/uni access needed) about the digital labour dynamics this produced in the context of Reddit’s 2015 mass-moderator strike. But again, without the architecture to even recognise the act of moderation as work, self-policing is probably going to end up producing more conflict, separation and cross-platform policing (think about how Parler, or Gab, is largely just Twitter screenshots).

I also think that as we discuss what a moderator ‘is’ or ‘should be’, it’s important to step away from who we usually think of as moderators: the forum volunteers, or the human, increasingly traumatised moderators who attempt to keep a digital community together (as per the axiom of platform logic) while presenting as having no agency. As we’ve broadly accepted that digital culture is, at the very least, ‘authentic’ culture, the moderator is much more likely to be an active agent. That’s one possible explanation for the growing hostility towards fact-checkers/‘disinformation reporters’, or even “digital culture reporters” more broadly: embedded in the job is the position of the content cop, whose incentive is to reproduce and project micro-surveillance at the macro level. Having ‘impact’ as a digital culture reporter often means displaying good policing for the benefit of one side against another, and it makes sense that the response to this will be more content policing, and subsequently, content policing of content policing. Which is to say that no matter how sophisticated any future platform architecture can or will be, so long as the Platform Logic is one that rewards and incentivises policing, we’ll see a lot more content collapses over the next decade, happening a lot faster.

Of course, I could be wrong. In which case, feel free to screenshot this blog and publicly call me out until I’m shamed into logging off for good.