The Buck Stops with Social


In Proximity to Hate

Sep 18, 2023

What does it really mean for brand ads to be placed alongside hateful content?

 

by Jeremy Grossman, PhD 

 

Last Monday, on the 22nd anniversary of 9/11, the right-wing-watch online publication Media Matters reported that the Twitter/X ads algorithms were placing sponsored ads alongside organic posts pushing unhinged antisemitic conspiracy theories, most of them committed to the fiction that 9/11 was carried out by Jewish people. Elon Musk has, of course, extolled the virtues of free speech absolutism while simultaneously throttling accounts he doesn’t like and threatening to sue the Anti-Defamation League (ADL) for, evidently, single-handedly sabotaging ad revenue on the app, a characterization of the raw global (some would say globalist) influence of a particular group of people that shares a rhetorical form with something I can’t quite put my finger on. Oh, that’s right, unhinged antisemitic conspiracy theories.

 

For Media Matters’ concern to make sense, you have to think of social media consumption in terms of sequence: as you’re scrolling, posts appear either chronologically, from the accounts you follow, or algorithmically, based on what the platform thinks you’re interested in seeing, which is to say, what it thinks will keep you on the app the longest. Then, peppered into that sequence at regular though not strictly predictable intervals are sponsored posts: advertisements from all kinds of brands, businesses, individuals, or anyone else willing to enter the self-serve social paid market. Ok, so everyone knows all of that.
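To make that mechanic concrete, here’s a rough sketch, in plain Python with entirely hypothetical names, of how a platform might interleave sponsored posts into a ranked feed. No platform publishes its actual ranking code, so treat this as an illustration of the shape of the thing, not a description of Twitter/X.

```python
# Illustrative sketch only: hypothetical names, not any platform's real code.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    score: float = 0.0        # the platform's predicted-engagement score
    sponsored: bool = False

def build_feed(organic_posts, sponsored_posts, ad_gap=5, chronological=False):
    """Rank the organic posts, then slot in a sponsored post after every `ad_gap` items.

    The advertiser supplies the sponsored post; the platform alone determines
    which organic posts end up immediately before and after it.
    """
    if chronological:
        ranked = list(organic_posts)          # assume already newest-first
    else:
        ranked = sorted(organic_posts, key=lambda p: p.score, reverse=True)

    feed, ad_queue = [], list(sponsored_posts)
    for i, post in enumerate(ranked, start=1):
        feed.append(post)
        if ad_queue and i % ad_gap == 0:
            feed.append(ad_queue.pop(0))      # the ad lands wherever the count falls
    return feed
```

The point of the sketch is simply that an ad’s neighbors are a byproduct of the ranking, not a choice anyone at the brand made.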

 

One of the keys here: self-serve means there’s relatively little gatekeeping, at least when it comes to the question of who advertises, because basically anyone can make an account, input a credit card, and throw something together. But, more importantly, “self-serve” implies a platform structure built for accessibility—not in the sense of disability or accommodation, of course, just in the promise that any Joe Schmoe should be allowed to feed dollar bills into the vending machine without a lot of specialized training. The programmatic nature of this sort of ad buying strips away layers of expertise and a certain amount of direct strategy. And with that accessibility, organized as it is around the promise of magical algorithmic targeting, is born a justification absolutely unthinkable in eras past: advertisers don’t necessarily get to decide around which content their ads will appear.

 

Sort of. Obviously, you can try to narrow down to affinity/interest targets, which implies that if you want impressions from people who like gardening, ideally your ad will be placed alongside gardening-related content, because that’s what those people enjoy engaging with. But you can’t control that variable directly, in the sense that you can’t proactively select the posts next to which your ads run or prevent your ads from running next to specific posts you want to avoid. That would be an enormous amount of manual effort, even if it were possible, and it would work directly against the frictionless ease and programmatic promise for which the ad buying experience on self-serve networks is explicitly designed. From a purely technical standpoint, the only alternative is the social networks’ assurance that they’ll keep horrible, hateful, and offensive content off the platform so that advertisers don’t have to think very much about the question at all. That doesn’t always work out exactly as planned, but it’s been the general playbook going all the way back to the glory days of YouTube.
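For a sense of what that limited control looks like in practice, here’s a hypothetical self-serve campaign spec. The field names are invented for illustration and don’t correspond to any platform’s actual API; the point is what’s missing from it.

```python
# Hypothetical self-serve campaign spec; field names invented for illustration.
campaign = {
    "creative": "Try our new fall menu",
    "daily_budget_usd": 500,
    "targeting": {
        "interests": ["gardening", "home improvement"],  # affinity/interest targets
        "locations": ["US"],
        "age_range": (25, 54),
    },
    # Notably absent: anything like "allowed_adjacent_posts" or
    # "blocked_adjacent_posts". Which organic posts the ad runs next to
    # simply isn't a lever the buyer gets to pull.
}
```

The buyer describes an audience and a budget; adjacency never enters the picture.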

 

By this logic, the level of tolerance for neo-Nazis on a particular platform is about as good an indicator as you can get of that platform’s health for advertisers, specifically because of the risk of association. And the 59% year-over-year freefall in ad revenue in the wake of the Musk takeover, which was very publicly accompanied by the continuing problem of proliferating and unaddressed hate speech, seems to confirm that. Brands and businesses, it seems, are unwilling to countenance the possibility that their ads will be placed alongside such objectionable content.

 

When you think about it, though, this is a very strange formulation in the age of programmatic advertising. The objection rests on the presumption that proximity signifies implicit, if not explicit, support for the content. If a McDonald’s ad is sandwiched (ha ha) between two racist rants, then McDonald’s has somehow become implicated in that racism. This reasoning accepts two dubious premises:

 

1) To the extent that audience targeting is high quality (itself a constant battle), showing up on the wrong person’s social feed is a reflection of the brand, rather than of the platform that allows such content.

2) Multinational corporations, famous for their business ethics and for the humane treatment of their workers, do not want to also sell things to bad people.

 

I’m not sure about either of these things. In fact, I’m very sure that neither of them is particularly true. To me, the hand-wringing by major ad buyers is a fascinating performance of a sort of rhetoric of proximity, and the question is why. The more likely explanation for the revenue freefall is a second-order phenomenon, namely that most people don’t want to go to a Nazi bar. This is a simplistic and hyperbolic way of putting it, but the perceived health of a platform is based not only on how many terrible people are posting terrible things there, but also on how much other people are talking about that fact.

 

Advocacy groups (the ADL among many others) use a rhetoric of proximity when they attempt to persuade brands to withhold ad spend from platforms, but that rhetorical formulation really has very little to do with visual or algorithmic proximity from a technical standpoint. As mentioned, the control advertisers have over ad placement is limited in large part to the quality of the audience targeting the platform offers. But this system is somewhat opaque to the everyday user, whose frame of reference is grounded in the experience of more traditional ad buying methods, where ads were run against content intentionally and strategically. As a result, when presented with evidence of visual proximity—screenshots of conspiracy theories next to an NFL ad—users are also presented with a construction of ethical or political proximity, the feeling that when an NFL ad touches the border of an antisemitic tweet, it becomes morally contaminated. It’s magical reasoning, but it feels right, and that’s what counts.

 

Thus, when Musk blames activists for a decline in ad revenue, he’s probably not totally wrong. It’s obviously also the case that his mass layoffs have degraded content moderation to such a degree that hateful hashtags, when not removed, will inevitably have ads running in the feeds they index, and the accounts spreading those hashtags are left to run rampant, soiling the experience for everyone else. Nevertheless, it’s also true that ad placements have almost nothing to do with that, so appeals to proximity stand in as a useful persuasive construction, a metaphor that transposes the contamination of brand posts by toxic feed posts into a contamination of the overall user experience. It’s both a fallacy and a completely effective rhetorical arrangement.

 

None of this is to say that advocacy groups shouldn’t be using this framing, particularly since the explicit aim of their advocacy is to reduce the proliferation of hateful and violent content by putting pressure on the platforms to eliminate it. The fact that proximity is not exactly the issue, though, also gives us the opportunity to push back against the hand-wringing, as if the solution were simply to ensure that NFL ads don’t show up next to hateful content. What needs to be made abundantly clear is that the issue isn’t the proximity of brands’ ad placements—it’s their continued support for platforms that don’t take hate seriously enough.
