How the contrived panic over Haitian immigrants hijacked our algorithms — and our brains
This is what cognitive warfare looks like.
By now you’ve all heard about the absurdly false rumor alleging that Haitian immigrants in Springfield, Ohio, are stealing people’s pets and eating them — a claim that, within a matter of days, managed to travel from no-name social media accounts to mainstream right-wing media figures and outlets like Fox News, all the way to the former president’s mouth on debate night. The rumors, which were designed to appeal to people who were already receptive to anti-immigrant talking points, built upon decades of racist tropes depicting Haitians as violent, unruly savages possessed by evil spirits. When news organizations like NBC and ABC News tried to verify the rumors, they couldn’t find any evidence backing up the sordid claims, and local police say there have not been any reports of such crimes — but that hasn’t stopped the inflammatory misinformation* from taking hold.
*I am using the term misinformation here because it does appear that a significant percentage of the people spreading this false claim believed it was true. But as the claim was debunked by multiple sources and the facts became known, what started as misinformation transformed into disinformation, with people continuing to spread the story despite knowing there was no evidence of it actually happening. Misinformation remains the more appropriate term in this instance because it covers the spread of false information regardless of intent. That said, this should not be read as suggesting the event involved only the unwitting spread of falsehoods; it is abundantly clear that many of the people sharing these lies knew exactly what they were doing.
Of course, people believing false things about immigrants is nothing new, nor is the general disregard that much of the country now shows for things like facts and reality. But the problem we’re facing isn’t just that people believe things that aren’t true. That’s a problem, clearly, but it’s not the only one. Misinformation also hijacks our national discourse and sucks up all the oxygen in the room so that important conversations end up getting derailed and set aside. When we spend so much of our time and attention talking about what’s not true, we end up not talking about things that are true — and often very consequential. You’d be hard-pressed to find a more striking example of this than what we saw during Tuesday night’s presidential debate.
During this week’s presidential debate, abortion was the top-searched political issue in 49 of 50 states, according to Google Trends data. That’s not entirely shocking: abortion was already a top election issue before the Supreme Court struck down Roe v. Wade and ended federal protections for a woman’s right to choose. People recognize that abortion rights in this country are on the line, and they’re understandably concerned about who the next president will be and what that will mean for access to abortion. And yet, even though abortion was the number one search in 49 states, it was not the number one issue searched overall. That was immigration. And people weren’t just searching for immigration policy in general; they were searching for immigration in connection with the inflammatory lies about immigrants in Springfield, Ohio, eating people’s pets. That’s Ohio standing alone in red in the graphic below.
As you can see in the animated graphic below, immigration-related searches displaced abortion-related searches from the top spot for most of the debate. In other words, an issue that supposedly affected people in one town in one state, but actually affected no one anywhere, became the top-searched political issue of the night, while abortion-related issues, which affect nearly every woman in every state, roughly half the country (not to mention the impact on men), were bumped down to second place.
This is the consequence of letting misinformation, rather than our priorities, drive the discussion. And this is just one example from one evening of one election season. Think about how many other issues are being displaced by misinformation, and how many conversations are not being had. Instead of talking about the issues affecting the majority of people in this country, we end up being forced to talk about things that don’t actually affect anyone at all. Rather than using election season to talk about very real issues with very real consequences, we’re stuck in a loop of talking about what’s not real, our attention consumed by what’s not true.
Of course, none of this is intended to say that we shouldn’t be talking about misinformation as an issue. Misinformation, particularly about immigrants and other groups perceived as outsiders, has very significant consequences, up to and including discrimination, human rights abuses, violence, and even death. Unfortunately, the residents of Springfield, Ohio, are learning just how real the consequences of fake rumors can be: On Thursday, bomb threats containing “hateful language” towards immigrants and Haitians were reportedly called in to several local government buildings, forcing them to evacuate and close for the day. Not talking about the problem isn’t going to solve it — but letting bad actors hijack our discourse isn’t solving anything, either.
In today’s algorithmically driven information environment, it’s easier than ever for purveyors of disinformation to craft messages that exploit the tendency of recommendation and search algorithms to promote the most extreme, inflammatory, and divisive content available. Once those algorithms start boosting a message, it elicits high levels of engagement, and that engagement signals back to the algorithms that this kind of content is what people want to see, driving still more amplification. The result is a self-perpetuating cycle: divisive content gets picked up and amplified, engagement rates climb, and the algorithms learn to prioritize more of the same. From there, it’s a race to the bottom, as social media users learn from the worst and compete for attention by escalating their rhetoric and producing ever more hostile, hateful, and incendiary content.
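The feedback loop described above can be illustrated with a toy simulation. To be clear, this is a sketch under made-up assumptions: the posts, engagement rates, and the proportional-ranking rule are all illustrative inventions, not real platform data or any actual platform’s algorithm.

```python
# Toy model of the engagement feedback loop described above.
# All parameters are illustrative assumptions, not real platform values.

def simulate(rounds=10, impressions_per_round=1000):
    # Two hypothetical posts: a measured one and an inflammatory one.
    # The inflammatory post converts impressions to engagement at a
    # higher rate -- the asymmetry that fuels the loop.
    posts = {
        "measured": {"engage_rate": 0.02, "engagement": 0.0},
        "inflammatory": {"engage_rate": 0.08, "engagement": 0.0},
    }
    history = []
    for _ in range(rounds):
        # The "algorithm": allocate this round's impressions in
        # proportion to each post's accumulated engagement.
        total = sum(p["engagement"] for p in posts.values())
        for p in posts.values():
            share = (p["engagement"] / total) if total > 0 else 0.5
            shown = impressions_per_round * share
            # Engagement earned this round feeds back into next
            # round's ranking -- the self-perpetuating cycle.
            p["engagement"] += shown * p["engage_rate"]
        history.append({k: round(v["engagement"], 1) for k, v in posts.items()})
    return history

history = simulate()
# After a few rounds, the inflammatory post captures nearly all
# impressions: engagement begets reach, which begets engagement.
print(history[-1])
```

Even though both posts start with equal exposure, the small difference in engagement rate compounds round after round until the inflammatory post dominates the feed, which is the dynamic the paragraph above describes.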
This is the very essence of cognitive warfare, and it’s being used against us with devastating success.
As we have seen over the past several years, one product of this cycle is the normalization of absurd disinformation, conspiracy theories, hate speech, and inflammatory rhetorical styles, including stochastic terrorism. Constant, often unfiltered exposure to this content actually changes our brains, or at least the way our brains work. And that’s exactly what it’s designed to do. Disinformation narratives like this one are crafted to impair your decision-making by steering you away from rational, deliberative information processing and into the peripheral processing route: the one we use when we make reflexive, often unconscious judgments based on emotions, biases, and mental shortcuts (called heuristics). This is achieved by coaxing you to engage with highly divisive topics, then undermining your ability to process incoming information about those topics by tying them to related unconscious biases. It is a type of cognitive attack that hijacks our brains and exploits our cognitive vulnerabilities in order to manipulate our own internal deliberative processes, as well as those of our society.
This narrative demonizing Haitians has all the earmarks of another Russian psychological warfare operation. Do we know who was behind the no-name social media accounts where it originated?