LA wildfires disinformation reveals limits of fact-checking
Meta is ditching its fact-checking initiative — but was it really working for us anyway?
Meta, formerly known as Facebook, announced last week that it is ending its fact-checking initiative and following in the footsteps of X (formerly Twitter) by implementing a community notes feature in place of professional fact-checkers. Not surprisingly, there has been a great deal of backlash, and many people are alarmed and angry about Meta’s decision. And I completely understand this reaction. But as someone who studies disinformation — and has studied it since well before most current fact-checking programs existed, as well as throughout their implementation, lifecycle, and now their rollback — I am not so confident that the impact will be nearly as significant as many people fear. That’s in large part because fact-checking, especially on Meta, isn’t really working the way we wanted it to in the first place, nor can it keep up with the rapidly evolving pace of disinformation in all of its varied forms — because it was never even designed for that.
Fact-checking is meant to provide a response to misinformation (i.e., inadvertently sharing false information), but it doesn’t really offer an answer to the problem of disinformation, which involves deliberate efforts to deceive, mislead, and/or manipulate. That’s not a knock on fact-checking; we shouldn’t expect it to solve a problem it wasn’t meant to combat. But I think we do need to acknowledge that fact-checking was developed during a previous era of disinformation and hasn’t really been updated for Disinformation 2.0, the current era, which is characterized by much more complex dynamics involving human-computer interaction, artificial intelligence, algorithmically driven social media feeds, and other sociotechnical processes. Fact-checking will always have inherent value and will continue to play an integral role in maintaining information integrity, but relying on it as the cornerstone of our fight against disinformation was never realistic — and we need look no further than the ongoing discourse surrounding the devastating wildfires in Los Angeles to see this exemplified quite clearly.
Like most crisis and disaster events in modern history, the ongoing wildfires in and around L.A. have been accompanied by a deluge of misinformation and disinformation across social media. Some of the contested facts and claims are relatively straightforward and simple to debunk, but most are more complicated because they involve disagreement not only over objective facts, but also over the interpretation of those facts and the context needed to understand them. Consider, for instance, the issue of whether or not the Los Angeles Fire Department (LAFD) and other local first responders had the resources they needed to mount an adequate response to the fires. One of the major talking points to emerge in recent days is the claim that the fire department was under-resourced due to a $17 million budget cut, which is said to have hampered its ability to fight the raging fires.
The first thing to take note of here is that this talking point actually contains multiple claims and assumptions: 1) that the budget was cut by $17 million; 2) that the budget cut directly impacted the operations of the fire department; 3) that the inability to control the wildfires was due to lack of manpower and/or resources, rather than the strength of the fires; and 4) that if the budget had not been cut, the fire department would have been able to respond more effectively, and could have stopped the fires from causing so much damage. Right away, you can see that this can’t be labeled with a simple true/false rating, but let’s see what it looks like when we try to fact-check these claims:
It’s true that the LAFD’s budget was cut by $17 million during initial budget negotiations this year, even as other budget negotiations involving the department were still ongoing. Had this been the department’s final budget for the year, the cut would have represented a little more than 2% of LAFD’s total annual budget.
It’s true that leaders of the LAFD have publicly stated that they were under-resourced, which impacted their ability to respond effectively to the wildfires.
It’s also true that the L.A. City Council approved $53 million in firefighter pay raises and signed off on $58 million for new fire trucks and other equipment. When these figures are added to the equation, the department’s budget actually increased by 7%, rather than decreasing by 2%.
It’s also true that an additional $23 million is being set aside for increased pension and healthcare coverage for firefighters, and another $27 million is being allocated for medevac services. When these figures are included in the calculation, the LAFD is looking at a 9% increase in its budget (a rough tally of all these figures is sketched below).
It’s also true that firefighters and scientists who study fires say that high winds and unusually dry conditions are making the fires spread faster than crews can keep up with. The conditions have been described as “unprecedented,” and local fire officials say “no water system in the world” would have been sufficient given the magnitude of the fires. At the same time, some human factors likely did contribute to the spread of the fires, including aging infrastructure and inadequate vegetation management.
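Since those dollar figures come thick and fast, here is a minimal tally of the numbers cited above. Note that the 2%, 7%, and 9% figures are relative to LAFD’s total annual budget and a baseline year, neither of which is spelled out here, so this sketch only sums the raw dollar amounts rather than reproducing the percentages.

```python
# A rough tally of the LAFD budget figures cited above, in millions of USD.
# The percentages quoted in the text (2%, 7%, 9%) depend on the department's
# total annual budget and baseline year, which are not stated here, so this
# sketch only sums the raw dollar amounts.
initial_cut    = -17  # cut made during initial budget negotiations
pay_raises     = 53   # firefighter pay raises approved by the City Council
equipment      = 58   # new fire trucks and other equipment
pension_health = 23   # increased pension and healthcare coverage
medevac        = 27   # medevac services

net_with_raises = initial_cut + pay_raises + equipment        # -17 + 53 + 58
net_overall     = net_with_raises + pension_health + medevac  # ... + 23 + 27

print(f"Net change (cut + raises + equipment): {net_with_raises:+}M")  # +94M
print(f"Net change (all items):                {net_overall:+}M")      # +144M
```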
So, in other words, yes, they were going to cut the budget, but no, they didn’t actually cut the budget, and yes, local fire departments say they were under-resourced and that this impacted their response capabilities, but they also said that even with every resource at their disposal, they likely still couldn’t have controlled the blazes due to the extreme conditions. Got it?
That’s a lot of nuance, and understanding nuance takes patience. And if I know anything about the Internet in 2025, it’s that the Internet hates nuance, has no patience, and has no interest in making time for it. This is (in part) why false narratives will continue to propagate, even though the facts are available and known to many of those who are spreading misleading claims and jumping to unsupported conclusions. It’s just so much easier for our brains to process simple (yet false) storylines like “They cut the fire department’s budget, so now they don’t have the resources to fight the wildfires” than it is to process complexity. And that’s not even touching on the many other factors that influence how we process information — factors such as our political ideology, emotions, trust in institutions like the media, and cognitive biases.
And this is where the limitations of fact-checking really become apparent. Fact-checking relies on an assumption that people engage with and process information in a generally rational, objective manner, yet research shows that this simply is not the case, particularly when we are talking about viral rumors and weaponized narratives on social media. In this specific example, those who have an axe to grind with the Democratic administration in California have seized on the opportunity to blame the fires on alleged mismanagement by Gov. Gavin Newsom and local city leaders like L.A. Mayor Karen Bass. For these people, the facts that matter are the ones that can be used to make politically charged accusations that align with their preexisting beliefs and biases. Sure, you can try showing them information that disproves or challenges these misleading narratives, but doing so is unlikely to have much of an effect. So why would we expect fact-checking to break through this barrier?
While studies do suggest that fact-checking can help people better identify accurate information and correct inaccurate information, this doesn’t necessarily result in people modifying their beliefs or misconceptions, nor does it reliably motivate people to change their behaviors to align with the facts. Importantly, in some contexts, fact-checking also doesn’t appear to make people less likely to share false information. For example, one study found that, although adding warning labels to misleading social media posts can help people better identify inaccurate content, it had no effect on their intentions to share that content — even when they knew it was labeled as false. Furthermore, even when exposure to fact-checking does reduce misperceptions and false beliefs, it often doesn’t change the behaviors that are based on those beliefs. Exposing people to fact-checks about vaccine safety, for example, can decrease belief in myths about vaccines, but it often doesn’t make people any more willing to get vaccinated, and in some cases may actually reduce their intentions to get vaccinated.
There are, of course, plenty of studies showing that fact-checking can be effective under the right circumstances. But the problem — well, one of the problems — is that the disinformation that tends to go viral and have the most impact and staying power also tends to be the kind of disinformation that is most resistant to fact-checking, especially when the facts are coming from a source outside of a person’s social network.
Furthermore, Meta’s fact-checking initiative was far from ideal. Among other things, it explicitly prevented its partner fact-checking organizations from debunking content from political actors or advertisers, and also directed them to avoid fact-checking anything that was labeled as “opinion.” Members of its global fact-checking initiative have said in the past that they felt “underutilised, uninformed and often ineffective,” and some — like Snopes — parted ways with the platform for that very reason. So while I understand the backlash to Meta’s decision to end the program, I think we also need to remember that it was never a panacea to begin with, and that it was plagued by persistent flaws and shortcomings throughout its existence.
As stated earlier, fact-checking will continue to play an important role in our information environment, regardless of what Meta does or doesn’t do. Just because Meta is giving up on fact-checking doesn’t mean that fact-checking is giving up on its purpose. Even if it doesn’t immediately make people think or act differently, fact-checking can still work to hold politicians and elites accountable for what they say, establish ground truth for historical purposes, and teach people how to verify information for themselves. There will still be plenty of fact-checking initiatives around; they just won’t be affiliated with Meta — but that doesn’t mean you won’t see fact-checks appearing on Meta’s platform. False claims that tend to go viral on Meta also tend to go viral elsewhere, so it’s likely that third-party fact-checkers will still end up debunking much of the viral false content circulating on Meta. Those who are likely to be responsive to fact-checks will be the same people who are also likely to share third-party fact-checks on Meta, while those who were inclined to ignore or dismiss Meta’s fact-checking labels will still ignore efforts to fact-check the content they share on the platform.
In some ways, Meta’s decision is a reflection of where we are in the evolution of disinformation. If we think of disinformation as an iceberg, we have collectively been focused on the visible part sticking out of the water — the content itself, and what is contained within it — while largely ignoring the much more powerful and insidious drivers of disinformation lurking beneath the surface. If ripping off the Band-Aid of fact-checking is what it takes to finally get us to confront the unseen yet hugely influential forces shaping our information environment, then it can’t happen soon enough. Disinformation 2.0 is already here and has been for some time. I think most people know this, but there seems to be a hesitance to really change our approach to confronting disinformation to reflect this new reality, likely because doing so will require us to grapple with the same types of complexities, contradictions, and uncertainties that make us susceptible to believing falsehoods — and resistant to correcting them — in the first place.