REVEALED: How the Trump admin is covertly reconfiguring online algorithms
For the first time, here’s a look at the administration’s playbook for reverse algorithmic capture.
Most Americans think censorship is about banning books, deleting tweets, or deplatforming dissidents. And depending on who and where you are, that can indeed be its primary form. But in 2025 in America, the most widespread and insidious forms of censorship aren’t about takedowns or suppression of speech—rather, they’re about tuning the algorithm. Quietly, subtly, and almost always with plausible deniability, the Trump administration appears to be executing a playbook of what can best be described as reverse algorithmic capture: a process through which government pressure reshapes the architecture of digital platforms, not by direct control, but by engineering incentives that guide algorithms to amplify preferred narratives and suppress dissent. Put differently, it’s the state using soft power to realign platforms’ invisible gates toward ideological conformity—while never giving up that element of distance and plausible deniability.
This strategy does not need to issue censorship orders or serve warrants to Silicon Valley tycoons. It works by shifting the terrain on which digital gatekeeping occurs. In July 2025, Trump signed an executive order titled Preventing Woke AI in the Federal Government. It requires that all artificial intelligence systems used by federal agencies be scrubbed of “ideological bias,” including references to diversity, equity, and inclusion (DEI), critical race theory, and other progressive frameworks. Although the order ostensibly applies only to federally procured AI systems, its ripple effects stretch much farther. Any company hoping to win government contracts must now conform to these ideological guardrails—not just in internal memos and tools, but in the public-facing models they train and sell. That includes the algorithms that shape search results, feed curation, ad targeting, and content moderation on platforms like YouTube, Meta, and X. Essentially, these decisions determine what information and content—as well as which content creators—you see or don’t see, and they have a lasting effect by defining the parameters of acceptable discourse and content.
Furthermore, the executive order specifies that AI systems must be “ideologically neutral,” which … isn’t a thing. Ideology, by definition, is not neutral. Opposing ideologies are simply different viewpoints on how to run a state. It’s not bad to have an ideology, and neither left nor right is inherently good or bad, nor correct or incorrect all the time. The fact is that if an ideology exists at all, then there is no neutrality. Moreover, there are many instances where neutrality is not the desired state. When there is a clear right and wrong, or an indisputable true and false position, we don’t want artificial intelligence systems that can’t or won’t tell us which is which. We don’t want systems so afraid of appearing ideologically biased that they refuse to tell us when something is true or false. At least, those of us who don’t want to live in an authoritarian country don’t want systems like that.
[T]he rules of knowledge distribution are [being] rewritten. Algorithms, once claimed to be neutral tools, have become the unseen curators of reality.
As all of this was going on, YouTube was caught quietly relaxing its moderation guidelines, according to leaks verified by independent tech reporters. Internal training documents reportedly instructed moderators to allow more “public interest” content on sensitive topics—such as vaccine skepticism, climate change denial, election fraud claims, trans rights, and abortion rights—even if it contained potential misinformation. While YouTube publicly framed the move as a balance between safety and speech, the timing aligned closely with intensifying political pressure from Trump allies, who have repeatedly accused platforms of suppressing conservative viewpoints. The change also represented a significant shift in company policy, at least at that point in time. The result is an environment where not just controversial but harmful content flows more freely and reaches more people, while platforms shield themselves behind a veneer of neutrality even as they host the very content others are getting in trouble for. In the earlier days of content moderation, something similar happened: conservatives complained so loudly about Facebook supposedly censoring them that Facebook ended up amplifying conservative voices and right-wing news networks, even though it had never been censoring them in the first place.
By recasting content moderation as elitist suppression and partisan censorship, and positioning AI realignment as a defense of “American values,” Trump’s team has flipped the script.
The brilliance—and danger—of reverse algorithmic capture lies in its stealth. Traditional regulatory capture occurs when corporations co-opt public agencies to serve their interests. Here, it is the state applying strategic pressure to induce platforms to self-regulate in alignment with political ideology. There’s no need for a direct order to remove a video or suppress a post. Instead, platforms interpret policy signals and reprogram their systems accordingly. Content moderation thresholds are recalibrated. Ranking systems are tweaked. Engagement scores are adjusted to reward compliance. The government never has to touch the code—but the algorithm still moves, and those in charge of programming it know what their assignment is.
This process is not simply about content—it’s also about epistemology. When search results are reordered or restructured entirely, when dissident voices are algorithmically demoted, when “public trust” becomes a metric for visibility, the rules of knowledge distribution are rewritten. Algorithms, once claimed to be neutral tools, become the unseen curators of reality. The average user will not see what changed—they will only know that some stories now feel louder, others strangely absent. One can’t know what one isn’t seeing.
Similarly, because of the lack of transparency surrounding algorithms and their output, very few people will ever see exactly why they are served certain content and not other content, and it will therefore be nearly impossible to prove that any specific change stems from a policy pushed by the Trump administration. Individuals also cannot see why some of their posts got more traction than others, leaving them at an unfair disadvantage: they have no way to conduct an evidence-based evaluation of how their content performs, or of how various tweaks influence engagement metrics.
Crucially, this also enables powerful actors to maintain plausible deniability. The administration can say—truthfully, at least in the most literal sense, though less so once we break down what we actually mean—that it hasn’t censored anyone. The platforms can claim they’ve made internal adjustments based on evolving policy needs or commercial incentives. But the outcome is the same: a digital public sphere where ideology is upstream from visibility, and truth is filtered through a politicized machine. And with no transparency, there is no opportunity for accountability—and few incentives for improvement.
In the name of fighting censorship, the Trump administration is shaping digital ecosystems to favor its preferred narratives while chilling dissent.
We’re already seeing the effects of this. TikTok has faced increasing scrutiny for its handling of political content, but under the current climate, enforcement has softened for some narratives while intensifying for others. Truth Social, Trump’s own platform, has quietly enforced shadow moderation practices that disproportionately restrict anti-Trump posts—despite branding itself as a free speech alternative. Meanwhile, AI companies adapting to federal compliance language are tweaking their foundational models to avoid generating answers deemed “biased”—a standard increasingly defined by political loyalty rather than epistemic integrity.
The consequences of this new phenomenon — reverse algorithmic capture — are profound. In the name of fighting censorship, the Trump administration is shaping digital ecosystems to favor its preferred narratives while chilling dissent. This is not the overt authoritarianism of content bans (though that happens here, too); it is a form of soft epistemic authoritarianism—where the gates remain open, but the path is paved for those who align, and obscured for those who do not.
This strategy is especially potent because it rebrands algorithmic power as populist reform. By recasting content moderation as elitist suppression, and positioning AI realignment as a defense of “American values,” Trump’s team has flipped the script. Platforms that resist may face regulatory scrutiny or contract exclusion. Those that comply enjoy market access and political cover. In essence, we are witnessing the emergence of a privatized information infrastructure that serves public ideological aims without invoking the First Amendment—or admitting the state’s hand.
When governments can shape what you see without saying what they’ve done, democracy becomes a puppet show lit by a carefully engineered spotlight.
We must be clear about what this is. Reverse algorithmic capture is not a policy debate about platform bias. It is a strategy designed and rolled out by the Trump administration to realign the rules of information visibility in favor of those in power, using regulatory levers to reprogram epistemic systems. If the last decade was about identifying how platforms moderate speech, the next will be about exposing how governments reshape the systems that do the moderating.
This strategy is difficult to detect and harder to challenge. But it can be resisted—through transparency mandates, third-party audits of algorithmic impact, whistleblower protections, and legal frameworks that treat visibility filtering as a public interest issue. And, most of all, through informed citizens who read articles like this one.
Because when governments can shape what you see without saying what they’ve done, democracy becomes a puppet show lit by a carefully engineered spotlight.
And if the algorithm is the gatekeeper now, then who holds the keys?