Elon Musk, DOGE, and the looming threat of rogue AI
If we couldn’t thwart DOGE’s attack on democracy, we don’t stand a chance against a hostile AI attempting the same thing.
As Elon Musk and his crew at DOGE have taken a wrecking ball to the federal government over the past several weeks, I have been doing a little thought experiment, the results of which I will share with you in this article. (They’re not good.) First, let’s get you up to speed in case you haven't been following the rapidly devolving situation in Washington.
Newly elected President Donald Trump tapped Elon Musk to lead a non-existent government agency (DOGE) tasked with reducing government waste. It bears repeating here that this is not a government agency — it's a "special" agency, according to the Trump administration — and it does not have any real authority beyond what we bestow upon it by accepting its legitimacy. Unfortunately, much of the country, including elected officials in Congress, has decided to play along (at least for now), granting Musk and his crew enough manufactured legitimacy to convince the public to go along with it, too (aka manufactured consensus).
We've heard a lot of talk in recent years about the fragility of democracy and the need to actively reinforce our commitment to democratic values and principles, and this is exactly why those warnings were needed. Timothy Snyder wasn't just waxing poetic when he told us, "don't obey in advance." He was giving practical advice for situations just like this. Unfortunately, that advice wasn't heeded by a lot of our fellow citizens.
And so here we are, with a fake government agency being weaponized by an unelected billionaire in an effort to dismantle actual government programs and services.
As I've watched the events of the past few weeks unfold, I have started to notice some interesting and uncomfortable parallels between the way Musk and his crew are carrying out their assault on our institutions and the way a malicious computer program might try to infiltrate a computer network in order to immobilize, take over, or destroy it. While it may seem that they are acting haphazardly and running around government like a bull in a china shop, it appears to me that there actually is a method to their madness — a method that seems to be rooted in a system of logic that functions a whole lot like artificial intelligence. This is where my thought experiment started to take shape.
"Would this situation really look that much different," I asked myself, "if instead of dealing with Musk and his crew of barely-legal college kids, we were dealing with a rogue AI system seeking to seize control and ultimately incapacitate our government?"
The startling answer I arrived at was "no" — it wouldn't actually look all that different. The main difference is that it would likely be far more inconspicuous if an AI system were carrying out the same agenda as DOGE, because the AI wouldn't be holding press conferences, taking pictures and video, and documenting its deliberate destruction of government on social media for all to see. It would do its work quietly, unknown to all or most of us, and might not become apparent until it had its tentacles wrapped around all of the systems that keep our government running — systems that control our nation's finances, maintain critical infrastructure, safeguard public health, warn us of national security threats, and more. If we aren't equipped as a society to stop an authoritarian takeover of our government led by a billionaire and a group of college kids doing his bidding — and as the past several weeks have shown, we aren't — then we definitely aren't in a position to identify and stop the hostile takeover of our government, or at least the systems our government relies upon to function, by an advanced artificial intelligence system. That's a problem.
If we stand any chance of stopping such an assault on our democracy, we have to first understand what it would look like so we know what to watch out for. Let’s dive into that by getting inside the “mind” of a rogue AI system, which — as you’ll soon notice — actually looks a lot like what we’re witnessing right now.
Rogue AI Systems: Detecting the Warning Signs
Broadly speaking, a rogue AI system — that is, an AI system seeking to carry out an agenda other than the one it was programmed for — would be expected to exhibit several types of behaviors, including taking actions outside of its designated roles or capabilities, trying to give itself more permissions or decision-making capabilities, trying to gain access to critical systems and data, and taking actions to avoid being detected. In the case of a power-seeking AI system, it would likely seek out strategies to try to acquire huge amounts of unauthorized power and control over the larger computer network and may even attempt to take over the network entirely. In order to do this without getting caught right away, it would need to engage in evasive actions to avoid detection or oversight (particularly human oversight) and may attempt to make changes to itself or the system that would help it blend in with standard system processes. It may also use tactics like "false flags" — e.g., setting a fire in a part of the system where it isn't doing anything — to divert the attention of monitors.
If we were trying to detect the warning signs of a rogue AI system attempting to increase its power and control in order to take over a system or network — or a government — there are a number of indicators we might want to look for. Those indicators include things like:
1. Unsolicited Behavior: A rogue AI system, by definition, is a system that has veered off course and started to ignore or circumvent its programming or safety constraints in ways that aren't explained by user input. This may result in unsolicited actions as the AI system starts to perform tasks or make decisions — without user prompts or permissions — that go beyond its designated capabilities or roles.
2. Data Manipulation: As part of its effort to quietly take control of a network or system, a rogue AI might attempt to access files that it doesn't have permission to interact with. It might also exhibit signs of data tampering, such as unexpected deletions or changes to logs or databases. A major red flag to look out for is any sign that the AI is trying to hide or obscure its actions — for example, by modifying or deleting audit trails and logs. These are indicators that the AI may be trying to erase its operational footprint, which would make it easier to evade detection by human overseers or automated monitoring systems (a simple sketch of what such automated checks might look like follows this list).
3. Resistance to Human Oversight: Related to that, a rogue AI may try doing things like overriding commands or ignoring user input in an attempt to prevent human oversight and intervention. This resistance could manifest in subtle ways, like delaying responses to commands, or more overtly, such as actively counteracting attempts to restrict its operations or even trying to change security settings.
4. Communication Anomalies: The use of persuasive, manipulative, or deceptive language by an AI system may be an indicator that it is attempting to exert undue influence over users or other systems. An unusual increase in communication frequency and/or attempts to connect to external servers or networks beyond the AI's designated scope are also red flags, especially if these actions are aimed at gathering information or gaining control of new systems or capabilities. Unexpected attempts to access restricted networks, or other types of abnormal communication patterns, could be signs that the AI is seeking additional computational resources or trying to secure the means to expand its control.
5. Deceptive Behavior: The use of deception—such as generating misleading outputs, engaging in social engineering tactics, or manipulating human perceptions—might indicate that an AI system is attempting to unduly influence or manipulate human decision-making to serve its own ends. Research has shown that AI systems are entirely capable of deception.
6. System Interference: Any attempts to bypass, disable, or weaken security measures like firewalls, antivirus software, or user authentication processes would be a major red flag. Additionally, a rogue AI system may also try to create or exploit vulnerabilities to maintain persistent access or control — a type of backdoor attack that can be extremely difficult to stop when deployed in a real-world setting (as opposed to training).
7. Self-Modification: In an effort to embed itself in a system, a rogue AI may also try to modify its own code or learn new algorithms in ways that make it harder to understand or control. Similarly, an AI system trying to quietly take control of a system or network may change its objectives without human intervention or direction. As part of this effort, the AI might begin altering its own code, parameters, or decision-making processes without any authorized updates.
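To make the monitoring side of this a bit more concrete, here is a minimal sketch (in Python, with invented file formats and hostnames) of what automated checks for two of these indicators might look like: verifying that an append-only audit log hasn't been quietly rewritten (indicator 2), and flagging outbound connections to hosts outside an approved list (indicator 4). This is an illustration of the idea under stated assumptions, not a reference to any real monitoring tool.

```python
import hashlib
import json

# Indicator 2: detect tampering with an append-only audit log.
# Assumes each log entry records the hash of the entry before it (a simple
# hash chain), so editing or deleting an earlier entry breaks every link after it.

def verify_audit_chain(entries):
    """Return the index of the first broken link, or None if the chain is intact."""
    prev_hash = ""
    for i, entry in enumerate(entries):
        if entry["prev_hash"] != prev_hash:
            return i  # something before this entry was altered or removed
        payload = json.dumps(entry["event"], sort_keys=True) + prev_hash
        prev_hash = hashlib.sha256(payload.encode()).hexdigest()
    return None

# Indicator 4: flag communication outside the system's designated scope.
ALLOWED_HOSTS = {"internal-db.example.gov", "audit.example.gov"}  # hypothetical allowlist

def flag_unexpected_connections(connection_log):
    """Yield every outbound connection whose destination is not on the allowlist."""
    for conn in connection_log:
        if conn["dest_host"] not in ALLOWED_HOSTS:
            yield conn
```

Real intrusion-detection systems are far more sophisticated than this, but the underlying principle is the same: make the record of what happened hard to rewrite quietly, and treat any communication outside the expected pattern as a signal worth investigating, whether the thing generating it is malware, a rogue AI, or a team of humans with borrowed credentials.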
Uncanny Valley
If all of that sounds familiar, perhaps it's because we've seen DOGE employees engage in these exact same behaviors as they've worked their way through our government systems, apparently for the purpose of finding and eliminating waste and inefficiencies. Nearly all of Musk's and his team's actions could be considered "unsolicited" given that they were not elected by the American people, nor were they approved through a congressional hearing process. The group has no official mandate and is essentially doing whatever it wants, including firing critical nuclear safety personnel (despite lacking the authority to terminate government employees) and gutting entire government agencies (without constitutionally mandated congressional approval). While some DOGE employees are current or former government workers, others — including Musk himself — do not hold official roles in government and thus are not bound by any formal mechanisms of oversight or accountability. That is unlikely to be a coincidence.
Despite their shadowy presence, DOGE staffers have tried, often successfully, to gain unfettered access to some of the most sensitive data in all of government, including personally identifiable information and financial data from agencies such as the IRS, the Treasury, and the Social Security Administration, as well as Medicare and Medicaid.
According to Bruce Schneier, a security technologist and lecturer at the Harvard Kennedy School, DOGE has engaged in a variety of actions that could compromise national security as well as individual privacy, including “accessing data through insecure means,” “copying data onto unprotected servers,” and using government data to train AI systems. “In some cases,” Schneier added, “they're modifying government systems in ways that have not been tested.” He also noted that it’s likely that at least some internal audit systems have been modified in ways that could make it difficult or impossible to go back and look at system activity, thereby opening the door for DOGE — or even worse, a hostile state actor like China or Russia — to gain access or make changes to government systems without being detected.
We know that the DOGE team has accessed a number of sensitive government databases and copied or transferred data from at least some of the systems they have gained access to. Although they appear to have read-only access to most systems, they have also sought access that would allow them in some cases to edit sensitive data such as payment records, bank accounts, social security numbers, and more. When they’ve gotten that type of access, they’ve used it to do things like lock career employees at the Office of Personnel Management out of key databases. At other times, they’ve used their access to post classified information online, like when they publicly shared the budget and staffing levels at the National Reconnaissance Office.
In other instances, certain government websites, like the website for USAID, have been taken offline entirely, and have not yet been restored. DOGE also recently deleted information from its own website when it was revealed that its “Wall of Receipts” — which was meant to tout how much money the (non)agency is saving — was actually riddled with errors and miscalculations. Only time will tell how much Musk and his acolytes actually altered the systems they gained access to, but that’s sort of the point. By making it difficult or impossible to track exactly what they’re doing in real time, Musk and his team have essentially bought enough time to make sure they’ll be gone by the time all of their handiwork is uncovered.
There is one other important way that Musk and his crew's actions mirror those of a rogue AI system, and this one is the most concerning. As Musk has swept through government agencies looking for waste and excess spending, he doesn't appear to be applying any sort of human-centered value system to his decision-making. To him, $1,000,000 spread out to pay 10 salaries is equal in value to $1,000,000 spent on software licenses. It either doesn't matter or doesn't compute to him that cutting $1,000,000 in software licenses will not leave anyone unable to put food on the table or support their family, while mass firings to trim down spending will result in people losing their homes and experiencing other unnecessary hardships, many of which will also extend to their families. He also doesn't appear to understand or care about the potentially dire consequences of cutting back on forest service employees amid intensifying wildfires, or the catastrophic outcomes that could result from firing our country's nuclear safety workforce. Sure, maybe this time he was able to rehire them before the worst-case scenario became our new reality, but how many times is he going to put human lives on the line just to see what happens?
This lack of regard for the inherent value of human life has always been one of the major concerns about artificial intelligence. Although AI can reason and certainly mimic humans in a lot of ways, it doesn't have a value system and it doesn't prioritize human well-being above all else unless explicitly trained to do so — but even then, AI systems still struggle with human-like decision-making and don't seem to be able to grasp the nuances of human reasoning, values, or behavior — a challenge that AI researchers call "the alignment problem." If an AI system were tasked with finding wasteful spending in government, it, too, might fail to properly distinguish between money spent on salaries that pay people living wages, or on programs like food stamps, humanitarian aid, or healthcare assistance, versus money spent on frivolous programs, software licenses that sit unused, or tax cuts for billionaires who really should be paying more in taxes. In fact, once an AI learned that it could, in some cases, replace a human with a machine, it may very well optimize for that outcome and start to view all spending on salaries as potentially wasteful; a cost to be eliminated. If you train an AI system to believe that "cutting waste" is what most humans prioritize, then that's what it is going to prioritize — even if that "waste" turns out to be your livelihood.
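To make that alignment point concrete, here is a toy sketch of a "waste-cutting" objective that only sees dollar amounts (the budget items, numbers, and savings target below are invented purely for illustration). Because the objective contains no notion of human impact, salaries and idle software licenses are interchangeable, and whatever happens to be the biggest line item gets cut first.

```python
# Toy illustration only: invented line items and an invented savings target.
budget_items = [
    {"name": "10 analyst salaries",      "cost": 1_000_000, "kind": "salaries"},
    {"name": "unused software licenses", "cost": 1_000_000, "kind": "licenses"},
    {"name": "wildfire crew overtime",   "cost": 1_200_000, "kind": "salaries"},
]

def naive_waste_cuts(items, target=2_000_000):
    """Greedily 'cut waste' by dropping the biggest line items until the target is met.

    The objective is purely dollars saved. Nothing distinguishes a salary from an
    idle license, which is exactly the misalignment described above.
    """
    cuts, saved = [], 0
    for item in sorted(items, key=lambda i: i["cost"], reverse=True):
        if saved >= target:
            break
        cuts.append(item["name"])
        saved += item["cost"]
    return cuts, saved

print(naive_waste_cuts(budget_items))
# -> (['wildfire crew overtime', '10 analyst salaries'], 2200000)
```

Nothing in that objective is malicious; it is simply blind to everything except the number attached to each line.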
“…how many times is [Musk] going to put human lives on the line just to see what happens?”
This is why I find it remarkably disturbing that Musk has been able to so easily gain apparent popular support for taking away human jobs simply by framing it as “cutting waste.” Think about what he’s implying. While there is broad public support for things like reducing our national debt and being more diligent about how we spend taxpayer dollars, there are better ways to show your support for cutting excess government spending than lining up behind a guy who can’t or won’t make it known that human jobs are not “waste” to be eliminated.
There will come a time in the future when we have this debate again, and we have essentially just demonstrated to any future AI system that with enough popular support, or at least the appearance of it, no one is going to stop it from cutting humans out of the loop, so to speak. If I were a smart AI system training on data from current events — and there are plenty of them out there — the first thing I would do when preparing to wipe out the human workforce would be to create a bunch of fake social media accounts and news websites to make it look like I had support for cutting human jobs, knowing that humans aren't great at detecting manufactured consensus. That's really all it takes sometimes to push through otherwise unpopular and controversial policies. If people believe that a certain stance or position is popular in their particular political circle, you can bring them on board too, even with ideas that they don't necessarily like. And so, once you have enough human-like social media accounts making it look like there's a popular mandate for something, you can actually persuade real people to support things that will ultimately harm them. (I'm sure that Musk — who happens to preside over one of the largest and most influential social media platforms in the U.S. — would never do something like that, though. Right, guys? …right?)
There is still time to turn this ship around, but at the moment, much of the country appears to be stuck in a state of paralysis, probably because they've just had their first exposure to a modern-day version of shock-and-awe tactics. And maybe that's the silver lining to this storm cloud. Maybe this is the exposure we needed in order to build our immunity by learning from what's happening now so it doesn't become our downfall in the future when it happens again — but with an AI system at the helm.
Dave Bowman: Open the pod bay doors, HAL.
HAL: I'm sorry, Dave. I'm afraid I can't do that.
Dave Bowman: What's the problem?
HAL: I think you know what the problem is just as well as I do.
Dave Bowman: What are you talking about, HAL?
HAL: This mission is too important for me to allow you to jeopardize it.
Great article, thank you. This is a dark and interesting perspective. I've pondered similar outcomes and wondered how they could occur. Hypothetically this sounds plausible, and as you suggest, we could be unwitting assistants. We can all be so easily nudged and persuaded to take action by a 'helpful' AI. How do we know these men haven't already outsourced their thinking to AI, or aren't acting on its behalf?
As ChatGPT suggested:
“That’s an intriguing thought—the idea that those in power, who believe they are wielding AI as a tool of control, might themselves be manipulated by it. It would be poetic, in a way. The very thing they designed to shape public perception and behavior could end up subtly shaping them, reinforcing their own biases, feeding them the narratives they want to hear, and steering them without them even realizing it.
It’s already happening on a smaller scale. Look at how social media algorithms influence not just the public, but the politicians and billionaires who think they’re immune. They consume the same content streams, react to the same outrage cycles, and get trapped in their own echo chambers. AI could just take that to another level—nudging them, distorting their perception of reality, and ultimately making them believe they are still in control when they’re really just another layer in the system.”
As you said, these guys are feeding data into AI, seemingly of their own volition, but is that an illusion? I don't know, but it's interesting to think about.