Elon Musk destroyed Twitter verification. Here's how to fix it.
Let's face it: Twitter verification has been broken for a long time. It's time for a new system.
If you’ve been on Twitter the past two days, you’ve likely noticed that the recent changes to Twitter’s verification system are completely dominating The Discourse™. I’m not going to do a deep dive into The Twitter Verification Discourse™, at least not in this article, but suffice it to say, it’s predictably toxic, polarized, and petty.
Instead of wading into the cesspool of internet fights over a now-meaningless blue checkmark, I’m going to propose a solution. I have spent about a year developing and refining a verification system that would address the problems with the current mess of a system, correct the issues with the prior system, and offer new options that would give users the ability to choose among different types and levels of verification — and none of it would cost money. All levels of verification would require basic identity verification, but users could also opt in to verify their professional identity and, if desired, could apply for verification as a specialist/expert in their area(s) of expertise, and, finally, as a “trusted source” who would have a “contact” button in their bio where journalists and others could contact them for interviews, quotes, etc. Users who verify their identity with Twitter but still want to remain publicly anonymous would have an option for doing that, too.

And perhaps most importantly, verification would not be a one-time check. Rather, it would be renewable, just like a driver’s license, so users who want to keep their verification status would have to re-verify their accounts periodically. This would vastly reduce the problem of repurposing and selling verified accounts, which is a surprisingly profitable black market.
Thanks for reading Weaponized! Subscribe for free to receive new posts and support my work.
I will explain all of these categories and processes in more detail below, but first, let’s look at how we got here.
Twitter Verification: From Imperfect to Dumpster Fire
Last year, Twitter started offering Twitter Blue subscribers the chance to get a blue checkmark on their accounts. That decision was controversial, but didn’t cause nearly as much uproar as the decision last week to remove the blue checkmarks of so-called “legacy accounts,” which were verified under Twitter’s old system. By opening up the verification system to so many people without actually doing the work to verify the identity of the account holder, the blue checkmark no longer means what it used to. But the truth is that the meaning of the blue checkmark was never clear in the first place — and in fact, misinterpretations of what it did mean have contributed to the spread of mis- and disinformation for years.
When the verification system was first rolled out in 2009, checkmarks were given to celebrities and public figures as a way to distinguish them from parody accounts and imposters. At the time, the checkmark was understood to be a sign that Twitter had verified that the account was actually operated by the person or organization it claimed to represent. Over the years, Twitter began verifying more accounts, and in 2016, it announced a public application process whereby accounts could be granted verification if they were deemed “of public interest.” This process was later discontinued, but during the time it was open, many journalists at mainstream news outlets applied for and received a blue checkmark. After putting a stop to the public application system, Twitter reverted to a process of deciding on its own which accounts to verify, based on the criteria of being “authentic, notable, and active.”

Then, in 2021, Twitter re-launched its public verification system, allowing people to apply for verification if they fit into one of several categories (government; companies, brands, and organizations; news organizations and journalists; entertainment; sports and gaming; activists and organizers; content creators and influential individuals). The result was that a lot of partisan activist accounts were eligible for verification, which gave them an appearance of credibility that, in many cases, was undue. Some (most?) of these accounts use platform manipulation tactics such as “follow-back parties” and private retweet rooms — all while bearing the mark of a verified account. This gave the appearance that Twitter condoned tactics designed to manipulate its own platform. For whatever reason, that never resulted in any pushback or outcry like we’re seeing this week, even though it should have.
There was also widespread confusion among users about what verification actually meant. For many users, verification was seen as a sign of credibility, even though it was never actually an indicator of that. It was an indicator of identity, but confusingly, the absence of verification wasn’t an indicator that the user did not want to or could not prove their identity — it was merely a sign that they didn’t meet Twitter’s ever-changing criteria for verification.

The process of verification on Twitter was never transparent or standardized, which made it a flawed system. It needed to be reformed. But Elon Musk didn’t reform it; he destroyed it. Although he opened up the verification process to all users, he removed the requirement to verify one’s identity, which is the most fundamental purpose of verification. If nothing else, the blue checkmark should be a way for users to know that the account they’re engaging with is actually who they say they are.
But I think I have designed an even better system that would offer even more options — many of which are designed specifically to address problems like deception and mis/disinformation.
Verification 2.0: Accounting for our online identities
The system of verification that I would implement would allow for several levels of personal identity verification as well as professional identity verification. Currently, these two categories of identity verification are awkwardly muddled together, which doesn’t acknowledge the fact that our identities are collapsed online in a way that they aren’t offline. We are all of our various selves at once when we are online — something that we don’t talk about much, despite the fact that it shapes how we navigate every online space we enter.

When I am online, I am Dr. Caroline Orr Bueno, a behavioral scientist and researcher who studies online deception and hostile social manipulation and has expertise in areas like public health, communications, psychology, and extremism; I’m the Caroline who works at a university but also the Caroline who appears on television and in newspapers sometimes; I’m also the Caroline who used to work full-time as a journalist and still publishes articles as a freelance journalist. And at the same time, I’m Caroline: a mother, wife, daughter, sister, and friend who loves hiking, working out, boating, doing crafts, cooking, and doing anything with my family. We juggle multiple identities, and any system of identity verification needs to account for at least the distinction between our personal and professional identities.
The system I designed has four levels of verification, though I have ideas for expanding it both vertically and horizontally. The backbone of those four levels is outlined below.
The most basic level of verification is personal identity verification — i.e., proving that you are who you claim to be on Twitter. For those who wish to remain anonymous publicly but are willing to verify their identity privately, there would be an option for a different colored checkmark. Additionally, government accounts would receive a different colored checkmark than non-government accounts, and organizations would receive a different color than individuals.
The second level of verification is basic professional identity verification. This would be open to any person or organization who can document their professional status through an employer website or other proof of work/employment. This level of verification would be represented by a separate badge, next to the checkmark. These badges would be color-coded and would group similar professions together, with the specific profession shown when a user hovers over the badge.
The third layer of verification is subject matter expertise. This has some overlap with professional identity, of course, but it allows users to specify the areas in which they specialize. For example, someone may be verified as a researcher or a scientist, but there are huge differences among the sub-specialties of these professions. The purpose of allowing subject matter verification is to reduce the ability of professionals to claim expertise outside of their specialties, which was a significant problem during the COVID-19 pandemic. Not all medical doctors are qualified to speak authoritatively about a novel infectious disease, and a social scientist isn’t likely to be qualified to give expert opinion on biochemistry. Subject matter expertise would be determined through proof of specialty in the form of peer-reviewed articles, books, lectures, expert testimony, and things like licensure or certification, and could only be updated/revised twice per year. The reason for restricting how many times this level of verification can be updated is to prevent people from jumping onto the newest trend or crisis and claiming subject area expertise just for the purpose of clout. As we all know, some people seem to become experts in the hottest issue of the day, every day, and this is meant to stop that practice.
The fourth layer of verification is Trusted Source. Any individual or organization who has been verified at the first three levels would be eligible to apply to become a “Trusted Source.” Those who are deemed “trusted sources” would get a “contact” button in their bio, which journalists and others could use to contact the person for interviews, quotes, etc. These accounts would also have the option of tagging their tweets when they are commenting on issues within their domain of expertise, and these tweets would be featured in a special section on Twitter (like the COVID-19 section that was set up during the pandemic) where users could go to look for expert commentary and analysis. The “contact” button on these accounts would lead to a page with the account’s contact info, as well as an aggregated feed of tweets they have tagged as relevant, and a list of articles that they have been quoted in through Twitter’s “Trusted Source” program. This level of verification would give journalists and others rapid access to credible sources who have already indicated that they are willing to respond to media requests when possible, and would hopefully diversify the pool of experts who are quoted in newspapers and other publications. It would also allow people to read their public statements and assess them for credibility.
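For readers who think in code, the four levels described above could be modeled as a simple data structure. This is purely an illustrative sketch of my proposal, not anything Twitter has built; all names are hypothetical, and finer prerequisites (like requiring identity verification before professional verification) are omitted for brevity. The one rule it enforces is the one stated above: “Trusted Source” requires verification at the first three levels.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Level(Enum):
    """The four proposed verification levels, from most basic to most demanding."""
    PERSONAL_IDENTITY = auto()      # proving you are who you claim to be
    PROFESSIONAL_IDENTITY = auto()  # documented employment or professional status
    SUBJECT_MATTER_EXPERT = auto()  # proven specialty (publications, licensure, etc.)
    TRUSTED_SOURCE = auto()         # opt-in media contact; requires the other three

# The three prerequisites for Trusted Source status.
PREREQUISITES = {Level.PERSONAL_IDENTITY, Level.PROFESSIONAL_IDENTITY,
                 Level.SUBJECT_MATTER_EXPERT}

@dataclass
class VerifiedAccount:
    handle: str
    levels: set = field(default_factory=set)

    def grant(self, level: Level) -> bool:
        """Grant a level; Trusted Source is refused unless the first three are held."""
        if level is Level.TRUSTED_SOURCE and not PREREQUISITES <= self.levels:
            return False
        self.levels.add(level)
        return True
```

Under this sketch, an account that tries to jump straight to Trusted Source is refused until it has earned the three underlying verifications.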
As stated previously, verified accounts would have to renew at least their identity verification periodically in order to stop the black market sales of verified accounts. People who are verified by subject matter expertise or who are verified as “Trusted Sources” could lose that status if they regularly spread non-credible information or if the information they provide to journalists is repeatedly deemed to be false or unproven.
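The renewal requirement works like a license expiry check. As a minimal sketch of that idea: the proposal compares renewal to a driver’s license but doesn’t specify a period, so the 12-month window below is my placeholder assumption, not part of the design.

```python
from datetime import date, timedelta

# Assumed renewal window; the proposal does not specify one, so 12 months
# is a placeholder standing in for a driver's-license-style renewal period.
RENEWAL_WINDOW = timedelta(days=365)

def is_verification_current(last_verified: date, today: date) -> bool:
    """True only if the account re-verified its identity within the window."""
    return today - last_verified <= RENEWAL_WINDOW
```

An account that last re-verified two years ago would lapse under this check, which is what makes a resold verified account lose its checkmark instead of carrying it indefinitely.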
I’ll go into more detail about this system in the future, but for right now, I wanted to offer up this proposal to show that there are ways to fix this system and make it even better than before. I’m not under the illusion that Elon Musk is going to implement this tomorrow, but at least it’s out there, and I hope it will encourage people to start talking more about how to fix our fractured, weaponized online spaces, rather than contributing to the problem by engaging in exactly the type of toxic discourse that helped create this mess.