
Twitter Really Is Worse Than Ever

Under Elon Musk, hate speech has surged and propaganda accounts have thrived.

A year ago, Elon Musk announced that he wanted to buy Twitter to clear it of bots and turn “the de facto public town square” into a place for unfettered free speech. Social media experts worried this would mean the platform would stop moderating what users post, and warned that the consequence of Musk’s stated absolutism would be a platform overrun with violent and hateful content. It turns out they were right.

After he took over the platform, Musk insisted that “Twitter’s strong commitment to content moderation remains absolutely unchanged.” But around the same time, Twitter fired most of its trust and safety staff, the team responsible for keeping content that violates the company’s policies off the platform. 

The result, perhaps unsurprisingly, was that hate speech on Twitter surged “dramatically” in the weeks following the takeover, according to a new study from the University of Southern California’s Information Sciences Institute, Oregon State University, UCLA, and UC Merced, which also found that there had been no decrease in the number of bots on the platform. It is yet another data point in a series of changes that have taken Twitter from being a global public square to a platform where racists, bigots, and propagandists are more empowered than ever.

“A few months ago it was the first place you looked for insight,” says Imran Ahmed, CEO of the Center for Countering Digital Hate (CCDH), a nonprofit that tracks disinformation. “It was always about finding communities of mutual interest and seeing what the most interesting people around the world were saying about things and what the news was. And that is just destroyed.”

Twitter did not respond to a request for comment about its moderation practices since Musk’s takeover or what systems it has in place.

Researchers found that the increase in hateful content began almost immediately after Musk’s takeover as users began to test the boundaries of what would get past Twitter’s new moderation regime.

“The day that [Musk] officially took over the platform, a lot of right-wing figures had started tweeting anti-LGBTQ rhetoric, specifically the term ‘groomer,’” says Kayla Gogarty, research director at Media Matters for America, a media watchdog group, referring to the conspiracy theory that LGBTQ people prey on younger people by “grooming” them. “[These accounts] were basically saying that they were testing the waters” of Twitter’s content moderation, she says.

Twitter’s policies do not allow slurs and tropes that “intend to degrade or reinforce negative or harmful stereotypes about a protected category.”

“There seems to have been a clear indication that people anticipated that Musk would reduce moderation,” says Keith Burghardt, a computer scientist at USC’s Information Sciences Institute and one of the coauthors of the paper. “But it’s clear that hate speech didn’t decline immediately after Elon Musk bought Twitter, suggesting that whatever moderation he did was not enough.”

Even before it reduced the size of its moderation teams, Twitter wasn’t particularly quick to remove hateful content, according to Tal-Or Cohen Montemayor, founder and executive director of CyberWell, a nonprofit that tracks anti-Semitism online in both English and Arabic. 

Data collected by CyberWell found that though only 2 percent of anti-Semitic content on social media platforms in 2022 was violent, 90 percent of that violent content appeared on Twitter. And Cohen Montemayor notes that even the company’s standard moderation systems would likely have struggled under the strain of so much hateful content. “If you’re experiencing surges [of online hate speech] and you have changed nothing in the infrastructure of content moderation, that means you’re leaving more hate speech on the platform,” she says.

Civil society organizations that used to have a direct line to Twitter’s moderation and policy teams have struggled to raise their concerns, says Isedua Oribhabor, business and human rights lead at Access Now. “We've seen failure in those respects of the platform to actually moderate properly and to provide the services in the way that it used to for its users,” she says.

Daniel Hickey, a visiting scholar at USC’s Information Sciences Institute and a coauthor of the paper, says that Twitter’s lack of transparency makes it hard to assess whether there was simply more hate speech on the platform or whether the company made substantive changes to its policies after Musk’s takeover. “It is quite difficult to disentangle often because Twitter is not going to be fully transparent about these types of things,” he says.

That lack of transparency is likely to get worse. Twitter announced in February that it would no longer allow free access to its API, the tool that lets academics and researchers download and interact with the platform’s data. “For researchers who want to get a more extended view of how hate speech is changing, as Elon Musk is leading the company for longer and longer, that is certainly much more difficult now,” says Hickey.

In the months since Musk took over Twitter, public news outlets including National Public Radio and the Canadian Broadcasting Corporation have left the platform after being labeled “state-sponsored,” a designation formerly reserved for Russian, Chinese, and Iranian state media. Yesterday, Musk reportedly threatened to reassign NPR’s Twitter handle.

Meanwhile, actual state-sponsored media appears to be thriving on Twitter. An April report from the Atlantic Council’s Digital Forensic Research Lab found that, after Twitter stopped suppressing these accounts, they gained tens of thousands of new followers. 

In December, accounts that had previously been banned were allowed back on the platform, including right-wing academic Jordan Peterson and prominent misogynist Andrew Tate, who was later arrested in Romania on human trafficking charges. Liz Crokin, a proponent of the QAnon and Pizzagate conspiracy theories, was also reinstated under Musk’s leadership. On March 16, Crokin falsely alleged in a tweet that talk show host Jimmy Kimmel had featured a pedophile symbol in a skit on his show.

Recent changes to Twitter’s verification system, Twitter Blue, which lets users pay for blue check marks and more prominence on the platform, have also contributed to the chaos. In November, a tweet from a fake account pretending to be pharmaceutical giant Eli Lilly announced that insulin was free. The tweet caused the company’s stock to dip almost 5 percent. But Ahmed says the implications of pay-to-play verification are much starker.

“Our analysis showed that Twitter Blue was being weaponized, particularly being taken up by people who were spreading disinformation,” says CCDH’s Ahmed. “Scientists, journalists, they’re finding themselves in an incredibly hostile environment in which their information is not achieving the reach that is enjoyed by bad actors spreading disinformation and hate.”

Despite Twitter’s protestations, says Ahmed, the study validates what many civil society organizations have been saying for months. “Twitter’s strategy in response to all this massive data from different organizations showing that things were getting worse was to gaslight us and say, ‘No, we’ve got data that shows the opposite.’”