Far-right extremists shift online strategies

(The Hill) – Domestic extremists are adapting their online strategies to push disinformation and conspiracies despite a crackdown by social media platforms in the year since the attack by a pro-Trump mob on the Capitol.

Online extremist groups and far-right influencers are using more coded language to slip through gaps in mainstream content moderation enforcement and are still active on alternative platforms that have risen in popularity since the Jan. 6, 2021, riot. 

Experts say efforts to counter domestic extremism must adapt as well, or the spread of disinformation online will pose real-world risks heading into the midterm elections this November and the 2024 presidential election.

“There’s always going to be this synergistic relationship between the content moderation failures of Facebook, Twitter and alt tech platforms like Parler. So we should absolutely expect that going into the 2022 midterms, especially in battleground states where things are extremely polarized, we will see a similar dynamic,” said Candace Rondeaux, director of the Future Frontlines program at the think tank New America. 

What’s even more worrying is what may happen in the lead-up to the 2024 presidential election — where the capital available to candidates, parties and interested stakeholders is larger and platforms can be used to “influence debate to the point where things may get violent,” Rondeaux said. 

Research reports released this week highlighted disinformation narratives among far-right groups on platforms such as Parler, Gab and Telegram. The Department of Homeland Security also warned partners of an uptick in chatter on online extremist pages.

Although fringe platforms brand themselves as separate from mainstream social media, they’re intertwined with the broader scope of internet conversations. Experts warned against dismissing the influence of alternative platforms despite their smaller user bases. 

“A lot of the activity that is happening on those platforms is still reactive to things that are happening on mainstream platforms. So really understanding that dynamic and not treating it as this completely separate and distinct factor when we think about the internet I also think is important,” said Jared Holt, a resident fellow at the Digital Forensic Research Lab (DFRLab). 

New America released a report Wednesday with researchers at Arizona State University analyzing millions of posts on Parler in the lead-up to and directly following last year’s insurrection.

“Although more research is needed, our preliminary analysis suggests that as long as regulatory gaps for social media platforms persist, the United States faces the prospect of a months-long—or worse, years long—rolling crisis as the public backlash against far-right political violence drives extremist cadres deeper into fringe parts of the internet and farther into the dark web,” the report stated. 

New platforms branding themselves as a haven for conservatives who are fed up with the content moderation on mainstream sites have emerged since the riot. 

In July, former Trump campaign aide Jason Miller launched Gettr, which describes itself as “the free speech social media platform which fights cancel culture.” Gettr announced Friday that it had hit 4 million users. 

Trump is planning to launch his own social media network called “TRUTH Social.” Although the scope of the platform is still vague, Trump’s media company announced a deal with Rumble, a YouTube alternative popular with some right-wing audiences, in December. 

A DFRLab report released this week detailed the ways extremist groups have adapted their strategies, including through what Holt described as a “giant game of musical chairs” — with influential far-right figures shifting across an array of fringe sites and bringing their followers along.

While Parler and other alternative platforms cater to far-right audiences and boast about their lack of content moderation, mainstream platforms, including Facebook and Twitter, were also filled with posts amplifying election disinformation and groups organizing ahead of the insurrection.

Since the violent attack last year, big platforms have cracked down on disinformation. But experts say they may not be doing enough. 

Paul Barrett, deputy director of the NYU Stern Center for Business and Human Rights, said the platforms’ tendency to be reactive makes it “much more likely that in 2022 and 2024 we’ll see renewed mayhem online and in the real world.” 

After the insurrection, Facebook took steps to combat content that amplified election disinformation and QAnon conspiracy theories. Facebook banned content with the phrase “Stop the Steal” and banned pages, groups and accounts tied to the QAnon conspiracy theory.

Facebook and Twitter also both cut off former President Trump’s accounts, although Facebook has left open the option of restoring Trump’s account next year. 

“Even after Jan. 6, and even after banning President Trump either indefinitely or permanently, it strikes me that the platforms still are more prone to reacting to what they see as public relations crises than they are inclined to address these very serious problems in a forward-looking and comprehensive way,” Barrett said. 

“There just hasn’t been any kind of concerted, industry-wide effort to say, ‘Look, we are part of the problem. Not because we intend to be part of the problem, but it turns out, inadvertently, we have created tools that are being misused in connection with absolutely vital institutions and processes,’” he added. 

A Facebook spokesperson pushed back on the claim that its actions have been reactive. 

“Facebook has taken extraordinary steps to address harmful content and we’ll continue to do our part,” a spokesperson said in a statement. “We’re continuing to actively monitor threats on our platform and will respond accordingly.”

A Twitter spokesperson said the company has taken “strong enforcement action” against accounts that “incite violence” before and after the insurrection. 

“We recognize that Twitter has an important role to play, and we’re committed to doing our part,” the spokesperson said in a statement. 

Although conspiratorial disinformation is no longer “as prevalent” on Facebook and Twitter, the platforms haven’t “completely solved the misinformation problem,” Holt said. 

Content from “hyperpartisan” media is still performing well on the platforms and more of that content is incorporating the “same ideology and rhetoric from sources that are otherwise banned,” he said. 

Disinformation has also continued to circulate on mainstream platforms through coded language to work around the moderation, said Bret Schafer, head of the Alliance for Securing Democracy’s information manipulation team.

“You’re still seeing a lot of the same sort of narratives there, it’s just a little bit more subtle and certainly the names of groups are not quite as on the nose,” Schafer said. 

Many far-right extremists have also encouraged followers to channel their efforts into more local events, such as fighting COVID-19 restrictions or challenging school curricula, which can be harder to monitor and detect online than nationally focused events, Holt said. 

By targeting local events, they need fewer participants to cause a disturbance, meaning groups and accounts with smaller followings that may fly under the radar still pose risks.

“[If] an extremist group’s goal is not to rally 1,000 people in D.C., but to rally 10 people at a school board, they don’t need a channel or an account with a million followers. They could pull it off with 30 if it’s the 30 right people,” Holt said. 

“We’re not necessarily looking for the big fish anymore,” he added. “Those big fish will continue to remain important, but in this developing situation even these smaller clusters which might not have a particularly large amount of reach could still be capable of organizing.”