When disinformation becomes 'racialized'

Civil rights groups figure out tactics to fight "racialized disinformation."

February 5, 2022, 8:55 AM

Disinformation is so pervasive, and can be so dangerous, as the world witnessed on Jan. 6 with the attack on the Capitol by many whipped into a frenzy by online disinformation, that advocates and researchers are not only looking for ways to fight it but are also analyzing it, examining what types of disinformation have spread and to whom.

One type, they say, is racially targeted disinformation.

Racialized disinformation -- a term used by some of the experts who spoke with ABC News -- is "intentionally designed to mislead," DeVan Hankerson Madrigal, research manager for the Center for Democracy & Technology, told ABC News.

PHOTO: CEO of Facebook Mark Zuckerberg appears on a monitor as he testifies remotely during the Senate Commerce, Science, and Transportation Committee hearing "Does Section 230's Sweeping Immunity Enable Big Tech Bad Behavior?" on Capitol Hill, Oct. 28, 2020.
Pool via Getty Images

This form of disinformation exploits "existing forms of discrimination, targeting people based on race, gender, and identity," she added.

And it's rooted in racism and stereotypes, advocates say. Extremists have pinpointed Black communities by engaging in what advocates have called digital blackface -- non-Black people creating fake online profiles and pretending to be Black.

Digital blackface is often accompanied by those accounts using vernacular stereotypically associated with Black culture and people, according to Clemson University professor and digital researcher Dr. Darren Linvill, who studies disinformation.

Certain social media hashtags, like #Blaxit, which appeared to be created and promoted by Black people, were found to be "inauthentic or right-wing," said Brandi Collins-Dexter, a disinformation researcher at Harvard Kennedy School's Shorenstein Center on Media, Politics and Public Policy and author of the report "Media Attack: Operation Blaxit."

Why racially targeted online disinformation?

The Russian trolls that spread disinformation during the 2016 election, as outlined in the Mueller report, also specifically targeted Black Americans, according to research on the Russia-based Internet Research Agency by Deen Freelon of the University of North Carolina.

According to Freelon's report, "Black Trolls Matter: Racial and Ideological Asymmetries in Social Media Disinformation," the Russian agency created social media accounts masquerading as Black users during the 2016 election season, using Twitter account names like "Blacktivist" and "BlackMattersUS."

PHOTO: Displays showing social media posts are seen during a House Intelligence Committee hearing in Washington, D.C., Nov. 1, 2017.
Bloomberg via Getty Images

Why would Russians bother to do this? According to Freelon and other researchers on the subject, these accounts tapped into America's racial divisions for geopolitical advantage and sought to suppress Black voter turnout by regularly sharing content designed to provoke outrage, such as videos of police violence.

Racialized disinformation trolls also leveraged the distrust Black Americans may feel for institutions like law enforcement and health care after experiencing medical racism and ongoing discrimination, according to research from First Draft, a global not-for-profit organization that investigates online disinformation.

First Draft said it also found that white anti-vaccine advocates such as physician Simone Gold and Robert F. Kennedy Jr. spread misinformation online targeted at Black people.

In January 2020, a speech by Gold shared over 3,800 times on Facebook referred to the vaccine as an "experimental biological agent," and alleged that the U.S. government uses Black people as test subjects for the vaccine.

In a recent documentary, Kennedy falsely claimed that vaccines are harmful to Black people's health and pointed to the Tuskegee syphilis experiment as a comparison.

Gold and Kennedy did not immediately respond to a request for comment.

Anti-vaccine advocates have also targeted Hispanic communities.

ISD, a London-based think tank, found that out of over 8,000 Spanish-language pieces of content mentioning the World Doctors Alliance, an anti-vaccine group, only 26 were fact-checked.

The more popular anti-vaccine groups including Médicos por la Verdad (Doctors for the Truth) and Coalición Mundial Salud y Vida (Comusav) translated memes, videos and posts from English to Spanish to amplify anti-vaccine narratives to their followings, according to First Draft.

Tackling race-based disinformation

In addition to studying racialized disinformation, there are groups that say they are determined to fight it.

The Disinfo Defense League is made up of over 230 organizational members demanding that social media companies halt what they see as the spread of harmful false information targeted at Black and brown people.

PHOTO: The Facebook and Twitter logos on the screen of an iPhone, March 9, 2021.
DeFodi Images via Getty Images

Launched in June 2020, the Disinfo Defense League created a policy platform calling on Congress and the U.N. to protect digital civil rights and investigate what it deems "racialized disinformation" -- intentionally misleading, racially based online propaganda.

Politicians have also been prodded into action. In April 2021, Senate lawmakers questioned the CEOs of Twitter, Google and Facebook on issues such as misinformation and extremism, while calling for amendments to Section 230, a federal law that offers broad legal immunity to social media sites.

Groups like the Disinfo Defense League are also demanding that Big Tech limit the collection of users' personal information and cease using "discriminatory algorithms."

"Facebook had had the power to limit a lot of harmful narratives that were circulating before the [presidential] election, because they turned on this specific mechanism that limited the spread of election units and disinformation close to the election," said Jaime Longoria, a researcher for the Disinfo Defense League. "It really goes back to their business model."


When asked for comment, Facebook referred ABC News to its general policy about misinformation.

First Draft's Kaylin Dodson also said social platforms must allow journalists more access to their data in order to tackle racialized disinformation.

"It's all about verification. You can find a Twitter account and be like, 'Oh, this Twitter belongs to a doctor,'" Dodson said. "But a lot of times, if you do more digging on that doctor, you may find the red flags there."

In a statement to ABC News, Twitter said: "We regularly take proactive enforcement action on accounts engaged in platform manipulation and spam, which we define as using Twitter to engage in bulk, aggressive, or deceptive activity that misleads others and/or disrupts their experience. This includes accounts that may be involved in the spread of misleading information."

Google sent ABC News its policy on delivering reliable information, along with YouTube’s hate speech and harassment policy.

TikTok did not respond to a request for comment about racially targeted disinformation.