National News

Five ways hate speech spreads online

By CNN Staff
(CNN) — Hate groups that support neo-Nazism, racism, sexism or homophobia use virtually the same tools as any mainstream advertiser to spread their insidious messages online.

What's worse, these groups are growing in numbers. For the first time in eight years, hate groups were found in all 50 states, according to a report released this year by the Southern Poverty Law Center. The report also warns that the number of US hate groups has increased 20% since 2014.

Hate groups have transformed parts of the internet into a propaganda battlefield, with potentially deadly consequences. For example, 2015's Charleston massacre, the so-called "Pizzagate" fake conspiracy story in 2016 and last year's violent, racially charged rally and protest in Charlottesville, Virginia, were all linked to online activities.

Here are five basic ways hate groups use the internet to get their messages out and stir up violence.

1. Social networking

Mainstream social networking outlets such as Twitter and Facebook have struggled with how to handle hate groups on their platforms. These sites often find themselves trying to balance the right to share and debate ideas with the responsibility to protect society against potential attacks.

After Charlottesville, Facebook CEO Mark Zuckerberg said Facebook was already taking down "any post that promotes or celebrates hate crimes or acts of terrorism." In addition to removing posts connected to specific groups and events, Facebook said it was paying closer attention to its content in the wake of the Virginia attacks.

Facebook said it has its own internal guidelines about what constitutes a hate group.

Simply being white supremacists or identifying as "alt-right" doesn't necessarily qualify. A person or group must threaten violence, declare it has a violent mission or actually take part in acts of violence.

Twitter also reacted following Charlottesville. Last October in an internal email, Twitter CEO Jack Dorsey detailed more aggressive policies, including treating hateful imagery and hate symbols on Twitter as "sensitive media."

As with adult content and graphic violence, such material will be blurred, and users will need to manually opt in to view it. But Twitter didn't detail what it considers to be a hate symbol.

In late 2016, people used web platforms including the bulletin board sites Reddit and 4chan to spread the "Pizzagate" false conspiracy theory, which claimed that top Democrats including Hillary Clinton ran a child sex operation out of a D.C. pizzeria. The false story spread to other platforms and websites, and pizzeria owner James Alefantis received death threats. Eventually, a man with an assault rifle showed up at the pizzeria and fired a shot before he was apprehended. No one was hurt.

Reddit cracked down, banning the r/pizzagate subreddit from its site. "We are very clear in our site terms of service that witch hunts and vigilantism can hurt innocent people," a spokeswoman said in a statement sent to CNNMoney in 2016.

Although tech companies feel increasing pressure to police speech on their platforms, if they over-correct or ban speech too broadly, they risk losing customers, David Snyder, executive director of the First Amendment Coalition, told CNNMoney.

2. Video platforms

Countless messages of hate are posted worldwide on various video platforms. In a statement posted last June, YouTube and its owner Google promised to do more to identify and remove hateful content.

"... (W)e will be taking a tougher stance on videos that do not clearly violate our policies --- for example, videos that contain inflammatory religious or supremacist content. In future these will appear behind an interstitial warning and they will not be monetized, recommended or eligible for comments or user endorsements."

However, last month, a CNN investigation found that ads from more than 300 companies and organizations -- including tech giants, major retailers, newspapers and government agencies -- ran on YouTube channels promoting white nationalists, Nazis, pedophilia, conspiracy theories and North Korean propaganda.

YouTube responded to CNN's investigation with a written statement: "When we find that ads mistakenly ran against content that doesn't comply with our policies, we immediately remove those ads. We know that even when videos meet our advertiser friendly guidelines, not all videos will be appropriate for all brands. But we are committed to working with our advertisers and getting this right."

3. Online funding

Without the convenience of using the internet to raise funds, many hate groups would be crippled.

The Southern Poverty Law Center, a nonprofit that monitors hate groups in the US, said organizers, speakers and individuals attending last year's Charlottesville rally used PayPal to move money ahead of the event.

PayPal said in a blog post last August it works to make sure its services aren't used to accept payments or donations that promote hate, violence or racial intolerance. That includes groups that encourage racist views, such as the KKK and white-supremacist organizations.

"If we become aware of a website or organization using our services that may violate our policies, our highly trained team of experts addresses each case individually and carefully evaluates the website itself, any associated organizations, and their adherence to our policy," PayPal said in the blog post.

Popular crowdfunding site GoFundMe also took a stand against hate speech following Charlottesville. The platform shut down multiple campaigns to raise money for James Fields, the man accused of driving his car into a crowd at the rally, killing one woman and injuring dozens more.

"White nationalists and neo-Nazis cannot use GoFundMe to promote hatred, racism, or intolerance, and if a campaign violates GoFundMe's terms of service, we'll remove it from the platform," a spokesman told CNN Tech last August.

The company said those campaigns did not raise any money and were immediately removed.

Some hate groups get around traditional funding sites by using alt-right-focused fundraising platforms and cryptocurrency. Cryptocurrency is a relatively new kind of currency for the digital era. It works across international borders and doesn't need to be backed by banks or governments.

4. Websites/Webhosting

Websites are a basic piece of the hate propaganda machine. But Charlottesville may have made it more difficult for white supremacist and neo-Nazi websites to remain online.

Webhosting companies GoDaddy and Google Domains gave the white supremacist site The Daily Stormer the boot after it published a derogatory story about the victim killed in the Charlottesville protest.

Google said in a statement it canceled The Daily Stormer's registration for violating its terms of service.

But getting kicked off a web host merely forces a site to go somewhere else -- ultimately prompting a game of internet-domain whack-a-mole. It also raises issues around what domain-hosting companies are responsible for, and where they draw the line on objectionable material.

"Legally, they don't have any responsibility around this, unless it's a federal crime (such as child pornography) or intellectual property," Daphne Keller, the director of intermediary liability at the Stanford Center for Internet and Society, told CNN Tech last August.

5. The dark web

If you've never heard of the dark web, here's what you should know: It's a part of the internet that can't be searched by Google or most common search engines, and it can only be accessed with special software such as the Tor browser.

After being banned by GoDaddy, Google Domains and a Russian hosting outfit, The Daily Stormer was forced onto the dark web, where it couldn't be accessed through standard web browsers. Later the site was able to find a legitimate host and return to the internet.

Hate groups may find less policing on the dark web, but because relatively few people use it, the potential audience there is very small.

Why haven't tech companies done more to combat hate groups online?

One reason: Tech platforms are protected under Section 230 of the Communications Decency Act, a unique US legal provision that broadly shields online companies from liability for content posted by their users. There are exceptions, but the law is meant to preserve freedom of expression, and companies are expected to act in good faith to protect users.
