Instagram Unveils a Bully Filter
Posted May 1, 2018 2:22 p.m. EDT
Instagram said Tuesday it was expanding its online anti-bullying initiative, adding a new filter to weed out comments meant to harass or bully members of the popular social media site's community of 800 million users.
The company said it would review accounts that have a large number of comments filtered out. If those accounts violate Instagram’s community guidelines, it will take action, which could include banning them. The new filter will also hide comments attacking a person’s appearance or character, and alert Instagram to repeat offenders.
It is the second step in an initiative announced last year to curb offensive comments and rid Instagram of its most malicious members.
“To be clear, we don’t tolerate bullying on Instagram,” Kevin Systrom, the company’s chief executive and co-founder, told Instagram users in a blog post Tuesday.
The company will also expand policies to guard against the bullying of young public figures who are often the target of hate-filled messages.
“Protecting our youngest community members is crucial to helping them feel comfortable to express who they are and what they care about,” he added.
In a 2017 study conducted by Ditch the Label, an online anti-bullying organization, 71 percent of respondents in the United Kingdom said social media sites did not do enough to combat online bullying. Instagram stood out: 42 percent of more than 10,000 people aged 12 to 20 said they had experienced cyberbullying on the site in the previous 12 months.
In March, model and actress Amber Rose called out cyberbullies for saying her 5-year-old son was gay after she posted videos on Instagram of him opening a gift from singer Taylor Swift.
It is not only children who are targeted. In November, Drew Barrymore was attacked after she posed with a starfish in a photograph to promote a new lipstick.
“It hurt me,” she wrote in a follow-up post, which was liked 484,238 times.
Instagram, like other social media sites including Twitter and YouTube, has become an easy place to shame or offend, something the company acknowledged last year. Systrom addressed it in a blog post then, saying, “Many of you have told us that toxic comments discourage you from enjoying Instagram and expressing yourself freely.”
Instagram is using a machine-learning algorithm to detect offenders. Called DeepText, it was built by Facebook, which owns Instagram, and uses artificial intelligence to review words for context and meaning, much as the human brain determines how words are used. (Facebook is holding its annual F8 developer conference this week.) Initially, Instagram had a team of people review and rate comments, sorting them into different categories: bullying, racism or sexual harassment.
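The triage idea described above can be sketched in miniature. The toy filter below is a hypothetical illustration, not Facebook's actual DeepText, which learns word meaning from context with deep neural networks; this stand-in uses a plain keyword lookup (with made-up category names and word lists) just to show the workflow the article describes: sort a comment into a category such as bullying, then hide it if any category matches.

```python
# Hypothetical comment-triage sketch -- NOT Instagram's DeepText.
# Categories and keyword lists here are invented for illustration;
# a production system would use a trained model, not keyword matching.

CATEGORY_KEYWORDS = {
    "bullying": {"loser", "pathetic", "worthless"},
    "appearance_attack": {"ugly", "hideous"},
}

def categorize(comment):
    """Return the first matching category, or None if the comment is clean."""
    # Normalize: lowercase and strip trailing punctuation from each word.
    words = {w.strip(".,!?").lower() for w in comment.split()}
    for category, keywords in CATEGORY_KEYWORDS.items():
        if words & keywords:
            return category
    return None

def should_hide(comment):
    """A comment is hidden when it falls into any offensive category."""
    return categorize(comment) is not None
```

In the real pipeline, per the article, hidden comments also feed a per-account tally, so accounts whose comments are repeatedly filtered can be reviewed and, if they violate the guidelines, banned.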
“What we are concentrating on is building the tools so people can control their experience on Instagram,” said Karina Newton, head of public policy at Instagram. “Those will improve over time.”
Instagram’s users are expected to follow the site’s guidelines, which include being respectful to other community members and not posting photographs of naked bodies.
The company has also embarked on a “kindness” campaign, hosting events to promote inclusion and diversity.
“It’s been our goal to make it a safe place for self expression and to foster kindness within the community,” Systrom said. “This update is just the next step in our mission to deliver on that promise.”