Instagram using new artificial intelligence to help stop bullying

The social media app will monitor comments that are considered offensive.

December 16, 2019, 8:13 AM

Instagram has a new feature that will use artificial intelligence to help combat bullying.

The photo-sharing social media app announced exclusively on "Good Morning America" that it will begin the roll-out to its more than a billion users on Monday.

PHOTO: In this photo illustration, the Instagram logo is displayed on the screen of an iPhone in front of a TV screen displaying the Instagram logo on Dec. 10, 2019 in Paris. (Chesnot/Getty Images)

How It Works

If a user writes a caption on a photo or video that the AI deems offensive, that user will receive a prompt that alerts them their language could be considered bullying.

The alert gives the user a chance to edit or delete the caption before it is posted.
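To make that flow concrete, here is a minimal sketch in Python of how a pre-post caption check of this kind might work. Instagram has not published its model or interface, so the looks_like_bullying function, the flagged phrases, and the prompt wording below are purely illustrative assumptions, not the company's actual system.

```python
# Illustrative sketch only: Instagram has not published its AI model or API.
# The function names, flagged phrases, and prompts are assumptions meant to
# show the general "warn before posting" flow described above.

def looks_like_bullying(caption: str) -> bool:
    """Stand-in for Instagram's AI model; here, a toy keyword check."""
    flagged_phrases = {"you're so ugly", "nobody likes you", "loser"}
    text = caption.lower()
    return any(phrase in text for phrase in flagged_phrases)

def submit_caption(caption: str) -> str:
    """Warn the user before posting if the caption may be hurtful."""
    if looks_like_bullying(caption):
        print("This caption looks similar to others that have been reported.")
        choice = input("Type 'edit' to rewrite, 'delete' to discard, or 'post' to continue: ")
        if choice == "edit":
            return submit_caption(input("New caption: "))
        if choice == "delete":
            return "Caption discarded."
    return f"Posted: {caption}"

if __name__ == "__main__":
    print(submit_caption("nobody likes you"))
```

The key point the sketch illustrates is that the check happens before publication, giving the writer a moment to reconsider rather than removing content after the fact.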

How It Helps

Best-selling author and Smith College educator Rachel Simmons -- an expert on leadership development who helps girls and women build resilience and find their voices -- told ABC News she has seen firsthand the impact of social media on her students.

"When you are being cyberbullied, it follows you 24 hours a day," Simmons explained. "It's public and everyone can see it. And so that can really make a kid afraid to go to school. It can disrupt their sleep. It can make them feel like there's nowhere they can be where the bullying won't be."

Simmons said Instagram's new tool gives her hope.

"This is a great step in the right direction," she said. "We want to see social media platforms like Instagram stopping bullies before they start."

Instagram Tools and Reporting Bullying

Instagram partnered with suicide prevention programs to create the new product, which is the latest change to the photo and video platform after Instagram opted to begin removing "likes" from posts.

The warning is meant to educate people about what Instagram allows and to limit the reach of bullying; it will also alert users when their account is at risk of breaking the platform's rules and community guidelines.

"We want to foster a positive, diverse community. We remove content that contains credible threats or hate speech, content that targets private individuals to degrade or shame them, personal information meant to blackmail or harass someone, and repeated unwanted messages," Instagram's community guidelines state.

The platform's guidelines also ask users in the Instagram community to file a report "if an account is established with the intent of bullying or harassing another person or if a photo or comment is intended to bully or harass someone."
