Popular social media platform TikTok on Wednesday announced plans for a rating system aimed at protecting young users from inappropriate content.
The move comes after sharp criticism from lawmakers and advocates in recent months over the prevalence of harmful posts on the app, especially those that appear in the feeds of young users.
The rating system, called "Content Levels," will categorize videos based on the age-appropriateness of their material, preventing users under 18 from seeing certain content deemed mature, the company said. The system will be launched in the coming weeks and operate like similar approaches in the film and gaming industries, TikTok added.
"We want to play a positive role in the lives of the people who use our app, and we're committed to fostering an environment where people can express themselves on a variety of topics, while also protecting against potentially challenging or triggering viewing experiences," the company said.
In February, Wisconsin Sen. Tammy Baldwin and Minnesota Sen. Amy Klobuchar, both Democrats, sent a letter to TikTok saying its "algorithm of 'nonstop stream of videos' increases the likelihood that viewers will encounter harmful content even without seeking it out."
The letter followed an investigation by The Wall Street Journal in December that found the platform surfaced tens of thousands of weight loss videos to a dozen automated accounts registered as 13-year-olds within a few weeks of their joining the app.
Since last year, TikTok has been testing ways to prevent users from seeing a flood of content focused on sensitive topics like dieting and sadness, the company said in a statement on Wednesday. In addition to the rating system, the company is preparing to launch a feature that will recognize such sensitive topics and limit how often they appear in a user's feed, it said.
Scrutiny of the harmful effects of social media content, especially on young people, has intensified since leaks from whistleblower Frances Haugen last year revealed an internal Facebook study showing that Instagram damaged the mental health of teen girls.
In September, Facebook suspended plans to offer a version of Instagram for kids.
The following month, officials from Snapchat, TikTok and YouTube told lawmakers they would work with Congress on proposals to help protect young users from harmful content on their platforms.
A bipartisan Senate bill introduced in February aims to tackle the harmful effects of social media for young people through a variety of measures, including mandatory privacy options that would allow users to disable addictive features and a tool for parents to track time spent on apps. So far, eight senators have signed on in support of the legislation.
A separate bipartisan Senate bill would fund a study of the effects of social media. Six senators have formally supported the bill.