A teenager holds a phone whose screen displays the Instagram social media logo.

New research suggests that teenagers on Instagram are still running into unsafe content and unwanted contact, even though the platform rolled out improved safeguards for young users more than a year ago.

An exclusive report shared with TIME suggests that the safety measures Meta, Instagram’s parent company, introduced last year have not succeeded in curbing safety problems for teenagers. The study, which Meta disputed as biased, was conducted by the child advocacy organizations ParentsTogether Action, the Heat Initiative, and Design It for Us. It found that nearly 60% of teens aged 13 to 15 encountered unsafe content and unwanted messages on Instagram over the past six months. Close to 60% of those who received unwanted messages said they believed the senders were adults, and nearly 40% said the sender appeared to want a sexual or romantic relationship.

Shelby Knox, director of online safety campaigns at ParentsTogether, a national parental advocacy group, commented, “The most alarming aspect was the continued extent of contact children have with unassociated adults.” She added, “Parents were assured secure environments. We were assured that adults would not be able to reach our children on Instagram.”

Meta challenged the conclusions drawn by the researchers.

Liza Crenshaw, a Meta spokesperson, told TIME, “This highly subjective report is based on a fundamental misinterpretation of how our teen safety mechanisms operate. Moreover, it disregards the fact that hundreds of millions of teenagers in Teen Accounts are encountering less sensitive material, experiencing fewer unwelcome interactions, and spending less time on Instagram during evening hours.” She continued, “We are dedicated to continually enhancing our tools and engaging in crucial discussions about teen safety—however, this report does not further either objective.”

In September 2024, Meta introduced substantial changes aimed at making the platform safer for younger users. Users under 18 would be automatically enrolled in “Teen Accounts,” which were designed to filter out harmful content and limit messages from people they do not follow or are not connected to. When Teen Accounts launched, the company claimed they “incorporated safeguards for teens, offering parents reassurance.”

The new report, however, indicates otherwise. Researchers found that inappropriate content and unwanted messages remain so prevalent on Instagram that 56% of teenage users said they do not even report them because they are “accustomed to it now.”

Sarah Gardner, CEO of the Heat Initiative, an advocacy group that presses tech companies to make their platforms safer for children, said, “Unless you intend for your children to have round-the-clock access to an R-rated environment, you should not grant them access to Instagram Teens.” She added, “It is unequivocally failing to provide the protections it claims.”

This recent report marks the second in a few weeks to raise questions about the effectiveness of Meta’s child-safety mechanisms. A report from late September, produced by other online-safety advocacy groups and validated by researchers at Northeastern University, concluded that a majority of Instagram’s promised 47 child safety features were defective.

In that study, reported by Reuters, researchers determined that only eight of the 47 features worked as advertised; another nine reduced harm but with limitations, while 30 tools (64%) were either ineffective or had been discontinued, including sensitive-content filters, time-management tools, and features intended to shield children from inappropriate contact.

The researchers in that same study, which Meta also challenged, discovered that adults could still send messages to teenagers who did not follow them, and that Instagram recommended teens follow unknown adults. They also observed that Instagram continued to suggest sexual, violent, self-harm, and body-image content to teens, despite such posts supposedly being blocked by Meta’s sensitive-content filters. Furthermore, they found indications that elementary-school-aged children were not only accessing the platform—despite Meta’s prohibition on users under 13—but that “Instagram’s algorithm, based on recommendations, actively encouraged children under 13 to engage in risky sexualized behaviors” due to the “unsuitable promotion” of sexualized content.

Arturo Bejar, a former senior engineering and product leader at Meta who helped design that study, told TIME that the company’s algorithm rewards suggestive content, even when it comes from children who do not realize what they are doing. “The minors did not start out that way, but the product’s design instructed them to be,” Bejar said. “At that stage, Instagram itself takes on the role of a groomer.”

The day after that report was released, Meta announced that it had already enrolled hundreds of millions of teen users in Instagram Teen Accounts and was expanding the program to teenagers worldwide on Facebook and Messenger. It also announced new partnerships with schools and educators, along with a new online safety curriculum for middle schoolers.

Adam Mosseri, the head of Instagram, said in a blog post about the rollout, “Our aim is for parents to feel confident about their teenagers using social media. We recognize that teens use apps such as Instagram to connect with peers and pursue their interests, and they should be able to do so without concerns about unsafe or unsuitable encounters.” He added, “Teen Accounts are designed to provide parents with reassurance.”