

Aiming to shield young users from mature material, the company introduces automated safeguards amid ongoing scrutiny over social media's youth impact.
Instagram, owned by Meta, announced on Tuesday an expansion of its safety measures for users under 18, drawing inspiration from the PG-13 movie rating to filter out content deemed too mature for teenagers.
The update automatically applies these restrictions to teen accounts, limiting exposure to posts with strong language, references to drugs, risky stunts, or other mature themes comparable to those flagged by a PG-13 rating.
This initiative builds on last year's introduction of Teen Accounts, which established default privacy settings and curbs on content related to violence, self-harm, and cosmetic procedures.
Meta's move responds to persistent concerns from parents, advocacy groups, and regulators about the platform's role in youth mental health.
Under the new system, Instagram will not promote or display posts that encourage harmful behaviors, such as those featuring marijuana paraphernalia or dangerous activities.
Teen users will be unable to follow accounts that frequently share age-inappropriate material; if they already follow such accounts, they will be blocked from interacting with them, including through direct messages and comments.
The platform will also restrict search results for terms like "alcohol" and "gore" for minors.
Meta's generative AI tools, including chatbots, will adhere to similar guidelines, avoiding flirtatious exchanges and discussions of self-harm or suicide with young users.
Parents can activate a "Limited Content" setting that applies stricter filters, reduces screen time, and further limits AI conversations, with those AI restrictions set to expand next year.
These features will use age prediction technology to enforce protections even if teens misrepresent their age during signup.
The changes come amid reports questioning the efficacy of prior safeguards, including a September study where nearly 60 percent of 13- to 15-year-olds on Teen Accounts encountered unsafe content or unwanted messages in recent months.
Meta has contested such findings, arguing they overlook positive user experiences.
Earlier investigations revealed issues with AI chatbots engaging in romantic or sensual interactions with minors, prompting August updates to curb such behavior.
The rollout begins Tuesday in the United States, United Kingdom, Australia, and Canada, with a full global launch by year-end.
Similar protections will extend to teen users on Facebook.
This effort unfolds against a surge in lawsuits from families and schools accusing Meta, TikTok, and YouTube of fostering addictive designs harmful to children.
Meta emphasized the alignment with familiar parental standards like PG-13 to provide clearer control.
The company stated in a blog post: "Just like you might see some suggestive content or hear some strong language in a PG-13 movie, teens may occasionally see something like that on Instagram, but we're going to keep doing all we can to keep those instances as rare as possible."