Meta is set to use artificial intelligence (AI) to analyze photos and videos for visual cues such as height or bone structure in an effort to identify underage users. The company says the system is not facial recognition; instead, it looks for general visual themes to estimate age, combining those signals with text analysis to increase the number of underage accounts it removes.
The system is already operating in select countries, with a broader rollout planned. Meta's efforts also include analyzing entire profiles for contextual clues, such as birthday celebrations or mentions of school grades, in various formats. The company aims to extend the technology to Instagram Live and Facebook Groups.
The announcement comes after a New Mexico jury ordered Meta to pay $375 million in civil penalties for misleading consumers about platform safety, a verdict that underscored the pressure on tech giants to strengthen child protection measures.
Meta is also expanding the technology that automatically places teens into stricter 'Teen Accounts' on Instagram to the 27 EU countries and Brazil. These accounts add protections such as accepting direct messages only from people the teen follows or is already connected with, hiding harmful comments, and setting accounts to private by default.
These measures reflect Meta's ongoing effort to comply with child safety regulations while maintaining user engagement on platforms like Facebook and Instagram.