Meta Expands AI Crackdown on Underage Facebook and Instagram Users Amid Growing Child Safety Pressure
Meta has begun deploying a new artificial intelligence system capable of estimating a person’s age from photos and videos, in a major expansion of its effort to identify underage users on Facebook and Instagram. The move comes as the social media giant faces increasing global scrutiny over child safety standards, platform accountability, and compliance with international digital regulations.
The company said the new technology analyzes physical characteristics visible in images and videos, including indicators such as height and bone structure, to estimate whether an account may belong to a child younger than 13. Meta stressed that the system is not facial recognition technology and does not attempt to identify who a person is. Instead, the AI is designed only to estimate approximate age ranges based on visual information.
The announcement marks one of the most aggressive steps Meta has taken so far to enforce minimum age requirements across its platforms. The company said the visual analysis system is currently available only in select countries, though wider international expansion is expected over time.
The rollout arrives at a sensitive moment for Meta as governments, regulators, and child safety advocates intensify pressure on large technology platforms to improve protections for minors online.
Meta Introduces Visual AI System to Detect Underage Accounts
Meta said the new AI system works alongside existing tools already used to identify accounts suspected of belonging to children under the age of 13. Until now, much of the company’s detection strategy relied heavily on text-based analysis and behavioral patterns.
The company explained that its systems already scan public activity across posts, profile biographies, comments, and captions for clues that may indicate a user is underage. These clues may include references to school grades, birthdays, or other age-related information shared publicly on the platform.
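Meta has not published how these text scans work, but the general idea can be illustrated with a minimal sketch: searching public text for self-reported age clues. The patterns and age range below are hypothetical; a production system would rely on trained classifiers rather than simple regular expressions.

```python
import re

# Hypothetical patterns for self-reported age clues in public text.
# A real system would use trained language classifiers, not regexes.
AGE_CLUES = [
    re.compile(r"\bi(?:'m| am)\s+(\d{1,2})\b", re.IGNORECASE),               # "I'm 12"
    re.compile(r"\bturning\s+(\d{1,2})\b", re.IGNORECASE),                   # "turning 13"
    re.compile(r"\b(\d{1,2})(?:st|nd|rd|th)\s+birthday\b", re.IGNORECASE),   # "12th birthday"
]

def find_age_clues(text: str) -> list[int]:
    """Return any plausible self-reported ages (1-17) found in the text."""
    ages = []
    for pattern in AGE_CLUES:
        for match in pattern.finditer(text):
            age = int(match.group(1))
            if 1 <= age <= 17:
                ages.append(age)
    return ages

print(find_age_clues("can't wait for my 12th birthday next week!"))  # [12]
```

On its own, a clue like this is weak evidence; as the article notes, such text signals are combined with behavioral and visual signals before any action is taken.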
With the introduction of visual analysis, Meta believes it can significantly improve the accuracy and scale of underage account detection.
According to the company, the AI system can examine uploaded photos and videos to estimate age based on physical appearance signals. Meta emphasized that the technology does not store identity information or perform facial matching. Instead, it uses broader age estimation techniques intended to determine whether an account potentially violates the platform’s minimum age rules.
Any account identified by the system as potentially belonging to a child under 13 is immediately suspended. Users must then complete Meta’s age verification process to regain access. Accounts that fail verification or cannot prove eligibility are permanently removed from the platform.
The company said this layered approach, which combines text analysis, behavioral signals, visual AI tools, and human review teams, is intended to create a stronger safety framework across Facebook and Instagram.
Instagram and Facebook Teen Account Restrictions Expand Globally
Alongside the new AI announcement, Meta also confirmed the expansion of its Teen Accounts system, which automatically applies stricter privacy and messaging protections to users believed to be between the ages of 13 and 17.
The Teen Accounts framework introduces a more restricted experience designed to reduce unwanted contact and harmful interactions involving minors. Under these settings, accounts are placed on private mode by default, messaging access is limited, and certain comments and interactions are automatically filtered.
Meta said the system has already been operating in countries including the United States, Australia, Canada, and the United Kingdom through Instagram. The company is now extending these protections to Brazil and all 27 member states of the European Union.
In another significant expansion, Meta confirmed that Teen Accounts will also launch on Facebook in the United States for the first time. The company plans to expand the feature further into the United Kingdom and European Union beginning in June.
The broader rollout reflects Meta’s attempt to demonstrate stronger child safety enforcement as lawmakers continue questioning whether major social media companies are doing enough to protect younger users.
Child Safety Concerns Continue to Intensify Around Meta Platforms
Meta’s latest announcement comes during a period of mounting legal and regulatory pressure tied to youth safety concerns on social media.
Regulators in Europe have increasingly focused on whether large technology platforms are adequately enforcing age restrictions and protecting children from harmful online experiences. Preliminary findings from the European Commission reportedly suggested that Meta’s current safeguards may not fully satisfy obligations under the Digital Services Act, particularly regarding the prevention of underage access.
The Digital Services Act imposes strict responsibilities on large online platforms operating within the European Union, especially concerning harmful content, platform transparency, and user protection standards.
At the same time, Meta is also facing legal challenges in the United States connected to allegations surrounding the safety of its platforms for children and teenagers.
Reports recently indicated that a New Mexico jury imposed a civil penalty of $375 million against Meta after determining that the company had misled the public regarding how safe its platforms are for minors. The case intensified public debate over the broader impact of social media on children’s mental health, online behavior, and digital wellbeing.
The company has repeatedly defended its child safety efforts and says it continues investing heavily in moderation systems, AI safety tools, and parental controls.
Meta Pushes for App Store Age Verification Laws
As part of its broader child safety strategy, Meta also renewed calls for legislation that would require app stores to verify user ages before allowing access to social media applications.
The company argued that app stores are in a stronger position to confirm ages at the device level and could provide more consistent verification standards across digital platforms.
Meta claimed that a large majority of parents in the United States support this approach. According to the company, 88 percent of American parents favor requiring app stores to verify user ages and share that information with app developers.
The proposal reflects a growing industry debate over who should bear the primary responsibility for online age verification. Technology companies, lawmakers, privacy advocates, and app marketplace operators continue to disagree on the best balance between user safety, data privacy, and platform accountability.
While Meta supports app-store-level verification, critics argue that social media companies themselves must remain directly responsible for enforcing their own age restrictions and moderation policies.
AI Moderation Becomes Central to Platform Governance
Meta’s latest move also highlights the increasing role artificial intelligence now plays in large-scale platform moderation and digital safety enforcement.
As billions of photos, videos, comments, and interactions flow across social media platforms every day, technology companies are relying more heavily on automated systems to identify harmful behavior, policy violations, and safety risks.
The company said it is also improving its reporting systems so users can more easily flag accounts suspected of belonging to children under 13. Meta plans to supplement human moderation teams with AI systems capable of applying more consistent standards during review processes.
Industry analysts believe the growing use of AI moderation tools represents both an operational necessity and a major strategic shift for technology platforms. Automated systems can review massive volumes of content far faster than human teams alone, though concerns remain about accuracy, fairness, privacy, and the risk of incorrect enforcement actions.
Meta’s emphasis that its age estimation system is not facial recognition appears aimed at addressing potential public concerns about surveillance and biometric privacy. Facial recognition technology has faced strong criticism globally over issues involving consent, data security, and misuse.
By separating its new system from facial recognition terminology, Meta appears to be positioning the technology as a safety-focused moderation tool rather than an identity tracking mechanism.
Growing Global Debate Over Children and Social Media
The expansion of AI-driven age detection reflects a much broader global conversation about children’s access to social media and the psychological impact of digital platforms on young users.
Governments across several countries are considering stricter rules involving screen time limits, age verification requirements, parental controls, and restrictions on algorithm-driven recommendations for minors.
Researchers and child safety organizations have repeatedly warned that excessive social media exposure may contribute to mental health challenges among young users, including anxiety, depression, social pressure, and harmful online interactions.
Technology companies including Meta, TikTok, Snap, and others are now under sustained pressure to demonstrate stronger safeguards for younger audiences.
Meta maintains that its latest AI systems are part of a long term strategy focused on improving child safety while maintaining user privacy protections. However, the effectiveness of these measures will likely face continued examination from regulators, lawmakers, and advocacy groups worldwide.
The company’s latest actions signal that artificial intelligence will play an increasingly central role in how social media platforms enforce age restrictions and manage online safety in the years ahead.
Frequently Asked Questions
What new AI technology is Meta using on Facebook and Instagram?
Meta is using AI systems that estimate a person's age from photos and videos by analyzing visual cues such as height and bone structure to identify possible underage users.
Is Meta using facial recognition to detect underage users?
Meta said the technology is not facial recognition. The AI does not identify who a person is and only estimates approximate age based on visual analysis.
Why is Meta introducing AI-based age detection tools?
Meta says the goal is to improve child safety by identifying accounts that may belong to children under 13, who are not allowed to use Facebook or Instagram.
What happens if Meta suspects an account belongs to a child under 13?
Accounts flagged by the system are suspended immediately. Users must complete Meta's age verification process to regain access, or the account may be permanently removed.
What signals does Meta use to identify underage users?
Meta combines visual AI analysis with text and behavioral signals, including posts, captions, comments, and profile information that may reveal a user's age.
What are Meta Teen Accounts?
Teen Accounts are restricted account settings for users aged 13 to 17 that include private accounts by default, limited direct messages, and stronger content protections.
Which countries are receiving Meta's expanded Teen Accounts feature?
Meta is expanding Teen Accounts on Instagram to Brazil and all 27 European Union member states while also launching the feature on Facebook in the United States.
Why is Meta facing pressure over child safety?
Meta is under scrutiny from regulators and lawmakers over concerns that its platforms may not provide enough protection for children and teenagers online.
What concerns have regulators raised about Meta's platforms?
European regulators have questioned whether Meta's existing child safety measures fully comply with Digital Services Act requirements for preventing underage access.
What did the New Mexico case against Meta involve?
Reports said a New Mexico jury imposed a $375 million civil penalty against Meta over allegations that the company misled the public about child safety on its platforms.
Does Meta support mandatory age verification laws?
Yes. Meta supports legislation that would require app stores to verify user ages and share that information with app developers.
How is artificial intelligence changing online platform moderation?
AI allows platforms like Meta to review large amounts of content faster and apply automated systems to detect policy violations, safety risks, and underage accounts.