If you’ve opened YouTube recently, you may have stumbled across a sudden, intrusive and all-too-familiar pop-up informing you that your settings have been changed and that you must verify your age to avoid having child safety settings applied automatically. YouTube’s new child safety policy comes amid a nationwide, and even worldwide, wave of child safety crackdowns driven by concerns about minors’ increasing access to age-inappropriate content online, fueled further by the controversy over child safety in the popular online game Roblox and the rollout of new child safety measures in the EU.
In reaction to backlash from social media and politicians, California and several other states have begun taking legislative action, pressuring the companies behind major information-sharing platforms like YouTube to implement new safety measures for children. In the scramble to meet these regulations, some companies have turned to AI software that moderates and monitors user activity in order to verify age. Collecting data this way is problematic, however, because it places personal user data, especially that of minors, in the hands of corporations with histories of misusing data without consent. Furthermore, AI is fallible when it attempts to deduce a user’s age, and its mistakes further compromise privacy.
A recent act passed by Congress regulates AI with respect to child safety, requiring “covered social media platforms,” including video games, messaging applications, video streaming services and any other platform likely to be used by individuals under 17, to implement tools and safeguards that protect minors from exposure to age-inappropriate content. It encourages corporations to exercise care when designing features in order to mitigate the data mining of minors, such as by including tools for parents, access to privacy settings and mechanisms through which visitors can report concerns about harm to minors. Under enforcement by the Federal Trade Commission, these corporations would be prohibited from conducting market or product research on minors under the age of 13 entirely, and on minors between 13 and 17 without parental consent.
In California, minors are protected by the California Age-Appropriate Design Code Act, which requires corporations to remain transparent with the public when promoting online products and services directed towards minors and to take into account the “unique needs of different age ranges” by setting up protections based on the “developmental stages” of childhood listed in the Act. For example, ages 10 to 12 are categorized as “transition years,” ages 13 to 15 as “early teens,” and 16 to 17 as “approaching adulthood.”
In 2019, YouTube was accused of using AI to collect and analyze personal user data, regardless of the user’s age, to build its algorithms, effectively bypassing existing child protection laws. Rather than openly declaring itself a platform not meant for children, YouTube quietly collected data from all users, a practice that led to legal violations, regulatory action and fines.
More recently, the company has once again come under scrutiny for similar practices. While YouTube claims its AI age verification systems are designed to ensure safer online environments, they have created new problems. These algorithms analyze user activity on YouTube, such as the topics of the videos someone watches, to flag possible minors. One major concern is that AI often misidentifies users’ ages this way, since it is difficult to accurately categorize a given type of activity or content as specific to a single age group.
Regulating companies through legislation is often ineffective, as corporations with vast resources consistently find ways to circumvent laws without facing serious penalties. As mentioned, YouTube has exploited loopholes by avoiding clear classification as a platform “meant for children,” while companies like TikTok and Meta have faced fines for child privacy violations, yet all continue to operate much as before with little meaningful change. Financial penalties are simply treated as the cost of doing business rather than as a punishment severe enough to stop companies from invading users’ privacy.
AI also cannot always be trusted to properly moderate users; many users have complained about being incorrectly identified as minors by imperfect algorithms and being forced to either submit sensitive personal information to verify their age or give up the full access they previously had. This violates one of the most fundamental rules of online privacy: never share sensitive information online. Given their track record, it is reasonable to doubt whether large corporations like YouTube will handle such data responsibly.
Given the limitations of legislation, the unreliability of AI and the ease with which large corporations exploit loopholes, responsibility cannot rest on legal and corporate regulation alone. Parents and educators must take a proactive role in teaching children how to navigate platforms driven by AI systems that may invade their privacy. Instead of focusing exclusively on what companies should block, we should shift towards preparing children to make informed decisions online by educating them on how AI algorithms work, why certain content is pushed to them and what data privacy means.
While it is true that educating children on responsible use of AI does not eliminate data collection, regulation must start in the household: thoroughly reading terms and conditions and adjusting the settings of AI-driven platforms on kids’ devices. As students, we should stay cautious and be especially mindful of the data we enter into the social media and video streaming apps we commonly use. Although the responsibility for not misusing our data ultimately lies with the companies, there is no perfect solution yet, and there may never be. There are always loopholes around the law, and for these huge corporations the consequences are almost nonexistent, arriving only after data has already been collected and monetized.
Therefore, parents should teach their children how to approach platforms whose AI abuses data collection, review terms and conditions and claim their right to privacy by adjusting device settings to opt out of data collection wherever possible.


