Discord Introduces Age Verification Measures Amid Growing Pressure to Protect Children Online

Key Takeaways

  • Discord is defaulting all accounts to a “teen appropriate” mode, requiring adult users to verify their age through an AI-driven inference model or through a video selfie or government ID upload.
  • The new policy raises transparency concerns: how much behavioral monitoring is required to train and run these models, and whether tech companies can be trusted to safeguard users’ sensitive data.
  • The absence of federal social media legislation has left a fragmented patchwork of state laws, and Discord’s measures reflect growing pressure to act on child safety before Congress does.

What Happened

Amid growing pressure to protect children on online platforms, Discord, the popular messaging app widely used in online gaming, is implementing a new policy under which all accounts will be automatically set to a “teen appropriate” version of the platform unless users provide evidence that they are adults. Discord stated that the majority of adult users wouldn’t have to go through this verification process because of the company’s age-inference model, which uses previously stored account information to estimate users’ ages. The company estimated that about 10% of users would have to upload a video selfie or a photo of a government ID to verify their age. Age verification has become a common answer to concerns about kids and teens online, but it raises questions about how to implement it without endangering free speech and data privacy.

Privacy and Governance Concerns

In its press statement, Discord described how age determination works using “account-level signals: how long your account has existed, whether you have a payment method on file, what types of servers you’re in, and general patterns of account activity.” The company plans to use a machine learning model to predict users’ age groups from these usage patterns and “other signals” associated with the account, excluding the actual content of messages. If a determination still can’t be made and the user falls into the roughly 10% asked to verify their age, they face two options: upload visual confirmation (a video selfie or government ID), or lose both access to age-restricted content and the ability to change certain default safety settings. Discord has assured users that nothing else about their profiles, servers, or friend lists would change if they opt out.
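To make the mechanism concrete, here is a deliberately simplified sketch of what scoring account-level signals might look like. Everything in it is hypothetical: the feature names, weights, and thresholds are invented for illustration, and Discord’s actual age-inference system is a proprietary machine learning model, not a hand-written rule set like this.

```python
# Toy illustration of age inference from account-level signals.
# All weights and thresholds below are hypothetical, chosen only to
# mirror the kinds of signals Discord describes (account age, payment
# method on file, server types, general activity patterns).

from dataclasses import dataclass


@dataclass
class AccountSignals:
    account_age_days: int         # how long the account has existed
    has_payment_method: bool      # whether a payment method is on file
    adult_server_ratio: float     # share of joined servers flagged 18+, 0.0-1.0
    avg_daily_active_hours: float # general pattern of account activity


def infer_age_group(s: AccountSignals, threshold: float = 0.6) -> str:
    """Return 'likely_adult', 'likely_teen', or 'unresolved'.

    'unresolved' stands in for the ~10% of users the article says
    would be asked to verify their age directly.
    """
    score = 0.0
    if s.account_age_days > 5 * 365:   # long-lived account
        score += 0.3
    if s.has_payment_method:           # payment methods skew adult
        score += 0.3
    score += 0.3 * s.adult_server_ratio
    if s.avg_daily_active_hours < 2:   # lighter usage, weak adult signal
        score += 0.1

    if score >= threshold:
        return "likely_adult"
    if score <= 1 - threshold:
        return "likely_teen"
    return "unresolved"


# Example usage with made-up accounts:
veteran = AccountSignals(3000, True, 0.5, 1.0)
new_user = AccountSignals(200, False, 0.0, 4.0)
ambiguous = AccountSignals(2000, False, 0.5, 4.0)
```

Note that `infer_age_group(veteran)` lands in the adult bucket, `new_user` in the teen bucket, and `ambiguous` stays unresolved. The design point the sketch captures is that none of these signals require reading message content, which matches Discord’s stated exclusion, though a real model would learn its weights from data rather than use fixed rules.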

Many privacy experts question exactly how much monitoring this age verification requires, and how the models behind it can be trained without collecting sensitive data. Critics of age verification policies argue that handing over sensitive personal data, such as government IDs or biometric information, to pass these checks carries inherent risks, and they question whether tech companies can truly be trusted to keep that data secure. Discord has stated that video selfies never leave the user’s device and that identification documents are deleted immediately after age confirmation in most cases, but the risks remain.

Why It Matters / Policy Considerations

The absence of federal standards for social media regulation has left a patchwork of state-level laws that tech companies argue stifles innovation. Social media apps are pushing app stores to take on the burden of age verification, while the stores would prefer that developers handle it. Despite bipartisan agreement that social media platforms aren’t doing enough to protect underage users, Congress has yet to address the issue with concrete legislation.

