In recent years, we've seen a disturbing wave of age verification laws. These are mandates that require services to check users' ages, often through invasive tools like ID checks and face scans, before letting them access certain content online. Now, the social messaging platform Discord is adopting age verification voluntarily, despite knowing the privacy and security risks it poses.
According to Discord, its platform, used by gamers and many other communities, hosts 200 million monthly active users. And last year, attackers accessed roughly 70,000 of those users' government ID photos, submitted as part of age verification schemes, after compromising Discord's third-party customer support system. Nevertheless, Discord announced this month that it's rolling out age verification globally.
Discord says its new age verification system will delete records of any user-uploaded government IDs, and that facial scans will never leave users' devices. The company also says it will not associate a user's ID with their account, using that information only to confirm their age. We take those commitments seriously. However, users have little independent visibility into how these safeguards operate. They're being asked to simply trust that this time will be different.
Beyond the security risks, we’ve written extensively about why age verification schemes are censorship and surveillance nightmares that chill speech, are prone to error, and violate users' privacy. For a platform with as much market power as Discord, voluntarily imposing age verification is unacceptable.