Revised GUARD Act Still Poses Threats to Privacy and Online Speech, Critics Warn

The revised GUARD Act narrows its focus to AI companions but retains invasive, identity-linked age verification, endangering privacy and parental choice. Critics warn that vague definitions and heavy penalties remain problematic.

Bvoxro Stack · 2026-05-10 07:47:41 · Software Tools

Revised Bill Narrows Scope, But Core Flaws Remain

Lawmakers have significantly narrowed the GUARD Act, a bill originally targeting minors' access to AI, but critics warn the revised version still endangers privacy and free expression. The amended legislation now focuses on “AI companions” — conversational systems simulating emotional or interpersonal interactions — rather than all AI chatbots or search tools. However, the new text retains mandatory age verification tied to real-world identities, raising alarms among privacy advocates.

Source: www.eff.org

“The changes address the broadest concerns, but the core problem persists: the bill still forces users to surrender personal data to access AI services,” said a spokesperson from the Electronic Frontier Foundation (EFF). “This creates chilling effects for privacy and parental choice.”

Burdensome Age Verification Remains

The revised GUARD Act requires companies offering AI companions to implement “reasonable age verification” systems. While the allowed methods have expanded — now including age-verified app store accounts — they remain linked to real-world identity, such as financial records or government IDs.

“Millions of Americans lack current IDs or bank accounts,” the EFF spokesperson added. “Requiring identity-linked verification to use online speech tools creates real risks for privacy, anonymity, and data security. Many people will simply opt out rather than compromise their security.”

Vague Definitions and Heavy Penalties

The bill narrows the definition of “AI companion” to systems that engage users in “emotional disclosures” or maintain a “persistent identity, persona or character.” Yet the boundaries of that definition remain unclear, and developers face steep penalties for misjudging whether their AI qualifies.

“Congress is trying to solve a complicated social problem with vague legal standards and privacy-invasive verification systems,” the EFF warned. “This heavy liability approach will stifle innovation and limit access to beneficial tools.”


Background

Initially, the GUARD Act faced widespread criticism for its sweeping scope, which could have covered almost every AI chatbot or search tool. After public backlash, lawmakers narrowed it to focus on AI companions. The original bill drew opposition from tech companies, civil liberties groups, and parents who feared it would block tools that help isolated teens practice social interaction or prevent deployed military parents from setting up AI storytellers for their children.

“Families who decide an AI companion benefits their children would still face significant hurdles under this bill,” the EFF noted. “A parent deployed overseas might want a persistent AI storyteller for a young child — but the verification system could block that.”

What This Means

The revised GUARD Act still imposes onerous identity checks that undermine anonymity online. Privacy advocates argue that even parents who want their teens to use AI companions could be locked out if they don’t have required forms of ID or financial accounts.

“This bill doesn’t solve the underlying issue of AI safety; it just shifts the burden onto users,” the EFF concluded. “Until Congress addresses these core flaws, the GUARD Act remains a threat to privacy, speech, and parental choice.”
