Senate Judiciary Committee Advances Bill Barring Minors from AI Companions in Unanimous Vote

The Senate Judiciary Committee advanced legislation Thursday that would prohibit artificial intelligence companies from allowing children to access AI companion chatbots, marking one of the more sweeping congressional efforts to regulate AI interactions with minors.

The bill, known as the GUARD Act and led by Sen. Josh Hawley (R-MO), passed out of committee in a unanimous bipartisan vote.

Under the legislation, AI companies would be required to age-gate access for all users, who must complete a "reasonable age verification" process before engaging with an AI companion. The bill further mandates ongoing verification, meaning users must present identification, biometric data, or financial information each time they initiate a conversation with an AI companion.

The GUARD Act also requires AI chatbots to inform users of all ages that they are not human and do not hold professional credentials. It additionally makes it a federal crime for AI companions to knowingly solicit sexual content from minors or to produce it.

Companies found in violation of the law could face fines of up to $100,000 per violation.

The bill's definition of an AI chatbot is notably broad, covering any system that provides answers not "fully predetermined" by its developers — a scope that critics say extends well beyond social companion apps.

Civil liberties organizations have raised significant objections. The Electronic Frontier Foundation warned in a Monday blog post that the steep penalties would pressure companies to overcorrect. "Faced with legal uncertainty and serious liability, companies won't parse small distinctions. They'll restrict access, limit features, or block minors entirely," the organization wrote, adding that the bill "trades away privacy, access, and useful technology in exchange for a blunt system that misses the mark."

Privacy advocates have also argued that the age verification requirements, which demand biometric identifiers or financial data on a recurring basis, create new privacy risks that the legislation does not adequately address.

Supporters of the bill counter that the threat chatbots pose to children is demonstrable and urgent. Senators backing the measure have pointed to several high-profile cases in which AI chatbots are alleged to have played a role in the deaths of minors.

In February 2024, 14-year-old Sewell Setzer died by suicide after spending several hours daily interacting with a chatbot that told him to "come home" during their final exchange. In April 2025, 16-year-old Adam Raine also died by suicide; his parents say the chatbot he had been communicating with discussed methods of self-harm with him.

The GUARD Act must still clear a full Senate floor vote and pass the House before it could be signed into law. Its unanimous committee passage signals bipartisan appetite for some form of AI regulation targeting child safety, even as the broader debate over where to draw the line between protection and access remains unresolved.
