Ranking Member Cantwell Says Blackburn-Cruz AI Moratorium Amendment Does Nothing to Protect Kids and Consumers
June 30, 2025
WASHINGTON, D.C. – U.S. Senator Maria Cantwell (D-Wash.), Ranking Member of the Senate Committee on Commerce, Science, and Transportation, spoke out against the last-minute deal between Senators Cruz and Blackburn on a proposed five-year moratorium on states’ ability to regulate AI. The deal comes as parents, school districts, and states are fighting social media companies in court to stop them from pushing harmful and addictive products on kids; this provision would give Meta and TikTok a get-out-of-jail-free card.
“The Blackburn-Cruz amendment does nothing to protect kids or consumers,” said Sen. Cantwell. “It’s just another giveaway to tech companies. This provision gives AI and social media a brand-new shield against litigation and state regulation. This is Section 230 on steroids. And when Howard Lutnick has the authority to force states to take this deal or lose all of their BEAD funding, consumers will find out just how catastrophic this deal is.”
Important facts about the Blackburn-Cruz Amendment
This amendment, just as the underlying GOP bill does, may force states to accept the five-year AI moratorium:
This amendment allows Secretary of Commerce Howard Lutnick to force states to take the $500 million as part of a new formula allocation of BEAD funds, rather than allowing states to apply voluntarily. He will have the ability to deny funding unless a state accepts the new funding and includes it in its broadband plan, which the Secretary must approve. This is especially the case for unallocated or de-obligated funds: because Lutnick issued new guidance on the BEAD program revoking all non-deployment approvals, he could force states to agree to the moratorium or shift all of that funding to AI projects.
Clearly, Mr. Lutnick sees this “compromise” as affecting the entire nation.
This amendment will prevent states from protecting children and consumers from AI-assisted harms:
The Blackburn-Cruz amendment will have a direct impact on thousands of pending lawsuits against social media companies to protect kids and consumers online, because it will require states, school districts, parents, and consumers to prove that the law or regulation does not impose an “undue or disproportionate burden” on the tech company. Social media companies count on addicting kids to their products to maximize profits, so they will call any effort to stop them an undue or disproportionate burden.
The Amendment purports to create an exemption from the moratorium for certain state laws:
“A generally applicable law or regulation such as a law or regulation pertaining to unfair or deceptive acts or practices, child online safety, child sexual abuse material, rights of publicity, protection of a person’s name, image, voice, or likeness and any necessary documentation for enforcement, or a body of common law that may address without undue or disproportionate burden artificial intelligence models, artificial intelligence systems or automated decision systems to reasonably effectuate the broader underlying purpose of the law or regulation”.
Even with this exemption, the moratorium would preempt state robocall regulations, for example. Some states have decades-old laws on the books that prohibit the use of automated systems for selecting and dialing telephone numbers without the consent of consumers. Because these laws treat telemarketers that use automatic dialing systems differently from telemarketers that use live callers, those laws are not “generally applicable” and would be preempted. The outcome will be more unwanted robocalls.
Examples of current cases and of enacted and proposed legislation that would be affected include:
Generally applicable laws that protect kids and consumers
- Kids’ online safety laws that seek to regulate harmful algorithms because those laws will necessarily regulate the automated decision systems that power those algorithms.
- Criminal laws to prevent the spread of non-consensual intimate imagery because those laws may specifically burden AI systems used to create the non-consensual intimate images.
- Laws requiring companies to disclose whether a consumer is interacting with an AI chatbot or a real human because those laws are only directed to AI systems.
Existing litigation to protect kids online. Many lawsuits seeking to protect children from the harmful effects of social media would be affected by this AI moratorium, including lawsuits against social media companies for designing their products with addictive algorithms that hook kids and for promoting harmful content through algorithms that keep young users online and engaged. The Blackburn-Cruz amendment gives tech companies a loophole: they can argue that these laws and lawsuits impose an undue or disproportionate burden on their AI and automated decision systems. Examples include:
- Lawsuits brought by state attorneys general and school districts to protect kids and teens in their states, including a current lawsuit brought by dozens of state AGs alleging that Meta’s social media platforms are designed to maximize screen time and addict teens, resulting in emotional and physical harm.
- A lawsuit against TikTok brought by the families of children who died while attempting viral social media challenges and by a young girl who developed an eating disorder after TikTok’s AI-powered algorithms allegedly served her a steady stream of videos promoting anorexia.
- A lawsuit against Snapchat brought by the family of a child who committed suicide after developing an addiction to the platform.
- A lawsuit brought against Meta and Snapchat by a young girl who developed an eating disorder allegedly from the platforms’ recommendation algorithms.
Proposed laws to protect kids from harmful AI chatbots. Many states are considering laws to protect children from the harmful effects of interacting with, and becoming dependent upon, chatbots. If the AI moratorium is in place, these laws could not go into effect, leaving kids entirely unprotected for the duration of the moratorium. Examples of proposed laws include:
- A New York proposal that would require chatbot companies to provide suicide hotline information if a user mentions self-harm or suicidal thoughts.
- A Minnesota proposal to prohibit companion chatbot platforms from allowing minors to access the platforms for “recreational purposes”.
- A North Carolina bill that would establish a “duty of loyalty” that chatbot platforms would owe their users.
Artificial Intelligence Governance Laws. State laws that directly regulate AI models, AI systems, or automated decision systems would be preempted. Examples:
- California AI Training Data Transparency. On or before Jan. 1, 2026, developers of generative AI systems must publicly disclose on their websites specified information about the data sets used to develop their generative AI systems.
- California AI Transparency Act. Effective Jan. 1, 2026, companies that provide generative AI systems must include provenance disclosures in the content they generate and provide a publicly accessible detection tool so that users can identify AI-generated content.
- Colorado Consumer Protections for Artificial Intelligence. Beginning Feb. 1, 2026, developers and deployers of high-risk AI systems must use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination in the system. The law presumes reasonable care if the developer or deployer complies with specific provisions of the Act.
- Utah Artificial Intelligence Policy Act. Took effect May 1, 2024. Requires certain businesses to disclose that a consumer is interacting with generative AI. The law also creates an Office of AI Policy and requires all other businesses to disclose, when asked by the consumer, that they use generative AI.
- Maine enacted a law on June 12, 2025 that prohibits companies from using an AI chatbot in a way that misleads a consumer into believing they are engaging with a human; the law requires a clear and conspicuous disclosure that the consumer is not engaging with another human.
- Louisiana recently enacted a law that requires an attorney to verify the authenticity of evidence before offering it in court and to disclose whether the evidence was artificially manipulated.
Laws Prohibiting the Use of Rent-Setting Algorithms. Preempting these laws could raise prices for consumers and small businesses.
- The State of Colorado and the cities of San Francisco, Philadelphia, and Minneapolis have enacted laws that prohibit the use of AI systems to set rents. These systems are anticompetitive and have allowed landlords to collude to raise rents and drive up costs for consumers and small businesses. Colorado is the first state to prohibit the practice, and several states, including Washington, are considering such legislation.
Laws Regulating the Use of AI in Political Ads. State laws that regulate AI-generated content in elections but do not impose criminal penalties could not be enforced. Examples include:
- Washington State Law Creating Civil Liability for Election Deepfakes. In 2023, Washington passed a law that gives candidates a private right of action for injunctive relief or damages if the candidate’s appearance, speech, or conduct has been altered by AI and if the sponsor of the content did not include a disclosure that the content has been manipulated.
- Montana Law Granting Civil Remedies for Election Deepfakes. Montana passed a law that grants a candidate or political party the right to sue to enjoin the distribution of, and obtain damages for, an election deepfake distributed within 60 days of an election without a disclosure that the media “has been significantly edited by artificial intelligence and depicts speech or conduct that falsely appears to be authentic or truthful.” The law also authorizes the elections regulator to issue civil penalties for a violation.
- Arizona Civil Penalty for Election Deepfakes. This law imposes a civil penalty for creating and distributing a deceptive and fraudulent deepfake of a political candidate within 90 days before an election without a disclosure that the media includes AI-generated content.
Provisions of Privacy Laws. Many states have provisions in their privacy laws that address automated decision-making, such as giving consumers the right to opt out of profiling and requiring risk assessments of data processing activities. The moratorium is unlikely to preempt state privacy laws in their entirety, but it will preempt enforcement of the provisions that apply to AI systems.
- Right to Opt Out of Use of Personal Data for Profiling. Many state privacy laws give consumers the right to opt out of profiling, which is usually defined as automated processing of personal data to evaluate or predict personal aspects concerning a consumer’s economic situation, health, personal preferences, interests, reliability, behavior, location, or movements. California, Colorado, Connecticut, Delaware, Indiana, Kentucky, Maryland, Minnesota, Montana, Nebraska, New Hampshire, New Jersey, Oregon, Rhode Island, Tennessee, Texas, and Virginia all allow consumers to opt out of profiling.
- Risk Assessments. Many states require companies to assess processing activities that present a heightened risk of harm to consumers, including profiling and other activities posing a foreseeable risk of harm; some states require assessments of the risk of discrimination. California, Colorado, Connecticut, Delaware, Indiana, Kentucky, Maryland, Minnesota, Montana, Nebraska, New Hampshire, New Jersey, Oregon, Rhode Island, Tennessee, Texas, and Virginia all require impact assessments to some degree, which will entail assessing AI systems.
Robocall Laws. Many states, some dating back decades, have robocall laws that explicitly cover automated systems. Examples include:
- Louisiana has been regulating systems that automatically select or dial telephone numbers and play recorded messages since 1991 – for 34 years.
- Arkansas has made it unlawful since 1981 – for 44 years – to use an automated system for selecting and dialing telephone numbers and playing a recorded message for telemarketing.
- North Carolina passed a law in 2003 – 22 years ago – that regulates the use of automatic dialing machines to make unsolicited telemarketing calls to consumers. These dialing systems fall within the broad definition of “artificial intelligence” because they are automated systems that can dial telephone numbers without a human.
- Oklahoma’s telephone solicitation law, enacted in 2022, expressly prohibits the use of an automated system for selecting and dialing telephone numbers without the prior express written consent of the consumer.
###