WASHINGTON, D.C. – In his opening statement for the record for today’s Subcommittee on Consumer Protection, Product Safety and Data Security hearing titled “The Need for Transparency in Artificial Intelligence,” Commerce Committee Ranking Member Ted Cruz (R-Texas) stressed the immense potential artificial intelligence (AI) presents for economic growth. He cautioned against allowing fear of new technology to reflexively trigger a new regulatory superstructure that would hold back individuals from creating new products and ideas and make the United States less competitive against China.
Earlier today, Sen. Cruz sent a letter to Federal Trade Commission (FTC) Chairwoman Lina Khan seeking answers regarding her plans to regulate AI, including the constitutionally protected speech used to train large language models, without any explicit statutory congressional authorization.
Sen. Cruz’s remarks, as submitted for the record, are included below:
“Thank you, Chairman Hickenlooper and Ranking Member Blackburn. And thank you, Chairwoman Cantwell, for calling this hearing. This Committee should be at the forefront of discussions about artificial intelligence, or AI.
“It is important to keep in mind that while AI is becoming more advanced, it is not new. It’s been around for decades. You use AI every day without realizing it – for example, when you receive online shopping recommendations or have your text messages autocorrected. Beyond improving mundane tasks, AI is already transforming our world for the better. It’s detecting cybersecurity threats to critical infrastructure; improving agricultural yield; and with advancements, potentially enhancing cancer detection and treatment. In these ways and more, AI has already vastly improved Americans’ quality of life.
“Congress would do well to learn from the past. This is far from the first time we’ve debated whether and how to regulate innovation. Take the internet as an example. In the 1990s, the United States made a conscious, bipartisan decision to avoid heavy government intervention that might stunt the internet’s growth, including bureaucratic organization under one agency head and influence over speech and content issues. The results speak for themselves: The U.S. now has the most successful internet companies in the entire world—it isn’t even a close contest.
“With AI, I’m afraid that we are allowing history to repeat itself – only this time, we are following our European counterparts, who made a mistake with their early regulation of the internet. You can’t read the news today without encountering Terminator-style fearmongering about AI building a weapon of mass destruction or developing sentience that will destroy humans.
“Let’s be clear: AI is computer code developed by humans. It is not a murderous robot. Humans are responsible for where and how AI is used.
“Unfortunately, the Biden Administration and some of my colleagues in Congress have embraced doomsday AI scenarios as justification for expanded federal regulation. Some of these proposals are extremely onerous: Licensing regimes, creating a new regulatory agency to police computer code, and mandatory, NEPA-style impact assessments before AI can be used. The fearmongering around AI has caused us to let our guard down to accept so-called “guardrails”—pseudo-regulations disguised as safety measures. These are often cumbersome and infeasible, especially for the startups so common in the tech sector.
“I don’t discount that there are some risks associated with the rapid development and deployment of AI. But we must be precise about what these risks are. I’ve heard concerns from my colleagues about misinformation, discrimination, and security. These certainly present challenges, but I have a hard time viewing them as existential risks, or even worthy of new regulation.
“To me, the biggest existential risk we face is ourselves. At this point, Congress understands so little about AI that any regulation it enacts will do more harm than good.
“It is critical that the United States continue to lead in AI development—especially when allies such as the European Union are charging toward heavy-handed regulation.
“Let me propose an alternative. Instead of stoking fears and pausing AI development, let’s pause before we regulate.
“We can start by fully assessing the existing regulatory landscape before burdening job-creating businesses—especially startups—with new legal obligations. Many of our existing laws already apply to how AI systems are used. For example, the Fair Credit Reporting Act protects consumer information held by reporting agencies, and the Civil Rights Act of 1964 prohibits discrimination. We should faithfully enforce these laws, as written, without overstepping.
“The FTC’s investigation of OpenAI is a clear abuse of authority. As I wrote to Chairwoman Khan this week, fearmongering and fanciful speculation do not justify enforcement action against Americans creating new AI tools. The FTC’s unprecedented and aggressive policing of AI would undoubtedly require statutory authority from Congress.
“Leading the AI race is also important for national security. If we stifle innovation, we may enable adversaries like China to out-innovate us. I’ve been cautioning against ceding leadership on AI development to China since 2016, when I held Congress’s first-ever hearing on AI. We cannot enact a regulatory regime that slows down innovation and lets China get ahead of us.
“Think about what might have happened if we had let fear get the best of us with other technological developments. Panics about new technology have occurred throughout history—and the panics have generally not borne out. There was widespread fear and calls to regulate around the advent of innovations such as automobiles, recorded sound, typewriters, and weaving machines. Every time, after the hysteria died down, we adapted and allowed technology to improve our lives, spur economic growth, and create new jobs.
“The same opportunity exists today with AI. Let’s not squander it.”
STATEMENTS FOR THE RECORD:
In addition to Sen. Cruz’s remarks, the Information Technology & Innovation Foundation (ITIF), the Cato Institute, and the R Street Institute have submitted statements for the record expressing concern regarding burdensome AI regulations. Key excerpts from these statements are included below:
Information Technology &amp; Innovation Foundation (ITIF):
“Policymakers should be careful of holding AI systems to a higher standard than they do for humans or other technologies and products on the market. This is a mistake the European Commission is making with its AI Act. The EU’s original proposal contains impractical requirements such as “error-free” data sets and impossible interpretability requirements that human minds are not held to when making analogous decisions. Policymakers should recognize that no technology is risk-free; the risk for AI systems should be comparable to what the government allows for other products on the market.”
Cato Institute:
“Because AI is a broad general-purpose technology, one-size-fits-all regulations are likely a poor fit. Building on the success of past light touch approaches that refrained from a precautionary approach to technologies including the internet, policymakers should resist the urge to engage in top-down rulemaking that attempts to predict every best- and worst-case scenario and accidentally limits the use of technology. Industrial policy that seeks to direct technological development in only one way may miss the creative uses that entrepreneurs seeking to respond to consumer demands would naturally find. For this reason, policymakers should ensure regulations are carefully targeted at specific harms or applications that would be certain or highly likely to be catastrophic or irreversible instead of broad general-purpose regulations.”
R Street Institute:
“Algorithmic systems evolve at a very rapid pace and undergo constant iteration, with some systems being updated on a weekly or even daily basis. If policy is based on making AI perfectly transparent or explainable before anything launches, then innovation will suffer because of endless bureaucratic delays and paperwork compliance burdens. Society cannot wait years or even months for regulators to eventually get around to formally signing off on mandated algorithmic audits or impact assessments, many of which would be obsolete before they were completed. […] This means that legislatively mandated algorithmic auditing or explainability requirements could also give rise to the problem of significant political meddling in speech platforms powered by algorithms, which would raise free speech concerns.”