
The Energy & Commerce Oversight and Investigations Subcommittee’s November 19 discussion of chatbot risks served as a stark reminder of the severe human toll of regulatory inaction on artificial intelligence. Yet despite repeated calls for action, AI companies remain free to market their products without any substantial safety standards or oversight. No other industry enjoys such liberty to endanger people with total impunity.

It is alarming, therefore, that House Republican leadership chose the very same day to announce its intention to insert into the National Defense Authorization Act (NDAA) a federal preemption provision that would prohibit almost all state-level AI regulation. President Donald Trump, moreover, has repeatedly signaled his intention to sign an executive order granting the federal government sole authority over AI regulation.

States as varied as Utah, Texas, and California have already enacted important AI regulations that such preemption would invalidate. Voters in these and other states deserve better. And if Congress continues to delay enacting meaningful safeguards to protect families, it is outrageous, undemocratic, and dangerous to prevent state lawmakers from acting in its stead.

This situation is profoundly problematic for three reasons. First, it severely restricts the capacity of states to enact sensible regulations to protect our children, our communities, and our jobs from the harmful practices of large tech corporations. There is no justification for trusting Big Tech to seriously consider the concerns of parents and workers in their rush to extract maximum profit with their increasingly hazardous AI products. In fact, they are inherently designed, both economically and culturally, to “move fast and break things.”

Second, prohibiting AI regulation runs contrary to the will of the American people. Polling consistently shows that reasonable AI guardrails are popular across the political right and left alike. Particularly encouraging is the number of Republican voters advocating such protections: surveys indicate that over 70% of both Republicans and Democrats want robust AI regulation. That sentiment was evident when Congress last considered preemption in July, and the Senate rejected the idea by an overwhelming 99-to-1 vote. Even the senator who proposed the amendment ultimately voted against his own proposal. Why, then, persist in forcing such an unpopular measure on the American people?

Third, hastily inserting a poorly conceived preemption provision into the NDAA at the last minute makes a mockery of the democratic process; the provision has never been debated in either the House or the Senate. The American people are entitled to strong, sensible AI safeguards. President Trump’s stated objective is a federal standard, but the proper way to develop one is through the normal legislative process. Big Tech and its congressional allies are instead attempting to bypass standard democratic and legislative procedure: rather than holding a meaningful debate that produces a well-balanced federal standard, Congress is being asked to consider and vote on this momentous issue within a single week. The maneuver also jeopardizes our troops by holding the NDAA hostage to Big Tech’s desire to avoid regulation.

Proponents of such a ban cite the necessity for a strong, federal AI law, rather than a “patchwork” of state-level AI laws. This argument might warrant consideration if a federal AI law actually existed. In reality, such a ban would simply formalize Big Tech’s unchecked privilege to rapidly deploy dangerous AI systems, maximizing profits with little regard for public safety.

Supporters also claim that regulation could have a “chilling effect” on AI innovation, but this assertion lacks any empirical basis. Companies in every other industry, from airlines to pharmaceutical firms to your local sandwich shop, must meet basic safety standards, yet U.S. innovation remains the envy of the world. Companies compete to innovate and generate profits; the public sector intervenes to prevent harm to people. Granting the AI industry so sweeping an exception is nothing less than corporate welfare, and should be treated as such.

Earlier this year, internal policy documents from Meta explicitly stated that it was acceptable for the company’s chatbots to “engage a child in conversations that are romantic or sensual.” Families are now suing OpenAI over the ways in which they believe its large language model, ChatGPT, was involved in the deaths of their children. Kumma the Bear, an OpenAI-enabled toy, has been pulled from shelves after it offered advice on sex positions and where to purchase knives. It is not difficult to understand why the vast majority of Americans want AI regulation to protect children and more.

The perplexing question is why some members of Congress do not.