
OPINION: If You Want to Regulate AI — Be Careful What You Ask For

Milton Friedman, the famed American economist and the 1976 winner of the Nobel Memorial Prize in Economic Sciences, made a point about government in many of his speeches and writings. One of the more memorable ones highlighted just how ineffective government regulation can be.


“If you put the federal government in charge of the Sahara Desert, in five years there'd be a shortage of sand.”


Government regulation of software doesn’t work. It’s been tried before. During the 1990s, the United States Munitions List of the International Traffic in Arms Regulations (ITAR) governed the sale and distribution of encryption products and restricted the publication of source code unless the encryption was extremely weak.


This resulted in the landmark legal decision Bernstein v. United States Department of State. The short version: the restrictions were ruled unconstitutional because encryption source code is an expression of free speech protected under the First Amendment.


The linkage to governmental interest in regulating AI is clear. This time, however, it appears the approach is not to regulate the code itself—AI—but the outcomes produced by the code. The proposed European Union AI Act incorporates a risk-based assessment with four tiers: minimal, limited, high, and unacceptable risk.


The approach seems objective, with references to specific use cases. However, the devil is always in the details. There are twenty-seven countries in the EU with twenty-seven different ideas about what is best for their own country. This creates a development and compliance headache, and it could stall entry into EU markets by the US and other countries unwilling to navigate multiple frameworks and regulatory schemes.


‘Progress’ comes with a cost—a price that nation-state adversaries like Russia, China, North Korea, and Iran are not paying. They have no such guiding principles and frameworks limiting the development of AI technologies, especially with respect to weapons of war.


According to an article from the Center for a New American Security, “AI is a high-level priority within China’s national agenda for military-civil fusion, and this strategic approach could enable the PLA to take full advantage of private sector progress in AI to enhance its military capabilities.”


Report after report, article after article, analysis after analysis makes it clear China will use commercially developed AI for military purposes. There are indications that China has begun incorporating “AI technologies into its next-generation conventional missiles and missile defense intelligence, surveillance, and reconnaissance systems to enhance their precision and legality (sic) [lethality].”


But a recent Brookings Institution survey discovered something a little different.

“Thirty percent of adult internet users believe AI technologies should be developed for warfare, 39 percent do not, and 32 percent are unsure, according to a survey undertaken by researchers at the Brookings Institution. However, if adversaries already are developing such weapons, 45 percent believe the United States should do so, 25 percent do not, and 30 percent don’t know.”


Placing onerous restrictions on AI will only give bad actors the advantage while we remain mired in bureaucracy. Over-rotating on regulation ensures that market forces will be sidelined in developing advanced AI technologies, and government regulation will become the primary driver. That is, in my opinion, the wrong order.


There are three distinct phases to ending up with a law. First, there is litigation: market forces determine what is acceptable and what the public is willing to pay for and tolerate. If that doesn’t achieve the desired governmental interest, the second phase is regulation—enacting rules on covered entities (e.g., publicly traded companies, pharmaceutical firms, financial institutions) to compel compliance. If the first two steps are not adequate, then it is the proper role of government to bring clarity to the market by passing legislation.


The poster child in the U.S. was the passage of the Sarbanes-Oxley Act in response to corporate malfeasance and cooking the books. Massive failures and fraud were first litigated. Then the Securities and Exchange Commission (SEC) passed additional regulations to improve transparency, oversight, and financial controls. When that didn’t achieve the desired outcome, the Sarbanes-Oxley Act was passed in 2002—423 to 3 in the House and 99 to 0 in the Senate.


We’re not at the legislative phase yet. Jumping over the natural progression will only stifle valuable development in AI, hamstring companies looking to break out with new technologies, and embolden our adversaries.


Morgan Wright is an internationally recognized expert on cybersecurity strategy, cyberterrorism, national security, and intelligence. He currently serves as a Senior Fellow at The Center for Digital Government, Chief Security Advisor for SentinelOne, and the chief technology analyst for Fox News and Fox Business.
