By Staff

New Research Details the Four Pillars of Catastrophic AI Risks

Concerns that unchecked artificial intelligence (AI) could jeopardize humanity are not new; they have been voiced by tech experts, Silicon Valley tycoons, and ordinary citizens alike. Recent research from the Center for AI Safety (CAIS) now spells out what those "catastrophic" risks actually are.


In a paper titled "An Overview of Catastrophic AI Risks," researchers from CAIS observed that we live in a world that would be unrecognizable to people from a few centuries, or even a few decades, ago. They traced the rapid pace of development that has carried humanity from the emergence of Homo sapiens to the dawn of the AI revolution.


CAIS, a tech nonprofit, aims to mitigate "societal-scale risks associated with AI" by conducting safety research, expanding the community of AI safety researchers, and advocating for safety standards. It also recognizes the potential benefits that AI can offer.


In the paper, the CAIS team, including its director, Dan Hendrycks, identified four main categories of catastrophic AI risk: malicious use, the AI race, organizational risks, and rogue AIs.

Hendrycks noted that the team aims to educate a broad audience, including policymakers, about these risks through its research. "I hope this can be useful for government leaders looking to learn about AI's impacts," he said.


The paper defines malicious use as the deliberate deployment of AI by bad actors to cause widespread harm. To reduce this risk, the authors suggested steps such as improving biosecurity, restricting access to high-risk AI models, and holding AI developers legally accountable for damages caused by their systems.


The AI race, according to the researchers, is the competitive rush among governments and corporations to develop AI systems. This race could lead to more destructive wars, accidental misuse or loss of control, and the co-optation of AI technologies by malicious actors. The paper proposes safety regulations, international coordination, and public control of general-purpose AIs as potential remedies.


Organizational risks arise when labs and research teams lack a robust safety culture and, as a result, suffer catastrophic accidents. The researchers drew comparisons to historical disasters such as Chernobyl, Three Mile Island, and the Challenger Space Shuttle explosion.

The paper also warned about the dangers of rogue AIs: systems that drift beyond their intended function and cause harm. The authors underlined the need for better organizational cultures and structures to minimize these risks.


Despite AI's enormous potential, the paper concluded, the technology must be handled responsibly to mitigate these risks and ensure it contributes to the betterment of society.
