A top U.S. official emphasized the importance of incorporating security measures into artificial intelligence (AI) systems from their inception.
Jen Easterly, director of the U.S. Cybersecurity and Infrastructure Security Agency, criticized the prevailing practice of shipping technology products with vulnerabilities and leaving consumers to patch them. She argued that AI poses a distinct challenge, given its exceptional power and the speed at which it is developing.
Easterly's comments followed discussions in Ottawa with Sami Khoury, head of Canada's Centre for Cyber Security. They concurred on the need for security to be an integral part of AI's entire lifecycle.
This stance aligns with new guidelines on AI cybersecurity endorsed by agencies from 18 countries, including the U.S. The guidelines, developed in the U.K., advocate for security throughout the design, development, deployment, and maintenance of AI systems.
The push for enhanced AI security also coincides with the White House's new executive order, which requires companies to report AI-related national security risks to federal authorities.
This move towards more stringent AI security measures reflects a growing international consensus on managing the risks of rapidly evolving AI technologies.
Leading AI developers have also agreed to work with governments to test new models before their public release, with the aim of making AI systems as safe and secure as possible.