NIST Expands AI Security Playbook With New Cybersecurity Framework Profile
- Staff
The National Institute of Standards and Technology (NIST) is extending its influence over how organizations secure artificial intelligence, releasing a draft companion to its widely adopted Cybersecurity Framework (CSF) that directly addresses AI-related risks and defenses.
The new Cybersecurity Framework Profile for Artificial Intelligence, published Tuesday, is designed to help organizations map AI-specific security considerations onto the CSF, one of the most commonly used cybersecurity blueprints across government and industry. The profile outlines how organizations can both protect AI systems and use AI to strengthen cyber defenses, while also guarding against AI-enabled attacks.
NIST groups its guidance into three core categories: "secure," "defend," and "thwart." These reflect the different ways AI is entering enterprise environments: internally deployed models that must be protected, AI-powered tools that can strengthen security operations, and threats posed by adversaries leveraging AI.
“AI is entering organizations’ awareness in different ways,” said Barbara Cuthill, one of the profile’s authors. “But ultimately every organization will have to deal with all three.”
The document provides AI-specific guidance for every component of the Cybersecurity Framework, spanning areas such as intrusion detection, vulnerability management, supply chain security, and incident response. Rather than replacing existing controls, the profile shows how AI considerations should be layered onto established cybersecurity practices.
In announcing the release, NIST said the profile is intended to help organizations “understand, examine and address the cybersecurity concerns related to AI and thoughtfully integrate AI into their cybersecurity strategies.”
The draft was developed with extensive input from industry, academia, and government. More than 6,500 contributors submitted ideas on how AI risks and use cases should align with the CSF. NIST is now soliciting public comments through January 30 and plans to host a virtual workshop on January 14 to gather additional feedback.
The AI-focused CSF profile is the latest addition to NIST’s growing body of AI governance and security guidance. In recent years, the agency has released an AI Risk Management Framework (2023), a generative AI profile for that framework (2024), and a separate publication in August aimed at securing AI systems using NIST’s existing security controls catalog.
Together, these efforts reflect NIST’s expanding role at the center of U.S. AI governance—an assignment reinforced across multiple administrations. Former President Joe Biden directed the agency to develop standards for AI security testing and synthetic content, while President Donald Trump has rolled back some directives and issued new ones, including instructing NIST to help federal agencies evaluate their AI models.
For organizations already aligned with the Cybersecurity Framework, the new AI profile offers a practical bridge between traditional cyber risk management and the rapidly evolving realities of AI deployment—without requiring a wholesale rethink of existing security programs.
