The National Institute of Standards and Technology wants public feedback on a plan to develop guidance for how companies can implement various types of artificial intelligence systems in a secure manner.
NIST on Thursday released a concept paper about creating control overlays for securing AI systems based on the agency’s widely used SP 800-53 framework. The overlays are designed to help ensure that companies implement AI in a way that maintains the integrity and confidentiality of the technology and the data it uses in a series of different test cases.
The agency also created a Slack channel to collect community feedback on the development of the overlays.
“The advances and potential use cases for adopting artificial intelligence (AI) technologies brings both new opportunities and new cybersecurity risks,” the NIST paper said. “While modern AI systems are predominantly software, they introduce different security challenges than traditional software.”
The project is currently based on five use cases:
- Adapting and Using Generative AI – Assistant/Large Language Model
- Using and Fine Tuning Predictive AI
- Using AI Agent Systems – Single Agent
- Using AI Agent Systems – Multi Agent
- Security Controls for AI Developers
The rapid acceleration of AI use in corporate environments has created opportunities for companies to improve workplace productivity, but it has also prompted serious concerns about whether the technology can be implemented securely.
Researchers have identified multiple ways in which malicious actors could take advantage of AI agents and steal or corrupt data. During the recent Black Hat conference in Las Vegas, researchers from Zenity Labs demonstrated how hackers could gain control of top AI agents and weaponize them for attacks such as manipulating critical workflows.
AI can also be a tool for offense. In July, researchers at Carnegie Mellon revealed that large language models (LLMs) are capable of autonomously launching cyberattacks.