
NSA's New AI Security Best Practices Explained

By Abigail Moyal

A few weeks ago, the NSA released “Deploying AI Systems Securely,” a guide to best practices for deploying secure and resilient AI systems, and we read it so you don't have to.

The NSA continues to emphasize the importance of securing AI systems while acknowledging the risks and vulnerabilities that come with the territory. The document lays out strategies for the secure deployment, continuous protection, and ongoing operation and maintenance of AI systems. It is also worth noting that the release was a joint effort by the NSA, CISA, the FBI, and partner agencies from Australia, Canada, New Zealand, and the UK.

The key points are as follows:

Secure Deployment Environment: The first area the guidance covers is the need for a secure deployment environment. In practice, this means the environment an AI system is set up in is well-managed, carefully planned, and strongly secured to block unauthorized access and reduce security risks. The four main aspects of this are:

  • Governance Management: Ensure the team deploying the AI system coordinates with IT so the environment meets the organization's security standards. This includes clearly outlining roles, responsibilities, and expectations for setting up the system.
  • Robust Architecture: Set up security controls where the AI and IT systems meet to protect against weak spots. This includes enforcing strict security rules and zero-trust strategies; a small sketch of one such control follows this list. (Learn more about zero trust on our blog: https://heyiris.ai/about-iris/beyond-the-buzz-zero-trust-as-the-smart-cyber-playbook)
  • Harden Configurations: Use methods such as isolating systems, monitoring networks, and setting up firewalls, and keep hardware and software up to date so the environment protects data both at rest and in transit.
  • Protect Deployment Networks: Assume that breaches can happen and be prepared to detect and respond to them quickly and efficiently.
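
To make the idea of strict rules at the AI/IT boundary a little more concrete, here is a minimal Python sketch of a hardened TLS client configuration an AI service might use when talking to the rest of the IT environment. This is our own illustration, not an excerpt from the NSA guidance; the hostname and CA bundle path are hypothetical placeholders.

```python
import ssl
import urllib.request

# A minimal sketch of a hardened TLS client context for traffic between an
# AI service and the rest of the IT environment. The CA path and hostname
# below are hypothetical placeholders.
context = ssl.create_default_context(cafile="/etc/pki/internal-ca.pem")
context.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse legacy protocols
context.check_hostname = True                      # verify the server name
context.verify_mode = ssl.CERT_REQUIRED            # reject unverified certificates

# Every call to the (placeholder) model-serving endpoint goes over the hardened context.
with urllib.request.urlopen(
    "https://ai-inference.internal.example/healthz", context=context
) as response:
    print(response.status)
```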

Continuous Protection of AI Systems: Regularly check and secure the system, including verifying the integrity of data, code, and models, and testing for vulnerabilities.

  • Validate AI Systems Before and During Use: Verify that models and data have not been tampered with, keep sensitive artifacts in secure storage, and require authorization for every access to them (see the first sketch after this list).
  • Secure Exposed APIs: AI systems often expose Application Programming Interfaces (APIs) to users and other services, so implementing authentication and authorization on those interfaces is more essential than ever (a minimal example follows this list).
  • Actively Monitor Model Behavior: Regularly monitor activity by keeping a record of model inputs and outputs, as well as any unauthorized access attempts.
  • Protect Model Weights: Make it difficult for attackers to steal or tamper with the core model by encrypting the weights and tightly restricting access to wherever they are stored (see the sketch below).
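
Below is a small Python sketch of what validating a model before use and protecting its weights at rest could look like in practice. It is our own illustration of the two bullets above rather than anything prescribed in the NSA document; it assumes the third-party cryptography package, and the file name and expected digest are hypothetical placeholders.

```python
import hashlib
from pathlib import Path

from cryptography.fernet import Fernet  # third-party: pip install cryptography

WEIGHTS = Path("model_weights.bin")              # hypothetical weight file
EXPECTED_SHA256 = "replace-with-known-good-digest"

def verify_weights(path: Path, expected: str) -> bool:
    """Recompute the SHA-256 of the weight file and compare it to a trusted digest."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest == expected

def encrypt_at_rest(path: Path, key: bytes) -> Path:
    """Store the weights encrypted so a copied file is useless without the key."""
    encrypted = Fernet(key).encrypt(path.read_bytes())
    out = path.with_suffix(".enc")
    out.write_bytes(encrypted)
    return out

if __name__ == "__main__":
    if not verify_weights(WEIGHTS, EXPECTED_SHA256):
        raise SystemExit("Model weights failed the integrity check; refusing to load.")
    key = Fernet.generate_key()  # in practice, keep this in a secrets manager
    print("Encrypted copy written to", encrypt_at_rest(WEIGHTS, key))
```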

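And here is a minimal sketch of authenticating an exposed model API. We use FastAPI purely as an example framework (the NSA guidance does not name one); the header name and key store are hypothetical, and in a real deployment keys would come from a secrets manager or identity provider.

```python
from fastapi import Depends, FastAPI, HTTPException
from fastapi.security import APIKeyHeader

app = FastAPI()
api_key_header = APIKeyHeader(name="X-API-Key")

# Hypothetical key store; real keys belong in a secrets manager or IdP.
VALID_KEYS = {"example-key-for-team-a"}

def require_api_key(api_key: str = Depends(api_key_header)) -> str:
    """Reject any request that does not present a recognized API key."""
    if api_key not in VALID_KEYS:
        raise HTTPException(status_code=401, detail="Invalid or missing API key")
    return api_key

@app.post("/predict")
def predict(prompt: str, _key: str = Depends(require_api_key)) -> dict:
    # Model call omitted; the point is that unauthenticated requests never reach it.
    return {"answer": "..."}
```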
 

Secure Operation and Maintenance: Follow approved IT practices for day-to-day operation and maintenance to stay ahead of emerging threats.

  • Enforce Strict Access Controls: Implement role-based access control (RBAC), which allows or restricts access to parts of the system based on an employee's role in the organization. For example, the HR benefits team should not have access to the AI core code (a short sketch follows this list).
  • Ensure User Awareness and Training: Train the users, administrators, and developers of the system on secure practices to reduce risk and human error.
  • Conduct Audits and Penetration Testing: Bring in security experts to try to break into the system and fix any vulnerabilities they find before the AI is exposed to the public.
  • Implement Robust Logging and Monitoring: Have someone responsible for watching how the AI system behaves, log all attempts to break into or out of the system, and set up alerts for unusual activity (see the logging sketch after this list).
  • Update and Patch Regularly: Continuously apply updates and security patches, and verify that new versions still perform to the expected standard.
  • Prepare for High Availability and Disaster Recovery: Maintain backup systems in secure locations that can be activated quickly in the event of a breach or outage.
  • Plan Secure Delete Capabilities: Automatically and permanently delete sensitive data like models or keys after use, ensuring they cannot be recovered or accessed again.
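
As a quick illustration of RBAC (our own example, not the NSA's), the sketch below maps roles to permissions and denies anything not explicitly granted; the roles and permissions shown are hypothetical.

```python
# A minimal sketch of role-based access control (RBAC). The roles and
# permissions here are hypothetical examples, not a prescribed set.
ROLE_PERMISSIONS = {
    "ml_engineer": {"read_model_code", "deploy_model"},
    "hr_benefits": {"read_hr_reports"},
    "security_analyst": {"read_logs", "read_model_code"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Allow an action only if the user's role explicitly grants it."""
    return permission in ROLE_PERMISSIONS.get(role, set())

# The HR benefits team cannot touch the AI core code, per the example above.
assert not is_allowed("hr_benefits", "read_model_code")
assert is_allowed("ml_engineer", "read_model_code")
```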

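And here is a minimal sketch of the kind of logging and alerting described above. The failure threshold and the alert action are placeholders; a real deployment would forward these events to a SIEM or an on-call rotation rather than just writing a warning.

```python
import logging
from collections import Counter

# Write the audit trail to a file; the file name is a hypothetical placeholder.
logging.basicConfig(level=logging.INFO, filename="ai_system_audit.log")
log = logging.getLogger("ai-audit")

FAILED_ATTEMPTS: Counter = Counter()
ALERT_THRESHOLD = 5  # hypothetical threshold for "unusual activity"

def record_request(source_ip: str, authenticated: bool) -> None:
    """Log every access attempt and alert on repeated failures from one source."""
    log.info("request source=%s authenticated=%s", source_ip, authenticated)
    if not authenticated:
        FAILED_ATTEMPTS[source_ip] += 1
        if FAILED_ATTEMPTS[source_ip] >= ALERT_THRESHOLD:
            # Placeholder alert; in practice this would page an analyst or feed a SIEM.
            log.warning("ALERT: %s failed authentication %d times",
                        source_ip, FAILED_ATTEMPTS[source_ip])
```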
Every company deploying AI should follow these practices to use AI wisely and reduce the likelihood of a damaging breach. Stay tuned for the next update,

The IRIS team!



Sources

NSA. Deploying AI Systems Securely, Apr. 2024, https://media.defense.gov/2024/Apr/15/2003439257/-1/-1/0/CSI-DEPLOYING-AI-SYSTEMS-SECURELY.PDF.
