James A Lang

Useful Resources for AI Security

AI Security Development

Principles for the security of machine learning

The NCSC’s detailed guidance on developing, deploying or operating a system with an ML component.

OWASP AI Exchange

This content is feeding into standards for the EU AI Act, ISO/IEC 27090 (AI security), the OWASP ML Top 10, the OWASP LLM Top 10, and OpenCRE, which the project intends to use to serve the AI Exchange content through the security chatbot OpenCRE-Chat.

OWASP ML Top 10

The primary aim of the OWASP Machine Learning Security Top 10 project is to deliver an overview of the top 10 security issues of machine learning systems.

OWASP LLM Top 10

The OWASP Top 10 for Large Language Model Applications project aims to educate developers, designers, architects, managers, and organizations about the potential security risks when deploying and managing Large Language Models (LLMs). 

Secure by Design - Shifting the Balance of Cybersecurity Risk: Principles and Approaches for Secure by Design Software

Co-authored by CISA, the NCSC and other agencies, this guidance describes how manufacturers of software systems, including AI, should take steps to factor security into the design stage of product development, and ship products that come secure out of the box.

AI Security Concerns in a Nutshell

Produced by the German Federal Office for Information Security (BSI), this document provides an introduction to possible attacks on machine learning systems and potential defences against those attacks.

Hiroshima Process International Guiding Principles for Organizations Developing Advanced AI Systems and Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems

These documents, produced as part of the G7 Hiroshima AI Process, provide guidance for organisations developing the most advanced AI systems, including the most advanced foundation models and generative AI systems, with the aim of promoting safe, secure, and trustworthy AI worldwide.

AI Verify

Singapore’s AI Governance Testing Framework and software toolkit, which validates the performance of AI systems against a set of internationally recognised principles through standardised tests.

Multilayer Framework for Good Cybersecurity Practices for AI (ENISA)

A framework to guide National Competent Authorities and AI stakeholders on the steps they need to follow to secure their AI systems, operations and processes.

ISO/IEC 5338: AI system life cycle processes (under review)

A set of processes and associated concepts for describing the life cycle of AI systems based on machine learning and heuristic systems.

AI Cloud Service Compliance Criteria Catalogue (AIC4)

BSI’s AI Cloud Service Compliance Criteria Catalogue provides AI-specific criteria, which enable evaluation of the security of an AI service across its lifecycle.

NIST IR 8269 (Draft) A Taxonomy and Terminology of Adversarial Machine Learning

A draft taxonomy of concepts and terminology for adversarial machine learning, intended to inform future standards and best practices for assessing and managing the security of machine learning systems.

An Overview of Catastrophic AI Risks (2023)

Produced by the Center for AI Safety, this document sets out areas of risk posed by AI.


Large Language Models: Opportunities and Risks for Industry and Authorities

A document produced by BSI for companies, authorities and developers who want to learn more about the opportunities and risks of developing, deploying and/or using LLMs.

Introducing Artificial Intelligence

A blog post from the Australian Cyber Security Centre that provides approachable guidance on artificial intelligence and how to engage with it securely.

NSA

Deploying AI Systems Securely: Best Practices for Deploying Secure and Resilient AI Systems

Deploying artificial intelligence (AI) systems securely requires careful setup and configuration that depends on the complexity of the AI system, the resources required (e.g. funding, technical expertise), and the infrastructure used (i.e. on premises, cloud, or hybrid).

Tools

Open-source projects that help users security-test AI models include the following (a brief usage sketch follows the list):

- Adversarial Robustness Toolbox (IBM)
- CleverHans (University of Toronto)
- TextAttack (University of Virginia)
- PromptBench (Microsoft)
- Counterfit (Microsoft)
- AI Verify (Infocomm Media Development Authority, Singapore)
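
To give a flavour of how these tools are used, here is a minimal sketch with IBM's Adversarial Robustness Toolbox (ART): it wraps a small scikit-learn classifier, uses the Fast Gradient Method to craft adversarial inputs, and compares accuracy on clean versus perturbed data. The dataset, model, and eps value are illustrative assumptions, not recommendations.

```python
# Minimal sketch of evasion testing with ART.
# Assumes: pip install adversarial-robustness-toolbox scikit-learn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Wrap the fitted model so ART can query its predictions and gradients.
classifier = SklearnClassifier(model=model)

# Fast Gradient Method: nudge each input in the direction that most
# increases the loss, within an eps-sized budget (eps here is illustrative).
attack = FastGradientMethod(estimator=classifier, eps=0.5)
X_adv = attack.generate(x=X)

print("clean accuracy:       %.2f" % (model.predict(X) == y).mean())
print("adversarial accuracy: %.2f" % (model.predict(X_adv) == y).mean())
```

The other tools broadly follow the same pattern at different layers of the stack: wrap the target model or endpoint, pick an attack recipe, then measure how behaviour degrades.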


Cybersecurity Frameworks

NCSC Cyber Assessment Framework (CAF)
The CAF provides guidance for organisations responsible for vitally important services and activities.

NIST CSF
The NIST Cybersecurity Framework (CSF) 2.0 provides guidance for managing cybersecurity risks; version 2.0 expands the framework’s scope beyond critical infrastructure to organisations of all sizes and sectors.

NIST SP 800-161 Rev. 1
Cybersecurity Supply Chain Risk Management Practices for Systems and Organizations

CISA’s Cybersecurity Performance Goals
A common set of protections that all critical infrastructure entities should implement to meaningfully reduce the likelihood and impact of known risks and adversary techniques.

MITRE's Supply Chain Security Framework
A framework for evaluating suppliers and service providers within the supply chain.

Risk management

NIST AI Risk Management Framework (AI RMF)

The AI RMF outlines how to manage the socio-technical risks that AI uniquely poses to individuals, organisations, and society.

ISO/IEC 27001: Information security, cybersecurity and privacy protection

This standard provides organisations with guidance on the establishment, implementation and maintenance of an information security management system.

ISO 31000: Risk management

An international standard that provides guidelines and principles for managing risk within organisations.

NCSC Risk Management Guidance

This guidance helps cyber security risk practitioners to better understand and manage the cyber security risks affecting their organisations.


Vendor Frameworks

Databricks

Introducing the Databricks AI Security Framework (DASF)

An actionable framework to manage AI security.

AWS

Artificial intelligence and machine learning (AI/ML) are transforming businesses. AI/ML has been a focus for Amazon for over 20 years, and many of the capabilities customers use with AWS, including its security services, are driven by AI/ML. As a result, customers can build securely on AWS without requiring their security or application development teams to have expertise in AI/ML.

Azure

For a comprehensive list of Azure service security recommendations, see the Azure AI services security baseline article.

Google

Google's Secure AI Framework (SAIF)

SAIF is a conceptual framework for securing AI systems, introduced by Google on the view that as AI innovation, especially in generative AI, moves forward, the industry needs security standards for building and deploying AI responsibly.

AI Security (AISec) Assessment

AISec Assessment

This audit tool provides insight into your organisation’s AI infrastructure cybersecurity maturity.