This repository is used to develop and manage the Security and Compliance Work Group's assets as well as those from its subgroups. This may include use cases, threat models, profiles, and other artifacts.
The LF AI & Data Security and Compliance Work Group is dedicated to formulating interconnected security use cases, threat models, and policies that can be leveraged to create a comprehensive security and compliance strategy for AI-enabled applications throughout their lifecycle. The work group will establish a framework that references and incorporates existing, relevant projects, standards, and technologies, enabling an automated, self-sustaining cycle in which effective governance fosters secure AI development, deployment, and operations, and in which AI-driven governance systems reduce risk and improve compliance in critical regulated environments.
Important
The Security & Compliance Work Group meets bi-weekly on Tuesdays @9am US Central (7am US Pacific, 14:00 UTC/GMT) using Zoom. Select the "Need an invite" link on our LF AI & Data calendar entry and join us!
The work group has 2 subgroups which have separate meetings you can sign up for:
- Use Cases and Threat Modeling subgroup:
  - Weekly meetings, Tuesdays @12pm US Eastern, 16:00 GMT
  - Slack: #security-use-cases-and-threat-modeling
- Risk and Compliance subgroup:
  - Bi-weekly meetings, Tuesdays @10am US Eastern, 14:00 GMT
  - Slack: #risk-and-compliance-subgroup
You will need to ensure you have accounts created with both the Linux Foundation (LF) and the LF AI & Data Foundation (LFAI):
The work group will use the LF meeting management platform for all calls and formal communications and requires an LF account to participate.
In addition, the LF AI & Data Foundation maintains a separate account that will be used by work group members for work group-specific communications and calendaring:
- Create a Linux Foundation account
  - https://docs.linuxfoundation.org/lfx/sso/create-an-account
  - Fill out your LF profile in the Individual Dashboard: openprofile.dev
  - Please "Connect your GitHub" using an email address associated with your GitHub account.
- Register for an LF AI & Data account
  - https://lists.lfaidata.foundation/register
  - Please use the same email address as your LF account.
Work group meetings will be held bi-weekly:
- 9am US Central, 7am US Pacific, 14:00 UTC/GMT
- The meeting day/time will be revisited via member poll for 2026.
The LF AI & Data Foundation allows for self-registration to meetings via the foundation's Zoom:
- Using the LF AI & Data Community Calendar: https://zoom-lfx.platform.linuxfoundation.org/meetings/lf-ai-foundation
- Find and click on the meeting that interests you, then click "Register" to sign up.


After registering, join the LF AI & Data Slack workspace and then the work group's project channel.
Initially, the work group will establish two subgroups to better divide and focus work on specific subject areas, each with its own home page:
Work group members are encouraged to join and contribute to these subgroups, each of which hosts its own meetings.
Each subgroup will provide updates on its activities as part of the work group's meeting agenda.
A high-level view of the activity areas in which the work group and its subgroups will explore and develop concrete assets:
The work group intends to collaborate with and reference work from other foundations and organizations including:
- OWASP Foundation
  - GenAI Security Project
  - Application Security Verification Standard (ASVS)
  - Software Component Verification Standard (SCVS)
- CycloneDX - Bill-of-Materials (BOM) standard and its work groups and profiles, including:
  - Machine Learning Transparency (MLBOM)
  - Threat Modeling (TMBOM)
- Linux Foundation and its projects, including:
  - Software Package Data Exchange (SPDX) - Bill-of-Materials (BOM) standard and its areas of interest
  - OpenSSF and its work groups and guidelines:
    - AI/ML Security work group
    - Supply-chain Levels for Software Artifacts (SLSA) - specification and its ability to measure Secure Software Development Framework (SSDF) compliance.
- NIST and its standards:
  - Open Security Controls Assessment Language (OSCAL) - security controls and profiles.
  - Secure Software Development Framework (SSDF) - secure software development practices.
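For readers unfamiliar with the BOM standards listed above, the following is a minimal, hypothetical sketch of a CycloneDX document declaring a machine-learning model component. The `machine-learning-model` component type and `modelCard` element come from the CycloneDX 1.5 specification; the model name, version, and task shown are illustrative assumptions, not work group assets:

```python
import json

# Hypothetical sketch: a minimal CycloneDX 1.5 BOM that declares a single
# machine-learning-model component with a small model card.
bom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "components": [
        {
            "type": "machine-learning-model",
            "name": "example-text-classifier",  # hypothetical model name
            "version": "1.0.0",
            "modelCard": {
                "modelParameters": {"task": "text-classification"}
            },
        }
    ],
}

# Serialize to JSON, the most common CycloneDX interchange format.
print(json.dumps(bom, indent=2))
```

A TMBOM profile would be expected to layer structured threat-model information onto this same BOM envelope, which is what makes the interconnected use-case/threat-model assets above machine-consumable.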
This section contains additional references to projects and resources that the work group might find useful:
- LF AI & Data public calendar - Zoom calendar for all meetings
- Projects:
  - Data Prep. Kit (DPK) - accelerates unstructured data preparation for LLM app developers
  - Docling - simplifies document processing, parsing diverse formats including advanced PDF understanding
  - BeeAI - empowers developers to discover, run, and compose AI agents from any framework
- OpenSSF Model Signing (OMS)
  - Specification: ossf/model-signing-spec
  - Tooling: sigstore/model-transparency
- OWASP AI Exchange - website with general AI threat matrix, controls and risks.
- OWASP Threat Model Library - the first open-source, structured, peer-reviewed threat modeling dataset.
- OWASP Threat Dragon - a modeling tool used to create threat model diagrams as part of a secure development lifecycle.
- Cloud Security Alliance (CSA)
  - "Agentic AI Threat Modeling Framework: MAESTRO" - overview of known threat model frameworks and their pros/cons.
- CNCF OSCAL Compass - a set of tools that enable the creation, validation, and governance of documentation artifacts for compliance needs.
- OpenSSF Gemara - a logical model to describe the categories of compliance activities, how they interact, and the schemas to enable automated interoperability between them.
- European Commission - EU Cybersecurity Policies
- Cybersecurity and Infrastructure Security Agency (CISA)
- MITRE
  - Common Weakness Enumeration (CWE) - a category system for hardware and software weaknesses and vulnerabilities.
    - As threat modeling aims to identify and address potential weaknesses, CWE provides a standard for categorizing actual weaknesses. Our work groups should look to ensure semantic consistency with it.
- Cloud Security Alliance (CSA) - "ISO 42001 Lessons learned"
- IBM Risk Atlas Nexus project on GitHub
  - Also hosted on Hugging Face: Risk Atlas Nexus
The work group and its subgroups adhere to the LF AI & Data Foundation's Code of Conduct (CoC).
All repository content is licensed under the Apache 2.0 license unless otherwise noted:
- Displayed logos are copyrighted and/or trademarked by their respective owners.