AIBOM Documentation

AIBOM (AI Bill of Materials) is a standards-first, CI-native generator for AI applications. It performs static analysis to detect AI/ML components and produces structured inventory documents with audit evidence bundling, drift detection, and heuristic risk overlays.

Quickstart

Get started with AIBOM in three simple steps:

bash
# 1. Clone the repository
git clone https://github.com/akumar0205/AIBOM.git
cd AIBOM

# 2. Install AIBOM
pip install -e .

# 3. Generate your first AIBOM
aibom generate . --output AI_BOM.json

What is AIBOM?

AIBOM is an AI inventory and risk analyzer: it scans your codebase to discover AI models, agents, prompts, and tools, and assesses the security risks they introduce. Traditional SBOM tools track package dependencies but miss critical AI-specific components such as:

  • Large Language Models (LLMs) and their providers
  • AI agents and their configurations
  • Prompt templates and system prompts
  • Vector stores and embedding models
  • AI tools and external APIs

AIBOM detects these components across Python, JavaScript/TypeScript, Java, Go, and .NET codebases, providing complete visibility into your AI supply chain.
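The core idea behind this kind of static detection can be sketched in a few lines. This is an illustrative sketch only, not AIBOM's actual detector: the signature table, pattern set, and component-type labels below are hypothetical.

```python
# Sketch of import-based heuristic detection (signatures are hypothetical).
import re

# Hypothetical signature table mapping import patterns to component types.
SIGNATURES = {
    r"\bfrom\s+openai\b|\bimport\s+openai\b": ("model-provider", "OpenAI"),
    r"\bfrom\s+anthropic\b|\bimport\s+anthropic\b": ("model-provider", "Anthropic"),
    r"\bfrom\s+langchain\b": ("agent-framework", "LangChain"),
    r"\bfrom\s+chromadb\b|\bimport\s+chromadb\b": ("vector-store", "Chroma"),
}

def detect_components(source: str) -> list[dict]:
    """Return one inventory entry per matched signature."""
    found = []
    for pattern, (kind, name) in SIGNATURES.items():
        if re.search(pattern, source):
            found.append({"type": kind, "name": name})
    return found

sample = "import openai\nfrom chromadb import Client\n"
print(detect_components(sample))
```

A real scanner would parse ASTs per language rather than grep source text, but the inventory-entry output shape is the same idea.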

Info

AIBOM aligns with the OWASP LLM Top 10 risk framework for AI security.

Installation

From Source

bash
git clone https://github.com/akumar0205/AIBOM.git
cd AIBOM
pip install -e .

Docker

bash
docker build -f deploy/Dockerfile -t aibom .
docker run --rm -v $(pwd):/workspace aibom generate /workspace -o /workspace/aibom.json

CLI Reference

generate

Generate an AIBOM document from your codebase.

bash
aibom generate [TARGET] [OPTIONS]

Options

  • -o, --output - Output file path (default: AI_BOM.json)
  • --include-prompts - Include prompt content (requires acknowledgment)
  • --acknowledge-prompt-exposure-risk - Acknowledge risk of exposing prompts
  • --include-runtime-manifests - Include runtime dependency manifests
  • --redaction-policy - Evidence redaction policy (strict/default/off)
  • --audit-mode - Enable audit mode with full evidence collection
  • --bundle-out - Create evidence bundle at specified path
  • --risk-policy - Path to custom risk policy file
  • --fail-on-unsupported-threshold - Fail if unsupported artifacts exceed threshold

Examples

bash
# Basic generation
aibom generate . -o AI_BOM.json

# Audit mode with evidence bundle
aibom generate . --audit-mode --bundle-out evidence.zip

# Include runtime manifests
aibom generate . --include-runtime-manifests

# Custom risk policy
aibom generate . --risk-policy policy.json

export

Export AIBOM to standard formats (SPDX, CycloneDX, SARIF, VEX).

bash
aibom export --input AI_BOM.json --format spdx-json -o SPDX.json

Supported Formats

  • spdx-json - SPDX 2.3 JSON format
  • cyclonedx-json - CycloneDX 1.5 JSON format
  • sarif-json - SARIF 2.1.0 format for security tools
  • vex-json - OpenVEX format for vulnerability tracking
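The gist of exporting is mapping inventory entries onto a target schema. A minimal sketch of the idea for CycloneDX, not AIBOM's actual converter (the field mapping here is illustrative; CycloneDX 1.5 does define a machine-learning-model component type):

```python
# Map inventory entries into a minimal CycloneDX 1.5 skeleton (illustrative).
import json

def to_cyclonedx(components: list[dict]) -> dict:
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",
        "components": [
            {"type": "machine-learning-model", "name": c["name"]}
            for c in components
        ],
    }

doc = to_cyclonedx([{"name": "ChatOpenAI"}])
print(json.dumps(doc, indent=2))
```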

validate

Validate an AIBOM document against the JSON schema.

bash
aibom validate AI_BOM.json

diff

Compare two AIBOM documents and detect changes.

bash
aibom diff old.json new.json --fail-on new-model,new-tool,new-external-provider
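
The drift check behind diff, reduced to its essence: treat each inventory as a set of (type, name) pairs and report what was added. This is a conceptual sketch, not AIBOM's diff implementation.

```python
# Detect components present in the current inventory but not the baseline.
def added_components(baseline: list[dict], current: list[dict]) -> set[tuple]:
    key = lambda c: (c["type"], c["name"])
    return {key(c) for c in current} - {key(c) for c in baseline}

old = [{"type": "model", "name": "ChatOpenAI"}]
new = [{"type": "model", "name": "ChatOpenAI"},
       {"type": "tool", "name": "web-search"}]
print(added_components(old, new))  # {('tool', 'web-search')}
```

A --fail-on gate is then just a non-empty intersection between the added kinds and the kinds you listed.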

bundle

Create an evidence bundle with AIBOM, SPDX, and optional diff.

bash
aibom bundle --input AI_BOM.json --out evidence.zip --baseline baseline.json

attest

Sign and verify evidence bundles with X.509 certificates.

bash
# Sign a bundle
aibom attest --bundle evidence.zip --signing-key key.pem --signing-cert cert.pem

# Verify a bundle
aibom attest --bundle evidence.zip --signature evidence.zip.sig --signing-cert cert.pem --verify
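
What a signature ultimately attests to is a digest over the exact bundle bytes; the private key signs that digest. A miniature sketch of the digest step only, assuming nothing about AIBOM's actual signature format:

```python
# Compute the content digest a signer would attest to (sketch).
import hashlib

def bundle_digest(data: bytes) -> str:
    """SHA-256 hex digest of the bundle bytes."""
    return hashlib.sha256(data).hexdigest()

print(bundle_digest(b"evidence-bundle-bytes"))
```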

risk

Show risk findings from an AIBOM document.

bash
aibom risk --input AI_BOM.json

# With custom risk policy
aibom risk --input AI_BOM.json --risk-policy policy.json

periodic-scan

Schedule recurring scans with trend analysis.

bash
aibom periodic-scan . --output periodic_scan.json --interval daily

# With history window
aibom periodic-scan . --history-window 10 --interval daily
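
Trend analysis over a history window can be pictured simply: given a metric (say, component count) from the last N scans, report the net change across the window. A toy sketch, not the tool's actual trend logic:

```python
# Net change of a metric across the most recent `window` scans.
def trend(counts: list[int], window: int) -> int:
    recent = counts[-window:]
    return recent[-1] - recent[0]

history = [3, 3, 4, 4, 6]  # component counts from five scans
print(trend(history, window=3))  # 6 - 4 = 2
```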

Risk Analysis

AIBOM includes built-in risk analysis aligned with the OWASP LLM Top 10. The following risk rules are included by default:

Built-in Risk Rules

  • Third-Party Provider (LLM07) - Detects external model providers such as OpenAI and Anthropic
  • Exfiltration Surface (LLM06) - Identifies tools that may leak sensitive data
  • Prompt Injection Surface (LLM01) - Flags prompt templates that may be vulnerable
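
A rule of this kind boils down to a predicate over inventory entries that emits findings. A sketch of the third-party-provider check; the provider list, severity label, and finding shape here are illustrative, not AIBOM's internals:

```python
# Flag components backed by an external model provider (illustrative rule).
EXTERNAL_PROVIDERS = {"openai", "anthropic"}

def third_party_provider_findings(components: list[dict]) -> list[dict]:
    return [
        {"rule": "third-party-provider", "owasp": "LLM07",
         "severity": "medium", "component": c["name"]}
        for c in components
        if c.get("provider", "").lower() in EXTERNAL_PROVIDERS
    ]

comps = [{"name": "ChatOpenAI", "provider": "OpenAI"},
         {"name": "local-llama", "provider": "self-hosted"}]
print(third_party_provider_findings(comps))
```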

Custom Risk Policies

You can define custom risk policies in JSON or YAML format:

json
{
  "policy_id": "org-risk-rules",
  "version": "2026.03",
  "rule_overrides": {
    "third-party-provider": {
      "severity": "high",
      "threshold": 1,
      "allowlist": [
        {
          "entity_type": "model",
          "name": "ChatOpenAI",
          "source_file": "app.py",
          "reason": "approved-external-provider"
        }
      ]
    }
  }
}
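
The same policy expressed in YAML:

```yaml
policy_id: org-risk-rules
version: "2026.03"
rule_overrides:
  third-party-provider:
    severity: high
    threshold: 1
    allowlist:
      - entity_type: model
        name: ChatOpenAI
        source_file: app.py
        reason: approved-external-provider
```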

CI/CD Integration

GitHub Actions

yaml
name: AIBOM Security Check
on: [pull_request]

jobs:
  aibom:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install aibom
      - run: aibom generate . -o new_aibom.json
      - run: |
          aibom diff .aibom/baseline.json new_aibom.json \
            --fail-on new-model,new-tool,new-external-provider

Drift Detection

Use aibom diff to detect changes between AIBOM versions and gate your CI/CD pipeline:

bash
aibom diff baseline.json new.json --fail-on new-model,new-tool,new-external-provider

Important

Store your baseline AIBOM in version control to track changes over time.