Scan your AI codebase and automatically map the models, agents, prompts, tools, datasets, and APIs inside it. Supports Python, Jupyter notebooks, and JavaScript/TypeScript. The Trivy for AI applications.
AIBOM finds AI components in your code and maps them to a structured inventory with OWASP LLM risk analysis.
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-4",
    temperature=0.7,
)
import OpenAI from "openai";

const client = new OpenAI();
const completion = await client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Hello" }],
});
from langchain.agents import initialize_agent

agent = initialize_agent(
    tools,
    llm,
    agent="zero-shot-react-description",
)
from langchain.tools import Tool

search = Tool(
    name="SerpAPI",
    func=search_run,
    description="Search the web via SerpAPI",
)
from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import FAISS

embeddings = OpenAIEmbeddings(
    model="text-embedding-3-small"
)
vectorstore = FAISS.from_documents(
    docs, embeddings
)
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant..."),
    ("human", "{input}"),
])
AIBOM generates a structured JSON document mapping all AI components in your codebase with risk findings and provenance tracking.
Beyond simple detection, AIBOM provides enterprise-grade features for AI supply chain security.
Built-in heuristics aligned with OWASP LLM Top 10. Detect third-party providers, exfiltration surfaces, and prompt injection risks.
Detect AI components in Python, Jupyter notebooks, JavaScript, and TypeScript codebases with framework-specific parsers.
Sign evidence bundles with X.509 certificates. Verify provenance, certificate chains, and enforce signer allowlists.
Compare AIBOM versions to detect new models, tools, or external providers. Gate CI/CD pipelines on unauthorized changes.
Define organization-specific risk rules with allowlists, thresholds, and severity overrides in JSON or YAML format.
Schedule recurring scans with trend analysis. Track novel components over time and maintain historical snapshots.
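To give a flavor of how risk heuristics like these can work, here is a toy illustration (not AIBOM's actual rule set): flag lines that interpolate likely-untrusted variables into prompt strings, a common prompt injection surface. The variable names matched are assumptions for the example.

```python
import re

# Toy heuristic: f-strings that splice variables named like user input
# into a prompt are a potential prompt-injection surface.
PROMPT_INJECTION_HINT = re.compile(r'f["\'].*\{[^}]*(user_input|query|request)[^}]*\}')

def scan_lines(lines):
    """Return (line_number, line) pairs that match the heuristic."""
    return [(i + 1, line.strip())
            for i, line in enumerate(lines)
            if PROMPT_INJECTION_HINT.search(line)]

code = [
    'prompt = f"Answer this: {user_input}"',
    'greeting = "hello"',
]
print(scan_lines(code))  # flags line 1 only
```

Real detectors work on parsed ASTs rather than regexes, but the principle is the same: pattern-match risky constructs and report them with their source location.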
AI systems contain many hidden dependencies that traditional tooling cannot see.
Models, prompts, tools, datasets, and APIs form complex dependency graphs that are invisible to traditional SBOM tools.
Prompt injection, model poisoning, and data leakage require visibility into how AI components interact with your systems.
Organizations need to track AI usage for compliance, risk management, and responsible AI practices.
Third-party models and APIs introduce supply chain risks that must be audited and monitored continuously.
Three simple steps to complete AI supply chain visibility.
AIBOM walks your codebase to find Python files, notebooks, JavaScript/TypeScript files, and configuration files, then parses each one into an AST for analysis.
Multiple specialized detectors identify models, agents, prompts, tools, datasets, and frameworks with precise source locations and provenance tracking.
Produces a structured JSON document with full inventory, OWASP LLM risk findings, and export formats like SPDX and CycloneDX.
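The detection step can be sketched with Python's built-in ast module. The class names and output shape below are simplified assumptions for illustration, not AIBOM's real detector registry or schema:

```python
import ast

# Simplified stand-in for a detector's known model classes (an assumption,
# not AIBOM's actual list).
MODEL_CLASSES = {"ChatOpenAI", "OpenAI", "ChatAnthropic"}

def find_model_calls(source):
    """Walk the AST and record instantiations of known model classes,
    extracting the `model` keyword argument and the source line."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            name = getattr(node.func, "id", None) or getattr(node.func, "attr", None)
            if name in MODEL_CLASSES:
                model = next((kw.value.value for kw in node.keywords
                              if kw.arg == "model"
                              and isinstance(kw.value, ast.Constant)), None)
                findings.append({"class": name, "model": model, "line": node.lineno})
    return findings

snippet = 'llm = ChatOpenAI(model="gpt-4", temperature=0.7)'
print(find_model_calls(snippet))
# → [{'class': 'ChatOpenAI', 'model': 'gpt-4', 'line': 1}]
```

Because detection works on the AST rather than raw text, it records precise source locations and survives formatting differences, which is what makes provenance tracking possible.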
Install AIBOM and scan your first AI project in minutes.
pip install aibom
aibom generate .
aibom export --format spdx-json
aibom generate . --audit-mode --bundle-out evidence.zip
Create signed evidence bundle
aibom diff baseline.json new.json --fail-on new-model
Detect drift between versions
aibom periodic-scan . --interval daily
Schedule recurring scans
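Conceptually, drift detection as performed by aibom diff reduces to a set difference over component identities. A minimal sketch, assuming a hypothetical inventory shape with a "components" list (not AIBOM's actual output schema):

```python
def new_components(baseline, current):
    """Return components in `current` that are absent from `baseline`,
    keyed by (type, name) identity."""
    seen = {(c["type"], c["name"]) for c in baseline["components"]}
    return [c for c in current["components"] if (c["type"], c["name"]) not in seen]

baseline = {"components": [{"type": "model", "name": "gpt-4"}]}
current = {"components": [{"type": "model", "name": "gpt-4"},
                          {"type": "tool", "name": "SerpAPI"}]}

drift = new_components(baseline, current)
print(drift)  # → [{'type': 'tool', 'name': 'SerpAPI'}]
# A CI gate could then fail the build when drift contains an unapproved component.
```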
AIBOM is built by and for the AI security community. We welcome contributions from AI security researchers, AI engineers, and LangChain developers.