Generate an AI Bill of Materials for your AI codebase. Automatically detect models, agents, prompts, tools, and APIs across Python, JavaScript/TypeScript, Java, Go, and .NET with built-in OWASP LLM risk analysis.
git clone https://github.com/akumar0205/AIBOM.git && cd AIBOM
pip install -e . && aibom generate .
AIBOM finds AI components in your code and maps them to a structured inventory with OWASP LLM risk analysis.
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(
model="gpt-4",
temperature=0.7
)
import OpenAI from "openai";
const client = new OpenAI();
const completion = await client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Hello" }],
});
import dev.langchain4j.model.openai.OpenAiChatModel;
OpenAiChatModel model = OpenAiChatModel.builder()
.modelName("gpt-4")
.build();
import "github.com/openai/openai-go"
client := openai.NewClient()
resp, err := client.Chat.Completions.New(
ctx,
openai.ChatCompletionNewParams{
Model: "gpt-4",
},
)
from langchain.agents import initialize_agent
agent = initialize_agent(
tools,
llm,
agent="zero-shot-react-description"
)
from langchain.tools import Tool
search = Tool(
    name="SerpAPI",
    func=search_run,
    description="Search the web via SerpAPI"
)
AIBOM detects AI components across your entire polyglot codebase with specialized parsers for each language.
Python: Full AST parsing for LangChain, OpenAI, Anthropic, and more
JavaScript/TypeScript: Pattern-based detection for the OpenAI SDK and LangChain.js
Java: Detect LangChain4j, Spring AI, and OpenAI integrations
Go: Find OpenAI and Anthropic SDK usage in Go applications
.NET: Scan C# projects for Semantic Kernel and AI integrations
AIBOM generates a structured JSON document mapping all AI components in your codebase with risk findings and provenance tracking.
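The exact output schema is defined by the tool itself; the sketch below assumes a hypothetical top-level `components` array (field names are illustrative, not AIBOM's documented schema) just to show how the inventory can be consumed programmatically:

```python
import json

# Hypothetical AIBOM output; real field names may differ.
aibom_doc = """
{
  "components": [
    {"type": "model", "name": "gpt-4", "provider": "openai"},
    {"type": "tool",  "name": "SerpAPI", "provider": "serpapi"}
  ]
}
"""

inventory = json.loads(aibom_doc)
# List every model the scan found.
models = [c["name"] for c in inventory["components"] if c["type"] == "model"]
print(models)  # ['gpt-4']
```

Because the output is plain JSON, it slots directly into existing compliance and reporting pipelines.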
Beyond simple detection, AIBOM provides enterprise-grade features for AI supply chain security.
Built-in heuristics aligned with OWASP LLM Top 10. Detect third-party providers, exfiltration surfaces, and prompt injection risks.
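As an illustration only (this is not AIBOM's actual implementation), a third-party-provider heuristic of this kind can be as simple as matching import patterns against a provider map:

```python
import re

# Illustrative heuristic: map import patterns to third-party AI providers.
PROVIDER_PATTERNS = {
    "openai": re.compile(r"\bimport\s+openai\b|\bfrom\s+openai\b"),
    "anthropic": re.compile(r"\bimport\s+anthropic\b|\bfrom\s+anthropic\b"),
}

def detect_providers(source: str) -> list[str]:
    """Return third-party AI providers referenced in a source file."""
    return [name for name, pat in PROVIDER_PATTERNS.items() if pat.search(source)]

print(detect_providers("from openai import OpenAI"))  # ['openai']
```

Real detectors layer AST analysis and framework-specific rules on top of patterns like these.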
Detect AI components in Python, Jupyter notebooks, JavaScript, TypeScript, Java, Go, and .NET codebases.
Sign evidence bundles with X.509 certificates. Verify provenance, certificate chains, and enforce signer allowlists.
Compare AIBOM versions to detect new models, tools, or external providers. Gate CI/CD pipelines on unauthorized changes.
Define organization-specific risk rules with allowlists, thresholds, and severity overrides in JSON or YAML format.
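A policy file of this shape might look like the sketch below; the key names are illustrative, not AIBOM's documented schema, so consult the repository for the actual format:

```yaml
# Hypothetical policy file: keys are illustrative, not AIBOM's documented schema.
allowlist:
  providers:
    - openai
    - anthropic
thresholds:
  max_external_providers: 3
severity_overrides:
  prompt-injection: high
```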
Schedule recurring scans with trend analysis. Track novel components over time and maintain historical snapshots.
AI systems contain many hidden dependencies that traditional tooling cannot see.
Models, prompts, tools, datasets, and APIs form complex dependency graphs that are invisible to traditional SBOM tools.
Prompt injection, model poisoning, and data leakage require visibility into how AI components interact with your systems.
Organizations need to track AI usage for compliance, risk management, and responsible AI practices.
Third-party models and APIs introduce supply chain risks that must be audited and monitored continuously.
Three simple steps to complete AI supply chain visibility.
Get the code from GitHub and install with pip. It's just a few commands to get started.
git clone https://github.com/akumar0205/AIBOM.git && cd AIBOM && pip install -e .
Run AIBOM against your project. It analyzes Python, TypeScript, Java, Go, and .NET files to detect AI components.
aibom generate . --output AI_BOM.json
Get a structured JSON document with risk findings. Export to SPDX, CycloneDX, SARIF, or VEX formats for your compliance tools.
aibom export --input AI_BOM.json --format spdx-json -o spdx.json
Clone the repo and scan your first AI project in under a minute.
git clone https://github.com/akumar0205/AIBOM.git
cd AIBOM && pip install -e .
aibom generate .
# Create a signed evidence bundle
aibom generate . --audit-mode --bundle-out evidence.zip
# Detect drift between versions
aibom diff baseline.json new.json --fail-on new-model
# Schedule recurring scans
aibom periodic-scan . --interval daily
AIBOM is built by and for the AI security community. We're looking for contributors to help us support more languages, frameworks, and risk detection rules. Whether you're an AI security researcher, ML engineer, or developer — your contributions are welcome!
Build support for new AI frameworks and languages
Implement new OWASP LLM-aligned risk detections
Add new SBOM and compliance export formats
Help us improve by reporting issues and edge cases
AIBOM is MIT licensed and free to use. Join our growing community of AI security practitioners.