Open-source AI is now the foundation of modern AI development. Models such as BERT, GPT-2, Mistral, and DeepSeek R1 have accelerated progress by making high-quality pre-trained models widely accessible. Organizations rely on these models and their derivatives to save time and cost, but this reuse carries invisible, inherited risks: a model may have been trained on poisoned data, may harbor embedded backdoors, or may have been maliciously modified, often with no clear documentation of its provenance or lineage.

This session focuses on the AI supply chain problem and the role of Artificial Intelligence Bills of Materials (AIBOMs) in mitigating these risks. Modeled on SBOMs for traditional software, an AIBOM records critical metadata about a model's origin, training history, licensing, and known issues, helping practitioners evaluate the trustworthiness of the models they reuse, fine-tune, or redistribute.
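
To make the idea concrete, here is a minimal sketch of the kind of record an AIBOM might hold, written as a Python dataclass. The class name `AIBOMEntry` and its fields are illustrative assumptions, not fields from any published AIBOM specification; they simply mirror the metadata categories described above.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative sketch only: the class name and fields below are assumptions,
# not taken from a specific AIBOM standard; they mirror the categories in the
# text above (origin, training history, licensing, known issues).

@dataclass
class AIBOMEntry:
    model_name: str                # published model identifier
    version: str                   # release or checkpoint version
    upstream_model: Optional[str]  # base model this one was fine-tuned from, if any
    training_datasets: List[str]   # datasets (or dataset hashes) used in training
    license: str                   # license under which the weights are distributed
    artifact_sha256: str           # hash of the released weights, for integrity checks
    known_issues: List[str] = field(default_factory=list)  # reported backdoors, biases, advisories

# A hypothetical entry for a fine-tuned sentiment model.
example_entry = AIBOMEntry(
    model_name="example-org/sentiment-classifier",
    version="1.2.0",
    upstream_model="bert-base-uncased",
    training_datasets=["imdb-reviews (sha256: placeholder)"],
    license="Apache-2.0",
    artifact_sha256="placeholder-hash",
)
```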

Topics include:

1. Why model reuse is both powerful and dangerous, and how inherited flaws propagate across model generations.
2. Common transferable threats, including backdoors, dataset poisoning, evasion attacks, and model artifact tampering.
3. Real-world case studies and research, including BadNets, BAGM, and deceptive language model training, showing how these threats survive transfer learning.
4. Experimental results highlighting how even small amounts of tainted data can compromise downstream systems.
5. Practical assessment and mitigation strategies, such as lineage tracking, anomaly detection, secure fine-tuning workflows, and transparency practices for model repositories (a basic artifact-integrity check is sketched after this list).
6. How AIBOMs provide a structured way to capture model provenance and risk information, making open-source AI safer and more trustworthy.
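
As a minimal sketch of one mitigation from the list above, the snippet below recomputes the SHA-256 hash of a downloaded weights file and compares it with the hash recorded in the model's AIBOM before the model is loaded. The function names, file path, and hash value are hypothetical; a real workflow would pair this with the lineage tracking and anomaly detection practices mentioned above.

```python
import hashlib
from pathlib import Path

def sha256_of_file(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large weight files need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(weights_path: str, expected_sha256: str) -> None:
    """Refuse to proceed if the downloaded weights don't match the hash recorded in the AIBOM."""
    actual = sha256_of_file(Path(weights_path))
    if actual != expected_sha256:
        raise RuntimeError(
            f"Model artifact {weights_path} failed integrity check: "
            f"expected {expected_sha256}, got {actual}"
        )

# Usage (hypothetical path and hash):
# verify_artifact("models/sentiment-classifier.safetensors", "placeholder-hash")
```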

Attendees will gain a clear understanding of the hidden risks in open-source AI reuse, how adversarial behaviors persist across models, and how AIBOMs can strengthen trust and transparency in the AI supply chain. The goal is to give practitioners actionable tools to evaluate and secure the models they depend on.