Agentic workflows are rapidly becoming the next major shift in software engineering: systems that can plan, act, verify, learn, and repeat—turning “prompting” into production-grade automation. In this talk, I’ll share a practical framework for how developers can adopt agentic workflows responsibly, and why they unlock a new level of leverage for building and operating complex systems.
I’ll ground the discussion in the realities of Linux and open source: massive dependency graphs, constant upstream change, and a security landscape where “patch faster” is not optional. Agents shine when the work is repetitive, evidence-driven, and verification-heavy—drafting changes, running tests, validating policy, and producing auditable artifacts that explain what changed and why it’s safe.
From there, I’ll connect agentic workflows to the future of Linux distributions. A world-class distro isn’t just a pile of packages; it’s a factory that continuously produces secure, compatible, and traceable outputs. Using Chainguard OS as an example of these design goals, I’ll explain how agentic build–scan–attest–validate loops can raise quality and security together while increasing delivery velocity.
A key insight is compounding improvement. As workflows accumulate telemetry and guardrails, they become more reliable; and as underlying models improve, the same pipelines can “jump a level” without being rewritten. Done right, this creates a flywheel where each run strengthens the system: faster remediation, fewer regressions, and shrinking exposure windows.
Attendees will leave with concrete patterns: where to start, what to automate first, how to keep humans accountable, and how to measure success (cycle time, toil, incidents, patch speed). The goal is simple: to help the Linux community build software systems that get better over time, secure by default and fast by design.



