Jason is dedicated to advancing the state of the art in secure and robust AI. He holds a bachelor's degree in computer science from San Diego State University and focuses on ensuring the trust, security, privacy, and robustness of AI/ML models while mitigating bias. Jason led the development of a commercial solution for detecting and repairing vulnerabilities in deep learning systems, and he has co-authored multiple patents related to the cybersecurity of AI/ML systems, embedded devices, supply chains, and more. His passion for improving the field drives him to push the boundaries of what is possible and make a meaningful impact in AI and cybersecurity.

Presentations


Beyond Static Analysis: Applying Symbolic Execution to Embedded Linux

Static analysis tools are fast, scalable, and widely used in modern software workflows, but they struggle to reason about runtime behavior in complex embedded systems. This talk shows how symbolic execution can complement static analysis by exploring deeper execution paths and uncovering subtle bugs that static tools often miss. We will cover the core concepts, key challenges such as path explosion, practical mitigation strategies, and real-world case studies involving embedded Linux applications, with a brief workflow sketch below.
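As a taste of that workflow, here is a minimal sketch using angr, an open-source symbolic execution framework for binaries. The binary name ./firmware_app and the output markers are hypothetical placeholders, not artifacts from the talk.

# Minimal symbolic execution sketch with angr (pip install angr).
# The binary path and the stdout markers below are hypothetical.
import angr

# Load the target binary; skipping shared libraries keeps the state
# space smaller, which helps limit path explosion.
proj = angr.Project("./firmware_app", auto_load_libs=False)

# Begin exploration from the program entry point.
state = proj.factory.entry_state()
simgr = proj.factory.simulation_manager(state)

# Search for a path that prints the success marker to stdout,
# pruning paths that reach the failure marker.
simgr.explore(
    find=lambda s: b"ACCESS GRANTED" in s.posix.dumps(1),
    avoid=lambda s: b"ACCESS DENIED" in s.posix.dumps(1),
)

if simgr.found:
    # Concretize the symbolic stdin that drives execution down the
    # discovered path, i.e. an input that triggers the behavior.
    print(simgr.found[0].posix.dumps(0))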

See Presentation

What’s in the Model? Building Trust with AIBOMs

The open-source AI ecosystem is growing fast, with thousands of pre-trained and fine-tuned models readily available for reuse. This accessibility also introduces inherited risks, including data poisoning, backdoors, and model tampering that can propagate silently through the AI supply chain. This talk explores how Artificial Intelligence Bills of Materials (AIBOMs) can provide visibility, accountability, and better security practices for open-source AI, helping developers and organizations trust what they build on.
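To make the idea concrete, the sketch below assembles a minimal AIBOM record in Python, loosely modeled on CycloneDX's machine-learning-model component type. The model name, digest placeholder, and URL are illustrative assumptions, not part of any real inventory.

# Illustrative AIBOM fragment, loosely following CycloneDX conventions.
# Model name, digest, and URL below are hypothetical placeholders.
import json

aibom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {
            "type": "machine-learning-model",
            "name": "sentiment-classifier",  # hypothetical model
            "version": "2.1.0",
            "hashes": [
                # Pinning the artifact digest lets downstream consumers
                # detect silent tampering with the weights.
                {"alg": "SHA-256", "content": "<model-file-digest>"}
            ],
            "externalReferences": [
                # Provenance: where the distributed weights came from.
                {"type": "distribution",
                 "url": "https://example.org/models/sentiment-classifier"}
            ],
        }
    ],
}

print(json.dumps(aibom, indent=2))

Recording the digest and provenance for every model a project pulls in is what lets an organization answer "what exactly is in this model, and where did it come from?" when a supply-chain issue surfaces.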

See Presentation