Orchestrating and Deploying Machine Learning Platforms at Scale with Kubeflow

Topic:

The significant effort required to bring Machine Learning (ML) models into a usefully deployable form is one of the main obstacles preventing the Cambrian explosion and widespread adoption of ML and AI across industry verticals today. More specifically, the lack of a standardized approach for training and serving models at scale means it takes organizations far longer than expected to get ML models developed by Data Scientists into a deployable form that can run continuously and reliably in production. This is not only unacceptable from a project timeline standpoint, but it also inhibits the ability to iterate quickly and course-correct when ML models are not behaving as expected in the production data pipeline.

As Google Cloud partners, MavenCode specializes in building state-of-the-art data pipelines that accelerate the process of getting AI and ML into a production-ready state for our customers. We leverage the battle-tested approaches and frameworks provided by Google, and over the past few months we have invested heavily in the Kubeflow open-source project. This has allowed us to build a process around the Kubeflow platform for orchestrating and deploying ML and AI models at scale.

In this presentation, I will walk you through the process of bootstrapping a Kubernetes cluster in the cloud for training and serving models at scale with Kubeflow, and discuss the lessons we have learned while implementing these pipelines for the projects we have undertaken.
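To give a flavor of the kind of workflow the talk covers, the sketch below shows how a simple two-step train-then-serve workflow might be expressed with the Kubeflow Pipelines Python SDK (kfp). This is a minimal illustration, not the actual pipelines built at MavenCode; the pipeline name, container images, bucket paths, and arguments are placeholders, and the exact SDK surface varies between Kubeflow releases.

# A minimal Kubeflow Pipelines sketch (kfp v1-style SDK): two container steps,
# one for training and one for rolling out the trained model for serving.
# Image names, bucket paths, and arguments below are hypothetical placeholders.
import kfp
from kfp import dsl, compiler


@dsl.pipeline(
    name='train-and-serve',
    description='Train a model, then roll it out for serving.'
)
def train_and_serve_pipeline(epochs: int = 10):
    # Step 1: run the training job packaged as a container image.
    train = dsl.ContainerOp(
        name='train-model',
        image='gcr.io/my-project/trainer:latest',   # placeholder image
        arguments=['--epochs', epochs,
                   '--model-dir', 'gs://my-bucket/models'],
    )

    # Step 2: deploy the trained model for serving; runs only after training.
    serve = dsl.ContainerOp(
        name='deploy-model',
        image='gcr.io/my-project/deployer:latest',  # placeholder image
        arguments=['--model-dir', 'gs://my-bucket/models'],
    )
    serve.after(train)


if __name__ == '__main__':
    # Compile the pipeline into an archive that can be uploaded to the
    # Kubeflow Pipelines UI (or submitted programmatically with kfp.Client()).
    compiler.Compiler().compile(train_and_serve_pipeline, 'train_and_serve.yaml')

Once compiled, the archive is uploaded to the Pipelines UI running on the cluster, and from that point Kubeflow schedules each step as a pod on Kubernetes, which is where the cluster bootstrapping and scaling lessons discussed in the talk come into play.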

Room:
Ballroom B
Time:
Thursday, March 7, 2019 - 10:30 to 11:00