Learn the value of leveraging open source data service technologies to drive innovation.

Ever wonder how to set up a complete end-to-end data science pipeline, from interactive notebooks to distributed training, CI/CD automation, and finally serving and monitoring the trained models? In this webinar, we will build a complete deep learning pipeline covering exploratory analysis, training, model storage, model serving, and monitoring.

In this webinar, we look at what it takes to build a complete deep learning pipeline and answer questions such as:

  • How to enable data scientists to develop their models exploratively without having to worry about the underlying infrastructure
  • How to easily deploy distributed deep learning frameworks on any public or private infrastructure