Publish date: Apr 11, 2023
Duration: 29 min
Difficulty
Case details
"You build it, you run it!" has set the tone for DevOps and tied the ownership of code to its deployment. Inspired by frictionless deployments, CI/CD extends this claim and demands that every commit produce releasable software. For a software developer, the stakes are clear: if the code breaks, you have to fix it. But what does this mean for Machine Learning (ML) practitioners, such as data scientists or ML engineers, who want their latest research progress reliably represented in production systems? And what are the organisational implications for managers who require 100% uptime, the highest quality standards, and quick turnaround on new feature development and problem resolution? Integrating an ML model into a production environment brings a host of challenges, and many practitioners are unfamiliar with integrating automated feedback loops that signal, on each commit, whether the software is functioning reliably. This work contributes a practitioner's perspective on how to improve CI/CD pipelines for Machine Learning applications.
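One common form such an automated feedback loop takes is a quality gate that runs on every commit and blocks the release when model quality regresses. The sketch below is a minimal, hypothetical example: the names (`evaluate_model`, `ci_quality_gate`, `ACCURACY_FLOOR`) and the threshold value are illustrative assumptions, not part of the talk itself.

```python
# Hypothetical CI quality gate: fail the pipeline when model quality
# drops below an agreed release threshold. All names and values here
# are illustrative assumptions.

ACCURACY_FLOOR = 0.90  # assumed minimum accuracy for a releasable commit


def evaluate_model(predictions, labels):
    """Fraction of correct predictions -- stand-in for a real evaluation step."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)


def ci_quality_gate(predictions, labels):
    """Return True if this commit may be released, False otherwise."""
    accuracy = evaluate_model(predictions, labels)
    print(f"accuracy={accuracy:.2f}, floor={ACCURACY_FLOOR}")
    return accuracy >= ACCURACY_FLOOR


if __name__ == "__main__":
    # Toy predictions from the candidate model vs. ground-truth labels
    preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
    truth = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
    if not ci_quality_gate(preds, truth):
        raise SystemExit("Quality gate failed: blocking this release")
```

In a real pipeline, such a script would run as one stage of the CI job, with its non-zero exit code stopping the deployment; the evaluation step would load the trained model and a held-out dataset instead of toy lists.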