What is MLOps?
You’ll find conflicting definitions of MLOps: Is it a movement, a philosophy, a platform, or a job title? Most are either far too vague – a “philosophy” – or far too specific, referring to just one particular toolset.
Here’s our definition of MLOps:
MLOps is a collection of industry-accepted best practices to manage code, data, and models in your machine learning team.
This means MLOps should help your team with the following:
- Managing code: MLOps encourages standard software development best practices and supports continuous development and deployment.
- Following best practices: Guidelines ensure you move seamlessly from ideas, to experiments, to deploying reliable models in production.
- Managing data: A framework and workflow help to process, save, and track versions of your datasets.
- Efficient collaboration: Teams can share code, data, models, and experiments; run and understand each other’s work; and iterate on previous work.
- Managing models: You can easily train models, track experiment results, and deploy robust APIs.
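To make one of these points concrete – tracking versions of your datasets – here is a deliberately minimal, dependency-free Python sketch of the underlying idea: identify each dataset snapshot by a hash of its contents, so the same data always gets the same version ID. The function and file names are illustrative; dedicated tools do this for you with far more features.

```python
import hashlib
import json
from pathlib import Path


def snapshot_dataset(data_path: Path, registry_path: Path) -> str:
    """Record a dataset version keyed by the hash of its contents.

    Identical data always yields the same version ID, so re-running
    a pipeline on unchanged data creates no spurious new versions.
    """
    content = data_path.read_bytes()
    version = hashlib.sha256(content).hexdigest()[:12]

    # Load the existing version registry, or start a fresh one.
    registry = json.loads(registry_path.read_text()) if registry_path.exists() else {}
    registry[version] = {"source": str(data_path), "bytes": len(content)}
    registry_path.write_text(json.dumps(registry, indent=2))
    return version
```

Content-addressed versioning like this is what lets a team say “model X was trained on dataset version `3f2a…`” and reproduce that run later – the property a real MLOps workflow gives you out of the box.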
MLOps is a broad field, so we’ll take a high-level view of the landscape and then dive into topics you’ll encounter when adopting it.
The MLOps landscape
Deciding on the best tools to use and how to get started can be hard. Our Machine Learning Tools page offers an overview and simple explanations of the most useful tools.
These MLOps tools cover everything from data wrangling, visualisation, and task automation, to training and deployment. We’ve focused on open source options but with so many proprietary platforms available too, is it worth shelling out?
MLOps: Build vs buy
Numerous commercial platforms aim to make MLOps simpler and “build vs buy” is a question many teams ask. People often “buy” because they lack confidence in building their own tools or using open source options. Expensive proprietary platforms promise to simplify all the complexity.
But proprietary tools often fail to deliver on their promises and end up costing and limiting your team instead. The shortcuts they guarantee are often impossible, and your team will still need internal MLOps expertise to use them effectively.
This is why we’re strong proponents of open source tooling for MLOps. Free, open source tools are often the correct choice: They result in lower costs, more flexibility, more learning, and easier onboarding – in spite of what the proprietary platforms would have you believe.
If you trust your engineering team, we recommend building your own solution from existing open source components over trying to fit your unique needs into someone else’s proprietary platform.
But if you want to build your own, how do you get started without wasting months researching all the options? We looked for an open source MLOps blueprint but couldn’t find one, so we built one.
Now you can skip the months of research and engineering we did and set up an open source, production-focused MLOps framework in a few hours.
An open-source, ready-to-go MLOps architecture
If you want the power and flexibility of your own solution but the simplicity and speed of a managed proprietary solution, take a look at our Open MLOps architecture. It’s a set of Terraform scripts that sets up the same system we use internally on Kubernetes – and you can be up and running in under a day.
Here are the advantages of Open MLOps:
- Free, flexible, and open source: We built Open MLOps entirely using open source tooling. This means that if you have other needs, it’s easy to adapt: Simply swap out the components for other tools or your own custom solutions.
- Easy to start: Many MLOps tools have steep learning curves. We’ve written step-by-step guides so you can walk through an example project in a few hours, then start running your own.
- Scalable: Open MLOps runs on Kubernetes, so it’s easy to scale up or down, depending on your needs. If you’re running a huge workload, you can just add more compute power. If you’re on a budget, you can run it on a small cluster.
Open MLOps includes the following components:
- JupyterHub, a shared notebook environment for your team to collaborate.
- MLFlow to track your experiments and models.
- Prefect to manage your workflows and scheduled tasks.
- Seldon to productionize your models and turn them into APIs.
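To show how these components divide the work, here is a deliberately dependency-free Python sketch of the pattern they automate: a workflow of ordered tasks (Prefect’s job) whose runs log parameters and metrics to a tracking store (MLFlow’s job). Every name and number below is illustrative; the real tools add scheduling, retries, dashboards, and a model registry on top.

```python
import json
import time
from pathlib import Path


def log_run(store: Path, params: dict, metrics: dict) -> dict:
    """Toy experiment tracker: MLFlow's role, reduced to a JSON log of runs."""
    run = {
        "run_id": f"run-{int(time.time() * 1000)}",
        "params": params,
        "metrics": metrics,
    }
    runs = json.loads(store.read_text()) if store.exists() else []
    runs.append(run)
    store.write_text(json.dumps(runs, indent=2))
    return run


def training_flow(store: Path, learning_rate: float) -> dict:
    """Toy workflow: Prefect's role, reduced to calling tasks in order."""
    data = [1.0, 2.0, 3.0]                          # task 1: load data
    model = sum(data) / len(data) * learning_rate   # task 2: "train" a model
    error = abs(model - 2.0)                        # task 3: evaluate it
    return log_run(                                 # task 4: record the experiment
        store,
        params={"learning_rate": learning_rate},
        metrics={"error": error},
    )
```

In Open MLOps, Prefect owns the flow definition and its scheduling, MLFlow replaces the toy `log_run` with real experiment tracking and a model registry, JupyterHub is where the flow is developed, and Seldon serves the resulting model as an API.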
You can set up everything in your existing Kubernetes cluster, or run our simple setup script to create a new one using AWS EKS. Even if you use pre-built solutions, you’ll probably still need to build internal expertise in MLOps, so we’ve listed some good resources to get you started.
We’ve written many articles on MLOps. Here are our favourites, to help you at different stages of your MLOps journey.
Getting started with MLOps and Machine Learning
If you’re looking to adopt MLOps and want to learn more before you start building, you should start with the following:
- The Four Ways of Doing Machine Learning: Understand the different ways to adopt machine learning at a high level.
- MLOps for Research Teams: Understand why MLOps isn’t just for industry but is important for research teams too.
- Machine Learning Architecture Components: Understand the important pieces for most machine learning solutions.
- Open Source Software for MLOps: Understand why open source is usually better than proprietary solutions.
- Software Development vs Machine Learning Engineering: Understand why machine learning engineering has additional challenges that trip up even experienced software engineers.
- MLOps for Model Decay: Understand why you can’t “fire and forget” your models, even if they perform well.
- Why is a Model Registry Valuable?: Understand how and why it’s important to track your models.
- Why You Need a Model Serving Tool: Understand how to serve your models in a production environment.
Choosing tools to set up your own MLOps platform
Once you’ve chosen to adopt MLOps, you’ll need some specific tools and platforms. Given the breadth of options, this is probably the hardest part, so we’ve compared our favourites and cut through the marketing speak to make it easier for you.
- Choosing a Feature Store: Why you need a feature store and how to choose one.
- Why We Love Prefect as a Data and Workflow Platform: Why you should consider Prefect as your task scheduler and workflow tool.
- Kubeflow: Not Ready for Production?: The problems we found with Kubeflow before switching to Prefect.
- Comparing MLOps Platforms: Dataiku vs Alteryx vs SageMaker vs Databricks
- Comparing Dashboarding Solutions: Streamlit vs Dash vs Shiny vs Voila
- Comparing Workflow Platforms: Airflow vs Luigi vs Prefect vs MLflow vs Kubeflow
- Comparing Data Wrangling Tools: Pandas vs Dask vs Vaex vs Modin vs Ray
Setting up Open MLOps
Once you’ve chosen your tools, you’ll need to set up and configure them. If you want to emulate the setup we use internally at Data Revenue using Open MLOps, we’ve created some step-by-step guides to get JupyterHub, Prefect, Seldon, Dask, and MLFlow up and running quickly.
- Setting up Open MLOps: An Open Source Production Architecture for Machine Learning
- Using JupyterHub, Prefect, and MLFlow on Open MLOps
- Deploy Your Model as an API with MLFlow, Prefect, and Seldon
If you need help setting up your team with MLOps, feel free to reach out.