Task orchestration tools and workflows
Recently there’s been an explosion of new tools for orchestrating task and data workflows (sometimes referred to as “MLOps”). The sheer number of these tools can make it hard to choose which ones to use and to understand how they overlap, so we decided to compare some of the most popular ones head to head.
Overall, Apache Airflow is both the most popular tool and the one with the broadest range of features, but Luigi is a similar tool that’s simpler to get started with. Argo is the one teams often turn to when they’re already using Kubernetes, while Kubeflow and MLFlow serve more niche requirements related to deploying machine learning models and tracking experiments.
Before we dive into a detailed comparison, it’s useful to understand some broader concepts related to task orchestration.
What is task orchestration and why is it useful?
Smaller teams usually start out by managing tasks manually – such as cleaning data, training machine learning models, tracking results, and deploying the models to a production server. As the size of the team and the solution grows, so does the number of repetitive steps. It also becomes more important that these tasks are executed reliably.
The ways these tasks depend on each other also become more complex. When you start out, you might have a pipeline of tasks that needs to be run once a week or once a month, in a specific order. As you grow, this pipeline becomes a network with dynamic branches: some tasks trigger others, and these may depend on several other tasks having run first.
This network can be modelled as a DAG – a Directed Acyclic Graph that represents each task and the dependencies between them.
Workflow orchestration tools allow you to define DAGs by specifying all of your tasks and how they depend on each other. The tool then executes these tasks on schedule, in the correct order, retrying any that fail before running the next ones. It also monitors the progress and notifies your team when failures happen.
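As a rough illustration of what these tools do under the hood (this is a hypothetical sketch, not any particular tool’s API), a tiny orchestrator could hold the DAG as a mapping from each task to its dependencies, run the tasks in topological order, and retry any that fail:

```python
# Minimal sketch of an orchestrator's core loop (illustrative only).
from graphlib import TopologicalSorter  # standard library, Python 3.9+

def run_dag(dag, tasks, retries=2):
    """dag maps each task name to the set of task names it depends on;
    tasks maps each task name to a zero-argument callable."""
    finished = []
    # static_order() yields tasks with all dependencies first.
    for name in TopologicalSorter(dag).static_order():
        for attempt in range(retries + 1):
            try:
                tasks[name]()
                finished.append(name)
                break
            except Exception:
                if attempt == retries:
                    raise  # a real tool would also notify the team here
    return finished

# Example pipeline: train depends on clean, deploy depends on train.
dag = {"clean": set(), "train": {"clean"}, "deploy": {"train"}}
log = []
tasks = {name: (lambda n=name: log.append(n)) for name in dag}
order = run_dag(dag, tasks)  # runs clean, then train, then deploy
```

Real orchestration tools add scheduling, distributed execution, monitoring, and a UI on top of this core idea, but the DAG-plus-executor model is the same.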
CI/CD tools such as Jenkins are commonly used to automatically test and deploy code, and there is a strong parallel between these tools and task orchestration tools – but there are important distinctions too. Even though in theory you can use these CI/CD tools to orchestrate dynamic, interlinked tasks, at a certain level of complexity you’ll find it easier to use more general tools like Apache Airflow instead.
Overall, the focus of any orchestration tool is ensuring centralized, repeatable, reproducible, and efficient workflows: a virtual command center for all of your automated tasks. With that context in mind, let’s see how some of the most popular workflow tools stack up.
Just tell me which one to use
You should probably use:
- Apache Airflow if you want the most full-featured, mature tool and you can dedicate time to learning how it works, setting it up, and maintaining it.
- Luigi if you need something with an easier learning curve than Airflow. It has fewer features, but it’s easier to get off the ground.
- Prefect if you want something that’s very familiar to Python programmers and stays out of your way as much as possible.
- Argo if you’re already deeply invested in the Kubernetes ecosystem and want to manage all of your tasks as pods, defining them in YAML instead of Python.
- Kubeflow if you want to use Kubernetes but still define your tasks with Python instead of YAML. You can also read about our experiences using Kubeflow and why we decided to drop it for our projects at Kubeflow: Not ready for production?
- MLFlow if you care more about tracking experiments or tracking and deploying models using MLFlow’s predefined patterns than about finding a tool that can adapt to your existing custom workflows.
For a quick overview, we’ve compared the tools on:
- Maturity: based on the age of the project and the number of fixes and commits;
- Popularity: based on adoption and GitHub stars;
- Simplicity: based on ease of onboarding and adoption;
- Breadth: based on how specialized vs. how adaptable each project is;
- Language: based on the primary way you interact with the tool.
These are not rigorous or scientific benchmarks, but they’re intended to give you a quick overview of how the tools overlap and how they differ from each other. For more details, see the head-to-head comparison below.
Luigi vs. Airflow
Luigi and Airflow solve similar problems, but Luigi is far simpler. It’s contained in a single component, while Airflow has multiple modules which can be configured in different ways. Airflow has a larger community and some extra features, but a much steeper learning curve. Specifically, Airflow is far more powerful when it comes to scheduling, and it provides a calendar UI to help you set up when your tasks should run. With Luigi, you need to write more custom code to run tasks on a schedule.
Both tools use Python and DAGs to define tasks and dependencies.
- Use Luigi if you have a small team and need to get started quickly.
- Use Airflow if you have a larger team and can take an initial productivity hit in exchange for more power once you’ve gotten over the learning curve.
Luigi vs. Argo
Argo is built on top of Kubernetes, and each task is run as a separate Kubernetes pod. This can be convenient if you’re already using Kubernetes for most of your infrastructure, but it will add complexity if you’re not. Luigi is a Python library and can be installed with Python package management tools, such as pip and conda. Argo is a Kubernetes extension and is installed using Kubernetes. While both tools let you define your tasks as DAGs, with Luigi you’ll use Python to write these definitions, and with Argo you’ll use YAML.
- Use Argo if you’re already invested in Kubernetes and know that all of your tasks will be pods. You should also consider it if the developers who’ll be writing the DAG definitions are more comfortable with YAML than Python.
- Use Luigi if you’re not running on Kubernetes and have Python expertise on the team.
Luigi vs. Prefect
Luigi and Prefect both aim to be easy for Python developers to onboard to and both aim to be simpler and lighter than Airflow. Prefect makes fewer assumptions about how your workflows are structured than Luigi does, and allows you to turn any Python function into a task.
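The pattern Prefect uses – decorating plain functions so they become trackable tasks – can be sketched in pure Python. This is a hypothetical mini version to show the idea, not Prefect’s actual API:

```python
import functools

def task(fn):
    """Hypothetical decorator in the spirit of Prefect's approach:
    wrap a plain function so an orchestrator can track its runs."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        try:
            result = fn(*args, **kwargs)
            wrapper.state = "Success"
            return result
        except Exception:
            wrapper.state = "Failed"  # a real tool would retry and notify
            raise
    wrapper.state = "Pending"
    return wrapper

@task
def clean(data):
    # Any ordinary function becomes a task just by adding the decorator.
    return [x for x in data if x is not None]

cleaned = clean([1, None, 2])  # runs normally; state is recorded as a side effect
```

The appeal of this style is that your functions remain plain Python: you can call and test them as usual, and the orchestration layer only observes them.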
Prefect is less mature than Luigi and follows an open core model, while Luigi is fully open source.
Use Prefect if you need to get something lightweight up and running as soon as possible. Use Luigi if you need something that’s more battle-tested and fully open source.
Luigi vs. Kubeflow
Luigi is a Python-based library for general task orchestration, while Kubeflow is a Kubernetes-based tool specifically for machine learning workflows. Luigi is built to orchestrate general tasks, while Kubeflow has prebuilt patterns for experiment tracking, hyper-parameter optimization, and serving Jupyter notebooks. Kubeflow has two distinct components: Kubeflow itself and Kubeflow Pipelines. The latter is focused on model deployment and CI/CD, and can be used independently of the main Kubeflow features.
- Use Luigi if you need to orchestrate a variety of different tasks, from data cleaning through model deployment.
- Use Kubeflow if you already use Kubernetes and want to orchestrate common machine learning tasks such as experiment tracking and model training.
Luigi vs. MLFlow
Luigi is a general task orchestration system, while MLFlow is a more specialized tool to help manage and track your machine learning lifecycle and experiments. You can use Luigi to define general tasks and dependencies (such as training and deploying a model), but you can import MLFlow directly into your machine learning code and use its helper functions to log information (such as the parameters you’re using) and artifacts (such as the trained models). You can also use MLFlow as a command-line tool to serve models built with common tools (such as scikit-learn) or deploy them to common platforms (such as AzureML or Amazon SageMaker).
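The logging pattern this encourages can be mimicked in a few lines of plain Python. The class and file name below are hypothetical stand-ins to show the shape of the calls; MLFlow’s real logging functions persist to a tracking server or local store:

```python
import json
import pathlib
import tempfile

class Run:
    """Hypothetical stand-in for an experiment-tracking run:
    collect parameters and metrics, then write them out as one record."""
    def __init__(self, store_dir):
        self.store = pathlib.Path(store_dir)
        self.params, self.metrics = {}, {}

    def log_param(self, key, value):   # e.g. learning rate, batch size
        self.params[key] = value

    def log_metric(self, key, value):  # e.g. accuracy, loss
        self.metrics[key] = value

    def finish(self):
        # Persist the run so experiments can be compared later.
        out = self.store / "run.json"
        out.write_text(json.dumps({"params": self.params,
                                   "metrics": self.metrics}))
        return out

run = Run(tempfile.mkdtemp())
run.log_param("learning_rate", 0.01)
run.log_metric("accuracy", 0.93)
record = json.loads(run.finish().read_text())
```

The value of a tracking tool comes from calls like these being sprinkled through your training code, so every run leaves behind a comparable record.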
Airflow vs. Argo
Argo and Airflow both allow you to define your tasks as DAGs, but in Airflow you do this with Python, while in Argo you use YAML. Argo runs each task as a Kubernetes pod, while Airflow lives within the Python ecosystem. Canva evaluated both options before settling on Argo, and you can watch this talk to get their detailed comparison and evaluation.
- Use Airflow if you want a more mature tool and don’t care about Kubernetes.
- Use Argo if you’re already invested in Kubernetes and want to run a wide variety of tasks written in different stacks.
Airflow vs. Prefect
Prefect was built to solve many perceived problems with Airflow, including that Airflow is too complicated, too rigid, and doesn’t lend itself to very agile environments. Even though you can define Airflow tasks using Python, this needs to be done in a way specific to Airflow. Using Prefect, any Python function can become a task and Prefect will stay out of your way as long as everything is running as expected, jumping in to assist only when things go wrong.
Airflow is an Apache project and is fully open source. Prefect is open core, with proprietary extensions.
Use Airflow if you need a more mature tool and can afford to spend more time learning how to use it. Use Prefect if you want to try something lighter and more modern and don’t mind being pushed towards their commercial offerings.
Airflow vs. Kubeflow
Airflow is a generic task orchestration platform, while Kubeflow focuses specifically on machine learning tasks, such as experiment tracking. Both tools allow you to define tasks using Python, but Kubeflow runs tasks on Kubernetes. Kubeflow is split into Kubeflow and Kubeflow Pipelines: the latter component allows you to specify DAGs, but it’s more focused on deployment and model serving than on general tasks.
- Use Airflow if you need a mature, broad ecosystem that can run a variety of different tasks.
- Use Kubeflow if you already use Kubernetes and want more out-of-the-box patterns for machine learning solutions.
Airflow vs. MLFlow
Airflow is a generic task orchestration platform, while MLFlow is specifically built to optimize the machine learning lifecycle. This means that MLFlow has the functionality to run and track experiments, and to train and deploy machine learning models, while Airflow has a broader range of use cases, and you could use it to run any set of tasks. Airflow is a set of components and plugins for managing and scheduling tasks. MLFlow is a Python library you can import into your existing machine learning code and a command-line tool you can use to train and deploy machine learning models written in scikit-learn to Amazon SageMaker or AzureML.
- Use MLFlow if you want an opinionated, out-of-the-box way of managing your machine learning experiments and deployments.
- Use Airflow if you have more complicated requirements and want more control over how you manage your machine learning lifecycle.
Argo vs. Kubeflow
Parts of Kubeflow (like Kubeflow Pipelines) are built on top of Argo, but Argo is built to orchestrate any task, while Kubeflow focuses on those specific to machine learning – such as experiment tracking, hyperparameter tuning, and model deployment. Kubeflow Pipelines is a separate component of Kubeflow which focuses on model deployment and CI/CD, and can be used independently of Kubeflow’s other features. Both tools rely on Kubernetes and are likely to be more interesting to you if you’ve already adopted that. With Argo, you define your tasks using YAML, while Kubeflow allows you to use a Python interface instead.
- Use Argo if you need to manage a DAG of general tasks running as Kubernetes pods.
- Use Kubeflow if you want a more opinionated tool focused on machine learning solutions.
Argo vs. MLFlow
Argo is a task orchestration tool that allows you to define your tasks as Kubernetes pods and run them as a DAG, defined with YAML. MLFlow is a more specialized tool that doesn’t allow you to define arbitrary tasks or the dependencies between them. Instead, you can import MLFlow into your existing (Python) machine learning code base as a Python library and use its helper functions to log artifacts and parameters to help with analysis and experiment tracking. You can also use MLFlow’s command-line tool to train scikit-learn models and deploy them to Amazon SageMaker or Azure ML, as well as to manage your Jupyter notebooks.
- Use Argo if you need to manage generic tasks and want to run them on Kubernetes.
- Use MLFlow if you want an opinionated way to manage your machine learning lifecycle with managed cloud platforms.
Kubeflow vs. MLFlow
Kubeflow and MLFlow are both smaller, more specialized tools than general task orchestration platforms such as Airflow or Luigi. Kubeflow relies on Kubernetes, while MLFlow is a Python library that helps you add experiment tracking to your existing machine learning code. Kubeflow lets you build a full DAG where each step is a Kubernetes pod, but MLFlow has built-in functionality to deploy your scikit-learn models to Amazon SageMaker or Azure ML.
- Use Kubeflow if you want to track your machine learning experiments and deploy your solutions in a more customized way, backed by Kubernetes.
- Use MLFlow if you want a simpler approach to experiment tracking and want to deploy to managed platforms such as Amazon SageMaker.
No silver bullet
While all of these tools have different focus points and different strengths, no tool is going to give you a headache-free process straight out of the box. Before sweating over which tool to choose, it’s usually important to ensure you have good processes, including a good team culture, blame-free retrospectives, and long-term goals. If you’re struggling with any machine learning problems, get in touch. We love talking shop, and you can schedule a free call with our CEO.