Bodywork is a command line tool that deploys ML pipelines to Kubernetes. It takes care of everything to do with containers and orchestration, so that you don't have to.

Who is this for?

Bodywork is aimed at teams who want a solution for running ML pipelines and deploying models to Kubernetes. It is a lightweight, simpler alternative to Kubeflow, or to building your own platform around a workflow orchestration tool like Apache Airflow, Argo Workflows or Dagster.

Pipeline = Jobs + Services

Any stage in a Bodywork pipeline can do one of two things:

- run a batch job, for executing code such as training a model or scoring a dataset; or
- start a long-running service that exposes itself over HTTP, such as an app serving model predictions.

You can use these to compose pipelines for many common ML use-cases, from serving pre-trained models to running continuous training on a schedule.

No Boilerplate Code Required

Authoring a stage is as simple as developing an executable Python module or Jupyter notebook that performs the required tasks, and then committing it to your project's Git repository. You are free to structure your codebase as you wish and there are no new APIs to learn.

[Image: example project structure]
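For illustration, a training stage could be nothing more than a plain Python script. The sketch below is hypothetical (the file name, dataset and model are placeholders); any module that runs top-to-bottom and saves its outputs somewhere a later stage can reach will do.

```python
# train.py - a hypothetical batch stage that trains and persists a model
from joblib import dump
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

if __name__ == "__main__":
    # load training data and fit a simple classifier
    data = load_iris()
    model = DecisionTreeClassifier().fit(data.data, data.target)

    # persist the trained model so a downstream serving stage can load it
    dump(model, "classifier.joblib")
```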

Easy to Configure

Stages are assembled into a DAG that defines your pipeline's workflow. This, along with all other key configuration, is contained in a single bodywork.yaml file.
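As a rough sketch of what this can look like, here is a hypothetical two-stage train-and-serve pipeline. Stage names, file paths and resource values are illustrative, and the exact schema keys can differ between Bodywork releases, so treat this as an outline rather than a copy-paste template and refer to the schema reference in the documentation.

```yaml
version: "1.1"
pipeline:
  name: iris-pipeline
  docker_image: bodyworkml/bodywork-core:latest
  DAG: train >> serve            # run the batch stage, then deploy the service
stages:
  train:
    executable_module_path: train.py
    requirements:
      - scikit-learn
      - joblib
    cpu_request: 0.5
    memory_request_mb: 250
    batch:
      max_completion_time_seconds: 300
      retries: 1
  serve:
    executable_module_path: serve.py
    requirements:
      - flask
      - joblib
      - scikit-learn
    cpu_request: 0.5
    memory_request_mb: 250
    service:
      max_startup_time_seconds: 60
      replicas: 1
      port: 8000
      ingress: true
logging:
  log_level: INFO
```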

Simplified DevOps for ML

Bodywork removes the need to build and manage container images for any stage of your pipeline. It works by running every stage with Bodywork's custom container image, which starts by pulling all of the files required for the stage directly from your project's Git repository (e.g. from GitHub). It then pip-installs any required dependencies, before running the stage's designated Python module or Jupyter notebook.

[Image: ML pipeline]

More Features