Deploying webKnossos with Kubernetes & Helm

February 21, 2018 • DevOps

Today, we deploy several setups of webKnossos, a browser-based 3D microscopy data annotation tool, multiple times per hour to our cluster. We enable a DevOps culture by surfacing those deployments in git. In this blog post I will explain why and how we built it.

Containers, Bare-Metal & Infrastructure-as-Code

When deciding how to build our deployment infrastructure, we first collected the requirements and desirable properties needed to comfortably sustain webKnossos deployments. We had already decided to containerize the application with Docker to encapsulate its dependencies. Additionally, our webKnossos setups must run on bare-metal servers, due to the data-heavy nature of the application.

One major point in choosing our deployment stack was to establish Infrastructure-as-Code, making every installation and configuration visible and editable for everybody on the team. This enables us to store the infrastructure configuration on GitHub and enjoy all the benefits of modern software development, such as versioning, pull requests and continuous integration.

Further requirements were support for multiple deployments of the same application with slightly different configurations, fast release cycles, and simple upgrades.

So, how did we achieve those goals?

The case for Kubernetes & Helm

As we were looking for a bare-metal container deployment solution, we could skip the plenitude of cloud providers. From the available self-hosted solutions we chose Kubernetes for multiple reasons:

  • It is an opinionated system that lets you configure the important bits and pieces (such as network overlays), but comes with sensible defaults, e.g. etcd as a distributed store and dnsmasq for cluster-internal DNS. So, setting up the cluster was simple.
  • Kubernetes employs useful abstractions to specify resources, such as deployments, services, ingress-endpoints and cronjobs.
  • All resources can be represented declaratively as YAML. This allows us to handle the whole cluster configuration as versioned code.
  • Advanced handling of those resources is already included, such as container monitoring, resource limits, health checks and rolling upgrades.
  • We expect Kubernetes to still exist in ten years, as it is backed by the Cloud Native Computing Foundation. Many companies, including Google, where Kubernetes originated, actively support it.
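To illustrate the declarative style mentioned above, here is a minimal Deployment resource as it could appear in our configuration repository. The names, image tag and port are hypothetical, not webKnossos' actual values:

```yaml
# deployment.yaml — a minimal, illustrative Kubernetes Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webknossos            # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webknossos
  template:
    metadata:
      labels:
        app: webknossos
    spec:
      containers:
        - name: webknossos
          image: scalableminds/webknossos:18.02.0   # hypothetical tag
          ports:
            - containerPort: 9000                   # assumed application port
          resources:
            limits:
              memory: "4Gi"                         # resource limits are enforced by Kubernetes
          readinessProbe:                           # health check, also used for rolling upgrades
            httpGet:
              path: /
              port: 9000
```

Because this is plain YAML, it can be reviewed in a pull request like any other code change.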

Our first webKnossos deployment with Kubernetes worked fine. But what about running multiple releases, with a different configuration for each customer? For the second release, we simply duplicated the resource definitions and changed them where necessary. This strategy surely does not scale to more releases.

This is where Helm comes in handy: it allows us to template those resource definitions and manage them across multiple releases. All templated Kubernetes resources belonging to one application are treated as a package (called a Chart in Helm-land), which may be instantiated multiple times as releases. This also allows rollbacks on the release level, instead of hand-picking the correct Kubernetes resources.
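As a sketch of what such templating looks like, a chart's deployment template replaces the hard-coded values with placeholders that Helm fills in per release. The field names below are illustrative, not taken from the actual webKnossos chart:

```yaml
# templates/deployment.yaml (excerpt) — placeholders are filled from each release's parameters
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

`.Values` refers to the parameters supplied at install/upgrade time, while `.Chart.Name` is taken from the chart's metadata.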

To deploy a release to our cluster with a simple code change, we added two missing links: Helm itself manages neither the set of releases nor their template parameters. We handle both by specifying the template parameters in one file per release and deploying each release idempotently, using the following command: helm upgrade release-a webknossos-chart -i -f webknossos/release-a-parameters.yaml
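Such a per-release parameter file could look like the following. All values are hypothetical and only meant to show the shape of the file:

```yaml
# webknossos/release-a-parameters.yaml (illustrative)
replicaCount: 2
image:
  repository: scalableminds/webknossos
  tag: "18.02.0"
hostname: release-a.example.com
```

Since every release is just one small YAML file in the repository, adding a customer-specific configuration is a pull request rather than a copy of all resource definitions.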

In the meantime, the off-the-shelf tool Helmfile has been developed, which provides similar functionality.
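With Helmfile, the mapping from releases to charts and parameter files would be declared in a single helmfile.yaml. A sketch, with hypothetical names and paths:

```yaml
# helmfile.yaml (illustrative)
releases:
  - name: release-a
    chart: ./webknossos-chart
    values:
      - webknossos/release-a-parameters.yaml
  - name: release-b
    chart: ./webknossos-chart
    values:
      - webknossos/release-b-parameters.yaml
```

Running `helmfile sync` would then apply all declared releases idempotently, much like the per-release helm upgrade command above.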

Finally, the last step was to automatically deploy the state of the master branch of our infrastructure git repository. To that end, we developed the Auto-Deploy webhook, which is of course itself hosted in our Kubernetes cluster. It triggers Helm upgrades on change notifications from GitHub. To keep the upgrades fast, we compare the rendered Kubernetes resources (using helm template) with the actually deployed resources (obtained with helm get manifest) and only upgrade releases that changed.
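The change-detection logic can be sketched with plain helm commands and diff. This is a simplified illustration, not the actual Auto-Deploy implementation; the release name and paths are the hypothetical ones from above:

```
# Render what *would* be deployed for this release …
helm template webknossos-chart -f webknossos/release-a-parameters.yaml > rendered.yaml

# … fetch what *is* currently deployed …
helm get manifest release-a > deployed.yaml

# … and only trigger an upgrade when the manifests actually differ.
if ! diff -q rendered.yaml deployed.yaml > /dev/null; then
  helm upgrade release-a webknossos-chart -i -f webknossos/release-a-parameters.yaml
fi
```

Skipping unchanged releases keeps a push-triggered deployment run fast even with many releases in the repository.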

The following table illustrates how each part of the deployment system (on the top) contributes to our requirements (listed on the left):

| Requirement                | Kubernetes | Helm | Auto-Deploy |
|----------------------------|------------|------|-------------|
| Containerized Applications | x          |      |             |
| Bare-Metal Servers         | x          |      |             |
| Infrastructure-as-Code     | x          | x    | x           |
| Configurable Releases      |            | x    |             |
| Fast & Simple Upgrades     |            | x    | x           |

With all this in place, upgrading to a newer Docker image only means changing a template parameter in the git repository. Auto-Deploy then determines which releases need to be upgraded and instructs Helm to render and apply the Kubernetes resources. After Kubernetes has pulled the new Docker image, the container is started and, once ready, serves incoming requests. Have a look for yourself. Surfacing this magic is surely necessary, and will be explained in the next DevOps blog post, about ChatOps.

If you are interested in building your infrastructure with code, or need support with Kubernetes, please get in touch with us.

by Jonathan Striebel
