I needed a simple S3-compatible API today, and it seems that Minio is very much an enterprise thing nowadays. Instead of trying to figure out how to spin it up, I looked elsewhere and found Garage, an actually simple S3-compatible server.

Garage comes with a helm chart to simplify its installation on kubernetes. Unfortunately, and typically for helm, some of the values I need to override aren't exposed through values.yaml. In my case, there's no way to add extra annotations to the Service, which I need for cross-cluster routing with cilium.

I use nix to deploy my helm charts, though, so let’s go over how I solved this problem.

{
  inputs.kubegen.url = "github:farcaller/nix-kube-generators";
  inputs.garage.url = "git+https://git.deuxfleurs.fr/Deuxfleurs/garage";
  inputs.garage.flake = false;
  # (the outputs below also use nixpkgs and flake-utils inputs, omitted here)

First, I pull in the dependencies: kubegen provides helper functions for working with helm, and garage is the upstream source repository for garage itself. While garage is technically a nix flake, I don't import it as one, since I only need the raw files, hence flake = false.
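
A non-flake input is still pinned in flake.lock like any other, but it's exposed as a plain source tree, so files inside it can be referenced by interpolating the input into a string. A minimal sketch of the idea (the real reference appears in the pipeline further down):

# a non-flake input behaves like a plain directory in the nix store
chart = "${garage}/script/helm/garage";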

outputs = { self, nixpkgs, kubegen, flake-utils, garage }: flake-utils.lib.eachDefaultSystem (system:
  let
    pkgs = nixpkgs.legacyPackages.${system};
    lib = pkgs.lib;
    kubelib = kubegen.lib { inherit pkgs; };

    # if the object is the garage Service, merge in the cilium annotation;
    # everything else passes through untouched
    patchService = object:
      if object.kind == "Service" && object.metadata.name == "garage" then
        (lib.recursiveUpdate object {
          metadata.annotations."io.cilium/global-service" = "true";
        })
      else object;

    # parse a list of yaml files and prepend (pre = true) or append
    # (pre = false) their objects to the accumulated resources
    foldResources = yamls: pre: resources: builtins.foldl'
      (
        acc: y:
          let
            r = kubelib.fromYAML (builtins.readFile y);
          in
          if pre then r ++ acc else acc ++ r
      )
      resources
      yamls;
  in
  {

Next comes the usual flake-utils boilerplate: my desktop is x86_64, but I run argocd on arm64, so the flake has to support several architectures, even though it's mostly data. I also define two helper functions. patchService takes a kubernetes object and, if it's the garage Service, adds the extra annotation to it; lib.recursiveUpdate is very handy here, as it recursively merges the passed-in value into the original object. foldResources is a pipeline step that takes a list of yaml files, parses them, and either prepends or appends the resulting objects to the accumulated resources, depending on the pre flag.
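
To make patchService concrete, here's what it does to a stripped-down, made-up Service object (only the fields the function looks at, plus one pre-existing annotation):

patchService {
  kind = "Service";
  metadata.name = "garage";
  metadata.annotations."app.kubernetes.io/name" = "garage";
}
# => {
#   kind = "Service";
#   metadata = {
#     name = "garage";
#     annotations = {
#       "app.kubernetes.io/name" = "garage";
#       "io.cilium/global-service" = "true";
#     };
#   };
# }

Here's how both helpers are used: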

      packages.kubernetesConfiguration = lib.pipe
        {
          name = "garage";
          chart = "${garage}/script/helm/garage";
          namespace = "garage";
          values = {
            garage.replicationMode = 1;
            deployment.replicaCount = 1;
            persistence = {
              meta.storageClass = "zfspv";
              meta.size = "100Mi";
              data.storageClass = "zfspv";
              data.size = "1Gi";
            };
            monitoring.metrics.enabled = true;
            monitoring.metrics.serviceMonitor.enabled = true;

            podAnnotations."io.cilium.proxy-visibility" = "<Egress/53/UDP/DNS>,<Ingress/3900/TCP/HTTP>,<Ingress/3902/TCP/HTTP>,<Ingress/3903/TCP/HTTP>";
          };
        } [
        kubelib.buildHelmChart                    # render the chart with the values above
        builtins.readFile
        kubelib.fromYAML                          # parse the rendered manifests into a list of objects
        (map patchService)                        # add the cilium annotation to the garage Service
        (foldResources [ ./svc-web.yaml ] false)  # append the extra objects from a local yaml file
        kubelib.mkList                            # wrap the objects into a single List
        kubelib.toYAMLFile                        # serialize back to yaml for argocd
      ];
    });
}

As you can see, I pull the chart from ${garage}/script/helm/garage, which is the checkout of the garage input. Nix stores the exact commit in the lock file, and I have a cron job that updates those pins nightly. This lets me easily track upstream changes while still keeping control over which version I pull.
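
The nightly refresh boils down to something like this (a sketch; the actual job also has to commit the updated lock file):

nix flake lock --update-input garage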

The values are passed directly to helm. They look neater than the usual yaml/json thanks to nix's chained dot syntax for attribute paths.
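
For example, these two nix expressions define exactly the same attribute set:

# the attribute-path form used in the values above
{ monitoring.metrics.serviceMonitor.enabled = true; }
# the fully nested form, mirroring the yaml structure helm ends up seeing
{ monitoring = { metrics = { serviceMonitor = { enabled = true; }; }; }; }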

Finally, the actual pipeline runs through lib.pipe. It takes the first argument (the chart definition), passes it to kubelib.buildHelmChart, which renders the helm chart, then reads and parses the result, runs patchService over every rendered object, appends the objects from svc-web.yaml to the end, and finally writes everything back out as yaml, to be consumed by argocd.
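
lib.pipe itself is just left-to-right function application, with each function consuming the previous result; a tiny illustration:

# lib.pipe x [ f g ] is equivalent to g (f x)
lib.pipe 2 [ (x: x + 1) (x: x * 3) ]  # evaluates to 9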

Now, on one hand this seems incredibly complex, but in return I don't need to keep an eye on upstream versions (the nix flake takes care of that), I can easily add local patches without maintaining my own fork of the helm chart, and I don't even have to define everything in nix: plain yaml works for any extra objects I want in the chart. Because it's all a single entity, it's applied in one go in argocd, which is much better than having a helm chart and some side objects managed in a separate application.