My oh my, it’s been quite the adventure the past couple of weeks… I’ve now gotten to the point where things are running, I’m feeling some growing pains, have tweaked the hardware config considerably (as I’ll document in another post) and have reached the point where I needed persistent storage that would survive pod creation/termination/scale up/scale down and node replacement. NFS is something I had a lot of prior experience and infrastructure set up for, so I started there…
Kubernetes doesn’t support NFS out of the box
Despite all its bells and whistles, NFS isn’t something you get for “free” with Kubernetes (on any variant I could find). Instead you need to install a Provisioner, which you can later use for your Storage Classes. The official Kubernetes documentation is quite helpful here; after reading around for something of a consensus on the web, I went with the very popular and aptly named NFS Subdir External Provisioner.
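(For reference, on a cluster that already has Helm installed, the provisioner’s own docs install the chart with something along these lines, substituting your NFS server and export. As you’ll see in a second, k3s needs a different approach.)

helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner
helm install nfs-subdir-external-provisioner \
    nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=NFS.SERVER.IP \
    --set nfs.path=/your/nfs/export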
K3s doesn’t ship with Helm or many of the other things many websites suggest, so I took a lot of inspiration from this blog post targeted specifically at k3s and instead went for a slightly clunky but dependency-less route: a manually inserted Helm Chart.
I’ll save you a lot of effort: essentially you need to create a file at /var/lib/rancher/k3s/server/manifests/nfs.yaml and give it contents similar to the below (filling in the blanks, of course).
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: nfs
  namespace: default
spec:
  chart: nfs-subdir-external-provisioner
  repo: https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner
  targetNamespace: default
  set:
    nfs.server: NFS.SERVER.IP
    nfs.path: /your/nfs/export
    storageClass.name: nfs
    storageClass.reclaimPolicy: Retain   # reclaim policy is a chart value, not a metadata field
Wait a few seconds and, if successful, everything should now be working. You can verify this with k3s kubectl get storageclasses.
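If you want to be extra sure that dynamic provisioning works end to end, a throwaway claim like the sketch below (test-claim is just a made-up name) should go from Pending to Bound within a few seconds, and a matching subdirectory should appear under your NFS export:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: nfs        # the class we just created above
  accessModes:
    - ReadWriteMany            # NFS is happy with multiple readers/writers
  resources:
    requests:
      storage: 1Mi             # tiny, it's only a smoke test

Apply it with k3s kubectl apply -f test-claim.yaml, check it with k3s kubectl get pvc, and delete it again once you’re satisfied.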
You’re done.
Now you have a piece of shared persistent storage available to all your nodes; the only thing left to do is to leverage PersistentVolumeClaims for each of your Deployments that need them. (TBC)…
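As a small teaser for that follow-up, a claim against the new nfs storage class and a Deployment mounting it could look roughly like the sketch below; my-app, my-app-data and the nginx image are purely illustrative stand-ins for your own workload.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data
spec:
  storageClassName: nfs          # the class created by the provisioner above
  accessModes:
    - ReadWriteMany              # shared storage, so multiple pods can mount it
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx:alpine
          volumeMounts:
            - name: data
              mountPath: /data   # the NFS-backed directory inside the container
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: my-app-data

Because the underlying storage lives on the NFS server rather than on any one node, the pods can be rescheduled, scaled or moved between nodes and still see the same data.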