SPIFFE/SPIRE CSI Driver
SPIFFE/SPIRE is a great tool to use when you want to perform workload attestation and mutual authentication in heterogeneous environments. It provides short-lived cryptographic identities (called SVIDs) that workloads can use to authenticate to other workloads and establish mutual TLS (mTLS) connections. Currently, the SPIRE agent workload API needs to be exposed as a `hostPath` to the Kubernetes cluster so that other workloads can mount the workload API and request an SVID.
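To make the comparison concrete, here is a sketch of what that traditional approach looks like in a workload pod spec. The socket directory path and workload name are illustrative and vary by deployment:

```yaml
# Illustrative fragment of a workload pod spec under the traditional
# approach: the agent socket directory is mounted straight off the node.
spec:
  containers:
    - name: my-workload            # hypothetical workload name
      volumeMounts:
        - name: spire-agent-socket
          mountPath: /run/spire/sockets
          readOnly: true
  volumes:
    - name: spire-agent-socket
      hostPath:
        path: /run/spire/sockets   # assumed agent socket directory
        type: Directory
```

Every workload pod in the cluster would need a fragment like this, which is exactly the `hostPath` sprawl discussed next.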
`hostPath` brings a whole slew of drawbacks and security concerns when used in a production environment. Many hardening guides recommend against using `hostPath` because it can be a gateway to privilege escalation. In this post, let's look at the alternative and how it can be used to deploy SPIRE!
Overview of SPIFFE and SPIRE
SPIFFE (Secure Production Identity Framework for Everyone) is an open-source standard for securely identifying software systems in dynamic and heterogeneous environments. Like I stated above, it can be used to establish mTLS connections or to sign and verify JWT tokens. As an open standard, SPIFFE is implemented by multiple tools: SPIRE (which we will focus on today), Istio (service mesh), HashiCorp Consul (another service mesh), and Kuma (a third service mesh).
SPIRE is a production-ready implementation of the SPIFFE framework. It performs node and workload attestation in order to issue SVIDs to workloads. SPIRE consists of a central SPIRE server and SPIRE agents that run on the nodes (one or more depending on the environment). The server acts as the signing authority for all the workloads that are registered. The SPIRE agents are responsible for requesting the signed SVIDs, attesting the identities of the workloads that call the workload API, and finally providing the SVIDs once the workloads are successfully attested.
Why is hostPath a security concern?
The current deployment of SPIRE relies heavily on `hostPath`. This makes the workload API accessible to other workloads so the attestation process can occur. However, using `hostPath` in a pod can compromise the security of your cluster and allow an attacker to perform a privilege escalation attack. For example, if the administrator has not limited what can be mounted, an attacker could mount the entire host's filesystem into a pod. This could give them read and write access to the host's filesystem! (1)
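As an illustration of that risk, here is a sketch of the kind of pod spec an attacker could submit when `hostPath` mounts are unrestricted. All names here are hypothetical:

```yaml
# Hypothetical attacker pod: mounts the node's entire filesystem
# read-write at /host inside the container.
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-escape            # hypothetical name
spec:
  containers:
    - name: shell
      image: busybox
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: host-root
          mountPath: /host         # the node's root filesystem appears here
  volumes:
    - name: host-root
      hostPath:
        path: /                    # the host's root directory
```

Admission policies that restrict or forbid `hostPath` volumes are the usual defense against specs like this, which is why so many hardening guides flag `hostPath`.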
Since `hostPath` is needed for SPIRE to function properly, is there another way we could deploy SPIRE that is more secure? Yes! Let's talk about that next.
What is a CSI Driver?
CSI (Container Storage Interface) is an open standard that allows block and file storage systems to be exposed to containerized workloads. A CSI driver makes the Kubernetes volume layer extensible: third-party storage providers can deploy plugins that expose new storage systems in Kubernetes without touching the core Kubernetes code. This results in more storage options for users and makes the system more secure and reliable.
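Assuming `kubectl` is pointed at a cluster, you can see which CSI drivers (if any) are already registered before adding a new one:

```shell
# List the CSIDriver objects registered with the cluster.
kubectl get csidrivers
```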
Deploying SPIRE with CSI Driver
Currently the SPIRE agent is deployed as a `DaemonSet`, with the workload API exposed on each node. Each workload would mount the workload API as a `hostPath` volume. The motivation for creating the CSI driver was to remove the need for the workload pods to mount the workload API directly. Thus, only the SPIRE agent pod that contains the CSI driver containers requires the `hostPath` volume mounts (to interact with the kubelet). This is the only limitation of this driver, as using an `emptyDir` volume would result in the backing directory being removed if the SPIFFE CSI Driver pod is restarted, invalidating the mount into workload containers.
Prerequisites

- A running Kubernetes cluster (Docker Desktop Kubernetes, k3d, minikube)
Note: You can find a script and configs of this example on our github: spiffe-csi-driver
First we need to deploy the `CSIDriver` object:
```yaml
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: "csi.spiffe.io"
spec:
  attachRequired: false
  podInfoOnMount: true
  fsGroupPolicy: None
  volumeLifecycleModes:
    - Ephemeral
```
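Assuming the manifest above is saved locally (the file name here is illustrative), it can be applied and verified like any other cluster-scoped object:

```shell
# Apply the CSIDriver object and confirm it registered.
kubectl apply -f csidriver.yaml
kubectl get csidriver csi.spiffe.io
```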
These are the important fields (3):
- `name`: Corresponds to the full name of the CSI driver.
- `attachRequired`: Indicates whether this CSI volume driver requires an attach operation. In SPIRE there is no reason for attaching as we are using ephemeral volumes, so this is set to `false`.
- `podInfoOnMount`: Indicates this CSI volume driver requires additional pod information (like pod name, pod UID, etc.) during mount operations. The kubelet will pass pod information as `volume_context` in CSI `NodePublishVolume` calls.
- `fsGroupPolicy`: Controls whether this CSI volume driver supports volume ownership and permission changes when volumes are mounted. In this case we do not want to change ownership of the Workload API, so this is set to `None`.
- `volumeLifecycleModes`: Informs Kubernetes about the volume modes supported by the driver. The default is `Persistent`. SPIRE uses `Ephemeral` to share the workload API, as none of the usual volume features (restoring from snapshot, cloning volumes, etc.) are needed.
Next we need to create the namespace for SPIRE:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: spire
```
The deployment of the SPIRE server is the same as the normal deployment in k8s. To make things easier, you can deploy using the provided yaml in the following location: https://raw.githubusercontent.com/spiffe/spiffe-csi/main/example/config/spire-server.yaml
```shell
kubectl apply -f https://raw.githubusercontent.com/spiffe/spiffe-csi/main/example/config/spire-server.yaml
```
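Before moving on, it is worth confirming the server came up cleanly. The label selector below assumes the labels from the example manifest:

```shell
# Wait for the server rollout to finish, then check the pod.
kubectl rollout status deployment/spire-server -n spire
kubectl get pods -n spire -l app=spire-server
```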
The SPIRE server deployment uses the currently released image, `ghcr.io/spiffe/spire-server:1.1.1`:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spire-server
  namespace: spire
  labels:
    app: spire-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: spire-server
  template:
    metadata:
      namespace: spire
      labels:
        app: spire-server
    spec:
      serviceAccountName: spire-server
      shareProcessNamespace: true
      containers:
        - name: spire-server
          image: ghcr.io/spiffe/spire-server:1.1.1
          imagePullPolicy: IfNotPresent
          args: ["-config", "/run/spire/config/server.conf"]
          ports:
            - containerPort: 8081
          volumeMounts:
            - name: spire-config
              mountPath: /run/spire/config
              readOnly: true
      volumes:
        - name: spire-config
          configMap:
            name: spire-server
```
The CSI driver integration is all done within the SPIRE agent `DaemonSet`. Let's break down the `DaemonSet` to understand which containers are being deployed.
First is the SPIRE agent container, deployed as normal using the `ghcr.io/spiffe/spire-agent:1.1.1` image:
```yaml
- name: spire-agent
  image: ghcr.io/spiffe/spire-agent:1.1.1
  imagePullPolicy: IfNotPresent
  args: ["-config", "/run/spire/config/agent.conf"]
  volumeMounts:
    - name: spire-config
      mountPath: /run/spire/config
      readOnly: true
    - name: spire-bundle
      mountPath: /run/spire/bundle
      readOnly: true
    - name: spire-token
      mountPath: /var/run/secrets/tokens
    - name: spire-agent-socket-dir
      mountPath: /run/spire/sockets
```
The next container is the SPIFFE CSI Driver. Here the important pieces are the `volumeMounts`, which consist of the SPIRE agent socket, the CSI driver socket, and the kubelet mount used to manage mounts for containers.
```yaml
- name: spiffe-csi-driver
  image: ghcr.io/spiffe/spiffe-csi-driver:nightly
  imagePullPolicy: IfNotPresent
  args: [
    "-workload-api-socket-dir", "/spire-agent-socket",
    "-csi-socket-path", "/spiffe-csi/csi.sock",
  ]
  env:
    # The CSI driver needs a unique node ID. The node name can be
    # used for this purpose.
    - name: MY_NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
  volumeMounts:
    # The volume containing the SPIRE agent socket. The SPIFFE CSI
    # driver will mount this directory into containers.
    - mountPath: /spire-agent-socket
      name: spire-agent-socket-dir
      readOnly: true
    # The volume that will contain the CSI driver socket shared
    # with the kubelet and the driver registrar.
    - mountPath: /spiffe-csi
      name: spiffe-csi-socket-dir
    # The volume containing mount points for containers.
    - mountPath: /var/lib/kubelet/pods
      mountPropagation: Bidirectional
      name: mountpoint-dir
  securityContext:
    privileged: true
```
The last container is the CSI Node Driver Registrar which takes care of all the little details required to register a CSI driver with the kubelet.
```yaml
- name: node-driver-registrar
  image: quay.io/k8scsi/csi-node-driver-registrar:v2.0.1
  imagePullPolicy: IfNotPresent
  args: [
    "-csi-address", "/spiffe-csi/csi.sock",
    "-kubelet-registration-path", "/var/lib/kubelet/plugins/csi.spiffe.io/csi.sock",
  ]
  volumeMounts:
    # The registrar needs access to the SPIFFE CSI driver socket
    - mountPath: /spiffe-csi
      name: spiffe-csi-socket-dir
    # The registrar needs access to the Kubelet plugin registration
    # directory
    - name: kubelet-plugin-registration-dir
      mountPath: /registration
```
Finally, let's look at the actual `volumes` that attach to these containers. `hostPath` needs to be used here only because the SPIRE agent socket is shared between the containers running in the `DaemonSet`, and because the SPIFFE CSI Driver needs to interact with the kubelet, hence the paths under `/var/lib/kubelet`:
```yaml
- name: spire-agent-socket-dir
  hostPath:
    path: /run/spire/agent-sockets
    type: DirectoryOrCreate
# This volume is where the socket for kubelet->driver communication lives
- name: spiffe-csi-socket-dir
  hostPath:
    path: /var/lib/kubelet/plugins/csi.spiffe.io
    type: DirectoryOrCreate
# This volume is where the SPIFFE CSI driver mounts volumes
- name: mountpoint-dir
  hostPath:
    path: /var/lib/kubelet/pods
    type: Directory
# This volume is where the node-driver-registrar registers the plugin
# with kubelet
- name: kubelet-plugin-registration-dir
  hostPath:
    path: /var/lib/kubelet/plugins_registry
    type: Directory
```
The SPIRE Agent with all the configured CSI driver containers can be easily deployed via this yaml configuration: https://raw.githubusercontent.com/spiffe/spiffe-csi/main/example/config/spire-agent.yaml
```shell
kubectl apply -f https://raw.githubusercontent.com/spiffe/spiffe-csi/main/example/config/spire-agent.yaml
```
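As with the server, you can verify the agent `DaemonSet` is healthy on every node. The `DaemonSet` name and label below are assumed to match the example manifests:

```shell
# Wait for the agent DaemonSet rollout across all nodes.
kubectl rollout status daemonset/spire-agent -n spire
kubectl get pods -n spire -l app=spire-agent -o wide
```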
Once the SPIRE server and agent are running in your cluster, the next step is to register the node and workload entries. This can be done via:
```shell
echo "Registering node..."
kubectl exec -it \
  -n spire \
  deployment/spire-server -- \
  /opt/spire/bin/spire-server entry create \
    -node \
    -spiffeID spiffe://example.org/node \
    -selector k8s_psat:cluster:example-cluster

echo "Registering workload..."
kubectl exec -it \
  -n spire \
  deployment/spire-server -- \
  /opt/spire/bin/spire-server entry create \
    -parentID spiffe://example.org/node \
    -spiffeID spiffe://example.org/workload \
    -selector k8s:ns:default
```
Here we are using the `k8s_psat` plugin for the node registration and the `k8s` plugin for the workload. These selectors can be modified based on your needs.
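You can confirm both entries were created by listing them on the server:

```shell
# Show all registration entries known to the SPIRE server.
kubectl exec -it -n spire deployment/spire-server -- \
  /opt/spire/bin/spire-server entry show
```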
Test SPIRE with CSI Driver
Now that the SPIRE server and agent are registered and running, let's test to make sure the CSI driver works. We will deploy a client workload that uses the CSI driver instead of the usual `hostPath` volume:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-workload
spec:
  containers:
    - name: example-workload
      image: ppatel1989/spiffe-csi-driver-example-workload:example
      volumeMounts:
        - name: spiffe-workload-api
          mountPath: /spiffe-workload-api
          readOnly: true
      env:
        - name: SPIFFE_ENDPOINT_SOCKET
          value: unix:///spiffe-workload-api/spire-agent.sock
  volumes:
    - name: spiffe-workload-api
      csi:
        driver: "csi.spiffe.io"
```
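Assuming the pod spec above is saved locally (the file name here is illustrative), deploy it and wait for it to become ready:

```shell
# Deploy the example workload and wait until it is Ready.
kubectl apply -f workload.yaml
kubectl wait --for=condition=Ready pod/example-workload --timeout=60s
```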
Check the workload logs to see the update received over the Workload API:
```shell
kubectl logs pod/example-workload
```
The output should be similar to:
```
2021/11/23 18:46:33 Update:
2021/11/23 18:46:33   SVIDs:
2021/11/23 18:46:33     spiffe://example.org/workload
2021/11/23 18:46:33   Bundles:
2021/11/23 18:46:33     example.org (1 authorities)
```
If you are having trouble getting things to run, you can visit our github to find the full example and script to get it running quickly: spiffe-csi-driver
As I have described in this post, using `hostPath` can be a very big security risk. With the traditional SPIRE deployment, each workload pod also needed a `hostPath` mount attached to access the Workload API. The CSI Driver limits the use of `hostPath` to only the SPIRE agent `DaemonSet`. This still gives you all the benefits of using SPIRE in your environment without having to expose `hostPath` mounts to every workload.