EFS Provisioner for EKS with CSI Driver


What is a CSI driver?

A CSI driver is typically deployed in Kubernetes as two components: a controller component and a per-node component.

  • Controller Plugin: The controller component can be deployed as a Deployment or StatefulSet on any node in the cluster. It consists of the CSI driver that implements the CSI Controller service and one or more sidecar containers. These controller sidecar containers typically interact with Kubernetes objects and make calls to the driver’s CSI Controller service.
  • Node plugin: The node component should be deployed on every node in the cluster through a DaemonSet. It consists of the CSI driver that implements the CSI Node service and the node-driver-registrar sidecar container.

  • How do the two components work together? The controller plugin watches Kubernetes objects (such as PersistentVolumeClaims) and makes cluster-level calls to the storage backend to create, delete, and attach volumes, while the node plugin performs node-level operations such as mounting and unmounting volumes on each node.
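
For the EFS driver installed later in this post, the two components show up as a Deployment (efs-csi-controller) and a DaemonSet (efs-csi-node). A quick way to list them once the Helm chart is installed, assuming the chart's standard labels:

kubectl get deployment,daemonset -n kube-system -l app.kubernetes.io/name=aws-efs-csi-driver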

What is the Amazon EFS CSI driver?

  • The Amazon EFS Container Storage Interface (CSI) driver provides a CSI interface that allows Kubernetes clusters running on AWS to manage the lifecycle of Amazon EFS file systems.
  • The EFS CSI driver supports both dynamic and static provisioning. Currently, dynamic provisioning creates an EFS access point for each PV; the EFS file system itself must be created manually on AWS first and provided as an input in the storage class parameters. For static provisioning, the EFS file system also needs to be created manually on AWS first; after that it can be mounted inside a container as a volume using the driver.

Prerequisites

  • An EKS cluster with an associated OIDC provider.
  • An EFS file system created.
  • AWS CLI, kubectl, and Helm installed.
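
You can sanity-check the prerequisites from your terminal. The commands below are illustrative; the cluster name my-cluster and the Name tag eks-efs are placeholders:

# Confirm the CLI tooling is available
aws --version
kubectl version --client
helm version

# Confirm the cluster has an OIDC issuer associated
aws eks describe-cluster --name my-cluster --query "cluster.identity.oidc.issuer" --output text

# If no EFS file system exists yet, one can be created like this
# (mount targets in the cluster's VPC subnets are also required so the nodes can reach it)
aws efs create-file-system --encrypted --tags Key=Name,Value=eks-efs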

Setup

Create an IAM policy and assign it to an IAM role. The policy will allow the Amazon EFS driver to interact with your file system.

curl -o iam-policy-example.json https://raw.githubusercontent.com/kubernetes-sigs/aws-efs-csi-driver/master/docs/iam-policy-example.json

aws iam create-policy \
    --policy-name AmazonEKS_EFS_CSI_Driver_Policy \
    --policy-document file://iam-policy-example.json

Create an IAM role and attach the IAM policy to it. Annotate the Kubernetes service account with the IAM role ARN and the IAM role with the Kubernetes service account name.

a) Determine your cluster’s OIDC provider URL. Replace my-cluster with your cluster name.

aws eks describe-cluster --name my-cluster --query "cluster.identity.oidc.issuer" --output text

Example Output:

https://oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE
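
If you prefer to script the next step, you can capture just the issuer ID in a shell variable. This is an optional convenience and assumes a bash-like shell:

# Extract the part after /id/ from the issuer URL
OIDC_ID=$(aws eks describe-cluster --name my-cluster \
    --query "cluster.identity.oidc.issuer" --output text | awk -F'/id/' '{print $2}')
echo $OIDC_ID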

b) Create the IAM role, granting the Kubernetes service account the AssumeRoleWithWebIdentity action.

  • Copy the following contents to a file named trust-policy.json. Replace 111122223333 with your account ID. Replace EXAMPLED539D4633E53DE1B71EXAMPLE and region-code with the values returned in the previous step.
        {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Effect": "Allow",
                    "Principal": {
                        "Federated": "arn:aws:iam::111122223333:oidc-provider/oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"
                    },
                    "Action": "sts:AssumeRoleWithWebIdentity",
                    "Condition": {
                        "StringEquals": {
                            "oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:sub": "system:serviceaccount:kube-system:efs-csi-controller-sa"
                        }
                    }
                }
            ]
        }
  • Create the role. You can change AmazonEKS_EFS_CSI_DriverRole to a different name, but if you do, make sure to change it in later steps too.
      aws iam create-role --role-name AmazonEKS_EFS_CSI_DriverRole --assume-role-policy-document file://"trust-policy.json"

c) Attach the IAM policy to the role. Replace 111122223333 with your account ID before running the following command.

aws iam attach-role-policy --policy-arn arn:aws:iam::111122223333:policy/AmazonEKS_EFS_CSI_Driver_Policy --role-name AmazonEKS_EFS_CSI_DriverRole
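
Note that the Helm installation below sets controller.serviceAccount.create=false, so the efs-csi-controller-sa service account must already exist in the kube-system namespace and be annotated with the role ARN created above. A minimal sketch of such a manifest (the file name efs-service-account.yaml is an example; replace 111122223333 with your account ID):

# efs-service-account.yaml (example file name)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: efs-csi-controller-sa
  namespace: kube-system
  labels:
    app.kubernetes.io/name: aws-efs-csi-driver
  annotations:
    # Links the Kubernetes service account to the IAM role created above
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/AmazonEKS_EFS_CSI_DriverRole

kubectl apply -f efs-service-account.yaml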

Install the Amazon EFS CSI driver using the Helm chart. You can find the corresponding Amazon ECR repository URL prefix for your AWS region in the EKS documentation.

helm repo add aws-efs-csi-driver https://kubernetes-sigs.github.io/aws-efs-csi-driver

helm repo update

helm upgrade -i aws-efs-csi-driver aws-efs-csi-driver/aws-efs-csi-driver \
  --namespace kube-system \
  --set image.repository=602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/aws-efs-csi-driver \
  --set controller.serviceAccount.create=false \
  --set controller.serviceAccount.name=efs-csi-controller-sa

Output should show:

Release "aws-efs-csi-driver" does not exist. Installing it now.
NAME: aws-efs-csi-driver
LAST DEPLOYED: Mon Oct  4 17:52:15 2021
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
To verify that aws-efs-csi-driver has started, run:

    kubectl get pod -n kube-system -l "app.kubernetes.io/name=aws-efs-csi-driver,app.kubernetes.io/instance=aws-efs-csi-driver"

Verify that the pods have been deployed; the command should return the CSI driver controller and node pods:

NAME                       READY   STATUS    RESTARTS   AGE
efs-csi-controller-78587b6668-2fhsp   3/3     Running   0          4h41m
efs-csi-controller-78587b6668-khg82   3/3     Running   0          4h41m
efs-csi-node-2s2gz                    3/3     Running   0          35m
efs-csi-node-fnl2x                    3/3     Running   0          35m
efs-csi-node-jrps4                    3/3     Running   0          35m

Dynamic Provisioning of EFS

This section shows how to create a dynamically provisioned volume through EFS access points and a PersistentVolumeClaim (PVC), and consume it from a pod. Create the storage class as follows:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
mountOptions:
  - tls
parameters:
  provisioningMode: efs-ap
  fileSystemId: fs-xxxxxx
  directoryPerms: "700"
  gidRangeStart: "1000"
  gidRangeEnd: "2000"
  basePath: "/dynamic_provisioning"
  • provisioningMode – The type of volume to be provisioned by efs. Currently, only access point based provisioning is supported efs-ap.
  • fileSystemId – The file system under which Access Point is created.
  • directoryPerms – Directory Permissions of the root directory created by Access Point.
  • gidRangeStart (Optional) – Starting range of Posix Group ID to be applied onto the root directory of the access point. Default value is 50000.
  • gidRangeEnd (Optional) – Ending range of Posix Group ID. Default value is 7000000.
  • basePath (Optional) – Path on the file system under which access point root directory is created. If path is not provided, access points root directory are created under the root of the file system.
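
Assuming the storage class above is saved to a file (storageclass.yaml is just an example name), apply it and confirm it exists:

kubectl apply -f storageclass.yaml
kubectl get storageclass efs-sc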

Deploy the example, which creates the persistent volume claim (PVC) and the pod that consumes the PV:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: efs-app
spec:
  containers:
    - name: app
      image: centos
      command: ["/bin/sh"]
      args: ["-c", "while true; do echo $(date -u) >> /data/out; sleep 5; done"]
      volumeMounts:
        - name: persistent-storage
          mountPath: /data
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: efs-claim
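
After applying the manifests above (pod.yaml is an example file name), you can check that the PVC is bound, that the driver created a PV and an EFS access point behind it, and that the pod is writing data. A sketch, assuming the manifests were applied as-is:

kubectl apply -f pod.yaml
kubectl get pvc efs-claim
kubectl get pv
kubectl exec efs-app -- tail -n 3 /data/out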

Static Provisioning of EFS

This section shows how to mount a statically provisioned EFS persistent volume (PV) inside a container.

Create a persistent volume (PV) with the following YAML. Replace the volumeHandle value with the FileSystemId of the EFS file system that needs to be mounted.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: efs.csi.aws.com
    volumeHandle: [FileSystemId] 
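
If you need to look up the FileSystemId to substitute for [FileSystemId], one way is to list your file systems with the AWS CLI (the selected output fields are just an example):

aws efs describe-file-systems --query "FileSystems[*].[FileSystemId,Name]" --output table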

Deploy the following example, which creates the persistent volume claim (PVC) and the pod that consumes the PV:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: efs-app
spec:
  containers:
  - name: app
    image: centos
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo $(date -u) >> /data/out.txt; sleep 5; done"]
    volumeMounts:
    - name: persistent-storage
      mountPath: /data
  volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: efs-claim
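
As with dynamic provisioning, you can verify the static example after applying it (the file name below is an example, and this assumes the dynamic example is not still deployed under the same names). Here the PV is pre-created, so check that the claim binds to it and that data is being written:

kubectl apply -f static-pod.yaml
kubectl get pv efs-pv
kubectl get pvc efs-claim
kubectl exec efs-app -- tail -n 3 /data/out.txt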

Conclusion

Amazon Elastic File System can automatically scale from gigabytes to petabytes of data, supports encryption of your data at rest and in transit, and offers seamless integration with AWS Backup. With the introduction of dynamic provisioning for EFS PersistentVolumes in Kubernetes, we can now provision storage on demand and integrate better with modern containerised applications. EFS access points can be used in conjunction with the CSI driver to enforce user identity and offer a clean out-of-the-box logical separation between storage spaces within the same EFS file system.



Written by 

I am a DevOps engineer with experience in DevOps tools and technologies such as Kubernetes, Docker, Ansible, AWS Cloud, Prometheus, and Grafana. I am flexible towards new technologies and always willing to update my skills and knowledge to increase productivity.