Introduction
Adding storage to an OpenShift cluster greatly increases the types of workloads you can run, from OpenShift Virtualization to databases such as MongoDB and PostgreSQL. Persistent volumes can be supplied in many different ways within OpenShift, including LocalVolumes, OpenShift Data Foundation, or an underlying cloud provider such as the vSphere provider. Storage drivers for external storage arrays also exist, such as the Pure CSI Driver, Dell's drivers, the Infinidat CSI Driver, and the Synology CSI Driver. While I do not have a Pure Storage array or an Infinibox in my home lab, I do have a Synology array that supports iSCSI, and that will be the focus of this blog. The Synology CSI driver supports the creation of ReadWriteOnce (RWO) persistent file volumes and ReadWriteMany (RWX) persistent block volumes, as well as snapshots of both volume types.
NOTE: This post focuses on the official Synology driver, but another driver, democratic-csi, may also work.
Due to the security model of OpenShift, deploying the Synology CSI driver requires granting the driver additional permissions through SCC profiles, and the iscsid service must be enabled on each node so that persistent volumes can be mounted. In this post we will use the tools OpenShift gives us, including MachineConfigs and SecurityContextConstraints, to allow the driver to run.
Prerequisites
The Synology CSI driver requires a Synology array running DSM version 7.0 or above. To see whether your array can be upgraded to version 7.0, check the DSM 7.0 Release Notes. You will also need an OpenShift cluster to install the CSI driver on. The steps below have been tested on OpenShift 4.9 but should work for OpenShift versions 4.7 and above. In addition you will need the following command line utilities:
- git
- oc
PreSteps
Red Hat CoreOS does not ship with the iSCSI daemon started by default. Because the Synology driver's components mount iSCSI devices directly on the host, we will need to enable the iscsid daemon, and we will leverage the OpenShift Machine Config Operator (MCO) to do so. The iscsid service is already present on all OpenShift RHCOS machines, but it is disabled by default. The MachineConfig below updates systemd to enable the service so that the driver can run iSCSI commands on the node. Start by creating a new file called "99-worker-custom-enable-iscsid.yml" and place the following contents in the file:
```yaml
---
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-custom-enable-iscsid
spec:
  config:
    ignition:
      version: 3.1.0
    systemd:
      units:
        - enabled: true
          name: iscsid.service
```
NOTE: The MachineConfig listed above applies only to worker nodes. Depending on your specific cluster configuration, you may need to adjust this label to properly select your required nodes. For a default install, this should work fine. Applying this change will reboot each worker node.
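To illustrate the note above: if you also need iSCSI on control-plane nodes (for example, in a compact cluster where the masters are schedulable), a parallel MachineConfig targeting the master role would look like the following sketch. This is not required for a default install; only the role label and the name differ from the worker version:

```yaml
---
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    # Target the master MachineConfigPool instead of worker
    machineconfiguration.openshift.io/role: master
  name: 99-master-custom-enable-iscsid
spec:
  config:
    ignition:
      version: 3.1.0
    systemd:
      units:
        - enabled: true
          name: iscsid.service
```

Applying this will reboot each control-plane node in turn, so only do so if you actually schedule iSCSI-backed workloads there.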
Apply the MachineConfig to your cluster by logging in with the oc command, then confirm that the change is applied to all of your nodes.
```shell
$ oc login
$ oc create -f 99-worker-custom-enable-iscsid.yml
$ oc get machineconfigpool
NAME     CONFIG                 UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-c5da   True      False      False      3              3                   3                     0                      34d
worker   rendered-worker-09ba   True      False      False      3              3                   3                     0                      34d
```
Ensure that all worker nodes show as being updated prior to continuing. Look for the “MACHINECOUNT” and “UPDATEDMACHINECOUNT” numbers to match.
Install
With our worker nodes updated to support iSCSI, we can now work on installing the CSI driver. Start by cloning the GitHub repo for the Synology CSI Driver:
```shell
$ git clone https://github.com/SynologyOpenSource/synology-csi
$ cd synology-csi
```
Creating the Configuration File
We will need to create a configuration file containing the hostname of your Synology array, as well as the username and password used to connect. Create a file called client-info.yml and put the following contents in it, updating the hostname, username, and password sections with the appropriate information for your array.
```yaml
---
clients:
  - host: <hostname or IP address of array>
    port: 5000
    https: false
    username: <username>
    password: <password>
```
NOTE: You must use a user account that has full admin privileges on the Synology array. This is a requirement of the Synology array, as the administrator is the only account with the proper permissions to create iSCSI LUNs.
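Since these are admin credentials, you may prefer not to send them over plain HTTP. DSM listens for HTTPS on port 5001 by default, so a variant of client-info.yml using HTTPS would look like this sketch (assuming your array still uses the default HTTPS port):

```yaml
---
clients:
  - host: <hostname or IP address of array>
    # 5001 is the DSM default HTTPS port; adjust if you changed it
    port: 5001
    https: true
    username: <username>
    password: <password>
```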
We will now create a new project for the synology-csi driver to run in, and create a secret from the client-info.yml file that we just created:
```shell
$ oc new-project synology-csi
$ oc create secret generic client-info-secret --from-file=client-info.yml
```
Create SCC and Apply to Cluster
Because we are using OpenShift, we need to address Security Context Constraints (SCC). The SCC below will allow the components of the Synology CSI driver to properly operate in an OpenShift cluster. Create a file called synology-csi-scc.yml with the following contents:
```yaml
---
kind: SecurityContextConstraints
apiVersion: security.openshift.io/v1
metadata:
  name: synology-csi-scc
allowHostDirVolumePlugin: true
allowHostNetwork: true
allowPrivilegedContainer: true
allowedCapabilities:
- 'SYS_ADMIN'
defaultAddCapabilities: []
fsGroup:
  type: RunAsAny
groups: []
priority:
readOnlyRootFilesystem: false
requiredDropCapabilities: []
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: RunAsAny
supplementalGroups:
  type: RunAsAny
users:
- system:serviceaccount:synology-csi:csi-controller-sa
- system:serviceaccount:synology-csi:csi-node-sa
- system:serviceaccount:synology-csi:csi-snapshotter-sa
volumes:
- '*'
```
Create the SecurityContextConstraints by applying them to the cluster:
```shell
$ oc create -f synology-csi-scc.yml
```
Defining the Synology Storage Class
We now need to determine how the Synology driver will handle PersistentVolumes once they are released, which Synology volume new iSCSI devices should be created on, and which filesystem should be used when creating new PersistentVolumes. The storage-class.yml file determines this configuration. Using your favorite editor, open the deploy/kubernetes/v1.20/storage-class.yml file from the Git repo. It will contain something similar to this:
```yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: synology-iscsi-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi.san.synology.com
parameters:
  location: '/volume2'
  fsType: 'ext4'
reclaimPolicy: Delete
allowVolumeExpansion: true
```
If you do not want the driver to use the default 'volume1' as your storage location, be sure to update the location parameter (the example above uses '/volume2'). You will also need to decide whether the CSI driver should delete volumes when they are no longer in use by setting the reclaimPolicy to Delete. Ensure that fsType is explicitly set to ext4 or xfs, depending on your preference, or you will have filesystem permission issues when you go to use the created volumes. If you plan to use the volume snapshot feature, be sure to review deploy/kubernetes/v1.20/snapshotter/volume-snapshot-class.yml as well and update as appropriate.
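As an example of these choices, if you would rather keep LUNs on the array after a claim is deleted, you could add a second, non-default StorageClass with a Retain policy and xfs. This is a sketch; the class name and location here are assumptions for your environment:

```yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  # Hypothetical name; no default-class annotation, so it must be requested explicitly
  name: synology-iscsi-retain
provisioner: csi.san.synology.com
parameters:
  location: '/volume1'
  fsType: 'xfs'
# Retain keeps the LUN on the array when the PVC is deleted
reclaimPolicy: Retain
allowVolumeExpansion: true
```

Note that with Retain, released PersistentVolumes and their LUNs must be cleaned up manually on the array.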
Deploying the Driver
At this point we can use the files located in synology-csi/deploy/kubernetes/v1.20 to deploy the Kubernetes CSI driver. We will apply each file individually.
```shell
# Start by applying the synology-csi controller
oc create -f deploy/kubernetes/v1.20/controller.yml
# Next we will apply the csi-driver definition
oc create -f deploy/kubernetes/v1.20/csi-driver.yml
# Next create the node handler
oc create -f deploy/kubernetes/v1.20/node.yml
# Finally create the storage class
oc create -f deploy/kubernetes/v1.20/storage-class.yml
# If you wish to use the snapshot feature we can also apply the snapshot driver
oc create -f deploy/kubernetes/v1.20/snapshotter/snapshotter.yaml
oc create -f deploy/kubernetes/v1.20/snapshotter/volume-snapshot-class.yaml
```
NOTE: We are using the files from the v1.20 directory as they take advantage of the v1 APIs for persistent storage creation. If you are running an OpenShift cluster prior to version 4.7, use the files in the v1.19 directory instead.
Check to ensure that all of the pods are successfully deployed before proceeding:
```shell
$ oc get pods
NAME                         READY   STATUS    RESTARTS   AGE
synology-csi-controller-0    4/4     Running   6          7d1h
synology-csi-node-9496p      2/2     Running   2          7d1h
synology-csi-node-w65h9      2/2     Running   2          7d1h
synology-csi-node-wpw2d      2/2     Running   2          7d1h
synology-csi-snapshotter-0   2/2     Running   2          6d19h
```
NOTE: The number of “synology-csi-node-xxxxx” pods you see will directly correlate to the total number of worker nodes deployed in your cluster.
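If you deployed the snapshotter components, any PVC provisioned by this driver can later be snapshotted with a VolumeSnapshot object. The sketch below is illustrative: the PVC name data-claim is hypothetical, and the volumeSnapshotClassName must match whatever name deploy/kubernetes/v1.20/snapshotter/volume-snapshot-class.yaml defines in your checkout:

```yaml
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  # Hypothetical snapshot name
  name: data-claim-snap
spec:
  # Must match the class created from the repo's volume-snapshot-class.yaml
  volumeSnapshotClassName: synology-snapshotclass
  source:
    # Hypothetical PVC to snapshot; use one of your own claims
    persistentVolumeClaimName: data-claim
```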
Creating your first PV
With the Synology CSI driver installed we can now go ahead and create a Persistent Volume. We will create this persistent volume in a new namespace to make cleanup easier.
$ oc new-project pvctest
Create a new file called my-file-storage-claim.yml with the following contents:
```yaml
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-file-storage-claim
  namespace: pvctest
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: synology-iscsi-storage
  volumeMode: Filesystem
```
Apply the file to your cluster to create the new claim:
```shell
$ oc create -f my-file-storage-claim.yml
persistentvolumeclaim/my-file-storage-claim created
```
We can check the status of the newly created PVC by running the following command:
```shell
$ oc get pvc
NAME                    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS             AGE
my-file-storage-claim   Bound    pvc-d747a9e2-3927-4e38-a201-2f691b98cbd2   2Gi        RWO            synology-iscsi-storage   19s
```
Look for the STATUS to be "Bound" to know that the new iSCSI volume was created. We can also see this in the Synology UI. Note that the VOLUME name from the oc get pvc command matches the name of the LUN in the Synology UI, with "k8s-csi" prepended to the name.
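As mentioned in the introduction, the driver also supports ReadWriteMany volumes, but in raw block mode rather than as a filesystem. A hedged sketch of such a claim (the name my-block-storage-claim is my own; RWX block devices are useful for workloads such as OpenShift Virtualization that can consume them directly):

```yaml
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  # Hypothetical claim name
  name: my-block-storage-claim
  namespace: pvctest
spec:
  accessModes:
  # RWX is only supported by this driver in Block mode
  - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  storageClassName: synology-iscsi-storage
  # Raw block device; the pod must use volumeDevices instead of volumeMounts
  volumeMode: Block
```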
Testing our PVC
To test the newly created Persistent Volume, we will create a Kubernetes Job that runs the Unix command dd against the volume, using a Red Hat UBI base image to run the command in. Create a file called test-write-job.yml with the following contents:
```yaml
---
apiVersion: batch/v1
kind: Job
metadata:
  name: write
spec:
  template:
    metadata:
      name: write
    spec:
      containers:
      - name: write
        image: registry.access.redhat.com/ubi8/ubi-minimal:latest
        command: ["dd","if=/dev/zero","of=/mnt/pv/test.img","bs=1G","count=1","oflag=dsync"]
        volumeMounts:
        - mountPath: "/mnt/pv"
          name: test-volume
      volumes:
      - name: test-volume
        persistentVolumeClaim:
          claimName: my-file-storage-claim
      restartPolicy: Never
```
Apply the above YAML to your cluster and then validate that the job was created:
```shell
$ oc create -f test-write-job.yml
job.batch/write created
$ oc get jobs
NAME    COMPLETIONS   DURATION   AGE
write   0/1           1s         1s
```
Using the oc get jobs command, wait until COMPLETIONS shows 1/1 before continuing; this signifies that the job has completed. We can then review the logs generated by our test job by running oc logs job/write:
```shell
$ oc logs job/write
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 13.013 s, 82.5 MB/s
```
SUCCESS! If there are errors about permission issues, be sure that you followed the instructions in Defining the Synology Storage Class and then re-run your job.
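To confirm the data actually landed on the LUN, a second Job can read the file back from the same claim. This is a sketch; the Job name "read" and the checksum command are my additions, not part of the repo:

```yaml
---
apiVersion: batch/v1
kind: Job
metadata:
  # Hypothetical verification job
  name: read
spec:
  template:
    metadata:
      name: read
    spec:
      containers:
      - name: read
        image: registry.access.redhat.com/ubi8/ubi-minimal:latest
        # List the file and checksum it to confirm the write persisted
        command: ["sh","-c","ls -lh /mnt/pv/test.img && md5sum /mnt/pv/test.img"]
        volumeMounts:
        - mountPath: "/mnt/pv"
          name: test-volume
      volumes:
      - name: test-volume
        persistentVolumeClaim:
          claimName: my-file-storage-claim
      restartPolicy: Never
```

After it completes, oc logs job/read should show the 1 GiB test.img written earlier.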
Cleanup
With our new Synology CSI driver tested, we can clean up our test PVC and job using the following commands:
```shell
$ oc delete job/write
job.batch "write" deleted
$ oc delete pvc/my-file-storage-claim
```
If you set the reclaimPolicy to Delete in the section Defining the Synology Storage Class, the Persistent Volume will be automatically deleted. You can check the Synology UI to validate that the LUN has been destroyed.
Conclusion
If you are looking to expand the types of workloads in your OpenShift cluster and you have a Synology array in your data center, the Synology CSI driver offers a quick and flexible way to dynamically create iSCSI LUNs to meet that need.
- openshift
- tutorial
- iscsi
- synology
- csi
- kubernetes