Deploying Rook-Ceph on Google Kubernetes Engine (GKE)


Rook has become very popular since joining the Cloud Native Computing Foundation, offering a solution for persistent storage on Kubernetes. Rook orchestrates several storage solutions, but here we will be setting up Ceph, which has reached its first stable release.

I will take you through the requirements needed to run Rook on GKE, and how to set them up.

Step 1. Start a Kubernetes cluster with the Ubuntu image

Ceph requires the RBD kernel module, which is not present in GKE’s default Container-Optimized OS (COS), as mentioned in issues 2448 & 2456. Thus, we cannot use GKE’s default configuration; we need to override GKE to use Ubuntu images for the node pool. Boot up the cluster with the Ubuntu image.
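For example, such a cluster can be created from the command line; the cluster name, zone, and node count below are placeholder values, so adjust them to your project:

```shell
# Create a GKE cluster whose nodes run Ubuntu instead of COS,
# so the node kernel ships with the RBD module Ceph needs.
# "rook-demo" and the zone are hypothetical values.
gcloud container clusters create rook-demo \
  --image-type=UBUNTU \
  --num-nodes=3 \
  --zone=us-central1-a
```

The same image type can also be selected in the Cloud Console when creating the node pool.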


Step 2. Clone the Rook git repo and add the FlexVolume environment variable

Download the Rook release. Change the RELEASE variable value to download the release of your choice.

# download the Rook source archive from GitHub
RELEASE="1.0.3"
wget https://github.com/rook/rook/archive/v${RELEASE}.zip && unzip v${RELEASE}.zip && rm v${RELEASE}.zip
cd rook-${RELEASE}/cluster/examples/kubernetes/ceph

Rook uses FlexVolume to integrate with Kubernetes for performing storage operations. In GKE, the default FlexVolume plugin directory (the directory where FlexVolume drivers are installed) is read-only. So, the kubelet needs to be told to use a different FlexVolume plugin directory that is accessible and has read/write (rw) permission. This can be done by adding an environment variable in operator.yaml.

Configure the kubelet to use the /home/kubernetes/flexvolume directory by adding it to the operator container’s env section in operator.yaml:

- name: FLEXVOLUME_DIR_PATH
  value: "/home/kubernetes/flexvolume"

Step 3. Deploy Rook operator

Next, deploy the Rook system components, which includes the Rook agent running on each node in your cluster as well as Rook’s operator pod.

kubectl create -f common.yaml
kubectl create -f operator.yaml
# verify the rook-ceph-operator, rook-ceph-agent, and rook-discover
# pods are in the `Running` state before proceeding
kubectl -n rook-ceph get pod

Step 4. Deploy Rook cluster

Now that Rook’s operator, agent, and discover pods are running, we can create the Rook Ceph cluster.

Save the cluster spec as cluster-test.yaml:

apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    # For the latest ceph images, see
    image: ceph/ceph:v14.2.1-20190430
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3
  dashboard:
    enabled: true
  storage:
    useAllNodes: true
    useAllDevices: false
    # Important: Directories should only be used in pre-production
    # environments
    directories:
    - path: /var/lib/rook

Create the cluster:

kubectl create -f cluster-test.yaml
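It takes a few minutes for the cluster to come up. You can watch the pods in the rook-ceph namespace until the mon, mgr, and osd pods settle:

```shell
# Watch the Ceph pods come up; the mon, mgr, and osd pods
# should eventually reach the Running state.
kubectl -n rook-ceph get pod -w

# Optionally inspect the CephCluster resource itself.
kubectl -n rook-ceph get cephcluster rook-ceph
```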

Step 5. Deploy StorageClass

Now that the Rook-Ceph cluster is running, we will deploy a StorageClass so that we can request persistent volumes from Ceph.

Save it as storage-class.yaml

apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 3
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: ceph.rook.io/block
parameters:
  blockPool: replicapool
  clusterNamespace: rook-ceph
  fstype: xfs

Deploy it: kubectl apply -f storage-class.yaml
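Once applied, the new class should show up alongside GKE’s built-in one:

```shell
# List the StorageClasses; rook-ceph-block should appear
# next to GKE's default "standard" class.
kubectl get storageclass
```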

Step 6. Test PVC

Now test the Storage Class by requesting a Persistent Volume Claim.

Save it as pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-rook-ceph-test
spec:
  storageClassName: rook-ceph-block
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

Apply it with kubectl apply -f pvc.yaml, then watch until the PVC reaches the Bound state (kubectl get pvc pvc-rook-ceph-test).
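To confirm the volume is actually usable, you can mount the claim in a throwaway pod; the pod and container names here are hypothetical:

```shell
# Launch a busybox pod that mounts the PVC; if the pod reaches
# Running, Ceph has provisioned and attached the volume.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: rook-ceph-test-pod
spec:
  containers:
  - name: test
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: pvc-rook-ceph-test
EOF

kubectl get pod rook-ceph-test-pod
```

Delete the pod afterwards with kubectl delete pod rook-ceph-test-pod.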


Rook is now configured on GKE, and you can point your deployments at this Rook setup for persistent storage. If you want a more detailed setup, follow the full cluster example in the Rook documentation.
