The operator installs some additional components by default. These components are installed to show the full feature set of Piraeus; they can be disabled without affecting the other components.
LINSTOR supports Kubernetes volume snapshots, a feature that is currently in beta. To use it, you need to install a cluster-wide snapshot controller. This is done either by the cluster provider, or you can use the piraeus chart.

By default, the piraeus chart will install its own snapshot controller. This can lead to conflicts in some cases:

- the cluster already has a snapshot controller
- the cluster does not meet the minimum version requirement (Kubernetes >= 1.17)

In such cases, installation of the snapshot controller can be disabled:

```
--set csi-snapshotter.enabled=false
```
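Whether a cluster already ships a snapshot controller is usually visible from the snapshot CRDs and a running controller deployment. The following check is only a sketch; the deployment name and namespace depend on how the cluster was set up:

```
$ kubectl get crds volumesnapshots.snapshot.storage.k8s.io
$ kubectl get deployments --all-namespaces | grep -i snapshot-controller
```

If both already exist, disable the chart's own snapshot controller as shown above.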
To use snapshots, you first need to create a VolumeSnapshotClass:
```yaml
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
  name: my-first-linstor-snapshot-class
driver: linstor.csi.linbit.com
deletionPolicy: Delete
```

You can then use this snapshot class to create a snapshot from an existing LINSTOR PVC:
```yaml
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: my-first-linstor-snapshot
spec:
  volumeSnapshotClassName: my-first-linstor-snapshot-class
  source:
    persistentVolumeClaimName: my-first-linstor-volume
```

After a short wait, the snapshot will be ready:
```
$ kubectl describe volumesnapshots.snapshot.storage.k8s.io my-first-linstor-snapshot
...
Spec:
  Source:
    Persistent Volume Claim Name:  my-first-linstor-volume
  Volume Snapshot Class Name:      my-first-linstor-snapshot-class
Status:
  Bound Volume Snapshot Content Name:  snapcontent-b6072ab7-6ddf-482b-a4e3-693088136d2c
  Creation Time:                       2020-06-04T13:02:28Z
  Ready To Use:                        true
  Restore Size:                        500Mi
```

You can restore the content of this snapshot by creating a new PVC with the snapshot as its source:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-first-linstor-volume-from-snapshot
spec:
  storageClassName: linstor-basic-storage-class
  dataSource:
    name: my-first-linstor-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
```

Stork is a scheduler extender plugin for Kubernetes which allows a storage driver to give the Kubernetes scheduler hints about where to place a new pod so that it is optimally located for storage performance. You can learn more about the project on its GitHub page.
By default, the operator will install the components required for Stork and register a new scheduler called `stork` with Kubernetes. This new scheduler can be used to place pods close to their volumes.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  schedulerName: stork
  containers:
  - name: busybox
    image: busybox
    command: ["tail", "-f", "/dev/null"]
    volumeMounts:
    - name: my-first-linstor-volume
      mountPath: /data
    ports:
    - containerPort: 80
  volumes:
  - name: my-first-linstor-volume
    persistentVolumeClaim:
      claimName: "my-first-linstor-volume"
```

Deployment of the scheduler can be disabled using

```
--set stork.enabled=false
```
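If Stork is left enabled, you can check that a pod was actually placed by the `stork` scheduler by inspecting its spec. This assumes the busybox pod from the example above has been created:

```
$ kubectl get pod busybox -o jsonpath='{.spec.schedulerName}'
stork
```

Since `schedulerName: stork` is set explicitly in the manifest, this should print `stork`.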