Persistent volume management
When not in use, Domino project files are stored and versioned in the Domino blob store. When a Domino run is started from a project, the project's files are copied to a Kubernetes persistent volume that is attached to the compute node and mounted in the run.
Persistent Volume (PV)
A storage volume in a Kubernetes cluster that can be mounted to pods. Domino dynamically creates persistent volumes to provide local storage for active runs.
Persistent Volume Claim (PVC)
A request made in Kubernetes by a pod for storage. Domino uses these to correctly match a new run with either a new PV or an idle PV that has the project's files cached.
Idle Persistent Volume
A PV that was used by a previous run and is not currently in use. Idle PVs are either reused for a new run or garbage collected.
Storage Class
The Kubernetes method of defining the type, size, provisioning interface, and other properties of storage volumes.
When a user starts a new workspace or job, Domino will broker assignment of a new execution pod to the cluster. This pod will have an associated PVC which defines for Kubernetes what type of storage it requires. If an idle PV exists matching the PVC, Kubernetes will mount that PV on the node it assigns to host the pod, and the job or workspace will start. If an appropriate idle PV does not exist, Kubernetes will create a new PV according to the Storage Class.
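As a sketch, the PVC created for an execution pod might resemble the following manifest. All names, the Storage Class, and the size here are hypothetical illustrations, not Domino's actual values:

```yaml
# Hypothetical PVC for an execution pod. The real names, Storage Class,
# and size used by Domino are deployment-specific.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: run-example-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: domino-local   # assumed Storage Class name
  resources:
    requests:
      storage: 50Gi                # matches the default volume size of 50 GB
```

If an idle PV bound to a matching claim exists, Kubernetes binds it rather than provisioning a new volume through the Storage Class.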
When the user completes their workspace or job, the PV is unmounted and sits idle until it is either reused for the user's next run or garbage collected. By reusing PVs, users who are actively working in a project avoid repeatedly copying data from the blob store to a PV.
A run will only match with either a fresh PV or one previously used by the same project. PVs are never reused between projects.
Domino has configurable values to help you tune your cluster to balance performance with cost controls. The more idle volumes you allow, the more likely it is that users can reuse a volume and avoid copying project files from the blob store. However, this comes at the cost of keeping additional idle PVs.
By default, Domino will:
Limit the total number of idle PVs to 32. This can be adjusted by setting the following option in the central config:
Terminate any idle PV that has not been used in a certain number of days. This can be adjusted by setting the following option in the central config:
This value is expressed in days. The default value is empty, which means unlimited. A value of 7d will terminate any idle PV after seven days.
If a user's job fails unexpectedly, Domino preserves the volume so that data can be recovered. After a workspace or job ends, claimed PVs are placed into one of the following states:
If the run ends normally, the underlying PV will be available for future runs.
If the run fails, the underlying PV will not be eligible for reuse, and is held in this state to be salvaged.
Salvaged PVs are not reused automatically by future workspaces or jobs, but they can be manually mounted to a workspace to recover work.
By default, Domino will:
Limit the total number of salvaged PVs to 64. This can be adjusted by setting the following option in the central config:
Terminate any salvaged PV that has not been used in a certain number of days. This can be adjusted by setting the following option in the central config:
This value is expressed in days. The default value is seven days. A value of 14d will terminate any salvaged PV after fourteen days.
To recover a salvaged volume:
- Find the PV that was attached to your job or workspace; it is listed in the Deployment logs for that execution.
- Create a pod attached to the salvaged volume.
- Recover the files with your most convenient method (scp, the AWS CLI, kubectl cp, etc.).
This script performs Step 2 and provides the appropriate commands in its output. Remember to delete the PVC and PV when you are finished, otherwise these resources will continue to consume storage.
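As a sketch of Step 2, a pod like the following can be created to mount the salvaged volume. The pod name, image, and PVC name here are hypothetical; substitute the claim name found in your Deployment logs:

```yaml
# Hypothetical recovery pod; replace claimName with the real salvaged PVC.
apiVersion: v1
kind: Pod
metadata:
  name: pv-recovery
spec:
  containers:
    - name: recovery
      image: ubuntu:22.04
      command: ["sleep", "infinity"]   # keep the pod alive while copying files
      volumeMounts:
        - name: salvaged
          mountPath: /recovered
  volumes:
    - name: salvaged
      persistentVolumeClaim:
        claimName: salvaged-pvc-example   # assumed name, from the Deployment logs
```

Once the pod is running, files can be copied out with, for example, kubectl cp pv-recovery:/recovered ./recovered, after which the pod, PVC, and PV can be deleted.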
How do I see the current PVs in my cluster?
Run the following command to see all current PVs sorted by last-used:
kubectl get pv --sort-by='.metadata.annotations.dominodatalab.com/last-used'
How do I change the size of the storage volume for my jobs or workspaces?
You can set the volume size for new PVs by editing the following central config value:
Volume size in GB (default: 50)