Hardware tier best practices


Overview

Domino Hardware Tiers define Kubernetes requests and limits and link them to specific node pools. We recommend the following best practices.

  1. Accounting for overhead
  2. Isolating workloads and users using node pools
  3. Setting resource requests and limits to the same values



Accounting for overhead

When defining hardware tiers, you may need to leave room for the overhead required to manage a node and its executions. As a rule of thumb, node overhead is 1.5 cores and 2 GiB of RAM. Additional overhead per execution is about 1 core and 1 GiB.


Where does overhead go?

  1. Host OS, Docker, etc. (1 core and 1.5 GiB of RAM)
  2. Domino-specific management pods for logging, caching, etc. (0.5 cores and 0.5 GiB of RAM)
  3. Execution sidecars (1 core and 1 GiB of RAM)

More specifically, each node runs Domino-specific pods for logging and cache management. And each execution - a workspace, for example - runs sidecar containers that manage authentication and request routing, ensure files are in the right place, make sure dependencies get installed, and so on. Domino services and execution sidecars make CPU and memory requests that Kubernetes takes into account when scheduling execution pods.

If Domino is running on your own Kubernetes cluster, you may have additional overhead.
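
To see how much of a node's capacity is actually available for executions, you can inspect the node directly. The following is a minimal sketch assuming you have kubectl access to the cluster; <node-name> is a placeholder for a real node name.

  # Show the node's Capacity, Allocatable resources, and the requests of the
  # pods already scheduled on it.
  kubectl describe node <node-name>

  # Or pull just the allocatable CPU and memory:
  kubectl get node <node-name> -o jsonpath='{.status.allocatable.cpu}{"\n"}{.status.allocatable.memory}{"\n"}'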


When should I account for overhead?

Overhead is relevant if you want to define a hardware tier dedicated to one execution at a time per node, such as for a node with a single physical GPU. It is also relevant if you absolutely need to maximize node density.

More commonly, for smaller hardware tiers, overhead isn’t much of a concern.


Examples

An 8-core, 32-GiB node can accept either:

  1. a single execution using a hardware tier requesting 6.5 cores and 29 GiB of RAM, or
  2. three executions using a hardware tier requesting 2 cores and 8 GiB of RAM.

You could optimize the second hardware tier further to squeeze four simultaneous executions onto a node, but simpler may be better. If your users also use smaller hardware tiers, Kubernetes will do the “Tetris” required to soak up excess capacity on a node. So you could find a node running, for example, three executions using a hardware tier requesting 2 cores and 8 GiB of RAM each, plus two executions using a 0.5-core, 1 GiB hardware tier.

You can see which pods are running on a specific node by visiting the “Infrastructure” admin page and clicking on the name of the node. In the image below, there is a box around the execution pods. The other pods handle logging, caching, and other services.

../_images/pod-info.png

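If you prefer the command line and have kubectl access, a rough equivalent is to filter pods by the node they are scheduled on (<node-name> is a placeholder):

  # List every pod scheduled on a given node, across all namespaces.
  kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=<node-name>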




Isolating workloads and users using node pools

Node pools are defined by labels added to nodes in a specific format: dominodatalab.com/node-pool=<your-node-pool>. In the hardware tier form, you just need to include your-node-pool. You can name a node pool anything you like, but we recommend naming them something meaningful given the intended use.
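
If you manage node labels directly with kubectl, adding a node to a pool is a single command; this is a sketch with <node-name> as a placeholder (in cloud-managed clusters, node labels are often set in the node group or autoscaler configuration instead):

  # Add a node to a node pool by applying the Domino node-pool label.
  kubectl label node <node-name> dominodatalab.com/node-pool=your-node-pool

  # List nodes along with their node-pool assignment shown as a column.
  kubectl get nodes -L dominodatalab.com/node-pool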

Domino typically comes pre-configured with default and default-gpu node pools, with the assumption that most user executions will run on nodes in one of those pools. As your compute needs become more sophisticated, you may want to keep certain users separate from one another or provide specialized hardware to certain groups of users.

So if there’s a data science team in New York City that needs a specific GPU machine that other teams don’t need, you could use the following label for the appropriate nodes: dominodatalab.com/node-pool=nyc-ds-gpu. In the hardware tier form, you would specify nyc-ds-gpu. To ensure only that team has access to those machines, create a NYC organization, add the correct users to the organization, and give that organization access to the new hardware tier that uses the nyc-ds-gpu node pool label.
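
To double-check which machines that hardware tier can schedule onto, you could list the nodes carrying the label (again assuming kubectl access):

  # Show only the nodes in the nyc-ds-gpu node pool.
  kubectl get nodes -l dominodatalab.com/node-pool=nyc-ds-gpu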




Setting resource requests and limits to the same values

With Kubernetes, resource limits must be >= resource requests. So if your memory request is 1000 GiB, your limit must be >= 1000 GiB. Setting a limit greater than the request can be useful - there are cases where allowing bursts of CPU or memory helps - but it is also dangerous. Kubernetes may evict a pod that is using more resources than it initially requested. For Domino workspaces or jobs, this would cause the execution to be terminated.

It is for this reason that we recommend setting memory and CPU requests equal to limits. In this case, Python and R cannot allocate more memory than the limit, and execution pods will not be evicted.
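
In plain Kubernetes terms, a hardware tier configured this way translates to container resources where requests equal limits. The snippet below is a generic illustration with hypothetical values, not Domino's exact pod spec:

  # Requests equal to limits; a pod whose containers are all configured this way
  # gets the Guaranteed QoS class and is the last to be evicted under pressure.
  resources:
    requests:
      cpu: "2"
      memory: 8Gi
    limits:
      cpu: "2"
      memory: 8Gi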

On the other hand, if the limit is higher than the request, it is possible for one user's execution to consume resources that another user's execution pod should be able to access. This is the “noisy neighbor” problem that you may have experienced in other multi-user environments. But instead of allowing the noisy neighbor to degrade performance for other pods on the node, Kubernetes will evict the offending pod when necessary to free up resources.

User data on disk will not be lost, because Domino stores user data on a persistent volume that can be reused. But anything in memory will be lost and the execution will have to be restarted.