Domino 4.6

On-Demand Dask Overview

Dask is a distributed computing library that integrates tightly with the Python ecosystem and enables multi-core and distributed parallel execution on larger-than-memory datasets. Dask makes it simple to scale a single-machine Python workload out to a multi-machine cluster with little or no code change, even if the application was not originally written with Dask in mind.

Dask offers the following:

  • Low-level scheduling and execution APIs: Dask provides a set of APIs and facilities for scheduling and parallel execution of task graphs. This execution engine powers the high-level collections mentioned below, but it can also be used to develop and execute custom, user-defined distributed workloads. These low-level capabilities are an alternative to direct use of Python's threading or multiprocessing libraries, or to other task scheduling systems such as Luigi or IPython Parallel.

    For additional information, see Dask Delayed and Dask Futures.

  • High-level distributed libraries: Dask provides distributed equivalents of popular Python data structures such as NumPy arrays, Python lists, and pandas DataFrames. The Dask equivalents are API-compatible and can be used as drop-in replacements when you need to work with large datasets. Additionally, Dask provides similar compatibility with scikit-learn and integrates with other popular model frameworks to enable scalable training and prediction on large models and datasets.

    For additional information, see Dask Arrays, Dask DataFrames, and Dask ML.
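The low-level API can be sketched with `dask.delayed` (a minimal, runnable example; the function names are illustrative):

```python
import dask

# Wrap ordinary Python functions as lazy tasks; Dask assembles a task
# graph and executes independent branches in parallel.
@dask.delayed
def inc(x):
    return x + 1

@dask.delayed
def add(a, b):
    return a + b

# Nothing runs yet -- this only builds the graph.
total = add(inc(1), inc(2))

# compute() triggers parallel execution (threaded scheduler by default).
result = total.compute()
print(result)  # 5
```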

Orchestrate Dask on Domino

Domino offers the ability to dynamically provision and orchestrate a Dask cluster directly on the infrastructure backing the Domino instance. This allows Domino users to get quick access to Dask without having to rely on their IT team.

When you start a Domino workspace for interactive work or a Domino job for batch processing, Domino creates a containerized Dask cluster, manages it for you, and makes it available to your execution.
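Inside such an execution, work is typically submitted through a `dask.distributed` `Client`. The sketch below uses a local in-process cluster so it runs anywhere; on Domino, the client would instead attach to the scheduler endpoint of the provisioned cluster (the address shown in the comment is a placeholder, not a real endpoint):

```python
from dask.distributed import Client

# Local in-process cluster as a stand-in; on a provisioned cluster you
# would pass the scheduler address instead, e.g.
# Client("tcp://<scheduler-host>:8786").
client = Client(processes=False)

# Submit a task to the cluster and wait for its result.
future = client.submit(sum, range(10))
result = future.result()
print(result)  # 45
client.close()
```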

Suitable use cases

Domino on-demand Dask clusters are suitable for the following workloads:

  • Working with large datasets: Dask excels at scaling up Python data analysis or transformation code when the data to be processed exceeds the resources a single machine can provide. With an API compatible with commonly used Python libraries, Dask is a suitable tool for Python-first data scientists who have R&D Python code for data cleansing, manipulation, and advanced analytics that needs to be scaled to a much larger production dataset. This can be done with minimal modification and without switching to a different ecosystem like Spark.

  • Distributed training: Dask provides a simple way to parallelize existing scikit-learn models as a drop-in replacement. This is a great fit for models with a moderate memory footprint that are CPU/GPU-bound, with many individual operations that can be parallelized beyond the limits of a single machine. For larger memory-bound workloads, there are Dask-specific ML libraries (for example, Parallel Meta-estimators, Incremental Hyperparameter Optimizers) whose algorithms are specifically optimized to work with the Dask scalable NumPy and DataFrame equivalents.

  • Custom distributed computations: Lastly, in cases where the available Dask machine learning algorithms or large-scale data representations are insufficient, the low-level Dask scheduling APIs can be used to build custom algorithms that benefit from parallelism. Developers remain in control of the business logic while Dask handles task dependencies, network communication, workload resilience, diagnostics, and so on.
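As a concrete illustration of the distributed-training use case, scikit-learn's joblib-based parallelism can be pointed at a Dask cluster. This is a sketch only: a local in-process cluster stands in for a Domino-provisioned one, and the dataset and model are toy examples.

```python
import joblib
from dask.distributed import Client
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Local cluster standing in for an on-demand Dask cluster.
client = Client(processes=False)

# Toy classification problem and a forest whose trees can be fit in parallel.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)
model = RandomForestClassifier(n_estimators=20, random_state=0)

# The "dask" joblib backend ships the per-tree fits to Dask workers
# instead of running them in local threads or processes.
with joblib.parallel_backend("dask"):
    model.fit(X, y)

score = model.score(X, y)
client.close()
```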

Copyright © 2022 Domino Data Lab. All rights reserved.