
Run Local Spark on a Domino Executor

Use a local Spark cluster

Typically, users interested in Hadoop and Spark have data volumes and workloads that demand the power of cluster computing. However, some users choose Spark primarily for its expressive API, even when their data volumes are small or medium. Because Domino lets you run code on powerful VM infrastructure, with up to 32 cores on AWS, you can use Domino to create a local Spark cluster and easily parallelize your tasks across all of those cores.

Configure Spark in Local mode

To configure Spark integration in Local mode, open your project and go to “Project settings.” Under “Integrations”, choose the “Local mode” option for Apache Spark, then click Save.
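Conceptually, Local mode amounts to Spark configuration along the following lines; this is an illustrative `spark-defaults.conf` fragment, not something you need to write yourself, since Domino manages the setting for you:

```
# spark-defaults.conf (illustrative)
# Run Spark locally, using all available cores on the executor.
spark.master    local[*]
```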

Copyright © 2022 Domino Data Lab. All rights reserved.