To derive business insights from your data science work, you need to deploy the trained model to an environment where it can be invoked. This process often demands complex DevOps skills and coordination across Data Science, Engineering, and IT teams. Domino makes deployment seamless, so your teams can focus on deriving insights and making critical business decisions.
Domino simplifies the process of deploying models regardless of the scenario in which they are deployed:
- Host Models as REST APIs: Host models trained on Domino, or imported from outside, as REST interfaces for interactive, low-latency use cases. For more complex processing with unstructured data and large payloads, models are hosted as REST APIs with asynchronous prediction interfaces.
- Use Batch Scoring: Domino’s jobs infrastructure helps you deploy models that perform predictions in bulk on distributed compute environments.
- Export to SageMaker: To take advantage of large-scale compute in the cloud, Domino helps package your models and set them up for deployment in Amazon SageMaker.
- Integrate with CI/CD Workflows: Learn how to support sudden bursts in traffic while adhering to strict SLA, uptime, security, legal, or loyalty requirements.
- Export to NVIDIA Fleet Command: To run inference at edge networks, Domino helps package your models and set them up for deployment in NVIDIA Fleet Command.
- Use Models in Snowflake Queries: For bulk inference scenarios where it is more suitable to move the model to where the data lives, Domino helps package and deploy your models into Snowflake.
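To illustrate the REST-hosting option above, the sketch below assembles a prediction request for a Domino-hosted model endpoint. The host, model ID, token, and feature names are placeholders, not real values; copy the actual endpoint URL and payload format from your model's overview page in Domino.

```python
# Minimal sketch of calling a Domino-hosted model's REST interface.
# DOMINO_URL, MODEL_ID, API_TOKEN, and the feature names are placeholders.
import json

DOMINO_URL = "https://domino.example.com"   # placeholder host
MODEL_ID = "5a1b2c3d4e"                     # placeholder model ID
API_TOKEN = "YOUR_API_TOKEN"                # placeholder credential


def build_request(features: dict):
    """Assemble the endpoint URL, headers, and JSON body for one prediction."""
    url = f"{DOMINO_URL}/models/{MODEL_ID}/latest/model"
    headers = {"Content-Type": "application/json"}
    body = {"data": features}  # input features keyed by name
    return url, headers, body


url, headers, body = build_request({"age": 42, "income": 55000})
print(url)
print(json.dumps(body))
# To actually call the endpoint (requires network access and a valid token),
# send an HTTP POST, e.g. with the `requests` library:
#   resp = requests.post(url, headers=headers, json=body,
#                        auth=(API_TOKEN, API_TOKEN))
#   print(resp.json())
```

The request itself is deliberately left as a comment so the sketch runs without network access; only the URL and payload assembly are executed.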
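For the batch-scoring option, a Domino job runs an ordinary script on a chosen compute environment. The sketch below shows the general shape of such a script: it reads input rows, scores them in chunks so the job can handle inputs far larger than a single request payload, and writes predictions out. The linear `score` function is a stand-in for a trained model you would load from an artifact.

```python
# Sketch of a chunked batch-scoring script, as might run in a Domino job.
# The model and data are stand-ins; a real job would load a trained model
# artifact and read from a dataset file.
import csv
import io


def score(row: dict) -> float:
    # Placeholder model: a simple linear combination of two features.
    return 0.5 * float(row["x1"]) + 0.25 * float(row["x2"])


def batch_score(reader, writer, chunk_size: int = 1000):
    """Score input rows in fixed-size chunks and write (id, prediction) rows."""
    chunk = []
    for row in reader:
        chunk.append(row)
        if len(chunk) == chunk_size:
            for r in chunk:
                writer.writerow([r["id"], score(r)])
            chunk = []
    for r in chunk:  # flush the final partial chunk
        writer.writerow([r["id"], score(r)])


# In-memory CSV data standing in for dataset files on disk.
src = io.StringIO("id,x1,x2\n1,2,4\n2,10,8\n")
dst = io.StringIO()
batch_score(csv.DictReader(src), csv.writer(dst, lineterminator="\n"))
print(dst.getvalue())
# → 1,2.0
#   2,7.0
```

Chunking keeps memory bounded regardless of input size, which is what makes this pattern suitable for bulk predictions on distributed compute.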