Data flow in Domino

There are three ways for data to flow into and out of a Domino Run.

1) Domino File Store

Each Domino Run takes place in a project, and the files for the active revision of the project are automatically loaded into the local execution volume for a Job or Workspace according to the specifications of the Domino Service Filesystem. These files are retrieved from the Domino File Store, and any changes to these files are written back to the Domino File Store as a new revision of the project’s files.
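For example, here is a minimal sketch of reading a project file and writing a result back during a Run. It assumes the Run's working directory is the root of the loaded project files; the actual mount point, along with the `data/input.csv` and `results/` paths, are placeholders for illustration:

```python
import os

import pandas as pd

# Assumption: the Run's working directory is the root of the project files
# loaded from the Domino File Store; the real mount point depends on your
# Domino configuration.
project_root = os.getcwd()

# Read a (hypothetical) input file that is part of the project's active revision.
df = pd.read_csv(os.path.join(project_root, "data", "input.csv"))

# Write a result file. Changed or new files under the project are written
# back to the Domino File Store as a new revision of the project's files.
os.makedirs(os.path.join(project_root, "results"), exist_ok=True)
df.describe().to_csv(os.path.join(project_root, "results", "summary.csv"))
```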

2) Domino Datasets

Domino Runs may optionally be configured to mount Domino Datasets for input or output. Datasets are network volumes mounted in the execution environment. Mounting an input Dataset lets a Job or Workspace start quickly while still having access to large quantities of data, because no data is transferred to the local execution volume until user code reads from the mounted volume. Any data written to an output Dataset is saved by Domino as a new snapshot.
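For example, a sketch of reading from an input Dataset and writing results to an output Dataset. The mount paths and file names below are placeholders; actual paths depend on how Datasets are attached to the Run:

```python
import pathlib

import pandas as pd

# Assumption: hypothetical mount paths for one input and one output Dataset;
# the real paths depend on the Dataset configuration of this Run.
input_dataset = pathlib.Path("/domino/datasets/raw-events")
output_dataset = pathlib.Path("/domino/datasets/daily-aggregates")

# Data is transferred from the network volume only when this read executes,
# which is why the Run can start before any data has been copied locally.
events = pd.read_parquet(input_dataset / "events.parquet")

# Anything written under the output Dataset mount is saved by Domino
# as a new snapshot of that Dataset.
daily_counts = events.groupby("day").size()
daily_counts.to_csv(output_dataset / "daily_counts.csv")
```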

3) External data systems

User code running in Domino can use third-party drivers and packages to interact with any external databases, APIs, and file systems that the Domino-hosting cluster can connect to. Users can read from and write to these external systems, and they can import data from such systems into Domino by saving files to their project or writing files to an output Dataset.
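For example, a sketch of pulling data from an external database into Domino using ordinary third-party packages. The connection string, table, and output file are placeholders for a hypothetical external PostgreSQL system that the cluster can reach:

```python
import pandas as pd
import sqlalchemy

# Assumption: a hypothetical external PostgreSQL database reachable from the
# Domino-hosting cluster; host, credentials, and table names are placeholders.
engine = sqlalchemy.create_engine(
    "postgresql://analyst:secret@warehouse.example.com:5432/sales"
)

# Read from the external system with a standard third-party driver.
orders = pd.read_sql("SELECT * FROM orders WHERE region = 'EMEA'", engine)

# Import the data into Domino by saving it as a project file
# (or by writing it under an output Dataset mount instead).
orders.to_csv("emea_orders.csv", index=False)
```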

The diagram below shows the series of operations that happens when a user starts a Job or Workspace in Domino, and illustrates when and how various data systems can be used.

[Diagram: data-flow.png]