The following versions have been validated with Domino 4.6. Other versions may also be compatible but are not guaranteed.
- Kubernetes 1.18–1.20
- Dask on Domino - Domino introduces the ability to dynamically provision and orchestrate a Dask cluster directly on the infrastructure backing the Domino instance (a minimal connection sketch follows this list). To learn more, see Dask on Domino.
- Branch selection in Git repository file browser - Git-based project users can now select from any branch and the latest 10 commits using drop-down lists, then browse the directories of linked Git repositories natively from the Code section on the project page. To learn more, see the documentation on Git-based projects.
- GitLab integration - GitLab users can now create a new GitLab repository during the Git-based project creation process. To learn more, see the updated steps 8-9 in the documentation To create a Git-based project.
- Domino Standard Environment - The Domino Analytics Distribution has been replaced by the Domino Standard Environment (DSE). The new DSE approach provides multiple base images for different use cases. The default DSE is published on Quay, and you can find the full collection of Domino-published environments in the Compute Environment Catalog. Existing environments are not updated to use the new DSE on upgrade. See Domino Standard Environments.
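As a rough illustration only, the following minimal sketch shows a workspace process attaching to an already-provisioned Dask cluster with the standard dask.distributed client and running a small smoke-test computation. The scheduler address is hypothetical; Domino reports the real address in the cluster details of your workspace, and the documented setup in Dask on Domino is authoritative.

    # Minimal sketch, not Domino's own tooling: attach to a provisioned Dask
    # cluster from a workspace and verify that it responds. The scheduler
    # address below is a placeholder; use the address Domino reports.
    from dask.distributed import Client
    import dask.array as da

    client = Client("tcp://dask-scheduler:8786")  # hypothetical address
    print(client)  # shows workers, threads, and memory available

    # Run a small distributed computation as a smoke test.
    x = da.random.random((10_000, 10_000), chunks=(1_000, 1_000))
    print(x.mean().compute())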
Domino Model Monitor Enhancements
- Co-installation with the core Domino platform - Deploy an instance of the Model Monitor in a cloud (AWS, Azure, GCP) or on-premises environment alongside the core Domino platform. Single sign-on enables seamless authentication and access to both platforms. To enable the Model Monitor at install time, see the new Domino configuration settings.
- Enterprise-scale model monitoring - Compute data drift and model quality metrics at enterprise scale for hundreds of models with hundreds of features and billions of predictions.
- Comprehensive model quality metrics - Get comprehensive model quality coverage with varied reporting and metrics, including accuracy, precision, recall, F1, AUC ROC, log loss, Gini norm, confusion matrix, and per-output-class breakdowns (a reference sketch of these metrics follows this section).
- Integration with high-capacity data storage - Ingest data for model monitoring from Amazon S3 and from Hadoop Compatible File Systems (HCFS), including Azure Blob Store, Azure Data Lake (Gen1 and Gen2), Google Cloud Storage, and HDFS.
To get started with Domino Model Monitor, see Model Monitoring.
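The model quality metrics listed above are standard classification measures. Purely as a reference for what each metric means (this is not the Model Monitor's implementation), they correspond to well-known scikit-learn functions, with the Gini norm commonly taken as 2 × AUC − 1:

    # Reference only: standard scikit-learn equivalents of the metrics named
    # above, computed on toy data. Not the Model Monitor implementation.
    from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                                 f1_score, roc_auc_score, log_loss,
                                 confusion_matrix)

    y_true = [0, 1, 1, 0, 1, 0, 1, 1]                   # toy ground-truth labels
    y_pred = [0, 1, 0, 0, 1, 0, 1, 1]                   # toy predicted labels
    y_prob = [0.2, 0.9, 0.4, 0.1, 0.8, 0.3, 0.7, 0.6]   # toy predicted probabilities

    print("accuracy :", accuracy_score(y_true, y_pred))
    print("precision:", precision_score(y_true, y_pred))
    print("recall   :", recall_score(y_true, y_pred))
    print("F1       :", f1_score(y_true, y_pred))
    auc = roc_auc_score(y_true, y_prob)
    print("AUC ROC  :", auc)
    print("Gini norm:", 2 * auc - 1)                    # common normalized-Gini definition
    print("log loss :", log_loss(y_true, y_prob))
    print("confusion matrix:\n", confusion_matrix(y_true, y_pred))
    # Per-output-class breakdowns: pass average=None for per-class values.
    print("per-class precision:", precision_score(y_true, y_pred, average=None))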
- In RStudio workspaces, the default home directory is now set to /mnt/code for Git-based projects and /mnt for Domino File System (DFS) projects.
- Improved handling of the case when a project owner without the Practitioner role attempts to publish an app. Previously this could leave the app stuck in a “Never Started” state; now the attempt is cleanly denied.
- Updated the guidelines for installing the python-domino library in the egg format.
- Allowed execution of .ipynb files that have spaces in the filename.
- Fixed an issue with the Model API generating exceptions when the model is hosted outside Domino. Model API applications deployed via Export to External Registry now run cleanly.
- Fixed caching issues that caused deleted folders to reappear in subsequent runs, and folders from cloned Git repositories to reappear after the action to stop and discard changes.
- Aligned the progress output for a pending Domino workspace to report the progress as 'Pulling' until all necessary images are pulled, instead of prematurely displaying 'Running'.
- Fixed a synchronization issue in starting runs (such as scheduled jobs) that was causing a small percentage of runs to error out with the message “Failed to create pod sandbox” in the execution log.
- Fixed a CLI issue on macOS Big Sur with synchronization of deleted files.
- Improved the performance of Models to handle multiple simultaneous queries without long queueing times.
- Enhanced error messaging in case of environment build failure due to Dockerfile parsing errors.
- Fixed JWT credential propagation for Domino installations on OpenShift.
- Improved the Admin usage page to handle a large number of runs in the system without becoming unresponsive or displaying an error message.
- Fixed an issue where, on a heavily loaded Domino, stopping a workspace sometimes resulted in a 502 error code shown in the notebook iframe of the workspace session view.
- For executions with clusters, improved reporting of the cluster hardware tier in the execution details (Job details, workspace details) to consistently show the hardware tier name rather than the ID.
- Improved the workspaces view to show all workspaces after the user stops any given workspace, rather than showing only the stopped workspace.
- Optimized loading of the Administrator page that shows datasets and snapshots, to cut down on the page loading time and prevent potential timeouts.
- Fixed a shared dataset listing issue where multiple datasets with the same project name were not being displayed.
- Improved handling of invalid credentials during creation of a Git-based project to notify the user of the problem.
- If client.distributed.performance_report is invoked more than once in a Jupyter notebook, Dask performance reports may fail, as described in https://github.com/dask/distributed/issues/3858 (a sketch of the affected pattern follows this list).
- When running a workspace on the Domino Standard Environment (DSE), performance degradation can occur on bigger workloads and scripts may take longer to complete. In this case, the Domino Minimal Environment (DME) is recommended instead of the DSE.
- After upgrading Kubernetes, “X509 certificate signed by unknown authority” errors appear and Domino fails to deploy workspaces, jobs, apps, models, and so on. This can occur if you upgrade to a Kubernetes version where Docker is not the default runtime. Domino depends on some Docker functionality (for managing certificate trust) that is not available in containerd without special configuration. See the cluster requirements for important notes about compatibility with Kubernetes 1.20 and above, and contact your Domino customer success team for guidance.
- If an imported project or Git repository is added between workspace restarts, the system creates empty directories.
- If you create a subfolder in a dataset, then navigate into the subfolder and click Take Snapshot > Include all files, the snapshot status hangs.
- If you try to access a shared dataset that you don’t have access to, you receive a 403 error.
- When a dataset has more than one subfolder, if you initiate a snapshot from a folder other than the root of the dataset, the snapshot is incomplete.
- A metrics explosion is causing the nucleus Prometheus endpoint to take a long time to service requests, so not all metrics from nucleus are being collected by the Prometheus server.
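For reference, the performance-report issue above is triggered by generating more than one Dask performance report in the same Jupyter notebook session. A minimal sketch of that pattern, using the dask.distributed performance_report context manager (the scheduler address is hypothetical):

    # Illustration of the affected pattern (not a fix): two performance
    # reports in one notebook session. The second invocation is where
    # failures have been observed (see dask/distributed#3858).
    from dask.distributed import Client, performance_report
    import dask.array as da

    client = Client("tcp://dask-scheduler:8786")  # hypothetical address
    x = da.random.random((5_000, 5_000), chunks=(1_000, 1_000))

    # First report completes as expected.
    with performance_report(filename="report-1.html"):
        x.sum().compute()

    # A second report in the same notebook may fail per the issue above.
    with performance_report(filename="report-2.html"):
        (x * 2).mean().compute()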