This topic describes how to connect to Snowflake from Domino.
You must have network connectivity between Snowflake and your Domino deployment.
To use Snowflake code integrations, such as Snowpark, you must agree to the Snowflake third-party terms. To agree to these terms, you must have a Snowflake account with the ORGADMIN role. If you don't have access to a Snowflake account with the ORGADMIN role, submit a Snowflake support ticket.
- Use the Snowflake Python connector (snowflake-connector-python).

  Use the following Dockerfile instruction to install snowflake-connector-python and its dependencies in your environment:
USER root
RUN apt-get update && apt-get install -y libssl-dev libffi-dev && \
    pip install -U pip && pip install --upgrade snowflake-connector-python
USER ubuntu
  If you encounter an error due to your Ubuntu version, use the following Dockerfile instruction instead:
USER root
RUN pip install -U pip && pip install --upgrade snowflake-connector-python
USER ubuntu
- Set the following Domino environment variables to store secure information about your Snowflake connection:
  - SNOWFLAKE_USER
  - SNOWFLAKE_PASSWORD
  - SNOWFLAKE_ACCOUNT
See Secure Credential Storage to learn more about Domino environment variables.
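As a sketch, the three environment variables above can be collected into keyword arguments for the connector with a small helper (the helper name is illustrative, not part of the connector's API):

```python
import os

def snowflake_connection_args():
    """Read Snowflake credentials from Domino environment variables.

    Raises KeyError if any variable is unset, surfacing a
    misconfiguration early instead of a failed login later.
    """
    return {
        "user": os.environ["SNOWFLAKE_USER"],
        "password": os.environ["SNOWFLAKE_PASSWORD"],
        "account": os.environ["SNOWFLAKE_ACCOUNT"],
    }

# The resulting dict can be passed straight to the connector:
# snowflake.connector.connect(**snowflake_connection_args())
```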
- See Using the Python Connector for information about how to use the package. The following is an example:
import snowflake.connector
import os

# Gets the version
ctx = snowflake.connector.connect(
    user=os.environ['SNOWFLAKE_USER'],
    password=os.environ['SNOWFLAKE_PASSWORD'],
    account=os.environ['SNOWFLAKE_ACCOUNT']
)
cs = ctx.cursor()
try:
    cs.execute("SELECT current_version()")
    one_row = cs.fetchone()
    print(one_row[0])
finally:
    cs.close()
    ctx.close()
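The try/finally bookkeeping above can also be expressed with `contextlib.closing`, which calls `.close()` on any object when the `with` block exits. The sketch below uses a stand-in cursor class so it runs without a live connection; the same pattern applies unchanged to the connector's cursor and connection objects:

```python
from contextlib import closing

class FakeCursor:
    """Stand-in for a snowflake.connector cursor (illustrative only)."""
    def __init__(self):
        self.closed = False
    def execute(self, sql):
        return self
    def close(self):
        self.closed = True

cursor = FakeCursor()
with closing(cursor) as cs:
    cs.execute("SELECT current_version()")
# closing() guarantees cs.close() ran, even if execute() raised
```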
Upload bulk data
The process for bulk uploads is as follows:

- Write your dataframe to a .csv file.
- Upload the .csv file to a Snowflake table stage (or another stage).
- Copy the data from the uploaded .csv file in the stage into the database table.
- Remove the file from the stage.
For example:
# Note the use of a vertical bar (|) as the separator instead of a comma
my_dataframe.to_csv('/mnt/results/my-data-file.csv',
                    index=False, sep="|")
cs.execute("PUT file:///mnt/results/my-data-file.csv @%my_table")
sfStatement = """COPY INTO my_table
                 file_format = (type = csv
                 field_delimiter = '|' skip_header = 1)"""
cs.execute(sfStatement)
# PUT compresses the file to .gz by default; remove it from the stage when done
cs.execute("REMOVE @%my_table/my-data-file.csv.gz")
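The three stage commands can be generated from the file path and table name, which keeps the delimiter and the compressed filename consistent across the PUT, COPY INTO, and REMOVE steps. The helper below is a sketch (its name and structure are illustrative, not part of the connector); it assumes PUT's default gzip compression:

```python
import os

def bulk_load_statements(csv_path, table, delimiter="|"):
    """Build the PUT / COPY INTO / REMOVE statements for a
    table-stage bulk load, mirroring the steps shown above.
    PUT gzips the file by default, so REMOVE targets the .gz name."""
    filename = os.path.basename(csv_path)
    return [
        f"PUT file://{csv_path} @%{table}",
        f"COPY INTO {table} file_format = (type = csv "
        f"field_delimiter = '{delimiter}' skip_header = 1)",
        f"REMOVE @%{table}/{filename}.gz",
    ]

# Each statement can then be run in order with cs.execute(stmt)
```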
You can also use generic Python JDBC or ODBC tools to connect to Snowflake. However, they are not specialized for Snowflake, can have inferior performance, and require more time to set up.
See JDBC Driver and ODBC Driver for more information about JDBC and ODBC connections.
- After connecting to your Data Source, learn how to Use Data Sources.
- Share this Data Source with your collaborators.