upload-snowflake prevent indexing

Hello, when I use the upload-snowflake command, is there a way to tell Domo not to index? When I load data from Snowflake to Domo, I load based on partitions. Since I load data in this way, I'm thinking that I don't need indexing. Or maybe I'm wrong? Thanks!

Best Answer

  • GrantSmith
    GrantSmith Coach
    Answer ✓

    If you want to skip indexing when using the Java CLI's upload-snowflake command, you can pass the -x (--noIndex) flag.

    > help upload-snowflake
    
    upload-snowflake: The "upload-snowflake" command creates a JDBC connection with a Snowflake database, pulls data in accordance
    with command options provided by the user, unloads the CSV file into an S3 bucket, and then uploads the data
    from S3 to a Domo DataSet.
    
    NOTE: You will need to provide the appropriate Snowflake JDBC driver & connection string.
    
    Requirements: 
    1. Get the Snowflake and S3 credentials.
    2. Get the required Snowflake JDBC jar file.
       https://docs.snowflake.com/en/user-guide/jdbc-download.html# 
    3. Create a correct Snowflake JDBC connection string.
       jdbc:snowflake://<account name>.snowflakecomputing.com
    
    Exit Status: '0' on success and '1' on failure.
    
    Examples:
      Unloads data from Snowflake to S3 and then uploads the data from S3 to Domo.
    
        upload-snowflake --s3Bucket myS3Bucket --s3Key myS3AccessKey --s3Secret myS3SecretKey --warehouse mySnowflakeWarehouse --db mySnowflakeDatabase --schema mySnowflakeSchema --table EMPLOYEE --id b60ebcb4-0b99-4b1a-8e73-5a3796b03fae --jar snowflake-jdbc-3.10.2.jar --jdbcUrl jdbc:snowflake://<account name>.snowflakecomputing.com/ --jdbcUser mySnowflakeUser --jdbcPassword mySnowflakePassword --prefix myS3FilePrefix --s3Region us-east-1
    
      Unloads data from Snowflake to S3 and then uploads the data from S3 to Domo in append mode.
    
        upload-snowflake --s3Bucket myS3Bucket --s3Key myS3AccessKey --s3Secret myS3SecretKey --warehouse mySnowflakeWarehouse --db mySnowflakeDatabase --schema mySnowflakeSchema --table EMPLOYEE --id b60ebcb4-0b99-4b1a-8e73-5a3796b03fae --jar snowflake-jdbc-3.10.2.jar --jdbcUrl jdbc:snowflake://<account name>.snowflakecomputing.com/ --jdbcUser mySnowflakeUser --jdbcPassword mySnowflakePassword --prefix myS3FilePrefix --s3Region us-east-1 --append
    
      Unloads data from Snowflake to S3 and then uploads the data from S3 to Domo and deletes the S3 artifacts after uploading to Domo.
    
        upload-snowflake --s3Bucket myS3Bucket --s3Key myS3AccessKey --s3Secret myS3SecretKey --warehouse mySnowflakeWarehouse --db mySnowflakeDatabase --schema mySnowflakeSchema --table EMPLOYEE --id b60ebcb4-0b99-4b1a-8e73-5a3796b03fae --jar snowflake-jdbc-3.10.2.jar --jdbcUrl jdbc:snowflake://<account name>.snowflakecomputing.com/ --jdbcUser mySnowflakeUser --jdbcPassword mySnowflakePassword --prefix myS3FilePrefix --s3Region us-east-1 --cleanup
    
    
    Related commands: create-snowflake-unload-account, create-snowflake-unload-dataset, update-snowflake-unload-dataset, get-snowflake-unload-accounts
      usage:
        -a,--append                             Append to existing data - This flag
                                                is REQUIRED when doing Upserts or
                                                Partitions
        -arn,--s3Arn <ARN>                      S3 arn
        -b,--s3Bucket <BUCKET>                  S3 bucket - Required
        -c,--cleanup                            This flag indicates S3 artifacts
                                                should be deleted after the upload
                                                to Domo
        -db,--db <DB>                           Snowflake database name - Required
        -f,--s3Folder <FOLDER>                  S3 folder name
        -i,--id <ID>                            The Domo DataSet id to upload
                                                snowflake data to - Required
        -j,--jar <JAR>                          Snowflake JDBC jar file name -
                                                Required
        -k,--s3Key <KEY>                        S3 key - Required
        -part,--partition <PARTITION>           Partition tag - This option is only
                                                supported when using the APPEND
                                                parameter
        -pass,--jdbcPassword <PASSWORD>         Snowflake JDBC connection password -
                                                Required
        -pfx,--prefix <PREFIX>                  Prefix of S3 files - Required
        -r,--s3Region <REGION>                  S3 region - Required
        -s,--s3Secret <SECRET>                  S3 secret - Required
        -schema,--schema <SCHEMA>               Snowflake schema name - Required
        -t,--table <TABLE>                      Snowflake table name - Required
        -tx,--threads <THREADS>                 Thread pool size - Default: 10
        -u,--jdbcUser <USER>                    Snowflake JDBC connection user -
                                                Required
        -url,--jdbcUrl <URL>                    Snowflake JDBC connection URL -
                                                Required
        -ut,--uploadTimeout <UPLOAD_TIMEOUT>    Upload timeout - Default: 2880
                                                milliseconds
        -w,--warehouse <WAREHOUSE>              Snowflake warehouse - Required
        -x,--noIndex                            This flag indicates data should not
                                                index after upload
    
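    For a partition-based load like the one described in the question, the flags above can be combined: --append (which the usage notes say is required for partitions), --partition to tag the load, and -x/--noIndex to skip indexing. This is a sketch only; the bucket, credentials, dataset ID, and partition tag are the placeholder values from the examples above, and the partition tag format is an assumption.

    ```shell
    # Sketch: upload a single partition without triggering an index after upload.
    # All credential/ID values are placeholders; --append is required when using --partition.
    upload-snowflake \
      --s3Bucket myS3Bucket --s3Key myS3AccessKey --s3Secret myS3SecretKey \
      --s3Region us-east-1 --prefix myS3FilePrefix \
      --warehouse mySnowflakeWarehouse --db mySnowflakeDatabase \
      --schema mySnowflakeSchema --table EMPLOYEE \
      --jar snowflake-jdbc-3.10.2.jar \
      --jdbcUrl "jdbc:snowflake://<account name>.snowflakecomputing.com/" \
      --jdbcUser mySnowflakeUser --jdbcPassword mySnowflakePassword \
      --id b60ebcb4-0b99-4b1a-8e73-5a3796b03fae \
      --append --partition 2024-01-15 \
      --noIndex
    ```

    One practical note: if every partition upload skips indexing, you can simply omit --noIndex on the final upload of a batch so a single index runs once all partitions have landed, rather than after each one.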
    

    **Was this post helpful? Click Agree or Like below**
    **Did this solve your problem? Accept it as a solution!**

Answers

  • serissamcanally

    Perfect! Thanks!