Using partitioning in ETL or Workbench
Good afternoon.
Typically, to avoid doing full replaces of the data we pull into Domo from our databases, we have used recursive dataflows in Magic ETL. I am trying to think of a way to accomplish the same thing using the partitioning features in ETL and/or Workbench, so the ETL doesn't still have to process the whole updated + historical set.
I either can't wrap my head around it, or just don't have a strong enough grasp on the partitioning features, or I'm barking up the wrong function-tree.
I have attached a very small sample of what one of our raw jobs might look like when it gets into Domo and am wondering if anyone thinks partitioning is a good use case for this.
Each separate color column header represents a different table in the SQL Server database we're pulling from with Workbench. The tables are joined on the column "donation_id". Each row represents a single, unique donation, and we only ever want one row per donation in the ETL output.
Other than "donation_id", any of the columns can be modified at any time (the day after a donation or 10 years after a donation), and modifying one table in the database does not prompt the other tables to be modified.
For example, in the screenshot the collection_date is 1/1/2020. If any one of these columns changed on 3/30/2023, we would want to bring in that new row and eliminate the existing row.
Best Answers
Partitions are typically used to replace a group of records. If you have some records being replaced for a given date, pull in all records for that date and use the date as your partition key. This will replace all records with that date.
Alternatively, if records aren't being deleted, you can utilize UPSERT in Workbench to update just that single record based on the ID.
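To make that concrete, here is a rough sketch of what the Workbench query could look like for the donation data described above. Only donation_id and collection_date come from the original post; the table names (donation, donor_info, payment), the extra columns, and the last_modified timestamp used to detect changes are all assumptions for illustration.

```sql
-- Sketch: pull ALL donations for any collection_date that changed since the last
-- run, so each affected date's partition can be replaced wholesale in Domo.
-- Table/column names other than donation_id and collection_date are hypothetical.
DECLARE @last_run DATETIME = '2023-03-29';  -- however your job tracks its last run

WITH changed_dates AS (
    SELECT DISTINCT d.collection_date
    FROM donation d
    WHERE d.last_modified >= @last_run
    -- if the other joined tables carry their own modified timestamps,
    -- UNION their affected collection_dates in here as well
)
SELECT d.donation_id,
       d.collection_date,
       don.donor_name,       -- columns from the other joined tables
       pay.payment_status
FROM donation d
JOIN changed_dates cd    ON cd.collection_date = d.collection_date
LEFT JOIN donor_info don ON don.donation_id = d.donation_id
LEFT JOIN payment    pay ON pay.donation_id = d.donation_id;
```

With collection_date then set as the partition key on the Workbench job (or on the Magic ETL output), each run would replace only those dates' partitions instead of reprocessing the full history.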
I'd be very careful with write-backs; we found it was not copying rows where the keys were duplicates.
In the example below, column1 and column2 are the key identifier columns, and column3 holds the new values to bring in.
Old dataset:

| column1 | column2 | column3 |
| --- | --- | --- |
| Date1 | AAA | 23 |
| Date2 | BBB | 23 |
| Date1 | AAAC | 24 |

New dataset:

| column1 | column2 | column3 |
| --- | --- | --- |
| Date1 | AAA | 27 |
| Date2 | BBB | 23 |
| Date1 | AAA | 24 |
| Date2 | BBB | 23 |
From the new dataset, the first row would not copy over because the partition logic thought it already existed.
Maybe more documentation is required.
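One way to catch that situation before it bites is to check the extract for duplicate key combinations before relying on the upsert/write-back. A minimal sketch, where new_dataset is a stand-in name for the incoming extract:

```sql
-- Flags key combinations that appear more than once in the incoming data.
-- column1/column2 mirror the example above; substitute your real key columns
-- (e.g. donation_id for the original poster's data).
SELECT column1,
       column2,
       COUNT(*) AS row_count
FROM new_dataset
GROUP BY column1, column2
HAVING COUNT(*) > 1;
```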
Answers
How do you use upsert in Workbench? I'm using the latest version and I do not see any way to set unique keys, upsert or anything that is mentioned in these posts.
Looking at the schema, I only have the following:
- Source Column
- Destination Column
- Data Type
- Include
- Lock Data Type
- Protect
Right-clicking a column does not give me any options; it does nothing.
Looking at the JSON definition of the job, I do see where it has attributes assigned, but have no way to modify these in the GUI.
Example: "UpsertKey":false
I would like to be able to set this to "true".
I am attempting to do an upsert, but I am getting duplicate rows.
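For reference, the column entry being described would presumably end up looking something like the sketch below once the key is flagged. Only the "UpsertKey" attribute is quoted from the actual job JSON; the surrounding field names and values are guesses based on the schema grid above and may differ by Workbench version, and hand-editing the job definition is not an officially documented path.

```json
{
  "SourceColumn": "donation_id",
  "DestinationColumn": "donation_id",
  "DataType": "STRING",
  "Include": true,
  "LockDataType": false,
  "Protect": false,
  "UpsertKey": true
}
```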