How to append based on new IDs only - not upsert
Is there a way to append only NEW IDs in an upload job? I understand that I can do a "Replace" job for an initial load of a table, and then a regularly scheduled "Append" job using an UPSERT key and a smaller subset of data, such as data with a change date in the past 7 days per the Domo example here:
https://domohelp.domo.com/hc/en-us/articles/360043038714-Using-Upsert-in-Workbench-5-1: "In this example, we know that our data could be updated within 7 days of the transaction, so we are pulling the last 7 days' worth of transactions to capture any updated rows as well as any new ones each day. This is only one simple example of many different ways to identify the 'change data' or 'delta'."
What if I don't really need to UPDATE data but rather just want to append any data with an ID that doesn't exist on the Domo side? If I do an UPSERT against the whole dataset, wouldn't that unnecessarily be updating data that doesn't need to be updated?
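To make the idea concrete, here's a rough Python sketch of the behavior I'm after; existing_ids and incoming are hypothetical stand-ins for whatever is already in Domo and whatever my scheduled query pulls:

```python
# Hypothetical sketch: append only rows whose ID does not already exist in Domo.
# "existing_ids" stands in for the IDs already loaded on the Domo side;
# "incoming" stands in for the rows my scheduled query pulls each day.

incoming = [
    {"transaction_id": 101, "amount": 25.00},   # already in Domo
    {"transaction_id": 102, "amount": 40.00},   # already in Domo
    {"transaction_id": 103, "amount": 12.50},   # new
]
existing_ids = {100, 101, 102}

# Keep only rows with IDs Domo has never seen, then append just those.
rows_to_append = [r for r in incoming if r["transaction_id"] not in existing_ids]
print(rows_to_append)  # only transaction_id 103 would be appended
```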
Answers
@JunkDoom given that you are describing ingesting Transactions into Domo, I would strongly recommend you investigate using Partitions instead of UPSERT.
UPSERT works well when you're uploading a massive Customer, Product (or similar) table where you have no idea when a particular Customer_ID or SKU might be updated.
Partition is the preferred ingest method for transactional data because you could (as you say) upload all activity that was created in the last 7 days and partition on the creation_date column (or whatever column is appropriate).
While UPSERT might sound easier, the downside is processing: for each row I want to insert into Domo, I'd have to check whether that transaction_id already exists. Keeping in mind that the data is stored in a flat file, that search grows increasingly expensive as your data volume increases (more values to check). With Partitioning, I just have to answer "does this volume (date) exist?" If yes, discard the existing date and replace it; if no, create a new volume.
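To make the cost difference concrete, here's a rough Python sketch; this is just the shape of the two checks, not how Domo implements them internally:

```python
# Rough sketch of the two checks, not Domo's actual implementation.

# UPSERT-style: every incoming row is looked up against every key already stored.
existing_keys = {"txn-001", "txn-002", "txn-003"}   # grows with total history
incoming_rows = [{"transaction_id": "txn-003"}, {"transaction_id": "txn-004"}]

for row in incoming_rows:
    if row["transaction_id"] in existing_keys:
        print("update existing row", row["transaction_id"])
    else:
        print("insert new row", row["transaction_id"])
# The key set being checked keeps growing as history accumulates.

# Partition-style: the only question is "does this volume (date) already exist?"
partitions = {"2024-05-01": ["...rows..."], "2024-05-02": ["...rows..."]}
todays_load = {"2024-05-02": ["...replacement rows..."], "2024-05-03": ["...new rows..."]}

for date, rows in todays_load.items():
    partitions[date] = rows  # replace the whole date if it exists, create it if not
```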
hope that helps.
Jae Wilson
Check out my 🎥 Domo Training YouTube Channel 👨💻
**Say "Thanks" by clicking the ❤️ in the post that helped you.
**Please mark the post that solves your problem by clicking on "Accept as Solution"1 -
The lack of documentation on Partitioning in Workbench gives me pause.
If we're not specifying an UPSERT key, wouldn't this only work for transactional data that never changes at the source?
For example, say I create a "Replace" job for an initial load. I partition on a "DateUpdated" field. I then change the query to grab only the past 3 days ("DateUpdated") and run that daily. A month down the road, "John A. Smith" changes to "John B. Smith" and his record gets updated. Since we're not specifying an UPSERT key, a new record would get inserted, but his old record would still be there with the old "DateUpdated", right?
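In other words, something like this sketch (hypothetical data, just to illustrate the duplicate I'm worried about):

```python
# Hypothetical illustration of the duplicate when partitioning on DateUpdated
# and only reloading the last few days.

partitions = {
    "2024-04-01": [{"id": 7, "name": "John A. Smith", "DateUpdated": "2024-04-01"}],
}

# A month later John's record changes at the source; the query for the last
# 3 days picks it up with a new DateUpdated and replaces/creates that partition.
partitions["2024-05-02"] = [{"id": 7, "name": "John B. Smith", "DateUpdated": "2024-05-02"}]

# The old 2024-04-01 partition was never reloaded, so both versions now exist:
all_rows = [row for rows in partitions.values() for row in rows]
print(all_rows)  # id 7 appears twice, once with each name
```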
I think I have a few situations where source data never changes once written, so this approach would work. Would I still filter the query to just the past few days, which would wipe out and rebuild everything for those days while leaving older data alone?
Regardless of how I set these up, I feel like I'll always need to have a "Replace" job at the ready to reload the entire table in case something old gets updated, expectedly or not.
Don't partition a table of customers. That is the perfect use case for UPSERT, if you use the customer_id as the upsert key.
"UPSERT works well when you're uploading a massive Customer, Product (or similar) table where you have no idea when a particular Customer_ID or SKU might be updated."
Yes, the ideal case for partitioning rests on the assumption that history never changes (i.e. transactional records). If you have back-posted activity (financial stuff), then yes, you may need to periodically reload history. Or, if you have two columns, posting_date and last_modified_date, and you identify that you back-posted one day last month AND you are partitioning by month, then you'd have to re-upload the entire previous month.
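As a rough sketch of that check (the two column names are the ones mentioned above; everything else is illustrative):

```python
# Rough sketch: compare posting_date vs. last_modified_date to find month
# partitions that contain back-posted rows and therefore need a full reload.

rows = [
    {"posting_date": "2024-04-28", "last_modified_date": "2024-05-03"},  # back-posted into April
    {"posting_date": "2024-05-02", "last_modified_date": "2024-05-02"},
]

months_to_reload = set()
for row in rows:
    posting_month = row["posting_date"][:7]        # e.g. "2024-04"
    modified_month = row["last_modified_date"][:7]
    if posting_month != modified_month:
        # Activity was posted back into an earlier month's partition,
        # so that entire month has to be re-uploaded.
        months_to_reload.add(posting_month)

print(months_to_reload)  # -> {"2024-04"}
```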
What you don't want to do is UPSERT a (large) transactional dataset, because that would put a massive strain on Domo as the dataset grows and eventually cause uploads to perform inconsistently as volume increases. (Low millions of rows will probably be fine, but tens or hundreds of millions of rows is probably where uploads start to suffer.)
Jae Wilson
Check out my 🎥 Domo Training YouTube Channel 👨💻
**Say "Thanks" by clicking the ❤️ in the post that helped you.
**Please mark the post that solves your problem by clicking on "Accept as Solution"1