Is there any way to do an UPSERT with data coming from Postgres?
Our main SQL database is Postgres, and we're populating DataSets with calls similar to
select * from view_for_domo
We're using full replacement, rather than incremental pulls.
We're interested in using incremental updates rather than full refreshes of the DataSet to speed things up. After checking the options I can find in Domo, it looks like UPSERT (INSERT ON CONFLICT in Postgres) and MERGE are not supported for the Postgres connector.
The replace method is fine, until it's too slow for the amount of data we're accumulating.
The incremental method would likely be fine for an append-only table, but we don't have many of those.
There's no way to reliably fake an UPSERT or MERGE, and we do regularly delete and revise rows that have already been imported into Domo.
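For reference, here's the kind of statement we'd run natively in Postgres if the connector could apply deltas for us. The table and column names below are made up purely for illustration:
-- Illustrative only: the native Postgres UPSERT we'd like Domo to emulate.
INSERT INTO some_table (id, value, last_updated_dts)
VALUES (42, 'revised value', now())
ON CONFLICT (id)
DO UPDATE SET
    value            = EXCLUDED.value,
    last_updated_dts = EXCLUDED.last_updated_dts;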
Is there some feature, behavior, or beta feature that might help us out here?
Thanks a lot.
Answers
-
@DataSquirrel you are going to want to look into recursive dataflows. Here are few KB articles to start with depending on how you would like to implement it:
Implement with Magic ETL 2.0 (will likely run the fastest)
Implement with SQL
Implement with Magic ETL 1.0
You can also find a lot of chatter about it in the Dojo by searching for recursive.
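At its core, the pattern feeds the dataflow's own output back in as an input and merges each new pull into it. Here's a rough SQL sketch of the idea, with placeholder names (historical_output is the dataflow's output dataset, new_pull is the latest load, and id stands in for whatever your unique key is):
-- Keep the historical rows that the new pull didn't touch...
SELECT h.*
FROM historical_output h
LEFT JOIN new_pull n ON n.id = h.id
WHERE n.id IS NULL
UNION ALL
-- ...and take every row (new or revised) from the new pull.
SELECT n.*
FROM new_pull n;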
Hope this helps!
**Check out my Domo Tips & Tricks Videos
**Make sure to like any user's posts that helped you.
**Please mark as accepted the ones who solved your issue.
-
If you're using Workbench 5, you can define an UPSERT key to replace existing records based on that key. It's outlined in the knowledge base here: https://knowledge.domo.com/Connect/Connecting_to_Data_Using_Workbench_5/Using_Upsert_in_Workbench_5
**Was this post helpful? Click Agree or Like below**
**Did this solve your problem? Accept it as a solution!**
-
@MarkSnodgrass
Thanks for the suggestion and links; I'll check this out. It's been a couple of years since I last reviewed Domo's features, and it looks like there are a lot more options now.
@GrantSmith
Thanks to you too; the Workbench 5 strategy looks pretty good. In our case, we're in the process of decommissioning Workbench, but I'm glad to have this option in the back of my mind.
-
How much data volume are we talking here?
How frequently are you updating your raw data in Domo?
This video is a little less polished than my usual content, but here's how you can enable upsert using the CLI. It doesn't matter whether you use Workbench or the CLI; the considerations outlined in the video are the same.
https://knowledge.domo.com/Administer/Other_Administrative_Tools/Command_Line_Interface_(CLI)_Tool
Jae Wilson
Check out my 🎥 Domo Training YouTube Channel 👨💻
**Say "Thanks" by clicking the ❤️ in the post that helped you.
**Please mark the post that solves your problem by clicking on "Accept as Solution"
-
Thanks for the answer, and the questions. I was just looking at your Complete Guide to Recursive DataFlows in MagicETL piece, thanks!
It's been 2-3 years since I last had a good look at Domo's features, and it seems that a ton of great stuff has been added. I'm going to check out the CLI toolset too. In our case, we're working on DataSets populated from Postgres running on RDS. In that environment, you don't have a file system or command line, so I think I'm limited to Workbench and connectors for automation. Since the data is in Postgres, a connector seems the better bet. (We discovered that Excel has a row limit of just over 1M rows, which isn't enough for many of our DataSets.)
> How much data volume are we talking here?
Just checked: we've got around 30 DataSets populated with data from Postgres now. Most are in the tens of thousands of rows, and 8 have row counts over 1M. The largest is currently over 16M rows and powers over 300 cards. We're adding facilities (more data) and DataSets (more pulls) all of the time. On average, that DataSet takes around 30 minutes to refresh, but the history screen shows times more than 3x that. In fact, looking at the larger DataSets, the load times vary by about 3.5x, even with identical queries (I'm always pulling
select * from view_name_for_domo
.) The rate of change within the DataSets is pretty variable. Some are close to append-only logs, some are replacement snapshots, and some take all manner of revisions. Even our "static" data sometimes needs adjustment. For example, a colleague found a discrepancy in some of the outputs on that 16.3M row DataSet. He tracked it down to 63 records that had an inconsistent default (0 vs. 1). So, we needed to revise those 63 rows, even though they were, in theory, historical.
Currently, we're updating daily, but some of these DataSets would be better off updated at least hourly.
What I'm trying to accomplish is something like this:
* Do a full pull of the initial data.
* After that, pull only the changes and apply them (see the sketch after this list).
* Support new, modified, and deleted rows.
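On the Postgres side, the change pull could be as simple as filtering the view on a watermark, assuming the view exposes a last-updated timestamp (like the sample data below does). A sketch, where :last_successful_pull is a placeholder supplied by whatever schedules the export:
-- Pull only rows added or revised since the last successful export.
SELECT *
FROM view_name_for_domo
WHERE last_updated_dts > :last_successful_pull;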
That set of goals is close to what Postgres provides with its version of UPSERT, INSERT ON CONFLICT, but a bit more: because of the DELETE, it's more like what SQL Server implements as MERGE. I don't have the Magic ETL v2 beta, but I took your advice and am looking into getting access to it.
It looks like a Recursive Data Flow could do the trick... I think. I could not get the "hello world" example in the KB article to work, nor have I been able to with some silly test data of my own. I like to start from a sample that I can spot-check visually. Here's my initial data:
month_number full_name last_updated_dts marked_for_deletion
1 January 2021-02-24 08:26:04.365915 false
2 February 2021-02-24 08:26:04.365915 false
3 March 2021-02-24 08:26:04.365915 false
4 April 2021-02-24 08:26:04.365915 false
5 May 2021-02-24 08:26:04.365915 false
6 June 2021-02-24 08:26:04.365915 false
7 July 2021-02-24 08:26:04.365915 false
8 August 2021-02-24 08:26:04.365915 false
9 September 2021-02-24 08:26:04.365915 false
10 October 2021-02-24 08:26:04.365915 false
12 Declember 2021-02-24 08:26:04.365915 false
13 Smarch 2021-02-24 08:26:04.365915 false
Take month_number as a unique primary key. (Like I said, stripped down example.) There are three errors above:
* December is misspelled, so we need an UPDATE.
* November is missing, so we need an INSERT.
* "Smarch" isn't a month, so we need a DELETE, implemented here as an UPDATE as we usually have a soft-delete flag field for reasons unrelated to Domo.
Given all of that, here's the delta data:
month_number full_name last_updated_dts marked_for_deletion
11 November 2021-02-24 08:26:04.381418 false
12 December 2021-02-24 08:26:04.381418 false
13 Smarch 2021-02-24 08:26:04.381418 true
What I'm after is
month_number full_name last_updated_dts marked_for_deletion
1 January 2021-02-24 08:26:04.365915 false
2 February 2021-02-24 08:26:04.365915 false
3 March 2021-02-24 08:26:04.365915 false
4 April 2021-02-24 08:26:04.365915 false
5 May 2021-02-24 08:26:04.365915 false
6 June 2021-02-24 08:26:04.365915 false
7 July 2021-02-24 08:26:04.365915 false
8 August 2021-02-24 08:26:04.365915 false
9 September 2021-02-24 08:26:04.365915 false
10 October 2021-02-24 08:26:04.365915 false
11 November 2021-02-24 08:26:04.381418 false
12 December 2021-02-24 08:26:04.381418 false
13 Smarch 2021-02-24 08:26:04.381418 true
Even better would be to send the deleted records to Domo as a list of IDs to clear, and end up with the following (a sketch of the merge follows the table):
month_number full_name last_updated_dts marked_for_deletion
1 January 2021-02-24 08:26:04.365915 false
2 February 2021-02-24 08:26:04.365915 false
3 March 2021-02-24 08:26:04.365915 false
4 April 2021-02-24 08:26:04.365915 false
5 May 2021-02-24 08:26:04.365915 false
6 June 2021-02-24 08:26:04.365915 false
7 July 2021-02-24 08:26:04.365915 false
8 August 2021-02-24 08:26:04.365915 false
9 September 2021-02-24 08:26:04.365915 false
10 October 2021-02-24 08:26:04.365915 false
11 November 2021-02-24 08:26:04.381418 false
12 December 2021-02-24 08:26:04.381418 false
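For concreteness, here's a sketch of the merge a Recursive Data Flow would need to run to get from those inputs to this result. The dataset names are placeholders: initial_data is the existing output fed back in, delta_data is the incremental pull, and deleted_ids is the change set of month_number values to drop.
-- Historical rows the delta didn't revise and that aren't flagged for removal...
SELECT i.month_number, i.full_name, i.last_updated_dts, i.marked_for_deletion
FROM initial_data i
LEFT JOIN delta_data  d ON d.month_number = i.month_number
LEFT JOIN deleted_ids x ON x.month_number = i.month_number
WHERE d.month_number IS NULL
  AND x.month_number IS NULL
UNION ALL
-- ...plus the delta rows (new and revised), minus the ones in the delete list.
SELECT d.month_number, d.full_name, d.last_updated_dts, d.marked_for_deletion
FROM delta_data d
LEFT JOIN deleted_ids x ON x.month_number = d.month_number
WHERE x.month_number IS NULL;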
Does a Recursive Data Flow sound like the right tool for these goals?
-
lol @DataSquirrel, the scale of your question is a consulting engagement with a Sr. technical consultant at Domo ... or a TSM maybe. Just sayin ;)
1) I would evaluate whether Magic 2.0 can meet the requirements.
2) You could try going the UPSERT route, but you would still have to run your data through logic to flag soft deletes. (I would accumulate ALL data into a dataset and then build a DSV on top of it that just has the valid rows; quick sketch after this list. Or maybe put the rows flagged for delete in a second dataset.)
3) If you're running UPSERT, as described, there is no DELETE function. The two options there are to do a periodic full load (on Saturday or something), OR, if you have a list of row_ids, you can run a script that deletes rows from an UPSERT dataset using the UPSERT_KEY (see the Java CLI).
This video is rough and unfinished ... but it's got the information you need:
https://www.youtube.com/watch?v=nFSu6hpc8SE
4) The biggest trick / problem people run into with Domo is figuring out how to architect their dataflows to minimize the volume of data that gets transformed every time you execute your pipeline. (The problem with recursive queries is that there's linear growth.) ... So questions like "how late in my pipeline can I do my JOINs?" become super important.
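Going back to 2) for a second, here's that quick sketch. Conceptually, the DSV just filters on the soft-delete flag (accumulated_dataset is a placeholder name for the dataset holding everything you've ever loaded):
-- Only expose rows that haven't been soft-deleted.
SELECT *
FROM accumulated_dataset
WHERE marked_for_deletion = false;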
As I said, your root question (performance at scale) is not just a technology question; the tools are there. It's a best-practices and architecture-planning question... hence why I said this is usually a paid consulting engagement.
Fortunately, I know a guy ;)
Jae Wilson
Check out my 🎥 Domo Training YouTube Channel 👨💻
**Say "Thanks" by clicking the ❤️ in the post that helped you.
**Please mark the post that solves your problem by clicking on "Accept as Solution"
-
Thanks for the answer. I think that you're on UTC+0, and I'm on UTC+10, so it's getting later here...
Hoping to check out Magic 2.0.
I'm a bit surprised to hear that what I'm asking isn't a straightforward question, but, actually, maybe that makes sense. Architecture <> Mechanics.
But, speaking of mechanics, I'd just like to try to get a basic example working; that seems like a good first step.
And, for the soft delete, I can also do a hard delete, or publish a delete change set with only the deleted IDs to do an anti-join on. It just seems crazy to reload a 16M row DataSet from scratch to clear a few rows. (For the record, we don't use materialized views in Postgres for similar reasons. Domo DataSets function like materialized views, in a good way... and also in a bad way. I'm just trying to make them incrementally updateable.)
-
And thanks for the video link, looks interesting! I'll check it out in the AM.
-
Domo datasets are CSVs stored in a data lake environment similar to S3.
"I'm a bit surprised to hear that what I'm asking isn't a straightforward question, but, actually, maybe that makes sense.
Architecture <> Mechanics
."There are books on building performant data models in powerbi. I recommend having similar expectations for Domo. (Domo does automate a lot of problems away, and does trivialize a lot of optimization... but for performance on bigger sets of data ... we gotta start thinking about it.)
To be fair... IMHO Domo's strength is how easy it is to get a return on time investment. Domo's weakness is that it encourages people not to think strategically about what this will look like in a year. :P
Jae Wilson
Check out my 🎥 Domo Training YouTube Channel 👨💻
**Say "Thanks" by clicking the ❤️ in the post that helped you.
**Please mark the post that solves your problem by clicking on "Accept as Solution"
-
Yeah, my role is more in PostgresLand, where architectural choices are... many. That, I understand. Domo's internals, not so much. My instinct for some time has been to use Postgres to pre-process digest/rollup/summary data for Domo. That doesn't always suit, because you lose whatever level of granularity you summarize away, but you can get charts to render instantly and your data feeds become quite small.
It's not hard to do rollups, incremental rollups, pivots and so on in Postgres, and logical replication makes this much easier. You're able to flow data through a secondary server in commit order, which makes 100% accurate live rollups achievable at low cost with no waste. (The logical subscriber runs its own triggers on incoming data, and can add columns onto the table that aren't found on the original publisher.) From there, we can have Domo pull. And, also helpful, you can use calculated columns to materialize calculations in-row, so the Domo pull is only grabbing raw values, not executing any runtime calculations in Postgres.
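For anyone else following along in PostgresLand: one way to do that last bit is a generated column (available since Postgres 12). A hypothetical example, with made-up table and column names:
-- Hypothetical: materialize the calculation at write time so the Domo pull
-- only reads plain column values (requires Postgres 12 or later).
ALTER TABLE facility_readings
    ADD COLUMN usage_kwh numeric
    GENERATED ALWAYS AS (meter_end - meter_start) STORED;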
That's the thinking for now, at least. Kind of only makes sense if you're in the weeds on Postgres.
-
I just finished your tutorial video, and it is fantastic. You explained more in 15 minutes than I've been able to figure out by hunting through the docs and such for hours. Thanks! Highly recommended.