Magic ETL Efficiency
Hello,
I have a simple Magic ETL that I use to filter a dataset from our company's source system to only show the latest version of each record. The dataflow finds the latest timestamp for each record, then uses an inner join on the record ID and timestamp to only include the most recent row. I've attached a screenshot of the dataflow configuration for reference.
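For anyone reading without the screenshot, the current logic is roughly equivalent to this pandas sketch (the column names record_id and updated_at are placeholders, not the actual fields in my dataset):

```python
import pandas as pd

# Placeholder field names -- "record_id" and "updated_at" stand in for
# the actual ID and timestamp columns in the source dataset.
df = pd.DataFrame({
    "record_id":  [1, 1, 2, 2, 2],
    "updated_at": pd.to_datetime([
        "2024-01-01", "2024-01-05",
        "2024-01-02", "2024-01-03", "2024-01-04",
    ]),
    "value": ["a", "b", "c", "d", "e"],
})

# Group By: find the latest timestamp for each record.
latest = df.groupby("record_id", as_index=False)["updated_at"].max()

# Inner join on record ID + timestamp: keep only the most recent row.
current = df.merge(latest, on=["record_id", "updated_at"], how="inner")
print(current)  # rows (1, 2024-01-05) and (2, 2024-01-04) remain
```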
Currently it takes about 15-20 minutes for this dataflow to run, processing about 16M rows to produce an output of 1.7M rows. I'm trying to come up with ways to improve the efficiency of this dataflow as the input dataset continues to grow to ensure that it doesn't impact the timing of downstream dataflows. Does anyone have any suggestions or alternatives? Any help would be greatly appreciated.
Thanks!
Best Answer
-
Hi @MichelleH
You could try the following. I think it might help improve the efficiency of your flow.
Follow the instructions in the attached screenshot:
- Remove the Group By and add a "Rank & Window" tile
- Instructions for the Rank & Window setup are in the screenshot:
- Click on Add Function
- Name the new column "Rank_Desc"
- Select "Dense Rank" from the drop-down menu and click Apply
- Continue to step 2 and select the column that identifies the latest version
- Continue to step 3 and choose "Descending"
- Continue to step 4 and select the column that defines the partition, in your case "RecordID"
- After setting up the Rank & Window function, use the new column "Rank_Desc" to filter your dataset where Rank_Desc = 1 (see the sketch after this list)
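In case it helps to see the same logic outside the tile UI, here's a rough pandas equivalent of the Rank & Window approach (placeholder column names again, matching the sketch in the question):

```python
import pandas as pd

# Same placeholder input as in the question: one row per record version.
df = pd.DataFrame({
    "record_id":  [1, 1, 2, 2, 2],
    "updated_at": pd.to_datetime([
        "2024-01-01", "2024-01-05",
        "2024-01-02", "2024-01-03", "2024-01-04",
    ]),
    "value": ["a", "b", "c", "d", "e"],
})

# Dense Rank partitioned by the record ID, ordered by the version
# timestamp descending, so the newest row per record gets rank 1.
df["Rank_Desc"] = (
    df.groupby("record_id")["updated_at"]
      .rank(method="dense", ascending=False)
)

# Filter where Rank_Desc = 1 -- one pass over the data, no self-join.
current = df[df["Rank_Desc"] == 1]
print(current)
```

The idea is that a single window pass avoids materializing the grouped output and joining it back against the full input.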
Domo Arigato!
**Say 'Thanks' by clicking the thumbs up in the post that helped you.
**Please mark the post that solves your problem as 'Accepted Solution'
Answers
-
Hi @Godiepi,
Thanks for the suggestion! I created a copy of the original dataflow and updated it using your instructions, then let the two dataflows run concurrently for a couple of days to observe the results.
While your method was faster overall, it only reduced the average run time by 1-2 minutes. I agree that using Dense Rank with a filter is a better option than a Group By and Join; however, I'm still open to additional suggestions that could help get the run time under 10 minutes.
Thanks again!
-
There's not much else going on in the dataflow, and nothing that adds much complexity. I don't know if this is an option for you, but have you thought about decreasing the size of the input dataset? Perhaps increase the frequency of the import and only bring in rows added since the last import. I don't know how wide your data is, but reducing the number of columns might also add some speed (assuming there are columns you can eliminate).
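To make the incremental idea concrete, here's a minimal pandas sketch of a high-water-mark pull (the column list, file name, and timestamp are all hypothetical, just to show the shape of it):

```python
import pandas as pd

# Hypothetical column list -- keep only the fields downstream tiles use.
NEEDED_COLUMNS = ["record_id", "updated_at", "value"]

def incremental_pull(source_csv: str, last_import: pd.Timestamp) -> pd.DataFrame:
    """Read only the needed columns, then keep just the rows added since
    the previous import (a simple high-water-mark pattern)."""
    df = pd.read_csv(
        source_csv,
        usecols=NEEDED_COLUMNS,          # prune columns at read time
        parse_dates=["updated_at"],
    )
    return df[df["updated_at"] > last_import]

# Hypothetical usage: only rows newer than the last successful run.
# new_rows = incremental_pull("source.csv", pd.Timestamp("2024-01-03"))
```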
-
Great Suggestion @Godiepi! I often forget about using Rank!
I agree with @bdavis, and was about to suggest adding a column selection at the front so it doesn't have to chunk through as much data.
You may also be able to remove the 'Remove Duplicates' tile if filtering to Rank = 1 does the trick. It seems small, but it's still scanning every row across many attributes, especially if it sits before your column selection.
DataMaven
Breaking Down Silos - Building Bridges
**Say "Thanks" by clicking a reaction in the post that helped you.
**Please mark the post that solves your problem by clicking on "Accept as Solution"
-
@DataMaven I hadn't thought about the Remove Duplicates, but I'll get rid of that and see if that helps!
-
Did it help?
DataMaven
Breaking Down Silos - Building Bridges
**Say "Thanks" by clicking a reaction in the post that helped you.
**Please mark the post that solves your problem by clicking on "Accept as Solution"