Magic ETL Efficiency
Hello,
I have a simple Magic ETL that I use to filter a dataset from our company's source system to only show the latest version of each record. The dataflow finds the latest timestamp for each record, then uses an inner join on the record ID and timestamp to only include the most recent row. I've attached a screenshot of the dataflow configuration for reference.
Currently this dataflow takes about 15-20 minutes to run, processing about 16M input rows into a 1.7M-row output. I'm trying to come up with ways to improve its efficiency as the input dataset continues to grow, so that it doesn't impact the timing of downstream dataflows. Does anyone have any suggestions or alternatives? Any help would be greatly appreciated.
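For readers without the screenshot, here is a minimal pandas sketch of the group-by-and-join pattern described above, assuming hypothetical column names "RecordID" and "updated_at" (the actual dataflow uses Magic ETL tiles, not code):

```python
import pandas as pd

# Sample data with multiple versions per record (hypothetical columns)
df = pd.DataFrame({
    "RecordID": [1, 1, 2, 2, 2],
    "updated_at": pd.to_datetime(
        ["2024-01-01", "2024-02-01", "2024-01-15", "2024-03-01", "2024-02-20"]
    ),
    "value": ["a", "b", "c", "d", "e"],
})

# Group By tile: find the latest timestamp for each record
latest = df.groupby("RecordID", as_index=False)["updated_at"].max()

# Join tile: inner join on RecordID + timestamp keeps only the newest rows
latest_only = df.merge(latest, on=["RecordID", "updated_at"], how="inner")
```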
Thanks!
Best Answer
-
Hi @MichelleH
You could try the following; I think it might help improve the efficiency of your flow.
Follow the instructions in the attached screenshot:
- Remove the Group By and add a "Rank & Window" tile
- Instructions for the Rank & Window setup are in the screenshot
- Click on Add Function
- Name the new column "Rank_Desc"
- Select "Dense Rank" from the function drop-down menu and click Apply
- Continue to step 2, selecting the column that identifies the latest version
- Continue to step 3 and choose "Descending"
- Continue to step 4 and select the column that defines the partition, in your case "RecordID"
- After setting up the Rank & Window function, use the new "Rank_Desc" column to filter your dataset to rows where the column = 1 (a rough code equivalent is sketched after this list)
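Outside Magic ETL, the same logic can be written in a few lines of pandas. This is only a minimal sketch, assuming the same hypothetical "RecordID" and "updated_at" columns as in the question; in the dataflow itself the work happens in the Rank & Window and Filter tiles, not in code.

```python
import pandas as pd

# Sample data with multiple versions per record (hypothetical columns)
df = pd.DataFrame({
    "RecordID": [1, 1, 2, 2, 2],
    "updated_at": pd.to_datetime(
        ["2024-01-01", "2024-02-01", "2024-01-15", "2024-03-01", "2024-02-20"]
    ),
    "value": ["a", "b", "c", "d", "e"],
})

# Rank & Window tile: dense-rank timestamps within each RecordID
# partition, descending, so the latest version gets rank 1
df["Rank_Desc"] = (
    df.groupby("RecordID")["updated_at"]
      .rank(method="dense", ascending=False)
      .astype(int)
)

# Filter tile: keep only rows where Rank_Desc = 1
latest_only = df[df["Rank_Desc"] == 1]
```

This avoids materializing a second aggregated dataset and joining 16M rows against it, which is where the original approach spends most of its time.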
Domo Arigato!
**Say 'Thanks' by clicking the thumbs up in the post that helped you.
**Please mark the post that solves your problem as 'Accepted Solution'
Answers
-
Hi @Godiepi ,
Thanks for the suggestion! I created a copy of the original dataflow and updated it using your instructions, then let the two dataflows run concurrently for a couple of days to observe the results.
While your method was faster overall, it only reduced the average run time by 1-2 minutes. I agree that using Dense Rank with a filter is a better option than a Group By and Join; however, I'm still open to additional suggestions that could help get the run time under 10 minutes.
Thanks again!
-
There's not much else going on in the dataflow, especially nothing that adds complexity. I don't know if this is an option for you, but have you thought about decreasing the size of the input set? Perhaps increase the frequency of the import and only bring in rows added since the last import? I don't know how wide your data is, but reducing the number of columns might add some speed (assuming there are columns you could eliminate).
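As a rough illustration of both ideas, here is a sketch assuming a hypothetical "updated_at" column and a stored high-water mark from the previous run; in Domo this would be handled by the connector's update method and a Select Columns tile rather than code.

```python
import pandas as pd

# Read only the columns the dataflow actually needs (column selection)
df = pd.read_csv(
    "source_extract.csv",  # hypothetical extract of the source system
    usecols=["RecordID", "updated_at", "value"],
    parse_dates=["updated_at"],
)

# Keep only rows added since the last import (incremental load),
# using a hypothetical high-water mark recorded after the previous run
last_import = pd.Timestamp("2024-03-01")
new_rows = df[df["updated_at"] > last_import]
```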
-
Great Suggestion @Godiepi! I often forget about using Rank!
I agree with @bdavis, and was about to suggest adding column selection in the front so it doesn't have to chunk through so much.
You may also be able to remove the 'Remove Duplicates' function if the selection of Rank=1 does the trick. It seems small, but it's still scanning every row across many attributes, especially if it comes before your column selection.
DataMaven
Breaking Down Silos - Building Bridges
**Say "Thanks" by clicking a reaction in the post that helped you.
**Please mark the post that solves your problem by clicking on "Accept as Solution"
-
@DataMaven I hadn't thought about the Remove Duplicates, but I'll get rid of that and see if that helps!
-
Did it help?
DataMaven
Breaking Down Silos - Building Bridges
**Say "Thanks" by clicking a reaction in the post that helped you.
**Please mark the post that solves your problem by clicking on "Accept as Solution"