How to remove rows from a dataset, with the output being the same dataset
Summary: I want to remove rows from a dataset based on certain conditions. I want the final dataset to be the same as the input dataset (but without the deleted rows).
Use Case:
Current Usage:
Step 1: I get a file with 100,000 rows every day (through the Amazon S3 connector). I use the append method, so each day's data keeps adding to the old data. Let's call this the "S3 File".
Step 2: I then use a SQL dataflow to join this "S3 File" with another table (to add additional information). I get another output file from this flow. Let's call this the "Final File". I use the "Final File" in a dashboard; a rough sketch of the join is below.
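Roughly, the Step 2 join looks like this (the table, key, and column names are only placeholders; the real schema doesn't matter here):

    -- Sketch of the Step 2 join; all names are placeholders.
    SELECT s.*,
           l.extra_info                -- the "additional information"
    FROM   s3_file s
    LEFT JOIN lookup_table l
           ON s.some_key = l.some_key;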
Problem:
The "S3 File", over a period of a month, now has over 3 million rows. This has 2 drawbacks:
1. The SQL dataflow (Step 2 in my explanation) takes a long time to run; it is now over an hour.
2. In the morning, when hundreds of our company's dataflows are running, the server slows down (we assume because of huge files like this one being processed). Processing times double, or even triple.
3. We have a limit on the number of rows we are allowed, and we will soon hit it if this keeps growing.
4. With a dataflow time of one hour (which could soon become two hours or more), our dashboard users see the latest data very late in the day, when they would prefer to see it earlier.
Proposed Solution:
I'm looking for a way to keep only the rows that are relevant in the dataset. I determine relevance in Step 2 (in my explanation above), when I run the SQL dataflow and join the other table with the "S3 File". For simplicity, let's assume each row has a "start date". If the start date is more than 30 days before the current date, I want to delete that row; the sketch below shows the idea.
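In SQL terms, the relevance rule I have in mind is roughly this (again, start_date and the table name are just placeholders):

    -- Keep only rows whose start date falls within the last 30 days.
    SELECT *
    FROM   final_file
    WHERE  start_date >= DATE_SUB(CURRENT_DATE, INTERVAL 30 DAY);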
One proposed approach was to use the replace method in Step 1 (for the "S3 File"). I would then take the file to Step 2, run the SQL dataflow, and weed out outdated rows (with a DELETE, or just a SELECT), storing only the relevant ones. The problem is that the data we get each day is ALWAYS relevant that day, and stays relevant for up to 30 days. So I cannot just weed out outdated rows when we receive the data; with replace, I would also lose previous days' rows that are still relevant.
The only option, then, is to perform SQL operations on the big dataset and delete the outdated rows. The problem is that in a SQL dataflow, the output dataset cannot be the same as the input dataset (I tried naming them the same, but it doesn't help). And even if I make a new dataset with reduced rows, there is still a big dataset sitting in the backend.
I would love an answer on this from the Domo experts.
Best Answer
hamza_123,
The way to do what you're looking for is to include the output of the dataflow as an input to the same dataflow. This is referred to as a recursive dataflow: it takes the output of the previous run of the dataflow and brings it back in as an input. You can then append the new data to the recursive input, filter out the rows you don't want to keep, and output to the same dataset. This works in either SQL-based or Magic ETL dataflows.
The key to making this work is that you have to run the dataflow once so that the output exists in the Data Center. Then go back, edit the dataflow, and add that output dataset as an input.
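As a rough illustration of that pattern in a MySQL-style SQL dataflow (the table and column names below are placeholders, since I don't know your schema): the previous run's output comes back in as final_file, today's S3 load (set to replace) comes in as s3_file_new, and the final transform keeps only the still-relevant rows before writing back to the same output.

    -- Recursive-dataflow sketch; all names are placeholders.
    -- 1) Keep the still-relevant rows from the previous run's output.
    SELECT f.*
    FROM   final_file f
    WHERE  f.start_date >= DATE_SUB(CURRENT_DATE, INTERVAL 30 DAY)

    UNION ALL

    -- 2) Append today's load, joined to the other table as before.
    SELECT n.*,
           l.extra_info
    FROM   s3_file_new n
    LEFT JOIN lookup_table l
           ON n.some_key = l.some_key
    WHERE  n.start_date >= DATE_SUB(CURRENT_DATE, INTERVAL 30 DAY);

Because the 30-day filter runs on every execution, the output stays at roughly 30 days of data (about 3 million rows in your case) instead of growing indefinitely.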
I hope that helps!