How to append data to the output dataset?
I have a Workbench job that runs daily and pulls the data for the current date. I then need to filter this data and create two datasets using the DataFlow ETL tool.
How do I append data to the DataFlow ETL output table?
DataFlow ETL creates a new table every time it runs.
I realize that one way to achieve this is to select Append in the Workbench job itself; however, this takes up unnecessary computing resources, since it processes the entire table every time data is appended.
Answers
It sounds like you want to leverage recursive dataflows. I would recommend looking at this KB article, which will walk you through how to set one up.
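To make the pattern concrete, here is a minimal sketch of what a recursive dataflow does, written in pandas rather than Magic ETL tiles (the file and column names are hypothetical): the dataflow's previous output is fed back in as an input, any rows already loaded for today are dropped, and today's rows are appended, so re-runs don't duplicate data.

```python
import pandas as pd

# The dataflow's previous output, fed back in as an input
# (in Domo, this is the output dataset used recursively).
previous_output = pd.read_csv("historical_output.csv", parse_dates=["date"])

# Today's rows from the daily Workbench job.
todays_data = pd.read_csv("todays_upload.csv", parse_dates=["date"])

# Drop any rows for today that may already exist, then append,
# so re-running the job doesn't create duplicates.
today = todays_data["date"].max()
deduped_history = previous_output[previous_output["date"] != today]
new_output = pd.concat([deduped_history, todays_data], ignore_index=True)

# Write back over the same output, keeping one ever-growing dataset.
new_output.to_csv("historical_output.csv", index=False)
```

In Magic ETL these steps map roughly to using the output dataset as an input, a filter tile to drop today's rows, and an Append Rows tile.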
Have you looked into the DataSet Copy connector to copy your datasets and set the update method to Append? https://domohelp.domo.com/hc/en-us/articles/360043436533-DataSet-Copy-DataSet-Connector
You can copy the dataset back into the same instance with the update method set to Append. In your dataflow, you'd remove the Append Rows tile and the two old input datasets and just output the two datasets, then set up two DataSet Copy jobs to copy each of them with the Append update method.
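If you'd rather drive the append from outside the UI, Domo's Stream API also supports an APPEND update method. Below is a rough sketch using the pydomo SDK, modeled on its published examples; the credentials, dataset name, schema, and CSV rows are all placeholders, and exact call signatures may vary by SDK version.

```python
from pydomo import Domo
from pydomo.datasets import DataSetRequest, Schema, Column, ColumnType
from pydomo.streams import CreateStreamRequest, UpdateMethod

# Placeholder credentials created at developer.domo.com.
domo = Domo('CLIENT_ID', 'CLIENT_SECRET', api_host='api.domo.com')
streams = domo.streams

# Define the dataset the stream writes to (schema is illustrative).
dsr = DataSetRequest()
dsr.name = 'Daily Filtered Data'
dsr.schema = Schema([Column(ColumnType.DATE, 'date'),
                     Column(ColumnType.DECIMAL, 'amount')])

# APPEND means each execution adds rows instead of replacing the table.
stream = streams.create(CreateStreamRequest(dsr, UpdateMethod.APPEND))

# Upload today's rows as a single CSV part and commit the execution.
execution = streams.create_execution(stream.id)
csv_part = '2024-01-15,100.50\n2024-01-15,200.25'
streams.upload_part(stream.id, execution.id, 1, csv_part)
streams.commit_execution(stream.id, execution.id)
```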
I am trying to use recursion; however, it creates a new dataset every time. I end up with two datasets with the same name.
I wonder why Domo did not give an option to select Append to an existing output dataset in the ETL tool.
@rishi_patel301 the steps are a little tricky, but you will end up with just one dataset once you're done. I would review the steps again, as you shouldn't end up with two datasets.
If it were me, I would implement the DS pipeline and the recursive dataflow as two separate dataflows.
Separation of concerns: if you alter your DS pipeline for whatever reason, or need to test something, you don't necessarily want to commit that to your recursive dataflow. And if you need to add new columns or calculations to your recursive dataflow, you don't want to have to reprocess your DS pipeline. A rough sketch of that split is below.
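A purely illustrative sketch of the split (hypothetical file and column names, pandas standing in for the two dataflows): stage one owns all the transforms, stage two only handles the recursive append, so either can be changed or re-run without touching the other.

```python
import pandas as pd

def transform_pipeline(raw_path: str, staged_path: str) -> None:
    """Stage 1 (the DS pipeline): all filtering and calculations live
    here, so it can be altered and re-tested without touching history."""
    raw = pd.read_csv(raw_path, parse_dates=["date"])
    staged = raw[raw["status"] == "valid"]  # illustrative filter
    staged.to_csv(staged_path, index=False)

def recursive_append(staged_path: str, history_path: str) -> None:
    """Stage 2 (the recursive dataflow): only appends staged rows onto
    the historical output, dropping any dates that are being re-loaded."""
    staged = pd.read_csv(staged_path, parse_dates=["date"])
    history = pd.read_csv(history_path, parse_dates=["date"])
    history = history[~history["date"].isin(staged["date"].unique())]
    combined = pd.concat([history, staged], ignore_index=True)
    combined.to_csv(history_path, index=False)
```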