How to append data to the output dataset?

edited November 2021 in Magic ETL

I have a Workbench job that runs daily and pulls the data for the current date. I then need to filter this data and create two datasets using the DataFlow ETL tool.

How do I append data to the DataFlow ETL output table?

DataFlow ETL creates a new table every time it runs.

I realize that one way to achieve this is to select Append in the Workbench job itself; however, this takes up unnecessary computing resources since it processes the entire table every time data is appended.


  • It sounds like you want to leverage recursive dataflows. I would recommend looking at this KB article, which will walk you through how to set one up.

    **Check out my Domo Tips & Tricks Videos**

    **Make sure to ❤️ any user's posts that helped you.**
    **Please mark as accepted the ones that solved your issue.**
  • GrantSmith

    Hi @rishi_patel301

    Have you looked into the DataSet Copy connector to copy your datasets and set the update method to Append?

    You can copy the dataset to the same instance with the update method set to Append. That way you'd remove the Append Rows step and the two old input datasets, output just those two datasets, and then have two DataSet Copy jobs that copy them with the Append update method.

    **Was this post helpful? Click Agree or Like below**
    **Did this solve your problem? Accept it as a solution!**
  • rishi_patel301


    I am trying to do recursion; however, it creates a new dataset every time, so I end up with two datasets having the same name.

    I wonder why Domo did not give an option in the ETL tool to append to an existing output dataset.

  • @rishi_patel301 The steps are a little tricky, but you will end up with just one dataset once you're done. I would review the steps again, as you shouldn't end up with two datasets.

  • jaeW_at_Onyx

    If it were me, I would implement the DS pipeline and the recursive dataflow as two separate dataflows.

    Separate interests: if you alter your DS pipeline for whatever reason, or need to test something, you don't necessarily want to commit that to your recursive dataflow. And if you need to add new columns or calculations to your recursive dataflow, you don't want to have to reprocess your DS pipeline.

    Jae Wilson
    Check out my 🎥 Domo Training YouTube Channel 👨‍💻

    **Say "Thanks" by clicking the ❤️ in the post that helped you.
    **Please mark the post that solves your problem by clicking on "Accept as Solution"
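
For anyone unfamiliar with the pattern discussed above: a recursive dataflow feeds a dataflow's own output back in as an input, so each run keeps history and only adds the new day's rows. Here is a minimal sketch of that logic in plain Python/pandas. The column names (`id`, `load_date`, `value`) are assumptions for illustration; in Domo you would build this with Magic ETL tiles, not code.

```python
import pandas as pd

# Hypothetical historical output (the dataflow's previous output, fed back in)
history = pd.DataFrame(
    {"id": [1, 2], "load_date": ["2021-11-01", "2021-11-01"], "value": [10, 20]}
)
# Hypothetical daily feed from the Workbench job (today's rows only)
today = pd.DataFrame(
    {"id": [3, 4], "load_date": ["2021-11-02", "2021-11-02"], "value": [30, 40]}
)

def recursive_append(history: pd.DataFrame, today: pd.DataFrame) -> pd.DataFrame:
    """Mimic a recursive dataflow: drop any historical rows that share a
    load_date with today's feed (so reruns don't duplicate), then append
    the new rows onto the historical output."""
    deduped = history[~history["load_date"].isin(today["load_date"])]
    return pd.concat([deduped, today], ignore_index=True)

output = recursive_append(history, today)
print(len(output))  # 4 rows: 2 historical + 2 new
```

The date filter is what makes the flow safe to rerun: running it twice for the same day replaces that day's rows instead of duplicating them, which is the usual safeguard in a recursive dataflow.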