Use recursive dataflows to update datasets

We have 20 to 30 massive tables (40+ million rows each) that need to be updated, and I am wondering whether a recursive dataflow could be an alternative for updating all of these datasets. From what I have researched, setting up a recursive dataflow is a two-step process: first, run the dataflow with the original input datasets to create the output dataset; then add that output dataset as an input and remove the original input. So my question is whether this can run automatically as a single dataflow, so that we do not need to update the dataset manually. As I understand it, the ETL process has to run once to create the output dataset, and then run again to append the updated data to the historical data. Is this correct?
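
To make sure I understand what the second run would be doing, here is a rough pandas sketch of the append-and-deduplicate step that the recursive dataflow performs (the file names and the row_id key column are just placeholders, not our real datasets):

```python
import pandas as pd

# Placeholder stand-ins for the datasets involved:
# "historical" is the recursive output from the previous run,
# "updates" is the latest batch of rows from the source table.
historical = pd.read_csv("output_dataset.csv")
updates = pd.read_csv("updated_rows.csv")

# Append the new rows to the historical data, then de-duplicate on the
# key column so that re-delivered rows replace their older versions
# (updates come first, so keep="first" keeps the newest copy).
combined = pd.concat([updates, historical], ignore_index=True)
combined = combined.drop_duplicates(subset="row_id", keep="first")

# Write the result back as the new recursive output.
combined.to_csv("output_dataset.csv", index=False)
```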

Does this mean that we have to run two ETL processes manually to update the datasets, or can we make this process automatic? Thanks in advance!
