Data Snapshot
We have a dataset in one of our databases that gets updated periodically.
Every so often new data gets added AND some of the old data gets modified. (Sometimes a loan gets modified and we need to keep the original 'value'.)
I am trying to create a quarterly snapshot of the original data and feed that into the dataflow that is used to create visualizations.
Am I supposed to be doing a recursive dataflow?
An example: loan_id 125 isn't new, but it was modified, and for the purposes of this dataflow we don't want the new value ($126) but the original ($125).
What I am hoping to do is create a table that keeps track of all the original 'value' entries on a quarterly basis.
I keep coming back to a recursive dataflow, but the other problem is that when a loan gets modified there aren't two 'loan_id' entries that match, so the original record is lost.
What is the easiest way to capture an original record, store it in a database, and make sure the dataflow knows to use the "master file" and to never modify the old records?
Best Answers
Use the "Dataset copy" connector, take a copy of the original dataset and append it to your snapshot dataset on a quarterly basis. From there you can make an ETL that marks the most recent version of the data as the most recent and use the snapshot date to view history of the date.
@damen Yes, you can use a recursive dataflow to create your master "snapshot" data. The way you would do this is to add a column to your input data with the current date, then append to the existing output dataset. Here is a KB article about how to set up a recursive dataflow:
As for your question about duplicate entries, you can create a second dataflow that runs the snapshot data through a Group By tile to pull out the original value. Do not do this in the original recursive dataflow, since you would lose all the detail data.
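Along the same lines, here is a small pandas sketch of what that second Group By dataflow would do, assuming the recursive dataflow has been stamping each run with a snapshot_date column as described above; the names and sample rows are illustrative only:

```python
import pandas as pd

# Output of the recursive dataflow: one row per loan per snapshot run.
snapshots = pd.DataFrame({
    "loan_id":       [125, 125, 126],
    "value":         [125.0, 126.0, 300.0],
    "snapshot_date": pd.to_datetime(["2024-01-01", "2024-07-01", "2024-07-01"]),
})

# "Group By" on loan_id, keeping the row from the earliest snapshot,
# which recovers the original value even after the source was modified.
original = (
    snapshots.sort_values("snapshot_date")
             .groupby("loan_id", as_index=False)
             .first()
)

print(original)  # loan 125 keeps its original value of 125.0
```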
Answers
Thank you both for the suggestions. I will take a look at both and let you know what I find.
If this helps, feel free to agree, accept or awesome it!