Dynamic date association
I have a dataset that contains IDs and metadata, including modification dates. Each ID has a last modification date, and each ID may (or may not) be associated with other IDs. I want to identify the latest modification date (direct or via association) for each ID.
Below is an illustration of what it looks like, except there are many more columns, some of the IDs are associated with upwards of hundreds of other IDs, and there will be a growing list of new IDs and associations with other IDs.
Does anyone have any suggestions for how to do this in ETL?
| ID | name | last modification date | associated IDs | Latest date associated | Notes |
|---|---|---|---|---|---|
| 1 | asdf | 3/4/2023 | 5, 6 | 3/4/2023 | 1 has the latest date - no change |
| 2 | te | 6/7/2022 | | | |
| 3 | sdg | 4/5/2019 | 4, 5, 6 | 5/5/2023 | 4 has the latest date |
| 4 | fewq | 5/5/2023 | | | |
| 5 | we | 2/3/2020 | 1, 2 | 3/4/2023 | 1 has the latest date |
| 6 | tgrw | 12/11/2022 | | | |
Best Answer
-
To provide more clarity on my answer, your flow will end up looking something like this:
Which, after running, produces the following output:
The modification to the original example is the number of days we take from the calendar: we no longer limit it to a calendar year but to a set number of days. This happens in the filter of the Domo Calendar section (here I used 10 days, since I didn't expect any ID to have a list of more than 10 associated IDs, but you can change the number to whatever you need):
The other key change is that each ID adds a reference to itself (the Self Associating SA formula) as part of the formulas applied to your original dataset, and that we count commas, not spaces, to measure the list length:
When we split, we also make sure to remove whitespace and cast to an integer, so that we can do the join afterwards:
The Select Columns 1 tile narrows the data down to just the ID and date, which is all we care about; then in the Group By we select all the original columns from your dataset and perform a MAX aggregation on the joined date.
This is scalable in that it can accept lists of up to N associations, but how fast it'll run on your real data, with thousands or more entries, is something you'd need to test.
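The self-reference / split / explode / join / MAX pipeline above can be sketched in plain Python against the toy table from the question. This is an illustration of the logic only, not the Magic ETL tiles themselves; the field names (`id`, `last_mod`, `associated`) are hypothetical stand-ins for the real dataset's columns.

```python
from datetime import date

# Toy version of the dataset from the question.
rows = [
    {"id": 1, "last_mod": date(2023, 3, 4),   "associated": "5, 6"},
    {"id": 2, "last_mod": date(2022, 6, 7),   "associated": ""},
    {"id": 3, "last_mod": date(2019, 4, 5),   "associated": "4, 5, 6"},
    {"id": 4, "last_mod": date(2023, 5, 5),   "associated": ""},
    {"id": 5, "last_mod": date(2020, 2, 3),   "associated": "1, 2"},
    {"id": 6, "last_mod": date(2022, 12, 11), "associated": ""},
]

# Lookup table: each ID's own last-modified date (the "join" side).
date_by_id = {r["id"]: r["last_mod"] for r in rows}

def latest_associated(row):
    # Self-reference first, so an ID with no associations keeps its own date,
    # then split the comma-separated list, strip whitespace, cast to int.
    ids = [row["id"]]
    if row["associated"]:
        ids += [int(part.strip()) for part in row["associated"].split(",")]
    # Join each ID to its date, then take the MAX (the Group By step).
    return max(date_by_id[i] for i in ids)

for r in rows:
    r["latest"] = latest_associated(r)

print({r["id"]: r["latest"].isoformat() for r in rows})
# → {1: '2023-03-04', 2: '2022-06-07', 3: '2023-05-05',
#    4: '2023-05-05', 5: '2023-03-04', 6: '2022-12-11'}
```

This reproduces the "Latest date associated" column from the question's table, including ID 3 picking up 5/5/2023 from ID 4.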
2
Answers
-
You can use a Rank & Window tile in Magic ETL to rank the rows, partitioned by ID. Then you can use a Filter tile to keep only those ranked 1 (most recent).
If I solved your problem, please select "yes" above
0 -
Thanks @colemenwilson, but I'm not sure I follow. I can rank the IDs by their last modification date (see the new "Date Rank" column), but I'm still left with the same issue. Can you elaborate on how I can use the date rank column?
| ID | name | last modification date | Date Rank | associated IDs | Latest date associated | Notes |
|---|---|---|---|---|---|---|
| 1 | asdf | 3/4/2023 | 2 | 5, 6 | 3/4/2023 | 1 has the latest date - no change |
| 2 | te | 6/7/2022 | 4 | | | |
| 3 | sdg | 4/5/2019 | 6 | 4, 5, 6 | 5/5/2023 | 4 has the latest date |
| 4 | fewq | 5/5/2023 | 1 | | | |
| 5 | we | 2/3/2020 | 5 | 1, 2 | 3/4/2023 | 1 has the latest date |
| 6 | tgrw | 12/11/2022 | 3 | | | |
0 -
Sure thing! You're on the right track; you just need to partition by ID, as shown below:
You'll be left with a ranking by date for each ID. So where there are duplicate IDs, let's say 3 duplicates, you'll have ranks 1, 2, 3 for the corresponding rows. Then you can use a Filter tile to remove anything that isn't ranked 1, leaving you with the rows with the most recent modification date for each ID.
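What the rank-partition-then-filter combination computes can be sketched in plain Python. The sample rows are hypothetical, and note this only helps when the same ID genuinely appears on multiple rows; ISO date strings are used so lexicographic order matches chronological order.

```python
from collections import defaultdict

# Hypothetical input where the same ID appears on several rows, mimicking
# what a Rank & Window tile partitioned by ID would operate on.
rows = [
    ("A", "2022-06-07"),
    ("A", "2023-03-04"),
    ("B", "2019-04-05"),
    ("B", "2023-05-05"),
    ("B", "2020-02-03"),
]

by_id = defaultdict(list)
for id_, d in rows:
    by_id[id_].append(d)

# Sorting descending within each partition assigns rank 1 to the newest date;
# filtering to rank 1 is therefore the same as taking the max per ID.
latest = {id_: sorted(dates, reverse=True)[0] for id_, dates in by_id.items()}
print(latest)  # → {'A': '2023-03-04', 'B': '2023-05-05'}
```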
If I solved your problem, please select "yes" above
0 -
@colemenwilson I'm not sure I've properly explained what I'm attempting to do here…
Each row is a unique ID, so there aren't any duplicate IDs. What I'm attempting to do is create an overall last modification date for each ID (based on its own last mod date and the last mod dates of the IDs it is associated with).
So, for example, look at ID 3 in my illustration. The last mod date for ID 3 is 4/5/2019. However, ID 3 is also associated with IDs 4, 5, and 6. The latest modification date among those three is 5/5/2023 (which belongs to ID 4). So the overall last mod date (direct or via association) for ID 3 is 5/5/2023.
I'm unsure how to do this, as it will involve some type of dynamic lookup.
0 -
Ahhhhhh okay. So what you'll do is split out the associated IDs column using the Split Column tile in Magic ETL, then join the last modified dates to each of the new associated ID columns, and finally use the GREATEST() function to find the most recent date across the columns for each row:
GREATEST(`Associated ID Date 1`, `Associated ID Date 2`, `Associated ID Date 3`, `Associated ID Date 4`, `Associated ID Date 5`)
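A minimal plain-Python sketch of what this GREATEST() step computes per row, assuming the split produced five date columns (the `assoc_date_*` names are hypothetical):

```python
from datetime import date

# One row after splitting "associated IDs" into a fixed set of columns and
# joining each associated ID's last-modified date back in.
row = {
    "assoc_date_1": date(2023, 5, 5),
    "assoc_date_2": date(2020, 2, 3),
    "assoc_date_3": date(2022, 12, 11),
    "assoc_date_4": None,  # this row has fewer associations than columns
    "assoc_date_5": None,
}

# Row-wise equivalent of GREATEST(...) across the split columns. In many SQL
# dialects GREATEST returns NULL if any argument is NULL, so the missing
# values are filtered out first.
latest = max(d for d in row.values() if d is not None)
print(latest)  # → 2023-05-05
```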
If I solved your problem, please select "yes" above
0 -
@colemenwilson two issues: 1) some of the IDs are associated with upwards of several hundred IDs, and
2) there will be new IDs and new ID associations (with other IDs) added over time.
I'm familiar with the Split Column function, but I would need a dynamic way of splitting up the columns and then associating them.
0 -
I would need to test this out, but ideally you could solve this with a variation on @MarkSnodgrass's word cloud example (YouTube link): after exploding to one row per associated ID, plus one row for the unassociated version, you Join in the data from the associated IDs (only the field you care about), then use a Group By tile to get the MAX of the dates. This should theoretically work; whether it'll be fast or efficient, since you mentioned
IDs will be associated with upwards of several hundred IDs
is a different story. Also, this approach will only work assuming you only care about direct associations, as you mentioned in a comment before.
3 -
Thanks @marcel_luthi, I'll take a look at this.
0