How to SUM only one instance of duplicate rows?
I have a dataset with financial data, and it has one row per month per project. However, it also pulls a Run Cost from an attribute, and this attribute is displayed with the exact same value on every row, because it's not captured as a monthly figure; it's just a total.
I need to SUM the Run Cost for matching criteria, but I need to ignore these duplicate rows and count it for each ProjectId only once.
I previously thought I could do this using MAX instead of SUM, because for each ProjectID the MAX would give me what I need. However, my code is giving me the MAX for the entire dataset column where the criteria match, and not the SUM/MAX for each Project ID.
Here's a mockup of a portion of my dataset:
BatchTimeStamp | ProjectId | First Year of Impact | Yearly Total Run Costs (CHF) |
---|---|---|---|
2023-11-22 | ID1 | 2024 | 700 |
2023-11-22 | ID1 | 2024 | 700 |
2023-11-22 | ID1 | 2024 | 700 |
2023-11-22 | ID2 | 2024 | 300 |
2023-11-22 | ID2 | 2024 | 300 |
2023-11-22 | ID2 | 2024 | 300 |
Long story short, the result I'm looking for here would be 1000 (700 + 300).
This is my non-working Beast Mode (BM) at the moment:
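To make the target number concrete, here is a minimal plain-Python sketch of the desired logic (the data is the sample table above): keep one Run Cost per ProjectId, then sum across projects.

```python
# Sample rows from the question: (BatchTimeStamp, ProjectId, First Year of Impact, Run Cost).
rows = [
    ("2023-11-22", "ID1", 2024, 700),
    ("2023-11-22", "ID1", 2024, 700),
    ("2023-11-22", "ID1", 2024, 700),
    ("2023-11-22", "ID2", 2024, 300),
    ("2023-11-22", "ID2", 2024, 300),
    ("2023-11-22", "ID2", 2024, 300),
]

# A dict keyed by ProjectId collapses the duplicate rows to one Run Cost each.
run_cost_per_project = {project_id: cost for _, project_id, _, cost in rows}

total = sum(run_cost_per_project.values())
print(total)  # 1000
```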
MAX
(CASE
WHEN DATE(`BatchTimestamp`) = DATE('2023-11-22')
AND (`First Year of Impact_p` = '2024' OR `First Year of Impact_p` IS NULL)
THEN `Yearly Total RUN Costs (CHF)_p`
ELSE 0
END)
This is part of an ETL, and I can't change the input dataset. I do need the monthly rows for another part of the ETL, so I can't remove the duplicate rows either.
Is there a solution someone could provide?
Thanks!
Answers
You can try using a window function that numbers the rows within each project, so you conditionally count only the first row per project:
CASE WHEN SUM(1) OVER (PARTITION BY `ProjectId` ORDER BY `BatchTimeStamp`) = 1 THEN YOUR_AMOUNT_HERE ELSE 0 END
I was thinking something similar to @GrantSmith. Using Magic ETL, group by ProjectID and use Max/Min aggregation on Yearly Total Run Costs. I always prefer to aggregate in an ETL.
In Beast Mode, sum the run costs with something like:

SUM(
CASE
WHEN `BatchTimeStamp` = MIN(`BatchTimeStamp`) OVER (PARTITION BY `ProjectId`)
THEN `Yearly Total Run Costs (CHF)`
ELSE 0
END
)
Hi Grant & Arbor, thank you for your replies. I'm trying to figure this out based on your answers, but quickly for now: I should have clarified that there are other BatchTimeStamp values in this dataset, and I do need to filter on this one specifically. Also, I'm far from an expert on BM, so any hand-holding is greatly appreciated! :)
What Grant is suggesting you try:
SUM(1) OVER (PARTITION BY ProjectId ORDER BY BatchTimeStamp)
This creates a running total (cumulative sum) of rows for each project id, ordered by the batch timestamp. The first row for each project id will have a total of 1, the second 2, etc.
The case statement checks whether the running total for the row is 1 — in other words, whether it is the first row for that project id, based on the order of batch timestamp. If it is the first row, it returns the value of your amount. Otherwise it returns zero.
Hopefully I summarized that correctly.
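The running-count trick described above can be sketched in plain Python (hypothetical sample data; in this dataset the Run Cost is identical on every row of a project, so picking the "first" row is safe):

```python
from collections import defaultdict

# (BatchTimeStamp, ProjectId, RunCost) — sample rows from the question.
rows = [
    ("2023-11-22", "ID1", 700),
    ("2023-11-22", "ID1", 700),
    ("2023-11-22", "ID1", 700),
    ("2023-11-22", "ID2", 300),
    ("2023-11-22", "ID2", 300),
    ("2023-11-22", "ID2", 300),
]
# Mimic PARTITION BY ProjectId ORDER BY BatchTimeStamp.
rows.sort(key=lambda r: (r[1], r[0]))

seen = defaultdict(int)  # running row count per ProjectId
total = 0
for ts, project_id, cost in rows:
    seen[project_id] += 1
    # CASE WHEN running count = 1 THEN cost ELSE 0 END
    if seen[project_id] == 1:
        total += cost

print(total)  # 1000
```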
I'm just struggling with how to integrate that with my other criteria within the ETL BM.
Thanks for your patience.
I don't have much time to respond, but it ought to be something like this:
Using Magic ETL
Group By the data to remove duplicates for each ProjectId and get the unique Yearly Total Run Costs (CHF) for each project:

• Add a Group By tile in Magic ETL.
• Group by ProjectId.
• Use the MAX (or MIN) aggregation on Yearly Total Run Costs (CHF). This will ensure you're only getting a single row per project with its correct Run Cost.

Then SUM the results of the grouped data:
• Add an Aggregate tile after the Group By.
• Perform a SUM on the Yearly Total Run Costs (CHF).
This will give you the correct total of the Yearly Total Run Costs (CHF) for all projects without counting duplicate rows.
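The two-tile approach above (Group By with MAX, then SUM) can be sketched in plain Python to show why it dedupes correctly — this is only an illustration of the logic, not Magic ETL itself:

```python
# (ProjectId, RunCost) — the sample data from the question.
rows = [
    ("ID1", 700), ("ID1", 700), ("ID1", 700),
    ("ID2", 300), ("ID2", 300), ("ID2", 300),
]

# Step 1: "Group By" tile — MAX of Run Cost per ProjectId collapses duplicates
# (MIN would work equally well here, since the value repeats within a project).
grouped = {}
for project_id, run_cost in rows:
    grouped[project_id] = max(grouped.get(project_id, float("-inf")), run_cost)

# Step 2: "Aggregate" tile — SUM over the one-row-per-project result.
total = sum(grouped.values())
print(total)  # 1000
```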
Hi Arbor,
Thanks for the reply. What you described is more or less exactly what I'm doing in another ETL. However, I don't think I can do that here (group by ProjectId), because I need to keep each of the monthly rows in order to calculate project costs in the same output dataset. Expanding on the mockup from my first post, the input dataset looks like this (I shortened some column headers so it fits better):
Type | TimeStamp | ProjectId | Year of Impact | Total Run Costs | Month | Month Cost |
---|---|---|---|---|---|---|
Historical | 2023-11-22 | ID1 | 2024 | 700 | 2024-01-01 | 200 |
Historical | 2023-11-22 | ID1 | 2024 | 700 | 2024-02-01 | 400 |
Historical | 2023-11-22 | ID1 | 2024 | 700 | 2024-03-01 | 100 |
Historical | 2023-11-22 | ID2 | 2024 | 300 | 2023-06-01 | 900 |
Historical | 2023-11-22 | ID2 | 2024 | 300 | 2023-07-01 | 900 |
Historical | 2023-11-22 | ID2 | 2024 | 300 | 2023-08-01 | 400 |
Live | 2024-10-02 | ID1 | 2024 | 800 | 2024-01-01 | 200 |
Live | 2024-10-02 | ID1 | 2024 | 800 | 2024-02-01 | 500 |
Live | 2024-10-02 | ID1 | 2024 | 800 | 2024-03-01 | 150 |
Live | 2024-10-02 | ID2 | 2024 | 400 | 2023-06-01 | 999 |
Live | 2024-10-02 | ID2 | 2024 | 400 | 2023-07-01 | 900 |
Live | 2024-10-02 | ID2 | 2024 | 400 | 2023-08-01 | 450 |
I've also included the Type column and the other timestamps in the mockup to show why that specific timestamp is important, and why the unique identifier is the combination of ProjectId and TimeStamp.
The ETL currently looks like this:
In addition to this piece I'm working on to SUM the Run Cost, each Group By has a Beast Mode that calculates the Project Cost for 2024 by summing the individual rows, like this:
SUM
(CASE
WHEN DATE(`BatchTimestamp`) = DATE('2023-11-22')
AND YEAR(`Month`) = 2024
THEN `PlannedCost`
ELSE 0
END)

Which is why I need to keep the individual rows for each month.
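Combining the two requirements — the timestamp filter from your original BM plus the first-row-per-project trick for the Run Cost, while still summing every monthly row for the project cost — might look like this plain-Python sketch. It uses the expanded mockup above; `month_cost` stands in for the `PlannedCost` field in your Beast Mode, which is an assumption on my part:

```python
# (Type, TimeStamp, ProjectId, RunCost, Month, MonthCost) — rows from the expanded mockup.
rows = [
    ("Historical", "2023-11-22", "ID1", 700, "2024-01-01", 200),
    ("Historical", "2023-11-22", "ID1", 700, "2024-02-01", 400),
    ("Historical", "2023-11-22", "ID1", 700, "2024-03-01", 100),
    ("Historical", "2023-11-22", "ID2", 300, "2023-06-01", 900),
    ("Historical", "2023-11-22", "ID2", 300, "2023-07-01", 900),
    ("Historical", "2023-11-22", "ID2", 300, "2023-08-01", 400),
    ("Live", "2024-10-02", "ID1", 800, "2024-01-01", 200),
    ("Live", "2024-10-02", "ID2", 400, "2023-06-01", 999),
]

target_ts = "2023-11-22"   # DATE(`BatchTimestamp`) = DATE('2023-11-22')
seen = set()               # which projects already contributed their Run Cost
run_cost_total = 0
planned_cost_total = 0

for typ, ts, project_id, run_cost, month, month_cost in rows:
    if ts != target_ts:
        continue                      # other batches are filtered out entirely
    if project_id not in seen:
        seen.add(project_id)          # first row per project: count Run Cost once
        run_cost_total += run_cost
    if month.startswith("2024"):      # YEAR(`Month`) = 2024
        planned_cost_total += month_cost

print(run_cost_total, planned_cost_total)  # 1000 700
```

Note that the monthly rows are never dropped — the same loop sums every monthly cost while counting each project's Run Cost only once.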