SUM DISTINCT Not Working Correctly in the Total
Best Answer
The SUM DISTINCT function is interesting in that it only sums the distinct values in the column you are referring to in your Beast Mode calculation.
For instance, let's consider an example where you have two projected load values for 2024-05-01 (10, 20) and three projected load values for 2024-06-01 (30, 10, 10). The SUM DISTINCT output for 2024-05-01 will be 30 since all of the values are unique for that specific date. Additionally, the SUM DISTINCT output for 2024-06-01 will be 40 since it skips the duplicate values (10 appears under that date twice). Thus, you have 30 for 2024-05-01 and 40 for 2024-06-01.
Where it gets interesting is that the Total row will compute the value 60 in this example, rather than 70, which would make a lot more sense. The reason is that the Total row applies SUM DISTINCT across the entire dataset and skips duplicates globally, so it only considers 10 + 20 + 30 = 60, since the value 10 appears three times across the two dates in this example.
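To make the behavior concrete, here is a rough SQL equivalent of what the card does behind the scenes (the table and column names are made up for illustration):

    -- Per-date rows: SUM DISTINCT is applied within each date group
    SELECT dt, SUM(DISTINCT load_value) AS projected
    FROM projected_loads
    GROUP BY dt;
    -- 2024-05-01 -> 10 + 20 = 30
    -- 2024-06-01 -> 30 + 10 = 40

    -- Total row: SUM DISTINCT is applied across ALL rows at once
    SELECT SUM(DISTINCT load_value) AS projected
    FROM projected_loads;
    -- 10 + 20 + 30 = 60; the three 10s collapse into one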
With that being said, my recommendation is to create this calculation in the dataflow itself rather than in a Beast Mode calculation, since you have more functionality to work with there and can avoid these confusing outputs.
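If you go the SQL DataFlow route, a minimal sketch of that pre-aggregation might look like the following (again with assumed names); the card can then use a plain SUM, and the Total row correctly shows 30 + 40 = 70:

    -- Keep one row per (date, value) pair, then sum per date
    SELECT dt, SUM(load_value) AS projected_load
    FROM (
        SELECT DISTINCT dt, load_value
        FROM projected_loads
    ) deduped
    GROUP BY dt;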
Answers
Is "Projected Loads" itself a sum? The grand total might be deduping the values that give you the monthly totals.
Is there a reason to do a distinct sum rather than a straight sum? There are probably non-distinct values that make up your monthly totals.
Let's say July was: 100, 100, 100 for a total of 300; August was 50, 50, 50 for a total of 150, so your totals table would look like:
July: 300
August: 150
Your overall distinct sum would be 150, not 450.
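You can reproduce that in generic SQL (inline values, nothing from your actual dataset):

    -- The distinct values across both months are just {100, 50}
    SELECT SUM(DISTINCT v) AS distinct_total,  -- 150
           SUM(v)          AS straight_total   -- 450
    FROM (VALUES (100), (100), (100), (50), (50), (50)) AS t(v);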
Please 💡/💖/👍/😊 this post if you read it and found it helpful.
Please accept the answer if it solved your problem.
@ColinHaze Have you tried using a FIXED function instead of a SUM DISTINCT?
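For example, a sketch of what that might look like, assuming Domo's FIXED (BY ...) Beast Mode syntax and the field names from the example above; this is an assumption to verify, not a tested formula:

    -- Hypothetical: pin the aggregation to the date level so the total
    -- works from per-date results rather than raw rows (verify that
    -- DISTINCT is accepted inside FIXED on your instance)
    SUM(DISTINCT `Projected Load`) FIXED (BY `Date`)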
I ended up taking Jonathan's advice and just running it through the dataflow.
Thanks all!
I'm sorry to dredge up an old topic, but I'm dealing with this now.
I have a dataset with some duplicate 'Distance' values that I need to keep, because another field (SupplierCode) is unique and used for filtering only - so these extra rows should not be counted in the 'Distance' SUM.
However, I also have duplicate 'Distance' values that are unique in every other field, so they DO need to be considered in the 'Distance' SUM because they are actually unique when considered against the whole row.
Using SUM() or SUM(DISTINCT()) gives me incorrect results.
My fix was to use a "Rank & Window" tile in the ETL, partitioned appropriately so that truly unique rows get a unique rank. Then I add a 'Distance' + 'Rank' formula and run the ETL. This makes every 'Distance' value unique, except for the ones that were intentionally duplicated for the additional 'SupplierCode' filtering field (assuming I partitioned the Rank & Window correctly).
Lastly, in the Beast Mode, I used SUM(DISTINCT 'Distance') - SUM(DISTINCT 'Rank'). This aggregates everything that should be considered, then subtracts the ranks back out, restoring the 'Distance' total to its original value from before the ETL calculation.
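In generic SQL terms, the trick looks something like this (the table name, the RowKey identifier, and the window ordering are assumptions, not the actual schema):

    -- Rows that differ only by SupplierCode share a RowKey and thus a rank;
    -- truly distinct rows get distinct ranks
    WITH ranked AS (
        SELECT Distance,
               SupplierCode,
               DENSE_RANK() OVER (ORDER BY RowKey) AS rk
        FROM shipments
    )
    -- Distance + rk is unique for each row that should count, so
    -- SUM(DISTINCT ...) keeps all of them; subtracting SUM(DISTINCT rk)
    -- strips the rank offsets back out. (Assumes no accidental collisions
    -- among the Distance + rk values.)
    SELECT SUM(DISTINCT Distance + rk) - SUM(DISTINCT rk) AS total_distance
    FROM ranked;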
Hope that makes sense. There may have been an easier way, but I'll consider this one a victory.