Sum Distinct Not working Correctly in the Total

Hi,

Am I overlooking something? I have a table and a Beast Mode using SUM DISTINCT, but the total is not adding up correctly. The total for Projected Loads should be 12,682, not 8,596. I don't see any two numbers in the subtotals that are the same.
Thanks in advance

Best Answer

  • Jonathan53891
    Jonathan53891 Contributor
    edited June 11 Answer ✓

    The SUM DISTINCT function is interesting in that it only sums the distinct values in the column you are referring to in your Beast Mode calculation.

    For instance, let's consider an example where you have two projected load values for 2024-05-01 (10, 20) and three projected load values for 2024-06-01 (30, 10, 10). The SUM DISTINCT output for 2024-05-01 will be 30 since all of the values are unique for that specific date. Additionally, the SUM DISTINCT output for 2024-06-01 will be 40 since it skips the duplicate values (10 appears under that date twice). Thus, you have 30 for 2024-05-01 and 40 for 2024-06-01.

    Where it gets interesting is that the Total row will compute the value 60 in this example, rather than the 70 you might expect. The reason is that the total row itself sums the distinct values across all dates and skips the duplicates. So it only considers 10 + 20 + 30 = 60, since the value 10 appears three times between both of those dates in this example.

    With that being said, my recommendation is to create this calculation in the dataflow itself rather than in a Beast Mode calculation since you have more functionality there to work with and can avoid these confusing outputs.
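The behavior Jonathan describes can be sketched in a few lines of Python (the dates and values are just the hypothetical ones from his example, not real data):

```python
# Sketch of how SUM(DISTINCT ...) behaves per group vs. in the total row.
rows = [
    ("2024-05-01", 10), ("2024-05-01", 20),
    ("2024-06-01", 30), ("2024-06-01", 10), ("2024-06-01", 10),
]

def sum_distinct(values):
    """SUM(DISTINCT ...): sum each unique value exactly once."""
    return sum(set(values))

# Per-date subtotals: duplicates are only removed within each date.
by_date = {}
for date, load in rows:
    by_date.setdefault(date, []).append(load)
subtotals = {date: sum_distinct(loads) for date, loads in by_date.items()}
print(subtotals)  # {'2024-05-01': 30, '2024-06-01': 40}

# Total row: duplicates are removed across ALL rows, so 10 counts once.
total = sum_distinct(load for _, load in rows)
print(total)  # 60, not the 70 you'd get by adding the subtotals
```

The subtotals dedupe within each date, but the grand total dedupes across the whole table, which is why the two disagree.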

Answers

  • Sean_Tully
    Sean_Tully Contributor

    Is "Projected Loads" itself a sum? The grand total might be deduping the values that give you the monthly totals.

  • Is there a reason to do a distinct sum rather than a straight sum? There are probably non-distinct values that make up your monthly totals.

    Let's say July was: 100, 100, 100 for a total of 300; August was 50, 50, 50 for a total of 150, so your totals table would look like:

    July: 300

    August: 150

    Your overall distinct sum would be 150, not 450.

    Please 💡/💖/👍/😊 this post if you read it and found it helpful.

    Please accept the answer if it solved your problem.
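Sean's straight-sum vs. distinct-sum point can be shown with the hypothetical July/August numbers from his example:

```python
# A straight SUM keeps every row; SUM DISTINCT collapses repeated values.
values = [100, 100, 100, 50, 50, 50]  # July rows + August rows

straight_sum = sum(values)       # 450 — what the monthly subtotals add to
distinct_sum = sum(set(values))  # 150 — what the grand total row shows
print(straight_sum, distinct_sum)
```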

  • @ColinHaze Have you tried using a FIXED function instead of a SUM DISTINCT?


  • I ended up taking Jonathan's advice and just running it through the dataflow.
    Thanks all!

  • TheBeard
    TheBeard Member
    edited October 18

    I'm sorry to dredge up an old topic, but I'm dealing with this now.

    I have a dataset that has some duplicate values (Distance), but I need to keep them because another field is unique for filtering only (SupplierCode) - so these extra rows should not be considered in the 'Distance' SUM.

    However, I also have duplicate 'Distance' values that are unique in every other field, so essentially they DO need to be considered in the 'Distance' SUM because they are actually unique when considered against the whole row.

    Using SUM() or SUM(DISTINCT()) gives me incorrect results.

    My fix was to use a "Rank & Window" tile in the ETL, partitioned appropriately so that truly unique values have a unique rank. Then I add a 'Distance' + 'Rank' formula and run the ETL. This makes every 'Distance' value unique, except for the ones that were intentionally duplicated for the additional 'SupplierCode' filtering field (assuming I partitioned the Rank & Window correctly).

    Lastly, in the beast mode, I used SUM(DISTINCT 'Distance') - SUM(DISTINCT 'Rank'). This aggregates everything that should be considered, then restores 'Distance' back to its original value before the ETL calculation.

    Hope that makes sense. There may have been an easier way but I'll consider this one a victory.
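A rough Python stand-in for TheBeard's workaround, with made-up column values and a hand-assigned partition in place of the actual Rank & Window tile (the trick assumes the adjusted values end up unique across logical rows):

```python
# (SupplierCode, Distance) — the first two rows are the SAME shipment,
# duplicated only so each SupplierCode can be filtered on; the last two
# are genuinely different rows that happen to share a Distance.
rows = [
    ("SUP-A", 100.0),
    ("SUP-B", 100.0),   # duplicate of the row above; count once
    ("SUP-C", 250.0),
    ("SUP-D", 250.0),   # distinct row; must also be counted
]

# Rank & Window step (faked here): each logical row gets a unique rank,
# and the intentionally duplicated rows share one rank.
logical_row_of = [0, 0, 1, 2]           # assumed partition result
rank = [r + 1 for r in logical_row_of]  # ranks: 1, 1, 2, 3

# ETL formula step: Distance + Rank makes each logical row's value unique.
adjusted = [d + rk for (_, d), rk in zip(rows, rank)]  # 101, 101, 252, 253

# Beast Mode equivalent: SUM(DISTINCT adjusted) - SUM(DISTINCT rank)
result = sum(set(adjusted)) - sum(set(rank))
print(result)  # 100 + 250 + 250 = 600.0
```

Subtracting the distinct ranks strips the offsets back out, leaving the sum of distances over the logical rows only.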