Joining Datasets with different numbers of rows
We’re looking to analyze staff utilization data. There are two data sets:
Data Set 1: Raw data from a timekeeping system with daily time entries from each staff member; there are multiple rows per person per month.
Data Set 2: The total available hours per person per month; there is one row per person per month.
We’re looking to calculate utilization per month per person by summing all of their hours entries for a specific month from Data Set 1, and dividing by their corresponding available hours for that month in Data Set 2.
When we joined the two data sets using the unique ID# as the identifier, the output includes the monthly available hours value on every row in which there is a raw data entry. We need to create a Beast Mode that sums all raw data hours per month and divides by just one instance of the available hours per month.
Any ideas?
Comments
Great question! As long as the card is reporting by month, the following will work:
SUM(`Hours_Field_From_Dataset_1`) / AVG(`Available_Hours_From_Dataset_2`)
You could also use MAX() or MIN() instead of AVG() since there's just one value per rep per month. Should you wish to report by quarter or year, you will need to create quarterly and yearly availability fields before the join and create Beast Modes for quarterly and yearly utilization.
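For example, a pre-join transform along these lines would roll the monthly availability up to the quarter (the table and column names here are placeholders, not your actual fields):

SELECT
    employee_id,
    CONCAT(YEAR(available_month), '-Q', QUARTER(available_month)) AS availability_quarter,
    SUM(available_hours) AS quarterly_available_hours
FROM monthly_availability
GROUP BY
    employee_id,
    CONCAT(YEAR(available_month), '-Q', QUARTER(available_month))

The same pattern with YEAR() alone gives a yearly availability field.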
** I work for Domo
I think I would approach this by bringing both datasets to the same date grain before the join. In Data Set 1, SUM up all the hours in the month by employee so that you have just one row per employee per month. This would be my preferred method.
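In a SQL DataFlow, that pre-join aggregation could look roughly like the sketch below; the table and column names (timekeeping_raw, monthly_availability, hours_logged, and so on) are placeholders for whatever your datasets actually use:

-- Transform 1, saved as monthly_hours: collapse the daily time entries
-- to one row per employee per month
SELECT
    employee_id,
    DATE_FORMAT(entry_date, '%Y-%m-01') AS entry_month,
    SUM(hours_logged) AS hours_logged
FROM timekeeping_raw
GROUP BY employee_id, DATE_FORMAT(entry_date, '%Y-%m-01')

-- Transform 2: join to the availability table, which already has one row
-- per employee per month, so nothing gets duplicated
SELECT
    a.employee_id,
    a.available_month,
    m.hours_logged,
    a.available_hours
FROM monthly_availability a
JOIN monthly_hours m
    ON m.employee_id = a.employee_id
   AND m.entry_month = DATE_FORMAT(a.available_month, '%Y-%m-01')

With one row per employee per month on both sides, the utilization Beast Mode can simply be SUM(`hours_logged`) / SUM(`available_hours`), and it will roll up correctly by month, quarter, or year.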
Alternatively, you might try something like this:
SUM(`Hours Logged by Employee`) / MAX(`Available Hours by Employee`)
As long as the Available Hours is never equal to zero, this should work.
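If there is any chance the Available Hours value could be zero, a defensive variant of that Beast Mode (same placeholder field names as above) would wrap the division in a CASE so the card returns 0 instead of an invalid result:

CASE
    WHEN MAX(`Available Hours by Employee`) = 0 THEN 0
    ELSE SUM(`Hours Logged by Employee`) / MAX(`Available Hours by Employee`)
END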
KurtF
**Say “Thanks” by clicking the “heart” in the post that helped you.
**Please mark the post that solves your problem by clicking on "Accept as Solution"
Thanks!
What if the number of rows varies? In other words, a higher Dataset 2 value included five times will weight the average differently than a lower Dataset 2 value included fewer times.
We tried the Beast Mode and the numbers don't look quite right yet.
In that case, unless there is a reason you can't do it, I would second Kurt's recommendation that you get both datasets on the same level of granularity.
My answer assumes that dataset 2 has data that looks like this:
Name Date Available_Hours
John Doe 01/01/2018 140
Jane Roe 01/01/2018 119
John Doe 02/01/2018 115
Jane Roe 02/01/2018 137
As long as the card is displaying by month and you are breaking it out by individual user, then that formula will work.
Another option would be to fix the data after the join. You would add row numbers partitioned by person and month, then update the data so that the available hours are set to 0 wherever the row number is greater than 1. Then you could use SUM()/SUM().
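Assuming the dataflow's SQL engine supports window functions (the Rank & Window tile in Magic ETL can do the same job), that post-join fix might look something like this, with placeholder table and column names:

SELECT
    employee_id,
    entry_date,
    hours_logged,
    CASE WHEN rn = 1 THEN available_hours ELSE 0 END AS available_hours
FROM (
    SELECT
        employee_id,
        entry_date,
        hours_logged,
        available_hours,
        ROW_NUMBER() OVER (
            PARTITION BY employee_id, DATE_FORMAT(entry_date, '%Y-%m')
            ORDER BY entry_date
        ) AS rn
    FROM joined_utilization
) ranked

Only the first row for each person in each month keeps its available hours; the rest are set to 0, so the Beast Mode becomes SUM(`hours_logged`) / SUM(`available_hours`) and aggregates correctly at any date grain.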
** I work for Domo
Where would we update the Available Hours to 0 in the dataflow? Not sure which function to use.