How to create a backlog trend card?
I'm trying to create a card that shows the trend of case completion against a backlog each month (based on open/close dates). Ideally, it would be a multi-line chart with 'month-yr' on the X-axis and COUNT of cases on the Y-axis, for the following series:
- New = COUNT of cases opened during month
- Closed = COUNT of cases closed during month
- Backlog = COUNT of total remaining open (no close date) at end of the month
Is this possible? Not really sure how to tackle the aggregation of the backlog each month with beast mode.
Best Answer
I’d do some additional ETL before doing this visualization. There are two issues that would make date-based beast gymnastics in the card a pain:
- Open.Date and Close.Date are likely two different columns in your dataset (assuming each row represents a Case). The x-axis can only look at one column worth of dates, which we need to specify in a beast or an ETL. I'd suspect it’s possible for a Case to open and close in the same month, and you’d want the count of that case reflected in both Open and Closed counts.
- The count of cases that are open at the end of each month is a point-in-time value that should remain static from end of month onward. I've been much happier with point-in-time info by doing a snapshot (monthly scheduled dataset refresh) on the last day of each month and appending the count into the primary dataset. Otherwise you get into a lot of logic that continually assesses whether the case was open at a point in time...it's ugly. Plus, I often get questions about "what was the backlog at end of Month X" and it's easier to go back and look at prior snapshots than try to recreate history.
Here’s a review of the ETL approach I’d take to tackle these two issues.
- Create new ETL.
- Start with your cases dataset that runs on regular intervals as the input dataset.
- Branch it right away into two parallel paths with the Filter Rows transform. One of them will filter on Open.Date “is not null”, the other will filter on Closed.Date “is not null”.
- Use the Add Constants transform in each path to create a new identically named text column called “Row Status”. Set its value to “Opened” in the open-filtered path and “Closed” in the closed-filtered path. Also add a constant of “1” in each path and call that column “Key Count”.
- Next use Value Mapper transform in each path to create a new column named “Key Date”. In the opened path, it makes that new column based on the Open.Date. In the closed path, it makes the new column based on Closed.Date. (note – you could also use Select Columns to just change the names of these columns, but I prefer Value Mapper so you can keep the original data intact for better drill-through capability later on)
- At this point, you’ve now got an opened path in your ETL that has a row count (and sum of Key Count) equivalent to all the cases that have been opened and a closed path that has a row count equivalent to all the cases that have been closed. You then append the two paths back together and output the dataset.
You could now use “Key Date” as your x-axis, Sum of Key Count as your y-axis, and “Row Status” as your series. Make sure this is treating you right, then proceed to the next piece which adds the historical snapshot.
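The branch-and-append logic above can be sketched in pandas. This is a hedged illustration of what the ETL produces, not Magic ETL itself; column names follow the thread, and the sample data is made up:

```python
import pandas as pd

# Illustrative case data; each row is a Case, with separate open/close date columns.
cases = pd.DataFrame({
    "Case.ID": [1, 2, 3],
    "Open.Date": pd.to_datetime(["2024-01-05", "2024-01-20", "2024-02-03"]),
    "Closed.Date": pd.to_datetime(["2024-01-28", None, "2024-02-15"]),
})

# Path 1: every case with an Open.Date becomes an "Opened" row keyed on that date.
opened = cases[cases["Open.Date"].notna()].copy()
opened["Row Status"] = "Opened"
opened["Key Count"] = 1
opened["Key Date"] = opened["Open.Date"]

# Path 2: every case with a Closed.Date becomes a "Closed" row keyed on that date.
closed = cases[cases["Closed.Date"].notna()].copy()
closed["Row Status"] = "Closed"
closed["Key Count"] = 1
closed["Key Date"] = closed["Closed.Date"]

# Append the paths; a case opened and closed in the same month appears in both series.
output = pd.concat([opened, closed], ignore_index=True)
```

The card then groups `Key Date` by month, sums `Key Count`, and splits the series on `Row Status`, so Case 1 counts toward both the January "Opened" and January "Closed" lines.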
- Make another dataset for “Case Snapshot” that’s identical to the Case dataset you used as your input in the ETL above, and set it to run late night on the last day of every month. Be sure its scheduling is set up to use the Append method instead of the Replace method.
- Add "Case Snapshot" as a new input to the ETL above.
- Apply a Filter transform to only include rows where Closed.Date “is null”.
- Add Constants for a text column called "Row Status" and set to a value of “EOM Open Cases”, and add another column called "Key Count" with a value of “1”.
- Then use Value Mapper to create a “Key Date” column based on BATCH_DATE.
- Lastly, take this path and add it as a 3rd input to your append. (if append isn't lining up the date properly, you may need to also use a Column Type transform to ensure the date format of BATCH_DATE is similar to Open.Date and Closed.Date)
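The snapshot path works the same way; here is a hedged pandas sketch of it. `_BATCH_DATE_` below stands in for the batch timestamp Magic ETL attaches to appended snapshot rows, and the data is illustrative:

```python
import pandas as pd

# Illustrative end-of-month snapshot rows from the appended "Case Snapshot" dataset.
snapshot = pd.DataFrame({
    "Case.ID": [2, 4],
    "Open.Date": pd.to_datetime(["2024-01-20", "2024-01-25"]),
    "Closed.Date": pd.to_datetime([None, None]),
    "_BATCH_DATE_": pd.to_datetime(["2024-01-31 23:55", "2024-01-31 23:55"]),
})

# Keep only cases still open at snapshot time (Closed.Date is null).
backlog = snapshot[snapshot["Closed.Date"].isna()].copy()
backlog["Row Status"] = "EOM Open Cases"
backlog["Key Count"] = 1
# Truncate the batch timestamp to a plain date so it lines up with the other date columns
# when the three paths are appended.
backlog["Key Date"] = backlog["_BATCH_DATE_"].dt.normalize()
```

Appending `backlog` as the third input gives the "Backlog" series a static, historically accurate point per month.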
There are other ways to do this using Group By to assemble counts for various conditions, but it wouldn’t allow a user to drill to any details.
If you need to recreate the past EOM Open Cases counts and aren’t worried about drills, you could add a simple webform to the ETL & append with each row representing an EOM period. You’d need to make sure the column types matched so the append worked properly, but it would just be a Row Status of “EOM Open Cases”, a Key Count of whatever value you had logged as open EOM case count, and a Key Date of the last day of each month.
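For the webform backfill, the rows would look something like this sketch (the counts here are placeholders for whatever you logged historically; the important part is that the three column types match the appended dataset):

```python
import pandas as pd

# Hypothetical manual backfill of historical EOM backlog counts (no drill-through).
backfill = pd.DataFrame({
    "Row Status": ["EOM Open Cases"] * 3,
    "Key Count": [41, 38, 45],  # logged open-case count at each month end (made up)
    "Key Date": pd.to_datetime(["2023-10-31", "2023-11-30", "2023-12-31"]),
})
```

Since the card sums `Key Count`, one pre-aggregated row per month plots exactly like the per-case snapshot rows do.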
Good luck!
Bill
Answers
Thank you so much for the very detailed solution. I was finally able to spend the time working through it, and for the most part everything works. Three things to mention:
- Value Mapper doesn't work for making the Key Date columns. Originally, I was able to do this with Combine Columns and Convert Type to Date/Time. However, I ended up switching to Date Operation (since I needed it for the next item) and just add 0 days for the transform, which gives the same result without the type conversion step.
- Using the batch date for the backlog makes sense but poses some additional challenges depending on timezones. I was able to use a Date Operation to address this.
- I do care about historical data, so I'll need to figure out the best way to get that data together.
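The timezone issue above comes from the batch timestamp being recorded in UTC, so a late-night month-end run can land on the first of the next month in local time. A minimal sketch of the fix, assuming a fixed UTC-5 offset (the offset is an assumption; adjust for your timezone):

```python
import pandas as pd

# Batch ran at 03:55 UTC on Feb 1, i.e. late evening Jan 31 local time (UTC-5 assumed).
batch_utc = pd.to_datetime(["2024-02-01 03:55"])
local = batch_utc - pd.Timedelta(hours=5)  # shift to the assumed local offset
key_date = local.normalize()               # truncate to the local date
```

This mirrors what a Date Operation subtracting hours before truncating to a date accomplishes in the ETL.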