Comments
-
@david_cunningham Whoopsie. Thanks for the catch on caregiver vs. care_recipient. I'm now getting the expected results from the following function: SUM(COUNT(DISTINCT CASE WHEN caregiver.action.status = 'current' THEN caregiver.action.ownerId END)) OVER (PARTITION BY caregiver.action.slug) Thanks to you and @DavidChurchman
-
@david_cunningham thanks for the reply. That makes sense. However, how would I modify the example Beast Mode function to handle a case statement on the distinct count? For instance, I only want to count records that satisfy this criterion: CASE WHEN care_recipient.action.status = 'primary' THEN care_recipient.action.ownerId…
-
Can you guys think of a way to do the same, but remove the last part of a string if it ends with specific characters? I'm concatenating a bunch of values into a comma-separated list in a column but want to eliminate the trailing comma. i.e.: "item1, item2, item3, item4," → "item1, item2, item3, item4"
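One option I'm considering (a rough sketch, assuming the MySQL-style TRIM syntax is available and a hypothetical `concatenated_list` column) is to trim the trailing comma off the finished string:

TRIM(TRAILING ',' FROM `concatenated_list`)

And if the list is being built with GROUP_CONCAT in a dataflow, the separator only goes between values, so the trailing comma never shows up in the first place.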
-
@GrantSmith I'm having a similar problem with the S3 AssumeRole Advanced connector. I'm trying to read all of the .json files within a subdirectory in an S3 bucket. I'm able to successfully connect, but I get this error when the job runs: Domo is ready but did not find files that match '/logs/entity=BuffersAndBarriers/*'…
-
Got it. We might just write a Lambda that moves the files out of the S3 subdirectories into a "to_be_processed" directory, and then point the S3 connector at that one directory.
-
@MattTheGuru appreciate your response. That code might be useful for the AWS side of things, but I'm not sure how to solve the issue on the Domo side of getting the S3 connector to pull files from subdirectories automatically. Do you have any thoughts on that?
-
Thanks for your help with this, @ColemenWilson. I was making the mistake of not averaging each case statement individually. I'll give this a shot and let you know how it goes.
-
Thanks for the speedy response, @MichelleH - that worked like a charm!
-
Ahhh… this appears to be the right way to do it. COUNT(DISTINCT(`Main Ticket ID`)) / SUM(COUNT(DISTINCT(`Main Ticket ID`))) OVER ()
-
FYI, this SEEMS to work but I wanted to validate that it is the right way to do it: COUNT(DISTINCT(`Main Ticket ID`)) / COUNT(COUNT(DISTINCT(`Main Ticket ID`))) OVER () UPDATED: That doesn't work. Do I have to do windowing in my ETL to display a percent of total on a State chart? Seems kind of silly since you can do it on…
-
@GrantSmith Is it possible to use %_PERCENT_OF_TOTAL on a state map visualization? Otherwise how can I show percentages of total on that instead of counts?
-
Got it, thanks!
-
@colemenwilson - if I do the group by ID tile, won't I lose the additional rows for that same ID? The ID is in the same table as the individual interaction rows
-
Thanks @colemenwilson I'm going to try that approach and will post back if I can't figure it out.
-
@MarkSnodgrass thanks for the suggestions. I'll give this a shot and see if I can make it work.
-
@MarkSnodgrass thanks for the quick reply. Can you think of a way to use this approach to query for a date range? So not just all open tickets on a specific date, but open tickets during a date range (e.g. 3/1/2023 to 3/31/2023)?
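For what it's worth, the condition I have in mind (a sketch only, assuming hypothetical `created_date` and `closed_date` columns on the ticket table) treats a ticket as open at some point during the range if it was created on or before the range end and wasn't closed before the range start:

SELECT *
FROM tickets
WHERE `created_date` <= '2023-03-31'
  -- still-open tickets (closed_date NULL) count as open for the whole range
  AND (`closed_date` IS NULL OR `closed_date` >= '2023-03-01')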
-
@AKnowles we looked into the writeback connectors option and unfortunately, it's cost-prohibitive to add this feature to our account for just this simple use case.
-
I've got a similar challenge here: I'm assuming I'll need to create a table with 30-minute increments of time and figure out a way to join to it in an ETL to generate a row for each 30-minute block of time?
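In case it's useful, here's the rough shape of that helper table (a sketch assuming a MySQL 8-style recursive CTE; on an older MySQL dataflow the same thing can be built by cross-joining a small numbers table):

-- hypothetical: one row per 30-minute slot for a single day
WITH RECURSIVE half_hours AS (
  SELECT TIMESTAMP('2023-03-01 00:00:00') AS slot_start
  UNION ALL
  SELECT slot_start + INTERVAL 30 MINUTE
  FROM half_hours
  WHERE slot_start < TIMESTAMP('2023-03-01 23:30:00')
)
SELECT slot_start, slot_start + INTERVAL 30 MINUTE AS slot_end
FROM half_hours;

Left-joining the event data to this on event_time >= slot_start AND event_time < slot_end would then keep a row for every 30-minute block, occupied or not.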
-
@MarkSnodgrass so I tried converting the values to numbers for the hours but that doesn't work because I'm dealing with AM and PM - so “100” AM and “100” PM get grouped together in the chart. I'm still perplexed as to why SOME hours sort properly and others don't when using a text sort. I've also tried converting the field…
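One thing I may try next (a sketch, assuming the field is text like '1:00 PM' in a hypothetical `Hour Label` column and that STR_TO_DATE is available in the dataflow/Beast Mode) is converting the label to a 24-hour number and sorting the chart on that instead of the text:

-- '1:00 AM' -> 1, '1:00 PM' -> 13
HOUR(STR_TO_DATE(`Hour Label`, '%l:%i %p'))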
-
Thanks so much for the comprehensive suggestion @bobbyragsdale - I'll give this a try in our environment!
-
Managed to make it work. Thanks @GrantSmith
-
So I'm facing a similar issue: I have a bunch of multiselect fields whose values are stored in a single comma-separated column. I have some logic in my ETL that does a pivot/unpivot in order to be able to report on them, but it's caused a MASSIVE row explosion with about 10 columns I have to do this with. Can…
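One alternative I'm weighing for simple flag-style reporting (a sketch, assuming a hypothetical `multiselect_col` column, an option value 'Option A', and ', ' as the separator) is to flag each option in place instead of unpivoting:

-- wrap the list in delimiters so 'Option A' doesn't match inside a longer option name
CASE WHEN INSTR(CONCAT(', ', `multiselect_col`, ', '), ', Option A, ') > 0 THEN 1 ELSE 0 END

That keeps the row count flat, at the cost of one calculated column per option.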
-
Thanks @GrantSmith that's a good idea.
-
On a related note, what's the most reliable way to filter out rows with specific values in them but not null rows? For example, I want to remove rows where column_1="A" OR column_1="B" but not where column_1 is NULL? I originally tried two filters with an ALL setting where column_1<>"A" or column_1<>"B" but it seems to…
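I suspect the not-equal filters bite here because NULL <> 'A' evaluates to NULL rather than true, so a plain not-equal condition silently drops the NULL rows as well. So I'm thinking a filter along these lines (sketched in SQL terms against the same column_1) might keep the NULLs while excluding A and B:

WHERE `column_1` IS NULL OR `column_1` NOT IN ('A', 'B')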
-
Thanks @GrantSmith and @RobSomers that's very helpful!
-
Plus 1 for this feature. It would be great if there was a simple way to find all cards (and Beast Mode functions) which break because a column name changed.
-
Thanks, @MichelleH that's exactly what I needed!
-
@jaeW_at_Onyx thanks for the suggestion. I was thinking maybe tables would be better, though we'd ideally like to have control over the formatting without having to export it to Excel.
-
Got it. This is really helpful @RobSomers. I've already made some progress with a prototype using your suggested approach.
-
@RobSomers thanks for the quick response and suggestion. We already have separate charts per interaction type, but the business is looking for totals across all interactions by date. Hence the requirement. If I'm not joining the data and instead appending it, how will the ETL process handle it when each table has many…