Null Date Values in Card Filters

Hi!

I know this question has been asked a few times in the forums over the years (see "Best Practice for Date fields" and "How do you Filter on Date that is Null") so apologies for the redundancy. I'm asking the question again because these posts seem to miss an element that is crucial to my use case: calculations and Beast Mode functions.

I am building a KPI dashboard for a team of Project Managers. Projects are separated into 10 "milestones". Milestones and other date fields (at least 25 total but probably more) are currently being tracked within Salesforce in various custom fields. For example:
Milestone_1_Due Date
Milestone_1_Completed Date
Milestone_2_Due Date
Milestone_2_Completed Date
...
Milestone_10_Due Date
Milestone_10_Completed Date
Target Launch Date
Project Opened Date
Project Closed Date

Depending on where a Project Manager is within the project timeline, it's very common and expected that many of these Due Date/Completed Date values will be blank.

I'm tasked with doing a number of calculations on these projects, for example:

  • "How many projects completed 'Milestone 7' at least 7 days before their Target Launch Date?"
  • "What percentage of projects launched on time?"
  • etc.
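
For the first question, for example, this is roughly the Beast Mode I need (`Project ID` is a stand-in for whatever the dataset's unique project key is):

    -- Count of projects that completed Milestone 7 at least 7 days
    -- before their Target Launch Date
    COUNT(DISTINCT CASE
        WHEN DATEDIFF(`Target Launch Date`, `Milestone_7_Completed Date`) >= 7
        THEN `Project ID`
    END)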

In ETL, I have defined all date columns as Date data types so I can easily do math on them. However, cards prohibit null values from being used in filter configuration.

I am aware of workarounds involving CASE or IFNULL for empty dates (CASE WHEN Date_Column IS NULL THEN 'Is null' ELSE Date_Column END), but these convert the data type from Date to String, which breaks the formulas used to calculate whether DateA occurred before DateB.
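
For example, a comparison like the one below only works while both columns are still true Dates; point it at a column that the CASE/IFNULL workaround has turned into a String and the date logic falls apart (column names as listed above):

    -- Only behaves correctly while both columns remain Date types; breaks
    -- once the completed date has been converted to String by the workaround
    CASE
        WHEN `Milestone_7_Completed Date` <= `Target Launch Date` THEN 'Completed on or before launch'
        ELSE 'Completed after launch'
    END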

So are my options really limited to:

  1. Duplicating the date columns so I have both data types (~50 columns instead of ~25): Date for calculations and String for filtering nulls
  2. Using DATE() and other String/DateTime functions in Beast Mode on an ad hoc basis
  3. Populating a bogus date value in Magic ETL to replace nulls (e.g., null -> 1900/1/1); a sketch of this one follows the list
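
For reference, option 3 would be the Magic ETL (or MySQL dataflow) equivalent of roughly the following, which keeps the Date type but means every card has to treat 1900-01-01 as "blank" (`projects` and `Project ID` are placeholders):

    -- Rough sketch of option 3: replace NULLs with a sentinel date so cards
    -- can filter on the column; DATE() keeps the result a Date, not a String
    SELECT
        `Project ID`,
        COALESCE(`Milestone_1_Completed Date`, DATE('1900-01-01')) AS `Milestone_1_Completed Date`
        -- ...repeated for every other date column
    FROM projects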

It seems like a lot of workarounds and extra work to get functionality that already exists in other parts of Domo (for example, you can filter with Include/Exclude in the Data view of a dataset).

Are there other best practices out there that I'm missing?

Best Answer

  • DavidChurchman
    Answer ✓

    I think a best practice when dealing with a large number of fields like this is to unpivot them into a "long format" dataset rather than a super wide dataset. That way you don't have to perform the same transforms 25 times, and it usually makes building cards easier, too.

    Making some broad assumptions without knowing your dataset/context, my guess is you'd want to split your dataset into milestone information and project information, then unpivot the milestones and join the project information back in. That would reduce the number of date columns to something more manageable. Something like this:
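
    In rough SQL terms (Magic ETL's unpivot tile, Collapse Columns if I'm remembering the name right, does the same thing graphically; `projects` and `Project ID` are placeholders for your dataset):

        -- Unpivot sketch: one row per project per milestone
        SELECT `Project ID`, 1 AS `Milestone Number`,
               `Milestone_1_Due Date` AS `Due Date`,
               `Milestone_1_Completed Date` AS `Completed Date`
        FROM projects
        UNION ALL
        SELECT `Project ID`, 2,
               `Milestone_2_Due Date`,
               `Milestone_2_Completed Date`
        FROM projects
        -- ...repeat through milestone 10, then join the project-level fields
        -- (Target Launch Date, Project Opened/Closed Date) back on `Project ID`

    That leaves you with just two date columns on the milestone side (`Due Date` and `Completed Date`) plus the handful of project-level dates, so any null handling only has to be written once.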

    That would also facilitate filtering on things like "show me all the milestones reached before the target launch date", and it would let you make some of that analysis more dynamic and user-driven.

    Please 💡/💖/👍/😊 this post if you read it and found it helpful.

    Please accept the answer if it solved your problem.

Answers

  • I guess the path to follow would depend on what you want to do with the dates. If option 3 works for you, there is no need to do this in ETL; you can do it in the Views Explorer and Remap Nulls to the bogus date of your choice.

    Usually, when a date is NULL and you operate on it, the result of the operation will also be NULL. That can be desirable at times, as a way to indicate that the elapsed time between timestamps couldn't be calculated because the project hasn't reached one (or both) of them yet. A NULL will also usually cause a CASE statement to fall through to the default ELSE branch, unless you specifically check for NULL as part of your logic.
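
    For example (column names are hypothetical), a Beast Mode like this only hits the date logic when the milestone has actually been reached:

        -- DATEDIFF returns NULL when either date is NULL, and without the
        -- explicit IS NULL branch those rows would fall through to the ELSE
        CASE
            WHEN `Milestone_7_Completed Date` IS NULL THEN 'Not reached yet'
            WHEN DATEDIFF(`Target Launch Date`, `Milestone_7_Completed Date`) >= 7 THEN 'Completed early'
            ELSE 'Completed close to or after launch'
        END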
