Comments
-
Hey @kjones140 I haven't tried this before, but you might be able to utilize a recursive dataflow where the dataflow's own output dataset is fed back in as an input alongside an input dataset of the new data. You could then perform some auditing on your new data as a third dataset, determine whether it's good or not, and add a column with value of 1…
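As a rough sketch of that idea in a MySQL dataflow (all dataset and column names here are hypothetical: `recursive_output` is the dataflow's own output fed back in, `new_rows` is the new data, `audit_results` is the audit dataset):
SELECT `id`, `value`, `is_valid`
FROM `recursive_output`
UNION ALL
SELECT
  n.`id`,
  n.`value`,
  -- flag the new row as good only if it shows up in the audit results
  CASE WHEN a.`id` IS NOT NULL THEN 1 ELSE 0 END AS `is_valid`
FROM `new_rows` n
LEFT JOIN `audit_results` a ON n.`id` = a.`id`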
-
Hi @user094816 You can utilize a window function in a beast mode - you need to have this feature enabled in your instance. Talk with your CSM to enable window functions. SUM(SUM(1)) OVER(PARTITION BY `Col1`) This would get you 5 for every ABC123 record. In your example data you have ABC123 listed 4 times and then 1 time…
-
Hi @user077529 Domo supports window functions (you need to have the feature turned on in your instance. Talk with your CSM if you don't have it enabled) which can get you the prior week's value using the LAG window function. The caveat is that this assumes there are no missing weeks in your dataset. LAG(SUM(`Orders`))…
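For reference, a completed version of that beast mode might look something like the line below; the `Week` column used for ordering is an assumption, not the original poster's code:
LAG(SUM(`Orders`)) OVER (ORDER BY `Week`)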
-
Hi @Brandon You can schedule a report to email a card at a specific time and choose to include the data as an attachment if you're wanting to export data that's been processed / visualized. If you're looking to export raw data you can utilize the Java CLI tool…
-
Hi @user052846 You've got the basic idea of what needs to happen; however, the one issue is that you're writing out an Index object rather than a DataFrame object. If you're wanting to export your index you need to convert it to a DataFrame using the to_frame() method first before attempting to write it…
-
What happens if you change the CHARINDEX check to be >= ?
SELECT DISTINCT MarketingClubLevel,
  CASE WHEN CHARINDEX(' ', MarketingClubLevel, 1) >= 0 THEN MarketingClubLevel
       ELSE SUBSTRING(MarketingClubLevel, 1, CHARINDEX(' ', MarketingClubLevel, 1))
  END AS ClubDescription
FROM vClub
What does `CHARINDEX(' ', MarketingClubLevel,1)`…
-
Hi @user063136 You just need to add a partition to your window function to calculate the rank within each specific month. RANK() OVER (PARTITION BY YEAR(`date`), MONTH(`date`) ORDER BY COUNT(`Certificate #`) DESC)
-
Hi @jeremymcd You'd need to utilize an ETL to format your data such that you have a record for each hour of the day instead of a record for each status. For example (this is assuming you're only looking at a single day, otherwise you'd need to include your date in here as well):
HOUR | Status | Agent | Seconds
12 | Admin…
-
Hi @AndresChavez What does the beast mode you're using to calculate the variance look like?
-
What's the query you're running and the error message you're getting? (anonymize it if necessary)
-
Hi @user092416 Looks like Nielsen is the one who owns and maintains the DMA lists. You might be able to utilize the ZIP code list they provide to plot the different regions on a US map, categorized by region.
-
Hi @MarkSnodgrass Magic ETL doesn't support a conditional join, only a straight columnar join. One option is to perform a Cartesian join and then use a filter tile after the fact. To do that, add a constant of 1 to both of your datasets and then join on that column. The Cartesian join causes the number of records…
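In SQL terms the tile setup is roughly equivalent to the sketch below; the dataset names, the `join_key` constant column, and the date-range filter are all illustrative assumptions:
SELECT a.`event_date`, a.`amount`, b.`campaign`
FROM `dataset_a` a
JOIN `dataset_b` b
  ON a.`join_key` = b.`join_key`  -- the constant 1 column added to both datasets
WHERE a.`event_date` BETWEEN b.`start_date` AND b.`end_date`  -- the "conditional" part, applied after the join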
-
Because of leap years and not every year being exactly 365 days, I'd recommend using a 1 year or 2 year interval instead. CASE WHEN `date` >= CURRENT_DATE() - INTERVAL '1' YEAR THEN 'this year' WHEN `date` >= CURRENT_DATE() - INTERVAL '2' YEAR THEN 'two years ago' ... END
-
Hi @user009257 If you're wanting to graph when the equipment is being used, the current format won't get you what you're looking for. You'd need to instead slice your data to state "at this time I was in use". You can do this using an ETL to modify your data. You'd need to define how granular you want your slices to be but…
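As a rough illustration of that slicing (not the original poster's schema; `equipment_usage`, the `hours` helper dataset, and the column names are all assumptions), a SQL dataflow could expand each usage window into one row per hour it covers:
SELECT
  u.`equipment_id`,
  h.`hour_of_day`,  -- one output row per hour the equipment was in use
  1 AS `in_use`
FROM `equipment_usage` u
JOIN `hours` h      -- helper dataset with one row per hour of the day (0-23)
  ON h.`hour_of_day` BETWEEN HOUR(u.`start_time`) AND HOUR(u.`end_time`)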
-
Hi @jeremymcd You can utilize some beast modes on the chart to calculate the number of seconds between the timestamps. I typically use UNIX_TIMESTAMP and basic math to calculate the difference. UNIX_TIMESTAMP(`status_end_at 1`) - UNIX_TIMESTAMP(`status_start_at 1`) You can then group by the `user_name 1` field and the…
-
Hi @ktranstudent Domo allows you to upload custom SVG files with defined regions to create a custom visualization. Essentially you define where the buckets are displayed and Domo takes care of calculating the buckets. For more information on this you can refer to this knowledge base article:…
-
Hi @the_tree What value do you get as the summary number? What value are you expecting?
-
@Kumar619 If you're looking for descriptions of each column and the tags associated with each column on your dataset, you could utilize the "Dataset Schema With Tags" dataset (for clarification, these tags are the ones on the columns and not on the dataset).
-
Hi @Shumilex Have you also tried using a different function, like UNIX_TIMESTAMP, to get the time difference in seconds? SEC_TO_TIME(AVG(UNIX_TIMESTAMP(`closedDate`) - UNIX_TIMESTAMP(`createdDate`)))
-
Hey @Ashleigh The data governance datasets do have the ability to identify the dataset type (it's the data source type on the dataset). You can just filter for 'workbench-odbc' to get the Workbench ODBC jobs. I use this to filter for additional governance checks I have in place. You can then utilize those filtered…
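For example, if you pull the governance data into a SQL dataflow, the filter could look something like this (the dataset and column names here are assumptions; check the actual schema in your instance):
SELECT *
FROM `governance_datasets`
WHERE `Data Source Type` = 'workbench-odbc'  -- keep only Workbench ODBC jobs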
-
Hi @user048760 You'd need three separate beast modes, one for each source. You just need to include the condition filtering the appropriate source in your YOY, MOM, etc. calculations. For example, for Google (this is all untested code): Inquiries: COUNT(CASE WHEN YEAR(`Date_Entered`) = YEAR(CURDATE()) AND…
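A completed version of that pattern might look like the following; the `Marketing_Source` column and the 'Google' filter value are assumptions based on the thread, not tested code:
COUNT(CASE WHEN `Marketing_Source` = 'Google'
            AND YEAR(`Date_Entered`) = YEAR(CURDATE())
           THEN `Date_Entered` END)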
-
Hi @user027926 For future reference you can use the Replace Text tile with a regular expression and replacement variables. You just need to tell the Search Term field that you're using a regular expression. Also, replacement variables are in the format of $1, $2, etc. instead of the traditional \1 or \2 like some other…
-
Hi @user048760 The amount would be the column in your table that contains the MTD amounts. It's the column name. What @jaeW_at_Onyx is suggesting is that you split out each source into its own beast mode. For example: Google: sum( case when `marketing_source` = 'Google' then `Current MTD Amount` end ) Google…
-
Very helpful @Anna Yardley! Thank you for putting this together.
-
Hey @MarkSnodgrass I don't believe there's a way to control the abbreviation of the data columns, only the abbreviation on the graph.
-
Hi @Bwaalsh 1) It sounds like you're wanting to do a windowing function. This is possible within Domo as a beast mode however you need to have the feature switched on in your instance. If you don't have it you'll need to talk to your CSM. Something like: SUM(`Sales`) / SUM(SUM(`Sales`)) OVER (PARTITION BY `Color`) 2) Domo…
-
Hi @user055174 You could also utilize the last_day function to make it a bit simpler. LAST_DAY just returns the last day in the month for a given date. CASE WHEN LAST_DAY(`dt`) = LAST_DAY(CURRENT_DATE()) THEN 'This Month' WHEN LAST_DAY(`dt`) = LAST_DAY(DATE_SUB(CURRENT_DATE(), interval 1 month)) THEN 'Last Month' WHEN…
-
Hi @user047019 What you'd need to do is "stack" your data into a single dataset with the amounts already calculated; then, depending on the filter you have selected, it will pick up the appropriate records.
Date | Currency | Amount
1/1/2021 | ¥ | 10000
1/2/2021 | ¥ | 5000
1/3/2021 | ¥ | 250000
1/1/2021 | $ | 10…
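One way to build that stacked dataset in a SQL dataflow is a UNION ALL with one query per currency; the source table and column names below are illustrative guesses, not the original data:
SELECT `Date`, '¥' AS `Currency`, `Amount_JPY` AS `Amount` FROM `sales`
UNION ALL
SELECT `Date`, '$' AS `Currency`, `Amount_USD` AS `Amount` FROM `sales`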
-
Hi @user084060 This is because the MySQL backend in Domo is based on MySQL 5.6, which doesn't have window functions; they were added later in MySQL 8.0. I'd recommend you look into Magic ETL 2.0, which has a Rank & Window tile where you can use LAG functions and is typically much more performant than MySQL / Redshift if at…
-
Hi @user048760 This is because you don't have an ELSE clause, so any record which doesn't have a Walk_In lead_source returns NULL, which makes your entire aggregate NULL. Try adding an ELSE 0 clause to both your numerator and denominator case statements. SUM(CASE WHEN `lead_source` = 'Walk_In' AND `unit_rented_c` =…
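The shape of the fix would be something like the line below; the 'Yes' comparison value is a placeholder since the rest of the condition is truncated above, and the same ELSE 0 should be added to the denominator case statement as well:
SUM(CASE WHEN `lead_source` = 'Walk_In' AND `unit_rented_c` = 'Yes' THEN 1 ELSE 0 END)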