Comments
-
Currently it's all or nothing in terms of the interactions; however, have you looked into utilizing a FIXED function beast mode with FILTER DENY to prevent the calculation from being affected by filters on certain columns?
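For example, a minimal sketch assuming a hypothetical `Sales` measure and a page filter on `Region` that the total should ignore:
SUM(SUM(`Sales`)) FIXED (FILTER DENY `Region`) /* `Sales` and `Region` are placeholder names */
The aggregate stays responsive to every other filter on the page while denying only the `Region` filter.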
-
You can utilize a beast mode as part of your card to define the categories (just make sure to save it to the dataset so you can filter on it on your page).
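For example, a minimal sketch that buckets a hypothetical `Amount` column into categories:
CASE WHEN `Amount` >= 1000 THEN 'Large' WHEN `Amount` >= 100 THEN 'Medium' ELSE 'Small' END /* column name and thresholds are illustrative */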
-
Not with a connector. You could utilize Workbench and have that server connect via VPN before connecting to your database.
-
You can then set your chart to be the last 13 months and create a beast mode that flags whether each row falls before the current month:
CASE WHEN LAST_DAY(`Date`) < LAST_DAY(CURDATE()) THEN 'KEEP' ELSE 'EXCLUDE' END
Then just filter on that beast mode for the KEEP values. LAST_DAY is a hack I use to get the last…
-
Correct
-
You could have 2 separate slicers: one for the high-level category that filters the other, so your lower-level slicer shows only the values within your selected high-level category. Alternatively, you could likely utilize a Domo/DDX brick to get more control over the filtering / UI interactions to combine them into a…
-
Are you simply looking to remove the open and close brackets and the double quotes?
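If so, nested REPLACE calls in a beast mode should handle it. A sketch, assuming a hypothetical `Raw Text` column:
REPLACE(REPLACE(REPLACE(`Raw Text`, '[', ''), ']', ''), '"', '') /* strips [, ], and " characters */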
-
Alternatively, you can use a Group By tile, select the minimum value of your pass/fail column, and join it back to your original dataset. That way, if any row is marked as Fail, all of them become Fail.
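In SQL terms the same logic looks something like this (table and column names are assumptions):
SELECT t.*, g.`overall_status`
FROM input_data t
JOIN (
  SELECT `order_id`, MIN(`pass_fail`) AS `overall_status`
  FROM input_data
  GROUP BY `order_id`
) g ON t.`order_id` = g.`order_id`
/* MIN returns 'Fail' whenever any row is 'Fail', since 'Fail' sorts before 'Pass' */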
-
I'd recommend using the CLI tool for a large dataset import, as it will use the Streams API, split your data into chunks, and load it faster.
-
Are you wanting each row to have that 5% value or just a single summary number? If you want each row to have that number, you'd need to use a window function:
0.05 * SUM(SUM(IFNULL(`BuyerDebit`,0)+IFNULL(`SellerDebit`,0)+IFNULL(`BuyerCredit`,0)+IFNULL(`SellerCredit`,0))) OVER ()
-
How is your data structured? That will determine how to proceed with a beast mode. Typically when I'm working with period comparisons, I'll establish a date dimension table to get at different periods more easily. I've done a write-up on this which you can find here:…
-
Filter your card with a beast mode to calculate the prior week's timeframe and then schedule your report based on the filtered card.
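A sketch of that beast mode, assuming a hypothetical `Date` column and that YEARWEEK is available in your instance's beast mode function set:
CASE WHEN YEARWEEK(`Date`) = YEARWEEK(SUBDATE(CURDATE(), 7)) THEN 'Prior Week' ELSE 'Exclude' END /* SUBDATE shifts today back 7 days, so this matches last week's rows */
Filter the card to 'Prior Week' and the scheduled report will always cover the prior week.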
-
Do you have a fields variable defined in your code prior to this segment?
-
There are other options depending on how technical you want to get. You can use the Python SDK or the Java CLI to export those and automate them outside of Domo.
-
You can utilize a window function to pull the MAU for each month and divide your DAU by that number:
MAX(CASE WHEN `category` = 'DAU' THEN `users` END) / MAX(MAX(CASE WHEN `category` = 'MAU' THEN `users` END) FIXED (BY LAST_DAY(`date`)))
-
When should the Jan 30th date be used? For All dates in January or just some of them?
-
You can use a case statement for this:
SUM(CASE WHEN `category` = 'DAU' THEN `users` END) / SUM(CASE WHEN `category` = 'MAU' THEN `users` END)
Then make sure to graph based on your date field. You can change the aggregation function if you need something different.
-
Does the user you're logged in as have permissions to those output datasets?
-
You can take the minimum value of your Haz / Non-Haz text in your rank and window to determine if it had any Hazardous. Alternatively, you can use a case statement to return 1 if the value is hazardous and 0 if it's not, take the max of that value, and then convert it back to a text value using…
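A hedged sketch of that flag-and-convert route, assuming a hypothetical `Haz Status` column holding 'Hazardous' or 'Non-Hazardous':
CASE WHEN MAX(CASE WHEN `Haz Status` = 'Hazardous' THEN 1 ELSE 0 END) = 1 THEN 'Hazardous' ELSE 'Non-Hazardous' END /* flags the group as hazardous if any row is, then converts back to text */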
-
I have noticed it being a bit finicky in the past when trying to save the replacement variable, but I had the most success pressing Enter after typing in the replacement variable value and then saving the Workbench job.
-
Currently this isn't an option, but I'd recommend adding it to the Idea Exchange for Domo to review.
-
You can utilize Magic ETL and the Domo Dimensions connector. Using the Domo Dimensions connector, pull in the calendar dataset. Feed that into a Magic ETL and add a constant called 'Join Column' with a value of 1. Then use your dataset as an input and add a constant 'Join Column' with a value of 1 to this dataset as well in the ETL.…
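In SQL terms the same pattern looks like this (dataset and column names are assumptions):
SELECT c.`Date`, d.*
FROM calendar c
JOIN your_dataset d ON c.`Join Column` = d.`Join Column`
/* both constants equal 1, so every date pairs with every row: a cross join */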
-
Here are some KB articles for you to reference:
-
You can use a fixed function for this:
AVG(MAX(`duration`) FIXED (BY `session`))
This will take the maximum of the duration for each session identifier and then average each of those values out. As the duration should be the same across all of the same sessions, this is essentially deduping your durations.
-
I'd recommend looking into the logic for how a recursive dataflow ignores data that's included in the updating dataset. You can do a left join from your database dataset to your user-updated dataset and only keep the records in the database dataset where the user-updated identifiers are null. Then append the data together…
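A sketch of that join logic in SQL (dataset and column names are assumptions):
SELECT db.*
FROM database_dataset db
LEFT JOIN user_updates u ON db.`id` = u.`id`
WHERE u.`id` IS NULL /* drop database rows the user has since updated */
-- then append (UNION ALL) the user_updates rows onto this result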
-
You can utilize the DataSet Copy connector to run every Monday to make a copy of your original dataset, and set the update method to append. It will have the BATCH_LAST_RUN field when the snapshot is taken, and you can use that in your chart.
-
Have you tried using SUM on your billed amount and MAX on your gross amount with a Group By tile in an ETL, grouping on your provider and pay period dates?
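In SQL terms (table and column names are assumptions):
SELECT `provider`, `pay_period`,
       SUM(`billed_amount`) AS `total_billed`,
       MAX(`gross_amount`) AS `gross_amount`
FROM claims
GROUP BY `provider`, `pay_period`
/* SUM rolls up line-level billed amounts; MAX keeps the repeated gross from double-counting */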
-
@AnnaYardley Can you assist here?
-
Ok, I think I understand: you want each row to show the count of rows sharing its ID. For that you can utilize a window function:
SUM(SUM(1)) OVER (PARTITION BY `ID#`)
-
You can aggregate your Occurrences in the analyzer by selecting SUM from the aggregate option when you select the column.