Comments
-
When should the Jan 30th date be used? For all dates in January or just some of them?
-
You can use a case statement for this: SUM(CASE WHEN `category` = 'DAU' THEN `users` END) / SUM(CASE WHEN `category` = 'MAU' THEN `users` END). Then make sure to graph based on your date field. You can change the aggregation function if you need something different.
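One caution with the ratio above: if a date has no MAU rows the denominator is zero. A hedged variant, assuming your dialect supports NULLIF, would be:

```sql
-- DAU/MAU ratio guarded against a zero denominator.
-- `category` and `users` are the columns from the comment above;
-- NULLIF turns a 0 denominator into NULL so the row charts as blank
-- instead of erroring.
SUM(CASE WHEN `category` = 'DAU' THEN `users` END)
  / NULLIF(SUM(CASE WHEN `category` = 'MAU' THEN `users` END), 0)
```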
-
does the user you’re logged in as have permissions to those output datasets?
-
You can take the minimum value of your Haz / Non-Haz text in your rank and window to determine if it had any Hazardous values. Alternatively, you can use a case statement to return 1 if the value is hazardous and 0 if it's not, then take the max of that value and convert it back to a text value using…
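The second approach can be sketched as a single window expression. The partition key `Order ID` and the literal values are hypothetical; substitute your own grouping column and text values:

```sql
-- Flag hazardous rows as 1, take the MAX per group, convert back to text.
-- `Order ID` is a placeholder for whatever identifies a group of rows.
CASE
  WHEN MAX(CASE WHEN `Haz / Non-Haz` = 'Hazardous' THEN 1 ELSE 0 END)
       OVER (PARTITION BY `Order ID`) = 1
  THEN 'Hazardous'
  ELSE 'Non-Hazardous'
END
```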
-
I have noticed it being a bit finicky in the past when trying to save the replacement variable, but I had the most success when pressing Enter after typing in the replacement variable value and then saving the Workbench job.
-
Currently this isn't an option, but I'd recommend adding it to the Idea Exchange for Domo to review.
-
You can utilize Magic ETL and the Domo Dimensions connector. Using the Domo Dimensions connector, pull in the calendar dataset. Feed that into a Magic ETL and add a constant called 'Join Column' with a value of 1. Then use your dataset as an input and add a constant 'Join Column' with a value of 1 to this dataset as well in the ETL.…
-
Here are some KB articles for you to reference:
-
You can use a FIXED function for this: AVG(MAX(`duration`) FIXED (BY `session`)). This will take the maximum of the duration for each session identifier and then average those values out. As the duration should be the same across all rows of the same session, this is essentially deduping your durations.
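For anyone more comfortable in plain SQL, the FIXED expression above is roughly equivalent to a two-step aggregation — `sessions` is a hypothetical table name:

```sql
-- Step 1 (inner): one max duration per session.
-- Step 2 (outer): average those per-session maxes.
SELECT AVG(session_max) AS avg_duration
FROM (
  SELECT `session`, MAX(`duration`) AS session_max
  FROM sessions                -- placeholder table name
  GROUP BY `session`
) per_session
```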
-
I'd recommend looking into the logic of how a recursive dataflow ignores data that's included in the updating dataset. You can do a left join from your database dataset to your user-updated dataset and keep only the database records where the user-updated identifiers are null. Then append the data together…
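The recursive pattern can be sketched in SQL. The table and column names here (`historical`, `user_updates`, `id`) are placeholders, not from the original thread:

```sql
-- Keep historical rows whose id was NOT re-submitted, then append the updates.
SELECT h.*
FROM historical h
LEFT JOIN user_updates u ON h.`id` = u.`id`
WHERE u.`id` IS NULL          -- no match = not superseded by an update
UNION ALL
SELECT * FROM user_updates    -- the fresh rows win for any shared id
```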
-
You can utilize the DataSet Copy connector, scheduled to run every Monday, to make a copy of your original dataset, and set the update method to append. Each snapshot will have the BATCH_LAST_RUN field from when it was taken, and you can use that in your chart.
-
Have you tried using SUM on your billed amount and MAX on your gross amount with a group by function in an ETL grouping on your provider and pay period dates?
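As a sketch of that Group By in SQL terms — `claims`, `billed_amount`, and `gross_amount` are hypothetical names standing in for your actual columns:

```sql
-- Sum the billed amounts but take a single gross amount per group,
-- grouped by provider and pay period date.
SELECT `provider`,
       `pay_period_date`,
       SUM(`billed_amount`) AS total_billed,
       MAX(`gross_amount`)  AS gross_amount
FROM claims                      -- placeholder table name
GROUP BY `provider`, `pay_period_date`
```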
-
@AnnaYardley Can you assist here?
-
Ok, I think I understand. You want the count of the IDs on each row for that ID. For that you can utilize a window function: SUM(SUM(1)) OVER (PARTITION BY `ID#`)
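A small worked sketch of what that window function does: if `ID#` = 42 appears on three rows, the expression stamps 3 onto each of those three rows.

```sql
-- Counts the rows per `ID#` and writes that count onto every row of that ID.
SUM(SUM(1)) OVER (PARTITION BY `ID#`)
-- In plain SQL (outside a beast mode), the equivalent form would be:
-- COUNT(*) OVER (PARTITION BY `ID#`)
```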
-
You can aggregate your Occurrences in the analyzer by selecting SUM from the aggregate option when you select the column.
-
Not within Magic ETL; however, you could possibly utilize a MySQL dataflow to query all of the columns in the table via information_schema, sort them alphabetically, and then build your SELECT clause to select those columns in your dataset.
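The metadata query could look something like this — `your_table` is a placeholder, and depending on your environment you may also need to filter on `table_schema`:

```sql
-- List a table's columns alphabetically so you can assemble the SELECT clause.
SELECT column_name
FROM information_schema.columns
WHERE table_name = 'your_table'   -- placeholder table name
ORDER BY column_name ASC;
```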
-
Currently, there isn't a simple way to do this within the UI. You can utilize the Java CLI to export the dataflow definition using the list-dataflow command and save it to a file. Then change the output dataset ID to your web form's ID in the file. You can then use the set-dataflow-properties command with the -d flag to update the…
-
You can't conditionally show or hide different filtering options. If you're looking to restrict the records returned for specific users you can look into PDP to automatically do filtering so they won't be able to see other records.
-
It's grouping based on all non-aggregated fields. Because you're not aggregating the NumberDelivered field it's including it in the grouping. If you aggregate it with Sum it should remove the duplicate SendDate fields and aggregate it for the entire month.
-
Views, like federated datasets, are refreshed when they are accessed. They are not like a regular dataflow, which would update based on a schedule or another DataSet updating. This means that they only get updated when the data is requested, as in when viewing a card built off of these datasets; you can't create a…
-
You can use three different beast modes to handle this:
Combined: AVG(`Handle Time`)
Inbound 1: AVG(CASE WHEN `Call Type` = 'Inbound 1' THEN `Handle Time` END)
Inbound 2: AVG(CASE WHEN `Call Type` = 'Inbound 2' THEN `Handle Time` END)
You can then graph all of those on the same chart.
-
Have you looked into the premium Sandbox product that Domo offers? This may help with your ability to track and deploy versions. https://domo-support.domo.com/s/article/4403367344023?language=en_US
-
@AdamC Will you have multiple records with the same ID or is it just one record per ID and you want to check the latest date based on multiple columns for the one record?
-
Typically when you're getting an Infinity error it means you're dividing by 0. Have you tried wrapping your formula in a CASE statement to check against having a 0 denominator and then return a 0 or some other default value?
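A sketch of that guard — `numerator` and `denominator` are placeholder columns standing in for whatever your formula divides:

```sql
-- Return 0 instead of Infinity when the denominator sums to 0.
CASE
  WHEN SUM(`denominator`) = 0 THEN 0
  ELSE SUM(`numerator`) / SUM(`denominator`)
END
```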
-
You could do something like: CONCAT(HOUR(`Timestamp`), ':', FLOOR(MINUTE(`Timestamp`)/30)*30). MINUTE will return the number of minutes in the timestamp (0-59); dividing by 30 will return a number between 0 and just under 2; FLOOR will convert your fraction to 0 or 1; multiplying by 30 will convert the interval to either 0 or 30. This will then give you your…
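As a worked example of that expression, take a `Timestamp` of 14:47:

```sql
-- Trace for 14:47:
--   MINUTE(`Timestamp`)   -> 47
--   47 / 30               -> 1.56...
--   FLOOR(1.56...)        -> 1
--   1 * 30                -> 30
--   CONCAT(14, ':', 30)   -> '14:30'
CONCAT(HOUR(`Timestamp`), ':', FLOOR(MINUTE(`Timestamp`)/30)*30)
```

One caveat: minutes 0-29 render as ':0' rather than ':00'; if your dialect supports LPAD, padding the minute value would tidy that up.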
-
Hi @user046467 You'll likely need to add a third-party IdP to enable MFA. Here's the documentation on it: https://domohelp.zendesk.com/hc/en-us/articles/360043428233-Domo-Mobile-Security
-
Have you used a try/catch block to trap the error?
-
Take the output from the last formula tile and feed that into a group by tile. Then feed the output of that and the same formula tile to a Join tile and do the inner join. Then take the output of the join and feed it to your output dataset.
-
You'll need to utilize a Magic ETL for this. You can take the dataset as your input. Then feed it to a Select Column Tile. You can then select the Supervisor Name and the Email columns where you rename the Email Column to 'Supervisor Email'. Then feed your original dataset into a Join tile where you will left join your…
-
Take a Group By tile to group by the ID and Status fields and take the MAX of your UPDATED field, then inner join that back into your original dataset on the ID and Status fields and where the UPDATED fields match. This will remove any records that don't have the last updated value.
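That Group By + inner join dedup can be sketched in SQL — `t` is a placeholder for your original dataset:

```sql
-- Keep only the most recently updated row per ID/Status pair.
SELECT t.*
FROM t
INNER JOIN (
  SELECT `ID`, `Status`, MAX(`UPDATED`) AS `UPDATED`
  FROM t
  GROUP BY `ID`, `Status`
) latest
  ON t.`ID` = latest.`ID`
 AND t.`Status` = latest.`Status`
 AND t.`UPDATED` = latest.`UPDATED`
```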