Comments
-
@NathanDorsch I'm referring to the Java CLI tool that Domo provides.
-
Try something like this: CASE WHEN (`Week Date` + INTERVAL (7 - DAYOFWEEK(`Week Date`)) DAY) = (CURDATE() - INTERVAL (DAYOFWEEK(CURDATE())) DAY) THEN 'Last Week' ELSE 'NOT' END The first part is adding days to the week to get the last day in the week. The second half is calculating the end of the week based on the current…
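To sanity-check the week-boundary math outside of Domo, here's a minimal Python sketch of the same logic. The helper names and sample dates are mine, not Domo's; MySQL's DAYOFWEEK runs Sunday=1 through Saturday=7:

```python
from datetime import date, timedelta

def mysql_dayofweek(d):
    # MySQL DAYOFWEEK: Sunday=1 ... Saturday=7
    # (Python's weekday() is Monday=0 ... Sunday=6)
    return (d.weekday() + 1) % 7 + 1

def week_end(d):
    # `Week Date` + INTERVAL (7 - DAYOFWEEK(`Week Date`)) DAY
    # -> the Saturday that ends that week
    return d + timedelta(days=7 - mysql_dayofweek(d))

def last_week_end(today):
    # CURDATE() - INTERVAL DAYOFWEEK(CURDATE()) DAY
    # -> the Saturday that ended the previous week
    return today - timedelta(days=mysql_dayofweek(today))

def label(week_date, today):
    return 'Last Week' if week_end(week_date) == last_week_end(today) else 'NOT'
```

If today is Wednesday 2024-01-10, any `Week Date` falling in the week ending Saturday 2024-01-06 gets labeled 'Last Week'.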
-
Ah, in that case you'd want to use the fixed function to be able to ignore filtering. Which date field is driving your chart? Is that the Milestone date? In your beast mode you're missing the ORDER BY, which is what makes it a running total. Without it, it's just a grand total of your entire dataset (or partition if you have one…
-
Did that user have access to the data fusions / was the data fusion shared with them? "Everyone with DataSet access" should include everyone who has the dataset shared with them.
-
You can utilize a window function to do this: SUM(COUNT(`Issue ID`)) OVER (ORDER BY `Milestone Date`)
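For illustration, the same running total can be sketched in plain Python with made-up rows (the column names and data here are hypothetical, just to show what SUM(COUNT(...)) OVER (ORDER BY ...) produces):

```python
from collections import Counter
from itertools import accumulate

# Hypothetical rows of (milestone_date, issue_id)
rows = [("2024-01", 1), ("2024-01", 2), ("2024-02", 3), ("2024-03", 4)]

# COUNT(`Issue ID`) grouped by `Milestone Date`
counts = Counter(d for d, _ in rows)
dates = sorted(counts)

# SUM(...) OVER (ORDER BY `Milestone Date`) -> cumulative running total
running = dict(zip(dates, accumulate(counts[d] for d in dates)))
```

Each date's value is the count for that date plus all earlier dates, which is exactly what the ORDER BY inside the OVER clause buys you.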
-
I've done a writeup in the past about custom period-over-period comparisons which you could utilize in this case. I'd recommend filtering for just the first of the month instead of all dates, and grouping your data on a monthly basis before doing the joins, but you can read up more on this here:…
-
You'll need to reach out to support to have the connector team look into this.
-
Correct, Magic ETL has network restrictions with the exception of Writeback tiles. You'd need to import your data using a connector first and then process it with Magic ETL.
-
You'll need to reach out to Domo Support to have them possibly remove these.
-
@RocketMichael If those are the only records in your dataset you can use a Magic ETL dataflow to add a constant to your dataset (Join Column, value 1), then filter your data for just 'Current Week'. Take that filter and join it back to your original dataset with the constant, joining both on the constant. This will join each…
-
How is your rank value being calculated in your dynamicRank table?
-
Currently it's not supported with a Date type variable. It'd be a great one to submit to the Idea Exchange.
-
Are you using a beast mode or an ETL to calculate your rank? Could you post the code?
-
Should be DAYOFMONTH
-
`Date` - INTERVAL (DAYOFMONTH(`Date`) - 1) DAY
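The same first-of-month arithmetic, sketched in Python for anyone who wants to verify it (the function name is mine):

```python
from datetime import date, timedelta

def first_of_month(d):
    # `Date` - INTERVAL (DAYOFMONTH(`Date`) - 1) DAY
    # Subtracting (day-of-month - 1) days always lands on the 1st.
    return d - timedelta(days=d.day - 1)
```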
-
Have you tried using an Unpivot tile in an ETL to convert your columns to rows, then grouping by your metric column and counting the number of values?
-
When you're filtering and including metrics a and b, your LAG function will pull data across both metric a and metric b. You'll want to add PARTITION BY `metric` to your LAG statements to make sure you're not jumping between metrics with the lag.
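As a rough illustration of why the partition matters, here's a Python sketch of LAG with a per-metric partition (the rows are invented sample data):

```python
# Hypothetical rows of (metric, period, value), sorted by metric then period
rows = [("a", 1, 10), ("a", 2, 12), ("b", 1, 100), ("b", 2, 90)]

# LAG(value) OVER (PARTITION BY metric ORDER BY period):
# the previous value is tracked per metric, so the first row of
# metric "b" gets None instead of the last value from metric "a"
prev = {}
lagged = []
for metric, _, value in rows:
    lagged.append(prev.get(metric))
    prev[metric] = value
```

Without the partition, the first "b" row would lag back into "a"'s last value, which is exactly the jump the PARTITION BY prevents.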
-
If you go to the bottom of the post there's an updated version (under Addendum) of JSON code you can copy and then paste into the ETL canvas area to automatically populate the necessary tiles and logic. You'll just need to name the output dataset and select the correct datasets for the input datasets. There are two…
-
You could use a Magic ETL 2.0 dataflow: group your data on start_date, model, and inventory_units and select the MAX ob_modified_date, then do an inner join between your original input dataset and the output of the Group By, joining on start_date, model, inventory_units, and ob_modified_date.
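A small Python sketch of that group-by-max-then-inner-join pattern, using invented rows (ISO date strings compare correctly with max, so string comparison stands in for date comparison here):

```python
# Hypothetical rows of (start_date, model, inventory_units, ob_modified_date)
rows = [
    ("2024-01-01", "X", 5, "2024-01-02"),
    ("2024-01-01", "X", 5, "2024-01-05"),  # newer copy of the same key
    ("2024-01-01", "Y", 3, "2024-01-03"),
]

# Group By tile: MAX(ob_modified_date) per (start_date, model, inventory_units)
latest = {}
for start, model, units, modified in rows:
    key = (start, model, units)
    latest[key] = max(latest.get(key, modified), modified)

# Inner join back on all four columns keeps only the newest row per key
deduped = [r for r in rows if latest[(r[0], r[1], r[2])] == r[3]]
```

The join on all four columns is what filters each group down to its most recently modified row.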
-
You can use the aggregation on your value field in the Single value card and set it to maximum:
-
pydomo/ds_update doesn't support append at this time. You could configure a recursive dataflow to append your data.
-
You can utilize the Java CLI and the backup-card command to get the card definition. You can also monitor the API it's calling with the log-api command before you run backup-card to get the api endpoints it's calling.
-
Hi @damen I'd recommend utilizing your own date dimension table to compare the same days per year. I've done a writeup on how to do this previously here: https://dojo.domo.com/main/discussion/53481/a-more-flexible-way-to-do-period-over-period-comparisons#latest
-
If you browse to the actual dataset you can see the specific ID in the URL in a GUID format. https://instance.domo.com/datasources/a12cbf64-35d0-47bb-8567-ce7c87149a54/details/overview
-
I'd recommend utilizing a date dimension dataset with different offsets so you have more control of your PoP values, and it will allow you to have multiple types of periods on the same graph. I've done a writeup on how to do this in the past here:…
-
No, if you can change the names so they're the same in the backend then you won't need to create a new field / beast mode to do your filtering.
-
COUNT counts the number of non-null values, so you'll want to return a value only if you want it counted. You can use a CASE statement inside your COUNT to do this: COUNT(CASE WHEN `ptstatus` = 80 THEN `ptstatus` END)
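A quick Python sketch of what that conditional COUNT does, with made-up status values:

```python
# Made-up status values, including a NULL
statuses = [80, 20, 80, None, 50, 80]

# COUNT(`ptstatus`) counts every non-null value
count_all = sum(1 for s in statuses if s is not None)

# COUNT(CASE WHEN `ptstatus` = 80 THEN `ptstatus` END) only returns
# (and therefore only counts) the rows where ptstatus = 80; the CASE
# yields NULL for everything else, which COUNT ignores
count_80 = sum(1 for s in statuses if s == 80)
```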
-
The COUNT function will count the number of non-null records. If you're wanting to compare to NOT NULL, it's the same as SQL: CASE WHEN `Name` IS NOT NULL THEN 'Not Null' ELSE 'Null' END
-
If you have Dataset A with Field1 and Dataset B with Field2, create a beast mode on the card using Dataset B and call it Field1. Then have it return the value of `Field2`.
-
You'll need to rename them so they have the same name. You could just save a beast mode to the dataset that returns the other column.
