Comments
-
Currently there isn't a dataset with this information. I'd recommend adding it as an idea to the idea exchange.
-
If you can have all of the records from your top dataset in your output dataset, then you can do a left join on your dataset instead of an inner join. Then, in your card, do a count of the id field from your TEST dataset to get the total records that match. If you just want the entire count, then I'd recommend running your TEST…
-
This KB article may be of help, as it outlines the different functions and their formats: SUBSTRING(`Column`, [START POSITION], [NUMBER OF CHARACTERS TO EXTRACT])
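For instance, a quick sketch assuming a hypothetical `Phone Number` column where you want the first three characters: SUBSTRING(`Phone Number`, 1, 3) would pull just the area code portion of the string.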
-
I'd recommend structuring your data with a date dimension so you can easily determine the last year's value. I typically do this with the Domo date dimension dataset. I've written up how to do this before here:
-
You can utilize the STR_TO_DATE function and a date format string to convert it to a date: STR_TO_DATE(CONCAT(`Year`, '-', `Month`), '%Y-%m'). I'm adding in a CONCAT to join the year and month together as a string to then format. From the documentation: STR_TO_DATE converts strings (that Domo does not recognize as dates) from…
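One common pattern, if you need an explicit day on the resulting date, is to pin every row to the first of the month by including it in the CONCAT (column names assumed from the example above): STR_TO_DATE(CONCAT(`Year`, '-', `Month`, '-01'), '%Y-%m-%d')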
-
Have you tried setting the update method on the connector dataset to Append instead of Replace so it'll append each day's run to your dataset? Alternatively, if you're worried about duplicated records, you could utilize a recursive dataflow (KB Article: ) to filter them out based on the logic you have…
-
Are you including the 'api' portion of your URL in your request?
-
This isn't possible, as all the filters are shown. You could do the filtering in the cards for the values you don't want displayed in the filter bar, but you would lose the flexibility to filter on those values on the page if you ever needed to.
-
You can create separate beast modes for your different values and drag each of them onto your graph: CASE WHEN `engagement_status` = 'ES_user' THEN `value` END
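For example, a second beast mode for another status might look like this (the 'ES_admin' value is just a placeholder for whatever other status you have): CASE WHEN `engagement_status` = 'ES_admin' THEN `value` END. Each beast mode then plots as its own series on the graph.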
-
You can do a running total over time using a window function. Something like this in a beast mode might work for you. SUM(SUM(`field`)) OVER (PARTITION BY `fiscal_year` ORDER BY `date`)
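If you'd rather have the running total continue across all years instead of resetting each fiscal year, dropping the PARTITION BY should do it: SUM(SUM(`field`)) OVER (ORDER BY `date`)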
-
You can use a CASE statement with a LIKE operator to determine if the word Delivery is in your string: CASE WHEN `field` LIKE '%Delivery%' THEN 'True' ELSE 'False' END. If you want it to be case-insensitive, you can use ILIKE instead of LIKE.
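The case-insensitive version would just swap the operator (same field name assumed): CASE WHEN `field` ILIKE '%delivery%' THEN 'True' ELSE 'False' END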
-
When ingesting data, Domo assumes UTC; when the data gets to a visualization (in a card or on the dataset's Data tab), Domo will then convert it to the company Timezone setting. Are there any other transformations happening to your data during processing, or is it just being converted to a timestamp? Have you tried using the…
-
Use a Magic ETL with a formula tile to make the value null if it's -3: CASE WHEN `field` <> -3 THEN `field` END. You can then feed that into an Alter Column tile to change the type.
-
I don't believe the beta works with dataset views as an input data source, which is why you're seeing that error message. I'd try outputting it to an actual dataflow dataset and trying again. I'd also recommend giving that feedback to the beta team, as I'm sure they'd find it valuable.
-
You could potentially use two variables and a beast mode to calculate the difference between your two values; however, there isn't a way to automatically populate these values in the variable, so you'd need to enter them manually. Alternatively, you can do a giant cross product of your dataset such that you have the data…
-
Hi @Byboth - Typically, if a Domo connector doesn't get me all the information I need, I'll either write my own custom connector or utilize Python, the pydomo package, and the associated platform's API SDK packages to pull the information I need. This gives me more control over how I pull and process the data.
-
Yeah, it should be a date field to return the proper year. If Org Creation is already the year, then you can just remove the YEAR() function in the beast mode.
-
The slicer will filter your dataset if the field being used on the slicer exists in your dataset. As long as your subset datasets have the same field name that you're filtering on, it should work for you.
-
You can use the TRIM function in a formula tile to strip leading and trailing whitespace off of a string.
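A minimal sketch, assuming a hypothetical column called `Your Field`: TRIM(`Your Field`) - you can write the result into a new column or back over the original column in the formula tile.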
-
You're correct, Minimum isn't an option, apologies. I did log a new idea in the Idea Exchange to add this, as it seems to be a glaring omission from that tile: Really, there isn't a way in Magic ETL except for doing a Group By and then joining back to your original dataset based on the grouping keys you used to get the…
-
Do you have any other firewalls in place, or a spam server that might be filtering the emails? As @MichelleH and @MarkSnodgrass have mentioned, it's typically a network issue that your IT team will need to get involved with, since they can monitor the traffic and shed some more light on the missing emails.
-
Have you looked into using a Rank & Window tile in your Magic ETL to get the minimum date across your column partition?
-
I haven't been able to successfully connect to AWS using those connectors, as AWS requires some additional special headers for the connection. I'll use Python to write a script and use the Amazon boto3 package to handle all of the authentication automatically for me, and then I'll use the pydomo package to upload data to my…
-
OK, there's likely some other setting that's causing issues. Do you have any beast modes still on the card which reference a field that no longer exists? Are there empty color settings? If you right-click on the page and select Inspect from the menu, it will bring up the developer console; if you select the Network tab and…
-
You can do this in a beast mode with a window function: SUM(COUNT(`fieldtocount`)) OVER (ORDER BY YEAR(`yourdatefield`))
-
You need to aggregate the aggregate when using a window function in a beast mode because of how Domo processes the beast mode. You could do something like: COUNT(MAX(`ID`)) OVER (PARTITION BY `Clinic`, `Product`)
-
If it's an Excel spreadsheet, you can select a specific sheet when you're uploading the file to Domo, so you can have multiple datasets for your different sheets and then use those different datasets in your ETL.
-
If you want to add users in bulk, you can do that using a CSV and also include a value to toggle whether the welcome email is sent: you can specify TRUE or FALSE for the sendInvite value in your CSV.
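As an illustrative sketch only (the real headers come from Domo's bulk import template, so treat these column names as placeholders): a row like jane.doe@example.com, Participant, FALSE would create the user without sending the welcome email, while ending the row with TRUE would send it.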
-
You may be able to automate sending an email and use the Dataset via Email connector - see: Alternatively, if you want to be more technical, you may be able to write a custom connector and leverage SSRS's REST APIs:
-
Currently no, this isn't an option, but it'd be a great recommendation for the Ideas Exchange.