Comments
-
You can do this in a beast mode with a window function: SUM(COUNT(`fieldtocount`)) OVER (ORDER BY YEAR(`yourdatefield`)) With the ORDER BY in the window, this gives a running total of the yearly counts, so if the counts by year were 10, 15, and 20, it would return 10, 25, and 45.
-
You need to aggregate the aggregate when using a window function in a beast mode because of how Domo processes the beast mode. You could do something like: COUNT(MAX(`ID`)) OVER (PARTITION BY `Clinic`, `Product`)
-
You can select a specific sheet when uploading an Excel spreadsheet to Domo, create a separate dataset for each sheet you need, and then use those datasets in your ETL. https://domo-support.domo.com/s/article/360043436573?language=en_US
-
If you want to add users in bulk you can do that using a CSV, which also lets you toggle whether the welcome email is sent: https://domo-support.domo.com/s/article/360042934274?language=en_US Specify TRUE or FALSE for the sendInvite value in your CSV to control whether the email goes out.
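As a purely hypothetical sketch of what a row might look like (only the sendInvite value comes from the article above; the other column names are placeholders, so pull the exact template and required columns from the linked support article):

Email, Role, sendInvite
jane.doe@example.com, Participant, TRUE
john.roe@example.com, Privileged, FALSE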
-
You may be able to automate sending an email and use the Dataset via Email connector - See https://learn.microsoft.com/en-us/sql/reporting-services/subscriptions/e-mail-delivery-in-reporting-services?view=sql-server-ver16 Alternatively, if you want to be more technical, you may be able to write a custom connector and…
-
Currently no, this isn't an option, but it'd be a great recommendation for the Ideas Exchange.
-
This won't work in a beast mode as you can't aggregate (sum of total per reason) an aggregate (total per reason). You'd need to pre-aggregate the data in an ETL first and then do your final grouping in the card.
-
@user07231 Are you getting any sort of error message with the SQL Partitions connector? How exactly are they not working correctly?
-
I'd recommend adding this to the Ideas Exchange for the product team to review and possibly add back into the product. It will also give the issue visibility to other users and allow them to add their voice to it.
-
Which version of Python are you using locally, and which version is your script using? You can check with: import sys; print(sys.version) Is it giving you a line number that's causing the error?
-
You should restructure your data so that it's stacked, with one row per observation (sales or service):
Unit ID | Date | Date Type
1 | 1-Jan | Sales
2 | 2-Jan | Sales
2 | 2-Feb | Service
Then use some beast modes to count your sales and service units: COUNT(CASE WHEN `Date Type` = 'Sales' THEN `Unit ID` END)…
-
@Utz Can you give me some sample data of your existing dataset, the dataset changed and what the final snapshot dataset should look like?
-
@Utz It sounds like you just want to take your old historical dataset and add your new data to it with the timestamp? In that case, in a Magic ETL, start by just having your original dataset as an input, feed it into a Formula tile to add the snapshot timestamp, and then output it to a new dataset. Save and run…
-
Not with the default visualizations. You could try utilizing a third-party visualization tool like D3 in a Domo Brick to truly customize a scatter plot the way you want it displayed.
-
Custom Domo Connectors are written in the ES5 version of JavaScript.
-
You wouldn't necessarily need to create separate connectors; instead, define different reports within your connector so users can select one, and handle those selections in your code to call the correct endpoint.
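As a minimal ES5 sketch of that idea (it assumes the Connector Dev Studio's metadata.report and httprequest.get helpers; the report names and URLs are placeholders, so double-check the helper names against Domo's connector documentation):

// Pick an endpoint based on the report the user selected when configuring the connector.
var endpoint;
if (metadata.report === 'Orders') {
  endpoint = 'https://api.example.com/v1/orders';
} else if (metadata.report === 'Customers') {
  endpoint = 'https://api.example.com/v1/customers';
} else {
  DOMO.log('Unexpected report selected: ' + metadata.report);
}

var data = JSON.parse(httprequest.get(endpoint));
// ...then add columns and rows to the datagrid for that report as usual.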
-
There are three steps to configuring GraphQL:
1. Describe your data (schema)
2. Ask for what you want (query)
3. Process your data
Underneath the hood, GraphQL is just like another API where you format a request, send your request, and then process the data you get back from that request. The only difference is that you're defining what…
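To make that request/response flow concrete, here's a generic JavaScript sketch (not specific to any particular API or to a Domo connector; the endpoint, query, and field names are made up). A GraphQL call is just an HTTP POST with your query in the body:

// Ask for exactly the fields you want (hypothetical schema).
var query = '{ orders(first: 5) { id total } }';

fetch('https://api.example.com/graphql', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ query: query })
})
  .then(function (res) { return res.json(); })
  .then(function (data) {
    // Process the data you asked for.
    console.log(data.data.orders);
  });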
-
Have you thought about restructuring your data so that each record is a type of go-live?
ID | Go Live Number | Actual Go-Live | Initial Go-Live
Then you can filter on the Actual Go-Live date value for 2023, or use the date filter for this year, and perform a date diff between actual and initial to see if it's an on-time.…
-
Are you wanting tick labels or just a generic x and y label? If you're looking to just put some text on the axis to state what units or metric you're displaying, you can add a new text element. x-axis: svg.append("text") .attr("class", "x label") .attr("text-anchor", "end") .attr("x", width) .attr("y", height -…
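For reference, here's a complete version of that pattern as a sketch (it assumes an existing svg selection plus width and height variables; the label text is just a placeholder):

// x-axis label, anchored to the bottom-right corner of the plot area
svg.append("text")
    .attr("class", "x label")
    .attr("text-anchor", "end")
    .attr("x", width)
    .attr("y", height - 6)
    .text("units sold"); // placeholder label

// y-axis label, rotated to run along the left edge
svg.append("text")
    .attr("class", "y label")
    .attr("text-anchor", "end")
    .attr("y", 6)
    .attr("dy", ".75em")
    .attr("transform", "rotate(-90)")
    .text("revenue"); // placeholder label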
-
Are the columns the same data type? Do you have extra / trailing whitespace in the column names? Domo should recognize them as one column if they're named exactly the same.
-
This sounds like a bug. I'd recommend logging a ticket with Domo Support.
-
Have you tried using an iframe in your HTML code with a link directly to your card URL in the Domo Brick?
-
Are you dealing with percentages / decimal numbers but have it set to use whole numbers? Do your configured min and max values fall below the maximum value in your dataset?
-
You can tell the Magic ETL DataFlow to run only when certain input datasets have been updated. This document may be of use: https://domo-support.domo.com/s/article/360057087393?language=en_US Since you're not wanting to replace the old data you can bypass the join and filter sections of the ETL in the documentation.
-
You'd need to reach out to Domo Support directly, as this would require them to do a custom environment installation for you, if they even allow it. Have you looked into other packages that wouldn't require a driver, like Beautiful Soup?
-
You'll need to stack / pivot your data so you have a singular date field instead of multiple per row. Your data would look something like:
Order ID | Date | Date Type | Order Total
1 | 1/1/2023 | Created | $123
1 | 1/7/2023 | Sales | $123
Then you can use the new Date field to make sure you're getting records affected by…
-
If you have your raw dataset (not the snapshot version) as an input and the snapshot dataset as another input, you can just calculate your current date with your formula, then use an Append tile to stack the new version onto the historical snapshot dataset and output to the historical dataset.
-
This sounds like a bug with the Domo APIs / SDK. I'd recommend logging a ticket with support to have them investigate deeper.
-
You can use a dataflow / Magic ETL to pivot your dataset so that you have the columns Member, Event, and Signed Up, and then filter for Signed Up = TRUE. You can then use that output dataset in your pie chart with Event as the series and do a count of the Members.
-
Does your dataset ID exist in the instance you’ve authenticated to? Does the user you’re authenticated as have access to edit or create datasets?
