Comments
-
Swap the two datasets you're using in your join. You're currently basing your dataset on the suppliers that have rejected records rather than on all vendors from the received records. TEST Received Qty should be on the left of your join, not the right.
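In SQL terms the shape would be something like this sketch (received and rejected are hypothetical names standing in for your two datasets, and Supplier Code is an assumed key):
SELECT r.*, j.`Rejected Qty`
FROM received r -- left side: keeps every vendor with received records
LEFT JOIN rejected j -- right side: only matched where rejections exist
  ON r.`Supplier Code` = j.`Supplier Code`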
-
It appears your join is only on the month. If you have multiple supplier codes within a month, joining on the month alone will cause a cross-product join and create duplicates. Try including the supplier code in addition to the month in your join.
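As a sketch (table names here are hypothetical), the join condition would cover both fields:
SELECT a.*, b.`Rejected Qty`
FROM received a
JOIN rejected b
  ON a.`Month` = b.`Month` -- month alone can match many rows
 AND a.`Supplier Code` = b.`Supplier Code` -- adding the supplier code makes each match unique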
-
Currently, this isn't an option and you'd need to set this for each dataset job in Workbench. I'd recommend adding this to the idea exchange as I can see others finding it useful.
-
How are you combining your data? Are you using a join in Magic ETL? What kind of join are you using? Which fields are you joining on? It appears you may have an issue with how you're joining your two datasets together. Make sure you include all of the key fields in the join to avoid duplicating data.
-
What is the structure of your data? What type of visualization are you utilizing? What do you want the output data or visualization to look like? You may need to utilize a window function to calculate the category:
CASE WHEN SUM(SUM(`Aantal`)) OVER (PARTITION BY `Your Grouping Columns`) <= 5 THEN 'Why' ELSE `Opzegreden` END
-
group_concat won't remove duplicates like in your example. If you need to remove duplicates, you'd need to feed the data through a Remove Duplicates tile first, then do a Group By based on account ID and date and select the "Combine columns with ," option. I typically try to avoid combining values into a CSV list though, as it's…
-
In my experience, the Simple API doesn't allow nulls for numerical values in JSON. This is why I typically use the dataset or stream APIs to upload a CSV and just leave an empty value in the CSV to represent a null. This last document should have a better example of uploading CSV with the Content-Type of text/csv:
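In the meantime, here's a rough sketch of that kind of upload with curl (the dataset ID and token are placeholders; double-check the endpoint against the current API docs):
curl -X PUT "https://api.domo.com/v1/datasets/YOUR_DATASET_ID/data" \
  -H "Authorization: Bearer YOUR_ACCESS_TOKEN" \
  -H "Content-Type: text/csv" \
  --data-binary @data.csv
An empty field between commas in the CSV (e.g. a,,c) comes through as a null in the numeric column.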
-
This appears to be a bug; I'm able to reproduce your issue as well. I'd recommend logging a bug with support (More > Feedback > Support Issue).
-
Sounds like a potential Domo backend issue. I'd recommend reaching out to the third-party connector team at connectorhelp@domo.com to see if they can assist you.
-
The only way to dynamically pivot a table would be to utilize a MySQL dataflow and do something like this:
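A minimal sketch of that pattern, assuming hypothetical names (input_table, id, category, amount) that you'd swap for your own: build the column list with GROUP_CONCAT, then run it as a prepared statement.
-- build one SUM(CASE ...) column per distinct category value
SET @sql = NULL;
SELECT GROUP_CONCAT(DISTINCT
  CONCAT('SUM(CASE WHEN `category` = ''', `category`, ''' THEN `amount` END) AS `', `category`, '`')
) INTO @sql
FROM input_table;
-- assemble and execute the final pivot query
SET @sql = CONCAT('SELECT `id`, ', @sql, ' FROM input_table GROUP BY `id`');
PREPARE stmt FROM @sql;
EXECUTE stmt;
Note that GROUP_CONCAT has a default length limit (group_concat_max_len) you may need to raise if you have a lot of distinct categories.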
-
It will count them as separate words. You could utilize REGEXP_REPLACE to condense all multi-spaces to a single space first:
LENGTH(`field`) - LENGTH(REPLACE(REGEXP_REPLACE(`field`, ' +', ' '), ' ', '')) + 1
-
If you're considering a word to be a run of characters separated by spaces, you can do something like this with a Formula tile:
LENGTH(`field`) - LENGTH(REPLACE(`field`, ' ', '')) + 1
This counts the number of spaces in your field and adds one to get the word count.
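For example, with `field` = 'one two three': LENGTH gives 13, removing the two spaces leaves 11 characters, and 13 - 11 + 1 = 3 words.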
-
Because Domo looks for a first and a last value in your dataset, when you have only one value it is simultaneously the first and the last value. Use an ETL to add a second row with the value you want to include. You'll need to decide how you want to handle the % change, since the initial value doesn't exist in your case.
-
There are two ways to interact with the Domo APIs: the first is a more object-oriented approach and the second is a URL-based approach. This documentation lists the different types of aggregation you can use in your query request: It's not much, but it should highlight the possibilities for aggregation.
-
The Environment Variable should hold your token.
connect -s mydomain.domo.com -v NAME_OF_ENV_VARIABLE
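A minimal sketch, assuming a bash-style shell (DOMO_TOKEN is just an example variable name):
# store the access token in an environment variable
export DOMO_TOKEN=your-access-token
# then reference the variable by name with -v inside the CLI
connect -s mydomain.domo.com -v DOMO_TOKEN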
-
This would be a great suggestion for the Ideas Exchange as this currently isn't possible.
-
The connect command does allow you to use an environment variable via the -v flag. If you're wanting to pass it along in the java -jar command itself, that's not possible. You'd need to write a wrapper script that fills in the variable values in your CLI script and then has the java command run that script.
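As a sketch of that wrapper approach, assuming a bash shell, a jar named domoUtil.jar, and that your CLI version supports a --script flag (check your version's help output):
#!/bin/bash
# substitute environment variables (e.g. $DOMO_TOKEN) into a script template
envsubst < commands.template > commands.script
# run the generated script through the CLI
java -jar domoUtil.jar --script commands.script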
-
You can't hide the link, but you can stop them from viewing the data by removing the grant in the Admin section. You'll need to use a custom role to do this and then assign the users that custom role. If you want the link to go away completely, I'd recommend logging an idea in the Idea Exchange.
-
You'll need a window function to aggregate an aggregate and a COUNT DISTINCT to count each user only once:
COUNT(DISTINCT CASE WHEN COUNT(`userid`) OVER () > 100 THEN `userid` END)
-
Does the audit fail across all records with the same submission ID or are you looking to have just the Critical/Major portion of the submission marked as a failure?
-
For your first condition, `Critical/Major` can't be both 'Common' and 'Major' at the same time, so it won't ever be set to 'Fail'. It looks like you have two separate conditions, so I'd split the Common and Major logic into two separate clauses:
CASE WHEN `Critical/Major` = 'Common' AND No >= 2 THEN 'Fail' WHEN…
-
Assuming you want a bucket for your applications and not an actual user group here, you can utilize a Magic ETL to enhance your data. With a Formula tile and a CASE statement you can then create your groupings:
CASE WHEN `Application` like 'STARTSWITHFORGROUP1%' THEN 'Group 1' WHEN `Application` like…
-
Page views and card views are tracked separately in the activity log. You'd need to utilize an ETL to join the governance dataset for cards and pages to the page ID in the activity log, and then count each of those page views as a card view in the resulting dataset.
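In rough SQL terms the join would look something like this sketch (activity_log, governance_pages_cards, and all the column names here are hypothetical stand-ins for the actual datasets):
SELECT g.`card_id`, COUNT(*) AS `card_views`
FROM activity_log a
JOIN governance_pages_cards g
  ON a.`object_id` = g.`page_id` -- map each page view to the cards on that page
WHERE a.`object_type` = 'PAGE'
GROUP BY g.`card_id`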
-
Domo Publish is a premium option which can allow you to publish datasets and dashboards to child instances. Another possible option is utilizing some of the commands in the CLI tool to export and import Domo objects.
-
This happened to me today as well.
-
You'll likely need to coalesce your dataset1 zip code and dataset2 zip code into a new field using a formula tile and then use that to join to your dataset3 zip code. This way it will pull in zip codes that don't exist in dataset1:
COALESCE(`dataset1.zip`, `dataset2.zip`)
-
You can use two join tiles with a full outer join to do this. Join tables 1 and 2 together then join the output of that to table 3 via an outer join.
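In SQL terms the result is roughly the sketch below (table and key names are placeholders; note that MySQL doesn't support FULL OUTER JOIN directly, which is part of why the Magic ETL join tiles are the easier route):
SELECT COALESCE(a.`key`, b.`key`, c.`key`) AS `key`, a.`val1`, b.`val2`, c.`val3`
FROM table1 a
FULL OUTER JOIN table2 b ON a.`key` = b.`key`
FULL OUTER JOIN table3 c ON COALESCE(a.`key`, b.`key`) = c.`key`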
-
Another version would be:
LAST_DAY(CURDATE() - INTERVAL MOD(MONTH(CURDATE()), 3) MONTH)
To break it down: MONTH(CURDATE()) returns the month number of the current date (1-12); MOD(.., 3) returns the remainder when dividing the month number by 3, so the last month of the quarter will be 0; INTERVAL .. MONTH subtracts the…
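For example, run on June 15: MOD(6, 3) = 0, nothing is subtracted, and LAST_DAY returns June 30. Run on May 15: MOD(5, 3) = 2, the date steps back two months to March 15, and LAST_DAY returns March 31.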
-
It depends on how you want your data to be structured. You could duplicate the rows so that each email is on its own row, or you could have one record with three separate columns storing the three separate email addresses. In either case you'd need to use a for loop to iterate through the emails property of your data.
-
Have you tried using a beast mode to compare your date field against 5 days ago and count the matching records?
COUNT(CASE WHEN `date_field` < CURRENT_DATE() - INTERVAL 5 DAY THEN `date_field` END)