Comments
-
Just FYI @DP_DynamicData , the documentation has been updated since 2019, https://knowledge.domo.com/Connect/Connecting_to_Data_with_Connectors/Configuring_Each_Connector/Connectors_for_File_Retrieval/JSON_No_Code_OAuth_Connector, to include the URL @Ben_Dunlap documented in the comments.
-
@user041053 use regex to identify whether a set of characters is formatted as a date, then use INSTR or SUBSTRING to extract that set of characters, then use STR_TO_DATE to convert the extracted value to a date.
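A minimal sketch of that regex-then-extract-then-convert chain, assuming a hypothetical `raw_text` column in a hypothetical `your_table`, and that any embedded date looks like MM/DD/YYYY with the first '/' in the string belonging to it:
SELECT
  CASE
    WHEN `raw_text` REGEXP '[0-9]{2}/[0-9]{2}/[0-9]{4}'
      -- grab the 10-character MM/DD/YYYY chunk anchored on the first '/' and convert it
      THEN STR_TO_DATE(SUBSTRING(`raw_text`, INSTR(`raw_text`, '/') - 2, 10), '%m/%d/%Y')
    ELSE NULL
  END AS `parsed_date`
FROM `your_table`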
-
@user069416 , you're mixing issues. 1) Use PDP to determine what data people can see. 2) Sharing is sharing. Assume that eventually, EVERYONE will share EVERYTHING, and therefore everyone will see everything that doesn't have PDP policies applied. Therefore, to secure your data, focus on limiting what people see via PDP.…
-
@MarkSnodgrass is pointing out that an empty string ('') <> NULL, hence his test: TRIM(ID) = '' OR ID IS NULL. HOWEVER, SUM(... 1 ELSE 0 END) will give you COUNT, not COUNT(DISTINCT). @ekola , you should validate whether COUNT DISTINCT is including NULLs or not, because standard behavior in many SQL implementations is to…
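To illustrate the difference, a hedged sketch assuming the same `ID` column (in MySQL, COUNT(DISTINCT ...) ignores NULLs):
-- counts every row with a populated ID, duplicates included
SUM(CASE WHEN TRIM(`ID`) = '' OR `ID` IS NULL THEN 0 ELSE 1 END) AS `populated_rows`,
-- counts unique non-blank IDs; blanks and NULLs fall out because the CASE returns NULL for them
COUNT(DISTINCT CASE WHEN TRIM(`ID`) <> '' THEN `ID` END) AS `distinct_ids`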
-
Given the math you've put on the screen, there is no reason to run this ETL as a SQL transform. Just keep your data unaggregated. Be careful with Sum(`Early to Lost`+`Early to Won`+`Early to Late`) AS 'Total Low to High': for any row, if any column contains the value NULL, the entire result will be NULL. This is the…
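One common guard, sketched with the same column names, is to wrap each column in COALESCE so a single NULL doesn't blank out the whole total:
SUM(COALESCE(`Early to Lost`, 0) + COALESCE(`Early to Won`, 0) + COALESCE(`Early to Late`, 0)) AS 'Total Low to High'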
-
MySQL 5.6 does not support CTEs; this approach works.
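For reference, the usual MySQL 5.6 workaround is to inline the CTE as a derived table; a generic sketch with a hypothetical `orders` table:
-- WITH recent AS (SELECT * FROM `orders` WHERE `order_date` >= '2021-01-01')
-- becomes a derived table in the FROM clause:
SELECT `customer_id`, COUNT(*) AS `order_count`
FROM (SELECT * FROM `orders` WHERE `order_date` >= '2021-01-01') recent
GROUP BY `customer_id`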
-
Sure. Just store the query as a table, and CROSS APPLY it to your table. Not sexy, but gets the job done. select * from `last_date_of_spent_data` or SELECT * FROM table , ( select max(date) as LastDate from `last_date_of_spent_data`) ld
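A slightly fuller sketch of that cross join, assuming a hypothetical `spend` table you want to stamp with the latest date:
SELECT s.*, ld.`LastDate`
FROM `spend` s,
     (SELECT MAX(`date`) AS `LastDate` FROM `last_date_of_spent_data`) ld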
-
@HowDoIDomo You want this video, https://www.youtube.com/watch?v=CDKNOmKClms&t=325s, to build a period-over-period analysis without using the POP chart type.
-
@user09644 you could use list-dataflows and output to a CSV, then use any other scripting tool (Python, for example) to continue the automation process. @MarkSnodgrass is correct, the CLI is mostly a wrapper for interacting with the APIs; the querying and automation part is the next mile, which you'll pick up with Python,…
-
@DataSquirrel Domo spins up a VM (or similar) to process MySQL ETLs. They've known they're behind for YEARS. I do not…
-
@Gor_Gonzalez , choose a file, then update the configuration manually.
-
If the dates are always at the end, instead of using a SPLIT, calculate from the back (using RIGHT and SUBSTRING). Alternatively, if you're confident the dates are always the last value, then use COALESCE() starting from the last column: coalesce(`flight date 1`, `global, client`, `product program`). Recall, coalesce will…
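A sketch of the calculate-from-the-back approach, assuming a hypothetical `raw_field` column that always ends with a date in YYYY-MM-DD form:
-- take the last 10 characters and parse them as a date
STR_TO_DATE(RIGHT(`raw_field`, 10), '%Y-%m-%d') AS `extracted_date`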
-
@user06979 this is the kind of thing you have to do in ETL or a Dataset View. You cannot complete this requirement in Analyzer. The gut-check response might be to use a window function, but you can't filter on aggregates without a beta feature switch, AND with recent changes to Domo, even that won't work. You could…
-
@PWeed Domo has customers successfully using Azure AD. To your second question, when you resync your AD groups, are you saying that it's not syncing? I would imagine the sync command should create and remove groups in Domo to match AD. SSO is a bit of a niche area; I recommend you liaise with support@domo.com for fastest…
-
@HowDoIDomo , consider using a Bullet Chart if you don't want an actual line overlaid on a bar chart. There is a Period Over Period chart type to allow you to compare to the previous year of data. I would be wary of the Regression b/c as soon as people start adding filters the results get misinterpreted (b/c it's limited…
-
@serendipity , technically, @GrantSmith and @ST_-Superman-_ are correct, but you do still have options / hacks. 1) build your initial card on a dataset view of aggregated data. apply PDP policies as appropriate. 2) when you create a drill path, you can reference the original dataset, OR point to a completely different…
-
@user048760 , the short answer is "a card can only be powered by data from ONE dataset". In short, no you cannot create a card that spans two datasets. What @Lady and @GrantSmith are suggesting is that you combine your two datasets into one dataset. Be careful, instinctively, most people will try to JOIN their data…
-
@gbuckley , Domo does not support parameterization the way you're thinking. You can recreate the metric you're looking for by altering your data model. APPEND a copy of your data to the dataset, and in the second set, replace Fruit w/ "All-Fruits", then your users can filter on "Apple" OR "All-Fruits" and with that slice…
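A sketch of that append as SQL, assuming a hypothetical `sales` table with `Fruit` and `Amount` columns (in Magic ETL it would be an Append Rows tile plus a constant-value column):
SELECT `Fruit`, `Amount` FROM `sales`
UNION ALL
SELECT 'All-Fruits' AS `Fruit`, `Amount` FROM `sales`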
-
@user082291 , convert your card into a table card and confirm that it still retrieves values. Also, are you confident that your EndDate and StartDate never contain nulls? If they do contain nulls, can you confirm that the math still works as desired? ROUND(AVG(DATEDIFF(`Investigation End Date`,`Investigation Start…
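For that null check, a hedged sketch assuming the truncated column is `Investigation Start Date`: AVG() silently ignores rows where DATEDIFF() returns NULL, so counting those rows separately shows how much data drops out of the average:
ROUND(AVG(DATEDIFF(`Investigation End Date`, `Investigation Start Date`)), 1) AS `avg_days`,
SUM(CASE WHEN `Investigation End Date` IS NULL OR `Investigation Start Date` IS NULL THEN 1 ELSE 0 END) AS `rows_missing_dates`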
-
@user15776 , it is possible to have an environment spun up in MagicETL with additional packages included. You would have to coordinate with Domo Support and your CSM. Also, if you have the option, try to go with the Jupyter Notebook Integration (or get it added as a trial); you can use pip install to install your own packages…
-
@andres it's kind of a binary thing, right? Either you drill OR you filter. So if you want filtering, maybe make a second visualization / card that handles state-based filtering. This could be a Bar Chart or a Checkbox, Radio box, etc.
-
lag(sum(case when `control or target` = 'target' then `vol_usd` else 0 end)) over (partition by year(`trade_date`)*100 + month(`trade_date`))
I think Domo broke / changed functionality. You can no longer use an expression in the partition or order by clause -- year(`trade_date`)*100 + month(`trade_date`). Add this to your…
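A sketch of that workaround, assuming you materialize a hypothetical `trade_month_key` column upstream (e.g. in Magic ETL) holding year(`trade_date`)*100 + month(`trade_date`), and then partition on the plain column:
lag(sum(case when `control or target` = 'target' then `vol_usd` else 0 end)) over (partition by `trade_month_key`)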
-
@MikeRoberts and @GrantSmith it makes sense that rand() would be consistent in Analyzer. You're sending a query to a database that was designed to process multiple billions of rows of data at a time. Can you imagine if you asked the query to process each row and generate a new rand() value? Oof. What you might consider…
-
@user09644 , @GrantSmith is correct, Jupyter Notebooks cannot be integrated into an automated pipeline by design. The tool for pipeline automation would be the MagicETL scripting tiles! https://www.youtube.com/watch?v=lhj9zcwai98
-
@leonardschlemm , pretty sure that's what the IPs @MarkSnodgrass linked are giving you...
-
Instead of JOINing the data as @Mark proposes, I would APPEND the data to itself, then replace the "TruckID" value with "Truck_Total". This way your dataset will still respond to filters and you don't have to do weird math. sum(case when truck_id <> 'truck_total' then amount end) / sum(case when truck_id = 'truck_total'…
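A sketch of that append as SQL, assuming a hypothetical `deliveries` table with `TruckID` and `Amount` columns; each original row is duplicated with its TruckID replaced by the constant 'Truck_Total':
SELECT `TruckID`, `Amount` FROM `deliveries`
UNION ALL
SELECT 'Truck_Total' AS `TruckID`, `Amount` FROM `deliveries`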
-
@Gor_Gonzalez , you can set Workbench to use wildcards (*) for the file name, so you could do my_file*.xlsx
-
@user013818 I'm still not sure I understand your use case... but it sounds like you need to generate a dataset where, for each id, you identify all the possible values for each of the four variables and then one-hot-encode them into a matrix, like the example below, except instead of just having one column (color) you can have…
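A hedged sketch of that one-hot encoding for a single hypothetical `color` column per `id` in a hypothetical `your_table` (the same pattern repeats for each of the four variables):
SELECT `id`,
       MAX(CASE WHEN `color` = 'red'   THEN 1 ELSE 0 END) AS `color_red`,
       MAX(CASE WHEN `color` = 'blue'  THEN 1 ELSE 0 END) AS `color_blue`,
       MAX(CASE WHEN `color` = 'green' THEN 1 ELSE 0 END) AS `color_green`
FROM `your_table`
GROUP BY `id`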
-
@swagner if that information is not included in DomoStats_ActivityLog, I think that would be a feature request / product feedback. EDIT: OOOOOH, except I think either DomoStats_Person or DomoGov has the user profile URL... so if you're taking a recursive snapshot each day, you can tell if the value changed from yesterday.
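A sketch of that day-over-day comparison, assuming a hypothetical recursive snapshot dataset with `user_id`, `profile_url`, and `snapshot_date` columns:
SELECT today.`user_id`
FROM `person_snapshot` today
JOIN `person_snapshot` yesterday
  ON today.`user_id` = yesterday.`user_id`
 AND yesterday.`snapshot_date` = DATE_SUB(today.`snapshot_date`, INTERVAL 1 DAY)
WHERE today.`profile_url` <> yesterday.`profile_url`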
-
@user006645 ((sum(`Promoter Count`) - sum(`Detractor Count`)) / sum(`NPS Count`))*100
@GrantSmith is correct, it looks like you ARE aggregating across columns, i.e. "first calculate the sum of promoter count and the sum of detractor count and then subtract them" etc. So your syntax looks good. To troubleshoot, break each aggregate…
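One way to do that troubleshooting, sketched with the same columns, is to surface each aggregate as its own field so the intermediate values are visible:
sum(`Promoter Count`) AS `promoters`,
sum(`Detractor Count`) AS `detractors`,
sum(`NPS Count`) AS `responses`,
((sum(`Promoter Count`) - sum(`Detractor Count`)) / sum(`NPS Count`)) * 100 AS `nps`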