Comments
-
Open a support ticket. Sorry, I don't have any experience here.
-
@Dharshini, this isn't the answer that you want... but WHY are you displaying the data as pie charts on a map? The data, to me, is illegible. If you had a map and your metric were "% of all schools that are public", you would communicate the exact same information with one number and less ink.
-
Jessica, your language is confusing. It sounds like you have one dataset, ID = abc. Structure your data such that you have both drillable and aggregated data (note 10 + 10 + 30 = 50), and differentiate them using a column, isAggregated. Now John, a lowly analyst, can have access to the rows where isAggregated = 'other' OR…
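A minimal sketch of the structure described above, using the hypothetical numbers from the comment (the column name isAggregated and the 'detail'/'other' labels mirror the comment; adapt them to your own conventions):

```python
# Keep drillable rows and their pre-aggregated total in ONE dataset,
# differentiated by an isAggregated column, so row-level access rules
# can expose only the slice a given user is allowed to see.
rows = [
    {"dataset_id": "abc", "value": 10, "isAggregated": "detail"},
    {"dataset_id": "abc", "value": 10, "isAggregated": "detail"},
    {"dataset_id": "abc", "value": 30, "isAggregated": "detail"},
    {"dataset_id": "abc", "value": 50, "isAggregated": "other"},  # 10+10+30
]

# The detail rows reconcile to the aggregated row.
detail_total = sum(r["value"] for r in rows if r["isAggregated"] == "detail")
aggregate = next(r["value"] for r in rows if r["isAggregated"] == "other")
```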
-
Don't use SUM(DISTINCT); that's too risky. Imagine the forecast value for a set of employees is all 50. SUM(DISTINCT) would assume there was only one employee with a forecast value of 50. The better solution is to reshape your data so that you don't have to rely on DISTINCT to deduplicate rows. Have a set of rows for…
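A quick sketch of the pitfall, with made-up numbers: when every employee happens to forecast the same value, DISTINCT collapses them into one.

```python
# Three employees, each forecasting 50.
forecasts = [50, 50, 50]

# SUM(DISTINCT forecast): duplicates collapse, understating the total.
sum_distinct = sum(set(forecasts))  # 50

# Plain SUM(forecast): the correct total.
true_total = sum(forecasts)         # 150
```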
-
Use the Domo Governance and DomoStats datasets: https://www.domo.com/appstore/connector/domo-governance-datasets-connector/overview . How are you defining 'orphan'? And how would you define "impacting current users"? It sounds like a pedantic question, but there are different answers which will inform your process for…
-
I understand. I'm questioning whether the code you've written would actually return that result. What JSON ARE you getting?
-
.datasetRedirects = {"v1:v2"} — this looks problematic (compared to the rest of the code you're writing). Shouldn't it be .datasetRedirects = { v1: v2 } or something similar?
-
I assume this is what you're talking about. Are you able to get the desired outcome using other embed methods? What code are you writing that does not work? (Caveat: I don't know much about VB.NET, but "it's not working" is a bit hard to troubleshoot :P )
-
@Tow, just fail a few datasets and then run the governance dataset :P But I believe FAILED, ERROR, and maybe SUSPENDED are some of the various states. Alternatively, just filter for NOT SUCCESS.
-
@User_32265, @GrantSmith is giving you solid advice. Typically there are two ways to handle multi-valued columns: 1) you can increase your row count (one row per value), OR 2) you can add columns for isGreen, isBlue, isRed. To avoid showing duplicate records, you could UNION a version of the row (car, truck, bike) with…
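A sketch of the two shapes, using a hypothetical pipe-delimited column (the vehicle/color names are illustrative, not from the original question):

```python
# One source row with a multi-valued column.
row = {"vehicle": "car", "colors": "green|blue"}

# Option 1: one row per value (increases row count).
exploded = [
    {"vehicle": row["vehicle"], "color": c}
    for c in row["colors"].split("|")
]

# Option 2: one binary flag column per known value (row count unchanged).
known = ["green", "blue", "red"]
values = set(row["colors"].split("|"))
flags = {
    "vehicle": row["vehicle"],
    **{f"is{c.capitalize()}": c in values for c in known},
}
```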
-
'switch datasets' ... what do you mean?
-
Recursive dataflows are very expensive and will only get more time-consuming over time. See if UPSERT might not be a better idea.
-
@swagner, are you able to make the filter work with JUST the item filter? Make sure that works first. THEN try creating an array where you pass pfilters=[{branch dictionary}, {item dictionary}]. Notice pfilters is an array (square brackets), not a dictionary.
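A sketch of the shape being described: a LIST of filter dictionaries, one per column. The column names ("Branch", "Item") are hypothetical, and the key names reflect the common Domo pfilters shape — verify against the pfilters documentation for your embed method.

```python
import json

# pfilters is an array (square brackets) of dictionaries (curly braces),
# one dictionary per filtered column.
pfilters = [
    {"column": "Branch", "operand": "IN", "values": ["North"]},
    {"column": "Item", "operand": "IN", "values": ["Widget"]},
]

# Serialized, this is what ends up in the request.
payload = json.dumps(pfilters)
```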
-
Are you doing this in Analyzer or ETL? If you're doing it in Analyzer, I don't expect this function to work once you try to apply aggregation. Typically window functions (like the LAG() function) require two aggregations: one to occur BEFORE the window is applied, at the granularity of the aggregated data, and then the window…
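A sketch of the "aggregate first, then window" order of operations, with made-up monthly figures: roll the rows up to the card's grain, THEN apply LAG over the aggregated values.

```python
from collections import defaultdict

# Raw rows at transaction grain: (month, amount).
rows = [("2021-01", 10), ("2021-01", 5), ("2021-02", 20)]

# First aggregation: SUM at month granularity.
totals = defaultdict(int)
for month, amt in rows:
    totals[month] += amt

# Then the window: LAG(total) over months ordered ascending.
months = sorted(totals)
lagged = {
    m: (totals[months[i - 1]] if i > 0 else None)
    for i, m in enumerate(months)
}
```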
-
It sounds like you need to implement a PARTITION model. In your ETL where you generated 4B rows, when you JOIN historical to new data, your JOIN criteria (as @GrantSmith surmised) is resulting in row growth. In a classic partitioning model, once set up properly, you would identify the sets of dates from the historical…
-
Clean up your SQL and avoid using nested CASE statements; you want your code to read cleanly: CASE WHEN `Date` > '2021-07-01' AND `company` LIKE '%company1%' AND `ConversionTypeName` LIKE '%product1%' THEN 'exclude' ELSE 'include' END. FYI, it is syntactically correct to nest your logic using parentheses, but again…
-
... What have you tried? How do you want to send the data? Have you considered pydomo?
-
Sure, but keep in mind Magic 2.0 is a distributed ETL engine: if it can, it will chunk your job into smaller parts and distribute them. Because it's being distributed, row sort order can't necessarily be guaranteed. The solution I've described doesn't require the rows to return in a specific order. For your header rows you're…
-
@reichner015, here's a YouTube video I posted where I dive at length into window functions (it is a feature you'll have to ask your CSM to enable). I am available for mentoring and consulting services through my company, OnyxReporting.
-
I'm late to the party... but just wanted to put this out there: all three of you are mixing math before and after aggregation, and while it probably works... it isn't best practice and you should be careful of unexpected results. @GrantSmith's solution is pretty solid, except you're conducting your CASE statement after…
-
@Johnson, I personally would try to avoid this aggregated view of the data... but let's set that aside. 1) You need to add a column to your dataset where you define your cohort, e.g. "March Start" or "April Start" (use LAST_DAY() if you're defining cohorts by the month of their first call). 2) Then you need to define a…
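Step 1 can be sketched like this, with hypothetical customers and first-call dates — the idea is to derive a cohort label from the month of each customer's first call, which is what anchoring on LAST_DAY() would give you in SQL:

```python
from datetime import date

# Hypothetical: each customer's first call date.
first_calls = {
    "cust_a": date(2021, 3, 12),
    "cust_b": date(2021, 4, 2),
}

# Cohort label = month of first call, e.g. "March Start".
cohorts = {cust: d.strftime("%B Start") for cust, d in first_calls.items()}
```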
-
@GrantSmith, the RANK() function should work as expected insofar as it behaves like the nested SUM() function, so SUM(SUM(1)) and RANK() should be interchangeable. If you are using Snowflake as your backend, keep in mind that Domo has to 'translate' DQL (Domo Query Language) into the Snowflake SQL equivalent. Not all functions have…
-
@Salmas, what's your use case? What does your data contain, and what are you trying to accomplish? It sounds like you have a multi-valued column and you want to be able to filter on one value in the multi-valued column. You can't do that gracefully in Analyzer. What you could do is add binary columns for the major values…
-
@pl_Luke, it's unclear to me how you're defining the numerator and denominator. It sounds like you want to be able to choose the reasons included in the numerator and then always have it divide by an 'all member', i.e. all activity. If that's the case, then just UNION the data onto itself, but add a column [Report_Reason]…
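A sketch of the UNION-onto-itself pattern, with made-up reasons and counts: every row appears once with its real reason and once tagged 'All', so filtering Report_Reason to one reason still leaves the 'All' member available as a stable denominator.

```python
# Hypothetical activity rows.
rows = [
    {"reason": "Spam", "count": 2},
    {"reason": "Abuse", "count": 3},
]

# UNION the data onto itself: real reason + an 'All' member copy.
unioned = (
    [{"Report_Reason": r["reason"], "count": r["count"]} for r in rows]
    + [{"Report_Reason": "All", "count": r["count"]} for r in rows]
)

# Numerator filters to the chosen reason; denominator is the 'All' member.
spam_total = sum(r["count"] for r in unioned if r["Report_Reason"] == "Spam")
all_total = sum(r["count"] for r in unioned if r["Report_Reason"] == "All")
ratio = spam_total / all_total  # 2 / 5
```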
-
Sounds like an authentication issue in Postgres... but it's unclear to me where you're seeing the error.
-
@HowDoIDomo, I did run a session at the IDEA Exchange about structuring your data warehouse: https://www.youtube.com/watch?v=rS2e2_fv5yk Absolutely, you'll want to start with naming conventions as @MichelleH suggested. From there you might look at using certified datasets to bring people's attention to approved datasets.…
-
@Tow, create an account in Domo whose email is a service account / distribution list, e.g. BI_DistroList@yourcompany.com, then make that account the owner of all your important dataset and dataflow assets.
-
If all you want to do is apply header-row values (in column 1) across all rows of your dataset, then: 1) split your data into header rows and transaction rows (FILTER looking for NULLs); 2) spread the header values (organized in rows) into columns using PIVOT; 3) add the constant 1 to both header_set and transaction_set; 4)…
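Steps 1-3 can be sketched like this, with hypothetical rows (step 4 — presumably a JOIN on the constant key — is truncated in the original comment, so it is only hinted at here):

```python
# Hypothetical dataset: header rows carry a value in col1; transactions don't.
rows = [
    {"col1": "Region: West", "amount": None},  # header row
    {"col1": None, "amount": 10},              # transaction row
    {"col1": None, "amount": 20},              # transaction row
]

# 1) FILTER: split on NULLs in col1.
header_set = [r for r in rows if r["col1"] is not None]
transaction_set = [r for r in rows if r["col1"] is None]

# 2) PIVOT: spread header values (organized in rows) into columns.
header_cols = {f"header_{i}": r["col1"] for i, r in enumerate(header_set)}

# 3) Add the constant 1 to both sets as a join key for step 4.
header_row = {**header_cols, "join_key": 1}
transactions = [{**r, "join_key": 1} for r in transaction_set]
```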
-
CASE WHEN `Activity Date` = MAX(MAX(`Activity Date`)) OVER (PARTITION BY YEARWEEK(`Activity Date`)) THEN 1 ELSE 0 END — @GrantSmith and @tobypenn, from my experience, using calculated columns in window functions will not work as expected in Analyzer. Best I can tell, Domo broke that functionality in the Spring release and…
-
@StefanoG if you think it's a bug, this is something you'd want to send to support.