Comments
-
NP. If you're looking for more Domo training or upskilling, I post tutorials and content here: https://datacrew.circle.so/home
-
@user008875 if you're trying to group orders by number of items, e.g. orders with 5 or more items, orders with 2 items, orders with 1, etc., then you definitely can't take the approach you're taking. you need to create a column that counts the number of items per order. you'll have to materialize it into your dataset (do it…
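A minimal sketch of the "materialize a count column" idea in plain Python, assuming the input is one row per order line item (the function name and bucket labels are illustrative, not Domo features):

```python
from collections import Counter

def bucket_orders(order_lines):
    """Count items per order, then bucket each order by that count --
    the same two-step logic you'd materialize in ETL.
    order_lines: list of order ids, one entry per line item."""
    items_per_order = Counter(order_lines)           # order id -> item count
    buckets = Counter()
    for order_id, n_items in items_per_order.items():
        label = "5+" if n_items >= 5 else str(n_items)
        buckets[label] += 1                          # orders per bucket
    return dict(buckets)
```

For example, `bucket_orders(["A", "A", "A", "A", "A", "B", "B", "C"])` puts order A in the "5+" bucket, B in "2", and C in "1".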
-
@cassandra_cook1 what do you mean by "switches to a dataset?" one generally does not simply change the dataset that powers a card. in any case, if a filter is set up without specifying the dataset, then it will apply that filter to all cards on the page whose datasets include the column name on which the filter was…
-
this is for the community slack group. this is for the semi-official Domo slack group.
-
Value Mapper only operates on TEXT. So after you finish your value mapping you'd need to use a tile to change the data type to integer.
-
oof... don't do it! :P I wouldn't code this into ETL. use the LAG function in Analyzer: https://www.youtube.com/watch?v=cnc6gMKZ9R8&t=1s UNLESS you have gaps in your data, in which case you might want to consider the route of creating the 'universe of possible values' as outlined here:…
-
you can hack Domo ETLs and make a dataflow APPEND instead of Replace.
-
cumulative backlog is just a window function: sum(sum(1)) over (order by date). the trick is knowing whether the ticket is still open on a given date: case when date <= closeDate then 1 else 0 end. of course this only works assuming you have a ticket on every single day, or you might end up with gaps in your data.…
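The open-ticket logic above can be sketched in plain Python. This iterates over an explicit date spine (which sidesteps the gap problem the comment warns about); the data shape is an assumption, not a Domo API:

```python
from datetime import date, timedelta

def cumulative_backlog(tickets, start, end):
    """tickets: list of (open_date, close_date) pairs; close_date may be None.
    For each calendar day, count tickets open on that day -- the Python
    equivalent of SUM(CASE WHEN date <= closeDate THEN 1 ELSE 0 END)
    evaluated over an explicit date spine, so missing days can't create gaps."""
    backlog = {}
    day = start
    while day <= end:
        backlog[day] = sum(
            1 for opened, closed in tickets
            if opened <= day and (closed is None or day <= closed)
        )
        day += timedelta(days=1)
    return backlog
```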
-
this can be automated via APIs. use the DatasetAPI or QueryAPI to find out it's age. the tough part will be if you want to retire rows of data (as opposed to entire datasets). you'll need a recursive query with a partitioning model to drop old data. also in the java cli there are functions to assist with setting the…
-
@DKCloud9 if you're not going to use PyDomo, I'd recommend forming your query in Postman. also, I don't know what JSON it's expecting, but keep in mind the API expects a string, not a dictionary.
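The string-vs-dictionary point is just a serialization step. A tiny sketch (the payload keys are hypothetical, not a documented Domo request body):

```python
import json

# Hypothetical request body -- the point is that the API wants a JSON
# *string* in the request, not a Python dictionary object.
payload = {"sql": "SELECT * FROM table"}
body = json.dumps(payload)  # serialize before sending

# body is now a str; json.loads(body) round-trips back to the dict.
```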
-
think like a web dev. assume your table card must contain a div tag. <div> text content <a href> ... </a> </div> so now just concat in your CASE statement inside the DIV tag.
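A sketch of that concatenation in Python (in a beast mode you'd do the same with CONCAT inside the CASE branch; the function name and link text here are made up):

```python
def link_cell(text, url):
    """Build a table-card cell the way a CASE ... CONCAT would:
    wrap the text content and an anchor tag inside a div."""
    return f'<div> {text} <a href="{url}">open</a> </div>'
```

For example, `link_cell("order 42", "https://example.com")` yields a div containing the text and a clickable anchor.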
-
this question was the inspiration for a tutorial: to just get datasets owned by a specific user, just hit that query API with the body {"entities":["DATASET"],"filters":[{"filterType":"term","field":"owned_by_id","value":"1527841788:USER","name":"George…
-
if you don't know how many _ are in your string, a cleaner method than using the split column tile might be to: 1) count the number of _ in your data: length(string) - length(replace(string, '_', '')) 2) duplicate the row <number_of_occurrences> times 3) keep everything between the (n-1)th and nth occurrence of the _. you can do this…
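Steps 1 and 3 translate directly to Python; a sketch (helper names are mine, and this 1-based indexing convention is one reasonable reading of "between the (n-1)th and nth underscore"):

```python
def count_underscores(s):
    """SQL equivalent: length(s) - length(replace(s, '_', ''))."""
    return len(s) - len(s.replace("_", ""))

def nth_segment(s, n):
    """Keep the text between the (n-1)th and nth underscore, 1-based,
    where n=1 is the text before the first underscore."""
    return s.split("_")[n - 1]
```

For example, `count_underscores("a_b_c")` is 2, and `nth_segment("a_b_c", 2)` is `"b"`.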
-
you can do a cumulative sum using Rank & Window: SUM(SUM(`Amount`)) OVER (PARTITION BY YEAR(`Date`) ORDER BY `Date`). but at the moment you cannot do a subtotal in Rank & Window. to do the equivalent you'd do a GROUP BY and then JOIN the data back to the original: SUM(SUM(`Amount`)) OVER (PARTITION BY YEAR(`Date`))
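The per-year running sum can be sketched in plain Python to show what the window function computes (assuming rows are already sorted by date, as ORDER BY would guarantee):

```python
from collections import defaultdict
from datetime import date

def running_sum_by_year(rows):
    """rows: list of (date, amount) tuples, assumed sorted by date.
    Python equivalent of
    SUM(SUM(`Amount`)) OVER (PARTITION BY YEAR(`Date`) ORDER BY `Date`):
    the running total resets each calendar year."""
    totals = defaultdict(float)
    out = []
    for d, amount in rows:
        totals[d.year] += amount            # accumulate within the year partition
        out.append((d, totals[d.year]))
    return out
```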
-
@User_32265 if you use @GrantSmith 's window function you will only keep rows on that max() date. example. if you had a report where every day you sent the customer table, you could ask for "the most recent version of the customer table" and guarantee that you'd see every customer. but if your question is, "i only want to…
-
what code are you writing? SUM(CASE WHEN ... then AMOUNT END) should work fine
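The conditional-sum pattern in plain Python, for anyone unfamiliar with the SUM(CASE WHEN ...) idiom (the row shape here is an assumption):

```python
def sum_where(rows, predicate):
    """SUM(CASE WHEN <condition> THEN amount END): sum amount only for
    rows matching the condition; non-matching rows contribute nothing,
    just as the CASE's implicit ELSE NULL is ignored by SUM."""
    return sum(amount for key, amount in rows if predicate(key))
```

For example, `sum_where([("A", 10), ("B", 5), ("A", 2)], lambda k: k == "A")` sums only the "A" rows.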
-
@GrantSmith , but @MarkSnodgrass 's solution is cleaner. while building the universe of SKUs and Dates is an option, all that facilitates is building an (expensive) window function. a date dimension with a lag_from_current_date (0, 1, 2, 3, 4, 5) would be a much cleaner way of reporting on a 30 day rolling average. even with an…
-
@MarkSnodgrass given the single quotes, 'Marketing' would be interpreted as the text Marketing, even if there was a column called `Marketing` -- note the backticks, `.
-
@MarkSnodgrass you can't SUM the text 'Marketing'. @henryfact you can COUNT the text ... but that doesn't make sense: your count would be equal to the number of rows in the dataset. -- leadCategory CASE WHEN `Lead Source` IN ('Event','Display','Direct') THEN 'Marketing' WHEN `Lead Source` IN ('LeadIQ','Prospecting') THEN…
-
@WizardOz DSVs crash on me sporadically too and there doesn't appear to be a rhyme or reason as to why. Best I can recommend is keep your views as simple as possible and do as much as possible in ETL :(
-
user000051 Most Excel functions have a 1-to-1 SQL variant. For example, the TODAY() function in Excel is the same as the CURRENT_DATE() function in MySQL. Instead of IF() statements in Excel you'll write a CASE statement in MySQL. On most days of the week you can just google an abstracted version of your question and end it with…
-
if i had to solve this problem, i would ask myself: 1) can i find the position of the text "Numbers: " 2) can i find the first newline character after the position of the text "Numbers: " 3) can i keep the stuff in between?
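Those three steps can be sketched directly in Python (the function name is mine; in a beast mode you'd use INSTR/SUBSTRING the same way):

```python
def extract_numbers(text):
    """1) find 'Numbers: '  2) find the first newline after it
    3) keep what's in between."""
    marker = "Numbers: "
    start = text.find(marker)
    if start == -1:
        return None                      # marker not present
    start += len(marker)
    end = text.find("\n", start)
    return text[start:] if end == -1 else text[start:end]
```

For example, `extract_numbers("id: 7\nNumbers: 1, 2, 3\nend")` returns `"1, 2, 3"`.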
-
@danidiaz 1) no, primary keys have little to no impact on analyzer's performance. 2) if you can implement dataset views in analyzer you could try using a webform (or some other dataset stored in Domo) as a Dimensional table / Surrogate Key solution, where you JOIN in the attributes after it's pulled into Domo. Keep in mind…
-
@amehdad21 ... test the code in your linked post. i don't believe it's correct b/c without a window function that average would just calculate within the row of data you're looking at. if the user wants the average across all visible months they would have to use window functions (a feature switch support / your CSM can…
-
@Ritwik you can implement a max(max(amount)) over (order by date asc) in a dataset view and then use that as an input in Magic 2.0. You could even create that max_view and then JOIN raw_data in a second DSV for maximum performance. Don't do it in a SQL ETL -- waste of time.
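What that window function computes is a running maximum; a plain-Python sketch (assuming rows arrive sorted by date, as the ORDER BY would guarantee):

```python
def running_max(rows):
    """rows: list of (date, amount), assumed sorted by date ascending.
    Python equivalent of MAX(MAX(amount)) OVER (ORDER BY date ASC):
    each row carries the largest amount seen so far."""
    best = float("-inf")
    out = []
    for d, amount in rows:
        best = max(best, amount)
        out.append((d, best))
    return out
```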
-
WHOOP!!! I'll just leave this here. and maybe this https://datacrew.circle.so/c/developerdomocom-community-edition/a-comprehensive-list-of-domo-apis
-
@Salmas what you CAN do is write: case when value = 1 then 'between 0 and 1' when value = 2 then 'between 1 and 2' ... end. this will work fine because the CASE statement is evaluated BEFORE aggregation. what you CANNOT do is write: case when sum(amount) between 0 and 1 then ... because the case statement is evaluated AFTER…
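The "CASE before aggregation" order can be sketched in Python: label each row first, then aggregate per label. The labels follow the example above; the function name is mine:

```python
def bucket_then_sum(rows):
    """rows: list of (value, amount) pairs. Label each row first
    (the CASE, evaluated per row BEFORE aggregation), then sum
    amounts per label (the aggregation)."""
    buckets = {}
    for value, amount in rows:
        if value == 1:
            label = "between 0 and 1"
        elif value == 2:
            label = "between 1 and 2"
        else:
            continue  # no ELSE branch in the CASE: row drops out
        buckets[label] = buckets.get(label, 0) + amount
    return buckets
```

The reverse order (a CASE wrapped around sum(amount)) would only see the single already-aggregated value, which is exactly why it can't be used to bucket rows.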
-
refresh your browser. if you run an ETL that alters the schema of a dataset, sometimes the browser doesn't know that a column no longer exists until you refresh the page.
-
@DataSquirrel from the CTO and one of the leads on the engineering team.
-
@DataSquirrel , whereas Grant is assuming you're ingesting the data via Workbench, I'm assuming you're using the web connector library. I don't know off the top of my head, but if I had to investigate this problem, these are the things I'd do: 1) check PyDomo to see if you can trigger via Python 2) check the java cli to see if you can execute…