Best Of
Re: Show Top 10 in Chart (Rank in Beast Mode)
Looking around the Dojo, this feature has been requested in the past but has not been implemented yet. It may be worth submitting a new idea in the Ideas Exchange section of the Dojo to see if it can get included.
There is a Beast Mode feature called Window Functions that you can ask your CSM to enable, which allows rank and other window functions to be written. However, it still won't get you quite there. I was able to write the following in a Beast Mode to produce a rank:
RANK() OVER (PARTITION BY `dateofevent` ORDER BY COUNT(`claimnumber`) DESC)
However, when I tried to wrap it in a CASE statement so that I could drop it into a filter and exclude records with a rank higher than 10, I received a message that aggregated filters are not supported. The CASE statement looked like this:
CASE WHEN RANK() OVER (PARTITION BY `dateofevent` ORDER BY COUNT(`claimnumber`) DESC) <= 10 THEN 'Include' ELSE 'Exclude' END
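Outside of Beast Mode, the same count-then-rank-then-filter logic is straightforward, for example in a Python/pandas preprocessing step. A minimal sketch, using hypothetical sample data that mirrors the column names in the example above:

```python
import pandas as pd

# Hypothetical sample data: one row per claim, keyed by event date
df = pd.DataFrame({
    "dateofevent": ["2019-01-01"] * 3 + ["2019-01-02"] * 2,
    "claimnumber": ["A", "A", "B", "C", "C"],
})

# COUNT(`claimnumber`) per (dateofevent, claimnumber)
counts = (df.groupby(["dateofevent", "claimnumber"])
            .size().reset_index(name="claim_count"))

# RANK() OVER (PARTITION BY dateofevent ORDER BY claim_count DESC)
counts["rank"] = (counts.groupby("dateofevent")["claim_count"]
                        .rank(method="min", ascending=False))

# The filter Beast Mode won't allow: keep only the top 10 per date
top10 = counts[counts["rank"] <= 10]
```

Doing this upstream of the card sidesteps the "aggregated filters are not supported" restriction, at the cost of fixing the time grain in the dataflow rather than the card.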
Since you stated that you can't do this in the ETL because you need to allow for different time periods to be used, you might need to consider a different way to visualize the data. A scatter plot card can hold a lot of series fairly effectively.
Re: Single Value Card - No numeric abbreviations
In the General Chart Properties, for the Divide Value by property, set it to None.
Re: Avoid Beast Mode Duplication in Bulk Switching cards
If your new dataset doesn't have any Beast Modes built off it, then there won't be any duplication, because there is nothing to duplicate. If you do have a Beast Mode with the same name on both datasets, it will rename one by adding something like "from dataset...." to the end of the Beast Mode name. It will also alert you when you go into Analyzer and tell you there are Beast Modes to rename.
I would encourage you to use the Beast Mode Manager, which is in the Data Center in your instance, and make use of some of the tools available there. This can help with a lot of the cleanup. Here's a link to the KB article about it.
Best Practice: Filtering Favorites page (Participant Users)
In the January 2019 feature release notes, Favorites page filtering was added, but it was not available to Participant users: they could open the page filter pane but could not add filters. Since coming back from Domopalooza, I have been showing some of my users new features, including the new "Interaction Filters" and Filter cards.
I was so proud today: a user made a connection between the new features and asked me, "Couldn't you add a filter card for my district? I could then add that to my Favorites, and then use the Interaction Filters to filter my Favorites page?" WOW! Awesome! Below is a video I put together to make this available to Participant users in my company's instance of Domo.
Re: Timestamp Convention in JSON File from Command Line DataFlow Request
Hi @walker_page
These are Unix timestamps. Typically they are just a simple numerical representation of the seconds since Jan 1st, 1970; in this case, however, they are Unix timestamps in milliseconds instead of seconds (just divide the number by 1,000 to get the number of seconds). Most languages have tools to convert these easily; it just depends on the flavor of language you're using. There are also online tools to help you convert these to a standard format, like https://www.unixtimestamp.com
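As a quick sketch of the divide-by-1,000 step, here is how you would convert one of these millisecond timestamps in Python (the sample value is made up for illustration):

```python
from datetime import datetime, timezone

# A hypothetical Unix timestamp in milliseconds, as seen in the JSON file
ms_timestamp = 1555000000000

# Divide by 1,000 to get seconds, then convert to a UTC datetime
dt = datetime.fromtimestamp(ms_timestamp / 1000, tz=timezone.utc)
print(dt.isoformat())  # 2019-04-11T16:26:40+00:00
```

The same pattern applies in most languages: scale milliseconds down to seconds, then hand the value to the standard epoch-to-datetime converter.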
Re: Option to Remove the 10,000 Duplicate Error for ETL
The super secret workaround we all use...
Right outer join instead of left.
This keeps people who don't know what they are doing from breaking stuff, but enables the rest of us to go on blissfully joining.
It'll seem awkward at first, but in a bit, you won't even have to think about it.
You want to set it up to pull all records from the table on the right, and only those that match from the left. It's a simple reverse of what you're used to.
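To see why the swap is harmless, here is the equivalence sketched in pandas with made-up item and transaction tables (table and column names are hypothetical): swapping the inputs and flipping the join direction produces the same rows.

```python
import pandas as pd

# Hypothetical item and transaction tables
items = pd.DataFrame({"item_id": [1, 2], "name": ["widget", "gadget"]})
txns = pd.DataFrame({"txn_id": [10, 11, 12], "item_id": [1, 1, 3]})

# Intended: LEFT JOIN with transactions on the left (keep every transaction)
left = txns.merge(items, on="item_id", how="left")

# Workaround: swap the inputs and RIGHT JOIN, still keeping every transaction
right = items.merge(txns, on="item_id", how="right")

# Both results contain exactly one row per transaction,
# with item details where a match exists and nulls where it doesn't.
```

The only thing that changes is which side the duplicate-heavy table sits on, which is exactly what the 10,000-duplicate check cares about.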
Option to Remove the 10,000 Duplicate Error for ETL
Because of how fast and easy ETL flows are, I prefer to use them for most flows. My problem is that sometimes I hit the following error: "Error joining data. The left input cannot include over 10,000 duplicates. Please switch your inputs, group your data, or remove duplicates before joining". The Dojo suggests that this error is only shown to stop users from accidentally making Cartesian joins, but I know that's not what I'm doing. For example, if I want to join an item table onto a transaction table, I have to do it in SQL or I'll get the error. This leads to my having multiple flows for the same end result.
I suggest you give admins the option to disable the error.
Re: Entering or Updating data into DOMO visuals Directly
The Domo Webform is a very quick way to enter some data and test with it. You can find it by going to the Data Center, clicking on Cloud App, and then entering "webform".
Re: is it possible to add a constant in front of values in a column by using a beastmode calc?
Hi @user048760
What you're looking for is the CONCAT function. It allows you to add multiple columns together with constants to create a single value.
If you're looking for just the URL (which you won't be able to click on; it would only be displayed):
CONCAT('https://xxxxxxxxxx.sugarondemand.com/#Calls/', `ID`)
Alternatively, you can also utilize some HTML to convert your URL into a clickable link.
CONCAT('<A HREF="https://xxxxxxxxxx.sugarondemand.com/#Calls/', `ID`, '" target="_blank">', 'Clickable Text Here Or Use a Column Instead', '</A>')
The `target="_blank"` attribute tells the HTML to open your link in a new window. If you want it to open in the same window, just exclude that section.
The HTML link will only work in the table type card and doesn't work in other cards at this time.
Re: Error with size field on SQL
Hi @user025461
This is because MySQL 5.6 (the version Domo is built on) doesn't support column names longer than 64 characters. You have a few options to work around this issue:
1) Use a DataSet View (or a Magic ETL) to rename your column to something shorter than 64 characters, and then use that dataset inside your MySQL dataflow.
2) (Better option) Convert your MySQL dataflow to a Magic ETL 2.0 dataflow, which will be much more performant and supports longer column names.