Comments
-
Hey @SurfingDataLakes, good to see you again.
-
I would throw in a "create copy" option, so I can make a copy as a starting point for modifications. We still have some Redshift dataflows that I want to convert to Adrenaline. I do not want to start with a new dataflow because the output datasets have hundreds of beast modes. The Redshift dataflow I am thinking of takes a day to…
-
I would like to amend this request. Rather than simply prioritizing recent data, as I requested at Domopalooza 2024, I would like the ability to filter the datasets being brought in. Let me fill that 100k row limit based on attributes in the data. This would make dataflows an invaluable tool for tracking down data anomalies…
-
@ColemenWilson I also learned the SQL editor is the way to test SQL you are passing to the CLI tool. The syntax is not obvious and the error messages are useless. I was trying to pass a "where column is not null" filter to a CLI export and finally found it needs to be "where column != null". At least I think that was it. I had not…
-
How do we make this a suggested change to Domo?
-
@ColemenWilson That is an interesting thought… but I need to join it back 9 times…. SAP Vistex tables are really a pain.
-
Totally agree with this one… I may even have submitted it on another thread.
-
Totally agree… I have a user whose only activity is "exported dataset"… there are no logins at all. What does this mean? I finally realized that "viewed card" just means they got an email and did not actually view the card. We need a lot more detail on the activity logs.
-
Data Sync is a terrible option! It creates a new dataset for every CSV in the folder! I just had to delete over a hundred datasets after testing to see what Data Sync does. It also does not bring the filename in as a column, and I do not see a way to dynamically assemble all the data from all the CSVs into a single dataset…
-
That did not answer how to bring the file name in as a column value.
-
I have found these sub-selects very finicky in Adrenaline. A recent one I am working on will not preview but will run without error. I opened a ticket, so we shall see.
-
@Culper_Jr I just saw this because our old process broke. There was some garbage in the data and it failed to parse the CSV response, throwing a "too many columns" error. That sent me down the rabbit hole of switching our JavaScript process to PowerShell. Well, that ran into the API issue that using the "end" parameter with…
-
Is this a new card of an existing app or a new version of an existing app? I have created new versions by downloading an existing one, deleting the ID in the manifest.json, and then publishing... Creating a new card based on an existing app (had to do it to remember) was as simple as adding a card to a page, picking the custom app…
-
We do not have a lot of scatterplots but I found that I could not get it to filter on the x-axis at all... It would filter on the series if I clicked a data point but would not create a filter for the x-axis. Maybe a feature request, or call it in as a bug?
-
This might help: https://stackoverflow.com/questions/24648130/protocol-error-in-tds-stream-error
-
You can use the "query a dataset" version of the DataSet API. It works great for aggregating data, even from large datasets. Here is a PowerShell example of the query: $query = '{"sql": "select sum(gross_sales_less_returns) from table where fiscal_date = '+"'" +$Yesterday +"'::DATE group by fiscal_date"+'"}' The call then…
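A minimal PowerShell sketch of that query call looks something like this (client ID/secret, dataset ID, and column names are placeholders; the endpoint is the public DataSet API query route):

# Get an OAuth token with the data scope ($clientId/$clientSecret are placeholders)
$basic = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("${clientId}:${clientSecret}"))
$token = (Invoke-RestMethod -Uri 'https://api.domo.com/oauth/token?grant_type=client_credentials&scope=data' -Headers @{ Authorization = "Basic $basic" }).access_token

# Build the SQL body; $Yesterday and the column/table names follow the example above
$Yesterday = (Get-Date).AddDays(-1).ToString('yyyy-MM-dd')
$query = '{"sql": "select sum(gross_sales_less_returns) from table where fiscal_date = ''' + $Yesterday + '''::DATE group by fiscal_date"}'

# POST the query against the dataset; $datasetId is the dataset's ID
$result = Invoke-RestMethod -Method Post -Uri "https://api.domo.com/v1/datasets/query/execute/$datasetId" -Headers @{ Authorization = "Bearer $token" } -ContentType 'application/json' -Body $query
$result.rows  # the aggregated result comes back in the rows property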
-
I use the query API in a number of situations but generally it is to check something like the volume of sales in a dataset for yesterday, so that I know it is a reasonable number and I can send out the reports based on that dataset. I am curious why you are trying to query more than a million rows using the API. If it is…
-
I was thinking it was an issue with the token provided to the account, but I had forgotten that went away in one of the updates. Will it let you re-authorize the user defined in the account? It might also be a certificate issue... you might try telling it to ignore cert issues. Post the answer if you get it working.
-
I scripted it using PowerShell and the API... All you need to add is the access token and the dataset IDs. There are comments in the code from past projects and it is not perfect, but it worked for our purpose. We have thousands of PDP rules and the Excel plugin bogs down. There is also a PDP utility but you have to create a…
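The core of it is just looping the datasets and hitting the policies endpoint; this is a rough sketch with placeholder credentials and a made-up $datasetIds list, not the exact script:

# OAuth token with the data scope (placeholders)
$basic = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("${clientId}:${clientSecret}"))
$token = (Invoke-RestMethod -Uri 'https://api.domo.com/oauth/token?grant_type=client_credentials&scope=data' -Headers @{ Authorization = "Basic $basic" }).access_token
$headers = @{ Authorization = "Bearer $token" }

# $datasetIds is assumed to be an array of dataset IDs you want to audit
$rules = foreach ($id in $datasetIds) {
    $policies = Invoke-RestMethod -Uri "https://api.domo.com/v1/datasets/$id/policies" -Headers $headers
    foreach ($p in $policies) {
        # Flatten each PDP policy to one row per rule
        [pscustomobject]@{ datasetId = $id; policyId = $p.id; name = $p.name; type = $p.type; users = ($p.users -join ';'); groups = ($p.groups -join ';') }
    }
}
$rules | Export-Csv -Path 'pdp_rules.csv' -NoTypeInformation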
-
Redshift datasets can have a sort key added that will help them join faster... Below is some of our code to speed up a dataflow with big datasets (the BSEG customer table has 85 million rows). The biggest impact on run times will be IO moving the data into the Redshift environment. If you can limit the columns you use it…
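To illustrate the idea generically (made-up table and column names, not our actual transform): stage the big input into a table keyed on the join column, keep only the columns you need, and join against that instead.

CREATE TABLE bseg_sorted
DISTKEY (customer_id)
SORTKEY (customer_id)
AS
SELECT customer_id, fiscal_date, gross_sales  -- trim to just the columns the dataflow needs
FROM bseg_customer;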
-
I wrote a C# program a long time ago that exports the dataset metadata and then pushes it back into Domo. I would recommend using something like PowerShell and the APIs to do the same thing. This allows me to create a card on the input datasets for a given dataflow - take sales for example - and with some beast modes I…
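If you go the PowerShell route, a minimal sketch of pulling the dataset metadata with the public DataSet API is something like the below (placeholder credentials; pushing it back into Domo as a dataset would be a separate upload step):

# OAuth token with the data scope (placeholders)
$basic = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("${clientId}:${clientSecret}"))
$token = (Invoke-RestMethod -Uri 'https://api.domo.com/oauth/token?grant_type=client_credentials&scope=data' -Headers @{ Authorization = "Basic $basic" }).access_token

# Page through the dataset list (the API returns at most 50 per call)
$offset = 0
$all = @()
do {
    $page = Invoke-RestMethod -Uri "https://api.domo.com/v1/datasets?limit=50&offset=$offset" -Headers @{ Authorization = "Bearer $token" }
    $all += $page
    $offset += 50
} while ($page.Count -eq 50)

$all | Select-Object id, name, rows, columns, createdAt, updatedAt |
    Export-Csv -Path 'dataset_metadata.csv' -NoTypeInformation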
-
We use the Domo APIs to constantly update our activity log... it is just a dataset we created in our instance. https://api.domo.com/v1/audit?start=1528819530913&end=1528819617330&limit=1000&offset=0 I think I write it out to a CSV, then load it into Domo with Workbench, and I then join that to the main activity dataset in a…
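A bare-bones PowerShell version of that pull is roughly the following (placeholder credentials and date window; the token needs the audit scope):

# OAuth token with the audit scope (placeholders)
$basic = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("${clientId}:${clientSecret}"))
$token = (Invoke-RestMethod -Uri 'https://api.domo.com/oauth/token?grant_type=client_credentials&scope=audit' -Headers @{ Authorization = "Basic $basic" }).access_token

# start/end are epoch milliseconds; this grabs the last 24 hours
$end = [DateTimeOffset]::UtcNow.ToUnixTimeMilliseconds()
$start = $end - 86400000

$events = Invoke-RestMethod -Uri "https://api.domo.com/v1/audit?start=$start&end=$end&limit=1000&offset=0" -Headers @{ Authorization = "Bearer $token" }
$events | Export-Csv -Path 'activity_log.csv' -NoTypeInformation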
-
You can update a user's email with the API. I already had a script that changes termed employees to Social, so I hijacked that script to change a user's email: $Authentication = Get-Authorization -clientID $apiKey -clientSecret $apiToken $headers = @{ Authorization = $Authentication.access_token } $person = @{ email =…
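A generic sketch of the same idea against the public User API (placeholder user ID, email, and credentials; the update generally wants name, email, and role sent together):

# OAuth token with the user scope (placeholders)
$basic = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("${clientId}:${clientSecret}"))
$token = (Invoke-RestMethod -Uri 'https://api.domo.com/oauth/token?grant_type=client_credentials&scope=user' -Headers @{ Authorization = "Basic $basic" }).access_token
$headers = @{ Authorization = "Bearer $token" }

# Pull the current record first so name and role can be sent back unchanged
$user = Invoke-RestMethod -Uri "https://api.domo.com/v1/users/$userId" -Headers $headers

# Send the updated email with the existing name and role
$body = @{ name = $user.name; email = 'new.address@example.com'; role = $user.role } | ConvertTo-Json
Invoke-RestMethod -Method Put -Uri "https://api.domo.com/v1/users/$userId" -Headers $headers -ContentType 'application/json' -Body $body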
-
I do not think I posted this... strange...
-
See the original as it's a complete solution, thanks!
-
I just went to a dashboard, hit the wrench, clicked "save as" and checked "take to the new page" (or something like that). I owned all the cards and they were not linked to the originals. I then deleted the page (from admin) and the original is still there just fine. It seems to work in our instance as described.
-
You could replicate the existing dataset to a new one using the API, and then you should be able to update that with the API or Workbench, or C#... Most of our export-then-import work is export with the API, then import to history with Workbench. Just a thought.
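A rough PowerShell sketch of that copy with the public DataSet API (placeholder IDs and credentials; this only covers the one-time copy, not the ongoing history load):

# OAuth token with the data scope (placeholders)
$basic = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("${clientId}:${clientSecret}"))
$token = (Invoke-RestMethod -Uri 'https://api.domo.com/oauth/token?grant_type=client_credentials&scope=data' -Headers @{ Authorization = "Basic $basic" }).access_token
$headers = @{ Authorization = "Bearer $token" }

# Read the source dataset; this assumes the retrieve response includes the schema
$src = Invoke-RestMethod -Uri "https://api.domo.com/v1/datasets/$sourceId" -Headers $headers

# Create an empty copy with the same name and schema
$newBody = @{ name = "$($src.name) copy"; description = 'API copy'; schema = $src.schema } | ConvertTo-Json -Depth 5
$copy = Invoke-RestMethod -Method Post -Uri 'https://api.domo.com/v1/datasets' -Headers $headers -ContentType 'application/json' -Body $newBody

# Export the source rows as CSV (header off so the import body is just rows) and push them into the copy
$csv = Invoke-RestMethod -Uri "https://api.domo.com/v1/datasets/$sourceId/data?includeHeader=false" -Headers $headers
Invoke-RestMethod -Method Put -Uri "https://api.domo.com/v1/datasets/$($copy.id)/data" -Headers $headers -ContentType 'text/csv' -Body $csv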
-
I had one and flipped it to Redshift. There was a regex expression (probably why it was not Redshift to begin with) so I had to convert that to Redshift syntax. To me, much easier than "re-writing" in Magic ETL...
-
"We use AWS best practices for sending our emails through Amazon SES (Simple Email Service). We use DKIM (DomainKeys Identified Mail) to sign our messages and use SPF to authenticate our domain. Although amazonses.com sends the email, the SPF record shows that they are authorized to send domo.com emails." This seems like…
-
We update our activity log dataset with the API, and I checked and I do see mobile card views:
VIEWEDPAGE 8/13/2018 5:05:57 AM mobile
VIEWEDCARD 10/31/2017 2:51:01 AM mobile
VIEWEDCARD 11/11/2017 3:12:23 PM mobile
SHAREDCARD 7/19/2018 12:14:43 PM mobile