Comments
-
Aloha!
-
+1
-
No, it's not possible. It's my understanding we would need the Jupyter notebook integration to create a custom environment.
-
I think this does what you want with an R scripting tile:

```r
library('tidyr')
library('dplyr')
# Import the domomagic library into the script.
library('domomagic')

# read data from inputs into a data frame
input1 <- read.dataframe('domo_do_while')

# write your script here
input1 %>% tidyr::gather(
  key = 'name',
  value = …
```
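The snippet above got cut off, so here's a hypothetical completion just to show the shape of a `gather()` call in the scripting tile (the `id` key column and the `'value'` name are illustrative, not the original author's):

```r
library('tidyr')
library('dplyr')
library('domomagic')

input1 <- read.dataframe('domo_do_while')

# Reshape wide columns into long name/value pairs, keeping a
# hypothetical 'id' column as the row key.
output <- input1 %>%
  tidyr::gather(key = 'name', value = 'value', -id)

write.dataframe(output)
```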
-
@jaeW_at_Onyx, here are a few reasons why I export 1m+ rows:

* We don't have Jupyter notebook integration
* No `anti_join()` in Magic ETL or Magic ETL v2 (see the sketch below)
* I can do some exploratory data analysis locally in R faster than I can in Domo
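On the `anti_join()` point, here's a minimal sketch of the kind of check that's easy to do locally in R (the data frames and key column are hypothetical):

```r
library('dplyr')

# Hypothetical exports: today's rows vs. yesterday's
current  <- tibble(id = 1:5)
previous <- tibble(id = 1:3)

# anti_join() keeps the rows of `current` with no match in `previous`
anti_join(current, previous, by = 'id')
#> a tibble containing the rows where id is 4 and 5
```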
-
Check out the Java CLI, link, and its `export-data` function. It grabs the entire dataset as a .csv file. To use the Java CLI, after adding Java to your path, create a script file called `script.txt` like this:

```
connect --token {your_token} --server {your_server}
export-data --id "{ds_id}" --filename "{filename.csv}"
quit
```

From…
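From memory, the next step is to point the CLI jar at that script; the jar name may differ by version, so double-check the CLI docs:

```
java -jar domoUtil.jar -script script.txt
```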
-
This will take care of the extra period:

```r
names(data) <- stringr::str_replace(names(data), '\\.\\.', '.')
```

Is that an issue with the DomoR package?
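A quick reproducible check, with hypothetical column names (note that `str_replace()` only fixes the first `..` in each name; use `str_replace_all()` if a name could contain several):

```r
library('stringr')

# Hypothetical column names with the doubled period
data <- data.frame('Gross..Sales' = 1, 'Net..Sales' = 2, check.names = FALSE)

names(data) <- str_replace(names(data), '\\.\\.', '.')
names(data)
#> [1] "Gross.Sales" "Net.Sales"
```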
-
I use limit and offset to download tables with more than 1mm rows, like this:

```sql
SELECT * FROM table LIMIT 1000000 OFFSET 0;
SELECT * FROM table LIMIT 1000000 OFFSET 1000000;
```

Or you can use the Domo Java CLI Tool's `export-data`…
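If you want to automate the paging from R, here's a rough sketch — `run_query()` is a hypothetical helper standing in for however you execute SQL against the dataset:

```r
library('dplyr')

page_size <- 1000000
offset <- 0
chunks <- list()

repeat {
  sql <- sprintf('SELECT * FROM table LIMIT %d OFFSET %d;', page_size, offset)
  chunk <- run_query(sql)  # hypothetical helper returning a data frame
  if (nrow(chunk) == 0) break
  chunks[[length(chunks) + 1]] <- chunk
  offset <- offset + page_size
}

data <- bind_rows(chunks)
```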
-
@PacoTaco are there any docs explaining how to do that with the Data Copy connector and an ETL with filters?
-
It's possible with a Recursive DataFlow: https://knowledge.domo.com/Prepare/DataFlow_Tips_and_Tricks/Creating_a_Recursive%2F%2FSnapshot_ETL_DataFlow

There's an append ETL option for Redshift that's in beta per this domopalooza2020…
-
At the top of your script try:

```r
library('dplyr')
library('stringr')
library('tm')
```

I don't see NL in the list of installed packages.
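To see exactly what's available inside the tile, you can print the installed package list from the script itself:

```r
# Base R: the name of every package the environment can load
print(rownames(installed.packages()))
```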