Streams API - benefits?
Are there any speed benefits to using the Streams API for uploading data?
For example, if you wanted to upload a billion rows as quickly as possible.
I've written a Python script to upload data via the Streams API, and it doesn't seem much faster than Workbench. I've even tried writing the script in an asynchronous manner, hoping that the CSV files / "parts" would upload concurrently and much faster.
After all this, it seems to be about the same upload speed as using Workbench.
Can anyone clarify the use cases for this API? Is it faster?
Thanks,
Seth
Comments
-
I too am trying to upload rows as quickly as possible. I'm not an expert, but it seems that if the bottleneck is the upload bandwidth of one machine/network, you could use the Streams API to distribute the uploading of parts across multiple machines/networks. Do you agree? I think Workbench only operates from one machine.
-
After further research and testing, I realized that the "slowness" of the Streams API that I was seeing was really just an error in my script.
Once I correctly executed the asynchronous upload using the Streams API, I was able to upload a CSV that was 6.2 million rows and around 130 columns wide in 3.5 minutes.
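In case it helps anyone else, the pattern was roughly the sketch below (a minimal sketch, not my exact script): one Stream execution, with the CSV parts pushed through a thread pool. The pydomo method names (create_execution, upload_part, commit_execution) follow pydomo's stream example, and the client ID, stream ID, and worker count are placeholders.
```python
from concurrent.futures import ThreadPoolExecutor

from pydomo import Domo

CLIENT_ID = 'your-client-id'          # placeholder
CLIENT_SECRET = 'your-client-secret'  # placeholder
STREAM_ID = 123                       # placeholder: an existing Stream pointed at your DataSet

domo = Domo(CLIENT_ID, CLIENT_SECRET, api_host='api.domo.com')
streams = domo.streams

def send_part(execution_id, part_num, csv_text):
    # Each part is an independent request, so parts can upload concurrently.
    streams.upload_part(STREAM_ID, execution_id, part_num, csv_text)

def upload_parts_concurrently(csv_parts, max_workers=8):
    execution = streams.create_execution(STREAM_ID)
    execution_id = execution['id']  # on some pydomo versions this is execution.id
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [
            pool.submit(send_part, execution_id, i, part)
            for i, part in enumerate(csv_parts, start=1)
        ]
        for f in futures:
            f.result()  # surface any upload errors before committing
    # Commit only once every part has uploaded successfully.
    streams.commit_execution(STREAM_ID, execution_id)
```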
-
Nice work. Did you try or find any advantages to using gzip compression, or is that just uncompressed CSV?
-
That was actually uncompressed. If I had gzipped it, I'm sure the speed would have increased, probably to somewhere around 1 minute vs. the 3.5 minutes I was seeing. We just tested a similar dataset and successfully uploaded a gzipped 6 million row CSV in 60-90 seconds.
On Domo Workbench, the send portion of the upload (which is really the only part of the upload that is being sped up by using the Streams API) took 36 minutes for that same file. That is a significant increase in speed. However, you still have to split and gzip the file (which Workbench has to do as well), which takes a significant amount of time. I am fairly certain that Domo Workbench does NOT asynchronously read. If you can find a way to asynchronously split and gzip the CSV (or the results of a SQL query), that would be a huge performance boost as well.
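To make the "asynchronously split and gzip" idea concrete, here is a rough standard-library-only sketch: read the CSV in fixed row chunks and compress the chunks in a process pool. The chunk size, worker count, and file naming are illustrative, and for a truly huge source file you would want to bound how many chunks are in flight at once.
```python
import gzip
import itertools
from concurrent.futures import ProcessPoolExecutor

ROWS_PER_PART = 100_000  # illustrative, not a Domo recommendation

def gzip_chunk(args):
    part_num, lines = args
    # Compression is CPU-bound, which is why it runs in a process pool.
    out_path = f'part_{part_num:05d}.csv.gz'
    with open(out_path, 'wb') as out:
        out.write(gzip.compress(''.join(lines).encode('utf-8')))
    return out_path

def split_and_gzip(csv_path, rows_per_part=ROWS_PER_PART, workers=4):
    def chunks():
        with open(csv_path, 'r', encoding='utf-8') as f:
            header = next(f)
            for part_num in itertools.count(1):
                lines = list(itertools.islice(f, rows_per_part))
                if not lines:
                    return
                # Repeat the header in every part so each is a valid CSV.
                yield part_num, [header] + lines

    # Note: map() submits chunks as fast as it can read them; for a very
    # large file you would want to throttle how many chunks are pending.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(gzip_chunk, chunks()))
```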
-
Thanks @Medinacus, your comments are really helpful. I frequently upload a (new) 400 million row (and growing) dataset with 200 columns, and I'm always looking for ways to save time. Have you found a part size that works well? I think the latest documentation recommends 20 MB - 100 MB (compressed size) per part, but I'm curious if you have any input on that. I'm trying to optimize not only the upload time, but also the time for Domo to "process" it after I commit the upload. (Sorry if this should actually be a new question in the forum.)
-
Glad you find them at least somewhat helpful, @robsmith!
We actually haven't experimented much with part size. In our tests, our split files were 100K rows each, which turned out to be gzipped files of about 10-12 MB. Maybe too small? However, they still uploaded very quickly, as mentioned above.
How long has it been taking you to upload your 400M row file? How long does the split and gzip portion take, and then sending the parts?
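One cheap way to feel out part size is to gzip a sample of rows and extrapolate toward whatever compressed target you're aiming for. This is only a rough sketch: the 60 MB default and the sample size are arbitrary, and real compression ratios vary with the data.
```python
import gzip
import itertools

def estimate_rows_per_part(csv_path, target_mb=60, sample_rows=100_000):
    # Gzip a sample of rows and extrapolate how many rows land near the
    # target compressed size per part.
    with open(csv_path, 'r', encoding='utf-8') as f:
        header = next(f)
        sample = list(itertools.islice(f, sample_rows))
    compressed_mb = len(gzip.compress((header + ''.join(sample)).encode('utf-8'))) / (1024 * 1024)
    return int(len(sample) * target_mb / compressed_mb)

# e.g. rows_per_part = estimate_rows_per_part('big_export.csv', target_mb=60)
```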
-
I upload the dataset in about 3.5 hours, but one day I'll further distribute the uploader to see if I can improve that. But after committing the execution, Domo's "processing" stage takes 6+ hours. I'm hoping that by increasing my part sizes to ~100MB gzipped I can minimize the Domo processing time.
I'm not pulling from a database table that I have to split and gzip. Instead, as I collect the data, I store it in gzipped CSV files intended for Domo.
-
Ahh gotchya.
Thanks for the info. Very helpful as we test and try to optimize our own script.
Let me know how the 100 MB file sizes work out!
-
Hey - I was wondering how you are uploading actual files to Domo through the Streams API?
I know you can upload file-like objects (StringIO), but can I actually upload a real file? I modified pydomo a bit on my server to allow for a streaming upload (with open(csvfile) as f:), but I am still not able to upload a folder of files without reading each file fully into memory first.
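For context, what I'm aiming for is roughly the sketch below: handing requests the open file object so it streams straight from disk instead of being read into a string first. The URL pattern, header choices, and get_access_token() are my placeholders/assumptions rather than pydomo's actual API, so please check them against the Streams API documentation.
```python
import requests

def get_access_token():
    raise NotImplementedError  # placeholder: obtain an OAuth token however you already do

def upload_part_from_file(stream_id, execution_id, part_num, gz_path):
    url = (f'https://api.domo.com/v1/streams/{stream_id}'
           f'/executions/{execution_id}/part/{part_num}')
    headers = {
        'Authorization': f'Bearer {get_access_token()}',
        'Content-Type': 'text/csv',
        'Content-Encoding': 'gzip',  # assumption: the file on disk is a gzipped CSV part
    }
    with open(gz_path, 'rb') as f:
        # Passing the open file object as `data` makes requests stream it,
        # so the file is never read fully into memory.
        resp = requests.put(url, data=f, headers=headers)
    resp.raise_for_status()
    return resp
```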