Improve Dataflow Efficiency
Hello, Domosapiens,
I currently have a dataflow that takes 14 minutes or longer to run.
The design is as such:
- MySQL connector that first runs once to pull in all historical data (7.8M rows). Example full historical pull query:
select * from table;
- I then change the dataset settings to the merge update method by changing the query to
select * from table where updated_date > utc_timestamp() - INTERVAL 1 DAY;
As you would assume, this pulls in a much smaller subset of data than the full historical run.
- The connector feeds into a MySQL ETL dataflow that converts the date columns to each row's location timezone and writes an output dataset (this is the part that takes 14 minutes to run; a rough sketch of the transform is below).
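For reference, here is a rough sketch of the kind of conversion that ETL does; the table and column names (historical_data, created_at, location_tz) are placeholders rather than my actual schema:
select
    t.*,
    -- shift each UTC timestamp into the row's own timezone; CONVERT_TZ with
    -- named zones returns NULL unless MySQL's time zone tables are loaded
    convert_tz(t.created_at, 'UTC', t.location_tz) as created_at_local
from historical_data t;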
Does my issue lie with one or more of the following:
- Should I be creating a sub-table/transformation that queries my data connector for updated changes?
- Maybe with something native to MySQL, like a REPLACE INTO upsert (see the sketch below)?
I am open to suggestions. Realistically, is it possible to decrease the dataflow run time from 14+ minutes down to as low as possible? If so, how would you do it?
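To illustrate what I mean by the REPLACE INTO idea, here is a very rough sketch of the MySQL upsert pattern, with placeholder table and column names (target_table, staging_table, id as the primary key):
-- REPLACE deletes any existing row with the same primary key and inserts the new one
replace into target_table (id, updated_date, amount)
select id, updated_date, amount
from staging_table
where updated_date > utc_timestamp() - INTERVAL 1 DAY;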
Grateful!
Isaiah Melendez
Answers
Hi @imelendez
The biggest improvement you'll see in processing time is if you can switch off of the MySQL ETL dataflow. It's procedural, so everything must run in order. It sounds like your MySQL ETL isn't doing much except a simple transformation, though. Have you looked into using a Dataset View (currently in beta - talk with your CSM) with a calculated field to compute your timezone-shifted date? This removes the lag of exporting your entire dataset to the processing server, having it processed there, and transferring it back across.
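As a rough illustration only, a calculated field along these lines could handle a fixed shift (the column name and offset are placeholders; per-row timezones would need a join or a CASE):
-- shift a UTC timestamp back five hours; purely illustrative
DATE_ADD(`created_at`, INTERVAL -5 HOUR)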
You can look at the history of your MySQL ETL dataflow and click on the SUCCESS or FAILURE status to get a detailed breakdown of how long each section took, which helps highlight slow-performing sections. My guess, though, is that in your case it's just reading in your data that takes the longest.
I try to stay away from MySQL ETL dataflows unless absolutely necessary, as they're the least performant of the options.
**Was this post helpful? Click Agree or Like below**
**Did this solve your problem? Accept it as a solution!**
Hi, @GrantSmith!
Thank you for the suggestion. I was pointed in the general direction of the SQL ETL dataflow because I was running into issues using the formula tile in Magic ETL to do the convert_tz() conversion. It kept throwing an error. I tried it with this method and it worked. But, generally speaking, your middle paragraph hit the nail on the head: reading in the data is what takes the longest.
Is there an efficient way to go about not having to read all the data each time? I am open to suggestions.
I have asked my CSM to see if they can turn on the DSV feature for us.
Thanks a bunch, Grant!
With a MySQL dataflow, no, there isn't a way to transfer only part of your dataset; it needs to transfer everything over to read it. The best option is, as I said, removing the transfer portion with a Dataset View. There is also a premium feature called Adrenaline Dataflows, which does the ETL work inside the Adrenaline layer instead of transferring the data out; you might be able to talk with your CSM about it, but I'd recommend going down the Dataset View route first since that's free.
**Was this post helpful? Click Agree or Like below**
**Did this solve your problem? Accept it as a solution!**
So if I am understanding correctly, in ETLs, regardless of the method you use, the ETL will scan/read the entire input dataset no matter how efficient you try to be, correct? If so, that is a bummer.
Would something like this still take a long time, or do you think performance would improve? Sorry if it sounds like I'm shooting in the dark here. Just trying to think of a way to solve this quandary, @GrantSmith. :D
Proposed design:
Nihar did a whole session about this at the recent conference. https://www.youtube.com/watch?v=kmC-0ncu9PA
Let's unpack a few things.
1) Why do you care if it takes 15 minutes to run a dataflow? How often are you trying to update the data? You can't use a recursive dataflow AND have low-latency run times. The problem with recursives is that run time keeps getting longer as your historical dataset accumulates more data.
THEREFORE, if you're planning to run your dataflow multiple times a day, you must find a pipeline that does not require recursion.
Since you're using a connector to pull data into Domo ... if you want the lowest execution time, why don't you just put the logic for your timezone conversions into the connector query?
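Purely as a sketch (the table and column names are placeholders, and CONVERT_TZ with named time zones only works if MySQL's time zone tables are loaded), the connector query could look something like:
-- the conversion and the incremental filter both happen in MySQL,
-- before the rows ever reach Domo
select
    t.*,
    convert_tz(t.created_at, 'UTC', t.location_tz) as created_at_local
from my_table t
where t.updated_date > utc_timestamp() - INTERVAL 1 DAY;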
2) You generally should not do timezone alterations in your ETL. Domo expects timestamps to be in UTC, stores data in UTC, and makes adjustments in the presentation layer assuming the source data is in UTC.
THAT SAID, to support presenting data in different timezones regardless of the company settings, you could:
1) create a Date Table that includes an isOffset column for Daylight Saving Time
2) create a Time Dimension with columns (for different regions) that add x hours to a UTC timestamp for the hour offset
3) JOIN them using a Dataset View,
4) (very pseudocode) beast mode: date_add( timestamp , INTERVAL isDaylightSavings HOUR) + interval RegionalOffsetHours HOUR (see the sketch below)
WHY THIS METHOD?
If you want to add a new timezone or you discover a mistake, you don't want to have to reprocess your entire historical dataset. AND you don't want to repeat this logic multiple times for different datasets.
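A slightly more concrete version of that pseudocode, assuming the time dimension has already been joined on and carries isDaylightSavings (0 or 1) and RegionalOffsetHours columns (the names are illustrative, not a prescribed schema):
-- add the DST hour first, then the regional hour offset, both taken from the joined dimension
DATE_ADD(DATE_ADD(`timestamp`, INTERVAL `isDaylightSavings` HOUR), INTERVAL `RegionalOffsetHours` HOUR)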
Jae Wilson
Check out my 🎥 Domo Training YouTube Channel 👨💻
**Say "Thanks" by clicking the ❤️ in the post that helped you.
**Please mark the post that solves your problem by clicking on "Accept as Solution"0