Appending large datasets: Magic ETL vs Redshift

Hi,
I need to append large datasets together in Domo (Google Analytics data stored in BigQuery).
According to the documentation, when you have inputs larger than 100 million rows you should use Redshift to transform the data.
I compared large dataset appends in Magic ETL against Redshift, and both took a similar amount of time to complete. What is the rationale behind the recommendation to use Redshift when there doesn't seem to be an improvement in performance?
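For context, the append I'm testing on the Redshift side is just a UNION ALL of the two inputs in a SQL dataflow transform. A minimal sketch of what I mean (dataset and column names below are placeholders, not my actual tables):

```sql
-- Hypothetical Redshift SQL dataflow transform: stack two GA extracts.
-- ga_sessions_2022 / ga_sessions_2023 and the columns are illustrative only.
SELECT session_date, channel, sessions, pageviews
FROM ga_sessions_2022
UNION ALL
SELECT session_date, channel, sessions, pageviews
FROM ga_sessions_2023
```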
Thanks
Comments
Are you only appending these two datasets, or are you doing more calculations? If you only want to append them, you may want to consider using DataFusion since that's specifically designed for simple joins/appends on very large datasets.