I'm trying to find ways to optimize the performance of my dataflows, and one thing I've been considering is leveraging DataFusion to combine datasets.
Basically, I have a base dataset from our admin system that appends a new day of data every day, about 65k rows per day. After roughly 15 months of data (starting at the beginning of 2017) we're at around 30 million rows. To improve performance, I want to limit how much has to process each day. My thought is to remove all of the Salesforce columns from this base dataset and then use DataFusion to join them back in. The Salesforce data is about 2,000 rows of unique information that gets mapped onto the larger 30-million-row dataset, and that mapping currently takes a few hours in ETL.

Has anyone used DataFusion to join a large dataset to a much smaller one in this fashion, and how was the overall performance? If it works for a dataset this size, I may reconfigure all of our datasets this way, so it would be great to hear about other people's experiences. My thinking is that trimming roughly 15 columns out of the base dataset will speed up its daily processing, and adding those columns back via DataFusion will reduce overall processing time.
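To make the shape of the join concrete, here's a rough pandas sketch of what I'd expect the DataFusion to be replicating. The file names and the `account_id` key are just placeholders for illustration, not our actual schema:

```python
import pandas as pd

# Large base dataset: ~65k new rows per day, ~30M rows total,
# with the ~15 Salesforce columns stripped out. (Placeholder source.)
base = pd.read_csv("base_dataset.csv")

# Small Salesforce lookup: ~2,000 unique rows keyed the same way.
salesforce = pd.read_csv("salesforce_lookup.csv")

# Left join: every base row picks up its Salesforce attributes from the
# small lookup instead of carrying them in the 30M-row dataset.
combined = base.merge(salesforce, on="account_id", how="left")

print(combined.shape)
```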
Also, I assume DataFusion datasets cannot be used as inputs to other ETL or Redshift dataflows, but confirmation on that would be helpful as well.
Thirdly, why can't we get a DataFusion that appends data to an existing dataset and runs on a recurring schedule? This would be for adding rows rather than columns, obviously.
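In other words, the difference I'm after, again sketched in pandas with placeholder names, is appending new rows on a schedule rather than joining on new columns:

```python
import pandas as pd

existing = pd.read_csv("existing_dataset.csv")  # placeholder source
todays_rows = pd.read_csv("daily_export.csv")   # placeholder source

# What I'm asking for: append today's ~65k rows onto the existing
# dataset on a recurring schedule (adding rows).
appended = pd.concat([existing, todays_rows], ignore_index=True)

# What DataFusion gives me today is the column-adding join shown
# earlier, not this kind of scheduled row append.
```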