DataFusion Experiences and Limitations?

I'm trying to find ways to optimize the performance of my data flows, and one thing I've been thinking about is leveraging DataFusion to combine data sets.

Basically, I have a base data set from our admin that adds a new day of data each day, about 65k rows per day. After 15 months of data (since the beginning of 2017), we're at around 30 million rows. To improve performance, I want to limit what has to be processed each day, and my thought is to remove all of our Salesforce data from this base data set and then use a DataFusion to combine the two. The Salesforce data is about 2,000 rows of unique information that is mapped against the larger 30 million rows, and that mapping currently takes a few hours in ETL.

Has anyone used a DataFusion to combine a large data set with a smaller one in this fashion, and how was the overall performance? If it works for a data set this large, I may reconfigure all of our data sets this way, so it would be great to get insights into other people's experiences. I figure trimming 15 columns from the base data set will improve processing, and adding those columns back via DataFusion would reduce overall processing time.
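To make the shape of that concrete, the join I'm describing is roughly the following, sketched in pandas purely for illustration (DataFusion itself is configured in Domo's UI, not code, and column names like account_id are invented):

```python
import pandas as pd

# Stand-ins for the real data sets; all column names here are made up.
base = pd.DataFrame({
    "account_id":    [101, 102, 101],
    "snapshot_date": ["2018-03-01", "2018-03-01", "2018-03-02"],
    "revenue":       [500.0, 750.0, 520.0],
})  # plays the role of the ~30M-row base set, minus the Salesforce columns

salesforce = pd.DataFrame({
    "account_id":    [101, 102],
    "account_owner": ["Alice", "Bob"],
    "contract_term": ["Annual", "Monthly"],
})  # plays the role of the ~2k-row Salesforce set

# Left join: every base row picks up its Salesforce descriptives,
# which is what stripping the 15 columns and re-attaching them amounts to.
combined = base.merge(salesforce, on="account_id", how="left")
print(combined)
```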


Also, I assume DataFusion data sets cannot be used as inputs to other ETLs or to Redshift dataflows, but confirmation of that would be helpful as well.

Thirdly, why can't we get a DataFusion that appends data onto an existing data set and can run on a loop? This would be for adding rows as opposed to columns, obviously.

Comments

  • Hi,

    Is anyone able to help out with this request?


    Thanks,

  • D-Rab (Member)

    Anyone have experience using DataFusion?

  • I have not had experience with DataFusion.  However, have you considered splitting up your main data flow?  Rather than rewriting all 30 million rows, have you considered an append?  I'm assuming you are adding ~65k rows each day.  Do the historical rows change as well?  If not, you could run the data flow only for the most recent day and then, in a separate dataflow, append the output to your complete data set, as in the sketch below.
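    Something like this, sketched in pandas just to show the shape of the idea (file names are stand-ins; in Domo this would be a recursive/append dataflow over DataSets, not literal files):

    ```python
    import pandas as pd

    # Hypothetical files standing in for Domo DataSets.
    history = pd.read_csv("base_history.csv")    # the accumulated ~30M rows
    today = pd.read_csv("todays_extract.csv")    # only today's ~65k rows

    # ...apply the daily transforms to `today` only, never to `history`...

    # Append the small daily output to the large historical set.
    updated = pd.concat([history, today], ignore_index=True)
    updated.to_csv("base_history.csv", index=False)
    ```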


    “There is a superhero in all of us, we just need the courage to put on the cape.” -Superman
  • D-Rab (Member)

    I'm already appending each day's base data on top of the previous days'. That data will never change. Using DataFusion is what I think would make sense for attaching the descriptives that can change over time: contractual terms, ownership of the entity, and who is working the account can all change from day to day.

  • So I guess the question now becomes: do you want to know what those descriptives are currently, or what they were at the time in question?

    My suggestion is to take your 65k rows each day and run them through the data flow that attaches these "descriptives" from Salesforce.  You would then have a complete data set for that day, which you would append to the full data set, as sketched below.  By doing this you would be able to see who was working an account on the day in question, rather than just who is currently working the account.  Also, because you would only be running the joins on a single day's worth of data at a time, it should not take as long to process.
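    In pandas terms, the pattern would look roughly like this (all names are hypothetical, and the join key account_id is an assumption):

    ```python
    import pandas as pd

    # Hypothetical files standing in for Domo DataSets.
    today = pd.read_csv("todays_extract.csv")        # ~65k rows for one day
    salesforce = pd.read_csv("salesforce_dim.csv")   # ~2k rows of current descriptives

    # Join descriptives onto just today's slice: both inputs are small, so it's fast.
    today_enriched = today.merge(salesforce, on="account_id", how="left")

    # Appending freezes today's descriptives into history: each historical row
    # keeps the owner/terms as they were on that day, not as they are now.
    history = pd.read_csv("enriched_history.csv")
    updated = pd.concat([history, today_enriched], ignore_index=True)
    updated.to_csv("enriched_history.csv", index=False)
    ```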


  • D-Rab (Member)

    Just current. Historicals don't really matter to our current analysis. I have logged both historical and current values in the past, but I'm moving to just current since no one has a need for the past book of business.

  • Have you attempted this with a DataFusion yet?  As I mentioned earlier, I don't have much experience with that tool.  However, for data sets of this size, I would tend to rely on a Redshift data flow over the ETL.  The limitations of DataFusion (the output cannot be used in another dataflow, and you cannot apply PDP rules to a DataFusion) have prevented me from using it much.


  • D-Rab (Member)

    Totally understand those limitations.

    1. We don't use PDP.

    2. Since the base data set will be its own dataflow that can be repurposed, I'm indifferent to the other limitation you mention.

  • I would say go for it then.

    Create a new DataFusion to join the two data sets and compare.  I'm curious how the two would perform compared with one another.  If you do get it working, please post how the DataFusion performs versus the ETL data flow.

