2 Issues: MySQL SSH Connector - Query Parameter and Stream API

Sorry for the combo post, but I've got two problems.

 

1. I'm using the MySQL SSH Connector with the QUERY option to pull in data from a table.  Currently most are just daily imports, but some tables are 1B rows, so doing a 'SELECT * FROM table;' is VERY inefficient.  What I'd like to do is use a parameter to pull in only the most recent data (if the source table has a date field, or a PK ID I could use).  It looks like the query parameter option was intended for that, but I don't see how it would store a variable so the next import could reference it.  I know the other option here is to just do a recursive dataflow, but that will slow me down, so I'm trying to reduce waste where I can.
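If the source table has a date column, one workaround that needs no stored variable at all is a relative filter typed straight into the QUERY box, for example 'SELECT * FROM orders WHERE updated_at >= NOW() - INTERVAL 2 DAY;' (table and column names here are made up), with a recursive dataflow de-duping the overlap.  If only a PK ID is available, the high-water mark has to live somewhere outside the connector.  A minimal Python sketch of that pattern, under the same hypothetical names:

```python
# Minimal high-water-mark sketch. The `orders` table, `id` column, and
# watermark.json are all hypothetical stand-ins -- the connector can't
# remember state between runs, so the last-seen ID is stored externally.
import json
import pymysql

STATE_FILE = "watermark.json"

def load_watermark() -> int:
    try:
        with open(STATE_FILE) as f:
            return json.load(f)["max_id"]
    except FileNotFoundError:
        return 0  # first run: pull everything

conn = pymysql.connect(host="db.example.com", user="etl",
                       password="***", database="prod")
with conn.cursor() as cur:
    # Parameterized: only rows added since the last successful import.
    cur.execute("SELECT id, updated_at, amount FROM orders "
                "WHERE id > %s ORDER BY id", (load_watermark(),))
    rows = cur.fetchall()

if rows:
    with open(STATE_FILE, "w") as f:
        json.dump({"max_id": rows[-1][0]}, f)  # first column is `id`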

 

2. Am I right in thinking this may be a good case for the Stream API?  The documentation makes it look like it only works for very large flat files, but that doesn't make sense to me... so I'm hoping I'm wrong about that.  I'm not a programmer either, so any helpful advice here is VERY much appreciated.
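For context on the "flat files" wording: the Stream API ingests CSV parts that the caller uploads, so something has to turn the query results into CSV first.  A rough sketch of the upload flow against an already-created Stream, using endpoint paths from Domo's public API docs (the stream ID, credentials, and sample row are placeholders; verify the paths against the current docs before relying on them):

```python
# Rough sketch of a Stream API upload: get a token, open an execution,
# upload one gzipped CSV part, commit. All IDs/credentials are placeholders.
import gzip
import requests

API = "https://api.domo.com"
CLIENT_ID, CLIENT_SECRET = "...", "..."
STREAM_ID = 123  # an existing Stream created ahead of time

# 1. OAuth token via the client-credentials grant
tok = requests.post(f"{API}/oauth/token",
                    params={"grant_type": "client_credentials", "scope": "data"},
                    auth=(CLIENT_ID, CLIENT_SECRET)).json()["access_token"]
hdr = {"Authorization": f"Bearer {tok}"}

# 2. Open an execution (one upload session)
ex = requests.post(f"{API}/v1/streams/{STREAM_ID}/executions", headers=hdr).json()

# 3. Upload a gzipped CSV part -- this is the "flat file" the docs mean
csv_bytes = b"id,updated_at,amount\n1,2019-01-01,9.99\n"  # stand-in data
requests.put(f"{API}/v1/streams/{STREAM_ID}/executions/{ex['id']}/part/1",
             headers={**hdr, "Content-Type": "text/csv",
                      "Content-Encoding": "gzip"},
             data=gzip.compress(csv_bytes))

# 4. Commit so Domo assembles the uploaded parts into the dataset
requests.put(f"{API}/v1/streams/{STREAM_ID}/executions/{ex['id']}/commit",
             headers=hdr)
```

Large tables would be split across multiple part numbers in step 3 before the single commit in step 4.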

 

Thanks everyone,

Matt


Best Answer

  • NewsomSolutions (Domo Employee) - Answer ✓

    Answers:

    1. No stored system variable - you must track it yourself using a date/ID field and go from there... then build a recursive dataflow to just filter out the dupes.

    2. Maybe - but you have to build code to export the source data to a flat file and then import it with your Stream, which sucks.  (A sketch of that export step is below.)
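
    The export step doesn't have to be elaborate.  A sketch of dumping a query result to a gzipped CSV that a Stream part upload can consume, reusing the hypothetical orders table from above and fetching in chunks so a huge table never has to fit in memory:

    ```python
    # Sketch: stream a MySQL result set into a gzipped CSV, chunk by chunk.
    # SSCursor keeps the result server-side instead of loading it client-side.
    # Table, columns, and connection details are hypothetical.
    import csv
    import gzip
    import pymysql

    conn = pymysql.connect(host="db.example.com", user="etl",
                           password="***", database="prod")
    with conn.cursor(pymysql.cursors.SSCursor) as cur, \
         gzip.open("part1.csv.gz", "wt", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "updated_at", "amount"])  # header row
        cur.execute("SELECT id, updated_at, amount FROM orders")
        while True:
            chunk = cur.fetchmany(50_000)  # hypothetical chunk size
            if not chunk:
                break
            writer.writerows(chunk)
    ```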

     

    Alternative: there is a Data Assembler option for big datasets - going to try it out (the 2B-row table may need it).  Also, I got Workbench working through an SSH tunnel (PuTTY/ODBC/port forwarding), and that works OK sometimes... but it seems to fail at the most inopportune times without good error messages.  Closing this up.
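
    For anyone who'd rather script that tunnel than babysit PuTTY: the same local port forwarding can be done from Python with the third-party sshtunnel package (host names and credentials below are placeholders), which at least surfaces a Python-level exception when the tunnel drops instead of failing silently:

    ```python
    # Sketch: SSH local port forwarding (what PuTTY does) from Python,
    # then a normal MySQL connection through the forwarded port.
    # All host names and credentials are fake placeholders.
    import pymysql
    from sshtunnel import SSHTunnelForwarder

    with SSHTunnelForwarder(
        ("bastion.example.com", 22),
        ssh_username="tunneluser",
        ssh_pkey="~/.ssh/id_rsa",
        remote_bind_address=("127.0.0.1", 3306),  # MySQL as seen from the bastion
        local_bind_address=("127.0.0.1", 3307),   # local end of the tunnel
    ) as tunnel:
        conn = pymysql.connect(host="127.0.0.1", port=tunnel.local_bind_port,
                               user="etl", password="***", database="prod")
        with conn.cursor() as cur:
            cur.execute("SELECT 1")  # sanity check through the tunnel
            print(cur.fetchone())
    ```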
