How Can I add data to a data source?

I have a DataSet called 'datainput_new' that is the input to a dataflow. I want to insert rows of data into this 'datainput_new' table using an INSERT statement; I am currently using MySQL for my dataflow. When I execute the script and test it immediately, I see that the insert was successful. However, when I exit the dataflow and come back to check whether the data were inserted, I see nothing. Can anyone help out as to what may be happening here?

 

Thanks

Best Answer

  • Gimli
    Gimli Domo Employee
    Answer ✓

    Hello @sufficingsoup,

    Dataflows do not alter their input datasets. They create a new output dataset that is the result of the query you use.
    Because of this, you cannot add or delete rows from an input dataset.

    You can, however, create a new dataset as the output of the dataflow that contains the new rows.

    When using dataflows, everything you do before saving and running the dataflow is a temporary view.
    It isn't until you click "Save and Run" that the dataflow performs your query and writes your output table.

    For example, suppose you have an input dataset, a transform using an INSERT INTO statement, and one output that is a SELECT * FROM input_dataset.

    The input dataset will not be altered. However, the output dataset will contain the input dataset plus the rows added by your INSERT INTO statement.
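    A minimal sketch of that setup as two dataflow steps, assuming illustrative column names (not from your instance):

    ```sql
    -- Transform step: insert a row into the dataflow's working copy
    -- of the input table. The source DataSet itself is never modified.
    INSERT INTO datainput_new (id, metric_value, created_at)
    VALUES (101, 42.5, CURRENT_DATE);

    -- Output step: the output DataSet is whatever this SELECT returns,
    -- so it contains the original rows plus the inserted one.
    SELECT * FROM datainput_new;
    ```

    Note that running this repeatedly will not accumulate rows, because each run starts from the unmodified input again.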

    Please let me know if this helps. 

    **Say "Thanks" by clicking the thumbs up in the post that helped you.
    **Please mark the post that solves your problem by clicking on "Accept as Solution"

Answers

  • kshah008
    kshah008 Contributor

    Hi all,

     

    Can anybody help @sufficingsoup out?

    Thanks!

  • Ilikenno

    Thanks a lot for the help here! This makes a lot of sense now.

  • Thank you,

    However, this only saves the one inserted row.

    The output does not accumulate on top of the rows from previous runs.

     

    Has anybody figured out how to add a daily row of historical data into a table kept for that purpose?

  • Hello - You will need to set up a recursive query. There are a couple of ways to do this, depending on your process.

     

    Solution #1 - Create a dataflow which generates your output table. Save and run the dataflow. After it has finished running, add the resulting output table as an input to the same dataflow, and modify the steps between your input and output tables to suit your needs. It's critical that you don't delete and re-create the output table, or you will just end up with two tables of the same name. Test several times to make sure the structure is working and that your input and output tables are attached to the same unique identifier.
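    A hedged sketch of the recursive pattern in Solution #1, assuming a daily-snapshot use case; here `history_output` is the dataflow's own prior output re-added as an input, and all table and column names are illustrative:

    ```sql
    -- history_output: the accumulated table from previous runs
    -- new_daily_data: today's rows from the source input
    -- Keep everything accumulated before today, then append today's
    -- rows; the date filter avoids duplicates if the dataflow runs
    -- more than once per day.
    SELECT snapshot_date, id, metric_value
    FROM history_output
    WHERE snapshot_date < CURRENT_DATE
    UNION ALL
    SELECT CURRENT_DATE AS snapshot_date, id, metric_value
    FROM new_daily_data;
    ```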

     

    Solution #2 - Look at the path of the input table you are trying to append to and find the table's unique identifier. You will find it in the address bar of the website when you open the table; it will look similar to the two examples listed below. Contact Support or your technical contact at Domo and ask them to make that unique identifier the output of your query. They can make that change quickly and easily on the back end. It's best to use Solution #1 where possible; Solution #2 is needed when you are using more than one query to output to the same table.

     

    1a52bad9-d49f-4a03-b84c-6085c0c24csd

    3c8d39a2-4a20-4c8d-8863-c89538e7e36b

     

    If this helps you, please remember to mark it as a solution!  

     

    Thanks!

     

    DataMaven
    Breaking Down Silos - Building Bridges
    **Say "Thanks" by clicking a reaction in the post that helped you.
    **Please mark the post that solves your problem by clicking on "Accept as Solution"