Can I use an "INSERT INTO" statement in Redshift?

I have multiple transformation stages in a Redshift dataflow.


I am trying to use an INSERT statement in a transform by disabling the "Generate Table Output" option.


Is that possible in Domo?




  • kshah008 (Contributor)

    Hi all,

    Can anybody help @Ravendran with their question?



  • Hi @Ravendran,


    It is possible to insert records into a transform dataset using an INSERT statement. The basic syntax is:



    INSERT INTO table (column_name) VALUES ('value')



    Simply add this command in your transform and disable the "Generate Table Output" checkbox.
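
    As a sketch, a transform with "Generate Table Output" disabled might run a statement like the following (the table and column names here are hypothetical, not from the original post):

    ```sql
    -- Hypothetical example: append a row to a table produced by an
    -- earlier transform stage in the same Redshift dataflow.
    -- "Generate Table Output" is disabled for this transform.
    INSERT INTO stage_orders (order_id, order_date, amount)
    VALUES (1001, '2024-01-15', 250.00);
    ```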


    Please let me know if that does not answer your question.

  • kshah008 (Contributor)

    @Ravendran, did creed's reply help answer your question? 

  • Is it possible to do the insert (or delete) on a Domo dataset? I have a Salesforce dataset that I want to delete records from and insert new records into on a daily basis. Being able to do the delete and insert on only the transform dataset does not help me unless I can write back to the original Domo dataset that was used as input, but I do not believe that is possible.

  • kshah008 (Contributor)

    @danielbest, please feel free to open up a new thread for better exposure to your question.

  • Did this ever get resolved? I'm trying to do the same thing (insert records into a Domo dataset on a recurring basis) via a Redshift dataflow, but I'm not sure if it's possible.

  • @btm who on the product side can review this to provide guidance?



  • Hi,


    I have the same question. Was there ever a resolution to this?

VictorReyes (Contributor)



    I figured out that you can add an output of a dataflow as an input to the same dataflow. I have been using this as a little trick to update an existing Domo dataset. I have my new/raw data and my existing set, with the same dataset serving as both an output and an input of the dataflow, and I update it with my new records. I don't set the dataflow to re-run when the dataset that is both an input and an output is updated; it only runs when the raw data has changed. Hope this helps.
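
    A minimal sketch of that output-as-input pattern, assuming the dataflow's own output is added as an input table called `existing_data` and the new records arrive in `raw_data` (both names hypothetical, as is the `id` key column):

    ```sql
    -- Hypothetical sketch: keep existing rows that are not being replaced,
    -- then append the new/updated rows from the raw input. The result is
    -- written back to the same dataset that feeds existing_data.
    SELECT e.id, e.value, e.updated_at
    FROM existing_data e
    LEFT JOIN raw_data r ON r.id = e.id
    WHERE r.id IS NULL
    UNION ALL
    SELECT id, value, updated_at
    FROM raw_data;
    ```

    The anti-join (LEFT JOIN … WHERE r.id IS NULL) drops old versions of rows that appear in the new data, so the UNION ALL acts as an upsert rather than creating duplicates.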