Magic ETL


How to use datasets with duplicate column names in a dataflow

Using a JIRA connector, I import a dataset that has 1,000 columns, two of which have the same name. Consequently, I cannot use the dataset in a Domo SQL transform, as it gives me a duplicate column error. Writing out each column name individually is also impractical, since I need 999 of the 1,000 columns.

 

Is there a way to use my dataset in a transform while working around the duplicate column issue?

 

One solution could be selecting all columns in my table except one. Is there a way to do that?
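A sketch of both workarounds outside of SQL (for example, in a Python pre-processing step), assuming hypothetical column names: first keep only the first copy of any duplicated column, then select every column except one.

```python
import pandas as pd

# Hypothetical frame with a duplicated column name, similar to the JIRA export.
df = pd.DataFrame([[1, 2, 3]], columns=["key", "status", "status"])

# Keep only the first occurrence of each duplicated column name.
deduped = df.loc[:, ~df.columns.duplicated()]

# Select every column except one, without listing all the names by hand.
subset = deduped.loc[:, deduped.columns != "status"]
```

Standard MySQL has no `SELECT * EXCEPT` clause, so for a SQL transform the column list generally has to be generated or trimmed before the query rather than inside it.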

Best Answer

  • Coach
    Answer ✓

    Is this a Domo connector or one that you wrote?  

     

    I'm surprised that Domo didn't auto-adjust the duplicate column name OR fail the ingest.

     

    If it's a connector you wrote, consider reshaping the data to have more rows and fewer columns.  That's going to be a real pain to build analysis against.

     

    If I had to guess, you've got something like one row per project or ticket, and then flattened the data so that all the attribute values go across in columns, but very few of the columns are actually populated as you scan through the rows.

     

    Also, 1000 is a REALLY convenient number for a connector ... are you sure it's not accidentally truncating data because it ran into a limit?

     

    Short answer: I would take a closer look at the connector and see if I can't reshape the data before bringing it into Domo.
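    The reshape suggested above (more rows, fewer columns) can be sketched with a pandas unpivot; the table and column names here are hypothetical:

    ```python
    import pandas as pd

    # Hypothetical wide table: one row per ticket, one sparsely
    # populated column per attribute.
    wide = pd.DataFrame({
        "ticket": ["JIRA-1", "JIRA-2"],
        "priority": ["High", None],
        "component": [None, "API"],
    })

    # Unpivot to one row per (ticket, attribute) pair and drop the
    # empty cells, so only populated attributes survive.
    tall = wide.melt(id_vars="ticket", var_name="attribute", value_name="value")
    tall = tall.dropna(subset=["value"])
    ```

    This trades the 1000-column layout for a tall shape that is far easier to analyze and has no duplicate-name problem.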

     

     

    Jae Wilson
    Check out my 🎥 Domo Training YouTube Channel 👨‍💻

    **Say "Thanks" by clicking the ❤️ in the post that helped you.
    **Please mark the post that solves your problem by clicking on "Accept as Solution"

Answers

  • Hello,

    Thanks for the reply. I am using the JIRA Rest API Domo Connector and you are right, it is a one row per ticket dataset.

     

    I was also expecting the connector to auto-adjust the column names. I didn't need the duplicated columns, so I was able to include only the ones I need in the import using the Filter type = Include and Fields options.

    Thanks!
