How to use datasets with duplicate column names in a dataflow

Using a JIRA connector, I import a dataset that has 1,000 columns. Two of those columns have duplicate names, so I cannot use the dataset in a Domo SQL transform; it throws a duplicate column error. Writing out each column name individually isn't practical either, since I need 999 of the 1,000 columns.
Is there a way to use my dataset in a transform while working around the duplicate column issue?
One solution could be selecting all the columns in my table except one. Is there a way to do that?
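For context, this is trivial outside of Domo: in pandas, for example, I could just keep the first occurrence of each column name and drop the later duplicate (a rough sketch; the column names below are placeholders, not my real JIRA fields):

```python
import pandas as pd

# Hypothetical extract with a duplicated column header (names are placeholders).
df = pd.DataFrame([["JIRA-1", "Open", "x", "y"]],
                  columns=["Issue Key", "Status", "Custom Field", "Custom Field"])

# Keep the first occurrence of each column name and drop later duplicates,
# i.e. "all columns except one".
deduped = df.loc[:, ~df.columns.duplicated()]
print(deduped.columns.tolist())  # ['Issue Key', 'Status', 'Custom Field']
```

I'm looking for something equivalent that works inside a Domo dataflow.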
Best Answer
Is this a Domo connector or one that you wrote?
I'm surprised that Domo didn't auto-adjust the duplicate column name OR fail the ingest.
If it's a connector you wrote, consider reshaping the data to have more rows and fewer columns; a 1,000-column table is going to be a real pain to build analysis against.
If I had to guess, you've got something like one row per project or ticket, and the data was then flattened so that all the attribute values go across in columns, but very few of those columns are actually populated as you scan through the rows.
Also, 1,000 is a REALLY convenient number for a connector ... are you sure it's not accidentally truncating data because it ran into a limit?
Short answer: I would take a closer look at the connector and see if I can't reshape the data before bringing it into Domo.
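To illustrate the reshape idea, here is a rough sketch of what a long shape (one row per ticket/attribute/value) could look like if you pre-process the extract in pandas before it lands in Domo; the field names are made up, not what the JIRA connector actually returns:

```python
import pandas as pd

# Hypothetical wide extract: one row per ticket, one column per attribute,
# with most attributes empty for any given ticket. Names are placeholders.
wide = pd.DataFrame({
    "Issue Key": ["JIRA-1", "JIRA-2"],
    "Story Points": [5, None],
    "Severity": [None, "High"],
})

# Melt to a long shape: one row per (ticket, attribute, value),
# dropping attributes that are empty for a given ticket.
long = wide.melt(id_vars=["Issue Key"],
                 var_name="Attribute",
                 value_name="Value").dropna(subset=["Value"])
print(long)
```

In that shape a duplicated attribute name is just a repeated value in the Attribute column rather than an invalid column header, and adding new JIRA fields doesn't add new columns.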
Jae Wilson
Check out my 🎥 Domo Training YouTube Channel 👨💻
Say "Thanks" by clicking the ❤️ in the post that helped you.
Please mark the post that solves your problem by clicking on "Accept as Solution"
Answers
Hello,
Thanks for the reply. I am using the JIRA REST API Domo Connector, and you are right, it is a one-row-per-ticket dataset. I was also expecting the connector to auto-adjust the column names. I didn't need the duplicated columns, so I was able to include only the ones I need in the import by using the Filter type = Include and Fields options.
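For anyone who finds this later: as far as I can tell, the connector's Fields option corresponds to the fields parameter on JIRA's search endpoint, so the effect is roughly the same as limiting the fields in a direct REST call like this (a hand-written sketch with placeholder site, credentials, JQL, and field names, not the connector's actual code):

```python
import requests

# Placeholder site, credentials, JQL, and field names.
resp = requests.get(
    "https://your-site.atlassian.net/rest/api/2/search",
    params={
        "jql": "project = DEMO",
        "fields": "summary,status,assignee",  # request only the fields you need
        "maxResults": 100,
    },
    auth=("user@example.com", "api_token"),
)
resp.raise_for_status()
issues = resp.json()["issues"]
```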
Thanks!