Comments
-
Got it. This is really helpful @RobSomers. I've already made some progress with a prototype using your suggested approach.
-
@RobSomers thanks for the quick response and suggestion. We already have separate charts per interaction type, but the business is looking for totals across all interactions by date. Hence the requirement. If I'm not joining the data and instead appending it, how will the ETL process handle it when each table has many…
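Just to check my understanding of the suggestion, outside of Magic ETL the append-then-total pattern would be something like this rough pandas sketch (the table and column names here are made up, not our actual schema):

```python
import pandas as pd

# Hypothetical per-interaction-type tables; the real ones come from our connectors.
calls = pd.DataFrame({"interaction_date": ["2023-01-01", "2023-01-02"], "count": [5, 3]})
emails = pd.DataFrame({"interaction_date": ["2023-01-01", "2023-01-02"], "count": [7, 2]})
chats = pd.DataFrame({"interaction_date": ["2023-01-01"], "count": [4]})

# Appending (stacking) the tables instead of joining them: any columns that only
# exist in one table just come through as nulls for the rows from the other tables.
all_interactions = pd.concat([calls, emails, chats], ignore_index=True)

# Totals across all interaction types by date.
totals_by_date = all_interactions.groupby("interaction_date", as_index=False)["count"].sum()
print(totals_by_date)
```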
-
The Domo Integration suite can do this, but it's a very expensive add-on, so we decided against using it.
-
So doing the multiple column splits and multiple unpivots is causing the ETL to run extremely slowly - I'm looking for any alternative solutions for handling a bunch of columns that look like this: "value1,value2,value3,value4,value5" and splitting them into separate rows for a single new column. Any other…
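In case it helps, here's roughly what that split-to-rows step looks like outside of the Magic ETL tiles, as a small pandas sketch (the `ticket_id`/`tags` names are just placeholders):

```python
import pandas as pd

# Placeholder input: one comma-separated list per row, like the source columns described above.
df = pd.DataFrame({
    "ticket_id": [101, 102],
    "tags": ["value1,value2,value3", "value4,value5"],
})

# Split the CSV string into a list, then explode so each value becomes its own row.
# This avoids creating a fixed set of intermediate columns before unpivoting.
df["tags"] = df["tags"].str.split(",")
long_df = df.explode("tags", ignore_index=True).rename(columns={"tags": "tag"})
print(long_df)
```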
-
So this is still driving me nuts - I used this solution for a single column, but I have about 10 columns where I have to do the same thing - do a lookup from an external table to change the values to match those. Is there any simpler way to do this across multiple columns?
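For reference, the pattern I'm after is basically "apply one external lookup to several columns at once", which in pandas terms would look something like this (the lookup and column names are invented for illustration):

```python
import pandas as pd

# Hypothetical external lookup table: raw tag -> display value.
lookup = pd.DataFrame({"tag": ["val_a", "val_b"], "display_name": ["Value A", "Value B"]})
mapping = dict(zip(lookup["tag"], lookup["display_name"]))

df = pd.DataFrame({
    "field_1": ["val_a", "val_b"],
    "field_2": ["val_b", "val_a"],
    "field_3": ["val_a", "val_a"],
})

# Apply the same lookup to every column in one pass instead of one join per column;
# values with no match in the lookup are left unchanged.
columns_to_map = ["field_1", "field_2", "field_3"]
df[columns_to_map] = df[columns_to_map].apply(lambda col: col.map(mapping).fillna(col))
print(df)
```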
-
This is what I ended up doing... Maybe it's not ideal... Since that CSV list never has more than 10 values in it, I joined the data from the external lookup table, then split the column into 10 columns, then unpivoted the 10 columns so each one is now a row. Not particularly elegant, but it seems to do the trick.
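In pandas terms, the split-then-unpivot workaround I described would look something like this (a rough sketch, with the 10-value cap hard-coded the same way):

```python
import pandas as pd

df = pd.DataFrame({
    "ticket_id": [101, 102],
    "values": ["value1,value2,value3", "value4,value5"],
})

# Split the CSV string into up to 10 fixed columns (mirroring the 10-way split).
split_cols = df["values"].str.split(",", expand=True).reindex(columns=range(10))
split_cols.columns = [f"value_{i + 1}" for i in range(10)]
wide = pd.concat([df[["ticket_id"]], split_cols], axis=1)

# Unpivot the 10 columns back into rows and drop the empty slots.
long_df = (
    wide.melt(id_vars="ticket_id", value_name="value")
        .dropna(subset=["value"])
        .drop(columns="variable")
)
print(long_df)
```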
-
Does anyone know if the Zendesk Connector can pull in the Display Names for custom fields rather than the tag? So far I think my only solution to getting the proper Display Names for custom fields in my reports is to do a JSON API call to pull in the Field Details and Values separately and then join to that data in Magic…
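For anyone curious, here's roughly what that separate API call looks like against Zendesk's ticket fields endpoint, as a hedged sketch (the subdomain and credentials are placeholders, and the resulting lookup would get landed as a dataset to join against in the ETL):

```python
import requests

# Placeholder subdomain and credentials; use your own Zendesk instance and API token.
SUBDOMAIN = "yourcompany"
AUTH = ("agent@yourcompany.com/token", "YOUR_API_TOKEN")

# Pull every ticket field, including the option lists for custom dropdown fields.
resp = requests.get(
    f"https://{SUBDOMAIN}.zendesk.com/api/v2/ticket_fields.json",
    auth=AUTH,
    timeout=30,
)
resp.raise_for_status()

# Build a tag -> display name lookup from the custom field options,
# which can then be joined to the connector data.
tag_to_display = {}
for field in resp.json().get("ticket_fields", []):
    for option in field.get("custom_field_options") or []:
        tag_to_display[option["value"]] = option["name"]

print(tag_to_display)
```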
-
I've used the pivot tile for some other ETL work, but never the unpivot ones. I'll take a look and see if that might help.
-
Thanks @MarkSnodgrass - that was what I thought I might have to do. It's just that there are like 20 columns where this needs to happen, and new columns that need the same sort of mapping get added to the primary data set on a regular basis. Was hoping for a more scalable solution, but this might be the only approach.
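If it does end up being column by column, one way I might keep it from needing manual edits every time a new column shows up is to pick the columns up dynamically by a naming pattern, something like this rough pandas sketch (the `custom_` prefix is just an invented convention, not anything Domo enforces):

```python
import pandas as pd

# Same hypothetical lookup as before: raw tag -> display value.
lookup = pd.DataFrame({"tag": ["val_a", "val_b"], "display_name": ["Value A", "Value B"]})
mapping = dict(zip(lookup["tag"], lookup["display_name"]))

df = pd.DataFrame({
    "ticket_id": [101, 102],
    "custom_region": ["val_a", "val_b"],
    "custom_priority": ["val_b", "val_a"],
    # Newly added columns matching the prefix get picked up without any code changes.
    "custom_channel": ["val_a", "val_a"],
})

# Find every column that follows the naming convention and apply the same lookup to all of them.
columns_to_map = [c for c in df.columns if c.startswith("custom_")]
df[columns_to_map] = df[columns_to_map].apply(lambda col: col.map(mapping).fillna(col))
print(df)
```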