Fill empty columns based on values in other columns
I'm trying to figure out a way to do the following in an ETL.
Given data that looks like this, I want to propagate the email address from the first row with a specific ID to all other rows that match that ID. I need to do this in order to attribute every interaction, of any type, to a specific user's email address:
Input Example:
Desired Output:
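A minimal made-up sketch of the shape described, assuming columns named ID, Email, and Interaction (the Interaction column and all values here are placeholders):

```python
import pandas as pd

# Hypothetical input: only the first row per ID carries the email address.
input_example = pd.DataFrame({
    "ID": [101, 101, 202],
    "Email": ["ann@example.com", None, "bob@example.com"],
    "Interaction": ["signup", "click", "signup"],
})

# Desired output: every row for an ID carries that ID's email address.
desired_output = pd.DataFrame({
    "ID": [101, 101, 202],
    "Email": ["ann@example.com", "ann@example.com", "bob@example.com"],
    "Interaction": ["signup", "click", "signup"],
})
```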
Answers
-
This is best accomplished in a dataflow. Using Magic ETL, add a Group By tile, group by ID, and aggregate the Email field with the "first non-null value" option. Then, with one email per ID in the output of that tile, join it back to your data on the ID field. Let me know if you have any questions!
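Outside of Magic ETL, the same group-by-then-join logic can be sketched in pandas; this is purely an illustration with made-up sample data (ID and Email are the fields from the question, Interaction is a placeholder):

```python
import pandas as pd

# Hypothetical interaction data: only the first row per ID carries the email.
df = pd.DataFrame({
    "ID": [101, 101, 101, 202, 202],
    "Email": ["ann@example.com", None, None, "bob@example.com", None],
    "Interaction": ["signup", "click", "purchase", "signup", "click"],
})

# Step 1 (Group By tile equivalent): one row per ID with its first non-null email.
first_email = df.groupby("ID", as_index=False)["Email"].first()

# Step 2 (Join tile equivalent): left join the per-ID emails back onto every row.
result = df.drop(columns=["Email"]).merge(first_email, on="ID", how="left")
print(result)
```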
If I solved your problem, please "accept" my answer as the solution
-
Thanks @colemenwilson, I'm going to try that approach and will post back if I can't figure it out.
-
@colemenwilson - if I use the Group By tile on ID, won't I lose the additional rows for that same ID? The ID is in the same table as the individual interaction rows.
-
No, because you will break the Group By out into a separate branch in the dataflow. From your input dataset there will be two branches:
1. All rows of data
2. One unique row for each ID

Then you will left join the unique rows for each ID back to the original input that has all rows of data. No rows of data will be lost.
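As a sanity check that the row count is preserved, the same fill can also be written in pandas with a grouped transform, which broadcasts one value per ID back onto every original row (again just a sketch with made-up data, not a Magic ETL tile):

```python
import pandas as pd

df = pd.DataFrame({
    "ID": [101, 101, 101, 202, 202],
    "Email": ["ann@example.com", None, None, "bob@example.com", None],
    "Interaction": ["signup", "click", "purchase", "signup", "click"],
})

# transform("first") takes the first non-null email per ID and repeats it for
# every row in that group, so no rows are dropped.
df["Email"] = df.groupby("ID")["Email"].transform("first")
print(df)
```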
If I solved your problem, please "accept" my answer as the solution
-
Got it, thanks!