How to make lookup tables robust and fast
A common requirement is to augment an existing data set with new columns based on data from a lookup table, e.g.,
- lookup the State based on the post (zip) code
- map disparate campaign IDs from a stacked DCM/DS/Facebook data set, etc., into a master campaign.
In Domo, lookups are done using a JOIN. There are a couple of ways to do this, each with its pros and cons, and in all cases a major problem to watch out for.
I usually start with Magic, even if it's just to prototype the solution, and use a join on the lookup key (i.e., a column or set of columns).
- Rapid development
- Easy to test and debug
- Performance. If the data set is 10M+ rows, Magic might not be fast enough.
If the join key(s) is not unique, you will duplicate records from the primary data set, which inflates the reporting. This happens if you don't use the right combination of columns for the key, or if the data is created in a spreadsheet or web form that does not enforce unique keys.
The solution is to double-check your work on the join, dedupe the lookup table on the key, run the ETL, and then check that the number of output rows is the same as the number of input rows. You can do this by checking the Magic logs or by creating a Sumo.
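The dedupe-then-validate pattern can be sketched in plain Python (the table and column names here are illustrative, not from any real data set): collapse the lookup table to one row per key, join, then confirm the output row count matches the input.

```python
# Primary data set: one row per order (hypothetical example data).
primary = [
    {"order_id": 1, "zip": "90210"},
    {"order_id": 2, "zip": "10001"},
    {"order_id": 3, "zip": "90210"},
]

# Lookup table with a duplicate key -- a naive join on "zip" would
# emit orders 1 and 3 twice each, inflating the report.
lookup_rows = [
    {"zip": "90210", "state": "CA"},
    {"zip": "90210", "state": "CA (dupe)"},
    {"zip": "10001", "state": "NY"},
]

# Dedupe on the key: keep the first value seen for each zip.
lookup = {}
for row in lookup_rows:
    lookup.setdefault(row["zip"], row["state"])

# Join: exactly one output row per input row, with the new State column.
output = [{**row, "state": lookup.get(row["zip"])} for row in primary]

# Validation: input and output row counts must match.
assert len(output) == len(primary), "join duplicated rows!"
print(len(primary), len(output))  # 3 3
```

The same idea carries over to Magic: dedupe the lookup input before the join, then compare input and output row counts after it.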
If my data set is large, e.g., over 1M rows, I prototype using a subset of data that I can process in under 5 minutes. This creates a fast test/fix cycle to get the right join keys; then, when it works and is validated, I go back to the full data set. Even when the data set is very large, I start in Magic because it is so much easier to build and debug, and it acts as a great blueprint for creating a SQL ETL.
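The subset workflow can be sketched in plain Python (the 1M/10K sizes are illustrative): a fixed seed means every debug run sees the same sample, so a fix you make is tested against the same rows that exposed the problem.

```python
# Hedged sketch of prototyping on a subset: sample a small, repeatable
# slice of a large data set so each test/fix cycle finishes quickly.
import random

FULL_SIZE = 1_000_000   # stand-in for the full data set's row count
SAMPLE_SIZE = 10_000    # small enough to process in well under 5 min

random.seed(42)  # fixed seed -> the same sample on every debug run
sample_ids = random.sample(range(FULL_SIZE), SAMPLE_SIZE)

print(len(sample_ids))  # 10000
```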
For large data sets a blend is a great solution as the join is done in Adrenaline (the cards database), which only takes seconds even for data sets with 10s of millions to 100s of millions of rows.
- Does not support PDP on the output and can't use an input with PDP.
- If the input data set is another blend, say an append of data sets (DCM/DS/Facebook etc), then card rendering performance can suffer.
Same join issues as with Magic.
If the data set is large (say 5M+ rows) and you can't use a Blend, then SQL is the way to go.
- Better performance than Magic. In my tests, MySQL is about 2x faster and Redshift 10x.
- In SQL you can add validation code to check that the input/output row counts are the same, write this to a log, and set alerts. This is very valuable but takes a bit of effort, as it's not baked into the SQL ETL.
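A sketch of that validation idea, using SQLite (Python stdlib) as a stand-in for the dataflow engine: after the join, compare input and output row counts and write the result to a log table you can alert on. All table and column names here are illustrative, not Domo's.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE primary_data (order_id INTEGER, zip TEXT);
    CREATE TABLE zip_lookup   (zip TEXT, state TEXT);
    INSERT INTO primary_data VALUES (1,'90210'),(2,'10001'),(3,'90210');
    INSERT INTO zip_lookup   VALUES ('90210','CA'),('10001','NY');

    -- The lookup join itself.
    CREATE TABLE output AS
    SELECT p.order_id, p.zip, l.state
    FROM primary_data p
    LEFT JOIN zip_lookup l ON l.zip = p.zip;

    -- Validation: log the input/output row counts and whether they match.
    CREATE TABLE etl_log (check_name TEXT, in_rows INTEGER,
                          out_rows INTEGER, ok INTEGER);
    INSERT INTO etl_log
    SELECT 'lookup join',
           (SELECT COUNT(*) FROM primary_data),
           (SELECT COUNT(*) FROM output),
           (SELECT COUNT(*) FROM primary_data) = (SELECT COUNT(*) FROM output);
""")

name, in_rows, out_rows, ok = con.execute("SELECT * FROM etl_log").fetchone()
print(name, in_rows, out_rows, bool(ok))  # lookup join 3 3 True
```

In Domo you would output the log table as a data set and put an alert on the `ok` column.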
- Harder to code
- Harder to debug
Same join issues as with Magic; however, you can create a join that returns only the first match. Personally, I found that overly complex and instead ensured I had the right columns for the join, deduped the lookup table on those keys, and created validations to warn me when the number of input and output rows did not match.
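Deduping the lookup table on its key can be done in SQL itself, shown here with SQLite (Python stdlib) for illustration; `GROUP BY` the join key and `MIN()` picks one deterministic value per key, so the subsequent join cannot fan out. Names are illustrative.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE zip_lookup (zip TEXT, state TEXT);
    INSERT INTO zip_lookup VALUES
        ('90210','CA'), ('90210','California'), ('10001','NY');

    -- One row per key: GROUP BY the join key, MIN() breaks ties.
    CREATE TABLE zip_lookup_deduped AS
    SELECT zip, MIN(state) AS state
    FROM zip_lookup
    GROUP BY zip;
""")

rows = con.execute(
    "SELECT zip, state FROM zip_lookup_deduped ORDER BY zip").fetchall()
print(rows)  # [('10001', 'NY'), ('90210', 'CA')]
```

Joining the primary data set against `zip_lookup_deduped` instead of `zip_lookup` guarantees at most one match per row.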