How to make lookup tables robust and fast
A common requirement is to augment an existing data set with new columns based on data from a lookup table, e.g.,
- look up the state based on the postal (zip) code
- map disparate campaign IDs from a stacked DCM/DS/Facebook (etc.) data set onto a master campaign list.
In Domo, lookups are done using a JOIN. There are a few ways to do this, each with its own pros and cons, and in all cases a major problem to watch out for. The SQL sketch below shows the basic shape of a lookup join; the same logic applies whether the join happens in Magic, a Blend, or a SQL DataFlow.
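A minimal sketch, assuming a hypothetical primary table `orders` with a `zip_code` column and a hypothetical lookup table `zip_to_state` keyed on `zip_code`:

```sql
-- Hypothetical tables: `orders` (primary) and `zip_to_state` (lookup).
-- A LEFT JOIN keeps every primary row even when the lookup has no match.
SELECT
    o.*,
    z.state  -- new column pulled in from the lookup table
FROM orders o
LEFT JOIN zip_to_state z
    ON o.zip_code = z.zip_code;
```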
Magic
I usually start with Magic, even if it's just to prototype the solution, and use a join on the lookup key (i.e., a column or set of columns).
Pros:
- Rapid development
- Easy to test and debug
Cons:
- Performance. If the data set is 10M+ rows, Magic might not be fast enough.
Caution
If the join key(s) are not unique, you will duplicate records from the primary data set, which inflates the reporting. This happens if you don't use the right combination of columns for the key, or if the lookup data is created in a spreadsheet or web form that does not enforce unique keys.
The solution is to double-check your join, dedupe the lookup table on the key, run the ETL, and then check that the number of output rows is the same as the number of input rows. You can do this by checking the Magic logs or by creating a Sumo card. The sketch below shows the dedupe logic in SQL.
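In Magic the dedupe is typically a tile; expressed in portable SQL, and reusing the hypothetical `zip_to_state` table from above, the same logic looks like this:

```sql
-- Collapse the hypothetical lookup table to one row per key. MIN() makes
-- an arbitrary-but-deterministic choice among duplicate states; with
-- several payload columns, prefer a window-function dedupe instead so
-- all columns come from the same source row.
SELECT
    zip_code,
    MIN(state) AS state
FROM zip_to_state
GROUP BY zip_code;
```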
If my data set is large, e.g., over 1M rows, I prototype using a subset of the data that I can process in under 5 minutes. This creates a fast test/fix cycle for getting the join keys right; once it works and is validated, I go back to the full data set. Even when the data set is very large, I start in Magic because it is so much easier to build and debug, and it acts as a great blueprint for creating a SQL ETL.
Blends
For large data sets a Blend is a great solution, as the join is done in Adrenaline (the database behind cards), which takes only seconds even for data sets with tens of millions to hundreds of millions of rows.
Pros:
- Performance
Cons:
- Does not support PDP on the output, and can't use an input that has PDP.
- If the input data set is another Blend, say an append of data sets (DCM/DS/Facebook, etc.), then card rendering performance can suffer.
Caution
Same join issues as with Magic.
SQL
If the data set is large (say 5M+ rows) and you can't use a Blend, then SQL is the way to go.
Pros:
- Better performance than Magic. In my tests, MySQL is about 2x faster and Redshift about 10x.
- In SQL you can add validation code to check that the input and output row counts match, write the result to a log, and set alerts (a sketch follows the cons list below). This is very valuable but takes a bit of effort, as it's not baked into the SQL ETL.
Cons:
- Harder to code
- Harder to debug
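A hedged sketch of that validation, assuming MySQL syntax and hypothetical table names: `orders` as the input, `orders_enriched` as the join output, and an `etl_log` table you create and maintain yourself (none of this is provided by the SQL DataFlow itself):

```sql
-- Hypothetical log table; create it once and let each run append to it.
CREATE TABLE IF NOT EXISTS etl_log (
    run_ts      DATETIME,
    input_rows  BIGINT,
    output_rows BIGINT
);

-- Record the row counts for this run.
INSERT INTO etl_log (run_ts, input_rows, output_rows)
SELECT
    NOW(),
    (SELECT COUNT(*) FROM orders),
    (SELECT COUNT(*) FROM orders_enriched);

-- Expose mismatches; an alert on this output catches join fan-out.
SELECT *
FROM etl_log
WHERE input_rows <> output_rows;
```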
Caution
Same join issues as with Magic; however, in SQL you can write a join that returns only the first match (see the sketch below). Personally, I found that overly complex; instead, I ensured that I had the right columns for the join, deduped the lookup table on those keys, and created validations to warn me when the number of input and output rows did not match.
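For completeness, one way to write such a first-match join, again using the hypothetical `orders`/`zip_to_state` tables and assuming window-function support (MySQL 8+/Redshift): dedupe the lookup inline so each primary row can match at most once.

```sql
-- First-match join: rank duplicates per key, then join on rank 1 only,
-- so the primary data set cannot fan out.
SELECT
    o.*,
    z.state
FROM orders o
LEFT JOIN (
    SELECT zip_code, state
    FROM (
        SELECT zip_code, state,
               ROW_NUMBER() OVER (PARTITION BY zip_code ORDER BY state) AS rn
        FROM zip_to_state
    ) ranked
    WHERE rn = 1
) z
    ON o.zip_code = z.zip_code;
```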