Unexpected result when joining two ETL data flows
So here is another strange happening that I do not understand. Here goes, folks:
I have two data flows, each containing a date column and a value column. The date columns don't match up exactly - same dates but not the same timestamps. Anyway, I want to join them so that I end up with one date column and two value columns. I added a constant column with a text value of X to each of the dataflows and then joined them using the inner method, matching on the constant X column. Each of the datasets has about 1k rows, BUT the resulting output dataset has over 1M rows... and I don't understand why?!
Can anyone please explain what I am doing wrong?!
Thanks in advance...
Answers
Hi,
I am not sure what else is going on with the join, but an inner join only keeps rows that have a matching identifier in both datasets. Because your join key is a constant, every row in the first dataset matches every row in the second, so you get a full Cartesian product: 1k x 1k = 1M rows.
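To illustrate with a minimal sketch (placeholder table and column names - yours will differ):
SELECT a.`Date` AS date_a, b.`Date` AS date_b
FROM dataset1 AS a
INNER JOIN dataset2 AS b
ON a.`X` = b.`X` -- `X` holds the constant 'X' in every row, so every row matches every row
-- result: 1,000 x 1,000 = 1,000,000 output rows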
But there is an easier way than matching on your additional text string. You can just reformat the datetime column into a date column - in SQL this works with DATE_FORMAT(), in ETL this would be "Set column type" - you can find instructions here. If that doesn't work as intended, you can go with a combination of "Date operations" and "Combine columns" steps, which gives you a date column as well. Bear in mind that both date columns need to have the same format, too.
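For example, in a SQL dataflow (MySQL syntax) something like this should strip the time portion - assuming your columns are called `Date` and `Value`; adjust to your actual names:
SELECT DATE_FORMAT(`Date`, '%Y-%m-%d') AS `Date`, `Value`
FROM Dataset -- keeps just the calendar date, e.g. '2017-02-13'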
You might want to group (e.g. sum - also covered on the knowledge page above) your values by date (not datetime) before you join the sets, but that depends on what your data actually contains.
HTH,
JHL
Thanks for the answer, but I'm still not getting the correct results. I tried a different way to join: instead of using the constants, I normalized the date columns in both datasets by removing the timestamps with the "Set column type" (date only) command, but the result still has over 100k rows... no idea why...
Any chance you could provide some example values from the two datasets you're wanting to join, as well as what you want the result to look like?
We should be able to provide more help with a bit more detail.
Sincerely,
ValiantSpur
OK, so I basically have data from two different currency exchanges going back several years, plus current data from the last week that is updated every 10 minutes. So imagine I have two years of date fields, each with a corresponding value field holding an average currency price, and then when I get to Feb 13 I have data that updates several hundred times per day for the same day. I have the same situation on both exchanges. Now, I want to compare one exchange against the other, so I want an ETL that has 3 columns: one Date column and two value columns, one from each exchange. My command flow is this:
1. I add the two datasets to the ETL
2. I select the DATE and VALUE columns in each dataset
3. I normalize the DATE column by using the SET COLUMN TYPE command to remove the timestamps (so the column shows only the date)
4. I filter the rows by date to show only those from Jan 1, 2017 onward (I have about 1k rows in each dataset this way)
5. I join the datasets using a LEFT OUTER join (tried an INNER join as well)
The output has in excess of 100k rows...
I hope this is clearer. Please LMK if I can provide any further information...
So if you're joining on date, you'll need to do something about the "last week" data that reports back every 10 minutes. If you remove the timestamp and then join on date, every row from a given day will be matched to every other row with that date, so instead of a 1-to-1 join you end up with a 1-to-many join, which is probably what's causing your high row count.
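To put rough numbers on it (assuming one reading every 10 minutes): that's 6 x 24 = 144 rows per day per exchange, and a date-only join produces 144 x 144 = 20,736 row pairs for that single day. Over a week of such days that's roughly 145,000 rows, which lines up with the 100k+ you're seeing.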
1st question: Do you only need 1 row per day? (I'll assume you do in my answer below.)
2nd question: Is every date in your range accounted for? (i.e., do both datasets return the exact same dates? If one returns weekends but the other doesn't, it will affect the results. I'll assume the dates are the same between them.)
Step 1: It sounds like you did this right, but I would begin by setting both date columns to a Date type.
Step 2: You need to group on your date field (If there are 200 records for Feb 13, we need to average the value for that day)
Here's a quick SQL transform you could follow to do this for each dataset:
SELECT `Date`, AVG(`Value`) AS `Value`
FROM Dataset
GROUP BY `Date`
If you run both datasets through the grouping, you should end up with 1 record per day for each.
Step 3: Still assuming you have the same dates in both datasets, you can then do a LEFT JOIN to combine the two:
SELECT a.`Date`, a.`Value` AS aValue, b.`Value` AS bValue
FROM dataset1 AS a LEFT JOIN dataset2 AS b
ON a.`Date` = b.`Date`
That should give you the result it sounds like you're looking for.
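If you'd rather keep it to a single transform, you could nest the groupings as subqueries (same assumptions and placeholder names as above):
SELECT a.`Date`, a.`Value` AS aValue, b.`Value` AS bValue
FROM (SELECT `Date`, AVG(`Value`) AS `Value` FROM dataset1 GROUP BY `Date`) AS a
LEFT JOIN (SELECT `Date`, AVG(`Value`) AS `Value` FROM dataset2 GROUP BY `Date`) AS b
ON a.`Date` = b.`Date`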
Let me know if I'm wrong on one of my assumptions or if you need me to clarify anything.
Hope this helps,
ValiantSpur
**Please mark "Accept as Solution" if this post solves your problem
**Say "Thanks" by clicking the "heart" in the post that helped you.0 -
You are correct in your assumptions. I need only one value per day.
As for the rest, are you indicating that I should be doing this in SQL and not ETL? I am not too familiar with the SQL process... but I will give it a go, see what the result is, and post it here...
You can also do this via ETL if it's easier.
Here's what it would look like (I included the detail for one of the groupings):
Just make sure to group by your date field and average your value field.
Hope this helps,
ValiantSpur
yep this works! thank you so much!!!!