Grouping by Cost, Revenue, and Margin
Hey all, looking for ideas.
I have a data set with shipping lanes (Origin/Destination), a summed cost and revenue for each shipment on each lane, and a margin derived from those for each lane and shipment.
I have each lane summarized into a monthly average, but I would like to break that down a little further to show groupings of cost, groupings of revenue, and groupings of margin. That way we can see how many shipments fall into each group, what the cost and margin are for each group, and how many shipments are outliers that didn't fall into any group.
Example would be something like:
CA to MN: 130 shipments, Cost $300-$400, Margin $20-$30, Revenue $320-$430
CA to MN: 20 shipments, Cost $400-$500, Margin $5-$10, Revenue -$15 to -$10
A couple questions/difficulties:
- I would prefer not to use predefined ranges for cost, margin, and revenue. I'd rather the system break them into groups, with manual parameters such as the number of ranges or the width of a given range (e.g., costs within a $100 band lumped together; see the formula sketch after this list).
- This grouping would be repeated three times: once each for cost, revenue, and margin.
- Would there be a way to use the largest group as the 'mean' and measure the other groups against it? Or to group things statistically relative to the largest group?
- I'm assuming this is better done at the ETL level than in a card?
- Need to keep the individual shipment level detail so we can look at individual shipments within each grouping.
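For instance, something like this in a Magic ETL Formula tile or a Beast Mode (with `Cost` standing in for whatever the column is actually called) is the kind of fixed-width banding I have in mind, where $0-99.99 maps to 0, $300-399.99 maps to 300, and so on:

```
FLOOR(`Cost` / 100) * 100
```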
Maybe there's not a single answer here, but I'm interested in hearing how others might approach it. Also, my programming experience is pretty much zero, but I can write CASE statements etc. decently in the ETLs. I do have access internally to programmers who could help if need be.
Thanks in advance!
Answers
-
Hey Adam,
Could you attach an Excel spreadsheet with sample input and output data?
It will be easier to explain what ETL / Beast Mode will need to be created if we have an example of the inputs and desired outputs.
Thanks,
-
@nmizzell I think if I can output it in tabular data first, that would be a big win, and then as a team we can play with how they would like to see it best, so maybe something like this for the first step. Right now I'm more focused on how to get the data I want than on the visual. I added some columns to the right (O-U).
If I could somehow tell the system to find the clusters in the margins and break them into groups, that would be super helpful.
Let me know if it's not making sense. I've had my head in this for a while, so I'm probably skipping some steps and leaving gaps for you.
Also, I'm thinking this is going to come down to clustering, and the question is how to do that in Domo.
-
Got it. To do clustering, you will want to use a Domo Jupyter notebook. The notebook is fairly straightforward to set up: you load in your dataset, execute your Python clustering code, and output the results to another dataset in Domo, or even as a column in the same dataset.
For the clustering itself, I would suggest k-means. (Note that a KNN classifier is a supervised method that needs labeled training data; for unsupervised grouping like this, k-means is the usual starting point.)
You may want to leverage your team of developers to write the code for you. Once the clustering code is set up, you can have the notebook execute automatically whenever the input dataset refreshes.
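To give you and your developers a concrete starting point, here is a minimal sketch of what that notebook code could look like using scikit-learn's k-means. The dataset names, the column names (Origin, Destination, Cost, Revenue, Margin), and the choice of up to 3 clusters per lane are all assumptions you'd adapt to your data:

```python
import domojupyter as domo  # Domo's Jupyter helper for reading/writing datasets
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Input dataset name is a placeholder; it must be linked to the notebook
df = domo.read_dataframe('Shipment Lanes')

def cluster_lane(group, n_clusters=3):
    """Assign each shipment within one lane to a cost/revenue/margin cluster."""
    # Scale the metrics so no single one dominates the distance calculation
    features = StandardScaler().fit_transform(group[['Cost', 'Revenue', 'Margin']])
    k = min(n_clusters, len(group))  # lanes with few shipments get fewer clusters
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)
    return group.assign(Cluster=labels)

# Cluster within each lane so the groups are comparable lane by lane,
# while keeping the shipment-level rows intact
clustered = df.groupby(['Origin', 'Destination'], group_keys=False).apply(cluster_lane)

# Write the labeled, shipment-level data back out as a new Domo dataset
domo.write_dataframe(clustered, 'Shipment Lanes Clustered')
```

Because the cluster label is just another column on the shipment-level rows, this also covers your requirement to drill into individual shipments within each grouping.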
-
Not to gatekeep the fun that can be had in Jupyter, but for someone who describes themselves as zero-code, I think ETLs and exploring the data with tables are probably a better starting point than trying to learn Python or R to do clustering.
Since you said your dataset has hundreds of columns, I would recommend starting by unpivoting your data into a "long" format. I would use the Dynamic Unpivot tile in Magic ETL and select everything except your Cost1-Cost5 columns as the columns not to be pivoted.
I noticed in your sample data that a lot of your costs are $0, so to reduce the number of rows, I would filter out all the $0 costs.
You mention wanting to compare your "biggest" group to the others, so I might use a Group By to get the sum of costs by cost category (if that's what you mean by "biggest"):
Here's the ETL all together:
And what the unpivoted data looks like after running:
From there, you can use a Pivot Table to do pretty much everything you described:
Note the plus signs next to the origin name, which you can click to expand to see the individual orders. Also note that I added Total Costs and Total Revenue as quick filters, which you can let users play with to find shipments that meet the different criteria you described (300 to 400; -15 to -10). You can also add filters as cards on your dashboard.
As far as identifying clusters, I would start by just getting everything into a well-sorted table and get to know your data. Some of those clusters will become obvious. A sorted bar graph can also help identify large jumps in values.
As far as comparing against a particular group, you can use "Segments" for that:
This is all zero-code so far, and we've reduced what you described as hundreds of columns down to a very manageable 11. The only thing you have to be careful of when working with the unpivoted data is not to accidentally double count metrics that repeat across rows. If you're using the data outside of pivot tables, functions with 'DISTINCT' and 'FIXED' can be your friends. Here are a couple that might be useful to give you an idea:
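For example (sketches only; `Order ID` and `Total Revenue` are assumed column names, and the second formula uses Domo's Beast Mode FIXED functions, so check it against the docs), to count orders once even though each order now spans multiple unpivoted rows:

```
COUNT(DISTINCT `Order ID`)
```

And to total revenue without double counting, taking one revenue value per order and then summing across orders:

```
SUM(MAX(`Total Revenue`)) FIXED (BY `Order ID`)
```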
Please 💡/💖/👍/😊 this post if you read it and found it helpful.
Please accept the answer if it solved your problem.
-
Thanks guys, I really appreciate the insight and both routes. Going to do some research down those paths and see what we can come up with. Appreciate it!