How do I identify the second lowest value? I.e. MIN('COLUMN') +1?
I'm trying to sort my API calls by time and label the order in which they took place, however I'm not sure how to identify beyond the MIN time. Is there a way to identify the MIN and then look at the second or third or fourth value thereafter?
Ideally, my Beast Mode would then label each call with the order they came in by date and user.
Example:
MIN('TIME') THEN '1'
MIN('TIME')+1 THEN '2'
MIN('TIME')+2 THEN '3'
ETC.
Thank you!
Best Answer
In order to get that numbering of ordered rows you're looking for, you'll need to use a dataflow. You can accomplish that result using any of the dataflow methods: Magic ETL, SQL (MySQL), or SQL (Redshift).
If you prefer MySQL or Redshift, check out this doc: http://knowledge.domo.com?cid=rank. However, I recommend Magic ETL because it is the easiest to set up.
To create a column of ordered row numbers in Magic ETL:
- Create a new Magic ETL dataflow
- Drag an "Input Dataset" tile onto the canvas and configure it by selecting your source dataset
- Drag a "Rank & Window" tile onto the canvas and configure it
- Click the "Add Function" button
- In box 1, name the output column (the column that will contain the row numbers)
- In box 2, select "Row Number"
- Click "Apply"
- In box 2, select the column that should determine the row ordering
- In box 3, select Ascending or Descending. Since you want to start with the earliest time, you'll need to use "Ascending"
- Finally, box 4 is optional. If you wanted to partition the row numbering by one or more columns, this is where you would do it. For example, perhaps you want to row-number records for each API name independently, so you could identify the earliest call to API 1 separately from the earliest call to API 2. You could accomplish that result by selecting the API name field in box 4
- Click the "Add Function" button
- Drag an "Output Dataset" tile onto the canvas and configure it by specifying the output dataset name.
- Name, save and run your dataflow and you're done
Now, you can build your visualization on your new output dataset, using your row number field as a filter or any other way you'd use any other column.
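For anyone curious what the Rank & Window tile is doing under the hood, the steps above are equivalent to a windowed ROW_NUMBER. Here's a minimal sketch using Python's built-in sqlite3 (the table and column names are made up for illustration):

```python
import sqlite3

# In-memory table standing in for the source dataset (hypothetical schema)
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE api_calls (user TEXT, time TEXT)")
con.executemany(
    "INSERT INTO api_calls VALUES (?, ?)",
    [("alice", "2023-01-01 09:00"),
     ("alice", "2023-01-01 10:30"),
     ("bob",   "2023-01-01 09:15")],
)

# ROW_NUMBER ordered by time ascending, partitioned per user:
# 1 = earliest call, 2 = second earliest, and so on
rows = con.execute("""
    SELECT user, time,
           ROW_NUMBER() OVER (PARTITION BY user ORDER BY time) AS call_order
    FROM api_calls
    ORDER BY user, time
""").fetchall()
for r in rows:
    print(r)  # ('alice', '2023-01-01 09:00', 1), etc.
```

The ORDER BY inside the OVER clause corresponds to boxes 2 and 3 in the tile, and the PARTITION BY corresponds to box 4.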
Hope that gets you what you're looking for.
Cheers
Answers
I'm fairly certain you'll need to use a SQL transform to make this happen.
Here's how I would approach it:
If you're finding the min time by ID, I would do something like this:
SELECT `ID`, DATE(`Time`) AS 'Day', MIN(`Time`) AS 'MinTime'
FROM dataset
GROUP BY `ID`, DATE(`Time`)
Then use this to create a flag for your original dataset:
SELECT a.*, CASE WHEN b.`MinTime` = a.`Time` THEN 1 ELSE 0 END AS 'MinFlag'
FROM dataset AS a LEFT JOIN transform1 AS b ON b.`ID` = a.`ID` AND DATE(a.`Time`) = DATE(b.`Time`)
Now, using this dataset in your card, you can create a Beast Mode that goes:
MIN(CASE WHEN `MinFlag` = 0 THEN `Time` END)
This creates a flag for each ID, by day, marking the minimum time found. The Beast Mode then says: where the time is not the minimum, return those times and take the minimum of them, which gives you the second-lowest value.
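The flag-then-filter logic above can be sketched end to end, using Python's built-in sqlite3 as a stand-in for the MySQL transform (table and column names are hypothetical):

```python
import sqlite3

# Sample data: three calls for one ID on the same day (hypothetical)
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE dataset (id TEXT, time TEXT)")
con.executemany(
    "INSERT INTO dataset VALUES (?, ?)",
    [("a", "2023-01-01 09:00"),
     ("a", "2023-01-01 10:30"),
     ("a", "2023-01-01 12:00")],
)

# Step 1: find the min time per ID per day and join it back as a 0/1 flag.
# Step 2: the Beast-Mode-style aggregate -- the min of the non-minimum
# times, i.e. the second-lowest time.
second = con.execute("""
    WITH mins AS (
        SELECT id, DATE(time) AS day, MIN(time) AS min_time
        FROM dataset
        GROUP BY id, DATE(time)
    ),
    flagged AS (
        SELECT a.*, CASE WHEN b.min_time = a.time THEN 1 ELSE 0 END AS min_flag
        FROM dataset a
        LEFT JOIN mins b ON b.id = a.id AND DATE(a.time) = b.day
    )
    SELECT MIN(CASE WHEN min_flag = 0 THEN time END) FROM flagged
""").fetchone()[0]
print(second)  # the second-earliest time
```

Note that this only surfaces the second-lowest value; if you need the third, fourth, etc., the row-number approach scales more naturally.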
Hope that helps get you what you need,
Valiant
**Please mark "Accept as Solution" if this post solves your problem
**Say "Thanks" by clicking the "heart" in the post that helped you.
If this is a data set that you are analyzing, it seems like you could utilize the ROW_NUMBER function. When you use the function to order by date in ascending order, it would make your earliest date have a row number of 1 and count up from there.
Additionally, if there are batches of data you are pulling in, you could use the PARTITION BY argument to have the row numbers start over at each day, batch number, etc., which would let you analyze something like "at what time are the second calls made?" over time.
Check out Amazon Redshift documentation here:
https://docs.aws.amazon.com/redshift/latest/dg/r_WF_ROW_NUMBER.html
And be sure to check out the example:
https://docs.aws.amazon.com/redshift/latest/dg/r_Examples_of_WF_ROW_NUMBER_WF.html
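As a concrete illustration of the PARTITION BY idea, here is a sketch using sqlite3, whose window functions behave like Redshift's for this case (the schema is made up):

```python
import sqlite3

# Two days of calls; we want the second call of each day (hypothetical data)
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE calls (batch_day TEXT, time TEXT)")
con.executemany(
    "INSERT INTO calls VALUES (?, ?)",
    [("2023-01-01", "08:00"), ("2023-01-01", "09:30"),
     ("2023-01-02", "07:45"), ("2023-01-02", "11:00")],
)

# Row numbers restart within each day, so filtering on n = 2
# isolates the second call of every day
second_calls = con.execute("""
    SELECT batch_day, time FROM (
        SELECT batch_day, time,
               ROW_NUMBER() OVER (PARTITION BY batch_day ORDER BY time) AS n
        FROM calls
    ) WHERE n = 2
    ORDER BY batch_day
""").fetchall()
print(second_calls)
```

Swapping the partition column for a batch number, user, or API name gives the other groupings mentioned above.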
Thank you all! Many of these will work but the solution with the ETL was the easiest. Thank you for the help!