Comments
-
The flow consists of the following:
1. Add Formula. New column name: "Row Number", Formula: ROW_NUMBER()
2. Rank & Window. Order by "Row Number". No partition columns. New column name: "ID Count", contains Count of "ID" from Unbounded Preceding to 0 Following.
3. Filter. Condition: ID Is Not Null.
4. Join #2 to #3. Type:…
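For intuition, step 2 is equivalent to this SQL window expression (a sketch only; in practice it is configured through the Rank & Window tile's UI rather than typed as a formula):

-- Running count of non-null "ID" values from the first row through the current row;
-- COUNT skips nulls, so the count only advances on rows where ID is present.
COUNT(`ID`) OVER (ORDER BY `Row Number` ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS `ID Count`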
-
You could preserve your historical data by making a new DataFlow that uses the problematic output as an input, and then copy its contents to a new DataSet. Then you could bring that copied DataSet in as a secondary input to your original DataFlow and merge the new data with the historical data in any way you choose (Append…
-
Upsert works exactly like Append except that any pre-existing rows with the same key value (the value in the specified key column(s)) as a newer row will be removed. It isn't currently possible (in Magic ETL) to remove rows with a given key value entirely, nor is it possible to update the key value of a row. Reverting to…
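A small worked example of that behavior, with hypothetical data and a key column `id`. Suppose the DataSet already contains:

id | amount
1  | 10
2  | 20

An upsert run that delivers the rows (2, 25) and (3, 30) produces:

id | amount
1  | 10
2  | 25
3  | 30

The old row with key 2 was removed in favor of the newer one, key 3 was appended, and key 1 was left untouched; there is no way for the run to delete key 1 outright.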
-
In addition to the correction mentioned by @ggenovese, this segment also needs to change: date(Job Open Date)<'10/01/2024' . An unfortunate quirk of Magic ETL is that comparison operations with strings always happen as strings, which is usually not what you want. You must convert the right side of the comparison to a date…
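A sketch of the corrected segment, assuming the literal is meant as month/day/year (STR_TO_DATE and its format specifiers also come up in a later answer here):

-- Parse the right-hand side into a real date so the comparison is date-to-date
DATE(`Job Open Date`) < STR_TO_DATE('10/01/2024', '%m/%d/%Y')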
-
Try an Add Formula tile and the TIME_TO_SEC function:

Name: duration_sec
Expression: TIME_TO_SEC(`duration`)

This should work as long as all your string values are in that hours:minutes format.
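A quick worked example of the assumption being made, using MySQL-style semantics where a bare hours:minutes string is read as hours:minutes:00:

TIME_TO_SEC('10:30')  -- 10*3600 + 30*60 = 37800 seconds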
-
@vaco Timestamp and Date columns in Magic ETL don't have a format. They will always be displayed in a way that is determined by the table or card you're looking at them in, along with your company settings like timezone and locale. If you want to force a specific format everywhere, a string column is what you need. As…
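For example, to force a fixed format everywhere, a sketch assuming the MySQL-style DATE_FORMAT function and a hypothetical date column `my_date`:

-- Produces a string column such as '10/01/2024'; the format string is illustrative
DATE_FORMAT(`my_date`, '%m/%d/%Y')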
-
What about this?: REPLACE(process_timestamp, 'T', ' ')
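If the end goal is a true timestamp rather than a cleaned-up string, one possible follow-up, assuming the source values are ISO-8601 strings like '2024-10-01T12:34:56':

-- Swap the 'T' for a space, then parse the result into a timestamp
STR_TO_DATE(REPLACE(process_timestamp, 'T', ' '), '%Y-%m-%d %H:%i:%s')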
-
The flow looks like this: (screenshot)
Inverse Amounts configuration: (screenshot)
Join configuration: (screenshot)
Restate Include / Exclude configuration: (screenshot)
-
Okay, so these "probably 5 rows" from the same campaign each have an impression count, but they're not divvied up the way you want. You want the campaign divided into exactly 3 rows, and you want the impressions divvied up 40%, 40%, and 20% between those three. The first thing you may need to do is a Group By the campaign…
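As a sketch of where that heads (illustrative SQL; this assumes the Group By has already produced one row per campaign with a summed `impressions` column, and the bucket labels are placeholders):

-- Three branches, each taking its share, appended back together
SELECT campaign, 'A' AS bucket, impressions * 0.4 AS split_impressions FROM campaign_totals
UNION ALL
SELECT campaign, 'B' AS bucket, impressions * 0.4 FROM campaign_totals
UNION ALL
SELECT campaign, 'C' AS bucket, impressions * 0.2 FROM campaign_totals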
-
Sure, you could branch it three ways (perhaps into three different Add Constants tiles) and then union it with itself using Append Rows. But I still don't feel like I understand what you're trying to do. You've said, "Let's say I have a million impressions in Campaign 1. I am looking to say take 40% of those and call it…
-
Are you trying to randomly assign rows the value A, B, or C with approximate probability .4, .4, and .2 respectively? Or do you need to divide the table as close as possible to perfectly in that proportion? If approximate is okay, you can do an Add Formula where you'll first generate a random number and then set the value…
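A sketch of that two-formula approach (the helper column name `rnd` is mine, and this assumes RAND() returns a uniform value in [0, 1)):

Formula 1 - Name: rnd, Formula: RAND()
Formula 2 - Name: bucket, Formula:
CASE WHEN `rnd` < 0.4 THEN 'A'
     WHEN `rnd` < 0.8 THEN 'B'
     ELSE 'C' END

Generating the random number once and reusing it is what keeps the proportions at .4/.4/.2; calling RAND() separately in each CASE branch would skew them.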
-
I'd branch the Input DataSet tile into two Filter tiles, one with the rule amount > 0 and the other with the rule amount < 0 . Then, on one of the sides, use an Add Formula to create a column "match_amount" with the expression -amount . Then join the two branches on client ID, transaction type ID, transaction date, and…
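Expressed as illustrative SQL (column names are placeholders, and I'm assuming the truncated final join key is amount against match_amount):

-- Positive branch joined to the sign-flipped negative branch
SELECT p.*
FROM   (SELECT * FROM txns WHERE amount > 0) p
JOIN   (SELECT client_id, txn_type_id, txn_date, -amount AS match_amount
        FROM txns WHERE amount < 0) n
  ON  p.client_id   = n.client_id
  AND p.txn_type_id = n.txn_type_id
  AND p.txn_date    = n.txn_date
  AND p.amount      = n.match_amount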
-
For a date column called "d", the filter formula would be:

dayofweek(d) = 7
and dayofmonth(d) < 8
and (year(today()) * 12 + month(today())) - (year(d) * 12 + month(d))
    < (case when dayofweek(today()) = 7 or dayofweek(today()) < dayofmonth(today()) then 7 else 6 end)

This checks three conditions:
1. The day is Saturday
2.…
-
Unless it's a Cloud Amplifier DataSet, which I don't believe it is, the connector is exporting the data from your DynamoDB. Most connectors export to CSV files. Those files are the form in which Magic ETL sees your data.
-
When you set the type to "Date" in the Input DataSet's Data Handling section and the source data is CSV (as it is in many cases), it's not converting a timestamp to a date. It's converting a string to a date; the string is the text in the CSV field. The String to Date conversion discards all time and zone information,…
-
From what you've said and shown, I can't see why only some of the tiles are displaying the error, but I can say that upsert DataSets don't support data version filtering. Most DataSets (Replace and Append DataSets, excluding upsert) are loaded into Magic ETL in the form of a list of the originally uploaded files (usually…
-
Unless the adjustments you need to make to those cross-month reservations require aggregates (e.g. Group By or Rank & Window tiles), I would suggest not filtering them out and adding them back, as you say. Instead, do everything in a single Add Formula tile. First formula: Name: CROSSMONTH, Formula: MONTH(RSD) <>…
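The first formula is cut off above; here is a guess at how it continues, assuming a hypothetical reservation end date column RED alongside the start date RSD:

Name: CROSSMONTH
Formula: MONTH(RSD) <> MONTH(RED) OR YEAR(RSD) <> YEAR(RED)  -- true when the start and end fall in different calendar months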
-
In that error message, the slashes are not part of the problematic value. They are placed around the value by the error message, the way quotes usually would be. They are very confusing and I am going to replace them with quotes. My guess is the date format you want is '%c/%e/%y', like so: STR_TO_DATE(dateField,…
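Putting that together, the complete call implied above would be:

STR_TO_DATE(dateField, '%c/%e/%y')  -- e.g. parses '1/5/24' into the date 2024-01-05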
-
I agree with Mark, for something like this you're better off without regex. I would note, however, that you're also slightly better off just using two SPLIT_PART invocations, like this: SPLIT_PART(SPLIT_PART(my_string, 'My Prefix', 2), 'My Suffix', 1) Paired SPLIT_PART invocations like this will give you the part of your…
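A worked example with a hypothetical input string:

SPLIT_PART(SPLIT_PART('My Prefix MIDDLE My Suffix', 'My Prefix', 2), 'My Suffix', 1)
-- inner call returns ' MIDDLE My Suffix'; outer call returns ' MIDDLE '
-- (wrap the whole thing in TRIM(...) if you don't want the surrounding spaces)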
-
The empties may be coming into Domo as nulls, but they may also be empty strings (which is different from null). In some cases they could even be strings with "whitespace" characters in them like spaces, though that is only likely if the source of the data is subject to human mistakes. Use an Add Formula tile and name the…
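One formula that normalizes all three cases to null at once (the column name `my_col` is a placeholder, and this assumes the MySQL-style NULLIF and TRIM functions):

NULLIF(TRIM(`my_col`), '')
-- TRIM strips surrounding whitespace, NULLIF turns the resulting empty string into null,
-- and null inputs already pass through as null.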
-
I've discussed and diagnosed this issue in private messages with OP, but for any others who come across this issue in the future: Federated DataSets, that is, DataSets that reside on external databases and which are not part of the Cloud Amplifier feature, are not supported as inputs to Magic ETL. We prevent you from…
-
Where are you seeing the row count of 1,000,000? Does the DataFlow execution say it had 1,000,000 rows of output? The "Data Output" column here: (screenshot)
-
That looks right to me.
-
From your screenshots I'm assuming you have three formulas for each of five rows; that's where the numbers below come from. Start with an Input DataSet tile. We'll split that tile's output out to five Add Constants tiles, where we'll add a column called "Row Name" to each, with values "Baseline", "Rejected",…
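As illustrative SQL, showing only the two row names visible above (the other three branches follow the same pattern):

SELECT *, 'Baseline' AS `Row Name` FROM input
UNION ALL
SELECT *, 'Rejected' AS `Row Name` FROM input
-- ...three more UNION ALL branches, one per remaining row name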
-
This will be easier in the near future with the Join tile ON clause and the Series (row generator) tile. For now, the solution to this problem is described here. There's more than one answer; the more complicated one involves generating a series table within the flow, the simpler ones assume you've built a series table as…
-
You should be able to take the solution we did for Project Count with these changes:
1. Rename the "ProjectId Const" tile to "Cost Const", name the constant "Planned Cost" instead of "Project Id", and make it zero instead of negative one.
2. In the Group By, do a Sum of "Planned Cost" instead of a Count Distinct of "Project…
-
You could add phase totals by adding another branch of the original input with its own Group By, grouping by only Phase and doing a Count Distinct on Project Id called Total Project Count, and then Join that branch back into your final result using Phase on both sides as the join key.
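The equivalent SQL, as a sketch (table names are placeholders; column names follow the answer above):

SELECT r.*, p.`Total Project Count`
FROM final_result r
JOIN (
  SELECT `Phase`, COUNT(DISTINCT `Project Id`) AS `Total Project Count`
  FROM original_input
  GROUP BY `Phase`
) p
ON r.`Phase` = p.`Phase`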
-
Previews operate on a limited number of rows from their inputs, by default just 10k rows: (screenshot) This can cause Group By aggregates to look incomplete, Joins to miss when they should hit, and Filters to produce zero rows when they should produce some.
-
Also, since this is missing from my original comment, here is the configuration for the "Correct for Dummy" Add Formula tile: (screenshot)
-
The Add Constant that comes right before the Append should add a constant "Project Id" with value -1. The goal is to create a table that has three columns that all correspond to columns in the original table: Requestor Organization, Project Id, and Phase. You've got everything up to the Join correct; that should be giving…