Best Of
Re: Report Builder Table Formatting
Hello @DavidChurchman,
I noticed that in Report Builder, even when you have alternating row color turned off in the app, it still shows up in the generated email. I don't think this is intended behavior, and it's worth raising a case for it. I also see the same behavior with emojis that you are seeing. It appears that Report Builder generates the email using the Google Noto Emoji version of the emoji (black & white) rather than the default shown in the browser (typically Microsoft Fluent Emoji - Color).
I was able to get the alternating row color to go away by setting the alternate row color to the same color as the normal rows, then re-creating my report.
Changed the emoji to a cow, changed the alternating row color setting under Table Styles, then deleted and re-created my report:
Zoomed in on Emojis for Comparison:
From the emojiterra site: 🐄
Hopefully this is helpful! 🙂
Re: Data in Email Subject Line
If you select the "Download from link to page" option and then choose the file type "Email Body as one row", it will create a column for the email subject line and a column for the date/time received.
Re: Using email connector- Gathering data from Subject Regex
The email regex can match structural patterns to allow or deny loading, but it apparently cannot extract capture groups and add them to dataset fields. Thus, you can't get the version number from the email subject as a data field.
But you should be able to put the version number inside the file itself as a column, or put it in the filename, and then extract it using Magic ETL (a sketch of that extraction is below). The email body should work as well.
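For illustration, a minimal sketch of the extraction step in Python; the filename convention and regex here are assumptions, so adjust them to whatever pattern your files actually use (the same capture-group idea carries over to a Magic ETL formula):

import re

# Hypothetical filename convention: "report_v1.2.3.csv"
filename = 'report_v1.2.3.csv'
match = re.search(r'v(\d+(?:\.\d+)*)', filename)
version = match.group(1) if match else None
print(version)  # -> 1.2.3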
Re: Web Crawler Function not working in Domo Workflow
Hi @erikjamesmason ,
The workflow is running fine, but I am getting the following error: "Unexpected response code: 401 – java.io.IOException: Toe - 3E6J96FA9F-RTA95-4DEH1."
Re: Has Anyone Implemented Automated Monitoring for Embedded Domo Pages?
Thanks for the vote of confidence @MarkSnodgrass! Not sure I deserve it 😅
@random_user_098765 I don't have personal experience doing this, but I know @jaeW_at_Onyx has done some scripting to automate dashboard/card testing at scale. This YouTube video that he published might be helpful: https://youtu.be/3ZwlzOlRBbA?si=0pl5BUAfSgxGqaXQ
Issue with "EDIT IN JUPYTER" Button on Scheduled ipynb File
I’m running into an issue with a scheduled Jupyter notebook in Domo. For context:
- I have an ipynb file in Domo, let’s call it "Help Desk Fetch All Groups". This notebook was originally created as a schedule for a Python script in Domo’s Jupyter Workspace.
- The original Python file was likely named "Help Desk Fetch All Groups.py", which the ipynb notebook referenced.
- At some point, the original Python file was renamed to something like "Help Desk Groups.py" (or something more obscure), or it was deleted.
Here’s the problem:
- When I open the scheduled ipynb file and click the “EDIT IN JUPYTER” button (top right), I expect it to open the underlying Python code.
- Instead, I get a blank screen. I also can’t find the original Python file under its previous or current name. And I have quite a few files to search through.
My concerns/questions:
- Is this notebook now an orphan, since the original Python file was renamed? If so, the interface should indicate that, so I know it’s safe to delete.
- If it’s not an orphan, why doesn’t the EDIT IN JUPYTER button take me to the underlying code?
- How can I recover or reconnect the ipynb schedule to its original Python script?
Re: Changing dataset update method from Replace to Partition doubles data
@Chinmay_Totekar I believe the ticket was closed with the unsatisfying conclusion that this works best when starting with a new dataset, not converting an existing one. I did find two ways around it, though:
- For the first one, make a clone of the dataset using the Schema Management tool (Admin » Governance Toolkit » Schema Management), making sure to have it copy the data. Once done, use the same tool to clear the data out of the original, then temporarily rewrite the ETL to set the output dataset to partitioning, bring in the cloned data, and write it out. It's kind of a pain, but it's what I've been doing for the past couple of years.
- Last month I decided to take the Upsert and Partitioning in Magic ETL self-paced course in Domo University. Around the 25-minute mark, the video talks about using the CLI tool to partition the dataset for you. I haven't tested it, but it does look like it would work. You'll probably still want to make a clone of the original dataset just in case something goes wrong, but this may be your best bet.
jimsteph
Changing dataset update method from Replace to Partition doubles data
We've noticed that if we change the update method of an existing dataset from Replace to Partition, we end up with two records in the dataset for every new one (and both of them look identical, down to the field we're using for partitioning): it was quite the shock to see that a dataset I expected to have 91 million rows suddenly had 182 million. The obvious takeaway is that we probably should start from scratch when using partitions, but the benefit of converting existing ETLs is too strong a siren song for me to resist.
Two questions:
- Has anyone else noticed this? Both of us here got bit by this, so I want to know if it's a general bug, if it just affects our instance, or if we're doing it wrong and it's Working As Designed™.
- What would be a way around this? If I have other ETLs downstream of the dataset, I don't want to delete the existing one and start from scratch unless I absolutely have to. My quick-and-probably-inefficient idea is to store the data in a secondary dataset (deduping if necessary; see the sketch below), set the original to Partition, and figure out how to use the even-more-beta feature to tell it to keep no partitions. Let it run once to clear out the dataset, then reimport the data from the secondary dataset and send it back to the original.
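For the dedup step, a minimal pandas sketch, assuming the dataset has been exported to a CSV (the file names are placeholders); since the doubled rows are identical down to the partition field, a plain drop_duplicates with no key should be enough:

import pandas as pd

# Hypothetical export of the doubled dataset.
df = pd.read_csv('doubled_dataset.csv')

# Rows are exact duplicates, so no subset/key is needed.
deduped = df.drop_duplicates()
deduped.to_csv('secondary_dataset.csv', index=False)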
Any help would be appreciated.
jimsteph
Re: Making Sense of Wildly Unstructured Data
You can create an ETL that starts by splitting each comma-separated value into columns (Split Column). That would give you each label/value in its own column, but not aligned.
You would have to iterate over all the columns to find the patterns: "Label: Value" becomes a Label and a Value, and sometimes "Label:" sits in one column with the value in the next.
Then convert your irregular columns into key-value pairs and unpivot the columns.
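If you end up doing the unpivot in pandas rather than Magic ETL, here is a toy sketch of that step (the column names and values are made up for illustration):

import pandas as pd

# Toy output of a Split Column step: one "Label: Value" string per cell.
cells = pd.DataFrame({0: ['Site: HQ'], 1: ['Fee: 100']})

# Unpivot the wide columns into one cell per row, then split "Label: Value".
long = cells.reset_index().melt(id_vars='index', value_name='cell').dropna(subset=['cell'])
long[['label', 'value']] = long['cell'].str.split(':', n=1, expand=True)
long['value'] = long['value'].str.strip()

# Pivot back so each label becomes its own aligned column.
flat = long.pivot(index='index', columns='label', values='value')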
If you give us a sample CSV file with a few records, we can try to give you a direct example. If you are familiar with running Python in Domo, you can do something like this (below). Again, I don't have a data sample to try it out.
import pandas as pd
import re

# -----------------------------
# 1. Load raw data
# -----------------------------
# If the CSV has one column called 'raw':
df = pd.read_csv('raw_events.csv')  # or pd.read_excel('raw_events.xlsx')

# If the CSV has multiple columns, combine them into one string per row
df['raw_combined'] = df.apply(lambda row: ','.join(row.dropna().astype(str)), axis=1)

# -----------------------------
# 2. Split rows by comma
# -----------------------------
split_cols = df['raw_combined'].str.split(',', expand=True)

# -----------------------------
# 3. Extract label/value pairs
# -----------------------------
def extract_label_value(cell, next_cell=None):
    if pd.isna(cell):
        return None, None
    # Case 1: "Label: Value" in the same cell
    match = re.match(r'\s*(.+?):\s*(.+)', cell)
    if match:
        return match.group(1).strip(), match.group(2).strip()
    # Case 2: "Label:" in one cell, value in the next cell
    if next_cell is not None:
        match2 = re.match(r'\s*(.+?):\s*$', cell)
        if match2:
            return match2.group(1).strip(), next_cell.strip()
    return None, None

# Iterate over all columns and extract key/value pairs
records = []
for idx, row in split_cols.iterrows():
    row_data = {}
    i = 0
    while i < len(row):
        key, value = extract_label_value(row[i], row[i + 1] if i + 1 < len(row) else None)
        if key:
            row_data[key] = value
            # If the value was in the next cell, skip it
            if re.match(r'.+?:\s*$', row[i]):
                i += 1
        i += 1
    records.append(row_data)

# -----------------------------
# 4. Create flattened DataFrame
# -----------------------------
flat_df = pd.DataFrame(records)

# -----------------------------
# 5. Optional: rename columns to match desired output
# -----------------------------
rename_map = {
    'ID number': 'Session ID and Title',
    'Site': 'Site',
    'Room Layout Comments': 'Room Layout Comments',
    'Start/End': 'Session Start date / Session End date',
    'Schedule Date': 'Schedule date',
    'Sponsor': 'Sponsor',
    'Booked By': 'Booked by',
    'Contact Person': 'Contact person',
    'Registration Start-End Date': 'Registration start / Registration end',
    'Fee': 'Fee',
    'Contract Amount': 'Contract amount',
    'Credits': 'Credits',
    'Comments of Internal': 'Comments of internal',
    'Confirmation Comments': 'Confirmation'
}
flat_df.rename(columns=rename_map, inplace=True)

# -----------------------------
# 6. Save to CSV
# -----------------------------
flat_df.to_csv('flattened_events.csv', index=False)
print("Flattening complete. Saved to flattened_events.csv")
November 2025 Domo Customer Feature Release Preview
We have heard from many of you that it is helpful to have a little extra notice about what new features we are releasing. Our November release is coming up in two weeks, so we are trying out posting a list of what is included now, so you can think about how you might use these features and be ready when more information is available.