PyDomo issue uploading data to update a dataset

I'm having trouble uploading my data to Domo using the PyDomo API.
I read some .xlsx files and aggregate them into one dataframe with the code below:
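(The file names below are placeholders for my actual files; the real script just reads each workbook and concatenates them.)
import pandas as pd

# Read each .xlsx file and stack them into a single dataframe
excel_files = ['report_jan.xlsx', 'report_feb.xlsx', 'report_mar.xlsx']  # placeholder names
frames = [pd.read_excel(f) for f in excel_files]
df_final = pd.concat(frames, ignore_index=True)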
Then I try to update my dataset using this code:
domo = Domo(cliente_id, secret, api_host='api.domo.com')
dataset_up = domo.ds_update(dataset_id,df_final)
However I keep getting the same Exception:
Exception: Error updating Execution: {"status":400,"statusReason":"Bad Request","message":"Too many non-consecutive part ids. Make consecutive or upload multiple data versions","toe":"83K7GEC1P0-EARGT-O3A2S"}
How can I fix this issue? I don't have any idea what to do besides deleting the current dataset and creating a new one with the same name and ID every time, but I really don't want to do it that way.
Thanks in advance to whoever answers this.
Answers
-
I'd recommend creating a new stream, uploading each Excel document as a part of the stream, and then closing out your stream. Domo has a Python example of how to use streams here:
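Roughly, the shape of that approach with pydomo's Stream API looks like the sketch below (the credentials, file names, and schema are placeholders to swap for your own):
import pandas as pd
from pydomo import Domo
from pydomo.streams import CreateStreamRequest, UpdateMethod
from pydomo.datasets import DataSetRequest, Schema, Column, ColumnType

client_id = 'your_client_id'  # placeholder credentials
secret = 'your_secret'
domo = Domo(client_id, secret, api_host='api.domo.com')
streams = domo.streams

# Create a stream backed by a new DataSet (or look up an existing one with streams.search)
dsr = DataSetRequest()
dsr.name = 'Aggregated Excel Data'                                 # placeholder name
dsr.schema = Schema([Column(ColumnType.STRING, 'ExampleColumn')])  # replace with your real columns
stream = streams.create(CreateStreamRequest(dsr, UpdateMethod.REPLACE))  # or UpdateMethod.APPEND

# One execution, one part per Excel file, then commit everything at once
excel_files = ['file1.xlsx', 'file2.xlsx']              # placeholder file names
execution = streams.create_execution(stream['id'])
for part_num, path in enumerate(excel_files, start=1):  # part numbers start at 1
    df = pd.read_excel(path)
    csv_data = df.to_csv(index=False, header=False)
    streams.upload_part(stream['id'], execution['id'], part_num, csv_data)
streams.commit_execution(stream['id'], execution['id'])
The key points are one execution per data version, sequential part numbers, and a single commit at the end.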
-
The error indicates the ds_update() method isn't being used correctly for large or incremental datasets.
Domo's API expects part IDs to be consecutive and well ordered. If PyDomo messes up the part numbering or the data is too large, the API rejects the upload.
I like Grant's suggestion. Give it a try. If you need to, you could also try breaking the upload up into manual steps, something such as:
from pydomo import Domo
domo = Domo(client_id, secret, api_host='api.domo.com')
# Create a DataSet upload execution
execution = domo.ds_create_upload(dataset_id)
# Upload the DataFrame
domo.ds_upload_part(dataset_id, execution['id'], 1, df_final)
# Commit the upload
domo.ds_commit_upload(dataset_id, execution['id'])
Make sure part numbers (the 1 in the ds_upload_part call) are consecutive, starting from 1.
Don't reuse old execution IDs.
Chunked upload example
import pandas as pd
from pydomo import Domo
# Setup your credentials
client_id = 'your_client_id'
secret = 'your_secret'
dataset_id = 'your_dataset_id'
# Connect to Domo
domo = Domo(client_id, secret, api_host='api.domo.com')
# Example DataFrame (replace with your actual df_final)
# df_final = pd.read_csv('your_data.csv')
# Chunking parameters
chunk_size = 100000 # Adjust based on your memory/network
num_chunks = (len(df_final) - 1) // chunk_size + 1
# Start a new upload execution
execution = domo.ds_create_upload(dataset_id)
# Loop through and upload each chunk
for i in range(num_chunks):
    start = i * chunk_size
    end = start + chunk_size
    chunk = df_final.iloc[start:end]
    part_number = i + 1  # part numbers must start at 1
    print(f"Uploading part {part_number} with rows {start} to {end}...")
    domo.ds_upload_part(dataset_id, execution['id'], part_number, chunk)
# Commit the upload
domo.ds_commit_upload(dataset_id, execution['id'])
print("Upload completed successfully!")** Was this post helpful? Click Agree or Like below. **
** Did this solve your problem? Accept it as a solution! **0