Hi there!
I learned about the DomoAI Textual Analysis App at Domopalooza and have found it helpful for analyzing responses to open-ended survey questions, at least whenever I have a fairly small data set. However, our annual product survey has a lot of responses (7000+), and when I try to use the Textual Analysis App, I just get the error message "error fetching responses." ChatGPT tells me the following:
The "Error fetching data" you're hitting is most likely due to the prompt length becoming too long when concatenating 7000+ survey responses into a single GPT request. GPT models have a token limit (usually ~4,000–8,000 tokens for standard endpoints), and you’re probably exceeding that.
Strategy: Chunk your data into smaller batches and process them iteratively. Instead of sending all 7000+ responses in one big prompt, break them into chunks (e.g., 200–300 at a time), run GPT on each chunk, and then optionally aggregate the chunked responses.
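(For what it's worth, the workflow ChatGPT is describing would look roughly like the Python sketch below if I scripted it outside of Domo. This is just a rough illustration, assuming direct OpenAI API access and a plain list of response strings; the chunk size, model name, and prompts are placeholders I made up, and it has nothing to do with how the Textual Analysis App works internally.)

```python
# Rough sketch of the chunk-and-aggregate approach described above.
# Assumes the survey responses are already loaded into a list of strings
# and that an OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()
CHUNK_SIZE = 250  # ~200-300 responses per request, per the suggestion


def summarize_chunk(responses: list[str]) -> str:
    """Ask the model to summarize the themes in one batch of responses."""
    joined = "\n".join(f"- {r}" for r in responses)
    result = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": "Summarize the key themes in these survey responses."},
            {"role": "user", "content": joined},
        ],
    )
    return result.choices[0].message.content


def summarize_all(responses: list[str]) -> str:
    # First pass: summarize each chunk of responses separately.
    chunk_summaries = [
        summarize_chunk(responses[i : i + CHUNK_SIZE])
        for i in range(0, len(responses), CHUNK_SIZE)
    ]
    # Second pass: combine the per-chunk summaries into one overall summary.
    return summarize_chunk(chunk_summaries)
```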
However, I don't love this solution. The whole point is to streamline how we analyze these large numbers of open-ended responses. Any ideas on what else I might try? Thank you!