Error fetching results in DomoAI Textual Analysis App

amberd Member

Hi there!

I learned about the DomoAI Textual Analysis App at Domopalooza, and have found it helpful for analyzing responses to open-ended survey questions whenever I have a fairly small data set. However, our annual product survey has a lot of responses (7000+), and when I try to use the Textual Analysis App on it, I just get the error message "error fetching responses." ChatGPT tells me the following:

The "Error fetching data" you're hitting is most likely due to the prompt length becoming too long when concatenating 7000+ survey responses into a single GPT request. GPT models have a token limit (usually ~4,000–8,000 tokens for standard endpoints), and you’re probably exceeding that.

Strategy: Chunk your data into smaller batches and process them iteratively. Instead of sending all 7000+ responses in one big prompt, break them into chunks (e.g., 200–300 at a time), run GPT on each chunk, and then optionally aggregate the chunked responses.
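
In code, that chunking approach would look something like this (a rough sketch; analyzeChunk is a hypothetical stand-in for whatever call actually sends a batch to the model):

async function summarizeInChunks(responses, chunkSize = 250) {
  const partials = [];
  for (let i = 0; i < responses.length; i += chunkSize) {
    // Send each batch separately so no single request exceeds the token limit
    const chunk = responses.slice(i, i + chunkSize);
    partials.push(await analyzeChunk(chunk.join('\n')));
  }
  // Optionally aggregate the per-chunk results into one final summary
  return analyzeChunk(partials.join('\n'));
}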

However, I don't love this solution. The whole point is to streamline how we analyze these large numbers of open-ended responses. Any ideas on what else I might try? Thank you!

Answers

  • Hey @amberd!

    I'm Nathan, the one who developed the app and presented it at Domopalooza. I'm glad you've found it useful!

    ChatGPT is correct: the token limit for Domo's LLM is pretty small (through testing, I've found that the Text Generation model caps out at around ~450,000 characters). After the presentation, I kept digging into this and discovered that if you use the Text Summarization model instead, it can handle a much larger amount of input data! Sadly, I discovered this too late, and the app is already published in the App Store using the Text Gen model.

    It takes a bit of JavaScript magic to swap the models out in the code, but if you're interested in this solution, here's the new JS function that I'm now using:

    async function getGPTResponse(prompt) {
      // Request body for the Text Summarization endpoint (instead of Text Generation)
      const endpoint = {
        url: "summarize",
        body: {
          "input": prompt,
          "system": 'You are a helpful assistant that writes concise summaries. DO NOT mention that this is a concise summary, just give me the numbered list with descriptions, nothing else. \n',
          "promptTemplate": {
            "template": "${input}"
          },
          "model": "domo.domo_ai.domogpt-summarize-v1:anthropic",
          "outputStyle": "NUMBERED",
          "outputWordLength": {
            "max": 300,
            "min": 100
          }
        }
      };

      try {
        let result = await domo.post('/domo/ai/v1/text/' + endpoint.url, endpoint.body);
        // Guard each level of the response shape before reading the output text
        if (result && result.choices && result.choices[0] && result.choices[0].output) {
          return result.choices[0].output;
        }
      } catch (err) {
        console.log(err);
        // Check if the error has a response property with additional details
        if (err.response) {
          console.error(`Error Response: ${JSON.stringify(err.response)}`);
          return `Unexpected error occurred: ${err.response.status} - ${err.response.statusText}`;
        } else {
          // Log generic error details if the response is not present
          console.error(`Error: ${err.message}`);
          return `Unexpected error occurred: ${err.message}`;
        }
      }

      // Fall back to the app's generic error message if the response had no output
      return ERROR_MESSAGE;
    }

    You'll want to update the "system" string in there to match the context you want to give Domo, but this should get you started!
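
    If it helps, here's roughly how you might call it once your survey responses are loaded (just a sketch; the responses array is a stand-in for however your app collects the open-ended answers):

    // Hypothetical example, inside an async function in the app code
    const responses = ["Love the new dashboard!", "Exports feel slow lately." /* ... */];
    const prompt = responses.join('\n');  // one response per line
    const summary = await getGPTResponse(prompt);
    console.log(summary);                 // numbered list, per the system instructions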

    BIG DISCLAIMER: I have found that this model is way more inconsistent than Text Generation, and it will sometimes mis-format or simply ignore parts of your prompt. If you know any JS, you can try moving instructions out of the prompt and into the "system" string, since the model seems to follow that a little better than the prompt itself.
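
    For example, the request body from the function above could be rebuilt like this (just a sketch of the idea; the exact "system" wording is up to you):

    // Keep "input" as pure data and move every formatting rule into "system"
    const body = {
      "input": prompt, // raw survey responses only, no instructions mixed in
      "system": 'You are a helpful assistant that summarizes survey responses. ' +
        'Return ONLY a numbered list of themes with short descriptions, nothing else. \n',
      "promptTemplate": { "template": "${input}" },
      "model": "domo.domo_ai.domogpt-summarize-v1:anthropic",
      "outputStyle": "NUMBERED",
      "outputWordLength": { "max": 300, "min": 100 }
    };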

    Hope this helps you!