Datasets stuck refreshing

1. I have a Postgres SQL database that I upload to Domo via Workbench as six different ODBC datasets (let's call them 1, 2, 3, 4, 5, 6). Each dataset is updated every 30 minutes from Workbench - these work like clockwork without a problem.

2. Each dataset is then modified in ETL with some calculations, producing in turn 1a, 2a, 3a, 4a, 5a, 6a. These dataflows are automatically triggered every 30 minutes by the ODBC refreshes. Normally an update takes 8-12 seconds, but these datasets regularly get stuck on "running", stay "running" for an hour or so, then cancel and restart. I would like to troubleshoot and/or understand what causes them to get stuck like that, because it in turn causes the downstream dataflows to get stuck as well...

3. 1a, 2a, 3a, 4a, 5a, 6a are then combined into two separate datasets with the "join" command plus other calculations (let's call them X and Y). X and Y can be triggered by an update in any of the 1a-6a datasets. But the same thing happens: they get stuck for an hour without refreshing. Most likely this is caused by 1a-6a getting stuck, but just to confirm, I am asking here...


Any suggestions will be very welcome!

Thank you!



  • It sounds like an action in your ETL is just getting hung up, after which the process resets.

    Have you viewed the dataflow history tab, which shows what happened at each step?  This screen is helpful for seeing where breakdowns happen and why.  In the last column, Status, the green or red Success or Failed icon is a link.  Clicking it takes you to the status of each step in the dataflow.  Hover over each step's status to find the tooltip error message.  My guess is it's on a join step.

    [screenshot: failure status.png]

    Let us know how it goes.




    MajorDomo @ Merit Medical

    **Say "Thanks" by clicking the heart in the post that helped you.
    **Please mark the post that solves your problem by clicking on "Accept as Solution"
  • The problem is I don't get failures. What happens is that the ETL runs for an hour or so (50 min to 1 hr 20 min), then it cancels and auto-restarts, and in most cases it then goes right through automatically - but sometimes it can hang for another hour. If I click on Details, nothing shows, because it did not "fail", it auto-cancelled...


    [screenshot: Screen Shot 2018-02-23 at 12.15.42 PM.png]

  • That's frustrating.  And running dataflows don't show that detail.

    One of my dataflows just now had the same problem, and I can't tell either.  Usually it runs in a few minutes, but this one auto-cancelled at 45 minutes.


    This almost sounds like a processing-engine problem - like a system hiccup, especially if those dataflows are all the same version.  I don't think users have the details available to diagnose system problems like that.



  • Well, my problem is that I am using a free account for now, and support is refusing to talk to me any longer because I have already asked too many questions... so I am here trying to get community help... :( Not sure what I should try next...

  • Domo stores a bunch of metadata about each dataflow run:

    - How long it takes to load each dataset (ms)
    - How long it takes to run each transformation
    - How long it takes to load the output to permanent storage
    - How many rows are loaded
    - Timestamps at each step
    - How many rows are written



    I did some more research on the similar error I had this morning, and my delays were related to dataset load times.  You get a hint of this when you can see how many rows were loaded compared to other, successful runs.



    I can't say that my failure is the same as yours, but I'm starting to think it has to do with failures to finish loading data into the dataflow for processing.  There's likely nothing you or I can do about the system not loading your data properly for processing, especially when it's intermittent like that, except to lobby for improved error handling and debugging.


    Maybe someone out there in the Dojo has some other ideas.
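    If you can get that run metadata out of Domo (for example via a governance/history export), a quick script can surface the stuck runs automatically. Here's a minimal sketch; the record fields (`start`, `end`, `rows_loaded`) are hypothetical stand-ins for whatever the actual export contains:

    ```python
    from datetime import datetime
    from statistics import median

    # Hypothetical run-history records, shaped like the per-run metadata
    # described above (timestamps, rows loaded). Field names are assumptions
    # for illustration, not Domo's actual export schema.
    runs = [
        {"start": "2018-02-23 11:00:12", "end": "2018-02-23 11:00:22", "rows_loaded": 3},
        {"start": "2018-02-23 11:30:10", "end": "2018-02-23 11:30:19", "rows_loaded": 3},
        {"start": "2018-02-23 12:00:15", "end": "2018-02-23 13:05:40", "rows_loaded": 0},
    ]

    def duration_secs(run):
        """Elapsed seconds between a run's start and end timestamps."""
        fmt = "%Y-%m-%d %H:%M:%S"
        start = datetime.strptime(run["start"], fmt)
        end = datetime.strptime(run["end"], fmt)
        return (end - start).total_seconds()

    def flag_stuck(runs, factor=10):
        """Return runs whose duration exceeds `factor` times the median."""
        durations = [duration_secs(r) for r in runs]
        typical = median(durations)
        return [r for r, d in zip(runs, durations) if d > factor * typical]

    stuck = flag_stuck(runs)  # the third run (over an hour vs. ~10 s) is flagged
    ```

    Comparing `rows_loaded` on the flagged runs against the normal runs would then show whether the hangs line up with input loads that never finished.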

  • So far my datasets deal with a minimal number of rows added each time - I would say at most 10 rows are added to a dataset on each refresh. My SQL updates every 10 minutes and Workbench every 30, meaning that if everything goes according to plan, 3 rows of data are added every 30 minutes; if the dataflow gets stuck and only updates every hour or hour and a half, the extra rows are added on the next successful refresh. Ultimately my SQL DB is very small so far - I am not dealing with refreshing millions of rows - so it is very strange behaviour... In any event, I appreciate all the help you have provided so far!