Already in queue for execution
We are having a problem with Workbench where it looks as though the jobs are not running because the queue is backed up. When I look at the Workbench service log files, I see the following:
Job Id 82 - next run date of 2015-06-26 05:30:00 Local is past due. Already in queue for execution.
Any thoughts or ideas on how to clear out the queue so the jobs begin running as scheduled again?
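(As a side note on diagnosing this: one rough way to see how far behind the scheduler is would be to scan the Workbench service log for these "past due" entries and count them per job. The sketch below is a minimal, untested example; the log path is a placeholder, and the regex simply assumes the lines follow the format quoted above.)

```python
# Rough sketch: count "past due" entries per job id in the Workbench service log
# to see how far behind the scheduler is. LOG_PATH is a placeholder; point it at
# the actual Workbench service log file. The regex assumes lines look like the one
# quoted above ("Job Id 82 - next run date of ... is past due. Already in queue...").
import re
from collections import Counter

LOG_PATH = r"C:\path\to\WorkbenchService.log"  # placeholder, not the real path

pattern = re.compile(r"Job Id (\d+).*is past due")
past_due = Counter()

with open(LOG_PATH, encoding="utf-8", errors="replace") as log_file:
    for line in log_file:
        match = pattern.search(line)
        if match:
            past_due[match.group(1)] += 1

for job_id, count in past_due.most_common():
    print(f"Job {job_id}: {count} past-due entries")
```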
Answers
-
If you find an issue with one of those jobs in Workbench, you can manually validate and upload the file. This is a bit time-consuming, but it will at least clear up the queue as fast as you can click and wait.
"When I have money, I buy books. When I have no money, I buy food." - Erasmus of Rotterdam0 -
@milesscott, did nalbright's response help address your issue with the queue?
-
I know I can manually update, but the scheduler was working fine until a week or so ago. I'll post the log files from the Workbench service. It looks like the scheduled jobs aren't running because of this. Has anyone seen this before, and how do you clear the queue?
-
I think that you're correct that the scheduled jobs aren't running because something is stuck. Do you have any idea what changed between the before and after?
"When I have money, I buy books. When I have no money, I buy food." - Erasmus of Rotterdam0 -
I don't know of anything that really changed, and that is the frustrating part. I think I am going to remove the schedule feature from all the jobs, let it go for a day, and see what, if anything, happens. I'll manually update the critical ones and see if that somehow goes through and clears out the queue in the meantime.
-
I can see how that would be very frustrating. Could you let us know if anything changes as a result of your test?
"When I have money, I buy books. When I have no money, I buy food." - Erasmus of Rotterdam0 -
Sure will.
-
So I changed some of the jobs to manually update, some to update every 15 minutes, and some to update every day.
It appears the manual ones didn't run at all, as expected, and the daily ones only ran once a day, also as expected.
However, the ones set to every 15 minutes only ran once a day, if at all. The log files still show everything backed up, looking the same as in the post above.
-
Now that's very interesting. Is the data that you're trying to update every 15 minutes actually changing that frequently?
"When I have money, I buy books. When I have no money, I buy food." - Erasmus of Rotterdam0 -
It does, and maybe every hour would work fine too. But our executives would certainly like to see this data updating more frequently than once a day.
-
Certainly so. Alright, so if the data is actually updating, but Workbench has trouble uploading it, and nothing in the data is changing any of the parameters for the data flow, then the question is: what is causing the trouble for those particular uploads? Is it simply taking too long to get them all done within 15 minutes, so that the entire Workbench workflow looks like rush-hour traffic where no one is going anywhere, or is something else the problem?
"When I have money, I buy books. When I have no money, I buy food." - Erasmus of Rotterdam0 -
I have a feeling that Workbench is just backed up. The problem then is how to clear the queue so the jobs run automatically again, albeit not nearly as frequently.
-
I agree with you. Perhaps something like an iterative experiment is needed. Since you know the data will upload manually, and will upload at least daily, but will not upload every 15 minutes, you could simply work upward in 15-minute increments to find the most frequent schedule Workbench can keep up with. Try 30 minutes, and if that does not work, try 45, then 60, and so on. Assuming there will be more or larger data feeds in the future, it would also be good to check the queue from time to time and adjust the update frequency accordingly. Does this sound like a workable solution to you?
"When I have money, I buy books. When I have no money, I buy food." - Erasmus of Rotterdam0 -
Just a couple of ideas for troubleshooting...
1. You can restart the Workbench service using the system tray icon: right-click, then choose Service and Restart. The running job will be terminated and all the queued jobs will be lost, since they are held in memory. But as soon as the service starts again, it will re-queue all of those jobs because they haven't been successfully completed.
2. You can look in Task Manager for a workbench.datacollector process. If you view the command-line parameters, you will be able to see the job ID that is running (a rough sketch of checking this programmatically follows after this list). You can then look at the log for that specific job, which may give some additional information about what is stalling. (Alternatively, you can find this information in the log you posted; you may just have to search a bit to find where it logs the most recent time it started a job.)
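(As a small aside on the second idea: here is a minimal sketch of checking the running collector process programmatically rather than through Task Manager. It assumes the process name contains "workbench.datacollector" as described above, and it relies on the third-party psutil package.)

```python
# Rough sketch: find running Workbench data collector processes and print their
# command lines, so the job id that is currently executing can be spotted without
# opening Task Manager. Assumes the process name contains "workbench.datacollector"
# (as described above) and requires the third-party psutil package (pip install psutil).
import psutil

for proc in psutil.process_iter(["name", "cmdline"]):
    name = (proc.info["name"] or "").lower()
    if "workbench.datacollector" in name:
        cmdline = " ".join(proc.info["cmdline"] or [])
        print(f"PID {proc.pid}: {cmdline}")
```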
-
@milesscott, were you able to resolve your issue? Please let us know!
-
@milesscott,
Did @jeremyhurren's reply help you with your issue? Were you able to get this resolved?
-Austn
Domo Support
**Say "Thanks" by clicking the thumbs up in the post that helped you.**
**Please mark the post that solves your problem by clicking on "Accept as Solution".**
-
Sorry for the slow response, but we did get this issue resolved. We ended up turning off all jobs for a day, then went back and scheduled them over much broader intervals. Some of the files may have been too big and were backing up the whole system, especially when we tried to run them every 15 minutes or half hour.
Since we did that, things have been running smoothly. Thanks for all the input and responses, everyone.
-
Greetings,
I went ahead and marked @nalbright & the last post by @milesscott as solutions for this thread.
Thanks all for the great collaboration!
Regards,
Dani
-
You're very welcome; I'm glad to see it resolved!
"When I have money, I buy books. When I have no money, I buy food." - Erasmus of Rotterdam0