Question: Cufflinks and TopHat queueing for more than 24 hrs
written 21 months ago by sanathoi.guru:

The Cufflinks queue is not working. I have been trying to run TopHat and Cufflinks for the last 2 days, and nothing has changed; the status still reads "This job is waiting to run."

tophat cufflinks • 791 views
modified 21 months ago by Jennifer Hillman Jackson • written 21 months ago by sanathoi.guru

I have the same problem, too.

written 21 months ago by kuk220
Jennifer Hillman Jackson (United States) wrote, 21 months ago:


The jobs in your single active history have been queued for only 7 hours (as of now); they had been queued for about an hour when this was posted. Please give these jobs time to run. Queue time varies with how many jobs are launched or in the queue at the same time, the type of job, server load, and the target cluster (when specified).

We are looking into reports of much longer delays (not yet confirmed); read here for details about that:

Thanks! Jen, Galaxy team

written 21 months ago by Jennifer Hillman Jackson

Update: Server updates were made a few hours ago and everyone should start to see shorter job delays.

written 21 months ago by Jennifer Hillman Jackson

Hi, a workflow was in the queue for almost 40 hrs. After deleting it, I created a new (active) history for TopHat only, thinking it would run faster.

written 21 months ago by sanathoi.guru


This is a common action for users to try, but it doesn't actually help. Deleting and restarting will almost always prolong the wait time for a job to queue. The exception is when the re-run job is specifically targeted at a different cluster than the original (via the Job Resource option, available for certain tools) that happens to have a shorter queue at that time.

Which clusters are busy changes throughout each day, so these re-runs are a guess at best and generally no more successful than simply allowing jobs to stay as originally queued. When there is a server/cluster issue, older queued jobs retain their placement and will run before newly queued jobs once everything starts processing properly again.
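The queue-placement point above can be illustrated with a toy first-in-first-out queue. This is a simplified sketch, not Galaxy's actual scheduler, and the job names are invented; it only shows why deleting and resubmitting a job moves it behind everything queued in the meantime.

```python
from collections import deque

# Toy FIFO job queue -- a simplified illustration, not Galaxy's real scheduler.
# Jobs are served from the left; new submissions join on the right.
queue = deque(["job_A", "job_B", "my_job", "job_C"])

# If "my_job" stays queued as originally submitted, two jobs run before it.
original_position = list(queue).index("my_job")

# Delete "my_job" and resubmit it: it now sits behind every queued job,
# including job_C, which was submitted after it.
queue.remove("my_job")
queue.append("my_job")
resubmitted_position = list(queue).index("my_job")

print(original_position, resubmitted_position)  # 2 3
```

With a strict FIFO, resubmission can never improve a job's position; it can only hold steady (empty queue) or get worse, which matches the advice to leave queued jobs alone.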

When it actually is better to delete and rerun (use a permanent delete; otherwise the job takes much longer to quit out, further extending the wait time), we will let the community know here, on Twitter, or through emails to the mailing lists. This has been needed only twice in the last several years (7+). Not all users needed or benefited from a rerun, and both events happened many years ago while major changes were underway (noted with a banner on the Galaxy Main server).

When a job executes and then fails (red result) for a cluster reason, it is always a good idea to re-run it at least once. If it fails again, double-check the inputs: format or other problems with inputs can also trigger cluster failures (and are the most common reason for failures). If you ever have a question about a job failure, send in a bug report and we'll help with feedback to resolve it. When there is a tool bug or problem, we want to learn about it and get it fixed.
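The re-run-once advice above is a generic retry pattern and can be sketched as follows. The function names (`run_job`, `run_with_one_retry`) are hypothetical stand-ins, not a Galaxy API; `run_job` here simulates a job that fails once for a transient cluster reason and then succeeds.

```python
def run_job(name, attempt, fail_first=True):
    """Stand-in for launching a job; simulates a transient failure
    on the first attempt only (not a real Galaxy call)."""
    if fail_first and attempt == 1:
        return "red"    # failed result
    return "green"      # successful result

def run_with_one_retry(name):
    """Re-run a red job exactly once before investigating inputs."""
    status = run_job(name, attempt=1)
    if status == "red":
        # Transient cluster failures often clear on a single re-run.
        status = run_job(name, attempt=2)
    if status == "red":
        # Still failing: check input formats, then file a bug report.
        return "check-inputs"
    return status

print(run_with_one_retry("tophat"))  # green
```

The design point is that exactly one automatic retry separates transient cluster failures from persistent ones, after which attention shifts to the inputs rather than repeated reruns.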

More help is covered in the Support wiki, Sections 7-14. These sections address the most common usage issues that result in job failures and describe the same tests we run when reviewing errors reported through bug reports (or here on Biostars):


written 21 months ago by Jennifer Hillman Jackson


Powered by Biostar version 16.09