Jobs can queue and execute at different rates. When running multiple large jobs (Cufflinks is considered large), some will start and complete before others begin. Resources are shared among all users of the public Galaxy Main https://usegalaxy.org server.
A gray job indicates that it is queued. Avoid deleting and restarting, as this removes the original job from the queue and places the new one back at the end, further extending wait time.
Support FAQs that describe how jobs execute: https://galaxyproject.org/support/#datasets-and-histories
More details about Galaxy Main and jobs/resources: https://galaxyproject.org/main/
A job will stay queued and never run only if one of the following applies:
There is a server/cluster problem. Those can be reported here. None are known right now, and if your other jobs are running, a server-side problem is very unlikely to be a factor.
One or more of the inputs to the job are in an error state. Check to make sure these are active datasets (not deleted) and are intact (no metadata warnings). This includes the BAM input plus any reference annotation GTF or custom reference genome used.
If your account is over quota (250 GB), a warning will be shown above the history panel stating that new jobs cannot be run until you are back under quota. The solution is to permanently delete unneeded data. Download a backup first if you wish to use it again later. https://galaxyproject.org/learn/managing-datasets/#delete-vs-delete-permanently
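If you prefer to check job states programmatically rather than by color in the history panel, the Galaxy API (for example via the BioBlend library's `jobs.get_jobs()`) returns job records that include a `state` field such as "queued", "running", "ok", or "error". A minimal sketch of grouping such records by state; the sample records and helper function below are hypothetical, for illustration only:

```python
from collections import defaultdict

def jobs_by_state(jobs):
    """Group job dicts by their 'state' field (hypothetical helper)."""
    grouped = defaultdict(list)
    for job in jobs:
        grouped[job.get("state", "unknown")].append(job)
    return dict(grouped)

# Hypothetical job records, shaped like Galaxy API job listings:
sample = [
    {"id": "a1", "tool_id": "cufflinks", "state": "queued"},
    {"id": "b2", "tool_id": "cufflinks", "state": "running"},
    {"id": "c3", "tool_id": "bwa", "state": "error"},
]
grouped = jobs_by_state(sample)
print(sorted(grouped))         # → ['error', 'queued', 'running']
print(len(grouped["queued"]))  # → 1
```

Jobs stuck in the "queued" group for a long time are the ones the checklist above applies to; any in "error" should be inspected for the input problems described earlier.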
Thanks! Jen, Galaxy team