Question: Tophat is stuck in "jobs waiting to run"
mpark40 wrote (3.4 years ago, United States):

Just had a quick question: I'm trying to use TopHat, but my jobs are stuck in "waiting to run." They were stuck overnight and never ran. I have plenty of disk space (only 43% used). Is there a way to get them to start running faster?

Tags: rna-seq, tophat
Jennifer Hillman Jackson wrote (3.4 years ago, United States):

Hello,

The public Main Galaxy instance is very busy. The best strategy during peak-usage times is to allow queued jobs to stay queued until they execute. Do not delete/restart them, or you will lose your place in the queue. This is also a good time to use a workflow or to queue up subsequent analysis jobs/steps, so that jobs are prepared and in the queue, ready to execute as resources become available.

Some details about how to interpret dataset status:
http://wiki.galaxyproject.org/Support#Dataset_status_and_how_jobs_execute

Best, Jen, Galaxy team

mpark40 wrote (3.4 years ago, United States):

Thank you for such a quick reply! I do have access to CloudMan (although I've never used it before). Would that let my jobs run faster? Would I need to re-upload my data?

Best,

Margaret

Jennifer Hillman Jackson replied (3.4 years ago):

Hi Margaret,

CloudMan instances do allow you to customize resources: you can scale up memory and add more compute nodes. Whether your jobs run faster there depends on how many other users have active jobs on that CloudMan instance and how many compute nodes have been added. If you are the only one running jobs on a particular instance, then all nodes are dedicated to your jobs.

You'll need to re-upload the data. Here is how to download/upload data:
http://wiki.galaxyproject.org/Support#Loading_data (scroll down; the next section covers downloading)
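As a rough, stdlib-only sketch of the download step: public Galaxy servers typically expose a per-dataset download link of the form `/datasets/<id>/display?to_ext=<ext>`. The server URL and dataset ID below are placeholders, not values from this thread:

```python
# Sketch: build (and optionally fetch) a dataset download link from a
# Galaxy server. The dataset ID "abc123" is a hypothetical placeholder;
# the "/datasets/<id>/display?to_ext=<ext>" pattern is the usual public
# download link Galaxy exposes for each dataset.
from urllib.parse import urljoin
from urllib.request import urlretrieve


def dataset_download_url(base_url: str, dataset_id: str, ext: str = "data") -> str:
    """Build the per-dataset download link for a Galaxy server."""
    base = base_url.rstrip("/") + "/"
    return urljoin(base, f"datasets/{dataset_id}/display?to_ext={ext}")


if __name__ == "__main__":
    url = dataset_download_url("https://usegalaxy.org", "abc123")
    print(url)
    # urlretrieve(url, "dataset.dat")  # uncomment to actually download
```

The downloaded file can then be re-uploaded to another instance through its "Get Data → Upload File" tool, as described in the wiki page above.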

Workflows can also be downloaded/uploaded - see the "Workflow" home page in any instance for the functions.

Importing via history/dataset URLs that point at another instance, or transferring a complete history archive between Galaxy instances, has been unpredictable recently (a known issue we are working on). You can try those methods if you want (the links are in the history menu and per dataset), but if they don't work, fall back to directly downloading/uploading through one of the methods described in the wiki.

Good luck with your project, Jen, Galaxy team

Powered by Biostar version 16.09