For the last 3 weeks I have not been able to get a job to run in Galaxy on this server. Is this a backlog problem or something specific to me?
Hello,
The history you were working in has been permanently deleted (purged) and the rest are empty. I don't see anything obviously wrong with the input BAMs (all runs based on the same genome, have results, around the same mapping rates, similar size).
As far as I can tell, the job was simply too large to run at the public server Galaxy Main https://usegalaxy.org with the given parameters/content. Using "Ignore duplicates" might be a factor -- instead, these could be removed first with the tool RmDup in a distinct step to break the job up. You could also try pre-filtering the BAM on other attributes: mapping quality, properly paired reads, exclude unmapped, etc. Downsampling or setting a larger bin size are also options.
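To illustrate the pre-filtering and deduplication steps above, here is a minimal command-line sketch using samtools and deepTools (the same tools the Galaxy wrappers call). File names, the quality cutoff, and the bin size are placeholders, not values from this thread:

```shell
# Placeholder filenames/thresholds -- adjust to your data.
# Keep properly paired reads (-f 2), drop unmapped reads (-F 4),
# and require mapping quality >= 30 (-q 30).
samtools view -b -q 30 -f 2 -F 4 input.bam -o filtered.bam

# Remove duplicates as a separate step (RmDup in Galaxy wraps this),
# instead of relying on "Ignore duplicates" inside one large job.
samtools rmdup filtered.bam dedup.bam

# Run bamCompare on the smaller BAMs with a larger bin size
# (default is 50 bp) to reduce the job's size.
bamCompare -b1 treatment_dedup.bam -b2 control_dedup.bam \
    --binSize 200 -o log2ratio.bw
```

Breaking the work into these smaller steps keeps each individual job within the public server's resource limits.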
If all else fails and/or changing the run parameters doesn't meet your research goals, moving to your own Galaxy where more resources can be allocated would be the solution.
FAQ: My job ended with an error. What can I do? https://galaxyproject.org/support/tool-error/
Thanks! Jen, Galaxy team
Sorry, I will add a little more detail. I have been trying to run deepTools bamCompare jobs on RNA-seq BAM files that were output from HISAT2. The jobs run for a long time and are eventually terminated for exceeding the runtime limit. I ran the first replicate through without any problems a few weeks ago, but I am stuck with this replicate.
Any suggestions? Brandon