I'm running TopHat/Cufflinks/Cuffdiff on Galaxy with what is probably a large data set (3 replicates of 8 different conditions). I loaded everything Wednesday morning and it still isn't done. All the TopHat runs have finished, along with some of the Cufflinks runs, but the remaining jobs don't appear to be queued at all as far as I can tell: they show an exclamation mark next to them instead of a clock.
Do I need to free up more space for this to run? I'm currently using 71% of my allotted quota, and based on previous experiments I think that should be enough for these file sizes. I've been deleting large files from the server as I go (for instance, I deleted the initial files after converting them, since they were taking up too much space). I could also delete the groomed files if necessary, but I'd prefer not to unless I have to.
Or is this just a general delay caused by some jobs now being run on Jetstream? I don't need this analysis done in any particular rush, but if it's going to sit there indefinitely and I need to do something to fix it, I'd like to know.