Hello,
I'm running Tophat/Cufflinks/Cuffdiff on Galaxy on what is probably a large data set (3 replicates of 8 different conditions). I loaded everything Wednesday morning and it is still not done. All of the Tophat runs and some of the Cufflinks runs have finished, but the ones that have not run yet are still waiting and do not even appear to be queued as far as I can tell (they have an exclamation mark next to them instead of a clock).
Do I need to make more space available in order for this to run? I am currently using 71% of my allotted space, and based on previous experiments I think that will be enough for these file sizes. I've been deleting some large files from the server as I go (for instance, I deleted the initial files after converting them because they were taking up too much space), and I could delete the groomed files as well if that is what it takes for this to run, but I would prefer not to if it isn't necessary.
Or is this related to the addition of running some jobs on Jetstream, which is just causing general delays? I'm not in any particular rush for this analysis, but if it is going to sit there indefinitely and I need to do something to correct it, I would like to know.
Thank you.
The 'exclamation mark state' means that the job is probably waiting on its inputs. Are you sure you did not delete some of them while cleaning up?
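If you are comfortable with the API, you can also list every dataset in the history together with its state and deleted flag, which makes missing or errored inputs easy to spot. This is only a rough sketch using BioBlend (the Python client for the Galaxy API); the server URL, API key, and history name below are placeholders, not values from your account.

    from bioblend.galaxy import GalaxyInstance

    # Placeholder connection details -- substitute your own server and API key.
    gi = GalaxyInstance(url="https://usegalaxy.org", key="YOUR_API_KEY")

    # Assumes the history is named "RNA-seq"; adjust to the real history name.
    history = gi.histories.get_histories(name="RNA-seq")[0]

    # Walk the history contents (including deleted items) and print each
    # dataset's number, name, state, and whether it has been deleted.
    for ds in gi.histories.show_history(history["id"], contents=True, deleted=True):
        print(ds.get("hid"), ds.get("name"), ds.get("state"),
              "deleted" if ds.get("deleted") else "")

Any Tophat output that a paused Cufflinks job depends on should show up in that listing as deleted or in an error state.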
Thank you. I double-checked and that doesn't seem to be the problem. However, I did notice that the Tophat datasets that Cufflinks is waiting on do show an "An error occurred while setting the metadata for this dataset" warning.
I think it is confused about which genome build to use, but I can't correct it because it says the metadata cannot be changed while the dataset is being used as an input or output. Should I cancel the Cufflinks runs and see if I can update it then? (I don't think this is necessarily the cause, since the same warning appeared on several Tophat datasets that Cufflinks did run on and that I updated after the run was complete.)
I would try cancelling the Cufflinks jobs and then running auto-detect to set the metadata on the Tophat dataset. If all inputs are ready, the 'exclamation mark state' should not last longer than a few minutes.
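If the UI keeps refusing the metadata change, the same steps can be scripted. This is only a minimal sketch with BioBlend, assuming the relevant dataset IDs have already been looked up; the IDs, URL, key, and the "mm10" build are placeholders, and genome_build is assumed here to be an accepted field for update_dataset.

    from bioblend.galaxy import GalaxyInstance

    gi = GalaxyInstance(url="https://usegalaxy.org", key="YOUR_API_KEY")

    history_id = "HISTORY_ID"           # placeholder
    cufflinks_output_id = "DATASET_ID"  # placeholder: a paused Cufflinks output
    tophat_bam_id = "DATASET_ID"        # placeholder: the Tophat BAM it depends on

    # Delete and purge the paused Cufflinks output so it no longer holds the
    # Tophat dataset as an input/output dependency.
    gi.histories.delete_dataset(history_id, cufflinks_output_id, purge=True)

    # Set the database/build (dbkey) on the Tophat BAM; "mm10" is just an example,
    # and genome_build is an assumed keyword rather than a confirmed one.
    gi.histories.update_dataset(history_id, tophat_bam_id, genome_build="mm10")

Once the metadata is corrected, Cufflinks can be re-run against that dataset.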
I cancelled the remaining Cufflinks jobs (deleted and then permanently deleted them). I still was not able to change the metadata, even after waiting overnight and trying again the next morning. (It still says that the datasets are being used as input or output, so the metadata cannot be changed.)