Hi, I ran Tophat on my RNA-seq data and got an error about setting the metadata. This has happened several times before, and I usually just tell Galaxy to auto-detect the metadata and all is well. This time, however, I get a further error when I try to auto-detect the metadata, because the program thinks the dataset is being used as input for another job, which it is not. I did have a pipeline set up to run Cufflinks on the Tophat output, but when the missing metadata halted progress, I deleted the Cufflinks job, purged the deleted files, and tried to delete all hidden files. The top of the page still says there ARE hidden files, but when I try to show them, none appear. How can I get around this hangup? Thank you for your help! Alyssa
Hello Alyssa,
Try refreshing the history (the double-circle icon at the top of the history panel). Permanently deleting the datasets that used the Tophat dataset as an input will also help, but it may take some time for that change to work through the database.
Tophat is considered a legacy tool and is known to present this metadata problem. It will not be corrected, which makes the tool very inconvenient to use in a workflow or other queued work. We strongly recommend using HISAT instead. Please see this prior post about why, and about which setting to use if you plan to use the output as input for Cufflinks: https://biostar.usegalaxy.org/p/23367/#23369
Also, datasets that are both hidden and deleted are still counted as "hidden", which is why the count at the top of the history does not match what you see when you show hidden datasets. You can ignore the count or unhide those datasets, whichever you prefer.
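If you prefer to check or clean this up outside the web interface, here is a minimal sketch using the BioBlend client for the Galaxy API. This is not part of the steps above, and the server URL and API key are placeholders you would replace with your own:

    # Sketch only: list deleted-but-hidden datasets in the most recently updated
    # history and optionally purge them. Assumes BioBlend is installed and you
    # have a valid API key for your Galaxy server.
    from bioblend.galaxy import GalaxyInstance

    gi = GalaxyInstance(url="https://usegalaxy.org", key="YOUR_API_KEY")  # placeholder credentials

    history_id = gi.histories.get_histories()[0]["id"]  # most recently updated history

    # Datasets that are deleted but not visible are what inflate the "hidden" count.
    contents = gi.histories.show_history(history_id, contents=True, deleted=True, visible=False)
    for ds in contents:
        print(ds["hid"], ds["name"], "deleted:", ds["deleted"])
        # Purge each one so it no longer lingers in the history or counts against quota.
        gi.histories.delete_dataset(history_id, ds["id"], purge=True)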
Thanks, Jen, Galaxy team