Question: Tophat2 tool error
3.3 years ago, catic.andre (United States) wrote:

I was running Tophat2 on a 14 GB groomed fastq file (local setup, 8 GB RAM, default Tophat2 settings for single-end reads). After 16 hours, I received the following error:

Traceback (most recent call last):
  File "/home/catic/galaxy/lib/galaxy/jobs/runners/", line 129, in queue_job
    job_wrapper.finish( stdout, stderr, exit_code )
  File "/home/catic/galaxy/lib/galaxy/jobs/", line 1199, in finish
    dataset.datatype.set_meta( dataset, overwrite=False )
  File "/home/catic/galaxy/lib/galaxy/datatypes/", line 281, in set_meta
    proc = subprocess.Popen( args=command, stderr=open( stderr_name, 'wb' ) )
  File "/usr/lib/python2.7/", line 710, in __init__
    errread, errwrite)
  File "/usr/lib/python2.7/", line 1327, in _execute_child
    raise child_exception
OSError: [Errno 2] No such file or directory

Tool execution generated the following error message:

Unable to finish job


Does anyone know where the problem is?
Thank you,



Tags: software error
3.3 years ago, Jennifer Hillman Jackson (United States) wrote:

This was probably a failure due to the job exceeding memory resources. 8 GB is the minimum recommended for a local Galaxy. 16 GB would be better (and possibly more) for RNA-seq analysis.
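For reference, the `OSError: [Errno 2]` in the traceback above is what Python's `subprocess` module raises when the child process cannot be launched at all (which can also happen when the system is out of resources). A minimal stand-in reproduction, using a deliberately nonexistent command name rather than anything from the Galaxy code:

```python
import subprocess

# "definitely_not_a_real_tool" is a placeholder: asking Popen to launch a
# command that does not exist raises OSError with errno 2
# ("No such file or directory"), matching the traceback above.
try:
    subprocess.Popen(["definitely_not_a_real_tool"])
except OSError as exc:
    print(exc.errno)  # 2
```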

The real test is whether the job will execute on the command line. As a general rule, if a job runs successfully on the command line in a given environment, it will also run in Galaxy, given the same resources.
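A quick way to sketch that command-line check (the index and fastq paths here are placeholders, not taken from the original post):

```shell
# Verify the tool is installed and on PATH before suspecting Galaxy itself.
command -v tophat2 || echo "tophat2 not found on PATH"

# A minimal single-end run against the same inputs Galaxy would use
# (bowtie2_index/genome and reads.fastq are hypothetical placeholders):
# tophat2 -o tophat_out bowtie2_index/genome reads.fastq
```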

Best, Jen, Galaxy team



Thank you, Jennifer. 16 GB RAM didn't do much, but 64 GB did the trick :-)

One problem, though: the working job files get excessively large (>50 GB) when running Tophat2 on a 15 GB fastq file. Is this to be expected?
Thanks again.


— catic.andre, 3.3 years ago

The size of the data will depend on the fastq sequence content, the finished state of the reference genome, how much repetitive sequence the fastq/genome contain, and the Tophat2 settings. In short, the more permissive the match criteria, the more memory a job requires and the larger its data footprint. See the TopHat2 manual for details on tuning these parameters to increase match stringency if the size becomes problematic.
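As one sketch of tightening stringency (the flag names come from the TopHat2 manual; the specific values are illustrative, not a tuned recommendation for this dataset, and the input paths are placeholders):

```shell
# Illustrative stricter settings; values are examples, not defaults.
#   --read-mismatches 1   allow at most one mismatch per aligned read
#   --read-edit-dist 1    cap the total edit distance per alignment
#   --max-multihits 1     report only one alignment per multi-mapped read
tophat2 --read-mismatches 1 --read-edit-dist 1 --max-multihits 1 \
        -o tophat_out bowtie2_index/genome reads.fastq
```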

Thanks, Jen

— Jennifer Hillman Jackson, 3.3 years ago


Powered by Biostar version 16.09