Question: job length
lynny, 4.3 years ago (United States), wrote:


I used the Join tool and would like to know how long the job ran:

Tool: Join
Name: Join on data 1 and data 2
Created: Fri Jul 25 11:47:19 2014 (UTC)
Filesize: 52.7 MB
Dbkey: hg19
Format: interval
Galaxy Tool Version: 1.0.0
Tool Version:  
Tool Standard Output: stdout
Tool Standard Error: stderr
Tool Exit Code: 0
API ID: bbd44e69cb8906b5718132ea354f613b

The links to stdout and stderr are to blank pages.

Can I access a log that tells me how long the job ran?

Thank you,




Tags: utility, galaxy • 928 views
modified 4.3 years ago by Jennifer Hillman Jackson • written 4.3 years ago by lynny

I wanted to know the run time for some jobs too, but as far as I've been able to figure out, there is no way to tell how long a job ran from the user end of things on galaxy (if anyone else has found otherwise, please correct me!). 

written 4.3 years ago by Suzanne Gomes
Martin Čech, 4.3 years ago (United States), wrote:

This feature does not currently exist. I believe we have the information in the database, but it is not accessible through the UI.

It is one of the priorities for upcoming releases.

written 4.3 years ago by Martin Čech
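As an aside on what "the information is in the database" could mean in practice: Galaxy records timestamps per job, so a run time can be derived by subtracting one from the other. The sketch below is only an illustration with made-up values in the same format the dataset details pane shows ("Created: Fri Jul 25 11:47:19 2014 (UTC)"); the `finished` label and both example timestamps are hypothetical, not data from this thread.

```python
from datetime import datetime

# Timestamp format as shown in Galaxy's dataset details pane,
# e.g. "Fri Jul 25 11:47:19 2014" (the "(UTC)" suffix stripped).
FMT = "%a %b %d %H:%M:%S %Y"

def elapsed_seconds(created, finished):
    """Return the seconds between two Galaxy-style timestamp strings."""
    start = datetime.strptime(created, FMT)
    end = datetime.strptime(finished, FMT)
    return (end - start).total_seconds()

# Hypothetical example: a job created at 11:47:19 that finished
# at 11:49:04 ran for 105 seconds.
print(elapsed_seconds("Fri Jul 25 11:47:19 2014",
                      "Fri Jul 25 11:49:04 2014"))  # -> 105.0
```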
Jennifer Hillman Jackson, 4.3 years ago (United States), wrote:


Certain tools now include an option to report the mapping time (example: Bowtie2 - a correction from Tophat2, where this will be coming soon!). This is found in the full parameter settings, near the end. Please keep in mind that this does not include queue time. The public server is often busy, and how busy it is at the moment is the important factor.

Execute the job and see; run times will vary, as expected. Then allow the job to run to completion without stopping/restarting it, or you will lose your original place in the queue, extending the wait time.

Queue time depends on these major factors: the type of job you run (compute-intensive jobs, often found in tool groups starting with "NGS", go to our busiest queue because they tend to execute for the longest time), how many other users are also using the public Main instance (or another Galaxy, if you are not using a local or cloud instance), and how many jobs are queued to execute before this one in any of your histories (prioritize if the work is urgent). This wait time is difficult to predict, of course, but our team (and extended team) has been discussing methods, and our new configuration at TACC this past year offers more potential.

This all said, Main can very often be quite speedy. When you happen across such a window, it is a very good time to launch a workflow. 

Hopefully this helps you both. Jen, Galaxy team

modified 4.3 years ago • written 4.3 years ago by Jennifer Hillman Jackson

Hi Jennifer,

Thank you for the response.

We are very grateful for public Galaxy and are simply interested in a benchmark of the Join tool.

We would like to see both the total time elapsed and the time used to execute the command.

Warmly, Lynn


modified 4.3 years ago • written 4.3 years ago by lynny

Hello Lynn, Adding run time reporting for all tools is a priority for upcoming releases. However, keep in mind that run time on our cluster/database configuration may differ from yours, and that this particular command is sensitive to the size and composition of the inputs. If this benchmarking is intended for anything other than use on the public Main instance, you may want to run it on the cloud or local Galaxy where it will actually be executed (in bulk?), with your actual data (samples, small and large), to get a truer sense of timing. Best, Jen, Galaxy team

written 4.3 years ago by Jennifer Hillman Jackson
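For benchmarking on your own hardware with your own data, as suggested above, one option is to time a local stand-in for the operation. The sketch below is not Galaxy's Join tool; it is a hypothetical hash join on the first column of two tiny made-up interval-style tables, wrapped in a wall-clock timer, just to illustrate the approach:

```python
import time

# Made-up interval-style rows (chrom, start, end); the Join tool joins
# two datasets on a chosen column -- here we join on the first field.
table1 = [("chr1", 100, 200), ("chr2", 300, 400), ("chr3", 500, 600)]
table2 = [("chr1", 150, 250), ("chr3", 550, 650)]

def join_on_first_column(a, b):
    """Inner-join two row lists on their first field (simple hash join)."""
    index = {}
    for row in b:
        index.setdefault(row[0], []).append(row)
    return [ra + rb for ra in a for rb in index.get(ra[0], [])]

start = time.perf_counter()
joined = join_on_first_column(table1, table2)
elapsed = time.perf_counter() - start

print(len(joined))         # 2 matching rows (chr1 and chr3)
print(f"{elapsed:.6f} s")  # wall-clock time for the join itself
```

Running this against small and large samples of the real data, as Jen suggests, would give a rough sense of how the join scales with input size, without any queue time in the measurement.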


Powered by Biostar version 16.09