We're running into an odd issue with a single tool (the "ChrScan" tool in "MegaMapper") submitting jobs that are held in the queue forever (the input file exists, but the jobs are always labeled "This job is waiting to run"). When I look at these jobs in the database (select id,tool_id,command_line,job_runner_name from job order by create_time desc limit 2;), they have a number of empty fields, among them command_line, param_file and job_runner_name. Other jobs submitted by the same user (e.g., a BLAST search) have these fields populated meaningfully, and those jobs run without any issue. Does anyone have a guess as to what's going on here? My presumption is that these jobs are being held in the queue because these fields are empty, but presumably someone more familiar with the inner workings of Galaxy could shed some light on this. If this is why the jobs aren't being run, does anyone have a guess as to why these fields aren't being populated properly?
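In case anyone wants to poke at this on their own instance, this is roughly the query I've been running to spot the affected jobs. The column names are the ones I see in our job table; the state filter and the row limit are just my guesses at what "stuck" looks like, so adjust them to whatever your jobs actually show:

    -- list the most recent jobs that appear stuck, plus the fields that should have been filled in
    -- ('new'/'queued' as the stuck states is an assumption on my part)
    select id, tool_id, state, command_line, job_runner_name
    from job
    where state in ('new', 'queued')
    order by create_time desc
    limit 10;

On healthy jobs, command_line and job_runner_name are populated almost as soon as the job is picked up, which is why the empty fields on the stuck ones stood out in the first place.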
Update: Another user complained about a similar problem, but this time with a BLAST search that was always queued. The fields mentioned above were once again not being populated for the submitted jobs. I restarted the Docker instance containing Galaxy and PostgreSQL, and that seems to have somehow resolved things. I haven't a clue what was wrong, since nothing had been modified recently. While the immediate problem is now resolved, it would still be nice if someone could offer some insight into what might have happened.
Update 2: I should probably mention that, after the restart, the aforementioned fields are now filled in on jobs for which they previously weren't.