Question: RNA STAR Gapped-read mapper for RNA-seq data
negin_valizadegan wrote (6 months ago):

Hello Galaxy team,

I am trying to map RNA-seq data to the human reference genome using the RNA STAR gapped-read mapper for RNA-seq data. I keep receiving the error "This job was terminated because it used more memory than it was allocated."

I have only three files in my Galaxy account and permanently deleted everything else: a FASTA human reference genome, a GTF annotation file, and an RNA-seq file from one individual to be aligned. So I don't think lack of memory is the issue; if Galaxy cannot run this one job with three files, why do we have these tools? Everything I have there totals about 16 GB, and the top of the page tells me I am using only 6% of my quota.

Could someone please help me with that?

Thanks, Negin

Jennifer Hillman Jackson wrote (6 months ago):

Hello,

The available space in your account is distinct from the memory needed to execute a job.
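To see why disk quota and job memory differ so much in scale, here is a rough back-of-envelope sketch. It assumes the commonly cited figure of roughly 10 bytes of RAM per genome base for STAR's suffix-array index (the exact footprint varies with index settings, so treat this only as an estimate):

```python
# Rough estimate of STAR's RAM need for the human genome index.
# Assumption (not from this thread): ~10 bytes of RAM per genome base.
genome_bases = 3.1e9      # approximate human genome length
bytes_per_base = 10       # assumed index footprint per base
ram_gb = genome_bases * bytes_per_base / 1e9
print(f"~{ram_gb:.0f} GB of RAM")  # ~31 GB
```

So even though the input files occupy only ~16 GB of disk, the alignment job itself can need far more working memory than the account quota suggests.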

Changes to increase the memory allocation for this tool are in progress. We expect this to be completed early next week. Rerun your job then.

If it fails again next week, the next step is to double-check the inputs, since format/content problems can also trigger this type of error. This guide explains how: https://galaxyproject.org/support/tool-error/#type-exceeds-memory-allocation
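As an illustration of the kind of input check the linked guide describes, a minimal script (hypothetical, not part of Galaxy; the function name is made up for this sketch) can catch gross FASTQ formatting problems before rerunning:

```python
def check_fastq(lines):
    """Sanity-check a list of FASTQ lines; return a list of problems found."""
    problems = []
    if len(lines) % 4 != 0:
        problems.append("line count is not a multiple of 4")
    # Check only complete 4-line records.
    for i in range(0, len(lines) - len(lines) % 4, 4):
        header, seq, plus, qual = lines[i:i + 4]
        if not header.startswith("@"):
            problems.append(f"record {i // 4 + 1}: header does not start with '@'")
        if not plus.startswith("+"):
            problems.append(f"record {i // 4 + 1}: separator does not start with '+'")
        if len(seq) != len(qual):
            problems.append(f"record {i // 4 + 1}: sequence/quality length mismatch")
    return problems

good = ["@read1", "ACGT", "+", "IIII"]
bad = ["@read1", "ACGT", "+", "III"]  # quality string too short
print(check_fastq(good))  # []
print(check_fastq(bad))   # one length-mismatch problem reported
```

A check like this only covers basic structure; the Galaxy guide above covers content problems (wrong datatype assignment, compressed files, truncated uploads) as well.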

Thanks! Jen, Galaxy team

negin_valizadegan wrote (6 months ago):

Thank you for your reply, Jen! I changed the setting "Filter alignments containing non-canonical junctions" from No to Yes, along with "Remove alignments with unannotated non-canonical junctions", and I didn't get an error this time. Instead, the job has been sitting at the "This is a new dataset and not all of its data are available yet" stage for two days now. I am not sure if the server is busy or the job has just stopped at some step.
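For reference, those two Galaxy settings correspond to STAR's `--outFilterIntronMotifs` option (`RemoveNoncanonical` and `RemoveNoncanonicalUnannotated`) on the command line. A hypothetical invocation, assuming STAR is installed locally, a pre-built index exists at `./star_index`, and the read file name is a placeholder, might look like:

```shell
# Placeholder paths; adjust to your own index and reads.
STAR --runThreadN 4 \
     --genomeDir ./star_index \
     --readFilesIn reads.fastq \
     --outFilterIntronMotifs RemoveNoncanonicalUnannotated \
     --outSAMtype BAM SortedByCoordinate
```

Note that filtering junctions changes which alignments are reported; it does not by itself reduce the memory needed to load the genome index.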


Hi Negin, Did the prior jobs complete without any metadata warnings? Expand the dataset to check; if there is a warning, there will be a link to correct it. I've seen a few unexpected jobs with that warning from this time period. Once corrected, the job should start to execute, or you may need to rerun.

48 hours is a long delay for the multi-queue. Because of the cluster issues a few days ago, a small number of queued jobs may not have queued up properly. So after checking metadata, even if no changes are needed, rerun anyway in this one case since the wait has been that long. (Don't rerun if the queue time has been shorter; otherwise you will lose your existing place in the queue and extend the delay.)

Sorry you were having trouble. Get back to us if this does not work and we can look at your account directly.

— Jennifer Hillman Jackson (6 months ago)

Hi Jennifer,

I have done this analysis multiple times, and each time I got the same memory error as at first. I am not sure what is going on; I changed multiple parameters and it still doesn't work. TopHat seems to be working, but there is a note next to the TopHat tool that says it is deprecated. Does that mean I can't trust it?

Thanks, Negin

— negin_valizadegan (5 months ago)

TopHat should be avoided if at all possible. Use HISAT or RNA STAR instead.

Metadata for the TopHat outputs can be reset (the data itself is intact; it is the related information describing the data that has a problem).

A link to reset metadata is in the expanded dataset, or you can click the pencil icon for the dataset and auto-detect the metadata directly using the button at the bottom of the first Edit Attributes tab.

— Jennifer Hillman Jackson (5 months ago)

The memory failure is not an error with the tool itself. It indicates that the job is too large to run at http://usegalaxy.org with the memory currently allocated.

A gray job is queued. This is also not an error; jobs queue for a variable amount of time.

That said, we now know there were some cluster issues over the weekend that extended the delays for gray queued jobs. These are now resolved. Leave queued jobs queued and they will process when resources are available.


— Jennifer Hillman Jackson (5 months ago)
Powered by Biostar version 16.09