Question: Blastx red error on -- Memory failure, Job exceeds resources
j_pasook40 wrote, 3 months ago:

Hello everyone,

I ran Blastx on the server using my assembled transcriptome against the NR database, and I got this red error:

Fatal error: Exit code 255 () Error memory mapping:/data/db/databases/blast/2018-01-22/nr/nr.21.psq openedFilesCount=101 threadID=0 Error: NCBI C++ Exception: T0 "/opt/conda/conda-bld/blast_1531859873000/work/c++/src/corelib/ncbiobj.cpp", line 977: C

I also ran the same Blastx against the SWISSPROT database and it worked perfectly. Can anyone tell me what I should do? I need to blast against the NR database because my data is from an invertebrate species.

Thanks, j_pasook

Jennifer Hillman Jackson25k (United States) wrote, 3 months ago:


The job exceeded the memory allocation on the server.

Try runs with a smaller number of query sequences (batches or downsampled). BLASTX can be quite memory-intensive, and NR is a much larger target than SwissProt. Also, NR can include predicted genes, which can complicate results, whereas SwissProt is fully curated -- or it was the last time I reviewed it.
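Batching the query is straightforward to script. Here is a minimal sketch, using only the Python standard library, of splitting a multi-FASTA query file into smaller batches so each BLASTX job fits in memory; the file names and batch size are just placeholders:

```python
# Split a multi-FASTA query file into smaller batches for separate
# BLASTX runs. Pure standard library; filenames are placeholders.

def read_fasta(path):
    """Yield (header, sequence) tuples from a FASTA file."""
    header, seq = None, []
    with open(path) as fh:
        for line in fh:
            line = line.rstrip()
            if line.startswith(">"):
                if header is not None:
                    yield header, "".join(seq)
                header, seq = line, []
            elif line:
                seq.append(line)
    if header is not None:
        yield header, "".join(seq)

def split_fasta(path, batch_size=50, prefix="batch"):
    """Write batches of `batch_size` records; return the new file names."""
    records = list(read_fasta(path))
    out_files = []
    for i in range(0, len(records), batch_size):
        name = f"{prefix}_{i // batch_size + 1}.fasta"
        with open(name, "w") as out:
            for header, seq in records[i:i + batch_size]:
                out.write(f"{header}\n{seq}\n")
        out_files.append(name)
    return out_files
```

Each resulting file can then be submitted as its own BLASTX job (Biopython's `SeqIO` would do the same more robustly if it is available).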

I would also suggest reviewing the parameters used. Find these under Advanced Options. The Galaxy wrapper around the tool exposes the same options as when the tool is used on the command line, so NCBI's BLAST documentation covers what each one does.
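For reference, here is an illustrative command-line blastx invocation (NCBI BLAST+) showing the flags that correspond to those Advanced Options; the query, database, and output names are placeholders, and the specific values are only examples:

```shell
# Illustrative blastx run (NCBI BLAST+); query/db/output names are
# placeholders. Flag-to-Galaxy-option correspondence:
#   -seg yes            filter low-complexity regions (the default)
#   -max_target_seqs    "Maximum hits to show"
#   -max_hsps           cap on HSPs reported per subject sequence
blastx \
  -query batch_1.fasta \
  -db nr \
  -evalue 1e-5 \
  -seg yes \
  -max_target_seqs 20 \
  -max_hsps 10 \
  -num_threads 4 \
  -outfmt 6 \
  -out batch_1.blastx.tsv
```

Capping `-max_target_seqs` and `-max_hsps` limits how much result data each query can generate, which is relevant when memory is the constraint.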

Thanks! Jen, Galaxy team


Hello Jennifer,

Downsizing my samples didn't work. I even downsized to a 2 kb file containing only 3-4 transcripts.

Thank you, j_pasook

  • Did you leave the option Filter out low complexity regions (with SEG) set to the default "Yes"? That should help keep repetitive regions from capturing a large number of spurious hits.
  • The options for Maximum hits to show and Max HSPs can also be set (the default is to report all for both). Certain transcripts can capture quite a large number of overlapping HSPs (literally hundreds) -- although, in my prior experience, that situation is more common with nucleotide-vs-nucleotide BLAST runs. It is still worth a test with Blastx.
  • Try just running the jobs with a single transcript and see what happens. Try each of the four in separate runs; the comparison may be informative.
  • Try mapping those downsampled sequences with BLAT via the web at UCSC (map against the reference genome, not NR) and compare where those transcripts are aligning with respect to the other tracks UCSC has for that genome. All UCSC genomes include repeat tracks. This will only work if the transcripts are all from the same species and UCSC supports a genome build for it. BLAT is pretty robust about handling/organizing HSPs (it removes/merges the duplicate and near-duplicate hits that Blast will report).
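If you run with tabular output, it is easy to check whether one transcript is attracting a runaway number of HSPs. A minimal sketch (the 12-column `-outfmt 6` layout is standard; the file name is a placeholder):

```python
# Count HSP lines per query in BLAST tabular output (-outfmt 6)
# to spot transcripts that attract an unusually large number of
# hits. Column 1 of the standard tabular format is the query ID.
from collections import Counter

def hits_per_query(lines):
    """Return a Counter mapping query ID -> number of HSP lines."""
    counts = Counter()
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        qseqid = line.split("\t")[0]
        counts[qseqid] += 1
    return counts

# Typical use (filename is a placeholder):
# with open("batch_1.blastx.tsv") as fh:
#     for query, n in hits_per_query(fh).most_common(10):
#         print(query, n)
```

A query with hundreds of HSPs where its neighbors have a handful is a good candidate for the low-complexity or contamination problems described above.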

I suspect a transcript content problem of some type -- maybe contamination of some sort? Mitochondrial, ribosomal, bacterial, sequencing artifact, etc. But that is just a guess.

If none of that works to figure out what is going on, or if you just want to first double check with the admins of that server to see if there are any known issues, the support mail is

Powered by Biostar version 16.09