Hi, I'm posting this question because I didn't see a previous post about it (slurmstepd error, seen on Thursday, Oct 30 at 6 PM).
I ran a job through the Galaxy browser interface, following the Galaxy 101 tutorial. This particular workflow includes importing 1) all exons [~610,000 regions in BED format] and 2) all repeats [~5,600,000 regions in BED format] for the hg19 build from the UCSC Genome Browser website.
After importing these two datasets, the next step is a Join command to combine them. This job was cancelled with the message: slurmstepd: error: job ###### exceeded memory limit (8034000 > 7864320), being killed. If I understand the Slurm units correctly (KB), the job used about 8.0 GB against a limit of 7864320 KB ≈ 7.5 GiB.
I'm sure I could find a workaround for this issue, but I am a beginner, and it is conceptually easier for me to simply join an all-exons dataset with an all-repeats dataset without splitting them into smaller pieces. The job only exceeds the memory limit slightly, so perhaps earlier genome builds contained slightly less data and ran without triggering the error. (I've sketched below the kind of workaround I have in mind, in case that clarifies the question.)
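For what it's worth, here is a minimal sketch of the low-memory local join I imagine doing instead. It assumes both BED files have been pre-sorted by chromosome and start position (e.g., with sort -k1,1 -k2,2n), and the file names exons.sorted.bed and repeats.sorted.bed are placeholders, not the actual Galaxy dataset names. It streams both files and prints each exon that overlaps at least one repeat, similar to what I understand bedtools intersect -u -sorted does:

```python
import sys

def read_bed(path):
    """Yield (chrom, start, end, raw_line) from a BED file, one record at a time."""
    with open(path) as fh:
        for line in fh:
            if not line.strip() or line.startswith(("#", "track", "browser")):
                continue
            fields = line.rstrip("\n").split("\t")
            yield fields[0], int(fields[1]), int(fields[2]), line.rstrip("\n")

def intersect_sorted(a_path, b_path):
    """Print each interval of a_path that overlaps any interval of b_path.

    Both files must be sorted by chromosome, then start (sort -k1,1 -k2,2n).
    Only a sliding window of b intervals is kept in memory at any time.
    """
    b_iter = read_bed(b_path)
    b_next = next(b_iter, None)
    active = []  # b intervals that may still overlap upcoming a intervals

    for chrom, start, end, raw in read_bed(a_path):
        # Advance b past earlier chromosomes and pull in every b interval
        # that starts before this a interval ends.
        while b_next is not None and (
            b_next[0] < chrom or (b_next[0] == chrom and b_next[1] < end)
        ):
            if b_next[0] == chrom and b_next[2] > start:
                active.append(b_next)  # could overlap this (or a later) a interval
            b_next = next(b_iter, None)
        # Drop stale intervals: wrong chromosome, or ended at/before this start.
        active = [b for b in active if b[0] == chrom and b[2] > start]
        if active:  # BED half-open overlap: b.start < a.end and b.end > a.start
            print(raw)

if __name__ == "__main__":
    # Usage: python sorted_join.py exons.sorted.bed repeats.sorted.bed > joined.bed
    intersect_sorted(sys.argv[1], sys.argv[2])
```

The point of the sorted, streaming approach is that only a small window of repeat intervals is ever held in memory, rather than all ~5,600,000 at once, which is presumably how a job like this could fit under the current limit.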
My question is: is the memory limit a temporary issue, i.e. will it be increased in the near future? Or do I need to find a workaround?
Thanks, Chris