I've read the proposed solutions to this memory issue on this forum. Other than applying for an AWS grant, it's not clear whether there is any short-term solution.
My current situation:
- I run analyses on usegalaxy.org.
- Most prior runs completed fine within the memory allocation.
- My last run (VarScan on a 6.4 GB pileup) has repeatedly been terminated due to the memory issue. (As another user mentioned, some successful runs in the past, with identical parameters, were also of this size.)
- I tried "re-run" a couple of times, i.e., submitting a new VarScan job on the same pileup data. No success so far, and I did wait more than an hour between submissions.
From what I've read (please correct me if I've misunderstood), AWS grants are processed four times a year. So it seems an occasional memory failure like this one would not be solved by an AWS grant.
Q1. Is a 6.4 GB input considered a big memory burden?
Q2. Is there any short-term/quick solution for this current job?
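Regarding Q2, one workaround I've been wondering about (just a sketch, and `sample.pileup` is a placeholder name for my file): splitting the pileup by chromosome locally, so each VarScan run handles a smaller input. Assuming a standard samtools pileup with the chromosome in column 1, something like:

```shell
# Split a pileup into per-chromosome files using column 1 (chromosome).
# close() after each write avoids hitting the open-file limit in awk.
awk '{ print >> ($1 ".pileup"); close($1 ".pileup") }' sample.pileup

# Inspect the resulting per-chromosome files before uploading them.
ls -lh *.pileup
```

Would running VarScan per chromosome like this be a reasonable way to stay under the memory limit, or does it cause problems downstream?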
Any suggestions? Thank you.