Question: Error "Out Of Memory" When Trying To Retrieve Output
5.3 years ago by Delong, Zhou (140) • Canada

Delong, Zhou wrote:
Hello, I wanted to download the accepted junctions .bam file from the TopHat output of my local instance, and I get an "out of memory" error. When I examined the server via the command line, I found that a Python process used by Galaxy occupied more than 80% of total memory (on a virtual machine with 10 GB of RAM). I tried the curl command to retrieve the data file after rebooting the virtual machine, but Python started up again and used up all the memory.

The .bam is around 20 GB in size, but I have never had this kind of problem with other TopHat analyses on my local instance, even though they are of the same size. The description on the web mentioned some .dat files, which I managed to find on disk, but not the .bam. Can anyone explain what Python is doing and how I can solve this, please? Thanks, Delong
galaxy • 1.3k views
modified 5.3 years ago by Dannon Baker (270) • written 5.3 years ago by Delong, Zhou (140)
5.3 years ago by Dannon Baker (270) • United States

Dannon Baker wrote:
Do you have debug enabled in your universe_wsgi.ini? IIRC, this causes the entire request to be loaded into memory (which is a bad thing when the response is 20GB).
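For reference, a minimal sketch of the relevant setting in `universe_wsgi.ini` (the exact surrounding comments vary by Galaxy release, so treat this as illustrative rather than a verbatim excerpt):

```ini
[app:main]
# The debug middleware is handy for development, but it buffers whole
# responses in memory, which can exhaust RAM when a download is very
# large (e.g. a 20 GB BAM file). Leave it disabled, or commented out,
# on any instance that serves big datasets.
debug = False
```

After changing the setting, restart the Galaxy server for it to take effect.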
written 5.3 years ago by Dannon Baker (270)
Hello, this solved my problem. Thanks. Delong

From: Dannon Baker [dannon.baker@gmail.com]
Sent: 29 August 2013 16:07
To: Delong, Zhou
Cc: galaxy-user@bx.psu.edu
Subject: Re: [galaxy-user] Error "out of memory" when trying to retrieve output
written 5.3 years ago by Delong, Zhou (140)
Powered by Biostar version 16.09