Question: Problem Uploading Files > 1 Mbyte To Galaxy Cloud
Greg Edwards wrote, 6.9 years ago:
Hi, I'm building a Galaxy Cloud implementation and running into a problem uploading data files larger than 1 Mbyte (1,048,576 bytes). I've started Galaxy CloudMan and a single server, all according to the wiki.

I can upload FASTA-format files smaller than 1,048,576 bytes, general CSV-format peptide files, and indeed junk data files built with variants of

    jot -r -c 1030000 a z | rs -g 0 100 > junk

If I upload files (via the built-in Get Data / Upload Files From Your Computer) that are larger than approximately 2^20 bytes, the upload just hangs in the History and never completes. These files upload fine to a local Galaxy server and to Galaxy Main. On the local Galaxy server they run fine with my custom proteomics code, and likewise on the Cloud Galaxy server whenever the (smaller than 1 Mbyte) files do upload successfully.

I can't see anything useful in the logs, but I'm not sure which logs I should be looking at; I'm just looking at the logs presented on the CloudMan admin page. [image: logs.png]

My environment is Mac OS X 10.6.8. My data files are terminated with \n, but the problem manifests the same with Windows-style \r\n files.

I'll leave it at that in the hope this is a ridiculously simple problem that needs no more description! If not, please let me know which logs I should look at and whatever else I should check.

Many thanks,
Greg E
--
Greg Edwards, Port Jackson Bioinformatics
gedwards2@gmail.com
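For anyone reproducing this, a minimal sketch of bracketing the 2^20-byte (1,048,576-byte) boundary with test files of exactly known size; this is a deterministic complement to the jot/rs variants above, and the file names are just examples:

    # one 99-character line plus a newline = 100 bytes per line
    line=$(printf '%099d' 0)
    yes "$line" | head -n 10400 > junk_under_1mib.txt   # 10,400 x 100 = 1,040,000 bytes (< 1,048,576)
    yes "$line" | head -n 10600 > junk_over_1mib.txt    # 10,600 x 100 = 1,060,000 bytes (> 1,048,576)
    ls -l junk_under_1mib.txt junk_over_1mib.txt        # confirm the sizes straddle 2^20

If only the second file hangs in the History, that points at a ~1 MiB request-size limit rather than at the file contents.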
galaxy • 1.2k views
modified 6.9 years ago by Dannon Baker • written 6.9 years ago by Greg Edwards
Dannon Baker (United States) wrote, 6.9 years ago:
Hi Greg, This is a problem with the default client_max_body_size option in nginx being set far too small in the nginx.conf on the cloud AMI. It'll be fixed with our next AMI update, but in the meantime you have to edit the nginx.conf on your cloud node, change client_max_body_size to something more appropriate for Galaxy uploads, and then restart the nginx process. Unfortunately you'd have to do that for every instance, since that section of the filesystem is not persisted after shutdown. As a workaround, the URL upload will work correctly with any size file if you're able to host the file you want to upload somewhere local, or you could use FTP upload, which should also function correctly. -Dannon
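For readers following the thread, a minimal sketch of what that edit looks like; the config path is the one confirmed further down, and 500m is simply the value used later in this thread, not a required setting:

    # excerpt of /opt/galaxy/pkg/nginx/conf/nginx.conf
    http {
        # ... other http-level directives unchanged ...
        server {
            listen 80;
            # when this directive is absent nginx falls back to its built-in
            # default of 1m, which is why uploads over ~2^20 bytes hang
            client_max_body_size 500m;
            # ... other server-level directives unchanged ...
        }
    }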
written 6.9 years ago by Dannon Baker
Dannon, Thanks. Glad it's not my error! Could you let me know where nginx.conf is, and how to "restart the nginx process"? This is a prototype proof-of-concept implementation, and I think a once-per-instance custom fix which preserves the simple ability to use Get Data / Upload Files From Your Computer will be best for now. Thanks (and also for being awake at 4am in Pennsylvania, if I have my time zones right). Greg E -- Greg Edwards, Port Jackson Bioinformatics gedwards2@gmail.com
written 6.9 years ago by Greg Edwards
Hi Dannon, I found an nginx.conf at /opt/galaxy/pkg/nginx_upload_module-2.0.12/nginx.conf. It contains:

    client_max_body_size 100m;

That looks like 100 Mbyte, yet the limit behaves like 1 Mbyte. Can I set it to 10000m, or 1g? I can't see how to restart nginx yet. Thanks, Greg E -- Greg Edwards, Port Jackson Bioinformatics gedwards2@gmail.com
written 6.9 years ago by Greg Edwards
It's the other nginx.conf, at /opt/galaxy/pkg/nginx/conf/nginx.conf; I don't think the one you found is used. And client_max_body_size isn't explicitly set at all in it currently, so the default 1m gets applied. To reload nginx with the modified config, I think this should do the trick (as the ubuntu user):

    sudo kill -HUP `cat /opt/galaxy/pkg/nginx/logs/nginx.pid`

-Dannon
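Putting the two steps together, a short sketch of the edit-and-reload sequence; the nginx binary location under /opt/galaxy/pkg/nginx/sbin/ is an assumption inferred from the conf and logs paths above, so adjust it if your layout differs:

    # 1. add "client_max_body_size 500m;" (or a size that covers your largest
    #    uploads) inside the server { } block of the config file named above
    sudo vi /opt/galaxy/pkg/nginx/conf/nginx.conf

    # 2. optionally syntax-check the edited config before reloading
    #    ("nginx -t" is standard; only the binary path is assumed here)
    sudo /opt/galaxy/pkg/nginx/sbin/nginx -t

    # 3. ask the running master process to re-read its config without
    #    dropping in-flight connections
    sudo kill -HUP `cat /opt/galaxy/pkg/nginx/logs/nginx.pid`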
written 6.9 years ago by Dannon Baker
Dannon, Cool, that seems to have worked. I set

    server {
        listen 80;
        client_max_body_size 500m;

in /opt/galaxy/pkg/nginx/conf/nginx.conf, issued

    sudo kill -HUP `cat /opt/galaxy/pkg/nginx/logs/nginx.pid`

and it's uploading my test files of 2 Mbyte to 25 Mbyte OK. I think it will be easier to do that in my Cloud setup than to instruct my (not very computer-keen) wet-lab proteomics clients in how to use a separate FTP package, easy though that is. But I'll try the FTP approach too for comparison. Thanks, Greg E -- Greg Edwards, Port Jackson Bioinformatics gedwards2@gmail.com
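As a quick sanity check that the change actually took effect, a couple of one-liners using the paths from this thread (the test file names are just placeholders for whatever you uploaded):

    # confirm the directive is present in the config nginx actually loads
    grep -n client_max_body_size /opt/galaxy/pkg/nginx/conf/nginx.conf

    # confirm the test files really exceed the old 2^20-byte (1 MiB) limit
    ls -l test_2mb.fasta test_25mb.fasta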
written 6.9 years ago by Greg Edwards
Great, glad it worked for you! And, once we release a new AMI, it'll be fixed retroactively. That is, you'll be able to use your existing cloud setup with the new AMI and it should just work. -Dannon
written 6.9 years ago by Dannon Baker