We have a Python script that uses the Galaxy API to upload files to a user's most recently used history on one of our institution's Galaxy instances. It has worked mostly fine when uploading from machines on our institution's network (it timed out on us once), but it times out every time for our collaborators outside our institution. They can successfully run a simple API request against our Galaxy instance, so this suggests the problem is specific to the file upload. I was looking through an API tutorial here and found this in the step 6 script:
```python
# NOTE: it may happen that the HDA will error! If that's the case - this will loop
# forever. You could press 'ctrl+c' to stop this script - but it's a better
# idea to add more code to handle the possibility. We'll be optimistic and
# (for simplicity) assume everything will work.
```
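For what it's worth, a version of that polling loop that bails out on a terminal state instead of looping forever might look like the sketch below. The `get_state` callable is a stand-in for whatever our script uses to fetch the HDA's state (e.g. a wrapper around `GET /api/histories/{history_id}/contents/{dataset_id}`); the state names are the ones Galaxy reports, but treat the exact set as an assumption.

```python
import time

# States after which the dataset will not change further (assumed set)
TERMINAL_STATES = {"ok", "error", "failed_metadata", "discarded"}

def wait_for_dataset(get_state, poll_interval=2, timeout=300):
    """Poll get_state() until the dataset reaches a terminal state.

    get_state: zero-argument callable returning the HDA state string,
    e.g. a closure around a Galaxy API call.
    Returns the terminal state so the caller can decide what to do
    when it is "error" rather than assuming success.
    """
    waited = 0
    while True:
        state = get_state()
        if state in TERMINAL_STATES:
            return state
        if waited >= timeout:
            raise TimeoutError(f"dataset still in state {state!r} after {timeout}s")
        time.sleep(poll_interval)
        waited += poll_interval
```

The caller checks the returned state and raises or retries on `"error"`, which is the extra handling the tutorial comment alludes to.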
Is there something broken with uploading files through the API? Should we add logic to our Python script that waits a few seconds for the upload to time out and then retries it? These are small text files (KB to MB range), so I believe we have given them adequate time to upload.
The idea behind this script was to turn it into a Galaxy tool that lets users transfer files from our collaborators' Galaxy instance to our Galaxy instance. Is there a better way to do this than manually clicking the download link for a dataset in one Galaxy instance and pasting that link into the upload interface of the other?
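One approach I've been considering is to script exactly what the manual click-and-paste does: hand the destination Galaxy the source dataset's download URL and let it fetch the file server-side, so no bytes flow through the client at all. A sketch of the upload payload is below; the field names are my reading of Galaxy's `upload1` tool (the "Paste/Fetch data" path), so treat them as assumptions, and I believe BioBlend's `ToolClient.put_url` wraps the same mechanism.

```python
import json

def make_url_upload_payload(history_id, url, file_type="auto"):
    """Build a payload for POST /api/tools asking Galaxy to fetch a
    file from a URL, like the upload form's Paste/Fetch box does.

    Field names (files_0|url_paste etc.) are assumed from the upload1
    tool's form; verify against your Galaxy version's API.
    """
    return {
        "tool_id": "upload1",
        "history_id": history_id,
        "inputs": json.dumps({
            "file_type": file_type,
            "dbkey": "?",
            "files_0|type": "upload_dataset",
            "files_0|url_paste": url,
        }),
    }
```

The idea would be to POST this to the destination instance's `/api/tools` with the user's API key, passing the source instance's dataset download link as `url`.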