We are looking into using Galaxy on a larger production server, but one of the major limitations is that for a user to run a tool on a dataset, the dataset first has to be uploaded to Galaxy. We would like each user to be able to view data on a shared drive, so that terabytes of data do not have to be uploaded just to run certain types of jobs. Submitting the job as the real user, as described at https://wiki.galaxyproject.org/Admin/Config/Performance/Cluster, only runs the job under that user's account. Our hope is to keep the data accessible only to users with permission, while letting those users select files without having to upload them.
Hi
You are talking about "certain types of jobs", so I assume you want to give users access to particular files (e.g. FASTQ files generated in your sequencing facility?).
Have you considered adding an extra layer to your tool wrappers that provides the information about the location of the files?
For example, the user provides a file or sample name; that information is enough for the tool to find and read the data and execute, e.g., FastQC.
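As a minimal sketch of that extra layer (the directory layout `/shared/<user>/<sample>.fastq.gz` and the helper name are my own assumptions, not anything from your setup): the wrapper passes only a sample name, and a small helper resolves it to a path while refusing names that escape the user's area.

```python
import os

# Assumed layout for illustration: /shared/<user>/<sample>.fastq.gz
SHARED_ROOT = "/shared"

def resolve_sample(user, sample):
    """Map a (user, sample name) pair to an absolute path on the shared
    drive, raising ValueError if the name tries to escape the user's
    own directory (e.g. via "../other_user/...")."""
    path = os.path.normpath(
        os.path.join(SHARED_ROOT, user, sample + ".fastq.gz"))
    # normpath plus a prefix check blocks path-traversal sample names
    if not path.startswith(os.path.join(SHARED_ROOT, user) + os.sep):
        raise ValueError("sample name escapes the user's directory")
    return path
```

The tool command line would then call something like `fastqc $(resolve_sample ...)` via a wrapper script, so the user never types or uploads a path.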
Alternatively, you could work with the 'dynamic_options' or the 'from_file' attribute (see https://wiki.galaxyproject.org/Admin/Tools/ToolConfigSyntax) to generate user-specific interfaces listing all available files.
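For the 'dynamic_options' route, a sketch of the kind of code-file function it can call (the directory, function name, and file extensions are illustrative assumptions): the function returns the (label, value, selected) triples Galaxy expects for a select parameter.

```python
import os

def list_fastq_files(directory):
    """Return (label, value, selected) tuples, one per FASTQ file found
    in `directory` -- the triple format a select parameter's
    dynamic_options expression expects."""
    options = []
    if os.path.isdir(directory):
        for name in sorted(os.listdir(directory)):
            if name.endswith((".fastq", ".fastq.gz")):
                options.append((name, os.path.join(directory, name), False))
    return options
```

In the wrapper this would be wired up roughly as `<param name="input" type="select" dynamic_options="list_fastq_files('/shared/alice')"/>` together with a `<code file="...">` tag pointing at the file containing the function.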
Regards, Hans-Rudolf
I have implemented the extra layer with some tools, and that does work, but it still does not solve the problem of having the data available in the actual Galaxy instance without uploading it. I am still trying to get a handle on using the dynamic_options tag set; I think you may have answered my Biostar question regarding this tag:
Get Options from a File During Job Creation
As Daniel pointed out a few days ago, working with 'Data Libraries' might be another option. Creation of 'Data Libraries' can be scripted using BioBlend (http://bioblend.readthedocs.org/en/latest/index.html).
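A hedged sketch of that scripting approach (the URL, API key, library name, and paths are placeholders; it assumes BioBlend is installed and the paths are visible to the Galaxy server): the key point is `link_data_only="link_to_files"`, which makes Galaxy reference the files in place rather than copying terabytes into its file store.

```python
def link_paths_into_library(galaxy_url, api_key, library_name, paths):
    """Create a Data Library and *link* files that already sit on the
    shared drive, so nothing is copied into Galaxy's file store.
    Requires an admin API key, since filesystem uploads are admin-only."""
    from bioblend.galaxy import GalaxyInstance  # pip install bioblend

    gi = GalaxyInstance(url=galaxy_url, key=api_key)
    library = gi.libraries.create_library(
        library_name, description="Data linked from the shared drive")
    for path in paths:
        # link_data_only="link_to_files" references the file in place
        # instead of uploading a copy into Galaxy's object store.
        gi.libraries.upload_from_galaxy_filesystem(
            library["id"], path, link_data_only="link_to_files")
    return library
```

You could run this per user (or per sequencing run) and then restrict each library's permissions to the matching user, which addresses the "only accessible by the user with permission" part of your question.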
Regarding the 'dynamic_options' tag: please post the code you have written, along with any error messages.
Hans-Rudolf