Question: FastQC detects full drive on Docker machine with volumes mounted directly in Windows
alexandrurepede20 wrote:

The error

[Errno 28] No space left on device

Tried to run

FastQC

The actual command

python /shed_tools/toolshed.g2.bx.psu.edu/repos/devteam/fastqc/3fdc1a74d866/fastqc/rgFastQC.py -i "/export/galaxy-central/database/files/000/dataset_1.dat" -d "/export/galaxy-central/database/job_working_directory/000/73/dataset_79_files" -o "/export/galaxy-central/database/files/000/dataset_79.dat" -t "/export/galaxy-central/database/files/000/dataset_80.dat" -f "fastq" -j "Illumina iDEA Datasets (sub-sampled)/BT20 paired-end RNA-seq subsampled (end 1)"

Context: I am running Galaxy under Hyper-V on Windows 10. The Docker image is https://github.com/bgruening/docker-galaxy-stable. I started the image from Kitematic with the volumes /var/lib/docker, /data, and /export mounted; a rough docker run equivalent is sketched below.
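For reference, the explicit docker run equivalent of what Kitematic sets up for the /export volume should look roughly like this, following the docker-galaxy-stable README (the Windows-side path is illustrative, not my exact setup):

$ docker run -d -p 8080:80 \
      -v /c/Users/alex/galaxy_storage/:/export/ \
      bgruening/galaxy-stable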

While looking at the system, I noticed that all of the mounted volumes appear as full:

# df -a
Filesystem     1K-blocks     Used Available Use% Mounted on
none            57830428 11706668  43163020  22% /
tmpfs            1006672        0   1006672   0% /dev
tmpfs            1006672  1006672         0 100% /data
tmpfs            1006672  1006672         0 100% /export
/dev/sda2       57830428 11706668  43163020  22% /etc/resolv.conf
/dev/sda2       57830428 11706668  43163020  22% /etc/hostname
/dev/sda2       57830428 11706668  43163020  22% /etc/hosts
tmpfs            1006672  1006672         0 100% /var/lib/docker
tmpfs            1006672        0   1006672   0% /proc/kcore
tmpfs            1006672        0   1006672   0% /proc/timer_list
tmpfs            1006672        0   1006672   0% /proc/sched_debug

Is this a detection error because of the mounted drives? Any ideas on how to fix/trick the system?

EDIT: edited the title to mention that my volumes are mounted directly in Windows (so not even in the VM).

fastq conda galaxy docker

Since you're using Kitematic, I assume you're running OS X with the hypervisor. What happens if you run Linux instead? I wonder if the inceptionesque windows->OSX->linux setup is causing the problem with limited volume sizes.

Alternatively, what sort of settings are you giving your hypervisor?

— Devon Ryan

I was using Windows 10, and yes, it runs fine on Linux, but on the Linux machine I made a data container rather than mounting volumes directly through Kitematic (rough sketch below). Still, I was curious what the issue might be...
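Roughly what the data container setup looked like on the Linux machine (container names are illustrative, not my exact commands):

$ # data-only container that owns the /export volume
$ docker create -v /export --name galaxy_store busybox
$ # run Galaxy reusing that container's volume
$ docker run -d -p 8080:80 --volumes-from galaxy_store bgruening/galaxy-stable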

— alexandrurepede20

Good question, I unfortunately don't know.

— Devon Ryan
Mo Heydarian wrote:

Hello,

I have run into a similar situation when my Galaxy in Docker gets close to 10 GB of data. This is not an ideal solution, but when this has happened I clear out all of the Docker volumes and start from scratch by shutting down my Galaxy in Docker and running:

$ docker volume rm $(docker volume ls -qf dangling=true)

Be aware, the above strategy WILL DELETE YOUR DATA.
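If your Docker version is recent enough (1.13 or later, if I remember correctly), the built-in prune command should do the same cleanup and asks for confirmation before deleting anything:

$ docker volume prune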

Perhaps Bjorn Gruening or Devon Ryan could suggest an alternative approach?

If you've figured out a solution since posting this message, do let us know!

Cheers, Mo Heydarian


Mo, please look for the Docker VM on your system and try to increase the disk space that the VM can use (one way to do this is sketched below).
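For example, with Docker Toolbox / docker-machine you could re-create the VM with a larger virtual disk (the size below is just an example, in MB; with Docker for Windows the disk size is changed in the Settings GUI instead):

$ # see how much space the VM currently has
$ docker-machine ssh default df -h
$ # re-create the VM with a ~60 GB disk
$ docker-machine rm default
$ docker-machine create -d virtualbox --virtualbox-disk-size 60000 default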

— Bjoern Gruening

I was running with Kitematic-mounted volumes, which means they were outside the MobyLinuxVM and on my Windows machine, right?

— alexandrurepede20
alexandrurepede20 wrote:

<meant to add a comment, not an answer...>