I have received Illumina paired-end genome sequence data as a .tar
file. When unpacked, the data for each genome accession is split into
about 100 fastq files, totalling about 37 Gbp per genome.
Can you recommend the best way to organise this data prior to mapping
to a reference genome?
I can concatenate the unpacked files into forward and reverse reads
using the DOS command line before uploading: is this the best
approach? Is there a tool that will start with the .tar file?
Merging the data prior to upload would probably be simplest. Files in
a Galaxy history are not stored in .tar format at this time.
Loading the forward and reverse reads separately will most likely be
important from a scientific perspective for analysis.
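As a minimal sketch of this workflow, the script below unpacks the delivery .tar and then concatenates the per-lane fastq files into a single forward (R1) and a single reverse (R2) file. The `_R1_`/`_R2_` filename pattern is an assumption based on common Illumina naming, so adjust it to match your actual files; gzip streams can be concatenated byte-for-byte, so no decompression is needed, and sorting both mates the same way keeps read pairing consistent.

```python
import glob
import os
import shutil
import tarfile

def unpack_and_merge(tar_path, work_dir, out_prefix):
    """Unpack the delivery tar, then merge the per-lane FASTQ files
    into one forward (R1) and one reverse (R2) file.

    The ``*_R1_*`` / ``*_R2_*`` glob is a hypothetical pattern for
    Illumina-style names; change it to match your data. Gzip members
    may be concatenated directly, so the files are copied as raw
    bytes. Sorting each mate identically preserves read pairing.
    """
    with tarfile.open(tar_path) as tar:
        tar.extractall(work_dir)
    for mate in ("R1", "R2"):
        parts = sorted(glob.glob(
            os.path.join(work_dir, "**", f"*_{mate}_*.fastq.gz"),
            recursive=True))
        with open(f"{out_prefix}_{mate}.fastq.gz", "wb") as out:
            for part in parts:
                with open(part, "rb") as src:
                    # Raw byte copy: valid for concatenating gzip files.
                    shutil.copyfileobj(src, out)
```

This leaves two files per accession (e.g. `merged_R1.fastq.gz` and `merged_R2.fastq.gz`), ready to upload as separate forward and reverse datasets.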
Once ready for upload, you can tar or gzip the data - as long as each
upload is a single file - or leave it uncompressed; either is fine.
FTP is required for larger data (>= 2 GB), and a client that lets you
track progress and resume an interrupted transfer can be helpful. Each
file can be up to 50 GB in size if you have an account.
Hopefully this helps,
Galaxy Support and Training