Question: Numpy 1.9 local install failure
13 months ago, Andy wrote:

Hi all,

I am running a local instance of Galaxy and, in an attempt to reinstall a newer version of MACS2, I have run into a dependency issue. The error occurs while installing numpy 1.9, which returns this message:

Traceback (most recent call last):
  File "", line 264, in <module>
    setup_package()
  File "", line 210, in setup_package
    raise RuntimeError("Writing custom site.cfg failed! %s" % ex)
RuntimeError: Writing custom site.cfg failed! 'ATLAS_LIB_DIR'
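The quoted 'ATLAS_LIB_DIR' at the end of the error looks like the repr of a KeyError, which suggests the numpy build script expected an ATLAS_LIB_DIR environment variable that is not set. A minimal sketch of that failure mode (assuming a direct os.environ lookup, which I have not verified against numpy 1.9's setup code):

```python
import os

# Hedged reproduction: if a build script reads os.environ["ATLAS_LIB_DIR"]
# directly while writing site.cfg, a missing variable raises KeyError,
# and str(KeyError) is just "'ATLAS_LIB_DIR'" -- matching the message above.
os.environ.pop("ATLAS_LIB_DIR", None)  # ensure the variable is unset
try:
    os.environ["ATLAS_LIB_DIR"]
except KeyError as ex:
    message = "Writing custom site.cfg failed! %s" % ex
print(message)  # Writing custom site.cfg failed! 'ATLAS_LIB_DIR'
```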

Any help would be greatly appreciated, cheers!

Tags: github, admin, install, local, macs2
written 13 months ago by Andy

Hi Andy - Sorry to hear you are having problems. Two questions:

  • Please confirm you are installing the IUC version of the MACS2 wrapped tools. This is the recommended release/version.
  • What Galaxy version is your local instance running? 17.05? If not, try upgrading and reinstalling. Many tool updates work best in 17.05 due to the upgrades around dependency handling. Most newer tools will also work best in the upcoming 17.09, expected to be released sometime this week.

Upgrade as needed, let us know how that goes, confirm versions, and we can troubleshoot from there. I have already asked for some developer input (should we need it). Thanks, Jen, Galaxy team

written 13 months ago by Jennifer Hillman Jackson

Hi Jen,

Thanks for reaching out! I was trying to install a different version of MACS2 in the hope that the bdgdiff empty-output-file issue had been resolved. I could not find the ticket from the link included on the page, and was unable to determine whether the issue had been fixed, so I opted for the most recent available version.

I am not sure what version of Galaxy I am running. Is there another way to check if I cannot use the "git" command?

I have tried to install the version of MACS2 you mentioned, and am now getting an error when running bdgdiff to check whether the output file is empty: Fatal error: Exit code 127()
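For reference, exit code 127 from a shell conventionally means "command not found", which usually points to a dependency (for example, the macs2 binary) missing from PATH rather than a bug in the tool itself. A quick demonstration with a deliberately nonexistent command:

```python
import subprocess

# A POSIX shell returns 127 when the command it is asked to run
# cannot be found -- the same code reported in the Galaxy error above.
result = subprocess.run(
    ["sh", "-c", "definitely_not_a_real_command_xyz"],
    capture_output=True,
)
print(result.returncode)  # 127
```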

Cheers, Andy.

written 13 months ago by Andy
13 months ago, Jennifer Hillman Jackson (United States) wrote:


You will definitely want to use the newer tool release. As far as I know, that MACS2 tool issue was resolved only there. [note: updated comment].

The post you reference is older and I updated the link there to point to the current Known Issues page.

This is the post for the current version of MACS2 that is known to work correctly when all dependencies are installed. I see that some others have recently had trouble getting those dependencies in place, and that conversation is still in progress. Please follow the advice there and add your own comments if you are experiencing different problems. The GitHub ticket reaches the developers directly, who can help solve any problems with the tool repo itself:

Please explain more about why the git command is not helping you determine the Galaxy version, if you still need help with that.
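If git metadata is unavailable, the version can usually still be read from the source tree: recent Galaxy releases record it in lib/galaxy/version.py as simple assignments. A minimal parser for a file in that style (the file name and variable names are from memory, and the values below are made up for illustration):

```python
import re

# Sample content in the style of Galaxy's lib/galaxy/version.py
# (hypothetical values, for illustration only).
version_py = '''
VERSION_MAJOR = "17.05"
VERSION_MINOR = "1"
'''

def read_version(text):
    """Extract VERSION_MAJOR / VERSION_MINOR assignments from version.py text."""
    found = dict(re.findall(r'(VERSION_\w+)\s*=\s*"([^"]*)"', text))
    major = found.get("VERSION_MAJOR", "unknown")
    minor = found.get("VERSION_MINOR")
    return major + ("." + minor if minor else "")

print(read_version(version_py))  # 17.05.1
```

In practice you would read the real file, e.g. `read_version(open("lib/galaxy/version.py").read())` from the Galaxy root directory.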

Thanks! Jen, Galaxy team

written 13 months ago by Jennifer Hillman Jackson

Thanks again for your help!

I "inherited" a workstation from a colleague who, I believe, may have installed Galaxy by downloading zipped packages directly rather than through GitHub. I am waiting to hear from him.

written 13 months ago by Andy

Ok, that helps with context. I suspect your instance is older, which means that the Conda dependency-resolution upgrades won't match the current methods. Upgrading an instance obtained that way is not possible, which will present problems over time.

Using an older Galaxy version could impact many tool installs, especially for tools that have been specifically updated to work with these newer releases. MACS2 is one of those tools, as are most devteam- and iuc-owned tools hosted in the Tool Shed. Ideally, your colleague can help determine the original version. If it is not at least 17.05, you'll need to figure out how to handle upgrades.

Basing the install off of the GitHub repo would be better for tool-install reasons (your immediate concern), but it also allows easier upgrade methods to be used. However you choose to do this, you'll want to stay current; how is your decision. Security updates, for example, are very important to stay current with. Functionality upgrades and bug fixes are others, both for admin and general usage (this is where the methods for tool installs fit in).

No matter how you obtained Galaxy originally, once it is installed and running there are ways to save back the database (and current tool/data content), start up a new instance based off the GitHub release repo, swap in the prior content, upgrade the database schema, and more. This is non-trivial, but overall would be worth it going forward. Getting help from someone locally with good general admin experience would be wise, especially if there are advanced admin changes to consider (a Postgres db, FTP setup, web hosting, clusters running jobs, and possibly other offloaded tasks). This admin may be you now, and we can do our best to help. :)

Whoever works on it can interact with the devs directly for details/troubleshooting not covered in the Admin FAQs or the Dagobah Admin Training, using the Gitter channels, depending on the type of issues that come up during the migration. The galaxy-dev mailing list is also a great place to get feedback and review prior Q&A (that prior Q&A, and more, can be searched using the Search function in the upper right corner). That search function is very effective at finding FAQs and other Galaxy resources for topics of interest.

written 13 months ago by Jennifer Hillman Jackson

Hi Jen,

Thanks for the help and all the info. I understand about half of what you have said here and, unfortunately, the laurels rest with me. Is there any way to save work histories from this instance to import them into a new instance later? It seems that the fix is a galaxy far, far away, and I should probably have a backup before I do any more damage.

written 13 months ago by Andy

You are funny, and I agree, moving content can be a simpler method if you do not have a great deal of it or just want to start over and not deal with the complexities of a full migration. And starting with a fresh instance has advantages, one of which is that you will have complete knowledge of that server's config since you will be creating it. You might want to consider using a Docker Galaxy image as a baseline - there would be less work to get things up and running, including loading tools and natively indexed data in batch. How to get one going is covered in the Dagobah Galaxy Admin training site linked in my prior post.

Small note here: the next release, 17.09, is just a few days away and includes many important upgrades and functionality enhancements. You might want to wait for that, even though, once you are using a GitHub source, upgrading is straightforward and well documented. The official release will be announced in a few ways - on the @galaxyproject Twitter account, the galaxy-dev and galaxy-announce mailing lists, and here at Biostars. The Docker version will be released a few days after that. Perhaps compare that to the Ansible playbook admin strategy - one may appeal to you more than the other.

Now to the details of transferring content ... apologies for the length; I am summarizing what you will find in many distinct places. (I really should put this into a new FAQ ... :)

Histories can be downloaded and saved as a file, then uploaded into another Galaxy server. See the History menu options Export to File/Upload from File. It takes some time to build the archive. Messages that come up during the process will guide you through the steps.

Should both Galaxy instances be active and either on the same network or publicly available, the histories can be moved via URL, skipping the actual download/upload steps. Histories should be in a shared state when using the URL method. Use the History menu option Share or Publish - Share by link is enough; going further with Publish is not necessary.

Datasets and Workflows can also be moved between Galaxy servers, with a slightly different process.

For Datasets, use the disc icon per dataset: either actually download the file or capture the link, then load it with the Upload tool (files under 2 GB directly, or any size by URL). If the dataset is over 2 GB and in a file, load it with FTP first, then use the Upload tool to transfer the FTP-loaded dataset into a History. URL transfer for these also requires that the servers are publicly accessible (on an internal network or everywhere).
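The 2 GB cutoff above determines whether direct upload works or FTP is needed; a tiny hypothetical helper, just to make the rule concrete:

```python
import os
import tempfile

TWO_GB = 2 * 1024 ** 3  # the 2 GB direct-upload cutoff mentioned above

def upload_method(path):
    """Suggest direct browser upload vs FTP based on file size (illustrative only)."""
    return "direct" if os.path.getsize(path) < TWO_GB else "ftp"

# Small demo file -- well under 2 GB, so direct upload is fine.
with tempfile.NamedTemporaryFile(delete=False) as fh:
    fh.write(b"chip-seq peaks\n")
print(upload_method(fh.name))  # direct
```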

For Workflows, go to your Workflow home screen using the top masthead link, then click on the workflow to see the menu option for Download. Upload workflows using the "up" button icon in the upper right corner of that same home screen in the new instance. Warnings will appear if a workflow contains tools or tool versions not available on the other server, alerting you to what needs to be added or how the workflow can be edited to use the tool versions already available (sometimes portions are automatically replaced when first opened in the workflow editor, but also with warnings).
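Exported workflows are JSON (.ga) files, so the tools and versions a workflow expects can be listed before importing it into the new instance. A sketch below assumes a minimal steps/tool_id/tool_version structure (real .ga exports contain much more, and the sample values are made up):

```python
import json

# Minimal workflow text in the style of a Galaxy .ga export
# (hypothetical content, for illustration only).
ga_text = json.dumps({
    "name": "ChIP-seq peaks",
    "steps": {
        "0": {"type": "data_input", "tool_id": None, "tool_version": None},
        "1": {"type": "tool", "tool_id": "macs2_callpeak", "tool_version": "2.1.1"},
    },
})

def required_tools(ga_json):
    """List (tool_id, tool_version) pairs that the workflow's tool steps require."""
    wf = json.loads(ga_json)
    return sorted(
        (step["tool_id"], step["tool_version"])
        for step in wf["steps"].values()
        if step.get("tool_id")  # skip data inputs, which have no tool
    )

print(required_tools(ga_text))  # [('macs2_callpeak', '2.1.1')]
```

Running this over a downloaded .ga file before uploading it would tell you which tool repositories to install first.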

Keeping distinct backups is important, of both the data and the database itself. This should be part of your overall admin strategy.

There are several hub FAQs and prior Q&As related to these topics - the search function I mentioned will help you find them. They can include examples and troubleshooting, and reviewing them can help if you run into problems or need more details. But let us know if you get stuck and need specific help with a function, through one of the communication channels (if complex) or here (if just a quick question).

written 13 months ago by Jennifer Hillman Jackson