CentOS 7 system, Python 2.7. The OS's installed python has directory
/usr/lib/python2.7/site-packages
and that is where a
python setup.py install
command would install a package. On a computing cluster I would like to install some packages so that the OS's python finds them as if they were in that directory, but they actually reside in an NFS-served directory here:
/usr/common/lib/python2.7/site-packages
That is, I do not want to have to run setup.py on each of the cluster nodes to do a local install on each, duplicating the package on every machine. The packages already installed locally must not be affected; some of those are used by the OS's own commands. Also, the local ones must work even if the network is down for some reason. I am not trying to set up a virtual environment; I am only trying to place a common set of packages in a different directory in such a way that the OS-supplied python sees them.
It isn't clear to me what is the best way to do this. It seems like such a common problem that there must be a standard or preferred way of doing this, and if possible, that is the method I would like to use.
This command
/usr/bin/python setup.py install --prefix=/usr/common
would probably install into the target directory. However, the "python" command on the cluster nodes will not know this package is present, and there is no "network" python program that corresponds to the shared site-packages.
After the shared install, one could create symlinks in the local site-packages pointing to each of the files created in the shared directory. That would be acceptable, assuming it is sufficient.
It looks like the PYTHONPATH environment variable might also work here, although I'm unclear about what it expects for "path" (the full path to site-packages, or just the /usr/common part).
EDIT: This does seem to work as needed, at least for the test case. The software package in question was installed using --prefix, as above. PYTHONPATH was not previously defined.
export PYTHONPATH=/usr/common/lib/python2.7/site-packages
python $PATH_TO_START_SCRIPT/start.py
ran correctly.
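(For completeness: another approach that appears to avoid exporting PYTHONPATH in every shell would be to drop a one-line .pth file into the local site-packages on each node. The file name below is arbitrary and I have not tested this variant.)
echo "/usr/common/lib/python2.7/site-packages" > /usr/lib/python2.7/site-packages/usr-common.pth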
Thanks.
Related
For testing my libraries on multiple Python versions I have a single virtual environment that I install them into, and reference them with their complete name/version (i.e. python3.7). Recently I noticed that sys.path is still referencing the source library instead of the copied library (i.e. /source/python/... instead of /source/virtualenv/lib/python3.7/...)
I've tried make install instead of make altinstall [1], and I've searched for answers -- so far nothing has helped.
How do I fix this?
[1] PSA: If you use make (alt)install and you don't want to clobber your system Python, make sure to use
./configure --prefix /path/to/install_to/here
TL;DR Remove the pyvenv.cfg in the virtualenv root directory.
The issue is the interaction with the virtualenv, not with make. Somewhere in Python's startup it checks whether it is running in a virtualenv and, if it is, uses the libraries from its original installation (and I had created the virtualenv from my source copy).
The solution is to remove the pyvenv.cfg file in the root of the virtualenv. This will completely isolate the virtualenv from the system (so no sharing of site-packages nor dist-packages), but is exactly what I wanted for my purposes.
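For reference, pyvenv.cfg is just a small key/value file at the root of the environment, and deleting it is the whole fix. A rough illustration (the contents shown are typical rather than exact, and the paths are the ones from the question):
$ cat /source/virtualenv/pyvenv.cfg
home = /source/python/bin
include-system-site-packages = false
version = 3.7.0
$ rm /source/virtualenv/pyvenv.cfg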
I apologize if this is not the correct site for this. If it is not, please let me know.
Here's some background on what I am attempting. We are working on a series of chat bots that will go into production. Each of them will run in an Anaconda environment. However, our setup uses tensorflow, which requires gcc to compile, and compliance has banned compilers from production. In addition, compliance rules also frown on us using pip or conda install in production.
As a way to get around this, I'm trying to tar the Anaconda 3 folder and move it into prod, with all dependencies already compiled and installed. However, the accounts between environments have different names, so this requires me to go into the bin folder (at the very least; I'm sure I will need to change them in the lib and pkgs folders as well) and use sed -i to rename the hard-coded paths to point from /home/<dev account>/anaconda to /home/<prod account>/anaconda, and while this seems to work, it's also a good way to mangle my installation.
My questions are as follows:
Is there any good way to transfer anaconda from one user to another, without having to use sed -i on these paths? I've already read that Anaconda itself does not support this, but I would like your input.
Is there any way for me to install anaconda in dev so the scripts in it are either hard-coded to use the production account name in their paths, or to use ~?
If I must continue to use sed, is there anything critical I should be aware of? For example, when I use grep <dev account> *, I see some files listed as binary file matches. Do I need to do anything special to change these?
And once again, I am well aware that I should just create a new Anaconda installation on the production machine, but that is simply not an option.
Edit:
So far, I've changed the conda.sh and conda.csh files in /etc, as well as the conda, activate, and deactivate files in the root bin. As such, I'm able to activate and deactivate my environment on the new user account. Also, I've changed the files in the bin folder under the bot environment. Right now, I'm trying to train the bot to test whether this works, but it keeps failing and stating that a custom action does not exist in the list. I don't think that is related to this, though.
Edit2:
I've confirmed that the error I was getting was not related to this. In order to get the bot to work properly with a ported version of Anaconda, all I had to change was the conda.sh and conda.csh files in /etc so their paths to python use ~, do the same for the activate and deactivate files in /bin, and change the shebang line in the conda file in /bin to use the actual account name. This leaves every other file in /bin and lib still using the old account name in their shebang lines and other variables that use the path, and yet the bots work as expected. By all rights, I don't think this should work, but it does.
Anaconda is touchy about path names. They're obviously inserted into scripts, but they may be inserted into binaries as well. Some approaches that come to mind are:
Use Docker images in production. When building the image:
Install compilers as needed.
Build your stuff.
Uninstall the compilers and other stuff not needed at runtime.
Squash the image into a single layer.
This makes sure that the uninstalled stuff is actually gone.
Install Anaconda into the directory /home/<prod account>/anaconda on the development or build systems as well. Even though the accounts are different, there should be a way to create a user-writeable directory in the same location.
Even better: Install Anaconda into a directory such as /opt/anaconda in all environments, or some other directory that does not contain a username.
If you cannot get a directory outside of the user home, negotiate for a symlink (or, on Windows, a junction via mklink.exe /d or /j) at a fixed path such as /opt/anaconda that points into the user home.
If necessary, play it from the QA angle: Differing directory paths in production, as compared to all other environments, introduce a risk for bugs that can only be detected and reproduced in production. The QA or operations team should mandate that all applications use fixed paths everywhere, rather than make an exception for yours ;-)
Build inside a Docker container using the directory /home/<prod account>/anaconda, then export an archive and run it on the production system without Docker (a rough sketch follows after this list).
It's generally a good idea to build inside a reproducible Docker environment, even if you can get a fixed path without an account name in it.
Bundle your whole application as a pre-compiled Anaconda package, so that it can be installed without compilers.
That doesn't really address your problem though, because even conda install is frowned upon in production. But it could simplify building Docker images without squashing.
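A very rough sketch of the build-in-Docker, export-an-archive variant (the image and container names are made up, and the Dockerfile is assumed to install Anaconda under /home/<prod account>/anaconda):
$ docker build -t anaconda-build .            # Dockerfile installs compilers, Anaconda, and the bots
$ docker create --name export-tmp anaconda-build
$ docker export export-tmp | gzip > anaconda-rootfs.tar.gz
# copy the archive to production, then unpack just the Anaconda tree, e.g.:
$ tar -xzf anaconda-rootfs.tar.gz home/<prod account>/anaconda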
I've been building Anaconda environments inside Docker and running them on bare metal in production, too, but we've always made sure that the paths are identical across environments. I found mangling the paths too scary to even try. Life became much simpler when we switched to Docker images everywhere. But if you have to keep using sed... good luck :-)
This is probably what you need: pip2pi. Note that this only works for pip-compatible packages.
As I understand it, you need to move your whole setup, already compiled, as a .tar.gz file. Here are a few things you could try:
Create a requirements.txt. These packages can help:
a. pipreqs
$ pipreqs /home/project/location
Successfully saved requirements file in /home/project/location/requirements.txt
b. snakefood.
Then, install pip2pi
$ pip install pip2pi
$ pip2tgz packages/ foo==1.2
...
$ ls packages/
foo-1.2.tar.gz
bar-0.8.tar.gz
pip2tgz passes package arguments directly to pip, so packages can be specified in any format that pip recognises:
$ cat requirements.txt
foo==1.2
http://example.com/baz-0.3.tar.gz
$ pip2tgz packages/ -r requirements.txt bam-2.3/
...
$ ls packages/
foo-1.2.tar.gz
bar-0.8.tar.gz
baz-0.3.tar.gz
bam-2.3.tar.gz
After getting all the .tar.gz files, they can be turned into a PyPI-compatible "simple" package index using the dir2pi command:
$ ls packages/
bar-0.8.tar.gz
baz-0.3.tar.gz
foo-1.2.tar.gz
$ dir2pi packages/
$ find packages/
packages/
packages/bar-0.8.tar.gz
packages/baz-0.3.tar.gz
packages/foo-1.2.tar.gz
packages/simple
packages/simple/bar
packages/simple/bar/bar-0.8.tar.gz
packages/simple/baz
packages/simple/baz/baz-0.3.tar.gz
packages/simple/foo
packages/simple/foo/foo-1.2.tar.gz
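On the target machine you can then point pip at that index over the filesystem; something along these lines should work (the path is illustrative):
$ pip install --index-url=file:///path/to/packages/simple/ foo==1.2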
"but they may be inserted into binaries as well"
I can confirm that some packages hard-code the absolute path (including the username) into the compiled binary. But if you restrict the usernames to have the same length, you can apply sed to both binary and text files and make almost everything work.
On the other hand, if you copy the entire folder and use sed to replace usernames only in text files, you can run most of the installed packages. However, operations involving run-time compilation might fail; one example is installing a new package that requires compilation during installation.
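As a rough illustration of the same-length rename (the usernames below are made up, both 8 characters; back up the tree first):
$ grep -rlZ '/home/devuser1' ~/anaconda | xargs -0 sed -i 's|/home/devuser1|/home/produser|g'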
I'm a developer and I love to code in Python. The problem is that most of the servers to which I need to deploy my packages are Solaris 10, and they don't have an up-to-date Python, so I configured one development server as my dev environment, and I didn't have problems following my usual steps:
CC="/usr/sfw/bin/gcc -m64" CXX="/usr/sfw/bin/gcc -m64" ./configure --prefix=$HOME/.local
/usr/sfw/bin/gmake
/usr/sfw/bin/gmake install
When necessary I just added CFLAGS, CPPFLAGS, LDFLAGS and so on. I used this approach because I'm not a privileged user.
With those steps I created a script to build all those packages. It works, but only in this environment, and it takes a lot of time (a small problem, and not the most important one). The main problem is that after running my script as another user on the same server I got tons of errors, which began to frustrate me. I suppose the problem is that they use different paths and libraries, and I can't inspect all the servers to compare them, since I'm not privileged to do that.
I'm trying to find a clear path to reusing my local dev environment and installing it in any other Solaris 10 environment.
I've found that packages are the way to share them, reducing the installation time and making sure they will be installed -- is that true?
In my experience, packages always seem to be compiled for root paths such as /usr/local/, but I don't know if that is always the case.
Recently I read that http://www.sunfreeware.com/ (now discontinued) delivered their packages in $HOME/usr/local. Regrettably, I couldn't get any of their sources to look at how they did it.
If I compile without a prefix in my dev environment and run gmake install with DESTDIR set to $HOME/.local/, does that preserve the files needed to build a package?
Will installing with pkgadd then put the packages in the same $HOME/.local path?
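To make the DESTDIR question concrete, this is roughly the sequence I have in mind (the staging path is just an example):
CC="/usr/sfw/bin/gcc -m64" ./configure                # default prefix, /usr/local
/usr/sfw/bin/gmake
/usr/sfw/bin/gmake DESTDIR=$HOME/.local install       # stage the files instead of installing for real
# the tree under $HOME/.local/usr/local/... is what a package would be built from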
I have looked into other Python module distribution questions. My need is a bit different (I think! -- I am a Python newbie+).
I have a bunch of Python scripts that I need to execute on remote machines. Here is what the target environment looks like:
The machines will have a base Python runtime installed
I will have an SSH account; I can log in or execute commands remotely using ssh
I can copy files (scp) into my home dir
I am NOT allowed to install anything on the machines; the machines may not even have access to the Internet
my scripts may use some 'exotic' Python modules -- most likely they won't be present on the target machine
after the audit, my home directory will be nuked from the machine (leave no trace)
So what I like to do is:
copy a directory structure of Python scripts + modules to the remote machine (say into /home/audituser/scripts, with the modules copied into /home/audituser/scripts/python_lib)
then execute a script (say /home/audituser/scripts/myscript.py). This script will need to resolve all modules it uses from the 'python_lib' subdirectory.
Is this possible? Or is there a better way of doing this? I guess what I am looking for is a way to 'relocate' the third-party modules into the scripts dir.
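For concreteness, I imagine invoking it roughly like this (just a sketch using the paths above; the host name is made up, and alternatively the script itself could insert python_lib onto sys.path at startup):
scp -r scripts/ audituser@remotehost:/home/audituser/
ssh audituser@remotehost 'PYTHONPATH=/home/audituser/scripts/python_lib python /home/audituser/scripts/myscript.py'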
thanks in advance!
Are the remote machines the same as each other? And, if so, can you set up a development machine that's effectively the same as the remote machines?
If so, virtualenv makes this almost trivial. Create a virtualenv on your dev machine, use the virtualenv copy of pip to install any third-party modules into it, build your script within it, then just copy that entire environment to each remote machine.
There are three things that make it potentially non-trivial:
If the remote machines don't (and can't) have virtualenv installed, you need to do one of the following:
In many cases, just copying a --relocatable environment over just works. See the documentation section on "Making Environments Relocatable".
You can always bundle virtualenv itself, and pip install --user virtualenv (and, if they don't even have pip, a few steps before that) on each machine. This will leave the user account in a permanently-changed state. (But fortunately, your user account is going to be nuked, so who cares?)
You can write your own manual bootstrapping. See the section on "Creating Your Own Bootstrap Scripts".
By default, you get a lot more than you need—the Python executable, the standard library, etc.
If the machines aren't identical, this may not work, or at least might not be as efficient.
Even if they are, you're still often making your bundle orders of magnitude bigger.
See the documentation sections on Using Virtualenv without bin/python, --system-site-packages, and possibly bootstrapping.
If any of the Python modules you're installing also need C libraries (e.g., libxml2 for lxml), virtualenv doesn't help with that. In fact, you will need the C libraries to be almost exactly the same (same path, compatible version).
Three other alternatives:
If your needs are simple enough (or the least-simple parts involve things that virtualenv doesn't help with, like installing libxml2), it may be easier to just bundle the .egg/.tgz/whatever files for the third-party modules and write a script that does a pip install --user and so on for each one, and then you're done (a sketch follows after this list).
Just because you don't need a full app-distribution system doesn't mean you can't use one. py2app, py2exe, cx_freeze, etc. aren't all that complicated, especially in simple cases, and having a click-and-go executable to copy around is even easier than having an explicit environment.
zc.buildout is an amazingly flexible and manageable tool that can do the equivalent of any of the three alternatives. The main downside is that there's a much, much steeper learning curve.
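A minimal sketch of that first alternative, assuming a reasonably recent pip on both ends and matching platforms/Python versions (the host name, paths, and requirements file are illustrative):
# on the dev machine: collect sdists/wheels for everything you need
pip download -d bundle/ -r requirements.txt
scp -r bundle/ requirements.txt audituser@remotehost:/home/audituser/
# on the remote machine: install into the user site without any network access
pip install --user --no-index --find-links /home/audituser/bundle/ -r /home/audituser/requirements.txt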
You can use virtualenv to create a self-contained environment for your project. This can house your own script, as well as any dependency libraries. Then you can make the env relocatable (--relocatable), and sync it over to the target machine, activate it, and run your scripts.
If these machines do have network access (not internet, but just local network), you can also place the virtualenv on a shared location and activate from there.
It looks something like this:
virtualenv --no-site-packages portable_proj
cd portable_proj/
source bin/activate
# install some deps
pip install xyz
virtualenv --relocatable .
Now portable_proj can be distributed to other machines.
I want to install python on a flash drive in a virtual environment so that I can develop code wherever I am. Is this possible to do in such a way that I can use my flash drive on windows/mac/linux computers?
For Windows, head to Portable Python (http://PortablePython.com) to see the various options you have.
For Linux and Mac you don't need to install it on the USB drive, as those systems usually come with Python pre-installed. If you need specific packages for those systems, bring them on the USB drive together with a command-line script that can load them with one call into a virtualenv on those systems, and you are good to go!
Be aware that this is never 100% bulletproof, as you are depending on the Python version you are using/bringing packages for.
You could try looking at setting up something using some VirtualEnv type environments, with the various Python versions installed on your machines.
Not sure how you'd get round the different paths on the different operating systems though.
Virtualenv: http://pypi.python.org/pypi/virtualenv
As @millimoose pointed out, you could install three different versions of Python.
For each Python package you are working on, you can create a .pth file in the site-packages directory of each Python version that you would like to use the package from.
Note that, as described here:
If you put a .pth file in the site-packages directory containing a path, python searches this path for imports.
For example, if you have a package named my_package that you are working on, residing at the path C:\Users\Me\Documents\dev_packages\my_package, you can add a file with the extension .pth (the name doesn't matter; in particular, it doesn't have to have any relation to the package name) with the contents:
C:\Users\Me\Documents\dev_packages
This will add C:\Users\Me\Documents\dev_packages to the Python import search-path, causing the my_package package to be discovered. By placing this .pth file in the site-packages directory of each Python version, my_package will be available in all corresponding versions of Python.
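If you're not sure where a given interpreter's site-packages directory is, something like this should print it (on Windows you'd use the py launcher, e.g. py -3.7, instead):
$ python3.7 -c "import site; print(site.getsitepackages())"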