My Python package contains a lot of files compiled by python-protobuf (python2-protobuf-2.5.0 on Arch Linux). I installed the package on an Ubuntu 12.04.3 server (which has python-protobuf-2.4.1) and tried to run the code, but hit the following error:
from google.protobuf.internal import enum_type_wrapper
ImportError: cannot import name enum_type_wrapper
I think this is because the protobuf modules in my package were compiled by protobuf 2.5.0 and do not work with protobuf 2.4.1.
I have no idea what environments my code may run in; the protobuf version may vary. How can I make my package work with both protobuf 2.4 and 2.5?
(A possible way: include two different sets of protobuf modules in my package (one compiled by 2.4.1, the other by 2.5.0), detect the google.protobuf version at runtime, and select which set to import. Is that possible?)
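For example, something like this feature test at import time; a sketch where the subpackage names _pb24/_pb25 and the module mymessage_pb2 are invented for illustration:

try:
    # enum_type_wrapper exists in protobuf >= 2.5 but not in 2.4, so this
    # import doubles as a runtime version check.
    from google.protobuf.internal import enum_type_wrapper
    from mypackage._pb25 import mymessage_pb2
except ImportError:
    from mypackage._pb24 import mymessage_pb2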
You need to specify the protobuf version your package works with in your setup.py, in the install_requires list: install_requires=['protobuf>=2.5.0']. With a Python package, you can give just the name, or pin exact versions with ==. I believe you can also exclude specific versions with !=.
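A minimal setup.py along those lines (the project name and layout are illustrative):

from setuptools import setup, find_packages

setup(
    name='mypackage',  # hypothetical project name
    version='1.0',
    packages=find_packages(),
    install_requires=['protobuf>=2.5.0'],  # or pin exactly with ==
)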
If you are not packaging it with a setup.py, you should set up a virtualenv and put a requirements.txt file with all the specific Python packages and versions in the root of the project.
That might look like:
$ cd ../project
$ virtualenv project_venv
$ source project_venv/bin/activate
$ cd project
$ pip install 'protobuf>=2.5.0'
$ pip freeze > ./requirements.txt
Then someone you distribute to can activate their virtualenv and do:
$ pip install -r requirements.txt
Make sure your package works from a fresh virtualenv by installing it with that method. This is also a good check before installing via a setup.py: you want to be sure your requirements will get anyone working who does a fresh sudo python setup.py install, or python setup.py install in a virtualenv context.
You can exit a virtualenv context with:
$ deactivate
Your best bet may be to include a copy of the protobuf runtime library with your package, maybe under a different package name. Then you can make sure that it matches the version of your generated code.
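A sketch of how the bundling might be wired up: copy the runtime into your package and prefer it over site-packages with a path shim in your package's __init__.py. The _vendor layout below is an assumption, not anything protobuf provides:

import os
import sys

# Assumes the protobuf runtime was copied into mypackage/_vendor/google/protobuf.
_vendor = os.path.join(os.path.dirname(__file__), '_vendor')
if _vendor not in sys.path:
    sys.path.insert(0, _vendor)  # the bundled runtime shadows any system copy

Note this only takes effect if nothing has imported google.protobuf before your package does.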
Another option is to invoke protoc as part of the installation process, so you get whatever version is available on the host.
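A hedged sketch of that, assuming setuptools and a single illustrative mymessage.proto: hook build_py so the generated module is always produced by the host's protoc.

import subprocess
from distutils.command.build_py import build_py
from setuptools import setup

class BuildProtos(build_py):
    def run(self):
        # Regenerate mymessage_pb2.py with whatever protoc the host provides.
        subprocess.check_call(
            ['protoc', '--python_out=.', 'mypackage/mymessage.proto'])
        build_py.run(self)

setup(
    name='mypackage',
    version='1.0',
    packages=['mypackage'],
    cmdclass={'build_py': BuildProtos},
)

The downside is that installation then requires protoc to be present on the host.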
Packaging multiple versions of your generated code doesn't sound like a good idea to me -- you'll just have the same problem again when the next protobuf release comes out.
Related
I've created a new RPM using python setup.py bdist_rpm. Normally, python setup.py install would install Python dependencies like websocket-client or any other package, but the RPM just refuses to install anything.
Apparently the suggestion from various other posts seems to be along the lines of just requiring them in setup.cfg as RPM packages. This doesn't make sense to me, since most of the RPM packages are really old versions and I can't possibly create RPMs for all the Python packages I require. I need much more recent versions, and it doesn't make sense that the yum installs don't actually install the packages.
What is the right (clean and easy) way to do it? I believe that if a setup.py has something like
install_requires=[
"validictory",
"requests",
"netlogger>=4.3.0",
"netifaces",
"pyzmq",
"psutil",
"docopt"
],
then it should either include them in the RPM or try to install them.
I am testing on a clean CentOS VM using Vagrant, which I keep destroying and then installing the RPM on.
Well, the super-hacky way I used was to just add a post-install script that installs all the requirements via easy_install (instead of pip, because older systems may not have pip, and even after installing pip the approach failed on systems with Python 2.6):
# Add this to the setup() call in setup.py:
options={'bdist_rpm': {'post_install': 'scripts/rpm_postinstall.sh'}},
Then the script is as follows:
easy_install -U <pkgnames>
Of course, a post_uninstall script can also be added if you want to clean up, but I wouldn't, because you have no clue what else is using the packages installed alongside this app.
The logic of the RPM approach seems to be exactly this, but it's honestly over-engineering, and I'd rather bundle all the modules with the RPM to ensure it always works. **Screaming out for a cleaner solution**
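For the record, the setup.cfg route mentioned above can also go through the same options dict: bdist_rpm accepts a requires option that becomes the RPM's Requires: header. A sketch, where the RPM package names are illustrative and distro-specific:

options={'bdist_rpm': {
    'post_install': 'scripts/rpm_postinstall.sh',
    'requires': 'python-requests python-psutil',  # distro RPM names, illustrative
}},

As noted above, though, the distro RPMs are often far behind PyPI, which is why the post-install hack exists.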
I am managing several modules on an HPC system and want to install some requirements for a tool using pip.
I won't use virtualenv because it doesn't work well with our module system. I want to install module-local versions of packages and will set PYTHONPATH correctly when the module is loaded; this has worked just fine when the packages I am installing are not also installed in the default Python environment.
What I do not want to do is uninstall the default python's versions of packages while I am installing module-local versions.
For example, one package requires numpy==1.6, and the default version installed with the python I am using is 1.8.0. When I
pip install --install-option="--prefix=$RE_PYTHON" numpy==1.6
where RE_PYTHON points to the top of the module-local site-packages directory, numpy==1.6 installs fine, but then pip goes ahead and starts uninstalling 1.8.0 from the tree of the Python I am using. (Why it wants to uninstall a newer version is beyond me, but I want to avoid this even when doing a local install of, e.g., numpy==1.10.1.)
How can I prevent pip from doing that? It is really annoying and I have not been able to find a solution that doesn't involve virtualenv.
You have to explicitly tell pip to ignore currently installed packages by specifying the -I option (or --ignore-installed). So you should use:
PYTHONUSERBASE=$RE_PYTHON pip install -I --user numpy==1.6
This is mentioned in this answer by Ian Bicking.
This weekend I've been reading up on conda and the Python Packaging User Guide, because I have a simple pure-Python project that depends on numpy. It seemed to me that distributing/installing this project via conda was better than pip, due to this dependency.
One thing I'm still not clear on: conda will install a Python package from a recipe via its build.sh, but it seems that build.sh usually just ends up calling python setup.py install for most Python packages.
So even if I want to distribute/install my python package with conda, I still end up depending on setuptools (or distutils) for the actual installation, correct? I was unable to find a conda utility analogous to setuptools; am I missing something?
FWIW, I posted this question on the conda issue tracker.
Thanks!
Typically you will still be using distutils (or setuptools if the library requires it) to install things, yes. It is not technically required: build.sh can be anything, and if you wanted to, you could just copy the code into site-packages. Using setup.py install is recommended, though, since libraries will already have a working setup.py; it installs metadata that pip can read, and it compiles any extension modules and installs any data files.
I'm wondering if there's a way to "install" single-file Python modules using pip (i.e., just have pip download the specified version of the file and copy it to site-packages).
I have a Django project that uses several third-party modules which aren't proper distributions (django-thumbs and a couple of others), and I want to pip freeze everything so the project can be easily installed elsewhere. I've tried just doing
pip install git+https://github.com/path/to/file.git
(and tried with the -e flag too), but pip complains that there's no setup.py file.
Edit: I should have mentioned - the reason I want to do this is so I can include the required module in a requirements.txt file, to make setting up the project on a new machine or new virtualenv easier.
pip requires a valid setup.py to install a Python package; by definition, every Python package has a setup.py. What you are trying to install isn't a package but rather a single-file module. What's wrong with doing something like:
git clone https://github.com/path/to/file.git /path/to/python/install/lib
I don't quite understand the logic behind wanting to install something that isn't a package with a package manager...
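That said, if you maintain a fork of the module, pip only needs a couple of lines of setup.py to be satisfied. A minimal sketch, assuming the module is a single file named thumbs.py (all names here are illustrative):

from distutils.core import setup

setup(
    name='django-thumbs',   # hypothetical distribution name
    version='0.1',
    py_modules=['thumbs'],  # installs thumbs.py directly into site-packages
)

With that committed to the repo, pip install git+https://... and pip freeze work as usual, so the module can go into requirements.txt.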
The simplest way for me to deal with Python package installations, so far, has been to check out the source from the source control system and then add a symbolic link in the Python dist-packages folder.
Clearly, since source control gives you complete control to downgrade or upgrade to any branch or tag, it works very well.
Is there a way to achieve the same with one of the package installers (easy_install, pip, or another)?
easy_install obtains the tar.gz and installs it with setup.py install, which installs into the dist-packages folder in Python 2.6. Is there a way to configure it, or pip, to use a source version control system (SVN/Git/Hg/Bzr) instead?
Using pip this is quite easy. For instance:
pip install -e hg+http://bitbucket.org/andrewgodwin/south/#egg=South
Pip will automatically clone the source repo and run "setup.py develop" for you to install it into your environment (which hopefully is a virtualenv). Git, Subversion, Bazaar and Mercurial are all supported.
You can also then run "pip freeze" and it will output a list of your currently-installed packages with their exact versions (including, for develop-installs, the exact revision from the VCS). You can put this straight into a requirements file and later run
pip install -r requirements.txt
to install that same set of packages at the exact same versions.
If you download or check out the source distribution of a package (the one that has its setup.py inside), and the package is based on setuptools (which also powers easy_install), you can move into that directory and say:
$ python setup.py develop
and it will create the right symlinks in dist-packages so that the .py files in the source distribution are the ones that get imported, rather than separately installed copies (which is what setup.py install would do: create separate copies that don't change immediately when you edit the source code to try a change).
As the other response indicates, you should try reading the setuptools documentation to learn more. setup.py develop is a really useful feature! Try using it in combination with a virtualenv, and you can develop painlessly, without messing up your system-wide Python with packages you are only working on temporarily:
http://pypi.python.org/pypi/virtualenv
easy_install has support for downloading specific versions. For example:
easy_install python-dateutil==1.4.0
will install 1.4.0, while the latest version (1.4.1) would be picked if no version was specified.
There is also support for svn checkouts, but using that doesn't give you much benefit over your manual version. See the manual for more information.
Being able to switch to specific branches is rarely useful unless you are developing the packages in question, and then it's typically not a good idea to install them in site-packages anyway.
easy_install accepts a URL for the source tree too. It works at least when the sources are in Subversion.