If I run:
RUN apk add --update --no-cache python3 && ln -sf python3 /usr/bin/python
RUN python3 -m ensurepip
RUN pip3 install --no-cache --upgrade pip setuptools
This installs Python 3.9 on my Alpine image, but because I work with Django 1.10.5 it gives me errors, so I need to install Python 3.5.
How can I specify this?
Package pinning in Alpine might break at any point if you are not maintaining your own mirror of the package repository, because of their policy regarding packages:
We don't at the moment have resources to store all built packages indefinitely in our infra. Thus we currently keep only the latest for each stable branch, and it has always been like that.
PyPI and npm just keep source versions. We have to do binary builds, which are considerably larger and may need to be redone when one of the dependencies changes. So as a whole this is a magnitude more difficult and storage-hungry problem. Of course it is unfortunate that the same rules apply to all packages, even to the Python/npm ones that would not need to be rebuilt as often, since they are source distributions.
There has been discussion of keeping all packages for tagged Alpine releases in the future. However, this is still "in progress". The official recommendation is to keep your own mirror/repository with all the specific packages and versions that you may want to use.
Source: https://gitlab.alpinelinux.org/alpine/abuild/-/issues/9996#note_87135
A better way to achieve this is to start from a Python image right away.
So your Dockerfile would start with:
FROM python:3.5-alpine
## Here go your `RUN` commands, without the need to install python and pip,
## since they are built into the base image already
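A fuller sketch of what that Dockerfile could look like, assuming a standard Django project layout with a requirements.txt (both file names are assumptions, not taken from your setup):
FROM python:3.5-alpine
WORKDIR /app
# Install the dependencies first so Docker can cache this layer
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]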
I am using some compiled modules on a cluster where my home directory is shared across a few different architectures. By manually copying files I can have these two versions peacefully coexisting:
/home/wright/.local/lib/python3.8/site-packages/ImageD11/_cImageD11.cpython-38-powerpc64le-linux-gnu.so
/home/wright/.local/lib/python3.8/site-packages/ImageD11/_cImageD11.cpython-38-x86_64-linux-gnu.so
Is there a way to get pip to leave the previously compiled version(s) in place when installing for another architecture?
This just needed the -I (--ignore-installed) flag for pip. It then ignores the installed version(s) and writes over the top. Just re-run it for each architecture, platform, and Python version you need.
Another useful option is --force-reinstall, and you'll almost always need --no-deps as well; also remember to run python -m pip install pip --upgrade before starting.
Like this, you can run the same code with the system Python or any of the other venvs you find. Obviously, that code needs to be version tolerant with respect to any dependencies.
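For example, the per-architecture run might look like this (a sketch; ImageD11 is the package from the question above, and --user targets the shared ~/.local):
python -m pip install pip --upgrade
pip install --user -I --no-deps ImageD11
# log in to the next architecture and repeat the same two commands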
I've created a new RPM using python setup.py bdist_rpm. Normally python setup.py install would install Python dependencies like websocket-client or any other package. But the RPM just refuses to install anything.
Apparently the suggestion from various other posts seems to be along the lines of just requiring them in setup.cfg as RPM packages. This doesn't make sense to me, since most of the RPM packages seem to be on really old versions, and I can't possibly create RPMs for all the Python packages I require. I need a much more recent version, and it doesn't make sense that the yum installs don't actually install the packages.
What is the right (cleanest and easiest) way to do it? I believe if a setup.py has something like
install_requires=[
"validictory",
"requests",
"netlogger>=4.3.0",
"netifaces",
"pyzmq",
"psutil",
"docopt"
],
then it should either include them in the RPM or try to install them.
I am testing on a clean CentOS VM using Vagrant, which I keep destroying and recreating before installing the RPM.
Well, the super-hacky way I used was to just add a post-install script that installs all the requirements with easy_install (instead of pip, because older systems may not have pip, and even after installing pip the approach failed on systems with Python 2.6).
# Add this to setup.py
options = {'bdist_rpm':{'post_install' : 'scripts/rpm_postinstall.sh'}},
Then the script is as follows:
easy_install -U <pkgnames>
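Put together, a minimal setup.py sketch (the name, version, and packages values are illustrative):
from distutils.core import setup

setup(
    name='myapp',        # illustrative metadata
    version='1.0',
    packages=['myapp'],
    options={'bdist_rpm': {'post_install': 'scripts/rpm_postinstall.sh'}},
)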
Of course, a post_uninstall script can also be added if you want to clean up, which I wouldn't, because you have no clue what else is using the packages installed apart from this app.
The logic of the RPM approach seems to be designed for this, but it's honestly over-engineering, and I'd rather package all the modules with the RPM to ensure it always works. ** Screaming out for a cleaner solution **
My Python package contains a lot of files compiled by python-protobuf (python2-protobuf-2.5.0 on Arch Linux). I installed the package on an Ubuntu server 12.04.3 (which has python-protobuf-2.4.1), tried to run the code, and hit the following error:
from google.protobuf.internal import enum_type_wrapper
ImportError: cannot import name enum_type_wrapper
I think it's because the protobuf modules in my package were compiled by protobuf-2.5.0 and they do not work with protobuf-2.4.1.
I have no idea of the environments in which my code may run; the version of protobuf may vary. How can I make my package work with both protobuf 2.4 and 2.5?
(A possible way: include two different sets of protobuf libraries, one compiled by 2.4.1 and the other compiled by 2.5.0, in my package, get the google.protobuf version at runtime, and select which protobuf libraries to import. Is that possible?)
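The runtime check itself seems doable with a guarded import; a sketch, where mypkg.proto_24 and mypkg.proto_25 are hypothetical names for the two bundled sets of generated modules:
try:
    # enum_type_wrapper only exists in protobuf >= 2.5; this is exactly
    # the import that fails on 2.4.1
    from google.protobuf.internal import enum_type_wrapper  # noqa: F401
    from mypkg import proto_25 as proto  # generated with protoc 2.5.0
except ImportError:
    from mypkg import proto_24 as proto  # generated with protoc 2.4.1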
You need to specify the version of protobuf that will work with your package in your setup.py, in the install_requires list: install_requires=['protobuf>=2.5.0']. With a Python package, you can put just the name, or pin the exact versions that run with the package using ==. I believe you can also use != to exclude specific versions.
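For example, a minimal setup.py sketch (the package name is illustrative):
from setuptools import setup

setup(
    name='mypackage',  # illustrative name
    version='1.0',
    packages=['mypackage'],
    install_requires=['protobuf>=2.5.0'],  # pin the runtime the generated code needs
)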
If you are not packaging it with a setup.py, you should set up a virtualenv and put a requirements.txt file with all the specific Python packages and versions in the root of the project.
That might look like:
$ cd ../project
$ virtualenv project_venv
$ source project_venv/bin/activate
$ cd project
$ pip install 'protobuf>=2.5.0'
$ pip freeze > ./requirements.txt
Then someone you distribute to can activate their virtualenv and do:
$ pip install -r requirements.txt
Make sure your package will work from a fresh virtualenv by installing it with that method. This is also a good check to do before installing via a setup.py. You want to make sure your requirements will get anyone working who does a fresh sudo python setup.py install, or python setup.py install in a virtualenv context.
You can exit a virtualenv context with:
$ deactivate
Your best bet may be to include a copy of the protobuf runtime library with your package, maybe under a different package name. Then you can make sure that it matches the version of your generated code.
Another option is to invoke protoc as part of the installation process, so you get whatever version is available on the host.
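A minimal sketch of that second option, assuming protoc is on the PATH and the .proto files live in a proto/ directory (both are assumptions, and the file names are illustrative):
import subprocess
from setuptools import setup
from setuptools.command.build_py import build_py


class BuildProtos(build_py):
    """Regenerate the bindings with whatever protoc the host provides."""

    def run(self):
        subprocess.check_call(
            ['protoc', '--python_out=mypkg', 'proto/messages.proto'])
        build_py.run(self)


setup(
    name='mypkg',  # illustrative
    version='1.0',
    packages=['mypkg'],
    cmdclass={'build_py': BuildProtos},
)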
I don't think packaging multiple versions of your generated code sounds like a good idea -- you'll just have problems again when the next protobuf release comes out.
In Ubuntu 13.04, I have installed Scrapy for Python 2.7 from the tarball. Executing a crawl command results in the error below:
ImportError: Error loading object 'scrapy.telnet.TelnetConsole': No module named conch
I've also tried installing twisted.conch using easy_install and from the tarball. I have also removed the scrapy .egg and .egg-info files and the main scrapy folder from the Python path.
Reinstalling Scrapy does not help either.
Can someone point me in the right direction?
On Ubuntu, you should avoid using easy_install wherever you can. Instead, you should be using apt-get, aptitude, "Ubuntu Software Center", or another of the distribution-provided tools.
For example, this single command is all you need to install scrapy - along with every one of its dependencies that is not already installed:
$ sudo apt-get install python-scrapy
easy_install is not nearly as good at installing things as apt-get. Chances are the reason you can't get it to work is that it didn't quite install things sensibly, particularly with respect to what was already installed on the system. Sadly, it also leaves no record of what it did, so uninstallation is difficult or impossible. You may now have a big mess on your system that prevents proper installations from working as well (or maybe not; you might be lucky). It's difficult to say whether this is the case, since there are a lot of different pieces that go into a working system, they all need to fit together just right, and it's difficult to enumerate them so you can check them, let alone enumerate the ways they can each be broken.
Ensure you have the python development headers:
apt-get install build-essential python-dev
Install scrapy with pip:
pip install Scrapy
Ubuntu packages
New in version 0.10.
Scrapinghub publishes apt-gettable packages which are generally fresher than those in Ubuntu, and more stable too, since they're continuously built from the GitHub repo (master and stable branches), so they contain the latest bug fixes.
To use the packages:
Step 1. Import the GPG key used to sign Scrapy packages into the APT keyring:
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 627220E7
Step 2. Create the /etc/apt/sources.list.d/scrapy.list file using the following command:
echo 'deb http://archive.scrapy.org/ubuntu scrapy main' | sudo tee /etc/apt/sources.list.d/scrapy.list
Step 3. Update the package lists and install the scrapy-0.24 package:
sudo apt-get update && sudo apt-get install scrapy-0.24
Note
Repeat step 3 if you are trying to upgrade Scrapy.
Warning
python-scrapy is a different package provided by the official Debian repositories; it's very outdated and it isn't supported by the Scrapy team.
For me, the simplest way to deal with Python package installations so far has been to check out the source from the source control system and then add a symbolic link in the Python dist-packages folder.
Since source control provides complete control to downgrade or upgrade to any branch or tag, this works very well.
Is there a way to achieve the same using one of the package installers (easy_install, pip, or other)?
easy_install obtains the tar.gz and installs it using setup.py install, which installs into the dist-packages folder in python2.6. Is there a way to configure it, or pip, to use the source version control system (SVN/Git/Hg/Bzr) instead?
Using pip this is quite easy. For instance:
pip install -e hg+http://bitbucket.org/andrewgodwin/south/#egg=South
Pip will automatically clone the source repo and run "setup.py develop" for you to install it into your environment (which hopefully is a virtualenv). Git, Subversion, Bazaar and Mercurial are all supported.
You can also then run "pip freeze" and it will output a list of your currently-installed packages with their exact versions (including, for develop-installs, the exact revision from the VCS). You can put this straight into a requirements file and later run
pip install -r requirements.txt
to install that same set of packages at the exact same versions.
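For a develop-install from a VCS, the frozen line records the exact revision. An illustrative requirements.txt (the revision hash and the pinned version below are made up for the example) might contain:
-e hg+http://bitbucket.org/andrewgodwin/south/@c3cf7d440a27#egg=South
Django==1.3.1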
If you download or check out the source distribution of a package (the one that has its "setup.py" inside), and the package is based on "setuptools" (which also powers easy_install), you can move into that directory and say:
$ python setup.py develop
and it will create the right symlinks in dist-packages so that the .py files in the source distribution are the ones that get imported, rather than copies installed separately (which is what "setup.py install" would do — create separate copies that don't change immediately when you edit the source code to try a change).
As the other response indicates, you should try reading the "setuptools" documentation to learn more. "setup.py develop" is a really useful feature! Try using it in combination with a virtualenv, and you can "setup.py develop" painlessly and without messing up your system-wide Python with packages you are only developing on temporarily:
http://pypi.python.org/pypi/virtualenv
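Combining the two might look like this (the directory names are illustrative):
$ virtualenv dev_venv
$ source dev_venv/bin/activate
$ cd path/to/source-checkout    # the directory containing setup.py
$ python setup.py develop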
easy_install has support for downloading specific versions. For example:
easy_install python-dateutil==1.4.0
This will install v1.4.0, while the latest version (1.4.1) would be picked if no version was specified.
There is also support for svn checkouts, but using that doesn't give you much benefit over your manual approach. See the easy_install manual for more information.
Being able to switch to specific branches is rarely useful unless you are developing the packages in question, and then it's typically not a good idea to install them in site-packages anyway.
easy_install accepts a URL for the source tree too. It works at least when the sources are in Subversion.
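For example, pointing it at a Subversion trunk URL (this URL is illustrative; the #egg fragment tells easy_install the project name):
easy_install http://svn.example.org/MyProject/trunk#egg=MyProject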