Preface: I am currently working with some legacy machines in the field and need to install specific versions of libpython on them. These machines do not have access to the internet, so apt-get and pip are not options. They are currently running Ubuntu 16.
I am not entirely sure what libpython even is, but I was able to install it with sudo apt-get install libpython3.6-dev on my personal machine. However, there is a very high probability that I will need a specific version.
Question: How do I install a specific version of libpython so that the shared object (.so) and static library (.a) files are created properly?
Here are the files that were created via my 3.6 install:
/usr/lib/python3.6/config-3.6m-i386-linux-gnu/libpython3.6.so
/usr/lib/python3.6/config-3.6m-i386-linux-gnu/libpython3.6m.so
/usr/lib/python3.6/config-3.6m-i386-linux-gnu/libpython3.6m.a
/usr/lib/python3.6/config-3.6m-i386-linux-gnu/libpython3.6m-pic.a
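To double-check which package owns these files (and therefore which packages to fetch for the offline machines), dpkg can tell you; the files under the config directory should come from libpython3.6-dev, while the runtime shared library itself ships in libpython3.6:

dpkg -S /usr/lib/python3.6/config-3.6m-i386-linux-gnu/libpython3.6m.a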
More Info:
There is a proprietary .deb installer that fails with errors like the following:
dpkg: dependency problems prevent configuration of [Program Name]:
[Program Name] depends on libpython3.6 (>= 3.6.5); however:
Package libpython3.6 is not installed.
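One way to get a specific version onto the offline machines, assuming you have a second machine with internet access running the same Ubuntu release and architecture (the 3.6.5-3 version string below is only an illustration; list what is actually available first):

# On the connected machine: list available versions, then download
# (not install) the exact .deb files:
apt-cache madison libpython3.6
apt-get download libpython3.6=3.6.5-3
# You may need to download its dependencies the same way
# (e.g. libpython3.6-minimal, libpython3.6-stdlib).
# Copy the .deb files over, then on the offline machine:
sudo dpkg -i libpython3.6*.deb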
Related
I'm working on a DevOps project for a client who's using Python. Though I've never used it professionally, I know a few things, such as using virtualenv and pip - though not in great detail.
When I looked at the staging box, which I am trying to prepare for running a functional test suite, I saw chaos: tons of packages installed globally, and those installed inside the virtualenv not matching the project's requirements.txt. OK, thought I, there's a lot of cleaning up to do. Starting with the global packages.
However, I ran into a problem at once:
➜ ~ pip uninstall PyYAML
Not uninstalling PyYAML at /usr/lib/python2.7/dist-packages, owned by OS
OK, someone must've done a 'sudo pip install PyYAML'. I think I know how to fix it:
➜ ~ sudo pip uninstall PyYAML
Not uninstalling PyYAML at /usr/lib/python2.7/dist-packages, owned by OS
Uh, apparently I don't.
A search revealed some similar conflicts caused by users installing packages while bypassing pip, but I'm not convinced - why would pip even know about them if that were the case? Unless the "other" way places them in the same location pip would use - but if that's the case, why would it fail to uninstall under sudo?
Pip refuses to uninstall these packages because Debian developers patched it to behave that way. This allows you to use both pip and apt simultaneously. The "original" pip program doesn't have this behaviour.
Update: my answer is relevant only to old versions of pip. The latest versions are configured to modify only files that reside in pip's own "home directory" - that is, /usr/local/lib/python3.* on Debian. With the latest tools, you will get these errors when you try to remove a package installed by apt:
For pip 9.0.1-2.3~ubuntu1 (installed from the Ubuntu repository):
Not uninstalling pyyaml at /usr/lib/python3/dist-packages, outside environment /usr
For pip 10.0.1 (original, installed from pypi.org):
Cannot uninstall 'PyYAML'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.
The point is not that pip cannot uninstall the package because you don't have enough permissions, but that it is not a package installed through pip, so pip doesn't want to uninstall it.
dist-packages is where packages installed by the OS package manager reside; since they are handled by another package manager (e.g. apt on Ubuntu/Debian, pacman on Arch, rpm/yum on CentOS, ...), pip won't touch them (but it still has to know about them, as they are installed packages and can be used to satisfy dependencies of pip-installed packages).
You should also probably avoid touching them unless you use the correct package manager, and even then, they may have been installed automatically to satisfy the dependencies of some program, so you may not be able to remove them without breaking it. This can usually be checked quite easily, although the exact way depends on the precise Linux distribution you are using.
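For example, on Debian/Ubuntu you can ask dpkg which package owns a file and then list what depends on that package before removing anything (the path below is illustrative):

dpkg -S /usr/lib/python2.7/dist-packages/yaml/__init__.py   # owning package
apt-cache rdepends python-yaml                               # what depends on it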
I have a python package that requires the python development libraries and headers to be installed. Is there any way to easily install these in setup.py install and pip install?
The closest thing I have come up with so far is to either just tell users to install them manually, do individual apt, yum, etc. installs in setup.py, or manually download and build the Python libs, which would still likely require specialization depending on the OS.
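For reference, the manual installs in question are one-liners, though the package names differ per distribution (shown here for Python 3; the Python 2 equivalents are python-dev / python-devel):

sudo apt-get install python3-dev    # Debian/Ubuntu
sudo yum install python3-devel      # RHEL/CentOS/Fedora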
For a project I am working on, I am using Debian (8) as the base OS. The target I am developing for is an ARM-based platform, so for easy cross compiling I am using the multiarch functionality that Debian provides.
Unfortunately, I run into an issue when I try to install Python for both my host system and the system I am cross compiling for. It looks like they cannot be installed next to each other.
When I try to install Python for both architectures using apt-get (apt-get install python python:armhf), I get this error:
Reading package lists... Done
Building dependency tree... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
python : Depends: python2.7 (>= 2.7.9-1~) but it is not going to be installed
PreDepends: python-minimal (= 2.7.9-1) but it is not going to be installed
Conflicts: python:armhf but 2.7.9-1 is to be installed
python:armhf : Conflicts: python but 2.7.9-1 is to be installed
If I first install python for my host system and then try to install python for armhf, apt wants to remove the first python installation again.
Anybody seen this before? Any idea how to solve this?
Multiarch as of Debian Jessie does not allow the parallel installation of executables:
The package python contains executables that are installed to /usr/bin (e.g. pdb, pydoc, ...).
The package python:armhf also contains those executables, and they would also be installed to /usr/bin.
Therefore python and python:armhf cannot be installed at the same time, since the executables of one package would overwrite the executables of the other.
The good thing is that you do not need two Python interpreters. In your case I would just install the interpreter needed for the host architecture (e.g. python:amd64). Note that installing build dependencies with a command such as sudo apt-get build-dep -a armhf PACKAGE-NAME may sometimes fail, and you'll have to work out which packages need to be installed manually.
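A sketch of that setup, assuming you only need the armhf libraries and headers (not the /usr/bin executables) for cross compiling; whether the -dev packages co-install cleanly depends on them being marked Multi-Arch: same:

sudo dpkg --add-architecture armhf
sudo apt-get update
sudo apt-get install python                    # host-architecture interpreter only
sudo apt-get install libpython2.7-dev:armhf    # armhf headers and libraries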
I am working on an EC2 VM running Linux (I'm fairly new to Linux and Bash) which came with Python 2.6 installed. I upgraded to Python 2.7. When I try to install new modules, they install in /usr/lib/python2.6/site-packages, but I need them to install in /usr/lib/python2.7/site-packages. I've tried a bunch of different ways to update PYTHONPATH, which I've found in various other posts on Stack Overflow and other sites, but to no avail. Some I've tried are:
PYTHONPATH=$PYTHONPATH:/usr/lib/python2.7/site-packages
export PYTHONPATH
PYTHONPATH="/usr/lib/python2.7/site-packages:$PYTHONPATH"
How can I update the install path to the new 2.7 path?
You covered how Python 2.7 was installed (a manual installation), but how are you installing your modules?
If you sudo yum install <python-package>, you are going about this the system-level (distribution-specific) way of getting packages installed, which means packages will only be put in the system Python location - in your case, the site-packages directory under python2.6.
If you use sudo pip install <python-package>, it may work - provided your upgrade didn't break the default Python installation that yum itself needs (refer to Upgrade python without breaking yum).
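One way to be sure packages land in the 2.7 tree is to run pip through the interpreter you want to target (a sketch, assuming pip is installed for 2.7):

python2.7 -m pip --version             # confirm which pip/python pairing this is
sudo python2.7 -m pip install <python-package>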
With virtualenv, you can specify isolated, local locations into which you can install Python packages, keeping them separate from the system level. You can pin a virtualenv to any Python version available on your system, guaranteeing the right set of libraries at the right versions (both Python and the packages) for the needs of a particular application. This means you don't have to deal with system/distribution-level Python path issues, which can be a huge source of headaches. For example, your distro might ship a package that depends on an old version of sqlalchemy while your application needs the most recent version; with virtualenv you can mask out the system-level package and install the latest version locally.
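A minimal sketch of that workflow (the paths and the venv name are examples):

virtualenv -p /usr/bin/python2.7 ~/venvs/myapp   # pin the venv to 2.7
source ~/venvs/myapp/bin/activate
pip install sqlalchemy                            # installed in the venv only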
The output of pip freeze on my machine has included the following odd line:
command-not-found==0.2.44
When trying to install requirements on a different machine, I got the obvious "No distributions at all found for command-not-found==0.2.44". Is this a pip bug? Or is there a real Python package of that name, one which does not exist on PyPI?
Indeed, as mentioned in the follow-up comments, Ubuntu has a Python package, installed via dpkg/apt, called "python-commandnotfound":
$ apt-cache search command-not-found
command-not-found - Suggest installation of packages in interactive bash sessions
command-not-found-data - Set of data files for command-not-found.
python-commandnotfound - Python 2 bindings for command-not-found.
python3-commandnotfound - Python 3 bindings for command-not-found.
As this is provided via apt and not available in the PyPI repo, you won't be able to install it via pip, but pip will still see that it is installed. For the purposes of listing installed packages, pip doesn't care whether a package was installed via apt, easy_install, pip, manually, etc.
In short, if you actually need it on another host (which I assume you don't), you'll need to apt-get install python-commandnotfound.
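If the goal is simply a requirements.txt that installs cleanly on other hosts, one pragmatic option is to filter the apt-provided entry out when freezing (a sketch):

pip freeze | grep -v '^command-not-found==' > requirements.txt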