Testing a virtual environment (virtualenv) - Python

I'm not sure if it makes sense, but I would like to be able to test my virtual environments: to check that everything a particular project needs is indeed installed from its requirements files, and that none of the requirements/dependencies are missing.
How could I do it?

Use yolk to spot out-of-date dependencies:
$ yolk --show-updates
Paste 1.7.2 (1.7.5.1)
PasteDeploy 1.3.3 (1.5.0)
PasteScript 1.7.3 (1.7.5)
coverage 3.4 (3.6)
…
To install missing ones, the usual way is to have a requirements.txt for pip install -r. If you mean how to initially build one of those, then running e.g. pylint on your project will uncover unsatisfied imports.
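For a quick sanity check of an environment against its requirements, newer pip can do part of this natively (pip check verifies that installed packages have compatible dependencies):
$ pip install -r requirements.txt
$ pip check
No broken requirements found.
If you want to verify a requirements file entry by entry, here is a minimal sketch using pkg_resources; it assumes a requirements.txt of plain name==version lines in the current directory:
import pkg_resources

with open("requirements.txt") as f:
    for line in f:
        req = line.strip()
        if not req or req.startswith("#"):
            continue  # skip blanks and comments
        try:
            pkg_resources.require(req)  # raises if missing or at the wrong version
        except (pkg_resources.DistributionNotFound,
                pkg_resources.VersionConflict) as exc:
            print("UNSATISFIED: %s (%s)" % (req, exc))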

Related

Pip requirements installation fails in Travis due to idna version conflict

One of my Travis build tests has started to fail with the following error:
The conflict is caused by:
The user requested idna==3.1
requests 2.25.1 depends on idna<3 and >=2.5
To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip attempt to solve the dependency conflict
However, this runs fine on my local machine. For example:
(venv) C:\Users\Asus\PycharmProjects\elastic-migrate>tox -e py38
GLOB sdist-make: C:\Users\Asus\PycharmProjects\elastic-migrate\setup.py
py38 create: C:\Users\Asus\PycharmProjects\elastic-migrate\.tox\py38
py38 installdeps: -rrequirements.txt
py38 inst: C:\Users\Asus\PycharmProjects\elastic-migrate\.tox\.tmp\package\1\elastic-migrate-0.1.0.dev126+g8e5eb23.zip
py38 installed: appdirs==1.4.4,atomicwrites==1.4.0,attrs==20.3.0,certifi==2020.12.5,cfgv==3.2.0,chardet==4.0.0,click==7.1.2,click-log==0.3.2,codecov==2.1.11,colorama==0.4.4,coverage==5.3.1,distlib==0.3.1,elastic-migrate @ file:///C:/Users/Asus/PycharmProjects/elastic-migrate/.tox/.tmp/package/1/elastic-migrate-0.1.0.dev126%2Bg8e5eb23.zip,filelock==3.0.12,flake8==3.8.4,identify==1.5.10,idna==2.10,importlib-metadata==3.3.0,iniconfig==1.1.1,jsonschema==3.2.0,mccabe==0.6.1,more-itertools==8.6.0,nodeenv==1.5.0,packaging==20.8,pluggy==0.13.1,pre-commit==2.9.3,py==1.10.0,pycodestyle==2.6.0,pyfakefs==4.3.3,pyflakes==2.2.0,pyparsing==2.4.7,pyrsistent==0.17.3,pytest==6.2.1,pytest-cov==2.10.1,pytest-mock==3.4.0,PyYAML==5.3.1,requests==2.25.1,requests-mock==1.8.0,setuptools-scm==5.0.1,six==1.15.0,SQLAlchemy==1.3.22,toml==0.10.2,tox==3.20.1,urllib3==1.26.2,validator-collection==1.5.0,virtualenv==20.2.2,wcwidth==0.2.5,zipp==3.4.0
py38 run-test-pre: PYTHONHASHSEED='473'
For reference:
My .travis.yml file
My tox.ini file
This started happening after I tried to add Python 3.9 support to the project and pyup subsequently upgraded the dependencies. Digging into it a little, I've found that others are facing the same issue, but I am unable to find a satisfactory way to go about it. What is the recommended way to handle tox environment dependencies? A single requirements.txt file doesn't seem to be the right way of doing it.
Historically, pip didn't have a proper dependency resolver, so if you asked it to install a package without any version constraint, you'd get the newest version of the package, even if it conflicted with other packages you had already installed.
However, with pip 20.3 this changed: pip now has a stricter dependency resolver and will complain if any of your sub-dependencies are incompatible.
As a quick fix, you can pin idna to 2.10 in your requirements.txt (the newest release that still satisfies requests' idna<3 constraint). As a longer-term solution, you can adopt a tool like pip-tools, where you pin only your top-level dependencies in a requirements.in file and run pip-compile to generate the requirements.txt. This way there is an explicit delineation between top-level dependencies and sub-dependencies, and the tool resolves sub-dependency conflicts for you.
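A minimal sketch of that pip-tools workflow (the file contents here are illustrative):
# requirements.in - top-level dependencies only
requests==2.25.1
codecov
$ pip install pip-tools
$ pip-compile requirements.in     # resolves the full tree, writes a pinned requirements.txt
$ pip install -r requirements.txt
pip-compile then picks a compatible idna (and every other sub-dependency) for you, instead of you pinning it by hand.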

pipenv install installs dependencies every time / PyCharm doesn't recognize them

I'm having a variety of problems with my pipenv setup (see my other question about differences between users even after using Pipfile and Pipfile.lock with explicit versions), and I just noticed something else that seems funky.
It turns out that in my project folder (with both a Pipfile and Pipfile.lock created, an initial pipenv install having been run, and without pipenv shell invoked), I can run pipenv install as many times as I want, and every time it says that it is installing 74 dependencies. Does this mean the pipenv install isn't taking effect, or does it just mean it's running through the dependencies to make sure they are installed?
It seems like there might be a problem, because when I open the project folder in PyCharm, it gives me an alert ("Package requirements...") with the option to install requirements from Pipfile.lock.
I'm on the latest PyCharm, which is set up to use the pipenv environment I created with pipenv install, and I can confirm it's using that environment from PyCharm->Preferences->Project->Project Interpreter, which shows the right virtualenv for this folder.
But it seems that both pipenv install and PyCharm don't think the dependencies have been installed.
To answer your second question: the requirements are not being installed again. Every time you run pipenv install it will say that it's installing all requirements from your Pipfile.lock, but if you run pipenv install -v for verbose output, you'll see things like the following:
Installed version (4.1.2) is most up-to-date (past versions: 4.1.2)
Requirement already up-to-date: whitenoise==4.1.2 in c:\users\mihai\.virtualenvs\pipenvtest-1zyry8jn\lib\site-packages (from -r C:\Users\Mihai\AppData\Local\Temp\pipenv-1th31ie1-requirements\pipenv-r4e3zcr7-requirement.txt (line 1))
(4.1.2)
Since it is already installed, we are trusting this package without checking its hash. To ensure a completely repeatable environment, install into an empty virtualenv.
Cleaning up...
Removed build tracker 'C:\\Users\\Mihai\\AppData\\Local\\Temp\\pip-req-tracker-ip_gjf7h'
So to answer your question, it just runs through them to check if they're installed, installing them only if necessary.
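If you want a command that installs strictly what the lockfile pins and nothing else, pipenv also has a sync subcommand; for example:
$ pipenv install -v   # verbose: shows the "Requirement already up-to-date" checks above
$ pipenv sync         # install exactly the versions pinned in Pipfile.lock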

Install a new package from requirements.txt without upgrading dependencies which are already satisfied

I am using requirements.txt to specify the package dependencies used in my Python application. Everything seems to work fine for packages that either have no internal dependencies or whose dependencies are not already installed.
The issue occurs when I try to install a package which has a nested dependency on some other package, and an older version of that dependency is already installed.
I know I can avoid this when installing a package manually by using pip install -U --no-deps <package_name>. I want to understand how to do this through requirements.txt, as the deployment and requirements installation is an automated process.
Note:
The already installed package is not something I am directly using in my project, but is part of a different project on the same server.
Thanks in advance.
Dependency resolution is a fairly complicated problem. A requirements.txt just specifies your dependencies, with optional version ranges. If you want to "lock" your transitive dependencies (dependencies of dependencies) in place, you would have to produce a requirements.txt that contains exact versions of every package you install, with something like pip freeze. This doesn't solve the problem, but it would at least point out on install which dependencies conflict, so that you can manually pick the right versions.
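For example (a hedged sketch; the --no-deps flag from the question carries over to a requirements file):
$ pip freeze > requirements.txt               # pin the exact version of everything installed
$ pip install --no-deps -r requirements.txt   # install the pins without touching their dependencies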
That being said, the new (as of writing) officially supported tool for managing application dependencies is Pipenv. This tool will both manage the exact versions of transitive dependencies for you (so you won't have to maintain a requirements.txt manually) and isolate the packages your code requires from the rest of the system (it does this using the virtualenv tool under the hood). This isolation should fix your problem with breaking a colocated project, since your project can have different versions of libraries than the rest of the system.
(TL;DR Try using Pipenv and see if your problem just disappears)
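A minimal sketch of that Pipenv workflow (the package name is just an example):
$ pip install pipenv
$ cd myproject
$ pipenv install requests    # records it in Pipfile, pins the whole tree in Pipfile.lock
$ pipenv shell               # work inside the isolated virtualenv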

pip uninstall fails with "owned by OS" - even under sudo

I'm working on a DevOps project for a client who's using Python. Though I've never used it professionally, I know a few things, such as using virtualenv and pip - though not in great detail.
When I looked at the staging box, which I am trying to prepare for running a functional test suite, I saw chaos: tons of packages installed globally, and those installed inside a virtualenv not matching the project's requirements.txt. OK, thought I, there's a lot of cleaning up to do. Starting with the global packages.
However, I ran into a problem at once:
➜ ~ pip uninstall PyYAML
Not uninstalling PyYAML at /usr/lib/python2.7/dist-packages, owned by OS
OK, someone must've done a 'sudo pip install PyYAML'. I think I know how to fix it:
➜ ~ sudo pip uninstall PyYAML
Not uninstalling PyYAML at /usr/lib/python2.7/dist-packages, owned by OS
Uh, apparently I don't.
A search revealed some similar conflicts caused by users installing packages bypassing pip, but I'm not convinced - why would pip even know about them, if that was the case? Unless the "other" way is placing them in the same location pip would use - but if that's the case, why would it fail to uninstall under sudo?
Pip refuses to uninstall these packages because Debian developers patched it to behave that way. This allows you to use both pip and apt simultaneously. The "original" pip program doesn't have this behavior.
Update: my answer is relevant only to old versions of pip. Recent pip is configured to modify only files which reside in its own directory - that is, /usr/local/lib/python3.* on Debian. With current tools, you will get these errors when you try to remove a package installed by apt:
For pip 9.0.1-2.3~ubuntu1 (installed from Ubuntu repository):
Not uninstalling pyyaml at /usr/lib/python3/dist-packages, outside environment /usr
For pip 10.0.1 (original, installed from pypi.org):
Cannot uninstall 'PyYAML'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.
The point is not that pip cannot uninstall the package because you don't have enough permissions, but that it was not installed through pip, so pip doesn't want to uninstall it.
dist-packages is where packages installed by the OS package manager reside; as they are handled by another package manager (e.g. apt on Ubuntu/Debian, pacman on Arch, rpm/yum on CentOS, ... ) pip won't touch them (but still has to know about them as they are installed packages, so they can be used to satisfy dependencies of pip-installed packages).
You should also probably avoid touching them unless you use the correct package manager; and even then, they may have been installed automatically to satisfy the dependencies of some program, so you may not be able to remove them without breaking it. This can usually be checked quite easily, although the exact way depends on the Linux distribution you are using.
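On Debian/Ubuntu, for instance, that check might look like this (python-yaml being the Debian package name for PyYAML on Python 2):
$ dpkg -S /usr/lib/python2.7/dist-packages/yaml   # which OS package owns these files?
python-yaml: /usr/lib/python2.7/dist-packages/yaml
$ apt-cache rdepends python-yaml                  # what would break if it were removed?
$ sudo apt-get remove python-yaml                 # remove it via the OS package manager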

How to handle different versions of python protobuf

My Python package contains a lot of files compiled by python-protobuf (python2-protobuf-2.5.0 on Arch Linux). I installed the package on an Ubuntu server 12.04.3 (which has python-protobuf-2.4.1), tried to run the code, and hit the following error:
from google.protobuf.internal import enum_type_wrapper
ImportError: cannot import name enum_type_wrapper
I think it's because the protobuf modules in my package were compiled by protobuf-2.5.0, and they do not work with protobuf-2.4.1.
I have no idea of the environments in which my code may run; the version of protobuf may vary. How can I make my package work with both protobuf 2.4 and 2.5?
(A possible way: include two different sets of protobuf libraries (one compiled by 2.4.1, the other compiled by 2.5.0) in my package, get the google.protobuf version at runtime, and select which set to import. Is that possible?)
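For example, a hedged sketch of that idea, keying off the very import that fails on 2.4 (the generated_24/generated_25 module names are hypothetical):
try:
    # this import is the one that fails on protobuf 2.4 (see the error above)
    from google.protobuf.internal import enum_type_wrapper
    HAVE_PB25 = True
except ImportError:
    HAVE_PB25 = False

if HAVE_PB25:
    from mypackage.generated_25 import messages_pb2  # hypothetical: generated by protoc 2.5.x
else:
    from mypackage.generated_24 import messages_pb2  # hypothetical: generated by protoc 2.4.x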
You need to specify the version of protobuf that your package will work with in your setup.py, in the install_requires list: install_requires=['protobuf>=2.5.0']. With a Python package, you can put just the name, or pin the exact versions it runs with using ==. I believe you can also exclude specific versions with !=.
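A minimal setup.py sketch (the package name is illustrative):
from setuptools import setup, find_packages

setup(
    name="mypackage",                      # hypothetical name
    version="0.1.0",
    packages=find_packages(),
    install_requires=["protobuf>=2.5.0"],  # the pin discussed above
)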
If you are not packaging it with a setup.py, you should set up a virtualenv and put a requirements.txt file with all the specific Python packages and versions in the root of the project.
That might look like:
$ cd ../project
$ virtualenv project_venv
$ source project_venv/bin/activate
$ cd project
$ pip install 'protobuf>=2.5.0'   # quoted so the shell doesn't treat >= as a redirect
$ pip freeze > ./requirements.txt
Then someone you distribute to can activate their virtualenv and do:
$ pip install -r requirements.txt
Make sure your package works from a fresh virtualenv by installing it with that method. This is also a good check before installing via a setup.py. You want to make sure your requirements will get anyone working who does a fresh sudo python setup.py install, or python setup.py install in a virtualenv context.
You can exit a virtualenv context with:
$ deactivate
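That fresh-environment check might look like this (paths and the import are illustrative):
$ virtualenv /tmp/fresh_venv              # throwaway environment
$ source /tmp/fresh_venv/bin/activate
$ pip install -r requirements.txt
$ python -c "import mypackage"            # hypothetical smoke test
$ deactivate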
Your best bet may be to include a copy of the protobuf runtime library with your package, maybe under a different package name. Then you can make sure that it matches the version of your generated code.
Another option is to invoke protoc as part of the installation process, so you get whatever version is available on the host.
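A hedged sketch of that (paths are illustrative, and protoc must be present on the host):
$ protoc --proto_path=proto --python_out=mypackage/generated proto/messages.proto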
I don't think packaging multiple versions of your generated code sounds like a good idea -- you'll just have problems again when the next protobuf release comes out.
