Pytest doesn't recognize -n option after pytest-xdist installation - python

I have installed pytest-xdist on top of a working pytest environment:
pip install pytest-xdist
and I have received this output
Downloading/unpacking pytest-xdist
Downloading pytest-xdist-1.10.tar.gz
Running setup.py egg_info for package pytest-xdist
no previously-included directories found matching '.hg'
Downloading/unpacking execnet>=1.1 (from pytest-xdist)
Downloading execnet-1.2.0.tar.gz (163kB): 163kB downloaded
Running setup.py egg_info for package execnet
warning: no files found matching 'conftest.py'
Requirement already satisfied (use --upgrade to upgrade): pytest>=2.4.2 in /Users/sal/Documents/code/Python/VirtualEnv/Spring/lib/python2.7/site-packages (from pytest-xdist)
Requirement already satisfied (use --upgrade to upgrade): py>=1.4.20 in /Users/sal/Documents/code/Python/VirtualEnv/Spring/lib/python2.7/site-packages (from pytest>=2.4.2->pytest-xdist)
Installing collected packages: pytest-xdist, execnet
Running setup.py install for pytest-xdist
no previously-included directories found matching '.hg'
Running setup.py install for execnet
warning: no files found matching 'conftest.py'
Successfully installed pytest-xdist execnet
Cleaning up...
At this point I tried to run my test suite in parallel:
py.test -n 4
but I received this output instead
usage: py.test [options] [file_or_dir] [file_or_dir] [...]
py.test: error: unrecognized arguments: -n
The output of py.test --version is:
This is pytest version 2.6.2, imported from /Users/sal/Documents/code/Python/VirtualEnv/Spring/lib/python2.7/site-packages/pytest.pyc
setuptools registered plugins:
pytest-capturelog-0.7 at /Users/sal/Documents/code/Python/VirtualEnv/Spring/lib/python2.7/site-packages/pytest_capturelog.pyc
pytest-contextfixture-0.1.1 at /Users/sal/Documents/code/Python/VirtualEnv/Spring/lib/python2.7/site-packages/pytest_contextfixture.pyc
pytest-cov-1.7.0 at /Users/sal/Documents/code/Python/VirtualEnv/Spring/lib/python2.7/site-packages/pytest_cov.pyc
pytest-django-2.6.2 at /Users/sal/Documents/code/Python/VirtualEnv/Spring/lib/python2.7/site-packages/pytest_django/plugin.pyc
pytest-pydev-0.1 at /Users/sal/Documents/code/Python/VirtualEnv/Spring/lib/python2.7/site-packages/pytest_pydev.pyc
pytest-runfailed-0.3 at /Users/sal/Documents/code/Python/VirtualEnv/Spring/lib/python2.7/site-packages/pytest_runfailed.pyc
and pytest-xdist is effectively missing.
What did I do wrong? Thanks.

Like user2412166, I suffered the same issue. Unlike user2412166, the solution in my case was to relax the permissions on the xdist and pytest_xdist-1.14.dist-info system directories installed by pip3.
Some backstory: For security, I run a strict umask on my system prohibiting all access to other users and write access to group users by default:
$ umask
027
While this is usually a good thing, it also occasionally gets me into trouble. Installing pytest-xdist via pip3 under this umask:
$ sudo pip3 install pytest-xdist
...resulted in pip3 prohibiting read and execution access to non-superusers – which had better be only me:
$ ls -l /usr/lib64/python3.4/site-packages/xdist
drwxr-x--- 3 root root 4.0K 2016-04-10 01:19 xdist/
$ ls -l /usr/lib64/python3.4/site-packages/pytest_xdist-1.14.dist-info
drwxr-x--- 3 root root 4.0K 2016-04-10 01:19 pytest_xdist-1.14.dist-info/
While pip3 was not wrong in doing so, py.test was (...arguably!) wrong in silently ignoring rather than explicitly reporting an obvious permissions issue during plugin detection.
This was trivially fixable by recursively granting other users both read and directory execution permissions for the afflicted system directories:
$ chmod -R o+rX /usr/lib64/python3.4/site-packages/xdist
$ chmod -R o+rX /usr/lib64/python3.4/site-packages/pytest_xdist-1.14.dist-info
The proof is the command-line pudding:
$ ls -l /usr/lib64/python3.4/site-packages/xdist
drwxr-xr-x 3 root root 4.0K 2016-04-10 01:19 xdist/
$ ls -l /usr/lib64/python3.4/site-packages/pytest_xdist-1.14.dist-info
drwxr-xr-x 3 root root 4.0K 2016-04-10 01:19 pytest_xdist-1.14.dist-info/
$ py.test --version
This is pytest version 2.8.7, imported from /usr/lib64/python3.4/site-packages/pytest.py
setuptools registered plugins:
pytest-xdist-1.14 at /usr/lib64/python3.4/site-packages/xdist/looponfail.py
pytest-xdist-1.14 at /usr/lib64/python3.4/site-packages/xdist/plugin.py
pytest-xdist-1.14 at /usr/lib64/python3.4/site-packages/xdist/boxed.py
Thus was the unclear made clear, the buggy debugged, and the slow tests parallelized quickly.
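The effect of that umask on freshly created directories can be reproduced in isolation (demo_dir is just a throwaway name for illustration):

```shell
# Run in a subshell so the umask change does not leak into your session.
(
  umask 027
  mkdir demo_dir
  ls -ld demo_dir   # drwxr-x--- : "other" users get no access at all
  rmdir demo_dir
)
```

The default directory mode 777 masked by 027 yields 750, which is exactly the drwxr-x--- seen in the listings above.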

I had the same problem. The problem is not with the version. Somehow py.test cannot see where xdist is. Here's what worked for me:
pip install pytest --user
pip install pytest-xdist --user
export PATH=$HOME/.local/bin:$PATH

Please try py.test --version and look at the output, which explains where things are imported from, including plugins. Very likely you are not running the py.test you think you are running.
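If the version output is ambiguous, a quick way to confirm which interpreter and plugin copies are in play is to ask Python directly (a minimal sketch, not tied to any particular setup):

```python
# Check whether the same Python that runs pytest can actually import xdist.
import importlib.util
import sys

print(sys.executable)  # the interpreter actually being used
spec = importlib.util.find_spec("xdist")
print(spec.origin if spec else "pytest-xdist is not importable here")
```

If the second line prints the "not importable" message, pytest-xdist was installed into a different environment than the one running your tests.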

I ran into this problem and figured out it was because of a really old setuptools (the default version that ships with CentOS 6.7):
pip list | grep setuptools
setuptools (0.6rc11)
So first upgrade setuptools
sudo pip install --upgrade setuptools
Then reinstall pytest and pytest-xdist
sudo pip install --upgrade pytest pytest-xdist --force-reinstall
After this pytest was able to discover the xdist plugin.
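Since plugin discovery goes through setuptools entry points, checking which setuptools version the interpreter actually imports is a useful first step (a sketch; it assumes setuptools is importable at all):

```shell
# Print the setuptools version seen by the interpreter that runs pytest.
python3 -c "import setuptools; print(setuptools.__version__)"
```

If this prints something ancient like 0.6rc11, upgrading it as shown above is worth trying before anything else.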

Related

python pip module installations produces WARNING Target directory /python/bin already exists

In a Docker container, I have Python and virtualenv installed. I am installing multiple pip modules but eventually receive this error:
WARNING: Target directory /python/bin already exists. Specify --upgrade to force replacement.
Here's how I'm installing the pip modules:
python3 -m pip install virtualenv
python3 -m venv cer
source cer/bin/activate
pip3 install pandas -t ./python
pip3 install numpy -t ./python
pip3 install requests -t ./python # WARNING HAPPENS HERE
pip3 install xlsxwriter -t ./python # WARNING HAPPENS HERE
Contents of ./python/bin/ after pandas installation:
ls -lh
-rwxr-xr-x 1 root root 216 Dec 8 17:50 f2py
-rwxr-xr-x 1 root root 216 Dec 8 17:50 f2py3
-rwxr-xr-x 1 root root 216 Dec 8 17:50 f2py3.7
Contents of ./python/bin/ after numpy installation is the same as above. Contents of ./python/bin/ after requests installation is the same but that's when the Target directory warning appears.
If I run pip3 install requests -t ./python --upgrade like the WARNING suggests the contents of ./python/bin/ are overwritten with this:
-rwxr-xr-x 1 root root 244 Dec 8 17:56 normalizer
but in reality I would want the previous contents to be merged rather than overwritten, so all the installed pip modules have what they need.
How would I go about achieving this?
Or should these modules be installed in another way to avoid this problem?
Ultimately I need to upload these modules / dependencies for a lambda function I'm creating.
The solution was to install the pip modules together, in a single pip invocation:
python3 -m pip install virtualenv
python3 -m venv cer
source cer/bin/activate
pip3 install pandas numpy requests xlsxwriter -t ./python

Pytest does not find pyYAML in Travis CI tests that have PyPI and conda package dependencies

I am trying to set up automatic testing for my Python package using Travis CI. My Python package depends on Iris as well as other packages such as PyYAML, numpy, etc. It also depends on a PyPI package (ScriptEngine). Now, I would like to set up a Travis CI environment using conda (to install Iris) and pip (to install the PyPI package as well as checking the requirements for PyYAML and numpy). I would then like to install my package using pip install ..
To test if this works, I have written one simple Pytest test that imports PyYAML.
I am currently trying to do this using this .travis.yml file:
language: python
python:
- "3.6"
- "3.7"
- "3.8"
# command to install dependencies
install:
- sudo apt-get update
- wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh -O miniconda.sh
- bash miniconda.sh -b -p $HOME/miniconda
- source "$HOME/miniconda/etc/profile.d/conda.sh"
- hash -r
- conda config --set always_yes yes --set changeps1 no
- conda update -q conda
# Useful for debugging any issues with conda
- conda info -a
- conda env create -f tests/test-environment.yml python=$TRAVIS_PYTHON_VERSION
- conda activate test-environment
- conda install pip
- conda install -c conda-forge iris
- pip install -r requirements.txt
- pip install .
# command to run tests
script: pytest
Note: This is the first time for me to really work with Travis CI. This script is a mixture of examples from the Conda docs as well as the Travis CI docs.
Pytest then fails to import PyYAML (although it gets installed because of the requirements.txt as well as the Iris dependencies):
Here is the confirmation from the logs that it got installed:
Requirement already satisfied: pyYAML>=5.1 in /home/travis/miniconda/envs/test-environment/lib/python3.8/site-packages (from ece-4-monitoring==0.1.0) (5.3.1)
And this is the Error from Pytest:
$ pytest
============================= test session starts ==============================
platform linux -- Python 3.7.1, pytest-4.3.1, py-1.7.0, pluggy-0.8.0
rootdir: /home/travis/build/valentinaschueller/ece-4-monitoring, inifile:
collected 1 item / 1 errors
==================================== ERRORS ====================================
_________________ ERROR collecting tests/test_file_handling.py _________________
ImportError while importing test module '/home/travis/build/valentinaschueller/sciptengine-tasks-ecearth/tests/test_file_handling.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
tests/test_file_handling.py:3: in <module>
import helpers.file_handling as file_handling
helpers/file_handling.py:1: in <module>
import yaml
E ModuleNotFoundError: No module named 'yaml'
!!!!!!!!!!!!!!!!!!! Interrupted: 1 errors during collection !!!!!!!!!!!!!!!!!!!!
=========================== 1 error in 0.12 seconds ============================
The command "pytest" exited with 2.
If I try this exact setup using a conda virtual environment locally on my computer, I do not get this problem. Why does this not work on the Travis CI virtual machine?
As cel suggested in their comment: I could fix the problem by explicitly requiring pytest in the requirements.txt.
It is not necessary to activate test-environment in the script part. However, a very helpful tip was to use echo $(which pytest) && pytest instead of just pytest to check if pytest is installed at /home/travis/miniconda/envs/test-environment/bin/pytest.
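That debug tip can be dropped straight into the .travis.yml from the question (a sketch; only the script line changes):

```yaml
# command to run tests
script:
  - echo $(which pytest) && pytest
```

If the echoed path is not /home/travis/miniconda/envs/test-environment/bin/pytest, the tests are running under a different interpreter than the one the dependencies were installed into.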

Where are cached wheel (.whl) files stored?

$ python3 -m venv ~/venvs/vtest
$ source ~/venvs/vtest/bin/activate
(vtest) $ pip install numpy
Collecting numpy
Cache entry deserialization failed, entry ignored
Using cached https://files.pythonhosted.org/packages/d2/ab/43e678759326f728de861edbef34b8e2ad1b1490505f20e0d1f0716c3bf4/numpy-1.17.4-cp36-cp36m-manylinux1_x86_64.whl
Installing collected packages: numpy
Successfully installed numpy-1.17.4
(vtest) $
I'm looking for where this wheel numpy-1.17.4-cp36-cp36m-manylinux1_x86_64.whl has been cached.
$ sudo updatedb
$ locate numpy-1.17.4
$ # nada ;(
The documentation at https://pip.pypa.io/en/stable/reference/pip_install/#wheel-cache tells us that pip will read from the subdirectory wheels within the pip cache directory and use any packages found there.
$ pip --version
pip 9.0.1 from ~/venvs/vtest/lib/python3.6/site-packages (python 3.6)
$
To answer Hamza Khurshid: numpy is not in ~/.cache/pip/wheels:
$ find ~/.cache/pip/wheels -name '*.whl' |grep -i numpy
$
It looks like .cache/pip/wheels is only used for locally built wheels, not for downloaded ones. Should I use export PIP_DOWNLOAD_CACHE=$HOME/.pip/cache?
The message
Using cached https://files.pythonhosted.org/packages/d2/ab/43e678759326f728de861edbef34b8e2ad1b1490505f20e0d1f0716c3bf4/numpy-1.17.4-cp36-cp36m-manylinux1_x86_64.whl
means pip is using the HTTP cache, not the wheel cache (which is only used for locally-built wheels, like you mentioned).
The name of the file in the HTTP cache is the sha224 of the URL being requested.
You can retrieve the file like this:
$ pwd
/home/user/.cache/pip/http
$ find . -name "$(printf 'https://files.pythonhosted.org/packages/65/26/32b8464df2a97e6dd1b656ed26b2c194606c16fe163c695a992b36c11cdf/six-1.13.0-py2.py3-none-any.whl' | sha224sum - | awk '{print $1}')"
./f/6/0/2/d/f602daffc1b0025a464d60b3e9f8b1f77a4538b550a46d67018978db
The format of the file is not stable though, and depends on pip version. For specifics you can see the implementation that's used in the latest cachecontrol, which pip uses.
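As a rough sketch, the cache key computation looks like this for older pip versions (the SHA-224 hash and the sharded directory layout follow the description above; newer pip/cachecontrol releases may compute it differently):

```python
import hashlib

# URL of the wheel pip reported as "Using cached ..." above.
url = ("https://files.pythonhosted.org/packages/"
       "d2/ab/43e678759326f728de861edbef34b8e2ad1b1490505f20e0d1f0716c3bf4/"
       "numpy-1.17.4-cp36-cp36m-manylinux1_x86_64.whl")
key = hashlib.sha224(url.encode("ascii")).hexdigest()

# Entries are sharded one hex character per directory level, five levels deep.
path = "/".join(list(key[:5]) + [key])
print("~/.cache/pip/http/" + path)
```

Running this against the six-1.13.0 URL from the find example should reproduce the ./f/6/0/2/d/f602... path shown there.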
If you want to get the actual file, an easier way is to use pip download, which will take the file from the cache into your current directory if it matches the URL that would be otherwise downloaded.
Refer to the following path for finding WHL cache files.
In windows,
%USERPROFILE%\AppData\Local\pip\cache
In Unix,
~/.cache/pip
In macOS,
~/Library/Caches/pip

Difference between sudo -H pip install and pip --user install

I am wondering what is the difference between these two commands (I have the feeling that they are identical):
sudo -H pip install <package>
pip --user install <package>
More information:
From the sudo manpage:
-H, --set-home
Request that the security policy set the HOME environment
variable to the home directory specified by the target user's
password database entry. Depending on the policy, this may be
the default behavior.
And the pip user guide: https://pip.pypa.io/en/stable/user_guide/
Related questions:
What is the difference between pip install and sudo pip install?
What is the purpose of "pip install --user ..."? and
sudo pip install VS pip install --user
But none of them talk about the sudo -H option or the precise difference between the two.
sudo is short for 'superuser do'. It simply runs the command with root privileges, which may be useful if you are installing to a directory you normally wouldn't have access to.
However, in the examples you have given, the two commands would function identically, as you don't need root privileges to run pip install --user.
The difference comes down to the permissions that are given to the package, and the location where the package is installed. When you run a command as root, the package will be installed with root permissions.
Here's an example:
Running sudo -H pip3 install coloredlogs results in the following:
$ sudo pip3 show coloredlogs | grep Location
Location: /usr/local/lib/python3.8/dist-packages
$ ls -l /usr/local/lib/python3.8/dist-packages
drwxr-sr-x 4 root staff 4096 Feb 25 01:14 coloredlogs
$ which coloredlogs
/usr/local/bin/coloredlogs
Running pip3 install --user <package> results in the following:
$ pip3 show coloredlogs | grep Location
Location: /home/josh/.local/lib/python3.8/site-packages
$ ls -l /home/josh/.local/lib/python3.8/site-packages
drwxrwxr-x 4 josh josh 4096 Feb 25 01:14 coloredlogs
$ which coloredlogs
coloredlogs not found
Notice the location differences between the two, and also notice that the package's executable isn't on the PATH when installed with the --user flag. If for some reason I wanted to call it directly, I would need to add /home/josh/.local/bin to my PATH.
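To see exactly where --user installs land for your own interpreter, Python can report it directly (the paths in the comments are examples; yours will differ by platform and Python version):

```shell
# Base directory for per-user installs; scripts go in bin/ under this.
python3 -m site --user-base   # e.g. /home/josh/.local
# Where the packages themselves end up.
python3 -m site --user-site   # e.g. /home/josh/.local/lib/python3.8/site-packages
```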

When zipping a virtual env for AWS Lambda deployment, what can I leave out?

Introduction
I'm just starting to use AWS Lambda and as much as I hate it, I freaking love it. I've created a Makefile to help me package my virtual env and ship to S3. After I figured out that cryptography requires a hidden file in the site-packages directory #GRRR, I started wondering how I can further improve my packaging process.
Context
This is what a new virtualenv on a new Amazon Linux AMI EC2 instance looks like.
$ uname -srvm
Linux 4.4.51-40.58.amzn1.x86_64 #1 SMP Tue Feb 28 21:57:17 UTC 2017 x86_64
$ cat /etc/system-release
Amazon Linux AMI release 2016.09
$ virtualenv --version
15.1.0
$ pip --version
pip 9.0.1 from /usr/local/lib/python2.7/site-packages (python 2.7)
$ virtualenv temp
New python executable in /home/ec2-user/temp/bin/python2.7
Also creating executable in /home/ec2-user/temp/bin/python
Installing setuptools, pip, wheel...done.
fig. 1
$ ls -a temp/lib/python2.7/site-packages/
. packaging-16.8.dist-info setuptools-34.3.2.dist-info
.. pip six-1.10.0.dist-info
appdirs-1.4.3.dist-info pip-9.0.1.dist-info six.py
appdirs.py pkg_resources six.pyc
appdirs.pyc pyparsing-2.2.0.dist-info wheel
easy_install.py pyparsing.py wheel-0.29.0.dist-info
easy_install.pyc pyparsing.pyc
packaging setuptools
fig. 2
I found that in order to do the python development I needed (using paramiko), I had to do this to prepare (prior to fig.1 & fig.2):
sudo yum install gcc python27-devel libffi-devel openssl-devel
sudo -H pip install --upgrade pip virtualenv
fig. 3
Question
Of those site-packages in fig. 2, which ones can I omit from the zip I send to AWS?
For the sake of comparison, this is what my complete project's virtualenv has in it (and the only thing I pip installed was paramiko):
$ ls -a aws_lambda_project/lib/python2.7/site-packages/
. packaging
.. packaging-16.8.dist-info
appdirs-1.4.3.dist-info paramiko
appdirs.py paramiko-2.1.2.dist-info
appdirs.pyc pip
asn1crypto pip-9.0.1.dist-info
asn1crypto-0.22.0.dist-info pkg_resources
cffi pyasn1
cffi-1.9.1.dist-info pyasn1-0.2.3.dist-info
_cffi_backend.so pycparser
cryptography pycparser-2.17.dist-info
cryptography-1.8.1.dist-info pyparsing-2.2.0.dist-info
easy_install.py pyparsing.py
easy_install.pyc pyparsing.pyc
enum setuptools
enum34-1.1.6.dist-info setuptools-34.3.2.dist-info
idna six-1.10.0.dist-info
idna-2.5.dist-info six.py
ipaddress-1.0.18.dist-info six.pyc
ipaddress.py wheel
ipaddress.pyc wheel-0.29.0.dist-info
.libs_cffi_backend
This works for me, please give it a try:
$ mkdir paramiko-lambda && cd paramiko-lambda
$ virtualenv env --python=python2.7 && source env/bin/activate
$ pip freeze > pre_paramiko.txt
$ pip install paramiko
$ pip freeze > post_paramiko.txt
I then put the following in a script to make sure it works locally:
from __future__ import print_function
import paramiko
def handler(event, context):
print(paramiko.__version__)
ssh_client = paramiko.SSHClient()
if __name__ == '__main__':
handler(event=None, context=None)
The last two lines are optional, just a simple way to test the script locally. To see what was installed along with paramiko, I compared the two text files:
$ diff -u pre_paramiko.txt post_paramiko.txt
--- pre_paramiko.txt
+++ post_paramiko.txt
@@ -1,4 +1,13 @@
appdirs==1.4.3
+asn1crypto==0.22.0
+cffi==1.10.0
+cryptography==1.8.1
+enum34==1.1.6
+idna==2.5
+ipaddress==1.0.18
packaging==16.8
+paramiko==2.1.2
+pyasn1==0.2.3
+pycparser==2.17
pyparsing==2.2.0
six==1.10.0
The modules with a + were installed along with paramiko, so they must be included in the .zip archive that gets uploaded to AWS Lambda. It would be easy to write a bash script that takes the output of the diff command and automates the creation of the .zip archive, but I'm just going to enter them manually.
$ cd env/lib/python2.7/site-packages
$ zip -x "*.pyc" -r ../../../../paramiko_lambda.zip packaging asn1crypto cffi cryptography enum idna ipaddress paramiko pyasn1 pycparser
$ cd ../../../../
$ zip -r paramiko_lambda.zip paramiko_lambda.py
I needed to add the packaging folder, probably because of print(paramiko.__version__), so it may not be necessary for you. The paramiko_lambda.zip file was 2.5 MB and, while not huge, contained a lot of unnecessary data, specifically *.pyc files. Excluding *.pyc files reduced it to 1.5 MB.
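For the record, the scripted version this answer alludes to could look roughly like this (the snapshot files below are tiny stand-ins for real pip freeze output; the filenames are assumptions):

```shell
# Stand-in snapshots; in practice these come from `pip freeze` before/after.
printf 'appdirs==1.4.3\nsix==1.10.0\n' > pre_paramiko.txt
printf 'appdirs==1.4.3\nparamiko==2.1.2\nsix==1.10.0\n' > post_paramiko.txt

# Lines present only in the post-install snapshot are the newly added packages.
new_pkgs=$(grep -Fvxf pre_paramiko.txt post_paramiko.txt | cut -d'=' -f1)
echo "$new_pkgs"
# ...which could then feed: zip -x "*.pyc" -r paramiko_lambda.zip $new_pkgs
```

grep -Fvxf keeps only the whole lines absent from the first snapshot, and cut strips the ==version suffix.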
