How to omit a dependency when exporting requirements.txt using Poetry - python

I have a Python3 Poetry project with a pyproject.toml file specifying the dependencies:
[tool.poetry.dependencies]
python = "^3.10"
nltk = "^3.7"
numpy = "^1.23.4"
scipy = "^1.9.3"
scikit-learn = "^1.1.3"
joblib = "^1.2.0"
[tool.poetry.dev-dependencies]
pytest = "^5.2"
I export those dependencies to a requirements.txt file using the command poetry export --without-hashes -f requirements.txt --output requirements.txt, resulting in the following requirements.txt file:
click==8.1.3 ; python_version >= "3.10" and python_version < "4.0"
colorama==0.4.6 ; python_version >= "3.10" and python_version < "4.0" and platform_system == "Windows"
joblib==1.2.0 ; python_version >= "3.10" and python_version < "4.0"
nltk==3.8.1 ; python_version >= "3.10" and python_version < "4.0"
numpy==1.24.1 ; python_version >= "3.10" and python_version < "4.0"
regex==2022.10.31 ; python_version >= "3.10" and python_version < "4.0"
scikit-learn==1.2.1 ; python_version >= "3.10" and python_version < "4.0"
scipy==1.9.3 ; python_version >= "3.10" and python_version < "4.0"
threadpoolctl==3.1.0 ; python_version >= "3.10" and python_version < "4.0"
tqdm==4.64.1 ; python_version >= "3.10" and python_version < "4.0"
that I use to install the dependencies when building a Docker image.
My question: How can I omit the colorama dependency from the above list of requirements when calling poetry export --without-hashes -f requirements.txt --output requirements.txt?
Possible solution: I could filter out the line with colorama by producing the requirements.txt file using poetry export --without-hashes -f requirements.txt | grep -v colorama > requirements.txt. But that seems hacky, and it may break if the colorama requirement is ever expressed across multiple lines in that file. Is there a better, less hacky way?
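A slightly more robust variant of the grep filter is to parse each line and match on the distribution name only, so markers or substrings elsewhere on a line can't cause accidental matches. A minimal sketch (the filter_requirements helper is illustrative, not a Poetry feature):

```python
import re

def filter_requirements(text: str, excluded: set[str]) -> str:
    """Drop requirement lines whose distribution name is in `excluded`.

    Understands the `name==1.2.3 ; markers` lines that `poetry export`
    emits, and keeps indented `--hash` continuation lines together with
    the decision made for their parent requirement line.
    """
    kept = []
    skip_continuation = False
    for line in text.splitlines():
        if line.startswith((" ", "\t", "--hash")):
            # Continuation of the previous requirement (e.g. hash lines).
            if not skip_continuation:
                kept.append(line)
            continue
        m = re.match(r"\s*([A-Za-z0-9._-]+)", line)
        name = m.group(1).lower() if m else ""
        skip_continuation = name in excluded
        if not skip_continuation:
            kept.append(line)
    return "\n".join(kept)

reqs = (
    'click==8.1.3 ; python_version >= "3.10"\n'
    'colorama==0.4.6 ; platform_system == "Windows"'
)
print(filter_requirements(reqs, {"colorama"}))
```

You would pipe the output of poetry export --without-hashes -f requirements.txt through this before writing requirements.txt.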
Background: When installing this list of requirements while building the Docker image using pip install -r requirements.txt I get the message
Ignoring colorama: markers 'python_version >= "3.10" and python_version < "4.0" and platform_system == "Windows"' don't match your environment
A coworker thinks that message looks ugly and would like it not to be visible (but personally I don't care). A call to poetry show --tree reveals that the Colorama dependency is required by pytest and is used to make terminal colors work on Windows. Omitting the library as a requirement when installing on Linux is not likely a problem in this context.

Related

ERROR: Could not find a version that satisfies the requirement while dockerizing a python project

I am trying to dockerize a Python project. The problem is that, inside Docker, pip is unable to find many of the packages listed in my requirements.txt file.
My OS is Xubuntu 20.04.
My docker version information is as follows:
Client:
Version: 20.10.7
API version: 1.41
Go version: go1.13.8
Git commit: 20.10.7-0ubuntu1~20.04.1
Built: Wed Aug 4 22:52:25 2021
OS/Arch: linux/amd64
Context: default
Experimental: true
Server:
Engine:
Version: 20.10.7
API version: 1.41 (minimum version 1.12)
Go version: go1.13.8
Git commit: 20.10.7-0ubuntu1~20.04.1
Built: Wed Aug 4 19:07:47 2021
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.5.2-0ubuntu1~20.04.2
GitCommit:
runc:
Version: 1.0.0~rc95-0ubuntu1~20.04.2
GitCommit:
docker-init:
Version: 0.19.0
GitCommit:
This is my dockerfile:
FROM pypy:3
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD [ "pypy3", "./index.py" ]
The requirements.txt file is :
apturl==0.5.2
attrs==20.3.0
Automat==20.2.0
beautifulsoup4==4.9.3
blinker==1.4
blis==0.7.4
bs4==0.0.1
catalogue==2.0.3
catfish==1.4.13
certifi==2019.11.28
cffi==1.14.5
chardet==3.0.4
click==7.1.2
colorama==0.4.3
command-not-found==0.3
constantly==15.1.0
cryptography==3.4.7
cssselect==1.1.0
cupshelpers==1.0
cymem==2.0.5
dbus-python==1.2.16
defer==1.0.6
distro==1.4.0
distro-info===0.23ubuntu1
elasticsearch==7.13.0
entrypoints==0.3
h2==3.2.0
hpack==3.0.0
httplib2==0.14.0
hyperframe==5.2.0
hyperlink==21.0.0
idna==2.8
incremental==21.3.0
itemadapter==0.2.0
itemloaders==1.0.4
Jinja2==2.11.3
jmespath==0.10.0
joblib==1.0.1
keyring==18.0.1
language-selector==0.1
launchpadlib==1.10.13
lazr.restfulclient==0.14.2
lazr.uri==1.0.3
lightdm-gtk-greeter-settings==1.2.2
lxml==4.6.3
MarkupSafe==1.1.1
menulibre==2.2.1
mugshot==0.4.2
murmurhash==1.0.5
netifaces==0.10.4
numpy==1.20.2
oauthlib==3.1.0
olefile==0.46
onboard==1.4.1
packaging==20.9
pandas==1.2.4
parsel==1.6.0
pathy==0.5.2
pexpect==4.6.0
Pillow==7.0.0
preshed==3.0.5
priority==1.3.0
Protego==0.1.16
psutil==5.5.1
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycairo==1.16.2
pycparser==2.20
pycups==1.9.73
pydantic==1.7.3
PyDispatcher==2.0.5
PyGObject==3.36.0
PyJWT==1.7.1
pymacaroons==0.13.0
pymongo==3.11.3
PyNaCl==1.3.0
pyOpenSSL==20.0.1
pyparsing==2.4.7
python-apt==2.0.0+ubuntu0.20.4.5
python-dateutil==2.7.3
python-debian===0.1.36ubuntu1
pytz==2021.1
PyYAML==5.3.1
queuelib==1.6.1
reportlab==3.5.34
requests==2.22.0
requests-unixsocket==0.2.0
scikit-learn==0.24.1
scipy==1.6.3
Scrapy==2.5.0
screen-resolution-extra==0.0.0
SecretStorage==2.3.1
service-identity==18.1.0
sgt-launcher==0.2.5
simplejson==3.16.0
six==1.14.0
sklearn==0.0
smart-open==3.0.0
soupsieve==2.2.1
spacy==3.0.6
spacy-legacy==3.0.5
srsly==2.4.1
systemd-python==234
thinc==8.0.3
threadpoolctl==2.1.0
tqdm==4.60.0
Twisted==21.2.0
typer==0.3.2
ubuntu-advantage-tools==27.0
ubuntu-drivers-common==0.0.0
ufw==0.36
unattended-upgrades==0.1
urllib3==1.25.8
w3lib==1.22.0
wadllib==1.3.3
wasabi==0.8.2
xcffib==0.8.1
xkit==0.0.0
zope.interface==5.4.0
I receive the error ERROR: Could not find a version that satisfies the requirement for the following packages:
apturl==0.5.2
catfish==1.4.13
command-not-found==0.3
cupshelpers==1.0
defer==1.0.6
distro-info===0.23ubuntu1
language-selector==0.1
lightdm-gtk-greeter-settings==1.2.2
menulibre==2.2.1
mugshot==0.4.2
onboard==1.4.1
PyGObject==3.36.0
python-apt==2.0.0+ubuntu0.20.4.5
python-debian===0.1.36ubuntu1
screen-resolution-extra==0.0.0
sgt-launcher==0.2.5
systemd-python==234
ubuntu-advantage-tools==27.0
ubuntu-drivers-common==0.0.0
ufw==0.36
unattended-upgrades==0.1
xkit==0.0.0
(I tried eliminating them one by one until the image could be built, but of course the container then failed to run because those packages were missing.)
I also tried replacing pypy with the normal python image, but I received the same error.
I tried to use the following dockerfile based on an ubuntu image :
FROM ubuntu:focal
SHELL ["/bin/bash", "-xe", "-c"]
ARG DEBIAN_FRONTEND=noninteractive
COPY . /code
ADD requirements.txt ./
RUN apt-get update -q \
&& apt-get install -y -q --no-install-recommends \
python3-wheel \
python3-pip \
gunicorn \
&& if [ -e requirements.txt ]; then \
python3 -m pip install --no-cache-dir \
--disable-pip-version-check \
-r requirements.txt; \
fi \
&& python3 -m pip install \
--no-cache-dir --disable-pip-version-check \
/code/ \
&& apt-get remove -y python3-pip python3-wheel \
&& apt-get autoremove -y \
&& apt-get clean -y \
&& rm -rf /var/lib/apt/lists/* \
&& useradd _gunicorn --no-create-home --user-group
USER _gunicorn
WORKDIR /code
CMD ["gunicorn", \
"--bind", "0.0.0.0:8000", \
"hello_world:app"]
I also got the same result.
I tried to edit the docker DNS options by :
Adding DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4" to the file /etc/default/docker.
Adding { "dns": ["192.168.1.254", "8.8.8.8"] } to the file /etc/docker/daemon.json.
I feel I have run out of ideas :(. Does anyone have an idea of what I can do to make pip install all these packages in a Python image container?
Thanks.
I think David Maze nailed it in the comments for some of the failures: a lot of the Python packages in the fail list are installed via apt together with the Ubuntu packages they are part of. If you look them up (e.g. distro-info, ufw, command-not-found) on https://packages.ubuntu.com/, you'll see that many apt packages ship with a Python library. (In fact, python-apt seems to be an outlier for being on PyPI, as most of them are missing from it completely, and this answer starts by explaining why.)
The above only applies to packages that fail with the following exact message:
ERROR: Could not find a version that satisfies the
requirement <package-name> (from versions: none)
ERROR: No matching distribution found for <package-name>
I found it important to emphasize this because in my case (see the bottom) this was the issue: once I simply omitted the "Ubuntu-shipped packages", I just had to resolve other dependency issues, such as PyGObject (which was a tough one).
Also, there are other variants of the above error message (e.g., here and here), and this Stack Overflow thread, Could not find a version that satisfies the requirement <package>, has a lot of suggestions for those cases.
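For a long requirements file, collecting the names that fail with that message can be scripted rather than done by hand. A minimal sketch (the failing_packages helper is mine; the regex is tuned to the exact message format shown above):

```python
import re

# Example fragment of a pip install log containing the failure messages.
LOG = """\
ERROR: Could not find a version that satisfies the requirement catfish==1.4.13 (from versions: none)
ERROR: No matching distribution found for catfish==1.4.13
"""

def failing_packages(log: str) -> list[str]:
    """Extract distribution names from 'No matching distribution found' lines."""
    return re.findall(r"No matching distribution found for ([A-Za-z0-9._-]+)", log)

print(failing_packages(LOG))  # -> ['catfish']
```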
My case
I inherited a legacy Django app whose dev did a pip freeze before leaving the project two years ago. I simply removed all the version qualifiers (i.e., the == and everything after it on each line) and started the pip install process, which is when I got these errors.
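Removing the qualifiers can be done mechanically; a minimal sketch (the strip_pins helper is illustrative, handling both == and the rarer === pins that appear in lists like the one above):

```python
def strip_pins(requirements: str) -> str:
    """Remove version pins: everything from '==' (or '===') to end of line."""
    out = []
    for line in requirements.splitlines():
        # Split on '===' first so 'name===x' doesn't leave a stray '='.
        name = line.split("===")[0].split("==")[0].strip()
        if name:
            out.append(name)
    return "\n".join(out)

print(strip_pins("Django==2.2.4\ndistro-info===0.23ubuntu1\n"))
```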
I have a theory that needs to be confirmed, but it was true in my case:
The pip freeze was issued on the production server, and it caught every Python package that could be pulled into the project (regardless of whether the project actually used it).
So kind of like capturing the output of pip list vs. pip list --local.
I also had ufw on my list (among many others), which made no sense, as it is a firewall and this project was a simple internal CMS. From then on, I kept re-running pip install and crossing out the packages that errored out, which were (probably) there because of unrelated Ubuntu installs.
(As for PyGObject and co., I was lucky that I used nix-shell to resolve dependency issues, so I only needed to issue
nix-shell -p cairo gobject-introspection
and it found the needed headers automatically. Even the database was dropped in a nix-shell.)

How do I interpret this python dependency tree?

We are using conda to maintain a Python environment, and I'd like to understand why google-cloud-bigquery==1.22.0 is being installed when the latest available version on PyPI is 2.16.1 (https://pypi.org/project/google-cloud-bigquery/2.16.1/) and the latest available version on conda-forge (https://anaconda.org/conda-forge/google-cloud-bigquery) is 2.15.0.
Here's a Dockerfile that builds our conda environment:
FROM debian:stretch-slim
RUN apt-get update && apt-get install curl gnupg -y && rm -rf /var/lib/apt/lists/*
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
RUN curl https://repo.anaconda.com/pkgs/misc/gpgkeys/anaconda.asc | gpg --dearmor > conda.gpg && \
install -o root -g root -m 644 conda.gpg /usr/share/keyrings/conda-archive-keyring.gpg && \
gpg --keyring /usr/share/keyrings/conda-archive-keyring.gpg --no-default-keyring \
--fingerprint 34161F5BF5EB1D4BFBBB8F0A8AEB4F8B29D82806 && \
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/conda-archive-keyring.gpg] https://repo.anaconda.com/pkgs/misc/debrepo/conda stable main" \
> /etc/apt/sources.list.d/conda.list
WORKDIR /tmp
RUN MINICONDA_VERSION=4.9.2 && \
CONDA_VERSION='4.9.*' && \
CONDA_DIR=/opt/conda && \
curl -O https://repo.anaconda.com/miniconda/Miniconda3-py37_${MINICONDA_VERSION}-Linux-x86_64.sh && \
/bin/bash Miniconda3-py37_${MINICONDA_VERSION}-Linux-x86_64.sh -f -b -p $CONDA_DIR && \
rm Miniconda3-py37_${MINICONDA_VERSION}-Linux-x86_64.sh && \
$CONDA_DIR/bin/conda config --system --set auto_update_conda false && \
$CONDA_DIR/bin/conda config --system --set show_channel_urls true && \
$CONDA_DIR/bin/conda config --system --remove channels defaults && \
$CONDA_DIR/bin/conda config --system --add channels main && \
$CONDA_DIR/bin/conda config --system --set env_prompt '({name}) ' && \
$CONDA_DIR/bin/conda config --system --append envs_dirs /opt/conda/envs/ && \
$CONDA_DIR/bin/conda config --system --append pkgs_dirs /opt/conda/pkgs/ && \
$CONDA_DIR/bin/conda update --quiet --yes --all conda="${CONDA_VERSION}" && \
$CONDA_DIR/bin/conda config --system --append channels conda-forge && \
$CONDA_DIR/bin/conda create -n py3 python=3.8
RUN bash -c "source /opt/conda/bin/activate /opt/conda/envs/py3 && conda install \
invoke \
apache-beam \
sh \
pytest \
pytest-xdist \
ipython \
behave \
black \
pylint \
flake8 \
jinja2 \
tenacity \
responses \
tqdm \
google-api-python-client \
google-auth-oauthlib \
google-cloud-monitoring \
google-cloud-bigquery \
google-cloud-storage \
google-cloud-pubsub \
google-cloud-secret-manager \
ipdb \
rope \
pipdeptree"
I build it using docker build . -t conda-env and then use pipdeptree inside the container to give me my dependency tree for google-cloud-bigquery:
docker run \
--rm \
--entrypoint bash \
conda-env \
-c "source /opt/conda/bin/activate /opt/conda/envs/py3 && pipdeptree --packages google-cloud-bigquery"
which gives me this:
google-cloud-bigquery==1.22.0
- google-cloud-core [required: >=1.0.3,<2.0dev, installed: 1.6.0]
- google-api-core [required: >=1.21.0,<2.0.0dev, installed: 1.25.1]
- google-auth [required: >=1.21.1,<2.0dev, installed: 1.28.1]
- cachetools [required: >=2.0.0,<5.0, installed: 4.2.1]
- pyasn1-modules [required: >=0.2.1, installed: 0.2.8]
- pyasn1 [required: >=0.4.6,<0.5.0, installed: 0.4.8]
- rsa [required: >=3.1.4,<5, installed: 4.7.2]
- pyasn1 [required: >=0.1.3, installed: 0.4.8]
- setuptools [required: >=40.3.0, installed: 52.0.0.post20210125]
- six [required: >=1.9.0, installed: 1.15.0]
- googleapis-common-protos [required: >=1.6.0,<2.0dev, installed: 1.53.0]
- protobuf [required: >=3.12.0, installed: 3.14.0]
- six [required: >=1.9, installed: 1.15.0]
- protobuf [required: >=3.12.0, installed: 3.14.0]
- six [required: >=1.9, installed: 1.15.0]
- pytz [required: Any, installed: 2021.1]
- requests [required: >=2.18.0,<3.0.0dev, installed: 2.25.1]
- certifi [required: >=2017.4.17, installed: 2020.12.5]
- chardet [required: >=3.0.2,<5, installed: 3.0.4]
- idna [required: >=2.5,<3, installed: 2.10]
- urllib3 [required: >=1.21.1,<1.27, installed: 1.26.4]
- setuptools [required: >=40.3.0, installed: 52.0.0.post20210125]
- six [required: >=1.13.0, installed: 1.15.0]
- google-auth [required: >=1.24.0,<2.0dev, installed: 1.28.1]
- cachetools [required: >=2.0.0,<5.0, installed: 4.2.1]
- pyasn1-modules [required: >=0.2.1, installed: 0.2.8]
- pyasn1 [required: >=0.4.6,<0.5.0, installed: 0.4.8]
- rsa [required: >=3.1.4,<5, installed: 4.7.2]
- pyasn1 [required: >=0.1.3, installed: 0.4.8]
- setuptools [required: >=40.3.0, installed: 52.0.0.post20210125]
- six [required: >=1.9.0, installed: 1.15.0]
- six [required: >=1.12.0, installed: 1.15.0]
- google-resumable-media [required: >=0.3.1,<0.6.0dev,!=0.4.0, installed: 0.5.1]
- six [required: Any, installed: 1.15.0]
- protobuf [required: >=3.6.0, installed: 3.14.0]
- six [required: >=1.9, installed: 1.15.0]
I have to hold my hand up and admit I simply don't know how to interpret that. For example, does this:
google-cloud-bigquery==1.22.0
- google-cloud-core [required: >=1.0.3,<2.0dev, installed: 1.6.0]
mean that google-cloud-bigquery is forcing google-cloud-core to be ">=1.0.3,<2.0dev", or does it mean google-cloud-core is being forced by something else to be ">=1.0.3,<2.0dev"? I basically don't understand the information being presented to me, so I'm hoping someone can enlighten me. Better still, if someone can tell me what I can do to get a later version of google-cloud-bigquery installed I'd be very grateful, because there are known bugs in 1.22.0.
To answer your last question first:
google-cloud-bigquery==1.22.0
- google-cloud-core [required: >=1.0.3,<2.0dev, installed: 1.6.0]
means that google-cloud-bigquery is installed at version 1.22.0, which requires google-cloud-core at a version between 1.0.3 (inclusive) and 2.0, and that you have version 1.6.0 of it installed.
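The bracketed required: part is a standard requirement specifier, so you can check any candidate version against it yourself; a minimal sketch, assuming the packaging library (which pip itself bundles) is importable:

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

# The specifier google-cloud-bigquery 1.22.0 places on google-cloud-core:
spec = SpecifierSet(">=1.0.3,<2.0dev")

print(Version("1.6.0") in spec)   # the installed version satisfies the range
print(Version("2.16.1") in spec)  # a 2.x release is excluded by <2.0dev
```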
To check what constrains google-cloud-bigquery (which is probably what you are trying to do), add the --reverse flag, like this: pipdeptree --reverse --packages google-cloud-bigquery. The output isn't useful, though, because "you're looking at the wrong side of the tree":
Warning!!! Possibly conflicting dependencies found:
* pylint==2.7.4
- astroid [required: >=2.5.2,<2.7, installed: 2.5]
* flake8==3.9.0
- pycodestyle [required: >=2.7.0,<2.8.0, installed: 2.6.0]
- pyflakes [required: >=2.3.0,<2.4.0, installed: 2.2.0]
------------------------------------------------------------------------
google-cloud-bigquery==1.22.0
So to see the actual constraint, run pipdeptree --reverse and look for google-cloud-bigquery. Then you'll find that urllib3 at version 1.26.4 constrains requests to version 2.25.1, which constrains google-api-core to 1.25.1, which constrains google-cloud-core to 1.6.0, which constrains google-cloud-bigquery to 1.22.0.
If I had to guess, I'd say that one of the mentioned packages is already installed in its respective version (EDIT: even before you install your specific packages), which leads to what you're seeing.
I ran pip install google-cloud-bigquery --upgrade on top of your build, and it worked perfectly fine. So you could either run that at the end, or just upgrade all packages before installing your specific stuff (which I'd personally recommend; updating your base is always a good idea. That said, if your own constraints are "too old", that won't work, and you should fall back to upgrading after you've installed your specific stuff).

Docker failing to find anaconda-client

I'm very new to Docker and am hitting a bug when trying to build a dockerized app. I have a Python script that I want to wrap with Docker. My requirements.txt file begins as follows:
alabaster==0.7.12
anaconda-client==1.7.2
anaconda-navigator==1.9.7
anaconda-project==0.8.3
asn1crypto==1.2.0
astroid==2.3.2
astropy==3.2.2
atomicwrites==1.3.0
attrs==19.3.0
...
and my Dockerfile is:
FROM python:alpine3.7
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
EXPOSE 5000
CMD python ./python_script.py
On running docker build --tag python_app ., I get the following output:
Sending build context to Docker daemon 1.097GB
Step 1/6 : FROM python:alpine3.7
---> 00be2573e9f7
Step 2/6 : COPY . /app
---> 6f46c90dbc6f
Step 3/6 : WORKDIR /app
---> Running in 9458595eba85
Removing intermediate container 9458595eba85
---> 0f1fb57bba19
Step 4/6 : RUN pip install -r requirements.txt
---> Running in 8eb7b6f86dff
Collecting alabaster==0.7.12 (from -r requirements.txt (line 1))
Downloading https://files.pythonhosted.org/packages/10/ad/00b090d23a222943eb0eda509720a404f531a439e803f6538f35136cae9e/alabaster-0.7.12-py2.py3-none-any.whl
Collecting anaconda-client==1.7.2 (from -r requirements.txt (line 2))
Could not find a version that satisfies the requirement anaconda-client==1.7.2 (from -r requirements.txt (line 2)) (from versions: 1.1.1, 1.2.2)
No matching distribution found for anaconda-client==1.7.2 (from -r requirements.txt (line 2))
You are using pip version 19.0.1, however version 19.3.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
The command '/bin/sh -c pip install -r requirements.txt' returned a non-zero code: 1
Can I just drop anaconda-client from requirements.txt? I built the file using pip freeze and don't directly import it in the code, though I would like to keep the whole list if possible for simplicity.
That is because there is no anaconda-client==1.7.2 on PyPI; the latest version published there is 1.2.2:
https://pypi.org/project/anaconda-client/
I think you are actually referring to this conda package, which does have a version 1.7.2:
https://anaconda.org/anaconda/anaconda-client
In that case you need to use conda to install it, and I suggest you use an Anaconda base image rather than python:alpine3.7.
So either install it through conda, or downgrade the pin to pip install anaconda-client==1.2.2; that works.

Where did tox look when searching for "basepython"?

Currently, we have this in the Build Environment on CloudBees:
curl -s -o use-python https://repository-cloudbees.forge.cloudbees.com/distributions/ci-addons/python/use-python
chmod u+x use-python
# Get all Python distributions from CloudBees
supported=("2.7.14" "3.4.7" "3.5.4" "3.6.4")
for version in "${supported[@]}"; do
export PYTHON_VERSION=$version
. ./use-python
done
# Sanity checks.
python --version
python2 --version
python2.7 --version
python3 --version
python3.4 --version
python3.5 --version
python3.6 --version
which python
which python2
which python3
which python2.7
which python3.4
which python3.5
which python3.6
In tox.ini, we've declared basepython = python3.4 at https://github.com/nltk/nltk/blob/develop/tox.ini#L84:
[testenv:py34-jenkins]
basepython = python3.4
commands = {toxinidir}/jenkins.sh
And our setup on the cloudbees CI server could find our Python3.4 interpreter, as we see with which python: https://nltk.ci.cloudbees.com/job/nltk/808/TOXENV=py34-jenkins,jdk=jdk8latestOnlineInstall/console
But when it ran tox, it threw an InterpreterNotFound error:
+ which python3.4
/scratch/jenkins/python/python-3.4.7-x86_64/bin/python3.4
+ which python3.5
/scratch/jenkins/python/python-3.5.4-x86_64/bin/python3.5
+ which python3.6
/scratch/jenkins/python/python-3.6.4-x86_64/bin/python3.6
+ mkdir -p /home/jenkins/lib
+ '[' -f /home/jenkins/lib/libblas.so ']'
+ ln -sf /usr/lib64/libblas.so.3 /home/jenkins/lib/libblas.so
+ '[' -f /home/jenkins/lib/liblapack.so ']'
+ ln -sf /usr/lib64/liblapack.so.3 /home/jenkins/lib/liblapack.so
[EnvInject] - Script executed successfully.
[jdk8latestOnlineInstall] $ /scratch/jenkins/shiningpanda/jobs/280e15ca/tools/bin/python -c "import pip; pip.main();" install --upgrade tox
Requirement already up-to-date: tox in /scratch/jenkins/shiningpanda/jobs/280e15ca/tools/lib/python2.7/site-packages
Requirement already up-to-date: virtualenv>=1.11.2 in /scratch/jenkins/shiningpanda/jobs/280e15ca/tools/lib/python2.7/site-packages (from tox)
Requirement already up-to-date: six in /scratch/jenkins/shiningpanda/jobs/280e15ca/tools/lib/python2.7/site-packages (from tox)
Requirement already up-to-date: py>=1.4.17 in /scratch/jenkins/shiningpanda/jobs/280e15ca/tools/lib/python2.7/site-packages (from tox)
Requirement already up-to-date: pluggy<1.0,>=0.3.0 in /scratch/jenkins/shiningpanda/jobs/280e15ca/tools/lib/python2.7/site-packages (from tox)
You are using pip version 7.1.2, however version 9.0.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
[jdk8latestOnlineInstall] $ /scratch/jenkins/shiningpanda/jobs/280e15ca/tools/bin/python -c "import tox; tox.cmdline();" -c tox.ini
GLOB sdist-make: /scratch/jenkins/workspace/nltk/TOXENV/py34-jenkins/jdk/jdk8latestOnlineInstall/setup.py
py34-jenkins create: /scratch/jenkins/workspace/nltk/TOXENV/py34-jenkins/jdk/jdk8latestOnlineInstall/.tox/py34-jenkins
ERROR: InterpreterNotFound: python3.4
A couple of related questions:
Where did tox look when searching for basepython?
Is it looking at PYTHONPATH?
How can I set up tox so that I can append custom path(s) to where it looks for basepython?
python3.4 is on the PATH environment variable, so why isn't tox looking there?
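To the questions above: as far as I can tell, tox resolves basepython = python3.4 by looking the command up on the PATH of the process that runs tox itself; it does not consult PYTHONPATH (which controls module search, not interpreter discovery). If the CI wrapper that launches tox inherits a different PATH than the shell that ran the which sanity checks, the lookup can fail even though python3.4 was found earlier. One way to sidestep the search entirely is to give basepython an absolute path (tox accepts a name or a path); a sketch using the interpreter location from the log above:

```ini
[testenv:py34-jenkins]
basepython = /scratch/jenkins/python/python-3.4.7-x86_64/bin/python3.4
commands = {toxinidir}/jenkins.sh
```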

travis secure env variables not used in tox

I can see in my Travis build log that the env variables are exported correctly:
Setting environment variables from .travis.yml
$ export K_API_KEY=[secure]
$ export K_PRIVATE_KEY=[secure]
$ export TOXENV=py27
However, they aren't picked up in my tests, which use a basic config.py file that should just read the env variables like this (API_KEY = os.environ['K_API_KEY']); see the relevant Travis log:
$ source ~/virtualenv/python2.7/bin/activate
$ python --version
Python 2.7.9
$ pip --version
pip 6.0.7 from /home/travis/virtualenv/python2.7.9/lib/python2.7/site-packages (python 2.7)
install
1.34s$ pip install -U tox
Collecting tox
Downloading tox-2.3.1-py2.py3-none-any.whl (40kB)
100% |################################| 40kB 1.2MB/s
Collecting virtualenv>=1.11.2 (from tox)
Downloading virtualenv-15.0.2-py2.py3-none-any.whl (1.8MB)
100% |################################| 1.8MB 262kB/s
Collecting py>=1.4.17 from https://pypi.python.org/packages/19/f2/4b71181a49a4673a12c8f5075b8744c5feb0ed9eba352dd22512d2c04d47/py-1.4.31-py2.py3-none-any.whl#md5=aa18874c9b4d1e5ab53e025008e43387 (from tox)
Downloading py-1.4.31-py2.py3-none-any.whl (81kB)
100% |################################| 86kB 3.5MB/s
Collecting pluggy<0.4.0,>=0.3.0 (from tox)
Downloading pluggy-0.3.1-py2.py3-none-any.whl
Installing collected packages: pluggy, py, virtualenv, tox
Found existing installation: py 1.4.26
Uninstalling py-1.4.26:
Successfully uninstalled py-1.4.26
Successfully installed pluggy-0.3.1 py-1.4.31 tox-2.3.1 virtualenv-15.0.2
$ tox
GLOB sdist-make: /home/travis/build/euri10/pykraken/setup.py
py27 create: /home/travis/build/euri10/pykraken/.tox/py27
py27 installdeps: -r/home/travis/build/euri10/pykraken/requirements_dev.txt
py27 inst: /home/travis/build/euri10/pykraken/.tox/dist/pykraken-0.1.0.zip
py27 installed: alabaster==0.7.8,argh==0.26.2,Babel==2.3.4,bumpversion==0.5.3,cffi==1.6.0,coverage==4.0,cryptography==1.3.2,docutils==0.12,enum34==1.1.6,flake8==2.4.1,idna==2.1,ipaddress==1.0.16,Jinja2==2.8,MarkupSafe==0.23,mccabe==0.3.1,pathtools==0.1.2,pep8==1.7.0,pluggy==0.3.1,py==1.4.31,pyasn1==0.1.9,pycparser==2.14,pyflakes==0.8.1,Pygments==2.1.3,pykraken==0.1.0,pytest==2.8.3,pytz==2016.4,PyYAML==3.11,requests==2.10.0,six==1.10.0,snowballstemmer==1.2.1,Sphinx==1.3.1,sphinx-rtd-theme==0.1.9,tox==2.1.1,virtualenv==15.0.2,watchdog==0.8.3
py27 runtests: PYTHONHASHSEED='2032885705'
py27 runtests: commands[0] | py.test --basetemp=/home/travis/build/euri10/pykraken/.tox/py27/tmp
============================= test session starts ==============================
platform linux2 -- Python 2.7.9, pytest-2.8.3, py-1.4.31, pluggy-0.3.1
rootdir: /home/travis/build/euri10/pykraken, inifile:
collected 0 items / 2 errors
==================================== ERRORS ====================================
____________________ ERROR collecting tests/test_private.py ____________________
tests/test_private.py:5: in <module>
from pykraken.config import PROXY, API_KEY, PRIVATE_KEY
pykraken/config.py:4: in <module>
API_KEY = os.environ['K_API_KEY']
.tox/py27/lib/python2.7/UserDict.py:23: in __getitem__
raise KeyError(key)
E KeyError: 'K_API_KEY'
I suspect it's my tox.ini (see below) that doesn't pass those variables through, but I'm not sure. Any ideas?
[tox]
envlist = py27, flake8
; envlist = py26, py27, py33, py34, py35
[flake8]
max-line-length= 100
exclude= tests/*
[testenv]
setenv =
PYTHONPATH = {toxinidir}:{toxinidir}/pykraken
deps =
-r{toxinidir}/requirements_dev.txt
commands =
py.test --basetemp={envtmpdir}
As rightly pointed out by @jonrsharpe, the solution is to use the passenv option in tox.ini, as described in the documentation:
[testenv]
setenv =
PYTHONPATH = {toxinidir}:{toxinidir}/pykraken
passenv =
K_API_KEY
K_PRIVATE_KEY
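The underlying reason: tox builds each test environment with a filtered copy of the parent process's environment, so anything not whitelisted via passenv is simply absent, and os.environ['K_API_KEY'] raises KeyError. A defensive variant of the config.py lookup (a sketch, not the project's actual code) fails with a clearer message:

```python
import os

def require_env(name: str) -> str:
    """Fetch an env var, pointing at tox's passenv option when it is absent."""
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"{name} not set; add it to passenv in tox.ini")
    return value

os.environ.setdefault("K_API_KEY", "demo-key")  # simulate Travis's export
print(require_env("K_API_KEY"))
```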
