Cannot cache dependencies on GitHub Actions using Pipenv - python

I am trying to cache dependencies for a GitHub Actions workflow. I use Pipenv.
This is my config:
- uses: actions/cache@v1
  with:
    path: ~/.cache/pip
    key: ${{ runner.os }}-pip-${{ hashFiles('**/Pipfile') }}
    restore-keys: |
      ${{ runner.os }}-pip-
I got this config from GitHub's own examples for using pip. I have only changed requirements.txt to Pipfile, since we don't use a requirements.txt; but even with requirements.txt I get the same issue anyway.
The Cache Dependencies step always reports the same issue, both when restoring the cache and again after running the tests. There's no error in the workflow and it finishes as normal; however, it never seems to be able to find or update the dependency cache.

It turned out pipenv needed to be installed before the cache step:
- name: Install pipenv, libpq, and pandoc
  run: |
    sudo apt-get install libpq-dev -y
    pip install pipenv
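
For context, here is a sketch of how the steps can be ordered so the cache action runs after pipenv is available; the checkout and final dependency-install steps are assumptions about what the rest of the workflow looks like:

steps:
  - uses: actions/checkout@v1
  - name: Install pipenv, libpq, and pandoc
    run: |
      sudo apt-get install libpq-dev -y
      pip install pipenv
  - uses: actions/cache@v1
    with:
      path: ~/.cache/pip
      key: ${{ runner.os }}-pip-${{ hashFiles('**/Pipfile') }}
      restore-keys: |
        ${{ runner.os }}-pip-
  # assumed follow-up step: install the project's dependencies with pipenv
  - name: Install dependencies
    run: pipenv install --dev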

Related

Can I have an editable entry in my requirements.txt?

I have an environment into which I previously installed an editable package:
virtualenv venv
. venv/bin/activate
pip install -e ...
pip freeze | grep <pkg_name>
-e git+ssh://git@bitbucket.org/SPACE/REPO.git@HASH#egg=NAME&subdirectory=PATH
I copied the pip freeze result into a req.txt file and installed it into a new environment, and it works.
My question is: how can I make it pull the code to build and install not from a remote server, but from my local project (as happens when running pip install -e)?
It would obviously only work on my machine, assuming the project is still there, but this is what I want...
According to pip documentation (1, 2), yes, you can have an entry like this in your requirements.txt:
-e git+ssh://git@example.com/repo.git
Also, as chepner has pointed out, one could just specify a local URL:
-e file:///home/someone/repo
-e file://C:\Users\someone\repo
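
For the local-project case, a minimal requirements.txt combining ordinary pinned packages with an editable local entry might look like this (the package names and paths here are made up for illustration):

requests==2.28.0
-e file:///home/someone/repo
-e /home/someone/other-repo

A plain local path after -e works the same way as the file:// URL; pip builds and installs straight from that directory.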

How to install local python packages when building jobs under Github Actions?

I am building a Python project -- potion. I want to use GitHub Actions to automate some linting & testing before merging a new branch to master.
To do that, I am using a slight modification of GitHub's recommended Python starter workflow -- Python Application.
During the "Install dependencies" step of the job, I am getting an error because pip is trying to install my local package potion and failing.
The failing line is:
if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
The corresponding error is:
ERROR: git+https@github.com:<github_username>/potion.git@82210990ac6190306ab1183d5e5b9962545f7714#egg=potion is not a valid editable requirement. It should either be a path to a local project or a VCS URL (beginning with bzr+http, bzr+https, bzr+ssh, bzr+sftp, bzr+ftp, bzr+lp, bzr+file, git+http, git+https, git+ssh, git+git, git+file, hg+file, hg+http, hg+https, hg+ssh, hg+static-http, svn+ssh, svn+http, svn+https, svn+svn, svn+file).
Error: Process completed with exit code 1.
Most likely, the job is not able to install the package potion because it is not able to find it. I installed it on my own computer using pip install -e . and later used pip freeze > requirements.txt to create the requirements file.
Since I use this package for testing, I need to install it so that pytest can run its tests properly.
How can I install a local package (which is under active development) on Github Actions?
Here is part of the GitHub workflow file python-app.yml:
...
steps:
  - uses: actions/checkout@v2
  - name: Set up Python 3.8
    uses: actions/setup-python@v2
    with:
      python-version: 3.8
  - name: Install dependencies
    run: |
      python -m pip install --upgrade pip
      pip install flake8 pytest
      if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
  - name: Lint with flake8
...
Note 1: I have already tried changing from git+git@github.com:<github_username>... to git+git@github.com/<github_username>.... Pay attention to / instead of :.
Note 2: I have also tried using other protocols such as git+https, git+ssh, etc.
Note 3: I have also tried removing the alphanumeric @8221... after the git URL ...potion.git
The "package under test", potion in your case, should not be part of the requirements.txt. Instead, simply add your line
pip install -e .
after the line with pip install -r requirements.txt in it. That installs the already checked out package in development mode and makes it available locally for an import.
Alternatively, you could put that line at the latest needed point, i.e. right before you run pytest.
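
Applied to the workflow above, the install step could look like this (a sketch; it assumes potion's setup.py or pyproject.toml sits at the root of the repository that actions/checkout cloned):

- name: Install dependencies
  run: |
    python -m pip install --upgrade pip
    pip install flake8 pytest
    if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
    # install the checked-out potion package itself in development mode
    pip install -e .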

CircleCI does not work with my config.yml file when trying to run 'pytest --html=pytest_report.html', producing the error 'no such option: --html'

What I tried:
placing 'pip install --user -r requirements.txt' in the second run command
placing 'pip install pytest' in the second run command along with 'pip install pytest-html'
both followed by:
pytest --html=pytest_report.html
I am new to CircleCI and to pytest as well.
Here is the steps portion of the config.yml file:
version: 2.1
jobs:
  run_tests:
    docker:
      - image: circleci/python:3.9.1
    steps:
      - checkout
      - run:
          name: Install Python Dependencies
          command: |
            echo 'export PATH=~$PATH:~/.local/bin' >> $BASH_ENV && source $BASH_ENV
            pip install --user -r requirements.txt
      - run:
          name: Run Unit Tests
          command: |
            pip install --user -r requirements.txt
            pytest --html=pytest_report.html
      - store_test_results:
          path: test-reports
      - store_artifacts:
          path: test-reports
workflows:
  build_test:
    jobs:
      - run_tests
--html is not one of the built-in options for pytest -- it likely comes from a plugin
I believe you're looking for pytest-html -- make sure that's listed in your requirements
it's also possible / likely that the pip install --user is installing another copy of pytest into the image, which'll only be available at ~/.local/bin/pytest instead of whatever pytest comes with the CircleCI image
disclaimer: I'm one of the pytest core devs
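
Putting both suggestions together, a sketch of the test step (assuming requirements.txt is updated to list pytest and pytest-html) could be:

- run:
    name: Run Unit Tests
    command: |
      pip install --user -r requirements.txt
      # python -m pytest runs whichever pytest the install above provided,
      # avoiding PATH ambiguity between ~/.local/bin and the image's pytest
      python -m pytest --html=pytest_report.html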

Cache is not being correctly loaded in GitHub Actions

I'm trying to cache the python dependencies of my project. To do that, I have this configuration in my workflow:
- uses: actions/cache@v2
  id: cache
  with:
    path: ~/.cache/pip
    key: pip-${{ runner.os }}-${{ hashFiles('**/requirements.txt') }}-${{ hashFiles('**/requirements_dev.txt') }}
    restore-keys: pip-${{ runner.os }}
- name: Install apt dependencies
  run: |
    sudo apt-get update
    sudo apt-get install gdal-bin
- name: Install dependencies
  if: steps.cache.outputs.cache-hit != 'true'
  run: |
    pip install --upgrade pip==9.0.1
    pip install -r requirements.txt
    pip install -r requirements_dev.txt
This works -- by 'works' I mean that it loads the cache, skips the 'Install dependencies' step, and restores the ~/.cache/pip directory. The problem is that when I try to run the tests, the following error appears:
File "manage.py", line 7, in <module>
from django.core.management import execute_from_command_line
ImportError: No module named django.core.management
Error: Process completed with exit code 1.
Am I caching the incorrect directory? Or what am I doing wrong?
Note: this project is using Python 2.7 on Ubuntu 16.04
As explained here, the problem is that ~/.cache/pip only holds downloaded wheels, not installed packages, so on a cache hit the skipped install step leaves Django missing. Instead, you can cache the whole virtual environment:
- uses: actions/cache@v2
  with:
    path: ${{ env.pythonLocation }}
    key: ${{ env.pythonLocation }}-${{ hashFiles('setup.py') }}-${{ hashFiles('dev-requirements.txt') }}
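
A fuller sketch of that approach, reusing this question's requirements files (it assumes actions/setup-python runs first, since that action is what sets env.pythonLocation):

- uses: actions/setup-python@v2
  with:
    python-version: '2.7'
- uses: actions/cache@v2
  id: cache
  with:
    path: ${{ env.pythonLocation }}
    key: ${{ env.pythonLocation }}-${{ hashFiles('**/requirements.txt') }}-${{ hashFiles('**/requirements_dev.txt') }}
- name: Install dependencies
  if: steps.cache.outputs.cache-hit != 'true'
  run: |
    pip install -r requirements.txt
    pip install -r requirements_dev.txt

Because the whole interpreter directory is cached, a cache hit restores the installed packages themselves, so skipping the install step no longer loses Django.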

How to cache pip packages within Azure Pipelines

Although this source provides a lot of information on caching within Azure Pipelines, it is not clear how to cache Python pip packages for a Python project.
How should one proceed to cache pip packages in an Azure Pipelines build?
According to this, pip caching may be enabled by default in the future. As far as I know, that is not yet the case.
I used the pre-commit documentation as inspiration:
https://pre-commit.com/#azure-pipelines-example
https://github.com/asottile/azure-pipeline-templates/blob/master/job--pre-commit.yml
and configured the following Python pipeline with Anaconda:
pool:
  vmImage: 'ubuntu-latest'
variables:
  CONDA_ENV: foobar-env
  CONDA_HOME: /usr/share/miniconda/envs/$(CONDA_ENV)/
steps:
  - script: echo "##vso[task.prependpath]$CONDA/bin"
    displayName: Add conda to PATH
  - task: Cache@2
    displayName: Use cached Anaconda environment
    inputs:
      key: conda | environment.yml
      path: $(CONDA_HOME)
      cacheHitVar: CONDA_CACHE_RESTORED
  - script: conda env create --file environment.yml
    displayName: Create Anaconda environment (if not restored from cache)
    condition: eq(variables.CONDA_CACHE_RESTORED, 'false')
  - script: |
      source activate $(CONDA_ENV)
      pytest
    displayName: Run unit tests
To cache a standard pip install, use this:
variables:
  # variables are automatically exported as environment variables,
  # so this will override pip's default cache dir
  - name: pip_cache_dir
    value: $(Pipeline.Workspace)/.pip
steps:
  - task: Cache@2
    inputs:
      key: 'pip | "$(Agent.OS)" | requirements.txt'
      restoreKeys: |
        pip | "$(Agent.OS)"
      path: $(pip_cache_dir)
    displayName: Cache pip
  - script: |
      pip install -r requirements.txt
    displayName: "pip install"
I wasn't very happy with the standard pip cache implementation mentioned in the official documentation. You basically always install your dependencies normally, which means that pip performs loads of checks that take up time. Pip will eventually find the cached builds (*.whl, *.tar.gz), but it all takes time. You can opt to use venv or conda instead, but for me that led to buggy situations with unexpected behaviour. What I ended up doing instead was using pip download and pip install separately:
variables:
  pipDownloadDir: $(Pipeline.Workspace)/.pip
steps:
  - task: Cache@2
    displayName: Load cache
    inputs:
      key: 'pip | "$(Agent.OS)" | requirements.txt'
      path: $(pipDownloadDir)
      cacheHitVar: cacheRestored
  - script: pip download -r requirements.txt --dest=$(pipDownloadDir)
    displayName: "Download requirements"
    condition: eq(variables.cacheRestored, 'false')
  - script: pip install -r requirements.txt --no-index --find-links=$(pipDownloadDir)
    displayName: "Install requirements"
