I'm building a staged pipeline (env prep, tests, code) and have hit a blocker. It seems like each stage is its own individual process: my requirements.txt is installed correctly, but the test stage then raises a ModuleNotFoundError. I'd appreciate any hints on how to make this work :)
yaml:
trigger: none
parameters:
- name: "input_files"
  type: object
  default: ['a-rg', 't-rg', 'd-rg', 'p-rg']
stages:
- stage: 'Env_prep'
  jobs:
  - job: "install_requirements"
    steps:
    - script: |
        python -m pip install --upgrade pip
        python -m pip install -r requirements.txt
- stage: 'Tests'
  jobs:
  - job: 'Run_tests'
    steps:
    - script: |
        python -m pytest -v tests/variableGroups_tests.py
Different jobs and stages can be executed on different agents in Azure Pipelines, so nothing installed in one job is guaranteed to be present in the next. In your case, installing the requirements is a direct prerequisite of running the tests, so everything should be in one job:
trigger: none
parameters:
- name: "input_files"
  type: object
  default: ['a-rg', 't-rg', 'd-rg', 'p-rg']
stages:
- stage: Test
  jobs:
  - job:
    steps:
    - script: |
        python -m pip install --upgrade pip
        python -m pip install -r requirements.txt
      displayName: Install Required Components
    - script: |
        python -m pytest -v tests/variableGroups_tests.py
      displayName: Run Tests
Breaking those into separate script steps isn't even necessary unless you want the log output to be separate in the console.
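For reference, the fully collapsed variant (the same commands as above, merged into a single script step; the displayName is arbitrary) would look like this:
- script: |
    python -m pip install --upgrade pip
    python -m pip install -r requirements.txt
    python -m pytest -v tests/variableGroups_tests.py
  displayName: Install and Test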
I'm trying to create a GitLab CI pipeline that will install Python packages via pip, which I can then cache and use in later stages without needing to reinstall them each time.
I have followed the CI docs on how to do this, but I'm running into an issue. The docs say to create a virtual environment and install via pip there; however, after creating and activating the venv, the packages aren't installed in the venv. Instead they're installed in
/builds//ca-cert-checker/venv/lib/python3.10/site-packages
Also, in a separate stage, once the cache has been downloaded, the job looks for a path that doesn't exist:
venv/bin/python
.gitlab-ci.yml
stages:
  - setup
  - before_test

variables:
  PYTHON_IMG: "python:3.10"
  PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"

cache:
  paths:
    - .cache/pip
    - venv/

python_install:
  image: $PYTHON_IMG
  stage: setup
  script:
    - python3 --version
    - pip install virtualenv
    - virtualenv venv
    - source venv/bin/activate
    - pip install -r requirements.txt
    - pytest --version  # debug message
    - pip show pytest  # debug message

pytest_ver:
  stage: before_test
  image: $PYTHON_IMG
  script:
    - ls venv  # debug message
    - ls .cache/pip  # debug message
    - pytest --version  # debug message
In the pipeline, this causes the setup stage to run and cache successfully.
Successful cache creation:
Creating cache default... .cache/pip: found 185 matching files and
directories venv/: found 3292 matching files and directories
Output from pip show pytest:
Location: /builds//ca-cert-checker/venv/lib/python3.10/site-packages
So pip isn't installing within venv, which I think is the issue.
When I run the before_test stage I get the following output:
// Omitted for brevity
Checking cache for default...
Downloading cache.zip from https://storage.googleapis.com/gitlab-com-runners-cache/project/<project-number>/default
WARNING: venv/bin/python: chmod venv/bin/python: no such file or directory (suppressing repeats)
Successfully extracted cache
Executing "step_script" stage of the job script 00:00
Using docker image sha256:33ceb4320f06dbd22ca43809042a31851df207827b4fc45cd6c9323013dff7c7 for python:3.10 with digest python#sha256:b58c3f2846e201f5fc6b654e43f131f5a8702f8d568130302d77fbdfd9230362 ...
$ ls venv
bin
lib
lib64
pyvenv.cfg
share
$ ls .cache/pip
http
selfcheck
$ pytest --version
/bin/bash: line 128: pytest: command not found
Cleaning up project directory and file based variables 00:01
ERROR: Job failed: exit code 1
Any help / advice on how to get the pip dependencies to install in the venv and cache properly would be appreciated!
I see that your installation is done in a different job (and even in two different steps), but each job runs as a separate instance, so the installation step does not retain anything for the next job. You should do the installation in a before_script; pip will find the cache:
image: python:3.10

stages:
  - setup
  - before_test

variables:
  PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"

cache:
  paths:
    - .cache/pip
    - venv/

before_script:
  - python3 --version
  - pip install virtualenv
  - virtualenv venv
  - source venv/bin/activate
  - pip install -r requirements.txt
  - pytest --version  # debug message
  - pip show pytest  # debug message

pytest_ver:
  stage: before_test
  script:
    - ls venv  # debug message
    - ls .cache/pip  # debug message
    - pytest --version  # debug message
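One refinement worth considering (my addition, not part of the original answer): GitLab can derive the cache key from requirements.txt, so the cache is rebuilt only when the dependency list changes. A minimal sketch, assuming a GitLab version that supports cache:key:files:
cache:
  key:
    files:
      - requirements.txt  # cache is invalidated when this file changes
  paths:
    - .cache/pip
    - venv/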
What I tried:
- placing 'pip install --user -r requirements.txt' in the second run command
- placing 'pip install pytest' in the second run command along with 'pip install pytest-html'
Both were followed by:
pytest --html=pytest_report.html
I am new to CircleCI and to pytest as well. Here is the steps portion of the config.yml file:
version: 2.1
jobs:
  run_tests:
    docker:
      - image: circleci/python:3.9.1
    steps:
      - checkout
      - run:
          name: Install Python Dependencies
          command: |
            echo 'export PATH=~$PATH:~/.local/bin' >> $BASH_ENV && source $BASH_ENV
            pip install --user -r requirements.txt
      - run:
          name: Run Unit Tests
          command: |
            pip install --user -r requirements.txt
            pytest --html=pytest_report.html
      - store_test_results:
          path: test-reports
      - store_artifacts:
          path: test-reports
workflows:
  build_test:
    jobs:
      - run_tests
--html is not one of the built-in options for pytest -- it likely comes from a plugin.
I believe you're looking for pytest-html -- make sure that's listed in your requirements.
It's also possible / likely that the pip install --user is installing another copy of pytest into the image, which will only be available at ~/.local/bin/pytest instead of whatever pytest comes with the CircleCI image.
Disclaimer: I'm one of the pytest core devs.
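Putting those two points together, one possible fix for the test step (a sketch on my part; it assumes requirements.txt lists both pytest and pytest-html) is to keep the --user install but launch pytest through the interpreter, so the user-site copy and its plugins are found regardless of PATH:
- run:
    name: Run Unit Tests
    command: |
      pip install --user -r requirements.txt
      # python -m pytest picks up user-site packages whether or not
      # ~/.local/bin is on PATH
      python -m pytest --html=pytest_report.html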
Although this source provides a lot of information on caching within Azure Pipelines, it is not clear how to cache Python pip packages for a Python project.
How should one proceed to cache pip packages in an Azure Pipelines build?
According to this, pip caching may be enabled by default in the future. As far as I know, that is not yet the case.
I used the pre-commit documentation as inspiration:
https://pre-commit.com/#azure-pipelines-example
https://github.com/asottile/azure-pipeline-templates/blob/master/job--pre-commit.yml
and configured the following Python pipeline with Anaconda:
pool:
  vmImage: 'ubuntu-latest'

variables:
  CONDA_ENV: foobar-env
  CONDA_HOME: /usr/share/miniconda/envs/$(CONDA_ENV)/

steps:
  - script: echo "##vso[task.prependpath]$CONDA/bin"
    displayName: Add conda to PATH
  - task: Cache@2
    displayName: Use cached Anaconda environment
    inputs:
      key: conda | environment.yml
      path: $(CONDA_HOME)
      cacheHitVar: CONDA_CACHE_RESTORED
  - script: conda env create --file environment.yml
    displayName: Create Anaconda environment (if not restored from cache)
    condition: eq(variables.CONDA_CACHE_RESTORED, 'false')
  - script: |
      source activate $(CONDA_ENV)
      pytest
    displayName: Run unit tests
To cache a standard pip install, use this:
variables:
  # variables are automatically exported as environment variables
  # so this will override pip's default cache dir
  - name: pip_cache_dir
    value: $(Pipeline.Workspace)/.pip

steps:
  - task: Cache@2
    inputs:
      key: 'pip | "$(Agent.OS)" | requirements.txt'
      restoreKeys: |
        pip | "$(Agent.OS)"
      path: $(pip_cache_dir)
    displayName: Cache pip
  - script: |
      pip install -r requirements.txt
    displayName: "pip install"
I wasn't very happy with the standard pip cache implementation mentioned in the official documentation. With that approach you still install your dependencies normally on every run, which means pip performs loads of checks that take up time. Pip finds the cached builds (*.whl, *.tar.gz) eventually, but it all adds up. You can opt to use venv or conda instead, but for me that led to buggy situations with unexpected behaviour. What I ended up doing instead was using pip download and pip install separately:
variables:
  pipDownloadDir: $(Pipeline.Workspace)/.pip

steps:
  - task: Cache@2
    displayName: Load cache
    inputs:
      key: 'pip | "$(Agent.OS)" | requirements.txt'
      path: $(pipDownloadDir)
      cacheHitVar: cacheRestored
  - script: pip download -r requirements.txt --dest=$(pipDownloadDir)
    displayName: "Download requirements"
    condition: eq(variables.cacheRestored, 'false')
  - script: pip install -r requirements.txt --no-index --find-links=$(pipDownloadDir)
    displayName: "Install requirements"
I have a Python project for which I use tox to run the pytest-based tests. I am attempting to configure the project to build on CircleCI.
The tox.ini lists both Python 3.6 and 3.7 as environments:
envlist = py{36,37,},coverage
I can successfully run tox on a local machine within a conda virtual environment that uses Python version 3.7.
On CircleCI I am using a standard Python virtual environment since that is what is provided in the example ("getting started") configuration. The tox tests fail when tox attempts to create the Python 3.6 environment:
py36 create: /home/circleci/repo/.tox/py36
ERROR: InterpreterNotFound: python3.6
It appears that with this kind of virtual environment, tox can only find an interpreter of the same version, whereas with a conda virtual environment it somehow knows how to cook up the environments as long as they're lower versions. At least in my case (Python 3.6 and 3.7 tox environments inside a Python 3.7 conda environment), this works fine.
The CircleCI configuration file I'm currently using looks like this:
# Python CircleCI 2.0 configuration file
#
# Check https://circleci.com/docs/2.0/language-python/ for more details
#
version: 2
jobs:
  build:
    docker:
      # specify the version you desire here
      # CircleCI maintains a library of pre-built images
      # documented at https://circleci.com/docs/2.0/circleci-images/
      - image: circleci/python:3.7
    working_directory: ~/repo
    steps:
      - checkout
      # Download and cache dependencies
      - restore_cache:
          keys:
            - v1-dependencies-{{ checksum "requirements.txt" }}
            # fallback to using the latest cache if no exact match is found
            - v1-dependencies-
      - run:
          name: install dependencies
          command: |
            python3 -m venv venv
            . venv/bin/activate
            pip install -e .
            pip install tox
      - save_cache:
          paths:
            - ./venv
          key: v1-dependencies-{{ checksum "requirements.txt" }}
      # run tests with tox
      - run:
          name: run tests
          command: |
            . venv/bin/activate
            tox
      - store_artifacts:
          path: test-reports
          destination: test-reports
What is the best practice for testing against multiple environments with tox on CircleCI? Should I move to using conda rather than venv within CircleCI, and if so, how would I add it? Or is there a way to stay with venv, perhaps by modifying its environment creation command?
Edit
I have now discovered that this is not specific to CircleCI, as I get a similar error when running tox on Travis CI. I have also confirmed that this works as advertised with a Python 3.7 virtual environment created using venv on my local machine: both the py36 and py37 environments run successfully.
If you use the multi-python Docker image, it allows you to keep using tox for testing in multiple different environments, for example:
version: 2
jobs:
  test:
    docker:
      - image: fkrull/multi-python
    steps:
      - checkout
      - run:
          name: Test
          command: 'tox'
workflows:
  version: 2
  test:
    jobs:
      - test
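Note that tox still decides which environments to run, so the tox.ini needs an envlist covering the interpreters you want; a hypothetical tox.ini to pair with that image might look like:
[tox]
envlist = py36,py37,py38

[testenv]
deps = pytest
commands = pytest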
I have worked this out, although not completely, as it involves abandoning tox and instead running pytest for each Python version's environment in a separate workflow job. The CircleCI configuration (.circleci/config.yml) looks like this:
version: 2

workflows:
  version: 2
  test:
    jobs:
      - test-3.6
      - test-3.7
      - test-3.8

jobs:
  test-3.6: &test-template
    docker:
      - image: circleci/python:3.6
    working_directory: ~/repo
    steps:
      - checkout
      - restore_cache:
          keys:
            - v1-dependencies-{{ checksum "requirements.txt" }}
            - v1-dependencies-
      - run:
          name: install dependencies
          command: |
            python3 -m venv venv
            . venv/bin/activate
            pip install -e .
            pip install coverage
            pip install pytest
      - save_cache:
          paths:
            - ./venv
          key: v1-dependencies-{{ checksum "requirements.txt" }}
      - run:
          name: run tests
          command: |
            . venv/bin/activate
            coverage run -m pytest tests
      # store artifacts (for example logs, binaries, etc)
      # to be available in the web app or through the API
      - store_artifacts:
          path: test-reports
  test-3.7:
    <<: *test-template
    docker:
      - image: circleci/python:3.7
  test-3.8:
    <<: *test-template
    docker:
      - image: circleci/python:3.8
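The &test-template / <<: *test-template lines are plain YAML anchors and merge keys: test-3.7 and test-3.8 inherit every key from test-3.6 and override only docker. A minimal illustration of the mechanism (generic YAML, not CircleCI-specific):
base: &defaults
  image: python:3.6
  working_directory: ~/repo
override:
  <<: *defaults        # copies image and working_directory from base
  image: python:3.8    # then overrides image only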
I am having an issue when deploying my application: my tests are not running. It's a simple script, but CodeBuild still bypasses my tests.
I have specified unittest and placed the path to my unittest-buildspec in the console.
my app looks like:
Chalice/
  .chalice/
  BuildSpec/
    build.sh
    unittest-buildspec.yml
  Tests/
    test_app.py
    test-database.py
  app.py
version: 0.2
phases:
install:
  runtime-versions:
    python: 3.7
  commands:
    - pip install -r requirements_test.txt
build:
  commands:
    - echo Build started on `date` ---
    - pip install -r requirements_test.txt
    - ./build.sh
    - pytest --pep8 --flakes
artifacts:
  files:
    - '**/*'
  base-directory: 'my-build*'
  discard-paths: yes
My build.sh is in the same folder as well
#!/bin/bash
pip install --upgrade awscli
aws --version
cd ..
pip install virtualenv
virtualenv /tmp/venv
. /tmp/venv/bin/activate
export PYTHONPATH=.
py.test tests/ || exit 1
There are a few issues in the buildspec you shared:
- The indentation of the 'install' and 'build' phases is incorrect; they should be nested under 'phases'.
- Set '+x' on build.sh before running it.
Fixed buildspec.yml:
version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.7
    commands:
      - pip install -r requirements_test.txt
  build:
    commands:
      - echo Build started on `date` ---
      - pip install -r requirements_test.txt
      - chmod +x ./build.sh
      - ./build.sh
      - pytest --pep8 --flakes
artifacts:
  files:
    - '**/*'
  base-directory: 'my-build*'
  discard-paths: yes
Also note that your build.sh declares the "/bin/bash" interpreter. The script will still work, but the shell CodeBuild actually uses is not technically bash, so any bash-specific functionality will not work. CodeBuild's shell is mysterious: it will run the usual scripts, it is just not bash.
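If build.sh does rely on bash-specific behaviour, one defensive option (my suggestion, not from the original answer) is to invoke it through bash explicitly, which also makes the chmod step unnecessary:
  build:
    commands:
      # running via bash guarantees the interpreter and needs no execute bit
      - bash ./build.sh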