Issues caching Python pip packages in Gitlab CI - python

I'm trying to create a GitLab CI pipeline that installs Python packages via pip, which I can then cache and use in later stages without needing to reinstall them each time.
I have followed the CI docs on how to do this, but I'm facing an issue. They say to create a virtual environment and install via pip there; however, after creating and activating the venv, the packages aren't installed in the venv. Instead they're installed in
/builds//ca-cert-checker/venv/lib/python3.10/site-packages
Also, in a separate stage, after the cache has been downloaded, the job looks for a directory that doesn't exist:
venv/bin/python
.gitlab-ci.yml:
stages:
  - setup
  - before_test

variables:
  PYTHON_IMG: "python:3.10"
  PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"

cache:
  paths:
    - .cache/pip
    - venv/

python_install:
  image: $PYTHON_IMG
  stage: setup
  script:
    - python3 --version
    - pip install virtualenv
    - virtualenv venv
    - source venv/bin/activate
    - pip install -r requirements.txt
    - pytest --version #debug message
    - pip show pytest #debug message

pytest_ver:
  stage: before_test
  image: $PYTHON_IMG
  script:
    - ls venv #debug message
    - ls .cache/pip #debug message
    - pytest --version #debug message
In the pipeline this causes the setup stage to run and cache successfully
Successful cache creation:
Creating cache default...
.cache/pip: found 185 matching files and directories
venv/: found 3292 matching files and directories
Output from pip show pytest:
Location: /builds//ca-cert-checker/venv/lib/python3.10/site-packages
So pip isn't installing within venv, which I think is the issue.
When I run the before_test stage I get the following output:
// Omitted for brevity
Checking cache for default...
Downloading cache.zip from https://storage.googleapis.com/gitlab-com-runners-cache/project/<project-number>/default
WARNING: venv/bin/python: chmod venv/bin/python: no such file or directory (suppressing repeats)
Successfully extracted cache
Executing "step_script" stage of the job script 00:00
Using docker image sha256:33ceb4320f06dbd22ca43809042a31851df207827b4fc45cd6c9323013dff7c7 for python:3.10 with digest python#sha256:b58c3f2846e201f5fc6b654e43f131f5a8702f8d568130302d77fbdfd9230362 ...
$ ls venv
bin
lib
lib64
pyvenv.cfg
share
$ ls .cache/pip
http
selfcheck
$ pytest --version
/bin/bash: line 128: pytest: command not found
Cleaning up project directory and file based variables 00:01
ERROR: Job failed: exit code 1
Any help / advice on how to get the pip dependencies to install in the venv and cache properly would be appreciated!

I see that your installation is done in different jobs (and even two different stages), but each job runs in its own fresh environment, so nothing the installation job does is retained for the next job.
You should do the installation in a before_script instead; pip will then find the cache in every job:
image: python:3.10

stages:
  - setup
  - before_test

variables:
  PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"

cache:
  paths:
    - .cache/pip
    - venv/

before_script:
  - python3 --version
  - pip install virtualenv
  - virtualenv venv
  - source venv/bin/activate
  - pip install -r requirements.txt
  - pytest --version #debug message
  - pip show pytest #debug message

pytest_ver:
  stage: before_test
  script:
    - ls venv #debug message
    - ls .cache/pip #debug message
    - pytest --version #debug message
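If you also want the cache rebuilt whenever requirements.txt changes, GitLab can key the cache on that file's checksum with cache:key:files. A minimal sketch, assuming the rest of the config stays as above:

cache:
  key:
    files:
      - requirements.txt
  paths:
    - .cache/pip
    - venv/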

Related

Python on Gitlab has ModuleNotFound, But Not When I Run Locally

Screenshots (omitted here) show the project's file structure and the ModuleNotFoundError I see when the job runs on GitLab CI.
Why is this error occurring when Gitlab runs it but not when I run locally?
Here is my .gitlab-ci.yml file.
Note that this had been working before.
I recently made win_perf_counters a Git submodule instead of being an actual subdirectory. (Again, it works locally.)
test:
  before_script:
    - python -V
    - pip install virtualenv
    - virtualenv venv
    - .\venv\Scripts\activate.ps1
    - refreshenv
  script:
    - python -V
    - echo "******* installing pip ***********"
    - python -m pip install --upgrade pip
    - echo "******* installing locust ********"
    - python -m pip install locust
    - locust -V
    - python -m pip install multipledispatch
    - python -m pip install pycryptodome
    - python -m pip install pandas
    - python -m pip install wmi
    - python -m pip install pywin32
    - python -m pip install influxdb_client
    - set LOAD_TEST_CONF=load_test.conf
    - echo "**** about to run locust ******"
    - locust -f ./src/main.py --host $TARGET_HOST -u $USERS -t $TEST_DURATION -r $RAMPUP -s 1800 --headless --csv=./LoadTestsData_VPOS --csv-full-history --html=./LoadTestsReport_VPOS.html --stream-file ./data/stream_jsons/streams_vpos.json --database=csv
    - Start-Sleep -s $SLEEP_TIME
  variables:
    LOAD_TEST_CONF: load_test.conf
    PYTHON_VERSION: 3.8.0
    TARGET_HOST: http://10.10.10.184:9000
  tags:
    - win2019
  artifacts:
    paths:
      - ./LoadTests*
      - public
  only:
    - schedules
  after_script:
    - ls src -r
    - mkdir .public
    - cp -r ./LoadTests* .public
    - cp metrics.csv .public -ErrorAction SilentlyContinue
    - mv .public public
I also tried changing the GitLab CI file to use requirements.txt (output screenshot omitted).
Probably the Python libraries you are using in your local environment are not the same ones you are using in GitLab. Run pip list or pip freeze on your local machine and see which versions you have there, then pip install those versions in your GitLab script. A good practice is to have a requirements.txt or a setup.py file with pinned versions rather than pulling the latest versions every time.
Probably the module you are developing doesn't have the __init__.py file, and thus it cannot be found when imported from outside the package.
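As a hedged sketch of that suggestion (job shape and commands borrowed from the config above, versions left to whatever your local pip freeze reports), the long list of per-package pip install lines can be collapsed into one install from a pinned requirements.txt:

test:
  before_script:
    - python -V
    - pip install virtualenv
    - virtualenv venv
    - .\venv\Scripts\activate.ps1
  script:
    # requirements.txt is committed to the repo with exact versions,
    # e.g. generated locally with `pip freeze > requirements.txt`
    - python -m pip install --upgrade pip
    - python -m pip install -r requirements.txt
    - locust -V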

Can't find installed modules in azure devops pipeline

I'm building a staged pipeline (env prep, tests, code). I've currently hit a blocker: it seems like each stage is kind of an individual process. My requirements.txt is installed correctly, but then the test stage raises ModuleNotFoundError. I'd appreciate any hints on how to make it work :)
yaml:
trigger: none

parameters:
  - name: "input_files"
    type: object
    default: ['a-rg', 't-rg', 'd-rg', 'p-rg']

stages:
  - stage: 'Env_prep'
    jobs:
      - job: "install_requirements"
        steps:
          - script: |
              python -m pip install --upgrade pip
              python -m pip install -r requirements.txt
  - stage: 'Tests'
    jobs:
      - job: 'Run_tests'
        steps:
          - script: |
              python -m pytest -v tests/variableGroups_tests.py
Different jobs and stages are capable of being executed on different agents in Azure Pipelines. In your case, the installation requirements are a direct prerequisite of the tests being run, so everything should be in one job:
trigger: none

parameters:
  - name: "input_files"
    type: object
    default: ['a-rg', 't-rg', 'd-rg', 'p-rg']

stages:
  - stage: Test
    jobs:
      - job:
        steps:
          - script: |
              python -m pip install --upgrade pip
              python -m pip install -r requirements.txt
            displayName: Install Required Components
          - script: |
              python -m pytest -v tests/variableGroups_tests.py
            displayName: Run Tests
Breaking those into separate script steps isn't even necessary unless you want the log output to be separate in the console.

Circleci does not work with my config.yml file when trying to run 'pytest --html=pytest_report.html', producing the error 'no such option: --html'

What I tried:
placing 'pip install --user -r requirements.txt' in the second run command
placing 'pip install pytest' in the second run command along with 'pip install pytest-html'
both followed by:
pytest --html=pytest_report.html
I am new to CircleCI and using pytest as well
Here is the steps portion of the config.yml file
version: 2.1
jobs:
  run_tests:
    docker:
      - image: circleci/python:3.9.1
    steps:
      - checkout
      - run:
          name: Install Python Dependencies
          command: |
            echo 'export PATH=~$PATH:~/.local/bin' >> $BASH_ENV && source $BASH_ENV
            pip install --user -r requirements.txt
      - run:
          name: Run Unit Tests
          command: |
            pip install --user -r requirements.txt
            pytest --html=pytest_report.html
      - store_test_results:
          path: test-reports
      - store_artifacts:
          path: test-reports
workflows:
  build_test:
    jobs:
      - run_tests
--html is not one of the builtin options for pytest -- it likely comes from a plugin
I believe you're looking for pytest-html -- make sure that's listed in your requirements
it's also possible / likely that the pip install --user is installing another copy of pytest into the image, which will only be available at ~/.local/bin/pytest instead of whatever pytest comes with the CircleCI image
disclaimer, I'm one of the pytest core devs
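One way to sidestep the PATH question is to invoke pytest through the same interpreter the --user install targets, so the copy that has pytest-html available is the one that runs. A sketch of the second run step under that assumption (pytest and pytest-html both listed in requirements.txt):

      - run:
          name: Run Unit Tests
          command: |
            pip install --user -r requirements.txt
            # python -m pytest resolves pytest (and its plugins) from the user site-packages
            python -m pytest --html=pytest_report.html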

CircleCI for simple Python project with tox: how to test multiple Python environments?

I have a Python project for which I use tox to run the pytest-based tests. I am attempting to configure the project to build on CircleCI.
The tox.ini lists both Python 3.6 and 3.7 as environments:
envlist = py{36,37,},coverage
I can successfully run tox on a local machine within a conda virtual environment that uses Python version 3.7.
On CircleCI I am using a standard Python virtual environment since that is what is provided in the example ("getting started") configuration. The tox tests fail when tox attempts to create the Python 3.6 environment:
py36 create: /home/circleci/repo/.tox/py36
ERROR: InterpreterNotFound: python3.6
It appears that when you use this kind of virtual environment, tox can only find an interpreter of the same version, whereas with a conda virtual environment it somehow knows how to create the environments as long as they are lower versions. At least in my case (Python 3.6 and 3.7 environments for tox running in a Python 3.7 conda environment), this works fine.
The CircleCI configuration file I'm currently using looks like this:
# Python CircleCI 2.0 configuration file
#
# Check https://circleci.com/docs/2.0/language-python/ for more details
#
version: 2
jobs:
  build:
    docker:
      # specify the version you desire here
      # CircleCI maintains a library of pre-built images
      # documented at https://circleci.com/docs/2.0/circleci-images/
      - image: circleci/python:3.7

    working_directory: ~/repo

    steps:
      - checkout

      # Download and cache dependencies
      - restore_cache:
          keys:
            - v1-dependencies-{{ checksum "requirements.txt" }}
            # fallback to using the latest cache if no exact match is found
            - v1-dependencies-

      - run:
          name: install dependencies
          command: |
            python3 -m venv venv
            . venv/bin/activate
            pip install -e .
            pip install tox

      - save_cache:
          paths:
            - ./venv
          key: v1-dependencies-{{ checksum "requirements.txt" }}

      # run tests with tox
      - run:
          name: run tests
          command: |
            . venv/bin/activate
            tox

      - store_artifacts:
          path: test-reports
          destination: test-reports
What is the best practice for testing for multiple environments with tox on CircleCI? Should I move to using conda rather than venv within CircleCI, and if so how would I add this? Or is there a way to stay with venv, maybe by modifying its environment creation command?
Edit
I have now discovered that this is not specific to CircleCI, as I get a similar error when running tox on Travis CI. Also, I have confirmed that this works as advertised in a Python 3.7 virtual environment created with venv on my local machine: both the py36 and py37 environments run successfully.
If you use the multi-python Docker image, you can still use tox to test in multiple different environments, for example:
version: 2
jobs:
  test:
    docker:
      - image: fkrull/multi-python
    steps:
      - checkout
      - run:
          name: Test
          command: 'tox'
workflows:
  version: 2
  test:
    jobs:
      - test
I have worked this out, although not completely, as it involves abandoning tox and instead running the tests with pytest for each Python version in a separate workflow job. The CircleCI configuration (.circleci/config.yml) looks like this:
version: 2

workflows:
  version: 2
  test:
    jobs:
      - test-3.6
      - test-3.7
      - test-3.8

jobs:
  test-3.6: &test-template
    docker:
      - image: circleci/python:3.6
    working_directory: ~/repo
    steps:
      - checkout
      - restore_cache:
          keys:
            - v1-dependencies-{{ checksum "requirements.txt" }}
            - v1-dependencies-
      - run:
          name: install dependencies
          command: |
            python3 -m venv venv
            . venv/bin/activate
            pip install -e .
            pip install coverage
            pip install pytest
      - save_cache:
          paths:
            - ./venv
          key: v1-dependencies-{{ checksum "requirements.txt" }}
      - run:
          name: run tests
          command: |
            . venv/bin/activate
            coverage run -m pytest tests
      # store artifacts (for example logs, binaries, etc)
      # to be available in the web app or through the API
      - store_artifacts:
          path: test-reports

  test-3.7:
    <<: *test-template
    docker:
      - image: circleci/python:3.7

  test-3.8:
    <<: *test-template
    docker:
      - image: circleci/python:3.8
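For what it's worth, the &test-template / <<: *test-template lines are plain YAML anchors and merge keys: each later job reuses every key from test-3.6 and overrides only docker. As a sketch, test-3.7 is effectively equivalent to:

  test-3.7:
    docker:
      - image: circleci/python:3.7
    working_directory: ~/repo
    steps:
      - checkout
      # ...same restore_cache / install / run tests / store_artifacts steps as test-3.6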

CodeBuild is still passing the Test stage

I am having an issue when deploying my application: my tests are not running. It's a simple script, but CodeBuild still bypasses my tests.
I have specified unittest and placed the path to my unittest-buildspec in the console.
My app looks like:
- Chalice
  - .chalice
  - BuildSpec
    - build.sh
    - unittest-buildspec.yml
  - Tests
    - test_app.py
    - test-database.py
  - app.py
version: 0.2

phases:
install:
  runtime-versions:
    python: 3.7
  commands:
    - pip install -r requirements_test.txt
build:
  commands:
    - echo Build started on `date` ---
    - pip install -r requirements_test.txt
    - ./build.sh
    - pytest --pep8 --flakes

artifacts:
  files:
    - '**/*'
  base-directory: 'my-build*'
  discard-paths: yes
My build.sh is in the same folder as well
#!/bin/bash
pip install --upgrade awscli
aws --version
cd ..
pip install virtualenv
virtualenv /tmp/venv
. /tmp/venv/bin/activate
export PYTHONPATH=.
py.test tests/ || exit 1
There are a few issues in the buildspec you shared:
1. The 'install' and 'build' phases' indentation is incorrect; they should come under 'phases'.
2. Set '+x' on build.sh before running it.
Fixed buildspec.yml:
version: 0.2

phases:
  install:
    runtime-versions:
      python: 3.7
    commands:
      - pip install -r requirements_test.txt
  build:
    commands:
      - echo Build started on `date` ---
      - pip install -r requirements_test.txt
      - chmod +x ./build.sh
      - ./build.sh
      - pytest --pep8 --flakes

artifacts:
  files:
    - '**/*'
  base-directory: 'my-build*'
  discard-paths: yes
Also please note that your 'build.sh' uses the "/bin/bash" interpreter; while the script will work, the shell is not technically bash, so any bash-specific functionality will not work. CodeBuild's shell is mysterious: it will run the usual scripts, it is just not bash.
