Python unit testing with Docker and environment variables

I am trying to run unit tests for my application with a Docker container (and possibly in a GitHub workflow), but I can't figure out how to correctly pass env variables to it.
So normally for the building process I have a pretty standard Dockerfile
FROM python:3.7-alpine3.15
WORKDIR /app
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY src/ .
CMD [ "python3", "main.py" ]
and a workflow that builds it and pushes the image to Docker Hub. Then of course the usual docker run --env-file=.env ... command to run the application fetching the variables from a file.
Now I am adding tests to the code. The application needs some env variables to function properly (auth keys and other things), and so of course the tests need them too. I don't want to export the variables on my system and run the tests from my terminal, so I want to use Docker, but I'm not really sure how to do it properly.
My goal is to be able to run the tests locally and to also have a workflow that runs on PRs, without committing the variables in the repo.
This is what I've tried so far:
Add the tests to the Dockerfile: adding RUN python -m unittest discover -s tests doesn't really work, because at build time Docker doesn't have access to the .env file
Add a GitHub workflow with the test command: even using a GitHub environment to store the secrets and deploying the job into it, the variables are for some reason not picked up. Plus I would like to be able to test the code before pushing the changes, and have this workflow run only on PRs.
jobs:
  test:
    runs-on: ubuntu-latest
    environment: test
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Set up Python 3.7
        uses: actions/setup-python@v2
        with:
          python-version: 3.7
          cache: 'pip'
      - name: Install dependencies
        run: pip install -r requirements.txt
      - name: Run tests
        shell: bash
        env:
          EMAIL: ${{ secrets.EMAIL }}
          AUTH_KEY: ${{ secrets.AUTH_KEY }}
          ZONE_NAME: ${{ secrets.ZONE_NAME }}
          RECORD_ID: ${{ secrets.RECORD_ID }}
          CHECK_INTERVAL: ${{ secrets.CHECK_INTERVAL }}
          SENDER_ADDRESS: ${{ secrets.SENDER_ADDRESS }}
          SENDER_PASSWORD: ${{ secrets.SENDER_PASSWORD }}
          RECEIVER_ADDRESS: ${{ secrets.RECEIVER_ADDRESS }}
        run: |
          python -m unittest discover -t src -s tests
Here you can find the full source code if needed.

Change
RUN python -m unittest discover -s tests
to
CMD python -m unittest discover -s tests
and unittest will be launched not at the build stage but when the test container starts, at which point you can use your env file.
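For the local run, a minimal sketch of that approach, assuming a separate test image and a tests/ folder next to src/ (the names here are illustrative, not taken from the question):
FROM python:3.7-alpine3.15
WORKDIR /app
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY src/ .
COPY tests/ tests/
# run the suite at container start, so --env-file is available
CMD [ "python3", "-m", "unittest", "discover", "-t", ".", "-s", "tests" ]
You would then build and run it with something like docker build -t my-app-tests . followed by docker run --rm --env-file=.env my-app-tests (the my-app-tests tag is hypothetical).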

Related

Unrecognized arguments YAML error when running pytest

I have a GitHub Action which runs some unit tests when a push is made to the repository. All the commands in the YAML execute successfully, such as installing requirements.txt, but it then returns the following error when it tries to run the pytest command
python3 -m pytest verify/test.py --ds myapp.settings_pytest.
ERROR: usage: __main__.py [options] [file_or_dir] [file_or_dir] [...]
__main__.py: error: unrecognized arguments: --ds myapp.settings_pytest.
Strangely, the command runs fine locally, so I am confused as to why I only encounter this when it is run from my YAML file. I am also encountering the same error when the same YAML file runs on my AWS build server.
test.yml
name: Run tests
on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]
env:
  django_secret_key: ${{ secrets.DJANGO_SECRET_KEY }}
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      max-parallel: 4
      matrix:
        python-version: [3.8]
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install Dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
      - name: Run Tests
        run: |
          export DJANGO_SETTINGS_MODULE="myapp.settings_pytest"
          python3 -m pytest verify/test.py --ds myapp.settings_pytest
It has nothing to do with the YAML - you have to install Django to get access to the --ds option.
pip install Django==4.0.2
Or better - keep it in your requirements.txt
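For what it's worth, the --ds flag itself is provided by the pytest-django plugin, so a requirements.txt along these lines (versions are illustrative) covers both:
Django==4.0.2
pytest
pytest-django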

Adding dependencies to default Github Actions script

I have a Python web app on Azure that gets deployed via Github actions. I use the default deployment script that is created by the Azure deployment center (full script shown below). In order for my application to work, I must SSH into the deployment machine after each deployment and manually activate the virtual environment and install packages that aren't available via pip.
Is there a way to include the manual installations in the pre-generated deployment script that Azure created for me?
These are the manual commands I must run when I SSH into the machine after every deployment...
source env/bin/activate
sudo apt-get install build-essential python3-dev python3-pip python3-setuptools python3-wheel python3-cffi libcairo2 libpango-1.0-0 libpangocairo-1.0-0 libgdk-pixbuf2.0-0 libffi-dev shared-mime-info
Here is the deployment script I'm currently using...
name: Build and deploy Python app to Azure Web App
on:
  push:
    branches:
      - master
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@master
      - name: Set up Python version
        uses: actions/setup-python@v1
        with:
          python-version: '3.6'
      - name: Build using AppService-Build
        uses: azure/appservice-build@v2
        with:
          platform: python
          platform-version: '3.6'
      - name: 'Deploy to Azure Web App'
        uses: azure/webapps-deploy@v2
        with:
          app-name: {{applicationname}}
          slot-name: {{slotname}}
          publish-profile: ${{ secrets.AzureAppService_PublishProfile_HIDDEN }}
If you really need to install more system packages than those installed by default, you'll need to create your own Docker image, publish it to your private Azure Container Registry and use it as in the example:
- uses: azure/docker-login@v1
  with:
    login-server: contoso.azurecr.io
    username: ${{ secrets.REGISTRY_USERNAME }}
    password: ${{ secrets.REGISTRY_PASSWORD }}
- run: |
    docker build . -t contoso.azurecr.io/nodejssampleapp:${{ github.sha }}
    docker push contoso.azurecr.io/nodejssampleapp:${{ github.sha }}
- uses: azure/webapps-deploy@v2
  with:
    app-name: 'node-rnc'
    publish-profile: ${{ secrets.azureWebAppPublishProfile }}
    images: 'contoso.azurecr.io/nodejssampleapp:${{ github.sha }}'
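For this particular app, a minimal sketch of such a custom image, using the apt packages listed in the question (the base image and the start command are assumptions; adjust them to how the app is actually launched):
FROM python:3.6-slim
# system libraries that are not installable via pip
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential python3-dev python3-pip python3-setuptools python3-wheel python3-cffi \
    libcairo2 libpango-1.0-0 libpangocairo-1.0-0 libgdk-pixbuf2.0-0 libffi-dev shared-mime-info \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
# assumed start command; replace with however the app is launched
CMD [ "gunicorn", "--bind", "0.0.0.0:8000", "app:app" ]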

Trouble installing Weasyprint & Cairo on Linux Web App

I have a Django app that uses Weasyprint to generate PDF outputs. This works fine on my local development machine.
I am able to successfully deploy to Azure Web Apps, but get the following error message:
2020-11-17T07:34:14.287002623Z OSError: no library called "cairo" was found
2020-11-17T07:34:14.287006223Z no library called "libcairo-2" was found
2020-11-17T07:34:14.287009823Z cannot load library 'libcairo.so.2': libcairo.so.2: cannot open shared object file: No such file or directory
2020-11-17T07:34:14.287016323Z cannot load library 'libcairo.2.dylib': libcairo.2.dylib: cannot open shared object file: No such file or directory
2020-11-17T07:34:14.287020123Z cannot load library 'libcairo-2.dll': libcairo-2.dll: cannot open shared object file: No such file or directory
Per Weasyprint's documentation (https://weasyprint.readthedocs.io/en/stable/install.html#debian-ubuntu), I have attempted to make the recommended installations via a custom deployment script, which looks like this:
jobs:
  build:
    name: Build and Deploy
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python ${{ env.PYTHON_VERSION }}
        uses: actions/setup-python@v2
        with:
          python-version: ${{ env.PYTHON_VERSION }}
      - name: Install dependencies
        run: |
          sudo apt-get install build-essential python3-dev python3-pip python3-setuptools python3-wheel python3-cffi libcairo2 libpango-1.0-0 libpangocairo-1.0-0 libgdk-pixbuf2.0-0 libffi-dev shared-mime-info
          python -m pip install --upgrade pip
          pip install -r requirements.txt
      - name: Deploy Web App using GH Action azure/webapps-deploy
        uses: azure/webapps-deploy@v2
        with:
          app-name: ${{ env.AZURE_WEBAPP_NAME }}
          publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}
          package: ${{ env.AZURE_WEBAPP_PACKAGE_PATH }}
However, my problem persists and I still receive the same message.
Does anybody have experience installing Weasyprint & Cairo on a Linux-based Web App?
I appreciate any help in advance.
UPDATE
Currently, I am able to deploy using the default deployment script created by Azure (shown below). I am then able to SSH into the deployment machine and manually activate the virtual environment & install the requisite packages. This process works and my application now works as expected.
I'd like to roll this command into the deployment process somehow (either as part of the default script or via a post deployment action).
GITHUB ACTIONS DEPLOYMENT SCRIPT
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@master
      - name: Set up Python version
        uses: actions/setup-python@v1
        with:
          python-version: '3.6'
      - name: Build using AppService-Build
        uses: azure/appservice-build@v2
        with:
          platform: python
          platform-version: '3.6'
      - name: 'Deploy to Azure Web App'
        uses: azure/webapps-deploy@v2
        with:
          app-name: {{appname}}
          slot-name: {{slotname}}
          publish-profile: {{profilename}}
MANUAL VIRTUAL ENV ACTIVATION & INSTALLS
source /home/site/wwwroot/pythonenv3.6/bin/activate
sudo apt-get install {{ additional packages }}
You can add the required dependencies and the packages you want to install to the .yml file, but whether they take effect for your web app still needs testing; specific problems need to be analyzed case by case.
If that doesn't work, it is recommended to SSH in and install them manually.
I added Linux commands to the .yml file to apt-get install the packages.
Below is my .yml file; it works fine.
# Docs for the Azure Web Apps Deploy action: https://github.com/Azure/webapps-deploy
# More GitHub Actions for Azure: https://github.com/Azure/actions
name: Build and deploy Python app to Azure Web App - pyodbcInstallENV
on:
  push:
    branches:
      - master
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@master
      - name: Set up Python version
        uses: actions/setup-python@v1
        with:
          python-version: '3.8'
      - name: Install custom env
        run: |
          cd /home
          sudo apt-get update
          sudo apt-get install g++
          sudo apt-get install unixodbc-dev
          pip install pyodbc
      - name: Build using AppService-Build
        uses: azure/appservice-build@v2
        with:
          platform: python
          platform-version: '3.8'
      - name: 'Deploy to Azure Web App'
        uses: azure/webapps-deploy@v2
        with:
          app-name: 'pyodbcInstallENV'
          slot-name: 'production'
          publish-profile: ${{ secrets.AzureAppService_PublishProfile_d712769***2017c9521 }}
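Adapting that Install custom env step to the Cairo/Pango libraries WeasyPrint needs (the package list is taken from the question above; whether the installed libraries actually carry over to the Web App still needs to be verified) might look like:
      - name: Install custom env
        run: |
          sudo apt-get update
          sudo apt-get install -y build-essential python3-dev python3-cffi \
            libcairo2 libpango-1.0-0 libpangocairo-1.0-0 libgdk-pixbuf2.0-0 \
            libffi-dev shared-mime-info
          pip install -r requirements.txt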

How to assign "app":"python app.py" in cdk.json for python or python3

I am fairly new to cdk. How do I set up cdk or cdk.json to run where the python executable may be named 'python' or 'python3', depending on the platform?
cdk init --language python creates the cdk.json on my local Windows PC with the line
"app": "python app.py"
The failure occurs when Jenkins CI/CD executes the application: the build fails because the Linux-based Jenkins expects 'python3'.
Our current solution is to edit cdk.json when we commit to GitHub and Jenkins auto-builds the lower environments. Is there a better way?
Using python3 directly in cdk.json:
{
  "app": "python3 app.py",
  "context": {
    "@aws-cdk/core:enableStackNameDuplicates": "true",
    "aws-cdk:enableDiffNoFail": "true",
    "@aws-cdk/core:stackRelativeExports": "true",
    "@aws-cdk/aws-ecr-assets:dockerIgnoreSupport": true
  }
}
Or symlink python to python3:
lrwxrwxrwx 1 root root 18 Nov 8 14:20 /usr/bin/python -> /usr/bin/python3.8
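On a typical Linux box that symlink can be created with (the exact python3 path may differ):
sudo ln -s /usr/bin/python3.8 /usr/bin/python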
I had a few problems, but in the end using python3 in the cdk.json file made no difference. I have Windows. The prerequisites were:
Have all the configuration files (config and credentials with the correct parameters in .aws)
Node.js and the AWS CLI installed (https://docs.aws.amazon.com/cdk/latest/guide/work-with.html#work-with-prerequisites)
With that in place, I executed the line below in my Windows terminal
npm install -g aws-cdk
Next, in my project (I use VS Code), I created a folder to run cdk in (I named it "cdk", but it could be anything).
mkdir cdk (to create my folder in the project)
cd cdk (to enter the folder)
cdk init app --language python (to set the language)
source .venv/bin/activate (to activate the virtual environment)
add "aws_cdk.aws_s3" to the requirements.txt
Before executing cdk deploy, run pip install -r requirements.txt and use cdk synth to check that everything is OK and fix any errors; the whole sequence is sketched below.
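Put together, the local setup above is roughly this sequence (a sketch, assuming the folder is named cdk):
npm install -g aws-cdk
mkdir cdk && cd cdk
cdk init app --language python
source .venv/bin/activate
echo "aws_cdk.aws_s3" >> requirements.txt
pip install -r requirements.txt
cdk synth   # check that everything is OK before cdk deploy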
When using GitHub Actions, use sudo before the npm command and add a cd to the run command so the job can navigate to the cdk folder. Without those, I got the error that follows.
--app is required either in command-line, in cdk.json or in ~/.cdk.json
Here's how the deploy job was configured in my GitHub Actions file:
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: '3.x'
      - uses: actions/setup-node@v2-beta
        with:
          node-version: '12'
      - name: Install dependencies
        run: |
          sudo npm install -g aws-cdk
          cd 2_continuous_integration_and_tests/CDK
          pip install -r requirements.txt
      - name: Deploy
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_DEFAULT_REGION: ${{ secrets.AWS_DEFAULT_REGION }}
        run: |
          cd 2_continuous_integration_and_tests/CDK
          cdk synth
          cdk deploy

CircleCI for simple Python project with tox: how to test multiple Python environments?

I have a Python project for which I use tox to run the pytest-based tests. I am attempting to configure the project to build on CircleCI.
The tox.ini lists both Python 3.6 and 3.7 as environments:
envlist = py{36,37,},coverage
I can successfully run tox on a local machine within a conda virtual environment that uses Python version 3.7.
On CircleCI I am using a standard Python virtual environment since that is what is provided in the example ("getting started") configuration. The tox tests fail when tox attempts to create the Python 3.6 environment:
py36 create: /home/circleci/repo/.tox/py36
ERROR: InterpreterNotFound: python3.6
It appears that when you use this kind of virtual environment, tox can only find an interpreter of the same version, whereas with a conda virtual environment it somehow knows how to create the environments as long as they are lower versions. At least in my case (Python 3.6 and 3.7 environments for tox running in a Python 3.7 conda environment), this works fine.
The CircleCI configuration file I'm currently using looks like this:
# Python CircleCI 2.0 configuration file
#
# Check https://circleci.com/docs/2.0/language-python/ for more details
#
version: 2
jobs:
  build:
    docker:
      # specify the version you desire here
      # CircleCI maintains a library of pre-built images
      # documented at https://circleci.com/docs/2.0/circleci-images/
      - image: circleci/python:3.7
    working_directory: ~/repo
    steps:
      - checkout
      # Download and cache dependencies
      - restore_cache:
          keys:
            - v1-dependencies-{{ checksum "requirements.txt" }}
            # fallback to using the latest cache if no exact match is found
            - v1-dependencies-
      - run:
          name: install dependencies
          command: |
            python3 -m venv venv
            . venv/bin/activate
            pip install -e .
            pip install tox
      - save_cache:
          paths:
            - ./venv
          key: v1-dependencies-{{ checksum "requirements.txt" }}
      # run tests with tox
      - run:
          name: run tests
          command: |
            . venv/bin/activate
            tox
      - store_artifacts:
          path: test-reports
          destination: test-reports
What is the best practice for testing multiple environments with tox on CircleCI? Should I move to using conda rather than venv within CircleCI, and if so how would I add this? Or is there a way to stay with venv, maybe by modifying its environment creation command?
Edit
I have now discovered that this is not specific to CircleCI, as I get a similar error when running this tox setup on Travis CI. Also, I have confirmed that this works as advertised with a Python 3.7 virtual environment created using venv on my local machine: both py36 and py37 environments run successfully.
If you use the multi-python Docker image, it allows you to still use tox for testing in multiple different environments, for example:
version: 2
jobs:
  test:
    docker:
      - image: fkrull/multi-python
    steps:
      - checkout
      - run:
          name: Test
          command: 'tox'
workflows:
  version: 2
  test:
    jobs:
      - test
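For reference, a minimal tox.ini that such an image could run, based on the envlist from the question (the coverage commands are assumptions):
[tox]
envlist = py36,py37,coverage

[testenv]
deps = pytest
commands = pytest tests

[testenv:coverage]
basepython = python3.7
deps =
    pytest
    coverage
commands =
    coverage run -m pytest tests
    coverage report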
I have worked this out, although not completely, as it involves abandoning tox and instead using pytest to run the tests for each Python version in separate workflow jobs. The CircleCI configuration (.circleci/config.yml) looks like this:
version: 2
workflows:
  version: 2
  test:
    jobs:
      - test-3.6
      - test-3.7
      - test-3.8
jobs:
  test-3.6: &test-template
    docker:
      - image: circleci/python:3.6
    working_directory: ~/repo
    steps:
      - checkout
      - restore_cache:
          keys:
            - v1-dependencies-{{ checksum "requirements.txt" }}
            - v1-dependencies-
      - run:
          name: install dependencies
          command: |
            python3 -m venv venv
            . venv/bin/activate
            pip install -e .
            pip install coverage
            pip install pytest
      - save_cache:
          paths:
            - ./venv
          key: v1-dependencies-{{ checksum "requirements.txt" }}
      - run:
          name: run tests
          command: |
            . venv/bin/activate
            coverage run -m pytest tests
      # store artifacts (for example logs, binaries, etc)
      # to be available in the web app or through the API
      - store_artifacts:
          path: test-reports
  test-3.7:
    <<: *test-template
    docker:
      - image: circleci/python:3.7
  test-3.8:
    <<: *test-template
    docker:
      - image: circleci/python:3.8
