I have a Python web app on Azure that is deployed via GitHub Actions. I use the default deployment script created by the Azure Deployment Center (full script shown below). For my application to work, I must SSH into the deployment machine after each deployment, manually activate the virtual environment, and install packages that aren't available via pip.
Is there a way to include the manual installations in the pre-generated deployment script that Azure created for me?
These are the manual commands I must run when I SSH into the machine after every deployment...
source env/bin/activate
sudo apt-get install build-essential python3-dev python3-pip python3-setuptools python3-wheel python3-cffi libcairo2 libpango-1.0-0 libpangocairo-1.0-0 libgdk-pixbuf2.0-0 libffi-dev shared-mime-info
Here is the deployment script I'm currently using...
name: Build and deploy Python app to Azure Web App
on:
  push:
    branches:
      - master
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@master
      - name: Set up Python version
        uses: actions/setup-python@v1
        with:
          python-version: '3.6'
      - name: Build using AppService-Build
        uses: azure/appservice-build@v2
        with:
          platform: python
          platform-version: '3.6'
      - name: 'Deploy to Azure Web App'
        uses: azure/webapps-deploy@v2
        with:
          app-name: {{applicationname}}
          slot-name: {{slotname}}
          publish-profile: ${{ secrets.AzureAppService_PublishProfile_HIDDEN }}
If you really need to install more system packages than those installed by default, you'll need to create your own Docker image, publish it to your private Azure Container Registry, and use it as in this example:
- uses: azure/docker-login@v1
  with:
    login-server: contoso.azurecr.io
    username: ${{ secrets.REGISTRY_USERNAME }}
    password: ${{ secrets.REGISTRY_PASSWORD }}
- run: |
    docker build . -t contoso.azurecr.io/nodejssampleapp:${{ github.sha }}
    docker push contoso.azurecr.io/nodejssampleapp:${{ github.sha }}
- uses: azure/webapps-deploy@v2
  with:
    app-name: 'node-rnc'
    publish-profile: ${{ secrets.azureWebAppPublishProfile }}
    images: 'contoso.azurecr.io/nodejssampleapp:${{ github.sha }}'
Related
I am trying to run unit tests for my application in a Docker container (and possibly in a GitHub workflow), but I can't figure out how to correctly pass environment variables to it.
So normally for the building process I have a pretty standard Dockerfile
FROM python:3.7-alpine3.15
WORKDIR /app
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY src/ .
CMD [ "python3", "main.py" ]
and a workflow that builds the image and pushes it to Docker Hub. Then, of course, the usual docker run --env-file=.env ... command runs the application, fetching the variables from a file.
Now I am adding tests to the code. The application needs some env variables to function properly (auth keys and other stuff), and so of course also to run the tests. I don't want to export the variables in my system and run the test from my terminal, so I want to use Docker. But I'm not really sure how to properly do it.
My goal is to be able to run the tests locally and to also have a workflow that runs on PRs, without committing the variables in the repo.
This is what I've tried so far:
Add the test run to the Dockerfile: adding RUN python -m unittest discover -s tests doesn't really work, because at build time Docker doesn't have access to the .env file
Add a GitHub workflow with the test command: even using a GitHub environment to store the secrets and running the job against it, the variables for some reason aren't fetched. Plus, I would like to be able to test the code before pushing the changes, and have this workflow run only on PRs (see the sketch after the workflow below).
jobs:
  test:
    runs-on: ubuntu-latest
    environment: test
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Set up Python 3.7
        uses: actions/setup-python@v2
        with:
          python-version: 3.7
          cache: 'pip'
      - name: Install dependencies
        run: pip install -r requirements.txt
      - name: Run tests
        shell: bash
        env:
          EMAIL: ${{ secrets.EMAIL }}
          AUTH_KEY: ${{ secrets.AUTH_KEY }}
          ZONE_NAME: ${{ secrets.ZONE_NAME }}
          RECORD_ID: ${{ secrets.RECORD_ID }}
          CHECK_INTERVAL: ${{ secrets.CHECK_INTERVAL }}
          SENDER_ADDRESS: ${{ secrets.SENDER_ADDRESS }}
          SENDER_PASSWORD: ${{ secrets.SENDER_PASSWORD }}
          RECEIVER_ADDRESS: ${{ secrets.RECEIVER_ADDRESS }}
        run: |
          python -m unittest discover -t src -s tests
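For the PR-only part, I understand the trigger would need to be pull_request rather than push; a minimal sketch of what I mean (branch name assumed):

on:
  pull_request:
    branches:
      - main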
Here you can find the full source code if needed.
Change
RUN python -m unittest discover -s tests
to
CMD python -m unittest discover -s tests
and unittest will be launched not at build time but when the test container starts, at which point your env file can be used.
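For example, assuming the image is tagged myapp-tests (a placeholder name) and the Dockerfile's CMD is the test command above, the flow becomes:

docker build -t myapp-tests .                  # tests are no longer run at build time
docker run --rm --env-file=.env myapp-tests    # .env variables are visible to unittest at container start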
Summarize.yml
name: Build and deploy Python app to Azure Web App - summarize1
on:
  push:
    branches:
      - main
  workflow_dispatch:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python version
        uses: actions/setup-python@v1
        with:
          python-version: '3.6'
      - name: Create and start virtual environment
        run: |
          python -m venv venv
          source venv/bin/activate
      - name: Install dependencies
        run: |
          source venv/bin/activate  # each run step starts a fresh shell, so re-activate
          pip install -r requirements.txt
          sudo apt-get update
          sudo apt-get install -y autoconf autogen automake build-essential libasound2-dev \
            libflac-dev libogg-dev libtool libvorbis-dev libopus-dev libmp3lame-dev \
            libmpg123-dev pkg-config python
          sudo apt-get install -y ffmpeg
requirements.txt
numba==0.48.0
git+https://github.com/librosa/librosa
Consider writing a script (e.g. one that runs apt-get install -y libsndfile1) that installs the dependencies via apt-get when the App Service starts up. Configure the App Service in Azure to call that script (e.g. startup.sh) on startup. Check out: https://pypi.org/project/SoundFile/
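A minimal sketch of such a startup script, assuming a Gunicorn-served app (the module path app:app and all names here are placeholders):

#!/bin/bash
# startup.sh -- runs inside the App Service container on every start
apt-get update
apt-get install -y libsndfile1    # add any other required apt packages here
gunicorn --bind=0.0.0.0 app:app   # then launch the app as usual

The App Service can then be pointed at the script, for example with the Azure CLI:

az webapp config set --resource-group my-rg --name my-app --startup-file "startup.sh"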
I have a Django app that uses Weasyprint to generate PDF outputs. This works fine on my local development machine.
I am able to successfully deploy to Azure Web Apps, but get the following error message:
2020-11-17T07:34:14.287002623Z OSError: no library called "cairo" was found
2020-11-17T07:34:14.287006223Z no library called "libcairo-2" was found
2020-11-17T07:34:14.287009823Z cannot load library 'libcairo.so.2': libcairo.so.2: cannot open shared object file: No such file or directory
2020-11-17T07:34:14.287016323Z cannot load library 'libcairo.2.dylib': libcairo.2.dylib: cannot open shared object file: No such file or directory
2020-11-17T07:34:14.287020123Z cannot load library 'libcairo-2.dll': libcairo-2.dll: cannot open shared object file: No such file or directory
Per Weasyprint's documentation (https://weasyprint.readthedocs.io/en/stable/install.html#debian-ubuntu), I have attempted to make the recommended installations via a custom deployment script, which looks like this:
jobs:
  build:
    name: Build and Deploy
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python ${{ env.PYTHON_VERSION }}
        uses: actions/setup-python@v2
        with:
          python-version: ${{ env.PYTHON_VERSION }}
      - name: Install dependencies
        run: |
          sudo apt-get install -y build-essential python3-dev python3-pip python3-setuptools python3-wheel python3-cffi libcairo2 libpango-1.0-0 libpangocairo-1.0-0 libgdk-pixbuf2.0-0 libffi-dev shared-mime-info
          python -m pip install --upgrade pip
          pip install -r requirements.txt
      - name: Deploy Web App using GH Action azure/webapps-deploy
        uses: azure/webapps-deploy@v2
        with:
          app-name: ${{ env.AZURE_WEBAPP_NAME }}
          publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}
          package: ${{ env.AZURE_WEBAPP_PACKAGE_PATH }}
However, my problem persists and I still receive the same message.
Does anybody have experience installing Weasyprint & Cairo on a Linux-based Web App?
I appreciate any help in advance.
UPDATE
Currently, I am able to deploy using the default deployment script created by Azure (shown below). I can then SSH into the deployment machine and manually activate the virtual environment and install the requisite packages. With that done, my application works as expected.
I'd like to roll this manual step into the deployment process somehow (either as part of the default script or via a post-deployment action).
GITHUB ACTIONS DEPLOYMENT SCRIPT
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@master
      - name: Set up Python version
        uses: actions/setup-python@v1
        with:
          python-version: '3.6'
      - name: Build using AppService-Build
        uses: azure/appservice-build@v2
        with:
          platform: python
          platform-version: '3.6'
      - name: 'Deploy to Azure Web App'
        uses: azure/webapps-deploy@v2
        with:
          app-name: {{appname}}
          slot-name: {{slotname}}
          publish-profile: {{profilename}}
MANUAL VIRTUAL ENV ACTIVATION & INSTALLS
source /home/site/wwwroot/pythonenv3.6/bin/activate
sudo apt-get install {{ additional packages }}
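One way to fold this into the pipeline (a sketch, not something from the original deployment script): commit the installs as a startup script in the repo and have the workflow set it as the App Service startup file. This assumes an azure/login step earlier in the job and placeholder resource names:

- name: Log in to Azure
  uses: azure/login@v1
  with:
    creds: ${{ secrets.AZURE_CREDENTIALS }}
- name: Point the App Service at a startup script
  run: |
    # startup.sh performs the apt-get installs, then starts the app
    az webapp config set \
      --resource-group my-resource-group \
      --name my-app-name \
      --startup-file "startup.sh"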
The required dependencies and the things you want to install can be added to the .yml file, but whether they take effect for your web app still needs to be tested; specific problems need to be analyzed in detail.
If that doesn't work, it is recommended to SSH in and install them manually.
I added Linux commands to the .yml file to apt-get install the packages. Below is my .yml file; it works fine.
# Docs for the Azure Web Apps Deploy action: https://github.com/Azure/webapps-deploy
# More GitHub Actions for Azure: https://github.com/Azure/actions
name: Build and deploy Python app to Azure Web App - pyodbcInstallENV
on:
  push:
    branches:
      - master
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@master
      - name: Set up Python version
        uses: actions/setup-python@v1
        with:
          python-version: '3.8'
      - name: Install custom env
        run: |
          cd /home
          sudo apt-get update
          sudo apt-get install -y g++
          sudo apt-get install -y unixodbc-dev
          pip install pyodbc
      - name: Build using AppService-Build
        uses: azure/appservice-build@v2
        with:
          platform: python
          platform-version: '3.8'
      - name: 'Deploy to Azure Web App'
        uses: azure/webapps-deploy@v2
        with:
          app-name: 'pyodbcInstallENV'
          slot-name: 'production'
          publish-profile: ${{ secrets.AzureAppService_PublishProfile_d712769***2017c9521 }}
I am fairly new to CDK. How do I set up cdk or cdk.json to run where the Python executable may be named 'python' or 'python3', depending on the platform?
cdk init --language python creates the cdk.json on my local Windows PC with the line
"app": "python app.py"
The failure occurs when Jenkins CI/CD executes the application: the build fails because the Linux-based Jenkins expects 'python3'.
Our current solution is to edit cdk.json when we commit to GitHub and Jenkins auto-builds the lower environments. Is there a better way?
Using python3 directly in cdk.json:
{
  "app": "python3 app.py",
  "context": {
    "@aws-cdk/core:enableStackNameDuplicates": "true",
    "aws-cdk:enableDiffNoFail": "true",
    "@aws-cdk/core:stackRelativeExports": "true",
    "@aws-cdk/aws-ecr-assets:dockerIgnoreSupport": true
  }
}
Or symlink python to python3:
lrwxrwxrwx 1 root root 18 Nov 8 14:20 /usr/bin/python -> /usr/bin/python3.8
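On Debian/Ubuntu the same symlink can be provided by a distro package instead of being created by hand (assuming a Debian-family distro on the Jenkins host):

sudo apt-get install -y python-is-python3   # installs /usr/bin/python -> python3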
I had a few problems, but in the end using python3 in the cdk.json file made no difference. I have a Windows OS. The prerequisites were:
Have all the configuration files in place (config and credentials in .aws, with the correct parameters)
Have Node.js and the AWS CLI installed (https://docs.aws.amazon.com/cdk/latest/guide/work-with.html#work-with-prerequisites)
With those in place, I executed the line below in my Windows terminal:
npm install -g aws-cdk
Next, in my project (I use VS Code), I created a folder from which to run cdk (I named it "cdk", but it could be anything):
mkdir cdk (create the folder in the project)
cd cdk (enter the folder)
cdk init app --language python (set the language)
source .venv/bin/activate (activate the virtual environment created by cdk init)
add "aws_cdk.aws_s3" to requirements.txt
Before executing cdk deploy, run pip install -r requirements.txt, and use cdk synth to check that everything is OK and correct any errors, as in the sketch below.
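A sketch of that sequence, run from inside the cdk folder with the virtual environment active:

pip install -r requirements.txt   # installs the aws_cdk packages listed in requirements.txt
cdk synth                         # synthesizes the CloudFormation template and surfaces errors early
cdk deploy                        # deploys the stack to the configured AWS account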
When using GitHub Actions, use sudo before the npm command and add a cd to the run commands so the workflow can navigate to the cdk folder. Without those, I got the error that follows:
--app is required either in command-line, in cdk.json or in ~/.cdk.json
Here's how the deploy job was configured in my GitHub Actions file:
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: '3.x'
      - uses: actions/setup-node@v2-beta
        with:
          node-version: '12'
      - name: Install dependencies
        run: |
          sudo npm install -g aws-cdk
          cd 2_continuous_integration_and_tests/CDK
          pip install -r requirements.txt
      - name: Deploy
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_DEFAULT_REGION: ${{ secrets.AWS_DEFAULT_REGION }}
        run: |
          cd 2_continuous_integration_and_tests/CDK
          cdk synth
          cdk deploy
I have some test data that I use for unit tests with pytest. I set the data locations with environment variables. Looking at my pytest logs, the build sees the environment variables, but the locations they reference don't exist. According to the GitHub Actions docs, the repo should be checked out under /home/runner/work/Repo/Repo/. Below is my folder structure. Does anyone see any obvious issues?
Repo/
    notebooks/
    repo/
        __init__.py
        tests/
            tests_db.hdf5
            Sample_Raw/
            ...
            __init__.py
            test_obj1.py
            test_obj2.py
        obj1.py
        obj2.py
        utils.py
build yaml
name: build-test
on:
  push:
    branches:
      - '*' # all branches for now
jobs:
  build-and-run:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-latest]
        python-version: [3.8]
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
      - name: Generate coverage report
        env:
          DB_URL: /home/runner/work/Repo/repo/tests/test_db.hdf5
          RAW_FOLDER: /home/runner/work/Repo/repo/tests/Sample_Raw/
        run: |
          pip install pytest
          pip install pytest-cov
          pytest --cov=./ --cov-report=xml
      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v1
        with:
          token: ${{ secrets.CODECOV_TOKEN }}
          file: ./coverage.xml
          name: codecov-umbrella
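One robust way to reference files in the checked-out repo, rather than hard-coding runner paths, is the github.workspace context, which expands to the checkout directory (/home/runner/work/<repo>/<repo> on Linux runners). A sketch against the folder structure above (note the tree lists tests_db.hdf5, not test_db.hdf5):

      - name: Generate coverage report
        env:
          # github.workspace == /home/runner/work/Repo/Repo here
          DB_URL: ${{ github.workspace }}/repo/tests/tests_db.hdf5
          RAW_FOLDER: ${{ github.workspace }}/repo/tests/Sample_Raw/
        run: |
          pytest --cov=./ --cov-report=xml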