I'm trying to cache the python dependencies of my project. To do that, I have this configuration in my workflow:
- uses: actions/cache@v2
  id: cache
  with:
    path: ~/.cache/pip
    key: pip-${{ runner.os }}-${{ hashFiles('**/requirements.txt') }}-${{ hashFiles('**/requirements_dev.txt') }}
    restore-keys: pip-${{ runner.os }}
- name: Install apt dependencies
  run: |
    sudo apt-get update
    sudo apt-get install gdal-bin
- name: Install dependencies
  if: steps.cache.outputs.cache-hit != 'true'
  run: |
    pip install --upgrade pip==9.0.1
    pip install -r requirements.txt
    pip install -r requirements_dev.txt
This works; by 'works' I mean that it loads the cache, skips the 'Install dependencies' step, and restores the ~/.cache/pip directory. The problem is that when I try to run the tests, the following error appears:
File "manage.py", line 7, in <module>
from django.core.management import execute_from_command_line
ImportError: No module named django.core.management
Error: Process completed with exit code 1.
Am I caching the incorrect directory? Or what am I doing wrong?
Note: this project uses Python 2.7 on Ubuntu 16.04.
Your cache step is restoring ~/.cache/pip, but that directory only holds downloaded wheels, not the installed packages, so skipping the install step leaves site-packages without Django. As explained here, you can instead cache the whole virtual environment:
- uses: actions/cache@v2
  with:
    path: ${{ env.pythonLocation }}
    key: ${{ env.pythonLocation }}-${{ hashFiles('setup.py') }}-${{ hashFiles('dev-requirements.txt') }}
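Note that env.pythonLocation is set by the actions/setup-python action, so the setup step has to run before the cache step. Adapted to the requirements files from the question, a minimal sketch might look like this (the key layout is an assumption, and the Python version is taken from the question's note):
- uses: actions/setup-python@v2
  with:
    python-version: '2.7'  # from the question's note; adjust as needed
- uses: actions/cache@v2
  id: cache
  with:
    path: ${{ env.pythonLocation }}
    key: ${{ env.pythonLocation }}-${{ hashFiles('**/requirements.txt') }}-${{ hashFiles('**/requirements_dev.txt') }}
- name: Install dependencies
  if: steps.cache.outputs.cache-hit != 'true'
  run: |
    pip install -r requirements.txt
    pip install -r requirements_dev.txt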
My file structure is as follows:
Dockerfile
.gitlab-ci.yml
Here is my Dockerfile:
FROM python:3
RUN apt-get update && apt-get install -y make
RUN apt-get install -y python3-pip
RUN pip3 install --upgrade pip
RUN pip3 install pygdbmi
RUN pip3 install pyyaml
RUN pip3 install Path
And here is my .gitlab-ci.yml file:
test-job:
  stage: test
  image: runners:test-harness
  script:
    - cd test-harness
    # - pip3 install pygdbmi
    # - pip3 install pyyaml
    - python3 main.py
  artifacts:
    untracked: false
    when: on_success
    expire_in: "30 days"
    paths:
      - test-harness/script.log
For some reason the pip3 installs in the Dockerfile don't seem to be working, as I get this error:
python3 main.py
Traceback (most recent call last):
  File "/builds/username/test-harness/main.py", line 6, in <module>
    from pygdbmi.gdbcontroller import GdbController
ModuleNotFoundError: No module named 'pygdbmi'
When I uncomment the two commented lines in .gitlab-ci.yml:
# - pip3 install pygdbmi
# - pip3 install pyyaml
It works fine, but ideally I want those two packages to be installed in the Dockerfile, not in the .gitlab-ci.yml pipeline stage.
I've tried changing the WORKDIR as well as USER and it doesn't seem to have any effect.
Any ideas/solutions?
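A common cause is that the image the job runs in (runners:test-harness) was built and pushed before the pip3 lines were added to the Dockerfile, so the job still pulls an old image. Rebuilding and pushing the image inside the pipeline rules that out. Below is a minimal sketch, assuming the project's GitLab container registry is enabled and the runner supports Docker-in-Docker; the build-image job name, the docker:24.0 tag, and the test-harness image path are assumptions:
build-image:
  stage: build
  image: docker:24.0
  services:
    - docker:24.0-dind
  script:
    # Build the image from the Dockerfile above and push it to the
    # project registry so test-job picks up the current pip3 installs.
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE/test-harness:latest" .
    - docker push "$CI_REGISTRY_IMAGE/test-harness:latest"
test-job would then reference image: $CI_REGISTRY_IMAGE/test-harness:latest instead of runners:test-harness.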
Here is my workflow file:
jobs:
  build:
    runs-on: windows-latest
    environment: Main
    env:
      MAINAPI: ${{ secrets.MAINAPI }}
      TESTAPI: ${{ secrets.TESTAPI }}
      BRAINID: ${{ secrets.BRAINID }}
      BRAINKEY: ${{ secrets.BRAINKEY }}
    steps:
      - uses: actions/checkout@v3
      - name: Set up Python 3.10
        uses: actions/setup-python@v3
        with:
          python-version: "3.10"
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install flake8 pytest
          python -m pip install PyAudio-0.2.11-cp311-cp311-win_amd64.whl
          pip install -r requirements.txt --no-deps
      - name: Lint with flake8
        run: |
          # stop the build if there are Python syntax errors or undefined names
          flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
          # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
          flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
      - name: Test with pytest
        run: |
          pytest
        env:
          MAINAPI: ${{ secrets.MAINAPI }}
          TESTAPI: ${{ secrets.TESTAPI }}
          BRAINID: ${{ secrets.BRAINID }}
          BRAINKEY: ${{ secrets.BRAINKEY }}
Here is my code:
import os

mainapi = str(os.environ["MAINAPI"])
apiurl = str(os.environ["TESTAPI"])
I have set the secrets as environment secrets, as repository secrets, and even created an environment for them. However, none of them seems to work. Please help.
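One quick way to narrow this down is to check whether the variables reach the job at all. Below is a minimal diagnostic step (a sketch, not part of the original workflow; it relies on the job-level env block above) that prints only whether each secret is present, never its value:
- name: Check secrets are visible
  shell: python
  run: |
    import os
    # Reports presence only; GitHub masks secret values in logs anyway.
    for name in ("MAINAPI", "TESTAPI", "BRAINID", "BRAINKEY"):
        print(name, "is set" if os.environ.get(name) else "is MISSING")
If everything prints MISSING, the secrets are not being injected (for example, the environment name doesn't match, or the run was triggered from a fork, which doesn't receive secrets); if they print as set, the problem is in how the code reads them.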
I have a Django app that uses WeasyPrint to generate PDF outputs. This works fine on my local development machine.
I am able to successfully deploy to Azure Web Apps, but get the following error message:
2020-11-17T07:34:14.287002623Z OSError: no library called "cairo" was found
2020-11-17T07:34:14.287006223Z no library called "libcairo-2" was found
2020-11-17T07:34:14.287009823Z cannot load library 'libcairo.so.2': libcairo.so.2: cannot open shared object file: No such file or directory
2020-11-17T07:34:14.287016323Z cannot load library 'libcairo.2.dylib': libcairo.2.dylib: cannot open shared object file: No such file or directory
2020-11-17T07:34:14.287020123Z cannot load library 'libcairo-2.dll': libcairo-2.dll: cannot open shared object file: No such file or directory
Per WeasyPrint's documentation (https://weasyprint.readthedocs.io/en/stable/install.html#debian-ubuntu), I have attempted to make the recommended installations via a custom deployment script, which looks like this:
jobs:
  build:
    name: Build and Deploy
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python ${{ env.PYTHON_VERSION }}
        uses: actions/setup-python@v2
        with:
          python-version: ${{ env.PYTHON_VERSION }}
      - name: Install dependencies
        run: |
          sudo apt-get install build-essential python3-dev python3-pip python3-setuptools python3-wheel python3-cffi libcairo2 libpango-1.0-0 libpangocairo-1.0-0 libgdk-pixbuf2.0-0 libffi-dev shared-mime-info
          python -m pip install --upgrade pip
          pip install -r requirements.txt
      - name: Deploy Web App using GH Action azure/webapps-deploy
        uses: azure/webapps-deploy@v2
        with:
          app-name: ${{ env.AZURE_WEBAPP_NAME }}
          publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}
          package: ${{ env.AZURE_WEBAPP_PACKAGE_PATH }}
However, my problem persists and I still receive the same message.
Does anybody have experience installing WeasyPrint & Cairo on a Linux-based Web App?
I appreciate any help in advance.
UPDATE
Currently, I am able to deploy using the default deployment script created by Azure (shown below). I am then able to SSH into the deployment machine and manually activate the virtual environment & install the requisite packages. This process works and my application now works as expected.
I'd like to roll this command into the deployment process somehow (either as part of the default script or via a post deployment action).
GITHUB ACTIONS DEPLOYMENT SCRIPT
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@master
      - name: Set up Python version
        uses: actions/setup-python@v1
        with:
          python-version: '3.6'
      - name: Build using AppService-Build
        uses: azure/appservice-build@v2
        with:
          platform: python
          platform-version: '3.6'
      - name: 'Deploy to Azure Web App'
        uses: azure/webapps-deploy@v2
        with:
          app-name: {{appname}}
          slot-name: {{slotname}}
          publish-profile: {{profilename}}
MANUAL VIRTUAL ENV ACTIVATION & INSTALLS
source /home/site/wwwroot/pythonenv3.6/bin/activate
sudo apt-get install {{ additional packages }}
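One way to roll the manual step into the deployment (a sketch, not from the original post): App Service on Linux accepts a custom startup command, so the install can run inside the web app's own container at boot, which is where the libraries are actually needed. The file name startup.sh and the gunicorn module path are placeholders:
#!/bin/bash
# startup.sh -- set this as the App Service "Startup Command" (name is a placeholder).
# Installs WeasyPrint's native libraries inside the App Service container
# (package list mirrors the question's apt-get line), then starts the app.
apt-get update
apt-get install -y libcairo2 libpango-1.0-0 libpangocairo-1.0-0 \
    libgdk-pixbuf2.0-0 shared-mime-info
gunicorn --bind=0.0.0.0 --timeout 600 myproject.wsgi  # module path is a placeholder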
The required dependencies and other things you want to install can be added to the .yml file, but whether they take effect for your web app still needs testing: commands in the workflow run on the GitHub-hosted runner rather than on the App Service instance itself, so specific problems need to be analyzed in detail. If that doesn't work, it is recommended to SSH in and install manually. In my case I added Linux commands to the .yml file to apt-get install the packages I needed. Below is my .yml file; it works fine.
# Docs for the Azure Web Apps Deploy action: https://github.com/Azure/webapps-deploy
# More GitHub Actions for Azure: https://github.com/Azure/actions
name: Build and deploy Python app to Azure Web App - pyodbcInstallENV
on:
  push:
    branches:
      - master
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@master
      - name: Set up Python version
        uses: actions/setup-python@v1
        with:
          python-version: '3.8'
      - name: Install custom env
        run: |
          cd /home
          sudo apt-get update
          sudo apt-get install g++
          sudo apt-get install unixodbc-dev
          pip install pyodbc
      - name: Build using AppService-Build
        uses: azure/appservice-build@v2
        with:
          platform: python
          platform-version: '3.8'
      - name: 'Deploy to Azure Web App'
        uses: azure/webapps-deploy@v2
        with:
          app-name: 'pyodbcInstallENV'
          slot-name: 'production'
          publish-profile: ${{ secrets.AzureAppService_PublishProfile_d712769***2017c9521 }}
I am trying to cache dependencies for a GitHub Actions workflow. I use Pipenv.
This is my config:
- uses: actions/cache@v1
  with:
    path: ~/.cache/pip
    key: ${{ runner.os }}-pip-${{ hashFiles('**/Pipfile') }}
    restore-keys: |
      ${{ runner.os }}-pip-
I got this config from GitHub's own examples for using pip. I have only changed requirements.txt to Pipfile, since we don't use a requirements.txt; but even with a requirements.txt I get the same issue anyway.
The Cache Dependencies step always gives the same issue, and then again after running the tests (screenshots not reproduced here). There's no error in the workflow and it finishes as normal; however, it never seems to be able to find or update the dependency cache.
pipenv needed to be installed before the cache step:
- name: Install pipenv, libpq, and pandoc
  run: |
    sudo apt-get install libpq-dev -y
    pip install pipenv
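Putting that together with the question's config, the reordered steps might look like this (a sketch: the cache path and key are kept from the question, and the pipenv install --dev step is an assumption):
- name: Install pipenv and system deps
  run: |
    sudo apt-get install -y libpq-dev
    pip install pipenv
- uses: actions/cache@v1
  with:
    path: ~/.cache/pip
    key: ${{ runner.os }}-pip-${{ hashFiles('**/Pipfile') }}
    restore-keys: |
      ${{ runner.os }}-pip-
- name: Install dependencies
  run: pipenv install --dev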
I'm trying to set up CircleCI for the first time for my application. It's a Python 3.7.0 based app with a few tests. The app builds just fine, but fails when running the test job. Locally the tests work fine, so I assume I'm missing some CircleCI configuration?
This is my yaml:
version: 2.0
jobs:
  build:
    docker:
      - image: circleci/python:3.7.0
    steps:
      - checkout
      - run:
          name: "Run tests"
          command: python -m unittest
This is the error:
======================================================================
ERROR: tests.test_auth (unittest.loader._FailedTest)
ImportError: Failed to import test module: tests.test_auth
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/unittest/loader.py", line 434, in _find_test_path
    module = self._get_module_from_name(name)
  File "/usr/local/lib/python3.7/unittest/loader.py", line 375, in _get_module_from_name
    __import__(name)
  File "/home/circleci/project/tests/test_auth.py", line 5, in <module>
    from werkzeug.datastructures import MultiDict
ModuleNotFoundError: No module named 'werkzeug'
What am I missing?
EDIT:
I have now added pip install -r requirements.txt, but I get:
Could not install packages due to an EnvironmentError: [Errno 13] Permission denied: '/usr/local/lib/python3.7/site-packages/MarkupSafe-1.1.1.dist-info'
EDIT:
In addition to the answer, here is complete yaml configuration working:
version: 2.0
jobs:
build:
docker:
- image: circleci/python:3.7.0
steps:
- checkout
- run:
name: "Install dependencies"
command: |
python3 -m venv venv
. venv/bin/activate
pip install --upgrade pip
pip install --no-cache-dir -r requirements.txt
- run:
name: "Run tests"
command: |
. venv/bin/activate
python -m unittest
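Since most of this page is about dependency caching, it's worth noting that the venv in the configuration above can be cached in CircleCI too. A sketch using CircleCI's cache steps (the deps-v1 key name is an assumption); restore_cache goes before the install step and save_cache after it:
- restore_cache:
    keys:
      - deps-v1-{{ checksum "requirements.txt" }}
- save_cache:
    key: deps-v1-{{ checksum "requirements.txt" }}
    paths:
      - "venv"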
It simply means that the dependency 'werkzeug' is not installed; any packages your code requires beyond the base image have to be installed separately. Consider adding the dependency installation to the Dockerfile, something like below:
RUN pip install --upgrade pip && \
    pip install --no-cache-dir -r requirements.txt
If you get permission-denied issues, your tests are being run by a user who has no privileges to manage the system Python. That is exactly what happens here: the circleci/python images run as a non-root user, so installing into /usr/local/lib/python3.7/site-packages fails with Errno 13, which is why the virtual environment in the updated configuration above avoids the problem.
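If a virtual environment feels heavy, another common workaround (an alternative sketch, not from the answer above) is to install into the user site-packages, which needs no root privileges:
- run:
    name: "Install dependencies"
    command: pip install --user -r requirements.txt
Scripts installed this way land in ~/.local/bin, which may need to be added to PATH.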