Cookiecutter Django fails GitHub Actions on Run pre-commit - python

***EDIT: Issue resolved. Several things needed to be done to set up pre-commit in my environment prior to commit/push:
'git init'
'pre-commit install'
open a new terminal, activate the venv
'pre-commit run --all-files'
Hope this helps someone in the future.
Junior Web Developer making his first post!
I'm setting up a Python Django project using the Cookiecutter Django template https://github.com/cookiecutter/cookiecutter-django
However, upon any commit to GitHub, the GitHub Actions linter job fails at the Run pre-commit step. I receive the error message: "Error: The process '/opt/hostedtoolcache/Python/3.9.10/x64/bin/pre-commit' failed with exit code 1"
My research indicates this is a generic Python linter failure, but it's not clear why it's failing. The slug repo doesn't have any issues, and I've set up the project many times on 3 different machines (2 Macs and 1 Windows), pushing to a new repository each time, slightly modifying the settings each time, and not modifying any code after project initialization, sadly receiving the same results.
I'm completely stumped and my progress has come to a stop on this issue. Please note I don't have much experience with GitHub Actions. Part of using this project template is having these nice features work to support future development.
One thing I'm confused about is why Python 3.9.10 is being used in the pre-commit step, when "python3 --version" on all the machines I've used to initialize the project returns Python 3.9.9. A simple search in the project for "3.9.10" returns no results.
Below are my relevant project build settings. I've attempted many different combinations, but this is the result I'd like to get to production. Everything builds and runs via docker-compose locally without issues. Thanks massively for your help and guidance! Some seemingly non-relevant links were removed from the code due to the reputation limit.
https://github.com/TElphee01/django1
version: 0.1.0
open_source_license: 2 - BSD
timezone: EST
windows: n
use_pycharm: n
use_docker: y
postgresql_version: 14.1
js_task_runner: gulp
cloud_provider: AWS
mail_service: Mailgun
use_async: n
use_drf: y
custom_bootstrap_compilation: n
use_compressor: y
use_celery: y
use_mailhog: y
use_sentry: y
use_whitenoise: y
use_heroku: y
ci_tool: Github
keep_local_envs_in_vcs: y
debug: n
Error message found at: https://github.com/TElphee01/django1/runs/5325567134?check_suite_focus=true under Run pre-commit step
Run pre-commit/action@v2.0.3
install pre-commit
/opt/hostedtoolcache/Python/3.9.10/x64/bin/pre-commit run --show-diff-on-failure --color=always --all-files
[INFO] Initializing environment for github.com/pre-commit/pre-commit-hooks.
[INFO] Initializing environment for github.com/psf/black.
[INFO] Initializing environment for github.com/PyCQA/isort.
[INFO] Initializing environment for github.com/PyCQA/flake8.
[INFO] Initializing environment for github.com/PyCQA/flake8:flake8-isort.
[INFO] Installing environment for github.com/pre-commit/pre-commit-hooks.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
[INFO] Installing environment for github.com/psf/black.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
[INFO] Installing environment for github.com/PyCQA/isort.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
[INFO] Installing environment for github.com/PyCQA/flake8.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
trim trailing whitespace.................................................Passed
fix end of files.........................................................Failed
- hook id: end-of-file-fixer
- exit code: 1
- files were modified by this hook
Fixing README.md
check yaml...............................................................Passed
black....................................................................Passed
isort....................................................................Failed
- hook id: isort
- files were modified by this hook
Fixing /home/runner/work/django1/django1/tefrontend/users/tests/test_views.py
flake8...................................................................Passed
pre-commit hook(s) made changes.
If you are seeing this message in CI, reproduce locally with: `pre-commit run --all-files`.
To run `pre-commit` as part of git workflow, use `pre-commit install`.
All changes made by hooks:
diff --git a/README.md b/README.md
index a64ec04..c4c8a44 100644
Binary files a/README.md and b/README.md differ
diff --git a/tefrontend/users/tests/test_views.py b/tefrontend/users/tests/test_views.py
index ebdc864..4fe526a 100644
--- a/tefrontend/users/tests/test_views.py
+++ b/tefrontend/users/tests/test_views.py
@@ -11,11 +11,7 @@ from django.urls import reverse
from tefrontend.users.forms import UserAdminChangeForm
from tefrontend.users.models import User
from tefrontend.users.tests.factories import UserFactory
-from tefrontend.users.views import (
- UserRedirectView,
- UserUpdateView,
- user_detail_view,
-)
+from tefrontend.users.views import UserRedirectView, UserUpdateView, user_detail_view
pytestmark = pytest.mark.django_db
Error: The process '/opt/hostedtoolcache/Python/3.9.10/x64/bin/pre-commit' failed with exit code 1

It looks like the pre-commit checks have detected some specific issues in your project:
the "end of file" check, which ensures that a file is either empty, or ends with one newline, and
the "isort" check, which has to do with the ordering and formatting of import statements.
In both cases, the tooling is also correcting the problem.
If you were to install and run pre-commit locally, you could just commit the changes it makes and then push your code when the commits pass the checks locally.
If you install pre-commit and then run pre-commit install in your repository, it will be installed as a pre-commit hook so these checks will be run for you automatically when you commit changes.
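For completeness, a minimal local sequence might look like this (a sketch; it assumes you are working inside the project's virtualenv, and the commit message is just an example):
pip install pre-commit        # install pre-commit into the active virtualenv
pre-commit install            # register the git hook so the checks run on every commit
pre-commit run --all-files    # run every hook once; failing hooks rewrite the offending files
git add -A                    # stage the files the hooks just fixed
git commit -m "Apply pre-commit fixes"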

Related

Use pre-commit hook for black with multiple language versions for python

We are using pre-commit to format our Python code using black with the following configuration in .pre-commit-config.yaml:
repos:
  - repo: https://github.com/ambv/black
    rev: 20.8b1
    hooks:
      - id: black
        language_version: python3.7
As our packages are tested against and used in different Python versions (e.g. 3.7, 3.8, 3.9), I want to be able to use the pre-commit hook on different Python versions. But when committing code, e.g. on Python 3.8, I get an error due to the language_version in my configuration (see above):
C:\Users\FooBar\Documents\Programmierung\foo (dev -> origin)
λ git commit -m "Black file with correct black version"
[INFO] Initializing environment for https://github.com/ambv/black.
[INFO] Installing environment for https://github.com/ambv/black.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
An unexpected error has occurred: CalledProcessError: command: ('c:\\users\\FooBar\\anaconda\\python.exe', '-mvirtualenv', 'C:\\Users\\FooBar\\.cache\\pre-commit\\repobmlg3b_m\\py_env-python3.7', '-p', 'python3.7')
return code: 1
expected return code: 0
stdout:
RuntimeError: failed to find interpreter for Builtin discover of python_spec='python3.7'
stderr: (none)
Check the log at C:\Users\FooBar\.cache\pre-commit\pre-commit.log
How can I enable the pre-commit hook on different Python versions, e.g. only on Python 3?
Thanks in advance!
one way would be to set language_version: python3 (this used to be the default for black) -- the actual language_version you use there doesn't matter all that much as black doesn't use it to pick the formatted language target (that's a separate option)
generally though, you shouldn't need to set language_version as either (1) the hook itself will set a proper one or (2) it will default to your currently running python
note also: you're using the twice-deprecated url for black -- it is now psf/black
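for what it's worth, after pointing the config at psf/black and either dropping language_version or setting it to python3, a quick way (sketched, not the only one) to make sure the hook rebuilds against whatever interpreter the current machine has:
pre-commit clean                  # drop hook environments cached against the old interpreter
pre-commit run black --all-files  # rebuild the black environment and run just that hook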
__
disclaimer: I created pre-commit and I'm a black contributor

How to resolve the error "an error occurred: no such file or directory" for a Django/Python3 app

I followed this tutorial: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create-deploy-python-django.html
I was able to stand up my Django project on my local MacBook pro, but when deploying to AWS EB, it failed, logs below:
2021/02/03 23:50:45.548154 [INFO] Executing instruction: StageApplication
2021/02/03 23:50:45.807282 [INFO] extracting /opt/elasticbeanstalk/deployment/app_source_bundle to /var/app/staging/
2021/02/03 23:50:45.807307 [INFO] Running command /bin/sh -c /usr/bin/unzip -q -o /opt/elasticbeanstalk/deployment/app_source_bundle -d /var/app/staging/
2021/02/03 23:50:46.552599 [INFO] finished extracting /opt/elasticbeanstalk/deployment/app_source_bundle to /var/app/staging/ successfully
2021/02/03 23:50:46.553257 [ERROR] An error occurred during execution of command [app-deploy] - [StageApplication]. Stop running the command. Error: chown /var/app/staging/bin/python: no such file or directory
2021/02/03 23:50:46.553265 [INFO] Executing cleanup logic
2021/02/03 23:50:46.553350 [INFO] CommandService Response: {"status":"FAILURE","api_version":"1.0","results":[{"status":"FAILURE","msg":"Engine execution has encountered an error.","returncode":1,"events":[{"msg":"Instance deployment failed. For details, see 'eb-engine.log'.","timestamp":1612396246,"severity":"ERROR"}]}]}
My research led me to this post: AWS Elastic Beanstalk chown PythonPath error, but when I tried the suggested command: git rm -r --cached venv in my project directory, it returned:
fatal: pathspec 'venv' did not match any files
If you type eb logs into the command line and look for the eb-engine.log section you might be able to get more details.
Also maybe verify that you set up your eb cli correctly. Here is the link to setting it up; look at section 2.3 and make sure you have all the requirements installed appropriately.
https://github.com/aws/aws-elastic-beanstalk-cli-setup
Finally, double-check the Python version(s) you have installed. The tutorial calls for python-3.6, but you may not have that version installed.
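As a sketch of those checks (assuming the EB CLI is installed and the project was initialized with eb init):
eb logs              # pull recent environment logs; look for the eb-engine.log section
eb status            # shows, among other things, which platform (and Python version) the environment uses
python3.6 --version  # confirm the Python version the tutorial targets exists locally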

ElasticBeanstalk suddenly starts failing to deploy Django App with "Cannot use ImageField because Pillow is not installed" exception

I am using Elastic Beanstalk to deploy my Django application. Today it suddenly stopped working without any breaking changes on the application side (I've changed some templates, nothing more).
The deployment times out after 10 minutes of trying to deploy the app and nothing happens.
The only more or less useful hint I can see in the log is this:
[2020-02-20T15:00:20.437Z] INFO [19057] - [Application update .../postbuild_0_myproject/Command 01_migrate] : Activity execution failed, because: SystemCheckError: System check identified some issues:
ERRORS:
education.Author.photo: (fields.E210) Cannot use ImageField because Pillow is not installed.
HINT: Get Pillow at https://pypi.org/project/Pillow/ or run command "pip install Pillow".
education.Course.cover_image: (fields.E210) Cannot use ImageField because Pillow is not installed.
HINT: Get Pillow at https://pypi.org/project/Pillow/ or run command "pip install Pillow".
education.CourseCategory.icon_image: (fields.E210) Cannot use ImageField because Pillow is not installed.
HINT: Get Pillow at https://pypi.org/project/Pillow/ or run command "pip install Pillow".
Using staging settings
App receivers connected
(ElasticBeanstalk::ExternalInvocationError)
[2020-02-20T15:00:20.437Z] INFO [19057] - [Application update .../postbuild_0_myproject/Command 01_migrate] : Activity failed.
[2020-02-20T15:00:20.437Z] INFO [19057] - [Application update .../postbuild_0_myproject] : Activity failed.
[2020-02-20T15:00:20.437Z] INFO [19057] - [Application update ...] : Activity failed.
[2020-02-20T15:00:20.507Z] INFO [19057] - [Application update app-9a24-200220_145942-stage-200220_145942#142/AppDeployStage0/EbExtensionPostBuild] : Activity failed.
[2020-02-20T15:00:20.507Z] INFO [19057] - [Application update app-9a24-200220_145942-stage-200220_145942#142/AppDeployStage0] : Activity failed.
[2020-02-20T15:00:20.508Z] INFO [19057] - [Application update app-9a24-200220_145942-stage-200220_145942#142] : Completed activity. Result:
Application update - Command CMD-AppDeploy failed
But I already have Pillow in requirements.txt and the log above says:
Requirement already satisfied: Pillow==6.2.1 in /opt/python/run/venv/lib64/python3.6/site-packages (from -r /opt/python/ondeck/app/requirements.txt (line 51))
How can I troubleshoot and fix this? And how can I avoid similar issues in the future? I am really frightened that the same problem may randomly pop up in the production environment.
Here's some more info about the configuration:
Here's what I have in .ebextensions:
01_packages.config:
packages:
  yum:
    git: []
    postgresql93-devel: []
db-migrate.config
container_commands:
  01_migrate:
    command: "django-admin.py migrate"
    leader_only: true
option_settings:
  aws:elasticbeanstalk:application:environment:
    DJANGO_SETTINGS_MODULE: myproject.settings
django.config
option_settings:
  aws:elasticbeanstalk:container:python:
    WSGIPath: myproject/wsgi.py
wsgi_custom.config
files:
  "/etc/httpd/conf.d/wsgihacks.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      WSGIPassAuthorization On
This one is a pain and a known issue with Django when using the ImageField model/form field. Due to Python's dynamic import system it can suddenly appear, and it annoyed the hell out of me when I first came across it.
The way I normally fix this is by using conda and its equivalent of a virtualenv to ensure the right interpreter (the one with my packages) is used.
If you are not using a virtualenv or equivalent, set one up now; if you are already using one, then check that you are installing Pillow with pip3 install pillow - the pip3 being important here, as on Debian (and many other) systems plain pip will only install for Python 2.x.
Using conda will ensure this doesn't happen in production, but I would also add it to your checklist of things to test when deploying - check that the correct version of Pillow is set up and working.
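As a quick sanity check (a sketch; the ./venv path is illustrative):
source venv/bin/activate                          # make sure the intended interpreter is active
pip3 install pillow                               # installs into the active environment only
python3 -c "import PIL; print(PIL.__version__)"   # should print the installed Pillow version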
I had two Elastic Beanstalk environments with the same issue (one web tier env and a worker env).
On one of them, the issue was resolved by restarting the environment.
The other one failed to restart and timed out every time on any operation. This one I managed to fix by going to Configuration > Capacity and changing the minimum and maximum number of instances to 0. I applied the changes, waited for them to take effect, and then restored the previous values for the min and max instance numbers.
That fixed the issue.
I still have no idea what caused the issue in the first place and would love to receive some comment on that.
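For reference, the capacity change described above can also be scripted; a rough AWS CLI equivalent (the environment name is a placeholder, and MinSize/MaxSize need to be set back to their previous values afterwards) would be something like:
aws elasticbeanstalk update-environment --environment-name my-env --option-settings Namespace=aws:autoscaling:asg,OptionName=MinSize,Value=0 Namespace=aws:autoscaling:asg,OptionName=MaxSize,Value=0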

pygradle: Multi-Project-Example: buildPex FAILED when using project dependency

I would like to use pygradle in a multi-project setup with project dependencies. I created two Gradle sub-projects: a python-cli project (example-app) and a python-sdist project (example-lib) on which the cli project depends.
But currently I'm facing the following error (gist) when I try to build the app:
multi-project-example/example-app> gradle build --info
> Task :example-app:buildPex FAILED
Task ':example-app:buildPex' is not up-to-date because:
Task has not declared any outputs despite executing actions.
Starting process 'command '/home/kkdh/Projects/pygradle/examples/multi-project-example/example-app/build/venv/bin/python''. Working directory: /home/kkdh/Projects/pygradle/examples/multi-project-example/example-app Command: /home/kkdh/Projects/pygradle/examples/multi-project-example/example-app/build/venv/bin/python /home/kkdh/Projects/pygradle/examples/multi-project-example/example-app/build/venv/bin/pip freeze --all --disable-pip-version-check
Successfully started process 'command '/home/kkdh/Projects/pygradle/examples/multi-project-example/example-app/build/venv/bin/python''
/home/kkdh/Projects/pygradle/examples/multi-project-example/example-app/build/deployable/bin/example-app.pex
Starting process 'command '/home/kkdh/Projects/pygradle/examples/multi-project-example/example-app/build/venv/bin/python''. Working directory: /home/kkdh/Projects/pygradle/examples/multi-project-example/example-app Command: /home/kkdh/Projects/pygradle/examples/multi-project-example/example-app/build/venv/bin/python /home/kkdh/Projects/pygradle/examples/multi-project-example/example-app/build/venv/bin/pex --no-pypi --cache-dir /home/kkdh/Projects/pygradle/examples/multi-project-example/example-app/build/pex-cache --output-file /home/kkdh/Projects/pygradle/examples/multi-project-example/example-app/build/deployable/bin/example-app.pex --repo /home/kkdh/Projects/pygradle/examples/multi-project-example/example-app/build/wheel-cache --python-shebang /home/kkdh/.anaconda3/bin/python UNKNOWN==0.0.0 example-app==0.3.0a1
Successfully started process 'command '/home/kkdh/Projects/pygradle/examples/multi-project-example/example-app/build/venv/bin/python''
Could not satisfy all requirements for example-lib:
example-lib(from: example-app==0.3.0a1)
:example-app:buildPex (Thread[Execution worker for ':',5,main]) completed. Took 1.165 secs.
You will find the example in my fork of pygradle: https://github.com/kKdH/pygradle/tree/master/examples/multi-project-example
I opened an issue about this problem but got no response from the project maintainers, so now I'm asking here for any pointers to a solution or further troubleshooting steps.

Local GitLab runner freezes while Shared GitLab.com runner succeeds

EDIT: As Rekovni pointed out, using a GitLab runner with Docker on a Windows machine is a problem. Installing the runner in a Linux-based virtual machine solved the problem.
I am developing a Python program using a conda environment. It is hosted on GitLab.com and I am using GitLab-CI to generate the documentation.
I configured the following .gitlab-ci.yml file for it:
image: continuumio/miniconda3:latest
before_script:
  # Update conda and create environment, which is then activated.
  - conda update -vvv -y -c conda-forge conda
  - conda env create -f helpers/NAME.yml
  - source activate NAME
  # Correct installation.
  - conda install -q -y gsl=2.2.1
pages:
  script:
    # Install make.
    - apt-get update
    - apt-get install -q -y build-essential
    # Install Sphinx-related packages.
    - conda install -q -y sphinx sphinx_rtd_theme
    # Create documentation.
    - cd REPO/doc
    - sphinx-apidoc -o source/ ../REPO --force --separate
    - make html
    # Transfer documentation to public pages folder.
    - mv build/html/ ../../public/
  artifacts:
    paths:
      - public
  # only:
  #   - master
Running this script with a shared GitLab runner that is supplied with GitLab.com works and the documentation is generated and placed in the public folder.
For future unit tests (which take much longer), I want to provide a local runner on a Win 10 machine in my network. For this, I installed the gitlab-runner.exe and Docker Desktop. I successfully registered the runner with the project on GitLab.com.
The runner is using the following config.toml configuration file:
concurrent = 1
check_interval = 0
log_level = "info"
[session_server]
  session_timeout = 1800
[[runners]]
  name = "NAME"
  url = "https://gitlab.com"
  token = "TOKEN"
  executor = "docker"
  [runners.custom_build_dir]
  [runners.docker]
    tls_verify = false
    image = "alpine:latest"
    privileged = false
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
The problem is now that the local runner freezes during the execution of the above script without producing any error messages, and I am at a loss on how to debug it. What I have is:
1. The log of the script that is shown on the Job page on GitLab.com; and
2. The console output of the gitlab-runner.exe on the local machine.
Regarding 1., I see
Running with gitlab-runner 11.10.0 (3001a600)
...
Checking out COMMIT_HASH as BRANCH_NAME...
...
$ conda update -vvv -y -c conda-forge conda
DEBUG conda.gateways.logging:set_verbosity(148): verbosity set to 3
...
...
...
TRACE conda.gateways.disk.update:rename(52): renaming /opt/conda/share/doc/openssl/html/man3/OSSL_STORE_LOADER_new.html => /opt/conda/share/doc/openssl/html/man3/OSSL_STORE_LOADER_new.html.c~
TRACE conda.core.path_actions:execute(1041): renaming share/doc/openssl/html/man3/OSSL_STORE_LOADER_set_close.html => share/doc/openssl/html/man3/OSSL_STORE_LOADER_set_close.html.c~
TRACE conda.gateways.disk.update:rename(52): renaming /opt/conda/share/doc/openssl/html/man3/OSSL_STORE_LOADER_set_close.html => /opt/conda/share/doc/openssl/html/man3/OSSL_STORE_LOADER_set_close.html.c~
TRACE conda.core.path_actions:execute(1041): renaming share/doc/openssl/html/man3/OSSL_STORE_LOADER_set_ctrl.html => share/doc/openssl/html/man3/OSSL_STORE_LOADER_set_ctrl.html.c~
where it abruptly stops without reaching the - conda env create -f helpers/NAME.yml line.
Regarding 2., I see
C:\GitLab-Runner>gitlab-runner.exe --debug run
Runtime platform arch=amd64 os=windows pid=14116 revision=3001a600 version=11.10.0
Starting multi-runner from C:\GitLab-Runner\config.toml ... builds=0
Checking runtime mode GOOS=windows uid=-1
Configuration loaded builds=0
...
Feeding runners to channel builds=0
Checking for jobs... nothing runner=TOKEN
Feeding runners to channel builds=0
Checking for jobs... received job=203033130 repo_url=REPO_URL.git runner=TOKEN
...
Attaching to container HASH ... job=203033130 project=6249897 runner=TOKEN
Starting container HASH ... job=203033130 project=6249897 runner=TOKEN
Waiting for attach to finish HASH ... job=203033130 project=6249897 runner=TOKEN
Waiting for container HASH ... job=203033130 project=6249897 runner=TOKEN
Appending trace to coordinator... ok code=202 job=203033130 job-log=0-10348 job-status=running runner=TOKEN sent-log=1801-10347 status=202 Accepted
Appending trace to coordinator... ok code=202 job=203033130 job-log=0-19445 job-status=running runner=TOKEN sent-log=10348-19444 status=202 Accepted
...
Appending trace to coordinator... ok code=202 job=203033130 job-log=0-933150 job-status=running runner=TOKEN sent-log=241860-933149 status=202 Accepted
Submitting job to coordinator... ok code=200 job=203033130 job-status= runner=TOKEN
Submitting job to coordinator... ok code=200 job=203033130 job-status= runner=TOKEN
where it seems that the switch from Appending trace to coordinator to Submitting job to coordinator happens around the time when it gets stuck.
After this, 1. is not updated with any further information and 2. is stuck in a Submitting job to coordinator loop.
Does anyone know:
1. What the reason for the failure with a local runner could be (when the same script works with a shared runner)?
2. What I could do to debug this problem?
Thanks and all the best,
Thomas
GitLab CI doesn't currently offer a solution for using its runner with Docker on a Windows environment; however, there is an epic at the moment which is tracking progress for this.
In one of the issues of the epic, a contributor has managed to get a working version of a gitlab-runner which uses Docker for Windows; more details can be found here.
A more common (and potentially easier) way of using Docker in a Windows environment would be to install the gitlab-runner as a Shell runner, and call the Docker commands manually to run your tests.
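For that Shell-runner route, registration is the same gitlab-runner register step with a different executor; a sketch (URL and TOKEN are placeholders):
gitlab-runner register --non-interactive --url https://gitlab.com --registration-token TOKEN --executor shell --description "win10-shell-runner"
Jobs then run directly on the host shell, so the CI script itself would invoke the docker commands.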
Conversely, if you just want to keep using the same CI script, you could install a Linux VM on your Windows 10 machine, and have that host the docker runner!
