Poetry install error when running in a GitHub workflow - python

I'm encountering an error while installing dependencies with Poetry in GitHub Actions. The install works fine locally, but when the Action runs on GitHub, the "Install dependencies" step fails with the runtime error 'Key "files" does not exist.'
Poetry Lock file URL
https://pastebin.com/nmLmpYLh
Error Message
Run poetry install
Creating virtualenv ailyssa-backend in /home/runner/work/ailyssa_backend/ailyssa_backend/.venv
Installing dependencies from lock file
[NonExistentKey]
'Key "files" does not exist.'
Error: Process completed with exit code 1.
GitHub Workflow
name: Linting
on:
  pull_request:
    branches: [ master ]
    types: [opened, synchronize]
jobs:
  linting:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        python-version: [3.9]
        os: [ubuntu-latest]
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}
      - name: Set up Poetry
        uses: abatilo/actions-poetry@v2.0.0
      - name: Install dependencies
        run: poetry install
      - name: Run code quality check
        run: poetry run pre-commit run -a
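One thing worth checking (my guess, not confirmed by the logs): this NonExistentKey error is typically what an older Poetry raises when it reads a poetry.lock written by a newer Poetry, because the location of the per-package "files" entries changed between lock-file formats. Pinning the action to the same Poetry version used locally (poetry --version) would rule that out; recent releases of abatilo/actions-poetry accept a poetry-version input, so the step could look roughly like this (version number is a placeholder):
- name: Set up Poetry
  uses: abatilo/actions-poetry@v2
  with:
    poetry-version: "1.3.1"  # hypothetical pin; match the version that wrote poetry.lock locally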
TOML File
[tool.poetry]
name = "ailyssa_backend"
version = "4.0.9"
description = "The RESTful API for Ailytics using the Django Rest Framework"
authors = ["Shan Tharanga <63629580+ShanWeera@users.noreply.github.com>"]
[tool.poetry.dependencies]
python = "^3.9"
Django = "4.1.3"
djangorestframework = "^3.13.1"
django-environ = "^0.8.1"
djangorestframework-simplejwt = "^5.0.0"
drf-spectacular = {version = "0.25.1", extras = ["sidecar"]}
django-cors-headers = "^3.10.0"
uvicorn = "^0.16.0"
django-filter = "^21.1"
psycopg2 = "^2.9.3"
django-auditlog = "2.2.1"
boto3 = "^1.22.4"
django-allow-cidr = "^0.4.0"
pyppeteer = "^1.0.2"
plotly = "^5.8.1"
pandas = "^1.4.2"
psutil = "^5.9.1"
requests = "^2.28.0"
opencv-python = "^4.6.0"
tzdata = "^2022.1"
pyotp = "^2.6.0"
channels = "3.0.5"
dictdiffer = "^0.9.0"
deepmerge = "^1.0.1"
django-oauth-toolkit = "^2.1.0"
drf-writable-nested = "^0.7.0"
Twisted = {version = "22.10.0", extras = ["tls,http2"]}
pathvalidate = "^2.5.2"
humanize = "^4.4.0"
[tool.poetry.dev-dependencies]
pre-commit = "^2.15.0"
mypy = "^0.971"
[build-system]
requires = ["poetry-core>=1.0.0"]
build-backend = "poetry.core.masonry.api"
[tool.poetry.scripts]
manage = 'manage:main'
[tool.black]
line-length = 120
skip-string-normalization = true
target-version = ['py38']
include = '''
/(
\.pyi?$
| \.pyt?$
)/
'''
exclude = '''
/(
\.eggs
| \.git
| \.hg
| \.mypy_cache
| \.tox
| \.venv
| _build
| buck-out
| build
| dist
| tests/.*/setup.py
)/
'''

Related

Github Actions not accessing download from Newspaper3k

I've been trying to use GitHub Actions to run a Python script. Everything seems to run fine, except a specific function that uses the Newspaper3k package. The article appears to download fine (article.html works ok), but Article.parse() does not work. This works fine on my local server, but not on GitHub. Is this related to file locations being different on GitHub? It's a private repository, in case that makes a difference.
My YAML script is as follows:
build:
  runs-on: ubuntu-latest
  steps:
    - name: checkout repo content
      uses: actions/checkout@v3 # checkout the repository content to the github runner
    - name: setup python
      uses: actions/setup-python@v4
      with:
        python-version: '3.10' # install the python version needed
        cache: 'pip'
    - name: install python packages
      run: |
        if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
    - name: execute py script # run file
      env:
        WORDPRESS_USER: ${{ secrets.WORDPRESS_USER }}
        WORDPRESS_PASSWORD: ${{ secrets.WORDPRESS_PASSWORD }}
      run: |
        python main.py
The function in question is provided below:
def generate_article_summary(supplied_links):
    summary_list = ""
    for news_article in supplied_links[:5]:
        try:
            url = news_article
            article = Article(url, config=config)
            article.download()
            article.parse()
            article.nlp()
        except:
            summary_list = summary_list + "\n"
            pass
        summary_list = summary_list + "\n" + article.summary
    return summary_list
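Since the bare except swallows the real error, a first step (a debugging sketch, not a fix; it assumes the same newspaper3k Article class and the config object defined elsewhere in the script) is to log the exception so the Actions log shows why parse() fails on the runner:
import traceback

from newspaper import Article


def generate_article_summary(supplied_links):
    summary_list = ""
    for news_article in supplied_links[:5]:
        try:
            article = Article(news_article, config=config)  # `config` as defined elsewhere in the script
            article.download()
            article.parse()
            article.nlp()
            summary_list += "\n" + article.summary
        except Exception:
            # Print the full traceback so the GitHub Actions log shows the real failure.
            traceback.print_exc()
            summary_list += "\n"
    return summary_list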
Any help would be much appreciated.

Deploy app to heroku using terraform in Gitlab CI/CD

I have a problem deploying my Django app to Heroku using Terraform. The application can be found in a public repository here: https://gitlab.com/Draqun/pa-forum
Locally, terraform init and terraform apply work fine. Sadly, the same steps fail when run in CI/CD.
Error: Provider produced inconsistent final plan
When expanding the plan for heroku_build.pa_forum_build to include new
values learned so far during apply, provider
"registry.terraform.io/heroku/heroku" produced an invalid new value
for .local_checksum: was
cty.StringVal("SHA256:87bc408ce0cd8e1466c2d836cdfe2564b06d7ac8defd946c103b05918994ce49"),
but now
cty.StringVal("SHA256:dd8a0aaf8adc091ef32bf06cae1bd645dbbd8f549f692a4ba4d1c385ed94fc6b").
This is a bug in the provider, which should be reported in the
provider's own issue tracker.
I had this error before, when the application source code lived directly in the root directory. Moving the code into a src directory fixed deployment from my local machine, but sadly it did not help with the CI/CD deploy.
My CI/CD configuration looks like this:
image: python:3.9-slim-buster

variables:
  PYTHONPATH: src/
  TF_PLAN: app.tfplan
  TF_STATE: app.tf_state
  TF_VAR_source_version: $CI_COMMIT_SHA

cache:
  key: "project-${CI_COMMIT_REF_SLUG}"
  paths:
    - .cache/pip
    - .venv
    - src/requirements.txt

before_script:
  - python -V # Print out python version for debugging
  - apt-get update
  - apt-get upgrade -y
  - apt-get install -y make gettext
  - pip install --upgrade pip
  - pip install virtualenv poetry
  - poetry config virtualenvs.in-project true
  - poetry install --remove-untracked
  - make init-env
  - make create-env
  - make freeze
  - source .venv/bin/activate

stages:
  - build
  - tests
  - deploy-plan
  - deploy
  - cleanup
  - pages
  - destroy

.tf-job:
  image:
    name: hashicorp/terraform:1.0.6
    entrypoint: [""]
  before_script:
    - cd terraform/
    - terraform --version
    - terraform providers
    - terraform init
    - terraform fmt
    - terraform validate

.tf-destroy:
  extends: .tf-job
  dependencies:
    - tf-apply-job
  script:
    - terraform destroy -state=$TF_STATE -auto-approve

generate-messages-job:
  stage: build
  script:
    - python3 src/manage.py makemessages -l pl
    - python3 src/manage.py compilemessages -l pl
  cache:
    paths:
      - src/locale/
  artifacts:
    paths:
      - src/locale/

tests-job:
  services:
    - postgres:12.2-alpine
  stage: tests
  variables:
    POSTGRES_DB: custom_db
    POSTGRES_HOST: postgres
    POSTGRES_PORT: 5432
    POSTGRES_USER: custom_user
    POSTGRES_PASSWORD: custom_pass
    POSTGRES_HOST_AUTH_METHOD: trust
    DATABASE_URL: postgres://$POSTGRES_USER:$POSTGRES_PASSWORD@postgres:5432/$POSTGRES_DB
  script:
    - python3 src/manage.py migrate
    - .cicd/scripts/run_tests.sh src/manage.py
  artifacts:
    paths:
      - report/

static-analysis-job:
  stage: tests
  script:
    - .cicd/scripts/static_analysis.sh src/forum/

tf-plan-job:
  extends: .tf-job
  stage: deploy-plan
  script:
    - terraform plan -state=$TF_STATE -out=$TF_PLAN
  cache:
    paths:
      - terraform/.terraform.lock.hcl
      - terraform/.terraform
      - terraform/$TF_STATE
      - terraform/$TF_PLAN
  artifacts:
    paths:
      - terraform/$TF_STATE
      - terraform/$TF_PLAN
    expire_in: 7 days

tf-apply-job:
  extends: .tf-job
  stage: deploy
  script:
    - ls -al .
    - ls -al ../src/
    - ls -al ../src/locale/pl/LC_MESSAGES
    - terraform apply -auto-approve -state=$TF_STATE $TF_PLAN
  after_script:
    - terraform show
  dependencies:
    - tf-plan-job
  only:
    - master

cleanup-job:
  extends: .tf-destroy
  stage: cleanup
  when: on_failure

pages:
  stage: pages
  script:
    - mkdir doc
    - cd doc; make html
    - mv _build/html/ ../public/
  artifacts:
    paths:
      - public
  when: manual
  only:
    - master

tf-destroy-job:
  extends: .tf-destroy
  stage: destroy
  when: manual
main.tf looks like this:
terraform {
  required_providers {
    heroku = {
      source  = "heroku/heroku"
      version = "~> 4.6.0"
    }
  }
  required_version = ">= 0.14"
}

resource "heroku_app" "pa_forum" {
  name   = "python-academy-forum-${var.environment}"
  region = "eu"
  config_vars = {
    LOG_LEVEL = "info"
    DEBUG     = false
    VERSION   = var.source_version
  }
}

resource "heroku_addon" "postgres" {
  app  = heroku_app.pa_forum.id
  plan = "heroku-postgresql:hobby-dev"
}

resource "heroku_build" "pa_forum_build" {
  app = heroku_app.pa_forum.id
  source {
    path    = var.source_dir
    version = var.source_version
  }
}

resource "heroku_config" "common" {
  sensitive_vars = {
    SECRET_KEY = "1234%^&*()someRandomData)(*&^%$##!"
  }
}

resource "heroku_formation" "pa_forum_formation" {
  app        = heroku_app.pa_forum.id
  quantity   = var.app_quantity
  size       = "Free"
  type       = "web"
  depends_on = [heroku_build.pa_forum_build]
}
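For what it's worth, one guess at the cause (an assumption, not something the error proves): tf-plan-job and tf-apply-job run as separate jobs, so if anything under var.source_dir differs between the two checkouts (for example the locale/ files produced by generate-messages-job), heroku_build computes a different checksum at apply time than at plan time, which matches the local_checksum drift in the error. A minimal sketch that sidesteps this by planning and applying in the same job (a hypothetical replacement for tf-plan-job and tf-apply-job):
tf-deploy-job:
  extends: .tf-job
  stage: deploy
  script:
    # Plan and apply in one job so the source tree (and therefore heroku_build's
    # checksum) cannot change between the two steps.
    - terraform plan -state=$TF_STATE -out=$TF_PLAN
    - terraform apply -auto-approve -state=$TF_STATE $TF_PLAN
  artifacts:
    paths:
      - terraform/$TF_STATE
    expire_in: 7 days
  only:
    - master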
To sum up: I need a solution for the failing deploy.
Any suggestions for improving the CI/CD process or the Terraform scripts are also welcome.
Any help will be appreciated.
Best regards,
Draqun

How do I access a git tag in a google cloud build?

I have a Cloud Source Repository where I maintain the code of my python package. I have set up two triggers:
A trigger that runs on every commit on every branch (this one installs my Python package and tests the code).
A trigger that runs on a pushed git tag (this one installs the package, tests it, builds artifacts, and deploys them to my private PyPI repo).
During the second trigger, I want to verify that my version number matches the git tag. In setup.py, I have added this code:
#!/usr/bin/env python
import sys
import os
from setuptools import setup
from setuptools.command.install import install

VERSION = "v0.1.5"


class VerifyVersionCommand(install):
    """Custom command to verify that the git tag matches our version"""
    description = 'verify that the git tag matches our version'

    def run(self):
        tag = os.getenv('TAG_NAME')
        if tag != VERSION:
            info = "Git tag: {0} does not match the version of this app: {1}".format(
                tag, VERSION
            )
            sys.exit(info)


setup(
    name="name",
    version=VERSION,
    classifiers=["Programming Language :: Python :: 3 :: Only"],
    py_modules=["name"],
    install_requires=[
        [...]
    ],
    packages=["name"],
    cmdclass={
        'verify': VerifyVersionCommand,
    }
)
The beginning of my cloudbuild.yaml looks like this:
steps:
- name: 'docker.io/library/python:3.8.6'
  id: Install
  entrypoint: /bin/sh
  args:
  - -c
  - |
    python3 -m venv /workspace/venv &&
    . /workspace/venv/bin/activate &&
    pip install -e .
- name: 'docker.io/library/python:3.8.6'
  id: Verify
  entrypoint: /bin/sh
  args:
  - -c
  - |
    . /workspace/venv/bin/activate &&
    python setup.py verify
This works flawlessly on CircleCI, but on Cloud Build I get the error message:
Finished Step #0 - "Install"
Starting Step #1 - "Verify"
Step #1 - "Verify": Already have image: docker.io/library/python:3.8.6
Step #1 - "Verify": running verify
Step #1 - "Verify": /workspace/venv/lib/python3.8/site-packages/setuptools/dist.py:458: UserWarning: Normalizing 'v0.1.5' to '0.1.5'
Step #1 - "Verify": warnings.warn(tmpl.format(**locals()))
Step #1 - "Verify": Git tag: None does not match the version of this app: v0.1.5
Finished Step #1 - "Verify"
ERROR
ERROR: build step 1 "docker.io/library/python:3.8.6" failed: step exited with non-zero status: 1
Therefore, the TAG_NAME variable as specified in the Cloud Build documentation seems to not contain the git tag.
How can I access the git tag to verify it?
TAG_NAME is set as a substitution variable, not as an environment variable.
You can pass it through explicitly:
- name: 'docker.io/library/python:3.8.6'
  id: Verify
  entrypoint: /bin/sh
  env:
  - "TAG_NAME=$TAG_NAME"
  args:
  - -c
  - |
    . /workspace/venv/bin/activate &&
    python setup.py verify
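With that env entry in place, Cloud Build fills in the $TAG_NAME substitution for tag-triggered builds and exports it into the step's environment, so os.getenv('TAG_NAME') in setup.py returns the pushed tag instead of None.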

GitHub workflow for automating Python script with secrets not working

Following is my Python script that adds links from liked tweets to Pocket:
from dotenv import load_dotenv
load_dotenv()
import os
import re
import tweepy
from pocket import Pocket
#Twitter keys
consumer_key = os.environ.get('API_key')
consumer_secret = os.environ.get('API_secretkey')
#Pocket keys
p_consumer_key = os.environ.get('Pocket_consumer_key')
p_access_token = os.environ.get('Pocket_access_token')
#authenticate and call twitter api
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
api = tweepy.API(auth, parser=tweepy.parsers.JSONParser())
p = Pocket(consumer_key=p_consumer_key, access_token=p_access_token)
#gets JSON of liked tweets
fav = api.favorites('importhuman', count=100, tweet_mode='extended')
links = []
for status in fav:
    url_list = status['entities']['urls']
    if url_list != []:
        for item in url_list:
            link = item['expanded_url']
            if link not in links:
                if re.search("//twitter.com/", link) is None:
                    links.append(link)
                    p.add(link)
I'm trying to automate this script to run every 5 minutes with a GitHub workflow, but it's not working so far.
name: run-five-minutes
on:
  schedule:
    - cron: '*/5 * * * *'
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
        with:
          python-version: '3.x'
      - run: pip install -r requirements.txt
        env:
          API_key: ${{ secrets.API_KEY }}
          API_secretkey: ${{ secrets.API_SECRETKEY }}
          Pocket_consumer_key: ${{ secrets.POCKET_CONSUMER_KEY }}
          Pocket_access_token: ${{ secrets.POCKET_ACCESS_TOKEN }}
      - run: python3 app.py
The requirements.txt file has python-dotenv, tweepy, and pocket. Any help would be appreciated.
I figured it out. The problem was that I was setting the secrets as env on the step before the one that runs the script, so the script did not have access to them. I moved the env block onto the run step, and it works.
So, instead of
- run: pip install -r requirements.txt
  env:
    API_key: ${{ secrets.API_KEY }}
    API_secretkey: ${{ secrets.API_SECRETKEY }}
    Pocket_consumer_key: ${{ secrets.POCKET_CONSUMER_KEY }}
    Pocket_access_token: ${{ secrets.POCKET_ACCESS_TOKEN }}
- run: python3 app.py
I did this:
- run: pip install -r requirements.txt
- run: python3 app.py
  env:
    API_key: ${{ secrets.API_KEY }}
    API_secretkey: ${{ secrets.API_SECRETKEY }}
    Pocket_consumer_key: ${{ secrets.POCKET_CONSUMER_KEY }}
    Pocket_access_token: ${{ secrets.POCKET_ACCESS_TOKEN }}

FileNotFoundError: Github Actions Workflow fails when creating directory or file during test

I am using the GitHub Python application workflow for CI. My application creates a folder to store temporary files. It works perfectly when testing locally, but GitHub Actions will not let me create a new directory. I get the error below:
@classmethod
def save_files(cls, files: list) -> str:
    """
    saves a list of files in the "files"
    folder in app
    :param files: list of FileStorage objects
    :return: directory name where files saved
    """
    folder = time.strftime("%Y%m%d-%H%M%S")
    folder_path = Path(__file__).parent / "files" / folder
    os.mkdir(folder_path)
E FileNotFoundError: [Errno 2] No such file or directory: /home/runner/work/DocumentAnalysisTool/DocumentAnalysisTool/app/files/20200430-235749
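A likely cause (an assumption, not confirmed by the question): git does not track empty directories, so the parent app/files/ folder may simply not exist in the checkout on the runner, and os.mkdir only creates the last path component. A minimal sketch that creates missing parent directories as well (the FileSaver class name is a hypothetical stand-in for the class that owns save_files):
import time
from pathlib import Path


class FileSaver:
    """Stand-in for the app class that owns save_files."""

    @classmethod
    def save_files(cls, files: list) -> str:
        folder = time.strftime("%Y%m%d-%H%M%S")
        folder_path = Path(__file__).parent / "files" / folder
        # parents=True also creates the intermediate "files" directory if it is
        # missing on the runner; exist_ok=True avoids FileExistsError on reruns.
        folder_path.mkdir(parents=True, exist_ok=True)
        # (saving of the FileStorage objects omitted, as in the original snippet)
        return str(folder_path)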
Here is my workflow pythonapp.yml file:
name: Python application

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python 3.8
        uses: actions/setup-python@v1
        with:
          python-version: 3.8
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
      - name: Lint with flake8
        run: |
          pip install flake8
          # stop the build if there are Python syntax errors or undefined names
          flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
          # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
          flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
      - name: Test with pytest
        run: |
          pip install pytest
          pytest
Thank you in advance
