I have a problem deploying my Django app to Heroku using Terraform. The application can be found in a public repository here: https://gitlab.com/Draqun/pa-forum
Locally, when I run terraform init and terraform apply, everything works fine. Sadly, the same steps fail when run in CI/CD:
Error: Provider produced inconsistent final plan
When expanding the plan for heroku_build.pa_forum_build to include new
values learned so far during apply, provider
"registry.terraform.io/heroku/heroku" produced an invalid new value
for .local_checksum: was
cty.StringVal("SHA256:87bc408ce0cd8e1466c2d836cdfe2564b06d7ac8defd946c103b05918994ce49"),
but now
cty.StringVal("SHA256:dd8a0aaf8adc091ef32bf06cae1bd645dbbd8f549f692a4ba4d1c385ed94fc6b").
This is a bug in the provider, which should be reported in the
provider's own issue tracker.
I had this error before, when the application source code lived directly in the root directory. Moving the code from the root directory into the src directory fixed the deploy from my local machine. Sadly, it did not help with the CI/CD deploy.
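For context, the local_checksum in the error is a digest of the build source that Terraform records when the plan is created and recomputes during apply, so the plan becomes "inconsistent" whenever anything under source_dir differs between the plan job and the apply job (freshly generated locale files, caches, a new checkout, and so on). The sketch below is only an illustration of that idea in Python, not the provider's actual code:
import hashlib
import os

def directory_digest(root: str) -> str:
    """Hash every file under `root` in a stable order (illustrative only)."""
    digest = hashlib.sha256()
    for dirpath, _, filenames in sorted(os.walk(root)):
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            digest.update(path.encode())
            with open(path, "rb") as handle:
                digest.update(handle.read())
    return "SHA256:" + digest.hexdigest()

# Any file that differs between the plan job and the apply job changes this value.
print(directory_digest("src"))
Because tf-plan-job and tf-apply-job below run as separate jobs in separate containers, any difference in what ends up under src/ between those two jobs will produce exactly this kind of mismatch.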
My CI/CD configuration looks like this:
image: python:3.9-slim-buster
variables:
PYTHONPATH: src/
TF_PLAN: app.tfplan
TF_STATE: app.tf_state
TF_VAR_source_version: $CI_COMMIT_SHA
cache:
key: "project-${CI_COMMIT_REF_SLUG}"
paths:
- .cache/pip
- .venv
- src/requirements.txt
before_script:
- python -V # Print out python version for debugging
- apt-get update
- apt-get upgrade -y
- apt-get install -y make gettext
- pip install --upgrade pip
- pip install virtualenv poetry
- poetry config virtualenvs.in-project true
- poetry install --remove-untracked
- make init-env
- make create-env
- make freeze
- source .venv/bin/activate
stages:
- build
- tests
- deploy-plan
- deploy
- cleanup
- pages
- destroy
.tf-job:
image:
name: hashicorp/terraform:1.0.6
entrypoint: [""]
before_script:
- cd terraform/
- terraform --version
- terraform providers
- terraform init
- terraform fmt
- terraform validate
.tf-destroy:
extends: .tf-job
dependencies:
- tf-apply-job
script:
- terraform destroy -state=$TF_STATE -auto-approve
generate-messages-job:
stage: build
script:
- python3 src/manage.py makemessages -l pl
- python3 src/manage.py compilemessages -l pl
cache:
paths:
- src/locale/
artifacts:
paths:
- src/locale/
tests-job:
services:
- postgres:12.2-alpine
stage: tests
variables:
POSTGRES_DB: custom_db
POSTGRES_HOST: postgres
POSTGRES_PORT: 5432
POSTGRES_USER: custom_user
POSTGRES_PASSWORD: custom_pass
POSTGRES_HOST_AUTH_METHOD: trust
DATABASE_URL: postgres://$POSTGRES_USER:$POSTGRES_PASSWORD@postgres:5432/$POSTGRES_DB
script:
- python3 src/manage.py migrate
- .cicd/scripts/run_tests.sh src/manage.py
artifacts:
paths:
- report/
static-analysis-job:
stage: tests
script:
- .cicd/scripts/static_analysis.sh src/forum/
tf-plan-job:
extends: .tf-job
stage: deploy-plan
script:
- terraform plan -state=$TF_STATE -out=$TF_PLAN
cache:
paths:
- terraform/.terraform.lock.hcl
- terraform/.terraform
- terraform/$TF_STATE
- terraform/$TF_PLAN
artifacts:
paths:
- terraform/$TF_STATE
- terraform/$TF_PLAN
expire_in: 7 days
tf-apply-job:
extends: .tf-job
stage: deploy
script:
- ls -al .
- ls -al ../src/
- ls -al ../src/locale/pl/LC_MESSAGES
- terraform apply -auto-approve -state=$TF_STATE $TF_PLAN
after_script:
- terraform show
dependencies:
- tf-plan-job
only:
- master
cleanup-job:
extends: .tf-destroy
stage: cleanup
when: on_failure
pages:
stage: pages
script:
- mkdir doc
- cd doc; make html
- mv _build/html/ ../public/
artifacts:
paths:
- public
when: manual
only:
- master
tf-destroy-job:
extends: .tf-destroy
stage: destroy
when: manual
My main.tf looks like this:
terraform {
required_providers {
heroku = {
source = "heroku/heroku"
version = "~> 4.6.0"
}
}
required_version = ">= 0.14"
}
resource "heroku_app" "pa_forum" {
name = "python-academy-forum-${var.environment}"
region = "eu"
config_vars = {
LOG_LEVEL = "info"
DEBUG = false
VERSION = var.source_version
}
}
resource "heroku_addon" "postgres" {
app = heroku_app.pa_forum.id
plan = "heroku-postgresql:hobby-dev"
}
resource "heroku_build" "pa_forum_build" {
app = heroku_app.pa_forum.id
source {
path = var.source_dir
version = var.source_version
}
}
resource "heroku_config" "common" {
sensitive_vars = {
SECRET_KEY = "1234%^&*()someRandomData)(*&^%$##!"
}
}
resource "heroku_formation" "pa_forum_formation" {
app = heroku_app.pa_forum.id
quantity = var.app_quantity
size = "Free"
type = "web"
depends_on = [heroku_build.pa_forum_build]
}
I need a solution for the failing deploy.
Any suggestions for improving the CI/CD process or the Terraform scripts are also welcome.
Any help will be appreciated.
Best regards,
Draqun
Related
I am testing an application within a GitLab pipeline, using the following script:
stages:
- test
- report
run_ui_tests:
tags:
- est
stage: test
before_script:
- echo "Prepairing enviroment..."
- python --version
- pip install -r requirements.txt
script:
- echo "Executing ui tests with Pytest..."
- cd cio_tests
- dir
- pytest -v authorize_test.py
allow_failure: true
artifacts:
when: always
paths:
- cio_tests/allure-results/
expire_in: 5 mins 30 sec
reporting:
tags:
- est
stage: report
needs:
- run_ui_tests
script:
- cd cio_tests
- dir
- allure generate --clean cio_tests/allure-report/
artifacts:
when: always
paths:
- cio_tests/allure-report/
expire_in: 5 days
The pipeline finishes successfully, and the Allure report is saved locally on disk. However, when the report is opened in the browser, there is no data in it.
What's wrong?
Solution found. To open the built report locally, you need to start a server in the directory containing the allure-report folder.
To do this:
Go to the directory containing the allure-report folder in cmd or a shell
Run the command: allure open -p allure-report
The Allure report will open in the Edge browser
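Any static file server pointed at the report directory achieves the same thing, since the report loads its data with XHR requests that most browsers block when the page is opened via file://. For example, a minimal sketch using only the Python standard library (port 8000 chosen arbitrarily; assumes the report was generated into ./allure-report):
import functools
import http.server

# Serve ./allure-report over HTTP so the report's data files load correctly.
handler = functools.partial(
    http.server.SimpleHTTPRequestHandler, directory="allure-report"
)
http.server.HTTPServer(("localhost", 8000), handler).serve_forever()
Then open http://localhost:8000 in the browser.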
I have two repositories A & B.
Azure Repository A - Contains a python app
Azure Repository B - Contains .yml templates and .py scripts I want to run in the .yml templates
According to the documentation, I cannot do this, because when the template is expanded into the calling repository A's pipeline it acts like a code directive and just injects the YAML; it does not know or care about the .py files in the template repository.
What are my options, short of writing all my .py routines inline?
Azure Repo A's pipeline YAML file:
trigger: none
resources:
pipelines:
- pipeline: my_project_a_pipeline
source: trigger_pipeline
trigger:
branches:
include:
- master
repositories:
- repository: template_repo_b
type: git
name: template_repo_b
ref: main
stages:
- template: pipelines/some_template.yml#template_repo_b
parameters:
SOME_PARAM_KEY: "some_param_value"
Azure Repo B's some_template.yml:
parameters:
- name: SOME_PARAM_KEY
type: string
stages:
- stage: MyStage
displayName: "SomeStage"
jobs:
- job: "MyJob"
displayName: "MyJob"
steps:
- bash: |
echo Bashing
ls -la
displayName: 'Execute Warmup'
- task: PythonScript@0
inputs:
scriptSource: "filePath"
scriptPath: /SOME_PATH_ON_REPO_B/my_dumb_script.py
script: "my_dumb_script.py"
Is there an option to wire the .py files into a completely separate repo C, add C to the resources of B's templates, and be on my way?
EDIT:
I can see In Azure templates repository, is there a way to mention repository for a filePath parameter of azure task 'pythonScript'?, but then how do I consume the Python package? Can I still use the PythonScript task? It sounds like I would then need to call my pip-packaged code straight from bash.
I figured it out: how to pip install .py files in Azure DevOps pipelines, using Azure repositories, via a template in the same repo.
Just add a reference to the template repo itself at the top of any template.
In the consuming repo:
repositories:
- repository: this_template_repo
type: git
name: this_template_repo
ref: master
Then add a job that references the repo by that name:
- job: "PIP_INSTALL_LIBS"
displayName: "pip install libraries to agent"
steps:
- checkout: this_template_repo
path: this_template_repo
- bash: |
python3 -m pip install setuptools
python3 -m pip install -e $(Build.SourcesDirectory)/somepypimodule/src --force-reinstall --no-deps
displayName: 'pip install pip package'
GitLab version is 13.6.6
GitLab Runner version is 11.2.0
My .gitlab-ci.yml:
image: "python:3.7"
before_script:
- pip install flake8
flake8:
stage: test
script:
- flake8 --max-line-length=79
tags:
- test
The only information I get from Pipelines is that the script failed, and the output of the failed job is "No job log". How can I get more detailed error output?
Using artifacts can help you.
image: "python:3.7"
before_script:
- pip install flake8
flake8:
stage: test
script:
- flake8 --max-line-length=79
- cd path/to
tags:
- test
artifacts:
when: on_failure
paths:
- path/to/test.log
The log file can be downloaded via the web interface.
Note: using when: on_failure ensures that test.log is only collected if the build fails, saving disk space on successful builds.
I am trying to run an Apache Beam pipeline with DirectRunner in Cloud Build, and to do that I need to install the requirements for the Python script, but I am facing some errors.
This is part of my cloudbuild.yaml:
steps:
- name: gcr.io/cloud-builders/gcloud
entrypoint: 'bash'
args: [ '-c', "gcloud secrets versions access latest --secret=env --format='get(payload.data)' | tr '_-' '/+' | base64 -d > .env" ]
id: GetSecretEnv
# - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
# entrypoint: 'bash'
# args: ['-c', 'gcloud config set app/cloud_build_timeout 1600 && gcloud app deploy --quiet tweepy-to-pubsub/app.yaml']
- name: gcr.io/cloud-builders/gcloud
id: Access id_github
entrypoint: 'bash'
args: [ '-c', 'gcloud secrets versions access latest --secret=id_github> /root/.ssh/id_github' ]
volumes:
- name: 'ssh'
path: /root/.ssh
# Set up git with key and domain
- name: 'gcr.io/cloud-builders/git'
id: Set up git with key and domain
entrypoint: 'bash'
args:
- '-c'
- |
chmod 600 /root/.ssh/id_github
cat <<EOF >/root/.ssh/config
Hostname github.com
IdentityFile /root/.ssh/id_github
EOF
ssh-keyscan -t rsa github.com > /root/.ssh/known_hosts
volumes:
- name: 'ssh'
path: /root/.ssh
- name: 'gcr.io/cloud-builders/git'
# Connect to the repository
id: Connect and clone repository
dir: workspace
args:
- clone
- --recurse-submodules
- git@github.com:x/repo.git
volumes:
- name: 'ssh'
path: /root/.ssh
- name: 'gcr.io/$PROJECT_ID/dataflow-python3'
entrypoint: '/bin/bash'
args: [ '-c',
'source /venv/bin/activate' ]
- name: 'gcr.io/$PROJECT_ID/dataflow-python3'
entrypoint: '/bin/bash'
dir: workspace
args: ['pip', 'install','-r', '/dir1/dir2/requirements.txt']
- name: 'gcr.io/$PROJECT_ID/dataflow-python3'
entrypoint: 'python'
dir: workspace
args: [ 'dir1/dir2/script.py',
'--runner=DirectRunner' ]
timeout: "1600s"
Without the step where I install the requirements this works, but I need the libs because I get Python errors for missing libs, and on the pip install step (Step #5 in the original form of the cloud build) the build fails with this error:
Step #5: Already have image (with digest): gcr.io/x/dataflow-python3
Step #5: import-im6.q16: unable to open X server `' @ error/import.c/ImportImageCommand/360.
Step #5: import-im6.q16: unable to open X server `' @ error/import.c/ImportImageCommand/360.
Step #5: /usr/local/bin/pip: line 5: from: command not found
Step #5: /usr/local/bin/pip: pip: line 7: syntax error near unexpected token `('
Step #5: /usr/local/bin/pip: pip: line 7: ` sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])'
How do I fix this? I have also tried some examples from the internet, and they don't work.
Edit: First I deploy to App Engine, and then I clone the repo in the Cloud Build VM, install the requirements, and try to run the Python script.
I think the issue comes from your path definitions:
'source /venv/bin/activate'
and
'pip', 'install','-r', '/dir1/dir2/requirements.txt'
You use absolute paths, and that doesn't work on Cloud Build. The current working directory is /workspace/. If you use relative paths, simply add a dot (.) before the path and it should work better.
Or not... Indeed, you have the venv activation in one step and the pip install in the following step. From one step to the next, the runtime environment is discarded and reloaded in a different container, so the environment set up by your source command is gone by the time the pip step runs.
In addition, your Cloud Build environment exists only for the duration of the build and is destroyed afterwards. You don't need a venv in this case, and you can simplify the last three steps like this:
- name: 'gcr.io/$PROJECT_ID/dataflow-python3'
entrypoint: '/bin/bash'
args:
- '-c'
- |
pip install -r ./dir1/dir2/requirements.txt
python ./dir1/dir2/script.py --runner=DirectRunner
I have a Python script named app.py that contains the project ID:
project_id = "p007-999"
I hard-code the same value inside the .gitlab-ci.yml file provided below:
# list of enabled stages, the default should be built, test, publish
stages:
- build
- publish
before_script:
- export WE_PROJECT_ID="p007-999"
- docker login -u "$WELANCE_REGISTRY_USER" -p "$WELANCE_REGISTRY_TOKEN" registry.welance.com
build:
stage: build
services:
- docker:dind
variables:
DOCKER_HOST: docker:2375
script:
- echo $WE_PROJECT_ID
- cd templates && pwd && yarn install && yarn prod && cd ..
- docker build -t registry.welance.com/$WE_PROJECT_ID:$CI_COMMIT_REF_SLUG.$CI_COMMIT_SHA -f ./build/ci/Dockerfile .
I would like to automate this. I think the steps for that would be:
a. Write the project_id value from the Python script to a shell script variables.sh.
b. In the before_script: of the YAML file, execute variables.sh and read the value from there.
How do I achieve this correctly?
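For illustration, step (a) on its own could be a few lines of Python that pull the ID out of app.py and write variables.sh; this is only a sketch of the approach described above (file names as in the question, regex extraction assumed):
import re
from pathlib import Path

# Extract project_id = "..." from app.py and emit a shell file the CI job can source.
source = Path("app.py").read_text()
match = re.search(r'project_id\s*=\s*"([^"]+)"', source)
if match is None:
    raise SystemExit("project_id not found in app.py")
Path("variables.sh").write_text(f'export WE_PROJECT_ID="{match.group(1)}"\n')
The approach below takes a different route and rewrites .gitlab-ci.yml in place instead.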
You can do this with ruamel.yaml, which was specifically developed for this kind of round-trip update (disclaimer: I am the author of that package).
Assuming your input is:
# list of enabled stages, the default should be built, test, publish
stages:
- build
- publish
before_script:
- PID_TO_REPLACE
- docker login -u "$WELANCE_REGISTRY_USER" -p "$WELANCE_REGISTRY_TOKEN" registry.welance.com
build:
stage: build
services:
- docker:dind
variables:
DOCKER_HOST: docker:2375
script:
- echo $WE_PROJECT_ID
- cd templates && pwd && yarn install && yarn prod && cd ..
- docker build -t registry.welance.com/$WE_PROJECT_ID:$CI_COMMIT_REF_SLUG.$CI_COMMIT_SHA -f ./build/ci/Dockerfile .
And your code is something like:
import sys
from pathlib import Path
import ruamel.yaml
def update_project_id(path, pid):
yaml = ruamel.yaml.YAML()
yaml.indent(sequence=4, offset=2) # non-standard indent of 4 for sequences
yaml.preserve_quotes = True
data = yaml.load(path)
data['before_script'][0] = 'export WE_PROJECT_ID="' + pid + '"'
yaml.dump(data, path)
file_name = Path('.gitlab-ci.yml')
project_id = "p007-999"
update_project_id(file_name, project_id)
which gives as output:
# list of enabled stages, the default should be built, test, publish
stages:
- build
- publish
before_script:
- export WE_PROJECT_ID="p007-999"
- docker login -u "$WELANCE_REGISTRY_USER" -p "$WELANCE_REGISTRY_TOKEN" registry.welance.com
build:
stage: build
services:
- docker:dind
variables:
DOCKER_HOST: docker:2375
script:
- echo $WE_PROJECT_ID
- cd templates && pwd && yarn install && yarn prod && cd ..
- docker build -t registry.welance.com/$WE_PROJECT_ID:$CI_COMMIT_REF_SLUG.$CI_COMMIT_SHA -f ./build/ci/Dockerfile .
(including the comment, which you would lose with most other YAML loaders/dumpers)
This is almost definitely inappropriate, but I really can't help myself.
WARNING: This is destructive, and will overwrite .gitlab-ci.yml.
awk '
NR==FNR && $1=="project_id" {pid=$NF}
/WE_PROJECT_ID=/ {sub(/\".*\"/, pid)}
NR!=FNR {print > FILENAME}
' app.py .gitlab-ci.yml
In the first file only, assign the last column to pid only if the first column is exactly "project_id".
On any line in any file that assigns the variable WE_PROJECT_ID, replace the first quoted string with pid.
In any files other than the first, print all records to the current file. This is possible due to awk's nifty buffers. If you have to be told to make a back-up, don't run this.