I have environment secrets set up in a Python GitHub Actions project.
I can access the secrets from the workflow file, as the following shows:
jobs:
log-the-inputs:
runs-on: ubuntu-latest
steps:
- run: |
echo "Log level: $LEVEL"
echo "Tags: $TAGS"
echo "Environment: $ENVIRONMENT"
echo ${{ secrets.EMAIL_USER }}
will output
Run echo "Log level: $LEVEL"
Log level: warning
Tags: false
Environment: novi
***
I expected the secrets to be available as environment variables, but when I inspect os.environ, EMAIL_USER and EMAIL_PASSWORD are not there.
How do I access the secrets from the Python script?
When you use an expression like ${{ secrets.EMAIL_USER }}, you're not referencing an environment variable. That value is substituted by the workflow engine before your script runs.
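To see the difference, compare these two steps (a minimal sketch, reusing the EMAIL_USER secret from the question):

steps:
  # The expression is expanded by the workflow engine before bash runs;
  # the shell only ever sees the already-substituted (and log-masked) value.
  - run: echo "${{ secrets.EMAIL_USER }}"
  # Here bash reads a real environment variable, which exists only because
  # the env section explicitly maps the secret into the step's environment.
  - run: echo "$EMAIL_USER"
    env:
      EMAIL_USER: ${{ secrets.EMAIL_USER }}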
If you want the secrets to be available as environment variables, you need to set those environment variables explicitly using the env section of a step or workflow. For example:
name: Workflow with secrets
on:
workflow_dispatch:
jobs:
show-secrets:
runs-on: ubuntu-latest
env:
EMAIL_USER: ${{ secrets.EMAIL_USER }}
EMAIL_PASSWORD: ${{ secrets.EMAIL_PASSWORD }}
steps:
- name: Setup Python
uses: actions/setup-python@v2
with:
python-version: '^3.9'
- name: Show environment
run: |
env | grep EMAIL
- name: Create python script
run: |
cat > showenv.py <<'EOF'
import os
print(f'Email username is {os.environ.get("EMAIL_USER", "<unknown>")}')
print(f'Email password is {os.environ.get("EMAIL_PASSWORD", "<unknown>")}')
EOF
- name: Run python script
run: |
python showenv.py
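If you prefer, the env mapping can also live on a single step instead of the whole job; a minimal sketch with the same secrets:

- name: Run python script
  env:
    EMAIL_USER: ${{ secrets.EMAIL_USER }}
    EMAIL_PASSWORD: ${{ secrets.EMAIL_PASSWORD }}
  run: |
    python showenv.py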
My repo contains a main.py that generates an HTML map and saves results in a CSV. I want the action to:
execute the Python script (this seems to be OK)
add, commit, and push the generated file to the main branch, so that it is available in the page associated with the repo.
name: refresh map
on:
schedule:
- cron: "30 11 * * *" # runs at 11:30 UTC every day
jobs:
getdataandrefreshmap:
runs-on: ubuntu-latest
steps:
- name: checkout repo content
uses: actions/checkout@v3 # checkout the repository content to github runner.
- name: setup python
uses: actions/setup-python@v4
with:
python-version: 3.8 # install the Python version needed
- name: Install dependencies
run: |
if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
- name: execute py script
uses: actions/checkout@v3
run: |
python main.py
git config user.name github-actions
git config user.email github-actions@github.com
git add .
git commit -m "crongenerated"
git push
The GitHub Action does not pass when I include the second uses: actions/checkout@v3 and the git commands.
Thanks in advance for your help.
If you want to run a script, then you don't need an additional checkout step for that. There is a difference between steps that use an action (uses:) and steps that run shell commands directly (run:); a single step cannot combine both. You can read more about it here.
In your configuration file, you mix the two in the last step. You also don't need another checkout step, because the repo from the first step is still checked out. So you can just use the following workflow:
name: refresh map
on:
schedule:
- cron: "30 11 * * *" # runs at 11:30 UTC every day
jobs:
getdataandrefreshmap:
runs-on: ubuntu-latest
steps:
- name: checkout repo content
uses: actions/checkout@v3 # checkout the repository content to github runner.
- name: setup python
uses: actions/setup-python@v4
with:
python-version: 3.8 # install the Python version needed
- name: Install dependencies
run: |
if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
- name: execute py script
run: |
python main.py
git config user.name github-actions
git config user.email github-actions@github.com
git add .
git commit -m "crongenerated"
git push
I tested it with a dummy repo and everything worked.
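One caveat worth mentioning (an assumption about your setup, not something visible in the question): the push authenticates with the automatic GITHUB_TOKEN that actions/checkout configures, and if your repository restricts that token's default permissions, the push step will fail. In that case, grant write access explicitly in the job:

jobs:
  getdataandrefreshmap:
    runs-on: ubuntu-latest
    permissions:
      contents: write # allows the GITHUB_TOKEN used by checkout to push commits
    steps:
      # ... same steps as above ...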
I have the pipeline below, and I want to run lint and lint-tests in parallel. Earlier I had one job with multiple steps, but I read that separate jobs can run in parallel. I have one runner for this, but the jobs still run one after another.
Here is a piece of the code:
name: CI
defaults:
run:
working-directory: ./
on:
push:
tags:
- v*
pull_request:
branches:
- "**"
env:
AZURE_REGISTRY_LOGIN_SERVER: ${{ secrets.AZURE_REGISTRY_LOGIN_SERVER }}
jobs:
build:
name: 'Setup and Build'
environment: non-prod
runs-on: ["self-hosted", "linux", "X64", "myr"]
outputs:
version: ${{ steps.setbuildenv.outputs.VERSION }}
module: ${{ steps.setbuildenv.outputs.MODULE }}
steps:
- name: Checkout
uses: actions/checkout@v1
- id: setbuildenv
env:
GITHUB_SHA: ${{ github.sha }}
GITHUB_REF: ${{ github.ref }}
GITHUB_REPO: ${{ github.repository }}
run: |
MODULE=$(echo -n ${GITHUB_REPO} | sed -e 's/.*\///')
if [[ $GITHUB_REF =~ refs/tags ]]; then
VERSION=$(echo -n ${GITHUB_REF} | sed -e 's/refs\/tags\///')
else
VERSION=${GITHUB_SHA:0:7}
fi
echo "VERSION=${VERSION}" >> $GITHUB_ENV
echo "::set-output name=VERSION::${VERSION}"
echo "MODULE=${MODULE}" >> $GITHUB_ENV
echo "::set-output name=MODULE::${MODULE}"
- name: Build
env:
GIT_TOKEN: ${{ secrets.PAT }}
run: |
docker build -t ${{ env.AZURE_REGISTRY_LOGIN_SERVER }}/${MODULE}:${VERSION} --build-arg GIT_TOKEN="${GIT_TOKEN}" -f container/smpl/Dockerfile .
docker build -t ${{ env.AZURE_REGISTRY_LOGIN_SERVER }}/${MODULE}-tools:${VERSION} -f container/smpl/Dockerfile .
lint:
name: 'Lint'
needs: build
environment: non-prod
runs-on: ["self-hosted", "linux", "X64", "myr"]
steps:
- name: Lint
run: |
docker run --rm ${{ env.AZURE_REGISTRY_LOGIN_SERVER }}/${{ needs.build.outputs.module }}:${{ needs.build.outputs.version }} make lint
lint-tests:
name: 'Lint tests'
needs: build
environment: non-prod
runs-on: ["self-hosted", "linux", "X64", "myr"]
steps:
- name: Lint Tests
run: |
docker run --rm ${{ env.AZURE_REGISTRY_LOGIN_SERVER }}/${{ needs.build.outputs.module }}:${{ needs.build.outputs.version }} make lint-tests
# ... (further jobs, including the push job that cleanup depends on, omitted) ...
cleanup:
name: 'Run Cleanup'
needs: push
environment: non-prod
runs-on: ["self-hosted", "linux", "X64", "myr"]
steps:
- name: Cleanup
run: |
docker rmi ${{ env.AZURE_REGISTRY_LOGIN_SERVER }}/${{ needs.build.outputs.module }}:${{ needs.build.outputs.version }}
docker rmi ${{ env.AZURE_REGISTRY_LOGIN_SERVER }}/${{ needs.build.outputs.module }}-tools:${{ needs.build.outputs.version }}
Attached is an image of how the pipeline looks.
How can I use self-hosted runners for parallel execution of lint and lint-tests?
One GitHub runner can only run one job at a time. Therefore, you would need to run multiple runners fulfilling the runs-on requirements of the parallel jobs; note that you can register more than one runner instance on the same machine.
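If registering additional runners is not an option, one workaround (a sketch assembled from your own job definitions, not the only approach) is to fold both checks into a single job, so they run as consecutive steps on the one runner instead of queueing as separate jobs:

lint-and-lint-tests:
  name: 'Lint and lint tests'
  needs: build
  environment: non-prod
  runs-on: ["self-hosted", "linux", "X64", "myr"]
  steps:
    - name: Lint
      run: |
        docker run --rm ${{ env.AZURE_REGISTRY_LOGIN_SERVER }}/${{ needs.build.outputs.module }}:${{ needs.build.outputs.version }} make lint
    - name: Lint tests
      run: |
        docker run --rm ${{ env.AZURE_REGISTRY_LOGIN_SERVER }}/${{ needs.build.outputs.module }}:${{ needs.build.outputs.version }} make lint-tests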
I am using https://www.npmjs.com/package/youtube-dl-exec through a simple JS Lambda function on AWS Lambda (Node 14).
The code is pretty simple and gathers info for the given URL (as long as YTDL supports it). I have tested it with Jest, and it works well on my local machine, where Python 2.7 is installed.
My package.json dependencies look like
"dependencies": {
"youtube-dl": "^3.5.0",
"youtube-dl-exec": "^1.2.0"
},
"devDependencies": {
"jest": "^26.6.3"
}
I am using a GitHub Action to deploy the code on push to master, using this main.yml file:
name: Deploy to AWS lambda
on: [push]
jobs:
deploy_source:
name: build and deploy lambda
strategy:
matrix:
node-version: [14.x]
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v1
- name: Use Node.js ${{ matrix.node-version }}
uses: actions/setup-node@v1
with:
node-version: ${{ matrix.node-version }}
- name: npm install and build
run: |
npm ci
npm run build --if-present
env:
CI: true
- uses: actions/setup-python@v2
with:
python-version: '3.x' # Version range or exact version of a Python version to use, using SemVer's version range syntax
architecture: 'x64' # optional x64 or x86. Defaults to x64 if not specified
- name: zip
uses: montudor/action-zip@v0.1.0
with:
args: zip -qq -r ./bundle.zip ./
- name: default deploy
uses: appleboy/lambda-action@master
with:
aws_access_key_id: ${{ secrets.AWS_EEEEE_ID }}
aws_secret_access_key: ${{ secrets.AWS_EEEEE_KEY }}
aws_region: us-EEEEE
function_name: DownloadEEEEE
zip_file: bundle.zip
I am getting a
INFO Error: Command failed with exit code 127: /var/task/node_modules/youtube-dl-exec/bin/youtube-dl https://www.EXQEEEE.com/p/XCCRXqXInEEZ4W4 --dump-json --no-warnings --no-call-home --no-check-certificate --prefer-free-formats --youtube-skip-dash-manifest
/usr/bin/env: python: No such file or directory
at makeError (/var/task/node_modules/execa/lib/error.js:59:11)
at handlePromise (/var/task/node_modules/execa/index.js:114:26)
at processTicksAndRejections (internal/process/task_queues.js:93:5) {
shortMessage: 'Command failed with exit code 127: /var/task/node_modules/youtube-dl-exec/bin/youtube-dl https://www.instagram.com/p/CCRq_InFZ44 --dump-json --no-warnings --no-call-home --no-check-certificate --prefer-free-formats --youtube-skip-dash-manifest',
command: '/var/task/node_modules/youtube-dl-exec/bin/youtube-dl https://www.EXQEEEE.com/p/XCCRXqXInEEZ4W4 --dump-json --no-warnings --no-call-home --no-check-certificate --prefer-free-formats --youtube-skip-dash-manifest',
exitCode: 127,
signal: undefined,
signalDescription: undefined,
stdout: '',
stderr: '/usr/bin/env: python: No such file or directory',
failed: true,
timedOut: false,
isCanceled: false,
killed: false
}
error.
I have tried adding a Lambda layer, adding Python in the main.yml file, and installing it through a dependency, but perhaps I am doing something wrong, because the library cannot find python at /usr/bin/env.
How do I make python available at that path?
Should I not use ubuntu-latest in the Lambda config (main.yml), since it doesn't come packed with Python by default?
Any help would be appreciated.
Note: I have obfuscated the URLs for privacy purposes.
Lambda runtimes from nodejs10.x onward no longer contain Python, and therefore youtube-dl does not work: its executable is resolved via #!/usr/bin/env python, which is exactly the lookup failing in your error output. Installing Python with setup-python in the workflow does not help, because that only affects the GitHub Actions runner that builds the bundle, not the Lambda environment that runs it.
I am trying to publish a Python package to PyPI from a GitHub workflow, but the authentication fails for "Test PyPI". I successfully published to Test PyPI from the command line, so my API token must be correct. I also checked for leading and trailing spaces in the secret value (i.e., on GitHub).
As the last commits show, I tried a few things without success.
I first tried to inline simple bash commands into the workflow as follows, but I have not been able to get my secrets into environment variables. Nothing showed up in the logs when I printed these variables.
- name: Publish on Test PyPI
env:
TWINE_USERNAME: __token__
TWINE_PASSWORD: ${{ secrets.PYPI_TEST_TOKEN }}
TWINE_REPOSITORY_URL: "https://test.pypi.org/legacy/"
run: |
echo "$TWINE_PASSWORD"
pip install twine
twine check dist/*
twine upload dist/*
I also tried to use a dedicated GitHub Action as follows, but it does not work either. I guess the problem comes from the secrets not being available in my workflow. What puzzled me is that my workflow uses another token/secret just fine! Though, if I put it in an environment variable, nothing is printed out. I also recreated my secrets under different names (PYPI_TEST_TOKEN and TEST_PYPI_API_TOKEN) but to no avail.
- name: Publish to Test PyPI
uses: pypa/gh-action-pypi-publish#release/v1
with:
user: __token__
password: ${{ secrets.TEST_PYPI_API_TOKEN }}
repository_url: https://test.pypi.org/legacy/
I guess I miss something obvious (as usual). Any help is highly appreciated.
I eventually figured it out. My mistake was that I defined my secrets within an environment, and, by default, workflows do not run in any specific environment. For that to happen, I have to explicitly name the environment in the job definition, as follows:
jobs:
publish:
environment: CI # <--- /!\ Here is the link to the environment
needs: build
runs-on: ubuntu-latest
if: startsWith(github.ref, 'refs/tags/v')
steps:
- uses: actions/checkout@v2
# Some more steps here ...
- name: Publish to Test PyPI
env:
TWINE_USERNAME: "__token__"
TWINE_PASSWORD: ${{ secrets.TEST_PYPI_API_TOKEN }}
TWINE_REPOSITORY_URL: "https://test.pypi.org/legacy/"
run: |
echo "KEY: ${TWINE_PASSWORD}"
twine check dist/*
twine upload --verbose --skip-existing dist/*
The documentation actually mentions this.
Thanks to those who commented for pointing me in the right direction.
This is the problem I struggled with. Since I am working with multiple environments that all share identically named secrets with different values, the following solution worked for me. Isolated pieces of it are described here and there, but it wasn't obvious how to piece them together.
First, I define that the environment is selected during the workflow_dispatch event:
on:
workflow_dispatch:
inputs:
environment:
type: choice
description: Select the environment
required: true
options:
- TEST
- UAT
I then reference it in the job definition:
jobs:
run-portal-tests:
runs-on: ubuntu-latest
environment: ${{ github.event.inputs.environment }}
Finally, I reference the secrets in the step that needs them:
- name: Run tests
env:
ENDPOINT: ${{ secrets.ENDPOINT }}
TEST_USER: ${{ secrets.TEST_USER }}
TEST_USER_PASSWORD: ${{ secrets.TEST_USER_PASSWORD }}
CLIENT_ID: ${{ secrets.CLIENT_ID }}
CLIENT_SECRET: ${{ secrets.CLIENT_SECRET }}
run: python3 main.py
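Pieced together, the whole workflow looks roughly like this (a sketch; the workflow name and the checkout step are assumptions the fragments above leave out, and TEST/UAT are environments whose secrets share names but differ in value):

name: Run portal tests
on:
  workflow_dispatch:
    inputs:
      environment:
        type: choice
        description: Select the environment
        required: true
        options:
          - TEST
          - UAT
jobs:
  run-portal-tests:
    runs-on: ubuntu-latest
    environment: ${{ github.event.inputs.environment }} # resolves the secrets below against the chosen environment
    steps:
      - uses: actions/checkout@v3
      - name: Run tests
        env:
          ENDPOINT: ${{ secrets.ENDPOINT }}
          TEST_USER: ${{ secrets.TEST_USER }}
          TEST_USER_PASSWORD: ${{ secrets.TEST_USER_PASSWORD }}
          CLIENT_ID: ${{ secrets.CLIENT_ID }}
          CLIENT_SECRET: ${{ secrets.CLIENT_SECRET }}
        run: python3 main.py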
I would like to comment on a PR if there are more than 100 flake8 errors, without disabling the merge button.
My approach is something like this:
name: Flake8 Check
on: [pull_request]
jobs:
flake8:
name: Flake8 Check
runs-on: ubuntu-latest
steps:
- name: Check out code
uses: actions/checkout@v2
- name: Install Python
uses: actions/setup-python@v1
with:
python-version: 3.6
- name: Install dependency
run: pip install flake8
- name: Flake8
id: flake
run: echo "::set-output name=number::$(flake8 --config=tools/dev/.flake8 --count -qq)"
- name: comment PR
uses: unsplash/comment-on-pr@master
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
msg: "There are ${{ steps.flake.outputs.number }} Flake8 errors which is a lot :disappointed: \nThis will not block you to merge it but please try to fix them."
check_for_duplicate_msg: true
if: ${{ steps.flake.outputs.number }} > 100
However, it comments even when there are fewer than 100 errors. I've checked the documentation, and it looks correct to me.
What am I missing?
On the GitHub Actions page for contexts, they recommend not using ${{ }} in an if condition (although they also show an if condition that does use the ${{ }} syntax). Your version does not work because the inner expression is expanded first, so the condition becomes a literal string such as "42 > 100", and a non-empty string is always truthy; that is why the step comments every time.
So in your case, you need to change your if to:
if: steps.flake.outputs.number > 100
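In context, the corrected step is your original one with only the if changed:

- name: comment PR
  uses: unsplash/comment-on-pr@master
  if: steps.flake.outputs.number > 100
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  with:
    msg: "There are ${{ steps.flake.outputs.number }} Flake8 errors which is a lot :disappointed: \nThis will not block you to merge it but please try to fix them."
    check_for_duplicate_msg: true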