I am trying to publish a Python package to PyPI from a GitHub workflow, but authentication fails for Test PyPI. I successfully published to Test PyPI from the command line, so my API token must be correct. I also checked for leading and trailing spaces in the secret value on GitHub.
As the last commits show, I tried a few things without success.
I first tried to inline simple bash commands into the workflow as follows, but I have not been able to get my secrets into environment variables: nothing showed up in the logs when I printed them.
- name: Publish on Test PyPI
  env:
    TWINE_USERNAME: __token__
    TWINE_PASSWORD: ${{ secrets.PYPI_TEST_TOKEN }}
    TWINE_REPOSITORY_URL: "https://test.pypi.org/legacy/"
  run: |
    echo "$TWINE_PASSWORD"
    pip install twine
    twine check dist/*
    twine upload dist/*
I also tried to use a dedicated GitHub Action as follows, but it does not work either. I suspect the problem is that the secrets are not available in my workflow. What puzzles me is that my workflow uses another token/secret just fine! Though if I put that one in an environment variable, nothing is printed out either. I also recreated my secrets under different names (PYPI_TEST_TOKEN and TEST_PYPI_API_TOKEN), to no avail.
- name: Publish to Test PyPI
  uses: pypa/gh-action-pypi-publish@release/v1
  with:
    user: __token__
    password: ${{ secrets.TEST_PYPI_API_TOKEN }}
    repository_url: https://test.pypi.org/legacy/
I guess I am missing something obvious (as usual). Any help is highly appreciated.
I eventually figured it out. My mistake was that I defined my secrets within an environment, and, by default, workflows do not run in any specific environment. For the secrets to be available, I have to explicitly name the environment in the job definition, as follows:
jobs:
  publish:
    environment: CI  # <--- /!\ Here is the link to the environment
    needs: build
    runs-on: ubuntu-latest
    if: startsWith(github.ref, 'refs/tags/v')
    steps:
      - uses: actions/checkout@v2
      # Some more steps here ...
      - name: Publish to Test PyPI
        env:
          TWINE_USERNAME: "__token__"
          TWINE_PASSWORD: ${{ secrets.TEST_PYPI_API_TOKEN }}
          TWINE_REPOSITORY_URL: "https://test.pypi.org/legacy/"
        run: |
          echo KEY: '${TWINE_PASSWORD}'
          twine check dist/*
          twine upload --verbose --skip-existing dist/*
The documentation actually mentions this.
Thanks to those who commented for pointing me in the right direction.
This is the problem I struggled with. Since I am working with multiple environments that all share identically named secrets with different values, the following solution worked for me. Isolated pieces are described here and there, but it wasn't obvious how to put them together.
First, I define the workflow so that the environment is selected during the workflow_dispatch event:
on:
  workflow_dispatch:
    inputs:
      environment:
        type: choice
        description: Select the environment
        required: true
        options:
          - TEST
          - UAT
I then reference it in the job context:
jobs:
  run-portal-tests:
    runs-on: ubuntu-latest
    environment: ${{ github.event.inputs.environment }}
Finally, I expose the secrets in the step that needs them:
- name: Run tests
  env:
    ENDPOINT: ${{ secrets.ENDPOINT }}
    TEST_USER: ${{ secrets.TEST_USER }}
    TEST_USER_PASSWORD: ${{ secrets.TEST_USER_PASSWORD }}
    CLIENT_ID: ${{ secrets.CLIENT_ID }}
    CLIENT_SECRET: ${{ secrets.CLIENT_SECRET }}
  run: python3 main.py
I have a web scraper that runs locally and puts the data in a MySQL database hosted on filess.io. I wanted to set up a schedule on GitHub Actions to run it consistently, but the build fails here:
try:
    with connect(
        host=DB_HOST,
        user=DB_USER,
        password=DB_PASSWORD,
        database=DB_DATABASE,
        port=DB_PORT
    ) as connection:
        print(connection)
With this error:
Run python main.py
Traceback (most recent call last):
  File "myscript.py", line 66, in <module>
    with connect(
AttributeError: __enter__
Error: Process completed with exit code 1.
I have the secrets set up in GitHub, and their values are pulled into the code in this earlier section with no errors:
try:
    DB_HOST = os.environ["DB_HOST"]
    DB_USER = os.environ["DB_USER"]
    DB_PASSWORD = os.environ["DB_PASSWORD"]
    DB_DATABASE = os.environ["DB_DATABASE"]
    DB_PORT = os.environ["DB_PORT"]
This code works perfectly on my local machine, with the secrets saved in a .env file. I have double- and triple-checked that my secrets are set in GitHub. Am I missing something?
I tried running it locally (worked fine) and logging the GitHub secrets to verify they were stored correctly (the values are masked, so that didn't work). I looked up the __enter__ error; it means some attribute is missing, but I can't figure out which.
The main point of confusion: it works locally. This leads me to believe it's an error in my GitHub setup. Any ideas what's going on?
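For what it's worth, `AttributeError: __enter__` points at the connection object rather than at the secrets: whatever `connect()` returned is not a context manager, so the `with` statement fails before any query runs. A minimal sketch that reproduces the failure and shows a version-agnostic workaround (the `LegacyConnection` class is a hypothetical stand-in, not real driver code):

```python
class LegacyConnection:
    """Hypothetical stand-in: has close() but no __enter__/__exit__."""
    def close(self):
        pass

def connect(**kwargs):
    return LegacyConnection()

ctx_failed = False
try:
    with connect(host="example") as conn:  # mirrors the failing line
        pass
except (AttributeError, TypeError):  # AttributeError: __enter__ on older Pythons
    ctx_failed = True

# Version-agnostic workaround: manage the connection explicitly.
conn = connect(host="example")
try:
    pass  # queries would go here
finally:
    conn.close()
```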
EDIT: adding the GitHub Actions workflow code below:
name: Manual workflow
on: [workflow_dispatch]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: checkout repo content
        uses: actions/checkout@v2  # checkout the repository content to github runner
      - name: setup python
        uses: actions/setup-python@v4
        with:
          python-version: '3.9'  # install the python version needed
      - name: install python packages
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
      - name: execute py script  # run main.py
        env:
          DB_HOST: ${{ secrets.DB_HOST }}
          DB_USER: ${{ secrets.DB_USER }}
          DB_PASSWORD: ${{ secrets.DB_PASSWORD }}
          DB_DATABASE: ${{ secrets.DB_DATABASE }}
          DB_PORT: ${{ secrets.DB_PORT }}
        run: python main.py
After much digging: my requirements.txt had mysql-connector listed, which is deprecated. My local system had mysql-connector-python installed and was using that. I'm not sure how requirements.txt ended up with the wrong one. Adding mysql-connector-python to requirements.txt fixed this particular bug.
Thanks to @Azeem for your debugging help!
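As a quick guard against this kind of name mismatch, you can compare the installed distributions against the names you expect (a sketch; the expected set here is just the corrected package name from above):

```python
# Sketch: report expected distributions that are not actually installed.
from importlib.metadata import distributions

installed = {(dist.metadata["Name"] or "").lower() for dist in distributions()}
expected = {"mysql-connector-python"}  # the corrected requirements.txt entry
missing = expected - installed
for name in sorted(missing):
    print(f"not installed: {name}")
```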
I am utilizing https://www.npmjs.com/package/youtube-dl-exec through a simple JS function on an AWS Lambda (Node 14).
The code is pretty simple and gathers info for the given URL (if supported by YTDL). I have tested it with Jest and it works well on my local machine, where Python 2.7 is installed.
My package.json dependencies look like:
"dependencies": {
  "youtube-dl": "^3.5.0",
  "youtube-dl-exec": "^1.2.0"
},
"devDependencies": {
  "jest": "^26.6.3"
}
I am using a GitHub Action to deploy the code on push to master, using this main.yml file:
name: Deploy to AWS lambda
on: [push]
jobs:
  deploy_source:
    name: build and deploy lambda
    strategy:
      matrix:
        node-version: [14.x]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
      - name: npm install and build
        run: |
          npm ci
          npm run build --if-present
        env:
          CI: true
      - uses: actions/setup-python@v2
        with:
          python-version: '3.x'  # Version range or exact version of a Python version to use, using SemVer's version range syntax
          architecture: 'x64'  # optional x64 or x86. Defaults to x64 if not specified
      - name: zip
        uses: montudor/action-zip@v0.1.0
        with:
          args: zip -qq -r ./bundle.zip ./
      - name: default deploy
        uses: appleboy/lambda-action@master
        with:
          aws_access_key_id: ${{ secrets.AWS_EEEEE_ID }}
          aws_secret_access_key: ${{ secrets.AWS_EEEEE_KEY }}
          aws_region: us-EEEEE
          function_name: DownloadEEEEE
          zip_file: bundle.zip
I am getting this error:
INFO Error: Command failed with exit code 127: /var/task/node_modules/youtube-dl-exec/bin/youtube-dl https://www.EXQEEEE.com/p/XCCRXqXInEEZ4W4 --dump-json --no-warnings --no-call-home --no-check-certificate --prefer-free-formats --youtube-skip-dash-manifest
/usr/bin/env: python: No such file or directory
    at makeError (/var/task/node_modules/execa/lib/error.js:59:11)
    at handlePromise (/var/task/node_modules/execa/index.js:114:26)
    at processTicksAndRejections (internal/process/task_queues.js:93:5) {
  shortMessage: 'Command failed with exit code 127: /var/task/node_modules/youtube-dl-exec/bin/youtube-dl https://www.EXQEEEE.com/p/XCCRXqXInEEZ4W4 --dump-json --no-warnings --no-call-home --no-check-certificate --prefer-free-formats --youtube-skip-dash-manifest',
  command: '/var/task/node_modules/youtube-dl-exec/bin/youtube-dl https://www.EXQEEEE.com/p/XCCRXqXInEEZ4W4 --dump-json --no-warnings --no-call-home --no-check-certificate --prefer-free-formats --youtube-skip-dash-manifest',
  exitCode: 127,
  signal: undefined,
  signalDescription: undefined,
  stdout: '',
  stderr: '/usr/bin/env: python: No such file or directory',
  failed: true,
  timedOut: false,
  isCanceled: false,
  killed: false
}
I have tried adding a Lambda layer, adding Python in the main.yml file, and installing it through a dependency, but perhaps I am doing something wrong, because the library cannot find python at /usr/bin/env.
How do I make python available at that path?
Should I not use ubuntu-latest in the workflow config (main.yml), since it doesn't come packed with Python by default?
Any help would be appreciated.
Note: I have obfuscated the URLs for privacy purposes.
The new nodejs10.x Lambda runtime does not contain Python anymore, and therefore youtube-dl does not work.
I would like to comment on a PR if there are more than 100 flake8 errors, but without disabling the merge button.
My approach is something like this:
name: Flake8 Check
on: [pull_request]
jobs:
  flake8:
    name: Flake8 Check
    runs-on: ubuntu-latest
    steps:
      - name: Check out code
        uses: actions/checkout@v2
      - name: Install Python
        uses: actions/setup-python@v1
        with:
          python-version: 3.6
      - name: Install dependency
        run: pip install flake8
      - name: Flake8
        id: flake
        run: echo "::set-output name=number::$(flake8 --config=tools/dev/.flake8 --count -qq)"
      - name: comment PR
        uses: unsplash/comment-on-pr@master
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          msg: "There are ${{ steps.flake.outputs.number }} Flake8 errors which is a lot :disappointed: \nThis will not block you to merge it but please try to fix them."
          check_for_duplicate_msg: true
        if: ${{ steps.flake.outputs.number }} > 100
However, it comments even when there are fewer than 100 errors. I've checked the documentation and it looks correct to me.
What am I missing?
On the GitHub Actions page for contexts, they recommend not using ${{ }} in the condition of the if context. (They also show an if condition that uses the ${{ }} syntax, but it does not actually work, as you have shown here.)
So in your case, you need to change your if to:
if: steps.flake.outputs.number > 100
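Note that step outputs are strings; in a working `if:` expression, GitHub coerces them to numbers for the `>` comparison. As a plain-Python sketch of the same gating logic (the 100 threshold comes from the workflow above):

```python
def should_comment(flake8_count: str, threshold: int = 100) -> bool:
    """Mirror of the workflow's `if:` check: comment only above the threshold."""
    # Step outputs arrive as strings, so convert before comparing.
    return int(flake8_count) > threshold

print(should_comment("150"))  # True
print(should_comment("42"))   # False
```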
I'm trying to create a GitHub workflow that runs a Python script (which outputs three graphs), adds those graphs to the readme.md, then commits the changes to the repo and displays the graphs on the readme page. I would like it to be triggered whenever a new push happens.
as a bash script it would look like this:
git pull
python analysis_1.py
git add .
git commit -m "triggered on action"
git push
I'm not really sure where to start or how to set up the action. I tried setting one up, but it wouldn't make any changes.
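The bash sequence above could equally be driven from Python via subprocess (a sketch; nothing in the workflow requires it). The git calls are shown as comments so the helper can be tried without touching a repository:

```python
import subprocess

def run(*cmd: str) -> str:
    """Run a command, fail loudly on a nonzero exit code, return its stdout."""
    result = subprocess.run(cmd, check=True, capture_output=True, text=True)
    return result.stdout

# Equivalent of the bash script, step by step:
# run("git", "pull")
# run("python", "analysis_1.py")
# run("git", "add", ".")
# run("git", "commit", "-m", "triggered on action")
# run("git", "push")
print(run("echo", "hello").strip())
```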
See this answer for how to commit back to your repository during a workflow.
In your case it might look something like this. Tweak it where necessary.
on:
  push:
    branches:
      - master
jobs:
  updateGraphs:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v1
        with:
          python-version: '3.x'
      - name: Generate graphs
        run: python analysis_1.py
      - name: Update graphs
        run: |
          git config --global user.name 'Your Name'
          git config --global user.email 'your-username@users.noreply.github.com'
          git commit -am "Update graphs"
          git push
Alternatively, raise a pull request instead of committing immediately, using the create-pull-request action.
on:
  push:
    branches:
      - master
jobs:
  updateGraphs:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v1
        with:
          python-version: '3.x'
      - name: Generate graphs
        run: python analysis_1.py
      - name: Create Pull Request
        uses: peter-evans/create-pull-request@v2
        with:
          token: ${{ secrets.GITHUB_TOKEN }}
          commit-message: Update graphs
          title: Update graphs
          branch: update-graphs
I have 2 private GitHub repositories (say A and B) in the organization (say ORG). Repository A has repository B in requirements.txt:
-e git+git@github.com:ORG/B.git#egg=B
And I have the following workflow for A (in .github/workflows/test.yml):
name: Python package
on: push
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Install requirements
        run: |
          pip install -r requirements.txt
      - name: Test with pytest
        run: |
          pytest ./tests
As B is private, installing it fails.
Is it possible to install B while testing A in this workflow if they are in the same organization? How?
Since access tokens are bound to an account and have write access to all of its private repos, using one is a very bad solution.
Instead, use deploy keys.
Deploy keys are simply SSH keys that you can use to clone a repo.
Create a new SSH key pair on your computer
Put the public key in the private dependency repo's Deploy keys
Put the private key in the app repo's Actions secrets
Delete the keys from your computer
Once that's set up, you can load the private key into the SSH agent in your GitHub Actions job. There's no need to import a third-party GitHub Action; a 2-liner will suffice.
eval `ssh-agent -s`
ssh-add - <<< '${{ secrets.PRIVATE_SSH_KEY }}'
pip install -r requirements.txt
I found that ssh-add command here.
I did it this way:
- uses: actions/checkout@v1
  with:
    repository: organization_name/repo_name
    token: ${{ secrets.ACCESS_TOKEN }}
You need to provide a valid token; you can generate one by following this guide.
Using deploy keys, you can do:
- uses: actions/checkout@v2
  with:
    ssh-key: ${{ secrets.SSH_PRIVATE_KEY }}
    repository: organization_name/repo_name
For this to work you need to:
generate SSH keys locally
add the public key as a deploy key to the private repo
add the private key as a secret named SSH_PRIVATE_KEY
Either use an SSH key with no passphrase to access repo B, or create an access token for that repo and then use the access token as your password to access the repo over HTTPS: https://USERNAME:TOKEN@github.com/ORG/B.git.
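For illustration (the helper name is mine, not part of pip or git), building an HTTPS clone URL that embeds credentials looks like:

```python
def authed_clone_url(org: str, repo: str, user: str, token: str) -> str:
    """Build an HTTPS clone URL with an embedded username and access token."""
    return f"https://{user}:{token}@github.com/{org}/{repo}.git"

print(authed_clone_url("ORG", "B", "USERNAME", "TOKEN"))
# -> https://USERNAME:TOKEN@github.com/ORG/B.git
```

Keep in mind that such a URL carries the token in plain text, so it should only ever be assembled from a secret, never committed.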
Instead of checking out twice, all you need is to provide the TOKEN for pip to access repo B.
- name: Install requirements
  run: |
    git config --global url."https://${{ secrets.ACCESS_TOKEN }}@github".insteadOf https://github
    pip install -r requirements.txt
I added this line:
git+https://YOUR_TOKEN_HERE@github.com/ORG/REPO_NAME.git@master#egg=REPO_NAME
to my requirements.txt and it worked. But as others mentioned, your token will be exposed to anyone with access to this repository. It is probably best to use a secret in your repository.