I am using CircleCI to automatically deploy my Python package to PyPI when a new tag is pushed to GitHub. I've been following this tutorial.
My workflow is failing with the error: home/circleci/project/twitter_sentiment/bin/python: No module named twine
I've tried to make sure twine was installed prior to calling the twine command, and I've also tried to invoke it with python -m twine. From my understanding, twine is not being added to the container's path, which causes the "No module named twine" error.
How would I go about solving this error?
My config.yml file:
version: 2
jobs:
  build:
    docker:
      - image: circleci/python:3.6
    working_directory: ~/twitter-sentiment
    steps:
      - checkout
      - run:
          name: install dependencies
          command: |
            python3 -m venv twitter_sentiment
            . twitter_sentiment/bin/activate
            pip install -r requirements.txt
      - save_cache:
          paths:
            - ./twitter_sentiment
          key: v1-dependencies-{{ checksum "setup.py" }}
  runLibraryTest:
    docker:
      - image: circleci/python:3.6
    steps:
      - checkout
      - restore_cache:
          keys:
            - v1-dependencies-{{ checksum "setup.py" }}
      - run:
          name: run twitter_sentiment tests
          command: |
            python3 -m venv twitter_sentiment
            . twitter_sentiment/bin/activate
            pip install -r requirements.txt
            cd test/
            python3 -m unittest test_twitterSentiment
      - save_cache:
          paths:
            - ./twitter_sentiment
          key: v1-dependencies-{{ checksum "setup.py" }}
      - store_artifacts:
          path: test-reports
          destination: test-reports
  deploy:
    docker:
      - image: circleci/python:3.6
    steps:
      - checkout
      - restore_cache:
          key: v1-dependencies-{{ checksum "setup.py" }}
      - run:
          name: verify git tag vs version
          command: |
            python3 -m venv twitter_sentiment
            . twitter_sentiment/bin/activate
            pip install -r requirements.txt
            python3 setup.py verify
      - run:
          name: create package files
          command: |
            python3 setup.py sdist bdist_wheel
      - run:
          name: create .pypirc file
          command: |
            echo -e "[pypi]" >> .pypirc
            echo -e "repository = https://pypi.org/legacy/"
            echo -e "username = TeddyCr" >> .pypirc
            echo -e "password = $PYPI_PASSWORD" >> .pypirc
      - run:
          name: upload package to pypi server
          command: |
            python3 -m venv twitter_sentiment
            . twitter_sentiment/bin/activate
            pip install --user --upgrade twine
            python -m twine upload dist/*
      - save_cache:
          paths:
            - ./twitter_sentiment
          key: v1-dependencies-{{ checksum "setup.py" }}
workflows:
  version: 2
  build_and_deploy:
    jobs:
      - build:
          filters:
            tags:
              only: /.*/
      - runLibraryTest:
          requires:
            - build
          filters:
            tags:
              only: /[0-9]+(\.[0-9]+)*/
            branches:
              ignore: /.*/
      - deploy:
          requires:
            - runLibraryTest
          filters:
            tags:
              only: /[0-9]+(\.[0-9]+)*/
            branches:
              ignore: /.*/
You're creating a virtual environment, activating it, and then installing twine outside of it: pip install --user puts packages under ~/.local, which the venv's interpreter does not see, so python -m twine cannot find the module.
Remove --user from pip install --user --upgrade twine.
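A minimal sketch of the corrected upload step, keeping the venv layout and names from the config above:
- run:
    name: upload package to pypi server
    command: |
      python3 -m venv twitter_sentiment
      . twitter_sentiment/bin/activate
      pip install --upgrade twine   # no --user: installs into the active venv
      python -m twine upload dist/*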
Here is my GitHub Actions workflow file:
jobs:
  build:
    runs-on: windows-latest
    environment: Main
    env:
      MAINAPI: ${{ secrets.MAINAPI }}
      TESTAPI: ${{ secrets.TESTAPI }}
      BRAINID: ${{ secrets.BRAINID }}
      BRAINKEY: ${{ secrets.BRAINKEY }}
    steps:
      - uses: actions/checkout@v3
      - name: Set up Python 3.10
        uses: actions/setup-python@v3
        with:
          python-version: "3.10"
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install flake8 pytest
          python -m pip install PyAudio-0.2.11-cp311-cp311-win_amd64.whl
          pip install -r requirements.txt --no-deps
      - name: Lint with flake8
        run: |
          # stop the build if there are Python syntax errors or undefined names
          flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
          # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
          flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
      - name: Test with pytest
        run: |
          pytest
        env:
          MAINAPI: ${{ secrets.MAINAPI }}
          TESTAPI: ${{ secrets.TESTAPI }}
          BRAINID: ${{ secrets.BRAINID }}
          BRAINKEY: ${{ secrets.BRAINKEY }}
Here is my code:
import os

mainapi = str(os.environ["MAINAPI"])
apiurl = str(os.environ["TESTAPI"])
I have set the secrets as environment secrets, as repository secrets, and even as an environment. However, none of them seems to work. Please help.
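One hedged way to narrow this down is a defensive lookup that reports which variable is missing at runtime (require_env is a hypothetical helper name; the error text is illustrative):
import os

def require_env(name: str) -> str:
    # Fail with a readable message instead of a bare KeyError
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"{name} is not set; check that the job declares it "
                           f"under env: and that the 'Main' environment holds the secret")
    return value

mainapi = require_env("MAINAPI")
apiurl = require_env("TESTAPI")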
Does anyone know how I can upload a Python package through Azure DevOps/Pipelines to the artifact feed, without having to open the device login page, and enter in the auth code each time?
Currently, my pipeline runs fine where it builds the Python package, runs through the pipeline, and uploads in to the Artifacts feed.
The only problem is that every time, I have to monitor the "Upload Package" step, click on "https://microsoft.com/devicelogin", and enter the code to authenticate before the package uploads.
Is there an automated way to do this?
Here is my .yml file below, thank you for your help!
trigger:
- master
- pipeline*
parameters:
- name: path
  type: string
  default: 'dist/*.whl'
pool:
  vmImage: ubuntu-latest
steps:
- task: UsePythonVersion@0
  inputs:
    versionSpec: '3.10'
  displayName: 'Use Python 3.10'
- script: |
    python -m pip install --upgrade pip
    pip install -r requirements.txt
    pip install twine keyring artifacts-keyring
    python -m pip install --upgrade build setuptools twine
  displayName: 'Install dependencies'
- script: |
    python -m build
  displayName: 'Build Python Package'
- task: TwineAuthenticate@1
  inputs:
    artifactFeed: 'MyApp/myapp-packages'
  displayName: 'Authenticate Twine'
- script: |
    python -m twine upload -r insite-packages --repository-url https://pkgs.dev.azure.com/kngwin/MyApp/_packaging/myapp-packages/pypi/upload/ --config-file $(PYPIRC_PATH) dist/*
  displayName: 'Upload Package'
EDIT 1:
Following Kim's answer below, I tried both methods: creating a .pypirc file in my home directory, and adding the token to the URL. I am still receiving a request for user interaction, to open the device login page and enter the code.
trigger:
- master
- pipeline*
parameters:
- name: path
  type: string
  default: 'dist/*.whl'
pool:
  vmImage: ubuntu-latest
steps:
- task: UsePythonVersion@0
  inputs:
    versionSpec: '3.10'
  displayName: 'Use Python 3.10'
- script: |
    python -m pip install --upgrade pip
    pip install -r requirements.txt
    pip install twine keyring artifacts-keyring
    pip install wheel
    pip install twine
    python -m pip install --upgrade build setuptools twine
  displayName: 'Install dependencies'
- script: |
    python -m build
  displayName: 'Build Python Package'
- task: TwineAuthenticate@1
  inputs:
    artifactFeed: 'MyApp/myapp-packages'
  displayName: 'Authenticate Twine'
- script: |
    echo $(PYPIRC_PATH)
    python -m twine upload -r myapp-packages --repository-url https://myapp-packages:$(System.AccessToken)@pkgs.dev.azure.com/kngwin/MyApp/_packaging/myapp-packages/pypi/upload/ --config-file $(PYPIRC_PATH) dist/*
  displayName: 'Upload Package'
You could create a .pypirc file in your home directory to store your token for authentication when uploading: https://learn.microsoft.com/en-us/azure/devops/pipelines/artifacts/pypi?view=azure-devops&tabs=yaml#authenticate-with-azure-artifacts
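As a sketch, a .pypirc for the feed in the question might look like this (organization and feed names taken from the YAML above; with a personal access token the username can be any non-empty string, "build" here is arbitrary):
[distutils]
index-servers = myapp-packages

[myapp-packages]
repository = https://pkgs.dev.azure.com/kngwin/MyApp/_packaging/myapp-packages/pypi/upload/
username = build
password = <personal access token>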
Or, define the Authentication token directly into the package URL:
python -m twine upload -r insite-packages --repository-url https://<your-feed-name>:$(System.AccessToken)@pkgs.dev.azure.com/kngwin/MyApp/_packaging/myapp-packages/pypi/upload/ --config-file $(PYPIRC_PATH) dist/*
Make sure your build service account has "Contributor" permission in the target Feed.
MY SOLUTION:
I was able to get it working by doing the following:
I removed the --repository-url flag and ran the upload like so, and it worked (presumably because the .pypirc that TwineAuthenticate generates at $(PYPIRC_PATH) already contains the feed's upload URL and credentials, so -r myapp-packages resolves everything from the config file):
- script: |
    python -m twine upload --verbose -r myapp-packages --config-file $(PYPIRC_PATH) dist/*
  displayName: 'Upload Package'
The following snapshot shows the file structure:
When I run on GitLab CI, here is what I am seeing:
Why is this error occurring when GitLab runs it but not when I run locally?
Here is my .gitlab-ci.yml file.
Note that this had been working before.
I recently made win_perf_counters a Git submodule instead of an actual subdirectory. (Again, it works locally.)
test:
  before_script:
    - python -V
    - pip install virtualenv
    - virtualenv venv
    - .\venv\Scripts\activate.ps1
    - refreshenv
  script:
    - python -V
    - echo "******* installing pip ***********"
    - python -m pip install --upgrade pip
    - echo "******* installing locust ********"
    - python -m pip install locust
    - locust -V
    - python -m pip install multipledispatch
    - python -m pip install pycryptodome
    - python -m pip install pandas
    - python -m pip install wmi
    - python -m pip install pywin32
    - python -m pip install influxdb_client
    - set LOAD_TEST_CONF=load_test.conf
    - echo "**** about to run locust ******"
    - locust -f ./src/main.py --host $TARGET_HOST -u $USERS -t $TEST_DURATION -r $RAMPUP -s 1800 --headless --csv=./LoadTestsData_VPOS --csv-full-history --html=./LoadTestsReport_VPOS.html --stream-file ./data/stream_jsons/streams_vpos.json --database=csv
    - Start-Sleep -s $SLEEP_TIME
  variables:
    LOAD_TEST_CONF: load_test.conf
    PYTHON_VERSION: 3.8.0
    TARGET_HOST: http://10.10.10.184:9000
  tags:
    - win2019
  artifacts:
    paths:
      - ./LoadTests*
      - public
  only:
    - schedules
  after_script:
    - ls src -r
    - mkdir .public
    - cp -r ./LoadTests* .public
    - cp metrics.csv .public -ErrorAction SilentlyContinue
    - mv .public public
When I tried changing the GitLab CI file to use requirements.txt:
Probably the Python libraries you are using in your local environment are not the same ones you are using in GitLab. Run pip list or pip freeze on your local machine and see which versions you have there, then pip install those versions in your GitLab script. A good practice is to have a requirements.txt or a setup.py file with specific versions rather than pulling the latest versions every time.
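For example, a quick way to pin what works locally (run on the local machine, then commit the file):
pip freeze > requirements.txt    # capture the exact versions from the working local environment
git add requirements.txt
git commit -m "Pin dependency versions for CI"
The long list of python -m pip install ... lines in the script can then be replaced with a single python -m pip install -r requirements.txt.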
Probably the module you are developing doesn't have an __init__.py file, and thus it cannot be found when imported from outside.
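For the layout in the question, that would mean something like this (the directory names are assumptions based on the locust command above; the file itself can be empty):
src/
  main.py
  win_perf_counters/
    __init__.py    # must exist for the directory to be importable as a package
Since win_perf_counters is now a Git submodule, the __init__.py has to be committed inside the submodule's own repository.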
What I tried:
placing pip install --user -r requirements.txt in the second run command
placing pip install pytest in the second run command, along with pip install pytest-html
both followed by
pytest --html=pytest_report.html
I am new to CircleCI and to pytest as well.
Here is the steps portion of the config.yml file:
version: 2.1
jobs:
  run_tests:
    docker:
      - image: circleci/python:3.9.1
    steps:
      - checkout
      - run:
          name: Install Python Dependencies
          command: |
            echo 'export PATH=~$PATH:~/.local/bin' >> $BASH_ENV && source $BASH_ENV
            pip install --user -r requirements.txt
      - run:
          name: Run Unit Tests
          command: |
            pip install --user -r requirements.txt
            pytest --html=pytest_report.html
      - store_test_results:
          path: test-reports
      - store_artifacts:
          path: test-reports
workflows:
  build_test:
    jobs:
      - run_tests
--html is not one of the builtin options for pytest -- it likely comes from a plugin.
I believe you're looking for pytest-html -- make sure that's listed in your requirements.
It's also possible / likely that the pip install --user is installing another copy of pytest into the image, which will only be available at ~/.local/bin/pytest instead of whatever pytest comes with the CircleCI image.
(Disclaimer: I'm one of the pytest core devs.)
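A minimal sketch of that fix, assuming pytest-html is the only missing piece: list both packages in requirements.txt, and invoke pytest through the interpreter so the run uses the same copy that received the --user install:
# requirements.txt
pytest
pytest-html

# in the Run Unit Tests command
python -m pytest --html=pytest_report.html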
I am having an issue when deploying my application: my tests are not running. It's a simple script, but CodeBuild is still bypassing my tests.
I have specified unittest and placed the path to my unittest-buildspec in the console.
My app looks like:
-Chalice
--.chalice
-- BuildSpec
---- build.sh
---- unittest-buildspec.yml
-- Tests
---- test_app.py
---- test-database.py
-- app.py
version: 0.2
phases:
install:
  runtime-versions:
    python: 3.7
  commands:
    - pip install -r requirements_test.txt
build:
  commands:
    - echo Build started on `date` ---
    - pip install -r requirements_test.txt
    - ./build.sh
    - pytest --pep8 --flakes
artifacts:
  files:
    - '**/*'
  base-directory: 'my-build*'
  discard-paths: yes
My build.sh is in the same folder as well
#!/bin/bash
pip install --upgrade awscli
aws --version
cd ..
pip install virtualenv
virtualenv /tmp/venv
. /tmp/venv/bin/activate
export PYTHONPATH=.
py.test tests/ || exit 1
There are a few issues in the buildspec you shared:
The 'install' and 'build' phases' indentation is incorrect; they should come under 'phases'.
Set '+x' on build.sh before running it.
Fixed buildspec.yml:
version: 0.2
phases:
install:
runtime-versions:
python: 3.7
commands:
- pip install -r requirements_test.txt
build:
commands:
- echo Build started on `date` ---
- pip install -r requirements_test.txt
- chmod +x ./build.sh
- ./build.sh
- pytest --pep8 --flakes
artifacts:
files:
- '**/*'
base-directory: 'my-build*'
discard-paths: yes
Also, please note that your build.sh uses the "/bin/bash" interpreter. While the script will work, the shell is not technically bash, so any bash-specific functionality will not work. CodeBuild's shell is mysterious: it will run the usual scripts, it is just not bash.
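If bash-specific behavior is ever needed, a sketch of one workaround is to invoke the interpreter explicitly in the buildspec instead of relying on the shebang:
build:
  commands:
    - chmod +x ./build.sh
    - bash ./build.sh    # runs under bash regardless of CodeBuild's default shell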