The following snapshot shows the file structure:
When I run it on GitLab CI, here is what I am seeing:
Why does this error occur when GitLab runs it but not when I run it locally?
Here is my .gitlab-ci.yml file.
Note that this had been working before.
I recently made win_perf_counters a Git submodule instead of an actual subdirectory. (Again, it works locally.)
test:
  before_script:
    - python -V
    - pip install virtualenv
    - virtualenv venv
    - .\venv\Scripts\activate.ps1
    - refreshenv
  script:
    - python -V
    - echo "******* installing pip ***********"
    - python -m pip install --upgrade pip
    - echo "******* installing locust ********"
    - python -m pip install locust
    - locust -V
    - python -m pip install multipledispatch
    - python -m pip install pycryptodome
    - python -m pip install pandas
    - python -m pip install wmi
    - python -m pip install pywin32
    - python -m pip install influxdb_client
    - set LOAD_TEST_CONF=load_test.conf
    - echo "**** about to run locust ******"
    - locust -f ./src/main.py --host $TARGET_HOST -u $USERS -t $TEST_DURATION -r $RAMPUP -s 1800 --headless --csv=./LoadTestsData_VPOS --csv-full-history --html=./LoadTestsReport_VPOS.html --stream-file ./data/stream_jsons/streams_vpos.json --database=csv
    - Start-Sleep -s $SLEEP_TIME
  variables:
    LOAD_TEST_CONF: load_test.conf
    PYTHON_VERSION: 3.8.0
    TARGET_HOST: http://10.10.10.184:9000
  tags:
    - win2019
  artifacts:
    paths:
      - ./LoadTests*
      - public
  only:
    - schedules
  after_script:
    - ls src -r
    - mkdir .public
    - cp -r ./LoadTests* .public
    - cp metrics.csv .public -ErrorAction SilentlyContinue
    - mv .public public
When I tried changing the GitLab CI file to use requirements.txt:
Probably the Python libraries you are using in your local environment are not the same ones you are using in GitLab. Run pip list or pip freeze on your local machine to see which versions you have there, then pip install those versions in your GitLab script. A good practice is to have a requirements.txt or a setup.py file with specific, pinned versions rather than pulling the latest versions every time.
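For example, the long list of pip installs in the job above could be replaced by one pinned requirements.txt (a minimal sketch; the version numbers below are placeholders, substitute whatever pip freeze reports on the machine where it works):
# requirements.txt -- pin exactly what `pip freeze` reports locally
# (these version numbers are placeholders, not recommendations)
locust==2.15.1
multipledispatch==0.6.0
pycryptodome==3.18.0
pandas==2.0.3
wmi==1.5.1
pywin32==306
influxdb_client==1.36.1
and then in the job:
  script:
    - python -m pip install -r requirements.txt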
Probably the module you are developing doesn't have an __init__.py file, and thus it cannot be found when imported from outside.
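A separate thing worth checking, given that win_perf_counters was recently converted to a Git submodule: GitLab CI does not fetch submodules by default, so the directory can exist on the runner but be empty, which would also produce an import error that never happens locally. A minimal sketch of the fix, which can go in the job's existing variables block:
variables:
  # tell the GitLab runner to initialize and update Git submodules before the job runs
  GIT_SUBMODULE_STRATEGY: recursive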
I have an app that I would like to deploy to AWS Lambda and for this reason it has to have Python 3.9.
I have the following in the pyproject.toml:
name = "app"
readme = "README.md"
requires-python = "<=3.9"
version = "0.5.4"
If I try to pip install all the dependencies I get the following error:
ERROR: Package 'app' requires a different Python: 3.11.1 not in '<=3.9'
Is there a way to specify the Python version for this module?
I see there is a lot of confusion about this. I simply want to specify 3.9 "globally" for my build, so that it is used when I build the layer for the Lambda with the following command:
pip install . -t python/
Right now only Python 3.11 binaries get packaged. For example:
❯ ls -larth python/ | grep sip
siphash24.cpython-311-darwin.so
When I try to use the layer created this way it fails to load the required library.
There are multiple ways of solving this.
Option 1 (using pip's built-in facilities to restrict the Python version):
pip install . \
--python-version "3.9" \
--platform "manylinux2010" \
--only-binary=:all: -t python/
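Note that pip refuses to combine --python-version or --platform with building from source, which is why --only-binary=:all: is set here; it also means every dependency must be available as a prebuilt wheel for the requested platform.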
Option 2 (using Docker):
# the 3.9 base image guarantees dependencies are resolved for Python 3.9,
# no matter which Python the host machine runs
FROM python:3.9.16-bullseye
RUN useradd -m -u 5000 app || :
RUN mkdir -p /opt/app
RUN chown app /opt/app
USER app
WORKDIR /opt/app
RUN python -m venv venv
ENV PATH="/opt/app/venv/bin:$PATH"
RUN pip install pip --upgrade
# a stub package so `pip install .` works with just pyproject.toml and README.md
RUN mkdir app
RUN touch app/__init__.py
COPY pyproject.toml README.md ./
# install the project and its dependencies into python/, the Lambda layer layout
RUN pip install . -t python/
This way there is no chance of creating a layer for AWS Lambda that targets anything newer than Python 3.9.
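To get the python/ directory out of the image and into a layer zip, something like the following works (a sketch; the image and container names here are arbitrary):
docker build -t lambda-layer-39 .
# create a stopped container just so its filesystem can be copied from
docker create --name layer-tmp lambda-layer-39
docker cp layer-tmp:/opt/app/python ./python
docker rm layer-tmp
# Lambda layers expect a zip with a top-level python/ directory
zip -r layer.zip python/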
What I tried:
placing 'pip install --user -r requirements.txt' in the second run command
placing 'pip install pytest' in the second run command along with 'pip install pytest-html'
both followed by:
pytest --html=pytest_report.html
I am new to CircleCI, and to pytest as well.
Here is the steps portion of the config.yml file:
version: 2.1
jobs:
  run_tests:
    docker:
      - image: circleci/python:3.9.1
    steps:
      - checkout
      - run:
          name: Install Python Dependencies
          command: |
            echo 'export PATH=~$PATH:~/.local/bin' >> $BASH_ENV && source $BASH_ENV
            pip install --user -r requirements.txt
      - run:
          name: Run Unit Tests
          command: |
            pip install --user -r requirements.txt
            pytest --html=pytest_report.html
      - store_test_results:
          path: test-reports
      - store_artifacts:
          path: test-reports
workflows:
  build_test:
    jobs:
      - run_tests
--html is not one of the built-in options for pytest -- it likely comes from a plugin.
I believe you're looking for pytest-html -- make sure that's listed in your requirements.
It's also possible / likely that the pip install --user is installing another copy of pytest into the image, which will only be available at ~/.local/bin/pytest instead of whatever pytest comes with the CircleCI image.
(Disclaimer: I'm one of the pytest core devs.)
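Putting that together, a sketch of the test step that sidesteps the PATH problem entirely, assuming pytest and pytest-html are both listed in requirements.txt:
      - run:
          name: Run Unit Tests
          command: |
            pip install --user -r requirements.txt
            # run pytest via the interpreter so the --user copy is found without PATH tweaks
            python -m pytest --html=pytest_report.html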
I am having an issue when deploying my application: my tests are not running. It's a simple script, but CodeBuild is still bypassing my tests.
I have specified unittest and placed the path to my unittest-buildspec in the console.
My app looks like:
-Chalice
--.chalice
-- BuildSpec
---- build.sh
---- unittest-buildspec.yml
-- Tests
---- test_app.py
---- test-database.py
-- app.py
version: 0.2
phases:
install:
  runtime-versions:
    python: 3.7
  commands:
    - pip install -r requirements_test.txt
build:
  commands:
    - echo Build started on `date` ---
    - pip install -r requirements_test.txt
    - ./build.sh
    - pytest --pep8 --flakes
artifacts:
  files:
    - '**/*'
  base-directory: 'my-build*'
  discard-paths: yes
My build.sh is in the same folder as well
#!/bin/bash
pip install --upgrade awscli
aws --version
cd ..
pip install virtualenv
virtualenv /tmp/venv
. /tmp/venv/bin/activate
export PYTHONPATH=.
py.test tests/ || exit 1
There are a few issues in the buildspec you shared:
The 'install' and 'build' phases' indentation is incorrect; they should come under 'phases'.
Set '+x' on build.sh before running it.
Fixed buildspec.yml:
version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.7
    commands:
      - pip install -r requirements_test.txt
  build:
    commands:
      - echo Build started on `date` ---
      - pip install -r requirements_test.txt
      - chmod +x ./build.sh
      - ./build.sh
      - pytest --pep8 --flakes
artifacts:
  files:
    - '**/*'
  base-directory: 'my-build*'
  discard-paths: yes
Also note that your build.sh declares the '/bin/bash' interpreter. The script will still work, but CodeBuild's shell is not technically bash, so any bash-specific functionality will not work. CodeBuild's shell is somewhat opaque: it runs the usual scripts, it is just not bash.
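If bash-specific behaviour is ever needed, one option is to invoke the interpreter explicitly in the buildspec instead of relying on the shebang, e.g.:
  build:
    commands:
      # run the script under bash explicitly rather than CodeBuild's default shell
      - bash ./build.sh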
I have a Django application based on a docker-compose file.
Somehow Travis auto-installs packages from requirements.txt in the project repo, and it's failing my build because I don't have the gcc package.
I want to run all actions (tests, linters) in the Docker container, not directly in the project repo.
Here is my travis-ci.yml file:
---
dist: xenial
services:
  - docker
language: python
python:
  - "3.7"
script:
  - docker compose up --build
  - docker exec web flake8
  - docker exec web mypy my_project
  - docker exec web safety check -r requirements.txt
  - docker exec web python -m pytest --cov my_project -vvv -s
And the beginning of the Travis log:
$ git checkout -qf bab09dee57a707a5cd0a353d6f50bb66fd90a095
0.01s$ source ~/virtualenv/python3.7/bin/activate
$ python --version
Python 3.7.1
$ pip --version
pip 19.0.3 from /home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/pip (python 3.7)
$ pip install -r requirements.txt
...
py_GeoIP.c:23:19: fatal error: GeoIP.h: No such file or directory
compilation terminated.
error: command 'gcc' failed with exit status 1
...
Do you have any idea why Travis behaves like this?
According to https://docs.travis-ci.com/user/languages/python/#dependency-management, Travis CI automatically installs requirements.txt dependencies. To omit this behaviour I had to add the following line to travis.yml to override it:
install: pip --version
You can also use: install: skip
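With the config from the question, the relevant part of the file becomes (a sketch):
language: python
python:
  - "3.7"
# override the default dependency step so Travis does not run
# `pip install -r requirements.txt` on the host
install: skip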
On my project, I have issues with my test scripts on Travis CI. I installed the packages I needed using conda and then ran my test scripts. The build fails because of import errors. How can I fix this?
My Travis YAML file is as follows:
language: python
python:
  - "2.7"
  - "3.4"
  - "nightly"
# command to install dependencies
# setup anaconda
before_install:
  - wget http://repo.continuum.io/miniconda/Miniconda-latest-Linux-x86_64.sh -O miniconda.sh
  - chmod +x miniconda.sh
  - bash miniconda.sh -b -p $HOME/miniconda
  - export PATH="$HOME/miniconda/bin:$PATH"
  - conda update -y conda
# install packages
install:
  - conda install -y numpy scipy matplotlib networkx biopython
  - echo $PATH
  - which python
# command to run tests
script: py.test
One of my test scripts requires BioPython, and the test script fails because it cannot find biopython.
__________________ ERROR collecting test_genotype_network.py ___________________
test_genotype_network.py:1: in <module>
import genotype_network as gn
genotype_network.py:1: in <module>
from Bio import SeqIO
E ImportError: No module named 'Bio'
Is there a way to fix this?
It turns out the problem was py.test: I didn't install it in the conda environment. Adding pytest to the packages installed when creating the environment made those import errors go away.
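In other words, the install section just needs pytest added alongside the other conda packages:
install:
  # installing pytest here ensures `py.test` runs from the conda environment,
  # where numpy, scipy, biopython, etc. are actually installed
  - conda install -y numpy scipy matplotlib networkx biopython pytest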