I am trying to run pytest in Jenkins.
When I try to install pytest in the build step in Jenkins, it says pip command not found. I even tried setting up a virtual env, but with no success.
Note: I am running Jenkins in a Docker container.
First, the plain build step I tried:
#!/bin/bash
cd /usr/bin
pip install pytest
py.test test_11.py
Then, the attempt with a virtual env:
#!/bin/bash
source env1/bin/activate
pip install pytest
py.test test_11.py
Dockerfile
FROM jenkins
USER root
Errors:
Started by user admin
Running as SYSTEM
Building on master in workspace /var/jenkins_home/workspace/pyproject
[pyproject] $ /bin/bash /tmp/jenkins5312265766264018610.sh
/tmp/jenkins5312265766264018610.sh: line 4: pip: command not found
Build step 'Execute shell' marked build as failure
Finished: FAILURE
Started by user admin
Running as SYSTEM
Building on master in workspace /var/jenkins_home/workspace/pyproject
[pyproject] $ /bin/bash /tmp/jenkins6002566555689593419.sh
/tmp/jenkins6002566555689593419.sh: line 4: pip: command not found
/tmp/jenkins6002566555689593419.sh: line 5: py.test: command not found
Build step 'Execute shell' marked build as failure
Finished: FAILURE
Well, the error is crystal clear: pip is not installed in the running environment.
I did some digging myself and found out that the jenkins image only has Python 2.7 installed, and pip is not installed.
I would start by installing pip first and continue from there, so modify the Dockerfile to:
FROM jenkins
USER root
RUN apt-get update && apt-get install -y python-pip && rm -rf /var/lib/apt/lists/*
Hope this helps you find your way.
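If it helps, a rough sketch of rebuilding and restarting Jenkins from that image (the image tag and volume name below are just placeholders, not from the question):
# rebuild the image (the tag "myjenkins" is just an example) and run Jenkins from it
docker build -t myjenkins .
docker run -d -p 8080:8080 -p 50000:50000 -v jenkins_home:/var/jenkins_home myjenkins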
More helpful information would be:
your Jenkins pipeline script (at least up to the 'Execute shell' step)
the Python version you intend to use
how and where you run the virtual-env creation command
I am new to CircleCI and to pytest as well.
What I tried:
placing 'pip install --user -r requirements.txt' in the second run command
placing 'pip install pytest' in the second run command along with 'pip install pytest-html'
both followed by
pytest --html=pytest_report.html
Here is the steps portion of the config.yml file:
version: 2.1
jobs:
  run_tests:
    docker:
      - image: circleci/python:3.9.1
    steps:
      - checkout
      - run:
          name: Install Python Dependencies
          command: |
            echo 'export PATH=~$PATH:~/.local/bin' >> $BASH_ENV && source $BASH_ENV
            pip install --user -r requirements.txt
      - run:
          name: Run Unit Tests
          command: |
            pip install --user -r requirements.txt
            pytest --html=pytest_report.html
      - store_test_results:
          path: test-reports
      - store_artifacts:
          path: test-reports
workflows:
  build_test:
    jobs:
      - run_tests
--html is not one of the built-in options for pytest -- it likely comes from a plugin.
I believe you're looking for pytest-html -- make sure that's listed in your requirements.
It's also possible / likely that the pip install --user is installing another copy of pytest into the image, which will only be available at ~/.local/bin/pytest instead of whatever pytest comes with the CircleCI image.
Disclaimer: I'm one of the pytest core devs.
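For what it's worth, a sketch of what that could look like (the only name reused from the question is the report filename): requirements.txt would list both packages,
# requirements.txt -- sketch, pin versions however you prefer
pytest
pytest-html
and the test step can sidestep PATH surprises by invoking pytest through the interpreter:
python -m pytest --html=pytest_report.html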
Problem
I am trying to install Python packages that have C dependencies on AWS Elastic Beanstalk (namely fbprophet and xgboost).
By default, Elastic Beanstalk installs packages from requirements.txt with pip or pipenv on Amazon Linux 2.
However, fbprophet and xgboost have C dependencies that need to be compiled before they can be installed with pip. conda ships with these libraries precompiled, so they are a lot easier to install with conda.
What I have tried
Here is my attempt at installing them with conda, using a .config file in the .ebextensions folder:
commands:
  00_download_conda:
    command: 'wget http://repo.continuum.io/archive/Anaconda3-2020.02-Linux-x86_64.sh'
    test: test ! -d /anaconda
  01_install_conda:
    command: 'bash Anaconda3-2020.02-Linux-x86_64.sh -b -f -p /anaconda'
    test: test ! -d /anaconda
  02_reload_bash:
    command: 'source ~/.bashrc'
  03_create_home:
    command: 'mkdir -p /home/wsgi'
  04_conda_env:
    command: 'conda env create -f environment.yml'
  05_activate_env:
    command: 'conda activate demo_forecast'
However, this does not work and throws this error:
[2020-04-21T18:18:22.285Z] INFO [3699] - [Application update app-8acc-200421_201612#4/AppDeployStage0/EbExtensionPreBuild/Infra-EmbeddedPreBuild/prebuild_0_test_empty_dash/Command 03_conda_env] : Activity execution failed, because: /bin/sh: conda: command not found
(ElasticBeanstalk::ExternalInvocationError)
[2020-04-21T18:18:22.285Z] INFO [3699] - [Application update app-8acc-200421_201612#4/AppDeployStage0/EbExtensionPreBuild/Infra-EmbeddedPreBuild/prebuild_0_test_empty_dash/Command 03_conda_env] : Activity failed.
So it seems that sourcing the .bashrc does not make the conda command available.
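To illustrate what I mean (a sketch only, reusing the /anaconda prefix from the install command above, not something I have verified): I suspect the later commands would need the absolute path to the conda binary, e.g.
  04_conda_env:
    command: '/anaconda/bin/conda env create -f environment.yml'  # absolute path instead of relying on ~/.bashrc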
I am aware of this question and its answer; however, it is a little bit old and does not provide enough detail for my case, because it does not go through with installing packages using conda.
Another way would be to try to install and compile the C dependencies before pip-installing the requirements, but I have had no success with that so far.
Thank you for the help!
I am working on a project which uses pyspark, and would like to set up automated tests.
Here's what my .gitlab-ci.yml file looks like:
image: "myimage:latest"
stages:
- Tests
pytest:
stage: Tests
script:
- pytest tests/.
I built the docker image myimage using a Dockerfile such as the following (see this excellent answer):
FROM python:3.7
RUN python --version
# Create app directory
WORKDIR /app
# copy requirements.txt
COPY local-src/requirements.txt ./
# Install app dependencies
RUN pip install -r requirements.txt
# Bundle app source
COPY src /app
However, when I run this, the GitLab CI job fails with the following:
/usr/local/lib/python3.7/site-packages/pyspark/java_gateway.py:95: in launch_gateway
raise Exception("Java gateway process exited before sending the driver its port number")
E Exception: Java gateway process exited before sending the driver its port number
------------------------------- Captured stderr --------------------------------
JAVA_HOME is not set
I understand that pyspark requires Java 8 or higher to be installed on my computer. I have this set up locally, but... what about during the CI process? How can I install Java so it works?
I have tried adding
RUN sudo add-apt-repository ppa:webupd8team/java
RUN sudo apt-get update
RUN apt-get install oracle-java8-installer
to the Dockerfile which created the image, but got the error:
/bin/sh: 1: sudo: not found
How can I modify the Dockerfile so that tests using pyspark will work?
Solution that worked for me: add
RUN apt-get update
RUN apt-get install default-jdk -y
before
RUN pip install -r requirements.txt
It then all worked as expected with no further modifications needed!
EDIT
To make this work, I've had to update my base image to python:3.7-stretch
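Putting it together, the Dockerfile from the question would end up roughly like this (a sketch only, using the same paths as in the question):
FROM python:3.7-stretch
RUN python --version
# Create app directory
WORKDIR /app
# copy requirements.txt
COPY local-src/requirements.txt ./
# Install a JDK so pyspark can find a Java runtime
RUN apt-get update
RUN apt-get install default-jdk -y
# Install app dependencies
RUN pip install -r requirements.txt
# Bundle app source
COPY src /app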
Write in your .bash_profile:
export JAVA_HOME=(the home directory of your JDK, e.g. /Library/Java/JavaVirtualMachines/[yourjdk]/Contents/Home)
I have a Django application based on a docker-compose file.
Somehow Travis auto-installs packages from requirements.txt in the project repo, and it's failing my build because I don't have the gcc package.
I want to run all actions (tests, linters) in the Docker container, not directly in the project repo.
Here is my .travis.yml file:
---
dist: xenial
services:
  - docker
language: python
python:
  - "3.7"
script:
  - docker compose up --build
  - docker exec web flake8
  - docker exec web mypy my_project
  - docker exec web safety check -r requirements.txt
  - docker exec web python -m pytest --cov my_project -vvv -s
And the beginning of the Travis log:
$ git checkout -qf bab09dee57a707a5cd0a353d6f50bb66fd90a095
0.01s$ source ~/virtualenv/python3.7/bin/activate
$ python --version
Python 3.7.1
$ pip --version
pip 19.0.3 from /home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/pip (python 3.7)
$ pip install -r requirements.txt
...
py_GeoIP.c:23:19: fatal error: GeoIP.h: No such file or directory
compilation terminated.
error: command 'gcc' failed with exit status 1
...
Do you have any idea why Travis behaves like this?
According to https://docs.travis-ci.com/user/languages/python/#dependency-management, Travis CI automatically installs the requirements.txt dependencies.
To omit this behaviour, I had to add the following line to .travis.yml to override it:
install: pip --version
You can also use: install: skip
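For reference, a sketch of how the override sits in the file (only the install line is new relative to the config in the question):
language: python
python:
  - "3.7"
install: skip   # or: install: pip --version
script:
  - docker compose up --build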
I'm trying to automate the deployment of my Python-Flask app on Ubuntu 18.04 using Bash, by going through the motions of preparing all the necessary files/directories, cloning the source code from GitHub, creating the virtual environment, installing the prerequisite modules, and so on.
Now, because I have to execute my Bash script using sudo, the entire script is executed as root except where I specify otherwise using sudo -u myuser. When it comes to activating my virtual environment, I get the following output: sudo: source: command not found, and my subsequent pip installs all end up outside of the virtual environment. Excerpts of my code below:
#!/bin/bash
...
sudo -u "$user" python3 -m venv .env
sudo -u $SUDO_USER source /srv/www/www.mydomain.com/.env/bin/activate
sudo -u "$user" pip install wheel
sudo -u "$user" pip install uwsgi
sudo -u "$user" pip install -r requirements.txt
...
Now, for the life of me, I can't figure out how to get the subsequent pip installs to run in the context of the virtual environment, if that makes any sense.
I've scoured the web, and most of the questions/answers I found revolve around how to activate a virtual environment in a Bash script, but not how to activate it as a separate user within a Bash script that was executed with sudo.
That's because source is not an executable file, but a built-in bash command. It won't work with sudo, since the latter expects a program name (i.e. an executable file) as its argument.
P.S. It's not clear why you have to execute the whole script as root. If you need to run only a few commands as root (e.g. for starting/stopping a service) and the remaining majority as a regular user, you can use sudo for just those commands. E.g. the following script
#!/bin/bash
# The `whoami` command outputs the current username. Unlike `source`, this is
# a full-fledged executable file, not a built-in command
whoami
sudo whoami
sudo -u postgres whoami
on my machine outputs
trolley813
root
postgres
P.P.S. You probably don't need to activate an environment as root.
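As a sketch of that last point (paths and variables taken from the question's excerpt, not verified on your setup): you can skip activation entirely and call the pip that lives inside the virtual environment, which installs into that environment on its own:
#!/bin/bash
# path from the question's excerpt
VENV=/srv/www/www.mydomain.com/.env
sudo -u "$user" python3 -m venv "$VENV"
# No activation needed: the venv's own pip installs into the venv
sudo -u "$user" "$VENV/bin/pip" install wheel uwsgi
sudo -u "$user" "$VENV/bin/pip" install -r requirements.txt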