While working in a virtualenv (Python 3.4), I was trying to run tox to run the tests when I got this error:
py34 develop-inst-nodeps: /home/horcrux/dir-sub/dir
py34 runtests: commands[0] | python -m nose2 -v
/home/horcrux/dir-sub/dir/.tox/py34/bin/python: No module named nose2
ERROR: InvocationError: '/home/horcrux/dir-sub/dir/.tox/py34/bin/python -m nose2 -v'
_____________________________ summary ______________________________________
ERROR: py34: commands failed
I've already tried installing nose2 with pip install nose2, but the problem remains the same.
sudo clears the environment, so you have to activate the virtualenv inside the sudo execution environment. Try:
sudo bash -c ". [venv/bin/activate] ; [tox]"
Replace [venv/bin/activate] with the path to your virtualenv activate script and replace [tox] with whatever command you are using to invoke it.
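Alternatively (a sketch, assuming your virtualenv lives in a directory such as `venv/`): skip activation entirely and call the venv's executables by absolute path. Activation only prepends the venv's `bin/` to PATH, so absolute paths survive sudo's environment reset:

```shell
# Activation only edits PATH; a venv's executables also work when
# called by absolute path, even under sudo's cleaned environment.
python3 -m venv /tmp/demo-venv                      # throwaway venv for illustration
/tmp/demo-venv/bin/python -c 'import sys; print(sys.prefix)'
# prints /tmp/demo-venv -- so `sudo /path/to/venv/bin/tox` would run
# the venv's tox with no activation step at all
```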
I have a package (let's call it test-package) that has a pyproject.toml file but also has a setup.py.
If I install it locally with python -m pip install -e . --no-use-pep517 with pip>=20.1.3, the package is installed, and python -c "import inspect, test_package; print(inspect.getfile(test_package))" works and points to the local file.
If I python -m pip install -e . in a virtual env, python -c "import inspect, test_package; print(inspect.getfile(test_package))" works.
If I python -m pip install --no-build-isolation -e ., python -c "import inspect, test_package; print(inspect.getfile(test_package))" works.
But plain python -m pip install -e . succeeds, yet python -c "import inspect, test_package; print(inspect.getfile(test_package))" fails with ModuleNotFoundError: No module named 'test_package'.
I tried reading PEP 517 and PEP 518, but I still don't understand how to reason about the behavior above. I was wondering if someone could please explain it to me.
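A pattern that often explains this kind of behavior (an assumption on my part, since the actual pyproject.toml isn't shown): once a pyproject.toml exists, pip switches to PEP 517/518 mode and builds inside an isolated environment containing only the declared build requirements, and older pip/setuptools combinations handled editable installs poorly in that mode, which is why --no-use-pep517 and --no-build-isolation behave differently from the default. With a recent pip and setuptools, declaring the backend explicitly lets `pip install -e .` work in full PEP 517 mode:

```toml
# Sketch of a minimal pyproject.toml for a setuptools project;
# setuptools>=64 implements standards-based editable installs (PEP 660)
[build-system]
requires = ["setuptools>=64", "wheel"]
build-backend = "setuptools.build_meta"
```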
I have a shell step (as part of a Jenkinsfile Groovy script) which does:
sh "python3 -m venv venv"
sh "source venv/bin/activate"
withCredentials([usernamePassword(credentialsId: XXXXXXX,
                                  usernameVariable: 'XXXXXXX',
                                  passwordVariable: 'XXXXXXX')]) {
    sh "pip install --extra-index-url 'https://${XXXXXXX}:${XXXXXX}@artifactory-url-base/artifactory/api/pypi/pypi-release-local/simple' -e ."
}
sh "pip freeze >> requirements.txt"
However, the above fails with:
ERROR: file:///home/jenkins/workspace/XXXXXXXXXXX does not appear to be a Python project: neither 'setup.py' nor 'pyproject.toml' found.
The project has no setup.py or requirements.txt file at the top level - how can I install my dependencies without installing the current Python project itself via -e?
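Two things worth noting here (assumptions on my part, since the full Jenkinsfile isn't shown): each Jenkins `sh` step runs in its own shell, so a `source venv/bin/activate` in one step has no effect on later steps; and `-e .` only makes sense for an installable project. A sketch that keeps everything in one step and installs from a hypothetical requirements file instead of the project itself:

```groovy
// Sketch: a single sh step so the venv activation survives; the
// requirements file name and $EXTRA_INDEX_URL are placeholders.
sh '''
    python3 -m venv venv
    . venv/bin/activate
    pip install --extra-index-url "$EXTRA_INDEX_URL" -r requirements.txt
    pip freeze > frozen-requirements.txt
'''
```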
I am trying to run pytest in Jenkins.
When I try to install pytest in the build step in Jenkins, it says pip: command not found. I even tried setting up a virtualenv, but with no success.
Note: I am running Jenkins in a Docker container.
First attempt (shell build step):
#!/bin/bash
cd /usr/bin
pip install pytest
py.test test_11.py
Second attempt (with a virtualenv):
#!/bin/bash
source env1/bin/activate
pip install pytest
py.test test_11.py
Dockerfile
FROM jenkins
USER root
Errors:
Started by user admin
Running as SYSTEM
Building on master in workspace /var/jenkins_home/workspace/pyproject
[pyproject] $ /bin/bash /tmp/jenkins5312265766264018610.sh
/tmp/jenkins5312265766264018610.sh: line 4: pip: command not found
Build step 'Execute shell' marked build as failure
Finished: FAILURE
Started by user admin
Running as SYSTEM
Building on master in workspace /var/jenkins_home/workspace/pyproject
[pyproject] $ /bin/bash /tmp/jenkins6002566555689593419.sh
/tmp/jenkins6002566555689593419.sh: line 4: pip: command not found
/tmp/jenkins6002566555689593419.sh: line 5: py.test: command not found
Build step 'Execute shell' marked build as failure
Finished: FAILURE
Well, the error is crystal clear: pip is not installed in the running environment.
I did some digging myself and found out that the jenkins image ships only Python 2.7, with pip not installed.
I would start by installing pip first and continue from there, so modify Dockerfile to:
FROM jenkins
USER root
RUN apt-get update && apt-get install -y python-pip && rm -rf /var/lib/apt/lists/*
Hope this helps you find your way.
More helpful information would include:
your Jenkins pipeline script (at least up to the 'Execute shell' step)
the Python version you intend to use
how and where you run the virtualenv creation command
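If you intend to use Python 3 instead, here is a variant of that Dockerfile (a sketch, assuming a Debian-based Jenkins base image; package names vary by release):

```dockerfile
FROM jenkins
USER root
# Python 3 plus venv support and pip, so build steps can create
# and use virtual environments; clean apt lists to keep the image small
RUN apt-get update \
 && apt-get install -y --no-install-recommends python3 python3-venv python3-pip \
 && rm -rf /var/lib/apt/lists/*
```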
Trying to use tox to run tests before pushing, but I keep running into errors like:
ERROR: py26: InterpreterNotFound: python2.6
ERROR: py32: InterpreterNotFound: python3.2
ERROR: py34: InterpreterNotFound: python3.4
apt-cache search isn't turning up any packages that look like they will help. How do you install all these versions of the interpreter on Ubuntu 14.04?
Obviously, Ubuntu doesn't ship all historic versions of Python, but you can use the deadsnakes PPA, which has everything from 2.3 to 3.4.
For one project I used with the drone.io CI service, I had the following tox section, which I ran before the actual test envs:
[testenv:setupdrone]
whitelist_externals = /bin/bash
commands =
    bash -c "echo 'debconf debconf/frontend select noninteractive' | sudo debconf-set-selections"
    bash -c "sudo add-apt-repository ppa:fkrull/deadsnakes &> /dev/null"
    bash -c "sudo apt-get update &> /dev/null"
    bash -c "sudo apt-get -y install python2.6 python3.4 &> /dev/null"
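Alternatively, if you just want tox to stop failing on interpreters your machine doesn't have, tox has a built-in switch for that (a config sketch; env names taken from the errors above):

```ini
[tox]
envlist = py26, py32, py34
# don't fail the run when an interpreter is missing locally
skip_missing_interpreters = true
```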
I recently stumbled upon an issue with running coverage measurements within a virtual environment. I do not remember similar issues in the past, nor was I able to find a solution on the web.
Basically, when I run the test suite in a virtualenv, it works fine. But as soon as I try to do it under coverage, it fails for lack of the modules it requires. Based on an answer on Stack Overflow, I checked my setup and found that coverage uses a different interpreter, even when run from inside the same virtualenv.
Here is how to reproduce it:
$ virtualenv --no-site-packages venv
New python executable in venv/bin/python
Installing Setuptools................................................done.
Installing Pip.......................................................done.
$ source venv/bin/activate
(venv)$ echo 'import sys; print(sys.executable)' > test.py
(venv)$ python test.py
/home/tadeck/testground/venv/bin/python
(venv)$ coverage run test.py
/usr/bin/python
The question is: how do I make coverage work seamlessly with a virtual environment? I could alter sys.path or install the required modules system-wide, but there has to be a cleaner way.
I had to leave my virtualenv after installing coverage and reactivate it to get coverage to work.
[alex@gesa ~]$ virtualenv --no-site-packages venv
[alex@gesa ~]$ source venv/bin/activate
(venv)[alex@gesa ~]$ pip install coverage
(venv)[alex@gesa ~]$ deactivate
[alex@gesa ~]$ source venv/bin/activate
Just pip install coverage in your new venv:
[alex@gesa ~]$ virtualenv venv
[alex@gesa ~]$ source venv/bin/activate
(venv)[alex@gesa ~]$ pip install coverage
(venv)[alex@gesa ~]$ echo 'import sys; print(sys.executable)' > test.py
(venv)[alex@gesa ~]$ python test.py
/home/alex/venv/bin/python
(venv)[alex@gesa ~]$ coverage run test.py
/home/alex/venv/bin/python
(venv)[alex@gesa ~]$
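One more safeguard, not mentioned in the answers above: invoke coverage as a module. `python -m coverage run test.py` executes under whichever interpreter `python` resolves to, so inside an activated venv it can never silently fall back to a stale console-script shim pointing at /usr/bin/python:

```shell
# `python -m <module>` always runs under the interpreter named on the
# left, sidestepping whatever shim happens to be first on PATH.
python3 -c 'import sys; print(sys.executable)'   # which interpreter this is
# inside a venv: python -m coverage run test.py  (guaranteed same interpreter)
```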