How do I get error messages from nosetests - python

The nosetests command is failing with no messages. If I cd to my home directory I get the output I would expect:
(base) raysalemi@RayProMac ~ % nosetests
----------------------------------------------------------------------
Ran 0 tests in 0.003s
OK
But if I cd to my tests directory I get this:
/Users/raysalemi/repos/pyuvm/tests/nosetests
(base) raysalemi@RayProMac nosetests % ls
__pycache__ pyuvm_unittest.py test_05_base_classes.py test_06_reporting_classes.py
(base) raysalemi@RayProMac nosetests % nosetests
(base) raysalemi@RayProMac nosetests % echo $?
1
This has been running for months so I'm not certain of the change, but I can't get an error message to check, only the exit status.
Suggestions?

The solution was to cd to my tests directory and run Python with the unittest module directly:
(base) raysalemi@RayProMac nosetests % python -m unittest
No module named 'copyz'
No module named 'copyz'
EE
I had accidentally typed a z into an import (copy became copyz), so the import was failing. The only way to see that message was to run unittest directly.
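A quick way to surface hidden import errors like this, independent of the test runner, is to try importing each test module directly and print the full traceback. A minimal sketch, assuming the module names from the ls output above and that it is run from the tests directory:
import importlib
import traceback

# Hypothetical check: import each test module and show the real failure
# instead of the silent exit status a runner may give you.
for name in ("pyuvm_unittest", "test_05_base_classes", "test_06_reporting_classes"):
    try:
        importlib.import_module(name)
        print(f"{name}: OK")
    except Exception:
        traceback.print_exc()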

Related

How to write an if condition in Python for a test case

script: |
python3 -m coverage erase
python3 -m coverage run -m pytest tests/ -v --junitxml=junit/test-results.xml --capture=${{ parameters.pytest_capture }}
TEST_RESULT=$? # Create coverage report even if a test case failed
python3 -m coverage html
python3 -m coverage xml
python3 -m coverage report --show-missing --fail-under=${{ parameters.covFailUnder }} && exit $TEST_RESULT
Here, in python3 -m coverage run -m pytest tests/ -v --junitxml=junit/test-results.xml --capture=${{ parameters.pytest_capture }}, the tests run only from the tests/ folder. I need to make sure both the tests/ and unittests/ folders are checked.
So how do I write the script so that it reads both tests/ and unittests/?
I am stuck on this; please suggest a solution.
Here is an example of how to check a condition in a test:
def test():
    # your_function and expected_result stand in for the code and value under test
    result = your_function()
    if result == expected_result:
        print("Passed")
    else:
        print("Failed")
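Under pytest or unittest you would normally let an assert do this check and report the failure; a minimal equivalent, reusing the placeholder names your_function and expected_result from the sketch above:
def test_your_function():
    # pytest shows the compared values automatically when this assertion fails
    assert your_function() == expected_result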
Also, this may have been answered previously: What's the efficient way to test if-elif-else conditions in Python
To run tests from both tests/ and unittests/ folders, you can modify the script as follows:
python3 -m coverage erase
python3 -m coverage run -m pytest tests/ unittests/ -v --junitxml=junit/test-results.xml --capture=${{ parameters.pytest_capture }}
TEST_RESULT=$? # Create coverage report even if a test case failed
python3 -m coverage html
python3 -m coverage xml
python3 -m coverage report --show-missing --fail-under=${{ parameters.covFailUnder }} && exit $TEST_RESULT
Here, the arguments tests/ unittests/ specify that pytest should collect tests from both the tests/ and unittests/ folders.
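An alternative to passing both folders on the command line is to configure them once, so every pytest invocation picks them up. A minimal sketch, assuming a pytest.ini at the repository root (testpaths is a standard pytest option):
[pytest]
testpaths =
    tests
    unittests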

Function import issue running pytest on Gitlab CI

I have a script 'interceptor.py' with a function
def isIPValid(string):
and a unit test script for pytest 'test_interceptor.py' with a test for this function:
import interceptor

def test_isIPValid():
    ip1 = 'localhost'
    ip2 = '192.168.4.52'
    ip3 = '55..5.7.1'
    ip4 = 'badip'
    assert interceptor.isIPValid(ip1)
    assert interceptor.isIPValid(ip2)
    assert not interceptor.isIPValid(ip3)
    assert not interceptor.isIPValid(ip4)
Running the tests locally, either from PyCharm or from cmd, works fine. Both
pytest
and
coverage run -m pytest
run and the test passes.
On Gitlab CI however, when running coverage run -m pytest (after cloning and installing pytest and coverage via pip) I get the following output:
> assert interceptor.isIPValid(ip1)
E AttributeError: module 'interceptor' has no attribute 'isIPValid'
test_interceptor.py:10: AttributeError
=========================== short test summary info ============================
FAILED test_interceptor.py::test_isIPValid - AttributeError: module 'interceptor' has no attribute 'isIPValid'
============================== 1 failed in 0.09s ===============================
I've tried importing the specific function and importing all of the functions from the module, always with the same error. Does anyone have a pointer to where this issue could come from and why I am getting different behavior on the GitLab runner vs. locally on Windows?
Some additional information:
The pipeline is running on a linux runner in hosted Gitlab
All requirements are installed from a requirements.txt file during the CI script, plus pytest and coverage
The file containing the function uses PySide6, but the tested function does not use any function that depends on it.
Edit:
As requested, here is my GitLab YAML:
stages:
  - test

image: python:3

unit-test-job:   # This job runs in the test stage.
  stage: test
  script:
    - python --version
    - pip install -r requirements.txt
    - pip install coverage
    - pip install pytest
    - pip install pytest-mock
    - echo "Running unit tests"
    - coverage run -m pytest
    - coverage report
    - coverage xml
... plus the upload part we don't actually reach
Edit 2:
There seems to be a serious and unaddressed bug in PySide6 when used with GitLab CI, as pointed out in this thread:
Why does PySide6 on GitLab CI result in ImportError?
That solution, however, is still not working for me:
script:
- python -m venv .venv
- . .venv/bin/activate
- pip install -r requirements.txt
- pip install coverage
- pip install pytest
- pip install pytest-mock
- python -m pip install PySide6
- strip --remove-section=.note.ABI-tag .venv/lib/python3.11/site-packages/PySide6/Qt/lib/libQt6Core.so.6
- python -c 'import PySide6; print(PySide6.__version__)'
- ldd .venv/lib/python3.11/site-packages/PySide6/Qt/lib/libQt6Core.so.6
- python -c 'import PySide6.QtCore'
- echo "Running unit tests"
- coverage run -m pytest
- coverage report
- coverage xml
It still generates an error:
ImportError while importing test module '<mypath>test_interceptor.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/usr/local/lib/python3.11/importlib/__init__.py:126: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
test_interceptor.py:1: in <module>
from Interceptor_module import isIPValid
Interceptor_module.py:5: in <module>
from PySide6 import QtCore, QtGui, QtWidgets
E ImportError: libGL.so.1: cannot open shared object file: No such file or directory
=========================== short test summary info ============================
ERROR test_interceptor.py
!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!
=============================== 1 error in 0.86s ===============================
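The libGL.so.1 failure usually means the runner image lacks the system OpenGL libraries that Qt links against, rather than anything being wrong with the Python packages. A sketch of a common workaround, assuming a Debian-based image such as python:3 and current Debian package names:
before_script:
  - apt-get update
  - apt-get install -y --no-install-recommends libgl1 libegl1 libxkbcommon0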

Is there a way to map Python nose2 to a coverage plugin installed in a custom location?

I have installed nose2 with the following command:
pip3 install nose2
And I have installed the coverage in the custom path with the following command:
pip3 install --target=/tmp/coverage_pkg coverage
I want to execute the test cases and generate the coverage report. Is there a way to point nose2 at my coverage plugin installed in the custom path?
I tried to execute the below command:
nose2 --with-coverage
I got the following output:
Warning: you need to install "coverage_plugin" extra requirements to use this plugin. e.g. `pip install nose2[coverage_plugin]`
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
I want to generate the coverage report using the coverage package installed in the custom path.
Can someone please explain whether I can achieve this?
I was able to achieve this using the following command:
cd <custom location where packages are installed>
python3 -m nose2 --with-coverage -s "path_to_source_dir" --coverage "path_to_source_dir"
You need to run from the location where nose2 and coverage are installed (the custom dir, e.g. /tmp/coverage_pkg).
For more configuration, and to generate JUnit and coverage reports, you can use a unittest.cfg file and a .coveragerc file to control unit testing and coverage reporting.
Sample unittest.cfg:
[unittest]
plugins = nose2.plugins.junitxml
start-dir =
code-directories =
test-file-pattern = *_test.py
test-method-prefix = t
[junit-xml]
always-on = True
keep_restricted = False
path = nose2-junit.xml
test_fullname = False
[coverage]
always-on = True
coverage =
coverage-config = .coveragerc
coverage-report = term-missing
                  xml
Sample .coveragerc:
[run]
branch = True

[xml]
output = coverage.xml

[report]
show_missing = True
omit =
    /test/*
    /poc1.py
    /bin/__init__.py
Use the command below to apply the config file and generate the unit test and coverage reports.
python3 -m nose2 --config "<path to config file>/unittest.cfg"
The error message includes the exact command you need:
pip install nose2[coverage_plugin]
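If coverage really has to live in the custom --target directory, another option is to make that directory importable before invoking nose2, since --target installs are not added to sys.path automatically. A sketch, assuming the /tmp/coverage_pkg path from the question:
pip3 install --target=/tmp/coverage_pkg "nose2[coverage_plugin]"
PYTHONPATH=/tmp/coverage_pkg python3 -m nose2 --with-coverage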

bash condition within Pipenv Pipfile script

I have a Pipfile with this script:
[scripts]
tests = "pytest --cov-fail-under=40 tests/"
I want to make the cov-fail-under parameter value depend on an env var. Outside of the Pipfile script, the following command does the job:
pytest --cov-fail-under=$( case $VAR in true ) echo 40 ;; * ) echo 80 ;; esac ) tests/
But when executed with pipenv run tests, the bash condition is passed as a literal string, producing the following error:
ERROR: usage: pytest [options] [file_or_dir] [file_or_dir] [...]
pytest: error: argument --cov-fail-under: invalid validate_fail_under value: '$('
Is there any workaround to solve this?
You can spawn a shell with sh -c:
Pipfile
[scripts]
tests = "sh -c '[ \"${VAR}\" = \"true\" ] && mincov=40 ; pytest --cov-fail-under=\"${mincov:-80}\" tests'"
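Invoked like this (assuming the Pipfile entry above), the threshold switches on VAR:
VAR=true pipenv run tests    # fails the run below 40% coverage
pipenv run tests             # VAR unset, falls back to the 80% default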

"./postinstall" failed with return code [Timeout]

When I run 'dotcloud push training', the postinstall script takes a long time to run and then I get the error below.
I created a new account.
I cd to the project and run 'dotcloud create training' and 'dotcloud push training', but nothing changes.
Can anyone help me, please?
Running postinstall script...
ERROR: deployment aborted due to unexpected command result: "./postinstall" failed with return code [Timeout]
postinstall
#!/bin/sh
#python createdb.py
python training/manage.py syncdb --noinput
python mkadmin.py
mkdir -p /home/dotcloud/data/media /home/dotcloud/volatile/static
python training/manage.py collectstatic --noinput
requirements.txt
Django==1.4
PIL==1.1.7
Try this as your postinstall. It may help with locating the error (expanding on Ken's advice):
#!/bin/bash
# set -e makes the script exit on the first error
set -e
# set -x will add debug trace information to all of your commands
set -x
echo "$0 starting"
#python createdb.py
python training/manage.py syncdb --noinput
python mkadmin.py
mkdir -p /home/dotcloud/data/media /home/dotcloud/volatile/static
python training/manage.py collectstatic --noinput
echo "$0 complete"
More debugging info available at http://tldp.org/LDP/Bash-Beginners-Guide/html/sect_02_03.html
An error message like "./postinstall failed with return code" means that there is a problem with your postinstall script.
In order to debug postinstall executions easily on dotCloud, you can do the following:
Let's assume that your app is "ramen" and your service is "www".
$ dotcloud -A ramen run www
> ~/current/postinstall
It'll re-execute the postinstall but from your session this time, so you'll be able to easily update the postinstall code and re-run it without having to push again and again.
Once you've found the root cause, fix it locally and push your application again.
