Allure-behave not reporting failed suites as 'failed' - python

I have Python BDD tests in Behave, version 1.2.6.
The issue I'm facing is that allure-behave reports failed suites as "Passed", even though it does show the failed step and reports it as failed.
I have a behave.ini in my features folder with:
[behave.formatters]
allure = allure_behave.formatter:AllureFormatter
I run my tests with:
behave -f allure_behave.formatter:AllureFormatter -o allure-results
I can see that the result files created by allure-behave say the suites have passed, and they are then reported as such in the Allure UI.
Did I miss something while setting up/running the tests?
Maybe add something to the "after_scenario" method?

Already fixed in 2.3.3b1
Feel free to create issues in our repo
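For reference, upgrading the formatter package to 2.3.3b1 or later should pick up the fix, e.g.:
python -m pip install --upgrade allure-behave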

Related

SonarCloud Code Coverage in GitHub Actions for Python not working

I am trying to set up code coverage in SonarCloud using GitHub Actions.
It shows as integrated, but the coverage is 0.0%.
From the analysis details, I found two warnings:
Could not find ref 'main' in refs/heads, refs/remotes/upstream or refs/remotes/origin. You may see unexpected issues and changes. Please make sure to fetch this ref before pull request analysis.
and
The following error(s) occurred while trying to import coverage report:
Cannot resolve 63 file paths, ignoring coverage measures for those files%nCannot
resolve the file path 'src/adapters/__init__.py' of the coverage report, ambiguity,
the file exists in several 'source'.
The GitHub Actions step is:
- name: Run tests
  run: poetry run pytest tests/unit tests/integration --cov-report=xml --cov-branch --cov=adapters --cov=domain --cov=entrypoints --cov=ports --cov=schemas --cov=service_layer --cov=utils
The Sonar properties are:
sonar.python.coverage.reportPaths=coverage.xml
sonar.sources=src
sonar.tests=tests
Apologies if I've missed any important details.
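One thing worth trying (a sketch, assuming all of the listed packages live under src/): measure coverage against the single src root so the paths in coverage.xml line up with sonar.sources, e.g.
- name: Run tests
  run: poetry run pytest tests/unit tests/integration --cov-report=xml --cov-branch --cov=src
Alternatively, setting relative_files = true under [coverage:run] (in .coveragerc or setup.cfg) makes coverage.py write repository-relative paths, which SonarCloud can often resolve without ambiguity.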

Pytest coverage.py error

I am unit testing Python code and running the command pytest --cov. The test runs fine, but coverage is not displayed and the error is:
INTERNALERROR> raise CoverageException("Couldn't use data file {!r}:{}".format(self.filename, msg))
INTERNALERROR> coverage.misc.CoverageException: Couldn't use data file 'C:\\Users\\Desktop\\Pytest\\.coverage': Safety level may not be changed inside a transaction
I need help with this problem.
This has been mentioned a few times in the coverage.py issues, and the eventual discovery was that it's a bug in Python 3.6.0, but if you use 3.6.1 or later, you will be fine.
If that doesn't cover your case, feel free to open an issue with details of how to reproduce.
Could be related to https://github.com/nedbat/coveragepy/issues/883#issuecomment-650562896 if you're running multiple pytest processes in parallel, in which case specifying a distinct coverage file for each run fixes it, like:
export COVERAGE_FILE=.coverage.SOMETHING_SPECIFIC_FOR_EACH_RUN
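A minimal sketch of that workaround for two parallel runs (the file suffixes, test paths, and --cov target are illustrative), followed by merging the data files:
COVERAGE_FILE=.coverage.api pytest tests/api --cov=myproject
COVERAGE_FILE=.coverage.ui pytest tests/ui --cov=myproject
coverage combine   # merges the .coverage.* data files into a single .coverage
coverage report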
Use coverage==6.3.1
This worked for me.
Link to that version: https://pypi.org/project/coverage/6.3.1/
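For reference, pinning that exact release:
python -m pip install coverage==6.3.1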

pytest - Windows fatal exception: code 0x8001010d

I am trying to run a GUI test using pytest and pywinauto. When I run the code normally, it does not complain.
However, when I am doing it via pytest, it throws a bunch of errors:
Windows fatal exception: code 0x8001010d
Note that the code still executes without problems and the cases are marked as passed. It is just that the output is polluted with these weird Windows exceptions.
What is the reason for this? Should I be concerned?
import time
from pywinauto import Application

def test_01():
    app = Application(backend='uia')
    app.start(PATH_TO_MY_APP)
    main = app.window(title_re="MY_APP")
    main.wait('visible', timeout=8)  # error occurs here
    time.sleep(0.5)
    win_title = f"MY_APP - New Project"
    assert win_title.upper() == main.texts()[0].upper()  # error occurs here
This is an effect of a change introduced with pytest 5.0.0. From the release notes:
#5440: The faulthandler standard library module is now enabled by default to help users diagnose crashes in C modules.
This functionality was provided by integrating the external pytest-faulthandler plugin into the core, so users should remove that plugin from their requirements if used.
For more information see the docs: https://docs.pytest.org/en/stable/usage.html#fault-handler
You can mute these errors as follows:
pytest -p no:faulthandler
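If you want this permanently for the project rather than per invocation, the same plugin opt-out can live in the pytest configuration file, e.g. in pytest.ini:
[pytest]
addopts = -p no:faulthandler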
I had the same problem with Python 3.7.7 32-bit and pytest 5.x.x. It was solved by downgrading pytest to v4.0.0:
python -m pip install pytest==4.0
Perhaps not all Python versions are compatible with the newest pytest version(s).
My workaround for now is to install pytest==4.6.11.
The problem first appears with 5.0.0.
Windows 10 box.
I have this problem using pytest 6.2.5 and pytest-qt 4.0.2.
I tried np8's idea: still got a horrible crash (without message).
I tried Felix Zumstein's idea: still got a horrible crash (without message).
Per this thread it appears the issue (in 'Doze) is a crap DLL.
What's strange is that pytest-qt and the qtbot fixture seem to work very well... until I get to this one test. So I have concluded that I have done something too complicated in terms of mocking and patching for this crap 'Doze DLL to cope with.
For example, I mocked out two methods on a QMainWindow subclass which is created at the start of the test. But removing these mocks did not solve the problem.
I have so far spent about two hours trying to understand what specific feature of this test is so problematic. I am in fact trying to verify the functioning of a method on my main window class which "manufactures" menu items (QtWidgets.QAction) based on about 4 parameters.
At this stage I basically have no idea what this "problem feature" is, but it might be the business of inspecting and examining the returned QAction object.

Why does py.test pass a unittest when Pyunit and nosetest fail it (as expected)?

I'm a complete beginner when it comes to writing Python unit tests, and I know it's far better and easier to write the tests before the code, but I thought I'd start by contributing to a GitHub project, fixing a few small issues, and giving it a go.
One issue I was fixing was that if an argument wasn't given to a method, it would actually remove that value on the server. I should mention that this project is a client for a REST API. This was easy enough to fix, but I thought it would be good to write a test for it.
The code of the broken method is included within a class:
def edit_device(self, device, nickname=None, model=None, manufacturer=None):
    data = {"nickname": nickname}
    iden = device.device_iden
    r = self._session.post("{}/{}".format(self.DEVICES_URL, iden), data=json.dumps(data))
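For context, the kind of fix described above (a sketch, not necessarily the project's actual patch) is to drop any argument the caller did not supply before posting:

def edit_device(self, device, nickname=None, model=None, manufacturer=None):
    # Only send fields that were actually provided, so omitted arguments
    # are left untouched on the server instead of being cleared.
    supplied = {"nickname": nickname, "model": model, "manufacturer": manufacturer}
    data = {k: v for k, v in supplied.items() if v is not None}
    iden = device.device_iden
    r = self._session.post("{}/{}".format(self.DEVICES_URL, iden), data=json.dumps(data))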
I then was going to use Mock to mock the responses of the REST API using their documentation as the guide (it includes example responses).
The issue I am having is I have written my test as:
@mock.patch('pushbullet.pushbullet.requests.Session.get', side_effect=mocked_requests_get)
@mock.patch('pushbullet.pushbullet.requests.Session.post', side_effect=mocked_requests_post)
class TestPushbullet(object):
    def test_edit_device_without_nickname(self, mock_get, mock_post):
        pb = pushbullet.Pushbullet("API_KEY")
        device = pb.devices[0]
        new_device = pb.edit_device(device)
        assert new_device.nickname == device.nickname
This seems to work correctly; the methods mocked_requests_get and mocked_requests_post get called. If I run this test within Eclipse PyDev, using PyUnit to run it, it fails, as I expect. If I run the tests using nose, it also fails; again, perfect. If I run the test using py.test on the command line, it passes.
If I use pytest.set_trace() as the first line in mocked_requests_post, I can print args, and it shows that the nickname is in fact not None and is still set to what it is on the server (i.e. device.nickname, so the assertion passes).
I can't for the life of me work out why py.test is not picking up the change in the JSON from self._session.post. If I change the iden in the URL format, it does pick up that change; however, it does not pick up a change to the data body.
Am I doing something intrinsically wrong? I can't see why py.test would pass and nose would fail on the same code.
EDIT: on the command line I'm running py.test path/to/single_test_file.py and the test file has only the single test method I've pasted above.
I finally worked it out: I must have installed the client package into Python before deciding to write the tests. It looks like the command-line py.test was therefore importing the client package from the installed copy (via pip install), whereas the IDE was reading it from the development package I had in Eclipse.
The solution is to remove the installed package: pip uninstall <package>
Then reinstall the development package: pip install -e . (from the root of the development directory)
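A quick way to confirm which copy the command-line run will import (the module name is taken from the test code above):
python -c "import pushbullet; print(pushbullet.__file__)"
If that prints a path inside site-packages rather than your working copy, py.test is still exercising the released package.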

Overriding --errors-only=yes specified in rcfile

I use paver to run pylint as a task. In my rcfile (pylintrc) I have configured pylint to report only errors by setting errors-only=yes.
But I'd like to be able to run the paver pylint task with a verbose option to get it to report non-errors as well. How can I run pylint overriding the errors-only=yes setting?
Running with --errors-only=no gives an exception indicating that --errors-only cannot be given a value. --enable=all also does not work.
This is an unexpected restriction that deserves an issue on pylint's tracker (https://bitbucket.org/logilab/pylint/issues).
Though to get it working properly in your case, I would use a custom rc file for the task that wouldn't be used in my daily usage, e.g. pylint --rcfile=task.pylintrc ...
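A minimal sketch of such a pavement.py task (the file names and the environment-variable switch are illustrative, not part of the original setup):

import os
from paver.easy import task, sh

@task
def pylint():
    # Daily run: the rcfile with errors-only=yes.
    # Verbose run (PYLINT_VERBOSE=1 paver pylint): a second rcfile without errors-only.
    rcfile = "verbose.pylintrc" if os.environ.get("PYLINT_VERBOSE") else "pylintrc"
    sh("pylint --rcfile={} mypackage".format(rcfile))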
