I'm using py.test for a somewhat unconventional application. Basically, I want to have user interaction in a test via print() and input() (this is Python 3.5). The ultimate goal is semi-automated testing of hardware and multi-layered software that cannot be tested automatically even in principle. Some test cases will ask the testing technician to do something (confirmed by pressing Enter or a similar key on the console) or to take a simple measurement or visually confirm something (entered on the console).
Example of what I (naively) want to do:
def test_thingie():
    thingie_init('red')
    print('Testing the thingie.')
    # Ask the testing technician to enter info, or confirm that he has set things up physically
    x = int(input('Technician: How many *RED* widgets are on the thingie? Enter integer:'))
    assert x == the_correct_number
This works when the test file is run with pytest -s to prevent stdin and stdout capturing, but the documented approach in the py.test documentation (with capsys.disabled()) doesn't work, since it only affects stdout and stderr.
What's a good way to make this work using code in the py.test module, no command line options, and ideally per-test?
The platform, for what it's worth, is Windows and I'd prefer not to have this clobber or get clobbered by the wrapping of stdin/out/whatever that results from nested shells, uncommon shells, etc.
no command line options
Use a pytest.ini option or an environment variable so you don't have to pass the command-line option every time; for example:
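A minimal sketch, assuming -s is the option you want to avoid typing: put it in pytest.ini via the addopts setting, or export the standard PYTEST_ADDOPTS environment variable.
# contents of pytest.ini
[pytest]
addopts = -s
# or set it in the environment instead (Windows cmd shown, since the platform is Windows):
set PYTEST_ADDOPTS=-s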
ideally per-test?
Use a function-scoped fixture to take the user input. Example code:
# contents of conftest.py
import pytest

@pytest.fixture(scope='function')
def take_input(request):
    val = input(request.param)
    return val

# contents of test_input.py
import pytest

# With indirect=True, the parametrized argument name must match the fixture name,
# and the parameters must be given as a tuple/list (note the trailing comma).
@pytest.mark.parametrize('take_input', ('Enter value here:',), indirect=True)
def test_input(take_input):
    assert take_input == "expected string"
I would like to use data from a file called simdata.txt to run a simulation that I wrote in Python. I would like to ensure that when the user executes my program from the command line that the data is only being fed through the sys.stdin stream.
That is, I would like my program to be run like this:
python3 simulation.py < simdata.txt
as opposed to this (fed through sys.argv as opposed to sys.stdin):
python3 simulation.py simdata.txt
How can I ensure that the user will always execute the program the first way rather than the second? Usually I enforce this rule with a check like this:
if len(sys.argv) > 1:
    print("This is not how you use the program. Example of use:")
    print("python3 simulation.py < simdata.txt")
    exit(1)
But this seems problematic since I may want to add extra flags to my program that would change the behavior of the simulation. That is, maybe I would want to do this:
python3 simulation.py < simdata.txt --plot_data
Is there a better way to ensure that the simdata.txt is only fed through stdin without compromising my ability to add extra program flags?
Working with the argparse module turned out to be the better answer; that way I didn't have to impose such strict requirements on the input.
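For reference, a rough sketch of that approach (only the --plot_data flag comes from the question; the other names are illustrative): argparse can take the data file as an optional positional argument that falls back to stdin, so the redirection style still works and extra flags remain possible.

import argparse
import sys

parser = argparse.ArgumentParser(description="Run the simulation.")
# Optional positional argument; when omitted, fall back to stdin so that
# `python3 simulation.py < simdata.txt` keeps working.
parser.add_argument("simdata", nargs="?", type=argparse.FileType("r"),
                    default=sys.stdin,
                    help="simulation data (defaults to stdin)")
parser.add_argument("--plot_data", action="store_true",
                    help="plot the results after the simulation runs")
args = parser.parse_args()

data = args.simdata.read()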
I am using Pycharm to run my pytest unit tests. I am testing a REST API, so I often have to validate blocks of JSON. When a test fails, I'll see something like this:
FAILED
test_document_api.py:0 (test_create_documents)
{'items': [{'i...ages': 1, ...} != {'items': [{'...ages': 1, ...}
Expected :{'items': [{'...ages': 1, ...}
Actual :{'items': [{'i...ages': 1, ...}
<Click to see difference>
When I click on the "Click to see difference" link, most of the difference is collapsed into runs of ellipses.
This is useless since it doesn't show me what is different. I get this behavior for any difference larger than a single string or number.
I assume Pycharm and/or pytest tries to elide uninformative parts of differences for large outputs. However, it's being too aggressive here and eliding everything.
How do I get Pycharm and/or pytest to show me the entire difference?
I've tried adding -vvv to pytest's Additional Arguments, but that has no effect.
Since the original post I verified that I see the same behavior when I run unit tests from the command line. So this is an issue with pytest and not Pycharm.
After looking at the answers I've got so far I guess what I'm really asking is "in pytest is it possible to set maxDiff=None without changing the source code of your tests?" The impression I've gotten from reading about pytest is that the -vv switch is what controls this setting, but this does not appear to be the case.
If you look closely into the PyCharm sources, you'll find that, out of the whole pytest output, PyCharm parses a single line for the data displayed in the Click to see difference dialog: the AssertionError: <message> line:
def test_spam():
> assert v1 == v2
E AssertionError: assert {'foo': 'bar'} == {'foo': 'baz'}
E Differing items:
E {'foo': 'bar'} != {'foo': 'baz'}
E Use -v to get the full diff
If you want to see the full diff line without truncation, you need to customize this line in the output. For a single test, this can be done by adding a custom message to the assert statement:
def test_eggs():
    assert a == b, '{0} != {1}'.format(a, b)
If you want to apply this behaviour to all tests, define a custom pytest_assertrepr_compare hook in the conftest.py file:
# conftest.py
def pytest_assertrepr_compare(config, op, left, right):
    if op in ('==', '!='):
        return ['{0} {1} {2}'.format(left, op, right)]
The terminal output will still be truncated when the comparison is too long unless you also increase the verbosity with the -vv flag, but the equality comparison in the AssertionError line itself is no longer stripped, so the full diff is displayed in the Click to see difference dialog, with the differing parts highlighted.
Since pytest integrates with unittest, as a workaround you may be able to write the test as a unittest.TestCase and set TestCase.maxDiff = None on the class, or self.maxDiff = None in each specific test.
https://docs.pytest.org/en/latest/index.html
Can run unittest (including trial) and nose test suites out of the box;
These may be helpful as well...
https://stackoverflow.com/a/21615720/9530790
https://stackoverflow.com/a/23617918/9530790
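A minimal sketch of that workaround (the test name and dicts are placeholders, and the assertion fails on purpose so the untruncated diff is shown); pytest collects and runs unittest-style classes like this out of the box:

import unittest

class TestDocumentApi(unittest.TestCase):
    # None disables unittest's truncation of assertEqual failure diffs.
    maxDiff = None

    def test_create_documents(self):
        expected = {'items': [{'id': 1}], 'pages': 1}
        actual = {'items': [{'id': 2}], 'pages': 1}
        self.assertEqual(expected, actual)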
I had a look in the pytest code base; maybe you can try some of these out:
1) Set the verbosity level in the test execution:
./app_main --pytest --verbose test-suite/
2) Add environment variable for "CI" or "BUILD_NUMBER". In the link to
the truncate file you can see that these env variables are used to
determine whether or not the truncation block is run.
import os
os.environ["BUILD_NUMBER"] = '1'
os.environ["CI"] = 'CI_BUILD'
3) Attempt to set DEFAULT_MAX_LINES and DEFAULT_MAX_CHARS on the truncate module (not recommended, since it relies on a private module):
from _pytest.assertion import truncate
truncate.DEFAULT_MAX_CHARS = 1000
truncate.DEFAULT_MAX_LINES = 1000
According to the code, the -vv option should work, so it's strange that it isn't working for you:
Current default behaviour is to truncate assertion explanations at
~8 terminal lines, unless running in "-vv" mode or running on CI.
The pytest truncation file that I'm basing my answer on: pytest/truncate.py
Hope something here helps you!
I have some getattr calls in my assertions and nothing is shown after the AssertionError. I added -lv (-l, short for --showlocals, plus -v) in the Additional Arguments field to show the local variables.
I was running into something similar and created a function that returns a string with a nice diff of the two dicts. In a pytest-style test this looks like:
assert expected == actual, build_diff_string(expected, actual)
And in unittest style:
self.assertEqual(expected, actual, build_diff_string(expected, actual))
The downside is that you have to modify all the tests that have this issue, but it's a keep-it-simple-and-stupid solution.
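The answer doesn't show build_diff_string itself; a hypothetical implementation along these lines, using the standard difflib and pprint modules, would do the job:

import difflib
import pprint

def build_diff_string(expected, actual):
    # Pretty-print both values and return a unified diff of the two.
    expected_lines = pprint.pformat(expected).splitlines()
    actual_lines = pprint.pformat(actual).splitlines()
    diff = difflib.unified_diff(expected_lines, actual_lines,
                                fromfile='expected', tofile='actual',
                                lineterm='')
    return '\n'.join(diff)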
In pycharm you can just put -vv in the run configuration Additional Arguments field and this should solve the issue.
Or at least, it worked on my machine...
I have a program that interacts with and changes block devices (/dev/sda and such) on linux. I'm using various external commands (mostly commands from the fdisk and GNU fdisk packages) to control the devices. I have made a class that serves as the interface for most of the basic actions with block devices (for information like: What size is it? Where is it mounted? etc.)
Here is one such method querying the size of a partition:
def get_drive_size(device):
    """Returns the maximum size of the drive, in sectors.

    :param device: the device identifier (/dev/sda and such)
    """
    query_proc = subprocess.Popen(["blockdev", "--getsz", device], stdout=subprocess.PIPE)
    # blockdev returns the number of 512B blocks in a drive
    output, error = query_proc.communicate()
    exit_code = query_proc.returncode
    if exit_code != 0:
        raise Exception("Non-zero exit code", str(error, "utf-8"))  # I have custom exceptions, this is slight pseudo-code
    return int(output)  # should always be valid
So this method accepts a block device path, and returns an integer. The tests will run as root, since this entire program will end up having to run as root anyway.
Should I try and test code such as these methods? If so, how? I could try and create and mount image files for each test, but this seems like a lot of overhead, and is probably error-prone itself. It expects block devices, so I cannot operate directly on image files in the file system.
I could try mocking, as some answers suggest, but this feels inadequate. It seems like I start to test the implementation of the method, if I mock the Popen object, rather than the output. Is this a correct assessment of proper unit-testing methodology in this case?
I am using python3 for this project, and I have not yet chosen a unit-testing framework. In the absence of other reasons, I will probably just use the default unittest framework included in Python.
You should look into the mock module (it's part of the standard library as unittest.mock in Python 3).
It enables you to run tests without needing to depend on any external resources, while giving you control over how the mocks interact with your code.
I would start from the docs in Voidspace
Here's an example:
import unittest
from unittest import mock

# 'path.to.module' is a placeholder for the module where get_drive_size lives;
# Popen has to be patched where it is looked up, not in subprocess itself.
from path.to.module import get_drive_size


class GetDriveSizeTestSuite(unittest.TestCase):
    @mock.patch('path.to.module.subprocess.Popen')
    def test_a_scenario_with_mock_subprocess(self, mock_popen):
        # communicate() returns bytes; blockdev --getsz would print the sector count.
        mock_popen.return_value.communicate.return_value = (b'1024', b'')
        mock_popen.return_value.returncode = 0
        self.assertEqual(1024, get_drive_size('/dev/sda'))
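If you also want to cover the failure branch of get_drive_size, a similar sketch works (path.to.module is still a placeholder for wherever the function lives):

import unittest
from unittest import mock

from path.to.module import get_drive_size


class GetDriveSizeErrorTestSuite(unittest.TestCase):
    @mock.patch('path.to.module.subprocess.Popen')
    def test_non_zero_exit_code_raises(self, mock_popen):
        # Simulate blockdev failing: empty stdout, an error message, exit code 1.
        mock_popen.return_value.communicate.return_value = (b'', b'device not found')
        mock_popen.return_value.returncode = 1
        with self.assertRaises(Exception):
            get_drive_size('/dev/sda')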
In Robot Framework, the execution status for each test case can be either PASS or FAIL. But I have a specific requirement to mark few tests as NOT EXECUTED when it fails due to dependencies.
I'm not sure how to achieve this and need an expert's advice to move ahead.
Until a SKIP status is implemented, you can use exitonfailure to stop further execution once a critical test fails, and then post-process the output.xml (and the resulting reports) to show those tests as "NOT_RUN" (gray), rather than "FAILED" (red).
Here's an example (Tested on RobotFramework 3.1.1 and Python 3.6):
First create a new class that extends the abstract class ResultVisitor:
from robot.api import ResultVisitor


class ResultSkippedAfterCritical(ResultVisitor):
    def visit_suite(self, suite):
        suite.set_criticality(critical_tags='Critical')
        for test in suite.tests:
            if test.status == 'FAIL' and "Critical failure occurred" in test.message:
                test.status = 'NOT_RUN'
                test.message = 'Skipping test execution after critical failure.'
Assuming you've already created the suite (for example with TestSuiteBuilder()), run it without creating report.html and log.html:
outputDir = suite.name.replace(" ", "_")
outputFile = "output.xml"
logger.info(F"Running Test Suite: {suite.name}", also_console=True)
result = suite.run(output=outputFile, outputdir=outputDir, \
report=None, log=None, critical='Critical', exitonfailure=True)
Notice that I've used "Critical" as the identifying tag for critical tests, together with the exitonfailure option.
Then, revisit the output.xml, and create report.html and log.html from it:
revisitOutputFile = os.path.join(outputDir, outputFile)
logger.info(F"Checking skipped tests in {revisitOutputFile} due to critical failures", also_console=True)
result = ExecutionResult(revisitOutputFile)
result.visit(ResultSkippedAfterCritical())
result.save(revisitOutputFile)
reportFile = 'report.html'
logFile = 'log.html'
logger.info(F"Generating {reportFile} and {logFile}", also_console=True)
writer = ResultWriter(result)
writer.write_results(outputdir=outputDir, report=reportFile, log=logFile)
It should display all the tests after the critical failure with a grayed-out NOT_RUN status.
There is nothing built in for this; Robot only supports two values for the test status: PASS and FAIL. You can mark a test as non-critical so it won't break the build, but it will still show up in the logs and reports as having been run.
The robot core team has said they will not support this feature. See issue 1732 for more information.
Even though robot doesn't support the notion of skipped tests, you have the option to write a script that scans output.xml and removes tests that you somehow marked as skipped (perhaps by adding a tag to the test). You will also have to adjust the counts of the failed tests in the xml. Once you've modified the output.xml file, you can use rebot to regenerate the log and report files.
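A rough sketch of that post-processing idea, assuming you mark the tests to drop with a (hypothetical) 'skipped' tag; afterwards you would run rebot on the rewritten file to regenerate log.html and report.html, and double-check the statistics it reports:

# strip_skipped.py -- usage: python strip_skipped.py output.xml
import sys

from robot.api import ExecutionResult, ResultVisitor


class RemoveSkippedTests(ResultVisitor):
    def start_suite(self, suite):
        # Drop every test carrying the hypothetical 'skipped' tag.
        suite.tests = [t for t in suite.tests if 'skipped' not in t.tags]


result = ExecutionResult(sys.argv[1])
result.visit(RemoveSkippedTests())
result.save(sys.argv[1])
# Then regenerate the reports, e.g.: rebot output.xml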
If you only need the change to be made for your log/report files you should take a look here for implementing a SuiteVisitor for the --prerebotmodifier option. As stated by Bryan Oakley, this might screw up your pass/fail count if you don't keep that in mind.
Currently it doesn't seem to be possible to actually alter the test status before output.xml is created, but there are plans to implement that in RF 3.0, and there is a discussion about a skip status.
Another more complex solution would be to create your own output file through implementing a listener to use with the --listener option that creates an output file as it fits your needs (possibly alongside with the original output.xml).
There is also the possibility to set tags during test execution, but I'm not familiar with that yet, so I can't say much about it at the moment. It might be another way to account for those dependency failures, since there are options to ignore certain tags when generating the log/report.
I solved it this way:
Run Keyword If ${blabla}==${True} do-this-task ELSE log to console ${PREV_TEST_STATUS}${yellow}| NRUN |
The test is not executed and is marked as NRUN.
Actually, you can set tags to run whatever tests you like (for sanity testing, regression testing, ...).
Just go to your test script configuration and set tags
And whenever you want to run, just go to Run tab and select check-box Only run tests with these tags / Skip tests with these tags
Then click the Start button :) Robot Framework will select any test that matches and run it.
Sorry, I don't have enough reputation to post images :(
Sometimes I want to just insert some print statements in my code, and see what gets printed out when I exercise it. My usual way to "exercise" it is with existing pytest tests. But when I run these, I don't seem able to see any standard output (at least from within PyCharm, my IDE).
Is there a simple way to see standard output during a pytest run?
The -s switch disables per-test capturing; without it, captured output is only shown when a test fails.
-s is equivalent to --capture=no.
pytest captures the stdout of individual tests and displays it only under certain conditions, along with the summary of the tests it prints by default.
Extra summary info can be shown using the '-r' option:
pytest -rP
shows the captured output of passed tests.
The captured output of failed tests is already shown by default, without needing an extra flag.
The formatting of the output is prettier with -r than with -s.
When running the test, use the -s option. All print statements in exampletest.py will then get printed on the console when the test is run.
py.test exampletest.py -s
In an upvoted comment to the accepted answer, Joe asks:
Is there any way to print to the console AND capture the output so that it shows in the junit report?
In UNIX, this is commonly referred to as teeing. Ideally, teeing rather than capturing would be the py.test default. Non-ideally, neither py.test nor any existing third-party py.test plugin (...that I know of, anyway) supports teeing – despite Python trivially supporting teeing out-of-the-box.
Monkey-patching py.test to do anything unsupported is non-trivial. Why? Because:
Most py.test functionality is locked behind a private _pytest package not intended to be externally imported. Attempting to do so without knowing what you're doing typically results in the public pytest package raising obscure exceptions at runtime. Thanks a lot, py.test. Really robust architecture you got there.
Even when you do figure out how to monkey-patch the private _pytest API in a safe manner, you have to do so before running the public pytest package run by the external py.test command. You cannot do this in a plugin (e.g., a top-level conftest module in your test suite). By the time py.test lazily gets around to dynamically importing your plugin, any py.test class you wanted to monkey-patch has long since been instantiated – and you do not have access to that instance. This implies that, if you want your monkey-patch to be meaningfully applied, you can no longer safely run the external py.test command. Instead, you have to wrap the running of that command with a custom setuptools test command that (in order):
Monkey-patches the private _pytest API.
Calls the public pytest.main() function to run the py.test command.
This answer monkey-patches py.test's -s and --capture=no options to capture stderr but not stdout. By default, these options capture neither stderr nor stdout. This isn't quite teeing, of course. But every great journey begins with a tedious prequel everyone forgets in five years.
Why do this? I shall now tell you. My py.test-driven test suite contains slow functional tests. Displaying the stdout of these tests is helpful and reassuring, preventing leycec from reaching for killall -9 py.test when yet another long-running functional test fails to do anything for weeks on end. Displaying the stderr of these tests, however, prevents py.test from reporting exception tracebacks on test failures. Which is completely unhelpful. Hence, we coerce py.test to capture stderr but not stdout.
Before we get to it, this answer assumes you already have a custom setuptools test command invoking py.test. If you don't, see the Manual Integration subsection of py.test's well-written Good Practices page.
Do not install pytest-runner, a third-party setuptools plugin providing a custom setuptools test command also invoking py.test. If pytest-runner is already installed, you'll probably need to uninstall that pip3 package and then adopt the manual approach linked to above.
Assuming you followed the instructions in Manual Integration highlighted above, your codebase should now contain a PyTest.run_tests() method. Modify this method to resemble:
class PyTest(TestCommand):
    .
    .
    .
    def run_tests(self):
        # Import the public "pytest" package *BEFORE* the private "_pytest"
        # package. While importation order is typically ignorable, imports can
        # technically have side effects. Tragicomically, that is the case here.
        # Importing the public "pytest" package establishes runtime
        # configuration required by submodules of the private "_pytest" package.
        # The former *MUST* always be imported before the latter. Failing to do
        # so raises obtuse exceptions at runtime... which is bad.
        import pytest
        from _pytest.capture import CaptureManager, FDCapture, MultiCapture

        # If the private method to be monkey-patched no longer exists, py.test
        # is either broken or unsupported. In either case, raise an exception.
        if not hasattr(CaptureManager, '_getcapture'):
            from distutils.errors import DistutilsClassError
            raise DistutilsClassError(
                'Class "pytest.capture.CaptureManager" method _getcapture() '
                'not found. The current version of py.test is either '
                'broken (unlikely) or unsupported (likely).'
            )

        # Old method to be monkey-patched.
        _getcapture_old = CaptureManager._getcapture

        # New method applying this monkey-patch. Note the use of:
        #
        # * "out=False", *NOT* capturing stdout.
        # * "err=True", capturing stderr.
        def _getcapture_new(self, method):
            if method == "no":
                return MultiCapture(
                    out=False, err=True, in_=False, Capture=FDCapture)
            else:
                return _getcapture_old(self, method)

        # Replace the old with the new method.
        CaptureManager._getcapture = _getcapture_new

        # Run py.test with all passed arguments.
        errno = pytest.main(self.pytest_args)
        sys.exit(errno)
To enable this monkey-patch, run py.test as follows:
python setup.py test -a "-s"
Stderr but not stdout will now be captured. Nifty!
Extending the above monkey-patch to tee stdout and stderr is left as an exercise to the reader with a barrel-full of free time.
According to the pytest documentation, version 3 of pytest can temporarily disable capture within a test:
def test_disabling_capturing(capsys):
    print('this output is captured')
    with capsys.disabled():
        print('output not captured, going directly to sys.stdout')
    print('this output is also captured')
pytest --capture=tee-sys was recently added (v5.4.0). You can capture as well as see the output on stdout/err.
Try pytest -s -v test_login.py for more info in the console.
-v is short for --verbose
-s means 'disable all capturing'
You can also enable live-logging by setting the following in pytest.ini or tox.ini in your project root.
[pytest]
log_cli = True
Or specify it directly on cli
pytest -o log_cli=True
pytest test_name.py -v -s
Simple!
I would suggest running pytest -h; there are quite a few interesting options listed there. But for this particular case, -s, a shortcut for --capture=no, is enough:
pytest <test_file.py> -s
If you are using the logging module, you need to turn on logging output in addition to -s for generic stdout. Based on Logging within pytest tests, I am using:
pytest --log-cli-level=DEBUG -s my_directory/
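For instance, with a test file like this (the names are made up), the DEBUG message only shows up on the console when the command above is used, and the print() output additionally needs -s:

import logging

logger = logging.getLogger(__name__)

def test_logs_something():
    logger.debug("debug details about the test")   # needs --log-cli-level=DEBUG
    print("plain stdout from the test")            # needs -s
    assert True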
If you are using the PyCharm IDE, you can run an individual test or all tests using the Run toolbar. The Run tool window displays the output generated by your application, and you can see all the print statements there as part of the test output.
If anyone wants to run tests from code with output:
import pytest

if __name__ == '__main__':
    pytest.main(['--capture=no'])
The capsys, capsysbinary, capfd, and capfdbinary fixtures allow access to stdout/stderr output created
during test execution. Here is an example test function that performs some output related checks:
import sys

def test_print_something_even_if_the_test_pass(capsys):
    text_to_be_printed = "Print me when the test pass."
    print(text_to_be_printed)
    p_t = capsys.readouterr()
    sys.stdout.write(p_t.out)
    # The two lines above will print the text even if the test passes.
Here is the result:
test_print_something_even_if_the_test_pass PASSED [100%]Print me when the test pass.