I have a test that may crash if the server is currently unavailable.
Can you tell me how I can immediately restart the same test if it crashes (without waiting for the other tests to complete), and count it as really failed only if it fails a second time?
I saw a solution using the pytest-rerunfailures library, but it has to be installed separately, and that does not quite suit me.
I also have a solution using a try-except construction, but it seems to me that there should be a more convenient one.
With try-except, my solution looks like this:
def test_first():
    try:
        assert 1 == 0  # the main part of the test
    except Exception:
        assert 1 == 0  # in case of failure, just repeat the same code
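If installing a plugin is out, one slightly tidier variant of the try-except idea is a small retry decorator, so the test body isn't written twice. This is only a sketch (retry_once is my name, not a pytest feature): the rerun happens immediately, inside the same test, and only a second failure propagates and fails the test.

import functools

def retry_once(test_func):
    # Rerun the test body immediately if the first attempt raises;
    # only a second failure propagates and marks the test as failed.
    @functools.wraps(test_func)
    def wrapper(*args, **kwargs):
        try:
            return test_func(*args, **kwargs)
        except Exception:
            return test_func(*args, **kwargs)
    return wrapper

@retry_once
def test_first():
    assert 1 == 0  # the main part of the test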
With Selenium, Behave and Python: is it possible to force and reset the status of a scenario to skipped or untested if one of its steps fails?
I know I can skip a scenario with the @skip tag, but I need to actually run the scenario and check it: if one step fails, I would like the report to show the scenario as Skipped or Untested (or something else), but not as Failed or Passed.
example:
Feature: test
  Scenario: test something
    Given the situation A
    When the user presses the button
    Then the page show something
    And something else should happen
@then('the page show something')
def step_page_show_something(context):
    try:
        do_something()  # here the step can fail
    except Exception:
        pass
        # if I use 'pass' the step will be reported as Passed
        # if I raise an exception it will be reported as Failed
        # is there any way to set it as Skipped or Untested or
        # something else, but not Failed and not Passed?
Thanks for all the tips.
The only solution that I found was exactly the one in the post Skip a behave step in the step implementation.
In my step implementation:
def step_page_show_something(context):
    try:
        do_something()  # here the step can fail
    except Exception:
        context.scenario.skip()
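If you also want a reason to show up in the report, behave's Scenario.skip takes an optional reason argument (worth checking against your behave version):

context.scenario.skip(reason="step failed, skipping the scenario")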
I'm pretty new to this whole "programming thing" but at age 34 I thought that I'd like to learn the basics.
I unfortunately don't know any python programmers. I'm learning programming due to personal interest (and more and more for the fun of it) but my "social habitat" is not "where the programmers roam" ;) .
I'm almost finished with Zed Shaw's "Learn Python the Hard Way", and for the first time I can't figure out a solution to a problem. For the last two days I didn't even stumble upon useful hints about where to look, even though I repeatedly rephrased (and searched for) my question.
So stackoverflow seems to be the right place.
Btw.: I often lack the correct vocabulary, so please don't hesitate to correct me :) . This may be one reason why I can't find an answer.
I use Python 2.7 and nosetests.
Here is how far I got (I think), in the steps I took:
Function 1:
def inp_1():
    s = raw_input(">>> ")
    return s
All tests import the following to be able to do the things below:
from nose.tools import *
import sys
from StringIO import StringIO
from mock import *
import __builtin__
# and of course the module with the functions
Here is the test for inp_1:
import __builtin__
from mock import *
def test_inp_1():
    __builtin__.raw_input = Mock(return_value="foo")
    assert_equal(inp_1(), 'foo')
This function/test is ok.
Quite similar is the following function 2:
def inp_2():
    s = raw_input(">>> ")
    if s == '1':
        return s
    else:
        print "wrong"
Test:
def test_inp_2():
    __builtin__.raw_input = Mock(return_value="1")
    assert_equal(inp_2(), '1')
    __builtin__.raw_input = Mock(return_value="foo")
    out = StringIO()
    sys.stdout = out
    inp_2()
    output = out.getvalue().strip()
    assert_equal(output, 'wrong')
This function/test is also ok.
Please don't assume that I really know what is happening "behind the scenes" when I use all the stuff above. I have some layman explanations of how this all works and why I get the results I want, but I also have the feeling that these explanations may not be entirely true. It wouldn't be the first time that my idea of how something works turns out to be different once I've learned more. Especially everything with "__" confuses me, and I'm scared to use it since I don't really understand what's going on.
Anyway, now I "just" want to add a while loop to ask for input until it is correct:
def inp_3():
    while True:
        s = raw_input(">>> ")
        if s == '1':
            return s
        else:
            print "wrong"
I thought the test for inp_3 would be the same as for inp_2. At least I'm not getting error messages. But the output is the following:
$ nosetests
......
# <- Here I press ENTER to provoke a reaction
# Nothing is happening though.
^C # <- Keyboard interrupt (is this the correct word for it?)
----------------------------------------------------------------------
Ran 7 tests in 5.464s
OK
$
The other 7 tests are something else (and ok).
The test for inp_3 would be test no. 8.
The time is just how long it took me to press CTRL-C.
I don't understand why I get neither errors nor "test failed" messages, but just an "OK".
So besides the fact that you may be able to point out bad syntax and other things that can be improved (I really would appreciate it if you would do this), my question is:
How can I test and abort while-loops with nosetest?
So, the problem here is when you call inp_3 in the test for the second time, while mocking raw_input with Mock(return_value="foo"). Your inp_3 function runs an infinite loop (while True), and you're not interrupting it in any way except via the if s == '1' condition. With Mock(return_value="foo") that condition is never satisfied, so your loop keeps running until you interrupt it by external means (Ctrl + C in your example). If that is intentional behavior, then "How to limit execution time of a function call in Python" will help you limit the execution time of inp_3 in the test. However, for input like in your example, developers often implement a limit on how many input attempts the user has. You can do this by using a variable to count attempts; when it reaches the maximum, the loop should be stopped.
def inp_3():
    max_attempts = 5
    attempts = 0
    while True:
        s = raw_input(">>> ")
        attempts += 1  # this is equal to "attempts = attempts + 1"
        if s == '1':
            return s
        else:
            print "wrong"
            if attempts == max_attempts:
                print "Max attempts used, stopping."
                break  # this is used to stop loop execution and go
                       # to the next instruction after the loop block
    print "Stopped."
Also, to learn Python I can recommend the book "Learning Python" by Mark Lutz. It explains the basics of Python very well.
UPDATE:
I couldn't find a way to mock Python's True (or __builtin__.True) (and yes, that sounds a bit crazy); it looks like Python doesn't (and won't) allow me to do this. However, to achieve exactly what you want, running the infinite loop only once, you can use a little hack.
Define a function that returns True:
def true_func():
    return True
use it in the while loop:
while true_func():
and then mock it in the test with logic like this:
def true_once():
    yield True
    yield False

class MockTrueFunc(object):
    def __init__(self):
        self.gen = true_once()

    def __call__(self):
        return self.gen.next()
Then, in the test:
true_func = MockTrueFunc()
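One caveat (the module name below is an assumption, since your layout isn't shown): the rebinding has to happen in the module where inp_3 actually looks up true_func, e.g.:

import mymodule  # hypothetical module containing inp_3 and true_func
mymodule.true_func = MockTrueFunc()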
With this, your loop will run only once. However, this construction uses a few advanced Python tricks, like generators, "__" methods etc., so use it carefully.
But anyway, infinite loops are generally considered bad design. Better not to get used to them :).
It's always good to be reminded that infinite loops are bad, so thank you for that, and even more so for the short example of how to do it better. I will do that whenever possible.
However, in the actual program the infinite loop is how I'd like to do it this time. The code here is just the simplified problem.
I very much appreciate your idea with the modified "true function". I never would have thought of that, and thus I learned a new "method" for how to tackle programming problems :) .
It is still not the way I would like to do it this time, but it was the all-important clue I needed to solve my problem with existing methods. I never would have thought about returning a different value the 2nd time I call the same method. It's so simple and brilliant it's astonishing :).
The mock module has a feature that allows a different value to be returned each time the mocked method is called: side_effect.
side_effect can also be set to […] an iterable. [This is useful when] your mock is going to be called several times, and you want each call to return a different value. When you set side_effect to an iterable, every call to the mock returns the next value from the iterable:
The while loop HAS an "exit" (is this the correct term for it?): it just needs '1' as input. I will use this to exit the loop.
def test_inp_3():
    # Test if the return value is correct when the input is correct.
    __builtin__.raw_input = Mock(return_value="1")
    assert_equal(inp_3(), '1')
    # Test if the output is correct when the input is wrong two times.
    # The third time the input is correct, to exit the loop.
    __builtin__.raw_input = Mock(side_effect=['foo', 'bar', '1'])
    out = StringIO()
    sys.stdout = out
    inp_3()
    output = out.getvalue().strip()
    # Make sure to compare as many times as the loop is "used".
    assert_equal(output, 'wrong\nwrong')
Now the test runs and returns "ok", or an error, e.g. if the first input already exits the loop.
Thank you very much again for the help. That made my day :)
Is it bad form to exit() from within a function?
def respond_OK():
    sys.stdout.write('action=OK\n\n')
    sys.stdout.flush()  # redundant when followed by exit()
    sys.exit(0)
Rather than setting an exit code and exit()ing from the __main__ namespace?
def respond_OK():
    global exit_status
    sys.stdout.write('action=OK\n\n')
    sys.stdout.flush()
    exit_status = 0

# ... later, from the __main__ namespace:
sys.exit(exit_status)
The difference is negligible from a functional perspective; I just wondered what the consensus on form is. If you found the former in someone else's code, would you look at it twice?
I would prefer to see an exception raised and handled from a main entry point, the type of which is translated into the exit code. Subclassing exceptions is so simple in python it's almost fun.
As posted in this answer's comments: using sys.exit also means that the point of termination needs to know the actual status code, as opposed to the kind of error it encountered. That could be solved by a set of constants, of course. Using exceptions has other advantages, though: if one method fails, you could try another without re-entry, or print some post-mortem debugging info.
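A minimal sketch of that pattern (the class names and exit codes here are illustrative, not from the question's code): errors are raised as exceptions wherever they occur, and only the main entry point translates them into exit codes:

import sys

class FatalError(Exception):
    """Base class for errors that should terminate the program."""
    exit_code = 1

class ServerUnavailable(FatalError):
    exit_code = 2

def do_work():
    raise ServerUnavailable("could not reach the server")

def main():
    try:
        do_work()
    except FatalError as e:
        sys.stderr.write("error: %s\n" % e)
        return e.exit_code
    return 0

if __name__ == '__main__':
    sys.exit(main())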
It makes no difference in terms of functionality, but it will likely make your code harder to follow, unless you take appropriate steps, e.g. commenting each of the calls from the main namespace that could lead to an exit.
Update: Note @mgilson's answer re the effect of catching an exception [it is possible to catch the exception that sys.exit raises, and thus prevent the exit]. You could make your code even more confusing that way.
Update 2: Note @sapht's suggestion to use an exception to orchestrate an exit. This is good advice if you really want to do a non-local exit. Much better than setting a global.
There are a few cases where it's reasonably idiomatic.
If the user gives you bad command-line arguments, instead of this:
def usage(arg0):
    print ... % (arg0,)
    return 2

if __name__ == '__main__':
    if ...:
        sys.exit(usage(sys.argv[0]))
You often see this:
def usage():
    print ... % (sys.argv[0],)
    sys.exit(2)

if __name__ == '__main__':
    if ...:
        usage()
The only other common case I can think of is where initializing some library (via ctypes or a low-level C extension module) fails unexpectedly and leaves you in a state you can't reason about, so you just want to get out as soon as possible (e.g., to reduce the chance of segfaulting or printing garbage). For example:
if libfoo.initialize() != 0:
    sys.exit(1)
Some might object to that because sys.exit doesn't actually bail out of the interpreter as soon as possible (it throws and catches an exception), so it's a false sense of safety. But you still see it reasonably often.
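To illustrate that sys.exit just raises an exception (SystemExit) rather than terminating immediately, note that a stray except clause can swallow it:

import sys

try:
    sys.exit(1)
except SystemExit:
    print "caught SystemExit; the interpreter is still running"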
I'm on a project using txRDQ and am testing (using trial) a case where a queued job may fail; trial marks the test case as failed whenever it hits a failure in an errback.
The errback is normal behaviour, since a queued job may fail to launch. How can I test this case using trial without failing the test?
Here's an example of the test case:
from twisted.trial.unittest import TestCase
from txrdq.rdq import ResizableDispatchQueue
from twisted.python.failure import Failure
class myTestCase(TestCase):
    def aFailingJob(self, a):
        return Failure("This is a failure")

    def setUp(self):
        self.queue = ResizableDispatchQueue(self.aFailingJob, 1)

    def tearDown(self):
        pass

    def test_txrdq(self):
        self.queue.put("Some argument", 1)
It seems likely that the exception is being logged, since the error handler just raises it. I'm not exactly sure what the error handling code in txrdq looks like, so this is just a guess, but I think it's a pretty good one based on your observations.
Trial fails any unit test that logs an exception, unless the test cleans that exception up after it's logged. Use TestCase.flushLoggedErrors(exceptionType) to deal with this:
def test_txrdq(self):
    self.queue.put("Some argument", 1)
    self.assertEqual(1, len(self.flushLoggedErrors(SomeException)))
Also notice that you should never do Failure("string"). This is analogous to raise "string", and string exceptions have been deprecated in Python for a very long time. Always construct a Failure with an exception instance:
class JobError(Exception):
    pass

def aFailingJob(self, a):
    return Failure(JobError("This is a failure"))
This makes JobError the exception type you'd pass to flushLoggedErrors.
Make sure that you understand whether queue processing is synchronous or asynchronous. If it is synchronous, your test (with the flushLoggedErrors call added) is fine. If it is asynchronous, your error handler may not have run by the time your test method returns. In that case, you're not going to be testing anything useful, and the errors might be logged after the call to flush them (making the flush useless).
Finally, if you're not writing unit tests for txrdq itself, then you might not want to write tests like this. You can probably unit test txrdq-using code without using an actual txrdq. A normal Queue object (or perhaps another, more specialized test double) will let you more precisely target the units in your application, making your tests faster, more reliable, and easier to debug.
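For illustration, a minimal sketch of such a test double (assuming the code under test only ever calls put(); FakeDispatchQueue is a made-up name):

class FakeDispatchQueue(object):
    """Stands in for ResizableDispatchQueue in unit tests: it records
    what was put on the queue instead of dispatching jobs, so the test
    stays synchronous and deterministic."""
    def __init__(self):
        self.items = []

    def put(self, arg, priority=0):
        self.items.append((arg, priority))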
This issue has now (finally!) been solved, by L. Daniel Burr. There's a new version (0.2.14) of txRDQ on PyPI.
By the way, in your test you should add from txrdq.job import Job, and then do something like this:
d = self.queue.put("Some argument", 1)
return self.assertFailure(d, Job)
Trial will make sure that d fails with a Job instance. There are a couple of new tests at the bottom of txrdq/test/test_rdq.py that illustrate this kind of assertion.
I'm sorry this problem caused so much head scratching for you - it was entirely my fault.
Sorry to see you're still having a problem. I don't know what's going on here, but I have been playing with it for over an hour trying to...
The queue.put method returns a Deferred. You can attach an errback to it to do the flush as @exarkun describes, and then return the Deferred from the test. I expected that to fix things (having read @exarkun's reply and gotten a comment from @idnar in #twisted). But it doesn't help.
Here's a bit of the recent IRC conversation, mentioning what I think could be happening: https://gist.github.com/2177560
As far as I can see, txRDQ is doing the right thing. The job fails and the deferred that is returned by queue.put is errbacked.
If you look in _trial_temp/test.log after you run the test, what do you see? I see an error that says "Unhandled error in Deferred", and the error is a Failure with a Job in it. So it seems likely to me that the error is somewhere in txRDQ: there is a deferred that's failing, and it's passing on the failure just fine to whoever needs it, but also returning the failure, causing trial to complain. But I don't know where that is. I put a print into the __init__ of the Deferred class just out of curiosity, to see how many deferreds were made during the running of the test. The answer: 12!
Sorry not to have better news. If you want to press on, go look at every deferred made by the txRDQ code. Is one of them failing with an errback that returns the failure? I don't see it, and I've put print statements all over the place to check that things are right. I guess I must be missing something.
Thanks, and thanks to you too, @exarkun.
I have a python script that invokes the following command:
# make
After make, it also invokes three other programs. Is there a standard way of telling whether the make command was successful or not? Right now the script continues to run whether make succeeds or fails. I want to raise an error when make fails.
Can anyone point me in the right direction?
The return value of the poll() and wait() methods is the return code of the process. Check to see if it's non-zero.
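For example, a minimal sketch with subprocess.Popen (assuming make is on the PATH):

import subprocess

proc = subprocess.Popen(["make"])
returncode = proc.wait()  # blocks until make finishes
if returncode != 0:
    raise RuntimeError("make failed with exit code %d" % returncode)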
Look at the exit code of make. If you are using the python module commands, then you can get the status code easily. 0 means success, non-zero means some problem.
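With the commands module (Python 2 only; it has long been superseded by subprocess), that check might look like this:

import commands

# getstatusoutput returns (status, output); a non-zero status means failure
status, output = commands.getstatusoutput("make")
if status != 0:
    raise RuntimeError("make failed:\n%s" % output)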
import os

if os.system("make") != 0:
    print "make failed"
else:
    print "make succeeded"
Use subprocess.check_call(). That way you don't have to check the return code yourself: a CalledProcessError will be raised if the return code is non-zero.
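A minimal sketch (assuming make is on the PATH):

import subprocess

# check_call raises CalledProcessError if make exits with a non-zero
# status; execution continues past this line only when make succeeds.
subprocess.check_call(["make"])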