What does Fabric's `hide("everything")` actually hide?

When I use the hide("everything") context manager and a Fabric task fails, I still get a message. The docs read:
everything: Includes warnings, running, user and output (see above.) Thus, when turning off everything, you will only see a bare minimum of output (just status and debug if it’s on), along with your own print statements.
But this is not strictly true, right? -- I see status, debug, and abort messages.
If I really do want to hide everything, is there a better way than:
with hide("aborts"), hide("everything"):
...

When in doubt, look at the source:
https://github.com/fabric/fabric/blob/master/fabric/context_managers.py#L98
Here is the actual declaration. everything really is pretty much everything: warnings, running, user, output, and exceptions.
https://github.com/fabric/fabric/blob/master/fabric/state.py#L411
It's just a nice wrapper around output. Frankly, I would stick to their built-in context managers, since those are less likely to change, plus you get the added value of more readable, Pythonic code:
@task
def task1():
    with hide('running', 'stdout', 'stderr'):
        run('ls /var/www')
        ...
vs.
@task
def task1():
    output['running'] = False
    output['stdout'] = False
    output['stderr'] = False
    # or just output['everything'] = False
    run('ls /var/www')
    ...
BUT, at the end of the day, it's the same thing.

This is what I have always used:
from fabric.state import output
output['everything'] = False
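To answer the original question about nesting: hide() accepts several output groups at once, so (assuming Fabric 1.x) here is a minimal sketch that silences aborts too in a single call:

from fabric.api import hide, run

# pass 'aborts' alongside 'everything' instead of
# nesting hide("aborts") inside hide("everything")
with hide('everything', 'aborts'):
    run('ls /var/www')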


How to call a callback function after a certain amount of time

I'm using Twisted along with the TxMongo library.
In the following function, I want to invoke cancelTest() 5 seconds later, but the code does not work. How can I make it work?
from collections import deque

from twisted.internet import task
from twisted.python import log

def diverge(self, d):
    if d == 'Wait':
        self.flag = 1
        # self.timeInit = time.time()
        clock = task.Clock()
        for ip in self.ips:
            if self.factory.dictQueue.get(ip) is not None:
                self.factory.dictQueue[ip].append(self)
            else:
                self.factory.dictQueue[ip] = deque([self])
                # self.factory.dictQueue[ip].append(self)
        log.msg("-----------------the queue after wait")
        log.msg(self.factory.dictQueue)
        ############################### HERE, this does not work
        self.dtime = task.deferLater(clock, 5, self.printData)
        ###############################
        self.dtime.addCallback(self.cancelTest)
        self.dtime.addErrback(log.err)
    else:
        self.cancelTimeOut()
        d.addCallback(self.dispatch)
        d.addErrback(log.err)

def sendBackIP(self):
    self.ips.pop(0)
    log.msg("the IPs: %s" % self.ips)
    d = self.factory.service.checkResource(self.ips)
    d.addCallback(self.diverge)  # invoke above function
    log.msg("the result from checkResource: ")
    log.msg()
In general, reactor.callLater() is the function you want. So if the function needs to be called 5 seconds later, your code would look like this:
from twisted.internet import reactor
reactor.callLater(5, cancelTest)
One thing that is strange is that your task.deferLater implementation should also work. However, without seeing more of your code, I can't help you further beyond stating that it's strange :)
References
https://twistedmatrix.com/documents/current/core/howto/defer.html#callbacks
http://twistedmatrix.com/documents/current/api/twisted.internet.base.ReactorBase.html#callLater
You're doing almost everything right; you just didn't get the Clock part right.
twisted.internet.task.Clock is a deterministic implementation of IReactorTime, mostly used in unit/integration testing to get deterministic output from your code; you shouldn't use it in production.
So, what should you use in production? The reactor! In fact, all production reactor implementations implement the IReactorTime interface.
Just use the following import and function call:
from twisted.internet import reactor
# (snip)
self.dtime = task.deferLater(reactor, 5, self.printData)
Just some side notes:
In the text above the snippet you say that you want to invoke cancelTest after five seconds, but in the code you actually invoke printData. Of course, if printData just prints something, doesn't raise, and returns an immediate value, cancelTest will be executed right after it, since it's a chained callback; but if you want to be 100% sure, you should call cancelTest from deferLater, not printData.
Also, I don't understand whether this is a kind of "timeout"; please be advised that such a callback will be triggered in all situations, even if the tests take less than five seconds. If you need a cancelable task, you should use reactor.callLater directly; that will NOT return a Deferred you can use, but it will let you cancel the scheduled call.
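For illustration, here is a minimal, self-contained sketch of that pattern; the names on_timeout and work_finished are made up for the example:

from twisted.internet import reactor

def on_timeout():
    print("timed out")

# callLater returns an IDelayedCall handle, not a Deferred
timeout_call = reactor.callLater(5, on_timeout)

def work_finished():
    # cancel the pending timeout because the work completed in time
    if timeout_call.active():
        timeout_call.cancel()
    print("finished before the timeout")

reactor.callLater(2, work_finished)  # simulate work finishing early
reactor.callLater(3, reactor.stop)   # shut the reactor down for the demo
reactor.run()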

Retry .complete() for WrapperTask

I am using Luigi to run several tasks, and then I need to bulk-transfer the output to a standardized file location. I've written a WrapperTask with an overridden complete() method to do this:
import luigi
from luigi.task import flatten

class TaskX(luigi.WrapperTask):
    date = luigi.DateParameter()
    client = luigi.s3.S3Client()

    def requires(self):
        yield TaskA(date=self.date)
        yield TaskB(date=self.date)

    def complete(self):
        tasks_complete = all(r.complete() for r in flatten(self.requires()))
        # at the end of everything, batch copy the files
        if tasks_complete:
            self.client.copy('current-old', 'current')
            return True
        else:
            return False

if __name__ == "__main__":
    luigi.run()
but I'm having trouble getting the conditional part of complete() to be called when the process is actually finished.
I assume this is because of the asynchronous behavior pointed out by others, but I'm not sure how to fix it.
I've tried running Luigi with these command-line parameters:
$ PYTHONPATH="" luigi --module x TaskX --worker-retry-external-task
But that doesn't seem to be working correctly. Is this the right approach for this type of task?
Also, I'm curious: has anyone had experience with the --worker-retry-external-task flag? I'm having some trouble understanding it.
In the source code,

def _is_external(task):
    return task.run is None or task.run == NotImplemented

is called to determine whether or not a Luigi task has a run() method, which a WrapperTask does not. Thus, I'd expect the --retry-external-task flag to retry complete() for it until it's complete, thereby performing the action. However, just playing around in the interpreter leads me to believe the following:
>>> import luigi_newsletter_process
>>> task = luigi_newsletter_process.Newsletter()
>>> task.run
<bound method Newsletter.run of Newsletter(date=2016-06-22, use_s3=True)>
>>> task.run()
>>> task.run == None
False
>>> task.run() == None
True
This code snippet is not doing what it thinks it is: a bound method compares equal to neither None nor NotImplemented, as the sketch below shows. Am I off-base here?
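To make the distinction concrete, here is a minimal standalone sketch (no Luigi required) of why the check never sees None or NotImplemented for a task that defines run():

class Demo(object):
    def run(self):
        return None  # like a run() that does its work and returns nothing

d = Demo()
print(d.run is None)            # False: d.run is a bound method object
print(d.run == NotImplemented)  # False: so _is_external(d) returns False
print(d.run() is None)          # True: only *calling* it yields None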
I still think that overriding .complete() should in theory have been able to do this, and I'm still not sure why it doesn't; but if you're just looking for a way to bulk-transfer files after running a process, a workable solution is to have the transfer take place within a .run() method:
def run(self):
    logger.info('transferring into current directory')
    self.client.copy('current-old', 'current')

How to test a while-loop (once) with nosetests (Python 2.7)

I'm pretty new to this whole "programming thing", but at age 34 I thought that I'd like to learn the basics.
I unfortunately don't know any Python programmers. I'm learning programming out of personal interest (and more and more for the fun of it), but my "social habitat" is not "where the programmers roam" ;).
I'm almost finished with Zed Shaw's "Learn Python the Hard Way", and for the first time I can't figure out a solution to a problem. In the last two days I didn't even stumble upon useful hints about where to look, despite repeatedly rephrasing (and searching for) my question.
So Stack Overflow seems to be the right place.
Btw: I often lack the correct vocabulary, so please don't hesitate to correct me :). This may be one reason why I can't find an answer.
I use Python 2.7 and nosetests.
How far I've solved the problem (I think), in the steps I solved it:
Function 1:

def inp_1():
    s = raw_input(">>> ")
    return s
All tests import the following to be able to do the things below:
from nose.tools import *
import sys
from StringIO import StringIO
from mock import *
import __builtin__
# and of course the module with the functions
Here is the test for inp_1:
import __builtin__
from mock import *

def test_inp_1():
    __builtin__.raw_input = Mock(return_value="foo")
    assert_equal(inp_1(), 'foo')
This function/test is ok.
Quite similar is the following function 2:
def inp_2():
    s = raw_input(">>> ")
    if s == '1':
        return s
    else:
        print "wrong"
Test:
def test_inp_2():
    __builtin__.raw_input = Mock(return_value="1")
    assert_equal(inp_2(), '1')
    __builtin__.raw_input = Mock(return_value="foo")
    out = StringIO()
    sys.stdout = out
    inp_2()
    output = out.getvalue().strip()
    assert_equal(output, 'wrong')
This function/test is also ok.
Please don't assume that I really know what is happening "behind the scenes" when I use all the stuff above. I have some layman's explanations of how this all works and why I get the results I want, but I also have the feeling that these explanations may not be entirely true. It wouldn't be the first time that how I think something works turns out to be different once I've learned more. Especially everything with "__" confuses me, and I'm scared to use it since I don't really understand what's going on. Anyway, now I "just" want to add a while-loop that asks for input until it is correct:
def inp_3():
    while True:
        s = raw_input(">>> ")
        if s == '1':
            return s
        else:
            print "wrong"
I thought the test for inp_3 would be the same as for inp_2; at least I am not getting error messages. But the output is the following:
$ nosetests
......
# <- Here I press ENTER to provoke a reaction
# Nothing is happening though.
^C # <- Keyboard interrupt (is this the correct word for it?)
----------------------------------------------------------------------
Ran 7 tests in 5.464s
OK
$
The other 7 tests are something else (and OK).
The test for inp_3 would be test no. 8.
The time is just the time that passed until I pressed Ctrl-C.
I don't understand why I don't get error or "test failed" messages, but just an "OK".
So besides the fact that you may be able to point out bad syntax and other things that can be improved (I really would appreciate it if you did), my question is:
How can I test and abort while-loops with nosetest?
So, the problem here is what happens when you call inp_3 in the test for the second time, while mocking raw_input with Mock(return_value="foo"). Your inp_3 function runs an infinite loop (while True), and you're not interrupting it in any way except via the if s == '1' condition. With Mock(return_value="foo") that condition is never satisfied, so your loop keeps running until you interrupt it by external means (Ctrl-C in your example). If that is intentional behavior, then How to limit execution time of a function call in Python will help you limit the execution time of inp_3 in the test (a sketch of that approach follows the code below). However, for input handling like in your example, developers often implement a limit on the number of input attempts the user gets. You can do that with a variable that counts attempts; when it reaches the maximum, the loop should be stopped:
def inp_3():
    max_attempts = 5
    attempts = 0
    while True:
        s = raw_input(">>> ")
        attempts += 1  # this is equal to "attempts = attempts + 1"
        if s == '1':
            return s
        else:
            print "wrong"
            if attempts == max_attempts:
                print "Max attempts used, stopping."
                break  # this is used to stop loop execution and go
                       # to the next instruction after the loop block
    print "Stopped."
Also, to learn Python I can recommend the book "Learning Python" by Mark Lutz. It explains the basics of Python very well.
UPDATE:
I couldn't find a way to mock Python's True (or __builtin__.True), and yes, that sounds a bit crazy; it looks like Python doesn't (and won't) allow me to do this. However, to achieve exactly what you want, running the infinite loop once, you can use a little hack.
Define a function that returns True:

def true_func():
    return True

and use it in the while loop:

while true_func():

Then mock it in the test with logic like this:
def true_once():
    yield True
    yield False

class MockTrueFunc(object):
    def __init__(self):
        self.gen = true_once()

    def __call__(self):
        return self.gen.next()
Then in test:
true_func = MockTrueFunc()
With this, your loop will run only once. However, this construction uses a few advanced Python tricks, like generators, "__" methods, etc., so use it carefully.
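For illustration, calling the mock by hand shows the sequence of values the loop condition would see (a third call would raise StopIteration):

true_func = MockTrueFunc()
print(true_func())  # True:  first call, so the loop body runs once
print(true_func())  # False: second call, so the loop stops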
But anyway, infinite loops are generally considered bad design. Better not to get used to them :).
It's always important to remind me that infinite loops are bad, so thank you for that, and even more for the short example of how to do it better. I will do that whenever possible.
However, in the actual program the infinite loop is how I'd like to do it this time. The code here is just the simplified problem.
I very much appreciate your idea with the modified "true function". I never would have thought of that, and so I learned a new method for tackling programming problems :).
It is still not the way I would like to do it this time, but it was the crucial clue I needed to solve my problem with existing methods. I never would have thought of returning a different value the second time I call the same method. It's so simple and brilliant it astonishes me :).
The mock module has a feature that allows a different value to be returned each time the mocked method is called: side_effect.

side_effect can also be set to […] an iterable. [Use it when] your mock is going to be called several times, and you want each call to return a different value. When you set side_effect to an iterable, every call to the mock returns the next value from the iterable.
The while-loop HAS an "exit" (is this the correct term for it?): it just needs '1' as input. I will use this to exit the loop.
def test_inp_3():
    # Test that correct input is returned.
    __builtin__.raw_input = Mock(return_value="1")
    assert_equal(inp_3(), '1')
    # Test that the output is correct if the input is wrong two times.
    # The third time the input is correct, to exit the loop.
    __builtin__.raw_input = Mock(side_effect=['foo', 'bar', '1'])
    out = StringIO()
    sys.stdout = out
    inp_3()
    output = out.getvalue().strip()
    # Make sure to compare as many times as the loop is "used".
    assert_equal(output, 'wrong\nwrong')
Now the test runs and returns "OK", or an error if, for example, the first input already exits the loop.
Thank you very much again for the help. That made my day :)

Is it possible to remove a breakpoint set with ipdb.set_trace()?

I used ipdb.set_trace() somewhere in my Python code. Is it possible to ignore this breakpoint using an ipdb command?
clear tells me that it cleared all breakpoints, but ipdb stops again when it stumbles upon the line with ipdb.set_trace().
disable 1 tells me: No breakpoint numbered 1
ignore 1 says: Breakpoint index '1' is not valid
To clarify: of course I could simply remove the breakpoint from my source code, but that would require quitting the debugger and starting it again. Often it takes a lot of work to get somewhere, and restarting the debugger makes life more difficult. Also, if there is a huge loop and you want to inspect objects in the loop, the easiest way is to place a breakpoint in the loop directly after the object. How could I then skip the loop (and all its thousands of calls to set_trace()) and step through the code after the loop using next?
Well, you CAN take advantage of the fact that everything in Python is an object. While in the debugger, you can do something like this:
def f(): pass
ipdb.set_trace = f
set_trace will still be called, but it won't do anything.
Of course, it's somewhat permanent, but you can just do

reload(ipdb)

and you'll get the original behaviour back.
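Alternatively, here is a minimal sketch that avoids the reload by keeping a reference to the original function (the name _original_set_trace is made up for the example):

import ipdb

_original_set_trace = ipdb.set_trace  # keep a reference to the real function
ipdb.set_trace = lambda: None         # every breakpoint becomes a no-op

# ... run past the noisy loop ...

ipdb.set_trace = _original_set_trace  # restore breakpoints by hand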
(Why would you do this? When you accidentally put a breakpoint in an often-called function that is usually called under a try/except. Once you realize you're stopping 1000 times in this function, you try Ctrl-C, but that gets caught by the try/except and you're back in ipdb again.) So, if you're in low-level code, make sure your set_trace calls have some context:
if myvar in ['some', 'sentinel', 'values']:
    ipdb.set_trace()
etc.
After learning from Corley's answer:

ipdb.set_trace = lambda: None

works for me.
Based on Corley's and GLaDOS's answers, you can use this trick to set_trace for several loops without overwriting ipdb.set_trace():
import ipdb

dbg1 = ipdb.set_trace  # BREAKPOINT
for i in range(10):
    my_var2 = 10 / 3
    dbg1()  # BREAKPOINT (fires only on the first iteration)
    dbg1 = lambda: None
    print(my_var2)

dbg2 = ipdb.set_trace  # BREAKPOINT
for i in range(10):
    my_var2 = 20 / 3
    dbg2()  # BREAKPOINT (fires only on the first iteration)
    dbg2 = lambda: None
    print(my_var2)
Works for me like a charm.
Running the program should also tell you exactly where you've set your ipdb.set_trace() when it's hit (otherwise, try the where or bt commands). You can then remove that line from the file and restart the program.
Otherwise, you might find this useful, if you feel more experimental.

Is it bad form to exit() from a function?

Is it bad form to exit() from within a function?
def respond_OK():
    sys.stdout.write('action=OK\n\n')
    sys.stdout.flush()  # redundant when followed by exit()
    sys.exit(0)
rather than setting an exit code and exit()ing from the __main__ namespace?
def respond_OK():
    global exit_status
    sys.stdout.write('action=OK\n\n')
    sys.stdout.flush()
    exit_status = 0

sys.exit(exit_status)
From a functional perspective the difference is negligible; I just wondered what the consensus on form is. If you found the former in someone else's code, would you look at it twice?
I would prefer to see an exception raised and handled from a main entry point, with the exception type translated into the exit code. Subclassing exceptions is so simple in Python it's almost fun.
As posted in this answer's comments: using sys.exit also means that the point of termination needs to know the actual status code, as opposed to the kind of error it encountered. That could be solved by a set of constants, of course. Using exceptions has other advantages, though: if one method fails, you could try another without re-entry, or print some post-mortem debugging info.
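Here is a minimal sketch of that pattern; the class names and the exit_code attribute are made up for illustration:

import sys

class AppError(Exception):
    exit_code = 1  # default exit status for this family of errors

class ConfigError(AppError):
    exit_code = 2

def main():
    raise ConfigError("missing required setting")

if __name__ == '__main__':
    try:
        main()
    except AppError as e:
        sys.stderr.write("%s\n" % e)
        sys.exit(e.exit_code)  # the exception type carries the status code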
It makes no difference in terms of functionality, but it will likely make your code harder to follow, unless you take appropriate steps, e.g. commenting each of the calls from the main namespace which could lead to an exit.
Update: note @mgilson's answer regarding the effect of catching an exception (it is possible to catch the exception that sys.exit raises and thus prevent the exit). You could make your code even more confusing that way.
Update 2: note @sapht's suggestion to use an exception to orchestrate an exit. This is good advice if you really want to do a non-local exit; much better than setting a global.
There are a few cases where it's reasonably idiomatic.
If the user gives you bad command-line arguments, instead of this:
def usage(arg0):
    print ... % (arg0,)
    return 2

if __name__ == '__main__':
    if ...:
        sys.exit(usage(sys.argv[0]))
You often see this:
def usage():
    print ... % (sys.argv[0],)
    sys.exit(2)

if __name__ == '__main__':
    if ...:
        usage()
The only other common case I can think of is where initializing some library (via ctypes or a low-level C extension module) fails unexpectedly and leaves you in a state you can't reason about, so you just want to get out as soon as possible (e.g., to reduce the chance of segfaulting or printing garbage). For example:
if libfoo.initialize() != 0:
    sys.exit(1)
Some might object to that because sys.exit doesn't actually bail out of the interpreter as soon as possible (it raises an exception rather than terminating immediately), so it's a false sense of safety. But you still see it reasonably often.
