I have a function in my GUI that takes a while to complete since it communicates with another program. Since I don't want to wait for it to finish every time before resuming work with the GUI, I want to start this function as a thread.
I tried doing it like this:
threading.Thread(target=self.Sweep, args=Input).start()
but it's not doing anything: no exception, no results. If I call the function normally, it works fine:
self.Sweep(Input)
What am I doing wrong here?
I don't know if it's enough to solve the problem, but at the very least you should make your args
args=(Input,)
in order to match it with the "direct" call.
The args parameter for Thread() is expected to be a tuple (or list) containing all the arguments for the target function. Since you have a single argument, Input, you must wrap it in a one-element tuple.
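For clarity, the corrected call from the question would look like this (a minimal sketch; self.Sweep and Input are the names from the question, so this belongs inside the same method):

import threading

# args must be a sequence; the trailing comma makes (Input,) a one-element tuple
threading.Thread(target=self.Sweep, args=(Input,)).start()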
The threading module is modelled on Java's threading API. I think you are trying to use the lower-level thread module. Try this:
thread.start_new_thread(someFunc, ())
You can get some help from the thread.start_new_thread documentation.
Looks to me like glglgl is right.
You should pass a tuple or list for args, e.g. args=[1] and not args=1.
What happens is: you start your thread, and it dies immediately, because the thread tries to unpack args as a sequence; since you passed something that isn't a sequence, a TypeError is raised.
I am suspicious about your logging - you should have seen this exception.
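A minimal standalone sketch of that failure mode (sweep and the value 42 are made up for illustration):

import threading

def sweep(value):
    print "sweeping", value

# Wrong: args is not a sequence, so the thread dies with a TypeError
# when it tries to unpack args, and sweep never runs:
threading.Thread(target=sweep, args=42).start()

# Right: a one-element tuple.
threading.Thread(target=sweep, args=(42,)).start()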
Is it possible to return from a function and continue executing the code just below it? I know that may sound vague, but here is an example:
def sayhi():
    print("hi")
    continue_function()  # makes this function continue below instead of returning

# the code continues here with execution and will print "hey"
print("hey")
sayhi()
When executing this, the code should do the following:
it prints "hey"
it calls the sayhi() function
it prints "hi"
it calls a function to make it continue after the function (in theory, similar behaviour could be achieved by using decorators)
it prints "hey" again
it calls sayhi() again
etc
I am fully aware that similar behaviour can be achieved by just using for loops, but for the project I am working on this functionality is required and not achievable with looping.
Some solutions I have thought of (though I have no clue how I could implement them) are:
somehow clearing the stack python uses to return from one function to another
changing return values
changing Python itself (just to be clear: it would solve the problem, but it is something I do not want to do, because the project must be usable on non-altered versions of Python)
using some c extension to change python's behaviour from within python itself
Repetition without loops can be done with recursion:
def sayhi():
    print("hey")
    print("hi")
    sayhi()

sayhi()
I assume you have some terminating condition to insert. If not, this code will give a RecursionError.
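For instance, a minimal sketch with a counter as a made-up terminating condition:

def sayhi(remaining):
    if remaining == 0:  # terminating condition
        return
    print("hey")
    print("hi")
    sayhi(remaining - 1)

sayhi(3)  # prints hey/hi three times, then unwinds normally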
In a Python script I make a gobject call, and I need to know when it has finished. Are there any ways to check this? Are there functions for it?
My code is:
gobject.idle_add(main.process)

class main:
    def process():
        # <-- needs some time to finish -->
        next.call.if.finished()
I want to start another object once the first one has finished.
I looked through the gobject reference, but I didn't find anything suitable.
Thanks
I am pretty sure you can do something like this, but in your case, as I understand it, things are simpler: you do not need the result from process(), so you just need to use something like
main.event.wait()  # then: next.call.if.finished()
I already had to use this very same approach from that link, including needing the result, which is a plus.
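A minimal sketch of that Event idea (Main, event, and the placeholder work inside process are assumptions for illustration, not part of any GObject API):

import threading
import gobject

class Main(object):
    def __init__(self):
        self.event = threading.Event()

    def process(self):
        # ... the long-running work from the question goes here ...
        self.event.set()   # signal that process() has finished
        return False       # tell idle_add not to reschedule us

main = Main()
gobject.idle_add(main.process)  # assumes a GObject main loop is running
# From another thread (not the one running the main loop),
# block until process() signals completion:
main.event.wait()

Note that wait() blocks its caller, so it must not run in the same thread as the GObject main loop.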
An alternative is to start the idle function with a list of the objects you want to process, so instead of waiting for one object to finish and then starting another one, you can let the idle function re-run itself:
def process():
    # process one object
    if any_objects_left:
        # set up the next object
        return True   # keep the idle callback installed
    return False      # remove the idle callback
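A slightly fuller sketch of that pattern (the pending list and the print are stand-ins for real objects and real work; gobject.idle_add keeps calling the function as long as it returns True):

import gobject

pending = ["a", "b", "c"]  # stand-in work items

def process():
    item = pending.pop(0)
    print "processing", item   # stand-in for the real per-object work
    return bool(pending)       # True: run again on next idle; False: remove the callback

gobject.idle_add(process)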
I am stuck on a problem.
It goes like this:
A function normally returns a single result. What I want is for it to return a continuous stream of results over a certain (optional) time frame.
Is it feasible for a function to repeatedly return results for a single call?
While browsing the net I came across gevent and threading. Will either work, and if so, any pointers on how to use it?
I just need to call the function once, have it carry out the work, and return results immediately after each task completes.
Why you need this is not specified in the question, so it is hard to know what you need, but I will give you a general idea, and code too.
You could return multiple values that way: return var1, var2, var3 (but I don't think that's what you need).
You have multiple options: either blocking or non-blocking. Blocking means your code will no longer execute while you are calling the function. Non-blocking means that it will run in parallel. You should also know that you will definitely need to modify the code calling that function.
Here's how to do it in a thread (non-blocking):
def your_function(callback):
    # This is a function defined inside, just for convenience; it could be any function.
    def what_it_is_doing(callback):
        import time
        total = 0
        while True:
            time.sleep(1)
            total += 1
            # Here it is a callback function, but if you are using a GUI
            # framework (wx, Qt, GTK, ...), they usually have events/signals;
            # you should use that system instead.
            callback(time_spent=total)
    import thread
    thread.start_new_thread(what_it_is_doing, (callback,))
# The way you would use it:
def what_I_want_to_do_with_each_bit_of_result(time_spent):
    print "Time is:", time_spent

your_function(what_I_want_to_do_with_each_bit_of_result)
# Continue your code normally
The other option (blocking) involves a special kind of function: generators, which are technically treated as iterators. You define it like a function, and it acts as an iterator. Here's an example, using the same dummy task as before:
def my_generator():
    import time
    total = 0
    while True:
        time.sleep(1)
        total += 1
        yield total
# And here's how you use it:
# You need it to be in a loop!
for time_spent in my_generator():
    print "Time spent is:", time_spent

# Or, you could use it this way, and call .next() manually:
my_gen = my_generator()
# When you need something from it:
time_spent = my_gen.next()
Note that in the second example the timing is not exact: the values are not really produced at one-second intervals, because other code runs each time the generator yields (or .next() is called), and that code may take time. But I hope you get the point.
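If the actual timing matters, one variation (my own sketch, not part of the answer above) is to report wall-clock elapsed time instead of counting sleeps:

import time

def my_timed_generator():
    start = time.time()
    while True:
        time.sleep(1)
        # Yield real elapsed time, so a slow consumer doesn't skew the value.
        yield time.time() - start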
Again, it depends on what you are doing: whether the app you are using has an "event" framework or similar that you would need to use, whether you need it blocking or non-blocking, whether time is important, how your calling code should manipulate the result...
Your gevent and threading ideas are on the right track, because a function does only what it is programmed to do: it accepts its arguments (one variable or a set of them) and returns a single result (a variable or a set). The function has to be called to return each result, and the continuous stream of processing is probably taking place somewhere already, or else you would be asking about something like looping over a kernel pointer, which you are not, so...
So the calling code which encapsulates your function is important. Any function, even a simple true/false boolean function, only executes until it is done with its inputs, so in your case there must be a calling function which listens indefinitely. If it doesn't exist, you should write one ;)
Calling code which encapsulates is certainly very important.
Folks aren't going to have enough info to help much, except in the very generic sense that you are (or should be) inside some framework's event loop, or some other loop of some form already, and that loop is what you want to be listening to and preparing data for.
I like functional programming's map function for this sort of thing, I think. I can't comment at my rep level, or I would have restricted my speculation to a comment. :)
To get a better answer from another person, post some example code and reveal your API if possible.
I am on a project using txrdq and am testing (using trial) a case where a queued job may fail. Trial marks the test case as failed whenever it hits a failure in an errback.
The errback is normal behaviour, since a queued job may fail to launch. How do I test this case using trial without failing the test?
Here's an example of the test case:
from twisted.trial.unittest import TestCase
from txrdq.rdq import ResizableDispatchQueue
from twisted.python.failure import Failure

class myTestCase(TestCase):
    def aFailingJob(self, a):
        return Failure("This is a failure")

    def setUp(self):
        self.queue = ResizableDispatchQueue(self.aFailingJob, 1)

    def tearDown(self):
        pass

    def test_txrdq(self):
        self.queue.put("Some argument", 1)
It seems likely that the exception is being logged, since the error handler just raises it. I'm not exactly sure what the error handling code in txrdq looks like, so this is just a guess, but I think it's a pretty good one based on your observations.
Trial fails any unit test that logs an exception, unless the test cleans that exception up after it's logged. Use TestCase.flushLoggedErrors(exceptionType) to deal with this:
def test_txrdq(self):
    self.queue.put("Some argument", 1)
    self.assertEqual(1, len(self.flushLoggedErrors(SomeException)))
Also notice that you should never do Failure("string"). This is analogous to raise "string"; string exceptions have been deprecated in Python for a looooong time. Always construct a Failure with an exception instance:
class JobError(Exception):
    pass

def aFailingJob(self, a):
    return Failure(JobError("This is a failure"))
This makes JobError the exception type you'd pass to flushLoggedErrors.
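So the flush call in the earlier test becomes:

self.assertEqual(1, len(self.flushLoggedErrors(JobError)))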
Make sure that you understand whether queue processing is synchronous or asynchronous. If it is synchronous, your test (with the flushLoggedErrors call added) is fine. If it is asynchronous, your error handler may not have run by the time your test method returns. In that case, you're not going to be testing anything useful, and the errors might be logged after the call to flush them (making the flush useless).
Finally, if you're not writing unit tests for txrdq itself, then you might not want to write tests like this at all. You can probably unit test your txrdq-using code without an actual txrdq. A normal Queue object (or perhaps another, more specialized test double) will let you more precisely target the units in your application, making your tests faster, more reliable, and easier to debug.
This issue has now (finally!) been solved, by L. Daniel Burr. There's a new version (0.2.14) of txRDQ on PyPI.
By the way, in your test you should add from txrdq.job import Job, and then do something like this:
d = self.queue.put("Some argument", 1)
return self.assertFailure(d, Job)
Trial will make sure that d fails with a Job instance. There are a couple of new tests at the bottom of txrdq/test/test_rdq.py that illustrate this kind of assertion.
I'm sorry this problem caused so much head scratching for you - it was entirely my fault.
Sorry to see you're still having a problem. I don't know what's going on here, but I have been playing with it for over an hour trying to...
The queue.put method returns a Deferred. You can attach an errback to it to do the flush as @exarkun describes, and then return the Deferred from the test. I expected that to fix things (having read @exarkun's reply and gotten a comment from @idnar in #twisted), but it doesn't help.
Here's a bit of the recent IRC conversation, mentioning what I think could be happening: https://gist.github.com/2177560
As far as I can see, txRDQ is doing the right thing. The job fails and the deferred that is returned by queue.put is errbacked.
If you look in _trial_temp/test.log after you run the test, what do you see? I see an error that says "Unhandled error in Deferred", and the error is a Failure with a Job in it. So it seems likely to me that the error is somewhere in txRDQ: there is a deferred that's failing, and it's passing on the failure just fine to whoever needs it, but also returning the failure, causing trial to complain. But I don't know where that is. I put a print into the __init__ of the Deferred class just out of curiosity, to see how many deferreds were made during the running of the test. The answer: 12!
Sorry not to have better news. If you want to press on, go look at every deferred made by the txRDQ code. Is one of them failing with an errback that returns the failure? I don't see it, and I've put print statements all over the place to check that things are right. I guess I must be missing something.
Thanks, and thanks too, @exarkun.
I have a Python-based app that can accept a few commands in a simple read-eval-print-loop. I'm using raw_input('> ') to get the input. On Unix-based systems, I also import readline to make things behave a little better. All this is working fine.
The problem is that there are asynchronous events coming in, and I'd like to print output as soon as they happen. Unfortunately, this makes things look ugly. The "> " string doesn't show up again after the output, and if the user is halfway through typing something, it chops their text in half. It should probably redraw the user's text-in-progress after printing something.
This seems like it must be a solved problem. What's the proper way to do this?
Also note that some of my users are Windows-based.
TIA
Edit: The accepted answer works under Unixy platforms (when the readline module is available), but if anyone knows how to make this work under Windows, it would be much appreciated!
Maybe something like this will do the trick:
#!/usr/bin/env python2.6
from __future__ import print_function
import readline
import threading

PROMPT = '> '

def interrupt():
    print()  # Don't want to end up on the same line the user is typing on.
    print('Interrupting cow -- moo!')
    print(PROMPT, readline.get_line_buffer(), sep='', end='')

def cli():
    while True:
        cli = str(raw_input(PROMPT))

if __name__ == '__main__':
    threading.Thread(target=cli).start()
    threading.Timer(2, interrupt).start()
I don't think that stdin is thread-safe, so you can end up losing characters to the interrupting thread (the user will have to retype them at the end of the interrupt). I exaggerated the interrupt delay with the two-second Timer. The readline.get_line_buffer call won't display the characters that get lost, so it all turns out alright.
Note that stdout itself isn't thread safe, so if you've got multiple interrupting threads of execution, this can still end up looking gross.
Why are you writing your own REPL using raw_input()? Have you looked at the cmd.Cmd class? Edit: I just found the sclapp library, which may also be useful.
Note: the cmd.Cmd class (and sclapp) may or may not directly support your original goal; you may have to subclass it and modify it as needed to provide that feature.
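For reference, a minimal cmd.Cmd sketch (the greet and quit commands are made up for illustration):

import cmd

class MyShell(cmd.Cmd):
    prompt = '> '

    def do_greet(self, line):
        """greet [name] -- say hello"""
        print "Hello,", line or "world"

    def do_quit(self, line):
        """quit -- exit the shell"""
        return True  # returning a true value stops cmdloop()

if __name__ == '__main__':
    MyShell().cmdloop()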
Run this:
python -m twisted.conch.stdio
You'll get a nice, colored, async REPL, without using threads. While you type in the prompt, the event loop is running.
Look into the code module; it lets you create objects for interpreting Python code. Also (shameless plug), https://github.com/iridium172/PyTerm lets you create interactive command-line programs that handle raw keyboard input (e.g. ^C will raise a KeyboardInterrupt).
It's kind of a non-answer, but I would look at IPython's code to see how they're doing it.
I think you have 2 basic options:
Synchronize your output (i.e. block until it comes back)
Separate your input and your (asynchronous) output, perhaps in two separate columns.
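A minimal sketch of option 1 (safe_print is a made-up helper; it reuses the readline trick from the accepted answer above, so it is Unix-only):

import sys
import threading
import readline

PROMPT = '> '
print_lock = threading.Lock()

def safe_print(message):
    # Serialize writers, then redraw the prompt and the user's partial input.
    with print_lock:
        sys.stdout.write('\n' + message + '\n')
        sys.stdout.write(PROMPT + readline.get_line_buffer())
        sys.stdout.flush()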