In a Python script I make a GObject call, and I need to know when it has finished. Is there any way to check this?
Are there functions for that, or something similar?
My code is:
    gobject.idle_add(main.process)

    class main:
        def process():
            # <-- needs some time to finish -->
            next.call.if.finished()
I want to start another object, but only after the first one has finished.
I looked through the GObject reference, but I didn't find anything suitable.
Thanks
I am pretty sure you can do something like this, but in your case, as I understand it, things are simpler: you do not need the result from process(), so you can just use something like

    main.event.wait()
    next.call.if.finished()

I already had to use that very approach from that link, including the need for the result, which is a plus.
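As a minimal, GObject-free sketch of that event-based idea (the names process and wait_then_continue are placeholders, not the asker's actual API): one thread does the work and sets a threading.Event, and the waiting code blocks on it before moving on.

```python
import threading

done = threading.Event()

def process():
    # Stand-in for the long-running work.
    total = sum(range(1000))
    done.set()  # signal that we are finished
    return total

def wait_then_continue():
    worker = threading.Thread(target=process)
    worker.start()
    done.wait()    # blocks until process() calls done.set()
    worker.join()
    return "next step ran"
```

In a real GTK application you would not block the main loop with wait(); you would schedule the follow-up from the worker via gobject.idle_add instead.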
An alternative is to start the idle function with a list of the objects you want to process, so instead of waiting for one object to finish and then starting another one, you can let the idle function re-run itself:
    def process():
        # process one object
        if any_objects_left:
            # set up the next object
            return True
        return False  # remove the idle callback
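Here is a minimal, GTK-free sketch of this pattern; the while loop below stands in for the GLib main loop, which keeps invoking an idle callback for as long as it returns True (the object list is made up for the demo):

```python
pending = ["a", "b", "c"]
processed = []

def process():
    # Process one object per invocation, as an idle callback would.
    processed.append(pending.pop(0))
    if pending:
        return True   # keep the idle callback installed
    return False      # all done: remove the callback

# Stand-in for the main loop driving the idle callback:
while process():
    pass
```

With the real main loop you would just call gobject.idle_add(process) once and let it re-run itself.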
I'm currently working on a project where I need to send data via serial persistently, but need to occasionally change that data based on new inputs. My issue is that my current loop only runs when a new input arrives from raw_input(); nothing runs again until another raw_input() is received.
My current (very slimmed down) loop looks like this:
    while True:
        foo = raw_input()
        print(foo)
I would like for the latest values to be printed (or passed to another function) constantly regardless of how often changes occur.
Any help is appreciated.
The select (or in Python 3.4+, selectors) module can allow you to solve this without threading, while still performing periodic updates.
Basically, you just write the normal loop but use select to determine if new input is available, and if so, grab it:
    import select
    import sys

    foo = ''
    while True:
        # Poll for availability of data on stdin without blocking
        if select.select((sys.stdin,), (), (), 0)[0]:
            foo = raw_input()
        print(foo)
As written, this would print far more than you probably want; you could either time.sleep after each print, or change the timeout argument to select.select to something other than 0; if you make it 1 for instance, then you'll update immediately when new data is available, otherwise, you'll wait a second before giving up and printing the old data again.
How will you type in your data at the same time while data is being printed?
However, you can use multithreading if you make sure your source of data doesn't interfere with your output of data.
    import thread

    def give_output():
        while True:
            pass  # output stuff here

    def get_input():
        while True:
            pass  # get input here

    thread.start_new_thread(give_output, ())
    thread.start_new_thread(get_input, ())
Your source of data could be another program. You could connect them using a file or a socket.
I know os.startfile('....') or os.system('....') can open a file, for example a *.pdf, *.mp4 and so on, but they can't give me the hwnd of the resulting window. (I need the hwnd to control the window, for instance to move, resize, or close it.)
Of course, I can get an hwnd with win32gui.FindWindow(None, "file name"), but that can't tell two windows apart if they have the same title.
Is there a win32 function that can open a file and return its hwnd?
Like this:
    hwnd = win32.function("file dir/file name")  # run a file like os.startfile(...)
    # hwnd == -1 if it failed
    # hwnd == 1234567 if it succeeded
and then I can run multiple files and get their hwnd without any problem.
Thanks in advance.
First, "the hwnd" is an ambiguous concept. A process can have no windows, or 3000 windows.
But let's assume you happen to be running a program that always has exactly one window, and you need to know which window belongs to the process you actually launched rather than, say, another already-running instance of the same program. (Otherwise you could just search by title and class.)
So, you need some way to refer to the process. If you're using os.system or os.startfile, you have no way to do that, so you're stuck. This is just one of the many, many reasons to use the subprocess module instead:
    import subprocess

    p = subprocess.Popen(args)
    pid = p.pid
Now, you just enumerate all top-level windows, then get the PID for each, and check which one matches.
Assuming you have pywin32 installed, and you're using Python 3.x, it looks like this:
    import win32gui
    import win32process

    def find_window_for_pid(pid):
        result = None
        def callback(hwnd, _):
            nonlocal result
            ctid, cpid = win32process.GetWindowThreadProcessId(hwnd)
            if cpid == pid:
                result = hwnd
                return False  # stop enumerating
            return True       # keep enumerating
        win32gui.EnumWindows(callback, None)
        return result
In Python 2.x there's no nonlocal, so you need some other way to get the value from your callback to the outer function, such as a closure over a mutable dummy variable (e.g. result = [None], then set result[0] instead of result).
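A sketch of that workaround, with a plain-Python stand-in for the enumeration (the pywin32 calls themselves only run on Windows): the inner callback cannot rebind a name in the outer scope, but it can mutate a container owned by the outer scope.

```python
def find_value(items, wanted):
    # Python 2 has no `nonlocal`, so wrap the result in a mutable
    # container that the inner callback can assign into.
    result = [None]

    def callback(item):
        if item == wanted:
            result[0] = item
            return False  # stop enumerating
        return True       # keep going

    # Stand-in for win32gui.EnumWindows driving the callback:
    for item in items:
        if not callback(item):
            break
    return result[0]
```

The real version would keep the EnumWindows call and only swap `nonlocal result` for `result[0] = hwnd`.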
But note that this can easily fail, because when you first launch the process, it probably doesn't have a window until a few milliseconds later. Without some means of synchronizing between the parent and child, there's really no way around this. (You can hack it by, say, sleeping for a second, but that has the same problem as any attempt to sleep instead of synchronizing—most of the time, it'll be way too long, reducing the responsiveness/performance of your code for no reason, and occasionally, when the computer is busy, it'll be too short and fail.)
The only way to really solve this is to use pywin32 to create the process instead of using standard Python code. Then you have a handle to the process. This means you can wait for the child to start its window loop, then enumerate just that process's windows.
I have a function in my GUI that takes a while to complete, since it communicates with another program. Since I don't want to wait for it to finish every time before resuming work with the GUI, I want to run this function in a thread.
I tried doing it like this:

    threading.Thread(target=self.Sweep, args=Input).start()

but it's not doing anything: no exception, no results. If I call the function normally, it works fine:

    self.Sweep(Input)

What am I doing wrong here?
I don't know if it's enough to solve the problem, but at the very least you should make your args

    args=(Input,)

so that it matches the "direct" call.
The args parameter for Thread() is expected to be a tuple containing all the arguments for the target function. As you have one argument, Input, you must wrap it in a one-element tuple.
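A minimal self-contained sketch of the corrected call; sweep and the argument value here are placeholders for the original code's Sweep and Input:

```python
import threading

results = []

def sweep(value):
    # Stand-in for the long-running Sweep method.
    results.append(value * 2)

# Note the one-element tuple: args=(21,) rather than args=21.
t = threading.Thread(target=sweep, args=(21,))
t.start()
t.join()  # a GUI would not join; this is just so the demo finishes
```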
The threading module is meant to be used in the same way as the Java equivalent.
I think you are trying to use thread. Try this:
    thread.start_new_thread(someFunc, ())
You can get some help here about thread.start_new_thread.
Looks to me like glglgl is right. You should pass a tuple or list for args, e.g. args=[1] and not args=1.
What happens is: you start your thread, and it immediately dies because Thread tries to unpack a sequence - args - and you passed something other than a sequence, so a TypeError is thrown.
I am suspicious about your logging - you should have seen this exception.
I have got stuck on a problem. It goes like this:
a function normally returns a single result. What I want is for it to return a continuous stream of results, optionally over a certain time frame.
Is it feasible for a function to repeatedly return results from a single call?
While browsing the net I came across gevent and threading. Would they work here? If so, any pointers on how to solve it?
I just need to call the function, let it carry out the work, and have it return a result immediately after every task is completed.
Why you need this is not specified in the question, so it is hard to know exactly what you need, but I will give you a general idea, and code too.
You could return several values at once: return var1, var2, var3 (but that's not what you need, I think).
You have multiple options: either blocking or non-blocking. Blocking means your code will not continue executing while the function runs; non-blocking means the function runs in parallel. Either way, you will definitely need to modify the code calling that function.
That's if you want it in a thread (non-blocking):
    def your_function(callback):
        # This function is defined inside just for convenience; it can be any function.
        def what_it_is_doing(callback):
            import time
            total = 0
            while True:
                time.sleep(1)
                total += 1
                # Here it is a callback function, but GUI frameworks (wx, Qt, GTK, ...)
                # usually have events/signals; if you are using one, use that system instead.
                callback(time_spent=total)
        import thread
        thread.start_new_thread(what_it_is_doing, (callback,))  # args must be a tuple

    # The way you would use it:
    def what_I_want_to_do_with_each_bit_of_result(time_spent):
        print "Time is:", time_spent

    your_function(what_I_want_to_do_with_each_bit_of_result)
    # Continue your code normally
The other option (blocking) involves a special kind of function: a generator, which is technically treated as an iterator. You define it like a function, and it acts as an iterator. Here's an example, using the same dummy task as the previous one:
    def my_generator():
        import time
        total = 0
        while True:
            time.sleep(1)
            total += 1
            yield total

    # And here's how you use it:
    # It needs to be in a loop!
    for time_spent in my_generator():
        print "Time spent is:", time_spent

    # Or you could call .next() manually:
    my_gen = my_generator()
    # When you need something from it:
    time_spent = my_gen.next()
Note that in the second example the intervals are not exactly one second, because your own code runs between each yield (or each .next() call), and that takes time too. But I hope you get the point.
Again, it depends on what you are doing, if the app you are using has an "event" framework or similar you would need to use that, if you need it blocking/non-blocking, if time is important, how your calling code should manipulate the result...
Your gevent and threading ideas are on the right track, because a function only does what it is programmed to do: it accepts its arguments, runs until it is done, and returns either a single value or a set of values. It has to be called each time to produce a result, so a continuous stream of results means something must keep calling it.
So the calling code which encapsulates your function is important. Since any function, even a simple true/false boolean one, only executes until it is done with its inputs, there must be a calling function that listens indefinitely in your case. If it doesn't exist, you should write one ;)
Folks aren't going to have enough info to help much, except in the generic sense that you are, or should be, inside some framework's event loop (or some other loop) already - and that loop is what you want to be listening to and preparing data for.
I like functional programming's map function for this sort of thing, I think.
To get a better answer from another person, post some example code and reveal your API if possible.
I have a class that looks like this:
    class A:
        def __init__(self, filename, sources):
            # gather info from file
            # info is updated during lifetime of the object
            pass

        def close(self):
            # save info back to file
            pass
Now, this is in a server program, so it might be shut down without prior notice by a signal. Is it safe to define the following to make sure the class saves its info, if possible?
    def __del__(self):
        self.close()
If not, what would you suggest as a solution instead?
Waiting until later is just not a strategy for making something reliable. In fact, you have to go in the complete opposite direction: as soon as you know something should be persistent, you need to take action to persist it. Indeed, if you want it to be reliable, you need to first write to disk the steps needed to recover from a failure that might happen while you are committing the change. In pseudo-Python:
    class A:
        def __init__(self, filename, sources):
            self.recover()
            # gather info from file
            # info is updated during lifetime of the object

        def update_info(self, info):
            # append 'info' to recovery_log
            # recovery_log.flush()
            # write 'info' to file
            # file.flush()
            # append 'info-SUCCESS' to recovery_log
            # recovery_log.flush()

        def recover(self):
            # open recovery_log
            # skip to last 'info-SUCCESS'
            # read 'info' from recovery_log
            # write 'info' to file
            # file.flush()
            # append 'info-SUCCESS' to recovery_log
            # recovery_log.flush()
The important bit is that recover() happens every time, and that every step is followed by a flush() to make sure data makes it out to disk before the next step occurs. Another important thing is that only appends ever occur on the recovery log itself; nothing is overwritten in such a way that the data in the log can become corrupted.
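A runnable sketch of that append-and-flush discipline (the log file name and record format here are made up for the demo; a real implementation would write real state, and the recovery scan would replay the data into the main file):

```python
import os
import tempfile

LOG = os.path.join(tempfile.gettempdir(), "recovery_demo.log")
open(LOG, "w").close()  # start the demo with an empty log

def append_record(log_path, record):
    # Append one record and force it out to disk before returning.
    with open(log_path, "a") as log:
        log.write(record + "\n")
        log.flush()
        os.fsync(log.fileno())  # ask the OS to commit the write to disk

def last_success(log_path):
    # Recovery: find the most recent record marked as committed.
    last = None
    with open(log_path) as log:
        for line in log:
            if line.strip().endswith("-SUCCESS"):
                last = line.strip()
    return last

append_record(LOG, "info1")
append_record(LOG, "info1-SUCCESS")
append_record(LOG, "info2")  # a crash here is recoverable: the log survives
```

Because every append is flushed and fsynced before the next step, a crash between any two calls leaves the log in a state recover() can interpret.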
No. You are NEVER safe.
If the operating system wants to kill your process without prior notice, it will, and you can do nothing about it. Your program can stop running after any instruction, at any time, with no opportunity to execute any additional code.
There is just no way to protect your server from a kill signal.
You can, if you want, trap lesser signals and manually delete your objects, forcing the calls to close().
For orderly cleanup you can use the atexit module's hooks. Register a function there that calls your close method. The destructor of an object may not be called at exit.
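A sketch of registering the cleanup with atexit (the class is a simplified stand-in for the one in the question, and the file name is a placeholder):

```python
import atexit

class A(object):
    def __init__(self, filename):
        self.filename = filename
        self.closed = False
        # Ensure close() runs at normal interpreter shutdown,
        # even if nobody calls it explicitly.
        atexit.register(self.close)

    def close(self):
        if not self.closed:
            # save info back to file here
            self.closed = True

a = A("state.txt")
a.close()  # closing early is fine; the atexit hook is then a no-op
```

Note that atexit hooks run on normal interpreter exit (including an unhandled SystemExit), but not when the process is killed by SIGKILL.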
The __del__ method is not guaranteed to ever be called for objects that still exist when the interpreter exits.
Even if __del__ is called, it can be called too late. In particular, it can run after modules it wants to use have already been unloaded. As pointed out by Keith, atexit is much safer.