I need a custom timing mechanism for a component of my program (essentially I'm counting turns, at a rate of around 20 turns per second). Each turn I need to process some information.
However, it has to work with PyGTK. Any ideas on how to accomplish this?
The simplest solution is to use glib.timeout_add, which can periodically run code in the GLib main loop.
If your calculation is time-consuming and needs to be run in a different thread, you can use Python's threading.Timer instead. When you're ready to update the GUI, use glib.idle_add.
Premise:
I've created a main window. One of the drop-down menus has a 'ProcessData' item. When it's selected, I create a QProgressDialog. I then do a lot of processing in the main loop and periodically update the label and percentage in the QProgressDialog.
My processing looks like: read a large amount of data from a file (a numpy memmapped array), do some signal processing, and write the output to a common h5py HDF5 file. I iterate over the available input files, and all of the output goes into that one file. The entire process takes about two minutes per input file and pins one CPU at 100%.
Goal:
How do I make this process non-blocking, so that the UI is still responsive? I'd still like my processing function to be able to update the QProgressDialog and its associated label.
Can I extend this to process more than one dataset concurrently and retain the ability to update the progressbar info?
Can I write into h5py from more than one thread/process/etc.? Will I have to implement locking on the write operation?
Software Versions:
I use Python 3.3+ with numpy/scipy/etc. The UI is in PyQt4 4.11 / Qt 4.8, although I'd be interested in solutions that use Python 3.4 (and therefore asyncio) or PyQt5.
This is quite a complex problem to solve, and this format is not really suited to providing complete answers to all your questions. However, I'll attempt to put you on the right track.
How do I make this process non-blocking, so that the UI is still responsive? I'd still like my processing function to be able to update the QProgressDialog and its associated label.
To make it non-blocking, you need to offload the processing into a Python thread or QThread. Better yet, offload it into a subprocess that communicates progress back to the main program via a thread in the main program.
I'll leave you to implement (or ask another question about) creating subprocesses or threads. However, be aware that only the main thread can access GUI methods. This means you need to emit a signal if using a QThread, or use QApplication.postEvent() from a Python thread (I've wrapped the latter up into a library for Python 2.7 here; Python 3 compatibility will come one day).
Can I extend this to process more than one dataset concurrently and retain the ability to update the progressbar info?
Yes. One example would be to spawn many subprocesses. Each subprocess can be configured to send messages back to an associated thread in the main process, which communicates the progress information to the GUI via the method described for the above point. How you display this progress information is up to you.
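A sketch of that pattern using only the standard library; multiprocessing stands in here for whatever subprocess mechanism you choose, and the worker and its progress messages are hypothetical:

```python
import multiprocessing

def worker(progress_queue, n_steps):
    """Simulated heavy processing that reports progress back to the parent."""
    for step in range(n_steps):
        # ... process one chunk of data here ...
        progress_queue.put(("progress", step + 1, n_steps))
    progress_queue.put(("done", n_steps, n_steps))

def run_job(n_steps=5):
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=worker, args=(q, n_steps))
    p.start()
    messages = []
    while True:
        msg = q.get()          # in the GUI, a helper thread would do this
        messages.append(msg)   # and forward each message to the main thread
        if msg[0] == "done":
            break
    p.join()
    return messages
```

In the real application, the loop that drains the queue would live in a thread in the main process, forwarding each message to the GUI via signals or postEvent() as described above.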
Can I write into h5py from more than one thread/process/etc.? Will I have to implement locking on the write operation?
You should not write to an HDF5 file from more than one thread at a time, so you will need to implement locking. Possibly even read access needs to be serialised.
A colleague of mine has produced something along these lines for Python 2.7 (see here and here); you are welcome to look at it or fork it if you wish.
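A sketch of serialising writes with a single lock; the h5py call is shown only as a comment, and `write_result`, `output`, and the `written` list are hypothetical stand-ins:

```python
import threading

h5_lock = threading.Lock()
written = []  # stands in for the shared HDF5 file in this sketch

def write_result(name, data):
    """Only one writer at a time may touch the shared file."""
    with h5_lock:
        # with h5py this would be something like:
        #   output[name] = data
        written.append((name, data))

# several workers writing concurrently, all funneled through the lock
threads = [
    threading.Thread(target=write_result, args=("dataset%d" % i, i))
    for i in range(4)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```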
I'm creating an RSS app in PyQt and I'm trying to find a good way to schedule updates. I found this: Executing periodic actions in Python, but maybe there is a Qt-specific way to do these things.
I know the update period for each feed, so I want to run each update at a specific time (hh:mm).
Making a 10-minute loop that checks the current time and runs an update if it's greater than the next predicted feed update seems to miss the point of knowing the specific time to run it.
You should use QTimer in Qt applications. Usually you don't need to care about the specific update time, as the goal is a regular periodic check. So the most straightforward approach is to create a timer for each feed and set its update interval (e.g. 10 minutes).
If for some reason you really want to run an update at a specific time, you can use something like QDateTime::currentDateTime().msecsTo(targetTime) to calculate the timer interval, use QTimer::setSingleShot to make the timer non-periodic, and set another timer when the first one expires.
It may be reasonable to call timer->setTimerType(Qt::VeryCoarseTimer) because you don't need much accuracy, and Qt can optimize performance and power consumption in some cases.
Note that you generally cannot use Python's own timing facilities here, because Qt runs its own event loop and won't let other libraries run code in the middle of it.
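The msecsTo calculation translates directly to Python; here is a sketch, with the QTimer lines left as comments since they assume a running Qt application (update_feed and next_update are hypothetical names):

```python
from datetime import datetime, timedelta

def msecs_to(target):
    """Milliseconds from now until the target datetime (like QDateTime.msecsTo)."""
    delta = target - datetime.now()
    return int(delta.total_seconds() * 1000)

# With PyQt you would then arm a single-shot timer:
#   timer = QtCore.QTimer()
#   timer.setSingleShot(True)
#   timer.timeout.connect(update_feed)  # re-arm for the next update in the slot
#   timer.start(max(0, msecs_to(next_update)))
```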
I've written ActionScript and JavaScript, where adding a callback to invoke a piece of code is an everyday pattern.
But in Python it seems not quite so easy; I can hardly find code written in a callback style. I mean a real callback, not a fake one. Here's a fake callback example:
given a list of files to download, you can write:
urls = []

def downloadfile(url, callback):
    # download the file
    callback()

def downloadNext():
    if urls:
        downloadfile(urls.pop(), downloadNext)

downloadNext()
This works, but it would eventually hit the maximum recursion limit, while a real callback won't.
A real callback, as far as I understand, can't come from the program itself; it must come from the hardware, like the CPU clock or a change in some I/O state. That triggers an interrupt: the CPU suspends the current flow of execution and checks whether the runtime has registered any code for that interrupt; if it has, it runs it, and the OS wraps the whole thing up as a signal or event and finally passes it to the application. (If I'm wrong, please point it out.) This avoids piling the call stack up until it overflows; otherwise you drop into infinite recursion.
Python has something like coroutines for handling multiple tasks, but you must be very careful: if any one routine blocks, all tasks are blocked.
There are third-party libs like Twisted or gevent, but they seem troublesome to get and install, platform-limited, and not well supported on Python 3; that's not good for writing a simple app to distribute.
multiprocessing: heavy, and only works on Linux.
threading: because of the GIL, it's never the first choice, and it seems like a pseudo solution.
Why doesn't Python provide an implementation in the standard library? And is there another easy way to get the real callbacks I want?
Your example code is just a complicated way of downloading all the files sequentially.
If you really want asynchronous downloading, using a multiprocessing.Pool, especially the Pool.map_async member function, is the best way to go. Note that this uses callbacks.
According to the documentation for multiprocessing:
"It runs on both Unix and Windows."
But it is true that multiprocessing has some extra restrictions on Windows.
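A sketch of the callback style with Pool.map_async; fetch is a stand-in for the real download function:

```python
from multiprocessing import Pool

def fetch(url):
    # stand-in for the real download work
    return "downloaded " + url

results = []

def on_done(batch):
    # the callback runs in the parent process with the full result list
    results.extend(batch)

pool = Pool(processes=2)
async_result = pool.map_async(fetch, ["a", "b", "c"], callback=on_done)
async_result.wait()  # a GUI/main loop could keep running instead of waiting
pool.close()
pool.join()
```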
I've made a GUI to set up and start a numerical integrator using PyQt4, Wing, Qt, and Python 2.6.6 on my Mac. The thing is, when I run the integrator from the GUI, it takes many times longer than when I crudely run the integrator from the command line.
As an example, a 1000-year integration took 98 seconds on the command line and ~570 seconds from the GUI.
In the GUI, the integration runs in a thread and then returns. It uses a queue to communicate back to the GUI.
Does anyone have any ideas as to where the bottleneck is? I suspect others may be experiencing something like this, just on a smaller scale.
t = threading.Thread(target=self.threadsafe_start_thread, args=(self.queue, self.selected))
t.start()
In general it is not a good idea to use Python threads within a PyQt application; use a QThread instead.
Python threads and QThreads rely on the same underlying mechanisms, but they don't always play well together. I don't know whether this will solve your problem, but it might be part of the issue.
Is your thread code mostly Python code? If yes, then you might be a victim of the Global Interpreter Lock.
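Whichever thread type you end up with, the queue hand-off mentioned in the question can be sketched with the standard library; the integrate function is a hypothetical stand-in, and in the GUI a QTimer slot would drain the queue rather than the loop at the bottom:

```python
import queue
import threading

q = queue.Queue()

def integrate():
    """Stand-in for the integrator; pushes progress, then the result."""
    total = 0
    for year in range(1, 4):
        total += year            # ... one integration step ...
        q.put(("progress", year))
    q.put(("result", total))

t = threading.Thread(target=integrate)
t.start()
t.join()

# The GUI side drains the queue (e.g. from a QTimer slot) without blocking:
received = []
while not q.empty():
    received.append(q.get_nowait())
```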
Question for Python 2.6
I would like to create a simple web application which, at a specified time interval, runs a script that modifies data in the database. My problem is the code for the infinite loop (or some other method of achieving this goal). The script should be started only once, by the user; subsequent iterations should run automatically, even when the user leaves the application. If someone has an idea for a method of detecting when the app stops, it would be great to show it too. I think threads may be the best way to achieve this. Unfortunately, I've just started my adventure with Python and don't yet know how to use them.
The application will also have views for showing the database and for controlling the loop script.
Any ideas?
You mentioned that you're using Google App Engine. You can schedule recurring tasks by placing a cron.yaml file in your application folder. The details are here.
Update: It sounds like you're not looking for GAE-specific solutions, so the more general advice I'd give is to use the native scheduling abilities of whatever platform you're using. Cron jobs on a *nix host, scheduled tasks on Windows, cron.yaml on GAE, etc.
In your other comments you've suggested you want something in Python that doesn't leave your script executing, and I don't think there's any way to do that. Some process has to be responsible for kicking off whatever you need done, so either you do it in Python and keep a process running (even if it's just sleeping), or you use the platform's scheduling tools. The OS is almost guaranteed to do a better job of this than your code.
I think you'd want to use cron: write your script, and have cron run it every X minutes/hours.
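For example, a crontab entry that runs a (hypothetical) update script every 10 minutes would look like:

```
*/10 * * * * /usr/bin/python /path/to/update_script.py
```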
If you really want to do this in Python, you can do something like this:

import time

while True:
    # <your app logic here>
    time.sleep(TIME_INTERVAL)
Can you use cron to schedule the job to run at certain intervals? It's usually considered better than infinite loops, and was designed to help solve this sort of problem.
There's a very primitive cron in the Python standard library: import sched. There's also threading.Timer.
But as others say, you probably should just use the real cron.
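A sketch of the sched approach; the feed names and the update_feed function are hypothetical, and real delays would of course be minutes rather than fractions of a second:

```python
import sched
import time

updated = []

def update_feed(name):
    # stand-in for the real feed refresh
    updated.append(name)

scheduler = sched.scheduler(time.time, time.sleep)
# schedule two updates a fraction of a second apart
scheduler.enter(0.0, 1, update_feed, ("news",))
scheduler.enter(0.1, 1, update_feed, ("blogs",))
scheduler.run()  # blocks until all scheduled events have fired
```

To repeat an update, the action can re-enter itself with scheduler.enter before returning; but as noted above, the real cron is usually the better tool.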