Numeric GUI bottleneck - python

I've made a GUI to set up and start a numerical integrator using PyQt4, Wing, Qt, and Python 2.6.6 on my Mac. The thing is, when I run the integrator from the GUI, it takes many times longer than when I crudely run the integrator from the command line.
As an example, a 1000 year integration took 98 seconds on the command line and ~570 seconds from the GUI.
In the GUI, the integration runs from a thread and then returns. It uses a queue to communicate back to the GUI.
Does anyone have any ideas as to where the bottleneck is? I suspect that others may be experiencing something like this just on a smaller scale.
t = threading.Thread(target=self.threadsafe_start_thread, args=(self.queue, self.selected))
t.start()

In general it is not a good idea to use Python threads within a PyQt application. Instead, use a QThread.
Both Python threads and QThreads use the same underlying mechanisms, but they don't play well together. I don't know if this will solve your problem or not, but it might be part of the issue.
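The QThread advice can be sketched as follows. This is a minimal sketch assuming PyQt4; the class name, signal name, and run_integration are illustrative placeholders, not names from the original code.

```python
# Sketch: moving the integrator into a QThread instead of threading.Thread.
# Class, signal, and function names here are illustrative; replace
# run_integration with your own integrator entry point.
try:
    from PyQt4 import QtCore
except ImportError:          # PyQt4 not installed; sketch only
    QtCore = None

if QtCore is not None:
    class IntegratorThread(QtCore.QThread):
        finished_with_result = QtCore.pyqtSignal(object)

        def __init__(self, selected, parent=None):
            QtCore.QThread.__init__(self, parent)
            self.selected = selected

        def run(self):
            # Runs in the worker thread; emit the result via a signal
            # instead of touching GUI widgets directly.
            result = run_integration(self.selected)
            self.finished_with_result.emit(result)

    # usage:
    #   t = IntegratorThread(self.selected)
    #   t.finished_with_result.connect(self.on_integration_done)
    #   t.start()
```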

Is your thread code mostly Python code? If yes, then you might be a victim of the Global Interpreter Lock.

Related

LLDB python debugger register read

I need to trace program execution, so I decided to make an infinite loop that reads the pc register and single-steps.
Platform: iOS
In this way I want to trace the program's execution flow.
The question is: how should I get the $pc register through the LLDB Python API?
Your program will likely have more than one thread, and each thread will have a different PC. So you would start with your SBProcess object; it has a "threads" property for iterating over threads, each represented by an SBThread object. The SBThread has a "frames" property, which is an array of all the SBFrames, and frames[0] is the bottom-most frame. The SBFrame has a "pc" property, which is the pc. This table of the Python SB APIs might help you out:
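That traversal can be sketched as a small helper using the SB API property names (process.threads, thread.frames, frame.pc); it is meant to be run inside an lldb Python session with an SBProcess in hand:

```python
# Sketch: collect the current pc of every thread in an SBProcess,
# walking process.threads -> thread.frames -> frame.pc.
def thread_pcs(process):
    """Return a list with the current pc of each thread."""
    pcs = []
    for thread in process.threads:
        frame = thread.frames[0]   # frames[0] is the innermost frame
        pcs.append(frame.pc)
    return pcs
```

Inside an interactive lldb Python session you would call it with something like `thread_pcs(lldb.debugger.GetSelectedTarget().GetProcess())`.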
LLDB Python APIs
However, what you are trying to do won't work under Xcode - which is generally the only way to do debugging on iOS. Xcode and Python currently fight over who gets to control process execution, and at some point the wrong actor wins and execution stalls.
You can do this sort of thing using a stand-alone Python driver, an example of which is:
Process Events Example
But since you can't really attach to an iOS process from stand-alone lldb, this is hard to use for iOS development.
BTW, I've occasionally done what you are describing on Mac OS X, and it is also really really slow. You would only want to do this when you are desperate.
You can sometimes get the same effect by putting breakpoints on every function entry point, which you can do on the lldb command line using:
(lldb) break set -r .
and if you only care about tracing through some given modules, you can add the --shlib option one or more times to the "break set" line to restrict the breakpoints to those libraries. Then write a breakpoint command (which you can do in Python) to gather the requisite information. This will still be slow, but is closer to useable.
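A Python breakpoint command for this "break on every function entry" approach might look like the sketch below. lldb calls such a function with (frame, bp_loc, internal_dict); returning False tells lldb to keep running instead of stopping. The function name and file path are illustrative.

```python
# Sketch of a breakpoint command that logs every function entry.
# lldb invokes it at each breakpoint hit; returning False means
# "don't stop, continue the process".
def log_entry(frame, bp_loc, internal_dict):
    print("entered %s, pc = %#x" % (frame.name, frame.pc))
    return False

# attach it on the lldb command line:
#   (lldb) command script import /path/to/this_file.py
#   (lldb) break set -r .
#   (lldb) breakpoint command add -F this_file.log_entry
```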

Python hanging in a loop

Thanks to a lot of help from some people, I've got a threadsafe PyQt GUI where sys.stdout prints to a QTextEdit, and it works fine, except when a large loop is run in the slave thread.
In a 300,000 iteration loop, I just calculate sqrt, power, and logs, and print the results, but the application just stops and hangs (on my own 64 bit Windows 7 machine, it's after 79%, on an older Mac running Lion it's after ~60%).
Running the loop directly in the python terminal results in the program finishing normally.
I'm not sure I know where to start debugging - is it likely just to be a memory issue, or is there some subtle problem with the threading?
As implied in the comments, changing the QTextEdit to a QPlainTextEdit fixed the issue; QTextEdit is not designed for handling very large paragraphs, which is effectively what I was creating. I didn't find it necessary to specify the maximumBlockCount.
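The swap can be sketched like this, assuming PyQt4 (the widget names are the standard QtGui ones; the helper function is illustrative):

```python
# Sketch: use QPlainTextEdit instead of QTextEdit for append-heavy
# log output; it handles very large documents much better.
try:
    from PyQt4 import QtGui
except ImportError:        # PyQt4 not installed; sketch only
    QtGui = None

if QtGui is not None:
    def make_log_widget(parent=None):
        log = QtGui.QPlainTextEdit(parent)
        log.setReadOnly(True)
        # optional: cap how many text blocks are kept in memory
        # log.setMaximumBlockCount(10000)
        return log
```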

how to get a real callback from os in python3

I wrote ActionScript and JavaScript, where adding a callback to invoke a piece of code is pretty normal in everyday life.
But in Python it seems not quite so easy; I can hardly see things written in callback style. I mean a real callback, not a fake one. Here's a fake callback example:
For a list of files to download, you can write:
urls = []

def downloadfile(url, callback):
    # download the file
    callback()

def downloadNext():
    if urls:
        downloadfile(urls.pop(), downloadNext)

downloadNext()
This works, but it will eventually hit the maximum recursion limit, while a real callback won't.
A real callback, as far as I understand, can't come from the program itself; it must come from the physical world, like a CPU clock tick or some hardware I/O state change. That triggers an interrupt: the CPU interrupts the current flow of execution and checks whether the runtime registered any code for this interrupt; if it has, it runs it. The OS wraps this as a signal or an event or something else, and finally passes it to the application. (If I'm wrong, please point it out.) This avoids piling the call stack up until it overflows; otherwise you'll drop into infinite recursion.
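The closest thing in the standard library to this kind of OS-delivered callback is a signal handler: the OS, not your own call stack, causes the handler to run. A Unix-only sketch using SIGALRM:

```python
# Unix-only sketch: the OS delivers SIGALRM and the interpreter invokes
# the handler asynchronously; no recursion or stack growth is involved.
import signal
import time

fired = []

def on_alarm(signum, frame):
    fired.append(signum)       # runs when the OS delivers the signal

signal.signal(signal.SIGALRM, on_alarm)
signal.setitimer(signal.ITIMER_REAL, 0.05)   # fire once, after 50 ms
time.sleep(0.2)                # handler fires while we sleep
print(fired)
```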
There is something like coroutines in Python to handle multiple tasks, but they must be used very carefully: if any one routine blocks, all tasks are blocked.
There are third-party libs like Twisted or gevent, but they seem very troublesome to get and install, are platform-limited, and are not well supported in Python 3; that's not good for writing a simple app and distributing it.
multiprocessing is heavy, and only works on Linux.
Threading, because of the GIL, is never the first choice, and it seems like a pseudo-solution.
Why doesn't Python provide an implementation in the standard library? And is there another easy way to get the real callback I want?
Your example code is just a complicated way of sequentially downloading all files.
If you really want asynchronous downloading, a multiprocessing.Pool, and especially its Pool.map_async member function, is the best way to go. Note that this uses callbacks.
According to the documentation for multiprocessing:
"It runs on both Unix and Windows."
But it is true that multiprocessing on MS Windows has some extra restrictions.

PyGTK custom timing

I need to have custom timing for a component of my program (essentially I'm counting turns, at a rate of around 20 turns per second). Each turn I need to process some information.
However, this has to work with PyGTK. Any ideas on how to accomplish this?
The simplest solution is to use glib.timeout_add, which can periodically run code in the GLib main thread.
If your calculation is time-consuming and needs to be run in a different thread, you can use Python's threading.Timer instead. When you're ready to update the GUI, use glib.idle_add.
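For the 20-turns-per-second case, the glib.timeout_add approach can be sketched as follows (PyGTK names; process_turn is an illustrative placeholder for the per-turn work):

```python
# Sketch: ~20 turns per second with PyGTK's glib.timeout_add.
try:
    import glib
except ImportError:        # PyGTK not installed; sketch only
    glib = None

TURN_INTERVAL_MS = 1000 // 20    # 50 ms between turns

def on_turn():
    # process_turn()             # do this turn's work here
    return True                  # True reschedules; return False to stop

if glib is not None:
    glib.timeout_add(TURN_INTERVAL_MS, on_turn)
    # then enter gtk.main() as usual
```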

How can I profile a multithreaded program?

I have a program that is performing waaaay under par, and I would like to profile it. However, it is multithreaded, so I can't seem to find a good way to profile this thing. Any advice?
I've tried yappi, but it segfaults on OS X :(
EDIT: This is in python, sorry for putting it under profiling...
Are you multithreading or multiprocessing? If you are just multithreading, then that is the problem. Python currently has problems with multithreading on a multiprocessor system because of the Global Interpreter Lock (GIL). They are working on fixing it for Python 3.2 - at least so that your program will run as fast on a single core as on multiple cores.
If you aren't convinced take a look at the shootout results for the thread-ring program. Running with a single core is faster than running with quad cores.
Now, if you use multiprocessing instead, profiling can be difficult as well, because then you have to run cProfile from each separate process. There are some questions that point you in the right direction though.
Depending on how far you've come in your troubleshooting, there are some tools that might point you in the right direction.
"top" is a helpful start to show you if your problem is burning CPU time or simply waiting for stuff.
"dtruss -c" can show you where you spend time and what system calls takes most of your time.
Both these can give you a hint without knowing anything about python.
If you just want to use yappi, it isn't too much work to set up a VirtualBox VM and install some sort of Linux on your machine. I find myself doing that from time to time when I want to try something.
There might of course be things I don't know about that makes it impossible or not worth the effort. Also, profiling on another OS running virtualized might not give the exact same results, but it might still be helpful.
