Thanks to a lot of help from some people, I've got a thread-safe PyQt GUI where sys.stdout prints to a QTextEdit, and it works fine, except when a large loop is run in the slave thread.
In a 300,000-iteration loop I just calculate square roots, powers, and logs and print the results, but the application simply stops and hangs (on my own 64-bit Windows 7 machine it hangs after about 79% of the iterations; on an older Mac running Lion, after ~60%).
Running the loop directly in the Python terminal results in the program finishing normally.
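For reference, the loop is essentially of this shape (a simplified reconstruction, not the exact code); every print call goes through the redirected sys.stdout into the text widget:

import math
for i in range(1, 300001):
    print("%f %f %f" % (math.sqrt(i), i ** 2, math.log(i)))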
I'm not sure where to start debugging: is it likely just a memory issue, or is there some subtle problem with the threading?
As implied in the comments, changing the QTextEdit to a QPlainTextEdit fixed the issue; QTextEdit is not designed to handle very large paragraphs, which is effectively what I was creating. I didn't find it necessary to specify maximumBlockCount.
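For anyone hitting the same thing, here is a minimal sketch of the pattern I mean (PyQt4; the class names are illustrative, not my actual code): a small object replaces sys.stdout and emits a signal, and a slot in the GUI thread appends the text to a QPlainTextEdit.

import sys
from PyQt4 import QtCore, QtGui

class OutputStream(QtCore.QObject):
    text_written = QtCore.pyqtSignal(str)

    def write(self, text):
        # Emit a signal instead of touching the widget directly, so writes
        # coming from the worker thread are delivered to the GUI thread.
        self.text_written.emit(str(text))

    def flush(self):
        pass

class MainWindow(QtGui.QMainWindow):
    def __init__(self):
        super(MainWindow, self).__init__()
        # QPlainTextEdit is optimised for plain-text logging and coped with
        # the output volume that made QTextEdit hang for me.
        self.console = QtGui.QPlainTextEdit(self)
        self.console.setReadOnly(True)
        self.setCentralWidget(self.console)
        self.stream = OutputStream()
        self.stream.text_written.connect(self.append_text)
        sys.stdout = self.stream

    def append_text(self, text):
        self.console.moveCursor(QtGui.QTextCursor.End)
        self.console.insertPlainText(text)

if __name__ == '__main__':
    app = QtGui.QApplication(sys.argv)
    win = MainWindow()
    win.show()
    print("stdout now ends up in the QPlainTextEdit")
    sys.exit(app.exec_())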
I have a Python script which is used to perform a lab measurement using several devices. The whole setup is rather involved, including communication over serial devices, API calls, and the use of self-written and commercial drivers. In the end, however, everything boils down to two nested loops which vary some parameters, collect data and write it to a file.
My problem is that I observe random occurrences of a MemoryError, typically after about 10 hours, equivalent to ~15k runs of the loops. At the moment I have no idea where it comes from or how I can trace it further, so I would be happy for suggestions on how to approach the problem. My observations up to this point are as follows.
The error occurs at random points in the program. Different runs throw the MemoryError at different lines of my script.
There is never any helpful error message. Python only says MemoryError, without any error string. The traceback leads me to some point in the script where memory is needed (e.g. when building a list), but no specific instruction appears to be the problem.
My RAM is far from full. The Python process in question typically consumes a few tens of MB of RAM when viewed in the Task Manager. In addition, the RAM usage appears to be stable for hours. Usually it increases slowly for some time, then quickly drops back to the previous level, which I interpret as the garbage collector kicking in periodically.
So far I have not found any indication of a memory leak. I used memory_profiler to trace the memory usage of my functions and found it to be stable. In addition, I followed this blog entry to observe what the garbage collector does in detail. Again, I could not find any hints of undeleted objects.
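Roughly, the memory_profiler check looked like this; the function body below is only a placeholder, not my actual measurement code:

from memory_profiler import profile

@profile
def measurement_step(n):
    data = [x ** 2 for x in range(n)]   # stand-in for collecting data
    return sum(data)                    # stand-in for processing/writing it

if __name__ == '__main__':
    for run in range(5):
        measurement_step(100000)

memory_profiler prints a per-line memory report for each call, and in my case the reported usage stayed flat from run to run.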
I am stuck on Win7 x86 due to a driver that only works on a 32-bit system, so I cannot follow suggestions like this one to move to a 64-bit version of Windows. In any case, I do not see how that would help in my situation.
The IPython console from which the script is launched often behaves strangely after the error has occurred. Sometimes a new MemoryError is thrown even for very simple operations. Often the console is marked by Windows as "not responding" after some time; a dialog pops up where, besides the usual options to wait for the process or to terminate it, there is a third option to "restore" the program (whatever that means). Choosing it usually makes the console work normally again.
At this point I am somewhat out of ideas on how to proceed. The general recipe of commenting out parts of the script until it works is highly undesirable in my case: as stated above, each test run takes several hours, meaning a potential downtime of weeks for my lab equipment. Going in that direction appears unfeasible to me. Is there any more direct approach to learn what is crashing behind the scenes? How can I understand why Python apparently fails to malloc?
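One thing I could try, sketched below under the assumption that a reasonably recent psutil can be installed on the machine, is to log the process's own memory use once per loop iteration; on 32-bit Python, the virtual size (vms) approaching ~2 GB would at least tell me whether address space, rather than physical RAM, is running out. The helper name and log file here are just placeholders.

import os
import psutil

_proc = psutil.Process(os.getpid())

def log_memory(tag, logfile='memory_log.txt'):
    info = _proc.memory_info()
    with open(logfile, 'a') as f:
        f.write('%s rss=%.1f MB vms=%.1f MB\n'
                % (tag, info.rss / 1e6, info.vms / 1e6))

# called once per iteration of the outer loop, e.g.:
# log_memory('outer run %d' % i)

But beyond that I do not know what to look for.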
I am working with Python 3.2 on an HP desktop running Windows 7. For a while I have had issues with Python crashing my system, causing a blue screen of death, and I can't find any pattern in when it happens.
It has happened before when I closed the Python shell, but mainly it happens when I start a run of the program from IDLE.
It has also happened when running several different programs, including larger applications using about a dozen modules, as well as simple six-line test code with no imports.
There have even been times when I was running a tkinter GUI several times in a row, changing nothing but label formatting in between runs, and it would work fine five or six times in a row, then suddenly, bam: BSOD.
I've looked through the Python docs a little but don't see anything about a bug that might be causing this. I know I haven't given much to go on, but any ideas about what might be happening would be greatly appreciated!
Update:
The following is from the bugcheck analysis of the dump file created when this happens.
CRITICAL_OBJECT_TERMINATION (f4)
A process or thread crucial to system operation has unexpectedly exited or been
terminated.
Several processes and threads are necessary for the operation of the
system; when they are terminated (for any reason), the system can no
longer function.
Arguments:
Arg1: 00000003, Process
Arg2: 8725ed40, Terminating object
Arg3: 8725eeac, Process image file name
Arg4: 82e11ca0, Explanatory message (ascii)
Debugging Details:
------------------
----- ETW minidump data unavailable-----
PROCESS_OBJECT: 8725ed40
IMAGE_NAME: mfehidk.sys
DEBUG_FLR_IMAGE_TIMESTAMP: 50296571
MODULE_NAME: mfehidk
FAULTING_MODULE: 00000000
PROCESS_NAME: pythonw.exe
BUGCHECK_STR: 0xF4_pythonw.exe
CUSTOMER_CRASH_COUNT: 1
DEFAULT_BUCKET_ID: WIN7_DRIVER_FAULT
CURRENT_IRQL: 0
ANALYSIS_VERSION: 6.3.9600.17029 (debuggers(dbg).140219-1702) x86fre
LAST_CONTROL_TRANSFER: from 82ef3edb to 82cf118c
I don't know enough about the system for this to mean much to me, but hopefully it helps.
I have a relatively simple (no classes) Python 2.7 program. The first thing the program does is read an SQLite database into a dictionary. The database is large, but not huge, around 90 MB on disk, and it takes about 20 seconds to read in. After reading in the database I initialize some variables, e.g.
localMax = 0
localMin = 0
firstTime = True
When I debug this program in Eclipse 3.7.0/PyDev, even on these simple lines, each single-step in the debugger eats up 100% of a core and takes between 5 and 10 seconds. I can see the Python process go to 100% CPU for 10 seconds. Single-step... wait 10 seconds... single-step... wait 10 seconds... If I debug at the command line just using pdb, there are no problems. If I'm not debugging at all, the program runs at "normal" speed, nothing strange like in Eclipse.
I've reproduced this on a dual-core Win7 PC with 4 GB of memory, my 8-core Ubuntu box with 8 GB of memory, and even my MacBook Air. How's that for multi-platform development! I kept thinking it would work somewhere. I'm never even close to running out of memory at any time.
On each Eclipse single-step, why does the Python process jump to 100% CPU and take 10 seconds?
Here is a good-enough workaround, based on Mikko Ohtamaa's hint. I just verified the following on my MacBook Air:
If I simply close the 'Variables' window in the Eclipse GUI, I can single-step through the code at normal speed. Which is great, except that, uh, I no longer have the Variables window.
For any variable I want to see, I can hover my cursor over it and see its value. I didn't attempt to hover over the large dictionary that is the culprit here.
I can also right-click on any variable and add a 'Watch', which brings up an 'Expressions' window. In this case the variable is just a degenerate case (a very simple case) of an 'expression'.
So the workaround for me is to close the Eclipse Variables window and use the Expressions window to selectively view variables. A pain, but for the debugging I'm doing it is better than pdb.
I simply commented this line out:
np.set_printoptions(threshold = 'nan')
It seems Eclipse was trying to keep up with too much information.
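As far as I can tell, threshold='nan' forces numpy to stringify entire arrays whenever the debugger renders variables; a bounded threshold (or simply the default) keeps that cheap. A quick sketch of the idea:

import numpy as np

np.set_printoptions(threshold=1000)   # summarise arrays longer than 1000 elements

big = np.arange(10 ** 6)
print(repr(big))   # prints an abbreviated form instead of a million numbers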
I've made a GUI to set up and start a numerical integrator using PyQt4, Wing, Qt, and Python 2.6.6 on my Mac. The thing is, when I run the integrator from the GUI, it takes many times longer than when I crudely run the integrator from the command line.
As an example, a 1000-year integration took 98 seconds on the command line and ~570 seconds from the GUI.
In the GUI, the integration runs in a thread and then returns. It uses a queue to communicate back to the GUI.
Does anyone have any ideas as to where the bottleneck is? I suspect that others may be experiencing something like this, just on a smaller scale.
t = threading.Thread(target=self.threadsafe_start_thread, args=(self.queue, self.selected))
t.start()
In general it is not a good idea to use Python threads within a PyQt application. Instead, use a QThread.
Both Python threads and QThreads rely on the same underlying mechanisms, but they don't play well together. I don't know whether this will solve your problem, but it might be part of the issue.
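A minimal sketch of the QThread route (PyQt4); the worker class, the signal, and the run_integration placeholder are illustrative, not taken from your program:

from PyQt4 import QtCore

def run_integration(selected):
    return selected   # placeholder for the real integrator call

class IntegratorWorker(QtCore.QObject):
    finished = QtCore.pyqtSignal(object)

    def __init__(self, selected):
        super(IntegratorWorker, self).__init__()
        self.selected = selected

    def run(self):
        result = run_integration(self.selected)
        self.finished.emit(result)

In the GUI class, instead of creating a threading.Thread, it would look roughly like this (handle_result is a hypothetical slot in the GUI thread):

self.thread = QtCore.QThread()
self.worker = IntegratorWorker(self.selected)
self.worker.moveToThread(self.thread)
self.thread.started.connect(self.worker.run)
self.worker.finished.connect(self.handle_result)
self.worker.finished.connect(self.thread.quit)
self.thread.start()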
Is your thread code mostly Python code? If yes, then you might be a victim of the Global Interpreter Lock.
I'm currently using an application written in Python which works quite well, but when I convert it with py2exe the application seems to be stuck at the first reactor.iterate call.
Each time I press Ctrl+C to stop the application, the error is always the same and the application appears to be blocked on a reactor.iterate(4) call.
This problem never occurs with the normal Python interpreter.
Do you have any ideas?
The typical use of the reactor is not to call reactor.iterate. It's hard to say why exactly you're getting the behavior you are without seeing your program, but for a wild guess, I'd say switching to reactor.run might help.
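For example, the usual shape of a Twisted program that needs periodic work is to let reactor.run own the loop and schedule the work with LoopingCall, rather than driving the reactor yourself with iterate. A rough sketch, where poll_devices is just a placeholder for whatever your loop does:

from twisted.internet import reactor, task

def poll_devices():
    print("doing one round of work")   # placeholder for the per-iteration work

loop = task.LoopingCall(poll_devices)
loop.start(4.0)    # roughly the role the iterate(4) delay was playing
reactor.run()      # blocks until reactor.stop() is called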