Memory leaks in PyQt with QImage in a loop - Python

I'm working on a Python application right now that uses PyQt5 and CFFI bindings to libgphoto2.
I have this section of code that polls the camera every 1/60 of a second for a preview image and then schedules a redraw on screen.
def showPreview(self):
    # Do we have a camera loaded at the moment?
    if self.camera:
        try:
            # Get data from the camera and turn it into a pixmap, scaled to the widget
            self.__buffer = self.camera.getPreview().scaled(self.size(), Qt.KeepAspectRatio)
            # Schedule a redraw
            self.update()
            # Set up another showPreview in 1/60 of a second
            QTimer.singleShot(1000 // 60, self.showPreview)
        except GPhoto2Error:
            # Ignore any errors from libgphoto2
            pass
The getPreview() method returns a QImage type.
When I run this with a camera connected to my application, my system's memory usage keeps going up and up. Right now I've had it running for about 10 minutes: it started at 0.5% usage and is now up to nearly 20%.
Correct me if I'm wrong, but shouldn't Python's GC be kicking in and getting rid of the old QImage objects? I suspect that they are lingering longer than they should.

In case it helps, I had a similar memory leak in an application using QImage and QPixmap. Memory was increasing at a rate of 2% every time I uploaded an image. By using QPixmap.scaled(..., Qt.FastTransformation) I got that down to a 0.2% increase per image. The problem is still there, but ten times smaller, and nothing else changed in my code, so it must be related to the destructor of QImage/QPixmap.
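As a hedged illustration only (this is not the commenter's actual code): applying the same idea to the showPreview method from the question would mean passing an explicit transformation mode to scaled(). The camera.getPreview() call is assumed, as in the question, to return a QImage.
from PyQt5.QtCore import Qt, QTimer

def showPreview(self):
    if self.camera:
        preview = self.camera.getPreview()  # QImage, per the question
        # Explicit Qt.FastTransformation, as in the comment above;
        # it is cheaper than Qt.SmoothTransformation.
        self.__buffer = preview.scaled(
            self.size(), Qt.KeepAspectRatio, Qt.FastTransformation
        )
        self.update()
        QTimer.singleShot(1000 // 60, self.showPreview)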

Related

Is there any way to speed up the draw rate of a Scrollable Canvas in Tkinter?

So after posting another question about this issue I realised the problem only happens with my custom resize binding. When I resize the window by dragging its default edges, the issue does not happen and the contents of the canvas are drawn accurately; however, when using my custom resize binding, the contents of the canvas are laggy and lag behind the true position of the window.
import tkinter as tk

def resize(event=None):
    # Resize the window so its right edge follows the mouse pointer,
    # keeping the current height.
    y = root.winfo_height()
    x1 = root.winfo_pointerx()
    x0 = root.winfo_rootx()
    root.geometry("%sx%s" % (x1 - x0, y))

root = tk.Tk()
canvas = tk.Canvas(root, bg='red')
canvas.pack(padx=20, pady=20)
inside = tk.Frame(root)
inside.pack(fill='both')
for i in range(10):
    x = tk.Label(inside, text='Lsdkfjhaskfjhskljdfhaskjfhksfhkjasdhf')
    x.pack()
g = tk.Button(root, text='Drag to resize')
g.bind('<B1-Motion>', resize)
g.pack()
canvas.create_window(5, 5, window=inside, anchor='nw')
root.mainloop()
Original content: a screenshot taken while resizing the window with the manual bind via the button; the content is not visible while resizing and lags behind where the canvas is.
The issue is fixed if I call root.update() at the start of my resize function, but that then causes a recursion depth error due to the thousands of calls made to update in such a short period of time.
Finally, to repeat: when resizing with the default resize handles at the edges of the window, the canvas resizes perfectly with a perfect draw rate and the content stays visible the whole time. Why is my binding not behaving the same way?
Geometry calculations are quite complex in general (and Tk has some fairly sophisticated solvers behind the scenes for handling them), so they only run when the GUI is considered to be "idle". Idleness occurs when the event queue would otherwise block waiting for the next message from the outside world, i.e., when there's nothing else queued up for immediate attention. Tk also postpones redrawing (another really expensive operation) the same way; it makes the UI seem far faster than it really is. This was critical 20 or more years ago, and it is still really useful now.
But it breaks down if there is a flood of events coming in, and drags can do that, especially when resizing is involved. That's why you can do update idletasks (in Tcl/Tk notation) or update_idletasks() (in Tkinter notation). What that does is immediately process all the pending idle callbacks without taking further real events, running the pending resize computations and redraws. It is far less likely to trigger reentrant event processing than a full update(), which is the problem you were hitting (and why update is considered harmful under normal circumstances).
Note that it is important to let the event loop itself have some time to process events, as parts of handling widget resizes are inevitably done via real events (they involve the OS/GUI). By far the best way to do that is to make sure you return normally from your event handlers as soon as practical.
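As a hedged sketch of that suggestion, the resize handler from the question could flush the idle queue itself instead of calling a full update():
def resize(event=None):
    y = root.winfo_height()
    x1 = root.winfo_pointerx()
    x0 = root.winfo_rootx()
    root.geometry("%sx%s" % (x1 - x0, y))
    # Run the pending geometry recalculations and redraws now, without
    # reentering full event processing (which a plain update() would do).
    root.update_idletasks()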
Are you using the right bind event? Try:
g.bind('<ButtonRelease-1>',resize)

Issue displaying multiple images in wxPython

Okay, so I'm seriously lost here and any guidance would be appreciated. I have a program that displays all the wafer maps within a given time range (image shown below). When I input the dates 9/1/18-9/15/18 it outputs fine. When I do 9/15/18-9/30/18 it also works fine, but when I query the whole month I get an error. I'm starting to think this might be a memory-related issue, but I'm not too knowledgeable about memory; I do know that Python handles memory by itself. I have 16 GB of RAM and am working on a 64-bit architecture. The setup is a GUI that lets you pick a file and two dates, then another wx.Frame appears displaying the wafer maps.
The error that I receive when I query a larger date range is the following:
image = bitmap.ConvertToImage()
wx._core.wxAssertionError: C++ assertion "hbmp" failed at ..\..\src\msw\dib.cpp(139) in wxDIB::Create(): wxDIB::Create(): invalid bitmap
Here's the parent function where that is being called from:
def SetBitmapLabel(self, bitmap, createOthers=True):
    """
    Set the bitmap to display normally.
    This is the only one that is required.

    If `createOthers` is ``True``, then the other bitmaps will be generated
    on the fly. Currently, only the disabled bitmap is generated.

    :param wx.Bitmap `bitmap`: the bitmap for the normal button appearance.

    .. note:: This is the bitmap used for the unselected state, and for all other
       states if no other bitmaps are provided.
    """
    self.bmpLabel = bitmap
    if bitmap is not None and createOthers:
        image = bitmap.ConvertToImage()
        imageutils.grayOut(image)
        self.SetBitmapDisabled(wx.Bitmap(image))
And here's where the function above is called:
def SetBitmap(self, bmp):
    """
    Sets the bitmap representation of the current selected colour to the button.

    :param wx.Bitmap `bmp`: the new bitmap.
    """
    self.SetBitmapLabel(bmp)
    self.Refresh()
Any help would be greatly appreciated, because at this point I have no idea where to go from here. Maybe the GUI toolkit I'm working with is only operating at 32 bits? Not sure. Not sure if the image is needed, but it is below.
EDIT
Thanks to the guys below, I discovered that this is happening because the script's GDI objects reach 10,000, which is the limit set by Windows. Now I have to find a workaround for this. I will probably post another question to dive into it.
It looks like you could be running out of GDI resources, i.e. creating the actual bitmap object (HBITMAP) fails. Unfortunately there is not much that can be done in this case other than the obvious: create fewer (and/or smaller) bitmaps.
Also check that you don't leak any bitmaps: if the problem only starts appearing after you run your application for some time, this could well be the case. Many diagnostic tools under Windows (e.g. Process Explorer) can show you the GDI resources consumed by your process; check whether they grow while the program is running (one way to watch this from inside the process is sketched below).
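As a hedged illustration of that check, on Windows the GetGuiResources API can report the current GDI handle count of the calling process via ctypes; logging this value periodically should show whether it keeps climbing as wafer maps are drawn.
import ctypes

GR_GDIOBJECTS = 0  # GetGuiResources flag: count GDI objects

def gdi_object_count():
    # Number of GDI handles currently held by this process (Windows only).
    hproc = ctypes.windll.kernel32.GetCurrentProcess()
    return ctypes.windll.user32.GetGuiResources(hproc, GR_GDIOBJECTS)

print(gdi_object_count())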
As VZ said, it seems that you are running out of GDI resources (i.e. bitmaps), which has nothing to do with available RAM, but with the OS.
If this is the case, I'd go this route:
Store your images in wxImage objects. They will use RAM, but not GDI resources.
Handle each window's paint event. In the handler, convert the wxImage to a bitmap and blit it to the window; the bitmap can be released as soon as the paint is done (see the sketch below).
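A minimal sketch of that approach, assuming wxPython Phoenix; the ImagePanel class name and the self.image attribute are illustrative, not taken from the question:
import wx

class ImagePanel(wx.Panel):
    def __init__(self, parent, image):
        super(ImagePanel, self).__init__(parent)
        self.image = image                      # wx.Image: uses RAM, no GDI handle
        self.Bind(wx.EVT_PAINT, self.on_paint)

    def on_paint(self, event):
        dc = wx.PaintDC(self)
        # Create the GDI bitmap only for the duration of the paint; the
        # handle is released when the local wx.Bitmap goes out of scope.
        bmp = wx.Bitmap(self.image)
        dc.DrawBitmap(bmp, 0, 0)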
I encountered the same problem. A simple workaround solved it: periodically restart the image-grabbing script so its GDI usage never reaches the limit.
import subprocess, time

while True:
    # Start the image-grabbing script and let it run for ten minutes...
    cd = "py img_get_xp.py"
    subprocess.Popen(cd)
    time.sleep(600)
    # ...then look up the PID of python.exe in the task list, kill it,
    # and start a fresh copy on the next loop iteration.
    subprocess.Popen('tasklist>task.txt', shell=True).communicate()
    tasks = open('task.txt')
    lines = tasks.read()[::-1]
    pos = lines.find("exe.nohtyp")
    pid = lines[pos - 22:pos - 18][::-1]
    subprocess.call(['taskkill', '/F', '/T', '/PID', pid])

Determine the camera trigger delay of see3cam

I have an application where I need to take a picture at a very specific moment, when another program gives the command. As the camera needs to be very light, I bought the see3cam_CU135 from e-con systems. I was planning on running the camera with Python and having Python wait for the command from the other program. The delay between triggering and the actual exposure is essential to know, and so far I haven't been very successful in finding out what it is. Here's my setup to figure out the delay:
I run a separate script that acts as a stopwatch, printing my system clock to the screen:
import time

while True:
    print(time.time())
    time.sleep(0.001)
Then I run my actual script, which takes a picture of the output of the first script.
import cv2
import time

vc = cv2.VideoCapture(1)
vc.set(cv2.CAP_PROP_FRAME_WIDTH, 4208)
vc.set(cv2.CAP_PROP_FRAME_HEIGHT, 3120)
vc.set(cv2.CAP_PROP_EXPOSURE, -2)

if vc.isOpened():  # try to get the first frame
    t1 = time.time()
    # Busy-wait until the next full second begins, then grab a frame
    while int(time.time()) == int(t1):
        a = 0
    rval, frame = vc.read()
    print(t1)
    cv2.imwrite("photo.png", frame)
else:
    rval = False

vc.release()
If I start the script at a time of, let's say 1512638235.3549826, the program should stay in the while loop until the next full second starts, 1512638236, and then trigger the picture, right? So the time on the picture minus the full second after t1 should give me the delay.
So far so good, but here's what's weird: yesterday I ran it, and t1 was 1512579170.079795, so the script should wait almost a second and then trigger the picture. However, the picture of the stopwatch showed 1512579170.588795 (half a second before the trigger command would even have been sent). Is it possible that vc.read() does not actually trigger a frame, but just reads whatever frame is currently in the camera's buffer and therefore returns an older frame? If so, how can I trigger a frame manually exactly when I want it?
A second question I have here is the white balance issue with OpenCV. Apparently it is not possible (yet?) to manually control white balance. I don't really care, as long as it is reproducible. How can I guarantee that auto white balance is off? I need all my pictures to be taken with exactly the same settings, as I need to be able to compare the absolute intensity in different light conditions. I can't have some auto exposure or auto white balance change the settings all the time.
Oh, one more comment: I'm not married to Python or to OpenCV; I'm open to doing it completely differently. However, the other program that will ultimately send the command to my script to take a picture has to run under Windows.
I'd be really thankful for some suggestions!

Running Tkinter dependent code alongside mainloop without GUI freeze

I am writing a simple image viewer that lets the user flip very quickly through tens of thousands of images, about 100 at a time. The images are files on disk.
In order for the viewer to function, it must continuously preload images ahead of the user's current one (or the viewer would be unusably sluggish).
The basic recipe I'm using to display the images in a grid of Tkinter labels is the following (this has been tested and works):
from PIL import Image, ImageTk

def load_image(fn):
    image = Image.open(fn)
    print "Before photoimage"
    img = ImageTk.PhotoImage(image)
    print "After photoimage"
    return img

label.config(image=load_image("some_image.png"))
I need the ImageTk.PhotoImage instance to display the image on a label. I have implemented two different approaches, each with an associated problem.
First approach:
Launch a separate thread which pre-loads the images:
import threading

def load_ahead():
    for fn in images:
        cache[fn] = load_image(fn)

threading.Thread(target=load_ahead).start()
top.mainloop()
This works quite well on my Linux machine. However, on another machine (which happens to be running Windows, with the app built with PyInstaller), a deadlock seems to happen: "Before photoimage" is printed and then the program freezes, which suggests that the loader thread gets stuck creating the ImageTk.PhotoImage object. Must the creation of an ImageTk.PhotoImage object happen within the main (Tkinter mainloop's) thread? Is the creation of a PhotoImage computationally expensive, or is it negligible compared to actually loading the image from disk?
Second approach:
In order to circumvent this possible requirement that PhotoImage objects be created from within Tkinter's mainloop thread, I resorted to Tkinter's after mechanism:
def load_some_images():
    # load only 10 images. function must return quickly to prevent freezing GUI
    for i in xrange(10):
        fn = get_next_image()
        cache[fn] = load_image(fn)
    top.after_idle(load_some_images)

top.after_idle(load_some_images)
The problem with this is that, apart from creating additional overhead (the image-loading procedure must be broken up into very small chunks since it is competing with the GUI), it periodically freezes the GUI for the duration of the call, and it seems to consume any keyboard events that happen during its execution.
Third approach
Is there a way I can detect pending user events? How can I accomplish something like this?
def load_some_images():
    while True:
        try:
            top.pending_gui_events.get_nowait()  # hypothetical attribute: drain pending user events
        except:
            break
    # user is still idle! continue caching images
    fn = get_next_image()
    cache[fn] = load_image(fn)
    top.after_idle(load_some_images)

top.after(5, load_some_images)
Edit: I have tried using top.tk.call('after', 'info') to check for pending keyboard events. This doesn't always work reliably, and the interface is still sluggish/unresponsive.
Thanks in advance for any ideas
I recommend creating a load_one_image function rather than a load_some_images function. It will be less likely to interfere with the event loop.
Also, as a rule of thumb, a function called via after_idle shouldn't reschedule itself with after_idle. The reason is that after_idle will block until the idle event queue is drained; if you keep adding stuff to the queue while it is being processed, it never gets completely drained. This could be the reason why your GUI seems to hang once in a while with your second approach.
Try after(5, ...) rather than after_idle(...). If your system can create an image in less than 5ms, you can process 100 images in about half a second, which is probably fast enough to give a pretty snappy interface. You can tweak the delay to see how it affects the overall feel of the app.
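A minimal sketch of that suggestion, reusing the question's top, cache, load_image and get_next_image names; the assumption that get_next_image returns None when there is nothing left to preload is mine, not from the question:
def load_one_image():
    fn = get_next_image()
    if fn is not None:
        cache[fn] = load_image(fn)
        # Reschedule with a small real delay rather than after_idle, so the
        # idle queue can drain and keyboard events stay responsive.
        top.after(5, load_one_image)

top.after(5, load_one_image)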

Why is wxPython's motion detection so slow?

I set on_motion to handle EVT_MOTION. I want the mouse location for interactively generating a coordinate-specific image, but wxPython has a ~400 ms delay in registering successive motion events, which makes the interface sluggish.
Why is EVT_MOTION so slow, and how do I fix it? I tried it in Ubuntu 11.10 and WinXP and the delays are comparable.
I need fast response times for selecting a portion of an image, as the picture shows. As it stands, the "cross-hairs" follow the mouse too slowly.
Here is the code with which I tried EVT_MOTION:
def on_motion(self, event):
    """mouse in motion"""
    #pt = event.GetPosition()
    self.mouseover_location = event.GetPosition()
    self.t2 = time.time()
    print "delay", self.t2 - self.t1
    self.t1 = self.t2
delay 0.379776954651
delay 0.00115919113159
delay 0.421130895615
delay 0.416938066483
delay 0.376848936081
delay 0.387464046478
delay 0.40311384201
delay 0.392899036407
delay 0.385301113129
delay 0.422554969788
delay 0.355197906494
The question as it stands is incomplete, as there is no sample app to demonstrate the problem. However, I would say that the motion handler has nothing to do with your problem; most likely you are doing some expensive operation between subsequent motion events (like refreshing your entire drawing canvas).
If this is the case (and you can easily check if your paint routine is called between mouse motion events), I would suggest the following:
If you're drawing that stuff yourself, ensure that you are using double buffering (via wx.BufferedPaintDC);
If your paint routine is indeed called between mouse motions, try to refresh only the damaged portion of your plot (via RefreshRect);
Use wx.Overlay to draw your rectangular selection (there are a few demos available on how to do that; a sketch follows below);
Post a small, runnable sample app that demonstrates the problem.
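As a hedged illustration of the wx.Overlay idea only (the panel class and attribute names are illustrative, not from the question), cross-hairs can be drawn over the existing content without repainting the whole canvas on every motion event:
import wx

class CrossHairPanel(wx.Panel):
    def __init__(self, parent):
        super(CrossHairPanel, self).__init__(parent)
        self.overlay = wx.Overlay()
        self.Bind(wx.EVT_MOTION, self.on_motion)

    def on_motion(self, event):
        pos = event.GetPosition()
        dc = wx.ClientDC(self)
        odc = wx.DCOverlay(self.overlay, dc)  # saves/restores what is underneath
        odc.Clear()
        w, h = self.GetClientSize()
        dc.SetPen(wx.Pen(wx.RED, 1))
        dc.DrawLine(0, pos.y, w, pos.y)       # horizontal cross-hair
        dc.DrawLine(pos.x, 0, pos.x, h)       # vertical cross-hair
        del odc                               # release the overlay DC before the ClientDC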
EVT_MOTION is fired every time the mouse is moved. If you then call event.GetPosition() on every movement and also process the data, this will slow down performance.
How about using EVT_LEFT_DOWN or something similar, and then getting the position and processing that data?
That would be much more efficient, since you are only looking for a certain area of the image.
We'll really need to see what else is going on in the application to give you any meaningful answers, although many people manage to solve the problem themselves in the process of creating a small sample that demonstrates it for others.
http://wiki.wxpython.org/MakingSampleApps
Optimizing how you are drawing the cross-hairs and/or how you are refreshing the main content of the window is probably your best bet, but until you share more details all we can do is guess.
