I'm writing a GUI using Python and PyQt that reads data packets from a UDP socket, processes them with OpenCV, and displays them as real-time images in Qt. I create the UDP socket outside a while loop and call sock.recvfrom inside the loop to read packets. Within the same loop, I process the data into OpenCV format and use OpenCV's imshow() to show the real-time video for experimentation. Everything works smoothly, but when I try to show the video in a QLabel via QImage and QPixmap, things go wrong. If the OpenCV imshow() call is present, the code works fine, with the QPixmap shown in the QLabel in addition to the cv2.imshow() window. However, if I take out imshow(), the UI freezes and nothing is shown, leading to "Python not responding". I haven't come up with a good reason why this is happening, and I also tried keeping/changing the cv2.waitKey() delay without success. Any help would be appreciated.
import socket
import cv2
from PyQt4 import QtCore, QtGui, uic

while True:
    data, addr = self.sock.recvfrom(10240)
    # ... after some processing on data to get im_data ...
    # (0, 0) dsize means the output size is computed from the fx/fy scale
    # factors (values here are for illustration); interpolation must be an
    # OpenCV flag, not a bool
    self.im_data_color_resized = cv2.resize(im_data, (0, 0), fx=0.5, fy=0.5,
                                            interpolation=cv2.INTER_LINEAR)
    # using OpenCV to show the video (the entire code works with cv2.imshow but not without it)
    cv2.imshow('Real Time Image', self.im_data_color_resized)
    cv2.waitKey(10)
    # using a QLabel to show the video
    qtimage = cv2.cvtColor(self.im_data_color_resized, cv2.COLOR_BGR2RGB)
    height, width, bpc = qtimage.shape
    bpl = bpc * width  # bytes per line
    qimage = QtGui.QImage(qtimage.data, width, height, bpl, QtGui.QImage.Format_RGB888)
    self.imageViewer_label.setPixmap(QtGui.QPixmap.fromImage(qimage))
You need to let the event queue run so that your GUI can be updated. Add QtGui.QApplication.processEvents() after the setPixmap() call.
It works with cv2.waitKey() because waitKey internally processes pending paint events, which allows the Qt GUI to refresh. But I recommend not relying on this hack; refresh the Qt events explicitly with processEvents().
You may also want to put this processing loop in its own thread to keep the GUI/main thread responsive.
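A generic sketch of that worker-thread pattern, using Python's stdlib threading and queue modules in place of the Qt-specific pieces (in a real PyQt app you would use a QThread and emit a signal carrying the QImage rather than polling a queue; names like frame_queue and receive_loop are illustrative, not from the original code):

```python
import queue
import threading

# small buffer: drop frames rather than let the GUI lag behind the stream
frame_queue = queue.Queue(maxsize=2)

def receive_loop(stop_event):
    """Stand-in for the socket/OpenCV loop; runs off the GUI thread."""
    n = 0
    while not stop_event.is_set():
        frame = "frame-%d" % n  # placeholder for a processed image
        n += 1
        try:
            frame_queue.put_nowait(frame)
        except queue.Full:
            pass  # drop the frame; the GUI hasn't caught up yet
        if n >= 5:
            break  # bounded run for this sketch

stop = threading.Event()
worker = threading.Thread(target=receive_loop, args=(stop,), daemon=True)
worker.start()
worker.join()

# The GUI thread (e.g. driven by a QTimer) would drain the queue and call
# setPixmap(); here we just collect what arrived.
shown = []
while not frame_queue.empty():
    shown.append(frame_queue.get())
print(shown)
```

Because the queue holds at most two frames and nothing drains it while the worker runs, only the first two frames survive; the rest are dropped, which is usually the right trade-off for a live preview.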
I would like to get this functionality with OpenCV 4.5.3 and Python 3.7 on Raspberry Pi 4 with 2021-05-07-raspios-buster-armhf-full:
cv::imshow("window", img);
do_something_while_img_is_displayed();
cv::destroyWindow("window");
I tried 2 options after a call to cv::imshow:
cv::waitKey(10)
cv::pollKey()
both of which display only a window frame without an image
What is the way to accomplish it?
NOTE
My intent is to have an image persistently displayed on the screen and be able to call other OpenCV functions at that time, e.g. capture and process images from a camera. I don't care about event processing loop - there will be no UI interaction at that time.
This snippet works on Ubuntu 18.04 and on 2021-05-07-raspios-buster-armhf-full:
import cv2, time

image = cv2.imread("t1-4.png")
cv2.imshow("window", image)
cv2.waitKey(1000)  # this delay has to be long on the RPi
time.sleep(3)      # stand-in for do_something_while_img_is_displayed()
cv2.destroyWindow("window")
Equivalent C++ code works on both OSes as well.
I'm working on a Python application right now that uses PyQt5 and CFFI bindings to libgphoto2.
I have this section of code that polls the camera every 1/60 of a second to get a preview image and then schedules a redraw on the screen.
def showPreview(self):
    # Do we have a camera loaded at the moment?
    if self.camera:
        try:
            # Get a QImage from the camera and scale it to fit the widget
            self.__buffer = self.camera.getPreview().scaled(self.size(), Qt.KeepAspectRatio)
            # Schedule a redraw
            self.update()
            # Set up another preview grab in 1/60 of a second
            QTimer.singleShot(1000 // 60, self.showPreview)
        except GPhoto2Error:
            # Ignore any errors from libgphoto2
            pass
The getPreview() method returns a QImage type.
When I was running this with a camera connected to my application, I noticed that my system's memory usage kept climbing. Right now I've had it running for about 10 minutes; it started at 0.5% usage and is now near 20%.
Correct me if I'm wrong, but shouldn't Python's GC be kicking in and getting rid of the old QImage objects? I suspect they are lingering longer than they should be.
In case it helps, I had a similar memory leak in an application using QImage and QPixmap. Memory increased by about 2% every time I loaded an image. By using QPixmap.scaled(..., Qt.FastTransformation) I got that down to a 0.2% increase per image. The problem is still there, but ten times smaller, and nothing else in my code changed, so it must be related to the destructor of QImage/QPixmap.
I am creating a project with Python and a Raspberry Pi. I am trying to use my webcam, as I unfortunately burned my Camera Module. I was following along with: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/robot/image_processing/
Everything is working fine except for one problem: I am not able to save the captured image to a file. I would like to take the photo I have created and turn it into a .jpg image. The code I currently have:
from imgproc import *
import cv2
# open the webcam
my_camera = Camera(320, 240)
# grab an image from the camera
my_image = my_camera.grabImage()
# open a view, setting the view to the size of the captured image
my_view = Viewer(my_image.width, my_image.height, "ETSBot Look")
# display the image on the screen
my_view.displayImage(my_image)
# wait so we can see the image
waitTime(0)
Can someone please help me with this problem?
Thanks in advance!
-Saurish Srivastava
Custom Library: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/robot/downloads/
UPDATE: It does not have to just use this type of code. You can give me an example with a different software. Just tell me how to use it properly so I don't mess up.
Adding the following to your code should save the image in the array my_image as picture.jpg:
cv2.imwrite('picture.jpg', my_image)
For details on configuring the Raspberry Pi: http://www.pyimagesearch.com/2015/03/30/accessing-the-raspberry-pi-camera-with-opencv-and-python/
I am streaming and writing an image to a particular location on a Raspberry Pi. Every time a new image arrives, it overwrites the previous one. If I keep that image file open, it does not get updated automatically; I have to close and reopen it for the update to happen. Is there any way I can refresh it automatically?
I tried implementing Python code to continuously read and show the image, but I still have to refresh the window for the image to get updated. Below is the code I used:
img = cv2.imread("Filename",1)
cv2.imshow('image',img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Please suggest any alternatives. I just need to preview the stream.
You can call imshow() repeatedly without destroying the window, so a simple loop works:
while True:
    img = cv2.imread("Filename", 1)
    cv2.imshow('image', img)
    if cv2.waitKey(100) & 0xFF == ord('q'):  # press q to stop
        break
cv2.destroyAllWindows()
From the OpenCV documentation:
For example, waitKey(0) will display the window infinitely until any keypress (it is suitable for image display). waitKey(25) will display a frame for 25 ms, after which display will be automatically closed.
So you have to pass a time in milliseconds instead of 0.
I'm sending a stream of JPEG images from a Raspberry Pi to my MBP via a simple socket programme in Python 2.7.
When I read the image from the stream on my MBP, it opens up in Preview and opens a new Preview window for every separate image. I have an fps of about 2/3 and obviously 2/3 new windows per second is impossible to work with.
How can I go about only opening one Preview window and simply overwriting the displayed image? Would OpenCV be the best way to go? If so I am unsure how to.
Here is how I read the stream and display the images:
while True:
    # read the 4-byte little-endian length header
    image_len = struct.unpack('<L', connection.read(struct.calcsize('<L')))[0]
    if not image_len:
        break
    image_stream = io.BytesIO()
    image_stream.write(connection.read(image_len))
    image_stream.seek(0)
    image = Image.open(image_stream)
    image.show()
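For context, the wire format here is a 4-byte little-endian length header followed by the JPEG bytes, with a zero length terminating the stream. A minimal sketch of that framing, using io.BytesIO in place of a real socket (the names send_frame and recv_frame are illustrative, not from the original code):

```python
import io
import struct

def send_frame(stream, payload):
    # write a 4-byte little-endian length header, then the payload
    stream.write(struct.pack('<L', len(payload)))
    stream.write(payload)

def recv_frame(stream):
    # read the length header; a short or zero header ends the stream
    header = stream.read(struct.calcsize('<L'))
    if len(header) < 4:
        return None
    (length,) = struct.unpack('<L', header)
    if not length:
        return None
    return stream.read(length)

# round-trip through an in-memory buffer standing in for the socket
buf = io.BytesIO()
send_frame(buf, b'\xff\xd8fake-jpeg-bytes\xff\xd9')
send_frame(buf, b'')  # zero-length frame terminates the stream
buf.seek(0)

frames = []
while True:
    frame = recv_frame(buf)
    if frame is None:
        break
    frames.append(frame)
print(len(frames))  # one frame before the terminator
```

The length prefix is what lets the receiver know exactly how many bytes belong to each JPEG, since a TCP stream has no message boundaries of its own.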
OS X Preview seems to reload open images automatically at intervals (always when the window receives focus), but Image.show saves a new temporary file each time you use it. I suggest saving each new frame to the same file and then using subprocess.call with the OS X open command.
That said, the documentation notes that Image.show is primarily for debugging purposes. For video at more than a few FPS, you probably want something else. One option would be an HTML interface with WebSockets, perhaps using something like AutoBahn.
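The save-to-one-file idea can be sketched with an atomic replace, so Preview never reads a half-written file. The frame bytes and paths below are placeholders, and the macOS-only open call is shown commented out:

```python
import os
import subprocess  # for the macOS 'open' call, if uncommented
import tempfile

def save_frame(frame_bytes, target_path):
    # write to a temp file in the same directory, then atomically swap it in,
    # so a viewer reloading the file never sees a partial write
    directory = os.path.dirname(os.path.abspath(target_path))
    fd, tmp_path = tempfile.mkstemp(dir=directory)
    with os.fdopen(fd, 'wb') as f:
        f.write(frame_bytes)
    os.replace(tmp_path, target_path)  # atomic on POSIX filesystems

target = os.path.join(tempfile.gettempdir(), 'preview_frame.jpg')
save_frame(b'frame-1-jpeg-bytes', target)
save_frame(b'frame-2-jpeg-bytes', target)  # overwrites atomically
# subprocess.call(['open', target])  # macOS: open once; Preview reloads on focus

with open(target, 'rb') as f:
    latest = f.read()
print(latest)
```

Writing the temp file in the same directory as the target matters: os.replace is only atomic when both paths are on the same filesystem.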