How to fix rainbow-like video with python-opencv?

I am trying to create a video monitoring system on my Raspberry Pi. The OS version is the latest Raspbian Buster.
I recently moved to Buster since it had just come out; before that I was on Stretch, and everything worked fine. The only problem I had was that when I increased the window size, the FPS dropped, causing a small latency. That is why I moved to Buster: to see whether it would work better, or whether the slowdown was a limitation of the Raspberry Pi itself.
So here is my code:
import cv2

# Open the default camera (device 0)
cap = cv2.VideoCapture(0)

# Create the resizable window once, outside the loop
cv2.namedWindow('frame', cv2.WINDOW_NORMAL)

while True:
    # Grab and display the next frame
    ret, im = cap.read()
    cv2.imshow('frame', im)
    key = cv2.waitKey(10)
This is what my expected output should look like: a normal, stable camera image.
This is my actual output: the frames come out as rainbow-colored, garbled noise.
Do you have any idea what the problem could be, and how to solve it?
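A rainbow-colored, garbled frame often means the pixel format the capture backend negotiated does not match what cv2.imshow assumes. A minimal diagnostic sketch (assuming the camera is device 0) that prints what the capture actually delivers:

import cv2

cap = cv2.VideoCapture(0)

# Decode the negotiated pixel format; a YUV format displayed as BGR
# is one common cause of rainbow-looking frames
fourcc = int(cap.get(cv2.CAP_PROP_FOURCC))
codec = ''.join(chr((fourcc >> (8 * i)) & 0xFF) for i in range(4))
print('FOURCC:', codec)
print('Resolution:', cap.get(cv2.CAP_PROP_FRAME_WIDTH),
      'x', cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

ret, im = cap.read()
if ret:
    print('Frame dtype/shape:', im.dtype, im.shape)

cap.release()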

Related

Pi Camera exposure control using OpenCV

I am using a Raspberry Pi V2.1 camera. I wanted to control the camera's exposure time, shutter speed, etc. using OpenCV. I am following the OpenCV flags for video I/O documentation. The link is here:
https://docs.opencv.org/3.4/d4/d15/group__videoio__flags__base.html
For example, I have tried setting
cap.set(cv2.CAP_PROP_AUTO_EXPOSURE, 0.25) and 0.75,
and it seems like auto exposure turns on and off. But when I try to set the value manually using
cap.set(cv2.CAP_PROP_EXPOSURE, value) with values from -1 to -13 (according to some online blogs),
the camera does not respond.
The same goes for other flags; most of them do not seem to respond at all.
I have read the online documentation and learned that these flags are camera dependent. The OpenCV documentation, in this case, is not helpful at all.
So my question is: how can I find out which flags are useful for the Pi camera, and what are the valid values of these flags?
Thank you in advance.
I'm no expert on this topic, but I managed to set the exposure manually on an RPi 4 with a camera v2.1.
I set CAP_PROP_AUTO_EXPOSURE to 0.75 and CAP_PROP_EXPOSURE to 0. That left me with a black frame (as expected, I guess). Increasing the exposure value gives gradually brighter images; for values above roughly 80, it stopped getting any brighter.
This code gradually increases the exposure after each displayed frame and works for me:
import cv2

# Open Pi Camera
cap = cv2.VideoCapture(0)

# Set auto exposure to false
cap.set(cv2.CAP_PROP_AUTO_EXPOSURE, 0.75)

exposure = 0

while cap.isOpened():
    # Grab frame
    ret, frame = cap.read()
    # Display if there is a frame
    if ret:
        cv2.imshow('Frame', frame)
        if cv2.waitKey(25) & 0xFF == ord('q'):
            break
    # Set exposure manually
    cap.set(cv2.CAP_PROP_EXPOSURE, exposure)
    # Increase exposure for every frame that is displayed
    exposure += 0.5

# Close everything
cap.release()
cv2.destroyAllWindows()
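If you also want to find out which properties your camera and backend accept at all, one quick probe (just a sketch; the behavior is driver dependent) is to read each property and try writing a value back, since unsupported properties typically return -1 from get() or False from set():

import cv2

cap = cv2.VideoCapture(0)

# Candidate properties to probe (not exhaustive)
props = {
    'CAP_PROP_AUTO_EXPOSURE': cv2.CAP_PROP_AUTO_EXPOSURE,
    'CAP_PROP_EXPOSURE': cv2.CAP_PROP_EXPOSURE,
    'CAP_PROP_BRIGHTNESS': cv2.CAP_PROP_BRIGHTNESS,
    'CAP_PROP_GAIN': cv2.CAP_PROP_GAIN,
}

for name, prop in props.items():
    before = cap.get(prop)       # current value (-1 often means unsupported)
    ok = cap.set(prop, before)   # try writing the same value back
    print(f'{name}: value={before}, settable={ok}')

cap.release()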
Cheers,
Simon

How do I split my 800x480 5-inch screen into two parts for a VR headset? [duplicate]

I am building a stand-alone VR headset using a Raspberry Pi 3 Model B. I am having a problem splitting the screen into two views, as we see on a phone-based headset. I am still learning Python, so I don't have much of an idea how to do this.
In the code below I have tried to solve the problem, but when I run it on Raspbian an error occurs because the ImageGrab function works only on Windows and macOS. I also tried the pyscreenshot module; it works fairly well on my PC screen, but when I connect my 5-inch screen a black window opens and I see nothing.
import numpy as np
from PIL import ImageGrab
import cv2

while True:
    # Grab a region of the screen (works on Windows and macOS only)
    screen = np.array(ImageGrab.grab(bbox=(920, 420, 1320, 900)))
    # PIL delivers RGB; swap to BGR for OpenCV display
    frame = cv2.cvtColor(screen, cv2.COLOR_RGB2BGR)
    frame = cv2.resize(frame, (0, 0), None, 1, .83)
    # Show the same frame twice, side by side, for the two eyes
    numpy_horizontal = np.hstack((frame, frame))
    #cv2.namedWindow("window", cv2.WND_PROP_FULLSCREEN)
    #cv2.setWindowProperty("window", cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)
    cv2.imshow('window', numpy_horizontal)
    if cv2.waitKey(25) & 0xFF == ord('q'):
        cv2.destroyAllWindows()
        break
Your problem is not splitting a screen, but displaying an image on the screen, so you need a library to do that. In your example you are using OpenCV; this is usually a bad choice and only useful for simple debugging. You need a proper GUI library.
Here you have a gazillion options. If you are into games, I would look into moderngl and moderngl-window; moderngl-window can render into several windowing backends (PySide2 among them), and as far as I have seen the Raspberry Pi now supports this.
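On the ImageGrab limitation itself: the mss package can capture the screen on Linux as well. A minimal sketch, assuming mss is installed (pip install mss) and a capture region similar to the one above:

import numpy as np
import cv2
from mss import mss

# Region of the screen to capture (left, top, width, height)
region = {'left': 920, 'top': 420, 'width': 400, 'height': 480}

with mss() as sct:
    while True:
        # mss returns a BGRA buffer; drop the alpha channel for OpenCV
        screen = np.array(sct.grab(region))[:, :, :3]
        # Duplicate the capture side by side for the two-eye view
        stereo = np.hstack((screen, screen))
        cv2.imshow('window', stereo)
        if cv2.waitKey(25) & 0xFF == ord('q'):
            break

cv2.destroyAllWindows()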

Problem with a stereo camera on one USB connection with OpenCV 4 and Python 3.7

I am testing this camera (https://www.amazon.de/gp/product/B01JLU20C0/ref=ppx_yo_dt_b_asin_title_o04_s00?ie=UTF8&psc=1) for a stereo vision project.
The module has two cameras that connect to the computer via a single USB port. I would like to use it to test depth detection for a project. If I only take photos, it works very well; only the live stream does not work for both cameras at once. I have already tried all possible resolutions, unfortunately without success. Does anyone have an idea?
Thanks!
Windows 10, Python 3.7, OpenCV 4
import numpy as np
import cv2

# First camera
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 320)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 120)

# Second camera (needs its own device index, not 0 again)
cap2 = cv2.VideoCapture(1)
cap2.set(cv2.CAP_PROP_FRAME_WIDTH, 320)
cap2.set(cv2.CAP_PROP_FRAME_HEIGHT, 120)

while True:
    # Capture frame-by-frame from both cameras
    ret, frame = cap.read()
    ret2, frame2 = cap2.read()
    # Display the resulting frames
    cv2.imshow('frame 1', frame)
    cv2.imshow('frame 2', frame2)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release both captures
cap.release()
cap2.release()
cv2.destroyAllWindows()
I have now switched to two other cameras, each connected to its own USB port, and now everything works fine. I'll install them in a housing and then stereo vision can really get going. As soon as I'm finished, everything will be on GitHub and YouTube, but it will take some time ;-)
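For anyone who wants to keep the single-USB module: many of these dual-lens cameras show up as one video device whose frames contain both views side by side in a single wide image. A minimal sketch of that approach, assuming such a side-by-side format (the exact combined resolution depends on the module):

import cv2

# One device, one wide frame containing both views
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)   # combined width of both views
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 240)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    # Split the combined frame into the left and right views
    w = frame.shape[1] // 2
    left, right = frame[:, :w], frame[:, w:]
    cv2.imshow('left', left)
    cv2.imshow('right', right)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()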

How to do real-time character recognition with Python and a camera?

I'm making a Kivy app to recognize characters with the camera in real time.
However, there is no documentation for this except for face recognition.
I think there must be a way, because picamera already does something similar (creating an OpenCV image from the camera).
Could someone tell me how to achieve this?
* PS *
I'm trying to capture an image when the camera sees a number, but I don't know how to build that trigger.
This is the code so far, and I want to know when to break out of the while True loop.
import cv2
import pytesseract as tes  # the 'tes' alias suggests pytesseract

cap = cv2.VideoCapture(0)

while True:
    ret, image = cap.read()
    cv2.imshow('image', image)
    cv2.waitKey(1)  # needed for imshow to actually refresh the window
    # Run OCR on the frame; stop as soon as any text is recognized
    # (per-character boxes are available via tes.image_to_boxes)
    results = tes.image_to_string(image)
    if results:
        break

cap.release()
cv2.destroyAllWindows()
print(results)
But this approach is too slow.
How about using pytesseract 0.2.4 for that purpose? It looks like the best solution, according to Dr. Google.
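On the speed problem: running Tesseract on every captured frame is what makes the loop crawl. A common workaround is to OCR only every Nth frame and to preprocess the image first; a minimal sketch of that idea, assuming pytesseract is installed:

import cv2
import pytesseract

cap = cv2.VideoCapture(0)
frame_count = 0
text = ''

while True:
    ret, frame = cap.read()
    if not ret:
        break
    cv2.imshow('image', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
    frame_count += 1
    if frame_count % 30:      # OCR roughly once per second at 30 FPS
        continue
    # Preprocessing usually helps Tesseract: grayscale + Otsu threshold
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    text = pytesseract.image_to_string(binary)
    if text.strip():
        break

cap.release()
cv2.destroyAllWindows()
print(text)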

OpenCV + OS X + external webcam = very slow

I'm using OpenCV on OS X with my external webcam (Microsoft Cinema HD LifeCam), and its performance is very low even with the simplest camera readout code.
import cv2

# Device 1 is the external webcam
cap = cv2.VideoCapture(1)

while cap.isOpened():
    ret, frame = cap.read()
    cv2.imshow("Output", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
I tried the same webcam with Photo Booth and it works well with high FPS. Also, I tried the same code with the built-in Facetime camera of my mac and it worked pretty fast. So, it seems like that I have some kind of configuration issue in OpenCV.
Has somebody ever experienced something like this?
Thanks for your answers.
It seems I was able to solve my problem:
I just had to decrease the resolution of the camera.

cap = cv2.VideoCapture(0)
cap.set(3, 640)   # 3 is cv2.CAP_PROP_FRAME_WIDTH
cap.set(4, 480)   # 4 is cv2.CAP_PROP_FRAME_HEIGHT

I think Photo Booth sets the resolution automatically in order to increase the speed of the readout, whereas one has to set it manually in OpenCV. Not sure about the correctness of this explanation, though.
Try to enforce a specific reader implementation, see here. Options to try are CAP_QT and CAP_AVFOUNDATION; the full list is here. Note that OpenCV has to be built with support for the chosen reader implementation.
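A sketch of how that looks in code, assuming a recent OpenCV build with AVFoundation support (the two-argument VideoCapture constructor takes an explicit apiPreference):

import cv2

# Ask OpenCV for the AVFoundation backend explicitly (macOS);
# if the build lacks it, the capture simply fails to open
cap = cv2.VideoCapture(1, cv2.CAP_AVFOUNDATION)
if not cap.isOpened():
    raise RuntimeError('Camera failed to open with the AVFoundation backend')

# Request a lower resolution, which fixed the frame rate for the asker
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)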
