Unable to change frame height, width in OpenCV - python

I'm using the OpenCV python bindings to put together a quick script/prototype, but for some odd reason,
camera.set(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT, 640.0)
...does nothing at all. By this I mean it returns True, but the frame height stays constant. It isn't simply echoing a constant back, either: camera.get(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT) returns 240.0 as the default value.
I have no clue why this is failing. Any ideas?
For what it's worth, I'm running this code on Windows 8.1.

It's often not possible to change camera settings through OpenCV.
It depends on how well the camera implements the interface to Microsoft's DirectShow.
Since DirectShow is difficult to understand, poorly documented, and hard to test, and cameras are cheaply made, results vary widely from device to device.

You have to set both WIDTH and HEIGHT to change the camera resolution. Some say that changing the height automatically adjusts the width, but that did not work for me.
See my other answer on this topic.

Related

OpenCV changing VideoCapture resolution causes colour issues and glitches

I want to capture 1920x1080 video from my camera, but I've run into two issues:
When I initialize a VideoCapture, it changes the width/height to 640/480.
When I try to change the width/height in cv2, the image becomes messed up.
Images
When setting 1920x1080 in cv2, the image becomes blue and has a glitchy bar at the bottom
cap = cv2.VideoCapture('/dev/video0')
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)
Here's what's happening according to v4l2-ctl. The blue image doesn't seem to be the result of a pixel-format change (e.g. RGB to BGR).
And finally, here's an example of an image captured at 640x480 with the correct colouring. The only difference in the code is that the width/height is not set in cv2.
Problem:
Actually the camera you are using has 2 modes:
640x480
1920x1080
One is for the main stream, one is for the sub stream. I have also hit this problem a couple of times, and here are the possible reasons why it doesn't work.
Note: I assume you tried different ways to open the camera at full resolution (1920x1080), such as cv2.VideoCapture(0), cv2.VideoCapture(-1), cv2.VideoCapture(1), ...
Possible reasons
The first reason could be that the camera doesn't support the resolution you desire, but in your case we can see that it supports 1920x1080, so this cannot be the cause of your issue.
The second, more common reason is that the OpenCV backend doesn't support your camera driver. Since you are using OpenCV's VideoCaptureProperties, the documentation says:
Reading / writing properties involves many layers. Some unexpected result might happens along this chain. Effective behaviour depends from device hardware, driver and API Backend.
What you can do:
In this case, if you really need to reach that resolution while staying compatible with OpenCV, you should use your camera's SDK (if it has one).

Code changes - Python - Piphone - Raspberry Pi

Right now I'm working hard to finish a project named Piphone; I've been following the Adafruit tutorial and I've also bought all the items they suggested.
The problem is that the code was written for a 2.8" screen while I have a 3.5" one.
I've succeeded in making some changes, like replacing the 320x240 dimensions with 480x320.
That's still not enough, but I don't know what to do further; please come with any suggestions.
Here are the screenshots:
Before
After
https://github.com/climberhunt/Piphone/archive/master.zip
From there you can download the code made by Adafruit; you can find the code in piphone.py.
The code in piphone.py appears to be using the pygame module to do the graphics. The problem is all the hardcoded coordinates and sizes for things like the Buttons. To fix this, the values must be computed at run-time and depend on the display resolution. Line 255 sets the display mode.
screen = pygame.display.set_mode(modes[0], FULLSCREEN, 16)
After doing that, you can get a video display information object from pygame.display.Info() and obtain the width and height of the current video mode, then use those values to scale and position the buttons.
You may also need to create different sets of image files for the various sizes of display you want the program to support.
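A sketch of that scaling step (scale_rect is a hypothetical helper, not part of the Adafruit code; the 320x240 base resolution is taken from the question above):

```python
# Rescale a rect laid out for the original 320x240 screen to whatever
# display mode is actually active.
def scale_rect(rect, base=(320, 240), target=(480, 320)):
    x, y, w, h = rect
    bw, bh = base
    tw, th = target
    return (x * tw // bw, y * th // bh, w * tw // bw, h * th // bh)

# With pygame you would obtain `target` at run time, e.g.:
#   info = pygame.display.Info()
#   target = (info.current_w, info.current_h)
print(scale_rect((0, 0, 80, 60)))  # (0, 0, 120, 80)
```

Applying this to every hardcoded button rect keeps the layout proportional on any screen size.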

How to have scalable widgets in Tkinter?

Question
I have created several GUI projects so far, but they all share one fatal mistake. When I make a window smaller (each usually uses only one frame), several of the widgets disappear. Is there any way to make the widgets 'aware' of the size of their frame?
What I have tried so far
I have tried to use this:
w, h = root.winfo_screenwidth(), root.winfo_screenheight()
to specify the size of the window, but since many widgets use units other than pixels, it never works. I am also unsure whether it updates constantly or only when the window is spawned. (Text uses the size of its characters, etc.)
Specs
Python 2.7.3
Windows 7/ Mac OSX Lion
This is generally quite easy, and often just happens by default. It all depends on how you use pack and grid. Without seeing your code it's going to be hard to give you useful information.
Can you show us a really small program that illustrates the problem and is an accurate indication of how you lay out your GUIs?
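For reference, a minimal sketch of the usual fix (build_resizable is just an illustrative name; this assumes grid is being used): give the row and column a nonzero weight and make the widget sticky so it stretches with its cell.

```python
import tkinter as tk  # "import Tkinter" on the Python 2.7 mentioned above

def build_resizable(root):
    # Weight 1 lets row 0 / column 0 absorb all extra space on resize;
    # rows/columns with weight 0 (the default) never grow.
    root.rowconfigure(0, weight=1)
    root.columnconfigure(0, weight=1)
    text = tk.Text(root)
    # sticky='nsew' makes the widget stretch to fill its grid cell.
    text.grid(row=0, column=0, sticky='nsew')
    return text

# Usage (needs a display):
#   root = tk.Tk()
#   build_resizable(root)
#   root.mainloop()
```

With pack, the equivalent is pack(fill='both', expand=True) on the widgets that should grow.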

PIL - Dithering desired, but restricting color palette causes problems

I am new to Python, and trying to use PIL to perform a parsing task I need for an Arduino project. This question pertains to the Image.convert() method and the options for color palettes, dithering etc.
I've got some hardware capable of displaying images with only 16 colors at a time (but they can be specified RGB triplets). So, I'd like to automate the task of taking an arbitrary true-color PNG image, choosing an "optimum" 16-color palette to represent it, and converting the image to a palettized one containing ONLY 16 colors.
I want to use dithering. The problem is, the image.convert() method seems to be acting a bit funky. Its arguments aren't completely documented (PIL documentation for Image.convert()) so I don't know if it's my fault or if the method is buggy.
A simple version of my code follows:
import Image
MyImageTrueColor = Image.new('RGB', (100, 100))  # size must be a tuple; use whatever dimensions you need
# I paste some images from several other PNG files in using MyImageTrueColor.paste()
MyImageDithered = MyImageTrueColor.convert(mode='P',
                                           colors=16,
                                           dither=1)
Based on some searches I did (e.g.: How to reduce color palette with PIL) I would think this method should do what I want, but no luck. It dithers the image, but yields an image with more than 16 colors.
Just to make sure, I removed the "dither" argument. Same output.
I re-added the "dither=1" argument and threw in the Image.ADAPTIVE argument (as shown in the link above) just to see what happened. This resulted in an image that contained 16 colors, but NO dithering.
Am I missing something here? Is PIL buggy? The solution I came up with was to perform 2 steps, but that seems sloppy and unnecessary. I want to figure out how to do this right :-) For completeness, here's the version of my code that yields the correct result - but it does it in a sloppy way. (The first step results in a dithered image with >16 colors, and the second results in an image containing only 16 colors.)
MyImage_intermediate = MyImageTrueColor.convert(mode='P',
                                                colors=16)
MyImageDithered = MyImage_intermediate.convert(mode='P',
                                               colors=16,
                                               dither=1,
                                               palette=Image.ADAPTIVE)
Thanks!
Well, you're not calling things properly, so it shouldn't be working… but even if you were calling things right, I'm not sure it would work.
First, the "official" free version of the PIL Handbook is both incomplete and out of date; the draft version at http://effbot.org/imagingbook/image.htm is less incomplete and out of date.
im.convert(“P”, **options) ⇒ image
Same, but provides better control when converting an “RGB” image to an
8-bit palette image. Available options are:
dither=. Controls dithering. The default is FLOYDSTEINBERG, which
distributes errors to neighboring pixels. To disable dithering, use
NONE.
palette=. Controls palette generation. The default is WEB, which is
the standard 216-color “web palette”. To use an optimized palette, use
ADAPTIVE.
colors=. Controls the number of colors used for the palette when
palette is ADAPTIVE. Defaults to the maximum value, 256 colors.
So, first, you can't use colors without ADAPTIVE—for obvious reason: the only other choice is WEB, which only handles a fixed 216-color palette.
And second, you can't pass 1 to dither. That might work if it happened to be the value of FLOYDSTEINBERG, but that's 3. So, you're passing an undocumented value; who knows what that will do? Especially since, looking through all of the constants that sound like possible names for dithering algorithms, none of them have the value 1.
So, you could try changing it to dither=Image.FLOYDSTEINBERG (along with palette=Image.ADAPTIVE) and see if that makes a difference.
But, looking at the code, it looks like this isn't going to do any good:
if mode == "P" and palette == ADAPTIVE:
    im = self.im.quantize(colors)
    return self._new(im)
This happens before we get to the dithering code. So it's exactly the same as calling the (now deprecated/private) method quantize.
Multiple threads suggest that the high-level convert function was only intended to expose "dither to web palette" or "map to nearest N colors". That seems to have changed slightly with 1.1.6 and beyond, but the documentation and implementation are both still incomplete. At http://comments.gmane.org/gmane.comp.python.image/2947 one of the devs recommends reading the PIL/Image.py source.
So, it looks like that's what you need to do. Whatever Image.convert does in Image.WEB mode, you want to do that—but with the palette that would be generated by Image.quantize(colors), not the web palette.
Of course most of the guts of that happens in the C code (under self.im.quantize, self.im.convert, etc.), but you may be able to do something like this pseudocode:
dummy = img.convert(mode='P', palette=Image.ADAPTIVE, colors=16)
intermediate = img.copy()
intermediate.setpalette(dummy.palette)
dithered = intermediate._new(intermediate.im.convert('P', Image.FLOYDSTEINBERG))
Then again, you may not. You may need to look at the C headers or even source to find out. Or maybe ask on the PIL mailing list.
PS, if you're not familiar with PIL's guts, img.im is the C imaging object underneath the PIL Image object img. From my past experience, this isn't clear the first 3 times you skim through PIL code, and then suddenly everything makes a lot more sense.

OpenCV (via python) on Linux: Set frame width/height?

I'm using OpenCV via Python on Linux (Ubuntu 12.04), and I have a Logitech C920 from which I'd like to grab images. Cheese can grab frames up to really high resolutions, but whenever I try to use OpenCV, I only get 640x480 images. I have tried:
import cv
cam = cv.CaptureFromCAM(-1)
cv.SetCaptureProperty(cam, cv.CV_CAP_PROP_FRAME_WIDTH, 1920)
cv.SetCaptureProperty(cam, cv.CV_CAP_PROP_FRAME_HEIGHT, 1080)
but this yields output of "0" after each of the last two lines, and when I subsequently grab a frame via:
image = cv.QueryFrame(cam)
The resulting image is still 640x480.
I've tried installing what seemed to be related tools via (outside of python):
sudo apt-get install libv4l-dev v4l-utils qv4l2 v4l2ucp
and I can indeed apparently manipulate the camera's settings (again, outside of python) via:
v4l2-ctl --set-fmt-video=width=1920,height=1080,pixelformat=1
v4l2-ctl --set-parm=30
and observe that:
v4l2-ctl -V
indeed suggests that something has been changed:
Format Video Capture:
Width/Height : 1920/1080
Pixel Format : 'H264'
Field : None
Bytes per Line : 3840
Size Image : 4147200
Colorspace : sRGB
But when I pop into the python shell, the above code behaves exactly the same as before (printing zeros when trying to set the properties and obtaining an image that is 640x480).
Being able to bump up the resolution of the capture is pretty mission critical for me, so I'd greatly appreciate any pointers anyone can provide.
From the docs,
The function cvSetCaptureProperty sets the specified property of video capturing. Currently the function supports only video files: CV_CAP_PROP_POS_MSEC, CV_CAP_PROP_POS_FRAMES, CV_CAP_PROP_POS_AVI_RATIO .
NB This function currently does nothing when using the latest CVS download on linux with FFMPEG (the function contents are hidden if 0 is used and returned).
I had the same problem as you. Ended up going into the OpenCV source and changing the default parameters in modules/highgui/src/cap_v4l.cpp, lines 245-246 and rebuilding the project.
#define DEFAULT_V4L_WIDTH 1920
#define DEFAULT_V4L_HEIGHT 1080
This is for OpenCV 2.4.8
It seems to vary by camera.
AFAIK, Logitech cameras have particularly bad Linux support (though it's gotten better). Most of their issues are with advanced features like focus control. I would advise sticking with basic cameras (i.e. manual-focus Logitech cameras) just to play it safe.
My built-in laptop camera has no issues and displays at normal resolution.
My external Logitech Pro has issues initializing.
However, I can overcome the resolution issue with these two lines.
Yes, they are the same as you used.
cv.SetCaptureProperty(self.capture,cv.CV_CAP_PROP_FRAME_WIDTH, 1280)
cv.SetCaptureProperty(self.capture,cv.CV_CAP_PROP_FRAME_HEIGHT, 720)
My Logitech still throws errors but the resolution is fine.
Please make sure the resolution you set is a supported by your camera or v4l will yell at you. If I set an unsupported native resolution, I have zero success.
Not sure if it works, but you can try to force the parameters to your values after you instantiate the camera object:
import os
import cv
cam = cv.CaptureFromCAM(-1)
os.system("v4l2-ctl --set-fmt-video=width=1920,height=1080,pixelformat=1")
os.system("v4l2-ctl --set-parm=30")
image = cv.QueryFrame(cam)
That's a bit hacky, so expect a crash.
## Set up the camera to capture video
cap = cv2.VideoCapture(device)
width = 1280
height = 720
# set the width and height (the raw property ids 3 and 4 are
# cv2.CAP_PROP_FRAME_WIDTH and cv2.CAP_PROP_FRAME_HEIGHT)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
