I want to capture the motion of a red paper on a white background in Linux using Python. I will be using OpenCV and an image library to create images at 30 fps. Is there a way I can determine the position of the red paper (or a point on it) without iterating over every pixel in the image, since that would be horribly slow? Or is there a better way of doing this altogether?
The code for reading the webcam is posted here.
Here is the full code, but for yellow paper; change the color range on line 18 to detect red instead. It works only if a single yellow paper is present.
And here is another version of the code that works even if more than one yellow paper is present. Again, change the range to red yourself.
Related
How can I extract white paper (containing dots in Braille) from an image using image processing?
I have tried a lot of things, but I want to extract it completely so I can threshold the result.
One way you might implement Braille detection is the Hough transform from OpenCV's library.
The function cv.HoughCircles() lets you specify a radius range for the circles you are looking for; assuming each sheet of Braille paper is the same distance from your camera, you have a known tolerance for that.
I would just be worried about the clarity of white Braille bumps on white background paper, which could perhaps be fixed by using a perpendicular light source to create a small shadow behind each bump.
I'm doing real-time image processing at about 20 FPS.
I'm trying to filter three colors (using OpenCV, Python): red, blue, and yellow, from a frame in which the light is not always constant. Ambient light can make my red pixel (B=119, G=84, R=199) at one moment, and (R=60, G=0, B=0) at another, when the light shines straight into the camera.
I know a formula like this: if 2R - B - G is greater than or equal to 0, the pixel goes high (it remains unchanged); if it goes negative, I should make the pixel black. The problem is that this implies pixel-by-pixel processing, and I'm not sure it is what I'm searching for. I would like to try this approach, but I don't know how to make it a bit faster (masking and cv2.bitwise_and() after each pixel check is the only idea I have, and it would take ages).
For the moment, I am just creating a binary mask:
cv2.inRange(bgr_sliced_img, (low_blue, low_green, low_red), (upper_blue, upper_green, upper_red))
The problem with this is that it is impossible to adjust the parameters so that the red/blue/yellow values stay visible in darker images as well as bright ones without also picking up wrong objects.
I've read this, and I don't really know how to use the median intensity of my frame. I think it is close to what I want, but I have no idea how to implement it.
I will provide three pictures that illustrate the problem. The traffic stop sign is red, but because of the ambient light, its red is changed.
I would like to find some algorithm that will "see" the red color in all three pictures without manually tweaking parameters.
* I should mention that I'm searching for the traffic sign in a region of interest of about 100x100 pixels, not in the entire image.
I need to keep only red text from images like this one.
Image with text
I have tried turning all pixels whose red value in RGB is below some threshold (210-240) to black, but the right cutoff depends heavily on the image and its lighting, so it is not a good solution and does not always work.
I would need to make it using Python.
How can I remove the shadows of the seeds? Also, is there a way to change the color of all the seeds to red?
It seems rather easy to detect the seeds since your background is homogeneous. You can start by some simple image processing (contrast enhancement, thresholding, contour detection) to detect the seeds and then you can plot red blobs (with the same area as the detected regions) on the original image. As for the shadows, you can check this question (How to remove the shadow in image by using openCV?).
I think this paper can solve your problem, and you will find it interesting.
The algorithm described there works quite well, and it is a good example of using OpenCV.
You can find the source code here.
Regards.
My ideas are:
1.0. [unsolved, hard image detection] Break the image into squares and remove the borders; surely there are other techniques!
1.1. [unsolved] ImageMagick: crop (instructions here) and remove certain borders -- locating the grid may take a lot of time, an image-detection problem (comparing white/black here) -- or there may be some magic-wand-style filter.
1.2. [unsolved] Python: you probably need this: from PIL import Image.
Obviously, Gimp's eraser is the wrong way to solve this problem, since it's slow and error-prone. How would you remove the grid programmatically?
P.S. There is a casual discussion of this problem on Graphics.SE here, with more physical and mechanical hacks.
If all images consist of black lines over a gray grid, you could adjust the white threshold to remove the grid (e.g. with ImageMagick):
convert -white-threshold 80% with-grid.png without-grid.png
You will probably have to experiment with the exact threshold value. 80% worked for me with your sample image. This will make the lines pixelated. But perhaps resampling can reduce that to an acceptable amount, e.g. with:
convert -resize 200% -white-threshold 80% -resize 50% with-grid.png without-grid.png
In your image the grid is somewhat lighter than the drawing, so we can set a threshold, and filter the image such that all 'light' pixels are set to white. Using PIL it could look like this:
from PIL import Image

def filter(x):
    # 200 is our cutoff; try adjusting it to see the difference.
    if x > 200:
        return 255
    return x

im = Image.open('bird.png')
im = im.point(filter)
im.show()
Processing your uploaded image with this code gives:
Which in this case is a pretty good result. Provided your drawing is darker than the grid, you should be able to use this method without too many problems.
Feedback on the answers by emulbreh and fraxel:
The Python version utilizes ImageMagick, so let's consider ImageMagick itself. It does not work with a colored version like the one below, due to the different color-channel profiles. Let's investigate this a bit further.
$ convert -white-threshold 0% bird.png without.png
This picture shows the amount of noise in the original scanned picture.
Puzzle: removing the right-hand corner as an example
I inverted the colors with $ convert -negate whiteVersion.png blackVersion.png to make it easier to visualize. Now, in the black photo below, I wanted to remove the blue right corner, i.e. make it black -- which means setting the B and G channels to 0 wherever they are at 100%.
$ convert -channel BG -threshold 100% bbird.png without.png
Now the only thing left is, of course, the red channel. I removed G and B, but the white areas still have red left. How can I remove just the right-hand corner? I need to specify an area and then apply the earlier operations.
How can I get this working on an arbitrary photo where you want to remove certain colors but leave other colors intact?
I don't know an easy way. The first problem is color detection: you specify some condition on the colors (R, G, B) as an inequality, and if the condition is true, you remove the color in just that part. Then you do this for each basic color, i.e. (R,G,B)=(100%,0,0), (0,100%,0), and (0,0,100%). Does a ready-made implementation exist? Probably, but it is much nicer to do it yourself -- a puzzle!
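For the do-it-yourself route, here is a sketch in Python/PIL along the lines described: express the inequality on (R, G, B) as a boolean mask and overwrite the matching pixels, leaving everything else intact. The specific predicate below (a "strongly blue" test) is only an example; swap in whichever inequality describes the color you want gone.

```python
import numpy as np
from PIL import Image

def remove_color(img, predicate, fill=(0, 0, 0)):
    """Set every pixel matching predicate(R, G, B) to fill; keep the rest."""
    a = np.asarray(img.convert("RGB")).copy()
    r = a[..., 0].astype(int)
    g = a[..., 1].astype(int)
    b = a[..., 2].astype(int)
    a[predicate(r, g, b)] = fill
    return Image.fromarray(a)

# Example: blank out strongly blue pixels, keep everything else intact.
img = Image.new("RGB", (2, 1))
img.putpixel((0, 0), (20, 30, 220))   # blue-ish: should be removed
img.putpixel((1, 0), (220, 30, 20))   # red-ish: should survive
out = remove_color(img, lambda r, g, b: (b > 150) & (b > 2 * r) & (b > 2 * g))
print(out.getpixel((0, 0)), out.getpixel((1, 0)))
```

Restricting the removal to a region (such as the right-hand corner above) is just a matter of slicing the array before applying the mask.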
Prerequisite knowledge
Tutorials about ImageMagick here and here.
To understand this topic, we need some basic physics: in additive color, white is a mixture of all colors and black is the absence of color.