I'm trying to isolate the sky region from a series of grayscale images in OpenCV. All of the images are fairly similar: the top of the image is always a sky region, and it is always a bright, gray-white colour. I've attempted contour-based approaches, and I've written my own algorithm to extract the horizon line and divide the image into two masks accordingly. However, I've noticed that Photoshop's magic wand tool is MUCH more reliable on this image set.
Here's the image that I'm processing:
and the result that I hope to achieve:
How can this be imitated in OpenCV?
I think what you're looking for is the GrabCut algorithm.
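For what it's worth, here's a minimal GrabCut sketch. The file names and the initialization rectangle (bottom half of the frame as probable foreground) are assumptions you'd tune for your own images:

import cv2
import numpy as np

# GrabCut needs an 8-bit 3-channel image, so convert the grayscale input.
gray = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)
img = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)

mask = np.zeros(img.shape[:2], np.uint8)
bgd_model = np.zeros((1, 65), np.float64)
fgd_model = np.zeros((1, 65), np.float64)

# Rough rectangle around the non-sky (ground) region; everything outside
# it is treated as definite background, i.e. sky in this setup.
h, w = img.shape[:2]
rect = (0, h // 2, w - 1, h // 2 - 1)
cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Pixels marked definite/probable background form the sky mask.
sky_mask = np.where((mask == cv2.GC_BGD) | (mask == cv2.GC_PR_BGD), 255, 0).astype("uint8")
cv2.imwrite("sky_mask.png", sky_mask)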
Related
I am currently learning opencv to process images in Python.
I have some pictures and I should detect which part of the picture represents the sky: then, I should calculate the number of pixels that belong to the sky over the total.
In your opinion, is there a way to do this with opencv or should I train a neural network to recognize the sky in a series of pictures?
Any help would be greatly appreciated. Thank you.
I tried thresholding, contouring, and background subtraction in OpenCV.
IMHO, a CNN for that is a bit overpowered. Personally, I would try to select all the correlated pixels, starting from a targeted one that surely represents the sky (something like the luminance/color picker implemented in Adobe Camera Raw).
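One way to sketch that idea in OpenCV is cv2.floodFill, which grows a region from a seed pixel within a tolerance, much like a magic wand. The file name, seed point, and loDiff/upDiff tolerances below are assumptions to tune per image set:

import cv2
import numpy as np

img = cv2.imread("sky.jpg")
h, w = img.shape[:2]
mask = np.zeros((h + 2, w + 2), np.uint8)  # floodFill needs a mask 2 px larger

seed = (w // 2, 10)  # a pixel near the top that is almost surely sky
cv2.floodFill(img, mask, seed, (255, 0, 0),
              loDiff=(10, 10, 10), upDiff=(10, 10, 10),
              flags=4 | cv2.FLOODFILL_MASK_ONLY | (255 << 8))

sky = mask[1:-1, 1:-1]  # crop the mask back to the image size
ratio = cv2.countNonZero(sky) / float(h * w)
print("sky pixels / total:", ratio)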
Is it possible to outline the blurred part of an image with dashes?
Right now I am using Python with OpenCV. I only know how to load images and detect whether a whole image is blurred.
My input is a blurred image:
I would like to get:
I do not have:
the original/unblurred image.
The output can still contain blurred parts, but they should be marked with a dashed outline.
Thanks a lot for help!
You could try computing the "Variance of the Laplacian" on parts of the image to detect which regions have a low variation in greyscale (= assumed blurry) and which regions have a high variation in greyscale (= assumed non-blurry).
There is a nice tutorial on how to check if an image is blurry, it can be found here
There is also a post here that explains the theory behind it.
It ain't a complete solution, but it might be a way to start.
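As a starting point, here's a rough sketch that tiles the image, computes the variance of the Laplacian per tile, and outlines tiles scoring below a threshold. OpenCV has no built-in dashed rectangle, so this draws solid outlines; the file name, tile size, and threshold are assumptions to tune:

import cv2

img = cv2.imread("blurred.jpg", cv2.IMREAD_GRAYSCALE)
tile, thresh = 64, 100.0
for y in range(0, img.shape[0] - tile + 1, tile):
    for x in range(0, img.shape[1] - tile + 1, tile):
        patch = img[y:y + tile, x:x + tile]
        score = cv2.Laplacian(patch, cv2.CV_64F).var()
        if score < thresh:  # low variance -> assumed blurry
            cv2.rectangle(img, (x, y), (x + tile, y + tile), 255, 1)
cv2.imwrite("marked.jpg", img)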
I'm currently learning about computer vision OCR. I have an image that needs to be scanned, and I'm facing a problem during the image cleanup.
I am using cv2 in Python. This is the original image:
import cv2

image = cv2.imread(image_path)
cv2.imshow("imageWindow", image)
cv2.waitKey(0)  # required for the window to actually render
I want to clean the above image; the number in the middle (64) is the area I want to scan. However, the number gets cleaned away as well.
import numpy as np

# Recolor every pixel whose channels exceed the threshold to white.
image[np.where((image > [0, 0, 105]).all(axis=2))] = [255, 255, 255]
cv2.imshow("imageWindow", image)
cv2.waitKey(0)
What should I do to correct the cleanup here? I want the screen area where the number 64 is located to be cleaned, because I will perform an OCR scan afterwards.
Please help, thank you in advance.
What you're trying to do is called "thresholding". Looks like your technique is recoloring pixels that fall below a certain threshold, but the LCD digit darkness varies enough in that image to throw it off.
I'd spend some time reading about thresholding, here's a good starting place:
Thresholding in OpenCV with Python. You're probably going to need an adaptive technique (like Adaptive Gaussian Thresholding), but you may find other ways that work for your images.
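For illustration, here's a minimal Adaptive Gaussian Thresholding sketch; the file name, median-blur kernel, block size (31), and constant (10) are assumptions you'd tune on the LCD image:

import cv2

img = cv2.imread("display.png", cv2.IMREAD_GRAYSCALE)
img = cv2.medianBlur(img, 5)  # knock down sensor noise first
clean = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                              cv2.THRESH_BINARY, 31, 10)
cv2.imshow("imageWindow", clean)
cv2.waitKey(0)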
I am new to both OpenCV and Python. I am trying to count people in an image. The image is supposed to be captured by an overhead camera, placed the way a CCTV camera is.
I have converted the colored image into a binary image and then inverted it. Then I used bitwise OR on the original and the inverted binary image, so that the background is white and the people are colored.
How do I count these people? Is it necessary to use a classifier, or can I just count the contours? If so, how do I count them?
Plus, there are some issues with the technique I'm using:
Faces of people are light in color, so sometimes only the hair gets extracted.
Dark objects other than people also get extracted.
If the floor is dark, it won't produce the binary image that is needed.
So is there any other method to achieve what I'm trying to do here?
Not sure, but it may be worth checking there.
It explains how to perform face recognition using OpenCV and Python on pictures and extends it to a webcam here; it's not quite what you're looking for, but it may give you some clues.
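If you want to try plain contour counting first, here's a minimal sketch; the file name and area bounds are assumptions that depend on camera height and image resolution:

import cv2

gray = cv2.imread("people_mask.png", cv2.IMREAD_GRAYSCALE)
# findContours treats white as foreground, so invert: people become white.
binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY_INV)[1]
# [-2] works across the OpenCV 3 and 4 return signatures.
contours = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]

# Keep only blobs whose area is plausibly a person seen from above.
people = [c for c in contours if 500 < cv2.contourArea(c) < 10000]
print("people counted:", len(people))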
I'm having a problem with the binarization of an image (which is perhaps just blurry in general).
I have this image:
and after I've done binarization I get
How can I do better binarization? My goal is to have just a black background and white letters, nothing else. I used adaptive threshold binarization:
cv2.adaptiveThreshold(image_gs, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 41, 3)
and I also have
kernel = np.ones((1, 1), np.uint8)
Does anyone have idea how to do that?
You should try deblurring methods, see these:
Deblurring image by deconvolution using opencv
Experiments with deblurring using OpenCV
Try out the following:
1. De-noise your image first, using a Median, Bilateral, Gaussian, or Adaptive Smooth filter (a Gaussian filter works pretty well on images with textual content).
2. De-blur the image by referring to http://www.pyimagesearch.com/2015/09/07/blur-detection-with-opencv/ or https://github.com/tvganesh/deconv
3. Check out Adaptive Gaussian thresholding instead. If it's a scene-text image, you can use Otsu's algorithm after shadow removal (a sketch combining steps 1 and 3 follows below).
The 'Image Processing in OpenCV' tutorials have detailed documentation on Image Thresholding.
The Image Filtering — OpenCV 3.0.0-dev documentation explains the implementation of the Median Blur applied to an image.
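As a rough sketch of steps 1 and 3 (the file name and all parameters are assumptions to tune on your image):

import cv2

img = cv2.imread("text.jpg", cv2.IMREAD_GRAYSCALE)
img = cv2.GaussianBlur(img, (3, 3), 0)  # step 1: de-noise

# Step 3, option A: Adaptive Gaussian thresholding
binary = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY, 41, 3)

# Step 3, option B: Otsu's algorithm (picks one global threshold)
_, otsu = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

cv2.imwrite("binary_adaptive.png", binary)
cv2.imwrite("binary_otsu.png", otsu)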